Sample records for computing biological functions

  1. Computing biological functions using BioΨ, a formal description of biological processes based on elementary bricks of actions

    PubMed Central

    Pérès, Sabine; Felicori, Liza; Rialle, Stéphanie; Jobard, Elodie; Molina, Franck

    2010-01-01

    Motivation: In the available databases, biological processes are described from molecular and cellular points of view, but these descriptions are represented with text annotations that make it difficult to handle them for computation. Consequently, there is an obvious need for formal descriptions of biological processes. Results: We present a formalism that uses the BioΨ concepts to model biological processes from molecular details to networks. This computational approach, based on elementary bricks of actions, allows us to perform computations on biological functions (e.g. process comparison, mapping structure–function relationships, etc.). We illustrate its application with two examples: the functional comparison of proteases and the functional description of the glycolysis network. This computational approach is compatible with detailed biological knowledge and can be applied to different kinds of simulation systems. Availability: www.sysdiag.cnrs.fr/publications/supplementary-materials/BioPsi_Manager/ Contact: sabine.peres@sysdiag.cnrs.fr; franck.molina@sysdiag.cnrs.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20448138
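The brick-based comparison this abstract describes can be caricatured in a few lines: if each process is reduced to a set of elementary action bricks, functional comparison becomes set similarity. The brick names and the Jaccard measure below are illustrative assumptions, not BioΨ's actual formalism.

```python
# Hypothetical sketch of "elementary bricks of actions": each process is a
# set of primitive actions, and functional comparison reduces to Jaccard
# similarity between those sets. Brick names are invented for illustration.

def similarity(process_a, process_b):
    """Jaccard similarity between two processes' action-brick sets."""
    a, b = set(process_a), set(process_b)
    return len(a & b) / len(a | b)

# Two proteases described by illustrative action bricks.
trypsin = {"bind_substrate", "cleave_peptide_bond", "release_product"}
pepsin = {"bind_substrate", "cleave_peptide_bond", "release_product", "protonate"}

score = similarity(trypsin, pepsin)
```

A score of 1.0 would indicate functionally identical brick sets; here the two proteases share three of four bricks.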

  2. Computational protein design-the next generation tool to expand synthetic biology applications.

    PubMed

    Gainza-Cirauqui, Pablo; Correia, Bruno Emanuel

    2018-05-02

    One powerful approach to engineer synthetic biology pathways is the assembly of proteins sourced from one or more natural organisms. However, synthetic pathways often require custom functions or biophysical properties not displayed by natural proteins, limitations that could be overcome through modern protein engineering techniques. Structure-based computational protein design is a powerful tool to engineer new functional capabilities in proteins, and it is beginning to have a profound impact in synthetic biology. Here, we review efforts to increase the capabilities of synthetic biology using computational protein design. We focus primarily on computationally designed proteins not only validated in vitro, but also shown to modulate different activities in living cells. Efforts made to validate computational designs in cells can illustrate both the challenges and opportunities in the intersection of protein design and synthetic biology. We also highlight protein design approaches that, although not validated as conveyors of new cellular function in situ, may have rapid and innovative applications in synthetic biology. We foresee that in the near future, computational protein design will vastly expand the functional capabilities of synthetic cells. Copyright © 2018. Published by Elsevier Ltd.

  3. Computational approaches to metabolic engineering utilizing systems biology and synthetic biology.

    PubMed

    Fong, Stephen S

    2014-08-01

    Metabolic engineering modifies cellular function to address various biochemical applications. Underlying metabolic engineering efforts are a host of tools and knowledge that are integrated to enable successful outcomes. Concurrent development of computational and experimental tools has enabled different approaches to metabolic engineering. One approach is to leverage knowledge and computational tools to prospectively predict designs to achieve the desired outcome. An alternative approach is to utilize combinatorial experimental tools to empirically explore the range of cellular function and to screen for desired traits. This mini-review focuses on computational systems biology and synthetic biology tools that can be used in combination for prospective in silico strain design.
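The prospective in silico strain design mentioned above is often formulated as flux balance analysis: maximize a target flux subject to steady-state mass balance. The toy network and brute-force search below are assumptions for illustration, standing in for a real stoichiometric model and LP solver.

```python
# A toy flux-balance sketch (assumed formulation, not from this mini-review):
# maximize a "biomass" flux subject to steady-state mass balance, using
# brute-force search instead of an LP solver to stay dependency-free.

import itertools

# Reactions: v1 (uptake of A), v2 (A -> B), v3 (B -> biomass).
# Steady state requires production = consumption for A and B, i.e. v1 == v2 == v3.
UPTAKE_MAX = 10

def biomass_flux():
    best = 0.0
    for v1, v2, v3 in itertools.product(range(UPTAKE_MAX + 1), repeat=3):
        if v1 == v2 == v3:          # mass balance for metabolites A and B
            best = max(best, v3)    # objective: biomass production
    return best
```

In this linear toy chain the optimum simply saturates the uptake bound; real strain-design tools solve the same kind of problem over genome-scale stoichiometric matrices with linear programming.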

  4. High performance hybrid functional Petri net simulations of biological pathway models on CUDA.

    PubMed

    Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Hybrid functional Petri nets are a widespread tool for representing and simulating biological models. Due to their potential of providing virtual drug testing environments, biological simulations have a growing impact on pharmaceutical research. Continuous research advancements in biology and medicine lead to exponentially increasing simulation times, thus raising the demand for performance accelerations by efficient and inexpensive parallel computation solutions. Recent developments in the field of general-purpose computation on graphics processing units (GPGPU) enabled the scientific community to port a variety of compute-intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA) enabled GPUs. GPU-accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating the cell boundary formation by Delta-Notch signaling on a CUDA-enabled GPU results in a speedup of approximately 7x for a model containing 1,600 cells.
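The hybrid discrete/continuous semantics can be sketched sequentially: continuous places are integrated with a rate function while discrete places hold tokens that gate those rates. This is a minimal illustrative sketch, not the paper's CUDA mapping; all names and rate constants are assumptions.

```python
# Minimal sequential sketch of one hybrid functional Petri net: a discrete
# place ("gene_on", token count) gates a continuous transition that updates
# a continuous place ("mRNA") by Euler integration.

def simulate(steps, dt=0.1):
    cont = {"mRNA": 0.0}            # continuous place
    disc = {"gene_on": 1}           # discrete place (token count)
    k_transcribe, k_decay = 2.0, 0.5
    for _ in range(steps):
        # continuous transition: transcription fires only while a token
        # sits in the discrete place; decay is proportional to the marking
        rate = k_transcribe * disc["gene_on"] - k_decay * cont["mRNA"]
        cont["mRNA"] += dt * rate
    return cont["mRNA"]
```

With these constants the mRNA marking converges to the steady state k_transcribe / k_decay = 4.0; a GPU implementation would evaluate many such place updates in parallel per time step.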

  5. Applications of large-scale density functional theory in biology

    NASA Astrophysics Data System (ADS)

    Cole, Daniel J.; Hine, Nicholas D. M.

    2016-10-01

    Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first-principles modelling of biological structure–function relationships is approaching reality.

  6. Synthetic biology: insights into biological computation.

    PubMed

    Manzoni, Romilde; Urrios, Arturo; Velazquez-Garcia, Silvia; de Nadal, Eulàlia; Posas, Francesc

    2016-04-18

    Organisms have evolved a broad array of complex signaling mechanisms that allow them to survive in a wide range of environmental conditions. They are able to sense external inputs and produce an output response by computing the information. Synthetic biology attempts to rationally engineer biological systems in order to perform desired functions. Our increasing understanding of biological systems guides this rational design, while the huge background in electronics for building circuits defines the methodology. In this context, biocomputation is the branch of synthetic biology aimed at implementing artificial computational devices using engineered biological motifs as building blocks. Biocomputational devices are defined as biological systems that are able to integrate inputs and return outputs following pre-determined rules. Over the last decade the number of available synthetic engineered devices has increased exponentially; simple and complex circuits have been built in bacteria, yeast and mammalian cells. These devices can manage and store information, take decisions based on past and present inputs, and even convert a transient signal into a sustained response. The field is experiencing fast growth and every day it is easier to implement more complex biological functions. This is mainly due to advances in in vitro DNA synthesis, new genome editing tools, novel molecular cloning techniques, continuously growing part libraries as well as other technological advances. As a result, digital computation can now be engineered and implemented in biological systems. Simple logic gates can be implemented and connected to perform novel desired functions or to better understand and redesign biological processes. Synthetic biological digital circuits could lead to new therapeutic approaches, as well as new and efficient ways to produce complex molecules such as antibiotics, bioplastics or biofuels. Biological computation not only provides possible biomedical and biotechnological applications, but also affords a greater understanding of biological systems.
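A transcriptional AND gate of the kind mentioned above can be modeled numerically: output is high only when both inducers exceed their activation thresholds. This is a hedged generic sketch using Hill functions, not a specific published circuit; all parameters are assumptions.

```python
# Illustrative two-input transcriptional AND gate: multiplying the Hill
# activation terms of two inducers approximates a promoter that requires
# both activators to be bound.

def hill(x, k, n=2):
    """Hill activation: fraction of promoters activated at inducer level x."""
    return x**n / (k**n + x**n)

def and_gate(inducer_a, inducer_b, k=1.0):
    return hill(inducer_a, k) * hill(inducer_b, k)

# Truth-table-like behaviour at saturating (10.0) vs negligible (0.01) inputs.
outputs = {(a, b): and_gate(a, b) for a in (0.01, 10.0) for b in (0.01, 10.0)}
```

Only the (high, high) input pair yields appreciable output, mirroring the digital AND behaviour engineered into living cells.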

  7. First Steps in Computational Systems Biology: A Practical Session in Metabolic Modeling and Simulation

    ERIC Educational Resources Information Center

    Reyes-Palomares, Armando; Sanchez-Jimenez, Francisca; Medina, Miguel Angel

    2009-01-01

    A comprehensive understanding of biological functions requires new systemic perspectives, such as those provided by systems biology. Systems biology approaches are hypothesis-driven and involve iterative rounds of model building, prediction, experimentation, model refinement, and development. Developments in computer science are allowing for ever…

  8. Secure Encapsulation and Publication of Biological Services in the Cloud Computing Environment

    PubMed Central

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    Secure encapsulation and publication for bioinformatics software products based on web service are presented, and the basic function of biological information is realized in the cloud computing environment. In the encapsulation phase, the workflow and function of bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. The functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are published to remote users. Finally the basic prototype system of the biological cloud is achieved. PMID:24078906

  9. Secure encapsulation and publication of biological services in the cloud computing environment.

    PubMed

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    Secure encapsulation and publication for bioinformatics software products based on web service are presented, and the basic function of biological information is realized in the cloud computing environment. In the encapsulation phase, the workflow and function of bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. The functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are published to remote users. Finally the basic prototype system of the biological cloud is achieved.

  10. Recent Developments in the Application of Biologically Inspired Computation to Chemical Sensing

    NASA Astrophysics Data System (ADS)

    Marco, S.; Gutierrez-Gálvez, A.

    2009-05-01

    Biological olfaction outperforms chemical instrumentation in specificity, response time, detection limit, coding capacity, time stability, robustness, size, power consumption, and portability. This biological function provides outstanding performance due, to a large extent, to the unique architecture of the olfactory pathway, which combines a high degree of redundancy, an efficient combinatorial coding along with unmatched chemical information processing mechanisms. The last decade has witnessed important advances in the understanding of the computational primitives underlying the functioning of the olfactory system. In this work, the state of the art concerning biologically inspired computation for chemical sensing will be reviewed. Instead of reviewing the whole body of computational neuroscience of olfaction, we restrict this review to the application of models to the processing of real chemical sensor data.

  11. Digital and biological computing in organizations.

    PubMed

    Kampfner, Roberto R

    2002-01-01

    Michael Conrad unveiled many of the fundamental characteristics of biological computing. Underlying the behavioral variability and the adaptability of biological systems are these characteristics, including the ability of biological information processing to exploit quantum features at the atomic level, the powerful 3-D pattern recognition capabilities of macromolecules, the computational efficiency, and the ability to support biological function. Among many other things, Conrad formalized and explicated the underlying principles of biological adaptability, characterized the differences between biological and digital computing in terms of a fundamental tradeoff between adaptability and programmability of information processing, and discussed the challenges of interfacing digital computers and human society. This paper is about the encounter of biological and digital computing. The focus is on the nature of the biological information processing infrastructure of organizations and how it can be extended effectively with digital computing. In order to achieve this goal effectively, however, we need to embed properly digital computing into the information processing aspects of human and social behavior and intelligence, which are fundamentally biological. Conrad's legacy provides a firm, strong, and inspiring foundation for this endeavor.

  12. Synthetic analog computation in living cells.

    PubMed

    Daniel, Ramiz; Rubens, Jacob R; Sarpeshkar, Rahul; Lu, Timothy K

    2013-05-30

    A central goal of synthetic biology is to achieve multi-signal integration and processing in living cells for diagnostic, therapeutic and biotechnology applications. Digital logic has been used to build small-scale circuits, but other frameworks may be needed for efficient computation in the resource-limited environments of cells. Here we demonstrate that synthetic analog gene circuits can be engineered to execute sophisticated computational functions in living cells using just three transcription factors. Such synthetic analog gene circuits exploit feedback to implement logarithmically linear sensing, addition, ratiometric and power-law computations. The circuits exhibit Weber's law behaviour as in natural biological systems, operate over a wide dynamic range of up to four orders of magnitude and can be designed to have tunable transfer functions. Our circuits can be composed to implement higher-order functions that are well described by both intricate biochemical models and simple mathematical functions. By exploiting analog building-block functions that are already naturally present in cells, this approach efficiently implements arithmetic operations and complex functions in the logarithmic domain. Such circuits may lead to new applications for synthetic biology and biotechnology that require complex computations with limited parts, need wide-dynamic-range biosensing or would benefit from the fine control of gene expression.
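The logarithmically linear sensing described above is what lets analog circuits compute in the log domain: adding two log-domain signals multiplies the underlying inputs. The sketch below is a numerical caricature of that principle, not the paper's three-transcription-factor circuit; the gain and transfer function are assumptions.

```python
# Hedged numerical sketch of analog log-domain computation: a sensor with a
# logarithmically linear transfer function turns multiplication of inputs
# into addition of outputs over a wide dynamic range.

import math

def log_sensor(x, gain=1.0):
    """Log-linear transfer function (output per decade of input)."""
    return gain * math.log10(x)

def analog_multiply(x, y):
    # summing two log-domain signals, then exponentiating, multiplies inputs
    return 10 ** (log_sensor(x) + log_sensor(y))
```

This is why a handful of analog parts can cover four orders of magnitude of input: the hard arithmetic happens in the compressed logarithmic representation.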

  13. Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.

    PubMed

    Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo

    2017-01-01

    The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the amount of biological data is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining a profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are needed, along with efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. Finally, we discuss the open issues in big biological data analytics.
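The classical sequence alignment problem used as the case study above is typically the Smith-Waterman recurrence. Below is a minimal sequential reference version (scoring parameters are illustrative); HPC platforms accelerate exactly this dynamic-programming recurrence.

```python
# Minimal Smith-Waterman local alignment scorer: h[i][j] is the best score of
# any local alignment ending at a[i-1], b[j-1]; the zero floor restarts
# alignments, which is what makes the algorithm "local".

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best
```

The O(len(a) * len(b)) table fill is embarrassingly parallel along anti-diagonals, which is why it is a favourite benchmark for GPUs, FPGAs, and other accelerators surveyed in such papers.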

  14. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective

    PubMed Central

    Gu, Shuo

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine with in-depth understanding towards pharmacognosy. This paper summarized these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at network level. Finally, a computational workflow for the network-based TCM study, derived from our previous successful applications, was proposed. PMID:28690664

  15. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective.

    PubMed

    Gu, Shuo; Pei, Jianfeng

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine with in-depth understanding towards pharmacognosy. This paper summarized these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at network level. Finally, a computational workflow for the network-based TCM study, derived from our previous successful applications, was proposed.

  16. Findings

    MedlinePlus

    Topic index from the MedlinePlus Findings page (navigation excerpt): Cell Biology (cellular structures, functions, processes, imaging, stress response); Chemistry (glycobiology, synthesis, natural products, chemical reactions); Computers in Biology (bioinformatics, modeling, systems biology, data visualization); Diseases (cancer, …)

  17. Introduction to bioinformatics.

    PubMed

    Can, Tolga

    2014-01-01

    Bioinformatics is an interdisciplinary field mainly involving molecular biology and genetics, computer science, mathematics, and statistics. Data-intensive, large-scale biological problems are addressed from a computational point of view. The most common problems are modeling biological processes at the molecular level and making inferences from collected data. A bioinformatics solution usually involves the following steps: collect statistics from biological data; build a computational model; solve a computational modeling problem; test and evaluate a computational algorithm. This chapter gives a brief introduction to bioinformatics by first providing an introduction to biological terminology and then discussing some classical bioinformatics problems organized by the types of data sources. Sequence analysis is the analysis of DNA and protein sequences for clues regarding function and includes subproblems such as identification of homologs, multiple sequence alignment, searching sequence patterns, and evolutionary analyses. Protein structures are three-dimensional data and the associated problems are structure prediction (secondary and tertiary), analysis of protein structures for clues regarding function, and structural alignment. Gene expression data is usually represented as matrices and analysis of microarray data mostly involves statistical analysis, classification, and clustering approaches. Biological networks such as gene regulatory networks, metabolic pathways, and protein-protein interaction networks are usually modeled as graphs and graph theoretic approaches are used to solve associated problems such as construction and analysis of large-scale networks.
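The "collect statistics, build a model, use it for inference" workflow this chapter outlines can be shown end to end on toy data. The training sequences and the zeroth-order frequency model below are assumed examples, chosen only to make the pipeline concrete.

```python
# A small end-to-end illustration of the bioinformatics workflow: collect
# statistics from sequence data, build a simple model (nucleotide
# frequencies), and use it to score a new sequence by log-likelihood.

from collections import Counter
import math

def build_model(training_seqs):
    """Estimate per-base probabilities from a collection of DNA sequences."""
    counts = Counter("".join(training_seqs))
    total = sum(counts.values())
    return {base: counts[base] / total for base in "ACGT"}

def log_likelihood(model, seq):
    """Score a sequence under the independent-base model."""
    return sum(math.log(model[base]) for base in seq)

model = build_model(["ACGTACGT", "AACCGGTT"])
```

Real models (Markov chains, HMMs, position weight matrices) follow the same pattern with richer statistics; the inference step then compares log-likelihoods under competing models.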

  18. The Center for Computational Biology: resources, achievements, and challenges

    PubMed Central

    Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott

    2011-01-01

    The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques, to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators includes the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains. PMID:22081221

  19. The Center for Computational Biology: resources, achievements, and challenges.

    PubMed

    Toga, Arthur W; Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott

    2012-01-01

    The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques, to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators includes the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains.

  20. The evolvability of programmable hardware.

    PubMed

    Raman, Karthik; Wagner, Andreas

    2011-02-06

    In biological systems, individual phenotypes are typically adopted by multiple genotypes. Examples include protein structure phenotypes, where each structure can be adopted by a myriad individual amino acid sequence genotypes. These genotypes form vast connected 'neutral networks' in genotype space. The size of such neutral networks endows biological systems not only with robustness to genetic change, but also with the ability to evolve a vast number of novel phenotypes that occur near any one neutral network. Whether technological systems can be designed to have similar properties is poorly understood. Here we ask this question for a class of programmable electronic circuits that compute digital logic functions. The functional flexibility of such circuits is important in many applications, including applications of evolutionary principles to circuit design. The functions they compute are at the heart of all digital computation. We explore a vast space of 10^45 logic circuits ('genotypes') and 10^19 logic functions ('phenotypes'). We demonstrate that circuits that compute the same logic function are connected in large neutral networks that span circuit space. Their robustness or fault-tolerance varies very widely. The vicinity of each neutral network contains circuits with a broad range of novel functions. Two circuits computing different functions can usually be converted into one another via few changes in their architecture. These observations show that properties important for the evolvability of biological systems exist in a commercially important class of electronic circuitry. They also point to generic ways to generate fault-tolerant, adaptable and evolvable electronic circuitry.
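The genotype-phenotype mapping the paper explores at the scale of 10^45 circuits can be demonstrated in miniature: enumerate tiny circuit "genotypes" and group them by the truth table ("phenotype") they compute. The fixed two-gate wiring below is an invented toy, far smaller than the paper's circuit class.

```python
# Toy genotype-phenotype map for logic circuits: a genotype is a pair of
# 2-input gates wired in a fixed chain, a phenotype is the truth table the
# chain computes. Genotypes sharing a phenotype seed a "neutral network".

from itertools import product
from collections import defaultdict

GATES = {
    "AND": lambda x, y: x & y,
    "OR": lambda x, y: x | y,
    "NAND": lambda x, y: 1 - (x & y),
    "XOR": lambda x, y: x ^ y,
}

def phenotype(g1, g2):
    # circuit: out = g2(g1(a, b), b); truth table over all input pairs
    return tuple(GATES[g2](GATES[g1](a, b), b) for a, b in product((0, 1), repeat=2))

neutral = defaultdict(list)
for g1, g2 in product(GATES, repeat=2):
    neutral[phenotype(g1, g2)].append((g1, g2))
```

Even at this scale several distinct genotypes compute the same function (e.g. AND-then-OR and OR-then-AND both reduce to the second input), so the number of phenotypes is strictly smaller than the 16 genotypes.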

  21. The evolvability of programmable hardware

    PubMed Central

    Raman, Karthik; Wagner, Andreas

    2011-01-01

    In biological systems, individual phenotypes are typically adopted by multiple genotypes. Examples include protein structure phenotypes, where each structure can be adopted by a myriad individual amino acid sequence genotypes. These genotypes form vast connected ‘neutral networks’ in genotype space. The size of such neutral networks endows biological systems not only with robustness to genetic change, but also with the ability to evolve a vast number of novel phenotypes that occur near any one neutral network. Whether technological systems can be designed to have similar properties is poorly understood. Here we ask this question for a class of programmable electronic circuits that compute digital logic functions. The functional flexibility of such circuits is important in many applications, including applications of evolutionary principles to circuit design. The functions they compute are at the heart of all digital computation. We explore a vast space of 10^45 logic circuits (‘genotypes’) and 10^19 logic functions (‘phenotypes’). We demonstrate that circuits that compute the same logic function are connected in large neutral networks that span circuit space. Their robustness or fault-tolerance varies very widely. The vicinity of each neutral network contains circuits with a broad range of novel functions. Two circuits computing different functions can usually be converted into one another via few changes in their architecture. These observations show that properties important for the evolvability of biological systems exist in a commercially important class of electronic circuitry. They also point to generic ways to generate fault-tolerant, adaptable and evolvable electronic circuitry. PMID:20534598

  22. Supporting Representational Competence in High School Biology with Computer-Based Biomolecular Visualizations

    ERIC Educational Resources Information Center

    Wilder, Anna; Brinkerhoff, Jonathan

    2007-01-01

    This study assessed the effectiveness of computer-based biomolecular visualization activities on the development of high school biology students' representational competence as a means of understanding and visualizing protein structure/function relationships. Also assessed were students' attitudes toward these activities. Sixty-nine students…

  23. Chromatin Computation

    PubMed Central

    Bryant, Barbara

    2012-01-01

    In living cells, DNA is packaged along with protein and RNA into chromatin. Chemical modifications to nucleotides and histone proteins are added, removed and recognized by multi-functional molecular complexes. Here I define a new computational model, in which chromatin modifications are information units that can be written onto a one-dimensional string of nucleosomes, analogous to the symbols written onto cells of a Turing machine tape, and chromatin-modifying complexes are modeled as read-write rules that operate on a finite set of adjacent nucleosomes. I illustrate the use of this “chromatin computer” to solve an instance of the Hamiltonian path problem. I prove that chromatin computers are computationally universal – and therefore more powerful than the logic circuits often used to model transcription factor control of gene expression. Features of biological chromatin provide a rich instruction set for efficient computation of nontrivial algorithms in biological time scales. Modeling chromatin as a computer shifts how we think about chromatin function, suggests new approaches to medical intervention, and lays the groundwork for the engineering of a new class of biological computing machines. PMID:22567109
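The tape-and-rule analogy above can be made concrete with a toy rewrite system: a string of nucleosomes carries marks, and a "writer" complex applies a local rule, like a head acting on a Turing-machine tape. The mark names and the single spreading rule below are illustrative assumptions, not the paper's instruction set.

```python
# Tiny interpretation of the chromatin-computer idea: nucleosomes form a
# one-dimensional tape of marks, and a writer complex copies an active mark
# one nucleosome rightward per recruitment event (a local read-write rule).

def spread_mark(tape, active="M", blank="u"):
    """One pass of a writer: extend the left-most active/blank boundary."""
    tape = list(tape)
    for i in range(len(tape) - 1):
        if tape[i] == active and tape[i + 1] == blank:
            tape[i + 1] = active
            break  # one write event per pass, left-most site first
    return "".join(tape)

state = "Muuu"
for _ in range(3):
    state = spread_mark(state)
```

Iterating the rule spreads the mark across the domain, a simplified version of how heterochromatin marks propagate; richer rule sets over adjacent nucleosomes are what give the model its computational universality.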

  24. Exploring biological interaction networks with tailored weighted quasi-bicliques

    PubMed Central

    2012-01-01

    Background: Biological networks provide fundamental insights into the functional characterization of genes and their products, the characterization of DNA-protein interactions, the identification of regulatory mechanisms, and other biological tasks. Due to the experimental and biological complexity, their computational exploitation faces many algorithmic challenges. Results: We introduce novel weighted quasi-biclique problems to identify functional modules in biological networks when represented by bipartite graphs. In contrast to previous quasi-biclique problems, we include biological interaction levels by using edge-weighted quasi-bicliques. While we prove that our problems are NP-hard, we also describe IP formulations to compute exact solutions for moderately sized networks. Conclusions: We verify the effectiveness of our IP solutions using both simulation and empirical data. The simulation shows high quasi-biclique recall rates, and the empirical data corroborate the abilities of our weighted quasi-bicliques in extracting features and recovering missing interactions from biological networks. PMID:22759421
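The edge-weighted relaxation of a biclique can be illustrated with a density score: a vertex-set pair qualifies as a quasi-biclique when its total edge weight is close to that of the complete biclique. The threshold idea, weights, and vertex names below are illustrative assumptions, not the paper's exact IP formulation.

```python
# Hedged sketch of a weighted quasi-biclique check on a bipartite interaction
# graph: score a (left, right) vertex-set pair by the fraction of the maximum
# possible edge weight actually present. Missing edges lower the score but
# are tolerated, unlike in an exact biclique.

def quasi_biclique_score(weights, left, right, w_max=1.0):
    """Fraction of the maximum possible weight present between left and right."""
    total = sum(weights.get((u, v), 0.0) for u in left for v in right)
    return total / (w_max * len(left) * len(right))

interactions = {
    ("g1", "p1"): 1.0, ("g1", "p2"): 0.8,
    ("g2", "p1"): 0.9,  # (g2, p2) missing: tolerated by the quasi definition
}
score = quasi_biclique_score(interactions, ["g1", "g2"], ["p1", "p2"])
```

An exact solver would search over all vertex-set pairs for the heaviest one exceeding a density threshold, which is where the NP-hardness and the IP formulations come in.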

  25. The fusion of biology, computer science, and engineering: towards efficient and successful synthetic biology.

    PubMed

    Linshiz, Gregory; Goldberg, Alex; Konry, Tania; Hillson, Nathan J

    2012-01-01

    Synthetic biology is a nascent field that emerged in earnest only around the turn of the millennium. It aims to engineer new biological systems and impart new biological functionality, often through genetic modifications. The design and construction of new biological systems is a complex, multistep process, requiring multidisciplinary collaborative efforts from "fusion" scientists who have formal training in computer science or engineering, as well as hands-on biological expertise. The public has high expectations for synthetic biology and eagerly anticipates the development of solutions to the major challenges facing humanity. This article discusses laboratory practices and the conduct of research in synthetic biology. It argues that the fusion science approach, which integrates biology with computer science and engineering best practices, including standardization, process optimization, computer-aided design and laboratory automation, miniaturization, and systematic management, will increase the predictability and reproducibility of experiments and lead to breakthroughs in the construction of new biological systems. The article also discusses several successful fusion projects, including the development of software tools for DNA construction design automation, recursive DNA construction, and the development of integrated microfluidics systems.

  6. Biological Basis For Computer Vision: Some Perspectives

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.

    1990-03-01

    Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and in the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prosthesis for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.
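
    The retinal receptive fields mentioned above are classically modelled as a difference of Gaussians (DoG): an excitatory centre minus a broader inhibitory surround. The sketch below evaluates such a profile with illustrative parameters, not values taken from the paper.

```python
import math

# Difference-of-Gaussians (DoG) profile of a centre-surround receptive field,
# as a function of distance r from the field's centre. sigma_c and sigma_s are
# the centre and surround widths; k scales the surround strength.
def dog(r, sigma_c=1.0, sigma_s=3.0, k=0.5):
    center = math.exp(-r ** 2 / (2 * sigma_c ** 2))
    surround = k * math.exp(-r ** 2 / (2 * sigma_s ** 2))
    return center - surround

# Sample the profile: positive (excitatory) near the centre,
# negative (inhibitory) in the surround.
profile = [dog(r * 0.5) for r in range(10)]
```

    Convolving an image with a 2-D version of this kernel gives the kind of edge-enhancing pre-processing attributed to the retina.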

  7. Roles for text mining in protein function prediction.

    PubMed

    Verspoor, Karin M

    2014-01-01

    The Human Genome Project has provided science with a hugely valuable resource: the blueprints for life; the specification of all of the genes that make up a human. While the genes have all been identified and deciphered, it is proteins that are the workhorses of the human body: they are essential to virtually all cell functions and are the primary mechanism through which biological function is carried out. Hence in order to fully understand what happens at a molecular level in biological organisms, and eventually to enable development of treatments for diseases where some aspect of a biological system goes awry, we must understand the functions of proteins. However, experimental characterization of protein function cannot scale to the vast amount of DNA sequence data now available. Computational protein function prediction has therefore emerged as a problem at the forefront of modern biology (Radivojac et al., Nat Methods 10(13):221-227, 2013). Within the varied approaches to computational protein function prediction that have been explored, there are several that make use of biomedical literature mining. These methods take advantage of information in the published literature to associate specific proteins with specific protein functions. In this chapter, we introduce two main strategies for doing this: association of function terms, represented as Gene Ontology terms (Ashburner et al., Nat Genet 25(1):25-29, 2000), to proteins based on information in published articles, and a paradigm called LEAP-FS (Literature-Enhanced Automated Prediction of Functional Sites) in which literature mining is used to validate the predictions of an orthogonal computational protein function prediction method.
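
    The first strategy described above, associating Gene Ontology terms to proteins from published text, can be caricatured as sentence-level co-occurrence counting. The mini-corpus, protein names, and GO phrases below are invented for illustration; real literature-mining systems use far more sophisticated matching and scoring.

```python
from collections import Counter
from itertools import product

# Hypothetical mini-corpus: sentence-level co-occurrence of a protein name
# and a GO term phrase is taken as weak evidence of an association.
sentences = [
    "TP53 induces apoptotic process after DNA damage.",
    "BRCA1 participates in DNA repair and cell cycle control.",
    "TP53 is a regulator of the cell cycle.",
]
proteins = ["TP53", "BRCA1"]
go_terms = ["apoptotic process", "DNA repair", "cell cycle"]

# Count, for each (protein, GO term) pair, the sentences mentioning both.
counts = Counter()
for s in sentences:
    low = s.lower()
    for p, t in product(proteins, go_terms):
        if p.lower() in low and t.lower() in low:
            counts[(p, t)] += 1

# Rank candidate annotations by co-mention frequency.
ranked = counts.most_common()
```

    A production system would add entity normalization, negation handling, and statistical significance testing on top of raw counts.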

  8. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    PubMed

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  9. Biocellion: accelerating computer simulation of multicellular biological system models

    PubMed Central

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572
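
    The "fill the function body of pre-defined model routines" design described above is a callback pattern: the framework owns the parallel simulation loop and calls user-supplied routines per cell. The sketch below shows the general shape in plain sequential Python; the names (ModelRoutines, update_cell, run) are illustrative and are not Biocellion's actual API.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    state: float = 0.0

@dataclass
class ModelRoutines:
    # User-supplied function bodies, plugged into the framework.
    init_cell: callable = lambda: Cell()
    update_cell: callable = lambda c, dt: None

def run(routines, n_cells, steps, dt=0.1):
    # Framework-owned loop: in a real framework this is where the
    # parallelization and grid handling would live, hidden from the modeler.
    cells = [routines.init_cell() for _ in range(n_cells)]
    for _ in range(steps):
        for c in cells:
            routines.update_cell(c, dt)  # framework calls user code per cell
    return cells

# Example user model: simple exponential growth of a scalar cell state.
def grow(c, dt):
    c.state += 0.5 * c.state * dt

cells = run(ModelRoutines(init_cell=lambda: Cell(1.0), update_cell=grow),
            n_cells=4, steps=10)
```

    The modeler writes only `grow`; scheduling, data layout, and (in the real framework) parallel execution stay behind the `run` interface.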

  10. Mathematical and Computational Modeling in Complex Biological Systems

    PubMed Central

    Li, Wenyang; Zhu, Xiaoliang

    2017-01-01

    The biological processes and molecular functions involved in cancer progression remain difficult for biologists and clinical doctors to understand. Recent developments in high-throughput technologies urge systems biology to achieve more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the developments of high-throughput technologies and systemic modeling of the biological process in cancer research. In this review, we first examine several typical mathematical modeling approaches for biological systems at different scales and analyze their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling are summarized. To conclude, this review provides an update of important solutions using computational modeling approaches in systems biology. PMID:28386558

  11. Mathematical and Computational Modeling in Complex Biological Systems.

    PubMed

    Ji, Zhiwei; Yan, Ke; Li, Wenyang; Hu, Haigen; Zhu, Xiaoliang

    2017-01-01

    The biological processes and molecular functions involved in cancer progression remain difficult for biologists and clinical doctors to understand. Recent developments in high-throughput technologies urge systems biology to achieve more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the developments of high-throughput technologies and systemic modeling of the biological process in cancer research. In this review, we first examine several typical mathematical modeling approaches for biological systems at different scales and analyze their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling are summarized. To conclude, this review provides an update of important solutions using computational modeling approaches in systems biology.

  12. A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA

    PubMed Central

    2010-01-01

    Background The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Methods Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Results Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). 
Biological systems do not have an external source for a map of their stored information or for an operational instruction set; rather, they must contain an organizational template conserved within their intra-nuclear architecture that "manipulates" the laws of chemistry and physics into a highly robust instruction set. We propose that the epigenetic structure of the intra-nuclear environment and the non-coding RNA may play the roles of a Biological File Allocation Table (BFAT) and biological operating system (Bio-OS) in eukaryotic cells. Conclusions The comparison of functional and structural characteristics of the DNA complex and the computer hard drive leads to a new descriptive paradigm that identifies the DNA as a dynamic storage system of biological information. This system is embodied in an autonomous operating system that inductively follows organizational structures, data hierarchy and executable operations that are well understood in the computer science industry. Characterizing the "DNA hard drive" in this fashion can lead to insights arising from discrepancies in the descriptive framework, particularly with respect to positing the role of epigenetic processes in an information-processing context. Further expansions arising from this comparison include the view of cells as parallel computing machines and a new approach towards characterizing cellular control systems. PMID:20092652

  13. A comparative approach for the investigation of biological information processing: an examination of the structure and function of computer hard drives and DNA.

    PubMed

    D'Onofrio, David J; An, Gary

    2010-01-21

    The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). 
Biological systems do not have an external source for a map of their stored information or for an operational instruction set; rather, they must contain an organizational template conserved within their intra-nuclear architecture that "manipulates" the laws of chemistry and physics into a highly robust instruction set. We propose that the epigenetic structure of the intra-nuclear environment and the non-coding RNA may play the roles of a Biological File Allocation Table (BFAT) and biological operating system (Bio-OS) in eukaryotic cells. The comparison of functional and structural characteristics of the DNA complex and the computer hard drive leads to a new descriptive paradigm that identifies the DNA as a dynamic storage system of biological information. This system is embodied in an autonomous operating system that inductively follows organizational structures, data hierarchy and executable operations that are well understood in the computer science industry. Characterizing the "DNA hard drive" in this fashion can lead to insights arising from discrepancies in the descriptive framework, particularly with respect to positing the role of epigenetic processes in an information-processing context. Further expansions arising from this comparison include the view of cells as parallel computing machines and a new approach towards characterizing cellular control systems.

  14. Modelling of pathologies of the nervous system by the example of computational and electronic models of elementary nervous systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumilov, V. N., E-mail: vnshumilov@rambler.ru; Syryamkin, V. I., E-mail: maximus70sir@gmail.com; Syryamkin, M. V., E-mail: maximus70sir@gmail.com

    The paper puts forward principles of action of devices operating similarly to the nervous system and the brain of biological systems. We propose an alternative method of studying diseases of the nervous system, which may significantly influence prevention, medical treatment, or at least retardation of development of these diseases. This alternative is to use computational and electronic models of the nervous system. Within this approach, we represent the brain in the form of a huge electrical circuit composed of active units, namely, neuron-like units and connections between them. As a result, we created computational and electronic models of elementary nervous systems, which are based on the principles of functioning of biological nervous systems that we have put forward. Our models demonstrate reactions to external stimuli and their change similarly to the behavior of simplest biological organisms. The models possess the ability of self-training and retraining in real time without human intervention and switching operation/training modes. In our models, training and memorization take place constantly under the influence of stimuli on the organism. Training is without any interruption and switching operation modes. Training and formation of new reflexes occur by means of formation of new connections between excited neurons, between which formation of connections is physically possible. Connections are formed without external influence. They are formed under the influence of local causes. Connections are formed between outputs and inputs of two neurons, when the difference between output and input potentials of excited neurons exceeds a value sufficient to form a new connection. On these grounds, we suggest that the proposed principles truly reflect mechanisms of functioning of biological nervous systems and the brain. In order to confirm the correspondence of the proposed principles to biological nature, we carry out experiments for the study of processes of formation of connections between neurons in simplest biological objects. Based on the correspondence of function of the created models to function of biological nervous systems we suggest the use of computational and electronic models of the brain for the study of its function under normal and pathological conditions, because operating principles of the models are built on principles imitating the function of biological nervous systems and the brain.

  15. Modelling of pathologies of the nervous system by the example of computational and electronic models of elementary nervous systems

    NASA Astrophysics Data System (ADS)

    Shumilov, V. N.; Syryamkin, V. I.; Syryamkin, M. V.

    2015-11-01

    The paper puts forward principles of action of devices operating similarly to the nervous system and the brain of biological systems. We propose an alternative method of studying diseases of the nervous system, which may significantly influence prevention, medical treatment, or at least retardation of development of these diseases. This alternative is to use computational and electronic models of the nervous system. Within this approach, we represent the brain in the form of a huge electrical circuit composed of active units, namely, neuron-like units and connections between them. As a result, we created computational and electronic models of elementary nervous systems, which are based on the principles of functioning of biological nervous systems that we have put forward. Our models demonstrate reactions to external stimuli and their change similarly to the behavior of simplest biological organisms. The models possess the ability of self-training and retraining in real time without human intervention and switching operation/training modes. In our models, training and memorization take place constantly under the influence of stimuli on the organism. Training is without any interruption and switching operation modes. Training and formation of new reflexes occur by means of formation of new connections between excited neurons, between which formation of connections is physically possible. Connections are formed without external influence. They are formed under the influence of local causes. Connections are formed between outputs and inputs of two neurons, when the difference between output and input potentials of excited neurons exceeds a value sufficient to form a new connection. On these grounds, we suggest that the proposed principles truly reflect mechanisms of functioning of biological nervous systems and the brain. 
In order to confirm the correspondence of the proposed principles to biological nature, we carry out experiments for the study of processes of formation of connections between neurons in simplest biological objects. Based on the correspondence of function of the created models to function of biological nervous systems we suggest the use of computational and electronic models of the brain for the study of its function under normal and pathological conditions, because operating principles of the models are built on principles imitating the function of biological nervous systems and the brain.
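
    The connection-formation rule described in the abstract, in which a new link appears between two excited neurons when the output-input potential difference exceeds a threshold, can be written down directly. All potentials and the threshold below are illustrative values, not parameters from the paper.

```python
# Threshold a potential difference must exceed for a new connection to form.
THRESHOLD = 0.3

# Toy neuron-like units with output/input potentials and an excitation flag.
neurons = {
    "n1": {"out": 0.9, "in": 0.3, "excited": True},
    "n2": {"out": 0.5, "in": 0.4, "excited": True},
    "n3": {"out": 0.8, "in": 0.1, "excited": False},
}
connections = set()

def form_connections():
    # A directed connection forms from the output of one excited neuron to
    # the input of another when the potential difference exceeds THRESHOLD.
    for a, na in neurons.items():
        for b, nb in neurons.items():
            if a == b or not (na["excited"] and nb["excited"]):
                continue
            if na["out"] - nb["in"] > THRESHOLD:
                connections.add((a, b))  # output of a -> input of b

form_connections()
```

    Here only the link from n1 to n2 forms: n3 is not excited, and the reverse difference (0.5 - 0.3) does not exceed the threshold. Repeating this step as excitation patterns change would implement the continuous, unsupervised training the abstract describes.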

  16. Computational Study on Atomic Structures, Electronic Properties, and Chemical Reactions at Surfaces and Interfaces and in Biomaterials

    NASA Astrophysics Data System (ADS)

    Takano, Yu; Kobayashi, Nobuhiko; Morikawa, Yoshitada

    2018-06-01

    Through computer simulations using atomistic models, it is becoming possible to calculate the atomic structures of localized defects or dopants in semiconductors, chemically active sites in heterogeneous catalysts, nanoscale structures, and active sites in biological systems precisely. Furthermore, it is also possible to clarify physical and chemical properties possessed by these nanoscale structures such as electronic states, electronic and atomic transport properties, optical properties, and chemical reactivity. It is sometimes quite difficult to clarify these nanoscale structure-function relations experimentally and, therefore, accurate computational studies are indispensable in materials science. In this paper, we review recent studies on the relation between local structures and functions for inorganic, organic, and biological systems by using atomistic computer simulations.

  17. Systems Biology in Immunology – A Computational Modeling Perspective

    PubMed Central

    Germain, Ronald N.; Meier-Schellersheim, Martin; Nita-Lazar, Aleksandra; Fraser, Iain D. C.

    2011-01-01

    Systems biology is an emerging discipline that combines high-content, multiplexed measurements with informatic and computational modeling methods to better understand biological function at various scales. Here we present a detailed review of the methods used to create computational models and conduct simulations of immune function. We provide descriptions of the key data gathering techniques employed to generate the quantitative and qualitative data required for such modeling and simulation and summarize the progress to date in applying these tools and techniques to questions of immunological interest, including infectious disease. We include comments on what insights modeling can provide that complement information obtained from the more familiar experimental discovery methods used by most investigators and why quantitative methods are needed to eventually produce a better understanding of immune system operation in health and disease. PMID:21219182

  18. BeeSpace Navigator: exploratory analysis of gene function using semantic indexing of biological literature.

    PubMed

    Sen Sarma, Moushumi; Arcoleo, David; Khetani, Radhika S; Chee, Brant; Ling, Xu; He, Xin; Jiang, Jing; Mei, Qiaozhu; Zhai, ChengXiang; Schatz, Bruce

    2011-07-01

    With the rapid decrease in cost of genome sequencing, the classification of gene function is becoming a primary problem. Such classification has been performed by human curators who read biological literature to extract evidence. BeeSpace Navigator is a prototype software for exploratory analysis of gene function using biological literature. The software supports an automatic analogue of the curator process to extract functions, with a simple interface intended for all biologists. Since extraction is done on selected collections that are semantically indexed into conceptual spaces, the curation can be task specific. Biological literature containing references to gene lists from expression experiments can be analyzed to extract concepts that are computational equivalents of a classification such as Gene Ontology, yielding discriminating concepts that differentiate gene mentions from other mentions. The functions of individual genes can be summarized from sentences in biological literature, to produce results resembling a model organism database entry that is automatically computed. Statistical frequency analysis based on literature phrase extraction generates offline semantic indexes to support these gene function services. The website with BeeSpace Navigator is free and open to all; there is no login requirement at www.beespace.illinois.edu for version 4. Materials from the 2010 BeeSpace Software Training Workshop are available at www.beespace.illinois.edu/bstwmaterials.php.
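
    The "statistical frequency analysis based on literature phrase extraction" can be approximated by scoring terms by their frequency in a target collection relative to a background collection. The two tiny document sets and the plain frequency-ratio score below are invented simplifications of whatever indexing BeeSpace actually uses.

```python
from collections import Counter
import re

# Target collection (e.g. abstracts linked to a gene list) versus a
# background collection; both are toy stand-ins for real corpora.
target = ["foraging behavior in worker bees",
          "worker bees show foraging behavior"]
background = ["hive temperature regulation",
              "queen development and hive structure"]

def term_freqs(docs):
    # Crude tokenization: lowercase alphabetic runs.
    c = Counter()
    for d in docs:
        c.update(re.findall(r"[a-z]+", d.lower()))
    return c

t, b = term_freqs(target), term_freqs(background)

# Score each target term by its count relative to the background
# (add-one smoothing in the denominator avoids division by zero).
scores = {w: t[w] / (1 + b[w]) for w in t}
discriminating = sorted(scores, key=scores.get, reverse=True)
```

    Terms frequent in the target set but absent from the background ("foraging", "worker") rise to the top, a toy analogue of the discriminating concepts the abstract describes.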

  19. Protein interaction network topology uncovers melanogenesis regulatory network components within functional genomics datasets.

    PubMed

    Ho, Hsiang; Milenković, Tijana; Memisević, Vesna; Aruri, Jayavani; Przulj, Natasa; Ganesan, Anand K

    2010-06-15

    RNA-mediated interference (RNAi)-based functional genomics is a systems-level approach to identify novel genes that control biological phenotypes. Existing computational approaches can identify individual genes from RNAi datasets that regulate a given biological process. However, currently available methods cannot identify which RNAi screen "hits" are novel components of well-characterized biological pathways known to regulate the interrogated phenotype. In this study, we describe a method to identify genes from RNAi datasets that are novel components of known biological pathways. We experimentally validate our approach in the context of a recently completed RNAi screen to identify novel regulators of melanogenesis. In this study, we utilize a PPI network topology-based approach to identify targets within our RNAi dataset that may be components of known melanogenesis regulatory pathways. Our computational approach identifies a set of screen targets that cluster topologically in a human PPI network with the known pigment regulator Endothelin receptor type B (EDNRB). Validation studies reveal that these genes impact pigment production and EDNRB signaling in pigmented melanoma cells (MNT-1) and normal melanocytes. We present an approach that identifies novel components of well-characterized biological pathways from functional genomics datasets that could not have been identified by existing statistical and computational approaches.

  20. Protein interaction network topology uncovers melanogenesis regulatory network components within functional genomics datasets

    PubMed Central

    2010-01-01

    Background RNA-mediated interference (RNAi)-based functional genomics is a systems-level approach to identify novel genes that control biological phenotypes. Existing computational approaches can identify individual genes from RNAi datasets that regulate a given biological process. However, currently available methods cannot identify which RNAi screen "hits" are novel components of well-characterized biological pathways known to regulate the interrogated phenotype. In this study, we describe a method to identify genes from RNAi datasets that are novel components of known biological pathways. We experimentally validate our approach in the context of a recently completed RNAi screen to identify novel regulators of melanogenesis. Results In this study, we utilize a PPI network topology-based approach to identify targets within our RNAi dataset that may be components of known melanogenesis regulatory pathways. Our computational approach identifies a set of screen targets that cluster topologically in a human PPI network with the known pigment regulator Endothelin receptor type B (EDNRB). Validation studies reveal that these genes impact pigment production and EDNRB signaling in pigmented melanoma cells (MNT-1) and normal melanocytes. Conclusions We present an approach that identifies novel components of well-characterized biological pathways from functional genomics datasets that could not have been identified by existing statistical and computational approaches. PMID:20550706
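
    A minimal stand-in for the topology-based step, flagging screen hits that lie close to a known regulator in a PPI network, is breadth-first-search distance from the seed gene. The toy network, hit list, and distance cutoff below are invented for illustration; the study's actual method uses richer topological clustering than plain graph distance.

```python
from collections import deque

# Toy PPI network as an adjacency dict; apart from EDNRB the gene names
# and edges are placeholders.
ppi = {
    "EDNRB": ["GENE_Q", "GENE_A"],
    "GENE_Q": ["EDNRB", "GENE_B"],
    "GENE_A": ["EDNRB"],
    "GENE_B": ["GENE_Q"],
    "GENE_C": ["GENE_D"],
    "GENE_D": ["GENE_C"],
}

def bfs_distances(graph, source):
    # Unweighted shortest-path distances from the seed gene.
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# RNAi screen hits: keep those within 2 interactions of the seed regulator.
hits = ["GENE_B", "GENE_C"]
dist = bfs_distances(ppi, "EDNRB")
candidates = [h for h in hits if dist.get(h, 99) <= 2]
```

    GENE_B sits two interactions from EDNRB and is retained as a candidate pathway component, while GENE_C, disconnected from the seed, is filtered out.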

  1. 10 years for the Journal of Bioinformatics and Computational Biology (2003-2013) -- a retrospective.

    PubMed

    Eisenhaber, Frank; Sherman, Westley Arthur

    2014-06-01

    The Journal of Bioinformatics and Computational Biology (JBCB) started publishing scientific articles in 2003. It has established itself as home for solid research articles in the field (~ 60 per year) that are surprisingly well cited. JBCB has an important function as alternative publishing channel in addition to other, bigger journals.

  2. Biocellion: accelerating computer simulation of multicellular biological system models.

    PubMed

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. Computational biology for cardiovascular biomarker discovery.

    PubMed

    Azuaje, Francisco; Devaux, Yvan; Wagner, Daniel

    2009-07-01

    Computational biology is essential in the process of translating biological knowledge into clinical practice, as well as in the understanding of biological phenomena based on the resources and technologies originating from the clinical environment. One such key contribution of computational biology is the discovery of biomarkers for predicting clinical outcomes using 'omic' information. This process involves the predictive modelling and integration of different types of data and knowledge for screening, diagnostic or prognostic purposes. Moreover, this requires the design and combination of different methodologies based on statistical analysis and machine learning. This article introduces key computational approaches and applications to biomarker discovery based on different types of 'omic' data. Although we emphasize applications in cardiovascular research, the computational requirements and advances discussed here are also relevant to other domains. We will start by introducing some of the contributions of computational biology to translational research, followed by an overview of methods and technologies used for the identification of biomarkers with predictive or classification value. The main types of 'omic' approaches to biomarker discovery will be presented with specific examples from cardiovascular research. This will include a review of computational methodologies for single-source and integrative data applications. Major computational methods for model evaluation will be described together with recommendations for reporting models and results. We will present recent advances in cardiovascular biomarker discovery based on the combination of gene expression and functional network analyses. The review will conclude with a discussion of key challenges for computational biology, including perspectives from the biosciences and clinical areas.

  4. Modeling Structure-Function Relationships in Synthetic DNA Sequences using Attribute Grammars

    PubMed Central

    Cai, Yizhi; Lux, Matthew W.; Adam, Laura; Peccoud, Jean

    2009-01-01

    Recognizing that certain biological functions can be associated with specific DNA sequences has led various fields of biology to adopt the notion of the genetic part. This concept provides a finer level of granularity than the traditional notion of the gene. However, a method of formally relating how a set of parts relates to a function has not yet emerged. Synthetic biology both demands such a formalism and provides an ideal setting for testing hypotheses about relationships between DNA sequences and phenotypes beyond the gene-centric methods used in genetics. Attribute grammars are used in computer science to translate the text of a program source code into the computational operations it represents. By associating attributes with parts, modifying the value of these attributes using rules that describe the structure of DNA sequences, and using a multi-pass compilation process, it is possible to translate DNA sequences into molecular interaction network models. These capabilities are illustrated by simple example grammars expressing how gene expression rates are dependent upon single or multiple parts. The translation process is validated by systematically generating, translating, and simulating the phenotype of all the sequences in the design space generated by a small library of genetic parts. Attribute grammars represent a flexible framework connecting parts with models of biological function. They will be instrumental for building mathematical models of libraries of genetic constructs synthesized to characterize the function of genetic parts. This formalism is also expected to provide a solid foundation for the development of computer assisted design applications for synthetic biology. PMID:19816554
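    The core mechanism of an attribute grammar over genetic parts — attributes attached to parts, combined by a rule for a composite construct — can be illustrated in a few lines. The part names, attributes, and rate rule below are invented for illustration and are not taken from the paper.

```python
# Each genetic part carries attributes; a rule for the "cassette"
# production synthesizes a model quantity (an expression rate) from
# them. All values are invented for illustration.

parts = {
    "pLac":  {"kind": "promoter", "strength": 2.0},    # initiations/s (made up)
    "rbs32": {"kind": "rbs",      "efficiency": 0.5},  # translations per transcript
    "gfp":   {"kind": "cds",      "product": "GFP"},
}

def compile_cassette(promoter, rbs, cds):
    """Synthesized attribute: expression rate of the CDS product."""
    rate = parts[promoter]["strength"] * parts[rbs]["efficiency"]
    return {"product": parts[cds]["product"], "rate": rate}

model = compile_cassette("pLac", "rbs32", "gfp")
print(model)  # {'product': 'GFP', 'rate': 1.0}
```

    A real attribute grammar would also thread inherited attributes through multi-pass compilation, as the abstract notes; this sketch shows only the part-to-function direction.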

  5. Predicting phenotype from genotype: Improving accuracy through more robust experimental and computational modeling

    PubMed Central

    Gallion, Jonathan; Koire, Amanda; Katsonis, Panagiotis; Schoenegge, Anne‐Marie; Bouvier, Michel

    2017-01-01

    Computational prediction yields efficient and scalable initial assessments of how variants of unknown significance may affect human health. However, when discrepancies between these predictions and direct experimental measurements of functional impact arise, inaccurate computational predictions are frequently assumed to be the source. Here, we present a methodological analysis indicating that shortcomings in both computational and biological data can contribute to these disagreements. We demonstrate that incomplete assaying of multifunctional proteins can affect the strength of correlations between prediction and experiments; a variant's full impact on function is better quantified by considering multiple assays that probe an ensemble of protein functions. Additionally, predictions for many variants are sensitive to protein alignment construction and can be customized to maximize the relevance of predictions to a specific experimental question. We conclude that inconsistencies between computation and experiment can often be attributed to the fact that they do not test identical hypotheses. Aligning the design of the computational input with the design of the experimental output will require cooperation between computational and biological scientists, but will also lead to improved estimations of computational prediction accuracy and a better understanding of the genotype–phenotype relationship. PMID:28230923

  6. Predicting phenotype from genotype: Improving accuracy through more robust experimental and computational modeling.

    PubMed

    Gallion, Jonathan; Koire, Amanda; Katsonis, Panagiotis; Schoenegge, Anne-Marie; Bouvier, Michel; Lichtarge, Olivier

    2017-05-01

    Computational prediction yields efficient and scalable initial assessments of how variants of unknown significance may affect human health. However, when discrepancies between these predictions and direct experimental measurements of functional impact arise, inaccurate computational predictions are frequently assumed to be the source. Here, we present a methodological analysis indicating that shortcomings in both computational and biological data can contribute to these disagreements. We demonstrate that incomplete assaying of multifunctional proteins can affect the strength of correlations between prediction and experiments; a variant's full impact on function is better quantified by considering multiple assays that probe an ensemble of protein functions. Additionally, predictions for many variants are sensitive to protein alignment construction and can be customized to maximize the relevance of predictions to a specific experimental question. We conclude that inconsistencies between computation and experiment can often be attributed to the fact that they do not test identical hypotheses. Aligning the design of the computational input with the design of the experimental output will require cooperation between computational and biological scientists, but will also lead to improved estimations of computational prediction accuracy and a better understanding of the genotype-phenotype relationship. © 2017 The Authors. Human Mutation published by Wiley Periodicals, Inc.

  7. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.

    PubMed

    Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E

    2016-01-01

    Computational modelling and simulation are increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model) that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1-stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose and unambiguously define the functional requirements for the resultant computational model.

  8. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems

    PubMed Central

    Timmis, Jon; Qwarnstrom, Eva E.

    2016-01-01

    Computational modelling and simulation are increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model) that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1-stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose and unambiguously define the functional requirements for the resultant computational model. PMID:27571414

  9. Magnetic skyrmion-based artificial neuron device

    NASA Astrophysics Data System (ADS)

    Li, Sai; Kang, Wang; Huang, Yangqi; Zhang, Xichao; Zhou, Yan; Zhao, Weisheng

    2017-08-01

    Neuromorphic computing, inspired by the biological nervous system, has attracted considerable attention. Intensive research has been conducted in this field for developing artificial synapses and neurons, attempting to mimic the behaviors of biological synapses and neurons, which are two basic elements of a human brain. Recently, magnetic skyrmions have been investigated as promising candidates in neuromorphic computing design owing to their topologically protected particle-like behaviors, nanoscale size and low driving current density. In one of our previous studies, a skyrmion-based artificial synapse was proposed, with which both short-term plasticity and long-term potentiation functions have been demonstrated. In this work, we further report on a skyrmion-based artificial neuron by exploiting the tunable current-driven skyrmion motion dynamics, mimicking the leaky-integrate-fire function of a biological neuron. With a simple single-device implementation, this proposed artificial neuron may enable us to build a dense and energy-efficient spiking neuromorphic computing system.
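    The leaky-integrate-fire function that the skyrmion device mimics is, in its textbook discrete-time form, only a few lines; the leak factor, threshold, and input values below are arbitrary illustrative choices.

```python
# Textbook leaky-integrate-and-fire neuron: the membrane state leaks,
# integrates its input, and fires (then resets) when it crosses a
# threshold. Parameters are arbitrary illustrative values.

def lif(inputs, leak=0.9, threshold=1.0):
    """Return a spike train (0/1 per time step) for the given input sequence."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x          # leaky integration
        if v >= threshold:
            spikes.append(1)
            v = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

print(lif([0.5, 0.5, 0.5, 0.0, 1.2]))  # [0, 0, 1, 0, 1]
```

    In the device described above, the role of the membrane state is played by the skyrmion's position under a drive current rather than by a stored variable.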

  10. On the limitations of standard statistical modeling in biological systems: a full Bayesian approach for biology.

    PubMed

    Gomez-Ramirez, Jaime; Sanz, Ricardo

    2013-09-01

    One of the most important scientific challenges today is the quantitative and predictive understanding of biological function. Classical mathematical and computational approaches have been enormously successful in modeling inert matter, but they may be inadequate to address inherent features of biological systems. We address the conceptual and methodological obstacles that lie in the inverse problem in biological systems modeling. We introduce a full Bayesian approach (FBA), a theoretical framework to study biological function, in which probability distributions are conditional on biophysical information that physically resides in the biological system that is studied by the scientist. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. A Combination of Hand-Held Models and Computer Imaging Programs Helps Students Answer Oral Questions about Molecular Structure and Function: A Controlled Investigation of Student Learning

    ERIC Educational Resources Information Center

    Harris, Michelle A.; Peck, Ronald F.; Colton, Shannon; Morris, Jennifer; Neto, Elias Chaibub; Kallio, Julie

    2009-01-01

    We conducted a controlled investigation to examine whether a combination of computer imagery and tactile tools helps introductory cell biology laboratory undergraduate students better learn about protein structure/function relationships as compared with computer imagery alone. In all five laboratory sections, students used the molecular imaging…

  12. Persistent Memory in Single Node Delay-Coupled Reservoir Computing.

    PubMed

    Kovac, André David; Koall, Maximilian; Pipa, Gordon; Toutounji, Hazem

    2016-01-01

    Delays are ubiquitous in biological systems, ranging from genetic regulatory networks and synaptic conductances, to predator/prey population interactions. Evidence is mounting, not only for the presence of delays as physical constraints on signal propagation speed, but also for their functional role in providing dynamical diversity to the systems that comprise them. The latter observation in biological systems inspired the recent development of a computational architecture that harnesses this dynamical diversity, by delay-coupling a single nonlinear element to itself. This architecture is a particular realization of Reservoir Computing, where stimuli are injected into the system in time rather than in space as is the case with classical recurrent neural network realizations. This architecture also exhibits an internal memory which fades in time, an important prerequisite to the functioning of any reservoir computing device. However, fading memory is also a limitation to any computation that requires persistent storage. In order to overcome this limitation, the current work introduces an extended version of the single node Delay-Coupled Reservoir that is based on trained linear feedback. We show by numerical simulations that adding task-specific linear feedback to the single node Delay-Coupled Reservoir extends the class of solvable tasks to those that require nonfading memory. We demonstrate, through several case studies, the ability of the extended system to carry out complex nonlinear computations that depend on past information, whereas the computational power of the system with fading memory alone quickly deteriorates. Our findings provide the theoretical basis for future physical realizations of a biologically-inspired ultrafast computing device with extended functionality.
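    A heavily simplified numerical sketch of the base architecture follows: one nonlinear node fed by its own delayed state plus masked, time-multiplexed input, with the delay line read out as "virtual nodes". The mask, parameters, and discretisation are illustrative choices, not the authors' (and the trained-feedback extension for persistent memory is omitted).

```python
import math

# Single-node delay-coupled reservoir, crudely discretised: each input
# sample is multiplexed across n_virtual taps of the delay line, and
# the node's tanh nonlinearity mixes delayed state with masked input.
# eta (feedback gain), gamma (input gain), and the +/-1 mask are
# illustrative, not taken from the paper.

def dcr(inputs, n_virtual=20, eta=0.5, gamma=0.1):
    """Return one reservoir state vector (the virtual nodes) per input sample."""
    delay_line = [0.0] * n_virtual
    states = []
    for u in inputs:
        new = []
        for i in range(n_virtual):
            mask = 1.0 if i % 2 == 0 else -1.0
            new.append(math.tanh(eta * delay_line[i] + gamma * mask * u))
        delay_line = new
        states.append(new)
    return states

states = dcr([0.3, -0.1, 0.7])
print(len(states), len(states[0]))  # 3 20
```

    In a full reservoir computer, a linear readout would be trained on these state vectors; the persistent-memory extension additionally feeds the trained readout back into the node.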

  13. Persistent Memory in Single Node Delay-Coupled Reservoir Computing

    PubMed Central

    Pipa, Gordon; Toutounji, Hazem

    2016-01-01

    Delays are ubiquitous in biological systems, ranging from genetic regulatory networks and synaptic conductances, to predator/prey population interactions. Evidence is mounting, not only for the presence of delays as physical constraints on signal propagation speed, but also for their functional role in providing dynamical diversity to the systems that comprise them. The latter observation in biological systems inspired the recent development of a computational architecture that harnesses this dynamical diversity, by delay-coupling a single nonlinear element to itself. This architecture is a particular realization of Reservoir Computing, where stimuli are injected into the system in time rather than in space as is the case with classical recurrent neural network realizations. This architecture also exhibits an internal memory which fades in time, an important prerequisite to the functioning of any reservoir computing device. However, fading memory is also a limitation to any computation that requires persistent storage. In order to overcome this limitation, the current work introduces an extended version of the single node Delay-Coupled Reservoir that is based on trained linear feedback. We show by numerical simulations that adding task-specific linear feedback to the single node Delay-Coupled Reservoir extends the class of solvable tasks to those that require nonfading memory. We demonstrate, through several case studies, the ability of the extended system to carry out complex nonlinear computations that depend on past information, whereas the computational power of the system with fading memory alone quickly deteriorates. Our findings provide the theoretical basis for future physical realizations of a biologically-inspired ultrafast computing device with extended functionality. PMID:27783690

  14. Constructing Smart Protocells with Built-In DNA Computational Core to Eliminate Exogenous Challenge.

    PubMed

    Lyu, Yifan; Wu, Cuichen; Heinke, Charles; Han, Da; Cai, Ren; Teng, I-Ting; Liu, Yuan; Liu, Hui; Zhang, Xiaobing; Liu, Qiaoling; Tan, Weihong

    2018-06-06

    A DNA reaction network is like a biological algorithm that can respond to "molecular input signals", such as biological molecules, while the artificial cell is like a microrobot whose function is powered by the encapsulated DNA reaction network. In this work, we describe the feasibility of using a DNA reaction network as the computational core of a protocell, which will perform an artificial immune response in a concise way to eliminate a mimicked pathogenic challenge. Such a DNA reaction network (RN)-powered protocell can realize the connection of logical computation and biological recognition due to the natural programmability and biological properties of DNA. Thus, the biological input molecules can be easily involved in the molecular computation, and the computation process can be spatially isolated and protected by an artificial bilayer membrane. We believe the strategy proposed in the current paper, i.e., using DNA RNs to power artificial cells, will lay the groundwork for understanding the basic design principles of DNA algorithm-based nanodevices which will, in turn, inspire the construction of artificial cells, or protocells, that will find a place in future biomedical research.

  15. System biology of gene regulation.

    PubMed

    Baitaluk, Michael

    2009-01-01

    A famous joke illustrates the traditionally awkward alliance between theory and experiment, and the differences between experimental biologists and theoretical modelers: a university sends a biologist, a mathematician, a physicist, and a computer scientist on a walking trip in an attempt to stimulate interdisciplinary research. During a break, they watch a cow in a nearby field and the leader of the group asks, "I wonder how one could decide on the size of a cow?" Since a cow is a biological object, the biologist responded first: "I have seen many cows in this area and know it is a big cow." The mathematician argued, "The true volume is determined by integrating the mathematical function that describes the outer surface of the cow's body." The physicist suggested: "Let's assume the cow is a sphere...." Finally, the computer scientist became nervous and said that he didn't bring his computer because there is no Internet connection up there on the hill. In this humorous but instructive story, the suggestions proposed by the theorists reflect the view of many experimental biologists that computer scientists and theorists are too far removed from biological reality and that their theories and approaches are therefore not of much immediate usefulness. Conversely, the statement of the biologist mirrors the view of many traditional theoretical and computational scientists that biological experiments are for the most part simply descriptive, lack rigor, and that much of the resulting biological data are of questionable functional relevance. One of the goals of current biology as a multidisciplinary science is to bring people from different scientific areas together on the same "hill" and teach them to speak the same "language."
In fact, of course, when presenting their data, most experimentalist biologists do provide an interpretation and explanation for the results, and many theorists/computer scientists aim to answer (or at least to fully describe) questions of biological relevance. Thus systems biology could be treated as such a socioscientific phenomenon and a new approach to both experiments and theory that is defined by the strategy of pursuing integration of complex data about the interactions in biological systems from diverse experimental sources using interdisciplinary tools and personnel.

  16. Structure, function, and behaviour of computational models in systems biology

    PubMed Central

    2013-01-01

    Background: Systems Biology develops computational models in order to understand biological phenomena. The increasing number and complexity of such “bio-models” necessitate computer support for the overall modelling task. Computer-aided modelling has to be based on a formal semantic description of bio-models. But even if computational bio-models themselves are represented precisely in terms of mathematical expressions, their full meaning is not yet formally specified and only described in natural language. Results: We present a conceptual framework – the meaning facets – which can be used to rigorously specify the semantics of bio-models. A bio-model has a dual interpretation: On the one hand it is a mathematical expression which can be used in computational simulations (intrinsic meaning). On the other hand the model is related to the biological reality (extrinsic meaning). We show that in both cases this interpretation should be performed from three perspectives: the meaning of the model’s components (structure), the meaning of the model’s intended use (function), and the meaning of the model’s dynamics (behaviour). In order to demonstrate the strengths of the meaning facets framework we apply it to two semantically related models of the cell cycle. Thereby, we make use of existing approaches for computer representation of bio-models as much as possible and sketch the missing pieces. Conclusions: The meaning facets framework provides a systematic in-depth approach to the semantics of bio-models. It can serve two important purposes: First, it specifies and structures the information which biologists have to take into account if they build, use and exchange models. Secondly, because it can be formalised, the framework is a solid foundation for any sort of computer support in bio-modelling. The proposed conceptual framework establishes a new methodology for modelling in Systems Biology and constitutes a basis for computer-aided collaborative research. PMID:23721297

  17. Bacteria as computers making computers

    PubMed Central

    Danchin, Antoine

    2009-01-01

    Various efforts to integrate biological knowledge into networks of interactions have produced a lively microbial systems biology. Putting molecular biology and computer sciences in perspective, we review another trend in systems biology, in which recursivity and information replace the usual concepts of differential equations, feedback and feedforward loops and the like. Noting that the processes of gene expression separate the genome from the cell machinery, we analyse the role of the separation between machine and program in computers. However, computers do not make computers. For cells to make cells requires a specific organization of the genetic program, which we investigate using available knowledge. Microbial genomes are organized into a paleome (the name emphasizes the role of the corresponding functions from the time of the origin of life), comprising a constructor and a replicator, and a cenome (emphasizing community-relevant genes), made up of genes that permit life in a particular context. The cell duplication process supposes rejuvenation of the machine and replication of the program. The paleome also possesses genes that enable information to accumulate in a ratchet-like process down the generations. Systems biology must include the dynamics of information creation in its future developments. PMID:19016882

  18. Bacteria as computers making computers.

    PubMed

    Danchin, Antoine

    2009-01-01

    Various efforts to integrate biological knowledge into networks of interactions have produced a lively microbial systems biology. Putting molecular biology and computer sciences in perspective, we review another trend in systems biology, in which recursivity and information replace the usual concepts of differential equations, feedback and feedforward loops and the like. Noting that the processes of gene expression separate the genome from the cell machinery, we analyse the role of the separation between machine and program in computers. However, computers do not make computers. For cells to make cells requires a specific organization of the genetic program, which we investigate using available knowledge. Microbial genomes are organized into a paleome (the name emphasizes the role of the corresponding functions from the time of the origin of life), comprising a constructor and a replicator, and a cenome (emphasizing community-relevant genes), made up of genes that permit life in a particular context. The cell duplication process supposes rejuvenation of the machine and replication of the program. The paleome also possesses genes that enable information to accumulate in a ratchet-like process down the generations. Systems biology must include the dynamics of information creation in its future developments.

  19. Computation of forces from deformed visco-elastic biological tissues

    NASA Astrophysics Data System (ADS)

    Muñoz, José J.; Amat, David; Conte, Vito

    2018-04-01

    We present a least-squares based inverse analysis of visco-elastic biological tissues. The proposed method computes the set of contractile forces (dipoles) at the cell boundaries that induce the observed and quantified deformations. We show that the computation of these forces requires the regularisation of the problem functional for some load configurations that we study here. The functional measures the error of the dynamic problem being discretised in time with a second-order implicit time-stepping and in space with standard finite elements. We analyse the uniqueness of the inverse problem and estimate the regularisation parameter by means of an L-curve criterion. We apply the methodology to a simple toy problem and to an in vivo set of morphogenetic deformations of the Drosophila embryo.
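    The regularised least-squares step at the heart of such an inverse analysis can be sketched with a Tikhonov penalty. The random operator below merely stands in for the discretised visco-elastic model, and the regularisation weight is fixed by hand rather than chosen by the L-curve criterion the authors use.

```python
import numpy as np

# Tikhonov-regularised least squares: minimise ||A f - d||^2 + lam ||f||^2
# for forces f given measured deformations d. A is a random stand-in for
# the discretised visco-elastic model; sizes, noise level, and lam are
# illustrative.

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))                 # forces -> deformations
f_true = rng.normal(size=10)                  # "ground truth" dipoles
d = A @ f_true + 0.01 * rng.normal(size=20)   # noisy observed deformations

def solve_regularised(A, d, lam):
    """Solve the normal equations of the regularised functional."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ d)

f_hat = solve_regularised(A, d, lam=1e-3)
print(f_hat.shape)  # (10,)
```

    Increasing `lam` damps noise amplification at the price of bias; the L-curve criterion picks the corner of the trade-off between residual norm and solution norm.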

  20. A machine-learned computational functional genomics-based approach to drug classification.

    PubMed

    Lötsch, Jörn; Ultsch, Alfred

    2016-12-01

    The public accessibility of "big data" about the molecular targets of drugs and the biological functions of genes allows novel data science-based approaches to pharmacology that link drugs directly with their effects on pathophysiologic processes. This provides a phenotypic path to drug discovery and repurposing. This paper compares the performance of a functional genomics-based criterion to the traditional drug target-based classification. Knowledge discovery in the DrugBank and Gene Ontology databases allowed the construction of a "drug target versus biological process" matrix as a combination of "drug versus genes" and "genes versus biological processes" matrices. As a canonical example, such matrices were constructed for classical analgesic drugs. These matrices were projected onto a toroid grid of 50 × 82 artificial neurons using a self-organizing map (SOM). The distance and cluster structure of the high-dimensional feature space of the matrices was visualized on top of this SOM using a U-matrix. The cluster structure emerging on the U-matrix provided a correct classification of the analgesics into two main classes of opioid and non-opioid analgesics. The classification was flawless with both the functional genomics and the traditional target-based criterion. The functional genomics approach inherently included the drugs' modulatory effects on biological processes. The main pharmacological actions known from pharmacological science were captured, e.g., actions on lipid signaling for non-opioid analgesics that comprised many NSAIDs and actions on neuronal signal transmission for opioid analgesics. Using machine-learned techniques for computational drug classification in a comparative assessment, a functional genomics-based criterion was found to be similarly suitable for drug classification as the traditional target-based criterion. 
This supports a utility of functional genomics-based approaches to computational system pharmacology for drug discovery and repurposing.
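    The matrix construction described above — a "drug versus biological process" matrix obtained by combining "drug versus genes" and "genes versus biological processes" matrices — is a plain matrix product. The drug, gene, and process names below are illustrative stand-ins, not the study's data.

```python
import numpy as np

# Composing the two binary annotation matrices: entry (i, j) of the
# product counts how many of drug i's target genes are annotated to
# biological process j. All names and annotations are invented.

drugs = ["ibuprofen", "morphine"]
genes = ["PTGS2", "OPRM1", "OPRD1"]
processes = ["lipid signaling", "neuronal signal transmission"]

drug_gene = np.array([[1, 0, 0],       # drug targets gene?
                      [0, 1, 1]])
gene_process = np.array([[1, 0],       # gene annotated to process?
                         [0, 1],
                         [0, 1]])

drug_process = drug_gene @ gene_process
print(drug_process)
# [[1 0]
#  [0 2]]
```

    In the study, rows of this matrix (one feature vector per drug) are what the self-organizing map clusters; the U-matrix then visualizes the inter-cluster distances.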

  1. Systematic computation with functional gene-sets among leukemic and hematopoietic stem cells reveals a favorable prognostic signature for acute myeloid leukemia.

    PubMed

    Yang, Xinan Holly; Li, Meiyi; Wang, Bin; Zhu, Wanqi; Desgardin, Aurelie; Onel, Kenan; de Jong, Jill; Chen, Jianjun; Chen, Luonan; Cunningham, John M

    2015-03-24

    Genes that regulate stem cell function are suspected to exert adverse effects on prognosis in malignancy. However, diverse cancer stem cell signatures are difficult for physicians to interpret and apply clinically. To connect the transcriptome and stem cell biology, with potential clinical applications, we propose a novel computational "gene-to-function, snapshot-to-dynamics, and biology-to-clinic" framework to uncover core functional gene-set signatures. This framework incorporates three function-centric gene-set analysis strategies: a meta-analysis of both microarray and RNA-seq data, novel dynamic network mechanism (DNM) identification, and a personalized prognostic indicator analysis. This work uses the complex disease acute myeloid leukemia (AML) as a research platform. We introduced an adjustable "soft threshold" to a functional gene-set algorithm and found that two different analysis methods identified distinct gene-set signatures from the same samples. We identified a 30-gene cluster that characterizes leukemic stem cell (LSC)-depleted cells and a 25-gene cluster that characterizes LSC-enriched cells in parallel; both mark favorable prognosis in AML. Genes within each signature significantly share common biological processes and/or molecular functions (empirical p = 6e-5 and 0.03 respectively). The 25-gene signature reflects the abnormal development of stem cells in AML, such as AURKA over-expression. We subsequently determined that the clinical relevance of both signatures is independent of known clinical risk classifications in 214 patients with cytogenetically normal AML. We successfully validated the prognosis of both signatures in two independent cohorts of 91 and 242 patients respectively (log-rank p < 0.0015 and 0.05; empirical p < 0.015 and 0.08). 
The proposed algorithms and computational framework will advance systems biology research by efficiently translating gene-sets (rather than single genes) into biological discoveries about AML and other complex diseases.

  2. Applications of systems biology towards microbial fuel production.

    PubMed

    Gowen, Christopher M; Fong, Stephen S

    2011-10-01

    Harnessing the immense natural diversity of biological functions for economical production of fuel has enormous potential benefits. Inevitably, however, the native capabilities of any given organism must be modified to increase the productivity or efficiency of a biofuel bioprocess. From a broad perspective, the challenge is to understand the details of cellular functionality well enough to prospectively predict and modify the cellular function of a microorganism. Recent advances in experimental and computational systems biology approaches can be used to better understand cellular-level function and guide future experiments. With pressure to quickly develop viable, renewable biofuel processes, a balance must be maintained between obtaining depth of biological knowledge and applying that knowledge. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Towards a behavioral-matching based compilation of synthetic biology functions.

    PubMed

    Basso-Blandin, Adrien; Delaplace, Franck

    2015-09-01

    The field of synthetic biology is looking for an engineering framework for safely designing reliable de-novo biological functions. In this undertaking, Computer-Aided-Design (CAD) environments should play a central role in facilitating the design. Although CAD environments are widely used to engineer artificial systems, their application in synthetic biology is still in its infancy. In this article, we address the problem of designing a high-level language that lies at the core of a CAD environment. More specifically, the Gubs (Genomic Unified Behavioural Specification) language is a specification language used to describe observations of the expected behaviour. The compiler appropriately selects components such that the observed behaviour of the synthetic biological function resulting from their assembly complies with the programmed behaviour.

  4. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
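    The quoted ~10^12 advantage can be verified with back-of-the-envelope arithmetic. The sketch below (plain Python, using only the figures stated in the abstract) computes the ratio of operations/s/W/cm^3 for the brain versus the supercomputer.

```python
# Back-of-the-envelope check of the ~10^12 advantage in operations/s/W/cm^3.
# All input figures are taken from the abstract above.
brain_ops, brain_watts, brain_cm3 = 1e16, 20.0, 1200.0
super_ops, super_watts, super_cm3 = 1e15, 3e6, 1.5e9  # 1500 m^3 = 1.5e9 cm^3

brain_density = brain_ops / (brain_watts * brain_cm3)   # ops/s per W per cm^3
super_density = super_ops / (super_watts * super_cm3)
advantage = brain_density / super_density               # on the order of 10^12
```

The exact ratio with these figures is about 1.9e12, consistent with the abstract's order-of-magnitude claim.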

  5. A basis for a visual language for describing, archiving and analyzing functional models of complex biological systems

    PubMed Central

    Cook, Daniel L; Farley, Joel F; Tapscott, Stephen J

    2001-01-01

    Background: We propose that a computerized, internet-based graphical description language for systems biology will be essential for describing, archiving and analyzing complex problems of biological function in health and disease. Results: We outline here a conceptual basis for designing such a language and describe BioD, a prototype language that we have used to explore the utility and feasibility of this approach to functional biology. Using example models, we demonstrate that a rather limited lexicon of icons and arrows suffices to describe complex cell-biological systems as discrete models that can be posted and linked on the internet. Conclusions: Given available computer and internet technology, BioD may be implemented as an extensible, multidisciplinary language that can be used to archive functional systems knowledge and be extended to support both qualitative and quantitative functional analysis. PMID:11305940

  6. Biology Today. Ah, Sweet Mysteries of Life.

    ERIC Educational Resources Information Center

    Flannery, Maura C.

    1991-01-01

    Mysteries of the biological past that paleontologists are trying to solve are discussed. Topics include first seeds, fossils and computers, packrat middens, charcoal clues, soft parts, Burgess shale, halkieriids, toe count, whales with feet, long necks, and changing functions. (KR)

  7. Protein Engineering: Development of a Metal Ion Dependent Switch

    DTIC Science & Technology

    2017-05-22

    Royal Society of Chemistry; Biochemistry; PNAS; Escherichia coli; Journal of Biotechnology; Nature Protocols; Journal of Molecular Biology; Proteins: Structure, Function, and Bioinformatics; Biophysical Journal; Protein Science; Journal of Computational Chemistry; Current Opinion in Chemical Biology

  8. A Diagrammatic Language for Biochemical Networks

    NASA Astrophysics Data System (ADS)

    Maimon, Ron

    2002-03-01

    I present a diagrammatic language for representing the structure of biochemical networks. The language is designed to represent modular structure in a computational fashion, with composition of reactions replacing functional composition. This notation is used to represent arbitrarily large networks efficiently. The notation finds its most natural use in representing biological interaction networks, but it is a general computing language appropriate to any naturally occurring computation. Unlike lambda-calculus, or text-derived languages, it does not impose a tree-structure on the diagrams, and so is more effective at representing biological function than competing notations.

  9. Computer-aided design of biological circuits using TinkerCell

    PubMed Central

    Bergmann, Frank T; Sauro, Herbert M

    2010-01-01

    Synthetic biology is an engineering discipline that builds on modeling practices from systems biology and wet-lab techniques from genetic engineering. As synthetic biology advances, efficient procedures will be developed that will allow a synthetic biologist to design, analyze and build biological networks. In this idealized pipeline, computer-aided design (CAD) is a necessary component. The role of a CAD application would be to allow efficient transition from a general design to a final product. TinkerCell is a design tool for serving this purpose in synthetic biology. In TinkerCell, users build biological networks using biological parts and modules. The network can be analyzed using one of several functions provided by TinkerCell or custom programs from third-party sources. Since best practices for modeling and constructing synthetic biology networks have not yet been established, TinkerCell is designed as a flexible and extensible application that can adjust itself to changes in the field. PMID:21327060

  10. ENFIN--A European network for integrative systems biology.

    PubMed

    Kahlem, Pascal; Clegg, Andrew; Reisinger, Florian; Xenarios, Ioannis; Hermjakob, Henning; Orengo, Christine; Birney, Ewan

    2009-11-01

    Integration of biological data of various types and the development of adapted bioinformatics tools represent critical objectives to enable research at the systems level. The European Network of Excellence ENFIN is engaged in developing an adapted infrastructure to connect databases and platforms, enabling both the generation of new bioinformatics tools and the experimental validation of computational predictions. With the aim of bridging the gap between standard wet laboratories and bioinformatics, the ENFIN Network runs integrative research projects to bring the latest computational techniques to bear directly on questions dedicated to systems biology in the wet laboratory environment. The Network maintains close internal collaboration between experimental and computational research, enabling a permanent cycle of experimental validation and improvement of computational prediction methods. The computational work includes the development of a database infrastructure (EnCORE), bioinformatics analysis methods, and a novel platform for protein function analysis, FuncNet.

  11. Knowledge Driven Variable Selection (KDVS) – a new approach to enrichment analysis of gene signatures obtained from high–throughput data

    PubMed Central

    2013-01-01

    Background High–throughput (HT) technologies provide huge amounts of gene expression data that can be used to identify biomarkers useful in clinical practice. The most frequently used approaches first select a set of genes (i.e. a gene signature) able to characterize differences between two or more phenotypical conditions, and then provide a functional assessment of the selected genes with an a posteriori enrichment analysis, based on biological knowledge. However, this approach comes with some drawbacks. First, the gene selection procedure often requires tunable parameters that affect the outcome, typically producing many false hits. Second, a posteriori enrichment analysis is based on mapping between biological concepts and gene expression measurements, which is hard to compute because of constant changes in biological knowledge and genome analysis. Third, such mapping is typically used in the assessment of the coverage of a gene signature by biological concepts, which is either score–based or requires tunable parameters as well, limiting its power. Results We present Knowledge Driven Variable Selection (KDVS), a framework that uses a priori biological knowledge in HT data analysis. The expression data matrix is transformed, according to prior knowledge, into smaller matrices, easier to analyze and to interpret from both computational and biological viewpoints. Therefore KDVS, unlike most approaches, does not exclude a priori any function or process potentially relevant for the biological question under investigation. Differently from the standard approach, where gene selection and functional assessment are applied independently, KDVS embeds these two steps into a unified statistical framework, decreasing the variability derived from the threshold–dependent selection, the mapping to the biological concepts, and the signature coverage. We present three case studies to assess the usefulness of the method. 
Conclusions We showed that KDVS not only enables the selection of known biological functionalities with accuracy, but also identification of new ones. An efficient implementation of KDVS was devised to obtain results in a fast and robust way. Computing time is drastically reduced by the effective use of distributed resources. Finally, integrated visualization techniques immediately increase the interpretability of results. Overall, KDVS approach can be considered as a viable alternative to enrichment–based approaches. PMID:23302187
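    The core KDVS transformation, splitting one large expression matrix into smaller, concept-centric matrices using prior knowledge, can be illustrated in a few lines. The Python sketch below is not the KDVS implementation; the gene and gene-set names are invented examples.

```python
import numpy as np

# KDVS-style transformation sketch: split an expression matrix
# (genes x samples) into smaller per-concept matrices using
# prior-knowledge gene sets. Names below are illustrative only.
def split_by_prior_knowledge(expr, gene_index, gene_sets):
    """expr: (n_genes, n_samples); gene_index: gene -> row; gene_sets: concept -> gene list."""
    out = {}
    for concept, genes in gene_sets.items():
        rows = [gene_index[g] for g in genes if g in gene_index]
        if rows:  # concepts with no measured genes are dropped
            out[concept] = expr[rows, :]
    return out

expr = np.arange(12.0).reshape(4, 3)          # 4 genes x 3 samples
gene_index = {"TP53": 0, "AURKA": 1, "MYC": 2, "GAPDH": 3}
gene_sets = {"cell_cycle": ["AURKA", "MYC"], "apoptosis": ["TP53"]}
blocks = split_by_prior_knowledge(expr, gene_index, gene_sets)
# blocks["cell_cycle"] has shape (2, 3); blocks["apoptosis"] has shape (1, 3)
```

Each smaller matrix can then be analyzed and interpreted per biological concept, as described in the abstract.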

  12. Rational protein design: developing next-generation biological therapeutics and nanobiotechnological tools.

    PubMed

    Wilson, Corey J

    2015-01-01

    Proteins are the most functionally diverse macromolecules observed in nature, participating in a broad array of catalytic, biosensing, transport, scaffolding, and regulatory functions. Fittingly, proteins have become one of the most promising nanobiotechnological tools to date, and through the use of recombinant DNA and other laboratory methods we have produced a vast number of biological therapeutics derived from human genes. Our emerging ability to rationally design proteins (e.g., via computational methods) holds the promise of significantly expanding the number and diversity of protein therapies and has opened the gateway to realizing true and uncompromised personalized medicine. In the last decade computational protein design has been transformed from a set of fundamental strategies to stringently test our understanding of the protein structure-function relationship, to practical tools for developing useful biological processes, nano-devices, and novel therapeutics. As protein design strategies improve (i.e., in terms of accuracy and efficiency) clinicians will be able to leverage individual genetic data and biological metrics to develop and deliver personalized protein therapeutics with minimal delay. © 2014 Wiley Periodicals, Inc.

  13. A biomimetic colorimetric logic gate system based on multi-functional peptide-mediated gold nanoparticle assembly

    NASA Astrophysics Data System (ADS)

    Li, Yong; Li, Wang; He, Kai-Yu; Li, Pei; Huang, Yan; Nie, Zhou; Yao, Shou-Zhuo

    2016-04-01

    In natural biological systems, proteins exploit various functional peptide motifs to exert target response and activity switch, providing a functional and logic basis for complex cellular activities. Building biomimetic peptide-based bio-logic systems is highly intriguing but remains relatively unexplored due to limited logic recognition elements and complex signal outputs. In this proof-of-principle work, we attempted to address these problems by utilizing multi-functional peptide probes and the peptide-mediated nanoparticle assembly system. Here, the rationally designed peptide probes function as the dual-target responsive element specifically responsive to metal ions and enzymes as well as the mediator regulating the assembly of gold nanoparticles (AuNPs). Taking advantage of Zn2+ ions and chymotrypsin as the model inputs of metal ions and enzymes, respectively, we constructed the peptide logic system computed by the multi-functional peptide probes and outputted by the readable colour change of AuNPs. In this way, the representative binary basic logic gates (AND, OR, INHIBIT, NAND, IMPLICATION) have been achieved by delicately coding the peptide sequence, demonstrating the versatility of our logic system. Additionally, we demonstrated that the three-input combinational logic gate (INHIBIT-OR) could also be successfully integrated and applied as a multi-tasking biosensor for colorimetric detection of dual targets. 
This nanoparticle-based peptide logic system presents a valid strategy to illustrate peptide information processing and provides a practical platform for executing peptide computing or peptide-related multiplexing sensing, implying that the controllable nanomaterial assembly is a promising and potent methodology for the advancement of biomimetic bio-logic computation. Electronic supplementary information (ESI) available: Additional figures (Tables S1-S3 and Fig. S1-S6). See DOI: 10.1039/c6nr01072e
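    The binary gates named in the abstract, and the three-input INHIBIT-OR cascade, reduce to ordinary Boolean truth tables. The Python sketch below illustrates only the logic (inputs: 1 = present, 0 = absent), not the peptide/AuNP chemistry.

```python
# Truth-table sketch of the binary gates realized by the peptide/AuNP
# system. Gate definitions are standard Boolean logic, not data from
# the paper.
GATES = {
    "AND":         lambda a, b: a and b,
    "OR":          lambda a, b: a or b,
    "INHIBIT":     lambda a, b: a and not b,   # output only when input b is absent
    "NAND":        lambda a, b: not (a and b),
    "IMPLICATION": lambda a, b: (not a) or b,
}

def truth_table(gate):
    f = GATES[gate]
    return {(a, b): int(bool(f(a, b))) for a in (0, 1) for b in (0, 1)}

# Three-input combinational INHIBIT-OR gate: OR(INHIBIT(a, b), c)
def inhibit_or(a, b, c):
    return int(bool((a and not b) or c))
```

For example, `truth_table("INHIBIT")` outputs 1 only for input (1, 0), matching the behaviour of a gate suppressed by its second input.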

  14. Multilevel functional genomics data integration as a tool for understanding physiology: a network biology perspective.

    PubMed

    Davidsen, Peter K; Turan, Nil; Egginton, Stuart; Falciani, Francesco

    2016-02-01

    The overall aim of physiological research is to understand how living systems function in an integrative manner. Consequently, the discipline of physiology has since its infancy attempted to link multiple levels of biological organization. Increasingly this has involved mathematical and computational approaches, typically to model a small number of components spanning several levels of biological organization. With the advent of "omics" technologies, which can characterize the molecular state of a cell or tissue (intended as the level of expression and/or activity of its molecular components), the number of molecular components we can quantify has increased exponentially. Paradoxically, the unprecedented amount of experimental data has made it more difficult to derive conceptual models underlying essential mechanisms regulating mammalian physiology. We present an overview of state-of-the-art methods currently used to identify biological networks underlying genomewide responses. These are based on a data-driven approach that relies on advanced computational methods designed to "learn" biology from observational data. In this review, we illustrate an application of these computational methodologies using a case study integrating an in vivo model representing the transcriptional state of hypoxic skeletal muscle with a clinical study representing muscle wasting in chronic obstructive pulmonary disease patients. The broader application of these approaches to modeling multiple levels of biological data in the context of modern physiology is discussed. Copyright © 2016 the American Physiological Society.

  15. 2K09 and thereafter : the coming era of integrative bioinformatics, systems biology and intelligent computing for functional genomics and personalized medicine research.

    PubMed

    Yang, Jack Y; Niemierko, Andrzej; Bajcsy, Ruzena; Xu, Dong; Athey, Brian D; Zhang, Aidong; Ersoy, Okan K; Li, Guo-Zheng; Borodovsky, Mark; Zhang, Joe C; Arabnia, Hamid R; Deng, Youping; Dunker, A Keith; Liu, Yunlong; Ghafoor, Arif

    2010-12-01

    Significant interest exists in establishing synergistic research in bioinformatics, systems biology and intelligent computing. Supported by the United States National Science Foundation (NSF), International Society of Intelligent Biological Medicine (http://www.ISIBM.org), International Journal of Computational Biology and Drug Design (IJCBDD) and International Journal of Functional Informatics and Personalized Medicine, the ISIBM International Joint Conferences on Bioinformatics, Systems Biology and Intelligent Computing (ISIBM IJCBS 2009) attracted more than 300 papers and 400 researchers and medical doctors world-wide. It was the only inter/multidisciplinary conference aimed to promote synergistic research and education in bioinformatics, systems biology and intelligent computing. The conference committee was very grateful for the valuable advice and suggestions from honorary chairs, steering committee members and scientific leaders including Dr. Michael S. Waterman (USC, Member of United States National Academy of Sciences), Dr. Chih-Ming Ho (UCLA, Member of United States National Academy of Engineering and Academician of Academia Sinica), Dr. Wing H. Wong (Stanford, Member of United States National Academy of Sciences), Dr. Ruzena Bajcsy (UC Berkeley, Member of United States National Academy of Engineering and Member of United States Institute of Medicine of the National Academies), Dr. Mary Qu Yang (United States National Institutes of Health and Oak Ridge, DOE), Dr. Andrzej Niemierko (Harvard), Dr. A. Keith Dunker (Indiana), Dr. Brian D. Athey (Michigan), Dr. Weida Tong (FDA, United States Department of Health and Human Services), Dr. Cathy H. Wu (Georgetown), Dr. Dong Xu (Missouri), Drs. Arif Ghafoor and Okan K Ersoy (Purdue), Dr. Mark Borodovsky (Georgia Tech, President of ISIBM), Dr. Hamid R. Arabnia (UGA, Vice-President of ISIBM), and other scientific leaders. The committee presented the 2009 ISIBM Outstanding Achievement Awards to Dr. 
Joydeep Ghosh (UT Austin), Dr. Aidong Zhang (Buffalo) and Dr. Zhi-Hua Zhou (Nanjing) for their significant contributions to the field of intelligent biological medicine.

  16. 2K09 and thereafter : the coming era of integrative bioinformatics, systems biology and intelligent computing for functional genomics and personalized medicine research

    PubMed Central

    2010-01-01

    Significant interest exists in establishing synergistic research in bioinformatics, systems biology and intelligent computing. Supported by the United States National Science Foundation (NSF), International Society of Intelligent Biological Medicine (http://www.ISIBM.org), International Journal of Computational Biology and Drug Design (IJCBDD) and International Journal of Functional Informatics and Personalized Medicine, the ISIBM International Joint Conferences on Bioinformatics, Systems Biology and Intelligent Computing (ISIBM IJCBS 2009) attracted more than 300 papers and 400 researchers and medical doctors world-wide. It was the only inter/multidisciplinary conference aimed to promote synergistic research and education in bioinformatics, systems biology and intelligent computing. The conference committee was very grateful for the valuable advice and suggestions from honorary chairs, steering committee members and scientific leaders including Dr. Michael S. Waterman (USC, Member of United States National Academy of Sciences), Dr. Chih-Ming Ho (UCLA, Member of United States National Academy of Engineering and Academician of Academia Sinica), Dr. Wing H. Wong (Stanford, Member of United States National Academy of Sciences), Dr. Ruzena Bajcsy (UC Berkeley, Member of United States National Academy of Engineering and Member of United States Institute of Medicine of the National Academies), Dr. Mary Qu Yang (United States National Institutes of Health and Oak Ridge, DOE), Dr. Andrzej Niemierko (Harvard), Dr. A. Keith Dunker (Indiana), Dr. Brian D. Athey (Michigan), Dr. Weida Tong (FDA, United States Department of Health and Human Services), Dr. Cathy H. Wu (Georgetown), Dr. Dong Xu (Missouri), Drs. Arif Ghafoor and Okan K Ersoy (Purdue), Dr. Mark Borodovsky (Georgia Tech, President of ISIBM), Dr. Hamid R. Arabnia (UGA, Vice-President of ISIBM), and other scientific leaders. The committee presented the 2009 ISIBM Outstanding Achievement Awards to Dr. 
Joydeep Ghosh (UT Austin), Dr. Aidong Zhang (Buffalo) and Dr. Zhi-Hua Zhou (Nanjing) for their significant contributions to the field of intelligent biological medicine. PMID:21143775

  17. Knowledge acquisition, semantic text mining, and security risks in health and biomedical informatics

    PubMed Central

    Huang, Jingshan; Dou, Dejing; Dang, Jiangbo; Pardue, J Harold; Qin, Xiao; Huan, Jun; Gerthoffer, William T; Tan, Ming

    2012-01-01

    Computational techniques have been adopted in medical and biological systems for a long time. There is no doubt that the development and application of computational methods will greatly help us better understand biomedical and biological functions. Large numbers of datasets have been produced by biomedical and biological experiments and simulations. In order for researchers to gain knowledge from original data, nontrivial transformation is necessary, which is regarded as a critical link in the chain of knowledge acquisition, sharing, and reuse. Challenges that have been encountered include: how to efficiently and effectively represent human knowledge in formal computing models, how to take advantage of semantic text mining techniques rather than traditional syntactic text mining, and how to handle security issues during knowledge sharing and reuse. This paper summarizes the state-of-the-art in these research directions. We aim to provide readers with an introduction to major computing themes applicable to medical and biological research. PMID:22371823

  18. Computing with motile bio-agents

    NASA Astrophysics Data System (ADS)

    Nicolau, Dan V., Jr.; Burrage, Kevin; Nicolau, Dan V.

    2007-12-01

    We describe a model of computation of the parallel type, which we call 'computing with bio-agents', based on the concept that motions of biological objects such as bacteria or protein molecular motors in confined spaces can be regarded as computations. We begin with the observation that the geometric nature of the physical structures in which model biological objects move modulates the motions of the latter. Consequently, by changing the geometry, one can control the characteristic trajectories of the objects; on the basis of this, we argue that such systems are computing devices. We investigate the computing power of mobile bio-agent systems and show that they are computationally universal in the sense that they are capable of computing any Boolean function in parallel. We argue also that using appropriate conditions, bio-agent systems can solve NP-complete problems in probabilistic polynomial time.

  19. Massive Exploration of Perturbed Conditions of the Blood Coagulation Cascade through GPU Parallelization

    PubMed Central

    Cazzaniga, Paolo; Nobile, Marco S.; Besozzi, Daniela; Bellini, Matteo; Mauri, Giancarlo

    2014-01-01

    The introduction of general-purpose Graphics Processing Units (GPUs) is boosting scientific applications in Bioinformatics, Systems Biology, and Computational Biology. In these fields, the use of high-performance computing solutions is motivated by the need of performing large numbers of in silico analysis to study the behavior of biological systems in different conditions, which necessitate a computing power that usually overtakes the capability of standard desktop computers. In this work we present coagSODA, a CUDA-powered computational tool that was purposely developed for the analysis of a large mechanistic model of the blood coagulation cascade (BCC), defined according to both mass-action kinetics and Hill functions. coagSODA allows the execution of parallel simulations of the dynamics of the BCC by automatically deriving the system of ordinary differential equations and then exploiting the numerical integration algorithm LSODA. We present the biological results achieved with a massive exploration of perturbed conditions of the BCC, carried out with one-dimensional and bi-dimensional parameter sweep analysis, and show that GPU-accelerated parallel simulations of this model can increase the computational performances up to a 181× speedup compared to the corresponding sequential simulations. PMID:25025072
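    coagSODA itself is CUDA-based, but its underlying numerical approach, mass-action ODEs integrated with LSODA, can be sketched in a few lines of Python; SciPy's `odeint` wraps the same LSODA integrator. The toy cascade and rate constants below are illustrative assumptions, not the actual blood coagulation model.

```python
import numpy as np
from scipy.integrate import odeint  # odeint wraps LSODA, the integrator used by coagSODA

# Toy mass-action cascade A -> B -> C (NOT the actual BCC model;
# rate constants k1, k2 are illustrative assumptions).
def rhs(y, t, k1, k2):
    a, b, c = y
    return [-k1 * a,            # A consumed at rate k1*A
            k1 * a - k2 * b,    # B produced from A, consumed at rate k2*B
            k2 * b]             # C accumulates

t = np.linspace(0.0, 10.0, 101)
y = odeint(rhs, [1.0, 0.0, 0.0], t, args=(1.0, 0.5))
# Mass is conserved at every time point: A + B + C == 1
```

A parameter sweep, as in the paper, amounts to repeating this integration over a grid of `k1`, `k2` values, which is the embarrassingly parallel workload the GPU accelerates.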

  20. A machine-learned analysis of human gene polymorphisms modulating persisting pain points at major roles of neuroimmune processes.

    PubMed

    Kringel, Dario; Lippmann, Catharina; Parnham, Michael J; Kalso, Eija; Ultsch, Alfred; Lötsch, Jörn

    2018-06-19

    Human genetic research has implicated functional variants of more than one hundred genes in the modulation of persisting pain. Artificial intelligence and machine learning techniques may combine this knowledge with results of genetic research gathered in any context, which permits the identification of the key biological processes involved in chronic sensitization to pain. Based on published evidence, a set of 110 genes carrying variants reported to be associated with modulation of the clinical phenotype of persisting pain in eight different clinical settings was submitted to unsupervised machine-learning aimed at functional clustering. Subsequently, a mathematically supported subset of genes, comprising those most consistently involved in persisting pain, was analyzed by means of computational functional genomics in the Gene Ontology knowledgebase. Clustering of genes with evidence for a modulation of persisting pain elucidated a functionally heterogeneous set. The situation cleared when the focus was narrowed to a genetic modulation consistently observed throughout several clinical settings. On this basis, two groups of biological processes, the immune system and nitric oxide signaling, emerged as major players in sensitization to persisting pain, which is biologically highly plausible and in agreement with other lines of pain research. The present computational functional genomics-based approach provided a computational systems-biology perspective on chronic sensitization to pain. Human genetic control of persisting pain points to the immune system as a source of potential future targets for drugs directed against persisting pain. Contemporary machine-learned methods provide innovative approaches to knowledge discovery from previous evidence. This article is protected by copyright. All rights reserved.

  1. Computational challenges of structure-based approaches applied to HIV.

    PubMed

    Forli, Stefano; Olson, Arthur J

    2015-01-01

    Here, we review some of the opportunities and challenges that we face in computational modeling of HIV therapeutic targets and structural biology, both in terms of methodology development and structure-based drug design (SBDD). Computational methods have provided fundamental support to HIV research since the initial structural studies, helping to unravel details of HIV biology. Computational models have proved to be a powerful tool to analyze and understand the impact of mutations and to overcome their structural and functional influence in drug resistance. With the availability of structural data, in silico experiments have been instrumental in exploiting and improving interactions between drugs and viral targets, such as HIV protease, reverse transcriptase, and integrase. Issues such as viral target dynamics and mutational variability, as well as the role of water and estimates of binding free energy in characterizing ligand interactions, are areas of active computational research. Ever-increasing computational resources and theoretical and algorithmic advances have played a significant role in progress to date, and we envision a continually expanding role for computational methods in our understanding of HIV biology and SBDD in the future.

  2. Computational Modeling in Concert with Laboratory Studies: Application to B Cell Differentiation

    EPA Science Inventory

    Remediation is expensive, so accurate prediction of dose-response is important to help control costs. Dose response is a function of biological mechanisms. Computational models of these mechanisms improve the efficiency of research and provide the capability for prediction.

  3. Modelling protein functional domains in signal transduction using Maude

    NASA Technical Reports Server (NTRS)

    Sriram, M. G.

    2003-01-01

    Modelling of protein-protein interactions in signal transduction is receiving increased attention in computational biology. This paper describes recent research in the application of Maude, a symbolic language founded on rewriting logic, to the modelling of functional domains within signalling proteins. Protein functional domains (PFDs) are a critical focus of modern signal transduction research. In general, Maude models can simulate biological signalling networks and produce specific testable hypotheses at various levels of abstraction. Developing symbolic models of signalling proteins containing functional domains is important because of the potential to generate analyses of complex signalling networks based on structure-function relationships.
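    The rewriting-logic idea behind Maude can be caricatured in a few lines of Python: a system state is a set of facts, and rules rewrite matching sub-states until a fixpoint is reached. This is only a sketch of the formalism, not Maude itself, and the signalling rules and names (receptor phosphorylation, an SH2-domain adaptor) are invented for illustration.

    ```python
    # Minimal sketch of rewriting-logic-style modelling of protein
    # functional domains. The paper uses Maude; this Python analogue and
    # all rule names below are illustrative assumptions.

    def apply_rules(state, rules):
        """Apply rewrite rules until no rule changes the state (a fixpoint)."""
        state = frozenset(state)
        changed = True
        while changed:
            changed = False
            for lhs, rhs in rules:
                if lhs <= state and not rhs <= state:
                    state = (state - lhs) | rhs   # rewrite: replace lhs by rhs
                    changed = True
        return state

    # Hypothetical signalling rules: a ligand-bound receptor becomes
    # phosphorylated, and the phosphotyrosine recruits an SH2-domain protein.
    rules = [
        (frozenset({"receptor", "ligand"}), frozenset({"receptor:pY"})),
        (frozenset({"receptor:pY", "Grb2-SH2"}), frozenset({"receptor:pY:Grb2"})),
    ]

    final = apply_rules({"receptor", "ligand", "Grb2-SH2"}, rules)
    print(sorted(final))
    ```

    In Maude proper, such rules are declared symbolically (e.g. `rl` rewrite rules over algebraic terms) and the engine can also search and model-check over the rewrite space, which is what enables testable hypotheses at several levels of abstraction.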

  4. Computer-aided design of biological circuits using TinkerCell.

    PubMed

    Chandran, Deepak; Bergmann, Frank T; Sauro, Herbert M

    2010-01-01

    Synthetic biology is an engineering discipline that builds on modeling practices from systems biology and wet-lab techniques from genetic engineering. As synthetic biology advances, efficient procedures will be developed that will allow a synthetic biologist to design, analyze, and build biological networks. In this idealized pipeline, computer-aided design (CAD) is a necessary component. The role of a CAD application would be to allow efficient transition from a general design to a final product. TinkerCell is a design tool for serving this purpose in synthetic biology. In TinkerCell, users build biological networks using biological parts and modules. The network can be analyzed using one of several functions provided by TinkerCell or custom programs from third-party sources. Since best practices for modeling and constructing synthetic biology networks have not yet been established, TinkerCell is designed as a flexible and extensible application that can adjust itself to changes in the field. © 2010 Landes Bioscience

  5. A Novel Method to Verify Multilevel Computational Models of Biological Systems Using Multiscale Spatio-Temporal Meta Model Checking

    PubMed Central

    Gilbert, David

    2016-01-01

    Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). 
Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. Our methodology and software will enable computational biologists to efficiently develop reliable multilevel computational models of biological systems. PMID:27187178

  6. A Novel Method to Verify Multilevel Computational Models of Biological Systems Using Multiscale Spatio-Temporal Meta Model Checking.

    PubMed

    Pârvu, Ovidiu; Gilbert, David

    2016-01-01

    Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). 
Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. Our methodology and software will enable computational biologists to efficiently develop reliable multilevel computational models of biological systems.
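    The core idea above, checking a specification against simulation time series rather than against the model that generated them, can be caricatured with two temporal operators over lists of values. The property syntax and the data are invented; Mule's actual input format is not reproduced here.

    ```python
    # Toy illustration of checking a spatio-temporal property against
    # time series data (the general idea described above, not Mule's API).

    def eventually(series, pred):
        return any(pred(v) for v in series)

    def always(series, pred):
        return all(pred(v) for v in series)

    # Simulated output: a numeric concentration plus an emergent spatial
    # measure (area of a multicellular population) at each time point.
    concentration = [0.1, 0.4, 0.9, 1.3, 1.1]
    population_area = [5.0, 6.2, 7.9, 9.5, 9.8]

    # Spec: concentration eventually exceeds 1.0, and the population
    # area always stays below 10.0.
    spec_holds = (eventually(concentration, lambda c: c > 1.0)
                  and always(population_area, lambda a: a < 10.0))
    print(spec_holds)
    ```

    Because the check consumes only time series, the same spec can be evaluated against output from any modelling formalism, which is the portability argument made in the abstract.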

  7. Computer-Assisted Microscopy in Science Teaching and Research.

    ERIC Educational Resources Information Center

    Radice, Gary P.

    1997-01-01

    Describes a technological approach to teaching the relationships between biological form and function. Computer-assisted image analysis was integrated into a microanatomy course. Students spend less time memorizing and more time observing, measuring, and interpreting, building technical and analytical skills. Appendices list hardware and software…

  8. Computational optimization and biological evolution.

    PubMed

    Goryanin, Igor

    2010-10-01

    Modelling and optimization principles have become key concepts in many biological areas, especially in biochemistry. Definitions of objective function, fitness and co-evolution, although they differ between biology and mathematics, are similar in a general sense. Although successful in fitting models to experimental data and in making some biochemical predictions, optimization and evolutionary computation should be developed further to make more accurate real-life predictions, and to deal not only with one organism in isolation but also with communities of symbiotic and competing organisms. One of the future goals will be to explain and predict evolution not only for organisms in shake flasks or fermenters, but for real competitive multispecies environments.
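    The fitting theme can be illustrated with a minimal (1+1) evolution strategy: mutate a single "genotype" parameter and keep the variant with better fitness. This is a generic sketch; the one-parameter model y = a·x and the data are invented, not taken from the cited work.

    ```python
    import random

    # Sketch of evolutionary computation fitting y = a * x to data.
    random.seed(0)
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [2.1, 3.9, 6.2, 7.8]          # generated near a = 2

    def loss(a):
        """Objective function: sum of squared residuals (lower is fitter)."""
        return sum((a * x - y) ** 2 for x, y in zip(xs, ys))

    a = 0.0                             # starting "genotype"
    for _ in range(2000):               # (1+1) evolution strategy
        candidate = a + random.gauss(0.0, 0.1)   # mutate
        if loss(candidate) <= loss(a):           # select the fitter variant
            a = candidate
    print(round(a, 1))
    ```

    Extending such a loop from one parameter in isolation to co-evolving populations of interacting organisms is precisely the open direction the abstract points at.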

  9. An online model composition tool for system biology models

    PubMed Central

    2013-01-01

    Background There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. Results We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) Model Simulation Interface that generates a visual plot of the simulation according to the user’s input, (2) iModel Tool as a platform for users to upload their own models to compose, and (3) SimCom Tool that provides a side-by-side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. Conclusions The model composition tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. SBML Test Suite models will be a nice starting point for beginners, and for more advanced purposes, users will be able to access and employ models of the BioModels Database as well. PMID:24006914
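    Since SBML is XML, its basic structure can be inspected with nothing but the standard library. The tiny model below is invented for illustration; real tools such as PathCase-SB or libSBML do far more (validation, units, math), but this shows the `model`/`listOfSpecies` nesting that composition tools merge.

    ```python
    import xml.etree.ElementTree as ET

    # A minimal (invented) SBML Level 3 document with two species.
    SBML = """<sbml xmlns="http://www.sbml.org/sbml/level3/version1/core"
                    level="3" version="1">
      <model id="toy_glycolysis">
        <listOfSpecies>
          <species id="glucose" compartment="cytosol"/>
          <species id="pyruvate" compartment="cytosol"/>
        </listOfSpecies>
      </model>
    </sbml>"""

    # SBML elements live in a namespace, so queries must be qualified.
    ns = {"s": "http://www.sbml.org/sbml/level3/version1/core"}
    root = ET.fromstring(SBML)
    species = [sp.get("id") for sp in root.findall(".//s:species", ns)]
    print(species)   # species ids defined in the model
    ```

    Merging two models is then, at its crudest, a union of their species and reaction lists with identifier conflicts resolved — the hard part the composition tool automates.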

  10. A biomimetic colorimetric logic gate system based on multi-functional peptide-mediated gold nanoparticle assembly.

    PubMed

    Li, Yong; Li, Wang; He, Kai-Yu; Li, Pei; Huang, Yan; Nie, Zhou; Yao, Shou-Zhuo

    2016-04-28

    In natural biological systems, proteins exploit various functional peptide motifs to exert target response and activity switch, providing a functional and logic basis for complex cellular activities. Building biomimetic peptide-based bio-logic systems is highly intriguing but remains relatively unexplored due to limited logic recognition elements and complex signal outputs. In this proof-of-principle work, we attempted to address these problems by utilizing multi-functional peptide probes and the peptide-mediated nanoparticle assembly system. Here, the rationally designed peptide probes function as the dual-target responsive element specifically responsive to metal ions and enzymes as well as the mediator regulating the assembly of gold nanoparticles (AuNPs). Taking advantage of Zn2+ ions and chymotrypsin as the model inputs of metal ions and enzymes, respectively, we constructed the peptide logic system computed by the multi-functional peptide probes and outputted by the readable colour change of AuNPs. In this way, the representative binary basic logic gates (AND, OR, INHIBIT, NAND, IMPLICATION) have been achieved by delicately coding the peptide sequence, demonstrating the versatility of our logic system. Additionally, we demonstrated that the three-input combinational logic gate (INHIBIT-OR) could also be successfully integrated and applied as a multi-tasking biosensor for colorimetric detection of dual targets. This nanoparticle-based peptide logic system presents a valid strategy to illustrate peptide information processing and provides a practical platform for executing peptide computing or peptide-related multiplexing sensing, implying that the controllable nanomaterial assembly is a promising and potent methodology for the advancement of biomimetic bio-logic computation.
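    The reported gate set maps two inputs (Zn2+, chymotrypsin) onto a binary colour change of the AuNP suspension. Its logic layer can be written down as truth tables; the biochemistry is of course not modelled here, and the colour assignment is an assumption for illustration.

    ```python
    # Boolean layer of the peptide/AuNP logic system described above.
    GATES = {
        "AND":         lambda a, b: a and b,
        "OR":          lambda a, b: a or b,
        "INHIBIT":     lambda a, b: a and not b,   # a, inhibited by b
        "NAND":        lambda a, b: not (a and b),
        "IMPLICATION": lambda a, b: (not a) or b,
    }

    def colour(output):
        # AuNP assembly read out as a colour change (assignment assumed)
        return "blue (aggregated)" if output else "red (dispersed)"

    for name, gate in GATES.items():
        table = {(a, b): int(bool(gate(a, b))) for a in (0, 1) for b in (0, 1)}
        print(name, table)

    # Three-input combinational INHIBIT-OR gate, as in the dual-target sensor:
    inhibit_or = lambda a, b, c: (a and not b) or c
    print("INHIBIT-OR(1,0,0) ->", colour(inhibit_or(1, 0, 0)))
    ```

    In the experimental system, "coding the peptide sequence" plays the role these lambdas play here: each sequence hard-wires one row-pattern of a truth table into the nanoparticle assembly behaviour.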

  11. Function-Based Algorithms for Biological Sequences

    ERIC Educational Resources Information Center

    Mohanty, Pragyan Sheela P.

    2015-01-01

    Two problems at two different abstraction levels of computational biology are studied. At the molecular level, efficient pattern matching algorithms in DNA sequences are presented. For gene order data, an efficient data structure is presented capable of storing all gene re-orderings in a systematic manner. A common characteristic of presented…

  12. Finding and defining the natural automata acting in living plants: Toward the synthetic biology for robotics and informatics in vivo.

    PubMed

    Kawano, Tomonori; Bouteau, François; Mancuso, Stefano

    2012-11-01

    Automata theory is the mathematical study of abstract machines, pursued in theoretical computer science and in highly interdisciplinary fields that combine the natural sciences with computer science. In the present review article, as the chemical and biological basis for natural computing or informatics, some plants, plant cells or plant-derived molecules involved in signaling are listed and classified as natural sequential machines (namely, Mealy or Moore machines) or finite state automata. By defining the actions (states and transition functions) of these natural automata, the similarity between computational data processing and plant decision-making processes becomes obvious. Finally, their putative roles as parts for plant-based computing or robotic systems are discussed.

  13. Finding and defining the natural automata acting in living plants: Toward the synthetic biology for robotics and informatics in vivo

    PubMed Central

    Kawano, Tomonori; Bouteau, François; Mancuso, Stefano

    2012-01-01

    Automata theory is the mathematical study of abstract machines, pursued in theoretical computer science and in highly interdisciplinary fields that combine the natural sciences with computer science. In the present review article, as the chemical and biological basis for natural computing or informatics, some plants, plant cells or plant-derived molecules involved in signaling are listed and classified as natural sequential machines (namely, Mealy or Moore machines) or finite state automata. By defining the actions (states and transition functions) of these natural automata, the similarity between computational data processing and plant decision-making processes becomes obvious. Finally, their putative roles as parts for plant-based computing or robotic systems are discussed. PMID:23336016
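    A Mealy machine produces an output on every transition, which is how the review frames plant signalling elements. A generic sketch follows; the two-state stomatal flavour (open/closed guard cells responding to light) is an invented example, not one of the systems catalogued in the paper.

    ```python
    # Generic Mealy machine: (state, input) -> (next_state, output).
    class Mealy:
        def __init__(self, transitions, start):
            self.transitions = transitions
            self.state = start

        def step(self, symbol):
            self.state, out = self.transitions[(self.state, symbol)]
            return out

    # Illustrative plant-flavoured transition table (assumed, simplified).
    transitions = {
        ("closed", "light"): ("open", "open_stomata"),
        ("closed", "dark"):  ("closed", "stay_closed"),
        ("open", "light"):   ("open", "stay_open"),
        ("open", "dark"):    ("closed", "close_stomata"),
    }
    m = Mealy(transitions, start="closed")
    outputs = [m.step(s) for s in ["light", "light", "dark"]]
    print(outputs)
    ```

    Classifying a signalling element as a Mealy rather than Moore machine amounts to asking whether its output depends on the incoming stimulus as well as the current state, or on the state alone.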

  14. Multiscale Mechanics of Articular Cartilage: Potentials and Challenges of Coupling Musculoskeletal, Joint, and Microscale Computational Models

    PubMed Central

    Halloran, J. P.; Sibole, S.; van Donkelaar, C. C.; van Turnhout, M. C.; Oomens, C. W. J.; Weiss, J. A.; Guilak, F.; Erdemir, A.

    2012-01-01

    Articular cartilage experiences significant mechanical loads during daily activities. Healthy cartilage provides the capacity for load bearing and regulates the mechanobiological processes for tissue development, maintenance, and repair. Experimental studies at multiple scales have provided a fundamental understanding of macroscopic mechanical function, evaluation of the micromechanical environment of chondrocytes, and the foundations for mechanobiological response. In addition, computational models of cartilage have offered a concise description of experimental data at many spatial levels under healthy and diseased conditions, and have served to generate hypotheses for the mechanical and biological function. Further, modeling and simulation provides a platform for predictive risk assessment, management of dysfunction, as well as a means to relate multiple spatial scales. Simulation-based investigation of cartilage comes with many challenges including both the computational burden and often insufficient availability of data for model development and validation. This review outlines recent modeling and simulation approaches to understand cartilage function from a mechanical systems perspective, and illustrates pathways to associate mechanics with biological function. Computational representations at single scales are provided from the body down to the microstructure, along with attempts to explore multiscale mechanisms of load sharing that dictate the mechanical environment of the cartilage and chondrocytes. PMID:22648577

  15. How to Make a Synthetic Multicellular Computer

    PubMed Central

    Macia, Javier; Sole, Ricard

    2014-01-01

    Biological systems perform computations at multiple scales and they do so in a robust way. Engineering metaphors have often been used in order to provide a rationale for modeling cellular and molecular computing networks and as the basis for their synthetic design. However, a major constraint in this mapping between electronic and wet computational circuits is the wiring problem. Although wires are identical within electronic devices, they must be different when using synthetic biology designs. Moreover, in most cases the designed molecular systems cannot be reused for other functions. A new approximation allows us to simplify the problem by using synthetic cellular consortia where the output of the computation is distributed over multiple engineered cells. By evolving circuits in silico, we can obtain the minimal sets of Boolean units required to solve the given problem at the lowest cost using cellular consortia. Our analysis reveals that the basic set of logic units is typically non-standard. Among the most common units, the so-called inverted IMPLIES (N-Implies) appears to be one of the most important elements along with the NOT and AND functions. Although NOR and NAND gates are widely used in electronics, evolved circuits based on combinations of these gates are rare, thus suggesting that the strategy of combining the same basic logic gates might be inappropriate in order to easily implement synthetic computational constructs. The implications for future synthetic designs, the general view of synthetic biology as a standard engineering domain, as well as potential drawbacks are outlined. PMID:24586222
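    The distributed-output idea can be sketched directly: each engineered "cell" computes one simple unit on the shared inputs, and the circuit's output is the pooled (wired-OR) response of the consortium. The N-IMPLIES unit highlighted above is (not a) and b; the two-cell XOR wiring below is an illustrative construction, not a circuit taken from the paper.

    ```python
    # N-IMPLIES (inverted IMPLIES): true only when a is 0 and b is 1.
    def n_implies(a, b):
        return (not a) and b

    # A two-cell consortium computing XOR with no reusable internal wires:
    # cell 1 fires on (not a and b), cell 2 on (not b and a), and their
    # outputs are pooled in the shared medium (logical OR).
    def consortium_xor(a, b):
        cell1 = n_implies(a, b)        # N-IMPLIES
        cell2 = n_implies(b, a)        # N-IMPLIES with inputs swapped
        return cell1 or cell2

    table = {(a, b): int(bool(consortium_xor(a, b)))
             for a in (0, 1) for b in (0, 1)}
    print(table)
    ```

    Note that the same function built monolithically from NAND gates would need internal wiring inside one cell, which is exactly the constraint that makes distributed consortia attractive.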

  16. Rosen's (M,R) system in process algebra.

    PubMed

    Gatherer, Derek; Galpin, Vashti

    2013-11-17

    Robert Rosen's Metabolism-Replacement, or (M,R), system can be represented as a compact network structure with a single source and three products derived from that source in three consecutive reactions. (M,R) has been claimed to be non-reducible to its components and algorithmically non-computable, in the sense of not being evaluable as a function by a Turing machine. If (M,R)-like structures are present in real biological networks, this suggests that many biological networks will be non-computable, with implications for those branches of systems biology that rely on in silico modelling for predictive purposes. We instantiate (M,R) using the process algebra Bio-PEPA, and discuss the extent to which our model represents a true realization of (M,R). We observe that under some starting conditions and parameter values, stable states can be achieved. Although formal demonstration of algorithmic computability remains elusive for (M,R), we discuss the extent to which our Bio-PEPA representation of (M,R) allows us to sidestep Rosen's fundamental objections to computational systems biology. We argue that the behaviour of (M,R) in Bio-PEPA shows life-like properties.
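    The abstract describes (M,R) as a single source feeding three consecutive reactions, with stable states reached under some parameters. Bio-PEPA is a stochastic process algebra; as a deliberately different and much cruder technique, a deterministic mass-action ODE chain (forward-Euler steps) also shows such a network settling into a steady state. Rates and the degradation term are invented.

    ```python
    # Crude ODE sketch of a source -> A -> B -> C -> D chain (not Bio-PEPA,
    # not Rosen's full (M,R) closure; parameters are illustrative only).
    k_in, k1, k2, k3, k_deg = 1.0, 0.5, 0.5, 0.5, 0.5
    A = B = C = D = 0.0
    dt = 0.01
    for _ in range(20000):              # integrate to t = 200
        dA = k_in - k1 * A
        dB = k1 * A - k2 * B
        dC = k2 * B - k3 * C
        dD = k3 * C - k_deg * D
        A += dA * dt; B += dB * dt; C += dC * dt; D += dD * dt
    print([round(v, 2) for v in (A, B, C, D)])
    ```

    Rosen's deeper claim is not about whether such trajectories can be simulated, but about whether the organizational closure of (M,R) is capturable as a computable function at all — which is why the paper's process-algebra instantiation is the interesting move.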

  17. Computation, prediction, and experimental tests of fitness for bacteriophage T7 mutants with permuted genomes

    NASA Astrophysics Data System (ADS)

    Endy, Drew; You, Lingchong; Yin, John; Molineux, Ian J.

    2000-05-01

    We created a simulation based on experimental data from bacteriophage T7 that computes the developmental cycle of the wild-type phage and also of mutants that have an altered genome order. We used the simulation to compute the fitness of more than 10^5 mutants. We tested these computations by constructing and experimentally characterizing T7 mutants in which we repositioned gene 1, coding for T7 RNA polymerase. Computed protein synthesis rates for ectopic gene 1 strains were in moderate agreement with observed rates. Computed phage-doubling rates were close to observations for two of four strains, but significantly overestimated those of the other two. Computations indicate that the genome organization of wild-type T7 is nearly optimal for growth: only 2.8% of random genome permutations were computed to grow faster than wild type, the fastest of them 31% faster. Specific discrepancies between computations and observations suggest that a better understanding of the translation efficiency of individual mRNAs and the functions of qualitatively "nonessential" genes will be needed to improve the T7 simulation. In silico representations of biological systems can serve to assess and advance our understanding of the underlying biology. Iteration between computation, prediction, and observation should increase the rate at which biological hypotheses are formulated and tested.

  18. Nonlinear dynamics based digital logic and circuits.

    PubMed

    Kia, Behnam; Lindner, John F; Ditto, William L

    2015-01-01

    We discuss the role and importance of dynamics in the brain and biological neural networks and argue that dynamics is one of the main missing elements in conventional Boolean logic and circuits. We summarize a simple dynamics based computing method, and categorize different techniques that we have introduced to realize logic, functionality, and programmability. We discuss the role and importance of coupled dynamics in networks of biological excitable cells, and then review our simple coupled dynamics based method for computing. In this paper, for the first time, we show how dynamics can be used and programmed to implement computation in any given base, including but not limited to base two.
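    Dynamics-based (chaos) computing of the kind described above can be sketched in miniature: the logic inputs perturb the initial condition of a nonlinear map, the dynamics run for a step, and the result is thresholded. The constants below are hand-picked assumptions for illustration; with these choices the logistic map realizes an XOR gate.

    ```python
    # Toy dynamics-based logic gate using one step of the logistic map.
    def chaotic_gate(a, b, x0=0.25, delta=0.25, threshold=0.9):
        x = x0 + (a + b) * delta        # inputs shift the initial state
        x = 4.0 * x * (1.0 - x)         # one step of the nonlinear dynamics
        return int(x >= threshold)      # threshold the trajectory

    table = {(a, b): chaotic_gate(a, b) for a in (0, 1) for b in (0, 1)}
    print(table)   # realizes XOR for these parameter choices
    ```

    The programmability claim shows up even here: shifting only the operating point reprograms the same dynamics into a different gate (e.g. with `x0=0.0` the identical map-and-threshold yields AND).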

  19. What is bioinformatics? A proposed definition and overview of the field.

    PubMed

    Luscombe, N M; Greenbaum, D; Gerstein, M

    2001-01-01

    The recent flood of data from genome sequences and functional genomics has given rise to a new field, bioinformatics, which combines elements of biology and computer science. Here we propose a definition for this new field and review some of the research that is being pursued, particularly in relation to transcriptional regulatory systems. Our definition is as follows: Bioinformatics is conceptualizing biology in terms of macromolecules (in the sense of physical-chemistry) and then applying "informatics" techniques (derived from disciplines such as applied maths, computer science, and statistics) to understand and organize the information associated with these molecules, on a large scale. Analyses in bioinformatics predominantly focus on three types of large datasets available in molecular biology: macromolecular structures, genome sequences, and the results of functional genomics experiments (e.g. expression data). Additional information includes the text of scientific papers and "relationship data" from metabolic pathways, taxonomy trees, and protein-protein interaction networks. Bioinformatics employs a wide range of computational techniques including sequence and structural alignment, database design and data mining, macromolecular geometry, phylogenetic tree construction, prediction of protein structure and function, gene finding, and expression data clustering. The emphasis is on approaches integrating a variety of computational methods and heterogeneous data sources. Finally, bioinformatics is a practical discipline. We survey some representative applications, such as finding homologues, designing drugs, and performing large-scale censuses. Additional information pertinent to the review is available over the web at http://bioinfo.mbb.yale.edu/what-is-it.
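    Of the techniques listed, global pairwise sequence alignment is the most compact to show. Below is the standard Needleman-Wunsch dynamic-programming recurrence, reduced to computing the optimal alignment score with a simple scheme (match +1, mismatch −1, gap −1) — a textbook sketch, not any specific tool's implementation.

    ```python
    # Needleman-Wunsch global alignment score via dynamic programming.
    def nw_score(s, t, match=1, mismatch=-1, gap=-1):
        m, n = len(s), len(t)
        F = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):       # aligning a prefix against gaps
            F[i][0] = i * gap
        for j in range(1, n + 1):
            F[0][j] = j * gap
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                diag = F[i-1][j-1] + (match if s[i-1] == t[j-1] else mismatch)
                F[i][j] = max(diag,            # substitution/match
                              F[i-1][j] + gap,  # gap in t
                              F[i][j-1] + gap)  # gap in s
        return F[m][n]

    print(nw_score("GATTACA", "GCATGCU"))
    ```

    Recovering the alignment itself requires a traceback over the same matrix; production tools add affine gap penalties and substitution matrices, but the recurrence is unchanged.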

  20. Reverse Genetics and High Throughput Sequencing Methodologies for Plant Functional Genomics

    PubMed Central

    Ben-Amar, Anis; Daldoul, Samia; Reustle, Götz M.; Krczal, Gabriele; Mliki, Ahmed

    2016-01-01

    In the post-genomic era, increasingly sophisticated genetic tools are being developed with the long-term goal of understanding how the coordinated activity of genes gives rise to a complex organism. With the advent of next-generation sequencing combined with effective computational approaches, a wide variety of plant species have been fully sequenced, giving a wealth of sequence information on the structure and organization of plant genomes. Since thousands of gene sequences are already known, recently developed functional genomics approaches provide powerful tools to analyze plant gene functions through various gene manipulation technologies. Integration of different omics platforms along with gene annotation and computational analysis may elucidate a complete view at the systems biology level. Extensive investigations of reverse genetics methodologies have been deployed for assigning biological function to a specific gene or gene product. We provide here an updated overview of these high-throughput strategies, highlighting recent advances in the knowledge of functional genomics in plants. PMID:28217003

  1. Applications of systems approaches in the study of rheumatic diseases.

    PubMed

    Kim, Ki-Jo; Lee, Saseong; Kim, Wan-Uk

    2015-03-01

    The complex interaction of molecules within a biological system constitutes a functional module. These modules are then acted upon by both internal and external factors, such as genetic and environmental stresses, which under certain conditions can manifest as complex disease phenotypes. Recent advances in high-throughput biological analyses, in combination with improved computational methods for data enrichment, functional annotation, and network visualization, have enabled a much deeper understanding of the mechanisms underlying important biological processes by identifying functional modules that are temporally and spatially perturbed in the context of disease development. Systems biology approaches such as these have produced compelling observations that would be impossible to replicate using classical methodologies, with greater insights expected as both the technology and methods improve in the coming years. Here, we examine the use of systems biology and network analysis in the study of a wide range of rheumatic diseases to better understand the underlying molecular and clinical features.

  2. Deducing protein function by forensic integrative cell biology.

    PubMed

    Earnshaw, William C

    2013-12-01

    Our ability to sequence genomes has provided us with near-complete lists of the proteins that compose cells, tissues, and organisms, but this is only the beginning of the process to discover the functions of cellular components. In the future, it's going to be crucial to develop computational analyses that can predict the biological functions of uncharacterised proteins. At the same time, we must not forget those fundamental experimental skills needed to confirm the predictions or send the analysts back to the drawing board to devise new ones.

  3. Synthetic biology expands chemical control of microorganisms.

    PubMed

    Ford, Tyler J; Silver, Pamela A

    2015-10-01

    The tools of synthetic biology allow researchers to change the ways engineered organisms respond to chemical stimuli. Decades of basic biology research and new efforts in computational protein and RNA design have led to the development of small molecule sensors that can be used to alter organism function. These new functions leap beyond the natural propensities of the engineered organisms. They can range from simple fluorescence or growth reporting to pathogen killing, and can involve metabolic coordination among multiple cells or organisms. Herein, we discuss how synthetic biology alters microorganisms' responses to chemical stimuli resulting in the development of microbes as toxicity sensors, disease treatments, and chemical factories. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. The Layer-Oriented Approach to Declarative Languages for Biological Modeling

    PubMed Central

    Raikov, Ivan; De Schutter, Erik

    2012-01-01

    We present a new approach to modeling languages for computational biology, which we call the layer-oriented approach. The approach stems from the observation that many diverse biological phenomena are described using a small set of mathematical formalisms (e.g. differential equations), while at the same time different domains and subdomains of computational biology require that models are structured according to the accepted terminology and classification of that domain. Our approach uses distinct semantic layers to represent the domain-specific biological concepts and the underlying mathematical formalisms. Additional functionality can be transparently added to the language by adding more layers. This approach is specifically concerned with declarative languages, and throughout the paper we note some of the limitations inherent to declarative approaches. The layer-oriented approach is a way to specify explicitly how high-level biological modeling concepts are mapped to a computational representation, while abstracting away details of particular programming languages and simulation environments. To illustrate this process, we define an example language for describing models of ionic currents, and use a general mathematical notation for semantic transformations to show how to generate model simulation code for various simulation environments. We use the example language to describe a Purkinje neuron model and demonstrate how the layer-oriented approach can be used for solving several practical issues of computational neuroscience model development. We discuss the advantages and limitations of the approach in comparison with other modeling language efforts in the domain of computational biology and outline some principles for extensible, flexible modeling language design. We conclude by describing in detail the semantic transformations defined for our language. PMID:22615554

  5. The layer-oriented approach to declarative languages for biological modeling.

    PubMed

    Raikov, Ivan; De Schutter, Erik

    2012-01-01

    We present a new approach to modeling languages for computational biology, which we call the layer-oriented approach. The approach stems from the observation that many diverse biological phenomena are described using a small set of mathematical formalisms (e.g. differential equations), while at the same time different domains and subdomains of computational biology require that models are structured according to the accepted terminology and classification of that domain. Our approach uses distinct semantic layers to represent the domain-specific biological concepts and the underlying mathematical formalisms. Additional functionality can be transparently added to the language by adding more layers. This approach is specifically concerned with declarative languages, and throughout the paper we note some of the limitations inherent to declarative approaches. The layer-oriented approach is a way to specify explicitly how high-level biological modeling concepts are mapped to a computational representation, while abstracting away details of particular programming languages and simulation environments. To illustrate this process, we define an example language for describing models of ionic currents, and use a general mathematical notation for semantic transformations to show how to generate model simulation code for various simulation environments. We use the example language to describe a Purkinje neuron model and demonstrate how the layer-oriented approach can be used for solving several practical issues of computational neuroscience model development. We discuss the advantages and limitations of the approach in comparison with other modeling language efforts in the domain of computational biology and outline some principles for extensible, flexible modeling language design. We conclude by describing in detail the semantic transformations defined for our language.
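    The layering can be mimicked in miniature: a declarative, domain-level description of an ionic current (the biological layer) is mapped by a semantic transformation into an executable function (the mathematical/computational layer). The leak-current example, field names, and `eval`-based transform below are invented for illustration and are far simpler than the paper's language.

    ```python
    # Declarative (domain) layer: an ohmic leak current described as data.
    leak_current = {
        "kind": "ionic_current",
        "equation": "g * (v - e_rev)",
        "parameters": {"g": 0.3, "e_rev": -54.4},
    }

    def compile_current(desc):
        """Semantic transformation: declarative layer -> Python callable."""
        params = dict(desc["parameters"])
        # eval is used for brevity in this sketch; a real transformation
        # would target a simulator's own language.
        return lambda v: eval(desc["equation"], {}, {**params, "v": v})

    i_leak = compile_current(leak_current)
    print(round(i_leak(-50.0), 2))   # current at v = -50 mV
    ```

    The point of the layered design is that the same declarative record could be transformed into NEURON, Brian, or plain C code by swapping the transformation, without touching the biological description.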

  6. Computational modeling of neural plasticity for self-organization of neural networks.

    PubMed

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the line that combines findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. Finite Dimensional Approximations for Continuum Multiscale Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berlyand, Leonid

    2017-01-24

    The completed research project concerns the development of novel computational techniques for modeling nonlinear multiscale physical and biological phenomena. Specifically, it addresses the theoretical development and applications of the homogenization theory (coarse graining) approach to calculation of the effective properties of highly heterogeneous biological and bio-inspired materials with many spatial scales and nonlinear behavior. This theory studies properties of strongly heterogeneous media in problems arising in materials science, geoscience, biology, etc. Modeling of such media raises fundamental mathematical questions, primarily in partial differential equations (PDEs) and calculus of variations, the subject of the PI’s research. The focus of the completed research was on mathematical models of biological and bio-inspired materials with the common theme of multiscale analysis and coarse-grain computational techniques. Biological and bio-inspired materials offer the unique ability to create environmentally clean functional materials used for energy conversion and storage. These materials are intrinsically complex, with hierarchical organization occurring on many nested length and time scales. The potential to rationally design and tailor the properties of these materials for broad energy applications has been hampered by the lack of computational techniques able to bridge from the molecular to the macroscopic scale. The project addressed the challenge of computational treatments of such complex materials by the development of a synergistic approach that combines innovative multiscale modeling/analysis techniques with high performance computing.

  8. Quantum chemical methods for the investigation of photoinitiated processes in biological systems: theory and applications.

    PubMed

    Dreuw, Andreas

    2006-11-13

    With the advent of modern computers and advances in the development of efficient quantum chemical computer codes, the meaningful computation of large molecular systems at a quantum mechanical level became feasible. Recent experimental effort to understand photoinitiated processes in biological systems, for instance photosynthesis or vision, at a molecular level also triggered theoretical investigations in this field. In this Minireview, standard quantum chemical methods are presented that are applicable and recently used for the calculation of excited states of photoinitiated processes in biological molecular systems. These methods comprise configuration interaction singles, the complete active space self-consistent field method, and time-dependent density functional theory and its variants. Semiempirical approaches are also covered. Their basic theoretical concepts and mathematical equations are briefly outlined, and their properties and limitations are discussed. Recent successful applications of the methods to photoinitiated processes in biological systems are described and theoretical tools for the analysis of excited states are presented.

  9. A Computational Framework for Bioimaging Simulation.

    PubMed

    Watabe, Masaki; Arjunan, Satya N V; Fukushima, Seiya; Iwamoto, Kazunari; Kozuka, Jun; Matsuoka, Satomi; Shindo, Yuki; Ueda, Masahiro; Takahashi, Koichi

    2015-01-01

    Using bioimaging technology, biologists have attempted to identify and document analytical interpretations that underlie biological phenomena in biological cells. Theoretical biology aims at distilling those interpretations into knowledge in the mathematical form of biochemical reaction networks and understanding how higher level functions emerge from the combined action of biomolecules. However, there still remain formidable challenges in bridging the gap between bioimaging and mathematical modeling. Generally, measurements using fluorescence microscopy systems are influenced by systematic effects that arise from the stochastic nature of biological cells, the imaging apparatus, and optical physics. Such systematic effects are present in all bioimaging systems and hinder quantitative comparison between the cell model and bioimages. Computational tools for such a comparison are still unavailable. Thus, in this work, we present a computational framework for handling the parameters of the cell models and the optical physics governing bioimaging systems. Simulation using this framework can generate digital images of cell simulation results after accounting for the systematic effects. We then demonstrate that such a framework enables comparison at the level of photon-counting units.

  10. Sixth International Conference on Systems Biology (ICSB 2005)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Professor Andrew Murray

    2005-10-22

    This grant supported the Sixth International Conference on Systems Biology (ICSB 2005), held in Boston, Massachusetts from October 19th to 22nd, 2005. The ICSB is the only major, annual, international conference focused exclusively on the important emerging field of systems biology. It draws together scientists with expertise in theoretical, computational and experimental approaches to understanding biological systems at many levels. Previous ICSB meetings have been held in Tokyo (2000), at Caltech (2001), at the Karolinska Institute (2002), at Washington University in St. Louis (2003), and in Heidelberg (2004). These conferences have been increasingly successful at bringing together the growing community of established and junior researchers with interests in this area. Boston is home to several groups that have shown leadership in the field and was therefore an ideal place to hold this conference. The executive committee for the conference comprised Jim Collins (Biomedical Engineering, Boston University), Marc Kirschner (chair of the new Department of Systems Biology at Harvard Medical School), Eric Lander (director of the Broad Institute of MIT and Harvard), Andrew Murray (director of Harvard’s Bauer Center for Genomics Research) and Peter Sorger (director of MIT’s Computational and Systems Biology Initiative). There are almost as many definitions of systems biology as there are systems biologists. We take a broad view of the field, and we succeeded in one of our major aims in organizing a conference that bridges two types of divide. The first is that between traditional academic disciplines: each of our sessions included speakers from biology and from one or more physical or quantitative sciences. The second type includes those that separate experimental biologists from their colleagues who work on theory or computation.
Here again, each session included representatives from at least two of these three categories; indeed, many of the speakers combined at least two of the categories in their own research activities. We define systems biology as a widening of focus in biology from individual genes or proteins to the complex networks of these molecules that allow cells and organisms to function. In the same way that conscious thought cannot be said to reside in any single neuron in the brain, simpler biological functions such as cell division arise from the interactions among many components in a network or ‘functional module’. For us, systems biology is characterized by the recognition that a higher-order description of biological function, accompanied by quantitative methods of analysis (often borrowed from disciplines such as physics, engineering, computer science or mathematics), can lead to the identification of general principles that underlie the structure, behavior, and evolution of cells and organisms. The heart of the conference was a set of sessions on six topics: intracellular dynamics (featuring measurements on single cells, and their interpretation); biology by design (synthetic biology); intracellular networks (signal transduction and transcriptional regulation); multicellular networks (development and pattern formation); mechanics and scale in cellular behavior (featuring work on cytoskeletal mechanics, and on scaling relationships in biology); and evolution in action (including experimental evolution, of both real and artificial life-forms). Each session had four invited speakers; 23 of the 24 invited speakers attended (see below). We selected these speakers not only for the interest of their research, but for their skills as communicators, thereby giving us the best chance of bridging the divides mentioned above. We also made a point of including women, younger investigators and people from outside the United States among the speakers.
In addition to the invited speakers, we allotted time in the program for at least five contributed talks, which were selected from the poster submissions. Our aim in selecting these contributors was to showcase work that was “hot off the bench” (or computer) at the time of the conference and to create additional opportunities for younger investigators to present their work. The main conference was preceded by a day of tutorials, and followed by two days of workshops, on a range of topics in quantitative, computational and systems biology.

  11. IN SILICO APPROACHES TO MECHANISTIC AND PREDICTIVE TOXICOLOGY: AN INTRODUCTION TO BIOINFORMATICS FOR TOXICOLOGISTS. (R827402)

    EPA Science Inventory

    Abstract

    Bioinformatics, or in silico biology, is a rapidly growing field that encompasses the theory and application of computational approaches to model, predict, and explain biological function at the molecular level. This information-rich field requires new ...

  12. Biomimetic robots using EAP as artificial muscles - progress and challenges

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph

    2004-01-01

    Biology offers a great model for emulation in areas including tools, computational algorithms, materials science, mechanisms and information technology. In recent years, the field of biomimetics, namely mimicking biology, has blossomed with significant advances enabling the reverse engineering of many animals' functions and implementation of some of these capabilities.

  13. An Inexpensive Biophysics Laboratory Apparatus for Acquiring Pulmonary Function Data with Clinical Applications

    NASA Astrophysics Data System (ADS)

    Harkay, Gregory

    2001-11-01

    Interest on the part of the Physics Department at KSC in developing a computer-interfaced lab with appeal to biology majors, and a need to perform a clinical pulmonological study to fulfill a biology requirement, led to the author's undergraduate research project in which a recording spirometer (typical cost: $15K) was constructed from readily available materials and a typical undergraduate lab computer interface. Simple components, including a basic photogate circuit, CPU fan, and PVC couplings, were used to construct an instrument for measuring flow rates as a function of time. Pasco software was used to build an experiment in which data were collected and integrated so that one could obtain accurate values for FEV1 (forced expiratory volume for one second) and FVC (forced vital capacity) and their ratio for a large sample of subjects. Results were compared to published norms and subjects with impaired respiratory mechanisms identified. This laboratory exercise is one with which biology students can clearly identify and would be a robust addition to the repertoire for a high-school or college physics or biology teaching laboratory.
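
    The FEV1 and FVC values described above reduce to numerically integrating sampled flow rate over time. A minimal sketch of that calculation, using trapezoidal integration and made-up flow samples (the actual Pasco data pipeline and any real measurements are not reproduced here):

    ```python
    # Hypothetical sketch: FEV1/FVC from sampled flow-rate data (L/s vs. s).
    # Flow samples below are illustrative, not measured spirometry data.

    def integrate_volume(times, flows, t_end=None):
        """Trapezoidal integration of flow over time, optionally up to t_end."""
        volume = 0.0
        for i in range(1, len(times)):
            t0, t1 = times[i - 1], times[i]
            if t_end is not None and t0 >= t_end:
                break
            if t_end is not None and t1 > t_end:
                # Interpolate flow at t_end and integrate the partial step
                frac = (t_end - t0) / (t1 - t0)
                f_end = flows[i - 1] + frac * (flows[i] - flows[i - 1])
                volume += 0.5 * (flows[i - 1] + f_end) * (t_end - t0)
                break
            volume += 0.5 * (flows[i - 1] + flows[i]) * (t1 - t0)
        return volume

    # Illustrative forced exhalation: flow decays over ~4 s
    times = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
    flows = [6.0, 4.5, 3.0, 2.0, 1.3, 0.8, 0.4, 0.2, 0.0]

    fev1 = integrate_volume(times, flows, t_end=1.0)  # volume in first second
    fvc = integrate_volume(times, flows)              # total exhaled volume
    ratio = fev1 / fvc                                # clinically reported ratio
    ```

    With these illustrative samples the sketch yields FEV1 = 4.5 L, FVC = 7.6 L, and a ratio near 0.59; real data would of course come from the photogate flow sensor.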

  14. Reprogramming cellular functions with engineered membrane proteins.

    PubMed

    Arber, Caroline; Young, Melvin; Barth, Patrick

    2017-10-01

    Taking inspiration from Nature, synthetic biology utilizes and modifies biological components to expand the range of biological functions for engineering new practical devices and therapeutics. While early breakthroughs mainly concerned the design of gene circuits, recent efforts have focused on engineering signaling pathways to reprogram cellular functions. Since signal transduction across cell membranes initiates and controls intracellular signaling, membrane receptors have been targeted by diverse protein engineering approaches despite limited mechanistic understanding of their function. The modular architecture of several receptor families has enabled the empirical construction of chimeric receptors combining domains from distinct native receptors which have found successful immunotherapeutic applications. Meanwhile, progress in membrane protein structure determination, computational modeling and rational design promise to foster the engineering of a broader range of membrane receptor functions. Marrying empirical and rational membrane protein engineering approaches should enable the reprogramming of cells with widely diverse fine-tuned functions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Nanomaterial interactions with biomembranes: Bridging the gap between soft matter models and biological context.

    PubMed

    Werner, Marco; Auth, Thorsten; Beales, Paul A; Fleury, Jean Baptiste; Höök, Fredrik; Kress, Holger; Van Lehn, Reid C; Müller, Marcus; Petrov, Eugene P; Sarkisov, Lev; Sommer, Jens-Uwe; Baulin, Vladimir A

    2018-04-03

    Synthetic polymers, nanoparticles, and carbon-based materials have great potential in applications including drug delivery, gene transfection, in vitro and in vivo imaging, and the alteration of biological function. Nature and humans use different design strategies to create nanomaterials: biological objects have emerged from billions of years of evolution and from adaptation to their environment resulting in high levels of structural complexity; in contrast, synthetic nanomaterials result from minimalistic but controlled design options limited by the authors' current understanding of the biological world. This conceptual mismatch makes it challenging to create synthetic nanomaterials that possess desired functions in biological media. In many biologically relevant applications, nanomaterials must enter the cell interior to perform their functions. An essential transport barrier is the cell-protecting plasma membrane and hence the understanding of its interaction with nanomaterials is a fundamental task in biotechnology. The authors present open questions in the field of nanomaterial interactions with biological membranes, including: how physical mechanisms and molecular forces acting at the nanoscale restrict or inspire design options; which levels of complexity to include next in computational and experimental models to describe how nanomaterials cross barriers via passive or active processes; and how the biological media and protein corona interfere with nanomaterial functionality. In this Perspective, the authors address these questions with the aim of offering guidelines for the development of next-generation nanomaterials that function in biological media.

  16. Enhancement of DFT-calculations at petascale: Nuclear Magnetic Resonance, Hybrid Density Functional Theory and Car-Parrinello calculations

    NASA Astrophysics Data System (ADS)

    Varini, Nicola; Ceresoli, Davide; Martin-Samos, Layla; Girotto, Ivan; Cavazzoni, Carlo

    2013-08-01

    One of the most promising techniques used for studying the electronic properties of materials is based on the Density Functional Theory (DFT) approach and its extensions. DFT has been widely applied in traditional solid state physics problems where periodicity and symmetry play a crucial role in reducing the computational workload. With growing compute power capability and the development of improved DFT methods, the range of potential applications now includes other scientific areas such as Chemistry and Biology. However, cross-disciplinary combinations of traditional Solid-State Physics, Chemistry and Biology drastically increase the system complexity while reducing the degree of periodicity and symmetry. Large simulation cells containing hundreds or even thousands of atoms are needed to model these kinds of physical systems. The treatment of such systems still remains a computational challenge even with modern supercomputers. In this paper we describe our work to improve the scalability of Quantum ESPRESSO (Giannozzi et al., 2009 [3]) for treating very large cells and huge numbers of electrons. To this end we have introduced an extra level of parallelism, over electronic bands, in three kernels for solving computationally expensive problems: the Sternheimer equation solver (Nuclear Magnetic Resonance, package QE-GIPAW), the Fock operator builder (electronic ground-state, package PWscf) and most of the Car-Parrinello routines (Car-Parrinello dynamics, package CP). Final benchmarks show our success in computing the Nuclear Magnetic Resonance (NMR) chemical shift of a large biological assembly, the electronic structure of defected amorphous silica with hybrid exchange-correlation functionals and the equilibrium atomic structure of eight Porphyrins anchored to a Carbon Nanotube, on many thousands of CPU cores.

  17. A Computational Network Biology Approach to Uncover Novel Genes Related to Alzheimer's Disease.

    PubMed

    Zanzoni, Andreas

    2016-01-01

    Recent advances in the fields of genetics and genomics have enabled the identification of numerous Alzheimer's disease (AD) candidate genes, although for many of them the role in AD pathophysiology has not been uncovered yet. Concomitantly, network biology studies have shown a strong link between protein network connectivity and disease. In this chapter I describe a computational approach that, by combining local and global network analysis strategies, allows the formulation of novel hypotheses on the molecular mechanisms involved in AD and prioritizes candidate genes for further functional studies.

  18. [Genotoxic modification of nucleic acid bases and its biological consequences: review and prospects of experimental and computational investigations]

    NASA Technical Reports Server (NTRS)

    Poltev, V. I.; Bruskov, V. I.; Shuliupina, N. V.; Rein, R.; Shibata, M.; Ornstein, R.; Miller, J.

    1993-01-01

    The review is presented of experimental and computational data on the influence of genotoxic modification of bases (deamination, alkylation, oxidation) on the structure and biological functioning of nucleic acids. Pathways are discussed for the influence of modification on coding properties of bases, on possible errors of nucleic acid biosynthesis, and on configurations of nucleotide mispairs. The atomic structure of nucleic acid fragments with modified bases and the role of base damages in mutagenesis and carcinogenesis are considered.

  19. Computational studies of Ras and PI3K

    NASA Technical Reports Server (NTRS)

    Ren, Lei; Cucinotta, Francis A.

    2004-01-01

    Until recently, experimental techniques in molecular cell biology have been the primary means to investigate biological risk from space radiation. However, computational modeling provides an alternative theoretical approach, which utilizes various computational tools to simulate proteins, nucleotides, and their interactions. In this study, we are focused on using molecular mechanics (MM) and molecular dynamics (MD) to study the mechanism of protein-protein binding and to estimate the binding free energy between proteins. Ras is a key element in a variety of cell processes, and its activation of phosphoinositide 3-kinase (PI3K) is important for survival of transformed cells. Different computational approaches for this particular study are presented to calculate the solvation energies and binding free energies of H-Ras and PI3K. The goal of this study is to establish computational methods to investigate the roles different proteins play in the cellular responses to space radiation, including modification of protein function through gene mutation, and to support the studies in molecular cell biology and theoretical kinetics models for our risk assessment project.
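
    Binding free energy estimates of the kind mentioned above are commonly obtained in MM-based end-point schemes (e.g. MM/PBSA-style decompositions; the paper's exact protocol is not specified here, so this is an assumption) as the free energy of the complex minus the free energies of the unbound partners. A toy sketch with purely illustrative numbers:

    ```python
    # Hedged sketch of an end-point binding free energy decomposition.
    # All energies (kcal/mol) are illustrative placeholders, not H-Ras/PI3K data.

    def free_energy(e_mm, g_solv, t_s):
        """G = E_MM + G_solv - T*S : gas-phase MM energy, solvation, entropy."""
        return e_mm + g_solv - t_s

    # Hypothetical ensemble averages for complex, receptor, and ligand
    g_complex  = free_energy(e_mm=-1250.0, g_solv=-310.0, t_s=45.0)
    g_receptor = free_energy(e_mm=-800.0,  g_solv=-220.0, t_s=28.0)
    g_ligand   = free_energy(e_mm=-420.0,  g_solv=-95.0,  t_s=12.0)

    # Binding free energy: G(complex) - [G(receptor) + G(ligand)]
    dg_bind = g_complex - (g_receptor + g_ligand)
    ```

    A negative `dg_bind` indicates favorable binding; in practice each term is averaged over an MD trajectory rather than taken from single structures.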

  20. Bio-steps beyond Turing.

    PubMed

    Calude, Cristian S; Păun, Gheorghe

    2004-11-01

    Are there 'biologically computing agents' capable of computing Turing uncomputable functions? It is perhaps tempting to dismiss this question with a negative answer. Quite the opposite, for the first time in the literature on molecular computing we contend that the answer is not theoretically negative. Our results will be formulated in the language of membrane computing (P systems). Some mathematical results presented here are interesting in themselves. In contrast with most speed-up methods which are based on non-determinism, our results rest upon some universality results proved for deterministic P systems. These results will be used for building "accelerated P systems". In contrast with the case of Turing machines, acceleration is a part of the hardware (not a quality of the environment) and it is realised either by decreasing the size of "reactors" or by speeding-up the communication channels. Consequently, two acceleration postulates of biological inspiration are introduced; each of them poses specific questions to biology. Finally, in a more speculative part of the paper, we will deal with possible Turing non-computable activity of the brain and possible forms of (extraterrestrial) intelligence.

  1. Incorporating modeling and simulations in undergraduate biophysical chemistry course to promote understanding of structure-dynamics-function relationships in proteins.

    PubMed

    Hati, Sanchita; Bhattacharyya, Sudeep

    2016-01-01

    A project-based biophysical chemistry laboratory course, which is offered to the biochemistry and molecular biology majors in their senior year, is described. In this course, the classroom study of the structure-function of biomolecules is integrated with the discovery-guided laboratory study of these molecules using computer modeling and simulations. In particular, modern computational tools are employed to elucidate the relationship between structure, dynamics, and function in proteins. Computer-based laboratory protocols that we introduced in three modules allow students to visualize the secondary, super-secondary, and tertiary structures of proteins, analyze non-covalent interactions in protein-ligand complexes, develop three-dimensional structural models (homology model) for new protein sequences and evaluate their structural qualities, and study proteins' intrinsic dynamics to understand their functions. In the fourth module, students are assigned to an authentic research problem, where they apply their laboratory skills (acquired in modules 1-3) to answer conceptual biophysical questions. Through this process, students gain in-depth understanding of protein dynamics-the missing link between structure and function. Additionally, the requirement of term papers sharpens students' writing and communication skills. Finally, these projects result in new findings that are communicated in peer-reviewed journals. © 2016 The International Union of Biochemistry and Molecular Biology.

  2. Logic integration of mRNA signals by an RNAi-based molecular computer.

    PubMed

    Xie, Zhen; Liu, Siyuan John; Bleris, Leonidas; Benenson, Yaakov

    2010-05-01

    Synthetic in vivo molecular 'computers' could rewire biological processes by establishing programmable, non-native pathways between molecular signals and biological responses. Multiple molecular computer prototypes have been shown to work in simple buffered solutions. Many of those prototypes were made of DNA strands and performed computations using cycles of annealing-digestion or strand displacement. We have previously introduced RNA interference (RNAi)-based computing as a way of implementing complex molecular logic in vivo. Because it also relies on nucleic acids for its operation, RNAi computing could benefit from the tools developed for DNA systems. However, these tools must be harnessed to produce bioactive components and be adapted for harsh operating environments that reflect in vivo conditions. In a step toward this goal, we report the construction and implementation of biosensors that 'transduce' mRNA levels into bioactive, small interfering RNA molecules via RNA strand exchange in a cell-free Drosophila embryo lysate, a step beyond simple buffered environments. We further integrate the sensors with our RNAi 'computational' module to evaluate two-input logic functions on mRNA concentrations. Our results show how RNA strand exchange can expand the utility of RNAi computing and point toward the possibility of using strand exchange in a native biological setting.
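
    As a toy abstraction of the two-input logic evaluation described above (the thresholds, names, and boolean reduction are my simplification, not the paper's actual biochemical model), each sensor can be viewed as thresholding an mRNA concentration, with the RNAi module combining the resulting signals:

    ```python
    # Hypothetical abstraction of two-input logic on mRNA concentrations.
    # Threshold and gate semantics are illustrative, not the published design.

    THRESHOLD = 10.0  # arbitrary units of mRNA abundance

    def sensor(mrna_level, threshold=THRESHOLD):
        """A sensor 'transduces' an mRNA level into a boolean siRNA signal."""
        return mrna_level >= threshold

    def and_gate(level_a, level_b):
        """Knockdown of the target only when both input mRNAs are present."""
        return sensor(level_a) and sensor(level_b)

    def or_gate(level_a, level_b):
        """Knockdown when either input mRNA is present."""
        return sensor(level_a) or sensor(level_b)
    ```

    In the actual system the "boolean" output is realized chemically, as the presence or absence of bioactive siRNA produced by strand exchange.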

  3. Computer Center: 2 HyperCard Stacks for Biology.

    ERIC Educational Resources Information Center

    Duhrkopf, Richard, Ed.

    1989-01-01

    Two HyperCard stacks are reviewed including "Amino Acids," created to help students associate amino acid names with their structures, and "DNA Teacher," a tutorial on the structure and function of DNA. Availability, functions, hardware requirements, and general comments on these stacks are provided. (CW)

  4. Designing integrated computational biology pipelines visually.

    PubMed

    Jamil, Hasan M

    2013-01-01

    The long-term cost of developing and maintaining a computational pipeline that depends upon data integration and sophisticated workflow logic is too high to even contemplate "what if" or ad hoc type queries. In this paper, we introduce a novel application building interface for computational biology research, called VizBuilder, by leveraging a recent query language called BioFlow for life sciences databases. Using VizBuilder, it is now possible to develop ad hoc complex computational biology applications at throw away costs. The underlying query language supports data integration and workflow construction almost transparently and fully automatically, using a best effort approach. Users express their application by drawing it with VizBuilder icons and connecting them in a meaningful way. Completed applications are compiled and translated as BioFlow queries for execution by the data management system LifeDB, for which VizBuilder serves as a front end. We discuss VizBuilder features and functionalities in the context of a real life application after we briefly introduce BioFlow. The architecture and design principles of VizBuilder are also discussed. Finally, we outline future extensions of VizBuilder. To our knowledge, VizBuilder is a unique system that allows visually designing computational biology pipelines involving distributed and heterogeneous resources in an ad hoc manner.

  5. From biological neural networks to thinking machines: Transitioning biological organizational principles to computer technology

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.

    1991-01-01

    The three-dimensional organization of the vestibular macula is under study by computer assisted reconstruction and simulation methods as a model for more complex neural systems. One goal of this research is to transition knowledge of biological neural network architecture and functioning to computer technology, to contribute to the development of thinking computers. Maculas are organized as weighted neural networks for parallel distributed processing of information. The network is characterized by non-linearity of its terminal/receptive fields. Wiring appears to develop through constrained randomness. A further property is the presence of two main circuits, highly channeled and distributed modifying, that are connected through feedforward-feedback collaterals and a biasing subcircuit. Computer simulations demonstrate that differences in geometry of the feedback (afferent) collaterals affect the timing and the magnitude of voltage changes delivered to the spike initiation zone. Feedforward (efferent) collaterals act as voltage followers and likely inhibit neurons of the distributed modifying circuit. These results illustrate the importance of feedforward-feedback loops, of timing, and of inhibition in refining neural network output. They also suggest that it is the distributed modifying network that is most involved in adaptation, memory, and learning. Tests of macular adaptation, through hyper- and microgravitational studies, support this hypothesis since synapses in the distributed modifying circuit, but not the channeled circuit, are altered. Transitioning knowledge of biological systems to computer technology, however, remains problematical.

  6. Linear-array based full-view high-resolution photoacoustic computed tomography of whole mouse brain functions in vivo

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhang, Pengfei; Wang, Lihong V.

    2018-02-01

    Photoacoustic computed tomography (PACT) is a non-invasive imaging technique offering high contrast, high resolution, and deep penetration in biological tissues. We report a PACT system equipped with a high-frequency linear array for anatomical and functional imaging of the whole mouse brain. The linear array was rotationally scanned in the coronal plane to achieve full-view coverage. We investigated spontaneous neural activities in the deep brain by monitoring the hemodynamics and observed strong interhemispherical correlations between contralateral regions, both in the cortical layer and in the deep regions.

  7. Survey of local and global biological network alignment: the need to reconcile the two sides of the same coin.

    PubMed

    Guzzi, Pietro Hiram; Milenkovic, Tijana

    2018-05-01

    Analogous to genomic sequence alignment that allows for across-species transfer of biological knowledge between conserved sequence regions, biological network alignment can be used to guide the knowledge transfer between conserved regions of molecular networks of different species. Hence, biological network alignment can be used to redefine the traditional notion of a sequence-based homology to a new notion of network-based homology. Analogous to genomic sequence alignment, there exist local and global biological network alignments. Here, we survey prominent and recent computational approaches of each network alignment type and discuss their (dis)advantages. Then, as it was recently shown that the two approach types are complementary, in the sense that they capture different slices of cellular functioning, we discuss the need to reconcile the two network alignment types and present a recent first step in this direction. We conclude with some open research problems on this topic and comment on the usefulness of network alignment in other domains besides computational biology.

  8. Roadmap for Computer-Aided Modeling of Theranostics and Related Nanosystems

    NASA Astrophysics Data System (ADS)

    Ulicny, Jozef; Kozar, Tibor

    2018-02-01

    Detailed understanding of the interactions of novel metal-containing nanoparticles with biological membranes, macromolecules and other molecular targets of the living cell is crucial for the elucidation of the biological actions of such functionalized nanosystems. We present here the construction and modeling of thiolate-protected gold clusters and the prediction of their static and dynamic properties.

  9. Proteins of unknown function in the Protein Data Bank (PDB): an inventory of true uncharacterized proteins and computational tools for their analysis.

    PubMed

    Nadzirin, Nurul; Firdaus-Raih, Mohd

    2012-10-08

    Proteins of uncharacterized function form a large part of many currently available biological databases, and this situation exists even in the Protein Data Bank (PDB). Our analysis of recent PDB data revealed that only 42.53% of the entries categorized under "unknown function" (1084 coordinate files) are true examples of proteins of unknown function at this point in time. The remaining 1465 entries carrying this annotation could have their annotations re-assessed, based on the availability of direct functional characterization experiments for the protein itself, or for homologous sequences or structures, thus enabling computational function inference.
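
The two counts in the abstract can be checked directly: 1084 truly uncharacterized entries out of 1084 + 1465 = 2549 "unknown function" entries gives the quoted 42.53%.

```python
# Checking the abstract's figures: 1084 of the 2549 PDB entries
# annotated "unknown function" are truly uncharacterized.
true_unknown = 1084
reassessable = 1465
share = true_unknown / (true_unknown + reassessable)
print(f"{share:.2%}")   # 42.53%
```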

  10. Phase-contrast x-ray computed tomography for biological imaging

    NASA Astrophysics Data System (ADS)

    Momose, Atsushi; Takeda, Tohoru; Itai, Yuji

    1997-10-01

    We have shown that 3D structures in biological soft tissues, such as cancers, can be revealed by phase-contrast x-ray computed tomography using an x-ray interferometer. As a next step, we aim to apply this technique to in vivo observation, including radiographic applications. For this purpose, a field of view of more than a few centimeters is desired; therefore, a larger x-ray interferometer should be used with x-rays of higher energy. We have evaluated the optimal x-ray energy from the aspect of dose as a function of sample size. Moreover, the spatial resolution required of the image sensor is discussed as a function of x-ray energy and sample size, based on the requirements of interference-fringe analysis.

  11. Active Interaction Mapping as a tool to elucidate hierarchical functions of biological processes.

    PubMed

    Farré, Jean-Claude; Kramer, Michael; Ideker, Trey; Subramani, Suresh

    2017-07-03

    Increasingly, various 'omics data are contributing significantly to our understanding of novel biological processes, but it has not been possible to iteratively elucidate hierarchical functions in complex phenomena. We describe a general systems biology approach called Active Interaction Mapping (AI-MAP), which elucidates the hierarchy of functions for any biological process. Existing and new 'omics data sets can be iteratively added to create and improve hierarchical models which enhance our understanding of particular biological processes. The best datatypes to further improve an AI-MAP model are predicted computationally. We applied this approach to our understanding of general and selective autophagy, which are conserved in most eukaryotes, setting the stage for broader application to other cellular processes of interest. In this application to autophagy-related processes, we uncovered and validated new autophagy and autophagy-related processes, expanded known autophagy processes with new components, integrated known non-autophagic processes with autophagy, and predicted other unexplored connections.

  12. PatchSurfers: Two methods for local molecular property-based binding ligand prediction.

    PubMed

    Shin, Woong-Hee; Bures, Mark Gregory; Kihara, Daisuke

    2016-01-15

    Protein function prediction is an active area of research in computational biology. Function prediction can help biologists make hypotheses for the characterization of genes and help interpret biological assays, and thus is a productive area for collaboration between experimental and computational biologists. Among various function prediction methods, predicting binding ligand molecules for a target protein is an important class, because ligand binding events for a protein are usually closely intertwined with the protein's biological function, and because predicted binding ligands can often be directly tested by biochemical assays. Binding ligand prediction methods can be classified into two types: those based on protein-protein (or pocket-pocket) comparison, and those that compare a target pocket directly to ligands. Recently, our group proposed two computational binding ligand prediction methods: Patch-Surfer, a pocket-pocket comparison method, and PL-PatchSurfer, which compares a pocket to ligand molecules. The two programs apply surface patch-based descriptions to calculate similarity or complementarity between molecules. A surface patch is characterized by physicochemical properties such as shape, hydrophobicity, and electrostatic potential. These properties on the surface are represented using three-dimensional Zernike descriptors (3DZD), which are based on a series expansion of a three-dimensional function. Utilizing 3DZD to describe physicochemical properties has two main advantages: (1) rotational invariance and (2) fast comparison. Here, we introduce Patch-Surfer and PL-PatchSurfer with an emphasis on PL-PatchSurfer, which was developed more recently. Illustrative examples of PL-PatchSurfer performance on binding ligand prediction as well as virtual drug screening are also provided. Copyright © 2015 Elsevier Inc. All rights reserved.
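
The patch-comparison idea can be illustrated with a minimal sketch: each patch is reduced to a fixed-length, rotation-invariant descriptor vector, and candidates are ranked by Euclidean distance to a query. The random vectors below stand in for real 3D Zernike descriptors; the 121-component length and the pocket names are assumptions for illustration.

```python
import numpy as np

# Minimal sketch of descriptor-based patch comparison in the spirit of
# Patch-Surfer: random toy vectors stand in for real 3D Zernike
# descriptors, and pockets are ranked by Euclidean distance to a query.
rng = np.random.default_rng(1)

def patch_distance(d1, d2):
    """Euclidean distance between two descriptor vectors."""
    return float(np.linalg.norm(d1 - d2))

query = rng.random(121)                       # assumed descriptor length
library = {f"pocket_{i}": rng.random(121) for i in range(5)}
library["pocket_match"] = query + 0.01 * rng.standard_normal(121)

ranked = sorted(library, key=lambda name: patch_distance(query, library[name]))
print(ranked[0])   # the near-duplicate pocket ranks first
```

Because the descriptors are rotation-invariant, this comparison needs no structural superposition, which is what makes the ranking fast.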

  13. Submolecular Gates Self-Assemble for Hot-Electron Transfer in Proteins.

    PubMed

    Filip-Granit, Neta; Goldberg, Eran; Samish, Ilan; Ashur, Idan; van der Boom, Milko E; Cohen, Hagai; Scherz, Avigdor

    2017-07-27

    Redox reactions play key roles in fundamental biological processes. The related spatial organization of donors and acceptors is assumed to undergo evolutionary optimization facilitating charge mobilization within the relevant biological context. Experimental information from submolecular functional sites is needed to understand the organization strategies and driving forces involved in the self-development of structure-function relationships. Here we exploit chemically resolved electrical measurements (CREM) to probe the atom-specific electrostatic potentials (ESPs) in artificial arrays of bacteriochlorophyll (BChl) derivatives that provide model systems for photoexcited (hot) electron donation and withdrawal. On the basis of computations we show that native BChl's in the photosynthetic reaction center (RC) self-assemble at their ground-state as aligned gates for functional charge transfer. The combined computational and experimental results further reveal how site-specific polarizability perpendicular to the molecular plane enhances the hot-electron transport. Maximal transport efficiency is predicted for a specific, ∼5 Å, distance above the center of the metalized BChl, which is in remarkably close agreement with the distance and mutual orientation of corresponding native cofactors. These findings provide new metrics and guidelines for analysis of biological redox centers and for designing charge mobilizing machines such as artificial photosynthesis.

  14. Quantifying electron transfer reactions in biological systems: what interactions play the major role?

    NASA Astrophysics Data System (ADS)

    Sjulstok, Emil; Olsen, Jógvan Magnus Haugaard; Solov'Yov, Ilia A.

    2015-12-01

    Various biological processes involve the conversion of energy into forms that are usable for chemical transformations and are quantum mechanical in nature. Such processes involve light absorption, the formation of excited electronic states, excitation energy transfer, and electron and proton tunnelling, which occur, for example, in photosynthesis, cellular respiration, DNA repair, and possibly magnetic field sensing. Quantum biology uses computation to model biological interactions in light of quantum mechanical effects and has developed primarily over the past decade as a result of convergence between quantum physics and biology. In this paper we consider electron transfer in biological processes from a theoretical viewpoint, namely in terms of quantum mechanical and semi-classical models. We systematically characterize the interactions between the moving electron and its biological environment to deduce the driving force for the electron transfer reaction and to establish those interactions that play the major role in propelling the electron. The suggested approach is seen as a general recipe to treat electron transfer events in biological systems computationally, and we utilize it to describe specifically the electron transfer reactions in Arabidopsis thaliana cryptochrome, a signaling photoreceptor protein that has recently attracted attention due to its possible function as a biological magnetoreceptor.
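
As a concrete instance of the semi-classical models the abstract refers to, the Marcus expression gives an electron transfer rate from an electronic coupling, a reorganization energy, and a driving force. The numerical parameter values below are hypothetical illustrations, not quantities from the cryptochrome study.

```python
import math

# Semi-classical Marcus rate for electron transfer. All parameter
# values are hypothetical illustrations, not from the cited study.
HBAR = 6.582119569e-16   # reduced Planck constant, eV*s
KB_T = 0.025852          # thermal energy at ~300 K, eV

def marcus_rate(h_da, lam, dg):
    """Rate in 1/s from coupling h_da, reorganization energy lam,
    and driving force dg (all in eV)."""
    prefactor = (2.0 * math.pi / HBAR) * h_da ** 2
    franck_condon = math.exp(-(dg + lam) ** 2 / (4.0 * lam * KB_T)) \
        / math.sqrt(4.0 * math.pi * lam * KB_T)
    return prefactor * franck_condon

rate = marcus_rate(h_da=1e-3, lam=0.8, dg=-0.4)
print(f"k_ET ~ {rate:.2e} s^-1")
```

At dg = -lam the exponent vanishes (the activationless regime), giving the maximal rate for a given coupling.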

  15. An algebraic interpretation of PSP composition.

    PubMed

    Vaucher, G

    1998-01-01

    The introduction of time in artificial neurons is a delicate problem on which many groups are working. Our approach combines some properties of biological models and the algebraic properties of McCulloch and Pitts artificial neuron (AN) (McCulloch and Pitts, 1943) to produce a new model which links both characteristics. In this extended artificial neuron, postsynaptic potentials (PSPs) are considered as numerical elements, having two degrees of freedom, on which the neuron computes operations. Modelled in this manner, a group of neurons can be seen as a computer with an asynchronous architecture. To formalize the functioning of this computer, we propose an algebra of impulses. This approach might also be interesting in the modelling of the passive electrical properties in some biological neurons.

  16. ''After the Genome 5 Conference'' to be held October 6-10, 1999 in Jackson Hole, Wyoming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roger Brent

    OAK B139 The postgenomic era is arriving faster than anyone had imagined--sometime during 2000 we'll have a large fraction of the human genome sequence. Heretofore, our understanding of function has come from non-industrial experiments whose conclusions were largely framed in human language. The advent of large amounts of sequence data, and of "functional genomic" data types such as mRNA expression data, has changed this picture. These data share the feature that individual observations and measurements are typically relatively low value adding. Such data is now being generated so rapidly that the amount of information contained in it will surpass the amount of biological information collected by traditional means. It is tantalizing to envision using genomic information to create a quantitative biology with a very strong data component. Unfortunately, we are very early in our understanding of how to "compute on" genomic information so as to extract biological knowledge from it. In fact, some current efforts to come to grips with genomic information often resemble a computer-savvy library science, where the most important issues concern categories, classification schemes, and information retrieval. When exploring new libraries, a measure of cataloging and inventory is surely inevitable. However, at some point we will need to move from library science to scholarship. We would like to achieve a quantitative and predictive understanding of biological function. We realize that making the bridge from knowledge of systems to the sets of abstractions that constitute computable entities is not easy. The After the Genome meetings were started in 1995 to help the biological community think about and prepare for the changes in biological research in the face of the oncoming flow of genomic information.
The term "After the Genome" refers to a future in which complete inventories of the gene products of entire organisms become available. Since then, many more biologists have become cognizant of the issues raised by this future, and, in response, the organizers intend to distinguish this meeting from other "postgenomic" meetings by bringing together intellectuals from subject fields far outside of conventional biology with the expectation that this will help focus thinking beyond the immediate future. To this end, After the Genome 5 will bring together industrial and university researchers, including: (1) physicists, chemists, and engineers who are devising and using new data-gathering techniques, such as microarrays, protein mass spectrometry, and single-molecule measurements; (2) computer scientists from fields as diverse as geology and wargames, who have experience moving from broad knowledge of systems to analysis that results in models and simulations; (3) neurobiologists and computer scientists who combine physiological experimentation and computer modeling to understand single cells and small networks of cells; (4) biologists who are trying to model genetic networks; (5) all-around visionary thinkers; and (6) policy makers, to suggest how to convey any good ideas to organizations that can commit resources to them.

  18. A Computational Framework for Bioimaging Simulation

    PubMed Central

    Watabe, Masaki; Arjunan, Satya N. V.; Fukushima, Seiya; Iwamoto, Kazunari; Kozuka, Jun; Matsuoka, Satomi; Shindo, Yuki; Ueda, Masahiro; Takahashi, Koichi

    2015-01-01

    Using bioimaging technology, biologists have attempted to identify and document analytical interpretations that underlie biological phenomena in biological cells. Theoretical biology aims at distilling those interpretations into knowledge in the mathematical form of biochemical reaction networks and understanding how higher-level functions emerge from the combined action of biomolecules. However, formidable challenges remain in bridging the gap between bioimaging and mathematical modeling. Generally, measurements using fluorescence microscopy systems are influenced by systematic effects that arise from the stochastic nature of biological cells, the imaging apparatus, and optical physics. Such systematic effects are always present in all bioimaging systems and hinder quantitative comparison between the cell model and bioimages. Computational tools for such a comparison are still unavailable. Thus, in this work, we present a computational framework for handling the parameters of the cell models and the optical physics governing bioimaging systems. Simulation using this framework can generate digital images of cell simulation results after accounting for the systematic effects. We then demonstrate that such a framework enables comparison at the level of photon-counting units. PMID:26147508
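
A minimal forward model of the kind described can be sketched by blurring a ground-truth fluorophore map with an optical point-spread function and then sampling Poisson photon counts, so that simulated and real images can be compared in photon-counting units. The Gaussian PSF width and photon budget below are illustrative assumptions.

```python
import numpy as np

# Sketch of a bioimaging forward model: blur a ground-truth emitter map
# with a Gaussian PSF, then sample photon counts with Poisson shot
# noise. PSF width and photon budget are illustrative assumptions.
rng = np.random.default_rng(3)

truth = np.zeros((32, 32))
truth[16, 16] = 1.0                           # a single point emitter

x = np.arange(-8, 9)
psf1d = np.exp(-x ** 2 / (2.0 * 2.0 ** 2))    # separable Gaussian PSF
psf1d /= psf1d.sum()
blur = np.apply_along_axis(lambda v: np.convolve(v, psf1d, "same"), 0, truth)
blur = np.apply_along_axis(lambda v: np.convolve(v, psf1d, "same"), 1, blur)

expected = 1000.0 * blur                      # mean photons per pixel
image = rng.poisson(expected)                 # simulated digital image
print("total photons:", int(image.sum()))
```

The Poisson step is what makes the comparison "at the level of photon-counting units": every simulated pixel value is an integer photon count with shot-noise statistics.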

  19. Massive-scale gene co-expression network construction and robustness testing using random matrix theory.

    PubMed

    Gibson, Scott M; Ficklin, Stephen P; Isaacson, Sven; Luo, Feng; Feltus, Frank A; Smith, Melissa C

    2013-01-01

    The study of gene relationships and their effect on biological function and phenotype is a focal point in systems biology. Gene co-expression networks built using microarray expression profiles are one technique for discovering and interpreting gene relationships. A knowledge-independent thresholding technique, such as Random Matrix Theory (RMT), is useful for identifying meaningful relationships. Highly connected genes in the thresholded network are then grouped into modules that provide insight into their collective functionality. While it has been shown that co-expression networks are biologically relevant, it has not been determined to what extent any given network is functionally robust given perturbations in the input sample set. Such a test requires hundreds of networks, and hence a tool to rapidly construct them. To examine the functional robustness of networks with varying input, we enhanced an existing RMT implementation for improved scalability and tested the functional robustness of networks for human (Homo sapiens), rice (Oryza sativa) and budding yeast (Saccharomyces cerevisiae). We demonstrate a dramatic decrease in network construction time and computational requirements, and show that despite some variation in global properties between networks, functional similarity remains high. Moreover, the biological function captured by co-expression networks thresholded by RMT is highly robust.
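
The construction step can be sketched as thresholding a gene-gene Pearson correlation matrix into an adjacency matrix. In the RMT method the cutoff is chosen where the eigenvalue spacing statistics of the correlation matrix change character; the fixed 0.7 cutoff and toy data below are assumptions for illustration.

```python
import numpy as np

# Sketch of co-expression network construction: Pearson correlations
# between gene expression profiles, hard-thresholded into an adjacency
# matrix. The RMT method selects the threshold from eigenvalue spacing
# statistics; here a fixed cutoff of 0.7 is assumed for illustration.
rng = np.random.default_rng(2)
expr = rng.standard_normal((50, 30))          # 50 genes x 30 samples (toy)
expr[:10] += 2.0 * rng.standard_normal(30)    # 10 co-regulated genes

corr = np.corrcoef(expr)                      # 50 x 50 correlation matrix
adj = (np.abs(corr) >= 0.7) & ~np.eye(50, dtype=bool)
degrees = adj.sum(axis=1)
print("edges:", int(adj.sum() // 2), "max degree:", int(degrees.max()))
```

The planted co-regulated block ends up densely connected while the background genes stay largely isolated, which is the module structure the paper's pipeline then interprets functionally.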

  20. IBIS integrated biological imaging system: electron micrograph image-processing software running on Unix workstations.

    PubMed

    Flifla, M J; Garreau, M; Rolland, J P; Coatrieux, J L; Thomas, D

    1992-12-01

    'IBIS' is a set of computer programs concerned with the processing of electron micrographs, with particular emphasis on the requirements for structural analyses of biological macromolecules. The software is written in FORTRAN 77 and runs on Unix workstations. A description of the various functions and the implementation mode is given. Some examples illustrate the user interface.

  1. A toolbox for discrete modelling of cell signalling dynamics.

    PubMed

    Paterson, Yasmin Z; Shorthouse, David; Pleijzier, Markus W; Piterman, Nir; Bendtsen, Claus; Hall, Benjamin A; Fisher, Jasmin

    2018-06-18

    In an age where the volume of data regarding biological systems exceeds our ability to analyse it, many researchers are looking towards systems biology and computational modelling to help unravel the complexities of gene and protein regulatory networks. In particular, the use of discrete modelling allows generation of signalling networks in the absence of the full quantitative descriptions of systems that are necessary for ordinary differential equation (ODE) models. In order to make such techniques more accessible to mainstream researchers, tools such as the BioModelAnalyzer (BMA) have been developed to provide a user-friendly graphical interface for discrete modelling of biological systems. Here we use the BMA to build a library of discrete target functions of known canonical molecular interactions, translated from ODEs. We then show that these BMA target functions can be used to reconstruct complex networks, which can correctly predict many known genetic perturbations. This new library supports the accessibility ethos behind the creation of BMA, providing a toolbox for the construction of complex cell signalling models without the need for extensive experience in computer programming or mathematical modelling, and allows for the construction and simulation of complex biological systems with only small amounts of quantitative data.
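
The discrete target-function idea can be illustrated with a toy model: each variable holds a small integer level and moves one step per update towards the value its target function prescribes. The three-node activation/inhibition motif below is an assumption for illustration, not one of the BMA library's translated motifs.

```python
# Toy sketch of discrete target-function modelling: each variable holds
# an integer level in 0..MAX_LEVEL and moves one step per update towards
# the value its target function prescribes. The three-node motif is an
# illustrative assumption, not a motif from the BMA library.
MAX_LEVEL = 2

def step(state, targets):
    """Synchronously move every variable one unit towards its target."""
    new_state = {}
    for var, target_fn in targets.items():
        target = max(0, min(MAX_LEVEL, target_fn(state)))
        current = state[var]
        new_state[var] = current + (target > current) - (target < current)
    return new_state

targets = {
    "A": lambda s: MAX_LEVEL,            # constitutive input
    "B": lambda s: s["A"],               # activated by A
    "C": lambda s: MAX_LEVEL - s["B"],   # inhibited by B
}

state = {"A": 0, "B": 0, "C": 0}
for _ in range(6):
    state = step(state, targets)
print(state)   # settles at the fixed point A=2, B=2, C=0
```

No rate constants are needed, which is exactly the appeal of discrete modelling when quantitative data for an ODE model are unavailable.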

  2. Lateral organization, bilayer asymmetry, and inter-leaflet coupling of biological membranes

    DOE PAGES

    Smith, Jeremy C.; Cheng, Xiaolin; Nickels, Jonathan D.

    2015-07-29

    Understanding of cell membrane organization has evolved significantly from the classic fluid mosaic model. It is now recognized that biological membranes are highly organized structures, with differences in lipid composition between inner and outer leaflets and in lateral structures within the bilayer plane, known as lipid rafts. These organizing principles are important for protein localization and function as well as cellular signaling. However, the mechanisms and biophysical basis of lipid raft formation, structure, dynamics and function are not clearly understood. One key question, which we focus on in this review, is how lateral organization and leaflet compositional asymmetry are coupled. Detailed information elucidating this question has been sparse because of the small size and transient nature of rafts and the experimental challenges in constructing asymmetric bilayers. Resolving this mystery will require advances in both experimentation and modeling. We discuss here the preparation of model systems along with experimental and computational approaches that have been applied in efforts to address this key question in membrane biology. Furthermore, we seek to place recent and future advances in experimental and computational techniques in context, providing insight into in-plane and transverse organization of biological membranes.

  3. Discovering the intelligence in molecular biology.

    PubMed

    Uberbacher, E

    1995-12-01

    The Third International Conference on Intelligent Systems in Molecular Biology was truly an outstanding event. Computational methods in molecular biology have reached a new level of maturity and utility, resulting in many high-impact applications. The success of this meeting bodes well for the rapid and continuing development of computational methods, intelligent systems and information-based approaches for the biosciences. The basic technology, originally most often applied to 'feasibility' problems, is now dealing effectively with the most difficult real-world problems. Significant progress has been made in understanding protein-structure information, structural classification, and how functional information and the relevant features of active-site geometry can be gleaned from structures by automated computational approaches. The value and limits of homology-based methods, and the ability to classify proteins by structure in the absence of homology, have reached a new level of sophistication. New methods for covariation analysis in the folding of large structures such as RNAs have shown remarkably good results, indicating the long-term potential to understand very complicated molecules and multimolecular complexes using computational means. Novel methods, such as HMMs, context-free grammars and the uses of mutual information theory, have taken center stage as highly valuable tools in our quest to represent and characterize biological information. A focus on creative uses of intelligent systems technologies and the trend toward biological application will undoubtedly continue and grow at the 1996 ISMB meeting in St Louis.

  4. Year 2 Report: Protein Function Prediction Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, C E

    2012-04-27

    Upon completion of our second year of development in a 3-year development cycle, we have completed a prototype protein structure-function annotation and function prediction system: the Protein Function Prediction (PFP) platform (v.0.5). We have met our milestones for Years 1 and 2 and are positioned to continue development in completion of our original statement of work, or a reasonable modification thereof, in service to DTRA programs involved in diagnostics and medical countermeasures research and development. The PFP platform is a multi-scale computational modeling system for protein structure-function annotation and function prediction. As of this writing, PFP is the only existing fully automated, high-throughput, multi-scale modeling, whole-proteome annotation platform, and represents a significant advance in the field of genome annotation (Fig. 1). PFP modules perform protein functional annotations at the sequence, systems biology, protein structure, and atomistic levels of biological complexity (Fig. 2). Because these approaches provide orthogonal means of characterizing proteins and suggesting protein function, PFP processing maximizes the protein functional information that can currently be gained by computational means. Comprehensive annotation of pathogen genomes is essential for bio-defense applications in pathogen characterization, threat assessment, and medical countermeasure design and development, in that it can short-cut the time and effort required to select and characterize protein biomarkers.

  5. Dynamics of biological systems: role of systems biology in medical research.

    PubMed

    Assmus, Heike E; Herwig, Ralf; Cho, Kwang-Hyun; Wolkenhauer, Olaf

    2006-11-01

    Cellular systems are networks of interacting components that change with time in response to external and internal events. Studying the dynamic behavior of these networks is the basis for an understanding of cellular functions and disease mechanisms. Quantitative time-series data leading to meaningful models can improve our knowledge of human physiology in health and disease, and aid the search for earlier diagnoses, better therapies and a healthier life. The advent of systems biology is about to take the leap into clinical research and medical applications. This review emphasizes the importance of a dynamic view and understanding of cell function. We discuss the potential for computer-aided mathematical modeling of biological systems in medical research with examples from some of the major therapeutic areas: cancer, cardiovascular, diabetic and neurodegenerative medicine.

  6. [Application of microelectronics CAD tools to synthetic biology].

    PubMed

    Madec, Morgan; Haiech, Jacques; Rosati, Élise; Rezgui, Abir; Gendrault, Yves; Lallement, Christophe

    2017-02-01

    Synthetic biology is an emerging science that aims to create new biological functions that do not exist in nature, based on the knowledge acquired in life science over the last century. Since the beginning of this century, several projects in synthetic biology have emerged. The complexity of the artificial bio-functions developed so far is relatively low, so that empirical design methods could be used in the design process. Nevertheless, with the increasing complexity of biological circuits, this is no longer the case, and a large number of computer-aided design tools have been developed in the past few years. These tools include languages for the behavioral description and mathematical modelling of biological systems, simulators at different levels of abstraction, libraries of biological devices, and circuit design automation algorithms. All of these tools already exist in other fields of engineering sciences, particularly in microelectronics. This is the approach that is put forward in this paper. © 2017 médecine/sciences – Inserm.

  7. Methods of information geometry in computational system biology (consistency between chemical and biological evolution).

    PubMed

    Astakhov, Vadim

    2009-01-01

    Interest in the simulation of large-scale metabolic networks, species development, and the genesis of various diseases requires new simulation techniques that accommodate the high complexity of realistic biological networks. Information geometry and topological formalisms are proposed to analyze information processes. We analyze the complexity of large-scale biological networks as well as the transition of system functionality due to modifications in the system architecture, system environment, and system components. A dynamic core model is developed; the term dynamic core is used to define a set of causally related network functions. Delocalization of the dynamic core model provides a mathematical formalism to analyze the migration of specific functions in biosystems that undergo structural transitions induced by the environment; the term delocalization is used to describe these processes of migration. We constructed a holographic model with self-poetic dynamic cores which preserves functional properties under those transitions. Topological constraints such as Ricci flow and Pfaff dimension were found for statistical manifolds which represent biological networks. These constraints can provide insight into processes of degeneration and recovery which take place in large-scale networks. We would like to suggest that therapies that can effectively implement the estimated constraints will successfully adjust biological systems and recover altered functionality. We also mathematically formulate the hypothesis that there is a direct consistency between biological and chemical evolution: any set of causal relations within a biological network has a dual reimplementation in the chemistry of the system environment.

  8. All biology is computational biology.

    PubMed

    Markowetz, Florian

    2017-03-01

    Here, I argue that computational thinking and techniques are so central to the quest of understanding life that today all biology is computational biology. Computational biology brings order into our understanding of life, it makes biological concepts rigorous and testable, and it provides a reference map that holds together individual insights. The next modern synthesis in biology will be driven by mathematical, statistical, and computational methods being absorbed into mainstream biological training, turning biology into a quantitative science.

  9. Computational approaches for rational design of proteins with novel functionalities

    PubMed Central

    Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul

    2012-01-01

    Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein design has recently succeeded in engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes. PMID:24688643

  10. A methodology for global-sensitivity analysis of time-dependent outputs in systems biology modelling.

    PubMed

    Sumner, T; Shephard, E; Bogle, I D L

    2012-09-07

    One of the main challenges in the development of mathematical and computational models of biological systems is the precise estimation of parameter values. Understanding the effects of uncertainties in parameter values on model behaviour is crucial to the successful use of these models. Global sensitivity analysis (SA) can be used to quantify the variability in model predictions resulting from the uncertainty in multiple parameters and to shed light on the biological mechanisms driving system behaviour. We present a new methodology for global SA in systems biology which is computationally efficient and can be used to identify the key parameters and their interactions which drive the dynamic behaviour of a complex biological model. The approach combines functional principal component analysis with established global SA techniques. The methodology is applied to a model of the insulin signalling pathway, defects in which are a major cause of type 2 diabetes, and a number of key features of the system are identified.
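
    The combination described above can be sketched abstractly: project sampled time courses onto principal components, then attribute the variance of the leading score to each parameter. The toy two-parameter model and the squared-correlation index below are illustrative assumptions, not the paper's insulin-signalling model or its exact SA statistic.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def toy_model(k1, k2, t):
        # Hypothetical two-parameter dynamic response standing in for a
        # signalling-pathway output.
        return k1 * np.exp(-k2 * t)

    t = np.linspace(0.0, 10.0, 50)
    n = 2000
    params = rng.uniform([0.5, 0.1], [2.0, 1.0], size=(n, 2))   # k1, k2 samples
    Y = np.array([toy_model(k1, k2, t) for k1, k2 in params])   # n x 50 outputs

    # Functional PCA via SVD of the centred output matrix.
    Yc = Y - Y.mean(axis=0)
    U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
    scores = U * S                       # per-sample scores on each component
    explained = S**2 / np.sum(S**2)      # variance share per component

    # Crude first-order sensitivity of the leading score to each parameter:
    # squared correlation, a stand-in for a proper variance-based index.
    for j, name in enumerate(["k1", "k2"]):
        r = np.corrcoef(params[:, j], scores[:, 0])[0, 1]
        print(f"{name}: share of PC1 variance ~ {r**2:.2f}")
    ```

    In this setup, a handful of components typically capture most of the output variability, so the sensitivity analysis runs on a few scalar scores instead of every time point.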

  11. System-based identification of toxicity pathways associated with multi-walled carbon nanotube-induced pathological responses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder-Talkington, Brandi N.; Dymacek, Julian; Mary Babb Randolph Cancer Center, West Virginia University, Morgantown, WV 26506-9300

    2013-10-15

    The fibrous shape and biopersistence of multi-walled carbon nanotubes (MWCNT) have raised concern over their potential toxicity after pulmonary exposure. As in vivo exposure to MWCNT produced a transient inflammatory and progressive fibrotic response, this study sought to identify significant biological processes associated with lung inflammation and fibrosis pathology data, based upon whole-genome mRNA expression, bronchoalveolar lavage scores, and morphometric analysis from C57BL/6J mice exposed by pharyngeal aspiration to 0, 10, 20, 40, or 80 μg MWCNT at 1, 7, 28, or 56 days post-exposure. Using a novel computational model employing non-negative matrix factorization and Markov chain Monte Carlo simulation, significant biological processes with expression similar to MWCNT-induced lung inflammation and fibrosis pathology data in mice were identified. A subset of genes in these processes was determined to be functionally related to either fibrosis or inflammation by Ingenuity Pathway Analysis and was used to determine potential significant signaling cascades. Two genes determined to be functionally related to inflammation and fibrosis, vascular endothelial growth factor A (vegfa) and C-C motif chemokine 2 (ccl2), were confirmed by in vitro studies of mRNA and protein expression in small airway epithelial cells exposed to MWCNT as concordant with in vivo expression. This study identified that the novel computational model was sufficient to determine biological processes strongly associated with the pathology of lung inflammation and fibrosis and could identify potential toxicity signaling pathways and mechanisms of MWCNT exposure which could be used for future animal studies to support human risk assessment and intervention efforts. - Highlights: • A novel computational model identified toxicity pathways matching in vivo pathology. • Systematic identification of MWCNT-induced biological processes in mouse lungs. • MWCNT-induced functional networks of lung inflammation and fibrosis were revealed. • Two functional, representative genes, ccl2 and vegfa, were validated in vitro.
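
    The factorization step of such a pipeline can be illustrated generically. The multiplicative-update NMF sketch below (Lee-Seung style) is a stand-in for the paper's NMF + MCMC model, with a random matrix in place of real expression data; all dimensions and names are illustrative.

    ```python
    import numpy as np

    # Generic non-negative matrix factorization X ~ W @ H by multiplicative
    # updates; W and H stay non-negative by construction.
    rng = np.random.default_rng(1)
    X = rng.random((20, 12))          # stand-in for a genes x conditions matrix
    k = 3                             # number of latent patterns
    W = rng.random((20, k))
    H = rng.random((k, 12))
    eps = 1e-9                        # guards against division by zero

    for _ in range(500):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)

    err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
    print(f"relative reconstruction error: {err:.3f}")
    ```

    In an expression-analysis setting, the rows of H would be interpreted as temporal or dose-response patterns and the columns of W as gene loadings on those patterns.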

  12. Rosen's (M,R) system in Unified Modelling Language.

    PubMed

    Zhang, Ling; Williams, Richard A; Gatherer, Derek

    2016-01-01

    Robert Rosen's (M,R) system is an abstract biological network architecture that is allegedly non-computable on a Turing machine. If (M,R) is truly non-computable, there are serious implications for the modelling of large biological networks in computer software. A body of work has now accumulated addressing Rosen's claim concerning (M,R) by attempting to instantiate it in various software systems. However, a conclusive refutation has remained elusive, principally since none of the attempts to date have unambiguously avoided the critique that they have altered the properties of (M,R) in the coding process, producing merely approximate simulations of (M,R) rather than true computational models. In this paper, we use the Unified Modelling Language (UML), a diagrammatic notation standard, to express (M,R) as a system of objects having attributes, functions and relations. We believe that this instantiates (M,R) in such a way that none of the original properties of the system are corrupted in the process. Crucially, we demonstrate that (M,R) as classically represented in the relational biology literature is implicitly a UML communication diagram. Furthermore, since UML is formally compatible with object-oriented computing languages, instantiation of (M,R) in UML strongly implies its computability in object-oriented coding languages. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Logic integration of mRNA signals by an RNAi-based molecular computer

    PubMed Central

    Xie, Zhen; Liu, Siyuan John; Bleris, Leonidas; Benenson, Yaakov

    2010-01-01

    Synthetic in vivo molecular ‘computers’ could rewire biological processes by establishing programmable, non-native pathways between molecular signals and biological responses. Multiple molecular computer prototypes have been shown to work in simple buffered solutions. Many of those prototypes were made of DNA strands and performed computations using cycles of annealing-digestion or strand displacement. We have previously introduced RNA interference (RNAi)-based computing as a way of implementing complex molecular logic in vivo. Because it also relies on nucleic acids for its operation, RNAi computing could benefit from the tools developed for DNA systems. However, these tools must be harnessed to produce bioactive components and be adapted for harsh operating environments that reflect in vivo conditions. In a step toward this goal, we report the construction and implementation of biosensors that ‘transduce’ mRNA levels into bioactive, small interfering RNA molecules via RNA strand exchange in a cell-free Drosophila embryo lysate, a step beyond simple buffered environments. We further integrate the sensors with our RNAi ‘computational’ module to evaluate two-input logic functions on mRNA concentrations. Our results show how RNA strand exchange can expand the utility of RNAi computing and point toward the possibility of using strand exchange in a native biological setting. PMID:20194121
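
    The two-input logic evaluated on mRNA concentrations can be sketched abstractly: each sensor converts an mRNA level into on/off siRNA activity, and the RNAi module combines them. The threshold and function names below are hypothetical; the paper's implementation is molecular, not programmatic.

    ```python
    THRESHOLD = 10.0  # hypothetical concentration units

    def sensor(mrna_level: float) -> bool:
        """Sensor 'transduces' an mRNA level into bioactive siRNA (on/off)."""
        return mrna_level > THRESHOLD

    def rnai_and(level_a: float, level_b: float) -> bool:
        """Output gene is silenced only when both input mRNAs are present."""
        return sensor(level_a) and sensor(level_b)

    print(rnai_and(25.0, 3.0), rnai_and(25.0, 30.0))  # -> False True
    ```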

  14. A cognitive computational model inspired by the immune system response.

    PubMed

    Abdo Abd Al-Hady, Mohamed; Badr, Amr Ahmed; Mostafa, Mostafa Abd Al-Azim

    2014-01-01

    The immune system has a cognitive ability to differentiate between healthy and unhealthy cells. The immune system response (ISR) is stimulated by a disorder in a temporary fuzzy state that oscillates between the healthy and unhealthy states. However, modeling the immune system is an enormous challenge; the paper introduces an extensive summary of how the immune system response functions, as an overview of a complex topic, to present the immune system as a cognitive intelligent agent. The homogeneity and perfection of the natural immune system have always stood out as the sought-after model we attempted to imitate while building our proposed model of cognitive architecture. The paper divides the ISR into four logical phases: setting a computational architectural diagram for each phase, proceeding from functional perspectives (input, process, and output), and their consequences. The proposed architecture components are defined by matching biological operations with computational functions and hence with the framework of the paper. On the other hand, the architecture focuses on the interoperability of the main theoretical immunological perspectives (classic, cognitive, and danger theory) as related to computer science terminology. The paper presents a descriptive model of the immune system, to figure out the nature of the response, deemed to be intrinsic for building a hybrid computational model based on a cognitive intelligent-agent perspective and inspired by natural biology. To that end, this paper highlights the ISR phases as applied to a case study on hepatitis C virus, while illustrating our proposed architecture perspective.

  15. A Cognitive Computational Model Inspired by the Immune System Response

    PubMed Central

    Abdo Abd Al-Hady, Mohamed; Badr, Amr Ahmed; Mostafa, Mostafa Abd Al-Azim

    2014-01-01

    The immune system has a cognitive ability to differentiate between healthy and unhealthy cells. The immune system response (ISR) is stimulated by a disorder in a temporary fuzzy state that oscillates between the healthy and unhealthy states. However, modeling the immune system is an enormous challenge; the paper introduces an extensive summary of how the immune system response functions, as an overview of a complex topic, to present the immune system as a cognitive intelligent agent. The homogeneity and perfection of the natural immune system have always stood out as the sought-after model we attempted to imitate while building our proposed model of cognitive architecture. The paper divides the ISR into four logical phases: setting a computational architectural diagram for each phase, proceeding from functional perspectives (input, process, and output), and their consequences. The proposed architecture components are defined by matching biological operations with computational functions and hence with the framework of the paper. On the other hand, the architecture focuses on the interoperability of the main theoretical immunological perspectives (classic, cognitive, and danger theory) as related to computer science terminology. The paper presents a descriptive model of the immune system, to figure out the nature of the response, deemed to be intrinsic for building a hybrid computational model based on a cognitive intelligent-agent perspective and inspired by natural biology. To that end, this paper highlights the ISR phases as applied to a case study on hepatitis C virus, while illustrating our proposed architecture perspective. PMID:25003131

  16. An Introduction to Programming for Bioscientists: A Python-Based Primer

    PubMed Central

    Mura, Cameron

    2016-01-01

    Computing has revolutionized the biological sciences over the past several decades, such that virtually all contemporary research in molecular biology, biochemistry, and other biosciences utilizes computer programs. The computational advances have come on many fronts, spurred by fundamental developments in hardware, software, and algorithms. These advances have influenced, and even engendered, a phenomenal array of bioscience fields, including molecular evolution and bioinformatics; genome-, proteome-, transcriptome- and metabolome-wide experimental studies; structural genomics; and atomistic simulations of cellular-scale molecular assemblies as large as ribosomes and intact viruses. In short, much of post-genomic biology is increasingly becoming a form of computational biology. The ability to design and write computer programs is among the most indispensable skills that a modern researcher can cultivate. Python has become a popular programming language in the biosciences, largely because (i) its straightforward semantics and clean syntax make it a readily accessible first language; (ii) it is expressive and well-suited to object-oriented programming, as well as other modern paradigms; and (iii) the many available libraries and third-party toolkits extend the functionality of the core language into virtually every biological domain (sequence and structure analyses, phylogenomics, workflow management systems, etc.). This primer offers a basic introduction to coding, via Python, and it includes concrete examples and exercises to illustrate the language’s usage and capabilities; the main text culminates with a final project in structural bioinformatics. A suite of Supplemental Chapters is also provided. Starting with basic concepts, such as that of a “variable,” the Chapters methodically advance the reader to the point of writing a graphical user interface to compute the Hamming distance between two DNA sequences. PMID:27271528

  17. An Introduction to Programming for Bioscientists: A Python-Based Primer.

    PubMed

    Ekmekci, Berk; McAnany, Charles E; Mura, Cameron

    2016-06-01

    Computing has revolutionized the biological sciences over the past several decades, such that virtually all contemporary research in molecular biology, biochemistry, and other biosciences utilizes computer programs. The computational advances have come on many fronts, spurred by fundamental developments in hardware, software, and algorithms. These advances have influenced, and even engendered, a phenomenal array of bioscience fields, including molecular evolution and bioinformatics; genome-, proteome-, transcriptome- and metabolome-wide experimental studies; structural genomics; and atomistic simulations of cellular-scale molecular assemblies as large as ribosomes and intact viruses. In short, much of post-genomic biology is increasingly becoming a form of computational biology. The ability to design and write computer programs is among the most indispensable skills that a modern researcher can cultivate. Python has become a popular programming language in the biosciences, largely because (i) its straightforward semantics and clean syntax make it a readily accessible first language; (ii) it is expressive and well-suited to object-oriented programming, as well as other modern paradigms; and (iii) the many available libraries and third-party toolkits extend the functionality of the core language into virtually every biological domain (sequence and structure analyses, phylogenomics, workflow management systems, etc.). This primer offers a basic introduction to coding, via Python, and it includes concrete examples and exercises to illustrate the language's usage and capabilities; the main text culminates with a final project in structural bioinformatics. A suite of Supplemental Chapters is also provided. Starting with basic concepts, such as that of a "variable," the Chapters methodically advance the reader to the point of writing a graphical user interface to compute the Hamming distance between two DNA sequences.
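
    The capstone computation mentioned above, the Hamming distance between two equal-length DNA sequences, can be written in a few lines of Python; this is a minimal stand-alone version of that computation, without the graphical user interface the primer builds.

    ```python
    def hamming(seq1: str, seq2: str) -> int:
        """Number of positions at which two equal-length DNA sequences differ."""
        if len(seq1) != len(seq2):
            raise ValueError("sequences must be the same length")
        return sum(a != b for a, b in zip(seq1, seq2))

    print(hamming("GATTACA", "GACTATA"))  # -> 2
    ```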

  18. Developing PFC representations using reinforcement learning

    PubMed Central

    Reynolds, Jeremy R.; O'Reilly, Randall C.

    2009-01-01

    From both functional and biological considerations, it is widely believed that action production, planning, and goal-oriented behaviors supported by the frontal cortex are organized hierarchically (Fuster, 1990; Koechlin, Ody, & Kouneiher, 2003; Miller, Galanter, & Pribram, 1960). However, the nature of the different levels of the hierarchy remains unclear, and little attention has been paid to the origins of such a hierarchy. We address these issues through biologically inspired computational models that develop representations through reinforcement learning. We explore several different factors in these models that might plausibly give rise to a hierarchical organization of representations within the PFC, including an initial connectivity hierarchy within PFC, a hierarchical set of connections between PFC and subcortical structures controlling it, and differential synaptic plasticity schedules. Simulation results indicate that architectural constraints contribute to the segregation of different types of representations, and that this segregation facilitates learning. These findings are consistent with the idea that there is a functional hierarchy in PFC, as captured in our earlier computational models of PFC function and a growing body of empirical data. PMID:19591977

  19. GENESIS 1.1: A hybrid-parallel molecular dynamics simulator with enhanced sampling algorithms on multiple computational platforms.

    PubMed

    Kobayashi, Chigusa; Jung, Jaewoon; Matsunaga, Yasuhiro; Mori, Takaharu; Ando, Tadashi; Tamura, Koichi; Kamiya, Motoshi; Sugita, Yuji

    2017-09-30

    GENeralized-Ensemble SImulation System (GENESIS) is a software package for molecular dynamics (MD) simulation of biological systems. It is designed to push the limits of system size and accessible time scale by adopting highly parallelized schemes and enhanced conformational sampling algorithms. In this new version, GENESIS 1.1, new functions and advanced algorithms have been added. The all-atom and coarse-grained potential energy functions used in the AMBER and GROMACS packages are now available in addition to the CHARMM energy functions. The performance of MD simulations has been greatly improved by further optimization, multiple-time-step integration, and hybrid (CPU + GPU) computing. The string method and replica-exchange umbrella sampling with flexible collective-variable choice are used for finding the minimum free-energy pathway and obtaining free-energy profiles for conformational changes of a macromolecule. These new features increase the usefulness and power of GENESIS for modeling and simulation in biological research. © 2017 Wiley Periodicals, Inc.

  20. Text-mining and information-retrieval services for molecular biology

    PubMed Central

    Krallinger, Martin; Valencia, Alfonso

    2005-01-01

    Text-mining in molecular biology - defined as the automatic extraction of information about genes, proteins and their functional relationships from text documents - has emerged as a hybrid discipline on the edges of the fields of information science, bioinformatics and computational linguistics. A range of text-mining applications have been developed recently that will improve access to knowledge for biologists and database annotators. PMID:15998455

  1. Women are underrepresented in computational biology: An analysis of the scholarly literature in biology, computer science and computational biology.

    PubMed

    Bonham, Kevin S; Stefan, Melanie I

    2017-10-01

    While women are generally underrepresented in STEM fields, there are noticeable differences between fields. For instance, the gender ratio in biology is more balanced than in computer science. We were interested in how this difference is reflected in the interdisciplinary field of computational/quantitative biology. To this end, we examined the proportion of female authors in publications from the PubMed and arXiv databases. There are fewer female authors on research papers in computational biology, as compared to biology in general. This is true across authorship position, year, and journal impact factor. A comparison with arXiv shows that quantitative biology papers have a higher ratio of female authors than computer science papers, placing computational biology in between its two parent fields in terms of gender representation. Both in biology and in computational biology, a female last author increases the probability of other authors on the paper being female, pointing to a potential role of female PIs in influencing the gender balance.

  2. Molecular mechanics and dynamics characterization of an in silico mutated protein: a stand-alone lab module or support activity for in vivo and in vitro analyses of targeted proteins.

    PubMed

    Chiang, Harry; Robinson, Lucy C; Brame, Cynthia J; Messina, Troy C

    2013-01-01

    Over the past 20 years, the biological sciences have increasingly incorporated chemistry, physics, computer science, and mathematics to aid in the development and use of mathematical models. Such combined approaches have been used to address problems from protein structure-function relationships to the workings of complex biological systems. Computer simulations of molecular events can now be accomplished quickly and with standard computer technology. Also, simulation software is freely available for most computing platforms, and online support for the novice user is ample. We have therefore created a molecular dynamics laboratory module to enhance undergraduate student understanding of molecular events underlying organismal phenotype. This module builds on a previously described project in which students use site-directed mutagenesis to investigate functions of conserved sequence features in members of a eukaryotic protein kinase family. In this report, we detail the laboratory activities of an MD module that provide a complement to phenotypic outcomes by providing a hypothesis-driven and quantifiable measure of predicted structural changes caused by targeted mutations. We also present examples of analyses students may perform. These laboratory activities can be integrated with genetics or biochemistry experiments as described, but could also be used independently in any course that would benefit from a quantitative approach to protein structure-function relationships. Copyright © 2013 Wiley Periodicals, Inc.

  3. Biological materials by design.

    PubMed

    Qin, Zhao; Dimas, Leon; Adler, David; Bratzel, Graham; Buehler, Markus J

    2014-02-19

    In this topical review we discuss recent advances in the use of physical insight into the way biological materials function to design novel engineered materials 'from scratch', that is, from the level of fundamental building blocks upwards, using computational multiscale methods that link chemistry to material function. We present studies that connect advances in multiscale hierarchical material structuring with material synthesis and testing, review case studies of wood and other biological materials, and illustrate how engineered fiber composites and bulk materials are designed, modeled, and then synthesized and tested experimentally. The integration of experiment and simulation in multiscale design opens new avenues to explore the physics of materials from a fundamental perspective, using complementary strengths of models and empirical techniques. Recent developments in this field illustrate a new paradigm by which complex material functionality is achieved through hierarchical structuring in spite of simple material constituents.

  4. Automated quantitative assessment of proteins' biological function in protein knowledge bases.

    PubMed

    Mayr, Gabriele; Lepperdinger, Günter; Lackner, Peter

    2008-01-01

    Primary protein sequence data are archived in databases together with information regarding the corresponding biological functions. In this respect, UniProt/Swiss-Prot is currently the most comprehensive collection, and it is routinely cross-examined when trying to unravel the biological role of hypothetical proteins. Bioscientists frequently extract single entries and further evaluate them on a subjective basis. In lieu of a standardized procedure for scoring the existing knowledge regarding individual proteins, we here report a computer-assisted method, which we applied to score the present knowledge about any given Swiss-Prot entry. Applying this quantitative score allows proteins to be compared with respect to their sequence while highlighting the comprehensiveness of the available functional data. pfs analysis may also be applied for quality control of individual entries or for database management in order to rank entry listings.

  5. Empiricists are from Venus, modelers are from Mars: Reconciling experimental and computational approaches in cognitive neuroscience.

    PubMed

    Cowell, Rosemary A; Bussey, Timothy J; Saksida, Lisa M

    2012-11-01

    We describe how computational models can be useful to cognitive and behavioral neuroscience, and discuss some guidelines for deciding whether a model is useful. We emphasize that because instantiating a cognitive theory as a computational model requires specification of an explicit mechanism for the function in question, it often produces clear and novel behavioral predictions to guide empirical research. However, computational modeling in cognitive and behavioral neuroscience remains somewhat rare, perhaps because of misconceptions concerning the use of computational models (in particular, connectionist models) in these fields. We highlight some common misconceptions, each of which relates to an aspect of computational models: the problem space of the model, the level of biological organization at which the model is formulated, and the importance (or not) of biological plausibility, parsimony, and model parameters. Careful consideration of these aspects of a model by empiricists, along with careful delineation of them by modelers, may facilitate communication between the two disciplines and promote the use of computational models for guiding cognitive and behavioral experiments. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Conservation of Dynamics Associated with Biological Function in an Enzyme Superfamily.

    PubMed

    Narayanan, Chitra; Bernard, David N; Bafna, Khushboo; Gagné, Donald; Chennubhotla, Chakra S; Doucet, Nicolas; Agarwal, Pratul K

    2018-03-06

    Enzyme superfamily members that share common chemical and/or biological functions also share common features. While the role of structure is well characterized, the link between enzyme function and dynamics is not well understood. We present a systematic characterization of the intrinsic dynamics of over 20 members of the pancreatic-type RNase superfamily, which share a common structural fold. This study is motivated by the fact that the range of chemical activity as well as molecular motions of RNase homologs spans more than 10^5-fold. Dynamics was characterized using a combination of nuclear magnetic resonance experiments and computer simulations. Phylogenetic clustering led to the grouping of sequences into functionally distinct subfamilies. Detailed characterization of the diverse RNases showed conserved dynamical traits for enzymes within subfamilies. These results suggest that selective pressure for the conservation of dynamical behavior, among other factors, may be linked to the distinct chemical and biological functions in an enzyme superfamily. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Petri net modelling of biological networks.

    PubMed

    Chaouiya, Claudine

    2007-07-01

    Mathematical modelling is increasingly used to get insights into the functioning of complex biological networks. In this context, Petri nets (PNs) have recently emerged as a promising tool among the various methods employed for the modelling and analysis of molecular networks. PNs come with a series of extensions, which allow different abstraction levels, from purely qualitative to more complex quantitative models. Notably, each of these models preserves the underlying graph, which depicts the interactions between the biological components. This article intends to present the basics of the approach and to foster the potential role PNs could play in the development of computational systems biology.
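
    The qualitative end of the PN spectrum can be sketched in a few lines: places hold tokens, and a transition fires when all of its input places are sufficiently marked. The enzyme-substrate toy net below is a hypothetical illustration, not taken from any specific tool or paper.

    ```python
    # Toy net: enzyme E + substrate S -> complex C -> E + product P.
    places = {"E": 1, "S": 3, "C": 0, "P": 0}
    transitions = {
        "bind":    ({"E": 1, "S": 1}, {"C": 1}),           # (inputs, outputs)
        "convert": ({"C": 1},         {"E": 1, "P": 1}),
    }

    def enabled(t):
        ins, _ = transitions[t]
        return all(places[p] >= w for p, w in ins.items())

    def fire(t):
        ins, outs = transitions[t]
        for p, w in ins.items():
            places[p] -= w
        for p, w in outs.items():
            places[p] += w

    # Fire enabled transitions until the net deadlocks (substrate exhausted).
    while True:
        t = next((t for t in transitions if enabled(t)), None)
        if t is None:
            break
        fire(t)

    print(places)  # -> {'E': 1, 'S': 0, 'C': 0, 'P': 3}
    ```

    Quantitative PN extensions decorate the same underlying graph with rates or delays, which is the graph-preservation property the abstract emphasizes.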

  8. Towards programmable plant genetic circuits.

    PubMed

    Medford, June I; Prasad, Ashok

    2016-07-01

    Synthetic biology enables the construction of genetic circuits with predictable gene functions in plants. Detailed quantitative descriptions of the transfer function or input-output function for genetic parts (promoters, 5' and 3' untranslated regions, etc.) are collected. These data are then used in computational simulations to determine their robustness and desired properties, thereby enabling the best components to be selected for experimental testing in plants. In addition, the process forms an iterative workflow which allows vast improvement to validated elements with sub-optimal function. These processes enable computational functions such as digital logic in living plants and follow the pathway of technological advances which took us from vacuum tubes to cell phones. © 2016 The Authors The Plant Journal © 2016 John Wiley & Sons Ltd.
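
    A transfer function for a genetic part is often summarized with a Hill-type input-output curve, which can then be thresholded into digital logic. The parameters and function names below are illustrative assumptions, not measured values for any real plant part.

    ```python
    def hill(x: float, vmax: float = 1.0, k: float = 0.5, n: float = 2.0) -> float:
        """Activating Hill input-output curve: promoter output vs. inducer level."""
        return vmax * x**n / (k**n + x**n)

    def as_logic(x: float) -> int:
        """Crude digital reading: threshold the analog output at half-maximum."""
        return int(hill(x) > 0.5)

    print(as_logic(0.1), as_logic(2.0))  # -> 0 1
    ```

    Fitting such curves to measured part behaviour, then simulating circuits composed of them, is the kind of computational screen the abstract describes before experimental testing in plants.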

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Jeremy C.; Cheng, Xiaolin; Nickels, Jonathan D.

    Understanding of cell membrane organization has evolved significantly from the classic fluid mosaic model. It is now recognized that biological membranes are highly organized structures, with differences in lipid compositions between inner and outer leaflets and in lateral structures within the bilayer plane, known as lipid rafts. These organizing principles are important for protein localization and function as well as cellular signaling. However, the mechanisms and biophysical basis of lipid raft formation, structure, dynamics and function are not clearly understood. One key question, which we focus on in this review, is how lateral organization and leaflet compositional asymmetry are coupled. Detailed information elucidating this question has been sparse because of the small size and transient nature of rafts and the experimental challenges in constructing asymmetric bilayers. Resolving this mystery will require advances in both experimentation and modeling. We discuss here the preparation of model systems along with experimental and computational approaches that have been applied in efforts to address this key question in membrane biology. Furthermore, we seek to place recent and future advances in experimental and computational techniques in context, providing insight into in-plane and transverse organization of biological membranes.

  10. Implementation of logic functions and computations by chemical kinetics

    NASA Astrophysics Data System (ADS)

    Hjelmfelt, A.; Ross, J.

    We review our work on the computational functions of the kinetics of chemical networks. We examine spatially homogeneous networks which are based on prototypical reactions occurring in living cells and show the construction of logic gates and sequential and parallel networks. This work motivates the study of an important biochemical pathway, glycolysis, and we demonstrate that the switch that controls the flux in the direction of glycolysis or gluconeogenesis may be described as a fuzzy AND operator. We also study a spatially inhomogeneous network which shares features of theoretical and biological neural networks.
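
    The fuzzy AND reading of the glycolysis/gluconeogenesis switch can be illustrated with a generic fuzzy-logic operator; min is one common choice. The mapping of the switch to this operator is the paper's claim, while the code is only a generic fuzzy-logic sketch.

    ```python
    def fuzzy_and(a: float, b: float) -> float:
        """Fuzzy AND of two membership values in [0, 1] (minimum t-norm)."""
        return min(a, b)

    # Crisp inputs reduce to Boolean AND...
    print(fuzzy_and(1.0, 0.0))  # -> 0.0
    # ...while graded effector levels yield a graded flux signal.
    print(fuzzy_and(0.8, 0.3))  # -> 0.3
    ```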

  11. Computing motion using resistive networks

    NASA Technical Reports Server (NTRS)

    Koch, Christof; Luo, Jin; Mead, Carver; Hutchinson, James

    1988-01-01

    Recent developments in the theory of early vision are described which lead from the formulation of the motion problem as an ill-posed one to its solution by minimizing certain 'cost' functions. These cost or energy functions can be mapped onto simple analog and digital resistive networks. It is shown how the optical flow can be computed by injecting currents into resistive networks and recording the resulting stationary voltage distribution at each node. These networks can be implemented in CMOS VLSI circuits and represent plausible candidates for biological vision systems.
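
    The mapping from cost function to resistive network can be sketched in one dimension. Minimizing E(v) = Σ c_i (v_i − d_i)² + λ Σ (v_{i+1} − v_i)² corresponds to a resistor ladder whose node voltages settle to the minimizer, with the data term acting as injected current; the relaxation below is a software stand-in for that analog settling, and all values are illustrative:

```python
import numpy as np

def relax(d, c, lam, iters=500):
    """Jacobi relaxation toward the minimizer of the quadratic cost:
    each node averages its neighbours' values and its data input,
    weighted by conductances (c to the data node, lam between neighbours)."""
    v = np.zeros_like(d)
    for _ in range(iters):
        left, right = np.roll(v, 1), np.roll(v, -1)
        left[0], right[-1] = v[0], v[-1]      # open boundary: end nodes see themselves
        v = (c * d + lam * (left + right)) / (c + 2 * lam)
    return v

d = np.array([0., 1., 1., 1., 0.])  # noisy local motion estimates
c = np.ones_like(d)                 # confidence in each estimate
v = relax(d, c, lam=1.0)
print(np.round(v, 2))               # [0.36 0.73 0.82 0.73 0.36]
```

    The smoothness term pulls neighbouring flow values together, exactly as the lateral resistors do in the hardware network.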

  12. Mergeomics: a web server for identifying pathological pathways, networks, and key regulators via multidimensional data integration.

    PubMed

    Arneson, Douglas; Bhattacharya, Anindya; Shu, Le; Mäkinen, Ville-Petteri; Yang, Xia

    2016-09-09

    Human diseases are commonly the result of multidimensional changes at molecular, cellular, and systemic levels. Recent advances in genomic technologies have enabled an outpouring of omics datasets that capture these changes. However, separate analyses of these various data only provide fragmented understanding and do not capture the holistic view of disease mechanisms. To meet the urgent need for tools that effectively integrate multiple types of omics data to derive biological insights, we have developed Mergeomics, a computational pipeline that integrates multidimensional disease association data with functional genomics and molecular networks to retrieve biological pathways, gene networks, and central regulators critical for disease development. To make the Mergeomics pipeline available to a wider research community, we have implemented an online, user-friendly web server (http://mergeomics.idre.ucla.edu/). The web server features a modular implementation of the Mergeomics pipeline with detailed tutorials. Additionally, it provides curated genomic resources including tissue-specific expression quantitative trait loci, ENCODE functional annotations, biological pathways, and molecular networks, and offers interactive visualization of analytical results. Multiple computational tools including Marker Dependency Filtering (MDF), Marker Set Enrichment Analysis (MSEA), Meta-MSEA, and Weighted Key Driver Analysis (wKDA) can be used separately or in flexible combinations. User-defined summary-level genomic association datasets (e.g., genetic, transcriptomic, epigenomic) related to a particular disease or phenotype can be uploaded and computed in real time to yield biologically interpretable results, which can be viewed online and downloaded for later use. Our Mergeomics web server offers researchers flexible and user-friendly tools to facilitate integration of multidimensional data into holistic views of disease mechanisms in the form of tissue-specific key regulators, biological pathways, and gene networks.

  13. Defining functional distance using manifold embeddings of gene ontology annotations

    PubMed Central

    Lerman, Gilad; Shakhnovich, Boris E.

    2007-01-01

    Although rigorous measures of similarity for sequence and structure are now well established, the problem of defining functional relationships has been particularly daunting. Here, we present several manifold embedding techniques to compute distances between Gene Ontology (GO) functional annotations and consequently estimate functional distances between protein domains. To evaluate accuracy, we correlate the functional distance to the well established measures of sequence, structural, and phylogenetic similarities. Finally, we show that manual classification of structures into folds and superfamilies is mirrored by proximity in the newly defined function space. We show how functional distances place structure–function relationships in biological context resulting in insight into divergent and convergent evolution. The methods and results in this paper can be readily generalized and applied to a wide array of biologically relevant investigations, such as accuracy of annotation transference, the relationship between sequence, structure, and function, or coherence of expression modules. PMID:17595300
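
    One simple stand-in for such manifold-embedding techniques is classical multidimensional scaling, which turns pairwise functional dissimilarities into coordinates in a low-dimensional "function space". The toy dissimilarity matrix below is an assumption for illustration, not GO-derived data:

```python
import numpy as np

# Toy pairwise dissimilarities between three hypothetical protein domains:
D = np.array([[0., 1., 4.],
              [1., 0., 4.],
              [4., 4., 0.]])

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
B = -0.5 * J @ (D ** 2) @ J              # double-centred Gram matrix
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]           # largest eigenvalues first
coords = vecs[:, order[:2]] * np.sqrt(np.maximum(vals[order[:2]], 0.0))

# Distances in the embedding reproduce the input dissimilarities:
print(round(float(np.linalg.norm(coords[0] - coords[1])), 2))  # 1.0
```

    Once domains have coordinates, proximity in the embedded space can be compared against sequence, structural, and phylogenetic similarity, as the abstract describes.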

  14. Systems Biology for Organotypic Cell Cultures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grego, Sonia; Dougherty, Edward R.; Alexander, Francis J.

    Translating in vitro biological data into actionable information related to human health holds the potential to improve disease treatment and risk assessment of chemical exposures. While genomics has identified regulatory pathways at the cellular level, translation to the organism level requires a multiscale approach accounting for intra-cellular regulation, inter-cellular interaction, and tissue/organ-level effects. Tissue-level effects can now be probed in vitro thanks to recently developed systems of three-dimensional (3D), multicellular, “organotypic” cell cultures, which mimic functional responses of living tissue. However, there remains a knowledge gap regarding interactions across different biological scales, complicating accurate prediction of health outcomes from molecular/genomic data and tissue responses. Systems biology aims at mathematical modeling of complex, non-linear biological systems. We propose to apply a systems biology approach to achieve a computational representation of tissue-level physiological responses by integrating empirical data derived from organotypic culture systems with computational models of intracellular pathways to better predict human responses. Successful implementation of this integrated approach will provide a powerful tool for faster, more accurate and cost-effective screening of potential toxicants and therapeutics. On September 11, 2015, an interdisciplinary group of scientists, engineers, and clinicians gathered for a workshop in Research Triangle Park, North Carolina, to discuss this ambitious goal. Participants represented laboratory-based and computational modeling approaches to pharmacology and toxicology, as well as the pharmaceutical industry, government, non-profits, and academia. Discussions focused on identifying critical system perturbations to model, the computational tools required, and the experimental approaches best suited to generating key data. This consensus report summarizes the discussions held.

  15. Workshop Report: Systems Biology for Organotypic Cell Cultures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grego, Sonia; Dougherty, Edward R.; Alexander, Francis Joseph

    Translating in vitro biological data into actionable information related to human health holds the potential to improve disease treatment and risk assessment of chemical exposures. While genomics has identified regulatory pathways at the cellular level, translation to the organism level requires a multiscale approach accounting for intra-cellular regulation, inter-cellular interaction, and tissue/organ-level effects. Tissue-level effects can now be probed in vitro thanks to recently developed systems of three-dimensional (3D), multicellular, “organotypic” cell cultures, which mimic functional responses of living tissue. However, there remains a knowledge gap regarding interactions across different biological scales, complicating accurate prediction of health outcomes from molecular/genomic data and tissue responses. Systems biology aims at mathematical modeling of complex, non-linear biological systems. We propose to apply a systems biology approach to achieve a computational representation of tissue-level physiological responses by integrating empirical data derived from organotypic culture systems with computational models of intracellular pathways to better predict human responses. Successful implementation of this integrated approach will provide a powerful tool for faster, more accurate and cost-effective screening of potential toxicants and therapeutics. On September 11, 2015, an interdisciplinary group of scientists, engineers, and clinicians gathered for a workshop in Research Triangle Park, North Carolina, to discuss this ambitious goal. Participants represented laboratory-based and computational modeling approaches to pharmacology and toxicology, as well as the pharmaceutical industry, government, non-profits, and academia. Discussions focused on identifying critical system perturbations to model, the computational tools required, and the experimental approaches best suited to generating key data.

  16. Workshop Report: Systems Biology for Organotypic Cell Cultures

    DOE PAGES

    Grego, Sonia; Dougherty, Edward R.; Alexander, Francis Joseph; ...

    2016-11-14

    Translating in vitro biological data into actionable information related to human health holds the potential to improve disease treatment and risk assessment of chemical exposures. While genomics has identified regulatory pathways at the cellular level, translation to the organism level requires a multiscale approach accounting for intra-cellular regulation, inter-cellular interaction, and tissue/organ-level effects. Tissue-level effects can now be probed in vitro thanks to recently developed systems of three-dimensional (3D), multicellular, “organotypic” cell cultures, which mimic functional responses of living tissue. However, there remains a knowledge gap regarding interactions across different biological scales, complicating accurate prediction of health outcomes from molecular/genomic data and tissue responses. Systems biology aims at mathematical modeling of complex, non-linear biological systems. We propose to apply a systems biology approach to achieve a computational representation of tissue-level physiological responses by integrating empirical data derived from organotypic culture systems with computational models of intracellular pathways to better predict human responses. Successful implementation of this integrated approach will provide a powerful tool for faster, more accurate and cost-effective screening of potential toxicants and therapeutics. On September 11, 2015, an interdisciplinary group of scientists, engineers, and clinicians gathered for a workshop in Research Triangle Park, North Carolina, to discuss this ambitious goal. Participants represented laboratory-based and computational modeling approaches to pharmacology and toxicology, as well as the pharmaceutical industry, government, non-profits, and academia. Discussions focused on identifying critical system perturbations to model, the computational tools required, and the experimental approaches best suited to generating key data.

  17. Systems biology for organotypic cell cultures.

    PubMed

    Grego, Sonia; Dougherty, Edward R; Alexander, Francis J; Auerbach, Scott S; Berridge, Brian R; Bittner, Michael L; Casey, Warren; Cooley, Philip C; Dash, Ajit; Ferguson, Stephen S; Fennell, Timothy R; Hawkins, Brian T; Hickey, Anthony J; Kleensang, Andre; Liebman, Michael N J; Martin, Florian; Maull, Elizabeth A; Paragas, Jason; Qiao, Guilin Gary; Ramaiahgari, Sreenivasa; Sumner, Susan J; Yoon, Miyoung

    2017-01-01

    Translating in vitro biological data into actionable information related to human health holds the potential to improve disease treatment and risk assessment of chemical exposures. While genomics has identified regulatory pathways at the cellular level, translation to the organism level requires a multiscale approach accounting for intra-cellular regulation, inter-cellular interaction, and tissue/organ-level effects. Tissue-level effects can now be probed in vitro thanks to recently developed systems of three-dimensional (3D), multicellular, "organotypic" cell cultures, which mimic functional responses of living tissue. However, there remains a knowledge gap regarding interactions across different biological scales, complicating accurate prediction of health outcomes from molecular/genomic data and tissue responses. Systems biology aims at mathematical modeling of complex, non-linear biological systems. We propose to apply a systems biology approach to achieve a computational representation of tissue-level physiological responses by integrating empirical data derived from organotypic culture systems with computational models of intracellular pathways to better predict human responses. Successful implementation of this integrated approach will provide a powerful tool for faster, more accurate and cost-effective screening of potential toxicants and therapeutics. On September 11, 2015, an interdisciplinary group of scientists, engineers, and clinicians gathered for a workshop in Research Triangle Park, North Carolina, to discuss this ambitious goal. Participants represented laboratory-based and computational modeling approaches to pharmacology and toxicology, as well as the pharmaceutical industry, government, non-profits, and academia. Discussions focused on identifying critical system perturbations to model, the computational tools required, and the experimental approaches best suited to generating key data.

  18. BIOCOMPUTATION: some history and prospects.

    PubMed

    Cull, Paul

    2013-06-01

    At first glance, biology and computer science are diametrically opposed sciences. Biology deals with carbon based life forms shaped by evolution and natural selection. Computer Science deals with electronic machines designed by engineers and guided by mathematical algorithms. In this brief paper, we review biologically inspired computing. We discuss several models of computation which have arisen from various biological studies. We show what these have in common, and conjecture how biology can still suggest answers and models for the next generation of computing problems. We discuss computation and argue that these biologically inspired models do not extend the theoretical limits on computation. We suggest that, in practice, biological models may give more succinct representations of various problems, and we mention a few cases in which biological models have proved useful. We also discuss the reciprocal impact of computer science on biology and cite a few significant contributions to biological science. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  19. Computational properties of networks of synchronous groups of spiking neurons.

    PubMed

    Dayhoff, Judith E

    2007-09-01

    We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the group size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function approximation example in which the capability of the artificial neural network can be captured by a neural network model with biological integrate-and-fire neurons configured as a network of synchronously firing ensembles of such neurons. We point out that the general function approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.
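
    The weight correspondence described above (connection strength = interconnection density × presynaptic group size × postsynaptic potential height) can be sketched directly. The numeric values are illustrative assumptions, not parameters from the paper:

```python
def effective_weight(density: float, group_size: int, psp_height: float) -> float:
    """Expected summed input to a postsynaptic neuron when the entire
    presynaptic group fires in synchrony."""
    return density * group_size * psp_height

def group_drive(activation: float, density: float, group_size: int,
                psp_height: float) -> float:
    """Drive when only a fraction `activation` of the group fires synchronously,
    mirroring an artificial unit's activation value."""
    return activation * effective_weight(density, group_size, psp_height)

w = effective_weight(density=0.1, group_size=100, psp_height=0.5)
print(round(w, 3))                                 # 5.0
print(round(group_drive(0.4, 0.1, 100, 0.5), 3))   # 2.0
```

    Any of the three factors can modulate the effective weight, which is the point the abstract makes about learning in such networks.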

  20. Biological network motif detection and evaluation

    PubMed Central

    2011-01-01

    Background Molecular-level biological data can be assembled into system-level data as biological networks. Network motifs are defined as over-represented small connected subgraphs in networks, and they have been used for many biological applications. Since network motif discovery involves computationally challenging processes, previous algorithms have focused on computational efficiency. However, we believe that the biological quality of network motifs is also very important. Results We define biological network motifs as biologically significant subgraphs; traditional network motifs are differentiated as structural network motifs in this paper. We develop five algorithms, namely EDGEGO-BNM, EDGEBETWEENNESS-BNM, NMF-BNM, NMFGO-BNM and VOLTAGE-BNM, for efficient detection of biological network motifs, and introduce several evaluation measures, including motifs included in complexes, motifs included in functional modules, and GO term clustering score. Experimental results show that EDGEGO-BNM and EDGEBETWEENNESS-BNM perform better than existing algorithms, and all of our algorithms are applicable to finding structural network motifs as well. Conclusion We provide new approaches to finding network motifs in biological networks. Our algorithms efficiently detect biological network motifs and further improve existing algorithms to find high-quality structural network motifs, which would be impossible using existing algorithms. The performances of the algorithms are compared based on our new evaluation measures in biological contexts. We believe that our work gives some guidelines for network motif research on biological networks. PMID:22784624

  1. Computational dynamic approaches for temporal omics data with applications to systems medicine.

    PubMed

    Liang, Yulan; Kelemen, Arpad

    2017-01-01

    Modeling and predicting biological dynamic systems, while simultaneously estimating their kinetic structural and functional parameters, is extremely important in systems and computational biology. It is key to understanding the complexity of human health, drug response, disease susceptibility and pathogenesis for systems medicine. Temporal omics data used to measure dynamic biological systems are essential for discovering complex biological interactions and clinical mechanisms and causation. However, delineating the possible associations and causalities of genes, proteins, metabolites, cells and other biological entities from high-throughput time-course omics data is challenging, and conventional experimental techniques are not suited to it in the big omics era. In this paper, we present various recently developed dynamic trajectory and causal network approaches for temporal omics data, which are extremely useful for researchers who want to start working in this challenging research area. Moreover, we present applications to various biological systems, health conditions and disease states, along with examples that summarize state-of-the-art performance on different specific mining tasks. We critically discuss the merits, drawbacks and limitations of the approaches, and the associated main challenges for the years ahead. The most recent computing tools and software for analyzing specific problem types, associated platform resources, and other potential uses of the dynamic trajectory and interaction methods are also presented and discussed in detail.

  2. Statistical molecular design of balanced compound libraries for QSAR modeling.

    PubMed

    Linusson, A; Elofsson, M; Andersson, I E; Dahlgren, M K

    2010-01-01

    A fundamental step in preclinical drug development is the computation of quantitative structure-activity relationship (QSAR) models, i.e. models that link chemical features of compounds with activities towards a target macromolecule associated with the initiation or progression of a disease. QSAR models are computed by combining information on the physicochemical and structural features of a library of congeneric compounds, typically assembled from two or more building blocks, and biological data from one or more in vitro assays. Since the models provide information on features affecting the compounds' biological activity they can be used as guides for further optimization. However, in order for a QSAR model to be relevant to the targeted disease, and drug development in general, the compound library used must contain molecules with balanced variation of the features spanning the chemical space believed to be important for interaction with the biological target. In addition, the assays used must be robust and deliver high quality data that are directly related to the function of the biological target and the associated disease state. In this review, we discuss and exemplify the concept of statistical molecular design (SMD) in the selection of building blocks and final synthetic targets (i.e. compounds to synthesize) to generate information-rich, balanced libraries for biological testing and computation of QSAR models.
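
    A minimal QSAR model can be sketched as ordinary least-squares regression from descriptors to activity. The two descriptors, coefficients, and synthetic data below are all invented for illustration; a real study would use curated descriptors and assay measurements:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 40
# Balanced variation in two assumed physicochemical descriptors,
# in the spirit of statistical molecular design:
X = rng.uniform(-1, 1, size=(n, 2))
true_coef = np.array([2.0, -1.0])                    # hypothetical true effects
y = X @ true_coef + 0.5 + 0.05 * rng.normal(size=n)  # activity with assay noise

# Fit the linear QSAR model with an intercept term:
A = np.column_stack([X, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 1))   # approximately [ 2.  -1.   0.5]
```

    The fitted coefficients indicate which descriptors drive activity, which is how such models guide further compound optimization.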

  3. Implementation of Complex Biological Logic Circuits Using Spatially Distributed Multicellular Consortia

    PubMed Central

    Urrios, Arturo; de Nadal, Eulàlia; Solé, Ricard; Posas, Francesc

    2016-01-01

    Engineered synthetic biological devices have been designed to perform a variety of functions from sensing molecules and bioremediation to energy production and biomedicine. Notwithstanding, a major limitation of in vivo circuit implementation is the constraint associated to the use of standard methodologies for circuit design. Thus, future success of these devices depends on obtaining circuits with scalable complexity and reusable parts. Here we show how to build complex computational devices using multicellular consortia and space as key computational elements. This spatial modular design grants scalability since its general architecture is independent of the circuit’s complexity, minimizes wiring requirements and allows component reusability with minimal genetic engineering. The potential use of this approach is demonstrated by implementation of complex logical functions with up to six inputs, thus demonstrating the scalability and flexibility of this method. The potential implications of our results are outlined. PMID:26829588

  4. A First Attempt to Bring Computational Biology into Advanced High School Biology Classrooms

    PubMed Central

    Gallagher, Suzanne Renick; Coon, William; Donley, Kristin; Scott, Abby; Goldberg, Debra S.

    2011-01-01

    Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology element on genetic evolution into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach this alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors. PMID:22046118

  5. A first attempt to bring computational biology into advanced high school biology classrooms.

    PubMed

    Gallagher, Suzanne Renick; Coon, William; Donley, Kristin; Scott, Abby; Goldberg, Debra S

    2011-10-01

    Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology element on genetic evolution into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach this alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors.

  6. Women are underrepresented in computational biology: An analysis of the scholarly literature in biology, computer science and computational biology

    PubMed Central

    2017-01-01

    While women are generally underrepresented in STEM fields, there are noticeable differences between fields. For instance, the gender ratio in biology is more balanced than in computer science. We were interested in how this difference is reflected in the interdisciplinary field of computational/quantitative biology. To this end, we examined the proportion of female authors in publications from the PubMed and arXiv databases. There are fewer female authors on research papers in computational biology, as compared to biology in general. This is true across authorship position, year, and journal impact factor. A comparison with arXiv shows that quantitative biology papers have a higher ratio of female authors than computer science papers, placing computational biology in between its two parent fields in terms of gender representation. Both in biology and in computational biology, a female last author increases the probability of other authors on the paper being female, pointing to a potential role of female PIs in influencing the gender balance. PMID:29023441

  7. Massive-Scale Gene Co-Expression Network Construction and Robustness Testing Using Random Matrix Theory

    PubMed Central

    Isaacson, Sven; Luo, Feng; Feltus, Frank A.; Smith, Melissa C.

    2013-01-01

    The study of gene relationships and their effect on biological function and phenotype is a focal point in systems biology. Gene co-expression networks built using microarray expression profiles are one technique for discovering and interpreting gene relationships. A knowledge-independent thresholding technique, such as Random Matrix Theory (RMT), is useful for identifying meaningful relationships. Highly connected genes in the thresholded network are then grouped into modules that provide insight into their collective functionality. While it has been shown that co-expression networks are biologically relevant, it has not been determined to what extent any given network is functionally robust given perturbations in the input sample set. Such a test requires hundreds of networks, and hence a tool to rapidly construct them. To examine the functional robustness of networks with varying input, we enhanced an existing RMT implementation for improved scalability and tested the networks of human (Homo sapiens), rice (Oryza sativa) and budding yeast (Saccharomyces cerevisiae). We demonstrate a dramatic decrease in network construction time and computational requirements and show that, despite some variation in global properties between networks, functional similarity remains high. Moreover, the biological function captured by co-expression networks thresholded by RMT is highly robust. PMID:23409071
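
    The thresholding step can be sketched as follows. In the RMT approach the cut-off is raised until the eigenvalue-spacing statistics of the correlation matrix shift from the Wigner-like (correlated) to the Poisson-like (noise) regime; here a fixed threshold stands in for that RMT-chosen value, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_samples = 6, 50
expr = rng.normal(size=(n_genes, n_samples))          # synthetic expression profiles
expr[1] = expr[0] + 0.1 * rng.normal(size=n_samples)  # make genes 0 and 1 co-express

corr = np.corrcoef(expr)               # gene-gene Pearson correlation matrix
threshold = 0.8                        # stand-in for the RMT-determined cut-off
adjacency = (np.abs(corr) >= threshold) & ~np.eye(n_genes, dtype=bool)

edges = adjacency.sum() // 2
print(bool(adjacency[0, 1]), int(edges))  # True 1: only the planted edge survives
```

    Raising the threshold prunes noise edges first; the RMT criterion picks the point at which what remains behaves like genuine signal rather than a random matrix.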

  8. A cascade model of information processing and encoding for retinal prosthesis.

    PubMed

    Pei, Zhi-Jun; Gao, Guan-Xin; Hao, Bo; Qiao, Qing-Li; Ai, Hui-Jian

    2016-04-01

    Retinal prosthesis offers a potential treatment for individuals suffering from photoreceptor degeneration diseases. Establishing biological retinal models and simulating how the biological retina converts incoming light signals into spike trains that can be properly decoded by the brain is a key issue. Some retinal models have been presented, ranging from structural models inspired by the layered architecture to functional models originating from a set of specific physiological phenomena. However, most of these focus on stimulus image compression, edge detection and reconstruction, but do not generate spike trains corresponding to the visual image. In this study, based on state-of-the-art retinal physiological mechanisms, including effective visual information extraction, static nonlinear rectification of biological systems and neuronal Poisson coding, a cascade model of the retina comprising the outer plexiform layer for information processing and the inner plexiform layer for information encoding was put forward, which integrates both the anatomic connections and the functional computations of the retina. Using MATLAB software, spike trains corresponding to a stimulus image were numerically computed in four steps: linear spatiotemporal filtering, static nonlinear rectification, radial sampling and then Poisson spike generation. The simulated results suggested that such a cascade model could recreate the visual information processing and encoding functionalities of the retina, which is helpful in developing an artificial retina for the retinally blind.
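
    The four computational steps named above can be sketched on a one-dimensional toy stimulus (the paper used MATLAB; NumPy is used here, and the filter shape, rectifier, sampling scheme and rate scaling are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
stimulus = np.array([0., 0., 1., 1., 1., 0., 0.])   # toy luminance profile

# 1) Linear spatiotemporal filtering: a centre-surround kernel (spatial part only).
kernel = np.array([-0.5, 1.0, -0.5])
filtered = np.convolve(stimulus, kernel, mode="same")

# 2) Static nonlinear rectification: half-wave rectify, saturate, scale to Hz.
rates = np.clip(filtered, 0.0, 1.0) * 50.0          # assumed 50 Hz peak rate

# 3) Sampling: keep every other location (a stand-in for radial sampling).
sampled = rates[::2]

# 4) Poisson spike generation: 1 s of 1 ms bins per sampled location.
dt = 0.001
spikes = rng.random((sampled.size, 1000)) < sampled[:, None] * dt
print(spikes.sum(axis=1))   # edge locations spike; uniform regions stay silent
```

    The centre-surround filter responds only at luminance edges, so after rectification the Poisson stage emits spikes where the stimulus changes, a crude version of the encoding the cascade model performs.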

  9. Indications for quantum computation requirements from comparative brain analysis

    NASA Astrophysics Data System (ADS)

    Bernroider, Gustav; Baer, Wolfgang

    2010-04-01

    Whether or not neuronal signal properties can engage 'non-trivial', i.e. functionally significant, quantum properties is the subject of an ongoing debate. Here we provide evidence that quantum coherence dynamics can play a functional role in the ion conduction mechanism, with consequences for the shape and associative character of classical membrane signals. In particular, these new perspectives predict that a specific neuronal topology (e.g. the connectivity pattern of cortical columns in the primate brain) is less important, and not really required, to explain abilities in perception and sensory-motor integration. Instead, the evidence suggests a decisive role for the number and functional segregation of ion channel proteins that can be engaged in a particular neuronal constellation. We provide evidence from comparative brain studies and estimates of the computational capacity behind visual flight functions that suggest a possible role for quantum computation in biological systems.

  10. Molecular Mechanics and Dynamics Characterization of an "in silico" Mutated Protein: A Stand-Alone Lab Module or Support Activity for "in vivo" and "in vitro" Analyses of Targeted Proteins

    ERIC Educational Resources Information Center

    Chiang, Harry; Robinson, Lucy C.; Brame, Cynthia J.; Messina, Troy C.

    2013-01-01

    Over the past 20 years, the biological sciences have increasingly incorporated chemistry, physics, computer science, and mathematics to aid in the development and use of mathematical models. Such combined approaches have been used to address problems from protein structure-function relationships to the workings of complex biological systems.…

  11. Protein-protein interaction predictions using text mining methods.

    PubMed

    Papanikolaou, Nikolas; Pavlopoulos, Georgios A; Theodosiou, Theodosios; Iliopoulos, Ioannis

    2015-03-01

    It is beyond any doubt that proteins and their interactions play an essential role in most complex biological processes. Understanding their function individually, but also in the form of protein complexes, is of great importance. Nowadays, alongside the plethora of high-throughput experimental approaches for detecting protein-protein interactions, many computational methods aiming to predict new interactions have appeared and gained interest. In this review, we focus on text-mining-based computational methodologies that aim to extract information about proteins and their interactions from public repositories such as the literature and various biological databases. We discuss their strengths and weaknesses and how they complement existing experimental techniques, while also commenting on the biological databases that hold such information and the benchmark datasets that can be used for evaluating new tools. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Computational Systems Biology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDermott, Jason E.; Samudrala, Ram; Bumgarner, Roger E.

    2009-05-01

    Computational systems biology is the term that we use to describe computational methods to identify, infer, model, and store relationships between the molecules, pathways, and cells ("systems") involved in a living organism. Based on this definition, the field of computational systems biology has been in existence for some time. However, the recent confluence of high-throughput methodology for biological data gathering, genome-scale sequencing and computational processing power has driven a reinvention and expansion of this field. The expansions include not only modeling of small metabolic and signaling systems but also modeling of the relationships between biological components in very large systems, including whole cells and organisms. Generally these models provide an overview of one or more aspects of these systems and leave the determination of details to experimentalists focused on smaller subsystems. The promise of such approaches is that they will elucidate patterns, relationships and general features that are not evident from examining specific components or subsystems. These predictions are either interesting in and of themselves (for example, the identification of an evolutionary pattern), or are valuable to researchers working on a particular problem (for example, highlighting a previously unknown functional pathway). Two events have brought the field of computational systems biology to the forefront. 
    One is the advent of high-throughput methods that have generated large amounts of information about particular systems in the form of genetic studies, gene expression analyses (both protein and mRNA) and metabolomics. With such tools, research efforts that consider systems as a whole are being conceived, planned and implemented experimentally on an ever more frequent and wider scale. The other is the growth of computational processing power and tools. Methods to analyze large data sets of this kind are often computationally demanding and, as is the case in other areas, the field has benefited from continuing improvements in computational hardware and methods. The field of computational systems biology is very much like a telescope with two sequential lenses: one lens represents the biological data and the other represents a computational and/or mathematical model of the data. Both lenses must be properly coordinated to yield an image that reflects biological reality; that is, both lenses must be designed in concert to create a system that yields a model of the organism providing both predictive and mechanistic information. The chapters in this book describe the construction of subcomponents of such a system. Computational systems biology is a rapidly evolving field, and no single group of investigators has yet developed a complete system that integrates both data generation and data analysis so as to allow full and accurate modeling of any single biological organism. However, the field is rapidly moving in that direction. The chapters in this book represent a snapshot of the current methods being developed and used in the area of computational systems biology. Each method or database described within represents one or more steps on the path to a complete description of a biological system. How these tools will evolve and ultimately be integrated is an area of intense research and interest. 
    We hope that readers of this book will be motivated by the chapters within and become involved in this exciting area of research.

  13. Advanced techniques in placental biology -- workshop report.

    PubMed

    Nelson, D M; Sadovsky, Y; Robinson, J M; Croy, B A; Rice, G; Kniss, D A

    2006-04-01

    Major advances in placental biology have been realized as new technologies have been developed and existing methods have been refined in many areas of biological research. Classical anatomy and whole-organ physiology tools once used to analyze placental structure and function have been supplanted by more sophisticated techniques adapted from molecular biology, proteomics, computational biology and bioinformatics. In addition, significant refinements in morphological study of the placenta and its constituent cell types have improved our ability to assess form and function in a highly integrated manner. To offer an overview of modern technologies used by investigators to study the placenta, this workshop, Advanced Techniques in Placental Biology, assembled experts who discussed fundamental principles and real-time examples of four separate methodologies. Y. Sadovsky presented the principles of microRNA function as an endogenous mechanism of gene regulation. J. Robinson demonstrated the utility of correlative microscopy, in which light-level and transmission electron microscopy are combined to provide cellular and subcellular views of placental cells. A. Croy lectured on microdissection techniques, which are invaluable for isolating very small subsets of cell types for molecular analysis. Finally, G. Rice presented an overview of methods for profiling complex protein mixtures within tissue and/or fluid samples that, when refined, will offer databases to underpin a systems approach to modern trophoblast biology.

  14. Rational Design of an Ultrasensitive Quorum-Sensing Switch.

    PubMed

    Zeng, Weiqian; Du, Pei; Lou, Qiuli; Wu, Lili; Zhang, Haoqian M; Lou, Chunbo; Wang, Hongli; Ouyang, Qi

    2017-08-18

    One of the purposes of synthetic biology is to develop rational methods that accelerate the design of genetic circuits, saving time and effort spent on experiments and providing reliably predictable circuit performance. We applied a reverse-engineering approach to design an ultrasensitive transcriptional quorum-sensing switch, exploring how systems biology can guide synthetic biology in the choice of specific DNA sequences and their regulatory relations to achieve a targeted function. The workflow comprises enumeration of networks that achieve the target function robustly, experimental restriction of the obtained candidate networks, global parameter optimization via mathematical analysis, selection and engineering of parts based on these calculations, and finally, circuit construction based on the principles of standardization and modularization. The performance of the realized quorum-sensing switches was in good qualitative agreement with the computational predictions. This study provides practical principles for the rational design of genetic circuits with targeted functions.
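
    The "ultrasensitive" target function above can be made concrete with a standard quantitative criterion. This sketch is illustrative only (the Hill form and parameters are assumptions, not the paper's fitted model): a response is called ultrasensitive when its local response coefficient, d(log f)/d(log x), exceeds 1 near the switching threshold.

```python
def hill(x, K=1.0, n=4.0):
    """Hill-type dose-response; large n gives the switch-like behavior
    targeted by an ultrasensitive quorum-sensing circuit."""
    return x**n / (K**n + x**n)

def response_coefficient(f, x, eps=1e-6):
    """Local logarithmic sensitivity d(log f)/d(log x) via central difference."""
    return x * (f(x + eps) - f(x - eps)) / (2 * eps) / f(x)

# At the threshold x = K, the coefficient equals n/2 for a Hill function,
# so n = 4 yields a value of 2: well above the ultrasensitivity cutoff of 1.
rc = response_coefficient(hill, 1.0)
print(rc)
```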

  15. Sensitivity analysis of dynamic biological systems with time-delays.

    PubMed

    Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang

    2010-10-15

    Mathematical modeling has long been applied to the study and analysis of complex biological systems. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay; such systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires solving the model and sensitivity equations with time delays, and the major effort is the computation of the Jacobian matrix when solving the sensitivity equations. Computing the partial derivatives of complex equations, whether analytically or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach that obtains the derivatives of complex functions efficiently and accurately is necessary. We previously proposed an efficient algorithm with adaptive step-size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). Here, the adaptive direct-decoupled algorithm is extended to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid errors in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with little user intervention. 
    Theoretical comparison with direct-coupled methods shows that the extended algorithm is efficient, accurate, and easy to use for end users without a programming background who wish to perform dynamic sensitivity analysis on complex biological systems with time delays.
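
    The automatic-differentiation idea mentioned above can be sketched with forward-mode dual numbers: carrying a (value, derivative) pair through ordinary arithmetic yields exact partial derivatives, so Jacobian entries need not be derived by hand. This is a generic illustration of the technique, not the authors' implementation, and the right-hand-side function f is a hypothetical example.

```python
import math

class Dual:
    """Forward-mode automatic differentiation via dual numbers: each value
    carries its derivative with respect to a chosen seed variable."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def __neg__(self):
        return Dual(-self.val, -self.der)

def exp(d):
    # chain rule: d/dt e^(u(t)) = e^u * u'
    return Dual(math.exp(d.val), math.exp(d.val) * d.der)

# Hypothetical model right-hand side f(x, y) = x*y + exp(-x).
def f(x, y):
    return x * y + exp(-x)

# Seed der=1 on x to obtain the Jacobian entry df/dx = y - exp(-x) at (2, 3).
dfdx = f(Dual(2.0, 1.0), Dual(3.0, 0.0)).der
print(dfdx)
```

    Exactness (to machine precision) is the advantage over finite differences, which is why embedding AD in the sensitivity solver avoids both symbolic manipulation and human error.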

  16. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    PubMed

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes through systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations that take at most several hours to analyze a common input on a modern desktop workstation; however, because they are invoked repeatedly over a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, new software, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within a parallel supercomputing environment. The Message Passing Interface (MPI) is used to exchange information between nodes. Two specialized threads, one for task management and communication and another for subtask execution, are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. mpiWrapper can launch conventional Linux applications without modification of their original source code and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper .
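
    The task-farming scheme described above, a management/communication role handing subtasks to execution roles that run ordinary command-line programs, can be illustrated in miniature. This is a generic thread-and-queue sketch, not mpiWrapper itself (which uses MPI across nodes); the `echo` subtasks stand in for real non-parallel bioinformatics invocations.

```python
import queue
import subprocess
import threading

tasks = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    """Execution role: pull command lines off the queue and run them as
    ordinary subprocesses until a None sentinel signals shutdown."""
    while True:
        cmd = tasks.get()
        if cmd is None:
            break
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
        with lock:
            results.append(out.strip())

n_workers = 4
threads = [threading.Thread(target=worker) for _ in range(n_workers)]
for t in threads:
    t.start()

# Management role: enqueue each non-parallel program invocation, then one
# sentinel per worker so every thread terminates cleanly.
for i in range(8):
    tasks.put(["echo", f"subtask-{i}"])
for _ in threads:
    tasks.put(None)
for t in threads:
    t.join()

print(sorted(results))
```

    Separating management from execution is what lets blocking receive calls coexist with long-running subtasks without deadlock, the same motivation given for mpiWrapper's two-thread design.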

  17. Genome-Wide Detection and Analysis of Multifunctional Genes

    PubMed Central

    Pritykin, Yuri; Ghersi, Dario; Singh, Mona

    2015-01-01

    Many genes can play a role in multiple biological processes or molecular functions. Identifying multifunctional genes at the genome-wide level and studying their properties can shed light upon the complexity of molecular events that underpin cellular functioning, thereby leading to a better understanding of the functional landscape of the cell. However, to date, genome-wide analysis of multifunctional genes (and the proteins they encode) has been limited. Here we introduce a computational approach that uses known functional annotations to extract genes playing a role in at least two distinct biological processes. We leverage functional genomics data sets for three organisms—H. sapiens, D. melanogaster, and S. cerevisiae—and show that, as compared to other annotated genes, genes involved in multiple biological processes possess distinct physicochemical properties, are more broadly expressed, tend to be more central in protein interaction networks, tend to be more evolutionarily conserved, and are more likely to be essential. We also find that multifunctional genes are significantly more likely to be involved in human disorders. These same features also hold when multifunctionality is defined with respect to molecular functions instead of biological processes. Our analysis uncovers key features about multifunctional genes, and is a step towards a better genome-wide understanding of gene multifunctionality. PMID:26436655
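
    The extraction step described above can be reduced to its core idea: flag a gene as multifunctional if it is annotated with at least two distinct biological processes. The annotation dictionary below is toy data, and the real method must additionally verify that the two processes are genuinely distinct (e.g. not ancestor/descendant terms in the GO hierarchy), which this sketch omits.

```python
# Toy gene -> biological-process annotations (hypothetical, not real GO data).
annotations = {
    "GENE_A": {"glycolysis", "apoptosis"},
    "GENE_B": {"glycolysis"},
    "GENE_C": {"DNA repair", "cell cycle", "apoptosis"},
}

# A gene is called multifunctional when it participates in >= 2 processes.
multifunctional = sorted(
    gene for gene, terms in annotations.items() if len(terms) >= 2
)
print(multifunctional)
```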

  18. Analog Computation by DNA Strand Displacement Circuits.

    PubMed

    Song, Tianqi; Garg, Sudhanshu; Mokhtar, Reem; Bui, Hieu; Reif, John

    2016-08-19

    DNA circuits have been widely used to develop biological computing devices because of their high programmability and versatility. Here, we propose an architecture for the systematic construction of DNA circuits for analog computation based on DNA strand displacement. The elementary gates in our architecture include addition, subtraction, and multiplication gates. The inputs and outputs of these gates are analog: they are directly represented by the concentrations of the input and output DNA strands, respectively, without requiring a threshold for conversion to Boolean signals. We provide detailed domain designs and kinetic simulations of the gates to demonstrate their expected performance. On the basis of these gates, we describe how DNA circuits that compute polynomial functions of their inputs can be built. Using Taylor series and Newton iteration methods, functions beyond the scope of polynomials can also be computed by DNA circuits built upon our architecture.
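
    The Newton-iteration idea is worth making concrete: with only the three elementary gate operations (+, -, ×), a circuit can approximate the non-polynomial function 1/a. The sketch below abstracts away all strand-displacement kinetics and treats gate outputs as ideal real numbers; the starting guess and iteration count are arbitrary choices, not values from the paper.

```python
def reciprocal(a, x0=0.1, iterations=8):
    """Approximate 1/a using only addition, subtraction, and multiplication:
    Newton's iteration x <- x * (2 - a * x), which converges quadratically
    whenever 0 < a * x0 < 2."""
    x = x0
    for _ in range(iterations):
        x = x * (2.0 - a * x)  # two multiplications, one subtraction
    return x

approx = reciprocal(3.0)
print(approx)  # converges toward 1/3
```

    Because the error satisfies (1 - a*x) -> (1 - a*x)^2 each round, a handful of gate layers suffices, which is why a fixed-depth cascade of analog gates can realize such functions.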

  19. Computational analysis of conserved RNA secondary structure in transcriptomes and genomes.

    PubMed

    Eddy, Sean R

    2014-01-01

    Transcriptomics experiments and computational predictions both enable systematic discovery of new functional RNAs. However, many putative noncoding transcripts arise instead from artifacts and biological noise, and current computational prediction methods have high false positive rates. I discuss prospects for improving computational methods for analyzing and identifying functional RNAs, with a focus on detecting signatures of conserved RNA secondary structure. An interesting new front is the application of chemical and enzymatic experiments that probe RNA structure on a transcriptome-wide scale. I review several proposed approaches for incorporating structure probing data into the computational prediction of RNA secondary structure. Using probabilistic inference formalisms, I show how all these approaches can be unified in a well-principled framework, which in turn allows RNA probing data to be easily integrated into a wide range of analyses that depend on RNA secondary structure inference. Such analyses include homology search and genome-wide detection of new structural RNAs.

  20. VirtualLeaf: an open-source framework for cell-based modeling of plant tissue growth and development.

    PubMed

    Merks, Roeland M H; Guravage, Michael; Inzé, Dirk; Beemster, Gerrit T S

    2011-02-01

    Plant organs, including leaves and roots, develop by means of a multilevel cross talk between gene regulation, patterned cell division and cell expansion, and tissue mechanics. The multilevel regulatory mechanisms complicate classic molecular genetics or functional genomics approaches to biological development, because these methodologies implicitly assume a direct relation between genes and traits at the level of the whole plant or organ. Instead, understanding gene function requires insight into the roles of gene products in regulatory networks, the conditions of gene expression, etc. This interplay is impossible to understand intuitively. Mathematical and computer modeling allows researchers to design new hypotheses and produce experimentally testable insights. However, the required mathematics and programming experience makes modeling poorly accessible to experimental biologists. Problem-solving environments provide biologically intuitive in silico objects ("cells", "regulation networks") required for setting up a simulation and present those to the user in terms of familiar, biological terminology. Here, we introduce the cell-based computer modeling framework VirtualLeaf for plant tissue morphogenesis. The current version defines a set of biologically intuitive C++ objects, including cells, cell walls, and diffusing and reacting chemicals, that provide useful abstractions for building biological simulations of developmental processes. We present a step-by-step introduction to building models with VirtualLeaf, providing basic example models of leaf venation and meristem development. VirtualLeaf-based models provide a means for plant researchers to analyze the function of developmental genes in the context of the biophysics of growth and patterning. VirtualLeaf is an ongoing open-source software project (http://virtualleaf.googlecode.com) that runs on Windows, Mac, and Linux.

  1. Mining for Micropeptides.

    PubMed

    Makarewich, Catherine A; Olson, Eric N

    2017-09-01

    Advances in computational biology and large-scale transcriptome analyses have revealed that a much larger portion of the genome is transcribed than was previously recognized, resulting in the production of a diverse population of RNA molecules with both protein-coding and noncoding potential. Emerging evidence indicates that several RNA molecules have been mis-annotated as noncoding and in fact harbor short open reading frames (sORFs) that encode functional peptides and that have evaded detection until now due to their small size. sORF-encoded peptides (SEPs), or micropeptides, have been shown to have important roles in fundamental biological processes and in the maintenance of cellular homeostasis. These small proteins can act independently, for example as ligands or signaling molecules, or they can exert their biological functions by engaging with and modulating larger regulatory proteins. Given their small size, micropeptides may be uniquely suited to fine-tune complex biological systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
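
    The detection problem described above, sORFs overlooked because of their size, comes down to scanning transcripts for short ATG-to-stop frames. The sketch below uses an arbitrary 100-codon cutoff and is purely illustrative: real micropeptide discovery also weighs conservation, Kozak context, and ribosome-profiling evidence.

```python
import re

def find_sorfs(seq, max_codons=100):
    """Return (start, end, n_codons) for each ATG-initiated open reading
    frame of at most max_codons codons (stop codon excluded from the count)."""
    sorfs = []
    for m in re.finditer("ATG", seq):
        start = m.start()
        for i in range(start + 3, len(seq) - 2, 3):
            if seq[i:i + 3] in ("TAA", "TAG", "TGA"):
                n_codons = (i - start) // 3
                if n_codons <= max_codons:
                    sorfs.append((start, i + 3, n_codons))
                break
    return sorfs

# Toy transcript: one 4-codon sORF (ATG AAA TTT CCC) closed by a TAA stop.
print(find_sorfs("GGATGAAATTTCCCTAAGG"))
```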

  2. Design of synthetic biological logic circuits based on evolutionary algorithm.

    PubMed

    Chuang, Chia-Hua; Lin, Chun-Liang; Chang, Yen-Chang; Jennawasin, Tanagorn; Chen, Po-Kuei

    2013-08-01

    The construction of artificial biological logic circuits using a systematic strategy is recognised as one of the most important topics for the development of synthetic biology. In this study, a real-structured genetic algorithm (RSGA), which combines the general advantages of the traditional real-coded genetic algorithm with those of the structured genetic algorithm, is proposed to deal with the biological logic circuit design problem. A general model with a cis-regulatory input function and appropriate promoter activity functions is proposed to synthesise a wide variety of fundamental logic gates, such as NOT, Buffer, AND, OR, NAND, NOR and XOR. The results obtained can be extended to synthesise advanced combinational and sequential logic circuits through topologically distinct connections. The resulting optimal designs of these logic gates and circuits are established via the RSGA. The in silico, computer-based modelling approach is verified, demonstrating its advantages for this purpose.
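
    A cis-regulatory input function of the kind the model above optimizes can be illustrated with a simple AND gate: promoter activity as the product of two activating Hill terms. The parameters (K, n) are arbitrary placeholders; in the paper such parameters are what the RSGA searches over, which is not shown here.

```python
def hill_act(x, K=1.0, n=2.0):
    """Activating Hill term: near 0 for x << K, near 1 for x >> K."""
    return x**n / (K**n + x**n)

def and_gate(u1, u2):
    """AND-like promoter activity: high only when both inducers are high."""
    return hill_act(u1) * hill_act(u2)

# Truth-table-style sweep over low (0) and high (10) inducer levels.
for u1 in (0.0, 10.0):
    for u2 in (0.0, 10.0):
        print(u1, u2, round(and_gate(u1, u2), 3))
```

    Swapping activating terms for repressing ones (K**n / (K**n + x**n)) and recombining them yields the other gate types listed above, which is what makes a single parameterized model family searchable by a genetic algorithm.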

  3. Permeating disciplines: Overcoming barriers between molecular simulations and classical structure-function approaches in biological ion transport.

    PubMed

    Howard, Rebecca J; Carnevale, Vincenzo; Delemotte, Lucie; Hellmich, Ute A; Rothberg, Brad S

    2018-04-01

    Ion translocation across biological barriers is a fundamental requirement for life. In many cases, controlling this process-for example with neuroactive drugs-demands an understanding of rapid and reversible structural changes in membrane-embedded proteins, including ion channels and transporters. Classical approaches to electrophysiology and structural biology have provided valuable insights into several such proteins over macroscopic, often discontinuous scales of space and time. Integrating these observations into meaningful mechanistic models now relies increasingly on computational methods, particularly molecular dynamics simulations, while surfacing important challenges in data management and conceptual alignment. Here, we seek to provide contemporary context, concrete examples, and a look to the future for bridging disciplinary gaps in biological ion transport. This article is part of a Special Issue entitled: Beyond the Structure-Function Horizon of Membrane Proteins edited by Ute Hellmich, Rupak Doshi and Benjamin McIlwain. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Autopoiesis + extended cognition + nature = can buildings think?

    PubMed Central

    Dollens, Dennis

    2015-01-01

    To incorporate metabolic, bioremedial functions into the performance of buildings and to balance generative architecture's dominant focus on computational programming and digital fabrication, this text first discusses hybridizing Maturana and Varela's biological theory of autopoiesis with Andy Clark's hypothesis of extended cognition. Doing so establishes a procedural protocol to research biological domains from which design could source data/insight from biosemiotics, sensory plants, and biocomputation. I trace computation and botanic simulations back to Alan Turing's little-known 1950s Morphogenetic drawings, reaction-diffusion algorithms, and pioneering artificial intelligence (AI) in order to establish bioarchitecture's generative point of origin. I ask provocatively, Can buildings think? as a question echoing Turing's own, "Can machines think?" PMID:26478784

  5. A data mining paradigm for identifying key factors in biological processes using gene expression data.

    PubMed

    Li, Jin; Zheng, Le; Uchiyama, Akihiko; Bin, Lianghua; Mauro, Theodora M; Elias, Peter M; Pawelczyk, Tadeusz; Sakowicz-Burkiewicz, Monika; Trzeciak, Magdalena; Leung, Donald Y M; Morasso, Maria I; Yu, Peng

    2018-06-13

    A large volume of biological data is being generated for studying mechanisms of various biological processes. These precious data enable large-scale computational analyses to gain biological insights. However, it remains a challenge to mine the data efficiently for knowledge discovery. The heterogeneity of these data makes it difficult to consistently integrate them, slowing down the process of biological discovery. We introduce a data processing paradigm to identify key factors in biological processes via systematic collection of gene expression datasets, primary analysis of data, and evaluation of consistent signals. To demonstrate its effectiveness, our paradigm was applied to epidermal development and identified many genes that play a potential role in this process. Besides the known epidermal development genes, a substantial proportion of the identified genes are still not supported by gain- or loss-of-function studies, yielding many novel genes for future studies. Among them, we selected a top gene for loss-of-function experimental validation and confirmed its function in epidermal differentiation, proving the ability of this paradigm to identify new factors in biological processes. In addition, this paradigm revealed many key genes in cold-induced thermogenesis using data from cold-challenged tissues, demonstrating its generalizability. This paradigm can lead to fruitful results for studying molecular mechanisms in an era of explosive accumulation of publicly available biological data.
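
    The "evaluation of consistent signals" step above can be sketched as a cross-dataset agreement score: a gene is a stronger candidate the more independent datasets report it moving in the same direction. The log-fold-change values below are toy numbers, not results from the study.

```python
# Hypothetical per-gene log-fold-changes across four independent datasets.
fold_changes = {
    "KRT1":   [1.8, 2.1, 1.5, 2.4],    # consistently up-regulated
    "GENE_X": [0.9, -1.2, 0.3, -0.4],  # direction disagrees across datasets
}

def consistency(lfcs):
    """Fraction of datasets agreeing with the majority direction."""
    ups = sum(v > 0 for v in lfcs)
    return max(ups, len(lfcs) - ups) / len(lfcs)

scores = {gene: consistency(v) for gene, v in fold_changes.items()}
print(scores)
```

    Ranking genes by such a score surfaces reproducible candidates for follow-up gain- or loss-of-function experiments, the validation route the paradigm describes.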

  6. The EPA Comptox Chemistry Dashboard: A Web-Based Data ...

    EPA Pesticide Factsheets

    The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data driven approaches that integrate chemistry, exposure and biological data. As an outcome of these efforts the National Center for Computational Toxicology (NCCT) has measured, assembled and delivered an enormous quantity and diversity of data for the environmental sciences including high-throughput in vitro screening data, in vivo and functional use data, exposure models and chemical databases with associated properties. A series of software applications and databases have been produced over the past decade to deliver these data but recent developments have focused on the development of a new software architecture that assembles the resources into a single platform. A new web application, the CompTox Chemistry Dashboard provides access to data associated with ~720,000 chemical substances. These data include experimental and predicted physicochemical property data, bioassay screening data associated with the ToxCast program, product and functional use information and a myriad of related data of value to environmental scientists. The dashboard provides chemical-based searching based on chemical names, synonyms and CAS Registry Numbers. Flexible search capabilities allow for chemical identificati

  7. clubber: removing the bioinformatics bottleneck in big data analyses.

    PubMed

    Miller, Maximilian; Zhu, Chengsheng; Bromberg, Yana

    2017-06-13

    With the advent of modern day high-throughput technologies, the bottleneck in biological discovery has shifted from the cost of doing experiments to that of analyzing results. clubber is our automated cluster-load balancing system developed for optimizing these "big data" analyses. Its plug-and-play framework encourages re-use of existing solutions for bioinformatics problems. clubber's goals are to reduce computation times and to facilitate use of cluster computing. The first goal is achieved by automating the balance of parallel submissions across available high performance computing (HPC) resources. Notably, the latter can be added on demand, including cloud-based resources, and/or featuring heterogeneous environments. The second goal of making HPCs user-friendly is facilitated by an interactive web interface and a RESTful API, allowing for job monitoring and result retrieval. We used clubber to speed up our pipeline for annotating molecular functionality of metagenomes. Here, we analyzed the Deepwater Horizon oil-spill study data to quantitatively show that the beach sands have not yet entirely recovered. Further, our analysis of the CAMI-challenge data revealed that microbiome taxonomic shifts do not necessarily correlate with functional shifts. These examples (21 metagenomes processed in 172 min) clearly illustrate the importance of clubber in the everyday computational biology environment.

  8. The EPA CompTox Chemistry Dashboard - an online resource ...

    EPA Pesticide Factsheets

    The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data driven approaches that integrate chemistry, exposure and biological data. As an outcome of these efforts the National Center for Computational Toxicology (NCCT) has measured, assembled and delivered an enormous quantity and diversity of data for the environmental sciences including high-throughput in vitro screening data, in vivo and functional use data, exposure models and chemical databases with associated properties. A series of software applications and databases have been produced over the past decade to deliver these data. Recent work has focused on the development of a new architecture that assembles the resources into a single platform. With a focus on delivering access to Open Data streams, web service integration accessibility and a user-friendly web application the CompTox Dashboard provides access to data associated with ~720,000 chemical substances. These data include research data in the form of bioassay screening data associated with the ToxCast program, experimental and predicted physicochemical properties, product and functional use information and related data of value to environmental scientists. This presentation will provide an overview of the CompTox Dashboard and its va

  9. clubber: removing the bioinformatics bottleneck in big data analyses

    PubMed Central

    Miller, Maximilian; Zhu, Chengsheng; Bromberg, Yana

    2018-01-01

    With the advent of modern day high-throughput technologies, the bottleneck in biological discovery has shifted from the cost of doing experiments to that of analyzing results. clubber is our automated cluster-load balancing system developed for optimizing these “big data” analyses. Its plug-and-play framework encourages re-use of existing solutions for bioinformatics problems. clubber’s goals are to reduce computation times and to facilitate use of cluster computing. The first goal is achieved by automating the balance of parallel submissions across available high performance computing (HPC) resources. Notably, the latter can be added on demand, including cloud-based resources, and/or featuring heterogeneous environments. The second goal of making HPCs user-friendly is facilitated by an interactive web interface and a RESTful API, allowing for job monitoring and result retrieval. We used clubber to speed up our pipeline for annotating molecular functionality of metagenomes. Here, we analyzed the Deepwater Horizon oil-spill study data to quantitatively show that the beach sands have not yet entirely recovered. Further, our analysis of the CAMI-challenge data revealed that microbiome taxonomic shifts do not necessarily correlate with functional shifts. These examples (21 metagenomes processed in 172 min) clearly illustrate the importance of clubber in the everyday computational biology environment. PMID:28609295

  10. Parametric scaling from species relative abundances to absolute abundances in the computation of biological diversity: a first proposal using Shannon's entropy.

    PubMed

    Ricotta, Carlo

    2003-01-01

    Traditional diversity measures such as the Shannon entropy are generally computed from the species' relative abundance vector of a given community to the exclusion of species' absolute abundances. In this paper, I first mention some examples where the total information content associated with a given community may be more adequate than Shannon's average information content for a better understanding of ecosystem functioning. Next, I propose a parametric measure of statistical information that contains both Shannon's entropy and total information content as special cases of this more general function.
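    The two special cases can be made concrete: Shannon's entropy is computed from relative abundances, while the total information content scales it by community size. The `gamma` interpolation below is an illustrative stand-in, not Ricotta's exact parametric function.

    ```python
    import math

    def shannon_entropy(abundances):
        """Shannon entropy H = -sum(p_i * ln p_i) over relative abundances."""
        n = sum(abundances)
        ps = [a / n for a in abundances if a > 0]
        return -sum(p * math.log(p) for p in ps)

    def total_information(abundances):
        """Total information content: community size N times the entropy H."""
        return sum(abundances) * shannon_entropy(abundances)

    def parametric_information(abundances, gamma):
        """Illustrative interpolation N**gamma * H: gamma=0 recovers Shannon's
        entropy, gamma=1 the total information content."""
        return sum(abundances) ** gamma * shannon_entropy(abundances)
    ```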

  11. Toward a multiscale modeling framework for understanding serotonergic function

    PubMed Central

    Wong-Lin, KongFatt; Wang, Da-Hui; Moustafa, Ahmed A; Cohen, Jeremiah Y; Nakamura, Kae

    2017-01-01

    Despite its importance in regulating emotion and mental wellbeing, the complex structure and function of the serotonergic system present formidable challenges toward understanding its mechanisms. In this paper, we review studies investigating the interactions between serotonergic and related brain systems and their behavior at multiple scales, with a focus on biologically-based computational modeling. We first discuss serotonergic intracellular signaling and neuronal excitability, followed by neuronal circuit and systems levels. At each level of organization, we will discuss the experimental work accompanied by related computational modeling work. We then suggest that a multiscale modeling approach that integrates the various levels of neurobiological organization could potentially transform the way we understand the complex functions associated with serotonin. PMID:28417684

  12. Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids.

    PubMed

    Aradi, Bálint; Niklasson, Anders M N; Frauenheim, Thomas

    2015-07-14

    A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born-Oppenheimer molecular dynamics. For systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can be applied to a broad range of problems in materials science, chemistry, and biology.
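    The extended-Lagrangian idea can be illustrated schematically: auxiliary charges n are propagated by a time-reversible, Verlet-like update that is harmonically drawn toward the self-consistent charges q, so a full self-consistency loop is not needed every step. This is a toy sketch, not the DFTB+ implementation; `kappa` stands in for the dimensionless coupling (dt² ω² in the full scheme).

    ```python
    def xl_bomd_step(n_prev, n_curr, q_scf, kappa=1.0):
        """One extended-Lagrangian update per auxiliary charge:
        n(t+dt) = 2*n(t) - n(t-dt) + kappa * (q(t) - n(t))."""
        return [2 * c - p + kappa * (q - c)
                for p, c, q in zip(n_prev, n_curr, q_scf)]
    ```

    Note that when the auxiliary charges already match the self-consistent charges, the update leaves them unchanged.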

  13. PANDORA: keyword-based analysis of protein sets by integration of annotation sources.

    PubMed

    Kaplan, Noam; Vaaknin, Avishay; Linial, Michal

    2003-10-01

    Recent advances in high-throughput methods and the application of computational tools for automatic classification of proteins have made it possible to carry out large-scale proteomic analyses. Biological analysis and interpretation of sets of proteins is a time-consuming undertaking carried out manually by experts. We have developed PANDORA (Protein ANnotation Diagram ORiented Analysis), a web-based tool that provides an automatic representation of the biological knowledge associated with any set of proteins. PANDORA uses a unique approach of keyword-based graphical analysis that focuses on detecting subsets of proteins that share unique biological properties and the intersections of such sets. PANDORA currently supports SwissProt keywords, NCBI Taxonomy, InterPro entries and the hierarchical classification terms from ENZYME, SCOP and GO databases. The integrated study of several annotation sources simultaneously allows a representation of biological relations of structure, function, cellular location, taxonomy, domains and motifs. PANDORA is also integrated into the ProtoNet system, thus allowing testing thousands of automatically generated clusters. We illustrate how PANDORA enhances the biological understanding of large, non-uniform sets of proteins originating from experimental and computational sources, without the need for prior biological knowledge on individual proteins.
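    The keyword-intersection idea at PANDORA's core can be sketched with plain sets; all protein identifiers and keywords below are hypothetical.

    ```python
    from itertools import combinations

    # Hypothetical annotation sources: keyword -> set of proteins carrying it
    annotations = {
        "Kinase": {"P1", "P2", "P3"},
        "Membrane": {"P2", "P3", "P4"},
        "GO:0006468": {"P1", "P2"},
    }

    def shared_subsets(annotations):
        """For every pair of keywords, return the subset of proteins
        sharing both biological properties (non-empty intersections only)."""
        result = {}
        for (k1, s1), (k2, s2) in combinations(annotations.items(), 2):
            common = s1 & s2
            if common:
                result[(k1, k2)] = common
        return result
    ```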

  14. Incorporating Modeling and Simulations in Undergraduate Biophysical Chemistry Course to Promote Understanding of Structure-Dynamics-Function Relationships in Proteins

    ERIC Educational Resources Information Center

    Hati, Sanchita; Bhattacharyya, Sudeep

    2016-01-01

    A project-based biophysical chemistry laboratory course, which is offered to the biochemistry and molecular biology majors in their senior year, is described. In this course, the classroom study of the structure-function of biomolecules is integrated with the discovery-guided laboratory study of these molecules using computer modeling and…

  15. On the role of the plasmodial cytoskeleton in facilitating intelligent behavior in slime mold Physarum polycephalum

    PubMed Central

    Mayne, Richard; Adamatzky, Andrew; Jones, Jeff

    2015-01-01

    The plasmodium of slime mold Physarum polycephalum behaves as an amorphous reaction-diffusion computing substrate and is capable of apparently ‘intelligent’ behavior. But how does intelligence emerge in an acellular organism? Through a range of laboratory experiments, we visualize the plasmodial cytoskeleton—a ubiquitous cellular protein scaffold whose functions are manifold and essential to life—and discuss its putative role as a network for transducing, transmitting and structuring data streams within the plasmodium. Through a range of computer modeling techniques, we demonstrate how emergent behavior, and hence computational intelligence, may occur in cytoskeletal communications networks. Specifically, we model the topology of both the actin and tubulin cytoskeletal networks and discuss how computation may occur therein. Furthermore, we present bespoke cellular automata and particle swarm models for the computational process within the cytoskeleton and observe the incidence of emergent patterns in both. Our work grants unique insight into the origins of natural intelligence; the results presented here are therefore readily transferable to the fields of natural computation, cell biology and biomedical science. We conclude by discussing how our results may alter our biological, computational and philosophical understanding of intelligence and consciousness. PMID:26478782
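    How simple local rules in a lattice of interacting units can yield complex global patterns — the premise behind the cellular-automaton models used here — can be illustrated with an elementary one-dimensional automaton (a generic example, not the authors' bespoke models).

    ```python
    def elementary_ca(rule, width=31, steps=15):
        """Evolve a 1-D binary cellular automaton (e.g. rule 110) from a
        single active cell, with periodic boundaries."""
        table = [(rule >> i) & 1 for i in range(8)]
        row = [0] * width
        row[width // 2] = 1
        history = [row]
        for _ in range(steps):
            # each cell's next state depends only on its 3-cell neighbourhood
            row = [table[(row[(i - 1) % width] << 2) |
                         (row[i] << 1) |
                         row[(i + 1) % width]]
                   for i in range(width)]
            history.append(row)
        return history
    ```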

  16. On the role of the plasmodial cytoskeleton in facilitating intelligent behavior in slime mold Physarum polycephalum.

    PubMed

    Mayne, Richard; Adamatzky, Andrew; Jones, Jeff

    2015-01-01

    The plasmodium of slime mold Physarum polycephalum behaves as an amorphous reaction-diffusion computing substrate and is capable of apparently 'intelligent' behavior. But how does intelligence emerge in an acellular organism? Through a range of laboratory experiments, we visualize the plasmodial cytoskeleton-a ubiquitous cellular protein scaffold whose functions are manifold and essential to life-and discuss its putative role as a network for transducing, transmitting and structuring data streams within the plasmodium. Through a range of computer modeling techniques, we demonstrate how emergent behavior, and hence computational intelligence, may occur in cytoskeletal communications networks. Specifically, we model the topology of both the actin and tubulin cytoskeletal networks and discuss how computation may occur therein. Furthermore, we present bespoke cellular automata and particle swarm models for the computational process within the cytoskeleton and observe the incidence of emergent patterns in both. Our work grants unique insight into the origins of natural intelligence; the results presented here are therefore readily transferable to the fields of natural computation, cell biology and biomedical science. We conclude by discussing how our results may alter our biological, computational and philosophical understanding of intelligence and consciousness.

  17. Integrated Computational Analysis of Genes Associated with Human Hereditary Insensitivity to Pain. A Drug Repurposing Perspective

    PubMed Central

    Lötsch, Jörn; Lippmann, Catharina; Kringel, Dario; Ultsch, Alfred

    2017-01-01

    Genes causally involved in human insensitivity to pain provide a unique molecular source of studying the pathophysiology of pain and the development of novel analgesic drugs. The increasing availability of “big data” enables novel research approaches to chronic pain while also requiring novel techniques for data mining and knowledge discovery. We used machine learning to combine the knowledge about n = 20 genes causally involved in human hereditary insensitivity to pain with the knowledge about the functions of thousands of genes. An integrated computational analysis proposed that among the functions of this set of genes, the processes related to nervous system development and to ceramide and sphingosine signaling pathways are particularly important. This is in line with earlier suggestions to use these pathways as therapeutic target in pain. Following identification of the biological processes characterizing hereditary insensitivity to pain, the biological processes were used for a similarity analysis with the functions of n = 4,834 database-queried drugs. Using emergent self-organizing maps, a cluster of n = 22 drugs was identified sharing important functional features with hereditary insensitivity to pain. Several members of this cluster had been implicated in pain in preclinical experiments. Thus, the present concept of machine-learned knowledge discovery for pain research provides biologically plausible results and seems to be suitable for drug discovery by identifying a narrow choice of repurposing candidates, demonstrating that contemporary machine-learned methods offer innovative approaches to knowledge discovery from available evidence. PMID:28848388
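    The paper's similarity analysis used emergent self-organizing maps; a far simpler stand-in conveys the idea of ranking drugs by functional overlap with the processes of pain-insensitivity genes. All process and drug names below are hypothetical.

    ```python
    def jaccard(a, b):
        """Jaccard similarity between two sets of biological processes."""
        return len(a & b) / len(a | b) if a | b else 0.0

    def rank_drugs(pain_processes, drug_processes):
        """Rank drugs by functional overlap with the pain-gene processes,
        most similar (best repurposing candidate) first."""
        return sorted(drug_processes,
                      key=lambda d: jaccard(pain_processes, drug_processes[d]),
                      reverse=True)
    ```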

  18. Deconstructing the core dynamics from a complex time-lagged regulatory biological circuit.

    PubMed

    Eriksson, O; Brinne, B; Zhou, Y; Björkegren, J; Tegnér, J

    2009-03-01

    Complex regulatory dynamics is ubiquitous in molecular networks composed of genes and proteins. Recent progress in computational biology and its application to molecular data generate a growing number of complex networks. Yet, it has been difficult to understand the governing principles of these networks beyond graphical analysis or extensive numerical simulations. Here the authors exploit several simplifying biological circumstances, which enable them to directly detect the underlying dynamical regularities driving periodic oscillations in a dynamical nonlinear computational model of a protein-protein network. System analysis is performed using the cell cycle, a mathematically well-described complex regulatory circuit driven by external signals. By introducing an explicit time delay and using a 'tearing-and-zooming' approach the authors reduce the system to a piecewise linear system with two variables that capture the dynamics of this complex network. A key step in the analysis is the identification of functional subsystems by identifying the relations between state-variables within the model. These functional subsystems are referred to as dynamical modules operating as sensitive switches in the original complex model. By using reduced mathematical representations of the subsystems the authors derive explicit conditions on how the cell cycle dynamics depends on system parameters, and can, for the first time, analyse and prove global conditions for system stability. The approach, which includes utilising biological simplifying conditions, identification of dynamical modules and mathematical reduction of the model complexity, may be applicable to other well-characterised biological regulatory circuits. [Includes supplementary material].
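    The flavor of such a reduced system — a delayed, switch-like module driving slow variables — can be sketched as a two-variable piecewise-linear iteration. This is illustrative only; the paper's actual cell-cycle model and parameters are not reproduced.

    ```python
    def simulate_pwl(steps=200, dt=0.05, delay=20, theta=0.5):
        """Two-variable piecewise-linear sketch: x is driven by a switch
        that reads the *delayed* value of y; y relaxes toward x."""
        xs, ys = [0.0], [0.0]
        for _ in range(steps):
            y_delayed = ys[max(0, len(ys) - 1 - delay)]
            drive = 1.0 if y_delayed < theta else 0.0   # switch-like module
            x = xs[-1] + dt * (drive - xs[-1])           # fast variable
            y = ys[-1] + dt * (xs[-1] - ys[-1])          # slow variable
            xs.append(x)
            ys.append(y)
        return xs, ys
    ```

    Each linear regime can be solved in closed form, which is what makes global stability conditions tractable in such reductions.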

  19. Computing the origin and evolution of the ribosome from its structure — Uncovering processes of macromolecular accretion benefiting synthetic biology

    PubMed Central

    Caetano-Anollés, Gustavo; Caetano-Anollés, Derek

    2015-01-01

    Accretion occurs pervasively in nature at widely different timeframes. The process also manifests in the evolution of macromolecules. Here we review recent computational and structural biology studies of evolutionary accretion that make use of the ideographic (historical, retrodictive) and nomothetic (universal, predictive) scientific frameworks. Computational studies uncover explicit timelines of accretion of structural parts in molecular repertoires and molecules. Phylogenetic trees of protein structural domains and proteomes and their molecular functions were built from a genomic census of millions of encoded proteins and associated terminal Gene Ontology terms. Trees reveal a ‘metabolic-first’ origin of proteins, the late development of translation, and a patchwork distribution of proteins in biological networks mediated by molecular recruitment. Similarly, the natural history of ancient RNA molecules inferred from trees of molecular substructures built from a census of molecular features shows patchwork-like accretion patterns. Ideographic analyses of ribosomal history uncover the early appearance of structures supporting mRNA decoding and tRNA translocation, the coevolution of ribosomal proteins and RNA, and a first evolutionary transition that brings ribosomal subunits together into a processive protein biosynthetic complex. Nomothetic structural biology studies of tertiary interactions and ancient insertions in rRNA complement these findings, once concentric layering assumptions are removed. Patterns of coaxial helical stacking reveal a frustrated dynamics of outward and inward ribosomal growth possibly mediated by structural grafting. The early rise of the ribosomal ‘turnstile’ suggests an evolutionary transition in natural biological computation. Results make explicit the need to understand processes of molecular growth and information transfer of macromolecules. PMID:27096056

  20. Cancer systems biology in the genome sequencing era: part 1, dissecting and modeling of tumor clones and their networks.

    PubMed

    Wang, Edwin; Zou, Jinfeng; Zaman, Naif; Beitel, Lenore K; Trifiro, Mark; Paliouras, Miltiadis

    2013-08-01

    Recent tumor genome sequencing confirmed that one tumor often consists of multiple cell subpopulations (clones) which bear different, but related, genetic profiles such as mutation and copy number variation profiles. Thus far, one tumor has been viewed as a whole entity in cancer functional studies. With the advances of genome sequencing and computational analysis, we are able to quantify and computationally dissect clones from tumors, and then conduct clone-based analysis. Emerging technologies such as single-cell genome sequencing and RNA-Seq could profile tumor clones. Thus, we should reconsider how to conduct cancer systems biology studies in the genome sequencing era. We will outline new directions for conducting cancer systems biology by considering that genome sequencing technology can be used for dissecting, quantifying and genetically characterizing clones from tumors. Topics discussed in Part 1 of this review include computational quantification of tumor subpopulations; clone-based network modeling; cancer hallmark-based networks and their high-order rewiring principles; and the principles of cell survival networks of fast-growing clones. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  1. Protein Dynamics from NMR and Computer Simulation

    NASA Astrophysics Data System (ADS)

    Wu, Qiong; Kravchenko, Olga; Kemple, Marvin; Likic, Vladimir; Klimtchuk, Elena; Prendergast, Franklyn

    2002-03-01

    Proteins exhibit internal motions from the millisecond to sub-nanosecond time scale. The challenge is to relate these internal motions to biological function. A strategy to address this aim is to apply a combination of several techniques including high-resolution NMR, computer simulation of molecular dynamics (MD), molecular graphics, and finally molecular biology, the latter to generate appropriate samples. Two difficulties that arise are: (1) the time scale which is most directly biologically relevant (ms to μs) is not readily accessible by these techniques and (2) the techniques focus on local and not collective motions. We will outline methods using ^13C-NMR to help alleviate the second problem, as applied to intestinal fatty acid binding protein, a relatively small intracellular protein believed to be involved in fatty acid transport and metabolism. This work is supported in part by PHS Grant GM34847 (FGP) and by a fellowship from the American Heart Association (QW).

  2. Computational modeling of diffusion in the cerebellum.

    PubMed

    Marinov, Toma M; Santamaria, Fidel

    2014-01-01

    Diffusion is a major transport mechanism in living organisms. In the cerebellum, diffusion is responsible for the propagation of molecular signaling involved in synaptic plasticity and metabolism, both intracellularly and extracellularly. In this chapter, we present an overview of the cerebellar structure and function. We then discuss the types of diffusion processes present in the cerebellum and their biological importance. We particularly emphasize the differences between extracellular and intracellular diffusion and the presence of tortuosity and anomalous diffusion in different parts of the cerebellar cortex. We provide a mathematical introduction to diffusion and a conceptual overview of various computational modeling techniques. We discuss their scope and their limits of application. Although our focus is the cerebellum, we have aimed to present the biological and mathematical foundations as generally as possible, so that they are applicable to any other area of biology in which diffusion is of importance. © 2014 Elsevier Inc. All rights reserved.
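    A minimal computational model of the kind discussed — an explicit finite-difference solution of the one-dimensional diffusion equation dc/dt = D d²c/dx², with reflecting (zero-flux) boundaries — can be written as:

    ```python
    def diffuse_1d(c, D, dx, dt, steps):
        """Explicit finite-difference diffusion; conserves total mass
        under reflecting boundaries."""
        c = list(c)
        r = D * dt / dx ** 2
        assert r <= 0.5, "stability condition for the explicit scheme"
        for _ in range(steps):
            new = c[:]
            for i in range(len(c)):
                left = c[i - 1] if i > 0 else c[i]
                right = c[i + 1] if i < len(c) - 1 else c[i]
                new[i] = c[i] + r * (left - 2 * c[i] + right)
            c = new
        return c
    ```

    Anomalous or tortuous diffusion, as discussed in the chapter, requires modifications to this idealized scheme (e.g. effective diffusion coefficients or fractional-time updates).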

  3. Focus issue: series on computational and systems biology.

    PubMed

    Gough, Nancy R

    2011-09-06

    The application of computational biology and systems biology is yielding quantitative insight into cellular regulatory phenomena. For the month of September, Science Signaling highlights research featuring computational approaches to understanding cell signaling and investigation of signaling networks, a series of Teaching Resources from a course in systems biology, and various other articles and resources relevant to the application of computational biology and systems biology to the study of signal transduction.

  4. Think globally and solve locally: secondary memory-based network learning for automated multi-species function prediction

    PubMed Central

    2014-01-01

    Background Network-based learning algorithms for automated function prediction (AFP) are negatively affected by the limited coverage of experimental data and limited a priori known functional annotations. As a consequence, their application to model organisms is often restricted to well characterized biological processes and pathways, and their effectiveness with poorly annotated species is relatively limited. A possible solution to this problem might consist of the construction of big networks including multiple species, but this in turn poses challenging computational problems, due to the scalability limitations of existing algorithms and the main memory requirements induced by the construction of big networks. Distributed computation or the use of big computers could in principle address these issues, but each raises further algorithmic problems and requires resources beyond simple off-the-shelf computers. Results We propose a novel framework for scalable network-based learning of multi-species protein functions based on both a local implementation of existing algorithms and the adoption of innovative technologies: we solve “locally” the AFP problem, by designing “vertex-centric” implementations of network-based algorithms, but we do not give up thinking “globally” by exploiting the overall topology of the network. This is made possible by the adoption of secondary memory-based technologies that allow the efficient use of the large memory available on disks, thus overcoming the main memory limitations of modern off-the-shelf computers. This approach has been applied to the analysis of a large multi-species network including more than 300 species of bacteria and to a network with more than 200,000 proteins belonging to 13 Eukaryotic species. To our knowledge this is the first work where secondary-memory based network analysis has been applied to multi-species function prediction using biological networks with hundreds of thousands of proteins. 
Conclusions The combination of these algorithmic and technological approaches makes feasible the analysis of large multi-species networks using ordinary computers with limited speed and primary memory, and in perspective could enable the analysis of huge networks (e.g. the whole proteomes available in SwissProt), using well-equipped stand-alone machines. PMID:24843788
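    The "vertex-centric" pattern — each vertex updates its state from its neighbours only, so the computation never needs a global view of the network in memory at once — can be sketched with a toy majority-vote label propagation. The paper's actual algorithms and storage backend are more elaborate.

    ```python
    def vertex_centric_afp(adj, labels, rounds=10):
        """Each unannotated vertex repeatedly adopts the majority function
        label among its annotated neighbours (toy AFP by propagation)."""
        labels = dict(labels)
        for _ in range(rounds):
            updates = {}
            for v, neighbours in adj.items():
                if v in labels:
                    continue
                votes = {}
                for u in neighbours:
                    if u in labels:
                        votes[labels[u]] = votes.get(labels[u], 0) + 1
                if votes:
                    updates[v] = max(votes, key=votes.get)
            if not updates:
                break
            labels.update(updates)
        return labels
    ```

    In a secondary-memory implementation, each vertex's adjacency list would be streamed from disk rather than held in a dictionary.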

  5. Well-characterized sequence features of eukaryote genomes and implications for ab initio gene prediction.

    PubMed

    Huang, Ying; Chen, Shi-Yi; Deng, Feilong

    2016-01-01

    In silico analysis of DNA sequences is an important area of computational biology in the post-genomic era. Over the past two decades, computational approaches for ab initio prediction of gene structure from genome sequence alone have largely facilitated our understanding of a variety of biological questions. Although the computational prediction of protein-coding genes is already well-established, we still face challenges in robustly finding non-coding RNA genes, such as miRNA and lncRNA. Two main aspects of ab initio gene prediction are the computed values used to describe sequence features and the algorithms used to train the discriminant function; different combinations of these are employed in various bioinformatic tools. Herein, we briefly review these well-characterized sequence features in eukaryote genomes and their applications to ab initio gene prediction. The main purpose of this article is to provide an overview to beginners who aim to develop the related bioinformatic tools.
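    The simplest ab initio signal for protein-coding genes is the open reading frame; a toy forward-strand ORF scanner illustrates the kind of sequence feature on which real gene finders build:

    ```python
    def find_orfs(seq, min_len=6):
        """Naive ORF scan: ATG to an in-frame stop codon in the three
        forward frames (a toy stand-in for real gene finders)."""
        stops = {"TAA", "TAG", "TGA"}
        orfs = []
        for frame in range(3):
            i = frame
            while i + 3 <= len(seq):
                if seq[i:i + 3] == "ATG":
                    j = i + 3
                    while j + 3 <= len(seq):
                        if seq[j:j + 3] in stops:
                            if j + 3 - i >= min_len:
                                orfs.append(seq[i:j + 3])
                            break
                        j += 3
                i += 3
        return orfs
    ```

    Real predictors combine many such features (codon usage, splice signals, conservation) in a trained discriminant function rather than a hard-coded rule.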

  6. A parallel approach of COFFEE objective function to multiple sequence alignment

    NASA Astrophysics Data System (ADS)

    Zafalon, G. F. D.; Visotaky, J. M. V.; Amorim, A. R.; Valêncio, C. R.; Neves, L. A.; de Souza, R. C. G.; Machado, J. M.

    2015-09-01

    Computational tools to assist genomic analyses have become ever more necessary due to the rapidly increasing amount of available data. Given the high computational cost of deterministic algorithms for sequence alignment, many works concentrate on developing heuristic approaches to multiple sequence alignment. However, the selection of an approach that offers solutions with good biological significance and feasible execution time is a great challenge. Thus, this work presents the parallelization of the processing steps of the MSA-GA tool, using the multithreading paradigm in the execution of the COFFEE objective function. The standard objective function implemented in the tool is the Weighted Sum of Pairs (WSP), which produces some distortions in the final alignments when sequence sets with low similarity are aligned. In previous studies we implemented the COFFEE objective function in the tool to smooth these distortions. Although the COFFEE objective function increases execution time, it presents steps that can be executed in parallel. With the improvements implemented in this work, the new approach is 24% faster than the sequential approach with COFFEE. Moreover, the multithreaded COFFEE approach is more efficient than WSP: besides being slightly faster, it yields better biological results.
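    The parallelization opportunity comes from the fact that such an objective decomposes into independent pairwise terms, which can be scored concurrently. The sketch below uses a toy column-match score in place of COFFEE's library-based consistency score.

    ```python
    from concurrent.futures import ThreadPoolExecutor
    from itertools import combinations

    def pair_score(a, b):
        """Toy pairwise score: fraction of matching, non-gap columns
        (a stand-in for COFFEE's consistency with a pairwise library)."""
        return sum(x == y != "-" for x, y in zip(a, b)) / len(a)

    def objective(alignment, workers=4):
        """Sum pairwise scores over all sequence pairs, scored in parallel."""
        pairs = list(combinations(alignment, 2))
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return sum(pool.map(lambda p: pair_score(*p), pairs))
    ```

    With n sequences there are n(n-1)/2 independent terms, so the work divides naturally across threads.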

  7. Algorithms in nature: the convergence of systems biology and computational thinking

    PubMed Central

    Navlakha, Saket; Bar-Joseph, Ziv

    2011-01-01

    Computer science and biology have enjoyed a long and fruitful relationship for decades. Biologists rely on computational methods to analyze and integrate large data sets, while several computational methods were inspired by the high-level design principles of biological systems. Recently, these two directions have been converging. In this review, we argue that thinking computationally about biological processes may lead to more accurate models, which in turn can be used to improve the design of algorithms. We discuss the similar mechanisms and requirements shared by computational and biological processes and then present several recent studies that apply this joint analysis strategy to problems related to coordination, network analysis, and tracking and vision. We also discuss additional biological processes that can be studied in a similar manner and link them to potential computational problems. With the rapid accumulation of data detailing the inner workings of biological systems, we expect this direction of coupling biological and computational studies to greatly expand in the future. PMID:22068329

  8. Atomic switch networks-nanoarchitectonic design of a complex system for natural computing.

    PubMed

    Demis, E C; Aguilera, R; Sillin, H O; Scharnhorst, K; Sandouk, E J; Aono, M; Stieg, A Z; Gimzewski, J K

    2015-05-22

    Self-organized complex systems are ubiquitous in nature, and the structural complexity of these natural systems can be used as a model to design new classes of functional nanotechnology based on highly interconnected networks of interacting units. Conventional fabrication methods for electronic computing devices are subject to known scaling limits, confining the diversity of possible architectures. This work explores methods of fabricating a self-organized complex device known as an atomic switch network and discusses its potential utility in computing. Through a merger of top-down and bottom-up techniques guided by mathematical and nanoarchitectonic design principles, we have produced functional devices comprising nanoscale elements whose intrinsic nonlinear dynamics and memorization capabilities produce robust patterns of distributed activity and a capacity for nonlinear transformation of input signals when configured in the appropriate network architecture. Their operational characteristics represent a unique potential for hardware implementation of natural computation, specifically in the area of reservoir computing-a burgeoning field that investigates the computational aptitude of complex biologically inspired systems.
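    In reservoir computing, a fixed random recurrent network — here simulated in software rather than realized as an atomic switch network — nonlinearly transforms an input signal, and only a linear readout of the states is trained. A minimal sketch of the state update:

    ```python
    import math
    import random

    def run_reservoir(inputs, size=20, seed=0):
        """Drive a fixed random recurrent network with an input signal and
        return the reservoir states; only a linear readout would be trained."""
        rng = random.Random(seed)
        W_in = [rng.uniform(-0.5, 0.5) for _ in range(size)]
        W = [[rng.uniform(-0.1, 0.1) for _ in range(size)] for _ in range(size)]
        x = [0.0] * size
        states = []
        for u in inputs:
            # x <- tanh(W_in * u + W @ x): nonlinear, history-dependent state
            x = [math.tanh(W_in[i] * u +
                           sum(W[i][j] * x[j] for j in range(size)))
                 for i in range(size)]
            states.append(x)
        return states
    ```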

  9. Interacting domain-specific languages with biological problem solving environments

    NASA Astrophysics Data System (ADS)

    Cickovski, Trevor M.

    Iteratively developing a biological model and verifying results with lab observations has become standard practice in computational biology. This process is currently facilitated by biological Problem Solving Environments (PSEs), multi-tiered and modular software frameworks which traditionally consist of two layers: a computational layer written in a high level language using design patterns, and a user interface layer which hides its details. Although PSEs have proven effective, they still enforce some communication overhead between biologists refining their models through repeated comparison with experimental observations in vitro or in vivo, and programmers actually implementing model extensions and modifications within the computational layer. I illustrate the use of biological Domain-Specific Languages (DSLs) as a middle-level PSE tier to ameliorate this problem by providing experimentalists with the ability to iteratively test and develop their models with a higher degree of expressive power than a graphical interface, without requiring general-purpose programming knowledge. I develop two radically different biological DSLs: XML-based BIOLOGO will model biological morphogenesis using a cell-centered stochastic cellular automaton and translate into C++ modules for an object-oriented PSE COMPUCELL3D, and MDLab will provide a set of high-level Python libraries for running molecular dynamics simulations, using wrapped functionality from the C++ PSE PROTOMOL. I describe each language in detail, including its role within the larger PSE and its expressibility in terms of representable phenomena, and a discussion of observations from users of the languages. Moreover I will use these studies to draw general conclusions about biological DSL development, including dependencies upon the goals of the corresponding PSE, strategies, and tradeoffs.

  10. Computational Lipidomics and Lipid Bioinformatics: Filling In the Blanks.

    PubMed

    Pauling, Josch; Klipp, Edda

    2016-12-22

    Lipids are highly diverse metabolites of pronounced importance in health and disease. While metabolomics is a broad field under the omics umbrella that may also relate to lipids, lipidomics is an emerging field which specializes in the identification, quantification and functional interpretation of complex lipidomes. Today, it is possible to identify and distinguish lipids in a high-resolution, high-throughput manner and simultaneously with a lot of structural detail. However, doing so may produce thousands of mass spectra in a single experiment, which has created a high demand for specialized computational support to analyze these spectral libraries. The computational biology and bioinformatics community has so far established methodology in genomics, transcriptomics and proteomics, but there are many (combinatorial) challenges when it comes to the structural diversity of lipids and their identification, quantification and interpretation. This review gives an overview and outlook on lipidomics research and illustrates ongoing computational and bioinformatics efforts. These efforts are important and necessary steps to advance the lipidomics field alongside the analytical, biochemical, biomedical and biological communities and to close the gap in available computational methodology between lipidomics and other omics sub-branches.

  11. Identification of functional modules using network topology and high-throughput data.

    PubMed

    Ulitsky, Igor; Shamir, Ron

    2007-01-26

    With the advent of systems biology, biological knowledge is often represented today by networks. These include regulatory and metabolic networks, protein-protein interaction networks, and many others. At the same time, high-throughput genomics and proteomics techniques generate very large data sets, which require sophisticated computational analysis. Usually, separate and different analysis methodologies are applied to each of the two data types. An integrated investigation of network and high-throughput information together can improve the quality of the analysis by accounting simultaneously for topological network properties alongside intrinsic features of the high-throughput data. We describe a novel algorithmic framework for this challenge. We first transform the high-throughput data into similarity values (e.g., by computing pairwise similarity of gene expression patterns from microarray data). Then, given a network of genes or proteins and similarity values between some of them, we seek connected sub-networks (or modules) that manifest high similarity. We develop algorithms for this problem and evaluate their performance on the osmotic shock response network in S. cerevisiae and on the human cell cycle network. We demonstrate that focused, biologically meaningful and relevant functional modules are obtained. In comparison with extant algorithms, our approach has higher sensitivity and higher specificity. We have demonstrated that our method can accurately identify functional modules. Hence, it carries the promise to be highly useful in analysis of high throughput data.
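    The core task — finding connected sub-networks whose members are highly similar — can be illustrated with a greedy seed-and-grow sketch. The authors' algorithms are more sophisticated; the graph, similarity values and threshold below are illustrative.

    ```python
    def grow_module(adj, sim, seed, threshold=0.5):
        """Greedily grow a connected module from a seed: repeatedly add the
        frontier node most similar to the seed while similarity stays
        above a threshold."""
        def score(v):
            return sim.get((seed, v), sim.get((v, seed), 0.0))

        module = {seed}
        frontier = set(adj[seed])
        while frontier:
            best = max(frontier, key=score)
            if score(best) < threshold:
                break
            frontier.discard(best)
            module.add(best)
            # extend the frontier so the module stays connected
            frontier |= set(adj[best]) - module
        return module
    ```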

  12. High Precision Prediction of Functional Sites in Protein Structures

    PubMed Central

    Buturovic, Ljubomir; Wong, Mike; Tang, Grace W.; Altman, Russ B.; Petkovic, Dragutin

    2014-01-01

    We address the problem of assigning biological function to solved protein structures. Computational tools play a critical role in identifying potential active sites and informing screening decisions for further lab analysis. A critical parameter in the practical application of computational methods is the precision, or positive predictive value. Precision measures the level of confidence the user should have in a particular computed functional assignment. Low precision annotations lead to futile laboratory investigations and waste scarce research resources. In this paper we describe an advanced version of the protein function annotation system FEATURE, which achieved 99% precision and average recall of 95% across 20 representative functional sites. The system uses a Support Vector Machine classifier operating on the microenvironment of physicochemical features around an amino acid. We also compared performance of our method with state-of-the-art sequence-level annotator Pfam in terms of precision, recall and localization. To our knowledge, no other functional site annotator has been rigorously evaluated against these key criteria. The software and predictive models are incorporated into the WebFEATURE service at http://feature.stanford.edu/wf4.0-beta. PMID:24632601
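Precision (positive predictive value) and recall, the key evaluation criteria above, are straightforward to compute once predicted and experimentally confirmed sites are represented as sets; a minimal sketch, where the set encoding is an assumption:

```python
def precision_recall(predicted, actual):
    """Precision (positive predictive value) and recall for predicted
    functional sites, both given as sets of site identifiers."""
    true_pos = len(predicted & actual)
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(actual) if actual else 0.0
    return precision, recall
```

High precision is what limits futile laboratory follow-up: of the sites flagged, precision is the fraction actually functional, while recall is the fraction of true sites recovered.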

  13. CFEL-ASG Software Suite (CASS): usage for free-electron laser experiments with biological focus.

    PubMed

    Foucar, Lutz

    2016-08-01

    CASS [Foucar et al. (2012). Comput. Phys. Commun. 183, 2207-2213] is a well-established software suite for experiments performed at any sort of light source. It is based on a modular design and can easily be adapted for use at free-electron laser (FEL) experiments that have a biological focus. This article will list all the additional functionality and enhancements of CASS for use with FEL experiments that have been introduced since the first publication. The article will also highlight some advanced experiments with biological aspects that have been performed.

  14. Synthesizing Biomolecule-based Boolean Logic Gates

    PubMed Central

    Miyamoto, Takafumi; Razavi, Shiva; DeRose, Robert; Inoue, Takanari

    2012-01-01

    One fascinating recent avenue of study in the field of synthetic biology is the creation of biomolecule-based computers. The main components of a computing device consist of an arithmetic logic unit, the control unit, memory, and the input and output devices. Boolean logic gates are at the core of the operational machinery of these parts, hence to make biocomputers a reality, biomolecular logic gates become a necessity. Indeed, with the advent of more sophisticated biological tools, both nucleic acid- and protein-based logic systems have been generated. These devices function in the context of either test tubes or living cells and yield highly specific outputs given a set of inputs. In this review, we discuss various types of biomolecular logic gates that have been synthesized, with particular emphasis on recent developments that promise increased complexity of logic gate circuitry, improved computational speed, and potential clinical applications. PMID:23526588

  15. Synthesizing biomolecule-based Boolean logic gates.

    PubMed

    Miyamoto, Takafumi; Razavi, Shiva; DeRose, Robert; Inoue, Takanari

    2013-02-15

    One fascinating recent avenue of study in the field of synthetic biology is the creation of biomolecule-based computers. The main components of a computing device consist of an arithmetic logic unit, the control unit, memory, and the input and output devices. Boolean logic gates are at the core of the operational machinery of these parts, and hence to make biocomputers a reality, biomolecular logic gates become a necessity. Indeed, with the advent of more sophisticated biological tools, both nucleic acid- and protein-based logic systems have been generated. These devices function in the context of either test tubes or living cells and yield highly specific outputs given a set of inputs. In this review, we discuss various types of biomolecular logic gates that have been synthesized, with particular emphasis on recent developments that promise increased complexity of logic gate circuitry, improved computational speed, and potential clinical applications.
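An idealized transcriptional two-input AND gate of the kind reviewed here is often modelled as the product of two Hill activation functions, so that output is high only when both inputs are high. A hedged sketch, with illustrative rather than measured parameter values:

```python
def hill(x, k=1.0, n=4):
    """Hill activation: fractional promoter activity at input level x,
    with half-maximal activation at k and cooperativity n."""
    return x ** n / (k ** n + x ** n)

def and_gate_output(a, b):
    """Idealized two-input transcriptional AND gate: the product of two Hill
    activations is high only when both inputs exceed their thresholds."""
    return hill(a) * hill(b)
```

Evaluating the gate over its truth table shows the AND behaviour: only when both inputs are well above the threshold does the output approach 1, while a single high input leaks only a small signal.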

  16. A composite computational model of liver glucose homeostasis. I. Building the composite model.

    PubMed

    Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A

    2012-04-07

    A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.
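The composition idea, separate submodels coupled through shared variables, can be illustrated with a toy two-variable sketch of glucose buffering into glycogen. This is purely illustrative and bears no relation to the structure or parameterization of the published seven-component model.

```python
def simulate_buffering(steps, dt=0.01, uptake=0.5, setpoint=1.0):
    """Toy composite model of glucose buffering: a 'blood' variable and a
    'liver glycogen' variable act as two one-line submodels coupled only
    through a shared storage flux."""
    glucose, glycogen = 2.0, 0.0               # blood glucose starts above setpoint
    for _ in range(steps):
        flux = uptake * (glucose - setpoint)   # net storage; negative = release
        if flux < 0 and glycogen <= 0:
            flux = 0.0                         # cannot release absent glycogen
        glucose -= dt * flux                   # blood submodel
        glycogen += dt * flux                  # liver submodel
    return glucose, glycogen
```

Because each submodel only sees the shared flux, either side could be swapped for a more detailed model without touching the other, which is the engineering advantage of the composite approach.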

  17. Computer support for physiological cell modelling using an ontology on cell physiology.

    PubMed

    Takao, Shimayoshi; Kazuhiro, Komurasaki; Akira, Amano; Takeshi, Iwashita; Masanori, Kanazawa; Tetsuya, Matsuda

    2006-01-01

    The development of electrophysiological whole cell models to support the understanding of biological mechanisms is increasing rapidly. Due to the complexity of biological systems, comprehensive cell models, which are composed of many imported sub-models of functional elements, can get quite complicated as well, making computer modification difficult. Here, we propose a computer support to enhance structural changes of cell models, employing the markup languages CellML and our original PMSML (physiological model structure markup language), in addition to a new ontology for cell physiological modelling. In particular, a method to make references from CellML files to the ontology and a method to assist manipulation of model structures using markup languages together with the ontology are reported. Using these methods three software utilities, including a graphical model editor, are implemented. Experimental results proved that these methods are effective for the modification of electrophysiological models.

  18. Impaired associative learning in schizophrenia: behavioral and computational studies

    PubMed Central

    Diwadkar, Vaibhav A.; Flaugher, Brad; Jones, Trevor; Zalányi, László; Ujfalussy, Balázs; Keshavan, Matcheri S.

    2008-01-01

    Associative learning is a central building block of human cognition and in large part depends on mechanisms of synaptic plasticity, memory capacity and fronto–hippocampal interactions. A disorder like schizophrenia is thought to be characterized by altered plasticity, and impaired frontal and hippocampal function. Understanding the expression of this dysfunction through appropriate experimental studies, and understanding the processes that may give rise to impaired behavior through biologically plausible computational models will help clarify the nature of these deficits. We present a preliminary computational model designed to capture learning dynamics in healthy control and schizophrenia subjects. Experimental data was collected on a spatial-object paired-associate learning task. The task evinces classic patterns of negatively accelerated learning in both healthy control subjects and patients, with patients demonstrating lower rates of learning than controls. Our rudimentary computational model of the task was based on biologically plausible assumptions, including the separation of dorsal/spatial and ventral/object visual streams, implementation of rules of learning, the explicit parameterization of learning rates (a plausible surrogate for synaptic plasticity), and learning capacity (a plausible surrogate for memory capacity). Reductions in learning dynamics in schizophrenia were well-modeled by reductions in learning rate and learning capacity. The synergy between experimental research and a detailed computational model of performance provides a framework within which to infer plausible biological bases of impaired learning dynamics in schizophrenia. PMID:19003486
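The negatively accelerated learning described above is commonly captured by an exponential approach to an asymptote, with the rate standing in for synaptic plasticity and the asymptote for memory capacity. A sketch under those assumptions; the functional form is a standard choice, not necessarily the authors' exact model.

```python
import math

def learning_curve(trials, rate, capacity):
    """Expected proportion correct per trial under negatively accelerated
    learning: capacity * (1 - exp(-rate * trial))."""
    return [capacity * (1.0 - math.exp(-rate * t)) for t in range(1, trials + 1)]
```

Lowering both parameters, as the modelling of the patient group suggests, yields a curve that rises more slowly and plateaus lower than the control curve on every trial.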

  19. Structural biology computing: Lessons for the biomedical research sciences.

    PubMed

    Morin, Andrew; Sliz, Piotr

    2013-11-01

    The field of structural biology, whose aim is to elucidate the molecular and atomic structures of biological macromolecules, has long been at the forefront of biomedical sciences in adopting and developing computational research methods. Operating at the intersection between biophysics, biochemistry, and molecular biology, structural biology's growth into a foundational framework on which many concepts and findings of molecular biology are interpreted has depended largely on parallel advancements in computational tools and techniques. Without these computing advances, modern structural biology would likely have remained an exclusive pursuit practiced by few, and not become the widely practiced, foundational field it is today. As other areas of biomedical research increasingly embrace research computing techniques, the successes, failures and lessons of structural biology computing can serve as a useful guide to progress in other biomedically related research fields. Copyright © 2013 Wiley Periodicals, Inc.

  20. Computational systems biology and dose-response modeling in relation to new directions in toxicity testing.

    PubMed

    Zhang, Qiang; Bhattacharya, Sudin; Andersen, Melvin E; Conolly, Rory B

    2010-02-01

    The new paradigm envisioned for toxicity testing in the 21st century advocates shifting from the current animal-based testing process to a combination of in vitro cell-based studies, high-throughput techniques, and in silico modeling. A strategic component of the vision is the adoption of the systems biology approach to acquire, analyze, and interpret toxicity pathway data. As key toxicity pathways are identified and their wiring details elucidated using traditional and high-throughput techniques, there is a pressing need to understand their qualitative and quantitative behaviors in response to perturbation by both physiological signals and exogenous stressors. The complexity of these molecular networks makes the task of understanding cellular responses merely by human intuition challenging, if not impossible. This process can be aided by mathematical modeling and computer simulation of the networks and their dynamic behaviors. A number of theoretical frameworks were developed in the last century for understanding dynamical systems in science and engineering disciplines. These frameworks, which include metabolic control analysis, biochemical systems theory, nonlinear dynamics, and control theory, can greatly facilitate the process of organizing, analyzing, and understanding toxicity pathways. Such analysis will require a comprehensive examination of the dynamic properties of "network motifs", the basic building blocks of molecular circuits. Network motifs like feedback and feedforward loops appear repeatedly in various molecular circuits across cell types and enable vital cellular functions like homeostasis, all-or-none response, memory, and biological rhythm. These functional motifs and associated qualitative and quantitative properties are the predominant source of nonlinearities observed in cellular dose response data. Complex response behaviors can arise from toxicity pathways built upon combinations of network motifs. While the field of computational cell biology has advanced rapidly with increasing availability of new data and powerful simulation techniques, a quantitative orientation is still lacking in life sciences education to make efficient use of these new tools to implement the new toxicity testing paradigm. A revamped undergraduate curriculum in the biological sciences including compulsory courses in mathematics and analysis of dynamical systems is required to address this gap. In parallel, dissemination of computational systems biology techniques and other analytical tools among practicing toxicologists and risk assessment professionals will help accelerate implementation of the new toxicity testing vision.
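The sign-sensitive delay of a coherent type-1 feedforward loop, one of the motifs discussed above, can be demonstrated with a few lines of Euler integration; the thresholds and rate constants here are illustrative, not taken from any pathway.

```python
def simulate_ffl(steps, dt=0.01, beta=1.0, alpha=1.0, threshold=0.5):
    """Euler integration of a coherent type-1 feedforward loop with AND logic
    (X activates Y; X AND Y activate Z). Z responds only after Y accumulates
    past its threshold, giving the motif's sign-sensitive delay."""
    y = z = 0.0
    ys, zs = [], []
    for _ in range(steps):
        x = 1.0                                   # input X steps on at t = 0
        y += dt * (beta * x - alpha * y)          # Y tracks X with a lag
        z_on = 1.0 if (x > 0 and y > threshold) else 0.0
        z += dt * (beta * z_on - alpha * z)       # Z needs both X and Y
        ys.append(y)
        zs.append(z)
    return ys, zs
```

The delayed rise of Z after a step in X, while Z would shut off immediately when X drops, is exactly the filtering behaviour that makes such motifs a source of nonlinearity in dose-response data.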

  1. Two- and three-input TALE-based AND logic computation in embryonic stem cells.

    PubMed

    Lienert, Florian; Torella, Joseph P; Chen, Jan-Hung; Norsworthy, Michael; Richardson, Ryan R; Silver, Pamela A

    2013-11-01

    Biological computing circuits can enhance our ability to control cellular functions and have potential applications in tissue engineering and medical treatments. Transcriptional activator-like effectors (TALEs) represent attractive components of synthetic gene regulatory circuits, as they can be designed de novo to target a given DNA sequence. We here demonstrate that TALEs can perform Boolean logic computation in mammalian cells. Using a split-intein protein-splicing strategy, we show that a functional TALE can be reconstituted from two inactive parts, thus generating two-input AND logic computation. We further demonstrate three-piece intein splicing in mammalian cells and use it to perform three-input AND computation. Using methods for random as well as targeted insertion of these relatively large genetic circuits, we show that TALE-based logic circuits are functional when integrated into the genome of mouse embryonic stem cells. Comparing construct variants in the same genomic context, we modulated the strength of the TALE-responsive promoter to improve the output of these circuits. Our work establishes split TALEs as a tool for building logic computation with the potential of controlling expression of endogenous genes or transgenes in response to a combination of cellular signals.

  2. Simulation of CNT-AFM tip based on finite element analysis for targeted probe of the biological cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yousefi, Amin Termeh, E-mail: at.tyousefi@gmail.com; Miyake, Mikio, E-mail: miyakejaist@gmail.com; Ikeda, Shoichiro, E-mail: sho16.ikeda@gmail.com

    Carbon nanotubes (CNTs) are potentially ideal tips for atomic force microscopy (AFM) due to their robust mechanical properties, nanoscale diameter and also their ability to be functionalized by chemical and biological components at the tip ends. This contribution develops the idea of using CNTs as an AFM tip in computational analysis of the biological cell. Finite element analysis was employed for each section, and displacement of the nodes located in the contact area was monitored by using an output database (ODB). This reliable integration of the CNT-AFM tip process provides a new class of high-performance nanoprobes for single biological cell analysis.

  3. Force fields and scoring functions for carbohydrate simulation.

    PubMed

    Xiong, Xiuming; Chen, Zhaoqiang; Cossins, Benjamin P; Xu, Zhijian; Shao, Qiang; Ding, Kai; Zhu, Weiliang; Shi, Jiye

    2015-01-12

    Carbohydrate dynamics plays a vital role in many biological processes, but we are not currently able to probe this with experimental approaches. The highly flexible nature of carbohydrate structures differs in many aspects from other biomolecules, posing significant challenges for studies employing computational simulation. Over past decades, computational study of carbohydrates has focused on the development of structure prediction methods, force field optimization, molecular dynamics simulation, and scoring functions for carbohydrate-protein interactions. Advances in carbohydrate force fields and scoring functions can be largely attributed to enhanced computational algorithms, application of quantum mechanics, and the increasing number of experimental structures determined by X-ray and NMR techniques. The conformational analysis of carbohydrates is challenging and has been studied intensively to elucidate the anomeric, exo-anomeric, and gauche effects. Here, we review the issues associated with carbohydrate force fields and scoring functions, which will have a broad application in the field of carbohydrate-based drug design. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Multi-agent-based bio-network for systems biology: protein-protein interaction network as an example.

    PubMed

    Ren, Li-Hong; Ding, Yong-Sheng; Shen, Yi-Zhen; Zhang, Xiang-Feng

    2008-10-01

    Recently, a collective effort from multiple research areas has been made to understand biological systems at the system level. This research requires the ability to simulate particular biological systems as cells, organs, organisms, and communities. In this paper, a novel bio-network simulation platform is proposed for systems biology studies by combining multi-agent approaches. We consider a biological system as a set of active computational components interacting with each other and with an external environment. Then, we propose a bio-network platform for simulating the behaviors of biological systems and modelling them in terms of bio-entities and society-entities. As a demonstration, we discuss how a protein-protein interaction (PPI) network can be seen as a society of autonomous interactive components. From interactions among small PPI networks, a large PPI network can emerge that has a remarkable ability to accomplish a complex function or task. We also simulate the evolution of the PPI networks by using the bio-operators of the bio-entities. Based on the proposed approach, various simulators with different functions can be embedded in the simulation platform, and further research can be done from design to development, including complexity validation of the biological system.

  5. Multi-level and hybrid modelling approaches for systems biology.

    PubMed

    Bardini, R; Politano, G; Benso, A; Di Carlo, S

    2017-01-01

    During the last decades, high-throughput techniques have allowed for the extraction of huge amounts of data from biological systems, unveiling more of their underlying complexity. Biological systems encompass a wide range of space and time scales, functioning according to flexible hierarchies of mechanisms that make an intertwined and dynamic interplay of regulations. This becomes particularly evident in processes such as ontogenesis, where regulative assets change according to process context and timing, making structural phenotype and architectural complexities emerge from a single cell through local interactions. The information collected from biological systems is naturally organized according to the functional levels composing the system itself. In systems biology, biological information often comes from overlapping but different scientific domains, each one having its own way of representing the phenomena under study. That is, the different parts of the system to be modelled may be described with different formalisms. For a model to have improved accuracy and to serve as a good knowledge base, it should encompass different system levels, suitably handling the respective formalisms. Models which are both multi-level and hybrid satisfy both these requirements, making them very useful tools in computational systems biology. This paper reviews some of the main contributions in this field.

  6. Defining a Computational Framework for the Assessment of Taxonomic Applicability

    EPA Science Inventory

    The Adverse Outcome Pathway (AOP) framework describes the effects of environmental stressors across multiple scales of biological organization and function. This includes an evaluation of the potential for each key event to occur across a broad range of species in order to determ...

  7. Neural system prediction and identification challenge.

    PubMed

    Vlachos, Ioannis; Zaytsev, Yury V; Spreizer, Sebastian; Aertsen, Ad; Kumar, Arvind

    2013-01-01

    Can we infer the function of a biological neural network (BNN) if we know the connectivity and activity of all its constituent neurons? This question is at the core of neuroscience and, accordingly, various methods have been developed to record the activity and connectivity of as many neurons as possible. Surprisingly, there is no theoretical or computational demonstration that neuronal activity and connectivity are indeed sufficient to infer the function of a BNN. Therefore, we pose the Neural Systems Identification and Prediction Challenge (nuSPIC). We provide the connectivity and activity of all neurons and invite participants (1) to infer the functions implemented (hard-wired) in spiking neural networks (SNNs) by stimulating and recording the activity of neurons and, (2) to implement predefined mathematical/biological functions using SNNs. The nuSPICs can be accessed via a web-interface to the NEST simulator and the user is not required to know any specific programming language. Furthermore, the nuSPICs can be used as a teaching tool. Finally, nuSPICs use the crowd-sourcing model to address scientific issues. With this computational approach we aim to identify which functions can be inferred by systematic recordings of neuronal activity and connectivity. In addition, nuSPICs will help the design and application of new experimental paradigms based on the structure of the SNN and the presumed function which is to be discovered.

  8. Neural system prediction and identification challenge

    PubMed Central

    Vlachos, Ioannis; Zaytsev, Yury V.; Spreizer, Sebastian; Aertsen, Ad; Kumar, Arvind

    2013-01-01

    Can we infer the function of a biological neural network (BNN) if we know the connectivity and activity of all its constituent neurons? This question is at the core of neuroscience and, accordingly, various methods have been developed to record the activity and connectivity of as many neurons as possible. Surprisingly, there is no theoretical or computational demonstration that neuronal activity and connectivity are indeed sufficient to infer the function of a BNN. Therefore, we pose the Neural Systems Identification and Prediction Challenge (nuSPIC). We provide the connectivity and activity of all neurons and invite participants (1) to infer the functions implemented (hard-wired) in spiking neural networks (SNNs) by stimulating and recording the activity of neurons and, (2) to implement predefined mathematical/biological functions using SNNs. The nuSPICs can be accessed via a web-interface to the NEST simulator and the user is not required to know any specific programming language. Furthermore, the nuSPICs can be used as a teaching tool. Finally, nuSPICs use the crowd-sourcing model to address scientific issues. With this computational approach we aim to identify which functions can be inferred by systematic recordings of neuronal activity and connectivity. In addition, nuSPICs will help the design and application of new experimental paradigms based on the structure of the SNN and the presumed function which is to be discovered. PMID:24399966
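The spiking networks in nuSPIC are built from standard integrate-and-fire units. A self-contained sketch of a single leaky integrate-and-fire neuron follows; this simplified model and its parameters are illustrative, not the NEST implementation.

```python
def lif_spike_times(current, steps, dt=0.1, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron driven by a constant input
    current; returns the time steps at which the membrane potential crossed
    threshold and was reset."""
    v = v_reset
    spikes = []
    for step in range(steps):
        v += (dt / tau) * (current - v)   # leaky integration toward the input
        if v >= v_thresh:
            spikes.append(step)
            v = v_reset                   # fire and reset
    return spikes
```

A suprathreshold input produces regular spiking, while a subthreshold input lets the membrane potential settle below threshold and the neuron stays silent; networks of such units, wired together, implement the hidden functions participants are asked to infer.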

  9. Analysis of Cysteine Redox Post-Translational Modifications in Cell Biology and Drug Pharmacology.

    PubMed

    Wani, Revati; Murray, Brion W

    2017-01-01

    Reversible cysteine oxidation is an emerging class of protein post-translational modification (PTM) that regulates catalytic activity, modulates conformation, impacts protein-protein interactions, and affects subcellular trafficking of numerous proteins. Redox PTMs encompass a broad array of cysteine oxidation reactions with different half-lives, topographies, and reactivities such as S-glutathionylation and sulfoxidation. Recent studies from our group underscore the lesser known effect of redox protein modifications on drug binding. To date, biological studies to understand mechanistic and functional aspects of redox regulation are technically challenging. A prominent issue is the lack of tools for labeling proteins oxidized to select chemotype/oxidant species in cells. Predictive computational tools and curated databases of oxidized proteins are facilitating structural and functional insights into regulation of the network of oxidized proteins or redox proteome. In this chapter, we discuss analytical platforms for studying protein oxidation, suggest computational tools currently available in the field to determine redox sensitive proteins, and begin to illuminate roles of cysteine redox PTMs in drug pharmacology.

  10. Clustering and Network Analysis of Reverse Phase Protein Array Data.

    PubMed

    Byron, Adam

    2017-01-01

    Molecular profiling of proteins and phosphoproteins using a reverse phase protein array (RPPA) platform, with a panel of target-specific antibodies, enables the parallel, quantitative proteomic analysis of many biological samples in a microarray format. Hence, RPPA analysis can generate a high volume of multidimensional data that must be effectively interrogated and interpreted. A range of computational techniques for data mining can be applied to detect and explore data structure and to form functional predictions from large datasets. Here, two approaches for the computational analysis of RPPA data are detailed: the identification of similar patterns of protein expression by hierarchical cluster analysis and the modeling of protein interactions and signaling relationships by network analysis. The protocols use freely available, cross-platform software, are easy to implement, and do not require any programming expertise. Serving as data-driven starting points for further in-depth analysis, validation, and biological experimentation, these and related bioinformatic approaches can accelerate the functional interpretation of RPPA data.
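Hierarchical cluster analysis of RPPA profiles can be sketched with single-linkage agglomeration over Euclidean distances. In practice one would use a library implementation; the data layout here is an assumption.

```python
def single_linkage(profiles, n_clusters):
    """Agglomerative single-linkage clustering of expression profiles
    ({name: list of values}) under Euclidean distance: repeatedly merge the
    two closest clusters until n_clusters remain."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    clusters = [{name} for name in profiles]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(profiles[a], profiles[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] |= clusters[j]        # merge the closest pair
        del clusters[j]
    return clusters
```

Samples with similar expression patterns end up in the same cluster, giving the data-driven groupings that then seed further validation and biological experimentation.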

  11. Evaluating Computational Gene Ontology Annotations.

    PubMed

    Škunca, Nives; Roberts, Richard J; Steffen, Martin

    2017-01-01

    Two avenues to understanding gene function are complementary and often overlapping: experimental work and computational prediction. While experimental annotation generally produces high-quality annotations, it is low throughput. Conversely, computational annotations have broad coverage, but the quality of annotations may be variable, and therefore evaluating the quality of computational annotations is a critical concern. In this chapter, we provide an overview of strategies to evaluate the quality of computational annotations. First, we discuss why evaluating quality in this setting is not trivial. We highlight the various issues that threaten to bias the evaluation of computational annotations, most of which stem from the incompleteness of biological databases. Second, we discuss solutions that address these issues, for example, targeted selection of new experimental annotations and leveraging the existing experimental annotations.
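One pragmatic way to handle the database incompleteness discussed above is to score computational annotations only on genes that have experimental annotations at all. A hedged sketch of that idea; the data layout is an assumption, and counting unconfirmed terms as false positives remains an approximation, since an experimentally annotated gene may still have missing terms.

```python
def annotation_precision(predictions, experimental):
    """Precision of computational GO annotations ({gene: set of GO terms}),
    evaluated only on genes with at least one experimental annotation, so
    that predictions for entirely unstudied genes are not penalized."""
    true_pos = false_pos = 0
    for gene, terms in predictions.items():
        known = experimental.get(gene)
        if not known:
            continue                      # no experimental evidence: skip
        true_pos += len(terms & known)
        false_pos += len(terms - known)
    total = true_pos + false_pos
    return true_pos / total if total else 0.0
```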

  12. ISMB 2016 offers outstanding science, networking, and celebration

    PubMed Central

    Fogg, Christiana

    2016-01-01

    The annual international conference on Intelligent Systems for Molecular Biology (ISMB) is the major meeting of the International Society for Computational Biology (ISCB). Over the past 23 years the ISMB conference has grown to become the world's largest bioinformatics/computational biology conference. ISMB 2016 will be the year's most important computational biology event globally. The conferences provide a multidisciplinary forum for disseminating the latest developments in bioinformatics/computational biology. ISMB brings together scientists from computer science, molecular biology, mathematics, statistics and related fields. Its principal focus is on the development and application of advanced computational methods for biological problems. ISMB 2016 offers the strongest scientific program and the broadest scope of any international bioinformatics/computational biology conference. Building on past successes, the conference is designed to cater to a variety of disciplines within the bioinformatics/computational biology community. ISMB 2016 takes place July 8-12 at the Swan and Dolphin Hotel in Orlando, Florida, United States. For two days preceding the conference, additional opportunities including Satellite Meetings, Student Council Symposium, and a selection of Special Interest Group Meetings and Applied Knowledge Exchange Sessions (AKES) are all offered to enable registered participants to learn more on the latest methods and tools within specialty research areas. PMID:27347392

  13. ISMB 2016 offers outstanding science, networking, and celebration.

    PubMed

    Fogg, Christiana

    2016-01-01

    The annual international conference on Intelligent Systems for Molecular Biology (ISMB) is the major meeting of the International Society for Computational Biology (ISCB). Over the past 23 years the ISMB conference has grown to become the world's largest bioinformatics/computational biology conference. ISMB 2016 will be the year's most important computational biology event globally. The conferences provide a multidisciplinary forum for disseminating the latest developments in bioinformatics/computational biology. ISMB brings together scientists from computer science, molecular biology, mathematics, statistics and related fields. Its principal focus is on the development and application of advanced computational methods for biological problems. ISMB 2016 offers the strongest scientific program and the broadest scope of any international bioinformatics/computational biology conference. Building on past successes, the conference is designed to cater to a variety of disciplines within the bioinformatics/computational biology community. ISMB 2016 takes place July 8-12 at the Swan and Dolphin Hotel in Orlando, Florida, United States. For two days preceding the conference, additional opportunities including Satellite Meetings, Student Council Symposium, and a selection of Special Interest Group Meetings and Applied Knowledge Exchange Sessions (AKES) are all offered to enable registered participants to learn more on the latest methods and tools within specialty research areas.

  14. Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aradi, Bálint; Niklasson, Anders M. N.; Frauenheim, Thomas

    A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born–Oppenheimer molecular dynamics. Furthermore, for systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can also be applied to a broad range of problems in materials science, chemistry, and biology.

  15. Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids

    DOE PAGES

    Aradi, Bálint; Niklasson, Anders M. N.; Frauenheim, Thomas

    2015-06-26

    A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born–Oppenheimer molecular dynamics. Furthermore, for systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can also be applied to a broad range of problems in materials science, chemistry, and biology.

  16. The effects of a visualization-centered curriculum on conceptual understanding and representational competence in high school biology

    NASA Astrophysics Data System (ADS)

    Wilder, Anna

    The purpose of this study was to investigate the effects of a visualization-centered curriculum, Hemoglobin: A Case of Double Identity, on conceptual understanding and representational competence in high school biology. Sixty-nine students enrolled in three sections of freshman biology taught by the same teacher participated in this study. Online Chemscape Chime computer-based molecular visualizations were incorporated into the 10-week curriculum to introduce students to fundamental structure and function relationships. Measures used in this study included a Hemoglobin Structure and Function Test, Mental Imagery Questionnaire, Exam Difficulty Survey, the Student Assessment of Learning Gains, the Group Assessment of Logical Thinking, the Attitude Toward Science in School Assessment, audiotapes of student interviews, students' artifacts, weekly unit activity surveys, informal researcher observations and a teacher's weekly questionnaire. The Hemoglobin Structure and Function Test, consisting of Parts A and B, was administered as a pre- and post-test. Part A used exclusively verbal test items to measure conceptual understanding, while Part B used visual-verbal test items to measure conceptual understanding and representational competence. Results of the Hemoglobin Structure and Function pre- and post-test revealed statistically significant gains in conceptual understanding and representational competence, suggesting the visualization-centered curriculum implemented in this study was effective in supporting positive learning outcomes. The large positive correlation between post-test results on Part A, composed of all-verbal test items, and Part B, using visual-verbal test items, suggests this curriculum supported students' mutual development of conceptual understanding and representational competence.
Evidence based on student interviews, Student Assessment of Learning Gains ratings and weekly activity surveys indicated positive attitudes toward the use of Chemscape Chime software and the computer-based molecular visualization activities as learning tools. Evidence from these same sources also indicated that students felt computer-based molecular visualization activities in conjunction with other classroom activities supported their learning. Implications for instructional design are discussed.

  17. Integrative, Dynamic Structural Biology at Atomic Resolution—It’s About Time

    PubMed Central

    van den Bedem, Henry; Fraser, James S.

    2015-01-01

    Biomolecules adopt a dynamic ensemble of conformations, each with the potential to interact with binding partners or perform the chemical reactions required for a multitude of cellular functions. Recent advances in X-ray crystallography, Nuclear Magnetic Resonance (NMR) spectroscopy, and other techniques are helping us realize the dream of seeing—in atomic detail—how different parts of biomolecules exchange between functional sub-states using concerted motions. Integrative structural biology has advanced our understanding of the formation of large macromolecular complexes and how their components interact in assemblies by leveraging data from many low-resolution methods. Here, we review the growing opportunities for integrative, dynamic structural biology at the atomic scale, contending there is increasing synergistic potential between X-ray crystallography, NMR, and computer simulations to reveal a structural basis for protein conformational dynamics at high resolution. PMID:25825836

  18. Computational Psychiatry

    PubMed Central

    Wang, Xiao-Jing; Krystal, John H.

    2014-01-01

    Psychiatric disorders such as autism and schizophrenia arise from abnormalities in brain systems that underlie cognitive, emotional and social functions. The brain is enormously complex and its abundant feedback loops on multiple scales preclude intuitive explication of circuit functions. In close interplay with experiments, theory and computational modeling are essential for understanding how, precisely, neural circuits generate flexible behaviors and how their impairments give rise to psychiatric symptoms. This Perspective highlights recent progress in applying computational neuroscience to the study of mental disorders. We outline basic approaches, including identification of core deficits that cut across disease categories, biologically realistic modeling bridging cellular and synaptic mechanisms with behavior, and model-aided diagnosis. The need for new research strategies in psychiatry is urgent. Computational psychiatry potentially provides powerful tools for elucidating pathophysiology that may inform both diagnosis and treatment. To achieve this promise will require investment in cross-disciplinary training and research in this nascent field. PMID:25442941

  19. Extension of research data repository system to support direct compute access to biomedical datasets: enhancing Dataverse to support large datasets

    PubMed Central

    McKinney, Bill; Meyer, Peter A.; Crosas, Mercè; Sliz, Piotr

    2016-01-01

    Access to experimental X-ray diffraction image data is important for validation and reproduction of macromolecular models and indispensable for the development of structural biology processing methods. In response to the evolving needs of the structural biology community, we recently established a diffraction data publication system, the Structural Biology Data Grid (SBDG, data.sbgrid.org), to preserve primary experimental datasets supporting scientific publications. All datasets published through the SBDG are freely available to the research community under a public domain dedication license, with metadata compliant with the DataCite Schema (schema.datacite.org). A proof-of-concept study demonstrated community interest and utility. Publication of large datasets is a challenge shared by several fields, and the SBDG has begun collaborating with the Institute for Quantitative Social Science at Harvard University to extend the Dataverse (dataverse.org) open-source data repository system to structural biology datasets. Several extensions are necessary to support the size and metadata requirements for structural biology datasets. In this paper, we describe one such extension—functionality supporting preservation of filesystem structure within Dataverse—which is essential for both in-place computation and non-HTTP data transfers. PMID:27862010
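
    One way to preserve filesystem structure alongside deposited objects is to record a manifest of relative paths. The sketch below is a hypothetical toy version of that idea, not the actual Dataverse extension or its metadata format: it records a dataset's layout, then re-materializes the same tree elsewhere.

```python
import os, tempfile, shutil

def make_manifest(root):
    """Record every file as a (relative path, size) pair so the directory
    layout can be restored after deposit in a flat object store."""
    manifest = []
    for dirpath, _, files in os.walk(root):
        for name in sorted(files):
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            manifest.append((rel, os.path.getsize(full)))
    return sorted(manifest)

def materialize(manifest, src_root, dst_root):
    """Re-create the recorded directory tree from the stored files."""
    for rel, _ in manifest:
        dst = os.path.join(dst_root, rel)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copyfile(os.path.join(src_root, rel), dst)

# demo: build a small dataset and round-trip it through a manifest
src = tempfile.mkdtemp()
os.makedirs(os.path.join(src, "images", "run1"))
with open(os.path.join(src, "images", "run1", "frame_001.img"), "wb") as f:
    f.write(b"\x00" * 16)
with open(os.path.join(src, "README.txt"), "w") as f:
    f.write("diffraction dataset\n")

m = make_manifest(src)
dst = tempfile.mkdtemp()
materialize(m, src, dst)
```

    Keeping relative paths (rather than flattening files into opaque identifiers) is what makes in-place computation on the deposited dataset possible, since processing pipelines expect the original tree.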

  20. Compact VLSI neural computer integrated with active pixel sensor for real-time ATR applications

    NASA Astrophysics Data System (ADS)

    Fang, Wai-Chi; Udomkesmalee, Gabriel; Alkalai, Leon

    1997-04-01

    A compact VLSI neural computer integrated with an active pixel sensor has been under development to mimic what is inherent in biological vision systems. This electronic eye-brain computer is targeted for real-time machine vision applications which require both high-bandwidth communication and high-performance computing for data sensing, synergy of multiple types of sensory information, feature extraction, target detection, target recognition, and control functions. The neural computer is based on a composite structure which combines Annealing Cellular Neural Network (ACNN) and Hierarchical Self-Organization Neural Network (HSONN). The ACNN architecture is a programmable and scalable multi-dimensional array of annealing neurons which are locally connected to their neighboring neurons. Meanwhile, the HSONN adopts a hierarchical structure with nonlinear basis functions. The ACNN+HSONN neural computer is effectively designed to perform programmable functions for machine vision processing in all levels with its embedded host processor. It provides a two order-of-magnitude increase in computation power over the state-of-the-art microcomputer and DSP microelectronics. A compact current-mode VLSI design feasibility of the ACNN+HSONN neural computer is demonstrated by a 3D 16X8X9-cube neural processor chip design in a 2-micrometer CMOS technology. Integration of this neural computer as one slice of a 4'X4' multichip module into the 3D MCM based avionics architecture for NASA's New Millennium Program is also described.
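
    The ACNN/HSONN hardware itself is not reproduced here; the following toy sketch (a ring of binary neurons with symmetric nearest-neighbor coupling, invented for illustration) only demonstrates the energy-descent property that locally connected annealing networks exploit: asynchronous local updates never raise the network energy, so the array settles into a stable configuration.

```python
def energy(s, J):
    """Energy of a ring of +/-1 neurons with nearest-neighbor coupling J."""
    n = len(s)
    return -0.5 * sum(J * s[i] * s[(i + 1) % n] for i in range(n))

def sweep(s, J):
    """One asynchronous sweep: each neuron aligns with its local field,
    which can only lower (never raise) the network energy."""
    n = len(s)
    for i in range(n):
        h = J * (s[(i - 1) % n] + s[(i + 1) % n])   # field from the two neighbors
        s[i] = 1 if h >= 0 else -1
    return s

s = [1, -1, 1, -1, 1, -1, 1, 1]   # arbitrary initial pattern
J = 1.0
energies = [energy(s, J)]
for _ in range(5):
    sweep(s, J)
    energies.append(energy(s, J))
```

    With annealing, a temperature-dependent random element would be added to the update to escape poor local minima; the deterministic version above shows only the relaxation step.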

  1. AHaH computing-from metastable switches to attractors to machine learning.

    PubMed

    Nugent, Michael Alexander; Molter, Timothy Wesley

    2014-01-01

    Modern computing architecture based on the separation of memory and processing leads to a well-known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures: all key capabilities of biological nervous systems and modern machine learning algorithms with real world application.
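
    The attractor behavior described above can be illustrated with a toy Anti-Hebbian/Hebbian update (the rule and all constants below are invented for illustration; this is not the circuit-level memristor model from the paper): the weight on each input line is pushed toward a state where the node output saturates at +1 or -1 for the patterns it sees, so the weights themselves become stable decision states.

```python
def ahah_update(w, x, lr=0.1):
    """Toy AHaH-style plasticity: move the output y = w.x toward the
    saturation point matching its current sign (+1 or -1)."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    sgn = 1.0 if y >= 0 else -1.0
    return [wi + lr * xi * (sgn - y) for wi, xi in zip(w, x)]

w = [0.1, -0.1]                        # small initial weights
patterns = [[1.0, 0.0], [0.0, 1.0]]    # two input lines, one active at a time
for _ in range(200):
    for p in patterns:
        w = ahah_update(w, p)
```

    The two weights flow to opposite attractor states (+1 and -1), i.e. the node has self-organized a binary decision over its inputs, which is the building block the paper composes into logic and learning functions.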

  2. Mammalian synthetic biology for studying the cell

    PubMed Central

    Mathur, Melina; Xiang, Joy S.

    2017-01-01

    Synthetic biology is advancing the design of genetic devices that enable the study of cellular and molecular biology in mammalian cells. These genetic devices use diverse regulatory mechanisms to both examine cellular processes and achieve precise and dynamic control of cellular phenotype. Synthetic biology tools provide novel functionality to complement the examination of natural cell systems, including engineered molecules with specific activities and model systems that mimic complex regulatory processes. Continued development of quantitative standards and computational tools will expand capacities to probe cellular mechanisms with genetic devices to achieve a more comprehensive understanding of the cell. In this study, we review synthetic biology tools that are being applied to effectively investigate diverse cellular processes, regulatory networks, and multicellular interactions. We also discuss current challenges and future developments in the field that may transform the types of investigation possible in cell biology. PMID:27932576

  3. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions

    PubMed Central

    Hucka, Michael; Bergmann, Frank T.; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M.; Le Novère, Nicolas; Myers, Chris J.; Olivier, Brett G.; Sahle, Sven; Schaff, James C.; Smith, Lucian P.; Waltemath, Dagmar; Wilkinson, Darren J.

    2017-01-01

    Summary Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/. PMID:26528569
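
    A minimal SBML Level 2 Version 5 document can be assembled with any XML library. The sketch below uses Python's standard ElementTree; the element names follow the structures the specification defines, but the exact namespace URI and required attributes should be checked against the specification itself (the URI here follows the pattern of earlier Level 2 versions), and the result is illustrative rather than schema-validated.

```python
import xml.etree.ElementTree as ET

NS = "http://www.sbml.org/sbml/level2/version5"  # assumed L2V5 namespace

# one compartment and one species, mirroring SBML's declarative structure
sbml = ET.Element("sbml", {"xmlns": NS, "level": "2", "version": "5"})
model = ET.SubElement(sbml, "model", {"id": "toy_model"})
comps = ET.SubElement(model, "listOfCompartments")
ET.SubElement(comps, "compartment", {"id": "cell", "size": "1.0"})
species = ET.SubElement(model, "listOfSpecies")
ET.SubElement(species, "species",
              {"id": "glucose", "compartment": "cell",
               "initialConcentration": "5.0"})

doc = ET.tostring(sbml, encoding="unicode")
```

    Because the model is pure declarative XML, any SBML-aware tool can read the same document, which is the interoperability point the abstract makes.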

  4. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core

    PubMed Central

    Hucka, Michael; Bergmann, Frank T.; Hoops, Stefan; Keating, Sarah M.; Sahle, Sven; Schaff, James C.; Smith, Lucian P.; Wilkinson, Darren J.

    2017-01-01

    Summary Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/. PMID:26528564

  5. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions.

    PubMed

    Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J

    2015-09-04

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org.

  6. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core.

    PubMed

    Hucka, Michael; Bergmann, Frank T; Hoops, Stefan; Keating, Sarah M; Sahle, Sven; Schaff, James C; Smith, Lucian P; Wilkinson, Darren J

    2015-09-04

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.

  7. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core.

    PubMed

    Hucka, Michael; Bergmann, Frank T; Hoops, Stefan; Keating, Sarah M; Sahle, Sven; Schaff, James C; Smith, Lucian P; Wilkinson, Darren J

    2015-06-01

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.

  8. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions.

    PubMed

    Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J

    2015-06-01

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.

  9. International Journal of Quantum Chemistry, Quantum Biology Symposium No. 24. Proceedings of the International Symposium on the Application of Fundamental Theory to Problems of Biology and Pharmacology, Held at Ponce de Leon Resort, St. Augustine, Florida on March 1-7, 1997. Volume 65, No. 6, 1997.

    DTIC Science & Technology

    1997-03-01

    characteristic musk odor [7]. Molecular shape is important in circular dichroism and chirality. Enantiomers, if viewed in isolation, have identical all...simply a function of the length of the elongated structure). We can introduce V/S as an index of molecular shape by using computed molecular

  10. Evidence of pervasive biologically functional secondary structures within the genomes of eukaryotic single-stranded DNA viruses.

    PubMed

    Muhire, Brejnev Muhizi; Golden, Michael; Murrell, Ben; Lefeuvre, Pierre; Lett, Jean-Michel; Gray, Alistair; Poon, Art Y F; Ngandu, Nobubelo Kwanele; Semegni, Yves; Tanov, Emil Pavlov; Monjane, Adérito Luis; Harkins, Gordon William; Varsani, Arvind; Shepherd, Dionne Natalie; Martin, Darren Patrick

    2014-02-01

    Single-stranded DNA (ssDNA) viruses have genomes that are potentially capable of forming complex secondary structures through Watson-Crick base pairing between their constituent nucleotides. A few of the structural elements formed by such base pairings are, in fact, known to have important functions during the replication of many ssDNA viruses. Unknown, however, are (i) whether numerous additional ssDNA virus genomic structural elements predicted to exist by computational DNA folding methods actually exist and (ii) whether those structures that do exist have any biological relevance. We therefore computationally inferred lists of the most evolutionarily conserved structures within a diverse selection of animal- and plant-infecting ssDNA viruses drawn from the families Circoviridae, Anelloviridae, Parvoviridae, Nanoviridae, and Geminiviridae and analyzed these for evidence of natural selection favoring the maintenance of these structures. While we find evidence that is consistent with purifying selection being stronger at nucleotide sites that are predicted to be base paired than at sites predicted to be unpaired, we also find strong associations between sites that are predicted to pair with one another and site pairs that are apparently coevolving in a complementary fashion. Collectively, these results indicate that natural selection actively preserves much of the pervasive secondary structure that is evident within eukaryote-infecting ssDNA virus genomes and, therefore, that much of this structure is biologically functional. Lastly, we provide examples of various highly conserved but completely uncharacterized structural elements that likely have important functions within some of the ssDNA virus genomes analyzed here.
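
    As a toy stand-in for the computational DNA folding methods the study relies on, the classic Nussinov dynamic program below computes the maximum number of Watson-Crick base pairs a single DNA strand can form with itself, subject to a minimum hairpin loop length. This is a simplified illustration, not the thermodynamic folding software used in the paper.

```python
def max_pairs(seq, min_loop=3):
    """Nussinov-style DP: maximum number of complementary base pairs in a
    single strand, with at least `min_loop` unpaired bases in every hairpin."""
    comp = {("A", "T"), ("T", "A"), ("G", "C"), ("C", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                      # base j left unpaired
            for k in range(i, j - min_loop):
                if (seq[k], seq[j]) in comp:         # pair base k with base j
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

    For example, the palindromic stem "GGGAAACCC" folds into a hairpin with three G-C pairs closing an AAA loop, while a homopolymer forms none.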

  11. Evidence of Pervasive Biologically Functional Secondary Structures within the Genomes of Eukaryotic Single-Stranded DNA Viruses

    PubMed Central

    Muhire, Brejnev Muhizi; Golden, Michael; Murrell, Ben; Lefeuvre, Pierre; Lett, Jean-Michel; Gray, Alistair; Poon, Art Y. F.; Ngandu, Nobubelo Kwanele; Semegni, Yves; Tanov, Emil Pavlov; Monjane, Adérito Luis; Harkins, Gordon William; Varsani, Arvind; Shepherd, Dionne Natalie

    2014-01-01

    Single-stranded DNA (ssDNA) viruses have genomes that are potentially capable of forming complex secondary structures through Watson-Crick base pairing between their constituent nucleotides. A few of the structural elements formed by such base pairings are, in fact, known to have important functions during the replication of many ssDNA viruses. Unknown, however, are (i) whether numerous additional ssDNA virus genomic structural elements predicted to exist by computational DNA folding methods actually exist and (ii) whether those structures that do exist have any biological relevance. We therefore computationally inferred lists of the most evolutionarily conserved structures within a diverse selection of animal- and plant-infecting ssDNA viruses drawn from the families Circoviridae, Anelloviridae, Parvoviridae, Nanoviridae, and Geminiviridae and analyzed these for evidence of natural selection favoring the maintenance of these structures. While we find evidence that is consistent with purifying selection being stronger at nucleotide sites that are predicted to be base paired than at sites predicted to be unpaired, we also find strong associations between sites that are predicted to pair with one another and site pairs that are apparently coevolving in a complementary fashion. Collectively, these results indicate that natural selection actively preserves much of the pervasive secondary structure that is evident within eukaryote-infecting ssDNA virus genomes and, therefore, that much of this structure is biologically functional. Lastly, we provide examples of various highly conserved but completely uncharacterized structural elements that likely have important functions within some of the ssDNA virus genomes analyzed here. PMID:24284329

  12. Recent advances in jointed quantum mechanics and molecular mechanics calculations of biological macromolecules: schemes and applications coupled to ab initio calculations.

    PubMed

    Hagiwara, Yohsuke; Tateno, Masaru

    2010-10-20

    We review the recent research on the functional mechanisms of biological macromolecules using theoretical methodologies coupled to ab initio quantum mechanical (QM) treatments of reaction centers in proteins and nucleic acids. Since in most cases such biological molecules are large, the computational costs of performing ab initio calculations for the entire structures are prohibitive. Instead, simulations that are jointed with molecular mechanics (MM) calculations are crucial to evaluate the long-range electrostatic interactions, which significantly affect the electronic structures of biological macromolecules. Thus, we focus our attention on the methodologies/schemes and applications of jointed QM/MM calculations, and discuss the critical issues to be elucidated in biological macromolecular systems. © 2010 IOP Publishing Ltd
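
    One widely used way to join QM and MM descriptions (not necessarily the specific scheme of the authors) is the subtractive, ONIOM-style coupling: E = E_MM(system) + E_QM(region) - E_MM(region), which replaces the MM description of the reaction center with the QM one while keeping the environment at the MM level. A toy sketch with invented energy functions:

```python
def qmmm_energy(e_qm, e_mm, region, system):
    """Subtractive QM/MM coupling:
       E = E_MM(system) + E_QM(region) - E_MM(region)."""
    return e_mm(system) + e_qm(region) - e_mm(region)

# toy energy functions over lists of (charge, 1D position) sites:
# pairwise Coulomb for "MM", Coulomb plus a crude correction for "QM"
def coulomb(sites):
    e = 0.0
    for i in range(len(sites)):
        for j in range(i + 1, len(sites)):
            (qi, xi), (qj, xj) = sites[i], sites[j]
            e += qi * qj / abs(xi - xj)
    return e

def toy_qm(sites):
    return coulomb(sites) - 0.1 * len(sites)   # fake correlation energy

region = [(1.0, 0.0), (-1.0, 1.0)]             # "reaction center" sites
system = region + [(0.5, 3.0), (-0.5, 4.0)]    # plus the MM environment
e = qmmm_energy(toy_qm, coulomb, region, system)
```

    A sanity property of the subtractive scheme: if the QM and MM functions coincide, the correction cancels exactly and the plain MM energy is recovered. The long-range electrostatics the abstract emphasizes enter through the full-system MM term.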

  13. Atomic switch networks—nanoarchitectonic design of a complex system for natural computing

    NASA Astrophysics Data System (ADS)

    Demis, E. C.; Aguilera, R.; Sillin, H. O.; Scharnhorst, K.; Sandouk, E. J.; Aono, M.; Stieg, A. Z.; Gimzewski, J. K.

    2015-05-01

    Self-organized complex systems are ubiquitous in nature, and the structural complexity of these natural systems can be used as a model to design new classes of functional nanotechnology based on highly interconnected networks of interacting units. Conventional fabrication methods for electronic computing devices are subject to known scaling limits, confining the diversity of possible architectures. This work explores methods of fabricating a self-organized complex device known as an atomic switch network and discusses its potential utility in computing. Through a merger of top-down and bottom-up techniques guided by mathematical and nanoarchitectonic design principles, we have produced functional devices comprising nanoscale elements whose intrinsic nonlinear dynamics and memorization capabilities produce robust patterns of distributed activity and a capacity for nonlinear transformation of input signals when configured in the appropriate network architecture. Their operational characteristics represent a unique potential for hardware implementation of natural computation, specifically in the area of reservoir computing—a burgeoning field that investigates the computational aptitude of complex biologically inspired systems.
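
    The reservoir computing idea mentioned above can be reduced to a two-input caricature (everything below is invented for illustration; a real atomic switch network supplies a large disordered nonlinear state instead of one hand-picked node): a fixed nonlinear transformation of the inputs makes a problem solvable by a purely linear readout, which is where all the training happens.

```python
def reservoir(u1, u2):
    """A single fixed nonlinear node standing in for the network:
    it activates only when both inputs are active."""
    return 1 if u1 + u2 > 1.5 else 0

def readout(u1, u2):
    # fixed linear readout over [u1, u2, reservoir state] computes XOR,
    # which no linear readout over the raw inputs alone can compute
    return u1 + u2 - 2 * reservoir(u1, u2)

cases = [(0, 0), (0, 1), (1, 0), (1, 1)]
outputs = {c: readout(*c) for c in cases}
```

    In a real reservoir the recurrent dynamics also provide fading memory of past inputs, so the same linear-readout trick extends to temporal signal processing.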

  14. Computational analyses in cognitive neuroscience: in defense of biological implausibility.

    PubMed

    Dror, I E; Gallogly, D P

    1999-06-01

    Because cognitive neuroscience researchers attempt to understand the human mind by bridging behavior and brain, they expect computational analyses to be biologically plausible. In this paper, biologically implausible computational analyses are shown to have critical and essential roles in the various stages and domains of cognitive neuroscience research. Specifically, biologically implausible computational analyses can contribute to (1) understanding and characterizing the problem that is being studied, (2) examining the availability of information and its representation, and (3) evaluating and understanding the neuronal solution. In the context of the distinct types of contributions made by certain computational analyses, the biological plausibility of those analyses is altogether irrelevant. These biologically implausible models are nevertheless relevant and important for biologically driven research.

  15. Systems biology definition of the core proteome of metabolism and expression is consistent with high-throughput data.

    PubMed

    Yang, Laurence; Tan, Justin; O'Brien, Edward J; Monk, Jonathan M; Kim, Donghyuk; Li, Howard J; Charusanti, Pep; Ebrahim, Ali; Lloyd, Colton J; Yurkovich, James T; Du, Bin; Dräger, Andreas; Thomas, Alex; Sun, Yuekai; Saunders, Michael A; Palsson, Bernhard O

    2015-08-25

    Finding the minimal set of gene functions needed to sustain life is of both fundamental and practical importance. Minimal gene lists have been proposed by using comparative genomics-based core proteome definitions. A definition of a core proteome that is supported by empirical data, is understood at the systems-level, and provides a basis for computing essential cell functions is lacking. Here, we use a systems biology-based genome-scale model of metabolism and expression to define a functional core proteome consisting of 356 gene products, accounting for 44% of the Escherichia coli proteome by mass based on proteomics data. This systems biology core proteome includes 212 genes not found in previous comparative genomics-based core proteome definitions, accounts for 65% of known essential genes in E. coli, and has 78% gene function overlap with minimal genomes (Buchnera aphidicola and Mycoplasma genitalium). Based on transcriptomics data across environmental and genetic backgrounds, the systems biology core proteome is significantly enriched in nondifferentially expressed genes and depleted in differentially expressed genes. Compared with the noncore, core gene expression levels are also similar across genetic backgrounds (two times higher Spearman rank correlation) and exhibit significantly more complex transcriptional and posttranscriptional regulatory features (40% more transcription start sites per gene, 22% longer 5'UTR). Thus, genome-scale systems biology approaches rigorously identify a functional core proteome needed to support growth. This framework, validated by using high-throughput datasets, facilitates a mechanistic understanding of systems-level core proteome function through in silico models; it de facto defines a paleome.

  16. Web-based applications for building, managing and analysing kinetic models of biological systems.

    PubMed

    Lee, Dong-Yup; Saha, Rajib; Yusufi, Faraaz Noor Khan; Park, Wonjun; Karimi, Iftekhar A

    2009-01-01

    Mathematical modelling and computational analysis play an essential role in improving our capability to elucidate the functions and characteristics of complex biological systems such as metabolic, regulatory and cell signalling pathways. The modelling and concomitant simulation render it possible to predict the cellular behaviour of systems under various genetically and/or environmentally perturbed conditions. This motivates systems biologists/bioengineers/bioinformaticians to develop new tools and applications, allowing non-experts to easily conduct such modelling and analysis. However, among a multitude of systems biology tools developed to date, only a handful of projects have adopted a web-based approach to kinetic modelling. In this report, we evaluate the capabilities and characteristics of current web-based tools in systems biology and identify desirable features, limitations and bottlenecks for further improvements in terms of usability and functionality. A short discussion on software architecture issues involved in web-based applications and the approaches taken by existing tools is included for those interested in developing their own simulation applications.

  17. A program code generator for multiphysics biological simulation using markup languages.

    PubMed

    Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi

    2012-01-01

    To cope with the complexity of biological function simulation models, model representation with description languages is becoming popular. However, the simulation software itself becomes complex in these environments, making it difficult to modify the simulation conditions, target computation resources or calculation methods. Complex biological function simulation software involves 1) model equations, 2) boundary conditions and 3) calculation schemes. A description-model file is useful for the first point and partly for the second; however, the third is difficult to handle for the various calculation schemes required by simulation models constructed from two or more elementary models. We introduce a simulation software generation system that uses a description-language-based specification of the coupling calculation scheme together with a cell model description file. Using this software, we can easily generate biological simulation code with a variety of coupling calculation schemes. To show the efficiency of our system, an example of a coupling calculation scheme with three elementary models is presented.
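
    The separation the abstract describes -- declarative model equations, with solver code generated around them -- can be illustrated with a toy generator (hypothetical names, not the authors' system): given ODE right-hand sides as strings, it emits and compiles the source of an explicit-Euler stepper.

```python
# Toy sketch of simulation-code generation from a declarative model
# description (hypothetical API, not the system from the abstract).

def generate_euler_code(model, dt):
    """Emit Python source for one explicit-Euler step of `model`.

    `model` maps each state variable to a Python expression (string)
    giving its time derivative in terms of `state[...]`.
    """
    lines = ["def step(state):", "    new = {}"]
    for var, rhs in model.items():
        lines.append(f"    new['{var}'] = state['{var}'] + {dt} * ({rhs})")
    lines.append("    return new")
    return "\n".join(lines)

# A toy two-variable model: 'a' decays and feeds 'b'.
model = {"a": "-0.5 * state['a']", "b": "0.5 * state['a']"}
src = generate_euler_code(model, dt=0.1)

namespace = {}
exec(src, namespace)   # compile the generated simulation code
step = namespace["step"]

state = {"a": 1.0, "b": 0.0}
for _ in range(10):
    state = step(state)
```

    Swapping the emitted stepper for a different calculation scheme (e.g. a higher-order or coupled-model scheme) only changes the generator, not the model description, which is the point the abstract makes.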

  18. Computational discovery of small open reading frames in Bacillus lehensis

    NASA Astrophysics Data System (ADS)

    Zainuddin, Nurhafizhoh; Illias, Rosli Md.; Mahadi, Nor Muhammad; Firdaus-Raih, Mohd

    2015-09-01

    Bacillus lehensis is a Gram-positive, endospore-forming alkalitolerant bacterial strain. In recent years there has been increasing interest in alkaliphilic bacteria, their ability to grow under extreme conditions, and their various important functions in industrial biology, especially enzyme production. Small open reading frames (sORFs) have emerged as important regulators in various biological roles such as tumor progression, hormone signalling and stress response. Over the past decade, many biocomputational tools have been developed to predict genes in bacterial genomes. In this study, three software tools were used to predict sORFs (≤ 80 aa) in the whole genome sequence of B. lehensis. We used comparative analysis to identify the sORFs in B. lehensis that are conserved across other bacterial genomes, and extended the analysis with a homology search against protein databases. This study establishes a set of sORFs in B. lehensis that are conserved across bacteria and may have important biological functions that remain elusive.
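
    As a rough illustration of the prediction step (not the specific tools used in the study), a forward-strand scan for ATG-initiated ORFs of at most 80 codons can be sketched as follows; a real pipeline would also scan the reverse complement and apply the conservation and homology filters described above.

```python
# Minimal sORF scanner: forward strand only, standard bacterial start
# (ATG) and stop codons, length cutoff of 80 codons as in the study.

STOPS = {"TAA", "TAG", "TGA"}

def find_sorfs(seq, max_aa=80):
    """Return (start, end, codons) for ATG-initiated ORFs <= max_aa codons."""
    seq = seq.upper()
    sorfs = []
    for frame in range(3):
        for i in range(frame, len(seq) - 2, 3):
            if seq[i:i + 3] != "ATG":
                continue
            j = i + 3
            while j + 3 <= len(seq) and seq[j:j + 3] not in STOPS:
                j += 3
            if j + 3 <= len(seq):          # an in-frame stop was found
                aa_len = (j - i) // 3      # codons including the ATG
                if aa_len <= max_aa:
                    sorfs.append((i, j + 3, aa_len))
    return sorfs

hits = find_sorfs("ATGAAATAG")   # one 2-codon ORF spanning the whole string
```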

  19. Designing synthetic RNAs to determine the relevance of structural motifs in picornavirus IRES elements

    NASA Astrophysics Data System (ADS)

    Fernandez-Chamorro, Javier; Lozano, Gloria; Garcia-Martin, Juan Antonio; Ramajo, Jorge; Dotu, Ivan; Clote, Peter; Martinez-Salas, Encarnacion

    2016-04-01

    The function of Internal Ribosome Entry Site (IRES) elements is intimately linked to their RNA structure. Viral IRES elements are organized in modular domains consisting of one or more stem-loops that harbor conserved RNA motifs critical for internal initiation of translation. A conserved motif is the pyrimidine-tract located upstream of the functional initiation codon in type I and II picornavirus IRES. By computationally designing synthetic RNAs to fold into a structure that sequesters the polypyrimidine tract in a hairpin, we establish a correlation between predicted inaccessibility of the pyrimidine tract and IRES activity, as determined in both in vitro and in vivo systems. Our data supports the hypothesis that structural sequestration of the pyrimidine-tract within a stable hairpin inactivates IRES activity, since the stronger the stability of the hairpin the higher the inhibition of protein synthesis. Destabilization of the stem-loop immediately upstream of the pyrimidine-tract also decreases IRES activity. Our work introduces a hybrid computational/experimental method to determine the importance of structural motifs for biological function. Specifically, we show the feasibility of using the software RNAiFold to design synthetic RNAs with particular sequence and structural motifs that permit subsequent experimental determination of the importance of such motifs for biological function.

  20. Computational Modeling in Liver Surgery

    PubMed Central

    Christ, Bruno; Dahmen, Uta; Herrmann, Karl-Heinz; König, Matthias; Reichenbach, Jürgen R.; Ricken, Tim; Schleicher, Jana; Ole Schwen, Lars; Vlaic, Sebastian; Waschinsky, Navina

    2017-01-01

    The need for extended liver resection is increasing due to the growing incidence of liver tumors in aging societies. Individualized surgical planning is the key for identifying the optimal resection strategy and to minimize the risk of postoperative liver failure and tumor recurrence. Current computational tools provide virtual planning of liver resection by taking into account the spatial relationship between the tumor and the hepatic vascular trees, as well as the size of the future liver remnant. However, size and function of the liver are not necessarily equivalent. Hence, determining the future liver volume might misestimate the future liver function, especially in cases of hepatic comorbidities such as hepatic steatosis. A systems medicine approach could be applied, including biological, medical, and surgical aspects, by integrating all available anatomical and functional information of the individual patient. Such an approach holds promise for better prediction of postoperative liver function and hence improved risk assessment. This review provides an overview of mathematical models related to the liver and its function and explores their potential relevance for computational liver surgery. We first summarize key facts of hepatic anatomy, physiology, and pathology relevant for hepatic surgery, followed by a description of the computational tools currently used in liver surgical planning. Then we present selected state-of-the-art computational liver models potentially useful to support liver surgery. Finally, we discuss the main challenges that will need to be addressed when developing advanced computational planning tools in the context of liver surgery. PMID:29249974

  1. Developing PFC representations using reinforcement learning.

    PubMed

    Reynolds, Jeremy R; O'Reilly, Randall C

    2009-12-01

    From both functional and biological considerations, it is widely believed that action production, planning, and goal-oriented behaviors supported by the frontal cortex are organized hierarchically [Fuster (1991); Koechlin, E., Ody, C., & Kouneiher, F. (2003). Neuroscience: The architecture of cognitive control in the human prefrontal cortex. Science, 302, 1181-1185; Miller, G. A., Galanter, E., & Pribram, K. H. (1960). Plans and the structure of behavior. New York: Holt]. However, the nature of the different levels of the hierarchy remains unclear, and little attention has been paid to the origins of such a hierarchy. We address these issues through biologically-inspired computational models that develop representations through reinforcement learning. We explore several different factors in these models that might plausibly give rise to a hierarchical organization of representations within the PFC, including an initial connectivity hierarchy within PFC, a hierarchical set of connections between PFC and subcortical structures controlling it, and differential synaptic plasticity schedules. Simulation results indicate that architectural constraints contribute to the segregation of different types of representations, and that this segregation facilitates learning. These findings are consistent with the idea that there is a functional hierarchy in PFC, as captured in our earlier computational models of PFC function and a growing body of empirical data.

  2. Dovetailing biology and chemistry: integrating the Gene Ontology with the ChEBI chemical ontology

    PubMed Central

    2013-01-01

    Background The Gene Ontology (GO) facilitates the description of the action of gene products in a biological context. Many GO terms refer to chemical entities that participate in biological processes. To facilitate accurate and consistent systems-wide biological representation, it is necessary to integrate the chemical view of these entities with the biological view of GO functions and processes. We describe a collaborative effort between the GO and the Chemical Entities of Biological Interest (ChEBI) ontology developers to ensure that the representation of chemicals in the GO is both internally consistent and in alignment with the chemical expertise captured in ChEBI. Results We have examined and integrated the ChEBI structural hierarchy into the GO resource through computationally-assisted manual curation of both GO and ChEBI. Our work has resulted in the creation of computable definitions of GO terms that contain fully defined semantic relationships to corresponding chemical terms in ChEBI. Conclusions The set of logical definitions using both the GO and ChEBI has already been used to automate aspects of GO development and has the potential to allow the integration of data across the domains of biology and chemistry. These logical definitions are available as an extended version of the ontology from http://purl.obolibrary.org/obo/go/extensions/go-plus.owl. PMID:23895341

  3. Integrative network alignment reveals large regions of global network similarity in yeast and human.

    PubMed

    Kuchaiev, Oleksii; Przulj, Natasa

    2011-05-15

    High-throughput methods for detecting molecular interactions have produced large sets of biological network data, with much more yet to come. Analogous to sequence alignment, efficient and reliable network alignment methods are expected to improve our understanding of biological systems. Unlike sequence alignment, however, network alignment is computationally intractable, so devising efficient network alignment heuristics is currently a foremost challenge in computational biology. We introduce a novel network alignment algorithm, called Matching-based Integrative GRAph ALigner (MI-GRAAL), which can integrate any number and type of similarity measures between network nodes (e.g. proteins), including, but not limited to, any topological network similarity measure, sequence similarity, functional similarity and structural similarity. Hence, we resolve the ties in similarity measures and find a combination of similarity measures yielding the largest contiguous (i.e. connected) and biologically sound alignments. MI-GRAAL exposes the largest functional, connected regions of protein-protein interaction (PPI) network similarity to date: surprisingly, it reveals that 77.7% of proteins in the baker's yeast high-confidence PPI network participate in such a subnetwork that is fully contained in the human high-confidence PPI network. This is the first demonstration that species as diverse as yeast and human contain such large, contiguous regions of global network similarity. We apply MI-GRAAL's alignments to predict functions of un-annotated proteins in yeast, human and bacteria, validating our predictions against the literature. Furthermore, using network alignment scores for the PPI networks of different herpes viruses, we reconstruct their phylogenetic relationship. This is the first time that a phylogeny has been exactly reconstructed from purely topological alignments of PPI networks. Supplementary files and MI-GRAAL executables: http://bio-nets.doc.ic.ac.uk/MI-GRAAL/.
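
    The idea of combining node-similarity measures with connected, seed-and-extend matching can be sketched as follows. This is a simplified toy, not MI-GRAAL itself: a single degree-similarity score stands in for the integrated topological/sequence/functional measures, and a greedy frontier keeps the alignment connected.

```python
# Toy seed-and-extend network alignment on adjacency-list graphs.
# Degree similarity is a stand-in for MI-GRAAL's combined measures.

def degree_sim(g1, g2, u, v):
    d1, d2 = len(g1[u]), len(g2[v])
    return min(d1, d2) / max(d1, d2) if max(d1, d2) else 1.0

def greedy_align(g1, g2):
    """Map nodes of g1 to nodes of g2, seeding with the best-scoring
    pair and extending across neighbourhoods (keeps alignments connected)."""
    mapping, used = {}, set()
    seed = max(((u, v) for u in g1 for v in g2),
               key=lambda p: degree_sim(g1, g2, *p))
    frontier = [seed]
    while frontier:
        u, v = frontier.pop(0)
        if u in mapping or v in used:
            continue
        mapping[u] = v
        used.add(v)
        # extend: try to match neighbours of u to neighbours of v
        cands = sorted(((nu, nv) for nu in g1[u] for nv in g2[v]
                        if nu not in mapping and nv not in used),
                       key=lambda p: -degree_sim(g1, g2, *p))
        frontier.extend(cands)
    return mapping

# Two toy 'PPI networks' (adjacency lists) with the same star shape.
g1 = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
g2 = {"x": ["y", "z"], "y": ["x"], "z": ["x"]}
m = greedy_align(g1, g2)
```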

  4. 04-ERD-052-Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loots, G G; Ovcharenko, I; Collette, N

    2007-02-26

    Generating the sequence of the human genome represents a colossal achievement for science and mankind. The technical use of the human genome project information holds great promise to cure disease, prevent bioterror threats, and teach us about human origins. Yet converting the sequence data into biologically meaningful information has not been immediately obvious, and we are still in the preliminary stages of understanding how the genome is organized, what the functional building blocks are and how these sequences mediate complex biological processes. The overarching goal of this program was to develop novel methods and high-throughput strategies for determining the functions of ''anonymous'' human genes that are evolutionarily deeply conserved in other vertebrates. We coupled analytical tool development and computational predictions regarding gene function with novel high-throughput experimental strategies and tested biological predictions in the laboratory. The tools required for comparative genomic data-mining are fundamentally the same whether they are applied to scientific studies of related microbes or the search for functions of novel human genes. For this reason the tools, conceptual framework and the coupled informatics-experimental biology paradigm we developed in this LDRD have many potential scientific applications relevant to LLNL multidisciplinary research in bio-defense, bioengineering, bionanosciences and microbial and environmental genomics.

  5. Combining wet and dry research: experience with model development for cardiac mechano-electric structure-function studies

    PubMed Central

    Quinn, T. Alexander; Kohl, Peter

    2013-01-01

    Since the development of the first mathematical cardiac cell model 50 years ago, computational modelling has become an increasingly powerful tool for the analysis of data and for the integration of information related to complex cardiac behaviour. Current models build on decades of iteration between experiment and theory, representing a collective understanding of cardiac function. All models, whether computational, experimental, or conceptual, are simplified representations of reality and, like tools in a toolbox, suitable for specific applications. Their range of applicability can be explored (and expanded) by iterative combination of ‘wet’ and ‘dry’ investigation, where experimental or clinical data are used to first build and then validate computational models (allowing integration of previous findings, quantitative assessment of conceptual models, and projection across relevant spatial and temporal scales), while computational simulations are utilized for plausibility assessment, hypotheses-generation, and prediction (thereby defining further experimental research targets). When implemented effectively, this combined wet/dry research approach can support the development of a more complete and cohesive understanding of integrated biological function. This review illustrates the utility of such an approach, based on recent examples of multi-scale studies of cardiac structure and mechano-electric function. PMID:23334215

  6. Numerical simulations of flying and swimming of biological systems with the viscous vortex particle method

    NASA Astrophysics Data System (ADS)

    Eldredge, Jeff

    2005-11-01

    Many biological mechanisms of locomotion involve the interaction of a fluid with a deformable surface undergoing large unsteady motion. Analysis of such problems poses a significant challenge to conventional grid-based computational approaches. Particularly in the moderate Reynolds number regime where many insects and fish function, viscous and inertial processes are both important, and vorticity serves a crucial role. In this work, the viscous vortex particle method is shown to provide an efficient, intuitive simulation approach for investigation of these biological systems. In contrast with a grid-based approach, the method solves the Navier--Stokes equations by tracking computational particles that carry smooth blobs of vorticity and exchange strength with one another to account for viscous diffusion. Thus, computational resources are focused on the physically relevant features of the flow, and there is no need for artificial boundary conditions. Building from previously-developed techniques for the creation of vorticity to enforce no-throughflow and no-slip conditions, the present method is extended to problems of coupled fluid--body dynamics by enforcement of global conservation of momenta. The application to several two-dimensional model problems is demonstrated, including single and multiple flapping wings and free swimming of a three-linkage fish.
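
    A minimal sketch of the particle picture described above, assuming a Gaussian-regularized Biot-Savart kernel: particles carrying circulation ("blobs" of vorticity) are advected by the velocity they induce on one another, with no grid and no artificial boundary conditions. Viscous strength exchange and the vorticity-creation boundary treatment from the abstract are omitted for brevity.

```python
# Inviscid 2D vortex-particle step with a Gaussian-regularized kernel.
# This is an illustrative sketch, not the method's full implementation.

import math

def induced_velocity(particles, x, y, sigma=0.1):
    """Velocity at (x, y) induced by all vortex blobs."""
    u = v = 0.0
    for (px, py, gamma) in particles:
        dx, dy = x - px, y - py
        r2 = dx * dx + dy * dy
        if r2 == 0.0:
            continue  # skip self-induction
        # regularized 2D Biot-Savart kernel
        factor = gamma / (2.0 * math.pi * r2) * (1.0 - math.exp(-r2 / (2.0 * sigma ** 2)))
        u += -factor * dy
        v += factor * dx
    return u, v

def euler_step(particles, dt):
    """Advect all particles with forward Euler (grid-free)."""
    vels = [induced_velocity(particles, px, py) for (px, py, _) in particles]
    return [(px + dt * u, py + dt * v, g)
            for (px, py, g), (u, v) in zip(particles, vels)]

# Two co-rotating vortices of equal strength orbit their midpoint,
# conserving total circulation.
parts = [(-0.5, 0.0, 1.0), (0.5, 0.0, 1.0)]
for _ in range(100):
    parts = euler_step(parts, dt=0.01)
total_circulation = sum(g for (_, _, g) in parts)
mid_x = (parts[0][0] + parts[1][0]) / 2.0
mid_y = (parts[0][1] + parts[1][1]) / 2.0
```

    The particle-strength-exchange diffusion term mentioned in the abstract would add a second update in which nearby particles trade circulation, approximating the viscous Laplacian.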

  7. In silico evolution of the hunchback gene indicates redundancy in cis-regulatory organization and spatial gene expression

    PubMed Central

    Zagrijchuk, Elizaveta A.; Sabirov, Marat A.; Holloway, David M.; Spirov, Alexander V.

    2014-01-01

    Biological development depends on the coordinated expression of genes in time and space. Developmental genes have extensive cis-regulatory regions which control their expression. These regions are organized in a modular manner, with different modules controlling expression at different times and locations. Both how modularity evolved and what function it serves are open questions. We present a computational model for the cis-regulation of the hunchback (hb) gene in the fruit fly (Drosophila). We simulate evolution (using an evolutionary computation approach from computer science) to find the optimal cis-regulatory arrangements for fitting experimental hb expression patterns. We find that the cis-regulatory region tends to readily evolve modularity. These cis-regulatory modules (CRMs) do not tend to control single spatial domains, but show a multi-CRM/multi-domain correspondence. We find that the CRM-domain correspondence seen in Drosophila evolves with a high probability in our model, supporting the biological relevance of the approach. The partial redundancy resulting from multi-CRM control may confer some biological robustness against corruption of regulatory sequences. The technique developed on hb could readily be applied to other multi-CRM developmental genes. PMID:24712536

  8. Penrose-Hameroff orchestrated objective-reduction proposal for human consciousness is not biologically feasible.

    PubMed

    McKemmish, Laura K; Reimers, Jeffrey R; McKenzie, Ross H; Mark, Alan E; Hush, Noel S

    2009-08-01

    Penrose and Hameroff have argued that conventional models of brain function based on neural networks alone cannot account for human consciousness, claiming that quantum-computation elements are also required. Specifically, in their Orchestrated Objective Reduction (Orch OR) model [R. Penrose and S. R. Hameroff, J. Conscious. Stud. 2, 99 (1995)], it is postulated that microtubules act as quantum processing units, with individual tubulin dimers forming the computational elements. This model requires that the tubulin be able to switch between alternative conformational states in a coherent manner, and that this process be rapid on the physiological time scale. Here, the biological feasibility of the Orch OR proposal is examined in light of recent experimental studies on microtubule assembly and dynamics. It is shown that the tubulins do not possess the essential properties required for the Orch OR proposal, as originally formulated, to hold. Further, we also consider recent progress in the understanding of long-lived coherent motions in biological systems, a feature critical to Orch OR, and show that no reformulation of the proposal based on known physical paradigms could lead to quantum computing within microtubules. Hence, the Orch OR model is not a feasible explanation of the origin of consciousness.

  9. Step-by-Step Simulation of Radiation Chemistry Using Green Functions for Diffusion-Influenced Reactions

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2011-01-01

    Radiolytic species are formed approximately 1 ps after the passage of ionizing radiation through matter. After their formation, they diffuse and react chemically with other radiolytic species and neighboring biological molecules, leading to various types of oxidative damage. Therefore, the simulation of radiation chemistry is of considerable importance for understanding how radiolytic species damage biological molecules [1]. The step-by-step simulation of chemical reactions is difficult because the radiolytic species are distributed non-homogeneously in the medium. Consequently, computational approaches based on Green functions for diffusion-influenced reactions should be used [2]. Recently, Green functions for more complex types of reactions have been published [3-4]. We have developed exact random variate generators for these Green functions [5], which will allow us to use them in radiation chemistry codes. Moreover, simulating chemistry using Green functions is computationally very demanding, because the probabilities of reactions between each pair of particles must be evaluated at each timestep [2]. This kind of problem is well suited to General-Purpose Graphics Processing Units (GPGPUs), which can handle a large number of similar calculations simultaneously. These new developments will allow us to include more complex reactions in chemistry codes and to improve the calculation time. This code should be important for linking radiation track structure simulations and DNA damage models.
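
    For the simplest case, the free-space diffusion Green function is a Gaussian, so step-by-step propagation samples each coordinate displacement from N(0, 2D·dt). The sketch below (illustrative parameter values of the order used in water radiolysis, not the authors' code) shows only this propagation step; the pairwise reaction probabilities that make the full simulation expensive are omitted.

```python
# Step-by-step Brownian propagation: sampling the free-diffusion
# Green function (a Gaussian) for an ensemble of 3D particles.

import math
import random

def diffuse_step(positions, D, dt, rng):
    """Advance 3D particle positions by one Brownian timestep."""
    s = math.sqrt(2.0 * D * dt)   # per-coordinate standard deviation
    return [tuple(x + rng.gauss(0.0, s) for x in p) for p in positions]

rng = random.Random(42)
D = 2.8e-9     # m^2/s, illustrative diffusion coefficient
dt = 1e-12     # 1 ps timestep, as in the abstract's timescale
pos = [(0.0, 0.0, 0.0)] * 2000
for _ in range(100):
    pos = diffuse_step(pos, D, dt, rng)

# In 3D, the mean squared displacement should approach 6 D t.
t = 100 * dt
msd = sum(x * x + y * y + z * z for (x, y, z) in pos) / len(pos)
```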

  10. Detection of Low-order Curves in Images using Biologically-plausible Hardware

    DTIC Science & Technology

    2012-09-29


  11. Application of Virtual Reality Technology in Biology Education.

    ERIC Educational Resources Information Center

    Shim, Kew-Cheol; Park, Jong-Seok; Kim, Hyun-Sup; Kim, Jae-Hyun; Park, Young-Chul; Ryu, Hai-Il

    2003-01-01

    Reports on the findings of a study designed to develop three-dimensional virtual reality technology (VRT) learning programs for middle school students and evaluate the program's educational value. Focuses on the topic of structure and function of the eye. Concludes that VRT simulations allow comfortable interaction with computers and increase the…

  12. Frustration in Biomolecules

    PubMed Central

    Ferreiro, Diego U.; Komives, Elizabeth A.; Wolynes, Peter G.

    2014-01-01

    Biomolecules are the prime information processing elements of living matter. Most of these inanimate systems are polymers that compute their own structures and dynamics using as input the seemingly random character strings of their sequence, following which they coalesce and perform integrated cellular functions. In large computational systems with finite interaction codes, the appearance of conflicting goals is inevitable. Simple conflicting forces can lead to quite complex structures and behaviors, leading to the concept of frustration in condensed matter. We present here some basic ideas about frustration in biomolecules: how the frustration concept leads to a better appreciation of many aspects of the architecture of biomolecules, and how biomolecular structure connects to function. These ideas are simultaneously both seductively simple and perilously subtle to grasp completely. The energy landscape theory of protein folding provides a framework for quantifying frustration in large systems and has been implemented at many levels of description. We first review the notion of frustration from the areas of abstract logic and its uses in simple condensed matter systems. We then discuss how the frustration concept applies specifically to heteropolymers, testing folding landscape theory in computer simulations of protein models and in experimentally accessible systems. Studying aspects of frustration averaged over many proteins provides ways to infer energy functions useful for reliable structure prediction. We discuss how frustration affects folding mechanisms. We review how a large part of the biological functions of proteins are related to subtle local physical frustration effects and how frustration influences the appearance of metastable states, the nature of binding processes, catalysis and allosteric transitions. We hope to illustrate how frustration is a fundamental concept in relating function to structural biology. PMID:25225856
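
    The condensed-matter notion of frustration the review builds on can be seen in the smallest possible example: three Ising spins with pairwise antiferromagnetic couplings. No configuration can satisfy all three bonds, so at least one interaction is always "frustrated" and the ground state is degenerate.

```python
# Frustration in the antiferromagnetic Ising triangle: enumerate all
# spin configurations and count unsatisfied (equal-spin) bonds.

from itertools import product

bonds = [(0, 1), (1, 2), (0, 2)]   # antiferromagnetic: satisfied if spins differ

def unsatisfied(spins):
    return sum(1 for i, j in bonds if spins[i] == spins[j])

results = {spins: unsatisfied(spins) for spins in product((-1, 1), repeat=3)}
min_frustrated = min(results.values())   # best any configuration can do
ground_states = [s for s, u in results.items() if u == min_frustrated]
```

    Every configuration leaves at least one bond unsatisfied, and six degenerate ground states tie at one frustrated bond each; the same conflict-counting logic underlies frustration measures for heteropolymer energy landscapes.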

  13. Integrative structure modeling with the Integrative Modeling Platform.

    PubMed

    Webb, Benjamin; Viswanath, Shruthi; Bonomi, Massimiliano; Pellarin, Riccardo; Greenberg, Charles H; Saltzberg, Daniel; Sali, Andrej

    2018-01-01

    Building models of a biological system that are consistent with the myriad data available is one of the key challenges in biology. Modeling the structure and dynamics of macromolecular assemblies, for example, can give insights into how biological systems work, evolved, might be controlled, and even designed. Integrative structure modeling casts the building of structural models as a computational optimization problem, for which information about the assembly is encoded into a scoring function that evaluates candidate models. Here, we describe our open source software suite for integrative structure modeling, Integrative Modeling Platform (https://integrativemodeling.org), and demonstrate its use. © 2017 The Protein Society.

  14. Systems Toxicology: From Basic Research to Risk Assessment

    PubMed Central

    2014-01-01

    Systems Toxicology is the integration of classical toxicology with quantitative analysis of large networks of molecular and functional changes occurring across multiple levels of biological organization. Society demands increasingly close scrutiny of the potential health risks associated with exposure to chemicals present in our everyday life, leading to an increasing need for more predictive and accurate risk-assessment approaches. Developing such approaches requires a detailed mechanistic understanding of the ways in which xenobiotic substances perturb biological systems and lead to adverse outcomes. Thus, Systems Toxicology approaches offer modern strategies for gaining such mechanistic knowledge by combining advanced analytical and computational tools. Furthermore, Systems Toxicology is a means for the identification and application of biomarkers for improved safety assessments. In Systems Toxicology, quantitative systems-wide molecular changes in the context of an exposure are measured, and a causal chain of molecular events linking exposures with adverse outcomes (i.e., functional and apical end points) is deciphered. Mathematical models are then built to describe these processes in a quantitative manner. The integrated data analysis leads to the identification of how biological networks are perturbed by the exposure and enables the development of predictive mathematical models of toxicological processes. This perspective integrates current knowledge regarding bioanalytical approaches, computational analysis, and the potential for improved risk assessment. PMID:24446777

  15. Systems toxicology: from basic research to risk assessment.

    PubMed

    Sturla, Shana J; Boobis, Alan R; FitzGerald, Rex E; Hoeng, Julia; Kavlock, Robert J; Schirmer, Kristin; Whelan, Maurice; Wilks, Martin F; Peitsch, Manuel C

    2014-03-17

    Systems Toxicology is the integration of classical toxicology with quantitative analysis of large networks of molecular and functional changes occurring across multiple levels of biological organization. Society demands increasingly close scrutiny of the potential health risks associated with exposure to chemicals present in our everyday life, leading to an increasing need for more predictive and accurate risk-assessment approaches. Developing such approaches requires a detailed mechanistic understanding of the ways in which xenobiotic substances perturb biological systems and lead to adverse outcomes. Thus, Systems Toxicology approaches offer modern strategies for gaining such mechanistic knowledge by combining advanced analytical and computational tools. Furthermore, Systems Toxicology is a means for the identification and application of biomarkers for improved safety assessments. In Systems Toxicology, quantitative systems-wide molecular changes in the context of an exposure are measured, and a causal chain of molecular events linking exposures with adverse outcomes (i.e., functional and apical end points) is deciphered. Mathematical models are then built to describe these processes in a quantitative manner. The integrated data analysis leads to the identification of how biological networks are perturbed by the exposure and enables the development of predictive mathematical models of toxicological processes. This perspective integrates current knowledge regarding bioanalytical approaches, computational analysis, and the potential for improved risk assessment.

  16. The post-genomic era of biological network alignment.

    PubMed

    Faisal, Fazle E; Meng, Lei; Crawford, Joseph; Milenković, Tijana

    2015-12-01

    Biological network alignment aims to find regions of topological and functional (dis)similarities between molecular networks of different species. Then, network alignment can guide the transfer of biological knowledge from well-studied model species to less well-studied species between conserved (aligned) network regions, thus complementing valuable insights that have already been provided by genomic sequence alignment. Here, we review computational challenges behind the network alignment problem, existing approaches for solving the problem, ways of evaluating their alignment quality, and the approaches' biomedical applications. We discuss recent innovative efforts of improving the existing view of network alignment. We conclude with open research questions in comparative biological network research that could further our understanding of principles of life, evolution, disease, and therapeutics.

  17. Toward Engineering Synthetic Microbial Metabolism

    PubMed Central

    McArthur, George H.; Fong, Stephen S.

    2010-01-01

    The generation of well-characterized parts and the formulation of biological design principles in synthetic biology are laying the foundation for more complex and advanced microbial metabolic engineering. Improvements in de novo DNA synthesis and codon-optimization alone are already contributing to the manufacturing of pathway enzymes with improved or novel function. Further development of analytical and computer-aided design tools should accelerate the forward engineering of precisely regulated synthetic pathways by providing a standard framework for the predictable design of biological systems from well-characterized parts. In this review we discuss the current state of synthetic biology within a four-stage framework (design, modeling, synthesis, analysis) and highlight areas requiring further advancement to facilitate true engineering of synthetic microbial metabolism. PMID:20037734

  18. Computational approaches for de novo design and redesign of metal-binding sites on proteins.

    PubMed

    Akcapinar, Gunseli Bayram; Sezerman, Osman Ugur

    2017-04-28

    Metal ions play pivotal roles in protein structure, function and stability. The functional and structural diversity of proteins in nature expanded with the incorporation of metal ions or clusters into proteins. Approximately one-third of the proteins in the databases contain metal ions. Many biological and chemical processes in nature involve metal ion-binding proteins, also known as metalloproteins. Many cellular reactions that underpin life require metalloproteins, and most of the remarkable, complex chemical transformations are catalysed by metalloenzymes. Realization of the importance of metal-binding sites in a variety of cellular events led to the advancement of various computational methods for their prediction and characterization. Furthermore, as the structural and functional knowledge base about metalloproteins expands with advances in computational and experimental fields, the focus of research is now shifting towards de novo design and redesign of metalloproteins to extend nature's own diversity beyond its limits. In this review, we focus on the computational toolbox for the prediction of metal ion-binding sites, de novo metalloprotein design and redesign. We also give examples of tailor-made artificial metalloproteins designed with the computational toolbox. © 2017 The Author(s).

  19. A network-based multi-target computational estimation scheme for anticoagulant activities of compounds.

    PubMed

    Li, Qian; Li, Xudong; Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-03-22

    Traditional virtual screening methods emphasize the predicted binding affinity between a drug molecule and a single disease-related target rather than phenotypic data of the drug against the disease system as a whole, which is often less effective for discovering drugs to treat complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods of computationally estimating the overall efficacy of a compound in a complex disease system are needed, given the distinct weights of different targets in a biological process and the view that partial inhibition of several targets can be more efficient than complete inhibition of a single target. We developed a novel approach that integrates affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From network efficiency calculations for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by the complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological events in the system. Furthermore, the method combining network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products, respectively. The good correlation (r = 0.671) between the experimental data and the decrease in network efficiency suggests that the approach could be a promising computational systems biology tool to aid the identification of anticoagulant activities of compounds in drug discovery. This article proposes a network-based multi-target computational estimation method for anticoagulant activities of compounds that combines network efficiency analysis with scoring functions from molecular docking.
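    The network-fragility idea above can be sketched directly: global network efficiency is the average inverse shortest-path length over ordered node pairs, and a node's fragility is the efficiency lost when it is removed. A minimal sketch, assuming a hypothetical toy cascade rather than the authors' actual clotting-network model:

    ```python
    from collections import deque

    def global_efficiency(adj):
        """Average inverse shortest-path length over all ordered node pairs."""
        nodes = list(adj)
        n = len(nodes)
        total = 0.0
        for src in nodes:
            # BFS from src gives unweighted shortest-path distances.
            dist = {src: 0}
            queue = deque([src])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            total += sum(1.0 / d for node, d in dist.items() if node != src)
        return total / (n * (n - 1))

    def node_fragility(adj, node):
        """Efficiency lost when `node` (and its edges) is removed."""
        reduced = {u: [v for v in nbrs if v != node]
                   for u, nbrs in adj.items() if u != node}
        return global_efficiency(adj) - global_efficiency(reduced)

    # Toy directed cascade (hypothetical labels, not the actual clotting network).
    cascade = {
        "XII": ["XI"], "XI": ["IX"], "IX": ["X"], "VIII": ["X"],
        "X": ["II"], "II": ["I"], "I": [],
    }
    ranking = sorted(cascade, key=lambda v: node_fragility(cascade, v), reverse=True)
    ```

    Ranking nodes by this fragility score is the network-level half of such an approach; docking scores would supply the per-target affinity half.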

  20. A Network-Based Multi-Target Computational Estimation Scheme for Anticoagulant Activities of Compounds

    PubMed Central

    Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-01-01

    Background Traditional virtual screening methods emphasize the predicted binding affinity between a drug molecule and a single disease-related target rather than phenotypic data of the drug against the disease system as a whole, which is often less effective for discovering drugs to treat complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods of computationally estimating the overall efficacy of a compound in a complex disease system are needed, given the distinct weights of different targets in a biological process and the view that partial inhibition of several targets can be more efficient than complete inhibition of a single target. Methodology We developed a novel approach that integrates affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From network efficiency calculations for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by the complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological events in the system. Furthermore, the method combining network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products, respectively. The good correlation (r = 0.671) between the experimental data and the decrease in network efficiency suggests that the approach could be a promising computational systems biology tool to aid the identification of anticoagulant activities of compounds in drug discovery. Conclusions This article proposes a network-based multi-target computational estimation method for anticoagulant activities of compounds that combines network efficiency analysis with scoring functions from molecular docking. PMID:21445339

  1. ADAM: analysis of discrete models of biological systems using computer algebra.

    PubMed

    Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard

    2011-07-20

    Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity of analyzing the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. Software tools to analyze discrete models exist, but they either lack the algorithmic functionality to analyze complex models deterministically or are inaccessible to many users because they require understanding of the underlying algorithm and implementation, lack a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems, and there is a need in the biological community for accessible, efficient analysis tools. ADAM provides these analysis methods, based on mathematical algorithms, as a web-based tool supporting several input formats; because it is platform independent as a web service and requires no understanding of the underlying mathematics, it makes the analysis of complex models accessible to a larger community.
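    The attractor identification that ADAM performs algebraically can be illustrated by brute force on a tiny Boolean network. This is a sketch with hypothetical update rules; ADAM's actual method solves polynomial systems over finite fields rather than enumerating states:

    ```python
    from itertools import product

    # Toy 3-gene Boolean network (hypothetical rules, not from ADAM's examples).
    # Each update function maps the full state (x0, x1, x2) to one gene's next value.
    rules = [
        lambda s: s[1],            # x0' = x1
        lambda s: s[0] and s[2],   # x1' = x0 AND x2
        lambda s: not s[1],        # x2' = NOT x1
    ]

    def step(state):
        return tuple(bool(f(state)) for f in rules)

    def attractors(n):
        """Enumerate all 2^n states and collect the cycles they fall into."""
        found = set()
        for start in product([False, True], repeat=n):
            seen = set()
            state = start
            while state not in seen:
                seen.add(state)
                state = step(state)
            # `state` re-entered the trajectory: extract the cycle from there.
            cycle = []
            cur = state
            while True:
                cycle.append(cur)
                cur = step(cur)
                if cur == state:
                    break
            found.add(frozenset(cycle))
        return found

    atts = attractors(3)
    ```

    For these rules every trajectory converges to the single steady state (False, False, True); real models may have several fixed points and limit cycles, and the exponential cost of this enumeration is exactly what ADAM's algebraic reformulation avoids.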

  2. Interaction entropy for protein-protein binding

    NASA Astrophysics Data System (ADS)

    Sun, Zhaoxi; Yan, Yu N.; Yang, Maoyou; Zhang, John Z. H.

    2017-03-01

    Protein-protein interactions are at the heart of signal transduction and are central to the function of protein machinery in biology. Highly specific protein-protein binding is quantitatively characterized by the binding free energy, whose accurate calculation from first principles is a grand challenge in computational biology. In this paper, we show how the interaction entropy approach, which was recently proposed for protein-ligand binding free energy calculation, can be applied to computing the entropic contribution to the protein-protein binding free energy. An explicit theoretical derivation of the interaction entropy approach for protein-protein interaction systems is given in detail from the basic definition. Extensive computational studies of a dozen realistic protein-protein interaction systems are carried out using the present approach, and the results for these systems are compared with those from the standard normal mode method. The application of the present method to protein-protein binding, as well as its limitations in numerical computation, are discussed. Our study and analysis of the results provide useful information for extracting the correct entropic contribution in protein-protein binding from molecular dynamics simulations.
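    The interaction entropy formula itself is compact: the entropic penalty is -TΔS = kT·ln⟨exp(βΔE_int)⟩, where ΔE_int is the fluctuation of the interaction energy about its mean and β = 1/kT. A minimal sketch, with hypothetical energy samples standing in for an MD trajectory:

    ```python
    import math

    KB = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

    def interaction_entropy(energies, temperature=300.0):
        """-T*dS from interaction-energy samples (kcal/mol), via
        -T*dS = kT * ln< exp(beta * (E_int - <E_int>)) >."""
        kt = KB * temperature
        beta = 1.0 / kt
        mean = sum(energies) / len(energies)
        boltz = sum(math.exp(beta * (e - mean)) for e in energies) / len(energies)
        return kt * math.log(boltz)

    # Hypothetical interaction-energy samples from an MD trajectory (kcal/mol).
    samples = [-45.2, -44.8, -46.1, -45.5, -44.9, -45.8]
    penalty = interaction_entropy(samples)
    ```

    By Jensen's inequality the result is never negative: larger fluctuations of the interaction energy yield a larger entropic penalty, with no normal-mode calculation required.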

  3. Interaction entropy for protein-protein binding.

    PubMed

    Sun, Zhaoxi; Yan, Yu N; Yang, Maoyou; Zhang, John Z H

    2017-03-28

    Protein-protein interactions are at the heart of signal transduction and are central to the function of protein machinery in biology. Highly specific protein-protein binding is quantitatively characterized by the binding free energy, whose accurate calculation from first principles is a grand challenge in computational biology. In this paper, we show how the interaction entropy approach, which was recently proposed for protein-ligand binding free energy calculation, can be applied to computing the entropic contribution to the protein-protein binding free energy. An explicit theoretical derivation of the interaction entropy approach for protein-protein interaction systems is given in detail from the basic definition. Extensive computational studies of a dozen realistic protein-protein interaction systems are carried out using the present approach, and the results for these systems are compared with those from the standard normal mode method. The application of the present method to protein-protein binding, as well as its limitations in numerical computation, are discussed. Our study and analysis of the results provide useful information for extracting the correct entropic contribution in protein-protein binding from molecular dynamics simulations.

  4. Discovering and understanding oncogenic gene fusions through data intensive computational approaches

    PubMed Central

    Latysheva, Natasha S.; Babu, M. Madan

    2016-01-01

    Abstract Although gene fusions have been recognized as important drivers of cancer for decades, our understanding of the prevalence and function of gene fusions has been revolutionized by the rise of next-generation sequencing, advances in bioinformatics theory and an increasing capacity for large-scale computational biology. The computational work on gene fusions has been vastly diverse, and the present state of the literature is fragmented. It will be fruitful to merge three camps of gene fusion bioinformatics that appear to rarely cross over: (i) data-intensive computational work characterizing the molecular biology of gene fusions; (ii) development research on fusion detection tools, candidate fusion prioritization algorithms and dedicated fusion databases and (iii) clinical research that seeks to either therapeutically target fusion transcripts and proteins or leverages advances in detection tools to perform large-scale surveys of gene fusion landscapes in specific cancer types. In this review, we unify these different—yet highly complementary and symbiotic—approaches with the view that increased synergy will catalyze advancements in gene fusion identification, characterization and significance evaluation. PMID:27105842

  5. Single-molecule protein sequencing through fingerprinting: computational assessment

    NASA Astrophysics Data System (ADS)

    Yao, Yao; Docter, Margreet; van Ginkel, Jetty; de Ridder, Dick; Joo, Chirlmin

    2015-10-01

    Proteins are vital in all biological systems as they constitute the main structural and functional components of cells. Recent advances in mass spectrometry have brought the promise of complete proteomics by helping draft the human proteome. Yet, this commonly used protein sequencing technique has fundamental limitations in sensitivity. Here we propose a method for single-molecule (SM) protein sequencing. A major challenge lies in the fact that proteins are composed of 20 different amino acids, which demands 20 molecular reporters. We computationally demonstrate that it suffices to measure only two types of amino acids to identify proteins and suggest an experimental scheme using SM fluorescence. When achieved, this highly sensitive approach will result in a paradigm shift in proteomics, with major impact in the biological and medical sciences.
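    The two-amino-acid idea can be sketched as a fingerprint lookup: reduce every sequence to the ordered pattern of just two labeled residue types and match an observed pattern against a reference proteome. The residue choice and the toy sequences below are assumptions for illustration:

    ```python
    def fingerprint(seq, labeled=("C", "K")):
        """Reduce a sequence to the ordered pattern of the two labeled residues."""
        return "".join(aa for aa in seq if aa in labeled)

    # Hypothetical reference proteome; which two residues carry the fluorescent
    # labels (here cysteine and lysine) is an assumption for illustration.
    proteome = {
        "protA": "MKTAYCLLKGH",
        "protB": "MGGACCTRKKL",
        "protC": "MTTLGGHHRRE",
    }
    index = {}
    for name, seq in proteome.items():
        index.setdefault(fingerprint(seq), []).append(name)

    observed = fingerprint("MKTAYCLLKGH")  # pattern read from a single molecule
    matches = index.get(observed, [])      # candidate identities
    ```

    The computational question the paper addresses is how often such two-letter patterns remain unique (or nearly unique) across a whole proteome, and how robust the lookup is to read errors.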

  6. Towards a 'siliconeural computer': technological successes and challenges.

    PubMed

    Hughes, Mark A; Shipston, Mike J; Murray, Alan F

    2015-07-28

    Electronic signals govern the function of both nervous systems and computers, albeit in different ways. As such, hybridizing both systems to create an iono-electric brain-computer interface is a realistic goal; and one that promises exciting advances in both heterotic computing and neuroprosthetics capable of circumventing devastating neuropathology. 'Neural networks' were, in the 1980s, viewed naively as a potential panacea for all computational problems that did not fit well with conventional computing. The field bifurcated during the 1990s into a highly successful and much more realistic machine learning community and an equally pragmatic, biologically oriented 'neuromorphic computing' community. Algorithms found in nature that use the non-synchronous, spiking nature of neuronal signals have been found to be (i) implementable efficiently in silicon and (ii) computationally useful. As a result, interest has grown in techniques that could create mixed 'siliconeural' computers. Here, we discuss potential approaches and focus on one particular platform using parylene-patterned silicon dioxide.

  7. Structural and Chemical Biology of Terpenoid Cyclases

    PubMed Central

    2017-01-01

    The year 2017 marks the twentieth anniversary of terpenoid cyclase structural biology: a trio of terpenoid cyclase structures reported together in 1997 were the first to set the foundation for understanding the enzymes largely responsible for the exquisite chemodiversity of more than 80000 terpenoid natural products. Terpenoid cyclases catalyze the most complex chemical reactions in biology, in that more than half of the substrate carbon atoms undergo changes in bonding and hybridization during a single enzyme-catalyzed cyclization reaction. The past two decades have witnessed structural, functional, and computational studies illuminating the modes of substrate activation that initiate the cyclization cascade, the management and manipulation of high-energy carbocation intermediates that propagate the cyclization cascade, and the chemical strategies that terminate the cyclization cascade. The role of the terpenoid cyclase as a template for catalysis is paramount to its function, and protein engineering can be used to reprogram the cyclization cascade to generate alternative and commercially important products. Here, I review key advances in terpenoid cyclase structural and chemical biology, focusing mainly on terpenoid cyclases and related prenyltransferases for which X-ray crystal structures have informed and advanced our understanding of enzyme structure and function. PMID:28841019

  8. Identification of Boolean Network Models From Time Series Data Incorporating Prior Knowledge.

    PubMed

    Leifeld, Thomas; Zhang, Zhihua; Zhang, Ping

    2018-01-01

    Motivation: Mathematical models occupy an important place in science and engineering. A model can help scientists explain the dynamic behavior of a system and understand the functionality of its components. Since the length of a time series and the number of replicates are limited by the cost of experiments, Boolean networks, as structurally simple and parameter-free logical models for gene regulatory networks, have attracted the interest of many scientists. To fit the biological context and to lower the data requirements, biological prior knowledge is taken into consideration during the inference procedure. The existing identification approaches in the literature can only deal with a subset of the possible types of prior knowledge. Results: We propose a new approach to identify Boolean networks from time series data incorporating prior knowledge, such as partial network structure, the canalizing property, and positive and negative unateness. Using the vector form of Boolean variables and applying a generalized matrix multiplication called the semi-tensor product (STP), each Boolean function can be equivalently converted into a matrix expression. On this basis, the identification problem is reformulated as an integer linear programming problem to reveal, in a computationally efficient way, the system matrix of a Boolean model whose dynamics are consistent with the important dynamics captured in the data. By using prior knowledge, the number of candidate functions can be reduced during inference. Hence, identification incorporating prior knowledge is especially suitable for small time series datasets and data without sufficient stimuli. The proposed approach is illustrated with a biological model of the oxidative stress response network. Conclusions: The combination of an efficient reformulation of the identification problem with the possibility of incorporating various types of prior knowledge enables the application of computational model inference to systems with a limited amount of time series data. The general applicability of this methodological approach makes it suitable for a variety of biological systems and of general interest for biological and medical research.
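    How prior knowledge prunes the candidate space can be sketched by brute force for a single target gene with two regulators. The data are hypothetical, and the paper's actual method uses the semi-tensor product and integer linear programming rather than enumeration:

    ```python
    from itertools import product

    # Hypothetical observations for one target gene with regulators (x0, x1):
    # each pair is (regulator state at time t, target value at time t+1).
    transitions = [((0, 0), 0), ((1, 1), 0)]  # states (0,1) and (1,0) unobserved

    def consistent(table):
        """Does a candidate truth table reproduce every observed transition?"""
        return all(table[inp] == out for inp, out in transitions)

    def positive_unate_in_x1(table):
        """Prior knowledge: raising x1 from 0 to 1 never decreases the output."""
        return all(table[(x0, 0)] <= table[(x0, 1)] for x0 in (0, 1))

    inputs = list(product((0, 1), repeat=2))
    tables = [dict(zip(inputs, outs)) for outs in product((0, 1), repeat=len(inputs))]

    without_prior = [t for t in tables if consistent(t)]
    candidates = [t for t in without_prior if positive_unate_in_x1(t)]
    # The prior halves the candidate set: the observation table[(1,1)] = 0
    # combined with unateness forces table[(1,0)] = 0 as well.
    ```

    With more regulators the truth-table space grows doubly exponentially, which is why the paper's matrix reformulation, rather than enumeration, is needed in practice.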

  9. ProMotE: an efficient algorithm for counting independent motifs in uncertain network topologies.

    PubMed

    Ren, Yuanfang; Sarkar, Aisharjya; Kahveci, Tamer

    2018-06-26

    Identifying motifs in biological networks is essential in uncovering key functions served by these networks. Finding non-overlapping motif instances is however a computationally challenging task. The fact that biological interactions are uncertain events further complicates the problem, as it makes the existence of an embedding of a given motif an uncertain event as well. In this paper, we develop a novel method, ProMotE (Probabilistic Motif Embedding), to count non-overlapping embeddings of a given motif in probabilistic networks. We utilize a polynomial model to capture the uncertainty. We develop three strategies to scale our algorithm to large networks. Our experiments demonstrate that our method scales to large networks in practical time with high accuracy where existing methods fail. Moreover, our experiments on cancer and degenerative disease networks show that our method helps in uncovering key functional characteristics of biological networks.
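    The role of edge uncertainty can be illustrated with a simpler quantity than ProMotE computes: the expected number of triangle embeddings in a probabilistic network, obtained by linearity of expectation. This sketch uses hypothetical edge probabilities; counting non-overlapping embeddings, the hard problem ProMotE's polynomial model addresses, is not handled here:

    ```python
    from itertools import combinations

    # Hypothetical edge-existence probabilities of a small probabilistic network.
    prob = {
        ("a", "b"): 0.9, ("b", "c"): 0.8, ("a", "c"): 0.5,
        ("c", "d"): 0.7, ("b", "d"): 0.6,
    }

    def p_edge(u, v):
        """Probability that the undirected edge {u, v} exists (0 if absent)."""
        return prob.get((u, v), prob.get((v, u), 0.0))

    nodes = sorted({n for edge in prob for n in edge})

    # By linearity of expectation, the expected number of triangle embeddings is
    # the sum, over node triples, of the product of the three edge probabilities.
    expected_triangles = sum(
        p_edge(u, v) * p_edge(v, w) * p_edge(u, w)
        for u, v, w in combinations(nodes, 3)
    )
    ```

    Expectation decomposes over embeddings independently, but the non-overlapping count does not, since embeddings compete for shared nodes and edges; that coupling is what makes the problem computationally challenging.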

  10. Revisiting the Quantum Brain Hypothesis: Toward Quantum (Neuro)biology?

    PubMed Central

    Jedlicka, Peter

    2017-01-01

    The nervous system is a non-linear dynamical complex system with many feedback loops. Conventional wisdom holds that quantum fluctuations in the brain are self-averaging and thus functionally negligible. However, this intuition may be misleading in the case of non-linear complex systems. Because of extreme sensitivity to initial conditions, complex systems can amplify microscopic fluctuations, thereby affecting the system's behavior. In this way quantum dynamics might influence neuronal computations. Accumulating evidence in non-neuronal systems indicates that biological evolution is able to exploit quantum stochasticity. The recent rise of quantum biology as an emerging field at the border between quantum physics and the life sciences suggests that quantum events could play a non-trivial role in neuronal cells as well. Direct experimental evidence for this is still missing, but future research should address the possibility that quantum events contribute to the extremely high complexity, variability and computational power of neuronal dynamics. PMID:29163041

  11. Revisiting the Quantum Brain Hypothesis: Toward Quantum (Neuro)biology?

    PubMed

    Jedlicka, Peter

    2017-01-01

    The nervous system is a non-linear dynamical complex system with many feedback loops. Conventional wisdom holds that quantum fluctuations in the brain are self-averaging and thus functionally negligible. However, this intuition may be misleading in the case of non-linear complex systems. Because of extreme sensitivity to initial conditions, complex systems can amplify microscopic fluctuations, thereby affecting the system's behavior. In this way quantum dynamics might influence neuronal computations. Accumulating evidence in non-neuronal systems indicates that biological evolution is able to exploit quantum stochasticity. The recent rise of quantum biology as an emerging field at the border between quantum physics and the life sciences suggests that quantum events could play a non-trivial role in neuronal cells as well. Direct experimental evidence for this is still missing, but future research should address the possibility that quantum events contribute to the extremely high complexity, variability and computational power of neuronal dynamics.

  12. Underlying Principles of Natural Selection in Network Evolution: Systems Biology Approach

    PubMed Central

    Chen, Bor-Sen; Wu, Wei-Sheng

    2007-01-01

    Systems biology is a rapidly expanding field that integrates diverse areas of science such as physics, engineering, computer science, mathematics, and biology toward the goal of elucidating the underlying principles of hierarchical metabolic and regulatory systems in the cell, and ultimately leading to predictive understanding of cellular response to perturbations. Because post-genomics research is taking place throughout the tree of life, comparative approaches offer a way for combining data from many organisms to shed light on the evolution and function of biological networks from the gene to the organismal level. Therefore, systems biology can build on decades of theoretical work in evolutionary biology, and at the same time evolutionary biology can use the systems biology approach to go in new uncharted directions. In this study, we present a review of how the post-genomics era is adopting comparative approaches and dynamic system methods to understand the underlying design principles of network evolution and to shape the nascent field of evolutionary systems biology. Finally, the application of evolutionary systems biology to robust biological network designs is also discussed from the synthetic biology perspective. PMID:19468310

  13. Learning nucleic acids solving by bioinformatics problems.

    PubMed

    Nunes, Rhewter; Barbosa de Almeida Júnior, Edivaldo; Pessoa Pinto de Menezes, Ivandilson; Malafaia, Guilherme

    2015-01-01

    The article describes the development of a new approach to teaching molecular biology to undergraduate biology students. The 34 students who participated in this research belonged to the first period of the Biological Sciences teaching course of the Instituto Federal Goiano at Urutaí Campus, Brazil. They were registered in Cell Biology in the first semester of 2013. They received four 55-minute expository/dialogued lectures covering the content of "structure and functions of nucleic acids". Later, the students were invited to attend four meetings (in a computer laboratory) in which some concepts of bioinformatics were presented and problems from the Rosalind platform were solved. The observations we report here are useful as broad groundwork for developing new research. An interesting possibility is research into the effects of bioinformatics interventions that improve molecular biology learning. © 2015 The International Union of Biochemistry and Molecular Biology.

  14. Mammalian synthetic biology for studying the cell.

    PubMed

    Mathur, Melina; Xiang, Joy S; Smolke, Christina D

    2017-01-02

    Synthetic biology is advancing the design of genetic devices that enable the study of cellular and molecular biology in mammalian cells. These genetic devices use diverse regulatory mechanisms to both examine cellular processes and achieve precise and dynamic control of cellular phenotype. Synthetic biology tools provide novel functionality to complement the examination of natural cell systems, including engineered molecules with specific activities and model systems that mimic complex regulatory processes. Continued development of quantitative standards and computational tools will expand capacities to probe cellular mechanisms with genetic devices to achieve a more comprehensive understanding of the cell. In this study, we review synthetic biology tools that are being applied to effectively investigate diverse cellular processes, regulatory networks, and multicellular interactions. We also discuss current challenges and future developments in the field that may transform the types of investigation possible in cell biology. © 2017 Mathur et al.

  15. MetNetAPI: A flexible method to access and manipulate biological network data from MetNet

    PubMed Central

    2010-01-01

    Background Convenient programmatic access to different biological databases allows automated integration of scientific knowledge. Many databases support a function to download files or data snapshots, or a webservice that offers "live" data. However, the functionality that a database offers cannot be represented in a static data download file, and webservices may consume considerable computational resources from the host server. Results MetNetAPI is a versatile Application Programming Interface (API) to the MetNetDB database. It abstracts, captures and retains operations away from a biological network repository and website. A range of database functions, previously only available online, can be immediately (and independently from the website) applied to a dataset of interest. Data is available in four layers: molecular entities, localized entities (linked to a specific organelle), interactions, and pathways. Navigation between these layers is intuitive (e.g. one can request the molecular entities in a pathway, as well as request in what pathways a specific entity participates). Data retrieval can be customized: Network objects allow the construction of new and integration of existing pathways and interactions, which can be uploaded back to our server. In contrast to webservices, the computational demand on the host server is limited to processing data-related queries only. Conclusions An API provides several advantages to a systems biology software platform. MetNetAPI illustrates an interface with a central repository of data that represents the complex interrelationships of a metabolic and regulatory network. As an alternative to data-dumps and webservices, it allows access to a current and "live" database and exposes analytical functions to application developers. Yet it only requires limited resources on the server-side (thin server/fat client setup). 
    The API is available for Java, Microsoft.NET and R programming environments and offers flexible query and broad data-retrieval methods. Data retrieval can be customized to client needs, and the API offers a framework to construct and manipulate user-defined networks. The design principles can be used as a template to build programmable interfaces for other biological databases. The API software and tutorials are available at http://www.metnetonline.org/api. PMID:21083943

  16. 78 FR 59927 - Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-30

    ... Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology..., Computational, and Systems Biology [External Review Draft]'' (EPA/600/R-13/214A). EPA is also announcing that... Advances in Molecular, Computational, and Systems Biology [External Review Draft]'' is available primarily...

  17. 78 FR 68058 - Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology... Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology..., computational, and systems biology data can better inform risk assessment. This draft document is available for...

  18. Density functional theory across chemistry, physics and biology.

    PubMed

    van Mourik, Tanja; Bühl, Michael; Gaigeot, Marie-Pierre

    2014-03-13

    The past decades have seen density functional theory (DFT) evolve from a rising star in computational quantum chemistry to one of its major players. This Theme Issue, which comes half a century after the publication of the Hohenberg-Kohn theorems that laid the foundations of modern DFT, reviews progress and challenges in present-day DFT research. Rather than trying to be comprehensive, this Theme Issue attempts to give a flavour of selected aspects of DFT.

  19. AHaH Computing–From Metastable Switches to Attractors to Machine Learning

    PubMed Central

    Nugent, Michael Alexander; Molter, Timothy Wesley

    2014-01-01

    Modern computing architecture based on the separation of memory and processing leads to a well known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures–all key capabilities of biological nervous systems and modern machine learning algorithms with real world application. PMID:24520315

  20. Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition

    NASA Astrophysics Data System (ADS)

    Fitch, W. Tecumseh

    2014-09-01

    Progress in understanding cognition requires a quantitative, theoretical framework, grounded in the other natural sciences and able to bridge between implementational, algorithmic and computational levels of explanation. I review recent results in neuroscience and cognitive biology that, when combined, provide key components of such an improved conceptual framework for contemporary cognitive science. Starting at the neuronal level, I first discuss the contemporary realization that single neurons are powerful tree-shaped computers, which implies a reorientation of computational models of learning and plasticity to a lower, cellular, level. I then turn to predictive systems theory (predictive coding and prediction-based learning) which provides a powerful formal framework for understanding brain function at a more global level. Although most formal models concerning predictive coding are framed in associationist terms, I argue that modern data necessitate a reinterpretation of such models in cognitive terms: as model-based predictive systems. Finally, I review the role of the theory of computation and formal language theory in the recent explosion of comparative biological research attempting to isolate and explore how different species differ in their cognitive capacities. Experiments to date strongly suggest that there is an important difference between humans and most other species, best characterized cognitively as a propensity by our species to infer tree structures from sequential data. Computationally, this capacity entails generative capacities above the regular (finite-state) level; implementationally, it requires some neural equivalent of a push-down stack. I dub this unusual human propensity "dendrophilia", and make a number of concrete suggestions about how such a system may be implemented in the human brain, about how and why it evolved, and what this implies for models of language acquisition. I conclude that, although much remains to be done, a neurally-grounded framework for theoretical cognitive science is within reach that can move beyond polarized debates and provide a more adequate theoretical future for cognitive biology.
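    The supra-regular capacity described here is classically illustrated by the language a^n b^n, which no finite-state machine recognizes but a single push-down stack handles. A minimal recognizer sketch (a textbook illustration, not from the paper):

```python
# Illustrative sketch: the language {a^n b^n} lies above the regular
# (finite-state) level; recognizing it needs unbounded memory, here a stack.

def accepts_anbn(s):
    """Push-down recognizer for a^n b^n (n >= 1)."""
    stack = []
    seen_b = False
    for ch in s:
        if ch == 'a':
            if seen_b:            # an 'a' after any 'b' is illegal
                return False
            stack.append('a')     # push one symbol per 'a'
        elif ch == 'b':
            seen_b = True
            if not stack:         # more b's than a's
                return False
            stack.pop()           # pop one symbol per 'b'
        else:
            return False
    return seen_b and not stack   # empty stack means the counts matched

print(accepts_anbn("aaabbb"), accepts_anbn("aabbb"))  # True False
```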

  1. Toward a computational framework for cognitive biology: unifying approaches from cognitive neuroscience and comparative cognition.

    PubMed

    Fitch, W Tecumseh

    2014-09-01

    Progress in understanding cognition requires a quantitative, theoretical framework, grounded in the other natural sciences and able to bridge between implementational, algorithmic and computational levels of explanation. I review recent results in neuroscience and cognitive biology that, when combined, provide key components of such an improved conceptual framework for contemporary cognitive science. Starting at the neuronal level, I first discuss the contemporary realization that single neurons are powerful tree-shaped computers, which implies a reorientation of computational models of learning and plasticity to a lower, cellular, level. I then turn to predictive systems theory (predictive coding and prediction-based learning) which provides a powerful formal framework for understanding brain function at a more global level. Although most formal models concerning predictive coding are framed in associationist terms, I argue that modern data necessitate a reinterpretation of such models in cognitive terms: as model-based predictive systems. Finally, I review the role of the theory of computation and formal language theory in the recent explosion of comparative biological research attempting to isolate and explore how different species differ in their cognitive capacities. Experiments to date strongly suggest that there is an important difference between humans and most other species, best characterized cognitively as a propensity by our species to infer tree structures from sequential data. Computationally, this capacity entails generative capacities above the regular (finite-state) level; implementationally, it requires some neural equivalent of a push-down stack. I dub this unusual human propensity "dendrophilia", and make a number of concrete suggestions about how such a system may be implemented in the human brain, about how and why it evolved, and what this implies for models of language acquisition. I conclude that, although much remains to be done, a neurally-grounded framework for theoretical cognitive science is within reach that can move beyond polarized debates and provide a more adequate theoretical future for cognitive biology. Copyright © 2014. Published by Elsevier B.V.

  2. Computational approaches to schizophrenia: A perspective on negative symptoms.

    PubMed

    Deserno, Lorenz; Heinz, Andreas; Schlagenhauf, Florian

    2017-08-01

    Schizophrenia is a heterogeneous spectrum disorder often associated with detrimental negative symptoms. In recent years, computational approaches to psychiatry have attracted growing attention. Negative symptoms have shown some overlap with general cognitive impairments and were also linked to impaired motivational processing in brain circuits implementing reward prediction. In this review, we outline how computational approaches may help to provide a better understanding of negative symptoms in terms of the potentially underlying behavioural and biological mechanisms. First, we describe the idea that negative symptoms could arise from a failure to represent reward expectations to enable flexible behavioural adaptation. It has been proposed that these impairments arise from a failure to use prediction errors to update expectations. Important previous studies focused on processing of so-called model-free prediction errors where learning is determined by past rewards only. However, learning and decision-making arise from multiple cognitive mechanisms functioning simultaneously, and dissecting them via well-designed tasks in conjunction with computational modelling is a promising avenue. Second, we move on to a proof-of-concept example on how generative models of functional imaging data from a cognitive task enable the identification of subgroups of patients mapping on different levels of negative symptoms. Combining the latter approach with behavioural studies regarding learning and decision-making may allow the identification of key behavioural and biological parameters distinctive for different dimensions of negative symptoms versus a general cognitive impairment. We conclude with an outlook on how this computational framework could, at some point, enrich future clinical studies. Copyright © 2016. Published by Elsevier B.V.
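    The model-free prediction errors referred to above are conventionally formalized as a delta-rule update, V <- V + alpha * (r - V), where learning is driven only by past rewards. A minimal generic sketch (not the authors' model):

```python
# Minimal model-free learning sketch: the value estimate V is updated by a
# reward prediction error delta = r - V, scaled by a learning rate alpha.

def update_value(V, reward, alpha=0.1):
    delta = reward - V        # reward prediction error
    return V + alpha * delta, delta

V = 0.0
for _ in range(50):           # repeated delivery of reward 1.0
    V, delta = update_value(V, 1.0)

print(round(V, 3))            # the estimate converges toward 1.0
```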

  3. Computational Intelligence for Medical Imaging Simulations.

    PubMed

    Chang, Victor

    2017-11-25

    This paper describes how to simulate medical imaging by computational intelligence to explore areas that cannot be easily achieved by traditional ways, including genes and proteins simulations related to cancer development and immunity. This paper has presented simulations and virtual inspections of BIRC3, BIRC6, CCL4, KLKB1 and CYP2A6 with their outputs and explanations, as well as brain segment intensity due to dancing. Our proposed MapReduce framework with the fusion algorithm can simulate medical imaging. The concept is very similar to the digital surface theories to simulate how biological units can get together to form bigger units, until the formation of the entire unit of biological subject. The M-Fusion and M-Update function by the fusion algorithm can achieve a good performance evaluation which can process and visualize up to 40 GB of data within 600 s. We conclude that computational intelligence can provide effective and efficient healthcare research offered by simulations and visualization.

  4. Approximating lens power.

    PubMed

    Kaye, Stephen B

    2009-04-01

    To provide a scalar measure of refractive error, based on geometric lens power through principal, orthogonal and oblique meridians, that is not limited to the paraxial and sag height approximations. A function is derived to model sections through the principal meridian of a lens, followed by rotation of the section through orthogonal and oblique meridians. Average focal length is determined using the definition for the average of a function. Average univariate power in the principal meridian (including spherical aberration) can be computed from the average of a function over the angle of incidence as determined by the parameters of the given lens, or adequately computed from an integrated series function. Average power through orthogonal and oblique meridians can be similarly determined using the derived formulae. The widely used computation for measuring refractive error, the spherical equivalent, introduces non-constant approximations, leading to a systematic bias. The equations proposed provide a good univariate representation of average lens power and are not subject to a systematic bias. They are particularly useful for the analysis of aggregate data, for correlating with biological treatment variables and for developing analyses that require a scalar equivalent representation of refractive power.
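    For context on the spherical equivalent mentioned above: in the purely paraxial treatment, the power of a sphero-cylindrical lens through the meridian at angle theta from the cylinder axis is P(theta) = S + C*sin^2(theta), and averaging this over all meridians gives S + C/2, i.e. the spherical equivalent. A quick numerical check of that identity (paraxial only; the paper's point is that this approximation carries a systematic bias):

```python
# Paraxial power through a meridian at angle theta from the cylinder axis:
# P(theta) = S + C * sin(theta)**2.  Its average over all meridians is the
# spherical equivalent S + C/2.
import math

def meridional_power(S, C, theta):
    return S + C * math.sin(theta) ** 2

def average_power(S, C, n=10000):
    thetas = [math.pi * k / n for k in range(n)]   # meridians over [0, pi)
    return sum(meridional_power(S, C, t) for t in thetas) / n

S, C = -2.0, 1.5                      # sphere -2.00 D, cylinder +1.50 D
print(round(average_power(S, C), 4))  # spherical equivalent S + C/2 = -1.25
```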

  5. Integrative computational approach for genome-based study of microbial lipid-degrading enzymes.

    PubMed

    Vorapreeda, Tayvich; Thammarongtham, Chinae; Laoteng, Kobkul

    2016-07-01

    Lipid-degrading or lipolytic enzymes have gained enormous attention in academic and industrial sectors. Several efforts are underway to discover new lipase enzymes from a variety of microorganisms with particular catalytic properties to be used for extensive applications. In addition, various tools and strategies have been implemented to unravel the functional relevance of the versatile lipid-degrading enzymes for special purposes. This review highlights the study of microbial lipid-degrading enzymes through an integrative computational approach. The identification of putative lipase genes from microbial genomes and metagenomic libraries using homology-based mining is discussed, with an emphasis on sequence analysis of conserved motifs and enzyme topology. Molecular modelling of three-dimensional structure on the basis of sequence similarity is shown to be a potential approach for exploring the structural and functional relationships of candidate lipase enzymes. The perspectives on a discriminative framework of cutting-edge tools and technologies, including bioinformatics, computational biology, functional genomics and functional proteomics, intended to facilitate rapid progress in understanding lipolysis mechanism and to discover novel lipid-degrading enzymes of microorganisms are discussed.
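    The motif-based mining described can be illustrated with the classical lipase/esterase catalytic pentapeptide G-X-S-X-G around the catalytic serine. A hedged sketch using a regular-expression scan (the sequences below are invented for the example):

```python
# Hedged illustration of motif-based mining: scan candidate protein sequences
# for the classical lipase G-X-S-X-G pentapeptide.  The sequences here are
# invented purely for the example.
import re

GXSXG = re.compile(r"G.S.G")   # '.' = any residue at the X positions

candidates = {
    "seqA": "MKTLLVAGHSLGGALATLA",   # contains GHSLG, so it matches
    "seqB": "MKTAAVAAPALAAKLAAQL",   # no motif present
}

hits = {name: bool(GXSXG.search(seq)) for name, seq in candidates.items()}
print(hits)   # {'seqA': True, 'seqB': False}
```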

  6. The 'Biologically-Inspired Computing' Column

    NASA Technical Reports Server (NTRS)

    Hinchey, Mike

    2006-01-01

    The field of Biology changed dramatically in 1953, with the determination by Francis Crick and James Dewey Watson of the double helix structure of DNA. This discovery changed Biology for ever, allowing the sequencing of the human genome, and the emergence of a "new Biology" focused on DNA, genes, proteins, data, and search. Computational Biology and Bioinformatics heavily rely on computing to facilitate research into life and development. Simultaneously, an understanding of the biology of living organisms indicates a parallel with computing systems: molecules in living cells interact, grow, and transform according to the "program" dictated by DNA. Moreover, paradigms of Computing are emerging based on modelling and developing computer-based systems exploiting ideas that are observed in nature. This includes building into computer systems self-management and self-governance mechanisms that are inspired by the human body's autonomic nervous system, modelling evolutionary systems analogous to colonies of ants or other insects, and developing highly-efficient and highly-complex distributed systems from large numbers of (often quite simple) largely homogeneous components to reflect the behaviour of flocks of birds, swarms of bees, herds of animals, or schools of fish. This new field of "Biologically-Inspired Computing", often known in other incarnations by other names, such as: Autonomic Computing, Pervasive Computing, Organic Computing, Biomimetics, and Artificial Life, amongst others, is poised at the intersection of Computer Science, Engineering, Mathematics, and the Life Sciences. Successes have been reported in the fields of drug discovery, data communications, computer animation, control and command, exploration systems for space, undersea, and harsh environments, to name but a few, and augur much promise for future progress.

  7. A traveling salesman approach for predicting protein functions.

    PubMed

    Johnson, Olin; Liu, Jing

    2006-10-12

    Protein-protein interaction information can be used to predict unknown protein functions and to help study biological pathways. Here we present a new approach utilizing the classic Traveling Salesman Problem to study the protein-protein interactions and to predict protein functions in budding yeast Saccharomyces cerevisiae. We apply the global optimization tool from combinatorial optimization algorithms to cluster the yeast proteins based on the global protein interaction information. We then use this clustering information to help us predict protein functions. We use our algorithm together with the direct neighbor algorithm [1] on characterized proteins and compare the prediction accuracy of the two methods. We show our algorithm can produce better predictions than the direct neighbor algorithm, which only considers the immediate neighbors of the query protein. Our method is a promising one to be used as a general tool to predict functions of uncharacterized proteins and a successful sample of using computer science knowledge and algorithms to study biological problems.
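    The clustering step can be caricatured as follows: build a tour over an interaction-derived distance matrix and cut its longest edges to obtain clusters. This sketch uses a simple nearest-neighbour heuristic and an invented distance matrix; it illustrates the idea only, not the authors' global combinatorial optimization:

```python
# Illustrative sketch of TSP-based clustering: build a nearest-neighbour tour
# over a protein "interaction distance" matrix, then cut the longest tour
# edges to form clusters.  The distance matrix below is invented.

def nearest_neighbour_tour(dist):
    n = len(dist)
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def cut_tour(tour, dist, n_clusters):
    edges = [(dist[tour[i]][tour[i + 1]], i) for i in range(len(tour) - 1)]
    cuts = sorted(i for _, i in sorted(edges, reverse=True)[: n_clusters - 1])
    clusters, start = [], 0
    for c in cuts:
        clusters.append(tour[start : c + 1])
        start = c + 1
    clusters.append(tour[start:])
    return clusters

# two tight groups {0,1,2} and {3,4} separated by a large distance
D = [[0, 1, 1, 9, 9],
     [1, 0, 1, 9, 9],
     [1, 1, 0, 9, 9],
     [9, 9, 9, 0, 1],
     [9, 9, 9, 1, 0]]

tour = nearest_neighbour_tour(D)
print(cut_tour(tour, D, 2))   # recovers the two groups as clusters
```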

  8. A traveling salesman approach for predicting protein functions

    PubMed Central

    Johnson, Olin; Liu, Jing

    2006-01-01

    Background Protein-protein interaction information can be used to predict unknown protein functions and to help study biological pathways. Results Here we present a new approach utilizing the classic Traveling Salesman Problem to study the protein-protein interactions and to predict protein functions in budding yeast Saccharomyces cerevisiae. We apply the global optimization tool from combinatorial optimization algorithms to cluster the yeast proteins based on the global protein interaction information. We then use this clustering information to help us predict protein functions. We use our algorithm together with the direct neighbor algorithm [1] on characterized proteins and compare the prediction accuracy of the two methods. We show our algorithm can produce better predictions than the direct neighbor algorithm, which only considers the immediate neighbors of the query protein. Conclusion Our method is a promising one to be used as a general tool to predict functions of uncharacterized proteins and a successful sample of using computer science knowledge and algorithms to study biological problems. PMID:17147783

  9. Report of the matrix of biological knowledge workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morowitz, H.J.; Smith, T.

    1987-10-30

    Current understanding of biology involves complex relationships rooted in enormous amounts of data. These data include entries from biochemistry, ecology, genetics, human and veterinary medicine, molecular structure studies, agriculture, embryology, systematics, and many other disciplines. The present wealth of biological data goes beyond past accumulations and now includes new understandings from molecular biology. Several important biological databases are currently being supported, and more are planned; however, major problems of interdatabase communication and management efficiency abound. Few scientists are currently capable of keeping up with this ever-increasing wealth of knowledge, let alone searching it efficiently for new or unsuspected links and important analogies. Yet this is what is required if the continued rapid generation of such data is to lead most effectively to the major conceptual, medical, and agricultural advances anticipated over the coming decades in the United States. The opportunity exists to combine the potential of modern computer science, database management, and artificial intelligence in a major effort to organize the vast wealth of biological and clinical data. The time is right because the amount of data is still manageable even in its current highly-fragmented form; important hardware and computer science tools have been greatly improved; and there have been recent fundamental advances in our comprehension of biology. The latter is particularly true at the molecular level where the information for nearly all higher structure and function is encoded. The organization of all biological experimental data coordinately within a structure incorporating our current understanding - the Matrix of Biological Knowledge - will provide the data and structure for the major advances foreseen in the years ahead.

  10. Extension of research data repository system to support direct compute access to biomedical datasets: enhancing Dataverse to support large datasets.

    PubMed

    McKinney, Bill; Meyer, Peter A; Crosas, Mercè; Sliz, Piotr

    2017-01-01

    Access to experimental X-ray diffraction image data is important for validation and reproduction of macromolecular models and indispensable for the development of structural biology processing methods. In response to the evolving needs of the structural biology community, we recently established a diffraction data publication system, the Structural Biology Data Grid (SBDG, data.sbgrid.org), to preserve primary experimental datasets supporting scientific publications. All datasets published through the SBDG are freely available to the research community under a public domain dedication license, with metadata compliant with the DataCite Schema (schema.datacite.org). A proof-of-concept study demonstrated community interest and utility. Publication of large datasets is a challenge shared by several fields, and the SBDG has begun collaborating with the Institute for Quantitative Social Science at Harvard University to extend the Dataverse (dataverse.org) open-source data repository system to structural biology datasets. Several extensions are necessary to support the size and metadata requirements for structural biology datasets. In this paper, we describe one such extension, functionality supporting preservation of file system structure within Dataverse, which is essential for both in-place computation and supporting non-HTTP data transfers. © 2016 New York Academy of Sciences.

  11. Applying Computerized-Scoring Models of Written Biological Explanations across Courses and Colleges: Prospects and Limitations

    PubMed Central

    Ha, Minsu; Nehm, Ross H.; Urban-Lurain, Mark; Merrill, John E.

    2011-01-01

    Our study explored the prospects and limitations of using machine-learning software to score introductory biology students’ written explanations of evolutionary change. We investigated three research questions: 1) Do scoring models built using student responses at one university function effectively at another university? 2) How many human-scored student responses are needed to build scoring models suitable for cross-institutional application? 3) What factors limit computer-scoring efficacy, and how can these factors be mitigated? To answer these questions, two biology experts scored a corpus of 2556 short-answer explanations (from biology majors and nonmajors) at two universities for the presence or absence of five key concepts of evolution. Human- and computer-generated scores were compared using kappa agreement statistics. We found that machine-learning software was capable in most cases of accurately evaluating the degree of scientific sophistication in undergraduate majors’ and nonmajors’ written explanations of evolutionary change. In cases in which the software did not perform at the benchmark of “near-perfect” agreement (kappa > 0.80), we located the causes of poor performance and identified a series of strategies for their mitigation. Machine-learning software holds promise as an assessment tool for use in undergraduate biology education, but like most assessment tools, it is also characterized by limitations. PMID:22135372
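    The "near-perfect" benchmark refers to Cohen's kappa, which corrects the raw agreement between two raters for the agreement expected by chance. A minimal sketch for binary presence/absence concept scores (the score vectors are invented):

```python
# Cohen's kappa for two raters scoring the same responses for presence (1)
# or absence (0) of a concept: kappa = (p_o - p_e) / (1 - p_e), where p_o is
# observed agreement and p_e the agreement expected by chance.

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    labels = set(rater1) | set(rater2)
    p_e = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in labels)
    return (p_o - p_e) / (1 - p_e)

human    = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]   # invented expert scores
computer = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]   # invented machine scores
print(round(cohens_kappa(human, computer), 3))   # 0.783, below the 0.80 bar
```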

  12. Applying computerized-scoring models of written biological explanations across courses and colleges: prospects and limitations.

    PubMed

    Ha, Minsu; Nehm, Ross H; Urban-Lurain, Mark; Merrill, John E

    2011-01-01

    Our study explored the prospects and limitations of using machine-learning software to score introductory biology students' written explanations of evolutionary change. We investigated three research questions: 1) Do scoring models built using student responses at one university function effectively at another university? 2) How many human-scored student responses are needed to build scoring models suitable for cross-institutional application? 3) What factors limit computer-scoring efficacy, and how can these factors be mitigated? To answer these questions, two biology experts scored a corpus of 2556 short-answer explanations (from biology majors and nonmajors) at two universities for the presence or absence of five key concepts of evolution. Human- and computer-generated scores were compared using kappa agreement statistics. We found that machine-learning software was capable in most cases of accurately evaluating the degree of scientific sophistication in undergraduate majors' and nonmajors' written explanations of evolutionary change. In cases in which the software did not perform at the benchmark of "near-perfect" agreement (kappa > 0.80), we located the causes of poor performance and identified a series of strategies for their mitigation. Machine-learning software holds promise as an assessment tool for use in undergraduate biology education, but like most assessment tools, it is also characterized by limitations.

  13. AnnotCompute: annotation-based exploration and meta-analysis of genomics experiments

    PubMed Central

    Zheng, Jie; Stoyanovich, Julia; Manduchi, Elisabetta; Liu, Junmin; Stoeckert, Christian J.

    2011-01-01

    The ever-increasing scale of biological data sets, particularly those arising in the context of high-throughput technologies, requires the development of rich data exploration tools. In this article, we present AnnotCompute, an information discovery platform for repositories of functional genomics experiments such as ArrayExpress. Our system leverages semantic annotations of functional genomics experiments with controlled vocabulary and ontology terms, such as those from the MGED Ontology, to compute conceptual dissimilarities between pairs of experiments. These dissimilarities are then used to support two types of exploratory analysis—clustering and query-by-example. We show that our proposed dissimilarity measures correspond to a user's intuition about conceptual dissimilarity, and can be used to support effective query-by-example. We also evaluate the quality of clustering based on these measures. While AnnotCompute can support a richer data exploration experience, its effectiveness is limited in some cases, due to the quality of available annotations. Nonetheless, tools such as AnnotCompute may provide an incentive for richer annotations of experiments. Code is available for download at http://www.cbil.upenn.edu/downloads/AnnotCompute. Database URL: http://www.cbil.upenn.edu/annotCompute/ PMID:22190598
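    The paper's dissimilarity measure is not reproduced here, but a generic set-based measure over controlled-vocabulary annotations, such as the Jaccard distance, conveys the idea (the term names below are invented):

```python
# Hedged sketch of annotation-based dissimilarity between two experiments,
# each described by a set of ontology terms.  This uses the generic Jaccard
# distance; the paper's actual measure may weight terms by ontology structure.

def jaccard_dissimilarity(terms_a, terms_b):
    a, b = set(terms_a), set(terms_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

exp1 = {"organism:human", "assay:microarray", "tissue:liver"}
exp2 = {"organism:human", "assay:microarray", "tissue:brain"}
exp3 = {"organism:yeast", "assay:RNA-seq"}

print(jaccard_dissimilarity(exp1, exp2))  # 0.5 -- share 2 of 4 terms
print(jaccard_dissimilarity(exp1, exp3))  # 1.0 -- nothing in common
```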

  14. Comparing sixteen scoring functions for predicting biological activities of ligands for protein targets.

    PubMed

    Xu, Weijun; Lucke, Andrew J; Fairlie, David P

    2015-04-01

    Accurately predicting relative binding affinities and biological potencies for ligands that interact with proteins remains a significant challenge for computational chemists. Most evaluations of docking and scoring algorithms have focused on enhancing ligand affinity for a protein by optimizing docking poses and enrichment factors during virtual screening. However, there is still relatively limited information on the accuracy of commercially available docking and scoring software programs for correctly predicting binding affinities and biological activities of structurally related inhibitors of different enzyme classes. Presented here is a comparative evaluation of eight molecular docking programs (Autodock Vina, Fitted, FlexX, Fred, Glide, GOLD, LibDock, MolDock) using sixteen docking and scoring functions to predict the rank-order activity of different ligand series for six pharmacologically important protein and enzyme targets (Factor Xa, Cdk2 kinase, Aurora A kinase, COX-2, pla2g2a, β Estrogen receptor). Use of Fitted gave an excellent correlation (Pearson 0.86, Spearman 0.91) between predicted and experimental binding only for Cdk2 kinase inhibitors. FlexX and GOLDScore produced good correlations (Pearson>0.6) for hydrophilic targets such as Factor Xa, Cdk2 kinase and Aurora A kinase. By contrast, pla2g2a and COX-2 emerged as difficult targets for scoring functions to predict ligand activities. Although possessing a high hydrophobicity in its binding site, β Estrogen receptor produced reasonable correlations using LibDock (Pearson 0.75, Spearman 0.68). These findings can assist medicinal chemists to better match scoring functions with ligand-target systems for hit-to-lead optimization using computer-aided drug design approaches. Copyright © 2015 Elsevier Inc. All rights reserved.
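    The Pearson and Spearman statistics used above can be computed directly; Spearman is simply Pearson applied to ranks, which is why it suits rank-order activity prediction. A generic sketch with invented score data (ties in the rank step are ignored for brevity):

```python
# Pearson (linear) and Spearman (rank) correlation between predicted docking
# scores and experimental activities.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(x):
    """Rank positions of each value (no tie handling, for brevity)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

predicted    = [-9.1, -8.4, -7.9, -7.2, -6.5]   # invented docking scores
experimental = [-9.0, -8.8, -7.5, -7.7, -6.1]   # invented binding energies
print(round(pearson(predicted, experimental), 3),
      round(spearman(predicted, experimental), 3))
```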

  15. Legacies in material flux: Structural changes before long-term studies

    Treesearch

    D.J. Bain; M.B. Green; J. Campbell; J. Chamblee; S. Chaoka; J. Fraterrigo; S. Kaushal; S. Martin; T. Jordan; T. Parolari; B. Sobczak; D. Weller; W. M. Wollheim; E. Boose; J. Duncan; G. Gettel; B. Hall; P. Kumar; J. Thompson; J. Vose; E. Elliott; D. Leigh

    2012-01-01

    Legacy effects of past land use and disturbance are increasingly recognized, yet consistent definitions of and criteria for defining them do not exist. To address this gap in biological- and ecosystem-assessment frameworks, we propose a general metric for evaluating potential legacy effects, which are computed by normalizing altered system function persistence with...

  16. Computational study of stability of an H-H-type pseudoknot motif.

    PubMed

    Wang, Jun; Zhao, Yunjie; Wang, Jian; Xiao, Yi

    2015-12-01

    Motifs in RNA tertiary structures are important to their structural organizations and biological functions. Here we consider an H-H-type pseudoknot (HHpk) motif that consists of two hairpins connected by a junction loop and with kissing interactions between the two hairpin loops. Such a tertiary structural motif is recurrently found in RNA tertiary structures, but is difficult to predict computationally. So it is important to understand the mechanism of its formation and stability. Here we investigate the stability of the HHpk tertiary structure by using an all-atom molecular dynamics simulation. The results indicate that the HHpk tertiary structure is stable. However, it is found that this stability is not due to the helix-helix packing, as is usually expected, but is maintained by the combined action of the kissing hairpin loops and junctions, although the former plays the main role. Stable HHpk motifs may form structural platforms for the molecules to realize their biological functions. These results are useful for understanding the construction principle of RNA tertiary structures and structure prediction.

  17. Methods for improving simulations of biological systems: systemic computation and fractal proteins

    PubMed Central

    Bentley, Peter J.

    2009-01-01

    Modelling and simulation are becoming essential for new fields such as synthetic biology. Perhaps the most important aspect of modelling is to follow a clear design methodology that will help to highlight unwanted deficiencies. The use of tools designed to aid the modelling process can be of benefit in many situations. In this paper, the modelling approach called systemic computation (SC) is introduced. SC is an interaction-based language, which enables individual-based expression and modelling of biological systems, and the interactions between them. SC permits a precise description of a hypothetical mechanism to be written using an intuitive graph-based or a calculus-based notation. The same description can then be directly run as a simulation, merging the hypothetical mechanism and the simulation into the same entity. However, even when using well-designed modelling tools to produce good models, the best model is not always the most accurate one. Frequently, computational constraints or lack of data make it infeasible to model an aspect of biology. Simplification may provide one way forward, but with inevitable consequences of decreased accuracy. Instead of attempting to replace an element with a simpler approximation, it is sometimes possible to substitute the element with a different but functionally similar component. In the second part of this paper, this modelling approach is described and its advantages are summarized using an exemplar: the fractal protein model. Finally, the paper ends with a discussion of good biological modelling practice by presenting lessons learned from the use of SC and the fractal protein model. PMID:19324681

  18. Biologically important conformational features of DNA as interpreted by quantum mechanics and molecular mechanics computations of its simple fragments.

    PubMed

    Poltev, V; Anisimov, V M; Dominguez, V; Gonzalez, E; Deriabina, A; Garcia, D; Rivas, F; Polteva, N A

    2018-02-01

    Deciphering the mechanism of functioning of DNA as the carrier of genetic information requires identifying inherent factors determining its structure and function. Following this path, our previous DFT studies attributed the origin of unique conformational characteristics of right-handed Watson-Crick duplexes (WCDs) to the conformational profile of deoxydinucleoside monophosphates (dDMPs) serving as the minimal repeating units of DNA strand. According to those findings, the directionality of the sugar-phosphate chain and the characteristic ranges of dihedral angles of energy minima combined with the geometric differences between purines and pyrimidines determine the dependence on base sequence of the three-dimensional (3D) structure of WCDs. This work extends our computational study to complementary deoxydinucleotide-monophosphates (cdDMPs) of non-standard conformation, including those of Z-family, Hoogsteen duplexes, parallel-stranded structures, and duplexes with mispaired bases. For most of these systems, except Z-conformation, computations closely reproduce experimental data within the tolerance of characteristic limits of dihedral parameters for each conformation family. Computation of cdDMPs with Z-conformation reveals that their experimental structures do not correspond to the internal energy minimum. This finding establishes the leading role of external factors in formation of the Z-conformation. Energy minima of cdDMPs of non-Watson-Crick duplexes demonstrate different sequence-dependence features than those known for WCDs. The obtained results provide evidence that the biologically important regularities of 3D structure distinguish WCDs from duplexes having non-Watson-Crick nucleotide pairing.
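    The dihedral (torsion) angles whose characteristic ranges define these conformational families can be computed from four atom positions. A standard pure-Python sketch (the coordinates are invented; sign conventions vary between packages):

```python
# Dihedral (torsion) angle from four atom positions -- the quantity whose
# characteristic ranges define the conformational families discussed above.
import math

def dihedral(p0, p1, p2, p3):
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    b0, b1, b2 = sub(p1, p0), sub(p2, p1), sub(p3, p2)
    n1, n2 = cross(b0, b1), cross(b1, b2)   # normals of the two planes
    m = cross(n1, b1)
    y = dot(m, n2) / math.sqrt(dot(b1, b1))
    x = dot(n1, n2)
    return math.degrees(math.atan2(y, x))

# four points giving a 90-degree torsion (sign is convention-dependent)
print(round(dihedral((1, 0, 0), (0, 0, 0), (0, 0, 1), (0, 1, 1))))
```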

  19. Evolutionary Computing Methods for Spectral Retrieval

    NASA Technical Reports Server (NTRS)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seugwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Geivanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
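The retrieval loop described above (iteratively minimize a fitness function measuring the dissimilarity between observed and synthetic spectra) can be sketched with a basic genetic algorithm. The forward model, parameter names, and all numeric settings below are invented for illustration and are not taken from the NASA software:

```python
import random

# Hypothetical forward model: a synthetic "spectrum" generated from two
# parameters (say, a trace-gas concentration and a temperature offset).
def synthetic_spectrum(conc, temp, wavelengths):
    return [conc * w + temp for w in wavelengths]

def fitness(params, observed, wavelengths):
    """Degree of dissimilarity: sum of squared residuals."""
    model = synthetic_spectrum(params[0], params[1], wavelengths)
    return sum((o - m) ** 2 for o, m in zip(observed, model))

def genetic_retrieval(observed, wavelengths, pop_size=50, generations=200):
    # Random initial population of candidate parameter sets.
    pop = [[random.uniform(0, 5), random.uniform(-2, 2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, observed, wavelengths))
        survivors = pop[: pop_size // 2]                        # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]         # crossover
            child = [x + random.gauss(0, 0.05) for x in child]  # mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda p: fitness(p, observed, wavelengths))

random.seed(0)
wavelengths = [0.1 * i for i in range(20)]
observed = synthetic_spectrum(2.0, 0.5, wavelengths)  # "true" parameters
best = genetic_retrieval(observed, wavelengths)
```

In a real retrieval the forward model would be a radiative-transfer computation and the fitness tolerance would be user-specified, as the abstract notes; the skeleton of selection, crossover, and mutation is the same.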

  20. An agent-based computational model for tuberculosis spreading on age-structured populations

    NASA Astrophysics Data System (ADS)

    Graciani Rodrigues, C. C.; Espíndola, Aquino L.; Penna, T. J. P.

    2015-06-01

In this work we present an agent-based computational model to study the spreading of tuberculosis (TB) in age-structured populations. The proposed model is a merge of two previous models: an agent-based computational model for the spreading of tuberculosis and a bit-string model for biological aging. Combining TB with population aging reproduces the coexistence of health states seen in real populations. In addition, the universal exponential behavior of mortality curves is preserved. Finally, the population distribution as a function of age shows that, under high-efficacy treatments, TB is most prevalent among the elderly.

  1. ISCB Ebola Award for Important Future Research on the Computational Biology of Ebola Virus

    PubMed Central

    Karp, Peter D.; Berger, Bonnie; Kovats, Diane; Lengauer, Thomas; Linial, Michal; Sabeti, Pardis; Hide, Winston; Rost, Burkhard

    2015-01-01

    Speed is of the essence in combating Ebola; thus, computational approaches should form a significant component of Ebola research. As for the development of any modern drug, computational biology is uniquely positioned to contribute through comparative analysis of the genome sequences of Ebola strains as well as 3-D protein modeling. Other computational approaches to Ebola may include large-scale docking studies of Ebola proteins with human proteins and with small-molecule libraries, computational modeling of the spread of the virus, computational mining of the Ebola literature, and creation of a curated Ebola database. Taken together, such computational efforts could significantly accelerate traditional scientific approaches. In recognition of the need for important and immediate solutions from the field of computational biology against Ebola, the International Society for Computational Biology (ISCB) announces a prize for an important computational advance in fighting the Ebola virus. ISCB will confer the ISCB Fight against Ebola Award, along with a prize of US$2,000, at its July 2016 annual meeting (ISCB Intelligent Systems for Molecular Biology (ISMB) 2016, Orlando, Florida). PMID:26097686

  2. ISCB Ebola Award for Important Future Research on the Computational Biology of Ebola Virus.

    PubMed

    Karp, Peter D; Berger, Bonnie; Kovats, Diane; Lengauer, Thomas; Linial, Michal; Sabeti, Pardis; Hide, Winston; Rost, Burkhard

    2015-01-01

    Speed is of the essence in combating Ebola; thus, computational approaches should form a significant component of Ebola research. As for the development of any modern drug, computational biology is uniquely positioned to contribute through comparative analysis of the genome sequences of Ebola strains as well as 3-D protein modeling. Other computational approaches to Ebola may include large-scale docking studies of Ebola proteins with human proteins and with small-molecule libraries, computational modeling of the spread of the virus, computational mining of the Ebola literature, and creation of a curated Ebola database. Taken together, such computational efforts could significantly accelerate traditional scientific approaches. In recognition of the need for important and immediate solutions from the field of computational biology against Ebola, the International Society for Computational Biology (ISCB) announces a prize for an important computational advance in fighting the Ebola virus. ISCB will confer the ISCB Fight against Ebola Award, along with a prize of US$2,000, at its July 2016 annual meeting (ISCB Intelligent Systems for Molecular Biology (ISMB) 2016, Orlando, Florida).

  3. A computational platform to maintain and migrate manual functional annotations for BioCyc databases.

    PubMed

    Walsh, Jesse R; Sen, Taner Z; Dickerson, Julie A

    2014-10-12

BioCyc databases are an important resource for information on biological pathways and genomic data. Such databases represent the accumulation of biological data, some of which has been manually curated from literature. An essential feature of these databases is the continual integration of data as new knowledge is discovered. As functional annotations are improved, scalable methods are needed for curators to manage annotations without detailed knowledge of the specific design of the BioCyc database. We have developed CycTools, a software tool that allows curators to maintain functional annotations in a model organism database. This tool builds on existing software to simplify the import of user-provided annotation data into BioCyc databases. Additionally, CycTools automatically resolves synonyms and alternate identifiers contained within the database into the appropriate internal identifiers. Automating steps in the manual data entry process can improve curation efforts for major biological databases. The functionality of CycTools is demonstrated by transferring GO term annotations from MaizeCyc to matching proteins in CornCyc, both maize metabolic pathway databases available at MaizeGDB, and by creating strain-specific databases for metabolic engineering.

  4. A Theoretical Mechanism of Szilard Engine Function in Nucleic Acids and the Implications for Quantum Coherence in Biological Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Mihelic, F.

    2010-12-22

Nucleic acids theoretically possess a Szilard engine function that can convert the energy associated with the Shannon entropy of molecules for which they have coded recognition, into the useful work of geometric reconfiguration of the nucleic acid molecule. This function is logically reversible because its mechanism is literally and physically constructed out of the information necessary to reduce the Shannon entropy of such molecules, which means that this information exists on both sides of the theoretical engine, and because information is retained in the geometric degrees of freedom of the nucleic acid molecule, a quantum gate is formed through which multi-state nucleic acid qubits can interact. Entangled biophotons emitted as a consequence of symmetry breaking nucleic acid Szilard engine (NASE) function can be used to coordinate relative positioning of different nucleic acid locations, both within and between cells, thus providing the potential for quantum coherence of an entire biological system. Theoretical implications of understanding biological systems as such 'quantum adaptive systems' include the potential for multi-agent based quantum computing, and a better understanding of systemic pathologies such as cancer, as being related to a loss of systemic quantum coherence.

  5. A Theoretical Mechanism of Szilard Engine Function in Nucleic Acids and the Implications for Quantum Coherence in Biological Systems

    NASA Astrophysics Data System (ADS)

    Matthew Mihelic, F.

    2010-12-01

    Nucleic acids theoretically possess a Szilard engine function that can convert the energy associated with the Shannon entropy of molecules for which they have coded recognition, into the useful work of geometric reconfiguration of the nucleic acid molecule. This function is logically reversible because its mechanism is literally and physically constructed out of the information necessary to reduce the Shannon entropy of such molecules, which means that this information exists on both sides of the theoretical engine, and because information is retained in the geometric degrees of freedom of the nucleic acid molecule, a quantum gate is formed through which multi-state nucleic acid qubits can interact. Entangled biophotons emitted as a consequence of symmetry breaking nucleic acid Szilard engine (NASE) function can be used to coordinate relative positioning of different nucleic acid locations, both within and between cells, thus providing the potential for quantum coherence of an entire biological system. Theoretical implications of understanding biological systems as such "quantum adaptive systems" include the potential for multi-agent based quantum computing, and a better understanding of systemic pathologies such as cancer, as being related to a loss of systemic quantum coherence.

  6. The portable UNIX programming system (PUPS) and CANTOR: a computational environment for dynamical representation and analysis of complex neurobiological data.

    PubMed

    O'Neill, M A; Hilgetag, C C

    2001-08-29

    Many problems in analytical biology, such as the classification of organisms, the modelling of macromolecules, or the structural analysis of metabolic or neural networks, involve complex relational data. Here, we describe a software environment, the portable UNIX programming system (PUPS), which has been developed to allow efficient computational representation and analysis of such data. The system can also be used as a general development tool for database and classification applications. As the complexity of analytical biology problems may lead to computation times of several days or weeks even on powerful computer hardware, the PUPS environment gives support for persistent computations by providing mechanisms for dynamic interaction and homeostatic protection of processes. Biological objects and their interrelations are also represented in a homeostatic way in PUPS. Object relationships are maintained and updated by the objects themselves, thus providing a flexible, scalable and current data representation. Based on the PUPS environment, we have developed an optimization package, CANTOR, which can be applied to a wide range of relational data and which has been employed in different analyses of neuroanatomical connectivity. The CANTOR package makes use of the PUPS system features by modifying candidate arrangements of objects within the system's database. This restructuring is carried out via optimization algorithms that are based on user-defined cost functions, thus providing flexible and powerful tools for the structural analysis of the database content. The use of stochastic optimization also enables the CANTOR system to deal effectively with incomplete and inconsistent data. Prototypical forms of PUPS and CANTOR have been coded and used successfully in the analysis of anatomical and functional mammalian brain connectivity, involving complex and inconsistent experimental data. 
In addition, PUPS has been used for solving multivariate engineering optimization problems and to implement the digital identification system (DAISY), a system for the automated classification of biological objects. PUPS is implemented in ANSI-C under the POSIX.1 standard and is to a great extent architecture- and operating-system independent. The software is supported by systems libraries that allow multi-threading (the concurrent processing of several database operations), as well as the distribution of the dynamic data objects and library operations over clusters of computers. These attributes make the system easily scalable, and in principle allow the representation and analysis of arbitrarily large sets of relational data. PUPS and CANTOR are freely distributed (http://www.pups.org.uk) as open-source software under the GNU license agreement.

  7. The portable UNIX programming system (PUPS) and CANTOR: a computational environment for dynamical representation and analysis of complex neurobiological data.

    PubMed Central

    O'Neill, M A; Hilgetag, C C

    2001-01-01

    Many problems in analytical biology, such as the classification of organisms, the modelling of macromolecules, or the structural analysis of metabolic or neural networks, involve complex relational data. Here, we describe a software environment, the portable UNIX programming system (PUPS), which has been developed to allow efficient computational representation and analysis of such data. The system can also be used as a general development tool for database and classification applications. As the complexity of analytical biology problems may lead to computation times of several days or weeks even on powerful computer hardware, the PUPS environment gives support for persistent computations by providing mechanisms for dynamic interaction and homeostatic protection of processes. Biological objects and their interrelations are also represented in a homeostatic way in PUPS. Object relationships are maintained and updated by the objects themselves, thus providing a flexible, scalable and current data representation. Based on the PUPS environment, we have developed an optimization package, CANTOR, which can be applied to a wide range of relational data and which has been employed in different analyses of neuroanatomical connectivity. The CANTOR package makes use of the PUPS system features by modifying candidate arrangements of objects within the system's database. This restructuring is carried out via optimization algorithms that are based on user-defined cost functions, thus providing flexible and powerful tools for the structural analysis of the database content. The use of stochastic optimization also enables the CANTOR system to deal effectively with incomplete and inconsistent data. Prototypical forms of PUPS and CANTOR have been coded and used successfully in the analysis of anatomical and functional mammalian brain connectivity, involving complex and inconsistent experimental data. 
In addition, PUPS has been used for solving multivariate engineering optimization problems and to implement the digital identification system (DAISY), a system for the automated classification of biological objects. PUPS is implemented in ANSI-C under the POSIX.1 standard and is to a great extent architecture- and operating-system independent. The software is supported by systems libraries that allow multi-threading (the concurrent processing of several database operations), as well as the distribution of the dynamic data objects and library operations over clusters of computers. These attributes make the system easily scalable, and in principle allow the representation and analysis of arbitrarily large sets of relational data. PUPS and CANTOR are freely distributed (http://www.pups.org.uk) as open-source software under the GNU license agreement. PMID:11545702

  8. Reliable but Timesaving: In Search of an Efficient Quantum-chemical Method for the Description of Functional Fullerenes.

    PubMed

    Reis, H; Rasulev, B; Papadopoulos, M G; Leszczynski, J

    2015-01-01

Fullerene and its derivatives are currently among the most intensively investigated species in the area of nanomedicine and nanochemistry. Various unique properties of fullerenes are responsible for their wide range of applications in industry, biology, and medicine. A large pool of functionalized C60 and C70 fullerenes is investigated theoretically at different levels of quantum-mechanical theory. The semiempirical PM6 method, density functional theory with the B3LYP functional, and the correlated ab initio MP2 method are employed to compute the optimized structures, and an array of properties for the considered species. In addition to the calculations for isolated molecules, the results of solution calculations are also reported at the DFT level, using the polarizable continuum model (PCM). Ionization potentials (IPs) and electron affinities (EAs) are computed by means of Koopmans' theorem as well as with the more accurate but computationally expensive ΔSCF method. Both procedures yield comparable values, while comparison of IPs and EAs computed with different quantum-mechanical methods shows surprisingly large differences. Harmonic vibrational frequencies are computed at the PM6 and B3LYP levels of theory and compared with each other. A possible application of the frequencies as 3D descriptors in the EVA (EigenVAlues) method is shown. All the computed data are made available, and may be used to replace experimental data in routine applications where large amounts of data are required, e.g. in structure-activity relationship studies of the toxicity of fullerene derivatives.

  9. The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 2 Core.

    PubMed

    Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J

    2018-03-09

    Computational models can help researchers to interpret data, understand biological functions, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that different software systems can exchange. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 2 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML, their encoding in XML (the eXtensible Markup Language), validation rules that determine the validity of an SBML document, and examples of models in SBML form. The design of Version 2 differs from Version 1 principally in allowing new MathML constructs, making more child elements optional, and adding identifiers to all SBML elements instead of only selected elements. Other materials and software are available from the SBML project website at http://sbml.org/.
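To give a feel for the declarative form the specification defines, here is a toy SBML Level 3 Version 2 skeleton (one compartment, two species, one lumped reaction) assembled with Python's standard library. Real models would normally be produced with dedicated tooling such as libSBML; the species names and reaction shown are invented, and this sketch omits many elements (units, kinetic laws) a complete model would carry:

```python
import xml.etree.ElementTree as ET

# SBML Level 3 Version 2 Core namespace.
NS = "http://www.sbml.org/sbml/level3/version2/core"

sbml = ET.Element("sbml", xmlns=NS, level="3", version="2")
model = ET.SubElement(sbml, "model", id="toy_pathway")

# One compartment holding all species.
comps = ET.SubElement(model, "listOfCompartments")
ET.SubElement(comps, "compartment", id="cell", constant="true")

# Two species participating in a lumped "glycolysis" reaction.
species = ET.SubElement(model, "listOfSpecies")
for sid in ("glucose", "pyruvate"):
    ET.SubElement(species, "species", id=sid, compartment="cell",
                  hasOnlySubstanceUnits="false", boundaryCondition="false",
                  constant="false")

reactions = ET.SubElement(model, "listOfReactions")
rxn = ET.SubElement(reactions, "reaction", id="glycolysis_lumped",
                    reversible="false")
ET.SubElement(ET.SubElement(rxn, "listOfReactants"),
              "speciesReference", species="glucose", stoichiometry="1",
              constant="true")
ET.SubElement(ET.SubElement(rxn, "listOfProducts"),
              "speciesReference", species="pyruvate", stoichiometry="2",
              constant="true")

document = ET.tostring(sbml, encoding="unicode")
```

Because the representation is declarative, any SBML-aware simulator or analysis tool can consume the same `document` without translation, which is the interoperability point the abstract makes.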

  10. Biological adaptations for functional features of language in the face of cultural evolution.

    PubMed

    Christiansen, Morten H; Reali, Florencia; Chater, Nick

    2011-04-01

    Although there may be no true language universals, it is nonetheless possible to discern several family resemblance patterns across the languages of the world. Recent work on the cultural evolution of language indicates the source of these patterns is unlikely to be an innate universal grammar evolved through biological adaptations for arbitrary linguistic features. Instead, it has been suggested that the patterns of resemblance emerge because language has been shaped by the brain, with individual languages representing different but partially overlapping solutions to the same set of nonlinguistic constraints. Here, we use computational simulations to investigate whether biological adaptation for functional features of language, deriving from cognitive and communicative constraints, may nonetheless be possible alongside rapid cultural evolution. Specifically, we focus on the Baldwin effect as an evolutionary mechanism by which previously learned linguistic features might become innate through natural selection across many generations of language users. The results indicate that cultural evolution of language does not necessarily prevent functional features of language from becoming genetically fixed, thus potentially providing a particularly informative source of constraints on cross-linguistic resemblance patterns.

  11. From head to tail: new models and approaches in primate functional anatomy and biomechanics.

    PubMed

    Organ, Jason M; Deleon, Valerie B; Wang, Qian; Smith, Timothy D

    2010-04-01

    This special issue of The Anatomical Record (AR) is based on interest generated by a symposium at the 2008 annual meeting of the American Association of Anatomists (AAA) at Experimental Biology, entitled "An Evolutionary Perspective on Human Anatomy." The development of this volume in turn provided impetus for a Biological Anthropology Mini-Meeting, organized by members of the AAA for the 2010 Experimental Biology meeting in Anaheim, California. The research presented in these pages reflects the themes of these symposia and provides a snapshot of the current state of primate functional anatomy and biomechanics research. The 17 articles in this special issue utilize new models and/or approaches to study long-standing questions about the evolution of our closest relatives, including soft-tissue dissection and microanatomical techniques, experimental approaches to morphology, kinematic and kinetic biomechanics, high-resolution computed tomography, and Finite Element Analysis (FEA). This volume continues a close historical association between the disciplines of anatomy and biological anthropology: anatomists benefit from an understanding of the evolutionary history of our modern form, and biological anthropologists rely on anatomical principles to make informed evolutionary inferences about our closest relatives. (c) 2010 Wiley-Liss, Inc.

  12. Global identifiability of linear compartmental models--a computer algebra algorithm.

    PubMed

    Audoly, S; D'Angiò, L; Saccomani, M P; Cobelli, C

    1998-01-01

A priori global identifiability deals with the uniqueness of the solution for the unknown parameters of a model and is, thus, a prerequisite for parameter estimation of biological dynamic models. Global identifiability is, however, difficult to test, since it requires solving a system of nonlinear algebraic equations whose degree of nonlinearity, number of terms, and number of unknowns all increase with model order. In this paper, a computer algebra tool, GLOBI (GLOBal Identifiability), is presented, which combines the topological transfer function method with the Buchberger algorithm to test global identifiability of linear compartmental models. GLOBI allows for the automatic testing of a priori global identifiability of general-structure compartmental models from general multi-input multi-output experiments. Examples of usage of GLOBI to analyze a priori global identifiability of some complex biological compartmental models are provided.

  13. A simplified computational memory model from information processing.

    PubMed

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-23

This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent neurons or brain cortices on the basis of biology and graph theory, and an intra-modular network is developed with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. We simulate the memory phenomena and the functions of memorization and strengthening with information-processing algorithms. The theoretical analysis and the simulation results show that the model accords with memory phenomena from the information-processing view.

  14. The biological microprocessor, or how to build a computer with biological parts

    PubMed Central

    Moe-Behrens, Gerd HG

    2013-01-01

Systemics, a revolutionary paradigm shift in scientific thinking with applications in systems biology and synthetic biology, has led to the idea of using silicon computers and their engineering principles as a blueprint for engineering a similar machine made from biological parts. Here we describe these building blocks and how they can be assembled into a general-purpose computer system, a biological microprocessor. Such a system consists of biological parts forming an input/output device, an arithmetic logic unit, a control unit, memory, and wires (busses) to interconnect these components. A biocomputer can be used to monitor and control a biological system. PMID:24688733

  15. How MAP kinase modules function as robust, yet adaptable, circuits.

    PubMed

    Tian, Tianhai; Harding, Angus

    2014-01-01

    Genetic and biochemical studies have revealed that the diversity of cell types and developmental patterns evident within the animal kingdom is generated by a handful of conserved, core modules. Core biological modules must be robust, able to maintain functionality despite perturbations, and yet sufficiently adaptable for random mutations to generate phenotypic variation during evolution. Understanding how robust, adaptable modules have influenced the evolution of eukaryotes will inform both evolutionary and synthetic biology. One such system is the MAP kinase module, which consists of a 3-tiered kinase circuit configuration that has been evolutionarily conserved from yeast to man. MAP kinase signal transduction pathways are used across eukaryotic phyla to drive biological functions that are crucial for life. Here we ask the fundamental question, why do MAPK modules follow a conserved 3-tiered topology rather than some other number? Using computational simulations, we identify a fundamental 2-tiered circuit topology that can be readily reconfigured by feedback loops and scaffolds to generate diverse signal outputs. When this 2-kinase circuit is connected to proximal input kinases, a 3-tiered modular configuration is created that is both robust and adaptable, providing a biological circuit that can regulate multiple phenotypes and maintain functionality in an uncertain world. We propose that the 3-tiered signal transduction module has been conserved through positive selection, because it facilitated the generation of phenotypic variation during eukaryotic evolution.
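The 3-tiered circuit discussed above can be caricatured as three coupled activation/deactivation balances, each tier activated by the active fraction of the tier upstream. The rate constants below are arbitrary illustrative values, not those used in the paper's simulations, and the forward-Euler integration is only a quick route to an approximate steady state:

```python
# Toy 3-tiered kinase cascade: MAPKKK -> MAPKK -> MAPK.
# x1..x3 are the active fractions (0..1) at each tier.
def cascade(signal, steps=20000, dt=0.01):
    k_act, k_deact = 1.0, 0.3          # assumed rate constants
    x1 = x2 = x3 = 0.0
    for _ in range(steps):
        dx1 = k_act * signal * (1 - x1) - k_deact * x1
        dx2 = k_act * x1 * (1 - x2) - k_deact * x2
        dx3 = k_act * x2 * (1 - x3) - k_deact * x3
        x1 += dt * dx1
        x2 += dt * dx2
        x3 += dt * dx3
    return x3

low, high = cascade(0.1), cascade(1.0)   # weak vs strong upstream signal
```

Comparing variants of this skeleton, e.g. with the first tier removed or with feedback loops added, is the kind of in-silico experiment that lets one ask why three tiers rather than two.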

  16. How MAP kinase modules function as robust, yet adaptable, circuits

    PubMed Central

    Tian, Tianhai; Harding, Angus

    2014-01-01

    Genetic and biochemical studies have revealed that the diversity of cell types and developmental patterns evident within the animal kingdom is generated by a handful of conserved, core modules. Core biological modules must be robust, able to maintain functionality despite perturbations, and yet sufficiently adaptable for random mutations to generate phenotypic variation during evolution. Understanding how robust, adaptable modules have influenced the evolution of eukaryotes will inform both evolutionary and synthetic biology. One such system is the MAP kinase module, which consists of a 3-tiered kinase circuit configuration that has been evolutionarily conserved from yeast to man. MAP kinase signal transduction pathways are used across eukaryotic phyla to drive biological functions that are crucial for life. Here we ask the fundamental question, why do MAPK modules follow a conserved 3-tiered topology rather than some other number? Using computational simulations, we identify a fundamental 2-tiered circuit topology that can be readily reconfigured by feedback loops and scaffolds to generate diverse signal outputs. When this 2-kinase circuit is connected to proximal input kinases, a 3-tiered modular configuration is created that is both robust and adaptable, providing a biological circuit that can regulate multiple phenotypes and maintain functionality in an uncertain world. We propose that the 3-tiered signal transduction module has been conserved through positive selection, because it facilitated the generation of phenotypic variation during eukaryotic evolution. PMID:25483189

  17. Biology Teacher and Expert Opinions about Computer Assisted Biology Instruction Materials: A Software Entitled Nucleic Acids and Protein Synthesis

    ERIC Educational Resources Information Center

    Hasenekoglu, Ismet; Timucin, Melih

    2007-01-01

    The aim of this study is to collect and evaluate opinions of CAI experts and biology teachers about a high school level Computer Assisted Biology Instruction Material presenting computer-made modelling and simulations. It is a case study. A material covering "Nucleic Acids and Protein Synthesis" topic was developed as the…

  18. Evolutionary versatility of eukaryotic protein domains revealed by their bigram networks

    PubMed Central

    2011-01-01

    Background: Protein domains are globular structures of independently folded polypeptides that exert catalytic or binding activities. Their sequences are recognized as evolutionary units that, through genome recombination, constitute protein repertoires of linkage patterns. Via mutations, domains acquire modified functions that contribute to the fitness of cells and organisms. Recent studies have addressed the evolutionary selection that may have shaped the functions of individual domains and the emergence of particular domain combinations, which led to new cellular functions in multi-cellular animals. This study focuses on modeling domain linkage globally and investigates evolutionary implications that may be revealed by novel computational analysis. Results: A survey of 77 completely sequenced eukaryotic genomes implies a potential hierarchical and modular organization of biological functions in most living organisms. Domains in a genome or multiple genomes are modeled as a network of hetero-duplex covalent linkages, termed bigrams. A novel computational technique is introduced to decompose such networks, whereby the notion of domain "networking versatility" is derived and measured. The most and least "versatile" domains (termed "core domains" and "peripheral domains", respectively) are examined both computationally via sequence conservation measures and experimentally using selected domains. Our study suggests that such a versatility measure extracted from the bigram networks correlates with the adaptivity of domains during evolution: the network core domains are highly adaptive, in marked contrast to the network peripheral domains. Conclusions: Domain recombination has played a major part in the evolution of eukaryotes, contributing to genome complexity. From a systems point of view, as the result of selection and constant refinement, networks of domain linkage are structured in a hierarchical, modular fashion. Domains with a high degree of networking versatility appear to be evolutionarily adaptive, potentially through functional innovations. Domain bigram networks are informative as a model of biological functions. The networking versatility indices extracted from such networks for individual domains reflect the strength of evolutionary selection that the domains have experienced. PMID:21849086
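    The bigram idea above can be sketched in a few lines: treat each protein as a string of domains, link adjacent domains, and use the number of distinct bigram partners as a crude versatility proxy. This is a toy illustration with made-up domain architectures, not the network decomposition technique of the paper.

```python
from collections import defaultdict

def bigram_network(architectures):
    """Build a domain bigram network: nodes are domains, edges link
    domains that appear adjacently in at least one protein architecture."""
    neighbors = defaultdict(set)
    for arch in architectures:
        for a, b in zip(arch, arch[1:]):
            neighbors[a].add(b)
            neighbors[b].add(a)
    return neighbors

def versatility(neighbors, domain):
    """Crude versatility proxy: number of distinct bigram partners."""
    return len(neighbors[domain])

# Toy domain architectures (N- to C-terminal domain strings)
proteins = [
    ["SH3", "SH2", "Kinase"],
    ["SH2", "Kinase"],
    ["PDZ", "SH3"],
    ["Kinase", "PH"],
]
net = bigram_network(proteins)
print(versatility(net, "Kinase"))  # partners SH2 and PH -> 2
```

    In this picture, a "core domain" would be one with many distinct partners across the repertoire, a "peripheral domain" one with few.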

  19. Evolutionary versatility of eukaryotic protein domains revealed by their bigram networks.

    PubMed

    Xie, Xueying; Jin, Jing; Mao, Yongyi

    2011-08-18

    Protein domains are globular structures of independently folded polypeptides that exert catalytic or binding activities. Their sequences are recognized as evolutionary units that, through genome recombination, constitute protein repertoires of linkage patterns. Via mutations, domains acquire modified functions that contribute to the fitness of cells and organisms. Recent studies have addressed the evolutionary selection that may have shaped the functions of individual domains and the emergence of particular domain combinations, which led to new cellular functions in multi-cellular animals. This study focuses on modeling domain linkage globally and investigates evolutionary implications that may be revealed by novel computational analysis. A survey of 77 completely sequenced eukaryotic genomes implies a potential hierarchical and modular organization of biological functions in most living organisms. Domains in a genome or multiple genomes are modeled as a network of hetero-duplex covalent linkages, termed bigrams. A novel computational technique is introduced to decompose such networks, whereby the notion of domain "networking versatility" is derived and measured. The most and least "versatile" domains (termed "core domains" and "peripheral domains", respectively) are examined both computationally via sequence conservation measures and experimentally using selected domains. Our study suggests that such a versatility measure extracted from the bigram networks correlates with the adaptivity of domains during evolution: the network core domains are highly adaptive, in marked contrast to the network peripheral domains. Domain recombination has played a major part in the evolution of eukaryotes, contributing to genome complexity. From a systems point of view, as the result of selection and constant refinement, networks of domain linkage are structured in a hierarchical, modular fashion. Domains with a high degree of networking versatility appear to be evolutionarily adaptive, potentially through functional innovations. Domain bigram networks are informative as a model of biological functions. The networking versatility indices extracted from such networks for individual domains reflect the strength of evolutionary selection that the domains have experienced.

  20. Protein function prediction--the power of multiplicity.

    PubMed

    Rentzsch, Robert; Orengo, Christine A

    2009-04-01

    Advances in experimental and computational methods have quietly ushered in a new era in protein function annotation. This 'age of multiplicity' is marked by the notion that only the use of multiple tools, multiple lines of evidence and consideration of the multiple aspects of function can give us the broad picture that 21st-century biology will need to link and alter micro- and macroscopic phenotypes. It might also help us to undo past mistakes by removing errors from our databases and prevent us from producing more. On the downside, multiplicity is often confusing. We therefore systematically review methods and resources for automated protein function prediction, looking at individual (biochemical) and contextual (network) functions, respectively.

  1. Applied Graph-Mining Algorithms to Study Biomolecular Interaction Networks

    PubMed Central

    2014-01-01

    Protein-protein interaction (PPI) networks carry vital information on the organization of molecular interactions in cellular systems. The identification of functionally relevant modules in PPI networks is one of the most important applications of biological network analysis. Computational analysis is becoming an indispensable tool to understand large-scale biomolecular interaction networks. Several types of computational methods have been developed and employed for the analysis of PPI networks. Of these computational methods, graph comparison and module detection are the two most commonly used strategies. This review summarizes current literature on graph kernel and graph alignment methods for graph comparison strategies, as well as module detection approaches including seed-and-extend, hierarchical clustering, optimization-based, probabilistic, and frequent subgraph methods. Herein, we provide a comprehensive review of the major algorithms employed under each theme, including our recently published frequent subgraph method, for detecting functional modules commonly shared across multiple cancer PPI networks. PMID:24800226
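    Of the module-detection themes listed above, seed-and-extend is the simplest to illustrate. The sketch below greedily grows a module from a seed protein while the module's internal edge density stays above a threshold; the toy graph and the density threshold are invented for illustration, not taken from any of the reviewed algorithms.

```python
def seed_and_extend(graph, seed, min_density=0.7):
    """Greedy seed-and-extend: grow a module from a seed node, adding the
    candidate with the most links into the module, while internal edge
    density stays above min_density."""
    module = {seed}
    candidates = set(graph[seed])
    while candidates:
        best, best_gain = None, 0
        for node in candidates:
            links = sum(1 for m in module if node in graph[m])
            if links > best_gain:
                best, best_gain = node, links
        if best is None:
            break
        new = module | {best}
        n = len(new)
        internal = sum(1 for a in new for b in graph[a] if b in new) / 2
        if internal / (n * (n - 1) / 2) < min_density:
            break  # adding this node would dilute the module
        module = new
        candidates = (candidates | set(graph[best])) - module
    return module

# Toy PPI graph: a dense triangle {A, B, C} with a pendant node D
ppi = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
print(sorted(seed_and_extend(ppi, "A")))  # ['A', 'B', 'C']
```

    Real seed-and-extend methods differ mainly in how seeds are chosen and how the extension score is defined, but they share this grow-until-quality-drops loop.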

  2. Mutual information estimation reveals global associations between stimuli and biological processes

    PubMed Central

    Suzuki, Taiji; Sugiyama, Masashi; Kanamori, Takafumi; Sese, Jun

    2009-01-01

    Background: Although microarray gene expression analysis has become popular, it remains difficult to interpret the biological changes caused by stimuli or variation of conditions. Clustering genes and associating each group with biological functions are commonly used methods. However, such methods only detect partial changes within cell processes. Herein, we propose a method for discovering global changes within a cell by associating observed conditions of gene expression with gene functions. Results: To elucidate the association, we introduce a novel feature selection method called Least-Squares Mutual Information (LSMI), which computes mutual information without density estimation, and therefore can detect nonlinear associations within a cell. We demonstrate the effectiveness of LSMI through comparison with existing methods. The results of the application to yeast microarray datasets reveal that non-natural stimuli affect various biological processes, whereas others show no significant relation to specific cell processes. Furthermore, we discover that biological processes can be categorized into four types according to their responses to various stimuli: DNA/RNA metabolism, gene expression, protein metabolism, and protein localization. Conclusion: We proposed a novel feature selection method called LSMI, and applied it to mining the association between conditions of yeast and biological processes through microarray datasets. LSMI allows us to elucidate the global organization of cellular process control. PMID:19208155
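    For orientation, the quantity LSMI estimates is ordinary mutual information. The toy plug-in estimator below (histogram counts over discrete values, not the direct density-ratio fitting that distinguishes LSMI) only shows what is being measured: how far the joint distribution departs from independence.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in mutual information estimate for discrete variables:
    I(X;Y) = sum over (x, y) of p(x,y) * log2( p(x,y) / (p(x) * p(y)) )."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Perfectly associated variables carry 1 bit; independent ones carry 0
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0
print(mutual_information([0, 1, 0, 1], [0, 0, 1, 1]))  # 0.0
```

    The plug-in estimator needs density (histogram) estimates as an intermediate step; LSMI's appeal is that it avoids exactly that step while still capturing nonlinear dependence.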

  3. UltraPse: A Universal and Extensible Software Platform for Representing Biological Sequences.

    PubMed

    Du, Pu-Feng; Zhao, Wei; Miao, Yang-Yang; Wei, Le-Yi; Wang, Likun

    2017-11-14

    With the avalanche of biological sequences in public databases, one of the most challenging problems in computational biology is to predict their biological functions and cellular attributes. Most of the existing prediction algorithms can only handle fixed-length numerical vectors. Therefore, it is important to be able to represent biological sequences of various lengths using fixed-length numerical vectors. Although several algorithms, as well as software implementations, have been developed to address this problem, these existing programs can only provide a fixed number of representation modes. Every time a new sequence representation mode is developed, a new program is needed. In this paper, we propose UltraPse as a universal software platform for this problem. UltraPse not only generates the various existing sequence representation modes, but also simplifies future programming work in developing novel representation modes. Its extensibility is particularly enhanced: it allows users to define their own representation modes, their own physicochemical properties, or even their own types of biological sequences. Moreover, UltraPse is also the fastest software of its kind. The source code package, as well as the executables for both Linux and Windows platforms, can be downloaded from the GitHub repository.
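    A minimal example of the fixed-length representation problem such platforms address: a k-mer composition vector maps sequences of any length to a vector with one slot per possible k-mer. This is a generic representation mode for illustration, not UltraPse's own code.

```python
from itertools import product

def kmer_vector(seq, k=2, alphabet="ACGT"):
    """Represent a variable-length sequence as a fixed-length vector of
    k-mer frequencies, one slot per possible k-mer in lexicographic order."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    total = max(len(seq) - k + 1, 1)
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        window = seq[i:i + k]
        if window in counts:  # skip windows with ambiguous characters
            counts[window] += 1
    return [counts[km] / total for km in kmers]

v = kmer_vector("ACGTAC")
print(len(v))   # 16 slots, regardless of input length
print(v[1])     # frequency of "AC": 2 of 5 windows -> 0.4
```

    Pseudo-composition modes of the kind the platform supports extend this idea with physicochemical properties and sequence-order correlations, but the output is still a fixed-length vector.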

  4. Synthetic Analog and Digital Circuits for Cellular Computation and Memory

    PubMed Central

    Purcell, Oliver; Lu, Timothy K.

    2014-01-01

    Biological computation is a major area of focus in synthetic biology because it has the potential to enable a wide range of applications. Synthetic biologists have applied engineering concepts to biological systems in order to construct progressively more complex gene circuits capable of processing information in living cells. Here, we review the current state of computational genetic circuits and describe artificial gene circuits that perform digital and analog computation. We then discuss recent progress in designing gene circuits that exhibit memory, and how memory and computation have been integrated to yield more complex systems that can both process and record information. Finally, we suggest new directions for engineering biological circuits capable of computation. PMID:24794536
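    As a sketch of how analog gene-circuit components yield digital behavior, the Hill-type repression curve below acts as a genetic NOT gate: high output at low repressor input and vice versa. All parameter values are illustrative, not taken from any specific published circuit.

```python
def not_gate(repressor, k=1.0, n=4, leak=0.02, vmax=1.0):
    """Steady-state output of a repressible promoter (a genetic NOT gate).
    A Hill coefficient n > 1 sharpens the analog curve toward digital
    switch-like behavior; 'leak' models basal expression."""
    return leak + (vmax - leak) / (1.0 + (repressor / k) ** n)

low, high = not_gate(0.1), not_gate(10.0)
print(round(low, 3), round(high, 3))  # 1.0 0.02
```

    Cascading such units gives digital logic; using the same curves in their graded intermediate regime is what the review means by analog computation.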

  5. ACToR-AGGREGATED COMPUTATIONAL TOXICOLOGY ...

    EPA Pesticide Factsheets

    One goal of the field of computational toxicology is to predict chemical toxicity by combining computer models with biological and toxicological data.

  6. HGIMDA: Heterogeneous graph inference for miRNA-disease association prediction

    PubMed Central

    Zhang, Xu; You, Zhu-Hong; Huang, Yu-An; Yan, Gui-Ying

    2016-01-01

    Recently, microRNAs (miRNAs) have drawn increasing attention because accumulating experimental studies have indicated that miRNAs could play critical roles in multiple biological processes as well as in the development and progression of human complex diseases. Using the huge number of known heterogeneous biological datasets to predict potential associations between miRNAs and diseases is an important topic in the fields of biology, medicine, and bioinformatics. In this study, considering the limitations of previous computational methods, we developed the computational model of Heterogeneous Graph Inference for MiRNA-Disease Association prediction (HGIMDA) to uncover potential miRNA-disease associations by integrating miRNA functional similarity, disease semantic similarity, Gaussian interaction profile kernel similarity, and experimentally verified miRNA-disease associations into a heterogeneous graph. HGIMDA obtained AUCs of 0.8781 and 0.8077 based on global and local leave-one-out cross validation, respectively. Furthermore, HGIMDA was applied to three important human cancers for performance evaluation. As a result, 90% (Colon Neoplasms), 88% (Esophageal Neoplasms) and 88% (Kidney Neoplasms) of the top 50 predicted miRNAs were confirmed by recent experimental reports. Furthermore, HGIMDA can be effectively applied to new diseases and new miRNAs without any known associations, which overcomes an important limitation of many previous computational models. PMID:27533456
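    The flavor of heterogeneous-graph inference can be sketched as iterative score propagation over similarity matrices. The update rule, toy matrices, and parameter values below are a generic illustration of the idea, not the published HGIMDA algorithm.

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def propagate(A, Sm, Sd, alpha=0.4, iters=20):
    """Iterative score propagation on a heterogeneous graph:
        S <- alpha * Sm @ S @ Sd + (1 - alpha) * A
    where A holds known miRNA-disease associations and Sm / Sd are
    row-normalized miRNA and disease similarity matrices."""
    S = [row[:] for row in A]
    for _ in range(iters):
        P = matmul(matmul(Sm, S), Sd)
        S = [[alpha * p + (1 - alpha) * a for p, a in zip(prow, arow)]
             for prow, arow in zip(P, A)]
    return S

# Toy data: 2 miRNAs x 2 diseases; miRNA 0 is known to target disease 0
A = [[1.0, 0.0], [0.0, 0.0]]
Sm = [[0.8, 0.2], [0.2, 0.8]]   # miRNA functional similarity
Sd = [[0.9, 0.1], [0.1, 0.9]]   # disease semantic similarity
S = propagate(A, Sm, Sd)
# miRNA 1, similar to miRNA 0, gains a higher score for disease 0
print(S[1][0] > S[1][1])  # True
```

    The propagation transfers evidence from known associations to similar miRNAs and diseases, which is why such models can score new diseases and miRNAs that have no known associations of their own.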

  7. HGIMDA: Heterogeneous graph inference for miRNA-disease association prediction.

    PubMed

    Chen, Xing; Yan, Chenggang Clarence; Zhang, Xu; You, Zhu-Hong; Huang, Yu-An; Yan, Gui-Ying

    2016-10-04

    Recently, microRNAs (miRNAs) have drawn increasing attention because accumulating experimental studies have indicated that miRNAs could play critical roles in multiple biological processes as well as in the development and progression of human complex diseases. Using the huge number of known heterogeneous biological datasets to predict potential associations between miRNAs and diseases is an important topic in the fields of biology, medicine, and bioinformatics. In this study, considering the limitations of previous computational methods, we developed the computational model of Heterogeneous Graph Inference for MiRNA-Disease Association prediction (HGIMDA) to uncover potential miRNA-disease associations by integrating miRNA functional similarity, disease semantic similarity, Gaussian interaction profile kernel similarity, and experimentally verified miRNA-disease associations into a heterogeneous graph. HGIMDA obtained AUCs of 0.8781 and 0.8077 based on global and local leave-one-out cross validation, respectively. Furthermore, HGIMDA was applied to three important human cancers for performance evaluation. As a result, 90% (Colon Neoplasms), 88% (Esophageal Neoplasms) and 88% (Kidney Neoplasms) of the top 50 predicted miRNAs were confirmed by recent experimental reports. Furthermore, HGIMDA can be effectively applied to new diseases and new miRNAs without any known associations, which overcomes an important limitation of many previous computational models.

  8. A Formalized Design Process for Bacterial Consortia That Perform Logic Computing

    PubMed Central

    Sun, Rui; Xi, Jingyi; Wen, Dingqiao; Feng, Jingchen; Chen, Yiwei; Qin, Xiao; Ma, Yanrong; Luo, Wenhan; Deng, Linna; Lin, Hanchi; Yu, Ruofan; Ouyang, Qi

    2013-01-01

    The concept of microbial consortia is of great attractiveness in synthetic biology. Despite all its benefits, however, problems remain for large-scale multicellular gene circuits, for example, how to reliably design and distribute the circuits in microbial consortia with a limited number of well-behaved genetic modules and wiring quorum-sensing molecules. To address this problem, here we propose a formalized design process: (i) determine the basic logic units (AND, OR and NOT gates) based on mathematical and biological considerations; (ii) establish rules to search for and distribute the simplest logic design; (iii) assemble the assigned basic logic units in each logic operating cell; and (iv) fine-tune the circuiting interface between logic operators. We analyzed in silico gene circuits with inputs ranging from two to four, comparing our method with pre-existing ones. Results showed that this formalized design process is more feasible in terms of the number of cells required. Furthermore, as a proof of principle, an Escherichia coli consortium that performs the XOR function, a typical complex computing operation, was designed. The construction and characterization of logic operators is independent of "wiring" and provides predictive information for fine-tuning. This formalized design process provides guidance for the design of microbial consortia that perform distributed biological computation. PMID:23468999
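    The distribution step of such a process can be illustrated for XOR, the function used in the proof of principle: each basic gate stands in for one engineered cell, and ordinary function calls stand in for the quorum-sensing "wires". This is a logical caricature of the decomposition, not the paper's circuit design.

```python
# Each "cell" hosts one basic logic unit; "wires" are quorum-sensing signals.
def cell_not(a):
    return int(not a)

def cell_and(a, b):
    return int(a and b)

def cell_or(a, b):
    return int(a or b)

def consortium_xor(a, b):
    """XOR decomposed over basic units: (a AND NOT b) OR (NOT a AND b)."""
    s1 = cell_and(a, cell_not(b))   # one branch of the consortium
    s2 = cell_and(cell_not(a), b)   # the other branch
    return cell_or(s1, s2)          # the output cell

print([consortium_xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

    The hard biological problems, choosing orthogonal signal molecules and tuning the interfaces between operators, correspond to steps (ii) and (iv) and have no analogue in this toy.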

  9. OpenStructure: a flexible software framework for computational structural biology.

    PubMed

    Biasini, Marco; Mariani, Valerio; Haas, Jürgen; Scheuber, Stefan; Schenk, Andreas D; Schwede, Torsten; Philippsen, Ansgar

    2010-10-15

    Developers of new methods in computational structural biology are often hampered in their research by incompatible software tools and non-standardized data formats. To address this problem, we have developed OpenStructure as a modular open source platform to provide a powerful, yet flexible general working environment for structural bioinformatics. OpenStructure consists primarily of a set of libraries written in C++ with a cleanly designed application programmer interface. All functionality can be accessed directly in C++ or in a Python layer, meeting both the requirements for high efficiency and ease of use. Powerful selection queries and the notion of entity views to represent these selections greatly facilitate the development and implementation of algorithms on structural data. The modular integration of computational core methods with powerful visualization tools makes OpenStructure an ideal working and development environment. Several applications, such as the latest versions of IPLT and QMean, have been implemented based on OpenStructure-demonstrating its value for the development of next-generation structural biology algorithms. Source code licensed under the GNU lesser general public license and binaries for MacOS X, Linux and Windows are available for download at http://www.openstructure.org. torsten.schwede@unibas.ch Supplementary data are available at Bioinformatics online.

  10. Defining a Computational Framework for the Assessment of ...

    EPA Pesticide Factsheets

    The Adverse Outcome Pathway (AOP) framework describes the effects of environmental stressors across multiple scales of biological organization and function. This includes an evaluation of the potential for each key event to occur across a broad range of species in order to determine the taxonomic applicability of each AOP. Computational tools are needed to facilitate this process. Recently, we developed a tool that uses sequence homology to evaluate the applicability of molecular initiating events across species (Lalone et al., Toxicol. Sci., 2016). To extend our ability to make computational predictions at higher levels of biological organization, we have created the AOPdb. This database links molecular targets associated with key events in the AOPwiki to publicly available data (e.g. gene-protein, pathway, species orthology, ontology, chemical, disease), including ToxCast assay information. The AOPdb combines different data types in order to characterize the impacts of chemicals on human health and the environment and serves as a decision support tool for case study development in the area of taxonomic applicability. As a proof of concept, the AOPdb allows identification of relevant molecular targets, biological pathways, and chemical and disease associations across species for four AOPs from the AOP-Wiki (https://aopwiki.org): Estrogen receptor antagonism leading to reproductive dysfunction (Aop:30); Aromatase inhibition leading to reproductive d

  11. In vitro evaluation of low-intensity light radiation on murine melanoma (B16F10) cells.

    PubMed

    Peidaee, P; Almansour, N M; Pirogova, E

    2016-03-01

    Changes in the energy state of biomolecules induced by electromagnetic radiation lead to changes in the biological functions of the irradiated biomolecules. Using the RRM approach, it was computationally predicted that far-infrared light irradiation in the range of 3500-6000 nm affects the biological activity of proto-oncogene proteins. This in vitro study evaluates quantitatively and qualitatively the effects of selected far-infrared exposures at the computationally determined wavelengths on mouse melanoma B16F10 cells and Chinese hamster ovarian (CHO) cells by MTT (thiazolyl blue tetrazolium bromide) cell proliferation assay and confocal laser-scanning microscopy (CLSM). This paper also presents the findings obtained from irradiating B16F10 and CHO cells with selected wavelengths in the visible and near-infrared range. The MTT results show that far-infrared irradiation induces a detrimental effect on the cellular viability of B16F10 cells, while that of normal CHO cells is not affected considerably. Moreover, CLSM images demonstrate visible cellular detachment of cancer cells. The observed effects support the hypothesis that far-infrared light irradiation within the computationally determined wavelength range induces a biological effect on cancer cells. Upon irradiation at selected visible and near-infrared wavelengths, no visible changes were detected in the cellular viability of either normal or cancer cells.

  12. Systems biology and mechanics of growth.

    PubMed

    Eskandari, Mona; Kuhl, Ellen

    2015-01-01

    In contrast to inert systems, living biological systems have the advantage to adapt to their environment through growth and evolution. This transfiguration is evident during embryonic development, when the predisposed need to grow allows form to follow function. Alterations in the equilibrium state of biological systems breed disease and mutation in response to environmental triggers. The need to characterize the growth of biological systems to better understand these phenomena has motivated the continuum theory of growth and stimulated the development of computational tools in systems biology. Biological growth in development and disease is increasingly studied using the framework of morphoelasticity. Here, we demonstrate the potential for morphoelastic simulations through examples of volume, area, and length growth, inspired by tumor expansion, chronic bronchitis, brain development, intestine formation, plant shape, and myopia. We review the systems biology of living systems in light of biochemical and optical stimuli and classify different types of growth to facilitate the design of growth models for various biological systems within this generic framework. Exploring the systems biology of growth introduces a new venue to control and manipulate embryonic development, disease progression, and clinical intervention. © 2015 Wiley Periodicals, Inc.
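    The multiplicative decomposition at the heart of morphoelasticity can be shown in one dimension: the total stretch splits into elastic and growth parts, and a stress-driven growth law gradually relaxes the elastic part. Parameter values and the linear constitutive law are illustrative only.

```python
def grow_1d(lam_total=1.2, k_growth=1.0, E=1.0, dt=0.01, steps=1000):
    """1D morphoelastic growth sketch: the total stretch decomposes
    multiplicatively, lam_total = lam_e * lam_g; the growth stretch
    lam_g evolves to relax elastic stress (stress-driven growth law)."""
    lam_g = 1.0
    for _ in range(steps):
        lam_e = lam_total / lam_g
        stress = E * (lam_e - 1.0)               # linear elastic law
        lam_g += dt * k_growth * stress * lam_g  # growth relieves stress
    return lam_g, lam_total / lam_g

lam_g, lam_e = grow_1d()
print(round(lam_g, 3), round(lam_e, 3))  # growth absorbs the stretch: 1.2 1.0
```

    Held at a fixed total stretch, the tissue grows until the elastic stretch returns to 1 and the stress vanishes; the same decomposition, in tensor form, underlies the volume, area, and length growth examples in the review.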

  13. New technologies accelerate the exploration of non-coding RNAs in horticultural plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Degao; Mewalal, Ritesh; Hu, Rongbin

    Non-coding RNAs (ncRNAs), that is, RNAs not translated into proteins, are crucial regulators of a variety of biological processes in plants. While protein-encoding genes have been relatively well-annotated in sequenced genomes, accounting for a small portion of the genome space in plants, the universe of plant ncRNAs is rapidly expanding. Recent advances in experimental and computational technologies have generated a great momentum for discovery and functional characterization of ncRNAs. Here we summarize the classification and known biological functions of plant ncRNAs, review the application of next-generation sequencing (NGS) technology and ribosome profiling technology to ncRNA discovery in horticultural plants and discuss the application of new technologies, especially the new genome-editing tool clustered regularly interspaced short palindromic repeat (CRISPR)/CRISPR-associated protein 9 (Cas9) systems, to functional characterization of plant ncRNAs.

  14. Prediction of enzymatic pathways by integrative pathway mapping

    PubMed Central

    Wichelecki, Daniel J; San Francisco, Brian; Zhao, Suwen; Rodionov, Dmitry A; Vetting, Matthew W; Al-Obaidi, Nawar F; Lin, Henry; O'Meara, Matthew J; Scott, David A; Morris, John H; Russel, Daniel; Almo, Steven C; Osterman, Andrei L

    2018-01-01

    The functions of most proteins are yet to be determined. The function of an enzyme is often defined by its interacting partners, including its substrate and product, and its role in larger metabolic networks. Here, we describe a computational method that predicts the functions of orphan enzymes by organizing them into a linear metabolic pathway. Given candidate enzyme and metabolite pathway members, this aim is achieved by finding those pathways that satisfy structural and network restraints implied by varied input information, including that from virtual screening, chemoinformatics, genomic context analysis, and ligand-binding experiments. We demonstrate this integrative pathway mapping method by predicting the L-gulonate catabolic pathway in Haemophilus influenzae Rd KW20. The prediction was subsequently validated experimentally by enzymology, crystallography, and metabolomics. Integrative pathway mapping by satisfaction of structural and network restraints is extensible to molecular networks in general and thus formally bridges the gap between structural biology and systems biology. PMID:29377793
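    The restraint-satisfaction idea can be caricatured in a few lines: enumerate linear orderings of candidate enzymes and keep the one whose adjacent pairs best satisfy pairwise support scores. The enzyme names and score values are hypothetical; the published method integrates far richer restraints than this.

```python
from itertools import permutations

def best_pathway(enzymes, restraint_scores):
    """Rank linear orderings of candidate enzymes by how well adjacent
    pairs satisfy the input restraints (a higher pairwise score means
    more support, e.g. from virtual screening or genomic context)."""
    def score(order):
        return sum(restraint_scores.get((a, b), 0.0)
                   for a, b in zip(order, order[1:]))
    return max(permutations(enzymes), key=score)

# Hypothetical pairwise support values for a 3-step pathway
scores = {("E1", "E2"): 0.9, ("E2", "E3"): 0.8, ("E2", "E1"): 0.1}
print(best_pathway(["E1", "E2", "E3"], scores))  # ('E1', 'E2', 'E3')
```

    Brute-force enumeration only works for small candidate sets; the appeal of casting pathway prediction as restraint satisfaction is that the same scoring view admits smarter optimizers as pathways grow.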

  15. Normal mode analysis and applications in biological physics.

    PubMed

    Dykeman, Eric C; Sankey, Otto F

    2010-10-27

    Normal mode analysis has become a popular and often used theoretical tool in the study of functional motions in enzymes, viruses, and large protein assemblies. The use of normal modes in the study of these motions is often extremely fruitful since many of the functional motions of large proteins can be described using just a few normal modes which are intimately related to the overall structure of the protein. In this review, we present a broad overview of several popular methods used in the study of normal modes in biological physics including continuum elastic theory, the elastic network model, and a new all-atom method, recently developed, which is capable of computing a subset of the low frequency vibrational modes exactly. After a review of the various methods, we present several examples of applications of normal modes in the study of functional motions, with an emphasis on viral capsids.
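    A minimal instance of normal mode analysis: for two equal masses fixed between walls and coupled by three identical springs, diagonalizing the 2x2 mass-weighted Hessian gives the two mode frequencies analytically. This toy is meant only to show what "modes from a Hessian" means; the methods in the review operate on Hessians with thousands of degrees of freedom.

```python
from math import sqrt

def chain_mode_frequencies(k=1.0, m=1.0):
    """Normal-mode angular frequencies of two equal masses joined to two
    walls and to each other by springs of stiffness k.
    Hessian / m = [[2k/m, -k/m], [-k/m, 2k/m]]; its eigenvalues are
    2k/m -+ k/m, i.e. k/m (in-phase mode) and 3k/m (out-of-phase mode)."""
    a, b = 2 * k / m, -k / m
    eig_low, eig_high = a + b, a - b
    return sqrt(eig_low), sqrt(eig_high)

w1, w2 = chain_mode_frequencies()
print(round(w1, 3), round(w2, 3))  # 1.0 1.732
```

    The slow, collective in-phase mode is the analogue of the low-frequency modes that dominate functional motions in large proteins and capsids.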

  16. New technologies accelerate the exploration of non-coding RNAs in horticultural plants

    PubMed Central

    Liu, Degao; Mewalal, Ritesh; Hu, Rongbin; Tuskan, Gerald A; Yang, Xiaohan

    2017-01-01

    Non-coding RNAs (ncRNAs), that is, RNAs not translated into proteins, are crucial regulators of a variety of biological processes in plants. While protein-encoding genes have been relatively well-annotated in sequenced genomes, accounting for a small portion of the genome space in plants, the universe of plant ncRNAs is rapidly expanding. Recent advances in experimental and computational technologies have generated a great momentum for discovery and functional characterization of ncRNAs. Here we summarize the classification and known biological functions of plant ncRNAs, review the application of next-generation sequencing (NGS) technology and ribosome profiling technology to ncRNA discovery in horticultural plants and discuss the application of new technologies, especially the new genome-editing tool clustered regularly interspaced short palindromic repeat (CRISPR)/CRISPR-associated protein 9 (Cas9) systems, to functional characterization of plant ncRNAs. PMID:28698797

  17. Geneious Basic: An integrated and extendable desktop software platform for the organization and analysis of sequence data

    PubMed Central

    Kearse, Matthew; Moir, Richard; Wilson, Amy; Stones-Havas, Steven; Cheung, Matthew; Sturrock, Shane; Buxton, Simon; Cooper, Alex; Markowitz, Sidney; Duran, Chris; Thierer, Tobias; Ashton, Bruce; Meintjes, Peter; Drummond, Alexei

    2012-01-01

    Summary: The two main functions of bioinformatics are the organization and analysis of biological data using computational resources. Geneious Basic has been designed to be an easy-to-use and flexible desktop software application framework for the organization and analysis of biological data, with a focus on molecular sequences and related data types. It integrates numerous industry-standard discovery analysis tools, with interactive visualizations to generate publication-ready images. One key contribution to researchers in the life sciences is the Geneious public application programming interface (API) that affords the ability to leverage the existing framework of the Geneious Basic software platform for virtually unlimited extension and customization. The result is an increase in the speed and quality of development of computational tools for the life sciences, due to the functionality and graphical user interface available to the developer through the public API. Geneious Basic represents an ideal platform for the bioinformatics community to leverage existing components and to integrate their own specific requirements for the discovery, analysis and visualization of biological data. Availability and implementation: Binaries and public API freely available for download at http://www.geneious.com/basic, implemented in Java and supported on Linux, Apple OSX and MS Windows. The software is also available from the Bio-Linux package repository at http://nebc.nerc.ac.uk/news/geneiousonbl. Contact: peter@biomatters.com PMID:22543367

  18. Geneious Basic: an integrated and extendable desktop software platform for the organization and analysis of sequence data.

    PubMed

    Kearse, Matthew; Moir, Richard; Wilson, Amy; Stones-Havas, Steven; Cheung, Matthew; Sturrock, Shane; Buxton, Simon; Cooper, Alex; Markowitz, Sidney; Duran, Chris; Thierer, Tobias; Ashton, Bruce; Meintjes, Peter; Drummond, Alexei

    2012-06-15

    The two main functions of bioinformatics are the organization and analysis of biological data using computational resources. Geneious Basic has been designed to be an easy-to-use and flexible desktop software application framework for the organization and analysis of biological data, with a focus on molecular sequences and related data types. It integrates numerous industry-standard discovery analysis tools, with interactive visualizations to generate publication-ready images. One key contribution to researchers in the life sciences is the Geneious public application programming interface (API) that affords the ability to leverage the existing framework of the Geneious Basic software platform for virtually unlimited extension and customization. The result is an increase in the speed and quality of development of computational tools for the life sciences, due to the functionality and graphical user interface available to the developer through the public API. Geneious Basic represents an ideal platform for the bioinformatics community to leverage existing components and to integrate their own specific requirements for the discovery, analysis and visualization of biological data. Binaries and public API freely available for download at http://www.geneious.com/basic, implemented in Java and supported on Linux, Apple OSX and MS Windows. The software is also available from the Bio-Linux package repository at http://nebc.nerc.ac.uk/news/geneiousonbl.

  19. A novel optogenetically tunable frequency modulating oscillator

    PubMed Central

    2018-01-01

    Synthetic biology has enabled the creation of biological reconfigurable circuits, which perform multiple functions monopolizing a single biological machine; such a system can switch between different behaviours in response to environmental cues. Previous work has demonstrated switchable dynamical behaviour employing reconfigurable logic gate genetic networks. Here we describe a computational framework for reconfigurable circuits in E. coli using combinations of logic gates, and also propose the biological implementation. The proposed system is an oscillator that can exhibit tunability of frequency and amplitude of oscillations. Further, the frequency of operation can be changed optogenetically. In silico analysis revealed that two-component light systems, in response to light within a frequency range, can be used for modulating the frequency of the oscillator or stopping the oscillations altogether. Computational modelling reveals that mixing two colonies of E. coli oscillating at different frequencies generates spatial beat patterns. Further, we show that these oscillations respond more robustly to input perturbations than the base oscillator, of which the proposed oscillator is a modification. Compared to the base oscillator, the proposed system shows faster synchronization in a colony of cells for a larger region of the parameter space. Additionally, the proposed oscillator exhibits a smaller synchronization error in the transient period after input perturbations. This provides a strong basis for the construction of synthetic reconfigurable circuits in bacteria and other organisms, which can be scaled up to perform functions in the field of time-dependent drug delivery with tunable dosages, and sets the stage for further development of circuits with synchronized population-level behaviour. PMID:29389936
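    A generic ring oscillator conveys the idea of sustained genetic oscillations. The reduced model below (protein-only, three genes each repressing the next, with a deliberately exaggerated Hill coefficient so the loop clears the oscillation threshold) is an illustration only, not the optogenetically tunable circuit of the paper.

```python
def ring_oscillator(alpha=20.0, n=10, dt=0.01, steps=20000):
    """Forward-Euler integration of a three-gene repression ring:
    dp_i/dt = alpha / (1 + p_{i-1}^n) - p_i.
    The exaggerated Hill coefficient n gives the cyclic feedback loop
    enough gain for sustained (limit-cycle) oscillations."""
    p = [2.0, 1.0, 0.5]          # asymmetric start breaks the symmetry
    trace = []
    for _ in range(steps):
        dp = [alpha / (1.0 + p[i - 1] ** n) - p[i] for i in range(3)]
        p = [pi + dt * di for pi, di in zip(p, dp)]
        trace.append(p[0])
    return trace

trace = ring_oscillator()
late = trace[len(trace) // 2:]
print(max(late) - min(late) > 1.0)  # oscillations persist at late times
```

    Frequency tuning in such models amounts to modulating production or degradation rates, which is the role the paper assigns to light via two-component systems.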

  20. A novel optogenetically tunable frequency modulating oscillator.

    PubMed

    Mahajan, Tarun; Rai, Kshitij

    2018-01-01

    Synthetic biology has enabled the creation of biological reconfigurable circuits, which perform multiple functions monopolizing a single biological machine; such a system can switch between different behaviours in response to environmental cues. Previous work has demonstrated switchable dynamical behaviour employing reconfigurable logic gate genetic networks. Here we describe a computational framework for reconfigurable circuits in E. coli using combinations of logic gates, and also propose the biological implementation. The proposed system is an oscillator that can exhibit tunability of frequency and amplitude of oscillations. Further, the frequency of operation can be changed optogenetically. In silico analysis revealed that two-component light systems, in response to light within a frequency range, can be used for modulating the frequency of the oscillator or stopping the oscillations altogether. Computational modelling reveals that mixing two colonies of E. coli oscillating at different frequencies generates spatial beat patterns. Further, we show that these oscillations respond more robustly to input perturbations than the base oscillator, of which the proposed oscillator is a modification. Compared to the base oscillator, the proposed system shows faster synchronization in a colony of cells for a larger region of the parameter space. Additionally, the proposed oscillator exhibits a smaller synchronization error in the transient period after input perturbations. This provides a strong basis for the construction of synthetic reconfigurable circuits in bacteria and other organisms, which can be scaled up to perform functions in the field of time-dependent drug delivery with tunable dosages, and sets the stage for further development of circuits with synchronized population-level behaviour.

  1. BicPAMS: software for biological data analysis with pattern-based biclustering.

    PubMed

    Henriques, Rui; Ferreira, Francisco L; Madeira, Sara C

    2017-02-02

    Biclustering has been largely applied for the unsupervised analysis of biological data, being recognised today as a key technique to discover putative modules in both expression data (subsets of genes correlated in subsets of conditions) and network data (groups of coherently interconnected biological entities). However, given its computational complexity, only recent breakthroughs on pattern-based biclustering enabled efficient searches without the restrictions that state-of-the-art biclustering algorithms place on the structure and homogeneity of biclusters. As a result, pattern-based biclustering provides the unprecedented opportunity to discover non-trivial yet meaningful biological modules with putative functions, whose coherency and tolerance to noise can be tuned and made problem-specific. To enable the effective use of pattern-based biclustering by the scientific community, we developed BicPAMS (Biclustering based on PAttern Mining Software), a software that: 1) makes available state-of-the-art pattern-based biclustering algorithms (BicPAM (Henriques and Madeira, Alg Mol Biol 9:27, 2014), BicNET (Henriques and Madeira, Alg Mol Biol 11:23, 2016), BicSPAM (Henriques and Madeira, BMC Bioinforma 15:130, 2014), BiC2PAM (Henriques and Madeira, Alg Mol Biol 11:1-30, 2016), BiP (Henriques and Madeira, IEEE/ACM Trans Comput Biol Bioinforma, 2015), DeBi (Serin and Vingron, AMB 6:1-12, 2011) and BiModule (Okada et al., IPSJ Trans Bioinf 48(SIG5):39-48, 2007)); 2) consistently integrates their dispersed contributions; 3) further explores additional accuracy and efficiency gains; and 4) makes available graphical and application programming interfaces. Results on both synthetic and real data confirm the relevance of BicPAMS for biological data analysis, highlighting its essential role for the discovery of putative modules with non-trivial yet biologically significant functions from expression and network data. 
BicPAMS is the first biclustering tool offering the possibility to: 1) parametrically customize the structure, coherency and quality of biclusters; 2) analyze large-scale biological networks; and 3) tackle the restrictive assumptions placed by state-of-the-art biclustering algorithms. These contributions are shown to be key for an adequate, complete and user-assisted unsupervised analysis of biological data. BicPAMS and its tutorial are available at http://www.bicpams.com.
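
    The pattern-based idea can be sketched in toy form (hypothetical discretization and data; this is not BicPAMS's actual algorithm): discretize the expression matrix into symbols, then report column subsets whose identical symbolic pattern is shared by enough rows.

```python
# Toy pattern-based biclustering: rows that agree symbol-for-symbol on a
# column subset form a bicluster. Thresholds and data are invented.

from itertools import combinations

def discretize(matrix, cuts=(0.33, 0.66)):
    """Map each value to a symbol 0/1/2 by global rank thresholds."""
    flat = sorted(v for row in matrix for v in row)
    lo = flat[int(cuts[0] * (len(flat) - 1))]
    hi = flat[int(cuts[1] * (len(flat) - 1))]
    return [[0 if v <= lo else (1 if v <= hi else 2) for v in row] for row in matrix]

def biclusters(matrix, min_rows=2, min_cols=2):
    """Enumerate column subsets, larger patterns first."""
    sym = discretize(matrix)
    n_cols = len(matrix[0])
    found = []
    for size in range(n_cols, min_cols - 1, -1):
        for cols in combinations(range(n_cols), size):
            groups = {}
            for r, row in enumerate(sym):
                groups.setdefault(tuple(row[c] for c in cols), []).append(r)
            for rows in groups.values():
                if len(rows) >= min_rows:
                    found.append((tuple(rows), cols))
    return found

data = [[0.1, 0.2, 0.9, 0.8],
        [0.1, 0.2, 0.9, 0.1],
        [0.9, 0.8, 0.1, 0.2],
        [0.1, 0.2, 0.9, 0.9]]
print(biclusters(data)[0])   # → ((0, 1, 3), (0, 1, 2))
```

    Real pattern-based biclusterers additionally enforce maximality and tolerate noise; this sketch only shows the core discretize-then-mine step.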

  2. HPC AND GRID COMPUTING FOR INTEGRATIVE BIOMEDICAL RESEARCH

    PubMed Central

    Kurc, Tahsin; Hastings, Shannon; Kumar, Vijay; Langella, Stephen; Sharma, Ashish; Pan, Tony; Oster, Scott; Ervin, David; Permar, Justin; Narayanan, Sivaramakrishnan; Gil, Yolanda; Deelman, Ewa; Hall, Mary; Saltz, Joel

    2010-01-01

    Integrative biomedical research projects query, analyze, and integrate many different data types and make use of datasets obtained from measurements or simulations of structure and function at multiple biological scales. With the increasing availability of high-throughput and high-resolution instruments, integrative biomedical research imposes many challenging requirements on software middleware systems. In this paper, we look at some of these requirements using example research pattern templates. We then discuss how middleware systems, which incorporate Grid and high-performance computing, could be employed to address the requirements. PMID:20107625

  3. Metacognition: computation, biology and function

    PubMed Central

    Fleming, Stephen M.; Dolan, Raymond J.; Frith, Christopher D.

    2012-01-01

    Many complex systems maintain a self-referential check and balance. In animals, such reflective monitoring and control processes have been grouped under the rubric of metacognition. In this introductory article to a Theme Issue on metacognition, we review recent and rapidly progressing developments from neuroscience, cognitive psychology, computer science and philosophy of mind. While each of these areas is represented in detail by individual contributions to the volume, we take this opportunity to draw links between disciplines, and highlight areas where further integration is needed. Specifically, we cover the definition, measurement, neurobiology and possible functions of metacognition, and assess the relationship between metacognition and consciousness. We propose a framework in which level of representation, order of behaviour and access consciousness are orthogonal dimensions of the conceptual landscape. PMID:22492746

  4. Application of dynamic topic models to toxicogenomics data.

    PubMed

    Lee, Mikyung; Liu, Zhichao; Huang, Ruili; Tong, Weida

    2016-10-06

    All biological processes are inherently dynamic. Biological systems evolve transiently or sustainably according to sequential time points after perturbation by environmental insults, drugs and chemicals. Investigating the temporal behavior of molecular events is important for understanding the underlying mechanisms governing a biological system's response to perturbations such as drug treatment. The intrinsic complexity of time-series data requires appropriate computational algorithms for data interpretation. In this study, we propose, for the first time, the application of dynamic topic models (DTM) for analyzing time-series gene expression data. A large time-series toxicogenomics dataset was studied. It contains over 3144 microarrays of gene expression data corresponding to rat livers treated with 131 compounds (most are drugs) at two doses (control and high dose) in a repeated schedule containing four separate time points (4-, 8-, 15- and 29-day). We analyzed, with DTM, the topics (each consisting of a set of genes) and their biological interpretations over these four time points. We identified hidden patterns embedded in these time-series gene expression profiles. From the topic distributions for each compound-time condition, a number of drugs were successfully clustered by their shared mode of action, such as PPARα agonists and COX inhibitors. The biological meaning underlying each topic was interpreted using diverse sources of information, such as functional analysis of the pathways and therapeutic uses of the drugs. Additionally, we found that sample clusters produced by DTM are much more coherent in terms of functional categories than those from traditional clustering algorithms. We demonstrated that DTM, a text-mining technique, can be a powerful computational approach for clustering time-series gene expression profiles with a probabilistic representation of their dynamic features along sequential time frames. 
The method offers an alternative way for uncovering hidden patterns embedded in time series gene expression profiles to gain enhanced understanding of dynamic behavior of gene regulation in the biological system.

  5. Combining guilt-by-association and guilt-by-profiling to predict Saccharomyces cerevisiae gene function

    PubMed Central

    Tian, Weidong; Zhang, Lan V; Taşan, Murat; Gibbons, Francis D; King, Oliver D; Park, Julie; Wunderlich, Zeba; Cherry, J Michael; Roth, Frederick P

    2008-01-01

    Background: Learning the function of genes is a major goal of computational genomics. Methods for inferring gene function have typically fallen into two categories: 'guilt-by-profiling', which exploits correlation between function and other gene characteristics; and 'guilt-by-association', which transfers function from one gene to another via biological relationships. Results: We have developed a strategy ('Funckenstein') that performs guilt-by-profiling and guilt-by-association and combines the results. Using a benchmark set of functional categories and input data for protein-coding genes in Saccharomyces cerevisiae, Funckenstein was compared with a previous combined strategy. Subsequently, we applied Funckenstein to 2,455 Gene Ontology terms. In the process, we developed 2,455 guilt-by-profiling classifiers based on 8,848 gene characteristics and 12 functional linkage graphs based on 23 biological relationships. Conclusion: Funckenstein outperforms a previous combined strategy using a common benchmark dataset. The combination of 'guilt-by-profiling' and 'guilt-by-association' gave significant improvement over the component classifiers, showing the greatest synergy for the most specific functions. Performance was evaluated by cross-validation and by literature examination of the top-scoring novel predictions. These quantitative predictions should help prioritize experimental study of yeast gene functions. PMID:18613951
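
    A stripped-down sketch of combining the two evidence types (toy genes and scores, not Funckenstein itself): per-gene "guilt-by-profiling" scores are repeatedly blended with the mean score of each gene's network neighbours ("guilt-by-association").

```python
# Toy combination of profiling priors with association propagation.
# Gene names, prior scores, edges and the blending weight are invented.

def combine(prior, edges, alpha=0.5, iters=20):
    """score <- alpha * prior + (1 - alpha) * neighbour mean, iterated."""
    neighbours = {g: [] for g in prior}
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)
    score = dict(prior)
    for _ in range(iters):
        score = {
            g: alpha * prior[g] + (1 - alpha) *
               (sum(score[n] for n in neighbours[g]) / len(neighbours[g])
                if neighbours[g] else score[g])
            for g in prior
        }
    return score

prior = {"g1": 0.9, "g2": 0.1, "g3": 0.0, "g4": 0.8}   # profiling classifier output
edges = [("g1", "g3"), ("g4", "g3")]                   # functional linkage graph
score = combine(prior, edges)
print(round(score["g3"], 2))   # → 0.28, pulled up by high-scoring neighbours
```

    The isolated gene g2 keeps its prior, while g3, linked to two high-scoring genes, is promoted, which is the synergy the abstract describes.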

  6. Computational analysis of microRNA function in heart development.

    PubMed

    Liu, Ganqiang; Ding, Min; Chen, Jiajia; Huang, Jinyan; Wang, Haiyun; Jing, Qing; Shen, Bairong

    2010-09-01

    Emerging evidence suggests that specific spatio-temporal microRNA (miRNA) expression is required for heart development. In recent years, hundreds of miRNAs have been discovered. In contrast, functional annotations are available for only a very small fraction of these regulatory molecules. To provide a global perspective for biologists who study the relationship between differentially expressed miRNAs and heart development, we employed computational analysis to uncover the specific cellular processes and biological pathways targeted by miRNAs in mouse heart development. Here, we utilized Gene Ontology (GO) categories, KEGG Pathway, and GeneGo Pathway Maps as a gene functional annotation system for miRNA target enrichment analysis. The target genes of miRNAs were found to be enriched in functional categories and pathway maps in which miRNAs could play important roles during heart development. Meanwhile, we developed miRHrt (http://sysbio.suda.edu.cn/mirhrt/), a database aiming to provide a comprehensive resource on miRNA function in regulating heart development. These computational results effectively illustrate the correlation of differentially expressed miRNAs with cellular functions and heart development. We hope that the identified novel heart-development-associated pathways and the database presented here will facilitate further understanding of the roles and mechanisms of miRNAs in heart development.
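
    Target enrichment analyses of this kind typically rest on a hypergeometric tail test; a self-contained sketch with invented numbers (not the paper's data):

```python
# Hypergeometric enrichment: given N annotated genes, K in a GO category, and
# n predicted miRNA targets of which k fall in the category, the upper tail
# gives the probability of an overlap at least that large by chance.

from math import comb

def hypergeom_tail(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n)."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

p = hypergeom_tail(N=10000, K=200, n=50, k=8)
print(p < 0.001)   # → True: 8/50 targets in a 2% category is strong enrichment
```

    In practice the p-values are corrected for testing many categories (e.g. Benjamini-Hochberg), which this sketch omits.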

  7. Legacy effects in material flux: structural catchment changes predate long-term studies

    Treesearch

    Daniel Bain; Mark B. Green; John L. Campbell; John F. Chamblee; Sayo Chaoka; Jennifer M. Fraterrigo; Sujay S. Kaushal; Sujay S. Kaushal; Sherry L. Martin; Thomas E. Jordan; Anthony J. Parolari; William V. Sobczak; Donald E. Weller; Wilfred M. Wolheim; Emery R. Boose; Jonathan M. Duncan; Gretchen M. Gettel; Brian R. Hall; Praveen Kumar; Jonathan R. Thompson; James M. Vose; Emily M. Elliott; David S. Leigh

    2012-01-01

    Legacy effects of past land use and disturbance are increasingly recognized, yet consistent definitions of and criteria for defining them do not exist. To address this gap in biological- and ecosystem-assessment frameworks, we propose a general metric for evaluating potential legacy effects, which are computed by normalizing altered system function persistence with...

  8. Development and use of the Cytoscape app GFD-Net for measuring semantic dissimilarity of gene networks

    PubMed Central

    Diaz-Montana, Juan J.; Diaz-Diaz, Norberto

    2014-01-01

    Gene networks are one of the main computational models used to study the interactions between different elements during biological processes, and are widely used to represent gene–gene or protein–protein interaction complexes. We present GFD-Net, a Cytoscape app for visualizing and analyzing the functional dissimilarity of gene networks. PMID:25400907
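
    One simple way to quantify functional dissimilarity between annotated genes (an illustrative measure only; GFD-Net's own GO-based measure is more elaborate) is one minus the Jaccard overlap of their GO-term sets, averaged over the network's edges:

```python
# Edge-wise Jaccard dissimilarity over GO annotations. Gene names, terms and
# the network below are invented for illustration.

def dissimilarity(go_a, go_b):
    """1 - |A ∩ B| / |A ∪ B| over two GO-term sets."""
    a, b = set(go_a), set(go_b)
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def network_dissimilarity(annotations, edges):
    """Mean edge dissimilarity over a gene network."""
    vals = [dissimilarity(annotations[u], annotations[v]) for u, v in edges]
    return sum(vals) / len(vals)

ann = {"A": {"GO:1", "GO:2"}, "B": {"GO:1", "GO:2"}, "C": {"GO:3"}}
print(network_dissimilarity(ann, [("A", "B"), ("B", "C")]))   # → 0.5
```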

  9. Computational Approaches to Phenotyping

    PubMed Central

    Lussier, Yves A.; Liu, Yang

    2007-01-01

    The recent completion of the Human Genome Project has made possible a high-throughput “systems approach” for accelerating the elucidation of molecular underpinnings of human diseases, and subsequent derivation of molecular-based strategies to more effectively prevent, diagnose, and treat these diseases. Although altered phenotypes are among the most reliable manifestations of altered gene functions, research using systematic analysis of phenotype relationships to study human biology is still in its infancy. This article focuses on the emerging field of high-throughput phenotyping (HTP) phenomics research, which aims to capitalize on novel high-throughput computation and informatics technology developments to derive genomewide molecular networks of genotype–phenotype associations, or “phenomic associations.” The HTP phenomics research field faces the challenge of technological research and development to generate novel tools in computation and informatics that will allow researchers to amass, access, integrate, organize, and manage phenotypic databases across species and enable genomewide analysis to associate phenotypic information with genomic data at different scales of biology. Key state-of-the-art technological advancements critical for HTP phenomics research are covered in this review. In particular, we highlight the power of computational approaches to conduct large-scale phenomics studies. PMID:17202287

  10. A Method for Growing Bio-memristors from Slime Mold.

    PubMed

    Miranda, Eduardo Reck; Braund, Edward

    2017-11-02

    Our research is aimed at gaining a better understanding of the electronic properties of organisms in order to engineer novel bioelectronic systems and computing architectures based on biology. This paper focuses on harnessing the unicellular slime mold Physarum polycephalum to develop bio-memristors (biological memristors) and bio-computing devices. The memristor is a resistor that possesses memory. It is the fourth fundamental passive circuit element (the other three are the resistor, the capacitor, and the inductor), and it is paving the way for the design of new kinds of computing systems; e.g., computers that might relinquish the distinction between storage and a central processing unit. When driven with an AC voltage, the current vs. voltage characteristic of a memristor is a pinched hysteresis loop. It has been shown that P. polycephalum produces pinched hysteresis loops under AC voltages and displays adaptive behavior comparable with the functioning of a memristor. This paper presents the method that we developed for implementing bio-memristors with P. polycephalum and introduces a receptacle for culturing the organism that facilitates its deployment as an electronic circuit component. Our method has proven to decrease growth time, increase component lifespan, and standardize electrical observations.
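
    The pinched hysteresis signature can be reproduced with an idealized HP-style memristor model (generic parameters, unrelated to the organism's measured values): resistance depends on a charge-driven state variable, so the I-V loop is forced through the origin.

```python
# Idealized memristor under a sine drive. All constants are illustrative.

import math

def drive(amplitude=1.0, freq=1.0, r_on=100.0, r_off=16000.0,
          k=1e5, steps=20000, t_end=2.0):
    dt = t_end / steps
    w = 0.1                        # internal state in [0, 1]
    vs, cs = [], []
    for s in range(steps):
        v = amplitude * math.sin(2 * math.pi * freq * s * dt)
        r = r_on * w + r_off * (1.0 - w)   # resistance interpolates with state
        i = v / r
        w = min(1.0, max(0.0, w + dt * k * i))   # state drifts with charge
        vs.append(v)
        cs.append(i)
    return vs, cs

vs, cs = drive()
# the loop is "pinched": near-zero voltage implies near-zero current
print(max(abs(c) for v, c in zip(vs, cs) if abs(v) < 1e-3))
```

    Because current is always v divided by a bounded positive resistance, the current vanishes whenever the voltage does, while the drifting state makes the up- and down-sweep branches differ, which is exactly the memristive fingerprint the abstract describes.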

  11. Computational toxicology using the OpenTox application programming interface and Bioclipse

    PubMed Central

    2011-01-01

    Background Toxicity is a complex phenomenon involving the potential adverse effect on a range of biological functions. Predicting toxicity involves using a combination of experimental data (endpoints) and computational methods to generate a set of predictive models. Such models rely strongly on being able to integrate information from many sources. The required integration of biological and chemical information sources requires, however, a common language to express our knowledge ontologically, and interoperating services to build reliable predictive toxicology applications. Findings This article describes progress in extending the integrative bio- and cheminformatics platform Bioclipse to interoperate with OpenTox, a semantic web framework which supports open data exchange and toxicology model building. The Bioclipse workbench environment enables functionality from OpenTox web services and easy access to OpenTox resources for evaluating toxicity properties of query molecules. Relevant cases and interfaces based on ten neurotoxins are described to demonstrate the capabilities provided to the user. The integration takes advantage of semantic web technologies, thereby providing an open and simplifying communication standard. Additionally, the use of ontologies ensures proper interoperation and reliable integration of toxicity information from both experimental and computational sources. Conclusions A novel computational toxicity assessment platform was generated from integration of two open science platforms related to toxicology: Bioclipse, that combines a rich scriptable and graphical workbench environment for integration of diverse sets of information sources, and OpenTox, a platform for interoperable toxicology data and computational services. The combination provides improved reliability and operability for handling large data sets by the use of the Open Standards from the OpenTox Application Programming Interface. 
This enables simultaneous access to a variety of distributed predictive toxicology databases, and algorithm and model resources, taking advantage of the Bioclipse workbench handling the technical layers. PMID:22075173

  12. UC Merced Center for Computational Biology Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colvin, Michael; Watanabe, Masakatsu

    Final report for the UC Merced Center for Computational Biology. The Center for Computational Biology (CCB) was established to support multidisciplinary scientific research and academic programs in computational biology at the new University of California campus in Merced. In 2003, the growing gap between biology research and education was documented in a report from the National Academy of Sciences, Bio2010: Transforming Undergraduate Education for Future Research Biologists. We believed that a new type of undergraduate and graduate program in the biological sciences, one that emphasized biological concepts and treated biology as an information science, would have a dramatic impact in enabling the transformation of biology. UC Merced, the newest UC campus and the first new U.S. research university of the 21st century, was ideally suited to adopt an alternate strategy: to create new Biological Sciences majors and a graduate group that incorporated the strong computational and mathematical vision articulated in the Bio2010 report. CCB aimed to leverage this strong commitment at UC Merced to develop a new educational program based on the principle of biology as a quantitative, model-driven science. We also expected that the center would enable the dissemination of computational biology course materials to other universities and feeder institutions, and foster research projects that exemplify a mathematical and computation-based approach to the life sciences. As this report describes, the CCB has been successful in achieving these goals, and multidisciplinary computational biology is now an integral part of UC Merced undergraduate, graduate and research programs in the life sciences. The CCB began in fall 2004 with the aid of an award from the U.S. Department of Energy (DOE), under its Genomes to Life program of support for the development of research and educational infrastructure in the modern biological sciences. 
This report to DOE describes the research and academic programs made possible by the CCB from its inception until August 2010, the end of the final extension. Although DOE support for the center ended in August 2010, the CCB will continue to exist and support its original objectives. The research and academic programs fostered by the CCB have led to additional extramural funding from other agencies, and we anticipate that CCB will continue to support quantitative and computational biology programs at UC Merced for many years to come. Since its inception in fall 2004, CCB research projects have maintained multi-institutional collaborations with Lawrence Livermore National Laboratory (LLNL) and the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, as well as with individual collaborators at other sites. CCB-affiliated faculty cover a broad range of computational and mathematical research, including molecular modeling, cell biology, applied math, evolutionary biology and bioinformatics. The CCB sponsored the first distinguished speaker series at UC Merced, which had an important role in spreading the word about the computational biology emphasis at this new campus. One of CCB's original goals was to help train a new generation of biologists who bridge the gap between the computational and life sciences. To achieve this goal, by summer 2006 a summer undergraduate internship program had been established under the CCB to train biological sciences researchers in mathematically and computationally intensive methods. By the end of summer 2010, 44 undergraduate students had gone through this program. Of those participants, 11 students have been admitted to graduate schools and 10 more are interested in pursuing graduate studies in the sciences. 
The center is also continuing to facilitate the development and dissemination of undergraduate and graduate course materials based on the latest research in computational biology.

  13. Hierarchical Ensemble Methods for Protein Function Prediction

    PubMed Central

    2014-01-01

    Protein function prediction is a complex multiclass multilabel classification problem, characterized by multiple issues such as the incompleteness of the available annotations, the integration of multiple sources of high dimensional biomolecular data, the unbalance of several functional classes, and the difficulty of univocally determining negative examples. Moreover, the hierarchical relationships between functional classes that characterize both the Gene Ontology and FunCat taxonomies motivate the development of hierarchy-aware prediction methods that showed significantly better performances than hierarchical-unaware “flat” prediction methods. In this paper, we provide a comprehensive review of hierarchical methods for protein function prediction based on ensembles of learning machines. According to this general approach, a separate learning machine is trained to learn a specific functional term and then the resulting predictions are assembled in a “consensus” ensemble decision, taking into account the hierarchical relationships between classes. The main hierarchical ensemble methods proposed in the literature are discussed in the context of existing computational methods for protein function prediction, highlighting their characteristics, advantages, and limitations. Open problems of this exciting research area of computational biology are finally considered, outlining novel perspectives for future research. PMID:25937954
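
    The consensus step can be illustrated with a minimal true-path-rule sketch (hypothetical terms and scores; real hierarchical ensembles use richer reconciliation schemes): each term's score is capped top-down by its parent's, so predictions respect the hierarchy.

```python
# Top-down true-path reconciliation over a toy term hierarchy.
# Term names, scores and parent links are invented.

def true_path(scores, parent):
    """Cap each term's score by its (recursively resolved) parent's score."""
    fixed = {}
    def resolve(term):
        if term not in fixed:
            p = parent.get(term)
            cap = resolve(p) if p else 1.0
            fixed[term] = min(scores[term], cap)
        return fixed[term]
    for t in scores:
        resolve(t)
    return fixed

raw = {"metabolism": 0.4, "glycolysis": 0.9}   # child outscores its parent
parent = {"glycolysis": "metabolism"}
print(true_path(raw, parent))   # → {'metabolism': 0.4, 'glycolysis': 0.4}
```

    Variants propagate evidence bottom-up instead (raising parents toward high-scoring children); both directions are discussed in the hierarchical-ensemble literature.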

  14. Evaluating a variety of text-mined features for automatic protein function prediction with GOstruct.

    PubMed

    Funk, Christopher S; Kahanda, Indika; Ben-Hur, Asa; Verspoor, Karin M

    2015-01-01

    Most computational methods that predict protein function do not take advantage of the large amount of information contained in the biomedical literature. In this work we evaluate both ontology term co-mention and bag-of-words features mined from the biomedical literature and analyze their impact in the context of a structured output support vector machine model, GOstruct. We find that even simple literature based features are useful for predicting human protein function (F-max: Molecular Function =0.408, Biological Process =0.461, Cellular Component =0.608). One advantage of using literature features is their ability to offer easy verification of automated predictions. We find through manual inspection of misclassifications that some false positive predictions could be biologically valid predictions based upon support extracted from the literature. Additionally, we present a "medium-throughput" pipeline that was used to annotate a large subset of co-mentions; we suggest that this strategy could help to speed up the rate at which proteins are curated.
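
    A toy version of the co-mention features evaluated here (invented sentences; the paper's pipeline is far richer): count, per ontology term label, how often it appears in the same sentence as the protein name.

```python
# Sentence-level co-mention counting as a crude literature feature extractor.

from collections import Counter

def comention_features(sentences, protein, term_labels):
    feats = Counter()
    for s in sentences:
        low = s.lower()
        if protein.lower() in low:
            for label in term_labels:
                if label.lower() in low:
                    feats[label] += 1
    return feats

docs = ["TP53 is central to apoptosis regulation.",
        "Loss of TP53 disrupts the cell cycle.",
        "Apoptosis proceeds without TP53 in some lineages."]
feats = comention_features(docs, "TP53", ["apoptosis", "cell cycle"])
print(feats)   # → Counter({'apoptosis': 2, 'cell cycle': 1})
```

    Such counts double as human-readable evidence: a curator can open the co-mentioning sentences to verify an automated prediction, the advantage the abstract highlights.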

  15. A computer lab exploring evolutionary aspects of chromatin structure and dynamics for an undergraduate chromatin course*.

    PubMed

    Eirín-López, José M

    2013-01-01

    The study of chromatin constitutes one of the most active research fields in life sciences, being subject to constant revisions that continuously redefine the state of the art in its knowledge. As every other rapidly changing field, chromatin biology requires clear and straightforward educational strategies able to efficiently translate such a vast body of knowledge to the classroom. With this aim, the present work describes a multidisciplinary computer lab designed to introduce undergraduate students to the dynamic nature of chromatin, within the context of the one semester course "Chromatin: Structure, Function and Evolution." This exercise is organized in three parts including (a) molecular evolutionary biology of histone families (using the H1 family as example), (b) histone structure and variation across different animal groups, and (c) effect of histone diversity on nucleosome structure and chromatin dynamics. By using freely available bioinformatic tools that can be run on common computers, the concept of chromatin dynamics is interactively illustrated from a comparative/evolutionary perspective. At the end of this computer lab, students are able to translate the bioinformatic information into a biochemical context in which the relevance of histone primary structure on chromatin dynamics is exposed. During the last 8 years this exercise has proven to be a powerful approach for teaching chromatin structure and dynamics, allowing students a higher degree of independence during the processes of learning and self-assessment. Copyright © 2013 International Union of Biochemistry and Molecular Biology, Inc.

  16. Learning Quantitative Sequence-Function Relationships from Massively Parallel Experiments

    NASA Astrophysics Data System (ADS)

    Atwal, Gurinder S.; Kinney, Justin B.

    2016-03-01

    A fundamental aspect of biological information processing is the ubiquity of sequence-function relationships—functions that map the sequence of DNA, RNA, or protein to a biochemically relevant activity. Most sequence-function relationships in biology are quantitative, but only recently have experimental techniques for effectively measuring these relationships been developed. The advent of such "massively parallel" experiments presents an exciting opportunity for the concepts and methods of statistical physics to inform the study of biological systems. After reviewing these recent experimental advances, we focus on the problem of how to infer parametric models of sequence-function relationships from the data produced by these experiments. Specifically, we retrace and extend recent theoretical work showing that inference based on mutual information, not the standard likelihood-based approach, is often necessary for accurately learning the parameters of these models. Closely connected with this result is the emergence of "diffeomorphic modes"—directions in parameter space that are far less constrained by data than likelihood-based inference would suggest. Analogous to Goldstone modes in physics, diffeomorphic modes arise from an arbitrarily broken symmetry of the inference problem. An analytically tractable model of a massively parallel experiment is then described, providing an explicit demonstration of these fundamental aspects of statistical inference. This paper concludes with an outlook on the theoretical and computational challenges currently facing studies of quantitative sequence-function relationships.
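
    The central quantity in the mutual-information-based inference discussed above can be computed directly from empirical counts; a small synthetic illustration:

```python
# Plug-in estimate of I(X;Y) between a sequence feature and an activity label.
# The "deterministic" dataset below is synthetic.

from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(X;Y) = sum p(x,y) log2 [ p(x,y) / (p(x) p(y)) ] from samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# a base that fully determines a binary activity carries exactly 1 bit
deterministic = [("A", "high"), ("T", "low")] * 50
print(mutual_information(deterministic))   # → 1.0
```

    Unlike likelihood, this quantity is invariant under invertible transformations of the model's output, which is what gives rise to the "diffeomorphic modes" discussed in the paper.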

  17. Neuromorphic neural interfaces: from neurophysiological inspiration to biohybrid coupling with nervous systems

    NASA Astrophysics Data System (ADS)

    Broccard, Frédéric D.; Joshi, Siddharth; Wang, Jun; Cauwenberghs, Gert

    2017-08-01

    Objective. Computation in nervous systems operates with different computational primitives, and on different hardware, than traditional digital computation, and is thus subject to different constraints from its digital counterpart regarding the use of physical resources such as time, space and energy. In an effort to better understand neural computation on a physical medium with similar spatiotemporal and energetic constraints, the field of neuromorphic engineering aims to design and implement electronic systems that emulate in very large-scale integration (VLSI) hardware the organization and functions of neural systems at multiple levels of biological organization, from individual neurons up to large circuits and networks. Mixed analog/digital neuromorphic VLSI systems are compact, consume little power and operate in real time independently of the size and complexity of the model. Approach. This article highlights the current efforts to interface neuromorphic systems with neural systems at multiple levels of biological organization, from the synaptic to the system level, and discusses the prospects for future biohybrid systems with neuromorphic circuits of greater complexity. Main results. Single silicon neurons have been interfaced successfully with invertebrate and vertebrate neural networks. This approach allowed the investigation of neural properties that are inaccessible with traditional techniques while providing a realistic biological context not achievable with traditional numerical modeling methods. At the network level, populations of neurons are envisioned to communicate bidirectionally with neuromorphic processors of hundreds or thousands of silicon neurons. Recent work on brain-machine interfaces suggests that this is feasible with current neuromorphic technology. Significance. Biohybrid interfaces between biological neurons and VLSI neuromorphic systems of varying complexity have started to emerge in the literature. 
Primarily intended as a computational tool for investigating fundamental questions related to neural dynamics, the sophistication of current neuromorphic systems now allows direct interfaces with large neuronal networks and circuits, resulting in potentially interesting clinical applications for neuroengineering systems, neuroprosthetics and neurorehabilitation.
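
    As a flavour of the dynamics such circuits emulate, here is a generic leaky integrate-and-fire neuron (a textbook model with arbitrary parameters, not any specific silicon implementation):

```python
# Leaky integrate-and-fire neuron with reset; counts spikes under constant drive.

def lif_spikes(i_in, v_th=1.0, tau=20.0, dt=1.0, steps=200):
    """Euler-integrate dV/dt = (-V + i_in) / tau, resetting V at threshold."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (-v + i_in) / tau
        if v >= v_th:
            v = 0.0
            spikes += 1
    return spikes

print(lif_spikes(0.8))                       # → 0: subthreshold drive never fires
print(lif_spikes(2.0) > lif_spikes(1.5))     # → True: rate grows with input
```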

  18. A dictionary based informational genome analysis

    PubMed Central

    2012-01-01

    Background In the post-genomic era several methods of computational genomics are emerging to understand how the whole information is structured within genomes. The literature of the last five years accounts for several alignment-free methods, which have arisen as alternative metrics for the dissimilarity of biological sequences. Among others, recent approaches are based on empirical frequencies of DNA k-mers in whole genomes. Results Any set of words (factors) occurring in a genome provides a genomic dictionary. About sixty genomes were analyzed by means of informational indexes based on genomic dictionaries, where a systemic view replaces local sequence analysis. A software prototype applying the methodology outlined here carried out computations on genomic data. We computed informational indexes and built genomic dictionaries of different sizes, along with frequency distributions. The software performed three main tasks: computation of informational indexes, storage of these in a database, and index analysis and visualization. The validation was done by investigating genomes of various organisms. A systematic analysis of genomic repeats of several lengths, which is of broad interest in biology (for example, to detect over-represented functional sequences such as promoters), was discussed, and suggested a method to define synthetic genetic networks. Conclusions We introduced a methodology based on dictionaries, and an efficient motif-finding software application for comparative genomics. This approach could be extended along many investigation lines, namely exported to other contexts of computational genomics, as a basis for discrimination of genomic pathologies. PMID:22985068
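
    The dictionary construction and one informational index are easy to sketch (toy sequence; the paper's indexes are more varied): collect all k-mers of a genome into a frequency dictionary and compute their empirical entropy.

```python
# k-mer dictionary plus a simple informational index (entropy in bits).

from collections import Counter
from math import log2

def kmer_dictionary(seq, k):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def kmer_entropy(seq, k):
    counts = kmer_dictionary(seq, k)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

genome = "ACGT" * 25                      # maximally regular toy "genome"
print(len(kmer_dictionary(genome, 3)))    # → 4: only four distinct 3-mers occur
print(round(kmer_entropy(genome, 3), 2))  # → 2.0 bits, far below the 6-bit max
```

    A random sequence would approach log2(4^3) = 6 bits for k = 3, so the gap between observed and maximal entropy serves as a repeat-richness index of the kind the abstract exploits.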

  19. A computational framework to empower probabilistic protein design

    PubMed Central

    Fromer, Menachem; Yanover, Chen

    2008-01-01

Motivation: The task of engineering a protein to perform a target biological function is known as protein design. A commonly used paradigm casts this functional design problem as a structural one, assuming a fixed backbone. In probabilistic protein design, positional amino acid probabilities are used to create a random library of sequences to be simultaneously screened for biological activity. Clearly, certain choices of probability distributions will be more successful in yielding functional sequences. However, since the number of sequences is exponential in protein length, computational optimization of the distribution is difficult. Results: In this paper, we develop a computational framework for probabilistic protein design following the structural paradigm. We formulate the distribution of sequences for a structure using the Boltzmann distribution over their free energies. The corresponding probabilistic graphical model is constructed, and we apply belief propagation (BP) to calculate marginal amino acid probabilities. We test this method on a large structural dataset and demonstrate the superiority of BP over previous methods. Nevertheless, since the results obtained by BP are approximate, we thoroughly assess the paradigm using high-quality experimental data. We demonstrate that, for small-scale sub-problems, BP attains results identical to those produced by exact inference on the paradigmatic model. However, quantitative analysis shows that the predicted distributions differ significantly from the experimental data. These findings, along with the excellent performance we observed using BP on the smaller problems, suggest potential shortcomings of the paradigm. We conclude with a discussion of how it may be improved in the future. Contact: fromer@cs.huji.ac.il PMID:18586717
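For intuition, the Boltzmann formulation can be made concrete on a toy design problem small enough for exact enumeration, which is what BP approximates when the sequence space is too large to enumerate; the two-letter alphabet and the pairwise energy below are hypothetical:

```python
from itertools import product
from math import exp

def positional_marginals(length, alphabet, energy, beta=1.0):
    """Exact Boltzmann marginals P(letter a at position i), computed by
    brute-force summation over all sequences; BP estimates these marginals
    when enumeration is impossible."""
    Z = 0.0
    marg = [{a: 0.0 for a in alphabet} for _ in range(length)]
    for seq in product(alphabet, repeat=length):
        w = exp(-beta * energy(seq))   # Boltzmann weight of this sequence
        Z += w
        for i, a in enumerate(seq):
            marg[i][a] += w
    return [{a: p / Z for a, p in m.items()} for m in marg]

# hypothetical pairwise energy: neighbouring identical letters are favoured
E = lambda s: sum(0.0 if s[i] == s[i + 1] else 1.0 for i in range(len(s) - 1))
m = positional_marginals(3, "HP", E)
```

By the H/P symmetry of this toy energy, every positional marginal comes out at 0.5, which is a quick sanity check on the enumeration.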

  20. Protonic transport through solitons in hydrogen-bonded systems

    NASA Astrophysics Data System (ADS)

    Kavitha, L.; Jayanthi, S.; Muniyappan, A.; Gopi, D.

    2011-09-01

    We offer an alternative route for investigating soliton solutions in hydrogen-bonded (HB) chains. We invoke the modified extended tangent hyperbolic function method coupled with symbolic computation to solve the governing equation of motion for proton dynamics. We investigate the dynamics of proton transfer in HB chains through bell-shaped soliton excitations, which trigger the bio-energy transport in most biological systems. This solitonic mechanism of proton transfer could play functional roles in muscular contraction, enzymatic activity and oxidative phosphorylation.

  1. Synthetic analog and digital circuits for cellular computation and memory.

    PubMed

    Purcell, Oliver; Lu, Timothy K

    2014-10-01

Biological computation is a major area of focus in synthetic biology because it has the potential to enable a wide range of applications. Synthetic biologists have applied engineering concepts to biological systems in order to construct progressively more complex gene circuits capable of processing information in living cells. Here, we review the current state of computational genetic circuits and describe artificial gene circuits that perform digital and analog computation. We then discuss recent progress in designing gene networks that exhibit memory, and how memory and computation have been integrated to yield more complex systems that can both process and record information. Finally, we suggest new directions for engineering biological circuits capable of computation. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures.

    PubMed

    Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M

    2016-01-01

The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us to discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed by different researchers in order to foresee the three-dimensional arrangement of the atoms of proteins from their sequences. However, the computational complexity of this problem necessitates the search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame. In this review we present past and current trends regarding protein folding simulations from both perspectives: hardware and software. Of particular interest to us are both the use of inexact solutions to this computationally hard problem and the hardware platforms that have been used for running this kind of Soft Computing technique.

  3. MOLNs: A CLOUD PLATFORM FOR INTERACTIVE, REPRODUCIBLE, AND SCALABLE SPATIAL STOCHASTIC COMPUTATIONAL EXPERIMENTS IN SYSTEMS BIOLOGY USING PyURDME.

    PubMed

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2016-01-01

Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools, a complex software stack, and large and scalable compute and data analysis resources, due to the large computational cost of Monte Carlo workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
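The Monte Carlo workflows in question build on stochastic simulation algorithms; a minimal well-mixed Gillespie SSA conveys the core loop, though this sketch is not PyURDME's API and omits the spatial (reaction-diffusion) aspect that PyURDME handles:

```python
import random

def gillespie(propensities, stoich, x0, t_end, seed=0):
    """Minimal well-mixed Gillespie SSA: draw the waiting time from the total
    propensity, pick one reaction proportionally to its propensity, apply its
    stoichiometry, and repeat until t_end."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    traj = [(0.0, tuple(x))]
    while t < t_end:
        a = [f(x) for f in propensities]
        a0 = sum(a)
        if a0 == 0.0:                    # no reaction can fire
            break
        t += rng.expovariate(a0)         # exponential waiting time
        if t > t_end:
            break
        u, k = rng.random() * a0, 0
        while u > a[k]:                  # categorical pick by propensity
            u -= a[k]
            k += 1
        for i, d in enumerate(stoich[k]):
            x[i] += d
        traj.append((t, tuple(x)))
    return traj

# hypothetical birth-death process: 0 -> X at rate 5.0, X -> 0 at rate 0.1*X
traj = gillespie([lambda x: 5.0, lambda x: 0.1 * x[0]], [(1,), (-1,)], [0], 50.0)
```

Repeating such runs many times is what makes the workflow a Monte Carlo computation, and what motivates the distributed execution MOLNs provides.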

  4. Multi-omics and metabolic modelling pipelines: challenges and tools for systems microbiology.

    PubMed

    Fondi, Marco; Liò, Pietro

    2015-02-01

Integrated omics approaches are quickly spreading across microbiology research labs, leading to (i) the possibility of detecting previously hidden features of microbial cells, like multi-scale spatial organization, and (ii) the tracing of molecular components across multiple cellular functional states. This promises to reduce the knowledge gap between genotype and phenotype and poses new challenges for computational microbiologists. We underline how the capability to unravel the complexity of microbial life will strongly depend on the integration of the huge and diverse amount of information that can be derived today from -omics experiments. In this work, we present opportunities and challenges of multi-omics data integration in current systems biology pipelines. Here we discuss which layers of biological information are important for biotechnological and clinical purposes, with a special focus on bacterial metabolism and modelling procedures. A general review of the most recent computational tools for performing large-scale data integration is also presented, together with a possible framework to guide the design of systems biology experiments by microbiologists. Copyright © 2015. Published by Elsevier GmbH.

  5. Applications of artificial neural networks in medical science.

    PubMed

    Patel, Jigneshkumar L; Goyal, Ramesh K

    2007-09-01

Computer technology has advanced tremendously, and interest has increased in the potential use of 'Artificial Intelligence (AI)' in medicine and biological research. One of the most interesting and extensively studied branches of AI is 'Artificial Neural Networks (ANNs)'. Basically, ANNs are mathematical algorithms implemented on computers. ANNs learn from standard data and capture the knowledge contained in the data. Trained ANNs approach the functionality of small biological neural clusters in a very fundamental manner. They are digitized models of the biological brain and can detect complex nonlinear relationships between dependent and independent variables in data where the human brain may fail to do so. Nowadays, ANNs are widely used for medical applications in various disciplines of medicine, especially in cardiology. ANNs have been extensively applied in diagnosis, electronic signal analysis, medical image analysis and radiology. ANNs have been used by many authors for modeling in medicine and clinical research. Applications of ANNs are increasing in pharmacoepidemiology and medical data mining. In this paper, the authors summarize various applications of ANNs in medical science.

  6. Computational Medicine: Translating Models to Clinical Care

    PubMed Central

    Winslow, Raimond L.; Trayanova, Natalia; Geman, Donald; Miller, Michael I.

    2013-01-01

    Because of the inherent complexity of coupled nonlinear biological systems, the development of computational models is necessary for achieving a quantitative understanding of their structure and function in health and disease. Statistical learning is applied to high-dimensional biomolecular data to create models that describe relationships between molecules and networks. Multiscale modeling links networks to cells, organs, and organ systems. Computational approaches are used to characterize anatomic shape and its variations in health and disease. In each case, the purposes of modeling are to capture all that we know about disease and to develop improved therapies tailored to the needs of individuals. We discuss advances in computational medicine, with specific examples in the fields of cancer, diabetes, cardiology, and neurology. Advances in translating these computational methods to the clinic are described, as well as challenges in applying models for improving patient health. PMID:23115356

  7. Integrative Sparse K-Means With Overlapping Group Lasso in Genomic Applications for Disease Subtype Discovery

    PubMed Central

    Huo, Zhiguang; Tseng, George

    2017-01-01

Cancer subtype discovery is the first step toward delivering personalized medicine to cancer patients. With the accumulation of massive multi-level omics datasets and established biological knowledge databases, omics data integration with the incorporation of rich existing biological knowledge is essential for deciphering the biological mechanisms behind complex diseases. In this manuscript, we propose an integrative sparse K-means (is-K means) approach to discover disease subtypes with the guidance of prior biological knowledge via sparse overlapping group lasso. An algorithm using the alternating direction method of multipliers (ADMM) is applied for fast optimization. Simulation and three real applications in breast cancer and leukemia are used to compare is-K means with existing methods and demonstrate its superior clustering accuracy, feature selection, functional annotation of detected molecular features and computing efficiency. PMID:28959370

  9. Hierarchy, determinism, and specificity in theories of development and evolution.

    PubMed

    Deichmann, Ute

    2017-10-16

    The concepts of hierarchical organization, genetic determinism and biological specificity (for example of species, biologically relevant macromolecules, or genes) have played a crucial role in biology as a modern experimental science since its beginnings in the nineteenth century. The idea of genetic information (specificity) and genetic determination was at the basis of molecular biology that developed in the 1940s with macromolecules, viruses and prokaryotes as major objects of research often labelled "reductionist". However, the concepts have been marginalized or rejected in some of the research that in the late 1960s began to focus additionally on the molecularization of complex biological structures and functions using systems approaches. This paper challenges the view that 'molecular reductionism' has been successfully replaced by holism and a focus on the collective behaviour of cellular entities. It argues instead that there are more fertile replacements for molecular 'reductionism', in which genomics, embryology, biochemistry, and computer science intertwine and result in research that is as exact and causally predictive as earlier molecular biology.

  10. Role of temperature dependence of optical properties in laser irradiation of biological tissue

    NASA Astrophysics Data System (ADS)

    Rastegar, Sohi; Kim, Beop-Min; Jacques, Steven L.

    1992-08-01

Optical properties of biological tissue can change as a result of thermal denaturation due to temperature rise; a familiar example is the whitening observed in cooking egg white. Changes in optical properties with temperature have been reported in the literature. Temperature rise due to laser irradiation is a function of the optical properties of the tissue, which are themselves a function of its temperature. This creates a coupling between the light and temperature fields in biological tissue under laser irradiation. The effects of this coupling on the temperature response and light distribution may play an important role in dosimetry considerations for therapeutic as well as diagnostic applications of lasers in medicine. In a previous study this problem was addressed in one dimension, for short irradiation exposures, using certain simplifying assumptions. The purpose of this research was to develop a mathematical model for dynamic optical changes with thermal denaturation and a computer program for simulation of these effects in a multi-dimensional geometry.

  11. ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra

    PubMed Central

    2011-01-01

    Background Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. Results We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Conclusions Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. 
ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics. PMID:21774817
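For small models, attractor identification can also be done by brute-force state-space enumeration, which is useful for checking intuition even though it scales exponentially and is not ADAM's algebraic method; a minimal sketch with a hypothetical two-gene toggle switch:

```python
from itertools import product

def attractors(update, n):
    """Find all attractors of a synchronous Boolean network with n nodes
    by exhaustive state-space walk (2^n states; fine for small models)."""
    found, seen = [], set()
    for start in product((0, 1), repeat=n):
        path, state = {}, start
        while state not in path and state not in seen:
            path[state] = len(path)
            state = update(state)
        if state not in seen:              # walk closed on a new cycle
            cycle, s = [state], update(state)
            while s != state:
                cycle.append(s)
                s = update(s)
            found.append(tuple(cycle))
        seen.update(path)
        seen.add(state)
    return found

# hypothetical 2-gene toggle: each gene is the negation of the other
toggle = lambda s: (1 - s[1], 1 - s[0])
att = attractors(toggle, 2)
```

The toggle has two fixed points and one 2-cycle, which the enumeration recovers; ADAM instead solves an equivalent system of polynomial equations, avoiding the exponential walk.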

  12. Computing the Energy Cost of the Information Transmitted by Model Biological Neurons

    NASA Astrophysics Data System (ADS)

    Torrealdea, F. J.; Sarasola, C.; d'Anjou, A.; Moujahid, A.

    2009-08-01

We assign an energy function to a Hindmarsh-Rose model neuron and use it to compute values of average energy consumption during its signalling activity. We also compute values of the information entropy of an isolated neuron and of the mutual information between two electrically coupled neurons. We find that for the isolated neuron the chaotic signalling regime is the one with the largest ratio of information entropy to energy consumption. We also find that in the case of electrically coupled neurons there are values of the coupling strength at which the ratio of mutual information to energy consumption is maximal; that is, transmitting under those coupling conditions is energetically least expensive.

  13. Artificial Intelligence and brain.

    PubMed

    Shapshak, Paul

    2018-01-01

From the start, Kurt Gödel observed that computer and brain paradigms were considered on a par by researchers and that researchers had misunderstood his theorems. He maintained, with displeasure, that the brain transcends computers. In this brief article, we point out that Artificial Intelligence (AI) comprises a multitude of human-made methodologies, systems and languages, implemented with computer technology. These advances enhance development in the electron and quantum realms. In the biological realm, animal neurons function by likewise utilizing electron flow, and are products of evolution. Mirror neurons are an important paradigm in neuroscience research. Moreover, the paradigm shift proposed here - 'hall of mirror neurons' - is a potentially productive further research tactic. These concepts further expand AI and brain research.

  14. Multiphoton microscopy for the in-situ investigation of cellular processes and integrity in cryopreservation.

    PubMed

    Doerr, Daniel; Stark, Martin; Ehrhart, Friederike; Zimmermann, Heiko; Stracke, Frank

    2009-08-01

In this study we demonstrate a new noninvasive imaging method to monitor freezing processes in biological samples and to investigate life in the frozen state. It combines a laser scanning microscope with a computer-controlled cryostage. Near-infrared (NIR) femtosecond laser pulses evoke the fluorescence of endogenous fluorophores and fluorescent labels due to multiphoton absorption. The inherent optical nonlinearity of multiphoton absorption allows 3D fluorescence imaging for optical tomography of frozen biological material in situ. As an example of functional imaging we use fluorescence lifetime imaging (FLIM) to create images with chemical and physical contrast.

  15. Protein location prediction using atomic composition and global features of the amino acid sequence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cherian, Betsy Sheena, E-mail: betsy.skb@gmail.com; Nair, Achuthsankar S.

    2010-01-22

Subcellular location of a protein is constructive information for determining its function, screening for drug candidates, vaccine design, annotation of gene products and selecting relevant proteins for further studies. Computational prediction of subcellular localization deals with predicting the location of a protein from its amino acid sequence. For a computational localization prediction method to be more accurate, it should exploit all possible relevant biological features that contribute to subcellular localization. In this work, we extracted the biological features from the full-length protein sequence to incorporate more biological information. A new biological feature, the distribution of atomic composition, is effectively used together with multiple physicochemical properties, amino acid composition, three-part amino acid composition and sequence similarity for predicting the subcellular location of the protein. Support Vector Machines are designed for four modules and prediction is made by a weighted voting system. Our system makes predictions with accuracies of 100%, 82.47% and 88.81% for the self-consistency test, jackknife test and independent data test, respectively. Our results provide evidence that prediction based on biological features derived from the full-length amino acid sequence gives better accuracy than those derived from the N-terminus alone. Considering the features as a distribution within the entire sequence brings out the underlying property distribution in greater detail and enhances the prediction accuracy.
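Two of the simpler feature families mentioned, amino acid composition and a three-part (N-terminal/middle/C-terminal) composition that retains some positional distribution, might be computed as follows; the helper names and the example sequence are illustrative, not the authors' code:

```python
from collections import Counter

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids

def aa_composition(seq):
    """Fractional amino acid composition of a sequence (20 features)."""
    counts = Counter(seq)
    return [counts[a] / len(seq) for a in ALPHABET]

def three_part_composition(seq):
    """Concatenated compositions of the N-terminal, middle and C-terminal
    thirds (60 features), so some positional distribution is retained."""
    k = len(seq) // 3
    parts = [seq[:k], seq[k:2 * k], seq[2 * k:]]
    return [f for part in parts for f in aa_composition(part)]

v = three_part_composition("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")  # toy sequence
```

Each module's feature vector would then feed one SVM, with the final call made by the weighted vote the abstract describes.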

  16. Counting motifs in dynamic networks.

    PubMed

    Mukherjee, Kingshuk; Hasan, Md Mahmudul; Boucher, Christina; Kahveci, Tamer

    2018-04-11

A network motif is a sub-network that occurs frequently in a given network. Detection of such motifs is important since they uncover functions and local properties of the given biological network. Finding motifs is however a computationally challenging task, as it requires solving the costly subgraph isomorphism problem. Moreover, the topology of biological networks changes over time. These changing networks are called dynamic biological networks. As the network evolves, the frequency of each motif in the network also changes. Computing the frequency of a given motif from scratch as the network topology evolves is infeasible, particularly for large and fast-evolving networks. In this article, we design and develop a scalable method for counting the number of motifs in a dynamic biological network. Our method incrementally updates the frequency of each motif as the underlying network's topology evolves. Our experiments demonstrate that our method can update the frequency of each motif orders of magnitude faster than counting the motif embeddings every time the network changes. The more frequently the network evolves, the greater the margin by which our method outperforms existing static methods. We evaluated our method extensively using synthetic and real datasets, and show that it is highly accurate (≥96%) and can be scaled to large dense networks. The results on real data demonstrate the utility of our method in revealing interesting insights into the evolution of biological processes.
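The incremental idea can be illustrated with the simplest motif, the triangle: when an edge (u, v) arrives, only the common neighbours of u and v can close new triangles, so the global count is updated locally instead of being recomputed. A minimal sketch (class and method names are ours, not the paper's):

```python
from collections import defaultdict

class TriangleCounter:
    """Maintain the number of triangle motifs in an undirected graph,
    updating incrementally as edges arrive or leave."""
    def __init__(self):
        self.adj = defaultdict(set)
        self.triangles = 0

    def add_edge(self, u, v):
        if v in self.adj[u] or u == v:
            return
        # every common neighbour of u and v closes exactly one new triangle
        self.triangles += len(self.adj[u] & self.adj[v])
        self.adj[u].add(v)
        self.adj[v].add(u)

    def remove_edge(self, u, v):
        if v not in self.adj[u]:
            return
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        self.triangles -= len(self.adj[u] & self.adj[v])

tc = TriangleCounter()
for e in [(1, 2), (2, 3), (1, 3), (3, 4), (2, 4)]:
    tc.add_edge(*e)
```

Each update costs a neighbourhood intersection rather than a full subgraph-isomorphism search; generalizing this bookkeeping to larger motifs is where the real difficulty lies.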

  17. Theoretical computer science and the natural sciences

    NASA Astrophysics Data System (ADS)

    Marchal, Bruno

    2005-12-01

I present some fundamental theorems in computer science and illustrate their relevance in Biology and Physics. I do not assume prerequisites in mathematics or computer science beyond the set N of natural numbers, functions from N to N, the use of some notational conveniences to describe functions, and at some point, a minimal amount of linear algebra and logic. I start with Cantor's transcendental proof by diagonalization of the non-enumerability of the collection of functions from the natural numbers to the natural numbers. I explain why this proof is not entirely convincing and show how, by restricting the notion of function in terms of discrete well-defined processes, we are led to the non-algorithmic enumerability of the computable functions, but also, through Church's thesis, to the algorithmic enumerability of the partial computable functions. Such a notion of function constitutes, with respect to our purpose, a crucial generalization of that concept. This will make it easy to justify deep and astonishing (counter-intuitive) incompleteness results about computers and similar machines. The modified Cantor diagonalization will provide a theory of concrete self-reference, and I illustrate it by pointing toward an elementary theory of self-reproduction (in the amoeba's way) and cellular self-regeneration (in the flatworm Planaria's way). To make this easier, I introduce a very simple and powerful formal system known as the Schoenfinkel-Curry combinators. I will use the combinators to illustrate in a more concrete way the notions introduced above. The combinators, thanks to their low-level fine-grained design, will also make it possible to give a rough but hopefully illuminating description of the main lessons gained by the careful observation of nature, and to describe some new relations which should exist between computer science, the science of life and the science of inert matter, once some philosophical, if not theological, hypotheses are made in the cognitive sciences. 
In the last section, I come back to self-reference and I give an exposition of its modal logics. This is used to show that theoretical computer science makes those “philosophical hypotheses” in theoretical cognitive science experimentally and mathematically testable.
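The diagonal argument itself is short enough to state as executable code: given any enumeration of total functions from N to N, the diagonal function differs from the n-th function at argument n, so no such enumeration can be complete. A sketch, with a deliberately trivial (hypothetical) enumeration:

```python
def diagonal(enumeration):
    """Cantor's diagonal: given an enumeration of total functions N -> N,
    return a function that differs from the n-th function at argument n."""
    return lambda n: enumeration(n)(n) + 1

# a tiny, hypothetical enumeration: the n-th function adds n to its input
enum = lambda n: (lambda k: k + n)
g = diagonal(enum)
# g(n) = enum(n)(n) + 1 = 2n + 1, so g disagrees with enum(n) at n for all n
```

Applied to an enumeration of *partial* computable functions, the same construction no longer yields a contradiction, because the diagonal value may be undefined; this is exactly the escape hatch the abstract's distinction between computable and partial computable functions exploits.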

  18. Computer-Based Semantic Network in Molecular Biology: A Demonstration.

    ERIC Educational Resources Information Center

    Callman, Joshua L.; And Others

    This paper analyzes the hardware and software features that would be desirable in a computer-based semantic network system for representing biology knowledge. It then describes in detail a prototype network of molecular biology knowledge that has been developed using Filevision software and a Macintosh computer. The prototype contains about 100…

  19. The radical mechanism of biological methane synthesis by methyl-coenzyme M reductase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wongnate, T.; Sliwa, D.; Ginovska, B.

    2016-05-19

Methyl-coenzyme M reductase (MCR), the rate-limiting enzyme in methanogenesis and anaerobic methane oxidation, is responsible for the production of over one billion tons of methane per year. The mechanism of methane synthesis is unknown, with the two leading proposals involving either a methyl-nickel(III) (Mechanism I) or a methyl radical/Ni(II)-thiolate (Mechanism II) intermediate. When the reaction of the active Ni(I) enzyme with substrates was studied by transient kinetic, spectroscopic and computational methods, formation of an EPR-silent Ni(II)-thiolate intermediate was positively identified by magnetic circular dichroism spectroscopy. There was no evidence for an EPR-active methyl-Ni(III) species. Temperature-dependent transient kinetic studies revealed that the activation energy for the initial catalytic step closely matched the value computed by density functional theory for Mechanism II. Thus, our results demonstrate that biological methane synthesis occurs by generation of a methyl radical.

  20. Integrative Systems Models of Cardiac Excitation Contraction Coupling

    PubMed Central

    Greenstein, Joseph L.; Winslow, Raimond L.

    2010-01-01

    Excitation-contraction coupling in the cardiac myocyte is mediated by a number of highly integrated mechanisms of intracellular Ca2+ transport. The complexity and integrative nature of heart cell electrophysiology and Ca2+-cycling has led to an evolution of computational models that have played a crucial role in shaping our understanding of heart function. An important emerging theme in systems biology is that the detailed nature of local signaling events, such as those that occur in the cardiac dyad, have important consequences at higher biological scales. Multi-scale modeling techniques have revealed many mechanistic links between micro-scale events, such as Ca2+ binding to a channel protein, and macro-scale phenomena, such as excitation-contraction coupling gain. Here we review experimentally based multi-scale computational models of excitation-contraction coupling and the insights that have been gained through their application. PMID:21212390

  1. Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems.

    PubMed

    D'Onofrio, David J; Abel, David L; Johnson, Donald E

    2012-03-14

The fields of molecular biology and computer science have cooperated over recent years to create a synergy between the cybernetic and biosemiotic relationships found in cellular genomics and the information and language found in computational systems. Biological information frequently manifests its "meaning" through instruction or actual production of formal bio-function. Such information is called prescriptive information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition, suggesting that both prescribed data and prescribed algorithms are constituents of PI. This paper looks at this dichotomy as expressed in both the genetic code and the central dogma of protein synthesis. An example of a genetic algorithm is modeled after the ribosome, and an examination of the protein synthesis process is used to differentiate PI data from PI algorithms.

  2. RE-PLAN: An Extensible Software Architecture to Facilitate Disaster Response Planning

    PubMed Central

    O’Neill, Martin; Mikler, Armin R.; Indrakanti, Saratchandra; Tiwari, Chetan; Jimenez, Tamara

    2014-01-01

    Computational tools are needed to make data-driven disaster mitigation planning accessible to planners and policymakers without the need for programming or GIS expertise. To address this problem, we have created modules to facilitate quantitative analyses pertinent to a variety of different disaster scenarios. These modules, which comprise the REsponse PLan ANalyzer (RE-PLAN) framework, may be used to create tools for specific disaster scenarios that allow planners to harness large amounts of disparate data and execute computational models through a point-and-click interface. Bio-E, a user-friendly tool built using this framework, was designed to develop and analyze the feasibility of ad hoc clinics for treating populations following a biological emergency event. In this article, the design and implementation of the RE-PLAN framework are described, and the functionality of the modules used in the Bio-E biological emergency mitigation tool are demonstrated. PMID:25419503

  3. A combined computational and structural model of the full-length human prolactin receptor

    PubMed Central

    Bugge, Katrine; Papaleo, Elena; Haxholm, Gitte W.; Hopper, Jonathan T. S.; Robinson, Carol V.; Olsen, Johan G.; Lindorff-Larsen, Kresten; Kragelund, Birthe B.

    2016-01-01

    The prolactin receptor is an archetype member of the class I cytokine receptor family, comprising receptors with fundamental functions in biology as well as key drug targets. Structurally, each of these receptors represents an intriguing diversity, providing an exceptionally challenging target for structural biology. Here, we access the molecular architecture of the monomeric human prolactin receptor by combining experimental and computational efforts. We solve the NMR structure of its transmembrane domain in micelles and collect structural data on overlapping fragments of the receptor with small-angle X-ray scattering, native mass spectrometry and NMR spectroscopy. Along with previously published data, these are integrated by molecular modelling to generate a full receptor structure. The result provides the first full view of a class I cytokine receptor, exemplifying the architecture of more than 40 different receptor chains, and reveals that the extracellular domain is merely the tip of a molecular iceberg. PMID:27174498

  4. Cox-nnet: An artificial neural network method for prognosis prediction of high-throughput omics data.

    PubMed

    Ching, Travers; Zhu, Xun; Garmire, Lana X

    2018-04-01

    Artificial neural networks (ANN) are computing architectures with many interconnections of simple neural-inspired computing elements, and have been applied to biomedical fields such as imaging analysis and diagnosis. We have developed a new ANN framework called Cox-nnet to predict patient prognosis from high-throughput transcriptomics data. In 10 TCGA RNA-Seq data sets, Cox-nnet achieves the same or better predictive accuracy compared to other methods, including Cox-proportional hazards regression (with LASSO, ridge, and minimax concave penalty), Random Forests Survival and CoxBoost. Cox-nnet also reveals richer biological information, at both the pathway and gene levels. The outputs from the hidden layer node provide an alternative approach for survival-sensitive dimension reduction. In summary, we have developed a new method for accurate and efficient prognosis prediction on high-throughput data, with functional biological insights. The source code is freely available at https://github.com/lanagarmire/cox-nnet.
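
    The record above gives no formulas, but the core of such a model — replacing the linear predictor of Cox regression with a network output while keeping the Cox objective — can be sketched via the negative log partial likelihood below. This is an illustrative reimplementation, not Cox-nnet's code; the function name is mine, and tied event times are assumed absent (no Breslow/Efron handling).

```python
import math

def neg_log_partial_likelihood(scores, times, events):
    """Negative log partial likelihood of a Cox model, assuming no tied
    event times. `scores` are log-hazard predictions (for a neural model,
    the network's output node); events[i] is 1 if subject i's event was
    observed, 0 if censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    risk_sum = 0.0  # sum of exp(score) over subjects still at risk
    nll = 0.0
    for i in reversed(order):  # latest event time first
        risk_sum += math.exp(scores[i])
        if events[i]:  # censored subjects only enlarge the risk set
            nll -= scores[i] - math.log(risk_sum)
    return nll
```

    Training a survival network then amounts to minimizing this quantity over the network parameters that produce `scores`.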

  5. Tracking perturbations in Boolean networks with spectral methods

    NASA Astrophysics Data System (ADS)

    Kesseli, Juha; Rämö, Pauli; Yli-Harja, Olli

    2005-08-01

    In this paper we present a method for predicting the spread of perturbations in Boolean networks. The method is applicable to networks that have no regular topology. The prediction of perturbations can be performed easily by using a presented result which enables the efficient computation of the required iterative formulas. This result is based on abstract Fourier transform of the functions in the network. In this paper the method is applied to show the spread of perturbations in networks containing a distribution of functions found from biological data. The advances in the study of the spread of perturbations can directly be applied to enable ways of quantifying chaos in Boolean networks. Derrida plots over an arbitrary number of time steps can be computed and thus distributions of functions compared with each other with respect to the amount of order they create in random networks.
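
    As a companion to the analytical treatment described above, the one-step spread of a perturbation (a single point on a Derrida plot) can be estimated by brute-force simulation. The sketch below is not the paper's spectral method; the network size, connectivity, and function names are illustrative choices of mine.

```python
import random

def step(state, funcs, inputs):
    """One synchronous update of a Boolean network: funcs[i] is node i's
    truth table, indexed by the bits of the K regulators in inputs[i]."""
    return tuple(
        funcs[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
        for i in range(len(state))
    )

def one_step_spread(n=50, k=2, trials=200, seed=0):
    """Average Hamming distance one step after flipping a single node of a
    random state: an empirical point on a Derrida plot for a random network."""
    rng = random.Random(seed)
    inputs = [tuple(rng.randrange(n) for _ in range(k)) for _ in range(n)]
    funcs = [tuple(rng.randint(0, 1) for _ in range(2 ** k)) for _ in range(n)]
    total = 0
    for _ in range(trials):
        s = tuple(rng.randint(0, 1) for _ in range(n))
        flip = rng.randrange(n)
        t = tuple(b ^ (i == flip) for i, b in enumerate(s))
        total += sum(a != b for a, b in zip(step(s, funcs, inputs),
                                            step(t, funcs, inputs)))
    return total / trials
```

    The paper's contribution is precisely that such quantities, over an arbitrary number of time steps, can be computed efficiently from the functions' abstract Fourier transforms rather than by sampling.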

  6. A simplified computational memory model from information processing

    PubMed Central

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-01-01

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network built by abstracting memory function and simulating memory information processing. First, a meta-memory is defined to represent neurons or brain cortices based on biology and graph theory, and an intra-modular network is developed with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. We simulate the memory phenomena and the functions of memorization and strengthening with information processing algorithms. The theoretical analysis and simulation results show that the model accords with memory phenomena from an information processing view. PMID:27876847

  7. On finding bicliques in bipartite graphs: a novel algorithm and its application to the integration of diverse biological data types

    PubMed Central

    2014-01-01

    Background Integrating and analyzing heterogeneous genome-scale data is a huge algorithmic challenge for modern systems biology. Bipartite graphs can be useful for representing relationships across pairs of disparate data types, with the interpretation of these relationships accomplished through an enumeration of maximal bicliques. Most previously-known techniques are generally ill-suited to this foundational task, because they are relatively inefficient and without effective scaling. In this paper, a powerful new algorithm is described that produces all maximal bicliques in a bipartite graph. Unlike most previous approaches, the new method neither places undue restrictions on its input nor inflates the problem size. Efficiency is achieved through an innovative exploitation of bipartite graph structure, and through computational reductions that rapidly eliminate non-maximal candidates from the search space. An iterative selection of vertices for consideration based on non-decreasing common neighborhood sizes boosts efficiency and leads to more balanced recursion trees. Results The new technique is implemented and compared to previously published approaches from graph theory and data mining. Formal time and space bounds are derived. Experiments are performed on both random graphs and graphs constructed from functional genomics data. It is shown that the new method substantially outperforms the best previous alternatives. Conclusions The new method is streamlined, efficient, and particularly well-suited to the study of huge and diverse biological data. A robust implementation has been incorporated into GeneWeaver, an online tool for integrating and analyzing functional genomics experiments, available at http://geneweaver.org. The enormous increase in scalability it provides empowers users to study complex and previously unassailable gene-set associations between genes and their biological functions in a hierarchical fashion and on a genome-wide scale. 
This practical computational resource is adaptable to almost any applications environment in which bipartite graphs can be used to model relationships between pairs of heterogeneous entities. PMID:24731198
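
    The published algorithm is not reproduced in the abstract, but the object it enumerates can be illustrated with a deliberately naive sketch built on the same observation: every maximal biclique arises by closing a subset of one vertex side under common neighborhoods. This brute-force version is exponential in the left side, unlike the efficient method described above; all names are mine.

```python
from itertools import combinations

def maximal_bicliques(adj, left):
    """Enumerate maximal bicliques of a bipartite graph. `adj` maps each
    left vertex to the set of its right-side neighbors. For each subset S
    of left vertices, (closure of S, common neighborhood of S) is a
    maximal biclique, and all of them arise this way."""
    found = set()
    for r in range(1, len(left) + 1):
        for subset in combinations(left, r):
            right = set.intersection(*(adj[u] for u in subset))
            if not right:
                continue
            # Close the left side: every left vertex adjacent to all of `right`.
            full_left = frozenset(u for u in left if right <= adj[u])
            found.add((full_left, frozenset(right)))
    return found
```

    Deduplicating through the closure step is what makes each reported pair maximal on both sides; the paper achieves the same guarantee without enumerating all subsets.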

  8. Prospective assessment of CAD/CAM zirconia abutment and lithium disilicate crown restorations: 2.4 year results.

    PubMed

    Cooper, Lyndon F; Stanford, Clark; Feine, Jocelyne; McGuire, Michael

    2016-07-01

    Single-tooth implant restorations are commonly used to replace anterior maxillary teeth. The esthetic, functional, and biologic outcomes are, in part, a function of the abutment and crown. The purpose of this clinical study was to describe the implant, abutment, and crown survival and complication rates for CAD/CAM zirconia abutment and lithium disilicate crown restorations for single-tooth implants. As part of a broader prospective investigation that enrolled and treated 141 participants comparing tissue responses at the conical interface (CI; AstraTech OsseoSpeed), flat-to-flat interface (FI; NobelSpeedy), and platform-switch interface (PS; NanoTite Certain Prevail) of single-tooth implants, computer-aided design and computer-aided manufacturing (CAD/CAM) zirconia abutments (ATLANTIS Abutment) and cemented lithium disilicate (e.max) crowns were used in the restoration of all implants. After 2.4 years in function (3 years after implant placement), the implant, abutment, and crown of 110 participants were evaluated. Technical and biologic complications were recorded. Demographic results were tabulated as percentages with mean values and standard deviations. Abutment survival was calculated with the Kaplan-Meier method. After 2.4 years, no abutments or crowns had been lost. Abutment complications (screw loosening, screw fracture, fracture) were absent for all 3 implant groups. Crown complications were limited to 2 crowns debonding and 1 with excess cement (2.5%). Five biological complications (4.0%) were recorded. The overall complication rate was 6.5%. CAD/CAM zirconia abutments restored with cemented lithium disilicate crowns demonstrated high survival on 3 different implant-abutment interface designs. No abutment or abutment screw fracture occurred. The technical complications observed after 2.4 years were minor and reversible. 
The use of CAD/CAM zirconia abutments with cemented lithium disilicate crowns is associated with high technical and biologic success at 2.4 years. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  9. Analog "neuronal" networks in early vision.

    PubMed Central

    Koch, C; Marroquin, J; Yuille, A

    1986-01-01

    Many problems in early vision can be formulated in terms of minimizing a cost function. Examples are shape from shading, edge detection, motion analysis, structure from motion, and surface interpolation. As shown by Poggio and Koch [Poggio, T. & Koch, C. (1985) Proc. R. Soc. London, Ser. B 226, 303-323], quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical, or chemical networks. However, in the presence of discontinuities, the cost function is nonquadratic, raising the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank [Hopfield, J. J. & Tank, D. W. (1985) Biol. Cybern. 52, 141-152] have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. We show how these networks can be generalized to solve the nonconvex energy functionals of early vision. We illustrate this approach by implementing a specific analog network, solving the problem of reconstructing a smooth surface from sparse data while preserving its discontinuities. These results suggest a novel computational strategy for solving early vision problems in both biological and real-time artificial vision systems. PMID:3459172
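
    The nonconvex energy functionals referred to above combine a data term, a smoothness term, and a "line process" that pays a fixed price to break smoothness at a discontinuity. A toy 1-D version is sketched below (dense data for simplicity; parameter values and names are mine, and this naive alternating minimization can get stuck in local minima — which is precisely why analog networks and continuation methods are of interest).

```python
def reconstruct(data, n, lam=1.0, alpha=0.5, iters=500, step=0.1):
    """Weak-membrane-style 1-D reconstruction: fit a piecewise-smooth u to
    observations `data` (index -> value), allowing breaks (line process l)
    wherever the smoothness penalty would exceed alpha. Alternates a
    closed-form update of l with a gradient step on u."""
    u = [0.0] * n
    l = [0] * (n - 1)
    for _ in range(iters):
        # Break the surface where smoothing would cost more than alpha.
        l = [1 if lam * (u[i + 1] - u[i]) ** 2 > alpha else 0
             for i in range(n - 1)]
        grad = [0.0] * n
        for i, d in data.items():          # data term: observations
            grad[i] += 2.0 * (u[i] - d)
        for i in range(n - 1):             # membrane term, off across breaks
            if not l[i]:
                g = 2.0 * lam * (u[i + 1] - u[i])
                grad[i] -= g
                grad[i + 1] += g
        u = [ui - step * gi for ui, gi in zip(u, grad)]
    return u, l
```

    On a step edge, the scheme smooths each plateau while inserting a single break at the discontinuity rather than blurring across it.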

  10. Frustration in biomolecules.

    PubMed

    Ferreiro, Diego U; Komives, Elizabeth A; Wolynes, Peter G

    2014-11-01

    Biomolecules are the prime information processing elements of living matter. Most of these inanimate systems are polymers that compute their own structures and dynamics using as input seemingly random character strings of their sequence, following which they coalesce and perform integrated cellular functions. In large computational systems with finite interaction-codes, the appearance of conflicting goals is inevitable. Simple conflicting forces can lead to quite complex structures and behaviors, leading to the concept of frustration in condensed matter. We present here some basic ideas about frustration in biomolecules and how the frustration concept leads to a better appreciation of many aspects of the architecture of biomolecules, and especially how biomolecular structure connects to function by means of localized frustration. These ideas are simultaneously both seductively simple and perilously subtle to grasp completely. The energy landscape theory of protein folding provides a framework for quantifying frustration in large systems and has been implemented at many levels of description. We first review the notion of frustration from the areas of abstract logic and its uses in simple condensed matter systems. We discuss then how the frustration concept applies specifically to heteropolymers, testing folding landscape theory in computer simulations of protein models and in experimentally accessible systems. Studying the aspects of frustration averaged over many proteins provides ways to infer energy functions useful for reliable structure prediction. We discuss how frustration affects folding mechanisms. We review here how the biological functions of proteins are related to subtle local physical frustration effects and how frustration influences the appearance of metastable states, the nature of binding processes, catalysis and allosteric transitions. 
In this review, we also emphasize that frustration, far from being always a bad thing, is an essential feature of biomolecules that allows dynamics to be harnessed for function. In this way, we hope to illustrate how frustration is a fundamental concept in molecular biology.

  11. Web-Based Computational Chemistry Education with CHARMMing I: Lessons and Tutorial

    PubMed Central

    Miller, Benjamin T.; Singh, Rishi P.; Schalk, Vinushka; Pevzner, Yuri; Sun, Jingjun; Miller, Carrie S.; Boresch, Stefan; Ichiye, Toshiko; Brooks, Bernard R.; Woodcock, H. Lee

    2014-01-01

    This article describes the development, implementation, and use of web-based “lessons” to introduce students and other newcomers to computer simulations of biological macromolecules. These lessons, i.e., interactive step-by-step instructions for performing common molecular simulation tasks, are integrated into the collaboratively developed CHARMM INterface and Graphics (CHARMMing) web user interface (http://www.charmming.org). Several lessons have already been developed with new ones easily added via a provided Python script. In addition to CHARMMing's new lessons functionality, web-based graphical capabilities have been overhauled and are fully compatible with modern mobile web browsers (e.g., phones and tablets), allowing easy integration of these advanced simulation techniques into coursework. Finally, one of the primary objections to web-based systems like CHARMMing has been that “point and click” simulation set-up does little to teach the user about the underlying physics, biology, and computational methods being applied. In response to this criticism, we have developed a freely available tutorial to bridge the gap between graphical simulation setup and the technical knowledge necessary to perform simulations without user interface assistance. PMID:25057988

  12. Computer-Aided Drug Design Methods.

    PubMed

    Yu, Wenbo; MacKerell, Alexander D

    2017-01-01

    Computational approaches are useful tools to interpret and guide experiments to expedite the antibiotic drug design process. Structure-based drug design (SBDD) and ligand-based drug design (LBDD) are the two general types of computer-aided drug design (CADD) approaches in existence. SBDD methods analyze macromolecular target 3-dimensional structural information, typically of proteins or RNA, to identify key sites and interactions that are important for their respective biological functions. Such information can then be utilized to design antibiotic drugs that can compete with essential interactions involving the target and thus interrupt the biological pathways essential for survival of the microorganism(s). LBDD methods focus on known antibiotic ligands for a target to establish a relationship between their physicochemical properties and antibiotic activities, referred to as a structure-activity relationship (SAR), information that can be used for optimization of known drugs or guide the design of new drugs with improved activity. In this chapter, standard CADD protocols for both SBDD and LBDD will be presented with a special focus on methodologies and targets routinely studied in our laboratory for antibiotic drug discovery.
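
    As a minimal illustration of the LBDD/SAR idea — relating a physicochemical descriptor to measured activity — consider a one-descriptor least-squares fit. Real SAR/QSAR models use many descriptors, larger data sets, and validation; the function name and data below are invented for illustration only.

```python
def fit_line(x, y):
    """Ordinary least-squares fit y ~ slope * x + intercept. Think of x as
    a descriptor value (e.g. a lipophilicity measure) for each compound
    and y as its measured activity."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx
```

    The fitted slope is the simplest possible SAR statement: how much activity changes per unit of the descriptor.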

  13. Spike-Timing Dependent Plasticity in Unipolar Silicon Oxide RRAM Devices

    PubMed Central

    Zarudnyi, Konstantin; Mehonic, Adnan; Montesi, Luca; Buckwell, Mark; Hudziak, Stephen; Kenyon, Anthony J.

    2018-01-01

    Resistance switching, or Resistive RAM (RRAM) devices show considerable potential for application in hardware spiking neural networks (neuro-inspired computing) by mimicking some of the behavior of biological synapses, and hence enabling non-von Neumann computer architectures. Spike-timing dependent plasticity (STDP) is one such behavior, and one example of several classes of plasticity that are being examined with the aim of finding suitable algorithms for application in many computing tasks such as coincidence detection, classification and image recognition. In previous work we have demonstrated that the neuromorphic capabilities of silicon-rich silicon oxide (SiOx) resistance switching devices extend beyond plasticity to include thresholding, spiking, and integration. We previously demonstrated such behaviors in devices operated in the unipolar mode, opening up the question of whether we could add plasticity to the list of features exhibited by our devices. Here we demonstrate clear STDP in unipolar devices. Significantly, we show that the response of our devices is broadly similar to that of biological synapses. This work further reinforces the potential of simple two-terminal RRAM devices to mimic neuronal functionality in hardware spiking neural networks. PMID:29472837
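
    The pair-based STDP behavior that such devices are tested against is commonly written as an exponential window: potentiation when the presynaptic spike precedes the postsynaptic spike, depression otherwise. The sketch below is a generic textbook form with illustrative parameters (millisecond time units), not the device model from this work.

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for spike-time difference
    dt = t_post - t_pre: potentiate if the pre spike came first (dt > 0),
    depress if it came second (dt < 0)."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    if dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

def apply_stdp(weight, pre_times, post_times, wmin=0.0, wmax=1.0):
    """Accumulate all-to-all pairwise updates, clipping to device bounds
    (a memristive synapse has a finite conductance range)."""
    for tp in pre_times:
        for tq in post_times:
            weight += stdp_dw(tq - tp)
    return max(wmin, min(wmax, weight))
```

    Measured device responses are typically compared against this window shape to claim biologically plausible plasticity.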

  14. A model of proto-object based saliency

    PubMed Central

    Russell, Alexander F.; Mihalaş, Stefan; von der Heydt, Rudiger; Niebur, Ernst; Etienne-Cummings, Ralph

    2013-01-01

    Organisms use the process of selective attention to optimally allocate their computational resources to the instantaneously most relevant subsets of a visual scene, ensuring that they can parse the scene in real time. Many models of bottom-up attentional selection assume that elementary image features, like intensity, color and orientation, attract attention. Gestalt psychologists, however, argue that humans perceive whole objects before they analyze individual features. This is supported by recent psychophysical studies that show that objects predict eye-fixations better than features. In this report we present a neurally inspired algorithm of object-based, bottom-up attention. The model rivals the performance of state-of-the-art non-biologically plausible feature-based algorithms (and outperforms biologically plausible feature-based algorithms) in its ability to predict perceptual saliency (eye fixations and subjective interest points) in natural scenes. The model achieves this by computing saliency as a function of proto-objects that establish the perceptual organization of the scene. All computational mechanisms of the algorithm have direct neural correlates, and our results provide evidence for the interface theory of attention. PMID:24184601

  15. Web-based computational chemistry education with CHARMMing I: Lessons and tutorial.

    PubMed

    Miller, Benjamin T; Singh, Rishi P; Schalk, Vinushka; Pevzner, Yuri; Sun, Jingjun; Miller, Carrie S; Boresch, Stefan; Ichiye, Toshiko; Brooks, Bernard R; Woodcock, H Lee

    2014-07-01

    This article describes the development, implementation, and use of web-based "lessons" to introduce students and other newcomers to computer simulations of biological macromolecules. These lessons, i.e., interactive step-by-step instructions for performing common molecular simulation tasks, are integrated into the collaboratively developed CHARMM INterface and Graphics (CHARMMing) web user interface (http://www.charmming.org). Several lessons have already been developed with new ones easily added via a provided Python script. In addition to CHARMMing's new lessons functionality, web-based graphical capabilities have been overhauled and are fully compatible with modern mobile web browsers (e.g., phones and tablets), allowing easy integration of these advanced simulation techniques into coursework. Finally, one of the primary objections to web-based systems like CHARMMing has been that "point and click" simulation set-up does little to teach the user about the underlying physics, biology, and computational methods being applied. In response to this criticism, we have developed a freely available tutorial to bridge the gap between graphical simulation setup and the technical knowledge necessary to perform simulations without user interface assistance.

  16. Tools for building a comprehensive modeling system for virtual screening under real biological conditions: The Computational Titration algorithm.

    PubMed

    Kellogg, Glen E; Fornabaio, Micaela; Chen, Deliang L; Abraham, Donald J; Spyrakis, Francesca; Cozzini, Pietro; Mozzarelli, Andrea

    2006-05-01

    Computational tools utilizing a unique empirical modeling system based on the hydrophobic effect and the measurement of logP(o/w) (the partition coefficient for solvent transfer between 1-octanol and water) are described. The associated force field, Hydropathic INTeractions (HINT), contains much rich information about non-covalent interactions in the biological environment because of its basis in an experiment that measures interactions in solution. HINT is shown to be the core of an evolving virtual screening system that is capable of taking into account a number of factors often ignored such as entropy, effects of solvent molecules at the active site, and the ionization states of acidic and basic residues and ligand functional groups. The outline of a comprehensive modeling system for virtual screening that incorporates these features is described. In addition, a detailed description of the Computational Titration algorithm is provided. As an example, three complexes of dihydrofolate reductase (DHFR) are analyzed with our system and these results are compared with the experimental free energies of binding.

  17. Message from the ISCB: ISCB Ebola award for important future research on the computational biology of Ebola virus.

    PubMed

    Karp, Peter D; Berger, Bonnie; Kovats, Diane; Lengauer, Thomas; Linial, Michal; Sabeti, Pardis; Hide, Winston; Rost, Burkhard

    2015-02-15

    Speed is of the essence in combating Ebola; thus, computational approaches should form a significant component of Ebola research. As for the development of any modern drug, computational biology is uniquely positioned to contribute through comparative analysis of the genome sequences of Ebola strains and three-dimensional protein modeling. Other computational approaches to Ebola may include large-scale docking studies of Ebola proteins with human proteins and with small-molecule libraries, computational modeling of the spread of the virus, computational mining of the Ebola literature and creation of a curated Ebola database. Taken together, such computational efforts could significantly accelerate traditional scientific approaches. In recognition of the need for important and immediate solutions from the field of computational biology against Ebola, the International Society for Computational Biology (ISCB) announces a prize for an important computational advance in fighting the Ebola virus. ISCB will confer the ISCB Fight against Ebola Award, along with a prize of US$2000, at its July 2016 annual meeting (ISCB Intelligent Systems for Molecular Biology 2016, Orlando, FL). dkovats@iscb.org or rost@in.tum.de. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Deterministic Function Computation with Chemical Reaction Networks*

    PubMed Central

    Chen, Ho-Lin; Doty, David; Soloveichik, David

    2013-01-01

    Chemical reaction networks (CRNs) formally model chemistry in a well-mixed solution. CRNs are widely used to describe information processing occurring in natural cellular regulatory networks, and with upcoming advances in synthetic biology, CRNs are a promising language for the design of artificial molecular control circuitry. Nonetheless, despite the widespread use of CRNs in the natural sciences, the range of computational behaviors exhibited by CRNs is not well understood. CRNs have been shown to be efficiently Turing-universal (i.e., able to simulate arbitrary algorithms) when allowing for a small probability of error. CRNs that are guaranteed to converge on a correct answer, on the other hand, have been shown to decide only the semilinear predicates (a multi-dimensional generalization of “eventually periodic” sets). We introduce the notion of function, rather than predicate, computation by representing the output of a function f : ℕ^k → ℕ^l by a count of some molecular species, i.e., if the CRN starts with x_1, …, x_k molecules of some “input” species X_1, …, X_k, the CRN is guaranteed to converge to having f(x_1, …, x_k) molecules of the “output” species Y_1, …, Y_l. We show that a function f : ℕ^k → ℕ^l is deterministically computed by a CRN if and only if its graph {(x, y) ∈ ℕ^k × ℕ^l ∣ f(x) = y} is a semilinear set. Finally, we show that each semilinear function f (a function whose graph is a semilinear set) can be computed by a CRN on input x in expected time O(polylog ∥x∥_1). PMID:25383068
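
    A concrete instance of such deterministic function computation: the single reaction X1 + X2 → Y consumes one molecule of each input until the scarcer one runs out, so the converged count of Y is the semilinear function min(x1, x2) regardless of the order in which reactions fire. A naive simulator (names mine; reaction rates are ignored because only the convergent counts matter here) can check this:

```python
import random

def simulate_crn(counts, reactions, rng=None):
    """Naive stochastic simulation of a CRN: repeatedly fire an applicable
    reaction chosen uniformly at random until none can fire. Each reaction
    is a pair (reactants, products) of species -> stoichiometry dicts."""
    rng = rng or random.Random(0)
    counts = dict(counts)
    while True:
        applicable = [
            (ins, outs) for ins, outs in reactions
            if all(counts.get(s, 0) >= m for s, m in ins.items())
        ]
        if not applicable:
            return counts  # converged: no reaction can fire
        ins, outs = rng.choice(applicable)
        for s, m in ins.items():
            counts[s] -= m
        for s, m in outs.items():
            counts[s] = counts.get(s, 0) + m

# min(x1, x2): each firing consumes one of each input and emits one output.
MIN_CRN = [({"X1": 1, "X2": 1}, {"Y": 1})]
```

    Because this CRN has a single reaction, every execution order reaches the same final counts, which is exactly the determinism the theorem characterizes.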

  19. A web-based system for neural network based classification in temporomandibular joint osteoarthritis.

    PubMed

    de Dumast, Priscille; Mirabel, Clément; Cevidanes, Lucia; Ruellas, Antonio; Yatabe, Marilia; Ioshida, Marcos; Ribera, Nina Tubau; Michoud, Loic; Gomes, Liliane; Huang, Chao; Zhu, Hongtu; Muniz, Luciana; Shoukri, Brandon; Paniagua, Beatriz; Styner, Martin; Pieper, Steve; Budin, Francois; Vimort, Jean-Baptiste; Pascal, Laura; Prieto, Juan Carlos

    2018-07-01

    The purpose of this study is to describe the methodological innovations of a web-based system for storage, integration and computation of biomedical data, using a training imaging dataset to remotely compute a deep neural network classifier of temporomandibular joint osteoarthritis (TMJOA). The imaging dataset for this study consisted of three-dimensional (3D) surface meshes of mandibular condyles constructed from cone beam computed tomography (CBCT) scans. The training dataset consisted of 259 condyles, 105 from control subjects and 154 from patients with diagnosis of TMJ OA. For the image analysis classification, 34 right and left condyles from 17 patients (39.9 ± 11.7 years), who experienced signs and symptoms of the disease for less than 5 years, were included as the testing dataset. For the integrative statistical model of clinical, biological and imaging markers, the sample consisted of the same 17 test OA subjects and 17 age and sex matched control subjects (39.4 ± 15.4 years), who did not show any sign or symptom of OA. For these 34 subjects, a standardized clinical questionnaire, blood and saliva samples were also collected. The technological methodologies in this study include a deep neural network classifier of 3D condylar morphology (ShapeVariationAnalyzer, SVA), and a flexible web-based system for data storage, computation and integration (DSCI) of high-dimensional imaging, clinical, and biological data. The DSCI system trained and tested the neural network, indicating 5 stages of structural degenerative changes in condylar morphology in the TMJ with 91% close agreement between the clinician consensus and the SVA classifier. The DSCI system also remotely ran a novel application of a statistical method, Multivariate Functional Shape Data Analysis, which computed high-dimensional correlations between 3D shape coordinates, clinical pain levels, and levels of biological markers, and then graphically displayed the results.
The findings of this study demonstrate a comprehensive phenotypic characterization of TMJ health and disease at clinical, imaging and biological levels, using novel flexible and versatile open-source tools for a web-based system that provides advanced shape statistical analysis and a neural network based classification of temporomandibular joint osteoarthritis. Published by Elsevier Ltd.

  20. Computer Analogies: Teaching Molecular Biology and Ecology.

    ERIC Educational Resources Information Center

    Rice, Stanley; McArthur, John

    2002-01-01

    Suggests that computer science analogies can aid the understanding of gene expression, including the storage of genetic information on chromosomes. Presents a matrix of biology and computer science concepts. (DDR)

  1. Applying Knowledge of Enzyme Biochemistry to the Prediction of Functional Sites for Aiding Drug Discovery.

    PubMed

    Pai, Priyadarshini P; Mondal, Sukanta

    2017-01-01

    Enzymes are biological catalysts that play an important role in determining the patterns of chemical transformations pertaining to life. Many milestones have been achieved in unraveling the mechanisms by which enzymes orchestrate various cellular processes using experimental and computational approaches. Experimental studies generating nearly all possible mutations of target enzymes have been aided by rapid computational approaches aiming at enzyme functional classification, understanding of domain organization, and functional site identification. The functional architecture, essentially, is involved in binding or interaction with ligands including substrates, products, cofactors, and inhibitors, providing for their function, such as in catalysis, ligand-mediated cell signaling, allosteric regulation and post-translational modifications. With the increasing availability of enzyme information and advances in algorithm development, computational approaches have now become more capable of providing precise inputs for enzyme engineering, and in the process also making it more efficient. This has led to interesting findings, especially in aberrant enzyme interactions, such as host-pathogen interactions in infection, neurodegenerative diseases, cancer and diabetes. This review aims to summarize, in retrospect, the mined knowledge, vivid perspectives, and challenging strides in using available experimentally validated enzyme information for characterization. An analytical outlook is presented on the scope of exploring future directions. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  2. Basic Research Plan, February 2003

    DTIC Science & Technology

    2003-02-01

    consistent. This effort includes the nitration, crystallization, and coating of CL–20. Under Army sponsorship, a process for the nitration of CL–20 has...actuators • Multiscale computational design of structural materials with embedded functionality • Materials with embedded electrical/magnetic/optical...the innovative use of biology to produce unique materials and processes of military relevance; to increase economic and environmental affordability

  3. Optic Glomeruli: Biological Circuits that Compute Target Identity

    DTIC Science & Technology

    2013-11-01

    vitripennis. Insect Mol. Biol. Suppl. 1:121-36. Strausfeld NJ. 2012. Arthropod Brains. Evolution, Functional Elegance and Historical Significance. Harvard...Neuroscience and Center for Insect Science University of Arizona Tucson, AZ 85721 Contract No. FA8651-10-1-0001 November 2013 FINAL REPORT...PERFORMING ORGANIZATION REPORT NUMBER Department of Neuroscience and Center for Insect Science University of Arizona Tucson, AZ 85721

  4. The Role of Protein Loops and Linkers in Conformational Dynamics and Allostery.

    PubMed

    Papaleo, Elena; Saladino, Giorgio; Lambrughi, Matteo; Lindorff-Larsen, Kresten; Gervasio, Francesco Luigi; Nussinov, Ruth

    2016-06-08

    Proteins are dynamic entities that undergo a plethora of conformational changes that may take place on a wide range of time scales. These changes can be as small as the rotation of one or a few side-chain dihedral angles or involve concerted motions in larger portions of the three-dimensional structure; both kinds of motions can be important for biological function and allostery. It is becoming increasingly evident that "connector regions" are important components of the dynamic personality of protein structures. These regions may be either disordered loops, i.e., poorly structured regions connecting secondary structural elements, or linkers that connect entire protein domains. Experimental and computational studies have, however, revealed that these regions are not mere connectors, and their role in allostery and conformational changes has been emerging in the last few decades. Here we provide a detailed overview of the structural properties and classification of loops and linkers, as well as a discussion of the main computational methods employed to investigate their function and dynamical properties. We also describe their importance for protein dynamics and allostery using as examples key proteins in cellular biology and human diseases such as kinases, ubiquitinating enzymes, and transcription factors.

  5. Recent advances in automated protein design and its future challenges.

    PubMed

    Setiawan, Dani; Brender, Jeffrey; Zhang, Yang

    2018-04-25

    Protein function is determined by protein structure which is in turn determined by the corresponding protein sequence. If the rules that cause a protein to adopt a particular structure are understood, it should be possible to refine or even redefine the function of a protein by working backwards from the desired structure to the sequence. Automated protein design attempts to calculate the effects of mutations computationally with the goal of more radical or complex transformations than are accessible by experimental techniques. Areas covered: The authors give a brief overview of the recent methodological advances in computer-aided protein design, showing how methodological choices affect final design and how automated protein design can be used to address problems considered beyond traditional protein engineering, including the creation of novel protein scaffolds for drug development. The authors also specifically address future challenges in the development of automated protein design. Expert opinion: Automated protein design holds potential as a protein engineering technique, particularly in cases where screening by combinatorial mutagenesis is problematic. Considering solubility and immunogenicity issues, automated protein design is initially more likely to make an impact as a research tool for exploring basic biology in drug discovery than in the design of protein biologics.

  6. Discovery of new candidate genes related to brain development using protein interaction information.

    PubMed

    Chen, Lei; Chu, Chen; Kong, Xiangyin; Huang, Tao; Cai, Yu-Dong

    2015-01-01

    Human brain development is a dramatic process composed of a series of complex and fine-tuned spatiotemporal gene expressions. A good comprehension of this process can assist us in developing the potential of our brain. However, we have only limited knowledge about the genes and gene functions that are involved in this biological process. Therefore, a substantial demand remains to discover new brain development-related genes and identify their biological functions. In this study, we aimed to discover new brain-development related genes by building a computational method. We referred to a series of computational methods used to discover new disease-related genes and developed a similar method. In this method, the shortest path algorithm was executed on a weighted graph that was constructed using protein-protein interactions. New candidate genes fell on at least one of the shortest paths connecting two known genes that are related to brain development. A randomization test was then adopted to filter positive discoveries. Of the final identified genes, several have been reported to be associated with brain development, indicating the effectiveness of the method, whereas several of the others may have potential roles in brain development.
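    The shortest-path strategy described above can be sketched in a few lines: a minimal Dijkstra search over a toy protein-protein interaction graph, in which edge weights encode interaction confidence (lower = stronger link) and the interior nodes of the shortest path between two known genes become new candidates. The gene names and weights below are hypothetical, and the paper's randomization-test filtering step is omitted.

```python
import heapq

def dijkstra_path(graph, src, dst):
    """Return the lowest-weight path from src to dst.

    graph: dict mapping node -> list of (neighbour, weight) pairs.
    """
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    seen = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Toy interactome: two known brain-development genes (hypothetical
# names) connected through intermediate proteins X1 and X2.
ppi = {
    "KNOWN_A": [("X1", 0.2), ("X2", 0.9)],
    "X1": [("KNOWN_B", 0.3)],
    "X2": [("KNOWN_B", 0.1)],
}
path = dijkstra_path(ppi, "KNOWN_A", "KNOWN_B")
candidates = [g for g in path if g not in ("KNOWN_A", "KNOWN_B")]
print(path)        # ['KNOWN_A', 'X1', 'KNOWN_B']
print(candidates)  # ['X1'] -- inner nodes become candidate genes
```

    In the full method, candidates are collected from the shortest paths between every pair of known genes and then filtered by a permutation test before being reported.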

  7. Structural computational modeling of RNA aptamers.

    PubMed

    Xu, Xiaojun; Dickey, David D; Chen, Shi-Jie; Giangrande, Paloma H

    2016-07-01

    RNA aptamers represent an emerging class of biologics that can be easily adapted for personalized and precision medicine. Several therapeutic aptamers with desirable binding and functional properties have been developed and evaluated in preclinical studies over the past 25 years. However, for the majority of these aptamers, their clinical potential has yet to be realized. A significant hurdle to the clinical adoption of this novel class of biologics is the limited information on their secondary and tertiary structure. Knowledge of the RNA's structure would greatly facilitate and expedite the post-selection optimization steps required for translation, including truncation (to reduce costs of manufacturing), chemical modification (to enhance stability and improve safety) and chemical conjugation (to improve drug properties for combinatorial therapy). Here we describe a structural computational modeling methodology that, when coupled to a standard functional assay, can be used to determine key sequence and structural motifs of an RNA aptamer. We applied this methodology to enable the truncation of an aptamer to prostate specific membrane antigen (PSMA) with great potential for targeted therapy that had failed previous truncation attempts. This methodology can be easily applied to optimize other aptamers with therapeutic potential. Copyright © 2016. Published by Elsevier Inc.

  8. Functional Analysis of OMICs Data and Small Molecule Compounds in an Integrated "Knowledge-Based" Platform.

    PubMed

    Dubovenko, Alexey; Nikolsky, Yuri; Rakhmatulin, Eugene; Nikolskaya, Tatiana

    2017-01-01

    Analysis of NGS and other sequencing data, gene variants, gene expression, proteomics, and other high-throughput (OMICs) data is challenging because of its biological complexity and high level of technical and biological noise. One way to deal with both problems is to perform analysis with a high-fidelity, annotated knowledgebase of protein interactions, pathways, and functional ontologies. This knowledgebase has to be structured in a computer-readable format and must include software tools for managing experimental data, analysis, and reporting. Here, we present MetaCore™ and Key Pathway Advisor (KPA), an integrated platform for functional data analysis. On the content side, MetaCore and KPA encompass a comprehensive database of molecular interactions of different types, pathways, network models, and ten functional ontologies covering human, mouse, and rat genes. The analytical toolkit includes tools for gene/protein list enrichment analysis, a statistical "interactome" tool for the identification of over- and under-connected proteins in the dataset, and a biological network analysis module made up of network generation algorithms and filters. The suite also features Advanced Search, an application for combinatorial search of the database content, as well as a Java-based tool called Pathway Map Creator for drawing and editing custom pathway maps. Applications of MetaCore and KPA include research on the molecular mode of action of disease, identification of potential biomarkers and drug targets, pathway hypothesis generation, analysis of the biological effects of novel small molecule compounds, and clinical applications (analysis of large patient cohorts, and translational and personalized medicine).
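    As an illustration of the enrichment analysis such platforms perform, the following is a minimal hypergeometric over-representation test on a toy gene universe. It is a sketch of the general statistic only, not MetaCore's actual implementation, and the counts are invented.

```python
from math import comb

def enrichment_pvalue(N, K, n, k):
    """Hypergeometric upper-tail P(X >= k): probability of drawing at
    least k pathway genes when sampling n genes from a universe of N
    genes that contains K pathway members."""
    total = sum(comb(K, i) * comb(N - K, n - i)
                for i in range(k, min(K, n) + 1))
    return total / comb(N, n)

# Universe of 20 genes, 5 in the pathway; a hit list of 5 genes
# contains 4 pathway members -- is that more overlap than chance?
p = enrichment_pvalue(N=20, K=5, n=5, k=4)
print(round(p, 5))  # 0.0049
```

    A small p-value like this is what an enrichment tool reports, usually after multiple-testing correction across all pathways tested.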

  9. DIANA-microT web server: elucidating microRNA functions through target prediction.

    PubMed

    Maragkakis, M; Reczko, M; Simossis, V A; Alexiou, P; Papadopoulos, G L; Dalamagas, T; Giannopoulos, G; Goumas, G; Koukis, E; Kourtis, K; Vergoulis, T; Koziris, N; Sellis, T; Tsanakas, P; Hatzigeorgiou, A G

    2009-07-01

    Computational microRNA (miRNA) target prediction is one of the key means for deciphering the role of miRNAs in development and disease. Here, we present the DIANA-microT web server as the user interface to the DIANA-microT 3.0 miRNA target prediction algorithm. The web server provides extensive information for predicted miRNA:target gene interactions through a user-friendly interface with rich connectivity to online biological resources. Target gene and miRNA functions may be elucidated through automated bibliographic searches, and functional information is accessible through Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. The web server offers links to nomenclature, sequence and protein databases, and users can search for targeted genes using different nomenclatures or functional features, such as a gene's possible involvement in biological pathways. The target prediction algorithm supports parameters calculated individually for each miRNA:target gene interaction and provides a signal-to-noise ratio and a precision score that help in evaluating the significance of the predicted results. Using a set of miRNA targets recently identified through the pSILAC method, the performance of several computational target prediction programs was assessed; DIANA-microT 3.0 achieved the highest ratio of correctly predicted targets over all predicted targets (66%). The DIANA-microT web server is freely available at www.microrna.gr/microT.
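    Seed complementarity is the core signal behind most miRNA target prediction. The sketch below scans a 3'UTR for 7mer matches to miRNA nucleotides 2-8; the sequences are illustrative and this is a generic toy scan, not the DIANA-microT 3.0 scoring scheme, which additionally weights each site and combines them into gene-level scores.

```python
def seed_sites(mirna, utr):
    """Find 7mer seed matches (miRNA nucleotides 2-8) in a 3'UTR.

    The site is the reverse complement of the seed, because the miRNA
    base-pairs with the mRNA in antiparallel orientation.
    """
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:8]  # nucleotides 2-8 (0-based slice)
    site = "".join(comp[b] for b in reversed(seed))
    return [i for i in range(len(utr) - 6) if utr[i:i + 7] == site]

# let-7-like miRNA (illustrative): seed GAGGUAG -> site CUACCUC
mirna = "UGAGGUAGUAGGUUGUAUAGUU"
utr = "AAACUACCUCAAGGCUACCUCAA"
sites = seed_sites(mirna, utr)
print(sites)  # [3, 14] -- two candidate binding sites
```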

  10. The microRNA Expression Profile in Donation after Cardiac Death (DCD) Livers and Its Ability to Identify Primary Non Function.

    PubMed

    Khorsandi, Shirin Elizabeth; Quaglia, Alberto; Salehi, Siamak; Jassem, Wayel; Vilca-Melendez, Hector; Prachalias, Andreas; Srinivasan, Parthi; Heaton, Nigel

    2015-01-01

    Donation after cardiac death (DCD) livers are marginal organs for transplant, and their use is associated with a higher risk of primary non function (PNF) or early graft dysfunction (EGD). The aim was to determine if microRNA (miRNA) expression was able to discriminate between DCD livers of varying clinical outcome. DCD groups were categorized as PNF retransplanted within a week (n=7), good functional outcome (n=7; peak aspartate transaminase (AST) ≤ 1000 IU/L) and EGD (n=9; peak AST ≥ 2500 IU/L). miRNA was extracted from archival formalin-fixed post-perfusion tru-cut liver biopsies. High-throughput expression analysis was performed using miRNA arrays. Bioinformatic analysis of the expression data was performed and validated with real-time quantitative PCR (RT-qPCR). The function of miRNAs of interest was investigated using computational biology prediction algorithms. From the array analysis, 16 miRNAs were identified as significantly different (p<0.05). On RT-qPCR, miR-155 and miR-940 had the highest expression across all three DCD clinical groups. Only one miRNA, miR-22, was validated, with marginal significance, as differentially expressed between the three groups (p=0.049). From computational biology, miR-22 was predicted to affect signalling pathways that impact protein turnover, metabolism and apoptosis/cell cycle. In conclusion, microRNA expression patterns have low diagnostic potential clinically for discriminating DCD liver quality and outcome.

  11. iTools: a framework for classification, categorization and integration of computational biology resources.

    PubMed

    Dinov, Ivo D; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H V; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D Stott; Toga, Arthur W

    2008-05-28

    The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. 
iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu.

  12. iTools: A Framework for Classification, Categorization and Integration of Computational Biology Resources

    PubMed Central

    Dinov, Ivo D.; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H. V.; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D. Stott; Toga, Arthur W.

    2008-01-01

    The advancement of the computational biology field hinges on progress in three fundamental directions – the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources–data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. 
iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu. PMID:18509477

  13. Bio-inspired nano-sensor-enhanced CNN visual computer.

    PubMed

    Porod, Wolfgang; Werblin, Frank; Chua, Leon O; Roska, Tamas; Rodriguez-Vazquez, Angel; Roska, Botond; Fay, Patrick; Bernstein, Gary H; Huang, Yih-Fang; Csurgay, Arpad I

    2004-05-01

    Nanotechnology opens new ways to utilize recent discoveries in biological image processing by translating the underlying functional concepts into the design of CNN (cellular neural/nonlinear network)-based systems incorporating nanoelectronic devices. There is a natural intersection joining studies of retinal processing, spatio-temporal nonlinear dynamics embodied in CNN, and the possibility of miniaturizing the technology through nanotechnology. This intersection serves as the springboard for our multidisciplinary project. Biological feature and motion detectors map directly into the spatio-temporal dynamics of CNN for target recognition, image stabilization, and tracking. The neural interactions underlying color processing will drive the development of nanoscale multispectral sensor arrays for image fusion. Implementing such nanoscale sensors on a CNN platform will allow the implementation of device feedback control, a hallmark of biological sensory systems. These biologically inspired CNN subroutines are incorporated into the new world of analog-and-logic algorithms and software, containing also many other active-wave computing mechanisms, including nature-inspired (physics and chemistry) as well as PDE-based sophisticated spatio-temporal algorithms. Our goal is to design and develop several miniature prototype devices for target detection, navigation, tracking, and robotics. This paper presents an example illustrating the synergies emerging from the convergence of nanotechnology, biotechnology, and information and cognitive science.

  14. ePlant and the 3D data display initiative: integrative systems biology on the world wide web.

    PubMed

    Fucile, Geoffrey; Di Biase, David; Nahal, Hardeep; La, Garon; Khodabandeh, Shokoufeh; Chen, Yani; Easley, Kante; Christendat, Dinesh; Kelley, Lawrence; Provart, Nicholas J

    2011-01-10

    Visualization tools for biological data are often limited in their ability to interactively integrate data at multiple scales. These computational tools are also typically limited by two-dimensional displays and programmatic implementations that require separate configurations for each of the user's computing devices and recompilation for functional expansion. Towards overcoming these limitations we have developed "ePlant" (http://bar.utoronto.ca/eplant) - a suite of open-source world wide web-based tools for the visualization of large-scale data sets from the model organism Arabidopsis thaliana. These tools display data spanning multiple biological scales on interactive three-dimensional models. Currently, ePlant consists of the following modules: a sequence conservation explorer that includes homology relationships and single nucleotide polymorphism data, a protein structure model explorer, a molecular interaction network explorer, a gene product subcellular localization explorer, and a gene expression pattern explorer. The ePlant's protein structure explorer module represents experimentally determined and theoretical structures covering >70% of the Arabidopsis proteome. The ePlant framework is accessed entirely through a web browser, and is therefore platform-independent. It can be applied to any model organism. To facilitate the development of three-dimensional displays of biological data on the world wide web we have established the "3D Data Display Initiative" (http://3ddi.org).

  15. Computation of repetitions and regularities of biologically weighted sequences.

    PubMed

    Christodoulakis, M; Iliopoulos, C; Mouchard, L; Perdikuri, K; Tsakalidis, A; Tsichlas, K

    2006-01-01

    Biological weighted sequences are used extensively in molecular biology as profiles for protein families, in the representation of binding sites and often for the representation of sequences produced by a shotgun sequencing strategy. In this paper, we address three fundamental problems in the area of biologically weighted sequences: (i) computation of repetitions, (ii) pattern matching, and (iii) computation of regularities. Our algorithms can be used as basic building blocks for more sophisticated algorithms applied on weighted sequences.
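    A weighted sequence assigns each position a probability distribution over symbols, and a pattern is reported at a position when its occurrence probability (the product of the per-position symbol probabilities) meets a threshold. The following naive sketch illustrates that matching criterion only; it is not the paper's efficient algorithms, and the toy sequence is invented.

```python
def weighted_matches(wseq, pattern, threshold):
    """Positions where `pattern` occurs in a weighted sequence with
    probability >= threshold.

    wseq: list of dicts, one per position, mapping symbol -> probability.
    """
    hits = []
    for i in range(len(wseq) - len(pattern) + 1):
        p = 1.0
        for j, sym in enumerate(pattern):
            p *= wseq[i + j].get(sym, 0.0)
            if p < threshold:
                break  # probability can only shrink; prune early
        if p >= threshold:
            hits.append((i, p))
    return hits

# Toy weighted DNA sequence: position 1 is uncertain (A or C),
# e.g. from an ambiguous base call in shotgun sequencing.
wseq = [
    {"A": 1.0},
    {"A": 0.5, "C": 0.5},
    {"T": 1.0},
    {"A": 1.0},
]
hits = weighted_matches(wseq, "AAT", 0.25)
print(hits)  # [(0, 0.5)] -- "AAT" occurs at position 0 with prob 0.5
```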

  16. Integrating interactive computational modeling in biology curricula.

    PubMed

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.
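    Logical models of the kind students build in Cell Collective can be illustrated with a minimal synchronous Boolean network. The three-node circuit below is hypothetical and far simpler than real Cell Collective models, but it shows the core simulation loop: every node recomputes its state from a logical rule at each step until the network settles.

```python
def step(state, rules):
    """One synchronous update: each node's new value is its rule
    applied to the *current* state of the whole network."""
    return {node: rule(state) for node, rule in rules.items()}

# Toy 3-node regulatory circuit (hypothetical genes):
# A is a constant input, A activates B, and B represses C.
rules = {
    "A": lambda s: s["A"],
    "B": lambda s: s["A"],
    "C": lambda s: not s["B"],
}

state = {"A": True, "B": False, "C": True}
for _ in range(3):
    state = step(state, rules)
print(state)  # {'A': True, 'B': True, 'C': False} -- a fixed point
```

    "Breaking" the model, in the spirit of the In Silico Biology class, amounts to editing a rule (say, deleting B's activation by A) and watching the attractor change.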

  17. Advances and Computational Tools towards Predictable Design in Biological Engineering

    PubMed Central

    2014-01-01

    The design process of complex systems in all the fields of engineering requires a set of quantitatively characterized components and a method to predict the output of systems composed by such elements. This strategy relies on the modularity of the used components or the prediction of their context-dependent behaviour, when parts functioning depends on the specific context. Mathematical models usually support the whole process by guiding the selection of parts and by predicting the output of interconnected systems. Such bottom-up design process cannot be trivially adopted for biological systems engineering, since parts function is hard to predict when components are reused in different contexts. This issue and the intrinsic complexity of living systems limit the capability of synthetic biologists to predict the quantitative behaviour of biological systems. The high potential of synthetic biology strongly depends on the capability of mastering this issue. This review discusses the predictability issues of basic biological parts (promoters, ribosome binding sites, coding sequences, transcriptional terminators, and plasmids) when used to engineer simple and complex gene expression systems in Escherichia coli. A comparison between bottom-up and trial-and-error approaches is performed for all the discussed elements and mathematical models supporting the prediction of parts behaviour are illustrated. PMID:25161694

  18. Computational genomic identification and functional reconstitution of plant natural product biosynthetic pathways

    PubMed Central

    2016-01-01

    Covering: 2003 to 2016 The last decade has seen the first major discoveries regarding the genomic basis of plant natural product biosynthetic pathways. Four key computationally driven strategies have been developed to identify such pathways, which make use of physical clustering, co-expression, evolutionary co-occurrence and epigenomic co-regulation of the genes involved in producing a plant natural product. Here, we discuss how these approaches can be used for the discovery of plant biosynthetic pathways encoded by both chromosomally clustered and non-clustered genes. Additionally, we will discuss opportunities to prioritize plant gene clusters for experimental characterization, and end with a forward-looking perspective on how synthetic biology technologies will allow effective functional reconstitution of candidate pathways using a variety of genetic systems. PMID:27321668

  19. Solving a Hamiltonian Path Problem with a bacterial computer

    PubMed Central

    Baumgardner, Jordan; Acker, Karen; Adefuye, Oyinade; Crowley, Samuel Thomas; DeLoache, Will; Dickson, James O; Heard, Lane; Martens, Andrew T; Morton, Nickolaus; Ritter, Michelle; Shoecraft, Amber; Treece, Jessica; Unzicker, Matthew; Valencia, Amanda; Waters, Mike; Campbell, A Malcolm; Heyer, Laurie J; Poet, Jeffrey L; Eckdahl, Todd T

    2009-01-01

    Background The Hamiltonian Path Problem asks whether there is a route in a directed graph from a beginning node to an ending node, visiting each node exactly once. The Hamiltonian Path Problem is NP-complete, achieving surprising computational complexity with modest increases in size. This challenge has inspired researchers to broaden the definition of a computer. DNA computers have been developed that solve NP-complete problems. Bacterial computers can be programmed by constructing genetic circuits to execute an algorithm that is responsive to the environment and whose result can be observed. Each bacterium can examine a solution to a mathematical problem and billions of them can explore billions of possible solutions. Bacterial computers can be automated, made responsive to selection, and reproduce themselves so that more processing capacity is applied to problems over time. Results We programmed bacteria with a genetic circuit that enables them to evaluate all possible paths in a directed graph in order to find a Hamiltonian path. We encoded a three node directed graph as DNA segments that were autonomously shuffled randomly inside bacteria by a Hin/hixC recombination system we previously adapted from Salmonella typhimurium for use in Escherichia coli. We represented nodes in the graph as linked halves of two different genes encoding red or green fluorescent proteins. Bacterial populations displayed phenotypes that reflected random ordering of edges in the graph. Individual bacterial clones that found a Hamiltonian path reported their success by fluorescing both red and green, resulting in yellow colonies. We used DNA sequencing to verify that the yellow phenotype resulted from genotypes that represented Hamiltonian path solutions, demonstrating that our bacterial computer functioned as expected. Conclusion We successfully designed, constructed, and tested a bacterial computer capable of finding a Hamiltonian path in a three node directed graph. 
This proof-of-concept experiment demonstrates that bacterial computing is a new way to address NP-complete problems using the inherent advantages of genetic systems. The results of our experiments also validate synthetic biology as a valuable approach to biological engineering. We designed and constructed basic parts, devices, and systems using synthetic biology principles of standardization and abstraction. PMID:19630940
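    The in-silico analogue of the bacterial computation is a brute-force search over node orderings: each bacterium effectively samples one permutation of the graph segments, and the population collectively covers them all. The sketch below enumerates Hamiltonian paths in a hypothetical three-node directed graph of the kind the plasmids encode.

```python
from itertools import permutations

def hamiltonian_paths(nodes, edges):
    """Enumerate every ordering of nodes and keep those in which each
    consecutive pair is a directed edge -- the exhaustive search the
    bacterial populations perform by random shuffling."""
    return [p for p in permutations(nodes)
            if all((p[i], p[i + 1]) in edges for i in range(len(p) - 1))]

# Hypothetical three-node directed graph (a 3-cycle).
nodes = ["n1", "n2", "n3"]
edges = {("n1", "n2"), ("n2", "n3"), ("n3", "n1")}
paths = hamiltonian_paths(nodes, edges)
print(paths)  # the three cyclic orderings qualify
```

    The brute force is exponential in the number of nodes, which is exactly why a massively parallel substrate such as a bacterial population is an attractive way to explore the solution space.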

  20. 'PACLIMS': a component LIM system for high-throughput functional genomic analysis.

    PubMed

    Donofrio, Nicole; Rajagopalon, Ravi; Brown, Douglas; Diener, Stephen; Windham, Donald; Nolin, Shelly; Floyd, Anna; Mitchell, Thomas; Galadima, Natalia; Tucker, Sara; Orbach, Marc J; Patel, Gayatri; Farman, Mark; Pampanwar, Vishal; Soderlund, Cari; Lee, Yong-Hwan; Dean, Ralph A

    2005-04-12

    Recent advances in sequencing techniques leading to cost reduction have resulted in the generation of a growing number of sequenced eukaryotic genomes. Computational tools greatly assist in defining open reading frames and assigning tentative annotations. However, gene functions cannot be asserted without biological support through, among other things, mutational analysis. To functionally annotate an entire organism on a genome-wide scale, in this application the approximately 11,000 predicted genes of the rice blast fungus (Magnaporthe grisea), an effective platform was required for tracking and storing both the biological materials created and the data produced across several participating institutions. The platform designed, named PACLIMS, was built to support our high throughput pipeline for generating 50,000 random insertion mutants of Magnaporthe grisea. To be a useful tool for materials and data tracking and storage, PACLIMS was designed to be simple to use, modifiable to accommodate refinement of research protocols, and cost-efficient. Data entry into PACLIMS was simplified through the use of barcodes and scanners, thus reducing the potential human error, time constraints, and labor. This platform was designed in concert with our experimental protocol so that it leads the researchers through each step of the process from mutant generation through phenotypic assays, thus ensuring that every mutant produced is handled in an identical manner and all necessary data is captured. Many sequenced eukaryotes have reached the point where computational analyses are no longer sufficient and require biological support for their predicted genes. Consequently, there is an increasing need for platforms that support high throughput genome-wide mutational analyses. While PACLIMS was designed specifically for this project, the source and ideas present in its implementation can be used as a model for other high throughput mutational endeavors.

  1. microRNAs Databases: Developmental Methodologies, Structural and Functional Annotations.

    PubMed

    Singh, Nagendra Kumar

    2017-09-01

    microRNA (miRNA) is an endogenous and evolutionarily conserved non-coding RNA involved in post-transcriptional regulation, acting as a gene repressor and directing mRNA cleavage through formation of the RNA-induced silencing complex (RISC). In the RISC, miRNA binds its target mRNA by complementary base pairing together with the Argonaute protein complex, causing gene repression or endonucleolytic cleavage of the mRNA; dysregulation of this process underlies many diseases and syndromes. After the discovery of the miRNAs lin-4 and let-7, large numbers of miRNAs acting in various biological and metabolic processes were subsequently discovered by low-throughput and high-throughput experimental techniques together with computational approaches. miRNAs are important non-coding RNAs for understanding the complex biological phenomena of an organism because they control gene regulation. This paper reviews miRNA databases, with their structural and functional annotations, developed by various researchers. These databases contain structural and functional information on animal, plant and virus miRNAs, including miRNA-associated diseases, stress resistance in plants, the biological processes in which miRNAs take part, the effects of miRNA interactions with drugs and the environment, the effects of sequence variants on miRNAs, miRNA gene expression analyses, and miRNA sequences and structures. This review focuses on the developmental methodology of miRNA databases, such as the computational tools and methods used to extract miRNA annotations from different resources or through experiment. It also discusses the efficiency of each database's user interface design along with its current entries and annotations (pathways, gene ontology, disease ontology, etc.). An integrated schematic diagram of the database construction process is also drawn, along with tabular and graphical comparisons of the entries in different databases. The aim of this paper is to present the importance of miRNA-related resources in a single place.

  2. 'PACLIMS': A component LIM system for high-throughput functional genomic analysis

    PubMed Central

    Donofrio, Nicole; Rajagopalon, Ravi; Brown, Douglas; Diener, Stephen; Windham, Donald; Nolin, Shelly; Floyd, Anna; Mitchell, Thomas; Galadima, Natalia; Tucker, Sara; Orbach, Marc J; Patel, Gayatri; Farman, Mark; Pampanwar, Vishal; Soderlund, Cari; Lee, Yong-Hwan; Dean, Ralph A

    2005-01-01

    Background Recent advances in sequencing techniques and the associated cost reductions have resulted in a growing number of sequenced eukaryotic genomes. Computational tools greatly assist in defining open reading frames and assigning tentative annotations. However, gene functions cannot be asserted without biological support through, among other things, mutational analysis. In taking a genome-wide approach to functionally annotating an entire organism, in this case the ~11,000 predicted genes of the rice blast fungus (Magnaporthe grisea), an effective platform for tracking and storing both the biological materials created and the data produced across several participating institutions was required. Results The platform designed, named PACLIMS, was built to support our high-throughput pipeline for generating 50,000 random insertion mutants of Magnaporthe grisea. To be a useful tool for materials and data tracking and storage, PACLIMS was designed to be simple to use, modifiable to accommodate refinement of research protocols, and cost-efficient. Data entry into PACLIMS was simplified through the use of barcodes and scanners, reducing potential human error, time constraints, and labor. The platform was designed in concert with our experimental protocol so that it leads researchers through each step of the process, from mutant generation through phenotypic assays, ensuring that every mutant produced is handled in an identical manner and that all necessary data are captured. Conclusion Many sequenced eukaryotes have reached the point where computational analyses are no longer sufficient and biological support for their predicted genes is required. Consequently, there is an increasing need for platforms that support high-throughput genome-wide mutational analyses. While PACLIMS was designed specifically for this project, the source code and ideas behind its implementation can serve as a model for other high-throughput mutational endeavors.
PMID:15826298

  3. Detection of seizures from small samples using nonlinear dynamic system theory.

    PubMed

    Yaylali, I; Koçak, H; Jayakar, P

    1996-07-01

    The electroencephalogram (EEG), like many other biological phenomena, is quite likely governed by nonlinear dynamics. Certain characteristics of the underlying dynamics have recently been quantified by computing the correlation dimension (D2) of EEG time-series data. In this paper, the D2 of the unbiased autocovariance function of scalp EEG data was used to detect electrographic seizure activity. Digital EEG data were acquired at a sampling rate of 200 Hz per channel and organized in continuous frames (duration 2.56 s, 512 data points). To increase the reliability of D2 computations with short-duration data, the raw EEG data were first simplified using unbiased autocovariance analysis to highlight the periodic activity present during seizures. The D2 computation was then performed on the unbiased autocovariance function of each channel using the Grassberger-Procaccia method with Theiler's box-assisted correlation algorithm. Even with short-duration data, this preprocessing proved computationally robust and displayed no significant sensitivity to implementation details such as the choice of embedding dimension and box size. The system successfully identified various types of seizures in clinical studies.
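    The preprocessing described above, an unbiased autocovariance followed by a Grassberger-Procaccia correlation sum, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the synthetic signal, embedding parameters and radius are assumptions:

```python
import numpy as np

def unbiased_autocovariance(x):
    """Unbiased autocovariance of a zero-meaned frame for lags 0..N-1."""
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / (n - k) for k in range(n)])

def correlation_sum(series, m, tau, r):
    """Grassberger-Procaccia correlation sum C(r) at embedding dimension m."""
    n = len(series) - (m - 1) * tau
    emb = np.array([series[i:i + (m - 1) * tau + 1:tau] for i in range(n)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)  # distinct pairs only
    return float(np.mean(d[iu] < r))

# One 2.56 s frame at 200 Hz (512 points); a synthetic periodic signal
frame = np.sin(2 * np.pi * 5 * np.arange(512) / 200)
acov = unbiased_autocovariance(frame)
c_r = correlation_sum(acov[:128], m=3, tau=2, r=0.5 * np.std(acov[:128]))
```

    In practice, D2 is then estimated from the slope of log C(r) versus log r over a scaling region.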

  4. MACBenAbim: A Multi-platform Mobile Application for searching keyterms in Computational Biology and Bioinformatics.

    PubMed

    Oluwagbemi, Olugbenga O; Adewumi, Adewole; Esuruoso, Abimbola

    2012-01-01

    Computational biology and bioinformatics are gradually gaining ground in Africa and other developing nations of the world. However, in these countries, the challenges facing computational biology and bioinformatics education include inadequate infrastructure and a lack of readily available complementary and motivational tools to support learning and research. This has discouraged many promising undergraduates, postgraduates and researchers from pursuing further study in these fields. In this paper, we develop and describe MACBenAbim (Multi-platform Mobile Application for Computational Biology and Bioinformatics), a flexible, user-friendly tool to search for, define and describe the meanings of keyterms in computational biology and bioinformatics, thus expanding the frontiers of knowledge of its users. The tool is also capable of visualizing results in a mobile multi-platform context. MACBenAbim is available from the authors for non-commercial purposes.

  5. The Physics of Open Ended Evolution

    NASA Astrophysics Data System (ADS)

    Adams, Alyssa M.

    What makes living systems different from non-living ones? Currently, this question is impossible to answer directly. Instead, we must face computationally tangible questions based on our current understanding of physics, computation, information, and biology. Yet we have few insights into how living systems might quantifiably differ from their non-living counterparts, such as a mathematical foundation to explain our observations of biological evolution, emergence, innovation, and organization. The development of a theory of living systems, if at all possible, demands a mathematical understanding of how the data generated by complex biological systems change over time. In addition, this theory ought to be broad enough not to be constrained to an Earth-based biochemistry. In this dissertation, the philosophy of studying living systems from the perspective of traditional physics is first explored as a motivating discussion for the subsequent research. Traditionally, we have often thought of the physical world from a bottom-up approach: things happening on a smaller scale aggregate into things happening on a larger scale. In addition, the laws of physics are generally considered static over time. Research suggests that biological evolution may follow dynamic laws that (at least in part) change as a function of the state of the system. Of the three featured research projects, two use cellular automata (CA) as a model to study certain aspects of living systems. These aspects include self-reference, open-ended evolution, local physical universality, subjectivity, and information processing. Open-ended evolution and local physical universality are attributed to the vast amount of innovation observed throughout biological evolution. Biological systems may distinguish themselves in terms of information processing and storage while remaining within the theory of computation. The final research project concretely explores a real-world phenomenon by mapping dominance hierarchies in the evolution of video-game strategies. Though the main question of how life differs from non-life remains unanswered, mechanisms behind open-ended evolution and physical universality are revealed.
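    Cellular automata of the kind used as models in such work can be sketched minimally. The following elementary CA (rule 110, a standard example) is an illustration only, not one of the dissertation's specific models:

```python
def eca_step(state, rule):
    """One step of an elementary cellular automaton, periodic boundary."""
    n = len(state)
    return tuple(
        (rule >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
        for i in range(n)
    )

# Rule 110 evolved from a single live cell on a 21-cell ring
state = tuple(1 if i == 10 else 0 for i in range(21))
history = [state]
for _ in range(10):
    state = eca_step(state, 110)
    history.append(state)
```

    Each cell's next value is looked up from the rule number's bits, indexed by the 3-cell neighborhood; rule 110 is notable for being computationally universal.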

  6. Integrative multicellular biological modeling: a case study of 3D epidermal development using GPU algorithms

    PubMed Central

    2010-01-01

    Background Simulation of sophisticated biological models requires considerable computational power. These models typically integrate numerous biological phenomena such as spatially explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphics processing units (GPUs) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost of simulating those models. Results We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction via Notch signaling together with environmental interaction via basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU, including the memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive for avoiding common pitfalls as well as for extracting performance from the GPU architecture. Conclusions We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a starting point for modelers to develop their own GPU implementations, and encourage others to implement their modeling methods on the GPU and to make that code available to the wider community. PMID:20696053

  7. Leveraging unsupervised training sets for multi-scale compartmentalization in renal pathology

    NASA Astrophysics Data System (ADS)

    Lutnick, Brendon; Tomaszewski, John E.; Sarder, Pinaki

    2017-03-01

    Clinical pathology relies on manual compartmentalization and quantification of biological structures, which is time-consuming and often error-prone. Application of computer-vision segmentation algorithms to histopathological image analysis, in contrast, can offer fast, reproducible, and accurate quantitative analysis to aid pathologists. Algorithms tunable to different biologically relevant structures can allow accurate, precise, and reproducible estimates of disease states. In this direction, we have developed a fast, unsupervised computational method for simultaneously separating all biologically relevant structures from histopathological images at multiple scales. Segmentation is achieved by solving an energy optimization problem. Representing the image as a graph, nodes (pixels) are grouped by minimizing a Potts model Hamiltonian, adopted from theoretical physics, where it models interacting electron spins. Pixel relationships (modeled as edges) are used to update the energy of the partitioned graph. By iteratively improving the clustering, the optimal number of segments is revealed. To reduce computational time, the graph is simplified using a Cantor pairing function to intelligently reduce the number of included nodes. The classified nodes are then used to train a multiclass support vector machine that applies the segmentation over the full image. Accurate segmentations of images with as many as 10^6 pixels can be completed in only 5 s, making multi-scale visualization attainable. To establish clinical potential, we employed our method on renal biopsies to quantitatively visualize, for the first time, scale-variant compartments of heterogeneous intra- and extraglomerular structures simultaneously. The utility of our method extends to fields such as oncology and genomics, as well as to non-biological problems.
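    The Cantor pairing function mentioned above has a simple closed form; the sketch below illustrates the pairing itself, not the authors' graph-simplification code:

```python
import math

def cantor_pair(k1, k2):
    """Cantor pairing: maps a pair of non-negative ints to a unique int."""
    return (k1 + k2) * (k1 + k2 + 1) // 2 + k2

def cantor_unpair(z):
    """Inverse of cantor_pair, recovering the original pair."""
    w = (math.isqrt(8 * z + 1) - 1) // 2
    t = w * (w + 1) // 2
    k2 = z - t
    return w - k2, k2
```

    Because the mapping is a bijection, two node labels can be merged into a single integer key and later recovered without collisions.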

  8. MOLNs: A CLOUD PLATFORM FOR INTERACTIVE, REPRODUCIBLE, AND SCALABLE SPATIAL STOCHASTIC COMPUTATIONAL EXPERIMENTS IN SYSTEMS BIOLOGY USING PyURDME

    PubMed Central

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2017-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948

  9. COMPUTATIONAL TOXICOLOGY

    EPA Science Inventory

    Over the last several years, there has been increased pressure to utilize novel technologies derived from computational chemistry, molecular biology and systems biology in toxicological risk assessment. This new area has been referred to as "Computational Toxicology". Our resear...

  10. Reviews.

    ERIC Educational Resources Information Center

    Science Teacher, 1989

    1989-01-01

    Reviews seven software programs: (1) "Science Baseball: Biology" (testing a variety of topics); (2) "Wildways: Understanding Wildlife Conservation"; (3) "Earth Science Computer Test Bank"; (4) "Biology Computer Test Bank"; (5) "Computer Play & Learn Series" (a series of drill and test…

  11. Simulations For Investigating the Contrast Mechanism of Biological Cells with High Frequency Scanning Acoustic Microscopy

    NASA Astrophysics Data System (ADS)

    Juntarapaso, Yada

    Scanning acoustic microscopy (SAM) is one of the most powerful techniques for nondestructive evaluation, and it is a promising tool for characterizing the elastic properties of biological tissues and cells. Exploring single cells is important because there is a connection between single-cell biomechanics and human cancer. SAM has been widely adopted for acoustical cellular and tissue imaging, including measurements of the mechanical and elastic properties of biological specimens. It offers superb advantages in that it is non-invasive, can measure the mechanical properties of biological cells or tissues, and requires no fixation or chemical staining. The first objective of this research is to develop a program for simulating the images and contrast mechanism obtained by high-frequency SAM. Computer simulation algorithms based on MATLAB were built for simulating the images and contrast mechanisms. The mechanical properties of HeLa and MCF-7 cells were computed from measurements of the output signal amplitude as a function of distance from the focal plane of the acoustic lens, known as V(z). Algorithms for simulating V(z) responses involved calculation of the reflectance function and were created based on ray theory and wave theory. The second objective is to design transducer arrays for SAM. Theoretical simulations of high-frequency ultrasound array designs, based on the Field II program, were performed to enhance image resolution and volumetric imaging capabilities. Phased-array beam forming and dynamic apodization and focusing were employed in the simulations. The new transducer array design will advance the state of the art by improving the performance of SAM through electronic scanning and potentially providing 4-D images of the specimen.

  12. Building polyhedra by self-assembly: theory and experiment.

    PubMed

    Kaplan, Ryan; Klobušický, Joseph; Pandey, Shivendra; Gracias, David H; Menon, Govind

    2014-01-01

    We investigate the utility of a mathematical framework based on discrete geometry to model biological and synthetic self-assembly. Our primary biological example is the self-assembly of icosahedral viruses; our synthetic example is surface-tension-driven self-folding polyhedra. In both instances, the process of self-assembly is modeled by decomposing the polyhedron into a set of partially formed intermediate states. The set of all intermediates is called the configuration space, pathways of assembly are modeled as paths in the configuration space, and the kinetics and yield of assembly are modeled by rate equations, Markov chains, or cost functions on the configuration space. We review an interesting interplay between biological function and mathematical structure in viruses in light of this framework. We discuss in particular: (i) tiling theory as a coarse-grained description of all-atom models; (ii) the building game, a growth model for the formation of polyhedra; and (iii) the application of these models to the self-assembly of the bacteriophage MS2. We then use a similar framework to model self-folding polyhedra. We use a discrete folding algorithm to compute a configuration space that idealizes surface-tension-driven self-folding and analyze pathways of assembly and dominant intermediates. These computations are then compared with experimental observations of a self-folding dodecahedron with side length 300 μm. In both models, despite a combinatorial explosion in the size of the configuration space, a few pathways and intermediates dominate self-assembly. For self-folding polyhedra, the dominant intermediates have fewer degrees of freedom than comparable intermediates, and are thus more rigid. The concentration of assembly pathways on a few intermediates with distinguished geometric properties is biologically and physically important, and suggests deeper mathematical structure.
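    The kinetic picture described, with pathways as paths through a configuration space and kinetics given by Markov chains, can be illustrated on a toy chain. The states and transition probabilities below are invented for illustration and are not the paper's configuration space:

```python
import numpy as np

# Toy configuration space: state 0 = unassembled, states 1-2 = partial
# intermediates, state 3 = completed polyhedron (absorbing).
P = np.array([
    [0.2, 0.5, 0.3, 0.0],
    [0.0, 0.3, 0.3, 0.4],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 1.0],
])

def expected_steps_to_absorption(P, absorbing):
    """Expected number of steps to absorption from each transient state."""
    transient = [i for i in range(len(P)) if i not in absorbing]
    Q = P[np.ix_(transient, transient)]
    N = np.linalg.inv(np.eye(len(Q)) - Q)  # fundamental matrix
    return N.sum(axis=1)

steps = expected_steps_to_absorption(P, absorbing={3})
```

    The fundamental matrix of the absorbing chain gives expected assembly times; on a real configuration space, the same machinery also yields pathway probabilities and yields.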

  13. Simulation and estimation of gene number in a biological pathway using almost complete saturation mutagenesis screening of haploid mouse cells.

    PubMed

    Tokunaga, Masahiro; Kokubu, Chikara; Maeda, Yusuke; Sese, Jun; Horie, Kyoji; Sugimoto, Nakaba; Kinoshita, Taroh; Yusa, Kosuke; Takeda, Junji

    2014-11-24

    Genome-wide saturation mutagenesis and subsequent phenotype-driven screening have been central to a comprehensive understanding of complex biological processes in classical model organisms such as flies, nematodes, and plants. The degree of "saturation" (i.e., the fraction of possible target genes identified) has been shown to be a critical parameter in determining all relevant genes involved in a biological function, without prior knowledge of their products. In mammalian model systems, however, the relatively large scale and labor intensity of experiments have hampered the achievement of actual saturation mutagenesis, especially for recessive traits that require biallelic mutations to manifest detectable phenotypes. By exploiting the recently established haploid mouse embryonic stem cells (ESCs), we present an implementation of almost complete saturation mutagenesis in a mammalian system. The haploid ESCs were mutagenized with the chemical mutagen N-ethyl-N-nitrosourea (ENU) and processed for the screening of mutants defective in various steps of the glycosylphosphatidylinositol-anchor biosynthetic pathway. The resulting 114 independent mutant clones were characterized by a functional complementation assay and shown to carry defects in 20 of the 22 known genes essential for this well-characterized pathway. Ten mutants were further validated by whole-exome sequencing. The predominant generation of single-nucleotide substitutions by ENU resulted in a gene mutation rate proportional to the length of the coding sequence, which facilitated the experimental design of the saturation mutagenesis screening with the aid of computational simulation. Our study makes mammalian saturation mutagenesis a realistic proposition. Computational simulation, combined with a pilot mutagenesis experiment, could serve as a tool for estimating the number of genes essential for biological processes such as drug-target pathways when a positive selection of mutants is available.
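    A simulation of the kind described, estimating how thoroughly a pathway is saturated when each clone's hit probability is proportional to coding-sequence length, might look like the sketch below. The gene count and CDS lengths are illustrative stand-ins, not the study's data:

```python
import random

def simulate_saturation(cds_lengths, n_clones, trials=1000, seed=0):
    """Mean number of distinct pathway genes hit after n_clones mutant
    clones, with per-clone hit probability proportional to CDS length."""
    rng = random.Random(seed)
    genes = range(len(cds_lengths))
    total_hit = 0
    for _ in range(trials):
        seen = set()
        for _ in range(n_clones):
            seen.add(rng.choices(genes, weights=cds_lengths)[0])
        total_hit += len(seen)
    return total_hit / trials

# 22 hypothetical pathway genes with CDS lengths from 0.5 to ~2.9 kb
lengths = [500 + 113 * i for i in range(22)]
coverage = simulate_saturation(lengths, n_clones=114)
```

    Running such a simulation across a range of clone numbers shows how quickly coverage of short-CDS genes lags behind, which is the kind of design question the authors address computationally.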

  14. Biomolecular computing systems: principles, progress and potential.

    PubMed

    Benenson, Yaakov

    2012-06-12

    The task of information processing, or computation, can be performed by natural and man-made 'devices'. Man-made computers are made from silicon chips, whereas natural 'computers', such as the brain, use cells and molecules. Computation also occurs on a much smaller scale in regulatory and signalling pathways in individual cells and even within single biomolecules. Indeed, much of what we recognize as life results from the remarkable capacity of biological building blocks to compute in highly sophisticated ways. Rational design and engineering of biological computing systems can greatly enhance our ability to study and to control biological systems. Potential applications include tissue engineering and regeneration and medical treatments. This Review introduces key concepts and discusses recent progress that has been made in biomolecular computing.

  15. Rapid functional analysis of computationally complex rare human IRF6 gene variants using a novel zebrafish model.

    PubMed

    Li, Edward B; Truong, Dawn; Hallett, Shawn A; Mukherjee, Kusumika; Schutte, Brian C; Liao, Eric C

    2017-09-01

    Large-scale sequencing efforts have captured a rapidly growing catalogue of genetic variants. However, accurately establishing the pathogenicity of gene variants remains a central challenge in translating personal genomics information into clinical decisions. Interferon Regulatory Factor 6 (IRF6) gene variants are significant genetic contributors to orofacial clefts. Although approximately three hundred IRF6 gene variants have been documented, their effects on protein function remain difficult to interpret. Here, we demonstrate that the protein functions of human IRF6 missense gene variants can be rapidly assessed in detail by their abilities to rescue the irf6 -/- phenotype in zebrafish through variant mRNA microinjection at the one-cell stage. The results revealed that many missense variants previously predicted by traditional statistical and computational tools to be loss-of-function and pathogenic in fact retained partial or full protein function and rescued the zebrafish irf6 -/- periderm rupture phenotype. Through mRNA dosage titration and analysis of the Exome Aggregation Consortium (ExAC) database, IRF6 missense variants were grouped by their abilities to rescue at various dosages into three functional categories: wild-type function, reduced function, and complete loss of function. This sensitive and specific biological assay was able to address the nuanced functional significance of IRF6 missense gene variants and overcome many limitations faced by current statistical and computational tools in assigning variant protein function and pathogenicity. Furthermore, it unlocked the possibility of characterizing as-yet-undiscovered human IRF6 missense gene variants from orofacial cleft patients, and illustrated a generalizable functional genomics paradigm in personalized medicine.

  16. Identifying 5-methylcytosine sites in RNA sequence using composite encoding feature into Chou's PseKNC.

    PubMed

    Sabooh, M Fazli; Iqbal, Nadeem; Khan, Mukhtaj; Khan, Muslim; Maqbool, H F

    2018-05-01

    This study examines an accurate and efficient computational method for the identification of 5-methylcytosine sites in RNA. The occurrence of 5-methylcytosine (m5C) plays a vital role in a number of biological processes. For a better comprehension of these biological functions and mechanisms, it is necessary to recognize m5C sites in RNA precisely. Laboratory techniques and procedures are available to identify m5C sites in RNA, but they require considerable time and resources. This study develops a new computational method for extracting features from RNA sequences. In this method, the RNA sequence is first encoded via a composite feature vector; then, the minimum-redundancy-maximum-relevance algorithm is used to select discriminative features. Classification is performed with a support vector machine evaluated using the jackknife cross-validation test. The suggested method efficiently distinguishes m5C sites from non-m5C sites, achieving an accuracy of 93.33%, with a sensitivity of 90.0% and a specificity of 96.66%, on benchmark datasets. These results show that the proposed algorithm delivers significantly better identification performance than existing computational techniques. This study extends our knowledge of the occurrence sites of RNA modification, which paves the way for a better comprehension of its biological functions and mechanisms. Copyright © 2018 Elsevier Ltd. All rights reserved.
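    The jackknife (leave-one-out) evaluation protocol used above can be sketched on toy data. Here a simple nearest-centroid rule stands in for the paper's SVM so the example stays dependency-free, and the random features are stand-ins for composite-encoded sequences:

```python
import numpy as np

def jackknife_accuracy(X, y, classify):
    """Leave-one-out (jackknife) cross-validation accuracy."""
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i  # hold out sample i
        correct += classify(X[mask], y[mask], X[i]) == y[i]
    return correct / len(X)

def nearest_centroid(X_train, y_train, x):
    """Toy stand-in classifier: assign x to the nearer class centroid."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

# Two synthetic classes standing in for m5C and non-m5C feature vectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (30, 8)), rng.normal(1.5, 1.0, (30, 8))])
y = np.array([0] * 30 + [1] * 30)
acc = jackknife_accuracy(X, y, nearest_centroid)
```

    The jackknife is deterministic and uses every sample once as a test case, which is why it is a common benchmark protocol in this literature despite its cost.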

  17. Examining ion channel properties using free-energy methods.

    PubMed

    Domene, Carmen; Furini, Simone

    2009-01-01

    Recent advances in structural biology have revealed the architecture of a number of transmembrane channels, allowing these complex biological systems to be understood in atomistic detail. Computational simulations are a powerful tool by which the dynamic and energetic properties, and thereby the function, of these protein architectures can be investigated. The experimentally observable properties of a system are often determined more by energetics than by dynamics, and therefore understanding the underlying free energy (FE) of biophysical processes is of crucial importance. Critical to the accurate evaluation of FE values are the problems of obtaining adequate sampling of complex biological energy landscapes and of obtaining accurate representations of the potential energy of a system, the latter problem having been addressed through the development of molecular force fields. While these challenges are common to all FE methods, depending on the system under study and the questions being asked of it, one technique for FE calculation may be preferable to another, the choice of method and simulation protocol being crucial to achieving efficiency. Applied in a correct manner, FE calculations represent a predictive and affordable computational tool with which to make relevant contact with experiments. This chapter therefore aims to give an overview of the most widely implemented computational methods used to calculate the FE associated with particular biochemical or biophysical events, and to highlight their recent applications to ion channels. Copyright © 2009 Elsevier Inc. All rights reserved.

  18. A-DaGO-Fun: an adaptable Gene Ontology semantic similarity-based functional analysis tool.

    PubMed

    Mazandu, Gaston K; Chimusa, Emile R; Mbiyavanga, Mamana; Mulder, Nicola J

    2016-02-01

    Gene Ontology (GO) semantic similarity measures are being used for biological knowledge discovery based on GO annotations by integrating biological information contained in the GO structure into data analyses. To empower users to quickly compute, manipulate and explore these measures, we introduce A-DaGO-Fun (ADaptable Gene Ontology semantic similarity-based Functional analysis). It is a portable software package integrating all known GO information content-based semantic similarity measures and relevant biological applications associated with these measures. A-DaGO-Fun has the advantage not only of handling datasets from the current high-throughput genome-wide applications, but also of allowing users to choose the most relevant semantic similarity approach for their biological applications and to adapt a given module to their needs. A-DaGO-Fun is freely available to the research community at http://web.cbio.uct.ac.za/ITGOM/adagofun. It is implemented in Linux using Python under free software (GNU General Public Licence). Contact: gmazandu@cbio.uct.ac.za or Nicola.Mulder@uct.ac.za. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
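    As an illustration of the information-content family of GO semantic similarity measures that such tools integrate, a Resnik-style similarity on a toy DAG might look like this. The terms, hierarchy and cumulative annotation counts below are invented:

```python
import math

# Toy GO-like DAG (term -> parents) with cumulative annotation counts
parents = {"root": [], "b": ["root"], "c": ["root"], "d": ["b", "c"], "e": ["c"]}
counts = {"root": 10, "b": 4, "c": 6, "d": 2, "e": 3}

def ancestors(term):
    """The term itself plus all of its ancestors in the DAG."""
    out = {term}
    for p in parents[term]:
        out |= ancestors(p)
    return out

def ic(term):
    """Information content: -log of the term's annotation frequency."""
    return -math.log(counts[term] / counts["root"])

def resnik(t1, t2):
    """Resnik similarity: IC of the most informative common ancestor."""
    common = ancestors(t1) & ancestors(t2)
    return max(ic(t) for t in common)

sim = resnik("d", "e")
```

    Rarer common ancestors carry more information, so term pairs sharing a specific ancestor score higher than pairs meeting only near the root.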

  19. Integrated lung tissue mechanics one piece at a time: Computational modeling across the scales of biology.

    PubMed

    Burrowes, Kelly S; Iravani, Amin; Kang, Wendy

    2018-01-12

    The lung is a delicately balanced and highly integrated mechanical system. Lung tissue is continuously exposed to the environment via the air we breathe, making it susceptible to damage. As a consequence, respiratory diseases present a huge burden on society, and their prevalence continues to rise. Emergent lung function is produced not only by the sum of the functions of its individual components but also by the complex feedback and interactions occurring across the biological scales, from genes to proteins, cells, tissue and whole organ, and back again. Computational modeling provides the necessary framework for pulling apart and putting back together the pieces of the body and organ systems so that we can fully understand how they function in both health and disease. In this review, we discuss models of lung tissue mechanics spanning from the protein level (the extracellular matrix) through to the level of cells, tissue and whole organ, many of which have been developed in isolation. This is a vital step in the process, but to understand the emergent behavior of the lung, we must work towards integrating these component parts and accounting for feedback across the scales, such as mechanotransduction. These interactions will be key to unlocking the mechanisms occurring in disease, to seeking new pharmacological targets and to improving personalized healthcare. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Artificial neural networks for efficient clustering of conformational ensembles and their potential for medicinal chemistry.

    PubMed

    Pandini, Alessandro; Fraccalvieri, Domenico; Bonati, Laura

    2013-01-01

    The biological function of proteins is closely related to their molecular flexibility and dynamics: enzymatic activity, protein-protein interactions, ligand binding and allosteric regulation are important mechanisms involving protein motions. Computational approaches, such as Molecular Dynamics (MD) simulations, are now routinely used to study the intrinsic dynamics of target proteins as well as to complement molecular docking approaches. These methods have also successfully supported the process of rational design and discovery of new drugs. Identification of functionally relevant conformations is a key step in these studies, and is generally done by cluster analysis of the ensemble of structures in the MD trajectory. Recently, Artificial Neural Network (ANN) approaches, in particular methods based on Self-Organising Maps (SOMs), have been reported to perform more accurately and to provide more consistent results than traditional clustering algorithms in various data-mining problems. In the specific case of conformational analysis, SOMs have been successfully used to compare multiple ensembles of protein conformations, demonstrating their potential to efficiently detect the dynamic signatures central to biological function. Moreover, examples of the use of SOMs to address problems relevant to other stages of the drug-design process, including the clustering of docking poses, have been reported. In this contribution we review recent applications of ANN algorithms to the analysis of conformational and structural ensembles, and we discuss their potential in computer-based approaches for medicinal chemistry.
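    A minimal SOM of the kind applied to conformational ensembles can be sketched as follows. This is a toy implementation under assumed grid size and learning schedules, not any published method's code, and the synthetic "conformations" are random stand-ins for MD feature vectors:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Train a minimal Self-Organising Map on a rectangular grid."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3  # shrinking neighborhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            nh = np.exp(-dist2 / (2 * sigma ** 2))    # neighborhood kernel
            weights += lr * nh[:, None] * (x - weights)
    return weights

def map_frames(data, weights):
    """Assign each conformation (feature vector) to its best-matching unit."""
    return np.argmin(((data[:, None, :] - weights[None, :, :]) ** 2).sum(-1), axis=1)

# Two well-separated synthetic "conformational states" in a 3D feature space
rng = np.random.default_rng(1)
frames = np.vstack([rng.normal(0.0, 0.3, (20, 3)), rng.normal(5.0, 0.3, (20, 3))])
som = train_som(frames)
units = map_frames(frames, som)
```

    After training, the occupancy of map units summarizes the ensemble, and comparing the unit occupancies of two trajectories gives a compact way to contrast their conformational sampling.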

  1. The relative vertex clustering value - a new criterion for the fast discovery of functional modules in protein interaction networks

    PubMed Central

    2015-01-01

    Background Cellular processes are known to be modular and are realized by groups of proteins implicated in common biological functions. Such groups of proteins are called functional modules, and many community detection methods have been devised for their discovery from protein interaction network (PIN) data. In current agglomerative clustering approaches, vertices with very few neighbors are often classified as separate clusters, which does not make sense biologically. Also, a major limitation of agglomerative techniques is that their computational efficiency does not scale well to large PINs. Finally, PIN data obtained from large-scale experiments generally contain many false positives, and this makes it hard for agglomerative clustering methods to find the correct clusters, since they are known to be sensitive to noisy data. Results We propose a local similarity premetric, the relative vertex clustering value, as a new criterion for deciding when a node can be added to a given node's cluster, which addresses the above three issues. Based on this criterion, we introduce a novel and very fast agglomerative clustering technique, FAC-PIN, for discovering functional modules and protein complexes from PIN data. Conclusions Our proposed FAC-PIN algorithm is applied to nine PIN datasets from eight different species, including the yeast PIN, and the identified functional modules are validated using Gene Ontology (GO) annotations from DAVID Bioinformatics Resources. Identified protein complexes are also validated using experimentally verified complexes. Computational results show that FAC-PIN can discover functional modules or protein complexes from PINs more accurately and more efficiently than HC-PIN and CNM, the current state-of-the-art approaches for clustering PINs in an agglomerative manner. PMID:25734691
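    FAC-PIN's actual premetric is defined in the paper; as a hedged illustration of the general idea (grow a cluster only while a frontier node's local connectivity to it is high enough), here is a sketch with a stand-in criterion: the fraction of a candidate's neighbours already inside the cluster. The threshold and toy network are invented:

```python
def local_clustering_value(node, cluster, adj):
    """Stand-in for the paper's relative vertex clustering value: the
    fraction of `node`'s neighbours that already lie inside `cluster`."""
    nbrs = adj[node]
    if not nbrs:
        return 0.0
    return len(nbrs & cluster) / len(nbrs)

def grow_cluster(seed_edge, adj, threshold=0.5):
    """Greedy agglomerative growth from a seed edge: repeatedly absorb
    the best-scoring frontier node; stop when no candidate qualifies."""
    cluster = set(seed_edge)
    while True:
        frontier = set().union(*(adj[n] for n in cluster)) - cluster
        best, best_val = None, 0.0
        for cand in sorted(frontier):
            v = local_clustering_value(cand, cluster, adj)
            if v > best_val:
                best, best_val = cand, v
        if best is None or best_val < threshold:
            return cluster
        cluster.add(best)

# Toy PIN: two 4-cliques bridged by the single edge d-e.
adj = {"a": {"b", "c", "d"}, "b": {"a", "c", "d"}, "c": {"a", "b", "d"},
       "d": {"a", "b", "c", "e"}, "e": {"d", "f", "g", "h"},
       "f": {"e", "g", "h"}, "g": {"e", "f", "h"}, "h": {"e", "f", "g"}}
module = grow_cluster(("a", "b"), adj)
```

    Growth stops at the bridge: node e has only one of its four neighbours inside the left clique, so it scores 0.25 and the module stays at {a, b, c, d}.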

  2. Ten quick tips for machine learning in computational biology.

    PubMed

    Chicco, Davide

    2017-01-01

    Machine learning has become a pivotal tool for many projects in computational biology, bioinformatics, and health informatics. Nevertheless, beginners and biomedical researchers often do not have enough experience to run a data mining project effectively, and therefore can follow incorrect practices that may lead to common mistakes or over-optimistic results. With this review, we present ten quick tips for taking advantage of machine learning in any computational biology context, avoiding some common errors that we have observed hundreds of times across multiple bioinformatics projects. We believe our ten suggestions can strongly help any machine learning practitioner carry out a successful project in computational biology and related sciences.
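    One of the common errors alluded to above, over-optimistic results from evaluating on the training data, can be demonstrated in a few lines. The 1-NN classifier and the synthetic 1-D dataset are illustrative, not from the paper:

```python
import random

def one_nn_predict(train, x):
    """Classify x by the label of its single nearest training example."""
    nearest = min(train, key=lambda p: (p[0] - x) ** 2)
    return nearest[1]

def accuracy(train, test):
    return sum(one_nn_predict(train, x) == y for x, y in test) / len(test)

# Hypothetical 1-D dataset: class 0 centred at 0.0, class 1 at 1.0,
# with enough spread that the classes overlap (the problem is not trivial).
rng = random.Random(42)
data = ([(rng.gauss(0.0, 0.6), 0) for _ in range(100)] +
        [(rng.gauss(1.0, 0.6), 1) for _ in range(100)])
rng.shuffle(data)
train, test = data[:150], data[150:]

train_acc = accuracy(train, train)  # evaluated on its own training set
test_acc = accuracy(train, test)    # evaluated on held-out data
```

    1-NN scores a perfect 100% on its own training set (every point is its own nearest neighbour), which says nothing about generalization; only the held-out estimate is meaningful.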

  3. Cross-Linking and Mass Spectrometry Methodologies to Facilitate Structural Biology: Finding a Path through the Maze

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merkley, Eric D.; Cort, John R.; Adkins, Joshua N.

    2013-09-01

    Multiprotein complexes, rather than individual proteins, make up a large part of the biological macromolecular machinery of a cell. Understanding the structure and organization of these complexes is critical to understanding cellular function. Chemical cross-linking coupled with mass spectrometry is emerging as a complementary technique to traditional structural biology methods and can provide low-resolution structural information for a multitude of purposes, such as distance constraints in computational modeling of protein complexes. In this review, we discuss the experimental considerations for successful application of chemical cross-linking-mass spectrometry in biological studies and highlight three examples of such studies from the recent literature. These examples (as well as many others) illustrate the utility of a chemical cross-linking-mass spectrometry approach in facilitating structural analysis of large and challenging complexes.

  4. StrateGene: object-oriented programming in molecular biology.

    PubMed

    Carhart, R E; Cash, H D; Moore, J F

    1988-03-01

    This paper describes some of the ways that object-oriented programming methodologies have been used to represent and manipulate biological information in a working application. When running on a Xerox 1100 series computer, StrateGene functions as a genetic engineering workstation for the management of information about cloning experiments. It represents biological molecules, enzymes, fragments, and methods as classes, subclasses, and members in a hierarchy of objects. These objects may have various attributes, which themselves can be defined and classified. The attributes and their values can be passed from the classes of objects down to the subclasses and members. The user can modify the objects and their attributes while using them. New knowledge and changes to the system can be incorporated relatively easily. The operations on the biological objects are associated with the objects themselves. This makes it easier to invoke them correctly and allows generic operations to be customized for the particular object.
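    The object-oriented ideas described here translate naturally into modern languages. A minimal Python sketch, with invented molecule and enzyme classes (StrateGene itself ran in the Xerox Lisp environment), shows attributes passed from class to subclass and a generic operation customized by the object it acts on:

```python
class Molecule:
    """Base class: attributes defined here are inherited by subclasses
    and members, mirroring how StrateGene passes attributes down the
    object hierarchy."""
    topology = "linear"

    def __init__(self, name, sequence):
        self.name = name
        self.sequence = sequence

    def length(self):                # generic operation on any molecule
        return len(self.sequence)

class Plasmid(Molecule):
    topology = "circular"            # subclass overrides the inherited value

class Enzyme:
    def __init__(self, name, site):
        self.name = name
        self.site = site

    def cut(self, molecule):
        """Generic 'cut' whose outcome depends on the object it acts on:
        one cut of a circular plasmid yields one linear fragment, one cut
        of a linear molecule yields two. (Simplified: split() discards
        the recognition site rather than cutting within it.)"""
        pieces = molecule.sequence.split(self.site)
        if molecule.topology == "circular":
            pieces = [pieces[-1] + pieces[0]] + pieces[1:-1]
        return pieces

ecori = Enzyme("EcoRI", "GAATTC")
plasmid = Plasmid("toy_plasmid", "GAATTCAAAA")
fragment = Molecule("toy_fragment", "AAGAATTCTT")
```

    The same `cut` operation invoked on different objects produces topology-appropriate results, which is the "generic operations customized for the particular object" idea from the abstract.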

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, David P.; D’Eustachio, Peter; Berardini, Tanya Z.

    The concept of a biological pathway, an ordered sequence of molecular transformations, is used to collect and represent molecular knowledge for a broad span of organismal biology. Representations of biomedical pathways typically are rich but idiosyncratic presentations of organized knowledge about individual pathways. Meanwhile, biomedical ontologies and associated annotation files are powerful tools that organize molecular information in a logically rigorous form to support computational analysis. The Gene Ontology (GO), representing Molecular Functions, Biological Processes and Cellular Components, incorporates many aspects of biological pathways within its ontological representations. Here we present a methodology for extending and refining the classes in the GO for more comprehensive, consistent and integrated representation of pathways, leveraging knowledge embedded in current pathway representations such as those in the Reactome Knowledgebase and MetaCyc. With carbohydrate metabolic pathways as a use case, we discuss how our representation supports the integration of variant pathway classes into a unified ontological structure that can be used for data comparison and analysis.

  6. Recognizing and engineering digital-like logic gates and switches in gene regulatory networks.

    PubMed

    Bradley, Robert W; Buck, Martin; Wang, Baojun

    2016-10-01

    A central aim of synthetic biology is to build organisms that can perform useful activities in response to specified conditions. The digital computing paradigm which has proved so successful in electrical engineering is being mapped to synthetic biological systems to allow them to make such decisions. However, stochastic molecular processes have graded input-output functions; thus, bioengineers must select those with desirable characteristics and refine their transfer functions to build logic gates with digital-like switching behaviour. Recent efforts in genome mining and the development of programmable RNA-based switches, especially CRISPRi, have greatly increased the number of parts available to synthetic biologists. Improvements to the digital characteristics of these parts are required to enable robust, predictable design of deeply layered logic circuits. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
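    The graded input-output behaviour mentioned above is commonly modelled with a Hill function. A sketch of a hypothetical repressor-based NOT gate (all parameter values invented) shows how higher cooperativity n yields steeper, more digital-like switching:

```python
def repressor_output(input_level, K=1.0, n=4):
    """Hill-type transfer function of a hypothetical NOT gate: a
    repressor at `input_level` shuts down expression of the output gene.
    K is the half-maximal repressor level; n is the Hill coefficient."""
    return 1.0 / (1.0 + (input_level / K) ** n)

low = repressor_output(0.1)    # logical 0 in -> output close to 1
high = repressor_output(10.0)  # logical 1 in -> output close to 0

# Near the threshold, a cooperative part (n=4) is already committed to a
# state, while a non-cooperative one (n=1) gives an ambiguous, graded output.
steep = repressor_output(0.5, n=4)
graded = repressor_output(0.5, n=1)
```

    Refining a part's transfer function, in this picture, means tuning K and n so the gate's output levels are cleanly separated when circuits are layered.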

  7. Single nucleotide variations: Biological impact and theoretical interpretation

    PubMed Central

    Katsonis, Panagiotis; Koire, Amanda; Wilson, Stephen Joseph; Hsu, Teng-Kuei; Lua, Rhonald C; Wilkins, Angela Dawn; Lichtarge, Olivier

    2014-01-01

    Genome-wide association studies (GWAS) and whole-exome sequencing (WES) generate massive amounts of genomic variant information, and a major challenge is to identify which variations drive disease or contribute to phenotypic traits. Because the majority of known disease-causing mutations are exonic non-synonymous single nucleotide variations (nsSNVs), most studies focus on whether these nsSNVs affect protein function. Computational studies show that the impact of nsSNVs on protein function reflects sequence homology and structural information and predict the impact through statistical methods, machine learning techniques, or models of protein evolution. Here, we review impact prediction methods and discuss their underlying principles, their advantages and limitations, and how they compare to and complement one another. Finally, we present current applications and future directions for these methods in biological research and medical genetics. PMID:25234433
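    As a toy version of the homology-based principle described here (substitutions at conserved positions are more likely to be damaging), one can score alignment columns by Shannon entropy. The scoring scheme and the miniature alignment below are illustrative, not any published predictor:

```python
import math

def column_entropy(column):
    """Shannon entropy of one alignment column: 0 for perfect conservation."""
    freqs = {aa: column.count(aa) / len(column) for aa in set(column)}
    return -sum(p * math.log2(p) for p in freqs.values())

def variant_severity(alignment, position):
    """Hypothetical homology-based score in [0, 1]: 1 means the column is
    fully conserved, so a substitution there is predicted most damaging."""
    col = [seq[position] for seq in alignment]
    max_h = math.log2(len(col))     # entropy if every residue differed
    return 1.0 - column_entropy(col) / max_h

# Four aligned homologs of a hypothetical protein segment.
alignment = ["MKVL", "MKIL", "MKLL", "MKVI"]
conserved = variant_severity(alignment, 0)  # invariant 'M' column
variable = variant_severity(alignment, 2)   # mixed V/I/L column
```

    Real predictors combine such conservation signals with structural features and evolutionary models, as the review discusses, but the ranking principle is the same.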

  8. Biology Notes.

    ERIC Educational Resources Information Center

    School Science Review, 1982

    1982-01-01

    Describes laboratory procedures, demonstrations, and classroom activities/materials, including use of dwarf cichlids (fishes) in secondary school biology, teaching edge effects on stomatal diffusion, computer program on effects of selection on gene frequencies, biological oxidation/reduction reactions, short cuts with Drosophila, computer program…

  9. Shaping Small Bioactive Molecules to Untangle Their Biological Function: A Focus on Fluorescent Plant Hormones.

    PubMed

    Lace, Beatrice; Prandi, Cristina

    2016-08-01

    Modern biology overlaps with chemistry in explaining the structure and function of all cellular processes at the molecular level. Plant hormone research is perfectly located at the interface between these two disciplines, taking advantage of synthetic and computational chemistry as a tool to decipher the complex biological mechanisms regulating the action of plant hormones. These small signaling molecules regulate a wide range of developmental processes, adapting plant growth to ever changing environmental conditions. The synthesis of small bioactive molecules mimicking the activity of endogenous hormones allows us to unveil many molecular features of their functioning, giving rise to a new field, plant chemical biology. In this framework, fluorescence labeling of plant hormones is emerging as a successful strategy to track the fate of these challenging molecules inside living organisms. Thanks to the increasing availability of new fluorescent probes as well as advanced and innovative imaging technologies, we are now in a position to investigate many of the dynamic mechanisms through which plant hormones exert their action. Such a deep and detailed comprehension is mandatory for the development of new green technologies for practical applications. In this review, we summarize the results obtained so far concerning the fluorescent labeling of plant hormones, highlighting the basic steps leading to the design and synthesis of these compelling molecular tools and their applications. Copyright © 2016 The Author. Published by Elsevier Inc. All rights reserved.

  10. EPEPT: A web service for enhanced P-value estimation in permutation tests

    PubMed Central

    2011-01-01

    Background In computational biology, permutation tests have become a widely used tool to assess the statistical significance of an event under investigation. However, the common way of computing the P-value, which expresses the statistical significance, requires a very large number of permutations when small (and thus interesting) P-values are to be accurately estimated. This is computationally expensive and often infeasible. Recently, we proposed an alternative estimator, which requires far fewer permutations compared to the standard empirical approach while still reliably estimating small P-values [1]. Results The proposed P-value estimator has been enriched with additional functionalities and is made available to the general community through a public website and web service, called EPEPT. This means that the EPEPT routines can be accessed not only via a website, but also programmatically using any programming language that can interact with the web. Examples of web service clients in multiple programming languages can be downloaded. Additionally, EPEPT accepts data of various common experiment types used in computational biology. For these experiment types EPEPT first computes the permutation values and then performs the P-value estimation. Finally, the source code of EPEPT can be downloaded. Conclusions Different types of users, such as biologists, bioinformaticians and software engineers, can use the method in an appropriate and simple way. Availability http://informatics.systemsbiology.net/EPEPT/ PMID:22024252
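    The standard empirical estimator that EPEPT improves upon can be sketched as follows. EPEPT itself fits a tail approximation to estimate small P-values from far fewer permutations; this sketch is only the baseline approach, with an invented two-group example:

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

def empirical_pvalue(group_a, group_b, n_perm=1000, seed=7):
    """Plain empirical permutation P-value for a difference in means:
    shuffle the group labels and count how often the permuted statistic
    is at least as extreme as the observed one."""
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_a = pooled[:len(group_a)]
        perm_b = pooled[len(group_a):]
        if abs(mean(perm_a) - mean(perm_b)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one correction avoids P = 0

# Hypothetical expression measurements for two clearly different groups.
expressed = [5.1, 5.2, 5.0, 5.3, 4.9, 5.1]
control = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8]
p_value = empirical_pvalue(expressed, control)
```

    The limitation motivating EPEPT is visible here: with N permutations this estimator can never report a P-value below 1/(N+1), so very small P-values require prohibitively many permutations.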

  11. Information technology developments within the national biological information infrastructure

    USGS Publications Warehouse

    Cotter, G.; Frame, M.T.

    2000-01-01

    Looking out an office window or exploring a community park, one can easily see the tremendous challenges that biological information presents to the computer science community. Biological information varies in format and content depending on whether it pertains to a particular species (i.e. Brown Tree Snake) or to a specific ecosystem, which often includes multiple species, land-use characteristics, and geospatially referenced information. The complexity and uniqueness of each individual species or ecosystem do not easily lend themselves to today's computer science tools and applications. To address the challenges that the biological enterprise presents, the National Biological Information Infrastructure (NBII) (http://www.nbii.gov) was established in 1993. The NBII is designed to address these issues on a national scale within the United States, and through international partnerships abroad. This paper discusses current computer science efforts within the National Biological Information Infrastructure Program and future computer science research endeavors that are needed to address the ever-growing issues related to our Nation's biological concerns.

  12. Nicholas Metropolis Award for Outstanding Doctoral Thesis Work in Computational Physics Talk: Understanding Nano-scale Electronic Systems via Large-scale Computation

    NASA Astrophysics Data System (ADS)

    Cao, Chao

    2009-03-01

    Nano-scale physical phenomena and processes, especially those in electronics, have drawn great attention in the past decade. Experiments have shown that electronic and transport properties of functionalized carbon nanotubes are sensitive to adsorption of gas molecules such as H2, NO2, and NH3. Similar measurements have also been performed to study adsorption of proteins on other semiconductor nano-wires. These experiments suggest that nano-scale systems can be useful for making future chemical and biological sensors. Aiming to understand the physical mechanisms underlying and governing property changes at the nano-scale, we start off by investigating, via a first-principles method, the electronic structure of Pd-CNT before and after hydrogen adsorption, and continue with coherent electronic transport using non-equilibrium Green's function techniques combined with density functional theory. Once fully analyzed, our results can be used to interpret and understand experimental data, although a few difficult issues remain to be addressed. Finally, we discuss a newly developed multi-scale computing architecture, OPAL, that coordinates simultaneous execution of multiple codes. Inspired by the capabilities of this computing framework, we present a scenario of future modeling and simulation of multi-scale, multi-physical processes.

  13. SU-E-T-549: Modeling Relative Biological Effectiveness of Protons for Radiation Induced Brain Necrosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirkovic, D; Peeler, C; Grosshans, D

    Purpose: To develop a model of the relative biological effectiveness (RBE) of protons as a function of dose and linear energy transfer (LET) for induction of brain necrosis using clinical data. Methods: In this study, treatment planning information was exported from a clinical treatment planning system (TPS) and used to construct a detailed Monte Carlo model of the patient and the beam delivery system. The physical proton dose and LET were computed in each voxel of the patient volume using Monte Carlo particle transport. A follow-up magnetic resonance imaging (MRI) study registered to the treatment planning CT was used to determine the region of the necrosis in the brain volume. Both the whole brain and the necrosis volumes were segmented from the computed tomography (CT) dataset using the contours drawn by a physician, and the corresponding voxels were binned with respect to dose and LET. The brain necrosis probability was computed as a function of dose and LET by dividing the total volume of all necrosis voxels with a given dose and LET by the corresponding total brain volume, resulting in a set of NTCP-like curves (probability as a function of dose, parameterized by LET). Results: The resulting model shows dependence on both dose and LET, indicating the weakness of the constant RBE model for describing brain toxicity. To the best of our knowledge, the constant RBE model is currently used in all clinical applications, which may result in an increased rate of brain toxicities in patients treated with protons. Conclusion: Further studies are needed to develop more accurate brain toxicity models for patients treated with protons and other heavy ions.
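    The voxel-binning step described in the Methods can be sketched as follows; the bin edges and toy voxel triples are invented for illustration:

```python
from collections import defaultdict

def bin_index(value, edges):
    """Index of the half-open bin [edges[i], edges[i+1]) containing value."""
    for i in range(len(edges) - 1):
        if edges[i] <= value < edges[i + 1]:
            return i
    return len(edges) - 2            # clamp values at or above the last edge

def necrosis_probability(voxels, dose_edges, let_edges):
    """Bin brain voxels by (dose, LET) and estimate, per bin, the fraction
    lying inside the necrosis contour, giving an NTCP-like table.
    `voxels` is a list of (dose_Gy, let, is_necrotic) triples."""
    total = defaultdict(int)
    necrotic = defaultdict(int)
    for dose, let, is_nec in voxels:
        key = (bin_index(dose, dose_edges), bin_index(let, let_edges))
        total[key] += 1
        necrotic[key] += is_nec
    return {k: necrotic[k] / total[k] for k in total}

# Hypothetical voxels: low-dose/low-LET region intact, high-dose/high-LET
# region partly necrotic.
voxels = [(10.0, 1.0, 0), (12.0, 1.5, 0),
          (55.0, 8.0, 1), (54.0, 7.5, 1), (56.0, 8.5, 0)]
table = necrosis_probability(voxels, dose_edges=[0, 30, 60],
                             let_edges=[0, 5, 10])
```

    Slicing the resulting table at fixed LET bins yields the probability-versus-dose curves parameterized by LET that the abstract describes.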

  14. Aligning the unalignable: bacteriophage whole genome alignments.

    PubMed

    Bérard, Sèverine; Chateau, Annie; Pompidor, Nicolas; Guertin, Paul; Bergeron, Anne; Swenson, Krister M

    2016-01-13

    In recent years, many studies focused on the description and comparison of large sets of related bacteriophage genomes. Due to the peculiar mosaic structure of these genomes, few informative approaches for comparing whole genomes exist: dot plots diagrams give a mostly qualitative assessment of the similarity/dissimilarity between two or more genomes, and clustering techniques are used to classify genomes. Multiple alignments are conspicuously absent from this scene. Indeed, whole genome aligners interpret lack of similarity between sequences as an indication of rearrangements, insertions, or losses. This behavior makes them ill-prepared to align bacteriophage genomes, where even closely related strains can accomplish the same biological function with highly dissimilar sequences. In this paper, we propose a multiple alignment strategy that exploits functional collinearity shared by related strains of bacteriophages, and uses partial orders to capture mosaicism of sets of genomes. As classical alignments do, the computed alignments can be used to predict that genes have the same biological function, even in the absence of detectable similarity. The Alpha aligner implements these ideas in visual interactive displays, and is used to compute several examples of alignments of Staphylococcus aureus and Mycobacterium bacteriophages, involving up to 29 genomes. Using these datasets, we prove that Alpha alignments are at least as good as those computed by standard aligners. Comparison with the progressive Mauve aligner - which implements a partial order strategy, but whose alignments are linearized - shows a greatly improved interactive graphic display, while avoiding misalignments. Multiple alignments of whole bacteriophage genomes work, and will become an important conceptual and visual tool in comparative genomics of sets of related strains. 
A python implementation of Alpha, along with installation instructions for Ubuntu and OSX, is available on bitbucket (https://bitbucket.org/thekswenson/alpha).
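    Alpha's partial-order machinery is beyond a short sketch, but the underlying notion of functional collinearity (genes appearing in the same order and performing the same roles) can be illustrated with a longest-common-subsequence backbone over genomes viewed as ordered lists of functional labels. This is a crude proxy, not the Alpha algorithm, and the gene labels are invented:

```python
def collinear_backbone(genome_a, genome_b):
    """Longest common subsequence of two genomes treated as ordered lists
    of functional labels: the shared collinear backbone around which
    mosaic, non-homologous segments are interleaved."""
    n, m = len(genome_a), len(genome_b)
    # dp[i][j] = LCS length of genome_a[:i] and genome_b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            if genome_a[i] == genome_b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    # Trace back to recover one optimal backbone.
    out, i, j = [], n, m
    while i and j:
        if genome_a[i - 1] == genome_b[j - 1]:
            out.append(genome_a[i - 1])
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

# Two hypothetical phages sharing gene order except for one swapped module.
phage_a = ["terminase", "portal", "capsid", "tail_fiber", "lysin"]
phage_b = ["terminase", "portal", "integrase", "tail_fiber", "lysin"]
backbone = collinear_backbone(phage_a, phage_b)
```

    Where this linear backbone forces a single ordering, Alpha's partial orders can instead represent alternative, mutually exclusive gene modules side by side.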

  15. ISOL@: an Italian SOLAnaceae genomics resource.

    PubMed

    Chiusano, Maria Luisa; D'Agostino, Nunzio; Traini, Alessandra; Licciardello, Concetta; Raimondo, Enrico; Aversano, Mario; Frusciante, Luigi; Monti, Luigi

    2008-03-26

    Present-day '-omics' technologies produce overwhelming amounts of data which include genome sequences, information on gene expression (transcripts and proteins) and on cell metabolic status. These data represent multiple aspects of a biological system and need to be investigated as a whole to shed light on the mechanisms which underpin the system functionality. The gathering and convergence of data generated by high-throughput technologies, the effective integration of different data-sources and the analysis of the information content based on comparative approaches are key methods for meaningful biological interpretations. In the frame of the International Solanaceae Genome Project, we propose here ISOLA, an Italian SOLAnaceae genomics resource. ISOLA (available at http://biosrv.cab.unina.it/isola) represents a trial platform and is conceived as a multi-level computational environment. ISOLA currently consists of two main levels: the genome and the expression level. The cornerstone of the genome level is represented by the Solanum lycopersicum genome draft sequences generated by the International Tomato Genome Sequencing Consortium. The basic element of the expression level, instead, is the transcriptome information from different Solanaceae species, mainly in the form of species-specific comprehensive collections of Expressed Sequence Tags (ESTs). The cross-talk between the genome and the expression levels is based on data-source sharing and on tools that enhance data quality, extract information content from the levels' underlying parts and produce value-added biological knowledge. ISOLA is the result of a bioinformatics effort that addresses the challenges of the post-genomics era. It is designed to exploit '-omics' data through effective integration to acquire biological knowledge and to approach a systems biology view. 
Beyond providing experimental biologists with a preliminary annotation of the tomato genome, this effort aims to produce a trial computational environment where different aspects and details are maintained as they are relevant for the analysis of the organization, the functionality and the evolution of the Solanaceae family.

  16. Inferring hidden causal relations between pathway members using reduced Google matrix of directed biological networks

    PubMed Central

    2018-01-01

    Signaling pathways represent parts of the global biological molecular network which connects them into a seamless whole through complex direct and indirect (hidden) crosstalk whose structure can change during development or in pathological conditions. We suggest a novel methodology, called Googlomics, for the structural analysis of directed biological networks using spectral analysis of their Google matrices, using parallels with quantum scattering theory, developed for nuclear and mesoscopic physics and quantum chaos. We introduce analytical “reduced Google matrix” method for the analysis of biological network structure. The method allows inferring hidden causal relations between the members of a signaling pathway or a functionally related group of genes. We investigate how the structure of hidden causal relations can be reprogrammed as a result of changes in the transcriptional network layer during cancerogenesis. The suggested Googlomics approach rigorously characterizes complex systemic changes in the wiring of large causal biological networks in a computationally efficient way. PMID:29370181
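    The Google matrix at the heart of the Googlomics approach can be built for a toy directed network as follows. The damping factor 0.85 and the three-node network are illustrative, and the paper's reduced Google matrix analysis goes well beyond the plain PageRank computed here:

```python
def google_matrix(adj, alpha=0.85):
    """Column-stochastic Google matrix G = alpha*S + (1-alpha)/N for a
    directed network given as {node: [targets]}; dangling nodes spread
    their weight uniformly over all nodes."""
    nodes = sorted(adj)
    n = len(nodes)
    idx = {v: i for i, v in enumerate(nodes)}
    G = [[(1 - alpha) / n] * n for _ in range(n)]
    for src, targets in adj.items():
        j = idx[src]
        if targets:
            for t in targets:
                G[idx[t]][j] += alpha / len(targets)
        else:
            for i in range(n):
                G[i][j] += alpha / n
    return nodes, G

def pagerank(G, iters=100):
    """Leading eigenvector of G by power iteration (columns sum to 1,
    so the probability vector stays normalised)."""
    n = len(G)
    p = [1.0 / n] * n
    for _ in range(iters):
        p = [sum(G[i][j] * p[j] for j in range(n)) for i in range(n)]
    return p

# Toy directed "pathway": a -> b -> c, with c feeding back to a and b.
network = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
nodes, G = google_matrix(network)
ranks = pagerank(G)
```

    The reduced Google matrix method then restricts attention to a chosen subset of nodes (e.g. one pathway) while exactly accounting for all indirect routes through the rest of the network.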

  17. The biophysical basis of Benveniste experiments: Entropy, structure, and information in water

    NASA Astrophysics Data System (ADS)

    Widom, Allan; Srivastava, Yogendra; Valenzi, Vincenzo

    Benveniste had observed that highly dilute (and even in the absence of physical molecules) biological agents still triggered relevant biological systems. Some of these experiments were reproduced in three other laboratories that cosigned the article (Davenas et al., Nature 1988, 333, 816). Further work [(Medical Hypotheses 2000, 54, 33), (Rivista di Biologia/Biology Forum 97, 2004, 169)] showed that molecular activity in more than 50 biochemical systems, and even in bacteria, could be induced by electromagnetic signals transferred through water solutes. The sources of the electromagnetic signals were recordings of specific biological activities. These results suggest that electromagnetically transmitted biochemical information can be stored in the electric dipole moments of water, in close analogy to the manner in which magnetic moments store information on a computer disk. The electromagnetic transmission would enable in vivo transmission of specific molecular information between two functional biomolecules. In the present work, the physical nature of such biological information storage and retrieval in ordered quantum electromagnetic domains of water will be discussed.

  18. Imaging of the interaction of low frequency electric fields with biological tissues by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Peña, Adrian F.; Devine, Jack; Doronin, Alexander; Meglinski, Igor

    2014-03-01

    We report the use of conventional Optical Coherence Tomography (OCT) for visualization of the propagation of low frequency electric fields in soft biological tissues ex vivo. To increase the overall quality of the experimental images, an adaptive Wiener filtering technique has been employed. Fourier domain correlation has been subsequently applied to enhance the spatial resolution of images of biological tissues influenced by the low frequency electric field. Image processing has been performed on Graphics Processing Units (GPUs) utilizing the Compute Unified Device Architecture (CUDA) framework in the frequency domain. The results show that variation in the voltage and frequency of the applied electric field relates exponentially to the magnitude of its influence on biological tissue. The magnitude of influence is roughly twice as large for fresh tissue samples as for non-fresh ones. The obtained results suggest that OCT can be used for observation and quantitative evaluation of the electro-kinetic changes in biological tissues under different physiological conditions and functional electrical stimulation, and potentially can be used non-invasively for food quality control.

  19. An introduction to MATLAB.

    PubMed

    Sobie, Eric A

    2011-09-13

    This two-part lecture introduces students to the scientific computing language MATLAB. Prior computer programming experience is not required. The lectures present basic concepts of computer programming logic that tend to cause difficulties for beginners in addition to concepts that relate specifically to the MATLAB language syntax. The lectures begin with a discussion of vectors, matrices, and arrays. Because many types of biological data, such as fluorescence images and DNA microarrays, are stored as two-dimensional objects, processing these data is a form of array manipulation, and MATLAB is especially adept at handling such array objects. The students are introduced to basic commands in MATLAB, as well as built-in functions that provide useful shortcuts. The second lecture focuses on the differences between MATLAB scripts and MATLAB functions and describes when one method of programming organization might be preferable to the other. The principles are illustrated through the analysis of experimental data, specifically measurements of intracellular calcium concentration in live cells obtained using confocal microscopy.

  20. An Introduction to MATLAB

    PubMed Central

    Sobie, Eric A.

    2014-01-01

    This two-part lecture introduces students to the scientific computing language MATLAB. Prior computer programming experience is not required. The lectures present basic concepts of computer programming logic that tend to cause difficulties for beginners in addition to concepts that relate specifically to the MATLAB language syntax. The lectures begin with a discussion of vectors, matrices, and arrays. Because many types of biological data, such as fluorescence images and DNA microarrays, are stored as two-dimensional objects, processing these data is a form of array manipulation, and MATLAB is especially adept at handling such array objects. The students are introduced to basic commands in MATLAB, as well as built-in functions that provide useful shortcuts. The second lecture focuses on the differences between MATLAB scripts and MATLAB functions and describes when one method of programming organization might be preferable to the other. The principles are illustrated through the analysis of experimental data, specifically measurements of intracellular calcium concentration in live cells obtained using confocal microscopy. PMID:21934110
