RosettaRemodel: A Generalized Framework for Flexible Backbone Protein Design
Huang, Po-Ssu; Ban, Yih-En Andrew; Richter, Florian; Andre, Ingemar; Vernon, Robert; Schief, William R.; Baker, David
2011-01-01
We describe RosettaRemodel, a generalized framework for flexible protein design that provides a versatile and convenient interface to the Rosetta modeling suite. RosettaRemodel employs a unified interface, called a blueprint, which allows detailed control over many aspects of flexible backbone protein design calculations. RosettaRemodel allows the construction and elaboration of customized protocols for a wide range of design problems, including loop insertion and deletion, disulfide engineering, domain assembly, loop remodeling, motif grafting, symmetrical units, and de novo structure modeling. PMID:21909381
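For orientation, a blueprint is a plain-text, per-residue control file. The Python sketch below writes a minimal blueprint-like file that keeps most of a hypothetical sequence fixed, rebuilds one position as a loop, and inserts two new residues; the column tokens ('.' keep, 'L'/'H'/'E' rebuild with that secondary structure, '0 x' for insertions, 'PIKAA' to restrict design) are paraphrased from RosettaRemodel conventions and should be checked against the current documentation before use.

```python
# Sketch of a RosettaRemodel-style blueprint: one line per residue position.
# Column 1: original residue number (0 = inserted residue), column 2: wild-type
# amino acid ('x' for insertions), column 3: remodel instruction ('.' keep fixed,
# 'L'/'H'/'E' rebuild with that secondary structure), optional 'PIKAA <aas>' to
# restrict design. Token meanings are assumptions; verify against the Remodel docs.
wild_type = "MKTAYIA"  # hypothetical 7-residue sequence

lines = []
for i, aa in enumerate(wild_type, start=1):
    if i == 3:
        # Rebuild position 3 as loop and restrict design to Gly/Ala
        lines.append(f"{i} {aa} L PIKAA GA")
        # Insert two new loop residues immediately after position 3
        lines.append("0 x L")
        lines.append("0 x L")
    else:
        lines.append(f"{i} {aa} .")  # keep this position fixed

with open("insertion.blueprint", "w") as fh:
    fh.write("\n".join(lines) + "\n")
print("\n".join(lines))
```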
An Integrated Framework Advancing Membrane Protein Modeling and Design
Weitzner, Brian D.; Duran, Amanda M.; Tilley, Drew C.; Elazar, Assaf; Gray, Jeffrey J.
2015-01-01
Membrane proteins are critical functional molecules in the human body, constituting more than 30% of open reading frames in the human genome. Unfortunately, a myriad of difficulties in overexpression and reconstitution into membrane mimetics severely limit our ability to determine their structures. Computational tools are therefore instrumental to membrane protein structure prediction, consequently increasing our understanding of membrane protein function and their role in disease. Here, we describe a general framework facilitating membrane protein modeling and design that combines the scientific principles for membrane protein modeling with the flexible software architecture of Rosetta3. This new framework, called RosettaMP, provides a general membrane representation that interfaces with scoring, conformational sampling, and mutation routines that can be easily combined to create new protocols. To demonstrate the capabilities of this implementation, we developed four proof-of-concept applications for (1) prediction of free energy changes upon mutation; (2) high-resolution structural refinement; (3) protein-protein docking; and (4) assembly of symmetric protein complexes, all in the membrane environment. Preliminary data show that these algorithms can produce meaningful scores and structures. The data also suggest needed improvements to both sampling routines and score functions. Importantly, the applications collectively demonstrate the potential of combining the flexible nature of RosettaMP with the power of Rosetta algorithms to facilitate membrane protein modeling and design. PMID:26325167
Rosetta:MSF: a modular framework for multi-state computational protein design.
Löffler, Patrick; Schmitz, Samuel; Hupfeld, Enrico; Sterner, Reinhard; Merkl, Rainer
2017-06-01
Computational protein design (CPD) is a powerful technique to engineer existing proteins or to design novel ones that display desired properties. Rosetta is a software suite including algorithms for computational modeling and analysis of protein structures and offers many elaborate protocols created to solve highly specific tasks of protein engineering. Most of Rosetta's protocols optimize sequences based on a single conformation (i.e., the design state). However, challenging CPD objectives like multi-specificity design or the concurrent consideration of positive and negative design goals demand the simultaneous assessment of multiple states. This is why we have developed the multi-state framework MSF, which facilitates the implementation of Rosetta's single-state protocols in a multi-state environment, and have made two frequently used protocols available. Utilizing MSF, we demonstrated for one of these protocols that multi-state design yields a 15% higher performance than single-state design on a ligand-binding benchmark consisting of structural conformations. With this protocol, we designed nine retro-aldolases de novo on a conformational ensemble deduced from a (βα)8-barrel protein. All variants displayed measurable catalytic activity, testifying to a high success rate for this concept of multi-state enzyme design. PMID:28604768
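A minimal sketch of the multi-state idea described above, not the MSF implementation: each candidate sequence is scored against every member of a conformational ensemble (the design states) and the per-state scores are combined into a single fitness, here by a Boltzmann-weighted average; the energy function, ensemble, and weighting scheme are placeholders.

```python
import math
import random

def single_state_energy(sequence, state):
    """Placeholder for a single-state design energy (e.g., a Rosetta score)."""
    rng = random.Random(sum(ord(c) for c in sequence + state))  # deterministic toy energy
    return rng.uniform(-50.0, 0.0)

def multi_state_fitness(sequence, states, kT=1.0):
    """Combine per-state energies into one fitness.

    Boltzmann weighting acts as a soft minimum over the ensemble; other
    aggregation rules (plain mean, worst case) encode different design goals.
    """
    energies = [single_state_energy(sequence, s) for s in states]
    weights = [math.exp(-e / kT) for e in energies]
    z = sum(weights)
    return sum(w * e for w, e in zip(weights, energies)) / z

ensemble = [f"conformer_{i}" for i in range(10)]   # hypothetical design states
candidates = ["MKTAYIAKQR", "MKTAYLAKQR", "MKTVYIAKQR"]
best = min(candidates, key=lambda seq: multi_state_fitness(seq, ensemble))
print("best candidate (lowest multi-state energy):", best)
```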
Generic framework for mining cellular automata models on protein-folding simulations.
Diaz, N; Tischer, I
2016-05-13
Cellular automata model identification is an important way of building simplified simulation models. In this study, we describe a generic architectural framework to ease the development process of new metaheuristic-based algorithms for cellular automata model identification in protein-folding trajectories. Our framework was developed using a design-pattern-based methodology that streamlines the development of new algorithms. The usefulness of the proposed framework is demonstrated by the implementation of four algorithms, able to obtain extremely precise cellular automata models of the protein-folding process with a protein contact map representation. Dynamic rules obtained by the proposed approach are discussed, and future use for the new tool is outlined.
FAST: a framework for simulation and analysis of large-scale protein-silicon biosensor circuits.
Gu, Ming; Chakrabartty, Shantanu
2013-08-01
This paper presents a computer aided design (CAD) framework for verification and reliability analysis of protein-silicon hybrid circuits used in biosensors. It is envisioned that similar to integrated circuit (IC) CAD design tools, the proposed framework will be useful for system level optimization of biosensors and for discovery of new sensing modalities without resorting to laborious fabrication and experimental procedures. The framework referred to as FAST analyzes protein-based circuits by solving inverse problems involving stochastic functional elements that admit non-linear relationships between different circuit variables. In this regard, FAST uses a factor-graph netlist as a user interface and solving the inverse problem entails passing messages/signals between the internal nodes of the netlist. Stochastic analysis techniques like density evolution are used to understand the dynamics of the circuit and estimate the reliability of the solution. As an example, we present a complete design flow using FAST for synthesis, analysis and verification of our previously reported conductometric immunoassay that uses antibody-based circuits to implement forward error-correction (FEC).
A Framework for Globular Proteins
NASA Astrophysics Data System (ADS)
Lezon, Timothy
2006-03-01
Due to their remarkable chemical specificity and diversity, globular proteins play a crucial role in the network of molecular interactions of life. Over the past several decades, much experimental data has been accumulated on proteins, but the overarching principles that govern the general features of proteins remain largely unknown. Here, a novel framework for understanding many key attributes of globular proteins is presented. This framework suggests that the characteristics of globular proteins that make them well-suited for biological function are the emergent properties of a unique phase of matter. Implications of this picture include the provision of a fixed backdrop for molecular evolution and natural selection and design restrictions on molecular machinery. The work described here was carried out in collaboration with Jayanth Banavar and Amos Maritan.
Structure based re-design of the binding specificity of anti-apoptotic Bcl-xL
Chen, T. Scott; Palacios, Hector; Keating, Amy E.
2012-01-01
Many native proteins are multi-specific and interact with numerous partners, which can confound analysis of their functions. Protein design provides a potential route to generating synthetic variants of native proteins with more selective binding profiles. Re-designed proteins could be used as research tools, diagnostics or therapeutics. In this work, we used a library screening approach to re-engineer the multi-specific anti-apoptotic protein Bcl-xL to remove its interactions with many of its binding partners, making it a high affinity and selective binder of the BH3 region of pro-apoptotic protein Bad. To overcome the enormity of the potential Bcl-xL sequence space, we developed and applied a computational/experimental framework that used protein structure information to generate focused combinatorial libraries. Sequence features were identified using structure-based modeling, and an optimization algorithm based on integer programming was used to select degenerate codons that maximally covered these features. A constraint on library size was used to ensure thorough sampling. Using yeast surface display to screen a designed library of Bcl-xL variants, we successfully identified a protein with ~1,000-fold improvement in binding specificity for the BH3 region of Bad over the BH3 region of Bim. Although negative design was targeted only against the BH3 region of Bim, the best re-designed protein was globally specific against binding to 10 other peptides corresponding to native BH3 motifs. Our design framework demonstrates an efficient route to highly specific protein binders and may readily be adapted for application to other design problems. PMID:23154169
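A small sketch of the library-size bookkeeping implied above (not the integer-programming optimizer itself): given degenerate codons chosen at a few positions, it expands the IUPAC codes, translates every encoded codon with the standard genetic code, and reports the encoded amino acids and total DNA library size so a screening-capacity constraint can be checked. The positions and codon choices are illustrative.

```python
from itertools import product

# Standard genetic code; codon order: 1st/2nd/3rd base each cycling through T, C, A, G.
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {b1 + b2 + b3: AA[16 * i + 4 * j + k]
               for i, b1 in enumerate(BASES)
               for j, b2 in enumerate(BASES)
               for k, b3 in enumerate(BASES)}

# IUPAC degenerate nucleotide codes.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "AG", "Y": "CT",
         "S": "CG", "W": "AT", "K": "GT", "M": "AC", "B": "CGT",
         "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

def expand(degenerate_codon):
    """All concrete codons encoded by a degenerate codon such as 'NNK'."""
    return ["".join(p) for p in product(*(IUPAC[c] for c in degenerate_codon))]

def encoded_amino_acids(degenerate_codon):
    return sorted({CODON_TABLE[c] for c in expand(degenerate_codon)})

# Hypothetical design: three varied positions with chosen degenerate codons.
library = {"pos 12": "NNK", "pos 45": "KST", "pos 78": "RMA"}
dna_size = 1
for pos, codon in library.items():
    dna_size *= len(expand(codon))
    print(pos, codon, "->", "".join(encoded_amino_acids(codon)))

screening_capacity = 1e6  # e.g., the number of clones the display screen can cover
print(f"DNA library size: {dna_size} (within capacity: {dna_size <= screening_capacity})")
```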
RosettaScripts: a scripting language interface to the Rosetta macromolecular modeling suite.
Fleishman, Sarel J; Leaver-Fay, Andrew; Corn, Jacob E; Strauch, Eva-Maria; Khare, Sagar D; Koga, Nobuyasu; Ashworth, Justin; Murphy, Paul; Richter, Florian; Lemmon, Gordon; Meiler, Jens; Baker, David
2011-01-01
Macromolecular modeling and design are increasingly useful in basic research, biotechnology, and teaching. However, the absence of a user-friendly modeling framework that provides access to a wide range of modeling capabilities is hampering the wider adoption of computational methods by non-experts. RosettaScripts is an XML-like language for specifying modeling tasks in the Rosetta framework. RosettaScripts provides access to protocol-level functionalities, such as rigid-body docking and sequence redesign, and allows fast testing and deployment of complex protocols without need for modifying or recompiling the underlying C++ code. We illustrate these capabilities with RosettaScripts protocols for the stabilization of proteins, the generation of computationally constrained libraries for experimental selection of higher-affinity binding proteins, loop remodeling, small-molecule ligand docking, design of ligand-binding proteins, and specificity redesign in DNA-binding proteins.
Parallel Computational Protein Design.
Zhou, Yichao; Donald, Bruce R; Zeng, Jianyang
2017-01-01
Computational structure-based protein design (CSPD) is an important problem in computational biology, which aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that is guaranteed to find the global minimum energy conformation (GMEC) is to combine dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computational bottleneck of a large-scale protein design process. To address this issue, we extend and add a new module to the OSPREY program, previously developed in the Donald lab (Gainza et al., Methods Enzymol 523:87, 2013), to implement a GPU-based massively parallel A* algorithm for improving the protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide up to four orders of magnitude speedup in large protein design cases with a small memory overhead compared to the traditional A* search algorithm implementation, while still guaranteeing optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle problems in which the conformation space is too large and the global optimal solution could not previously be computed. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with state-of-the-art rotamer pruning algorithms such as iMinDEE (Gainza et al., PLoS Comput Biol 8:e1002335, 2012) and DEEPer (Hallen et al., Proteins 81:18-39, 2013) to also consider continuous backbone and side-chain flexibility.
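A compact, CPU-only sketch of A* in this setting with toy energies (not the gOSPREY implementation): partial conformations are extended position by position, g is the exact energy of the assigned rotamers, and the admissible heuristic h lower-bounds the remaining energy, so the first complete assignment popped from the priority queue is the GMEC for these energies.

```python
import heapq
import itertools
import random

rng = random.Random(0)
n_pos, n_rot = 5, 4          # toy problem: 5 design positions, 4 rotamers each
E1 = [[rng.uniform(0.0, 3.0) for _ in range(n_rot)] for _ in range(n_pos)]
E2 = {(i, j): [[rng.uniform(0.0, 1.0) for _ in range(n_rot)] for _ in range(n_rot)]
      for i, j in itertools.combinations(range(n_pos), 2)}

def g(assign):
    """Exact energy of the rotamers assigned so far (self + pair terms)."""
    e = sum(E1[i][r] for i, r in enumerate(assign))
    e += sum(E2[i, j][assign[i]][assign[j]]
             for i, j in itertools.combinations(range(len(assign)), 2))
    return e

def h(assign):
    """Admissible lower bound on the remaining energy: best rotamer at each
    unassigned position, counting only its interactions with assigned positions
    (pair energies are non-negative here, so dropping the rest stays optimistic)."""
    k = len(assign)
    return sum(min(E1[j][r] + sum(E2[i, j][assign[i]][r] for i in range(k))
                   for r in range(n_rot))
               for j in range(k, n_pos))

frontier = [(h(()), ())]
while frontier:
    f, assign = heapq.heappop(frontier)
    if len(assign) == n_pos:                  # first complete node popped = GMEC
        print("GMEC rotamers:", assign, "energy:", round(g(assign), 3))
        break
    for r in range(n_rot):
        child = assign + (r,)
        heapq.heappush(frontier, (g(child) + h(child), child))
```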
Automated selection of stabilizing mutations in designed and natural proteins
Borgo, Benjamin; Havranek, James J.
2012-01-01
The ability to engineer novel protein folds, conformations, and enzymatic activities offers enormous potential for the development of new protein therapeutics and biocatalysts. However, many de novo and redesigned proteins exhibit poor hydrophobic packing in their predicted structures, leading to instability or insolubility. The general utility of rational, structure-based design would greatly benefit from an improved ability to generate well-packed conformations. Here we present an automated protocol within the RosettaDesign framework that can identify and improve poorly packed protein cores by selecting a series of stabilizing point mutations. We apply our method to previously characterized designed proteins that exhibited a decrease in stability after a full computational redesign. We further demonstrate the ability of our method to improve the thermostability of a well-behaved native protein. In each instance, biophysical characterization reveals that we were able to stabilize the original proteins against chemical and thermal denaturation. We believe our method will be a valuable tool for both improving upon designed proteins and conferring increased stability upon native proteins. PMID:22307603
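A schematic version of the selection loop described above, with a stand-in stability score in place of the RosettaDesign energy: every point mutation at a set of core positions is scored, and substitutions predicted to stabilize the structure beyond a cutoff are collected.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def predicted_ddg(sequence, pos, new_aa):
    """Stand-in for a structure-based ddG predictor (negative = stabilizing).
    A real protocol would thread the mutation onto the structure, repack the
    neighborhood, and rescore; here we use a toy hydrophobicity-based score."""
    hydrophobic = set("AILMFVWY")
    old_aa = sequence[pos]
    gain = (new_aa in hydrophobic) - (old_aa in hydrophobic)
    return -1.5 * gain  # toy: adding a hydrophobic core residue counts as "stabilizing"

def select_stabilizing_mutations(sequence, core_positions, cutoff=-1.0):
    chosen = []
    for pos in core_positions:
        ddg, aa = min(((predicted_ddg(sequence, pos, aa), aa)
                       for aa in AMINO_ACIDS if aa != sequence[pos]),
                      key=lambda x: x[0])
        if ddg <= cutoff:                      # keep only clearly stabilizing changes
            chosen.append((pos + 1, sequence[pos], aa, ddg))
    return chosen

seq = "MKTSAYGQDLNERV"                          # hypothetical designed protein
for resnum, wt, mut, ddg in select_stabilizing_mutations(seq, core_positions=[3, 6, 9]):
    print(f"{wt}{resnum}{mut}: predicted ddG = {ddg:+.1f} kcal/mol")
```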
cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design
Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R.; Xu, Wei
2016-01-01
Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to the widely used protein design software OSPREY, to allow the original design framework to scale to commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches. PMID:27154509
Ruff, Kiersten M; Roberts, Stefan; Chilkoti, Ashutosh; Pappu, Rohit V
2018-06-24
Proteins and synthetic polymers can undergo phase transitions in response to changes to intensive solution parameters such as temperature, proton chemical potentials (pH), and hydrostatic pressure. For proteins and protein-based polymers, the information required for stimulus responsive phase transitions is encoded in their amino acid sequence. Here, we review some of the key physical principles that govern the phase transitions of archetypal intrinsically disordered protein polymers (IDPPs). These are disordered proteins with highly repetitive amino acid sequences. Advances in recombinant technologies have enabled the design and synthesis of protein sequences of a variety of sequence complexities and lengths. We summarize insights that have been gleaned from the design and characterization of IDPPs that undergo thermo-responsive phase transitions and build on these insights to present a general framework for IDPPs with pH and pressure responsive phase behavior. In doing so, we connect the stimulus responsive phase behavior of IDPPs with repetitive sequences to the coil-to-globule transitions that these sequences undergo at the single chain level in response to changes in stimuli. The proposed framework and ongoing studies of stimulus responsive phase behavior of designed IDPPs have direct implications in bioengineering, where designing sequences with bespoke material properties broadens the spectrum of applications, and in biology and medicine for understanding the sequence-specific driving forces for the formation of protein-based membraneless organelles as well as biological matrices that act as scaffolds for cells and mediators of cell-to-cell communication. Copyright © 2018. Published by Elsevier Ltd.
A computational framework to empower probabilistic protein design
Fromer, Menachem; Yanover, Chen
2008-01-01
Motivation: The task of engineering a protein to perform a target biological function is known as protein design. A commonly used paradigm casts this functional design problem as a structural one, assuming a fixed backbone. In probabilistic protein design, positional amino acid probabilities are used to create a random library of sequences to be simultaneously screened for biological activity. Clearly, certain choices of probability distributions will be more successful in yielding functional sequences. However, since the number of sequences is exponential in protein length, computational optimization of the distribution is difficult. Results: In this paper, we develop a computational framework for probabilistic protein design following the structural paradigm. We formulate the distribution of sequences for a structure using the Boltzmann distribution over their free energies. The corresponding probabilistic graphical model is constructed, and we apply belief propagation (BP) to calculate marginal amino acid probabilities. We test this method on a large structural dataset and demonstrate the superiority of BP over previous methods. Nevertheless, since the results obtained by BP are far from optimal, we thoroughly assess the paradigm using high-quality experimental data. We demonstrate that, for small scale sub-problems, BP attains identical results to those produced by exact inference on the paradigmatic model. However, quantitative analysis shows that the distributions predicted significantly differ from the experimental data. These findings, along with the excellent performance we observed using BP on the smaller problems, suggest potential shortcomings of the paradigm. We conclude with a discussion of how it may be improved in the future. Contact: fromer@cs.huji.ac.il PMID:18586717
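A condensed sketch of the approach: positions are variables over a reduced amino-acid alphabet, energies define a Boltzmann distribution, and sum-product message passing yields approximate per-position marginals that could seed a probabilistic library. The energies here are random toys, and on graphs with cycles the marginals are approximate, as the abstract notes.

```python
import numpy as np

rng = np.random.default_rng(0)
A = 4                                     # toy alphabet size (20 for real amino acids)
positions = range(5)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]          # a cycle -> loopy BP
h = {i: rng.normal(size=A) for i in positions}             # unary energies
J = {e: rng.normal(size=(A, A)) for e in edges}            # pairwise energies

psi = {i: np.exp(-h[i]) for i in positions}                 # Boltzmann factors, kT = 1
phi = {e: np.exp(-J[e]) for e in edges}

# Messages m[(i, j)] from i to j, initialized uniform.
m = {(i, j): np.ones(A) for i, j in edges}
m.update({(j, i): np.ones(A) for i, j in edges})

def neighbors(i):
    return [j for j in positions if (i, j) in m]

for _ in range(100):                                        # sum-product updates
    for (i, j) in list(m):
        pair = phi[(i, j)] if (i, j) in phi else phi[(j, i)].T
        belief_i = psi[i] * np.prod([m[(k, i)] for k in neighbors(i) if k != j], axis=0)
        msg = pair.T @ belief_i
        m[(i, j)] = msg / msg.sum()

for i in positions:                                         # per-position marginals
    b = psi[i] * np.prod([m[(k, i)] for k in neighbors(i)], axis=0)
    print(f"position {i}:", np.round(b / b.sum(), 3))
```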
Rationally designed synthetic protein hydrogels with predictable mechanical properties.
Wu, Junhua; Li, Pengfei; Dong, Chenling; Jiang, Heting; Bin Xue; Gao, Xiang; Qin, Meng; Wang, Wei; Bin Chen; Cao, Yi
2018-02-12
Designing synthetic protein hydrogels with tailored mechanical properties similar to naturally occurring tissues is an eternal pursuit in tissue engineering and stem cell and cancer research. However, it remains challenging to correlate the mechanical properties of protein hydrogels with the nanomechanics of individual building blocks. Here we use single-molecule force spectroscopy, protein engineering and theoretical modeling to prove that the mechanical properties of protein hydrogels are predictable based on the mechanical hierarchy of the cross-linkers and the load-bearing modules at the molecular level. These findings provide a framework for rationally designing protein hydrogels with independently tunable elasticity, extensibility, toughness and self-healing. Using this principle, we demonstrate the engineering of self-healable muscle-mimicking hydrogels that can significantly dissipate energy through protein unfolding. We expect that this principle can be generalized for the construction of protein hydrogels with customized mechanical properties for biomedical applications.
ERIC Educational Resources Information Center
Johnson, R. Jeremy
2014-01-01
HIV protease has served as a model protein for understanding protein structure, enzyme kinetics, structure-based drug design, and protein evolution. Inhibitors of HIV protease are also an essential part of effective HIV/AIDS treatment and have provided great societal benefits. The broad applications for HIV protease and its inhibitors make it a…
Principles for computational design of binding antibodies
Pszolla, M. Gabriele; Lapidoth, Gideon D.; Norn, Christoffer; Dym, Orly; Unger, Tamar; Albeck, Shira; Tyka, Michael D.; Fleishman, Sarel J.
2017-01-01
Natural proteins must both fold into a stable conformation and exert their molecular function. To date, computational design has successfully produced stable and atomically accurate proteins by using so-called “ideal” folds rich in regular secondary structures and almost devoid of loops and destabilizing elements, such as cavities. Molecular function, such as binding and catalysis, however, often demands nonideal features, including large and irregular loops and buried polar interaction networks, which have remained challenging for fold design. Through five design/experiment cycles, we learned principles for designing stable and functional antibody variable fragments (Fvs). Specifically, we (i) used sequence-design constraints derived from antibody multiple-sequence alignments, and (ii) during backbone design, maintained stabilizing interactions observed in natural antibodies between the framework and loops of complementarity-determining regions (CDRs) 1 and 2. Designed Fvs bound their ligands with midnanomolar affinities and were as stable as natural antibodies, despite having >30 mutations from mammalian antibody germlines. Furthermore, crystallographic analysis demonstrated atomic accuracy throughout the framework and in four of six CDRs in one design and atomic accuracy in the entire Fv in another. The principles we learned are general, and can be implemented to design other nonideal folds, generating stable, specific, and precise antibodies and enzymes. PMID:28973872
ERIC Educational Resources Information Center
Bethel, Casey M.; Lieberman, Raquel L.
2014-01-01
Here we present a multidisciplinary educational unit intended for general, advanced placement, or international baccalaureate-level high school science, focused on the three-dimensional structure of proteins and their connection to function and disease. The lessons are designed within the framework of the Next Generation Science Standards to make…
Toni, Tina; Tidor, Bruce
2013-01-01
Biological systems are inherently variable, with their dynamics influenced by intrinsic and extrinsic sources. These systems are often only partially characterized, with large uncertainties about specific sources of extrinsic variability and biochemical properties. Moreover, it is not yet well understood how different sources of variability combine and affect biological systems in concert. To successfully design biomedical therapies or synthetic circuits with robust performance, it is crucial to account for uncertainty and effects of variability. Here we introduce an efficient modeling and simulation framework to study systems that are simultaneously subject to multiple sources of variability, and apply it to make design decisions on small genetic networks that play a role of basic design elements of synthetic circuits. Specifically, the framework was used to explore the effect of transcriptional and post-transcriptional autoregulation on fluctuations in protein expression in simple genetic networks. We found that autoregulation could either suppress or increase the output variability, depending on specific noise sources and network parameters. We showed that transcriptional autoregulation was more successful than post-transcriptional in suppressing variability across a wide range of intrinsic and extrinsic magnitudes and sources. We derived the following design principles to guide the design of circuits that best suppress variability: (i) high protein cooperativity and low miRNA cooperativity, (ii) imperfect complementarity between miRNA and mRNA was preferred to perfect complementarity, and (iii) correlated expression of mRNA and miRNA – for example, on the same transcript – was best for suppression of protein variability. Results further showed that correlations in kinetic parameters between cells affected the ability to suppress variability, and that variability in transient states did not necessarily follow the same principles as variability in the steady state. Our model and findings provide a general framework to guide design principles in synthetic biology. PMID:23555205
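A minimal stochastic sketch of the kind of model explored above: a Gillespie simulation of a gene whose protein represses its own promoter with Hill cooperativity, so protein-level variability can be compared with and without transcriptional autoregulation. Rates, threshold, and Hill coefficient are illustrative values, not parameters from the study.

```python
import math
import random

def gillespie(repression=True, t_end=2000.0, seed=1):
    """Stochastic simulation of mRNA/protein counts for one gene."""
    rng = random.Random(seed)
    k_m0, k_p, d_m, d_p = 2.0, 4.0, 0.2, 0.02     # transcription, translation, decay rates
    K, n = 200.0, 2.0                              # repression threshold and Hill coefficient
    m, p, t = 0, 0, 0.0
    samples = []
    while t < t_end:
        k_m = k_m0 / (1.0 + (p / K) ** n) if repression else k_m0
        rates = [k_m, k_p * m, d_m * m, d_p * p]   # make mRNA, make protein, decay mRNA, decay protein
        total = sum(rates)
        t += rng.expovariate(total)
        r = rng.uniform(0.0, total)
        if r < rates[0]:
            m += 1
        elif r < rates[0] + rates[1]:
            p += 1
        elif r < rates[0] + rates[1] + rates[2]:
            m -= 1
        else:
            p -= 1
        if t > t_end / 2:                           # discard the transient, keep steady state
            samples.append(p)
    mean = sum(samples) / len(samples)
    cv = math.sqrt(sum((x - mean) ** 2 for x in samples) / len(samples)) / mean
    return mean, cv

for label, flag in [("autoregulated", True), ("unregulated", False)]:
    mean, cv = gillespie(repression=flag)
    print(f"{label}: mean protein = {mean:.0f}, CV = {cv:.2f}")
```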
Aragonite-Associated Mollusk Shell Protein Aggregates To Form Mesoscale “Smart” Hydrogels
Perovic, Iva; Davidyants, Anastasia; Evans, John Spencer
2016-11-30
In the mollusk shell there exists a framework silk fibroin-polysaccharide hydrogel coating around nacre aragonite tablets, and this coating facilitates the synthesis and organization of mineral nanoparticles into mesocrystals. In this report, we identify that a protein component of this coating, n16.3, is a hydrogelator. Due to the presence of intrinsic disorder, aggregation-prone regions, and nearly equal balance of anionic and cationic side chains, this protein assembles to form porous mesoscale hydrogel particles in solution and on mica surfaces. These hydrogel particles change their dimensionality, organization, and internal structure in response to pH and ions, particularly Ca(II), which indicates that these behave as ion-responsive or "smart" hydrogels. Thus, in addition to silk fibroins, the gel phase of the mollusk shell nacre framework layer may actually consist of several framework hydrogelator proteins, such as n16.3, which can promote mineral nanoparticle organization and assembly during the nacre biomineralization process and also serve as a model system for designing ion-responsive, composite, and smart hydrogels.
Park, Hahnbeom; Lee, Gyu Rie; Heo, Lim; Seok, Chaok
2014-01-01
Protein loop modeling is a tool for predicting protein local structures of particular interest, providing opportunities for applications involving protein structure prediction and de novo protein design. Until recently, the majority of loop modeling methods have been developed and tested by reconstructing loops in frameworks of experimentally resolved structures. In many practical applications, however, the protein loops to be modeled are located in inaccurate structural environments. These include loops in model structures, low-resolution experimental structures, or experimental structures of different functional forms. Accordingly, discrepancies in the accuracy of the structural environment assumed in development of the method and that in practical applications present additional challenges to modern loop modeling methods. This study demonstrates a new strategy for employing a hybrid energy function combining physics-based and knowledge-based components to help tackle this challenge. The hybrid energy function is designed to combine the strengths of each energy component, simultaneously maintaining accurate loop structure prediction in a high-resolution framework structure and tolerating minor environmental errors in low-resolution structures. A loop modeling method based on global optimization of this new energy function is tested on loop targets situated in different levels of environmental errors, ranging from experimental structures to structures perturbed in backbone as well as side chains and template-based model structures. The new method performs comparably to force field-based approaches in loop reconstruction in crystal structures and better in loop prediction in inaccurate framework structures. This result suggests that higher-accuracy predictions would be possible for a broader range of applications. The web server for this method is available at http://galaxy.seoklab.org/loop with the PS2 option for the scoring function.
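A very small sketch of the hybrid-scoring idea: a physics-based term and a knowledge-based term are computed separately and combined with adjustable weights, so the relative trust placed in each component can be tuned for high- versus low-resolution framework structures. Both energy terms are placeholders.

```python
def physics_energy(loop_conformation):
    """Placeholder for a force-field score (vdW, electrostatics, solvation)."""
    return sum(x ** 2 for x in loop_conformation)            # toy quadratic penalty

def knowledge_energy(loop_conformation):
    """Placeholder for a statistical potential derived from known structures."""
    return sum(abs(x - 1.0) for x in loop_conformation)      # toy preference near 1.0

def hybrid_energy(loop_conformation, w_physics=1.0, w_knowledge=0.5):
    """Weighted combination; down-weighting the physics term can make the score
    more tolerant of small errors in the surrounding framework structure."""
    return (w_physics * physics_energy(loop_conformation)
            + w_knowledge * knowledge_energy(loop_conformation))

candidate_loops = [[0.2, 0.1, -0.3], [1.1, 0.9, 1.0], [2.0, -1.5, 0.4]]
best = min(candidate_loops, key=hybrid_energy)
print("selected loop conformation:", best, "score:", round(hybrid_energy(best), 2))
```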
Computationally mapping sequence space to understand evolutionary protein engineering.
Armstrong, Kathryn A; Tidor, Bruce
2008-01-01
Evolutionary protein engineering has been dramatically successful, producing a wide variety of new proteins with altered stability, binding affinity, and enzymatic activity. However, the success of such procedures is often unreliable, and the impact of the choice of protein, engineering goal, and evolutionary procedure is not well understood. We have created a framework for understanding aspects of the protein engineering process by computationally mapping regions of feasible sequence space for three small proteins using structure-based design protocols. We then tested the ability of different evolutionary search strategies to explore these sequence spaces. The results point to a non-intuitive relationship between the error-prone PCR mutation rate and the number of rounds of replication. The evolutionary relationships among feasible sequences reveal hub-like sequences that serve as particularly fruitful starting sequences for evolutionary search. Moreover, genetic recombination procedures were examined, and tradeoffs relating sequence diversity and search efficiency were identified. This framework allows us to consider the impact of protein structure on the allowed sequence space and therefore on the challenges that each protein presents to error-prone PCR and genetic recombination procedures.
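A toy simulation of the mutation-rate/round-number trade-off mentioned above: starting from one sequence, each round mutates every clone at a per-position rate and keeps only 'feasible' sequences (here, those retaining a fixed set of key residues), so different rate/round combinations can be compared for how far and how diversely they explore sequence space. The feasibility rule and all parameters are illustrative.

```python
import random

AAS = "ACDEFGHIKLMNPQRSTVWY"
rng = random.Random(42)

def mutate(seq, rate):
    """Apply per-position substitutions at the given error-prone PCR rate."""
    return "".join(rng.choice(AAS) if rng.random() < rate else aa for aa in seq)

def feasible(seq, key_sites={2: "T", 7: "G", 11: "E"}):
    """Toy stand-in for structure-based feasibility: key residues must be kept."""
    return all(seq[i] == aa for i, aa in key_sites.items())

def evolve(start, rate, rounds, pop_size=200):
    pop = [start] * pop_size
    for _ in range(rounds):
        pop = [mutate(s, rate) for s in pop]
        pop = [s for s in pop if feasible(s)] or [start]     # restart if everything is lost
        pop = [rng.choice(pop) for _ in range(pop_size)]     # resample the survivors
    return pop

start = "MKTLAYVGQDIEHRS"
for rate, rounds in [(0.01, 12), (0.03, 4), (0.10, 1)]:
    final = evolve(start, rate, rounds)
    diversity = len(set(final))
    mean_dist = sum(sum(a != b for a, b in zip(s, start)) for s in final) / len(final)
    print(f"rate={rate:.2f}, rounds={rounds}: {diversity} unique sequences, "
          f"mean distance from start = {mean_dist:.1f}")
```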
A quantitative framework for the forward design of synthetic miRNA circuits.
Bloom, Ryan J; Winkler, Sally M; Smolke, Christina D
2014-11-01
Synthetic genetic circuits incorporating regulatory components based on RNA interference (RNAi) have been used in a variety of systems. A comprehensive understanding of the parameters that determine the relationship between microRNA (miRNA) and target expression levels is lacking. We describe a quantitative framework supporting the forward engineering of gene circuits that incorporate RNAi-based regulatory components in mammalian cells. We developed a model that captures the quantitative relationship between miRNA and target gene expression levels as a function of parameters, including mRNA half-life and miRNA target-site number. We extended the model to synthetic circuits that incorporate protein-responsive miRNA switches and designed an optimized miRNA-based protein concentration detector circuit that noninvasively measures small changes in the nuclear concentration of β-catenin owing to induction of the Wnt signaling pathway. Our results highlight the importance of methods for guiding the quantitative design of genetic circuits to achieve robust, reliable and predictable behaviors in mammalian cells.
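A minimal sketch of the kind of quantitative relationship described above, not the paper's fitted model: steady-state protein output as a function of miRNA level, mRNA half-life, and target-site number, assuming each additional site adds an independent miRNA-dependent degradation pathway. All rate constants are illustrative.

```python
import math

def steady_state_protein(mirna, n_sites, half_life_h=4.0,
                         k_tx=100.0, k_tl=10.0, k_mirna=0.5, k_dp=0.1):
    """Steady-state protein for a simple transcription/translation model.

    mRNA:    d[m]/dt = k_tx - (d_m + n_sites * k_mirna * mirna) * m
    protein: d[p]/dt = k_tl * m - k_dp * p
    All rate constants are placeholders; 'mirna' is a relative abundance.
    """
    d_m = math.log(2) / half_life_h
    m_ss = k_tx / (d_m + n_sites * k_mirna * mirna)
    return k_tl * m_ss / k_dp

for n_sites in (1, 4):
    for mirna in (0.0, 0.5, 2.0):
        p = steady_state_protein(mirna, n_sites)
        print(f"{n_sites} target site(s), miRNA level {mirna}: protein = {p:.0f} (a.u.)")
```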
Defining the conserved internal architecture of a protein kinase.
Kornev, Alexandr P; Taylor, Susan S
2010-03-01
Protein kinases constitute a large protein family of important regulators in all eukaryotic cells. All of the protein kinases have a similar bilobal fold, and their key structural features have been well studied. However, the recent discovery of non-contiguous hydrophobic ensembles inside the protein kinase core shed new light on the internal organization of these molecules. Two hydrophobic "spines" traverse both lobes of the protein kinase molecule, providing a firm but flexible connection between its key elements. The spine model introduces a useful framework for analysis of intramolecular communications, molecular dynamics, and drug design. Published by Elsevier B.V.
SHARPEN: systematic hierarchical algorithms for rotamers and proteins on an extended network.
Loksha, Ilya V; Maiolo, James R; Hong, Cheng W; Ng, Albert; Snow, Christopher D
2009-04-30
Algorithms for discrete optimization of proteins play a central role in recent advances in protein structure prediction and design. We wish to improve the resources available for computational biologists to rapidly prototype such algorithms and to easily scale these algorithms to many processors. To that end, we describe the implementation and use of two new open source resources, citing potential benefits over existing software. We discuss CHOMP, a new object-oriented library for macromolecular optimization, and SHARPEN, a framework for scaling CHOMP scripts to many computers. These tools allow users to develop new algorithms for a variety of applications including protein repacking, protein-protein docking, loop rebuilding, or homology model remediation. Particular care was taken to allow modular energy function design; protein conformations may currently be scored using either the OPLSaa molecular mechanical energy function or an all-atom semiempirical energy function employed by Rosetta. (c) 2009 Wiley Periodicals, Inc.
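A schematic Metropolis repacking loop of the sort such libraries are meant to prototype (toy energy tables, not the CHOMP/SHARPEN API): each step proposes a new rotamer at a random position and accepts or rejects it with the Metropolis criterion.

```python
import itertools
import math
import random

rng = random.Random(0)
n_pos, n_rot = 8, 6
E1 = [[rng.uniform(0, 2) for _ in range(n_rot)] for _ in range(n_pos)]
E2 = {(i, j): [[rng.uniform(-0.5, 0.5) for _ in range(n_rot)] for _ in range(n_rot)]
      for i, j in itertools.combinations(range(n_pos), 2)}

def total_energy(rot):
    e = sum(E1[i][rot[i]] for i in range(n_pos))
    e += sum(E2[i, j][rot[i]][rot[j]] for i, j in itertools.combinations(range(n_pos), 2))
    return e

def repack(steps=20000, kT=1.0):
    rot = [rng.randrange(n_rot) for _ in range(n_pos)]
    e = total_energy(rot)
    best, best_e = list(rot), e
    for _ in range(steps):
        i, new = rng.randrange(n_pos), rng.randrange(n_rot)
        old = rot[i]
        rot[i] = new
        e_new = total_energy(rot)               # a real implementation would update incrementally
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / kT):
            e = e_new                           # accept (Metropolis criterion)
            if e < best_e:
                best, best_e = list(rot), e
        else:
            rot[i] = old                        # reject, restore the previous rotamer
    return best, best_e

rotamers, energy = repack()
print("best rotamer assignment:", rotamers, "energy:", round(energy, 3))
```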
Installing hydrolytic activity into a completely de novo protein framework
NASA Astrophysics Data System (ADS)
Burton, Antony J.; Thomson, Andrew R.; Dawson, William M.; Brady, R. Leo; Woolfson, Derek N.
2016-09-01
The design of enzyme-like catalysts tests our understanding of sequence-to-structure/function relationships in proteins. Here we install hydrolytic activity predictably into a completely de novo and thermostable α-helical barrel, which comprises seven helices arranged around an accessible channel. We show that the lumen of the barrel accepts 21 mutations to functional polar residues. The resulting variant, which has cysteine-histidine-glutamic acid triads on each helix, hydrolyses p-nitrophenyl acetate with catalytic efficiencies that match the most-efficient redesigned hydrolases based on natural protein scaffolds. This is the first report of a functional catalytic triad engineered into a de novo protein framework. The flexibility of our system also allows the facile incorporation of unnatural side chains to improve activity and probe the catalytic mechanism. Such a predictable and robust construction of truly de novo biocatalysts holds promise for applications in chemical and biochemical synthesis.
Jiang, Jianwen; Babarao, Ravichandar; Hu, Zhongqiao
2011-07-01
Nanoporous materials have widespread applications in chemical industry, but the pathway from laboratory synthesis and testing to practical utilization of nanoporous materials is substantially challenging and requires fundamental understanding from the bottom up. With ever-growing computational resources, molecular simulations have become an indispensable tool for material characterization, screening and design. This tutorial review summarizes the recent simulation studies in zeolites, metal-organic frameworks and protein crystals, and provides a molecular overview for energy, environmental and pharmaceutical applications of nanoporous materials with increasing degree of complexity in building blocks. It is demonstrated that molecular-level studies can bridge the gap between physical and engineering sciences, unravel microscopic insights that are otherwise experimentally inaccessible, and assist in the rational design of new materials. The review is concluded with major challenges in future simulation exploration of novel nanoporous materials for emerging applications.
A Synthetic Biology Framework for Programming Eukaryotic Transcription Functions
Khalil, Ahmad S.; Lu, Timothy K.; Bashor, Caleb J.; Ramirez, Cherie L.; Pyenson, Nora C.; Joung, J. Keith; Collins, James J.
2013-01-01
Eukaryotic transcription factors (TFs) perform complex and combinatorial functions within transcriptional networks. Here, we present a synthetic framework for systematically constructing eukaryotic transcription functions using artificial zinc fingers, modular DNA-binding domains found within many eukaryotic TFs. Utilizing this platform, we construct a library of orthogonal synthetic transcription factors (sTFs) and use these to wire synthetic transcriptional circuits in yeast. We engineer complex functions, such as tunable output strength and transcriptional cooperativity, by rationally adjusting a decomposed set of key component properties, e.g., DNA specificity, affinity, promoter design, protein-protein interactions. We show that subtle perturbations to these properties can transform an individual sTF between distinct roles (activator, cooperative factor, inhibitory factor) within a transcriptional complex, thus drastically altering the signal processing behavior of multi-input systems. This platform provides new genetic components for synthetic biology and enables bottom-up approaches to understanding the design principles of eukaryotic transcriptional complexes and networks. PMID:22863014
Faunus: An object oriented framework for molecular simulation
Lund, Mikael; Trulsson, Martin; Persson, Björn
2008-01-01
Background: We present a C++ class library for Monte Carlo simulation of molecular systems, including proteins in solution. The design is generic and highly modular, enabling multiple developers to easily implement additional features. The statistical mechanical methods are documented by extensive use of code comments that – subsequently – are collected to automatically build a web-based manual. Results: We show how an object oriented design can be used to create an intuitively appealing coding framework for molecular simulation. This is exemplified in a minimalistic C++ program that can calculate protein protonation states. We further discuss performance issues related to high level coding abstraction. Conclusion: C++ and the Standard Template Library (STL) provide a high-performance platform for generic molecular modeling. Automatic generation of code documentation from inline comments has proven particularly useful in that no separate manual needs to be maintained. PMID:18241331
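As a stand-in for the 'calculate protein protonation states' example mentioned above, the sketch below does the simplest possible version in Python: per-residue protonation fractions from model pKa values via the Henderson-Hasselbalch relation, ignoring the site-site electrostatic coupling that a Monte Carlo titration would sample.

```python
# Model (intrinsic) pKa values for titratable side chains; real calculations shift
# these through site-site electrostatic interactions sampled by Monte Carlo.
MODEL_PKA = {"D": 4.0, "E": 4.4, "H": 6.3, "C": 8.5, "Y": 9.6, "K": 10.4, "R": 12.0}
ACIDIC = {"D", "E", "C", "Y"}   # the deprotonated form carries the charge

def protonation_fraction(residue, pH):
    """Henderson-Hasselbalch: fraction of sites carrying a proton."""
    return 1.0 / (1.0 + 10.0 ** (pH - MODEL_PKA[residue]))

def net_charge(sequence, pH):
    """Approximate side-chain net charge (termini ignored for simplicity)."""
    q = 0.0
    for aa in sequence:
        if aa not in MODEL_PKA:
            continue
        f_prot = protonation_fraction(aa, pH)
        q += -(1.0 - f_prot) if aa in ACIDIC else f_prot
    return q

seq = "MKDHEECRKYD"                       # hypothetical peptide
for pH in (3.0, 7.0, 11.0):
    print(f"pH {pH:>4}: net charge = {net_charge(seq, pH):+.2f}")
```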
Reinert, Zachary E; Horne, W Seth
2014-11-28
A variety of non-biological structural motifs have been incorporated into the backbone of natural protein sequences. In parallel work, diverse unnatural oligomers of de novo design (termed "foldamers") have been developed that fold in defined ways. In this Perspective article, we survey foundational studies on protein backbone engineering, with a focus on alterations made in the context of complex tertiary folds. We go on to summarize recent work illustrating the potential promise of these methods to provide a general framework for the construction of foldamer mimics of protein tertiary structures.
Luo, Xin; You, Zhuhong; Zhou, Mengchu; Li, Shuai; Leung, Hareton; Xia, Yunni; Zhu, Qingsheng
2015-01-01
The comprehensive mapping of protein-protein interactions (PPIs) is highly desired for one to gain deep insights into both fundamental cell biology processes and the pathology of diseases. Finely-set small-scale experiments are not only very expensive but also inefficient to identify numerous interactomes despite their high accuracy. High-throughput screening techniques enable efficient identification of PPIs; yet the desire to further extract useful knowledge from these data leads to the problem of binary interactome mapping. Network topology-based approaches prove to be highly efficient in addressing this problem; however, their performance deteriorates significantly on sparse putative PPI networks. Motivated by the success of collaborative filtering (CF)-based approaches to the problem of personalized-recommendation on large, sparse rating matrices, this work aims at implementing a highly efficient CF-based approach to binary interactome mapping. To achieve this, we first propose a CF framework for it. Under this framework, we model the given data into an interactome weight matrix, where the feature-vectors of involved proteins are extracted. With them, we design the rescaled cosine coefficient to model the inter-neighborhood similarity among involved proteins, for taking the mapping process. Experimental results on three large, sparse datasets demonstrate that the proposed approach outperforms several sophisticated topology-based approaches significantly. PMID:25572661
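A compact sketch of the neighborhood-based CF idea described above, using a plain cosine similarity rather than the paper's rescaled coefficient: proteins are represented by their rows of a small interactome weight matrix, and the score of an unobserved pair is a similarity-weighted vote over the most similar neighbors.

```python
import numpy as np

# Toy interactome weight matrix W (symmetric); W[i, j] > 0 means an observed
# interaction between proteins i and j, 0 means unobserved.
W = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def predict(i, j, k=3):
    """Score the unobserved pair (i, j) from the k proteins most similar to i,
    weighting each neighbor's observed interaction status with j."""
    sims = sorted(((cosine(W[i], W[n]), n) for n in range(len(W)) if n not in (i, j)),
                  reverse=True)[:k]
    num = sum(s * W[n, j] for s, n in sims)
    den = sum(abs(s) for s, n in sims)
    return num / den if den else 0.0

print("score(0 interacts with 3) ~", round(predict(0, 3), 2))
print("score(0 interacts with 5) ~", round(predict(0, 5), 2))
```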
Protein construct storage: Bayesian variable selection and prediction with mixtures.
Clyde, M A; Parmigiani, G
1998-07-01
Determining optimal conditions for protein storage while maintaining a high level of protein activity is an important question in pharmaceutical research. A designed experiment based on a space-filling design was conducted to understand the effects of factors affecting protein storage and to establish optimal storage conditions. Different model-selection strategies to identify important factors may lead to very different answers about optimal conditions. Uncertainty about which factors are important, or model uncertainty, can be a critical issue in decision-making. We use Bayesian variable selection methods for linear models to identify important variables in the protein storage data, while accounting for model uncertainty. We also use the Bayesian framework to build predictions based on a large family of models, rather than an individual model, and to evaluate the probability that certain candidate storage conditions are optimal.
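A small sketch of the model-averaging idea on simulated data, using BIC-based approximate posterior weights rather than the authors' full Bayesian machinery: all subsets of candidate storage factors are fit by least squares, each subset is weighted, and both predictions and per-factor inclusion probabilities are averaged over models instead of relying on a single selected model.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, names = 40, ["temperature", "pH", "additive", "agitation"]
X = rng.normal(size=(n, len(names)))
y = 1.0 - 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)   # only two factors matter

def fit(subset):
    """Least-squares fit on a subset of factors; return BIC and coefficients."""
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = float(np.sum((y - Xs @ beta) ** 2))
    bic = n * np.log(rss / n) + Xs.shape[1] * np.log(n)
    return bic, beta

models = [m for r in range(len(names) + 1) for m in itertools.combinations(range(len(names)), r)]
bics = np.array([fit(m)[0] for m in models])
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()                                   # approximate posterior model probabilities

for j, name in enumerate(names):
    incl = sum(w for w, m in zip(weights, models) if j in m)
    print(f"P({name} matters | data) ~ {incl:.2f}")

x_new = np.array([0.5, -1.0, 0.0, 2.0])                    # candidate storage condition
pred = sum(w * (fit(m)[1] @ np.concatenate(([1.0], x_new[list(m)])))
           for w, m in zip(weights, models))
print("model-averaged prediction:", round(float(pred), 2))
```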
Design of Polymer-Grafted Particles for Biocompatability
NASA Astrophysics Data System (ADS)
Trombly, David; Ganesan, Venkat
2009-03-01
Drug designers often coat drug particles with grafted polymers in order to introduce a net repulsion between the particles and blood proteins. This net repulsion results from the energy cost of compressing grafted chains on approach of proteins. It thus overcomes the van der Waals attraction between drug and protein which would otherwise cause particle-protein agglomeration and ultimately thrombosis. This study proposes to develop a fundamental understanding of the role of different features in controlling the efficacy of the grafted layers. We address this issue by developing a framework to predict the interactions between a polymer-coated spherical particle and a bare spherical particle. In order to fully capture the two-sphere system, a numerical solution of polymer mean field theory is used in a bispherical coordinate system. Results for protein-particle interaction energies for different design parameters will be presented. For biological applications, polyethylene glycol is often used as the grafted polymer. The unique properties of this molecule will be accounted for using the n-cluster model.
Protein design in systems metabolic engineering for industrial strain development.
Chen, Zhen; Zeng, An-Ping
2013-05-01
Accelerating the process of industrial bacterial host strain development, aimed at increasing productivity, generating new bio-products or utilizing alternative feedstocks, requires the integration of complementary approaches to manipulate cellular metabolism and regulatory networks. Systems metabolic engineering extends the concept of classical metabolic engineering to the systems level by incorporating the techniques used in systems biology and synthetic biology, and offers a framework for the development of the next generation of industrial strains. As one of the most useful tools of systems metabolic engineering, protein design allows us to design and optimize cellular metabolism at a molecular level. Here, we review the current strategies of protein design for engineering cellular synthetic pathways, metabolic control systems and signaling pathways, and highlight the challenges of this subfield within the context of systems metabolic engineering. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Recombinant protein blends: silk beyond natural design.
Dinjaski, Nina; Kaplan, David L
2016-06-01
Recombinant DNA technology and new material concepts are shaping future directions in biomaterial science for the design and production of the next-generation biomaterial platforms. Aside from conventionally used synthetic polymers, numerous natural biopolymers (e.g., silk, elastin, collagen, gelatin, alginate, cellulose, keratin, chitin, polyhydroxyalkanoates) have been investigated for properties and manipulation via bioengineering. Genetic engineering provides a path to increase structural and functional complexity of these biopolymers, and thereby expand the catalog of available biomaterials beyond that which exists in nature. In addition, the integration of experimental approaches with computational modeling to analyze sequence-structure-function relationships is starting to have an impact in the field by establishing predictive frameworks for determining material properties. Herein, we review advances in recombinant DNA-mediated protein production and functionalization approaches, with a focus on hybrids or combinations of proteins; recombinant protein blends or 'recombinamers'. We highlight the potential biomedical applications of fibrous protein recombinamers, such as Silk-Elastin Like Polypeptides (SELPs) and Silk-Bacterial Collagens (SBCs). We also discuss the possibility for the rational design of fibrous proteins to build smart, stimuli-responsive biomaterials for diverse applications. We underline current limitations with production systems for these proteins and discuss the main trends in systems/synthetic biology that may improve recombinant fibrous protein design and production. Copyright © 2016. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Verlinde, Christophe L. M. J.; Rudenko, Gabrielle; Hol, Wim G. J.
1992-04-01
A modular method for pursuing structure-based inhibitor design in the framework of a design cycle is presented. The approach entails four stages: (1) a design pathway is defined in the three-dimensional structure of a target protein; (2) this pathway is divided into subregions; (3) complementary building blocks, also called fragments, are designed in each subregion; complementarity is defined in terms of shape, hydrophobicity, hydrogen bond properties and electrostatics; and (4) fragments from different subregions are linked into potential lead compounds. Stages (3) and (4) are qualitatively guided by force-field calculations. In addition, the designed fragments serve as entries for retrieving existing compounds from chemical databases. This linked-fragment approach has been applied in the design of potentially selective inhibitors of triosephosphate isomerase from Trypanosoma brucei, the causative agent of sleeping sickness.
Dynamics simulations for engineering macromolecular interactions
NASA Astrophysics Data System (ADS)
Robinson-Mosher, Avi; Shinar, Tamar; Silver, Pamela A.; Way, Jeffrey
2013-06-01
The predictable engineering of well-behaved transcriptional circuits is a central goal of synthetic biology. The artificial attachment of promoters to transcription factor genes usually results in noisy or chaotic behaviors, and such systems are unlikely to be useful in practical applications. Natural transcriptional regulation relies extensively on protein-protein interactions to insure tightly controlled behavior, but such tight control has been elusive in engineered systems. To help engineer protein-protein interactions, we have developed a molecular dynamics simulation framework that simplifies features of proteins moving by constrained Brownian motion, with the goal of performing long simulations. The behavior of a simulated protein system is determined by summation of forces that include a Brownian force, a drag force, excluded volume constraints, relative position constraints, and binding constraints that relate to experimentally determined on-rates and off-rates for chosen protein elements in a system. Proteins are abstracted as spheres. Binding surfaces are defined radially within a protein. Peptide linkers are abstracted as small protein-like spheres with rigid connections. To address whether our framework could generate useful predictions, we simulated the behavior of an engineered fusion protein consisting of two 20 000 Da proteins attached by flexible glycine/serine-type linkers. The two protein elements remained closely associated, as if constrained by a random walk in three dimensions of the peptide linker, as opposed to showing a distribution of distances expected if movement were dominated by Brownian motion of the protein domains only. We also simulated the behavior of fluorescent proteins tethered by a linker of varying length, compared the predicted Förster resonance energy transfer with previous experimental observations, and obtained a good correspondence. Finally, we simulated the binding behavior of a fusion of two ligands that could simultaneously bind to distinct cell-surface receptors, and explored the landscape of linker lengths and stiffnesses that could enhance receptor binding of one ligand when the other ligand has already bound to its receptor, thus, addressing potential mechanisms for improving targeted signal transduction proteins. These specific results have implications for the design of targeted fusion proteins and artificial transcription factors involving fusion of natural domains. More broadly, the simulation framework described here could be extended to include more detailed system features such as non-spherical protein shapes and electrostatics, without requiring detailed, computationally expensive specifications. This framework should be useful in predicting behavior of engineered protein systems including binding and dissociation reactions.
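A stripped-down sketch of the simulation style described above: two spherical domains joined by a harmonic 'linker', integrated with overdamped Brownian dynamics in which each step combines a drag-scaled deterministic force with a random displacement of matching variance. Excluded volume, binding constraints, and realistic parameters are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0                      # thermal energy (reduced units)
gamma = 1.0                   # drag coefficient per sphere
k_link, r0 = 5.0, 2.0         # harmonic linker stiffness and rest length
dt, n_steps = 1e-3, 200_000

def linker_force(r1, r2):
    """Harmonic spring between the two sphere centres."""
    d = r2 - r1
    dist = max(np.linalg.norm(d), 1e-9)
    f = k_link * (dist - r0) * d / dist
    return f, -f              # force on sphere 1, force on sphere 2

r1 = np.zeros(3)
r2 = np.array([r0, 0.0, 0.0])
sigma = np.sqrt(2.0 * kT * dt / gamma)     # magnitude of the random (Brownian) step
separations = []
for step in range(n_steps):
    f1, f2 = linker_force(r1, r2)
    r1 = r1 + f1 * dt / gamma + sigma * rng.normal(size=3)
    r2 = r2 + f2 * dt / gamma + sigma * rng.normal(size=3)
    if step % 100 == 0:
        separations.append(np.linalg.norm(r2 - r1))

print(f"mean separation = {np.mean(separations):.2f} (linker rest length = {r0})")
```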
Islam, R S; Tisi, D; Levy, M S; Lye, G J
2007-01-01
A major bottleneck in drug discovery is the production of soluble human recombinant protein in sufficient quantities for analysis. This problem is compounded by the complex relationship between protein yield and the large number of variables which affect it. Here, we describe a generic framework for the rapid identification and optimization of factors affecting soluble protein yield in microwell plate fermentations as a prelude to the predictive and reliable scale-up of optimized culture conditions. Recombinant expression of firefly luciferase in Escherichia coli was used as a model system. Two rounds of statistical design of experiments (DoE) were employed to first screen (D-optimal design) and then optimize (central composite face design) the yield of soluble protein. Biological variables from the initial screening experiments included medium type and growth and induction conditions. To provide insight into the impact of the engineering environment on cell growth and expression, plate geometry, shaking speed, and liquid fill volume were included as factors since these strongly influence oxygen transfer into the wells. Compared to standard reference conditions, both the screening and optimization designs gave up to 3-fold increases in the soluble protein yield, i.e., a 9-fold increase overall. In general the highest protein yields were obtained when cells were induced at a relatively low biomass concentration and then allowed to grow slowly up to a high final biomass concentration, >8 g L-1. Consideration and analysis of the model results showed 6 of the original 10 variables to be important at the screening stage and 3 after optimization. The latter included the microwell plate shaking speeds pre- and postinduction, indicating the importance of oxygen transfer into the microwells and identifying this as a critical parameter for subsequent scale translation studies. The optimization process, also known as response surface methodology (RSM), predicted there to be a distinct optimum set of conditions for protein expression which could be verified experimentally. This work provides a generic approach to protein expression optimization in which both biological and engineering variables are investigated from the initial screening stage. The application of DoE reduces the total number of experiments that need to be performed, while experimentation at the microwell scale increases experimental throughput and reduces cost.
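A compact sketch of the response-surface step used in such DoE workflows follows: a second-order model is fit to yields measured at coded factor settings and the predicted optimum is read off. The two factors, design points, and yields below are hypothetical and are not taken from the study.

import numpy as np

# Toy response-surface (RSM) fit: quadratic model of soluble yield as a
# function of two coded factors (e.g. shaking speed and fill volume).
# Design points and responses are made-up illustrations only.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41],
              [0, 0], [0, 0], [0, 0]], dtype=float)
y = np.array([1.2, 2.1, 1.0, 2.8, 1.1, 2.5, 1.4, 1.9, 3.0, 2.9, 3.1])

def quad_terms(X):
    x1, x2 = X[:, 0], X[:, 1]
    # intercept, linear, interaction and pure quadratic terms
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)

# Locate the predicted optimum on a fine grid of coded settings
g = np.linspace(-1.5, 1.5, 61)
grid = np.array([[a, b] for a in g for b in g])
pred = quad_terms(grid) @ beta
best = grid[np.argmax(pred)]
print("model coefficients:", np.round(beta, 3))
print("predicted optimum (coded units):", best, "yield:", round(pred.max(), 2))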
Optimizing energy functions for protein-protein interface design.
Sharabi, Oz; Yanover, Chen; Dekel, Ayelet; Shifman, Julia M
2011-01-15
Protein design methods were originally developed for the design of monomeric proteins. When applied to the more challenging task of protein–protein complex design, these methods yield suboptimal results. In particular, they often fail to recapitulate favorable hydrogen bonds and electrostatic interactions across the interface. In this work, we aim to improve the energy function of the protein design program ORBIT to better account for binding interactions between proteins. Using the advanced machine learning framework of conditional random fields, we optimize the relative importance of all the terms in the energy function, attempting to reproduce the native side-chain conformations in protein–protein interfaces. We evaluate the performance of several optimized energy functions, each describing the van der Waals interactions using a different potential. In comparison with the original energy function, our best energy function (a) incorporates a much "softer" repulsive van der Waals potential, suitable for the discrete rotameric representation of amino acid side chains; (b) does not penalize burial of polar atoms, reflecting the frequent occurrence of buried polar residues in protein–protein interfaces; and (c) significantly up-weights the electrostatic term, attesting to the high importance of these interactions for protein–protein complex formation. Using this energy function considerably improves side-chain placement accuracy for interface residues in a large test set of protein–protein complexes. Moreover, the optimized energy function recovers the native sequences of protein–protein interfaces at a higher rate than the default function and performs substantially better in predicting changes in free energy of binding due to mutations.
Constant pH Molecular Dynamics of Proteins in Explicit Solvent with Proton Tautomerism
Goh, Garrett B.; Hulbert, Benjamin S.; Zhou, Huiqing; Brooks, Charles L.
2015-01-01
pH is a ubiquitous regulator of biological activity, including protein folding, protein-protein interactions and enzymatic activity. Existing constant pH molecular dynamics (CPHMD) models that were developed to address questions related to the pH-dependent properties of proteins are largely based on implicit solvent models. However, implicit solvent models are known to underestimate the desolvation energy of buried charged residues, increasing the error associated with predictions involving internal ionizable residues that are important in processes like hydrogen transport and electron transfer. Furthermore, discrete water molecules and ions, which are important in systems like membrane proteins and ion channels, cannot be modeled in implicit solvent. We report on an explicit solvent constant pH molecular dynamics framework based on multi-site λ-dynamics (CPHMD(MSλD)). In the CPHMD(MSλD) framework, we performed seamless alchemical transitions between protonation and tautomeric states using multi-site λ-dynamics, and designed novel biasing potentials to ensure that the physical end-states are predominantly sampled. We show that explicit solvent CPHMD(MSλD) simulations model realistic pH-dependent properties of proteins such as hen egg white lysozyme (HEWL), the binding domain of 2-oxoglutarate dehydrogenase (BBL) and the N-terminal domain of ribosomal protein L9 (NTL9), and the pKa predictions are in excellent agreement with experimental values, with an RMSE ranging from 0.72 to 0.84 pKa units. With the recent development of the explicit solvent CPHMD(MSλD) framework for nucleic acids, accurate modeling of the pH-dependent properties of both major classes of biomolecules, proteins and nucleic acids, is now possible. PMID:24375620
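One way to see how pKa values are extracted from constant pH simulations is to fit a Henderson-Hasselbalch titration curve to the unprotonated fraction reported at each simulated pH. The sketch below does this with synthetic fractions; the data points and the "experimental" pKa are invented, not taken from the paper.

import numpy as np
from scipy.optimize import curve_fit

# Fit a Hill-type Henderson-Hasselbalch curve to the unprotonated fraction
# S(pH) that a constant-pH simulation would report for one residue.
# The data points below are synthetic placeholders, not simulation output.
def hh(pH, pKa, n):
    return 1.0 / (1.0 + 10.0 ** (n * (pKa - pH)))

pH_values = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
s_unprot = np.array([0.03, 0.16, 0.52, 0.86, 0.97, 0.99])   # synthetic

(pKa_fit, n_fit), _ = curve_fit(hh, pH_values, s_unprot, p0=[4.0, 1.0])
print("fitted pKa = %.2f, Hill coefficient = %.2f" % (pKa_fit, n_fit))

# Deviation from a hypothetical experimental pKa, the quantity behind the
# RMSE figures quoted in the abstract
pKa_exp = 4.0
print("absolute error vs. experiment: %.2f pKa units" % abs(pKa_fit - pKa_exp))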
Hydrogen Tunneling Links Protein Dynamics to Enzyme Catalysis
Klinman, Judith P.; Kohen, Amnon
2014-01-01
The relationship between protein dynamics and function is a subject of considerable contemporary interest. Although protein motions are frequently observed during ligand binding and release steps, the contribution of protein motions to the catalysis of bond making/breaking processes is more difficult to probe and verify. Here, we show how the quantum mechanical hydrogen tunneling associated with enzymatic C–H bond cleavage provides a unique window into the necessity of protein dynamics for achieving optimal catalysis. Experimental findings support a hierarchy of thermodynamically equilibrated motions that control the H-donor and -acceptor distance and active-site electrostatics, creating an ensemble of conformations suitable for H-tunneling. A possible extension of this view to methyl transfer and other catalyzed reactions is also presented. The impact of understanding these dynamics on the conceptual framework for enzyme activity, inhibitor/drug design, and biomimetic catalyst design is likely to be substantial. PMID:23746260
Enabling Large-Scale Design, Synthesis and Validation of Small Molecule Protein-Protein Antagonists
Koes, David; Khoury, Kareem; Huang, Yijun; Wang, Wei; Bista, Michal; Popowicz, Grzegorz M.; Wolf, Siglinde; Holak, Tad A.; Dömling, Alexander; Camacho, Carlos J.
2012-01-01
Although there is no shortage of potential drug targets, there are only a handful of known low-molecular-weight inhibitors of protein-protein interactions (PPIs). One problem is that current efforts are dominated by low-yield high-throughput screening, whose rigid framework is not suitable for the diverse chemotypes present in PPIs. Here, we developed a novel pharmacophore-based interactive screening technology that builds on the role that anchor residues, or deeply buried hot spots, play in PPIs, and redesigns these entry points with anchor-biased virtual multicomponent reactions, delivering tens of millions of readily synthesizable novel compounds. Application of this approach to the MDM2/p53 cancer target led to high hit rates, resulting in a large and diverse set of confirmed inhibitors, and co-crystal structures validate the designed compounds. Our unique open-access technology promises to expand chemical space and the exploration of the human interactome by leveraging in-house small-scale assays and user-friendly chemistry to rationally design ligands for PPIs with known structure. PMID:22427896
MolIDE: a homology modeling framework you can click with.
Canutescu, Adrian A; Dunbrack, Roland L
2005-06-15
Molecular Integrated Development Environment (MolIDE) is an integrated application designed to provide homology modeling tools and protocols under a uniform, user-friendly graphical interface. Its main purpose is to combine the most frequent modeling steps in a semi-automatic, interactive way, guiding the user from the target protein sequence to the final three-dimensional protein structure. The typical basic homology modeling process is composed of building sequence profiles of the target sequence family, secondary structure prediction, sequence alignment with PDB structures, assisted alignment editing, side-chain prediction and loop building. All of these steps are available through a graphical user interface. MolIDE's user-friendly and streamlined interactive modeling protocol allows the user to focus on the important modeling questions, hiding from the user the raw data generation and conversion steps. MolIDE was designed from the ground up as an open-source, cross-platform, extensible framework. This allows developers to integrate additional third-party programs to MolIDE. http://dunbrack.fccc.edu/molide/molide.php rl_dunbrack@fccc.edu.
Sarkar, Debasree; Patra, Piya; Ghosh, Abhirupa; Saha, Sudipto
2016-01-01
A considerable proportion of protein-protein interactions (PPIs) in the cell are estimated to be mediated by very short peptide segments that approximately conform to specific sequence patterns known as linear motifs (LMs), often present in the disordered regions of eukaryotic proteins. These peptides have been found to interact with low affinity and are able to bind to multiple interactors, thus playing an important role in the PPI networks involving date hubs. In this work, PPI data and a de novo motif identification method (MEME) were used to identify such peptides in three cancer-associated hub proteins: MYC, APC and MDM2. The peptides corresponding to the significant LMs identified for each hub protein were aligned, and the overlapping regions across these peptides were termed overlapping linear peptides (OLPs). These OLPs were thus predicted to be responsible for multiple PPIs of the corresponding hub proteins, and a scoring system was developed to rank them. We predicted six OLPs in MYC and five OLPs in MDM2 that scored higher than OLP predictions from randomly generated protein sets. Two OLP sequences from the C-terminal of MYC were predicted to bind with FBXW7, a component of an E3 ubiquitin-protein ligase complex involved in proteasomal degradation of MYC. Similarly, we identified peptides in the C-terminal of MDM2 interacting with FKBP3, which has a specific role in auto-ubiquitinylation of MDM2. The peptide sequences predicted in MYC and MDM2 look promising for designing orthosteric inhibitors against possible disease-associated PPIs. Since these OLPs can interact with other proteins as well, these inhibitors should be specific to the targeted interactor to prevent undesired side-effects. This computational framework has been designed to predict and rank the peptide regions that may mediate multiple PPIs and can be applied to other disease-associated date hub proteins for prediction of novel therapeutic targets of small molecule PPI modulators.
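The overlap-and-rank idea can be stated in a few lines of code: map motif-matching peptides onto the hub-protein sequence, find residue stretches covered by more than one peptide, and rank them by coverage. The sketch below is a toy version; the protein length and peptide coordinates are invented, and the ranking score is a simple stand-in for the paper's scoring system.

from itertools import groupby

# Toy "overlapping linear peptide" (OLP) finder: peptides are (start, end)
# residue ranges on a hub protein; stretches covered by >= 2 peptides are
# reported and ranked by maximum coverage. Coordinates are invented.
protein_length = 120
peptides = [(10, 25), (18, 30), (22, 35), (60, 72), (65, 80), (100, 110)]

coverage = [0] * (protein_length + 1)
for start, end in peptides:
    for i in range(start, end + 1):
        coverage[i] += 1

# Collapse consecutive residues with coverage >= 2 into candidate OLPs
olps = []
for covered, block in groupby(range(1, protein_length + 1),
                              key=lambda i: coverage[i] >= 2):
    block = list(block)
    if covered:
        score = max(coverage[i] for i in block)   # simple ranking score
        olps.append((block[0], block[-1], score))

for start, end, score in sorted(olps, key=lambda x: -x[2]):
    print("OLP %d-%d covered by up to %d peptides" % (start, end, score))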
TGF-beta signaling proteins and the Protein Ontology.
Arighi, Cecilia N; Liu, Hongfang; Natale, Darren A; Barker, Winona C; Drabkin, Harold; Blake, Judith A; Smith, Barry; Wu, Cathy H
2009-05-06
The Protein Ontology (PRO) is designed as a formal and principled Open Biomedical Ontologies (OBO) Foundry ontology for proteins. The components of PRO extend from a classification of proteins on the basis of evolutionary relationships at the homeomorphic level to the representation of the multiple protein forms of a gene, including those resulting from alternative splicing, cleavage and/or post-translational modifications. Focusing specifically on the TGF-beta signaling proteins, we describe the building, curation, usage and dissemination of PRO. PRO is manually curated on the basis of PrePRO, an automatically generated file with content derived from standard protein data sources. Manual curation ensures that the treatment of the protein classes and the internal and external relationships conform to the PRO framework. The current release of PRO is based upon experimental data from mouse and human proteins wherein equivalent protein forms are represented by single terms. In addition to the PRO ontology, the annotation of PRO terms is released as a separate PRO association file, which contains, for each given PRO term, an annotation from the experimentally characterized sub-types as well as the corresponding database identifiers and sequence coordinates. The annotations are added in the form of relationship to other ontologies. Whenever possible, equivalent forms in other species are listed to facilitate cross-species comparison. Splice and allelic variants, gene fusion products and modified protein forms are all represented as entities in the ontology. Therefore, PRO provides for the representation of protein entities and a resource for describing the associated data. This makes PRO useful both for proteomics studies where isoforms and modified forms must be differentiated, and for studies of biological pathways, where representations need to take account of the different ways in which the cascade of events may depend on specific protein modifications. PRO provides a framework for the formal representation of protein classes and protein forms in the OBO Foundry. It is designed to enable data retrieval and integration and machine reasoning at the molecular level of proteins, thereby facilitating cross-species comparisons, pathway analysis, disease modeling and the generation of new hypotheses.
An ensemble framework for identifying essential proteins.
Zhang, Xue; Xiao, Wangxin; Acencio, Marcio Luis; Lemke, Ney; Wang, Xujing
2016-08-25
Many centrality measures have been proposed to mine and characterize the correlations between network topological properties and protein essentiality. However, most of them show limited prediction accuracy, and the number of common predicted essential proteins by different methods is very small. In this paper, an ensemble framework is proposed which integrates gene expression data and protein-protein interaction networks (PINs). It aims to improve the prediction accuracy of basic centrality measures. The idea behind this ensemble framework is that different protein-protein interactions (PPIs) may show different contributions to protein essentiality. Five standard centrality measures (degree centrality, betweenness centrality, closeness centrality, eigenvector centrality, and subgraph centrality) are integrated into the ensemble framework respectively. We evaluated the performance of the proposed ensemble framework using yeast PINs and gene expression data. The results show that it can considerably improve the prediction accuracy of the five centrality measures individually. It can also remarkably increase the number of common predicted essential proteins among those predicted by each centrality measure individually and enable each centrality measure to find more low-degree essential proteins. This paper demonstrates that it is valuable to differentiate the contributions of different PPIs for identifying essential proteins based on network topological characteristics. The proposed ensemble framework is a successful paradigm to this end.
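A minimal sketch of the core idea, weighting each PPI by the co-expression of its two proteins before computing a centrality score, is shown below. The tiny network, the expression profiles, and the use of positive Pearson correlation as the edge weight are all illustrative assumptions, not the paper's exact scheme.

import numpy as np

# Weight each protein-protein interaction by co-expression of its endpoints,
# then rank proteins by weighted degree. Network and profiles are made up.
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]
expr = {
    "A": [1.0, 2.0, 3.0, 4.0],
    "B": [1.1, 2.2, 2.9, 4.2],
    "C": [4.0, 3.0, 2.0, 1.0],
    "D": [0.5, 0.6, 0.7, 0.8],
    "E": [2.0, 2.0, 2.1, 2.0],
}

def coexpression(u, v):
    r = np.corrcoef(expr[u], expr[v])[0, 1]
    return max(r, 0.0)            # keep only positive co-expression

weighted_degree = {p: 0.0 for p in expr}
for u, v in edges:
    w = coexpression(u, v)
    weighted_degree[u] += w
    weighted_degree[v] += w

ranking = sorted(weighted_degree.items(), key=lambda kv: -kv[1])
print("candidate essential proteins (weighted degree):")
for protein, score in ranking:
    print("  %s  %.2f" % (protein, score))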
Protocols for efficient simulations of long-time protein dynamics using coarse-grained CABS model.
Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian
2014-01-01
Coarse-grained (CG) modeling is a well-acknowledged simulation approach for gaining insight into long-time-scale protein folding events at reasonable computational cost. Depending on the design of a CG model, the simulation protocols vary from highly case-specific ones, which require user-defined assumptions about the folding scenario, to more sophisticated blind prediction methods for which only a protein sequence is required. Here we describe the framework protocol for simulations of the long-time dynamics of globular proteins using the CABS CG protein model and sequence data. The simulations can start from a random or a selected (e.g., native) structure. The described protocol has been validated using experimental data for protein folding model systems; the prediction results agreed well with the experimental results.
An ensemble framework for clustering protein-protein interaction networks.
Asur, Sitaram; Ucar, Duygu; Parthasarathy, Srinivasan
2007-07-01
Protein-Protein Interaction (PPI) networks are believed to be important sources of information related to biological processes and complex metabolic functions of the cell. The presence of biologically relevant functional modules in these networks has been theorized by many researchers. However, the application of traditional clustering algorithms for extracting these modules has not been successful, largely due to the presence of noisy false positive interactions as well as specific topological challenges in the network. In this article, we propose an ensemble clustering framework to address this problem. For base clustering, we introduce two topology-based distance metrics to counteract the effects of noise. We develop a PCA-based consensus clustering technique, designed to reduce the dimensionality of the consensus problem and yield informative clusters. We also develop a soft consensus clustering variant to assign multifaceted proteins to multiple functional groups. We conduct an empirical evaluation of different consensus techniques using topology-based, information theoretic and domain-specific validation metrics and show that our approaches can provide significant benefits over other state-of-the-art approaches. Our analysis of the consensus clusters obtained demonstrates that ensemble clustering can (a) produce improved biologically significant functional groupings; and (b) facilitate soft clustering by discovering multiple functional associations for proteins. Supplementary data are available at Bioinformatics online.
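The consensus step can be sketched compactly: several base clusterings of the same proteins are combined into a co-association matrix, which is reduced with PCA and re-clustered. This uses scikit-learn and toy base-clustering labels, and is only a schematic of the consensus idea, not the paper's full pipeline or its topology-based distance metrics.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Toy consensus clustering: combine base clusterings into a co-association
# matrix (fraction of clusterings placing proteins i and j together),
# PCA-reduce it, and re-cluster. Labels are stand-ins for real clusterings.
base_clusterings = np.array([
    [0, 0, 0, 1, 1, 2, 2, 2],     # clustering from metric 1
    [0, 0, 1, 1, 1, 2, 2, 0],     # clustering from metric 2
    [1, 1, 1, 0, 0, 2, 2, 2],     # clustering from metric 3
])
n = base_clusterings.shape[1]

coassoc = np.zeros((n, n))
for labels in base_clusterings:
    coassoc += (labels[:, None] == labels[None, :]).astype(float)
coassoc /= len(base_clusterings)

embedding = PCA(n_components=2).fit_transform(coassoc)
consensus = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)
print("consensus cluster assignment:", consensus)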
Johnson, R Jeremy
2014-01-01
HIV protease has served as a model protein for understanding protein structure, enzyme kinetics, structure-based drug design, and protein evolution. Inhibitors of HIV protease are also an essential part of effective HIV/AIDS treatment and have provided great societal benefits. The broad applications for HIV protease and its inhibitors make it a perfect framework for integrating foundational topics in biochemistry around a big picture scientific and societal issue. Herein, I describe a series of classroom exercises that integrate foundational topics in biochemistry around the structure, biology, and therapeutic inhibition of HIV protease. These exercises center on foundational topics in biochemistry including thermodynamics, acid/base properties, protein structure, ligand binding, and enzymatic catalysis. The exercises also incorporate regular student practice of scientific skills including analysis of primary literature, evaluation of scientific data, and presentation of technical scientific arguments. Through the exercises, students also gain experience accessing computational biochemical resources such as the protein data bank, Proteopedia, and protein visualization software. As these HIV centered exercises cover foundational topics common to all first semester biochemistry courses, these exercises should appeal to a broad audience of undergraduate students and should be readily integrated into a variety of teaching styles and classroom sizes. © 2014 The International Union of Biochemistry and Molecular Biology.
SIMS: A Hybrid Method for Rapid Conformational Analysis
Gipson, Bryant; Moll, Mark; Kavraki, Lydia E.
2013-01-01
Proteins are at the root of many biological functions, often performing complex tasks as the result of large changes in their structure. Describing the exact details of these conformational changes, however, remains a central challenge for computational biology due to the enormous computational requirements of the problem. This has engendered the development of a rich variety of useful methods designed to answer specific questions at different levels of spatial, temporal, and energetic resolution. These methods fall largely into two classes: physically accurate, but computationally demanding methods and fast, approximate methods. We introduce here a new hybrid modeling tool, the Structured Intuitive Move Selector (sims), designed to bridge the divide between these two classes, while allowing the benefits of both to be seamlessly integrated into a single framework. This is achieved by applying a modern motion planning algorithm, borrowed from the field of robotics, in tandem with a well-established protein modeling library. sims can combine precise energy calculations with approximate or specialized conformational sampling routines to produce rapid, yet accurate, analysis of the large-scale conformational variability of protein systems. Several key advancements are shown, including the abstract use of generically defined moves (conformational sampling methods) and an expansive probabilistic conformational exploration. We present three example problems to which sims is applied and demonstrate a rapid solution for each. These include the automatic determination of "active" residues for the hinge-based system Cyanovirin-N, exploring conformational changes involving long-range coordinated motion between non-sequential residues in Ribose-Binding Protein, and the rapid discovery of a transient conformational state of Maltose-Binding Protein, previously only determined by Molecular Dynamics. For all cases we provide energetic validations using well-established energy fields, demonstrating this framework as a fast and accurate tool for the analysis of a wide range of protein flexibility problems. PMID:23935893
A computational fluid dynamics simulation framework for ventricular catheter design optimization.
Weisenberg, Sofy H; TerMaath, Stephanie C; Barbier, Charlotte N; Hill, Judith C; Killeffer, James A
2017-11-10
OBJECTIVE Cerebrospinal fluid (CSF) shunts are the primary treatment for patients suffering from hydrocephalus. While proven effective in symptom relief, these shunt systems are plagued by high failure rates and often require repeated revision surgeries to replace malfunctioning components. One of the leading causes of CSF shunt failure is obstruction of the ventricular catheter by aggregations of cells, proteins, blood clots, or fronds of choroid plexus that occlude the catheter's small inlet holes or even the full internal catheter lumen. Such obstructions can disrupt CSF diversion out of the ventricular system or impede it entirely. Previous studies have suggested that altering the catheter's fluid dynamics may help to reduce the likelihood of complete ventricular catheter failure caused by obstruction. However, systematic correlation between a ventricular catheter's design parameters and its performance, specifically its likelihood to become occluded, still remains unknown. Therefore, an automated, open-source computational fluid dynamics (CFD) simulation framework was developed for use in the medical community to determine optimized ventricular catheter designs and to rapidly explore parameter influence for a given flow objective. METHODS The computational framework was developed by coupling a 3D CFD solver and an iterative optimization algorithm and was implemented in a high-performance computing environment. The capabilities of the framework were demonstrated by computing an optimized ventricular catheter design that provides uniform flow rates through the catheter's inlet holes, a common design objective in the literature. The baseline computational model was validated using 3D nuclear imaging to provide flow velocities at the inlet holes and through the catheter. RESULTS The optimized catheter design achieved through use of the automated simulation framework improved significantly on previous attempts to reach a uniform inlet flow rate distribution using the standard catheter hole configuration as a baseline. While the standard ventricular catheter design featuring uniform inlet hole diameters and hole spacing has a standard deviation of 14.27% for the inlet flow rates, the optimized design has a standard deviation of 0.30%. CONCLUSIONS This customizable framework, paired with high-performance computing, provides a rapid method of design testing to solve complex flow problems. While a relatively simplified ventricular catheter model was used to demonstrate the framework, the computational approach is applicable to any baseline catheter model, and it is easily adapted to optimize catheters for the unique needs of different patients as well as for other fluid-based medical devices.
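The uniformity objective quoted above (standard deviation of per-hole flow rates as a percentage of the mean) is easy to restate in code. The flow-rate vectors below are hypothetical stand-ins, not CFD output from the study.

import numpy as np

# Uniformity metric from the abstract: standard deviation of the per-hole
# flow rates expressed as a percentage of the mean. Values are hypothetical.
def flow_uniformity_pct(flows):
    flows = np.asarray(flows, dtype=float)
    return 100.0 * flows.std() / flows.mean()

baseline_flows = [0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.5, 1.2]    # uneven inlet holes
optimized_flows = [1.124, 1.125, 1.126, 1.125, 1.124, 1.125, 1.126, 1.125]

print("baseline  uniformity: %.2f%%" % flow_uniformity_pct(baseline_flows))
print("optimized uniformity: %.2f%%" % flow_uniformity_pct(optimized_flows))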
Computational design of co-assembling protein-DNA nanowires
NASA Astrophysics Data System (ADS)
Mou, Yun; Yu, Jiun-Yann; Wannier, Timothy M.; Guo, Chin-Lin; Mayo, Stephen L.
2015-09-01
Biomolecular self-assemblies are of great interest to nanotechnologists because of their functional versatility and their biocompatibility. Over the past decade, sophisticated single-component nanostructures composed exclusively of nucleic acids, peptides and proteins have been reported, and these nanostructures have been used in a wide range of applications, from drug delivery to molecular computing. Despite these successes, the development of hybrid co-assemblies of nucleic acids and proteins has remained elusive. Here we use computational protein design to create a protein-DNA co-assembling nanomaterial whose assembly is driven via non-covalent interactions. To achieve this, a homodimerization interface is engineered onto the Drosophila Engrailed homeodomain (ENH), allowing the dimerized protein complex to bind to two double-stranded DNA (dsDNA) molecules. By varying the arrangement of protein-binding sites on the dsDNA, an irregular bulk nanoparticle or a nanowire with single-molecule width can be spontaneously formed by mixing the protein and dsDNA building blocks. We characterize the protein-DNA nanowire using fluorescence microscopy, atomic force microscopy and X-ray crystallography, confirming that the nanowire is formed via the proposed mechanism. This work lays the foundation for the development of new classes of protein-DNA hybrid materials. Further applications can be explored by incorporating DNA origami, DNA aptamers and/or peptide epitopes into the protein-DNA framework presented here.
Controlling cell-free metabolism through physiochemical perturbations.
Karim, Ashty S; Heggestad, Jacob T; Crowe, Samantha A; Jewett, Michael C
2018-01-01
Building biosynthetic pathways and engineering metabolic reactions in cells can be time-consuming due to complexities in cellular metabolism. These complexities often convolute the combinatorial testing of biosynthetic pathway designs needed to define an optimal biosynthetic system. To simplify the optimization of biosynthetic systems, we recently reported a new cell-free framework for pathway construction and testing. In this framework, multiple crude-cell extracts are selectively enriched with individual pathway enzymes, which are then mixed to construct full biosynthetic pathways on the time scale of a day. This rapid approach to building pathways aids in the study of metabolic pathway performance by providing a unique freedom of design to modify and control biological systems for both fundamental and applied biotechnology. The goal of this work was to demonstrate the ability to probe biosynthetic pathway performance in our cell-free framework by perturbing physiochemical conditions, using n-butanol synthesis as a model. We carried out three unique case studies. First, we demonstrated the power of our cell-free approach to maximize biosynthesis yields by mapping physiochemical landscapes using a robotic liquid-handler. This allowed us to determine that NAD and CoA are the most important factors that govern cell-free n-butanol metabolism. Second, we compared metabolic profile differences between two different approaches for building pathways from enriched lysates, heterologous expression and cell-free protein synthesis. We discover that phosphate from PEP utilization, along with other physiochemical reagents, during cell-free protein synthesis-coupled, crude-lysate metabolic system operation inhibits optimal cell-free n-butanol metabolism. Third, we show that non-phosphorylated secondary energy substrates can be used to fuel cell-free protein synthesis and n-butanol biosynthesis. Taken together, our work highlights the ease of using cell-free systems to explore physiochemical perturbations and suggests the need for a more controllable, multi-step, separated cell-free framework for future pathway prototyping and enzyme discovery efforts. Copyright © 2017 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
An Appetite for Modernizing the Regulatory Framework for Protein Content Claims in Canada
Marinangeli, Christopher P. F.; Foisy, Samara; Shoveller, Anna K.; Porter, Cara; Musa-Veloso, Kathy; Sievenpiper, John L.; Jenkins, David J. A.
2017-01-01
The need for protein-rich plant-based foods continues as dietary guidelines emphasize their contribution to healthy dietary patterns that prevent chronic disease and promote environmental sustainability. However, the Canadian Food and Drug Regulations provide a regulatory framework that can prevent Canadian consumers from identifying protein-rich plant-based foods. In Canada, protein nutrient content claims are based on the protein efficiency ratio (PER) and protein rating method, which is based on a rat growth bioassay. PERs are not additive, and the protein rating of a food is underpinned by its Reasonable Daily Intake. The restrictive nature of Canada’s requirements for supporting protein claims therefore presents challenges for Canadian consumers to adapt to a rapidly changing food environment. This commentary will present two options for modernizing the regulatory framework for protein content claims in Canada. The first and preferred option advocates that protein quality not be considered in the determination of the eligibility of a food for protein content claims. The second and less preferred option, an interim solution, is a framework for adopting the protein digestibility corrected amino acid score as the official method for supporting protein content and quality claims and harmonizes Canada’s regulatory framework with that of the USA. PMID:28832556
Building blocks for protein interaction devices
Grünberg, Raik; Ferrar, Tony S.; van der Sloot, Almer M.; Constante, Marco; Serrano, Luis
2010-01-01
Here, we propose a framework for the design of synthetic protein networks from modular protein-protein or protein-peptide interactions and provide a starter toolkit of protein building blocks. Our proof-of-concept experiments outline a general work flow for part-based protein systems engineering. We streamlined the iterative BioBrick cloning protocol and assembled 25 synthetic multidomain proteins, each from seven standardized DNA fragments. A systematic screen revealed two main factors controlling protein expression in Escherichia coli: obstruction of translation initiation by mRNA secondary structure or toxicity of individual domains. Eventually, 13 proteins were purified for further characterization. Starting from well-established biotechnological tools, two general-purpose interaction input devices and two readout devices were built and characterized in vitro. Constitutive interaction input was achieved with a pair of synthetic leucine zippers. The second interaction was drug-controlled, utilizing the rapamycin-induced binding of FRB(T2098L) to FKBP12. The interaction kinetics of both devices were analyzed by surface plasmon resonance. Readout was based on Förster resonance energy transfer between fluorescent proteins and was quantified for various combinations of input and output devices. Our results demonstrate the feasibility of parts-based protein synthetic biology. Additionally, we identify future challenges and limitations of modular design along with approaches to address them. PMID:20215443
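FRET readouts like the one above are commonly interpreted with the standard Förster relation E = 1/(1 + (r/R0)^6). The short sketch below evaluates it for a few donor-acceptor separations; the distances and the Förster radius are illustrative values, not measurements from this work.

# Standard Förster relation used to interpret FRET readouts:
#   E = 1 / (1 + (r / R0)**6)
# Distances and R0 below are illustrative, roughly in the CFP/YFP range.
def fret_efficiency(r_nm, r0_nm=4.9):
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

for r in (3.0, 4.9, 6.0, 8.0):
    print("r = %.1f nm -> E = %.2f" % (r, fret_efficiency(r)))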
Esfahani, Mohammad Shahrokh; Dougherty, Edward R
2015-01-01
Phenotype classification via genomic data is hampered by small sample sizes that negatively impact classifier design. Utilization of prior biological knowledge in conjunction with training data can improve both classifier design and error estimation via the construction of the optimal Bayesian classifier. In the genomic setting, gene/protein signaling pathways provide a key source of biological knowledge. Although these pathways are neither complete nor regulatory, with no timing associated with them, they are capable of constraining the set of possible models representing the underlying interactions between molecules. The aim of this paper is to provide a framework and the mathematical tools to transform signaling pathways into prior probabilities governing uncertainty classes of feature-label distributions used in classifier design. Structural motifs extracted from the signaling pathways are mapped to a set of constraints on a prior probability on a multinomial distribution. Because the Dirichlet distribution is the conjugate prior for the multinomial distribution, we propose optimization paradigms to estimate the parameters of a Dirichlet distribution in the Bayesian setting. The performance of the proposed methods is tested on two widely studied pathways: the mammalian cell cycle and a p53 pathway model.
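The conjugacy being exploited here is easy to show in miniature: a Dirichlet prior whose pseudo-counts encode pathway-derived constraints is updated by multinomial counts to give a Dirichlet posterior. The pseudo-counts and observed counts below are invented and are not the paper's optimization procedure.

import numpy as np

# Dirichlet-multinomial conjugacy: pathway-derived constraints expressed as
# Dirichlet pseudo-counts, updated by observed state counts. Numbers invented.
alpha_prior = np.array([4.0, 1.0, 1.0, 2.0])   # pseudo-counts from pathway motifs
counts = np.array([10, 2, 0, 5])               # observed state counts (toy data)

alpha_post = alpha_prior + counts              # posterior is again Dirichlet
posterior_mean = alpha_post / alpha_post.sum()
print("posterior mean state probabilities:", np.round(posterior_mean, 3))

# Draw a few plausible state distributions from the posterior
rng = np.random.default_rng(1)
print(np.round(rng.dirichlet(alpha_post, size=3), 3))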
Sequence Determines Degree of Knottedness in a Coarse-Grained Protein Model
NASA Astrophysics Data System (ADS)
Wüst, Thomas; Reith, Daniel; Virnau, Peter
2015-01-01
Knots are abundant in globular homopolymers but rare in globular proteins. To shed new light on this long-standing conundrum, we study the influence of sequence on the formation of knots in proteins under native conditions within the framework of the hydrophobic-polar lattice protein model. By employing large-scale Wang-Landau simulations combined with suitable Monte Carlo trial moves we show that even though knots are still abundant on average, sequence introduces large variability in the degree of self-entanglements. Moreover, we are able to design sequences which are either almost always or almost never knotted. Our findings serve as proof of concept that the introduction of just one additional degree of freedom per monomer (in our case sequence) facilitates evolution towards a protein universe in which knots are rare.
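The Wang-Landau machinery referenced above can be illustrated with a minimal flat-histogram loop on a toy one-dimensional energy ladder (one configuration per energy bin, so the true density of states is flat). This is only a sketch of the sampling method, not the HP lattice protein model or the authors' trial moves.

import numpy as np

# Minimal Wang-Landau flat-histogram sampler on a toy energy ladder.
rng = np.random.default_rng(2)
n_bins = 12
log_g = np.zeros(n_bins)          # running estimate of ln(density of states)
hist = np.zeros(n_bins)
log_f = 1.0                       # modification factor ln(f)
state = 0

for sweep in range(300):                      # bounded refinement schedule
    if log_f < 1e-4:
        break
    for _ in range(5000):
        proposal = (state + rng.choice([-1, 1])) % n_bins
        # Accept with probability min(1, g(current)/g(proposal))
        if np.log(rng.random()) < log_g[state] - log_g[proposal]:
            state = proposal
        log_g[state] += log_f                 # update the visited bin
        hist[state] += 1
    if hist.min() > 0.8 * hist.mean():        # flat-histogram criterion
        hist[:] = 0
        log_f /= 2.0

# For this toy ladder the estimate should come out roughly flat
print("estimated ln g(E), shifted to min 0:", np.round(log_g - log_g.min(), 2))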
Scoring functions for protein-protein interactions.
Moal, Iain H; Moretti, Rocco; Baker, David; Fernández-Recio, Juan
2013-12-01
The computational evaluation of protein-protein interactions will play an important role in organising the wealth of data being generated by high-throughput initiatives. Here we discuss future applications, report recent developments and identify areas requiring further investigation. Many functions have been developed to quantify the structural and energetic properties of interacting proteins, finding use in interrelated challenges revolving around the relationship between sequence, structure and binding free energy. These include loop modelling, side-chain refinement, docking, multimer assembly, affinity prediction, affinity change upon mutation, hotspots location and interface design. Information derived from models optimised for one of these challenges can be used to benefit the others, and can be unified within the theoretical frameworks of multi-task learning and Pareto-optimal multi-objective learning. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Cheng, Ryan; Morcos, Faruck; Levine, Herbert; Onuchic, Jose
2014-03-01
An important challenge in biology is to distinguish the subset of residues that allow bacterial two-component signaling (TCS) proteins to preferentially interact with their correct TCS partner such that they can bind and transfer signal. Detailed knowledge of this information would allow one to search sequence-space for mutations that can systematically tune the signal transmission between TCS partners as well as re-encode a TCS protein to preferentially transfer signals to a non-partner. Motivated by the notion that this detailed information is found in sequence data, we explore the mutual sequence co-evolution between signaling partners to infer how mutations can positively or negatively alter their interaction. Using Direct Coupling Analysis (DCA) for determining evolutionarily conserved interprotein interactions, we apply a DCA-based metric to quantify mutational changes in the interaction between TCS proteins and demonstrate that it accurately correlates with experimental mutagenesis studies probing the mutational change in the in vitro phosphotransfer. Our methodology serves as a potential framework for the rational design of TCS systems as well as a framework for the system-level study of protein-protein interactions in sequence-rich systems. This research has been supported by the NSF INSPIRE award MCB-1241332 and by the CTBP sponsored by the NSF (Grant PHY-1308264).
Mobius Assembly: A versatile Golden-Gate framework towards universal DNA assembly.
Andreou, Andreas I; Nakayama, Naomi
2018-01-01
Synthetic biology builds upon the foundation of engineering principles, prompting innovation and improvement in biotechnology via a design-build-test-learn cycle. A community-wide standard in DNA assembly would enable bio-molecular engineering at levels of predictivity and universality in design and construction that are comparable to other engineering fields. Golden Gate Assembly technology, with its robust capability to unidirectionally assemble numerous DNA fragments in a one-tube reaction, has the potential to deliver a universal standard framework for DNA assembly. While current Golden Gate Assembly frameworks (e.g. MoClo and Golden Braid) offer either high cloning capacity or vector toolkit simplicity, the technology can be made more versatile: simple, streamlined, and cost/labor-efficient, without compromising capacity. Here we report the development of a new Golden Gate Assembly framework named Mobius Assembly, which combines vector toolkit simplicity with high cloning capacity. It is based on a two-level, hierarchical approach and utilizes a low-frequency cutter to reduce domestication requirements. Mobius Assembly embraces the standard overhang designs designated by MoClo, Golden Braid, and Phytobricks and is largely compatible with already available Golden Gate part libraries. In addition, dropout cassettes encoding chromogenic proteins were implemented for cost-free visible cloning screening, color-coding the different cloning levels. As proofs of concept, we have successfully assembled up to 16 transcriptional units of various pigmentation genes in both operon and multigene arrangements. Taken together, Mobius Assembly delivers enhanced versatility and efficiency in DNA assembly, facilitating improved standardization and automation.
Benchmarking protein classification algorithms via supervised cross-validation.
Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netoteia, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor
2008-04-24
Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates on how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and train sets according to the known subtypes within a database has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection that currently includes protein sequence (including protein domains and entire proteins), protein structure and reading frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with comparison algorithms, BLAST, Smith-Waterman, Needleman-Wunsch, as well as 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic estimates of the classifier performance than do random cross-validation schemes. A combination of supervised and random sampling was used to construct model datasets, suitable for algorithm comparison.
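The supervised-split idea can be stated very simply: hold out entire subtypes of a class rather than random members, so test proteins are only distantly related to training proteins. The sketch below uses toy class/subtype labels and is only a schematic of the leave-one-subtype-out principle, not the benchmark's graph-theoretic distance machinery.

from collections import defaultdict

# Leave-one-subtype-out splitting: each fold tests generalisation to an
# unseen subtype of the protein class. Labels below are toy data.
proteins = [
    ("p1", "kinase", "subtypeA"), ("p2", "kinase", "subtypeA"),
    ("p3", "kinase", "subtypeB"), ("p4", "kinase", "subtypeB"),
    ("p5", "kinase", "subtypeC"), ("p6", "kinase", "subtypeC"),
]

by_subtype = defaultdict(list)
for name, family, subtype in proteins:
    by_subtype[subtype].append(name)

for held_out in sorted(by_subtype):
    test = by_subtype[held_out]
    train = [p for s, members in sorted(by_subtype.items())
             if s != held_out for p in members]
    print("hold out %s -> train on %s, test on %s" % (held_out, train, test))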
Structure-based design of combinatorial mutagenesis libraries.
Verma, Deeptak; Grigoryan, Gevorg; Bailey-Kellogg, Chris
2015-05-01
The development of protein variants with improved properties (thermostability, binding affinity, catalytic activity, etc.) has greatly benefited from the application of high-throughput screens evaluating large, diverse combinatorial libraries. At the same time, since only a very limited portion of sequence space can be experimentally constructed and tested, an attractive possibility is to use computational protein design to focus libraries on a productive portion of the space. We present a general-purpose method, called "Structure-based Optimization of Combinatorial Mutagenesis" (SOCoM), which can optimize arbitrarily large combinatorial mutagenesis libraries directly based on structural energies of their constituents. SOCoM chooses both positions and substitutions, employing a combinatorial optimization framework based on library-averaged energy potentials in order to avoid explicitly modeling every variant in every possible library. In case study applications to green fluorescent protein, β-lactamase, and lipase A, SOCoM optimizes relatively small, focused libraries whose variants achieve energies comparable to or better than previous library design efforts, as well as larger libraries (previously not designable by structure-based methods) whose variants cover greater diversity while still maintaining substantially better energies than would be achieved by representative random library approaches. By allowing the creation of large-scale combinatorial libraries based on structural calculations, SOCoM promises to increase the scope of applicability of computational protein design and improve the hit rate of discovering beneficial variants. While designs presented here focus on variant stability (predicted by total energy), SOCoM can readily incorporate other structure-based assessments, such as the energy gap between alternative conformational or bound states. © 2015 The Protein Society.
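A toy version of "library-averaged" scoring follows: assuming an additive per-position energy table, the mean energy of a combinatorial library is the sum of per-position averages over the allowed substitutions, so no variant enumeration is needed. The energy table and allowed substitution sets are invented; this is a schematic of the averaging idea, not SOCoM's actual potentials.

import numpy as np
from itertools import product

# Library-averaged energy under an assumed additive per-position model.
# Energy table and allowed substitution sets are invented.
positions = [10, 42, 77]
energy = {                      # energy[position][amino_acid]
    10: {"A": -1.0, "S": -0.4, "T": 0.2},
    42: {"L": -0.8, "I": -0.7, "V": 0.1, "F": 0.5},
    77: {"D": -0.3, "E": -0.2},
}
library = {10: ["A", "S"], 42: ["L", "I", "V"], 77: ["D", "E"]}

avg_energy = sum(np.mean([energy[p][aa] for aa in library[p]]) for p in positions)
library_size = np.prod([len(library[p]) for p in positions])
print("library size: %d variants" % library_size)
print("library-averaged energy: %.2f" % avg_energy)

# Brute-force check, feasible only because this example library is tiny
variants = product(*(library[p] for p in positions))
brute = np.mean([sum(energy[p][aa] for p, aa in zip(positions, v)) for v in variants])
print("enumerated mean energy:  %.2f" % brute)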
Host cell proteins in biotechnology-derived products: A risk assessment framework.
de Zafra, Christina L Zuch; Quarmby, Valerie; Francissen, Kathleen; Vanderlaan, Martin; Zhu-Shimoni, Judith
2015-11-01
To manufacture biotechnology products, mammalian or bacterial cells are engineered for the production of recombinant therapeutic human proteins including monoclonal antibodies. Host cells synthesize an entire repertoire of proteins which are essential for their own function and survival. Biotechnology manufacturing processes are designed to produce recombinant therapeutics with a very high degree of purity. While there is typically a low residual level of host cell protein in the final drug product, under some circumstances a host cell protein(s) may copurify with the therapeutic protein and, if it is not detected and removed, it may become an unintended component of the final product. The purpose of this article is to enumerate and discuss factors to be considered in an assessment of risk of residual host cell protein(s) detected and identified in the drug product. The consideration of these factors and their relative ranking will lead to an overall risk assessment that informs decision-making around how to control the levels of host cell proteins. © 2015 Wiley Periodicals, Inc.
Accelerating large-scale protein structure alignments with graphics processing units
2012-01-01
Background Large-scale protein structure alignment, an indispensable tool to structural bioinformatics, poses a tremendous challenge on computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign could take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues from protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using massive parallel computing power of GPU. PMID:22357132
JAXA protein crystallization in space: ongoing improvements for growing high-quality crystals
Takahashi, Sachiko; Ohta, Kazunori; Furubayashi, Naoki; Yan, Bin; Koga, Misako; Wada, Yoshio; Yamada, Mitsugu; Inaka, Koji; Tanaka, Hiroaki; Miyoshi, Hiroshi; Kobayashi, Tomoyuki; Kamigaichi, Shigeki
2013-01-01
The Japan Aerospace Exploration Agency (JAXA) started a high-quality protein crystal growth project, now called JAXA PCG, on the International Space Station (ISS) in 2002. Using the counter-diffusion technique, 14 sessions of experiments have been performed as of 2012 with 580 proteins crystallized in total. Over the course of these experiments, a user-friendly interface framework for high accessibility has been constructed and crystallization techniques improved; devices to maximize the use of the microgravity environment have been designed, resulting in some high-resolution crystal growth. If crystallization conditions were carefully fixed in ground-based experiments, high-quality protein crystals grew in microgravity in many experiments on the ISS, especially when a highly homogeneous protein sample and a viscous crystallization solution were employed. In this article, the current status of JAXA PCG is discussed, and a rational approach to high-quality protein crystal growth in microgravity based on numerical analyses is explained. PMID:24121350
REVIEW ARTICLE: Oscillations and temporal signalling in cells
NASA Astrophysics Data System (ADS)
Tiana, G.; Krishna, S.; Pigolotti, S.; Jensen, M. H.; Sneppen, K.
2007-06-01
The development of new techniques to quantitatively measure gene expression in cells has shed light on a number of systems that display oscillations in protein concentration. Here we review the different mechanisms which can produce oscillations in gene expression or protein concentration using a framework of simple mathematical models. We focus on three eukaryotic genetic regulatory networks which show 'ultradian' oscillations, with a time period of the order of hours, and involve, respectively, proteins important for development (Hes1), apoptosis (p53) and immune response (NF-κB). We argue that underlying all three is a common design consisting of a negative feedback loop with time delay which is responsible for the oscillatory behaviour.
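To make the common design concrete, the following is a minimal, self-contained sketch of a delayed negative feedback loop of the kind the review describes; the Hill-type repression form and all parameter values are illustrative assumptions, not values taken from the Hes1, p53 or NF-κB systems.

```python
# Minimal sketch (not from the review): a delayed negative-feedback loop,
# the common design behind ultradian oscillators. All parameters are illustrative.
beta, K, n = 1.0, 0.5, 4.0        # max production, repression threshold, Hill coefficient
gamma = 0.3                       # degradation rate (1/h)
tau, dt, t_end = 3.0, 0.01, 60.0  # feedback delay (h), time step (h), simulated time (h)

steps = int(t_end / dt)
delay_steps = int(tau / dt)
p = [0.1] * (steps + 1)           # protein concentration; flat history before t = 0

for i in range(steps):
    p_delayed = p[i - delay_steps] if i >= delay_steps else p[0]
    production = beta / (1.0 + (p_delayed / K) ** n)   # delayed Hill repression
    p[i + 1] = p[i] + dt * (production - gamma * p[i])

# local maxima late in the run indicate sustained oscillation with a period of hours
peaks = [i * dt for i in range(1, steps) if p[i - 1] < p[i] > p[i + 1]]
print("late peak times (h):", [round(t, 1) for t in peaks[-3:]])
```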
Kurakin, Alexei
2007-01-01
A large body of experimental evidence indicates that the specific molecular interactions and/or chemical conversions depicted as links in the conventional diagrams of cellular signal transduction and metabolic pathways are inherently probabilistic, ambiguous and context-dependent. Being the inevitable consequence of the dynamic nature of protein structure in solution, the ambiguity of protein-mediated interactions and conversions challenges the conceptual adequacy and practical usefulness of the mechanistic assumptions and inferences embodied in the design charts of cellular circuitry. It is argued that the reconceptualization of molecular recognition and cellular organization within the emerging interpretational framework of self-organization, which is expanded here to include such concepts as bounded stochasticity, evolutionary memory, and adaptive plasticity offers a significantly more adequate representation of experimental reality than conventional mechanistic conceptions do. Importantly, the expanded framework of self-organization appears to be universal and scale-invariant, providing conceptual continuity across multiple scales of biological organization, from molecules to societies. This new conceptualization of biological phenomena suggests that such attributes of intelligence as adaptive plasticity, decision-making, and memory are enforced by evolution at different scales of biological organization and may represent inherent properties of living matter. (c) 2007 John Wiley & Sons, Ltd.
Modelling proteins' hidden conformations to predict antibiotic resistance
NASA Astrophysics Data System (ADS)
Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.
2016-10-01
TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM's specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models' prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design.
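As an illustration of the population-weighted scoring idea behind the "Boltzmann docking" summarized above, here is a hedged toy sketch: per-state docking scores are averaged using MSM equilibrium populations. The function name, populations and scores are invented for illustration and are not taken from the study.

```python
# Toy illustration of population-weighted ("Boltzmann") docking: combine per-state
# docking scores using MSM equilibrium populations. All numbers are invented.

def boltzmann_dock_score(populations, docking_scores):
    """Population-weighted docking score over MSM states."""
    assert abs(sum(populations) - 1.0) < 1e-9, "state populations must sum to 1"
    return sum(p * s for p, s in zip(populations, docking_scores))

msm_populations = [0.70, 0.25, 0.05]   # dominant state plus two "hidden" states
scores_kcal = [-6.2, -8.9, -7.4]       # per-state docking score for one ligand (kcal/mol)
print(boltzmann_dock_score(msm_populations, scores_kcal))   # -> -6.935
```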
Wood, Christopher W; Bruning, Marc; Ibarra, Amaurys Á; Bartlett, Gail J; Thomson, Andrew R; Sessions, Richard B; Brady, R Leo; Woolfson, Derek N
2014-11-01
The ability to accurately model protein structures at the atomistic level underpins efforts to understand protein folding, to engineer natural proteins predictably and to design proteins de novo. Homology-based methods are well established and produce impressive results. However, these are limited to structures presented by and resolved for natural proteins. Addressing this problem more widely and deriving truly ab initio models requires mathematical descriptions for protein folds; the means to decorate these with natural, engineered or de novo sequences; and methods to score the resulting models. We present CCBuilder, a web-based application that tackles the problem for a defined but large class of protein structure, the α-helical coiled coils. CCBuilder generates coiled-coil backbones, builds side chains onto these frameworks and provides a range of metrics to measure the quality of the models. Its straightforward graphical user interface provides broad functionality that allows users to build and assess models, in which helix geometry, coiled-coil architecture and topology and protein sequence can be varied rapidly. We demonstrate the utility of CCBuilder by assembling models for 653 coiled-coil structures from the PDB, which cover >96% of the known coiled-coil types, and by generating models for rarer and de novo coiled-coil structures. CCBuilder is freely available, without registration, at http://coiledcoils.chm.bris.ac.uk/app/cc_builder/. © The Author 2014. Published by Oxford University Press.
Karlberg, Micael; von Stosch, Moritz; Glassey, Jarka
2018-03-07
In today's biopharmaceutical industries, the lead time to develop and produce a new monoclonal antibody takes years before it can be launched commercially. The reasons lie in the complexity of the monoclonal antibodies and the need for high product quality to ensure clinical safety which has a significant impact on the process development time. Frameworks such as quality by design are becoming widely used by the pharmaceutical industries as they introduce a systematic approach for building quality into the product. However, full implementation of quality by design has still not been achieved due to attrition mainly from limited risk assessment of product properties as well as the large number of process factors affecting product quality that needs to be investigated during the process development. This has introduced a need for better methods and tools that can be used for early risk assessment and predictions of critical product properties and process factors to enhance process development and reduce costs. In this review, we investigate how the quantitative structure-activity relationships framework can be applied to an existing process development framework such as quality by design in order to increase product understanding based on the protein structure of monoclonal antibodies. Compared to quality by design, where the effect of process parameters on the drug product are explored, quantitative structure-activity relationships gives a reversed perspective which investigates how the protein structure can affect the performance in different unit operations. This provides valuable information that can be used during the early process development of new drug products where limited process understanding is available. Thus, quantitative structure-activity relationships methodology is explored and explained in detail and we investigate the means of directly linking the structural properties of monoclonal antibodies to process data. The resulting information as a decision tool can help to enhance the risk assessment to better aid process development and thereby overcome some of the limitations and challenges present in QbD implementation today.
Mobius Assembly: A versatile Golden-Gate framework towards universal DNA assembly
Andreou, Andreas I.
2018-01-01
Synthetic biology builds upon the foundation of engineering principles, prompting innovation and improvement in biotechnology via a design-build-test-learn cycle. A community-wide standard in DNA assembly would enable bio-molecular engineering at the levels of predictivity and universality in design and construction that are comparable to other engineering fields. Golden Gate Assembly technology, with its robust capability to unidirectionally assemble numerous DNA fragments in a one-tube reaction, has the potential to deliver a universal standard framework for DNA assembly. While current Golden Gate Assembly frameworks (e.g. MoClo and Golden Braid) render either high cloning capacity or vector toolkit simplicity, the technology can be made more versatile—simple, streamlined, and cost/labor-efficient, without compromising capacity. Here we report the development of a new Golden Gate Assembly framework named Mobius Assembly, which combines vector toolkit simplicity with high cloning capacity. It is based on a two-level, hierarchical approach and utilizes a low-frequency cutter to reduce domestication requirements. Mobius Assembly embraces the standard overhang designs designated by MoClo, Golden Braid, and Phytobricks and is largely compatible with already available Golden Gate part libraries. In addition, dropout cassettes encoding chromogenic proteins were implemented for cost-free visible cloning screening that color-code different cloning levels. As proofs of concept, we have successfully assembled up to 16 transcriptional units of various pigmentation genes in both operon and multigene arrangements. Taken together, Mobius Assembly delivers enhanced versatility and efficiency in DNA assembly, facilitating improved standardization and automation. PMID:29293531
Ivanov, Stefan M; Huber, Roland G; Warwicker, Jim; Bond, Peter J
2016-11-01
Critical regulatory pathways are replete with instances of intra- and interfamily protein-protein interactions due to the pervasiveness of gene duplication throughout evolution. Discerning the specificity determinants within these systems has proven a challenging task. Here, we present an energetic analysis of the specificity determinants within the Bcl-2 family of proteins (key regulators of the intrinsic apoptotic pathway) via a total of ∼20 μs of simulation of 60 distinct protein-protein complexes. We demonstrate where affinity and specificity of protein-protein interactions arise across the family, and corroborate our conclusions with extensive experimental evidence. We identify energy and specificity hotspots that may offer valuable guidance in the design of targeted therapeutics for manipulating the protein-protein interactions within the apoptosis-regulating pathway. Moreover, we propose a conceptual framework that allows us to quantify the relationship between sequence, structure, and binding energetics. This approach may represent a general methodology for investigating other paralogous protein-protein interaction sites. Copyright © 2016 Elsevier Ltd. All rights reserved.
The sweet tooth of biopharmaceuticals: importance of recombinant protein glycosylation analysis.
Lingg, Nico; Zhang, Peiqing; Song, Zhiwei; Bardor, Muriel
2012-12-01
Biopharmaceuticals currently represent the fastest growing sector of the pharmaceutical industry, mainly driven by a rapid expansion in the manufacture of recombinant protein-based drugs. Glycosylation is the most prominent post-translational modification occurring on these protein drugs. It constitutes one of the critical quality attributes that requires thorough analysis for optimal efficacy and safety. This review examines the functional importance of glycosylation of recombinant protein drugs, illustrated using three examples of protein biopharmaceuticals: IgG antibodies, erythropoietin and glucocerebrosidase. Current analytical methods are reviewed as solutions for qualitative and quantitative measurements of glycosylation to monitor quality target product profiles of recombinant glycoprotein drugs. Finally, we propose a framework for designing the quality target product profile of recombinant glycoproteins and planning workflow for glycosylation analysis with the selection of available analytical methods and tools. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Lim, Hansaim; Gray, Paul; Xie, Lei; Poleksic, Aleksandar
2016-01-01
Conventional one-drug-one-gene approach has been of limited success in modern drug discovery. Polypharmacology, which focuses on searching for multi-targeted drugs to perturb disease-causing networks instead of designing selective ligands to target individual proteins, has emerged as a new drug discovery paradigm. Although many methods for single-target virtual screening have been developed to improve the efficiency of drug discovery, few of these algorithms are designed for polypharmacology. Here, we present a novel theoretical framework and a corresponding algorithm for genome-scale multi-target virtual screening based on the one-class collaborative filtering technique. Our method overcomes the sparseness of the protein-chemical interaction data by means of interaction matrix weighting and dual regularization from both chemicals and proteins. While the statistical foundation behind our method is general enough to encompass genome-wide drug off-target prediction, the program is specifically tailored to find protein targets for new chemicals with little to no available interaction data. We extensively evaluate our method using a number of the most widely accepted gene-specific and cross-gene family benchmarks and demonstrate that our method outperforms other state-of-the-art algorithms for predicting the interaction of new chemicals with multiple proteins. Thus, the proposed algorithm may provide a powerful tool for multi-target drug design. PMID:27958331
Lim, Hansaim; Gray, Paul; Xie, Lei; Poleksic, Aleksandar
2016-12-13
Conventional one-drug-one-gene approach has been of limited success in modern drug discovery. Polypharmacology, which focuses on searching for multi-targeted drugs to perturb disease-causing networks instead of designing selective ligands to target individual proteins, has emerged as a new drug discovery paradigm. Although many methods for single-target virtual screening have been developed to improve the efficiency of drug discovery, few of these algorithms are designed for polypharmacology. Here, we present a novel theoretical framework and a corresponding algorithm for genome-scale multi-target virtual screening based on the one-class collaborative filtering technique. Our method overcomes the sparseness of the protein-chemical interaction data by means of interaction matrix weighting and dual regularization from both chemicals and proteins. While the statistical foundation behind our method is general enough to encompass genome-wide drug off-target prediction, the program is specifically tailored to find protein targets for new chemicals with little to no available interaction data. We extensively evaluate our method using a number of the most widely accepted gene-specific and cross-gene family benchmarks and demonstrate that our method outperforms other state-of-the-art algorithms for predicting the interaction of new chemicals with multiple proteins. Thus, the proposed algorithm may provide a powerful tool for multi-target drug design.
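The interaction-matrix weighting and dual regularization described in the abstract can be illustrated with a toy weighted matrix-factorization sketch; this is an assumption-laden stand-in rather than the authors' algorithm, and all matrices, dimensions and hyperparameters are fabricated.

```python
# Toy sketch (not the authors' code) of one-class collaborative filtering for
# chemical-protein interaction prediction: observed pairs are up-weighted, and the
# latent factors are pulled toward chemical/protein similarity neighbours through
# graph-Laplacian ("dual") regularization.
import numpy as np

rng = np.random.default_rng(0)
n_chem, n_prot, k = 30, 20, 5
R = (rng.random((n_chem, n_prot)) < 0.05).astype(float)   # sparse known interactions
W = np.where(R > 0, 1.0, 0.1)                             # interaction-matrix weighting
S_c = rng.random((n_chem, n_chem))
S_c = (S_c + S_c.T) / 2                                   # toy chemical similarity
S_p = rng.random((n_prot, n_prot))
S_p = (S_p + S_p.T) / 2                                   # toy protein similarity
L_c = np.diag(S_c.sum(axis=1)) - S_c                      # graph Laplacians used for
L_p = np.diag(S_p.sum(axis=1)) - S_p                      # the dual regularization terms

U = 0.1 * rng.standard_normal((n_chem, k))
V = 0.1 * rng.standard_normal((n_prot, k))
lam, mu, lr = 0.1, 0.05, 0.01

for _ in range(200):
    E = W * (U @ V.T - R)                                  # weighted reconstruction error
    U -= lr * (E @ V + lam * U + mu * L_c @ U)
    V -= lr * (E.T @ U + lam * V + mu * L_p @ V)

scores = U @ V.T
print("top predicted targets for chemical 0:", np.argsort(-scores[0])[:3])
```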
NASA Astrophysics Data System (ADS)
Shorb, Justin Matthew
The first portion of this thesis describes an extension of work done in the Skinner group to develop an empirical frequency map for N-methylacetamide (NMA) in water. NMA is a peptide bond capped on either side by a methyl group and is therefore a common prototypical molecule used when studying complicated polypeptides and proteins. This amide bond is present along the backbone of every protein as it connects individual component amino acids. This amide bond also has a strong observable frequency in the IR due to the Amide-I mode (predominantly carbon-oxygen stretching motion). This project describes the simplification of the prior model for mapping the frequency of the Amide-I mode from the electric field due to the environment and develops a parallel implementation of this algorithm for use in larger biological systems, such as the trans-membrane portion of the tetrameric polypeptide bundle protein CD3zeta. The second portion of this thesis describes the development, implementation and evaluation of an online textbook within the context of a cohesive theoretical framework. The project begins by describing what is meant when discussing a digital textbook, including a survey of various types of digital media being used to deliver textbook-like content. This leads into the development of a theoretical framework based on constructivist pedagogical theory, hypertext learning theory, and chemistry visualization and representation frameworks. The implementation and design of ChemPaths, the general chemistry online text developed within the Chemistry Education Digital Library (ChemEd DL) is then described. The effectiveness of ChemPaths being used as a textbook replacement in an advanced general chemistry course is evaluated within the developed theoretical framework both qualitatively and quantitatively.
Cloud Computing for Protein-Ligand Binding Site Comparison
2013-01-01
The proteome-wide analysis of protein-ligand binding sites and their interactions with ligands is important in structure-based drug design and in understanding ligand cross reactivity and toxicity. The well-known and commonly used software, SMAP, has been designed for 3D ligand binding site comparison and similarity searching of a structural proteome. SMAP can also predict drug side effects and reassign existing drugs to new indications. However, the computing scale of SMAP is limited. We have developed a high availability, high performance system that expands the comparison scale of SMAP. This cloud computing service, called Cloud-PLBS, combines the SMAP and Hadoop frameworks and is deployed on a virtual cloud computing platform. To handle the vast amount of experimental data on protein-ligand binding site pairs, Cloud-PLBS exploits the MapReduce paradigm as a management and parallelizing tool. Cloud-PLBS provides a web portal and scalability through which biologists can address a wide range of computer-intensive questions in biology and drug discovery. PMID:23762824
Cloud computing for protein-ligand binding site comparison.
Hung, Che-Lun; Hua, Guan-Jie
2013-01-01
The proteome-wide analysis of protein-ligand binding sites and their interactions with ligands is important in structure-based drug design and in understanding ligand cross reactivity and toxicity. The well-known and commonly used software, SMAP, has been designed for 3D ligand binding site comparison and similarity searching of a structural proteome. SMAP can also predict drug side effects and reassign existing drugs to new indications. However, the computing scale of SMAP is limited. We have developed a high availability, high performance system that expands the comparison scale of SMAP. This cloud computing service, called Cloud-PLBS, combines the SMAP and Hadoop frameworks and is deployed on a virtual cloud computing platform. To handle the vast amount of experimental data on protein-ligand binding site pairs, Cloud-PLBS exploits the MapReduce paradigm as a management and parallelizing tool. Cloud-PLBS provides a web portal and scalability through which biologists can address a wide range of computer-intensive questions in biology and drug discovery.
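The MapReduce pattern that Cloud-PLBS exploits can be sketched as follows; this local multiprocessing toy stands in for a Hadoop job, and the similarity function is a placeholder rather than an actual SMAP invocation.

```python
# Illustrative MapReduce-style sketch of the Cloud-PLBS pattern: the real service runs
# SMAP comparisons as Hadoop jobs; a process pool and a fake score stand in for both.
from itertools import combinations
from multiprocessing import Pool

def compare_sites(pair):
    """Map step: score one binding-site pair (placeholder for a SMAP comparison)."""
    site_a, site_b = pair
    score = (sum(map(ord, site_a + site_b)) % 100) / 100.0   # deterministic fake similarity
    return (site_a, site_b), score

def reduce_hits(scored_pairs, threshold=0.8):
    """Reduce step: keep only pairs above a similarity threshold."""
    return {pair: score for pair, score in scored_pairs if score >= threshold}

if __name__ == "__main__":
    sites = ["1abcA", "2xyzB", "3pqrC", "4lmnD"]          # hypothetical binding-site IDs
    with Pool(processes=4) as pool:
        scored = pool.map(compare_sites, combinations(sites, 2))
    print(reduce_hits(scored))
```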
NASA Astrophysics Data System (ADS)
Tavenor, Nathan Albert
Protein-based supramolecular polymers (SMPs) are a class of biomaterials which draw inspiration from and expand upon the many examples of complex protein quaternary structures observed in nature: collagen, microtubules, viral capsids, etc. Designing synthetic supramolecular protein scaffolds both increases our understanding of natural superstructures and allows for the creation of novel materials. Similar to small-molecule SMPs, protein-based SMPs form due to self-assembly driven by intermolecular interactions between monomers, and monomer structure determines the properties of the overall material. Using protein-based monomers takes advantage of the self-assembly and highly specific molecular recognition properties encodable in polypeptide sequences to rationally design SMP architectures. The central hypothesis underlying our work is that alpha-helical coiled coils, a well-studied protein quaternary folding motif, are well-suited to SMP design through the addition of synthetic linkers at solvent-exposed sites. Through small changes in the structures of the cross-links and/or peptide sequence, we have been able to control both the nanoscale organization and the macroscopic properties of the SMPs. Changes to the linker and hydrophobic core of the peptide can be used to control polymer rigidity, stability, and dimensionality. The gaps in knowledge that this thesis sought to fill on this project were 1) the relationship between the molecular structure of the cross-linked polypeptides and the macroscopic properties of the SMPs and 2) a means of creating materials exhibiting multi-dimensional net or framework topologies. Separate from the above efforts on supramolecular architectures was work on improving backbone modification strategies for an alpha-helix in the context of a complex protein tertiary fold. Earlier work in our lab had successfully incorporated unnatural building blocks into every major secondary structure (beta-sheet, alpha-helix, loops and beta-turns) of a small protein with a tertiary fold. Although the tertiary fold of the native sequence was mimicked by the resulting artificial protein, the thermodynamic stability was greatly compromised. Most of this energetic penalty derived from the modifications present in the alpha-helix. The contribution within this thesis was direct comparison of several alpha-helical design strategies and establishment of the thermodynamic consequences of each.
Structure-based design of combinatorial mutagenesis libraries
Verma, Deeptak; Grigoryan, Gevorg; Bailey-Kellogg, Chris
2015-01-01
The development of protein variants with improved properties (thermostability, binding affinity, catalytic activity, etc.) has greatly benefited from the application of high-throughput screens evaluating large, diverse combinatorial libraries. At the same time, since only a very limited portion of sequence space can be experimentally constructed and tested, an attractive possibility is to use computational protein design to focus libraries on a productive portion of the space. We present a general-purpose method, called “Structure-based Optimization of Combinatorial Mutagenesis” (SOCoM), which can optimize arbitrarily large combinatorial mutagenesis libraries directly based on structural energies of their constituents. SOCoM chooses both positions and substitutions, employing a combinatorial optimization framework based on library-averaged energy potentials in order to avoid explicitly modeling every variant in every possible library. In case study applications to green fluorescent protein, β-lactamase, and lipase A, SOCoM optimizes relatively small, focused libraries whose variants achieve energies comparable to or better than previous library design efforts, as well as larger libraries (previously not designable by structure-based methods) whose variants cover greater diversity while still maintaining substantially better energies than would be achieved by representative random library approaches. By allowing the creation of large-scale combinatorial libraries based on structural calculations, SOCoM promises to increase the scope of applicability of computational protein design and improve the hit rate of discovering beneficial variants. While designs presented here focus on variant stability (predicted by total energy), SOCoM can readily incorporate other structure-based assessments, such as the energy gap between alternative conformational or bound states. PMID:25611189
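A minimal sketch of the library-averaged scoring idea (not the authors' implementation): assuming precomputed one-body and two-body energies per position and residue pair, the average energy over every variant in a combinatorial library factorizes over positions and position pairs, so no variant has to be enumerated. All energies and the example library below are toy values.

```python
# Hedged sketch of library-averaged energy scoring: a library is a set of allowed
# residues per position, and its average variant energy is computed directly from
# per-position and per-pair averages of toy one-body (E1) and two-body (E2) energies.
from itertools import combinations

E1 = {0: {"A": -1.0, "V": -0.5}, 1: {"L": -0.8, "F": -0.2}, 2: {"S": -0.3}}
E2 = {(0, 1): {("A", "L"): -0.4, ("A", "F"): 0.1, ("V", "L"): -0.1, ("V", "F"): 0.3},
      (0, 2): {("A", "S"): -0.2, ("V", "S"): 0.0},
      (1, 2): {("L", "S"): -0.1, ("F", "S"): 0.2}}

def library_average_energy(library):
    """Average variant energy for a library given as {position: [allowed residues]}."""
    avg = 0.0
    for i, choices in library.items():
        avg += sum(E1[i][a] for a in choices) / len(choices)
    for i, j in combinations(sorted(library), 2):
        pairs = [(a, b) for a in library[i] for b in library[j]]
        avg += sum(E2[(i, j)][p] for p in pairs) / len(pairs)
    return avg

print(library_average_energy({0: ["A", "V"], 1: ["L"], 2: ["S"]}))   # -> -2.3
```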
FRAN and RBF-PSO as two components of a hyper framework to recognize protein folds.
Abbasi, Elham; Ghatee, Mehdi; Shiri, M E
2013-09-01
In this paper, an intelligent hyper framework is proposed to recognize protein folds from the amino acid sequence, which is a fundamental problem in bioinformatics. This framework includes statistical and intelligent algorithms for protein classification. The main components of the proposed framework are the Fuzzy Resource-Allocating Network (FRAN) and the Radial Basis Function network tuned by Particle Swarm Optimization (RBF-PSO). FRAN applies a dynamic method to tune the RBF network parameters. Due to the complexity of the patterns captured in the protein dataset, FRAN classifies the proteins under fuzzy conditions. RBF-PSO, in turn, applies PSO to tune the RBF classifier. Experimental results demonstrate that FRAN achieves prediction accuracy of up to 51% and acceptable multi-class results for protein fold prediction. Although RBF-PSO provides reasonable results for protein fold recognition, up to 48%, it is weaker than FRAN in some cases. However, the proposed hyper framework provides an opportunity to use a wide range of intelligent methods and can learn from previous experience, thereby avoiding the weaknesses of some intelligent methods in terms of memory, computational time and static structure. Furthermore, the performance of this system can be enhanced throughout the system life-cycle. Copyright © 2013 Elsevier Ltd. All rights reserved.
Mechanochemical models of processive molecular motors
NASA Astrophysics Data System (ADS)
Lan, Ganhui; Sun, Sean X.
2012-05-01
Motor proteins are the molecular engines powering the living cell. These nanometre-sized molecules convert chemical energy, both enthalpic and entropic, into useful mechanical work. High-resolution single-molecule experiments can now observe motor protein movement with increasing precision. The emerging data must be combined with structural and kinetic measurements to develop a quantitative mechanism. This article describes a modelling framework in which a quantitative understanding of motor behaviour can be developed based on the protein structure. The framework is applied to myosin motors, with emphasis on how synchrony between motor domains gives rise to processive unidirectional movement. The modelling approach shows that the elasticity of protein domains is important in regulating motor function. Simple models of protein domain elasticity are presented. The framework can be generalized to other motor systems, or to an ensemble of motors such as muscle contraction. Indeed, for hundreds of myosins, our framework reduces to the Huxley-Simmons description of muscle movement in the mean-field limit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovchinnikov, Victor; Louveau, Joy E.; Barton, John P.
Eliciting antibodies that are cross reactive with surface proteins of diverse strains of highly mutable pathogens (e.g., HIV, influenza) could be key for developing effective universal vaccines. Mutations in the framework regions of such broadly neutralizing antibodies (bnAbs) have been reported to play a role in determining their properties. We used molecular dynamics simulations and models of affinity maturation to study specific bnAbs against HIV. Our results suggest that there are different classes of evolutionary lineages for the bnAbs. If germline B cells that initiate affinity maturation have high affinity for the conserved residues of the targeted epitope, framework mutations increase antibody rigidity as affinity maturation progresses to evolve bnAbs. If the germline B cells exhibit weak/moderate affinity for conserved residues, an initial increase in flexibility via framework mutations may be required for the evolution of bnAbs. Subsequent mutations that increase rigidity result in highly potent bnAbs. Implications of our results for immunogen design are discussed.
Ovchinnikov, Victor; Louveau, Joy E.; Barton, John P.; ...
2018-02-14
Eliciting antibodies that are cross reactive with surface proteins of diverse strains of highly mutable pathogens (e.g., HIV, influenza) could be key for developing effective universal vaccines. Mutations in the framework regions of such broadly neutralizing antibodies (bnAbs) have been reported to play a role in determining their properties. We used molecular dynamics simulations and models of affinity maturation to study specific bnAbs against HIV. Our results suggest that there are different classes of evolutionary lineages for the bnAbs. If germline B cells that initiate affinity maturation have high affinity for the conserved residues of the targeted epitope, framework mutations increase antibody rigidity as affinity maturation progresses to evolve bnAbs. If the germline B cells exhibit weak/moderate affinity for conserved residues, an initial increase in flexibility via framework mutations may be required for the evolution of bnAbs. Subsequent mutations that increase rigidity result in highly potent bnAbs. Implications of our results for immunogen design are discussed.
2018-01-01
Eliciting antibodies that are cross reactive with surface proteins of diverse strains of highly mutable pathogens (e.g., HIV, influenza) could be key for developing effective universal vaccines. Mutations in the framework regions of such broadly neutralizing antibodies (bnAbs) have been reported to play a role in determining their properties. We used molecular dynamics simulations and models of affinity maturation to study specific bnAbs against HIV. Our results suggest that there are different classes of evolutionary lineages for the bnAbs. If germline B cells that initiate affinity maturation have high affinity for the conserved residues of the targeted epitope, framework mutations increase antibody rigidity as affinity maturation progresses to evolve bnAbs. If the germline B cells exhibit weak/moderate affinity for conserved residues, an initial increase in flexibility via framework mutations may be required for the evolution of bnAbs. Subsequent mutations that increase rigidity result in highly potent bnAbs. Implications of our results for immunogen design are discussed. PMID:29442996
Protein Surface Mimetics: Understanding How Ruthenium Tris(Bipyridines) Interact with Proteins.
Hewitt, Sarah H; Filby, Maria H; Hayes, Ed; Kuhn, Lars T; Kalverda, Arnout P; Webb, Michael E; Wilson, Andrew J
2017-01-17
Protein surface mimetics achieve high-affinity binding by exploiting a scaffold to project binding groups over a large area of solvent-exposed protein surface to make multiple cooperative noncovalent interactions. Such recognition is a prerequisite for competitive/orthosteric inhibition of protein-protein interactions (PPIs). This paper describes biophysical and structural studies on ruthenium(II) tris(bipyridine) surface mimetics that recognize cytochrome (cyt) c and inhibit the cyt c/cyt c peroxidase (CCP) PPI. Binding is electrostatically driven, with enhanced affinity achieved through enthalpic contributions thought to arise from the ability of the surface mimetics to make a greater number of noncovalent interactions with surface-exposed basic residues on cyt c than CCP does. High-field natural abundance 1H, 15N HSQC NMR experiments are consistent with the surface mimetics binding to cyt c in a similar manner to CCP. This provides a framework for understanding the recognition of proteins by supramolecular receptors and for informing the design of ligands superior to the protein partners that inspired them. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Kalyoncu, Sibel; Hyun, Jeongmin; Pai, Jennifer C.; Johnson, Jennifer L.; Entzminger, Kevin; Jain, Avni; Heaner, David P.; Morales, Ivan A.; Truskett, Thomas M.; Maynard, Jennifer A.; Lieberman, Raquel L.
2014-01-01
Protein crystallization is dependent upon, and sensitive to, the intermolecular contacts that assist in ordering proteins into a three dimensional lattice. Here we used protein engineering and mutagenesis to affect the crystallization of single chain antibody fragments (scFvs) that recognize the EE epitope (EYMPME) with high affinity. These hypercrystallizable scFvs are under development to assist difficult proteins, such as membrane proteins, in forming crystals, by acting as crystallization chaperones. Guided by analyses of intermolecular crystal lattice contacts, two second-generation anti-EE scFvs were produced, which bind to proteins with installed EE tags. Surprisingly, although non-complementarity determining region (CDR) lattice residues from the parent scFv framework remained unchanged through the processes of protein engineering and rational design, crystal lattices of the derivative scFvs differ. Comparison of energy calculations and the experimentally-determined lattice interactions for this basis set provides insight into the complexity of the forces driving crystal lattice choice and demonstrates the availability of multiple well-ordered surface features in our scFvs capable of forming versatile crystal contacts. PMID:24615866
NASA Astrophysics Data System (ADS)
Aioanei, Daniel; Samorì, Bruno; Brucale, Marco
2009-12-01
Single molecule force spectroscopy (SMFS) is extensively used to characterize the mechanical unfolding behavior of individual protein domains under applied force by pulling chimeric polyproteins consisting of identical tandem repeats. Constant velocity unfolding SMFS data can be employed to reconstruct the protein unfolding energy landscape and kinetics. The methods applied so far require the specification of a single stretching force increase function, either theoretically derived or experimentally inferred, which must then be assumed to accurately describe the entirety of the experimental data. The very existence of a suitable optimal force model, even in the context of a single experimental data set, is still questioned. Herein, we propose a maximum likelihood (ML) framework for the estimation of protein kinetic parameters which can accommodate all the established theoretical force increase models. Our framework does not presuppose the existence of a single force characteristic function. Rather, it can be used with a heterogeneous set of functions, each describing the protein behavior in the stretching time range leading to one rupture event. We propose a simple way of constructing such a set of functions via piecewise linear approximation of the SMFS force vs time data and we prove the suitability of the approach both with synthetic data and experimentally. Additionally, when the spontaneous unfolding rate is the only unknown parameter, we find a correction factor that eliminates the bias of the ML estimator while also reducing its variance. Finally, we investigate which of several time-constrained experiment designs leads to better estimators.
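The maximum-likelihood idea can be sketched as follows, under explicit assumptions: a Bell-type force-dependent unfolding rate, a known distance to the transition state, and fabricated piecewise-linear force traces and rupture times. This illustrates the likelihood construction only; it is not the estimator or the bias-correction factor developed in the paper.

```python
# Sketch of ML estimation of a spontaneous unfolding rate k0 from rupture events,
# each with its own piecewise-linear force-vs-time trace. All data are fabricated.
import math

kBT = 4.1        # pN*nm at room temperature
x_dagger = 0.4   # nm, assumed known here for simplicity

# Each event: (rupture_time, breakpoints (t, F) defining a piecewise-linear F(t))
events = [(0.8, [(0.0, 0.0), (0.5, 60.0), (0.8, 75.0)]),
          (1.1, [(0.0, 0.0), (0.6, 55.0), (1.1, 90.0)])]

def force_at(trace, t):
    """Linear interpolation within the piecewise-linear force trace."""
    for (t0, f0), (t1, f1) in zip(trace, trace[1:]):
        if t0 <= t <= t1:
            return f0 + (f1 - f0) * (t - t0) / (t1 - t0)
    return trace[-1][1]

def neg_log_likelihood(k0, dt=1e-3):
    nll = 0.0
    for t_rup, trace in events:
        rate = lambda t: k0 * math.exp(force_at(trace, t) * x_dagger / kBT)  # Bell model
        n = int(t_rup / dt)
        integral = sum(0.5 * (rate(i * dt) + rate((i + 1) * dt)) * dt for i in range(n))
        nll -= math.log(rate(t_rup)) - integral          # log density of rupture at t_rup
    return nll

k0_grid = [10 ** e for e in (-4, -3.5, -3, -2.5, -2, -1.5, -1)]
best = min(k0_grid, key=neg_log_likelihood)
print("grid-search ML estimate of k0 ~", best, "1/s")
```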
Libbrecht, Maxwell W; Bilmes, Jeffrey A; Noble, William Stafford
2018-04-01
Selecting a non-redundant representative subset of sequences is a common step in many bioinformatics workflows, such as the creation of non-redundant training sets for sequence and structural models or selection of "operational taxonomic units" from metagenomics data. Previous methods for this task, such as CD-HIT, PISCES, and UCLUST, apply a heuristic threshold-based algorithm that has no theoretical guarantees. We propose a new approach based on submodular optimization. Submodular optimization, a discrete analogue to continuous convex optimization, has been used with great success for other representative set selection problems. We demonstrate that the submodular optimization approach results in representative protein sequence subsets with greater structural diversity than sets chosen by existing methods, using as a gold standard the SCOPe library of protein domain structures. In this setting, submodular optimization consistently yields protein sequence subsets that include more SCOPe domain families than sets of the same size selected by competing approaches. We also show how the optimization framework allows us to design a mixture objective function that performs well for both large and small representative sets. The framework we describe is the best possible in polynomial time (under some assumptions), and it is flexible and intuitive because it applies a suite of generic methods to optimize one of a variety of objective functions. © 2018 Wiley Periodicals, Inc.
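As an illustration of the submodular approach (not the authors' code), the sketch below greedily maximizes a facility-location objective over a fabricated pairwise similarity matrix; for monotone submodular functions the greedy algorithm carries the classic (1 - 1/e) approximation guarantee.

```python
# Toy greedy maximization of a facility-location objective: sum_i max_{j in S} sim(i, j),
# which rewards subsets that "cover" every sequence with a similar representative.
import numpy as np

rng = np.random.default_rng(1)
sim = rng.random((50, 50))
sim = (sim + sim.T) / 2                 # fabricated symmetric pairwise similarities
np.fill_diagonal(sim, 1.0)

def facility_location(selected):
    return sim[:, selected].max(axis=1).sum() if selected else 0.0

def greedy_select(budget):
    selected = []
    for _ in range(budget):
        gains = {j: facility_location(selected + [j]) - facility_location(selected)
                 for j in range(sim.shape[0]) if j not in selected}
        selected.append(max(gains, key=gains.get))      # pick the largest marginal gain
    return selected

print("representative set:", greedy_select(5))
```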
New paradigm in ankyrin repeats: Beyond protein-protein interaction module.
Islam, Zeyaul; Nagampalli, Raghavendra Sashi Krishna; Fatima, Munazza Tamkeen; Ashraf, Ghulam Md
2018-04-01
Classically, ankyrin repeat (ANK) proteins are built from tandems of two or more repeats and form curved solenoid structures that are associated with protein-protein interactions. ANK repeats are short, widespread structural motifs of around 33 amino acids arranged in tandem, with a canonical helix-loop-helix fold, found individually or in combination with other domains. The multiplicity of this structural pattern enables the formation of assemblies of diverse sizes, which underlies the ability of ANK proteins to fulfil multiple binding and structural roles. Three-dimensional structures of these repeats determined to date reveal a degree of structural variability that translates into the considerable functional versatility of this protein superfamily. Recent work on ANK proteins has provided novel structural information, especially on protein-lipid, protein-sugar and protein-protein interactions. Self-assembly of these repeats was also shown to prevent the associated protein from forming filaments. In this review, we summarize the latest findings and how the new structural information has increased our understanding of the structural determinants of ANK proteins. We discuss how these proteins participate in various interactions to diversify ANK roles in numerous biological processes, and explore the emerging and evolving field of designer ankyrins and its framework for protein engineering, with emphasis on biotechnological applications. Copyright © 2017 Elsevier B.V. All rights reserved.
GoldenBraid 2.0: A Comprehensive DNA Assembly Framework for Plant Synthetic Biology
Sarrion-Perdigones, Alejandro; Vazquez-Vilar, Marta; Palací, Jorge; Castelijns, Bas; Forment, Javier; Ziarsolo, Peio; Blanca, José; Granell, Antonio; Orzaez, Diego
2013-01-01
Plant synthetic biology aims to apply engineering principles to plant genetic design. One strategic requirement of plant synthetic biology is the adoption of common standardized technologies that facilitate the construction of increasingly complex multigene structures at the DNA level while enabling the exchange of genetic building blocks among plant bioengineers. Here, we describe GoldenBraid 2.0 (GB2.0), a comprehensive technological framework that aims to foster the exchange of standard DNA parts for plant synthetic biology. GB2.0 relies on the use of type IIS restriction enzymes for DNA assembly and proposes a modular cloning schema with positional notation that resembles the grammar of natural languages. Apart from providing an optimized cloning strategy that generates fully exchangeable genetic elements for multigene engineering, the GB2.0 toolkit offers an ever-growing open collection of DNA parts, including a group of functionally tested, premade genetic modules to build frequently used modules like constitutive and inducible expression cassettes, endogenous gene silencing and protein-protein interaction tools, etc. Use of the GB2.0 framework is facilitated by a number of Web resources that include a publicly available database, tutorials, and a software package that provides in silico simulations and laboratory protocols for GB2.0 part domestication and multigene engineering. In short, GB2.0 provides a framework to exchange both information and physical DNA elements among bioengineers to help implement plant synthetic biology projects. PMID:23669743
GoldenBraid 2.0: a comprehensive DNA assembly framework for plant synthetic biology.
Sarrion-Perdigones, Alejandro; Vazquez-Vilar, Marta; Palací, Jorge; Castelijns, Bas; Forment, Javier; Ziarsolo, Peio; Blanca, José; Granell, Antonio; Orzaez, Diego
2013-07-01
Plant synthetic biology aims to apply engineering principles to plant genetic design. One strategic requirement of plant synthetic biology is the adoption of common standardized technologies that facilitate the construction of increasingly complex multigene structures at the DNA level while enabling the exchange of genetic building blocks among plant bioengineers. Here, we describe GoldenBraid 2.0 (GB2.0), a comprehensive technological framework that aims to foster the exchange of standard DNA parts for plant synthetic biology. GB2.0 relies on the use of type IIS restriction enzymes for DNA assembly and proposes a modular cloning schema with positional notation that resembles the grammar of natural languages. Apart from providing an optimized cloning strategy that generates fully exchangeable genetic elements for multigene engineering, the GB2.0 toolkit offers an evergrowing open collection of DNA parts, including a group of functionally tested, premade genetic modules to build frequently used modules like constitutive and inducible expression cassettes, endogenous gene silencing and protein-protein interaction tools, etc. Use of the GB2.0 framework is facilitated by a number of Web resources that include a publicly available database, tutorials, and a software package that provides in silico simulations and laboratory protocols for GB2.0 part domestication and multigene engineering. In short, GB2.0 provides a framework to exchange both information and physical DNA elements among bioengineers to help implement plant synthetic biology projects.
Koide, Shohei; Sidhu, Sachdev S.
2010-01-01
Combinatorial libraries built with severely restricted chemical diversity have yielded highly functional synthetic binding proteins. Structural analyses of these minimalist binding sites have revealed the dominant role of large tyrosine residues for mediating molecular contacts and of small serine/glycine residues for providing space and flexibility. The concept of using limited residue types to construct optimized binding proteins mirrors findings in the field of small molecule drug development, where it has been proposed that most drugs are built from a limited set of side chains presented by diverse frameworks. The physicochemical properties of tyrosine make it the amino acid that is most effective for mediating molecular recognition, and protein engineers have taken advantage of these characteristics to build tyrosine-rich protein binding sites that outperform natural proteins in terms of affinity and specificity. Knowledge from preceding studies can be used to improve current designs, and thus, synthetic protein libraries will continue to evolve and improve. In the near future, it seems likely that synthetic binding proteins will supersede natural antibodies for most purposes, and moreover, synthetic proteins will enable many new applications beyond the scope of natural proteins. PMID:19298050
Tramontano, A; Bianchi, E; Venturini, S; Martin, F; Pessi, A; Sollazzo, M
1994-03-01
Conformationally constraining selectable peptides onto a suitable scaffold that enables their conformation to be predicted or readily determined by experimental techniques would considerably boost the drug discovery process by reducing the gap between the discovery of a peptide lead and the design of a peptidomimetic with a more desirable pharmacological profile. With this in mind, we designed the minibody, a 61-residue beta-protein aimed at retaining some desirable features of immunoglobulin variable domains, such as tolerance to sequence variability in selected regions of the protein and predictability of the main chain conformation of the same regions, based on the 'canonical structures' model. To test the ability of the minibody scaffold to support functional sites we also designed a metal binding version of the protein by suitably choosing the sequences of its loops. The minibody was produced both by chemical synthesis and expression in E. coli and characterized by size exclusion chromatography, UV CD (circular dichroism) spectroscopy and metal binding activity. All our data supported the model, but a more detailed structural characterization of the molecule was impaired by its low solubility. We were able to overcome this problem both by further mutagenesis of the framework and by addition of a solubilizing motif. The minibody is being used to select constrained human IL-6 peptidic ligands from a library displayed on the surface of the f1 bacteriophage.
Improved packing of protein side chains with parallel ant colonies.
Quan, Lijun; Lü, Qiang; Li, Haiou; Xia, Xiaoyan; Wu, Hongjie
2014-01-01
The accurate packing of protein side chains is important for many computational biology problems, such as ab initio protein structure prediction, homology modelling, protein design and ligand docking applications. Many existing solutions are modelled as a computational optimisation problem. Beyond the design of search algorithms, most solutions suffer from an inaccurate energy function for judging whether a prediction is good or bad: even if the search has found the lowest energy, there is no certainty of obtaining protein structures with correct side chains. We present a side-chain modelling method, pacoPacker, which uses a parallel ant colony optimisation strategy based on sharing a single pheromone matrix. This parallel approach combines different sources of energy functions and generates protein side-chain conformations with the lowest energies jointly determined by the various energy functions. We further optimised the selected rotamers to construct subrotamers by rotamer minimisation, which reasonably improved the discreteness of the rotamer library. We focused on improving the accuracy of side-chain conformation prediction. For a test set of 442 proteins, 87.19% of χ1 and 77.11% of χ1+2 angles were predicted correctly within 40° of the X-ray positions. We compared the accuracy of pacoPacker with state-of-the-art methods such as CIS-RR and SCWRL4, and analysed the results from different perspectives, in terms of whole protein chains and individual residues. In this comprehensive benchmark, for 51.5% of proteins within a length of 400 amino acids the predictions of pacoPacker were superior to the results of both CIS-RR and SCWRL4 simultaneously. Finally, we also showed the advantage of using the subrotamer strategy. All results confirm that our parallel approach is competitive with state-of-the-art solutions for packing side chains. This parallel approach combines various sources of search intelligence and energy functions to pack protein side chains, and provides a framework for combining different objective functions by designing parallel heuristic search algorithms.
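A toy sketch of the shared-pheromone ant-colony idea follows; it is not the pacoPacker code, uses a fabricated per-residue rotamer energy in place of a real force field, and omits the subrotamer minimisation step.

```python
# Toy ant-colony rotamer packing: each ant assembles a full rotamer assignment guided
# by a shared pheromone matrix plus a fabricated energy; good solutions reinforce the matrix.
import random

random.seed(0)
n_res, n_rot, n_ants, n_iters = 8, 4, 10, 50
energy = [[random.uniform(0.0, 5.0) for _ in range(n_rot)] for _ in range(n_res)]
pheromone = [[1.0] * n_rot for _ in range(n_res)]   # shared by all ants

def total_energy(assign):
    return sum(energy[i][r] for i, r in enumerate(assign))

best, best_e = None, float("inf")
for _ in range(n_iters):
    for _ant in range(n_ants):
        assign = []
        for i in range(n_res):
            weights = [pheromone[i][r] / (1.0 + energy[i][r]) for r in range(n_rot)]
            assign.append(random.choices(range(n_rot), weights=weights)[0])
        e = total_energy(assign)
        if e < best_e:
            best, best_e = assign, e
    # evaporate, then deposit pheromone along the best assignment found so far
    for i in range(n_res):
        for r in range(n_rot):
            pheromone[i][r] *= 0.9
        pheromone[i][best[i]] += 1.0

print("best rotamer assignment:", best, "energy:", round(best_e, 2))
```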
NASA Astrophysics Data System (ADS)
Miyakawa, Takuya; Tanokura, Masaru
The phytohormone abscisic acid (ABA) plays a key role in the rapid adaptation of plants to environmental stresses such as drought and high salinity. ABA accumulated in plant cells promotes stomatal closure in guard cells and the transcription of stress-tolerance genes. Our understanding of ABA responses improved dramatically with the discovery of PYR/PYL/RCAR as a soluble ABA receptor and of the inhibitory complex formed by the protein phosphatase PP2C and the protein kinase SnRK2. Moreover, several structural analyses of PYR/PYL/RCAR have revealed the mechanistic basis of the regulation of ABA signaling, which provides a rational framework for the design of alternative agonists in the future.
Skates, Steven J.; Gillette, Michael A.; LaBaer, Joshua; Carr, Steven A.; Anderson, N. Leigh; Liebler, Daniel C.; Ransohoff, David; Rifai, Nader; Kondratovich, Marina; Težak, Živana; Mansfield, Elizabeth; Oberg, Ann L.; Wright, Ian; Barnes, Grady; Gail, Mitchell; Mesri, Mehdi; Kinsinger, Christopher R.; Rodriguez, Henry; Boja, Emily S.
2014-01-01
Protein biomarkers are needed to deepen our understanding of cancer biology and to improve our ability to diagnose, monitor and treat cancers. Important analytical and clinical hurdles must be overcome to allow the most promising protein biomarker candidates to advance into clinical validation studies. Although contemporary proteomics technologies support the measurement of large numbers of proteins in individual clinical specimens, sample throughput remains comparatively low. This problem is amplified in typical clinical proteomics research studies, which routinely suffer from a lack of proper experimental design, resulting in analysis of too few biospecimens to achieve adequate statistical power at each stage of a biomarker pipeline. To address this critical shortcoming, a joint workshop was held by the National Cancer Institute (NCI), National Heart, Lung and Blood Institute (NHLBI), and American Association for Clinical Chemistry (AACC), with participation from the U.S. Food and Drug Administration (FDA). An important output from the workshop was a statistical framework for the design of biomarker discovery and verification studies. Herein, we describe the use of quantitative clinical judgments to set statistical criteria for clinical relevance, and the development of an approach to calculate biospecimen sample size for proteomic studies in discovery and verification stages prior to clinical validation stage. This represents a first step towards building a consensus on quantitative criteria for statistical design of proteomics biomarker discovery and verification research. PMID:24063748
Skates, Steven J; Gillette, Michael A; LaBaer, Joshua; Carr, Steven A; Anderson, Leigh; Liebler, Daniel C; Ransohoff, David; Rifai, Nader; Kondratovich, Marina; Težak, Živana; Mansfield, Elizabeth; Oberg, Ann L; Wright, Ian; Barnes, Grady; Gail, Mitchell; Mesri, Mehdi; Kinsinger, Christopher R; Rodriguez, Henry; Boja, Emily S
2013-12-06
Protein biomarkers are needed to deepen our understanding of cancer biology and to improve our ability to diagnose, monitor, and treat cancers. Important analytical and clinical hurdles must be overcome to allow the most promising protein biomarker candidates to advance into clinical validation studies. Although contemporary proteomics technologies support the measurement of large numbers of proteins in individual clinical specimens, sample throughput remains comparatively low. This problem is amplified in typical clinical proteomics research studies, which routinely suffer from a lack of proper experimental design, resulting in analysis of too few biospecimens to achieve adequate statistical power at each stage of a biomarker pipeline. To address this critical shortcoming, a joint workshop was held by the National Cancer Institute (NCI), National Heart, Lung, and Blood Institute (NHLBI), and American Association for Clinical Chemistry (AACC) with participation from the U.S. Food and Drug Administration (FDA). An important output from the workshop was a statistical framework for the design of biomarker discovery and verification studies. Herein, we describe the use of quantitative clinical judgments to set statistical criteria for clinical relevance and the development of an approach to calculate biospecimen sample size for proteomic studies in discovery and verification stages prior to clinical validation stage. This represents a first step toward building a consensus on quantitative criteria for statistical design of proteomics biomarker discovery and verification research.
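In the spirit of the statistical framework described, the sketch below computes the number of biospecimens per group needed to detect an assumed shift in a candidate protein biomarker using a standard two-sample normal approximation; the effect size, variability, significance level and power are illustrative assumptions rather than workshop recommendations.

```python
# Illustrative biospecimen sample-size calculation (two-sided, two-sample z-approximation).
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.90):
    """Specimens per group to detect mean difference delta with standard deviation sigma."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# e.g. detect a 0.5 log-unit shift in biomarker level with SD 1.0 at 90% power
print(n_per_group(delta=0.5, sigma=1.0))   # -> 85 per group
```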
Ma, Jianzhu; Wang, Sheng
2015-01-01
The solvent accessibility of protein residues is one of the driving forces of protein folding, while the contact number of protein residues limits the possibilities of protein conformations. The de novo prediction of these properties from protein sequence is important for the study of protein structure and function. Although these two properties are certainly related with each other, it is challenging to exploit this dependency for the prediction. We present a method AcconPred for predicting solvent accessibility and contact number simultaneously, which is based on a shared weight multitask learning framework under the CNF (conditional neural fields) model. The multitask learning framework on a collection of related tasks provides more accurate prediction than the framework trained only on a single task. The CNF method not only models the complex relationship between the input features and the predicted labels, but also exploits the interdependency among adjacent labels. Trained on 5729 monomeric soluble globular protein datasets, AcconPred could reach 0.68 three-state accuracy for solvent accessibility and 0.75 correlation for contact number. Tested on the 105 CASP11 domain datasets for solvent accessibility, AcconPred could reach 0.64 accuracy, which outperforms existing methods.
Ma, Jianzhu; Wang, Sheng
2015-01-01
Motivation. The solvent accessibility of protein residues is one of the driving forces of protein folding, while the contact number of protein residues limits the possibilities of protein conformations. The de novo prediction of these properties from protein sequence is important for the study of protein structure and function. Although these two properties are certainly related with each other, it is challenging to exploit this dependency for the prediction. Method. We present a method AcconPred for predicting solvent accessibility and contact number simultaneously, which is based on a shared weight multitask learning framework under the CNF (conditional neural fields) model. The multitask learning framework on a collection of related tasks provides more accurate prediction than the framework trained only on a single task. The CNF method not only models the complex relationship between the input features and the predicted labels, but also exploits the interdependency among adjacent labels. Results. Trained on 5729 monomeric soluble globular protein datasets, AcconPred could reach 0.68 three-state accuracy for solvent accessibility and 0.75 correlation for contact number. Tested on the 105 CASP11 domain datasets for solvent accessibility, AcconPred could reach 0.64 accuracy, which outperforms existing methods. PMID:26339631
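A minimal sketch of the shared-weight multitask idea (not the CNF model itself): one shared representation feeds both a three-state solvent-accessibility head and a contact-number head. All dimensions and weights are random toy values.

```python
# Shared-weight multitask forward pass: one hidden layer serves two prediction heads.
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_hidden = 20, 16                                  # per-residue input features (toy)

W_shared = 0.1 * rng.standard_normal((n_feat, n_hidden))   # weights shared by both tasks
W_acc = 0.1 * rng.standard_normal((n_hidden, 3))           # buried / intermediate / exposed
W_cn = 0.1 * rng.standard_normal((n_hidden, 1))            # contact-number head

def predict(residue_features):
    h = np.tanh(residue_features @ W_shared)               # shared hidden representation
    logits = h @ W_acc
    acc_probs = np.exp(logits) / np.exp(logits).sum()      # softmax over the three states
    contact_number = (h @ W_cn).item()
    return acc_probs, contact_number

x = rng.standard_normal(n_feat)                            # features for one residue
print(predict(x))
```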
Validating a Coarse-Grained Potential Energy Function through Protein Loop Modelling
MacDonald, James T.; Kelley, Lawrence A.; Freemont, Paul S.
2013-01-01
Coarse-grained (CG) methods for sampling protein conformational space have the potential to increase computational efficiency by reducing the degrees of freedom. The gain in computational efficiency of CG methods often comes at the expense of non-protein-like local conformational features. This could cause problems when transitioning to full-atom models in a hierarchical framework. Here, a CG potential energy function was validated by applying it to the problem of loop prediction. A novel method to sample the conformational space of backbone atoms was benchmarked using a standard test set consisting of 351 distinct loops. This method used a sequence-independent CG potential energy function representing the protein using α-carbon positions only and sampling conformations with a Monte Carlo simulated annealing based protocol. Backbone atoms were added using a method previously described and then gradient minimised in the Rosetta force field. Despite the CG potential energy function being sequence-independent, the method performed similarly to methods that explicitly use either fragments of known protein backbones with similar sequences or residue-specific φ/ψ-maps to restrict the search space. The method was also able to predict with sub-Angstrom accuracy two out of seven loops from recently solved crystal structures of proteins with low sequence and structure similarity to previously deposited structures in the PDB. The ability to sample realistic loop conformations directly from a potential energy function enables the incorporation of additional geometric restraints and the use of more advanced sampling methods in a way that is not easily possible with fragment replacement methods, and also enables multi-scale simulations for protein design and protein structure prediction. These restraints could be derived from experimental data or could be design restraints in the case of computational protein design. C++ source code is available for download from http://www.sbg.bio.ic.ac.uk/phyre2/PD2/. PMID:23824634
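A schematic illustration of Monte Carlo simulated annealing over a Cα-only loop representation is given below; the energy term (ideal consecutive Cα spacing between fixed anchors) and every parameter are toy assumptions, not the sequence-independent potential validated in the paper.

```python
# Schematic Monte Carlo simulated annealing on a Cα-only loop between fixed anchors.
import math
import random

random.seed(0)
anchor_n, anchor_c = (0.0, 0.0, 0.0), (10.5, 0.0, 0.0)   # fixed loop end points
n_loop = 6
ca = [(1.5 * (i + 1), 0.0, 0.0) for i in range(n_loop)]  # straight initial Cα trace

def energy(trace):
    """Toy CG energy: keep consecutive Cα distances near 3.8 A."""
    pts = [anchor_n] + trace + [anchor_c]
    return sum((math.dist(a, b) - 3.8) ** 2 for a, b in zip(pts, pts[1:]))

temp = 5.0
for step in range(20000):
    i = random.randrange(n_loop)
    trial = list(ca)
    x, y, z = trial[i]
    trial[i] = (x + random.gauss(0, 0.3), y + random.gauss(0, 0.3), z + random.gauss(0, 0.3))
    dE = energy(trial) - energy(ca)
    if dE < 0 or random.random() < math.exp(-dE / temp):  # Metropolis acceptance
        ca = trial
    temp *= 0.9997                                        # annealing schedule

print("final toy energy:", round(energy(ca), 2))
```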
Cytoprophet: a Cytoscape plug-in for protein and domain interaction networks inference.
Morcos, Faruck; Lamanna, Charles; Sikora, Marcin; Izaguirre, Jesús
2008-10-01
Cytoprophet is a software tool that allows prediction and visualization of protein and domain interaction networks. It is implemented as a plug-in of Cytoscape, an open source software framework for analysis and visualization of molecular networks. Cytoprophet implements three algorithms that predict new potential physical interactions using the domain composition of proteins and experimental assays. The algorithms for protein and domain interaction inference include maximum likelihood estimation (MLE) using expectation maximization (EM), the maximum specificity set cover (MSSC) approach, and the sum-product algorithm (SPA). After accepting an input set of proteins with UniProt IDs/accession numbers and a selected prediction algorithm, Cytoprophet draws a network of potential interactions with probability scores and GO distances as edge attributes. A network of domain interactions between the domains of the initial protein list can also be generated. Cytoprophet was designed to take advantage of the visual capabilities of Cytoscape and to be simple to use. An example of inference in a signaling network of the myxobacterium Myxococcus xanthus is presented and available at Cytoprophet's website: http://cytoprophet.cse.nd.edu.
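Domain-composition models of this kind commonly assume that two proteins interact if at least one of their domain pairs interacts independently; the Python sketch below illustrates that assumption with made-up Pfam identifiers and probabilities (it is not Cytoprophet code, and the MLE/EM estimation of the domain-pair probabilities themselves is not shown).

def protein_interaction_prob(domains_a, domains_b, p_domain):
    # P(proteins interact) = 1 - product over domain pairs of (1 - p_pair),
    # assuming independent domain-pair interactions.
    p_none = 1.0
    for da in domains_a:
        for db in domains_b:
            p_none *= 1.0 - p_domain.get(frozenset((da, db)), 0.0)
    return 1.0 - p_none

# Illustrative probabilities only (hypothetical Pfam domain identifiers).
p_domain = {frozenset(("PF00069", "PF00017")): 0.42,
            frozenset(("PF00069", "PF00018")): 0.10}
print(protein_interaction_prob(["PF00069"], ["PF00017", "PF00018"], p_domain))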
Batyuk, Alexander; Wu, Yufan; Honegger, Annemarie; Heberling, Matthew M; Plückthun, Andreas
2016-04-24
DARPin libraries, based on a Designed Ankyrin Repeat Protein consensus framework, are a rich source of binding partners for a wide variety of proteins. Their modular structure, stability, ease of in vitro selection and high production yields make DARPins an ideal starting point for further engineering. The X-ray structures of around 30 different DARPin complexes demonstrate their ability to facilitate crystallization of their target proteins by restricting flexibility and preventing undesired interactions of the target molecule. However, their small size (18 kDa), very hydrophilic surface and repetitive structure can limit the DARPins' ability to provide essential crystal contacts and their usefulness as a search model for addressing the crystallographic phase problem in molecular replacement. To optimize DARPins for their application as crystallization chaperones, rigid domain-domain fusions of the DARPins to larger proteins, proven to yield high-resolution crystal structures, were generated. These fusions were designed in such a way that they affect only one of the terminal capping repeats of the DARPin and do not interfere with residues involved in target binding, allowing the binding specificity of the DARPin in the fusion construct to be exchanged at will. As a proof of principle, we designed rigid fusions of a stabilized version of Escherichia coli TEM-1 β-lactamase to the C-terminal capping repeat of various DARPins in six different relative domain orientations. Five crystal structures representing four different fusion constructs, alone or in complex with the cognate target, show the predicted relative domain orientations and prove the validity of the concept. Copyright © 2016 Elsevier Ltd. All rights reserved.
Towards a Pharmacophore for Amyloid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landau, Meytal; Sawaya, Michael R.; Faull, Kym F.
2011-09-16
Diagnosing and treating Alzheimer's and other diseases associated with amyloid fibers remains a great challenge despite intensive research. To aid in this effort, we present atomic structures of fiber-forming segments of proteins involved in Alzheimer's disease in complex with small molecule binders, determined by X-ray microcrystallography. The fiber-like complexes consist of pairs of β-sheets, with small molecules binding between the sheets, roughly parallel to the fiber axis. The structures suggest that apolar molecules drift along the fiber, consistent with the observation of nonspecific binding to a variety of amyloid proteins. In contrast, negatively charged orange-G binds specifically to lysine side chains of adjacent sheets. These structures provide molecular frameworks for the design of diagnostics and drugs for protein aggregation diseases. The devastating and incurable dementia known as Alzheimer's disease affects the thinking, memory, and behavior of dozens of millions of people worldwide. Although amyloid fibers and oligomers of two proteins, tau and amyloid-β, have been identified in association with this disease, the development of diagnostics and therapeutics has proceeded to date in a near vacuum of information about their structures. Here we report the first atomic structures of small molecules bound to amyloid. These are of the dye orange-G, the natural compound curcumin, and the Alzheimer's diagnostic compound DDNP bound to amyloid-like segments of tau and amyloid-β. The structures reveal the molecular framework of small-molecule binding, within cylindrical cavities running along the β-spines of the fibers. Negatively charged orange-G wedges into a specific binding site between two sheets of the fiber, combining apolar binding with electrostatic interactions, whereas uncharged compounds slide along the cavity. We observed that different amyloid polymorphs bind different small molecules, revealing that a cocktail of compounds may be required for future amyloid therapies. The structures described here start to define the amyloid pharmacophore, opening the way to structure-based design of improved diagnostics and therapeutics.
Ion transport across the biological membrane by computational protein design
NASA Astrophysics Data System (ADS)
Grigoryan, Gevorg
The cellular membrane is impermeable to most of the chemicals the cell needs to take in or discard to survive. Therefore, transporters, a class of transmembrane proteins tasked with shuttling cargo chemicals in and out of the cell, are essential to all cellular life. From existing crystal structures, we know transporters to be complex machines, exquisitely tuned for specificity and controllability. But how could membrane-bound life have evolved if it needed such complex machines to exist first? To shed light on this question, we considered the task of designing a transporter de novo. As our guiding principle, we took the "alternating-access model", a conceptual mechanism stating that transporters work by rocking between two conformations, each exposing the cargo-binding site to either the intra- or the extra-cellular environment. A computational design framework was developed to encode an anti-parallel four-helix bundle that rocked between two alternative states to orchestrate the movement of Zn(II) ions across the membrane. The ensemble nature of both states was accounted for using a free energy-based approach, and sequences were chosen based on predicted formation of the targeted topology in the membrane and bi-stability. A single sequence was prepared experimentally and shown to function as a Zn(II) transporter in lipid vesicles. Further, transport was specific to Zn(II) ions and several control peptides supported the underlying design principles. This included a mutant designed to retain all properties but with reduced rocking, which showed greatly depressed transport ability. These results suggest that early transporters could have evolved in the context of simple topologies, to be later tuned by evolution for improved properties and controllability. Our study also serves as an important advance in computational protein design, showing the feasibility of designing functional membrane proteins and of tuning conformational landscapes for desired function. Alfred P. Sloan Foundation Research Fellowship.
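The bi-stability requirement described above can be illustrated with a toy multi-state selection criterion: rank candidate sequences by the worse of their two predicted state energies, so that only sequences predicted to be stable in both the inward- and outward-facing conformations survive. The Python sketch below uses random stand-in energies and is only an assumption about how such a criterion might look, not the authors' free energy-based procedure.

import numpy as np

rng = np.random.default_rng(3)
n_seq = 1000
e_inward = rng.normal(0.0, 1.0, n_seq)    # stand-in predicted energies, inward-facing state
e_outward = rng.normal(0.0, 1.0, n_seq)   # stand-in predicted energies, outward-facing state

# Rank on the worse (higher) of the two state energies: low values mean the sequence
# is predicted to be stable in both conformations, a crude proxy for bi-stability.
worst_state = np.maximum(e_inward, e_outward)
top_candidates = np.argsort(worst_state)[:10]
print(top_candidates)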
NASA Astrophysics Data System (ADS)
Drake, J.; Mass, T.; Haramaty, L.; Zelzion, U.; Bhattacharya, D.; Falkowski, P. G.
2012-12-01
Carbonate formation by biological organisms is catalyzed by a set of proteins. In corals, the proteins form a subset of a poorly characterized skeletal organic matrix (SOM). This matrix is not simply cells occluded in the mineral, but is instead a suite of biomolecules secreted from cells for the purpose of nucleation and/or scaffolding. However, the mechanism(s) for SOM's role in biomineral formation remain to be elucidated, in part because, for many organisms including stony corals, the organic molecules have yet to be characterized much less modeled. In an effort to understand the calcification process, we sequenced the SOM protein complex in the zooxanthellate coral, Stylophora pistillata, by liquid chromatography-tandem mass spectrometry. Our analysis reveals several 'framework' proteins as well as three highly acidic proteins (proteins that contain >30% aspartic and glutamic acids). The SOM framework proteins show sequence homology with other stony corals as well as with calcite biomineralizers. Several of these proteins exhibit calcium-binding domains, while others are likely involved in attachment of the coral calicoblastic layer to the newly formed skeleton substrate. We have begun to express and purify the framework proteins to (1) confirm and visualize their presence in the extracted SOM and in intact skeleton by antibody staining and immunolocalization, and (2) test their interaction with the highly acidic SOM proteins that may direct aragonite nucleation. This work is the first comprehensive proteomic analysis of coral SOM. Together with our genomic work investigating highly acidic SOM candidates (Mass et al. 2012 AGU Fall Meeting abstract), this will allow us to construct a three-dimensional model of the coral calcifying space to better understand the mechanisms of coral biomineralization.
A designed glycoprotein analogue of Gc-MAF exhibits native-like phagocytic activity.
Bogani, Federica; McConnell, Elizabeth; Joshi, Lokesh; Chang, Yung; Ghirlanda, Giovanna
2006-06-07
Rational protein design has been successfully used to create mimics of natural proteins that retain native activity. In the present work, de novo protein engineering is explored to develop a mini-protein analogue of Gc-MAF, a glycoprotein involved in the immune system activation that has shown anticancer activity in mice. Gc-MAF is derived in vivo from vitamin D binding protein (VDBP) via enzymatic processing of its glycosaccharide to leave a single GalNAc residue located on an exposed loop. We used molecular modeling tools in conjunction with structural analysis to splice the glycosylated loop onto a stable three-helix bundle (alpha3W, PDB entry 1LQ7). The resulting 69-residue model peptide, MM1, has been successfully synthesized by solid-phase synthesis both in the aglycosylated and the glycosylated (GalNAc-MM1) form. Circular dichroism spectroscopy confirmed the expected alpha-helical secondary structure. The thermodynamic stability as evaluated from chemical and thermal denaturation is comparable with that of the scaffold protein, alpha3W, indicating that the insertion of the exogenous loop of Gc-MAF did not significantly perturb the overall structure. GalNAc-MM1 retains the macrophage stimulation activity of natural Gc-MAF; in vitro tests show an identical enhancement of Fc-receptor-mediated phagocytosis in primary macrophages. GalNAc-MM1 provides a framework for the development of mutants with increased activity that could be used in place of Gc-MAF as an immunomodulatory agent in therapy.
A novel integrated framework and improved methodology of computer-aided drug design.
Chen, Calvin Yu-Chian
2013-01-01
Computer-aided drug design (CADD) is a critical initiating step of drug development, but a single model capable of covering all design aspects has yet to be established. Hence, we developed a drug design modeling framework that integrates multiple approaches, including machine learning based quantitative structure-activity relationship (QSAR) analysis, 3D-QSAR, Bayesian networks, pharmacophore modeling, and a structure-based docking algorithm. Restrictions for each model were defined for improved individual and overall accuracy. An integration method was applied to join the results from each model to minimize bias and errors. In addition, the integrated model adopts both static and dynamic analysis to validate the intermolecular stabilities of the receptor-ligand conformation. As a validation example, the proposed protocol was applied to identifying HER2 inhibitors from traditional Chinese medicine (TCM). Eight potent leads were identified from six TCM sources. A joint validation system comprising comparative molecular field analysis, comparative molecular similarity indices analysis, and molecular dynamics simulation further characterized the candidates into three potential binding conformations and validated the binding stability of each protein-ligand complex. Ligand pathway analysis was also performed to predict how the ligand enters and exits the binding site. In summary, we propose a novel systematic CADD methodology for the identification, analysis, and characterization of drug-like candidates.
Light Hydrocarbon Adsorption Mechanisms in Two Calcium-Based Microporous Metal Organic Frameworks
Plonka, Anna M.; Chen, Xianyin; Wang, Hao; ...
2016-01-25
The adsorption mechanism of ethane, ethylene, and acetylene (C2Hn; n = 2, 4, 6) on two microporous metal organic frameworks (MOFs) is described here, consistent with observations from single crystal and powder X-ray diffraction, calorimetric measurements, and gas adsorption isotherm measurements. Two calcium-based MOFs, designated SBMOF-1 and SBMOF-2 (SB: Stony Brook), form three-dimensional frameworks with one-dimensional open channels. As determined from single crystal diffraction experiments, the channel geometries of both SBMOF-1 and SBMOF-2 provide multiple adsorption sites for hydrocarbon molecules through C–H···π and C–H···O interactions, similar to the interactions found in molecular and protein crystals. In conclusion, both materials selectively adsorb C2 hydrocarbon gases over methane, as determined with IAST and breakthrough calculations as well as experimental breakthrough measurements, with C2H6/CH4 selectivity as high as 74 in SBMOF-1.
Protein Hydration Thermodynamics: The Influence of Flexibility and Salt on Hydrophobin II Hydration.
Remsing, Richard C; Xi, Erte; Patel, Amish J
2018-04-05
The solubility of proteins and other macromolecular solutes plays an important role in numerous biological, chemical, and medicinal processes. An important determinant of protein solubility is the solvation free energy of the protein, which quantifies the overall strength of the interactions between the protein and the aqueous solution that surrounds it. Here we present an all-atom explicit-solvent computational framework for the rapid estimation of protein solvation free energies. Using this framework, we estimate the hydration free energy of hydrophobin II, an amphiphilic fungal protein, in a computationally efficient manner. We further explore how the protein hydration free energy is influenced by enhancing flexibility and by the addition of sodium chloride, and find that it increases in both cases, making protein hydration less favorable.
Toxins and derivatives in molecular pharmaceutics: Drug delivery and targeted therapy.
Zhan, Changyou; Li, Chong; Wei, Xiaoli; Lu, Wuyuan; Lu, Weiyue
2015-08-01
Protein and peptide toxins offer an invaluable source for the development of actively targeted drug delivery systems. They avidly bind to a variety of cognate receptors, some of which are expressed or even up-regulated in diseased tissues and biological barriers. Protein and peptide toxins or their derivatives can act as ligands to facilitate tissue- or organ-specific accumulation of therapeutics. Some toxins have evolved from a relatively small number of structural frameworks that are particularly suitable for addressing the crucial issues of potency and stability, making them an instrumental source of leads and templates for targeted therapy. The focus of this review is on protein and peptide toxins for the development of targeted drug delivery systems and molecular therapies. We summarize disease- and biological barrier-related toxin receptors, as well as targeted drug delivery strategies inspired by those receptors. The design of new therapeutics based on protein and peptide toxins is also discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Hierarchy and extremes in selections from pools of randomized proteins.
Boyer, Sébastien; Biswas, Dipanwita; Kumar Soshee, Ananda; Scaramozzino, Natale; Nizak, Clément; Rivoire, Olivier
2016-03-29
Variation and selection are the core principles of Darwinian evolution, but quantitatively relating the diversity of a population to its capacity to respond to selection is challenging. Here, we examine this problem at a molecular level in the context of populations of partially randomized proteins selected for binding to well-defined targets. We built several minimal protein libraries, screened them in vitro by phage display, and analyzed their response to selection by high-throughput sequencing. A statistical analysis of the results reveals two main findings. First, libraries with the same sequence diversity but built around different "frameworks" typically have vastly different responses; second, the distribution of responses of the best binders in a library follows a simple scaling law. We show how an elementary probabilistic model based on extreme value theory rationalizes the latter finding. Our results have implications for designing synthetic protein libraries, estimating the density of functional biomolecules in sequence space, characterizing diversity in natural populations, and experimentally investigating evolvability (i.e., the potential for future evolution).
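The scaling behaviour of the best binders described above is the province of extreme value statistics; the Python sketch below (using SciPy, with log-normal stand-in enrichment scores rather than real phage display data) shows one way such an analysis could be set up, by fitting a generalized extreme value distribution to the maxima of resampled sub-libraries. This is an illustrative setup under stated assumptions, not the authors' exact model.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
enrichments = rng.lognormal(mean=0.0, sigma=1.0, size=10000)   # stand-in selection scores

# Resample sub-libraries, record the best score in each, and fit a generalized
# extreme value (GEV) distribution to those maxima.
maxima = [rng.choice(enrichments, size=500, replace=False).max() for _ in range(200)]
shape, loc, scale = stats.genextreme.fit(maxima)
print(shape, loc, scale)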
ERIC Educational Resources Information Center
Gynther, Karsten
2016-01-01
The research project has developed a design framework for an adaptive MOOC that complements the MOOC format with blended learning. The design framework consists of a design model and a series of learning design principles which can be used to design in-service courses for teacher professional development. The framework has been evaluated by…
A periodic table of coiled-coil protein structures.
Moutevelis, Efrosini; Woolfson, Derek N
2009-01-23
Coiled coils are protein structure domains with two or more alpha-helices packed together via interlacing of side chains known as knobs-into-holes packing. We analysed and classified a large set of coiled-coil structures using a combination of automated and manual methods. This led to a systematic classification that we termed a "periodic table of coiled coils," which we have made available at http://coiledcoils.chm.bris.ac.uk/ccplus/search/periodic_table. In this table, coiled-coil assemblies are arranged in columns with increasing numbers of alpha-helices and in rows of increasing complexity. The table provides a framework for understanding possibilities in and limits on coiled-coil structures and a basis for future prediction, engineering and design studies.
Protein crystallography and infectious diseases.
Verlinde, C. L.; Merritt, E. A.; Van den Akker, F.; Kim, H.; Feil, I.; Delboni, L. F.; Mande, S. C.; Sarfaty, S.; Petra, P. H.; Hol, W. G.
1994-01-01
The current rapid growth in the number of known 3-dimensional protein structures is producing a database of structures that is increasingly useful as a starting point for the development of new medically relevant molecules such as drugs, therapeutic proteins, and vaccines. This development is beautifully illustrated in the recent book, Protein structure: New approaches to disease and therapy (Perutz, 1992). There is a great and growing promise for the design of molecules for the treatment or prevention of a wide variety of diseases, an endeavor made possible by the insights derived from the structure and function of crucial proteins from pathogenic organisms and from man. We present here 2 illustrations of structure-based drug design. The first is the prospect of developing antitrypanosomal drugs based on crystallographic, ligand-binding, and molecular modeling studies of glycolytic glycosomal enzymes from Trypanosomatidae. These unicellular organisms are responsible for several tropical diseases, including African and American trypanosomiases, as well as various forms of leishmaniasis. Because the target enzymes are also present in the human host, this project is a pioneering study in selective design. The second illustrative case is the prospect of designing anti-cholera drugs based on detailed analysis of the structure of cholera toxin and the closely related Escherichia coli heat-labile enterotoxin. Such potential drugs can be targeted either at inhibiting the toxin's receptor binding site or at blocking the toxin's intracellular catalytic activity. Study of the Vibrio cholerae and E. coli toxins serves at the same time as an example of a general approach to structure-based vaccine design. These toxins exhibit a remarkable ability to stimulate the mucosal immune system, and early results have suggested that this property can be maintained by engineered fusion proteins based on the native toxin structure. The challenge is thus to incorporate selected epitopes from foreign pathogens into the native framework of the toxin such that crucial features of both the epitope and the toxin are maintained. That is, the modified toxin must continue to evoke a strong mucosal immune response, and this response must be directed against an epitope conformation characteristic of the original pathogen. PMID:7849584
Morimoto, Shimpei; Yahara, Koji
2018-03-01
Protein expression is regulated by the production and degradation of mRNAs and proteins, but the specifics of their relationship are controversial. Although technological advances have enabled genome-wide and time-series surveys of mRNA and protein abundance, recent studies have shown paradoxical results, with most statistical analyses being limited to linear correlation, or to analysis of variance applied separately to mRNA and protein datasets. Here, using recently analyzed genome-wide time-series data, we have developed a statistical analysis framework for identifying which types of genes or biological gene groups have significant correlation between mRNA and protein abundance after accounting for potential time delays. Our framework stratifies all genes in terms of the extent of time delay, conducts gene clustering in each stratum, and performs a non-parametric statistical test of the correlation between mRNA and protein abundance in a gene cluster. Consequently, we revealed stronger correlations than previously reported between mRNA and protein abundance in two metabolic pathways. Moreover, we identified a pair of stress-responsive genes (ADC17 and KIN1) that showed highly similar time series of mRNA and protein abundance. Furthermore, we confirmed the robustness of the analysis framework by applying it to another genome-wide time-series dataset and identifying a cytoskeleton-related gene cluster (keratin 18, keratin 17, and mitotic spindle positioning) that shows similar correlation. The significant correlation and highly similar changes of mRNA and protein abundance suggest a concerted role of these genes in the cellular stress response, which we consider provides an answer to the question of the specific relationship between mRNA and protein in a cell. In addition, our framework for studying the relationship between mRNAs and proteins in a cell will provide a basis for studying specific relationships between mRNA and protein abundance after accounting for potential time delays.
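A stripped-down version of the lag-then-correlate step can be sketched as follows in Python (SciPy's Spearman test on toy series; the stratification by delay and the gene clustering from the paper are omitted, and all numbers are illustrative).

import numpy as np
from scipy import stats

def best_lag_correlation(mrna, protein, max_lag=3):
    # Try protein series delayed by 0..max_lag time points and keep the lag giving
    # the strongest Spearman correlation with the mRNA series.
    best = (0, -np.inf, 1.0)
    for lag in range(max_lag + 1):
        x = mrna[:len(mrna) - lag] if lag else mrna
        y = protein[lag:]
        rho, p = stats.spearmanr(x, y)
        if rho > best[1]:
            best = (lag, rho, p)
    return best

mrna = np.array([1.0, 2.1, 3.0, 2.4, 1.2, 0.9])
prot = np.array([0.8, 1.1, 2.0, 2.9, 2.5, 1.4])   # roughly the mRNA profile delayed one step
print(best_lag_correlation(mrna, prot))           # (lag, Spearman rho, p-value)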
Characterizing the reliability of a bioMEMS-based cantilever sensor
NASA Astrophysics Data System (ADS)
Bhalerao, Kaustubh D.
2004-12-01
The cantilever-based BioMEMS sensor represents one instance from many competing ideas of biosensor technology based on Micro Electro Mechanical Systems. The advancement of BioMEMS from laboratory-scale experiments to applications in the field will require standardization of their components and manufacturing procedures as well as frameworks to evaluate their performance. Reliability, the likelihood with which a system performs its intended task, is a compact mathematical description of its performance. The mathematical and statistical foundation of systems reliability has been applied to the cantilever-based BioMEMS sensor. The sensor is designed to detect one aspect of human ovarian cancer, namely the over-expression of the folate receptor surface protein (FR-alpha). Although the chosen application is clinically motivated, the objective of this study was to demonstrate the underlying systems-based methodology used to design, develop and evaluate the sensor. The framework development can be readily extended to other BioMEMS-based devices for disease detection and will have an impact in the rapidly growing $30 bn industry. The Unified Modeling Language (UML) is a systems-based framework for design and development of object-oriented information systems and has potential application in systems designed to interact with biological environments. The UML has been used to abstract and describe the application of the biosensor, to identify key components of the biosensor, and to describe the technology needed to link them together in a coherent manner. The use of the framework is also demonstrated in computation of system reliability from first principles as a function of the structure and materials of the biosensor. The outcomes of applying the systems-based framework to the study are the following: (1) Characterizing the cantilever-based MEMS device for disease (cell) detection. (2) Development of a novel chemical interface between the analyte and the sensor that provides a degree of selectivity towards the disease. (3) Demonstrating the performance and measuring the reliability of the biosensor prototype, and (4) Identification of opportunities in technological development in order to further refine the proposed biosensor. Application of the methodology to design, develop, and evaluate the reliability of BioMEMS devices will be beneficial in streamlining the growth of the BioMEMS industry, while providing a decision-support tool in comparing and adopting suitable technologies from available competing options.
Improved packing of protein side chains with parallel ant colonies
2014-01-01
Introduction The accurate packing of protein side chains is important for many computational biology problems, such as ab initio protein structure prediction, homology modelling, and protein design and ligand docking applications. Many existing solutions model the task as a computational optimisation problem. Beyond the design of search algorithms, most solutions suffer from inaccurate energy functions for judging whether a prediction is good or bad. Even if the search has found the lowest energy, there is no certainty of obtaining a protein structure with correct side chains. Methods We present a side-chain modelling method, pacoPacker, which uses a parallel ant colony optimisation strategy based on sharing a single pheromone matrix. This parallel approach combines different sources of energy functions and generates protein side-chain conformations with the lowest energies jointly determined by the various energy functions. We further optimised the selected rotamers to construct subrotamers by rotamer minimisation, which reasonably improved the discreteness of the rotamer library. Results We focused on improving the accuracy of side-chain conformation prediction. For a testing set of 442 proteins, 87.19% of χ1 and 77.11% of χ1+2 angles were predicted correctly within 40° of the X-ray positions. We compared the accuracy of pacoPacker with state-of-the-art methods, such as CIS-RR and SCWRL4. We analysed the results from different perspectives, in terms of protein chains and individual residues. In this comprehensive benchmark testing, 51.5% of proteins within a length of 400 amino acids predicted by pacoPacker were superior to the results of CIS-RR and SCWRL4 simultaneously. Finally, we also showed the advantage of using the subrotamers strategy. All results confirmed that our parallel approach is competitive with state-of-the-art solutions for packing side chains. Conclusions This parallel approach combines various sources of searching intelligence and energy functions to pack protein side chains. It provides a framework for combining different inaccuracy/usefulness objective functions by designing parallel heuristic search algorithms. PMID:25474164
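A minimal sketch of the single shared pheromone matrix idea is given below in Python (NumPy only; the per-rotamer energies are random stand-ins and the colonies are run sequentially here, whereas pacoPacker runs them in parallel, so treat the loop as a placeholder for parallel workers rather than the published implementation).

import numpy as np

rng = np.random.default_rng(0)
n_res, n_rot = 5, 4
energy = rng.random((n_res, n_rot))        # stand-in per-rotamer energies
pheromone = np.ones((n_res, n_rot))        # single matrix shared by all colonies

def run_colony(n_ants=20, evaporation=0.1):
    best, best_e = None, np.inf
    for _ in range(n_ants):
        probs = pheromone / pheromone.sum(axis=1, keepdims=True)
        choice = np.array([rng.choice(n_rot, p=probs[i]) for i in range(n_res)])
        e = energy[np.arange(n_res), choice].sum()
        if e < best_e:
            best, best_e = choice, e
    pheromone[:, :] *= 1.0 - evaporation   # evaporate, then reinforce the colony's best path
    pheromone[np.arange(n_res), best] += 1.0 / (1.0 + best_e)
    return best, best_e

for _ in range(3):                         # placeholder for colonies running in parallel
    print(run_colony())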
ScaffoldSeq: Software for characterization of directed evolution populations.
Woldring, Daniel R; Holec, Patrick V; Hackel, Benjamin J
2016-07-01
ScaffoldSeq is software designed for the numerous applications, including directed evolution analysis, in which a user generates a population of DNA sequences encoding partially diverse proteins with related functions and would like to characterize the single-site and pairwise amino acid frequencies across the population. A common scenario for enzyme maturation, antibody screening, and alternative scaffold engineering involves naïve and evolved populations that contain diversified regions, varying in both sequence and length, within a conserved framework. Analyzing the diversified regions of such populations is facilitated by high-throughput sequencing platforms; however, length variability within these regions (e.g., antibody CDRs) encumbers the alignment process. To overcome this challenge, the ScaffoldSeq algorithm takes advantage of conserved framework sequences to quickly identify diverse regions. Beyond this, unintended biases in sequence frequency are generated throughout the experimental workflow required to evolve and isolate clones of interest prior to DNA sequencing. ScaffoldSeq software uniquely handles this issue by providing tools to quantify and remove background sequences, cluster similar protein families, and dampen the impact of dominant clones. The software produces graphical and tabular summaries for each region of interest, allowing users to evaluate diversity in a site-specific manner as well as identify epistatic pairwise interactions. The code and detailed information are freely available at http://research.cems.umn.edu/hackel. Proteins 2016; 84:869-874. © 2016 Wiley Periodicals, Inc.
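The framework-anchoring trick that sidesteps a full multiple alignment of length-variable regions can be illustrated in a few lines of Python; the flank sequences and reads below are invented toy examples, not ScaffoldSeq's actual parsing logic.

def extract_diversified(read, left_flank, right_flank):
    # Locate conserved framework sequences flanking a length-variable diversified
    # region; the region in between is returned without any alignment step.
    i = read.find(left_flank)
    j = read.find(right_flank, i + len(left_flank)) if i != -1 else -1
    if i == -1 or j == -1:
        return None
    return read[i + len(left_flank):j]

reads = ["GCTTCCAAATGGCACGCTGACT", "GCTTCCTTTTTATGGCACGCTGACT"]
for r in reads:
    print(extract_diversified(r, "GCTTCC", "GGCACG"))   # regions of different lengths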
Dal Palù, Alessandro; Pontelli, Enrico; He, Jing; Lu, Yonggang
2007-01-01
The paper describes a novel framework, constructed using Constraint Logic Programming (CLP) and parallelism, to determine the association between parts of the primary sequence of a protein and alpha-helices extracted from 3D low-resolution descriptions of large protein complexes. The association is determined by extracting constraints from the 3D information, regarding length, relative position and connectivity of helices, and solving these constraints with the guidance of a secondary structure prediction algorithm. Parallelism is employed to enhance performance on large proteins. The framework provides a fast, inexpensive alternative to determine the exact tertiary structure of unknown proteins.
Increased protein intake in military special operations.
Ferrando, Arny A
2013-11-01
Special operations are so designated for the specialized military missions they address. As a result, special operations present some unique metabolic challenges. In particular, soldiers often operate in a negative energy balance in stressful and demanding conditions with little opportunity for rest or recovery. In this framework, findings inferred from the performance literature suggest that increased protein intake may be beneficial. In particular, increased protein intake during negative caloric balance maintains lean body mass and blood glucose production. The addition of protein to mixed macronutrient supplements is beneficial for muscle endurance and power endpoints, and the use of amino acids improves gross and fine motor skills. Increasing protein intake during periods of intense training and/or metabolic demand improves subsequent performance, improves muscular recovery, and reduces symptoms of psychological stress. Consumption of protein before sleep confers the anabolic responses required for the maintenance of lean mass and muscle recovery. A maximal response in muscle protein synthesis is achieved with the consumption of 20-25 g of protein alone. However, higher protein intakes in the context of mixed-nutrient ingestion also confer anabolic benefits by reducing protein breakdown. Restricted rations issued to special operators provide less than the RDA for protein (∼0.6 g/kg), and these soldiers often rely on commercial products to augment their rations. The provision of reasonable alternatives and/or certification of approved supplements by the U.S. Department of Defense would be prudent.
Evaluation of Frameworks for HSCT Design Optimization
NASA Technical Reports Server (NTRS)
Krishnan, Ramki
1998-01-01
This report is an evaluation of engineering frameworks that could be used to augment, supplement, or replace the existing FIDO 3.5 (Framework for Interdisciplinary Design and Optimization Version 3.5) framework. The report begins with the motivation for this effort, followed by a description of an "ideal" multidisciplinary design and optimization (MDO) framework. The discussion then turns to how each candidate framework stacks up against this ideal. This report ends with recommendations as to the "best" frameworks that should be down-selected for detailed review.
Armean, Irina M; Lilley, Kathryn S; Trotter, Matthew W B; Pilkington, Nicholas C V; Holden, Sean B
2018-06-01
Protein-protein interactions (PPI) play a crucial role in our understanding of protein function and biological processes. Experimental findings are increasingly standardized and recorded in ontologies, with the Gene Ontology (GO) being one of the most successful projects. Several PPI evaluation algorithms have been based on the application of probabilistic frameworks or machine learning algorithms to GO properties. Here, we introduce a new training set design and machine learning based approach that combines dependent heterogeneous protein annotations from the entire ontology to evaluate putative co-complex protein interactions determined by empirical studies. PPI annotations are built combinatorially using corresponding GO terms and InterPro annotation. We use an S. cerevisiae high-confidence complex dataset as a positive training set. A series of classifiers based on Maximum Entropy and support vector machines (SVMs), each with a composite counterpart algorithm, are trained on a series of training sets. These achieve a high performance area under the ROC curve of up to 0.97, outperforming go2ppi, a previously established GO-based PPI prediction tool. https://github.com/ima23/maxent-ppi. sbh11@cl.cam.ac.uk. Supplementary data are available at Bioinformatics online.
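A toy version of the classification step might look like the Python snippet below (scikit-learn SVC on binary indicators for annotation pairs; the feature construction, the Maximum Entropy counterpart, and all values are invented stand-ins for the paper's scheme rather than its actual pipeline).

import numpy as np
from sklearn.svm import SVC

# Each row is a protein pair; each column is a binary indicator for one combined
# GO-term/InterPro annotation pair (hypothetical features).
X = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 1, 0, 1]])
y = np.array([1, 1, 0, 0])        # 1 = high-confidence co-complex pair, 0 = random pair

clf = SVC(kernel="rbf").fit(X, y)
query = [[1, 0, 0, 1]]            # a new candidate protein pair
print(clf.predict(query), clf.decision_function(query))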
McCoy, Kimberly; Uchida, Masaki; Lee, Byeongdu; Douglas, Trevor
2018-04-24
Bottom-up construction of mesoscale materials using biologically derived nanoscale building blocks enables engineering of desired physical properties using green production methods. Virus-like particles (VLPs) are exceptional building blocks due to their monodispersed sizes, geometric shapes, production ease, proteinaceous composition, and our ability to independently functionalize the interior and exterior interfaces. Here a VLP, derived from bacteriophage P22, is used as a building block for the fabrication of a protein macromolecular framework (PMF), a tightly linked 3D network of functional protein cages that exhibit long-range order and catalytic activity. Assembly of PMFs was electrostatically templated, using amine-terminated dendrimers, then locked into place with a ditopic cementing protein that binds to P22. Long-range order is preserved on removal of the dendrimer, leaving a framework material composed completely of protein. Encapsulation of β-glucosidase enzymes inside of P22 VLPs results in formation of stable, condensed-phase materials with high local concentration of enzymes generating catalytically active PMFs.
jMetalCpp: optimizing molecular docking problems with a C++ metaheuristic framework.
López-Camacho, Esteban; García Godoy, María Jesús; Nebro, Antonio J; Aldana-Montes, José F
2014-02-01
Molecular docking is a method for structure-based drug design and structural molecular biology, which attempts to predict the position and orientation of a small molecule (ligand) in relation to a protein (receptor) to produce a stable complex with a minimum binding energy. One of the most widely used software packages for this purpose is AutoDock, which incorporates three metaheuristic techniques. We propose the integration of AutoDock with jMetalCpp, an optimization framework, thereby providing both single- and multi-objective algorithms that can be used to effectively solve docking problems. The resulting combination of AutoDock + jMetalCpp allows users of the former to easily use the metaheuristics provided by the latter. In this way, biologists have at their disposal a richer set of optimization techniques than those already provided in AutoDock. Moreover, designers of metaheuristic techniques can use molecular docking for case studies, which can lead to more efficient algorithms oriented to solving the target problems. jMetalCpp software adapted to AutoDock is freely available as a C++ source code at http://khaos.uma.es/AutodockjMetal/.
Biological applications of zinc imidazole framework through protein encapsulation
NASA Astrophysics Data System (ADS)
Kumar, Pawan; Bansal, Vasudha; Paul, A. K.; Bharadwaj, Lalit M.; Deep, Akash; Kim, Ki-Hyun
2016-10-01
The robustness of biomolecules is a significant challenge for biostorage applications in biotechnology and pharmaceutical research. To learn more about biostorage in porous materials, we investigated the feasibility of using the zeolitic imidazolate framework ZIF-8 for protein encapsulation. Here, bovine serum albumin (BSA) was selected as a model protein for encapsulation during the synthesis of ZIF-8, using water as the medium. ZIF-8 exhibited excellent protein adsorption capacity through successive adsorption of free BSA with the formation of hollow crystals. The loading of protein in ZIF-8 crystals is affected by the molecular weight, owing to diffusion-limited permeation inside the crystals, and also by the affinity of the protein for the pendant groups on the ZIF-8 surface. The polar nature of BSA not only supported adsorption on the solid surface, but also enhanced the affinity of the crystal spheres through weak coordination interactions with the ZIF-8 framework. The novel approach tested in this study was therefore successful in achieving protein encapsulation with porous, biocompatible, and decomposable microcrystalline ZIF-8. The presence of both BSA and FITC-BSA in ZIF-8 was confirmed consistently by spectroscopy as well as optical and electron microscopy.
NASA Technical Reports Server (NTRS)
Agena, S. M.; Pusey, M. L.; Bogle, I. D.
1999-01-01
A thermodynamic framework (UNIQUAC model with temperature dependent parameters) is applied to model the salt-induced protein crystallization equilibrium, i.e., protein solubility. The framework introduces a term for the solubility product describing protein transfer between the liquid and solid phase and a term for the solution behavior describing deviation from ideal solution. Protein solubility is modeled as a function of salt concentration and temperature for a four-component system consisting of a protein, pseudo solvent (water and buffer), cation, and anion (salt). Two different systems, lysozyme with sodium chloride and concanavalin A with ammonium sulfate, are investigated. Comparison of the modeled and experimental protein solubility data results in an average root mean square deviation of 5.8%, demonstrating that the model closely follows the experimental behavior. Model calculations and model parameters are reviewed to examine the model and protein crystallization process. Copyright 1999 John Wiley & Sons, Inc.
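The two-term structure of the model, a temperature-dependent solubility product for solid-liquid transfer plus an activity coefficient for solution non-ideality, can be sketched numerically as below; note that a simple one-parameter salting-out expression stands in for the actual UNIQUAC activity model, and every coefficient is an invented placeholder rather than a fitted value from the paper.

import numpy as np

def protein_solubility(temp_K, salt_molal, ln_ksp_coeffs=(-2.0, -1500.0), salting_out_k=0.8):
    # x_P * gamma_P = K_sp(T)  =>  x_P = K_sp(T) / gamma_P(salt)
    a, b = ln_ksp_coeffs
    ln_ksp = a + b / temp_K                 # hypothetical van 't Hoff-like temperature dependence
    ln_gamma = salting_out_k * salt_molal   # stand-in for UNIQUAC: non-ideality grows with salt
    return np.exp(ln_ksp - ln_gamma)

for m in (0.5, 1.0, 1.5):                   # salt molality
    print(m, protein_solubility(298.15, m)) # solubility falls as salt is added (salting out)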
Advanced Information Technology in Simulation Based Life Cycle Design
NASA Technical Reports Server (NTRS)
Renaud, John E.
2003-01-01
In this research, a Collaborative Optimization (CO) approach for multidisciplinary systems design is used to develop a decision based design framework for non-deterministic optimization. To date, CO strategies have been developed for application to deterministic systems design problems. In this research, the decision based design (DBD) framework proposed by Hazelrigg is modified for use in a collaborative optimization framework. The Hazelrigg framework, as originally proposed, provides a single-level optimization strategy that combines engineering decisions with business decisions. By transforming this framework for use in collaborative optimization, one can decompose the business and engineering decision-making processes. In the new multilevel framework of Decision Based Collaborative Optimization (DBCO), business decisions are made at the system level. These business decisions result in a set of engineering performance targets that disciplinary engineering design teams seek to satisfy as part of subspace optimizations. The Decision Based Collaborative Optimization framework more accurately models the existing relationship between business and engineering in multidisciplinary systems design.
Zhan, Xi; Shen, Hong
2015-05-28
For more precise control over the quality and quantity of immune responses stimulated by synthetic particle-based vaccines, it is critical to control the colloidal stability of particles and the release of protein antigens in both the extracellular space and intracellular compartments. Different proteins exhibit different sizes, charges and solubilities. This study focused on modulating the release and colloidal stability of proteins with varied isoelectric points. A polymer particle delivery platform made from a blend of three polymers, poly(lactic-co-glycolic acid) (PLGA) and two random pH-sensitive copolymers, was developed. Our study demonstrated its programmability with respect to individual proteins. We showed that the colloidal stability of the particles in a neutral environment and the release of each individual protein at different pH values depended on the ratio of the two charged polymers. Subsequently, two antigenic proteins, ovalbumin (OVA) and Type 2 Herpes Simplex Virus (HSV-2) glycoprotein D (gD) protein, were incorporated into particles with systematically varied compositions. We demonstrated that the levels of in vitro CD8(+) T cell and in vivo immune responses were dependent on the ratio of the two charged polymers, which correlated well with the release of the proteins. This study provides a promising design framework for pH-responsive synthetic vaccines for protein antigens of interest. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Triantafyllakos, George; Palaigeorgiou, George; Tsoukalas, Ioannis A.
2011-01-01
In this paper, we present a framework for the development of collaborative design games that can be employed in participatory design sessions with students for the design of educational applications. The framework is inspired by idea generation theory and the design games literature, and guides the development of board games which, through the use…
Layman, Donald K
2014-07-01
The food industry is the point of final integration of consumer food choices with dietary guidelines. For more than 40 years, nutrition recommendations emphasized reducing dietary intake of animal fats, cholesterol, and protein and increasing intake of cereal grains. The food industry responded by creating a convenient, low cost and diverse food supply that featured fat-free cookies, cholesterol-free margarines, and spaghetti with artificial meat sauce. However, research focused on obesity, aging, and Metabolic Syndrome has demonstrated merits of increased dietary protein and reduced amounts of carbohydrates. Dietary guidelines have changed from a conceptual framework of a daily balance of food groups represented as building blocks in a pyramid designed to encourage consumers to avoid fat, to a plate design that creates a meal approach to nutrition and highlights protein and vegetables and minimizes grain carbohydrates. Coincident with the changing dietary guidelines, consumers are placing higher priority on foods for health and seeking foods with more protein, less sugars and minimal processing that are fresh, natural, and with fewer added ingredients. Individual food companies must adapt to changing nutrition knowledge, dietary guidelines, and consumer priorities. The impact on the food industry will be specific to each company based on their products, culture and capacity to adapt. Copyright © 2014 Elsevier Inc. All rights reserved.
Villada, Juan C; Brustolini, Otávio José Bernardes; Batista da Silveira, Wendel
2017-08-01
Gene codon optimization may be impaired by the misinterpretation of the frequency and optimality of codons. Although recent studies have revealed the effects of codon usage bias (CUB) on protein biosynthesis, an integrated perspective of the biological role of individual codons remains unknown. Unlike previous studies, we show, through an integrated framework, that codon attributes such as frequency, optimality and positional dependency should be combined to unveil the contribution of individual codons to protein biosynthesis. We designed a codon quantification method for assessing CUB as a function of position within genes, with a novel constraint: the relativity of position-dependent codon usage shaped by coding sequence length. Thus, we propose a new way of identifying the enrichment, depletion and non-uniform positional distribution of codons in different regions of yeast genes. We clustered codons that shared attributes of frequency and optimality. The cluster of non-optimal codons with rare occurrence displayed two remarkable characteristics: higher codon decoding time than the frequent-non-optimal cluster and enrichment at the 5'-end region, where optimal codons with the highest frequency are depleted. Interestingly, frequent codons with non-optimal adaptation to tRNAs are uniformly distributed in Saccharomyces cerevisiae genes, suggesting their determinant role as a speed regulator in protein elongation. © The Author 2017. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.
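The positional constraint, comparing codon usage at relative rather than absolute positions so that genes of different lengths are comparable, can be sketched in a few lines of Python; the toy coding sequences and the 20% cutoff for the 5'-end region below are illustrative assumptions, not the paper's exact binning scheme.

from collections import defaultdict

def relative_codon_positions(cds):
    # Relative (0..1) position of every codon within its coding sequence.
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    n = len(codons)
    return [(c, i / (n - 1) if n > 1 else 0.0) for i, c in enumerate(codons)]

five_prime_counts, total_counts = defaultdict(int), defaultdict(int)
for gene in ["ATGGCTGCAAAAGAT", "ATGAAAGCTGCTGCAGATGATTAA"]:   # toy CDSs of different lengths
    for codon, rel_pos in relative_codon_positions(gene):
        total_counts[codon] += 1
        if rel_pos < 0.2:                                        # first 20% as the "5'-end region"
            five_prime_counts[codon] += 1

for codon in sorted(total_counts):
    print(codon, five_prime_counts[codon], "/", total_counts[codon])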
Geometrical Tile Design for Complex Neighborhoods
Czeizler, Eugen; Kari, Lila
2009-01-01
Recent research has showed that tile systems are one of the most suitable theoretical frameworks for the spatial study and modeling of self-assembly processes, such as the formation of DNA and protein oligomeric structures. A Wang tile is a unit square, with glues on its edges, attaching to other tiles and forming larger and larger structures. Although quite intuitive, the idea of glues placed on the edges of a tile is not always natural for simulating the interactions occurring in some real systems. For example, when considering protein self-assembly, the shape of a protein is the main determinant of its functions and its interactions with other proteins. Our goal is to use geometric tiles, i.e., square tiles with geometrical protrusions on their edges, for simulating tiled paths (zippers) with complex neighborhoods, by ribbons of geometric tiles with simple, local neighborhoods. This paper is a step toward solving the general case of an arbitrary neighborhood, by proposing geometric tile designs that solve the case of a “tall” von Neumann neighborhood, the case of the f-shaped neighborhood, and the case of a 3 × 5 “filled” rectangular neighborhood. The techniques can be combined and generalized to solve the problem in the case of any neighborhood, centered at the tile of reference, and included in a 3 × (2k + 1) rectangle. PMID:19956398
[Computer aided design and rapid manufacturing of removable partial denture frameworks].
Han, Jing; Lü, Pei-jun; Wang, Yong
2010-08-01
To introduce a method of digital modeling and fabrication of removable partial denture (RPD) frameworks using self-developed RPD design software and a rapid manufacturing system. The three-dimensional data of two partially dentate dental casts were obtained using a three-dimensional cross-section scanner. The self-developed software package for RPD design was used to determine the path of insertion and to design the different components of the RPD frameworks. The components included the occlusal rest, clasp, lingual bar, polymeric retention framework and maxillary major connector. The design procedure for the components was as follows: first, determine the outline of the component; second, build the tissue surface of the component using the scanned data within the outline; third, use a preset cross-section to produce the polished surface. Finally, the different RPD components were modeled respectively and connected by minor connectors to form an integrated RPD framework. The finished data were imported into a self-developed selective laser melting (SLM) machine and metal frameworks were fabricated directly. RPD frameworks for the two scanned dental casts were modeled with this self-developed program and metal RPD frameworks were successfully fabricated using the SLM method. The finished metal frameworks fit well on the plaster models. The self-developed computer aided design and computer aided manufacture (CAD-CAM) system for RPD design and fabrication has completely independent intellectual property rights. It provides a new method of manufacturing metal RPD frameworks.
Saha, Sudipto; Dazard, Jean-Eudes; Xu, Hua; Ewing, Rob M.
2013-01-01
Large-scale protein–protein interaction data sets have been generated for several species including yeast and human and have enabled the identification, quantification, and prediction of cellular molecular networks. Affinity purification-mass spectrometry (AP-MS) is the preeminent methodology for large-scale analysis of protein complexes, performed by immunopurifying a specific “bait” protein and its associated “prey” proteins. The analysis and interpretation of AP-MS data sets is, however, not straightforward. In addition, although yeast AP-MS data sets are relatively comprehensive, current human AP-MS data sets only sparsely cover the human interactome. Here we develop a framework for analysis of AP-MS data sets that addresses the issues of noise, missing data, and sparsity of coverage in the context of a current, real world human AP-MS data set. Our goal is to extend and increase the density of the known human interactome by integrating bait–prey and cocomplexed preys (prey–prey associations) into networks. Our framework incorporates a score for each identified protein, as well as elements of signal processing to improve the confidence of identified protein–protein interactions. We identify many protein networks enriched in known biological processes and functions. In addition, we show that integrated bait–prey and prey–prey interactions can be used to refine network topology and extend known protein networks. PMID:22845868
Marinangeli, Christopher P F; House, James D
2017-01-01
Regulatory frameworks for protein content claims in Canada and the United States are underpinned by the protein efficiency ratio and protein digestibility-corrected amino acid score (PDCAAS), respectively, which are used to assess the protein quality of a given food. The digestible indispensable amino acid score (DIAAS) is a novel approach to measuring the protein quality of foods and is supported by the Food and Agriculture Organization of the United Nations. Methodological concerns about the PDCAAS are addressed by the DIAAS through introduction of the use of ileal amino acid digestibility coefficients and untruncated protein scores. However, before the DIAAS is widely adopted within regulatory frameworks, a comprehensive assessment is required. Accordingly, this review addresses the potential impact of the DIAAS on regulation, communication, and public health, as well as knowledge gaps, analytical challenges, and cost of implementation. A pragmatic approach to addressing protein quality is advocated by suggesting the use of conservative coefficients of digestibility that are derived from in vitro methods. Before adopting the DIAAS as a framework for supporting protein content claims, updated food-related regulations and policies should also be evaluated through a lens that anticipates the impact on consumer-facing nutrition communication, the adoption of dietary patterns that are nutritionally adequate, and a food value chain that fosters a spirit of food and nutritional innovation. PMID:28969364
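As a rough numerical illustration of why the two scores can diverge, the Python snippet below computes a PDCAAS-style value (lowest amino acid ratio multiplied by a single fecal digestibility, truncated at 1.0) and a DIAAS-style value (per-amino-acid ileal digestibility, untruncated); all amino acid contents and digestibility coefficients are invented for illustration and do not correspond to any measured food.

# Reference scoring pattern and food composition in mg of amino acid per g of protein
# (all numbers are illustrative placeholders).
reference = {"Lys": 48, "SAA": 23, "Thr": 25, "Trp": 6.6}
food_aa   = {"Lys": 40, "SAA": 30, "Thr": 30, "Trp": 8.0}
fecal_digestibility = 0.85                                   # single value, PDCAAS-style
ileal_digestibility = {"Lys": 0.80, "SAA": 0.88, "Thr": 0.84, "Trp": 0.86}

pdcaas = min(food_aa[a] / reference[a] for a in food_aa) * fecal_digestibility
pdcaas = min(pdcaas, 1.0)                                    # PDCAAS is truncated at 1.0
diaas = min(food_aa[a] * ileal_digestibility[a] / reference[a] for a in food_aa)
print(round(pdcaas, 2), round(diaas, 2))                     # lysine is limiting in this toy food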
An Instructional Design Framework for Fostering Student Engagement in Online Learning Environments
ERIC Educational Resources Information Center
Czerkawski, Betul C.; Lyman, Eugene W.
2016-01-01
Many approaches, models and frameworks exist when designing quality online learning environments. These approaches assist and guide instructional designers through the process of analysis, design, development, implementation and evaluation of instructional processes. Some of these frameworks are concerned with student participation, some with…
A visualization framework for design and evaluation
NASA Astrophysics Data System (ADS)
Blundell, Benjamin J.; Ng, Gary; Pettifer, Steve
2006-01-01
The creation of compelling visualisation paradigms is a craft often dominated by intuition and issues of aesthetics, with relatively few models to support good design. The majority of problem cases are approached by simply applying a previously evaluated visualisation technique. A large body of work exists covering the individual aspects of visualisation design, such as the human cognition aspects, visualisation methods for specific problem areas, psychology studies and so forth, yet most frameworks regarding visualisation are applied after the fact as an evaluation measure. We present an extensible framework for visualisation aimed at structuring the design process, increasing decision traceability and delineating the notions of function, aesthetics and usability. The framework can be used to derive a set of requirements for good visualisation design and to evaluate existing visualisations, presenting possible improvements. Our framework achieves this by being both broad and general, built on top of existing works, with hooks for extensions and customizations. This paper shows how existing theories of information visualisation fit into the scheme, presents our experience in the application of this framework on several designs, and offers our evaluation of the framework and the designs studied.
NASA Astrophysics Data System (ADS)
Lin, Y.; Zhang, W. J.
2005-02-01
This paper presents an approach to human-machine interface design for control room operators of nuclear power plants. The first step in designing an interface for a particular application is to determine the information content that needs to be displayed. The design methodology for this step is called the interface design framework (hereafter, the framework). Several frameworks have been proposed for applications at varying levels, including process plants; however, none is based on the design and manufacture of the plant system for which the interface is designed. This paper presents an interface design framework which originates from design theory and methodology for general technical systems. Specifically, the framework is based on a set of core concepts of a function-behavior-state model originally proposed by the artificial intelligence research community and widely applied in the design research community. Benefits of this new framework include the provision of a model-based fault diagnosis facility, and the seamless integration of the design (manufacture, maintenance) of plants and the design of human-machine interfaces. The missing linkage between design and operation of a plant was one of the causes of the Three Mile Island nuclear reactor incident. A simulated plant system is presented to explain how to apply this framework in designing an interface. The resulting human-machine interface is discussed; specifically, several fault diagnosis examples are elaborated to demonstrate how this interface could support operators' fault diagnosis in an unanticipated situation.
Tutorial on Protein Ontology Resources
Arighi, Cecilia; Drabkin, Harold; Christie, Karen R.; Ross, Karen; Natale, Darren
2017-01-01
The Protein Ontology (PRO) is the reference ontology for proteins in the Open Biomedical Ontologies (OBO) foundry and consists of three sub-ontologies representing protein classes of homologous genes, proteoforms (e.g., splice isoforms, sequence variants, and post-translationally modified forms), and protein complexes. PRO defines classes of proteins and protein complexes, both species-specific and species non-specific, and indicates their relationships in a hierarchical framework, supporting accurate protein annotation at the appropriate level of granularity, analyses of protein conservation across species, and semantic reasoning. In the first section of this chapter, we describe the PRO framework, including categories of PRO terms and the relationship of PRO to other ontologies and protein resources. Next, we provide a tutorial about the PRO website (proconsortium.org) where users can browse and search the PRO hierarchy, view reports on individual PRO terms, and visualize relationships among PRO terms in a hierarchical table view, a multiple sequence alignment view, and a Cytoscape network view. Finally, we describe several examples illustrating the unique and rich information available in PRO. PMID:28150233
Bacterial protease uses distinct thermodynamic signatures for substrate recognition.
Bezerra, Gustavo Arruda; Ohara-Nemoto, Yuko; Cornaciu, Irina; Fedosyuk, Sofiya; Hoffmann, Guillaume; Round, Adam; Márquez, José A; Nemoto, Takayuki K; Djinović-Carugo, Kristina
2017-06-06
Porphyromonas gingivalis and Porphyromonas endodontalis are important bacteria related to periodontitis, the most common chronic inflammatory disease in humans worldwide. Its comorbidity with systemic diseases, such as type 2 diabetes, oral cancers and cardiovascular diseases, continues to generate considerable interest. Surprisingly, these two microorganisms do not ferment carbohydrates; rather, they use proteinaceous substrates as carbon and energy sources. However, the underlying biochemical mechanisms of their energy metabolism remain unknown. Here, we show that dipeptidyl peptidase 11 (DPP11), a central metabolic enzyme in these bacteria, undergoes a conformational change upon peptide binding to distinguish substrates from end products. It binds substrates through an entropy-driven process and end products in an enthalpy-driven fashion. We show that an increase in protein conformational entropy is the main driving force for substrate binding via the unfolding of specific regions of the enzyme ("entropy reservoirs"). The relationship between our structural and thermodynamics data yields a distinct model for protein-protein interactions where protein conformational entropy modulates the binding free energy. Further, our findings provide a framework for the structure-based design of specific DPP11 inhibitors.
Interactive comparison and remediation of collections of macromolecular structures.
Moriarty, Nigel W; Liebschner, Dorothee; Klei, Herbert E; Echols, Nathaniel; Afonine, Pavel V; Headd, Jeffrey J; Poon, Billy K; Adams, Paul D
2018-01-01
Often similar structures need to be compared to reveal local differences throughout the entire model or between related copies within the model. Therefore, a program to compare multiple structures and enable correction of any differences not supported by the density map was written within the Phenix framework (Adams et al., Acta Cryst 2010; D66:213-221). This program, called Structure Comparison, can also be used for structures with multiple copies of the same protein chain in the asymmetric unit, that is, as a result of non-crystallographic symmetry (NCS). Structure Comparison was designed to interface with Coot (Emsley et al., Acta Cryst 2010; D66:486-501) and PyMOL (DeLano, PyMOL 0.99; 2002) to facilitate comparison of large numbers of related structures. Structure Comparison analyzes collections of protein structures using several metrics, such as the rotamer conformation of equivalent residues, displays the results in tabular form and allows superimposed protein chains and density maps to be quickly inspected and edited (via the tools in Coot) for consistency, completeness and correctness. © 2017 The Protein Society.
A cell-free framework for rapid biosynthetic pathway prototyping and enzyme discovery.
Karim, Ashty S; Jewett, Michael C
2016-07-01
Speeding up design-build-test (DBT) cycles is a fundamental challenge facing biochemical engineering. To address this challenge, we report a new cell-free protein synthesis driven metabolic engineering (CFPS-ME) framework for rapid biosynthetic pathway prototyping. In our framework, cell-free cocktails for synthesizing target small molecules are assembled in a mix-and-match fashion from crude cell lysates either containing selectively enriched pathway enzymes from heterologous overexpression or directly producing pathway enzymes in lysates by CFPS. As a model, we apply our approach to n-butanol biosynthesis showing that Escherichia coli lysates support a highly active 17-step CoA-dependent n-butanol pathway in vitro. The elevated degree of flexibility in the cell-free environment allows us to manipulate physiochemical conditions, access enzymatic nodes, discover new enzymes, and prototype enzyme sets with linear DNA templates to study pathway performance. We anticipate that CFPS-ME will facilitate efforts to define, manipulate, and understand metabolic pathways for accelerated DBT cycles without the need to reengineer organisms. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Yoon, Chun Hong; Yurkov, Mikhail V.; Schneidmiller, Evgeny A.; Samoylova, Liubov; Buzmakov, Alexey; Jurek, Zoltan; Ziaja, Beata; Santra, Robin; Loh, N. Duane; Tschentscher, Thomas; Mancuso, Adrian P.
2016-04-01
The advent of newer, brighter, and more coherent X-ray sources, such as X-ray Free-Electron Lasers (XFELs), represents a tremendous growth in the potential to apply coherent X-rays to determine the structure of materials from the micron-scale down to the Angstrom-scale. There is a significant need for a multi-physics simulation framework to perform source-to-detector simulations for a single particle imaging experiment, including (i) the multidimensional simulation of the X-ray source; (ii) simulation of the wave-optics propagation of the coherent XFEL beams; (iii) atomistic modelling of photon-material interactions; (iv) simulation of the time-dependent diffraction process, including incoherent scattering; (v) assembling noisy and incomplete diffraction intensities into a three-dimensional data set using the Expansion-Maximisation-Compression (EMC) algorithm and (vi) phase retrieval to obtain structural information. We demonstrate the framework by simulating a single-particle experiment for a nitrogenase iron protein using parameters of the SPB/SFX instrument of the European XFEL. This exercise demonstrably yields interpretable consequences for structure determination that are crucial for experiment design yet currently unavailable.
Physics-based enzyme design: predicting binding affinity and catalytic activity.
Sirin, Sarah; Pearlman, David A; Sherman, Woody
2014-12-01
Computational enzyme design is an emerging field that has yielded promising success stories, but where numerous challenges remain. Accurate methods to rapidly evaluate possible enzyme design variants could provide significant value when combined with experimental efforts by reducing the number of variants that need to be synthesized and by shortening the time to reach the desired design endpoint. To that end, it is essential to extend our computational methods to model the fundamental physical-chemical principles that regulate activity in a protocol that is automated and accessible to a broad population of enzyme design researchers. Here, we apply a physics-based implicit solvent MM-GBSA scoring approach to enzyme design and benchmark the computational predictions against experimentally determined activities. Specifically, we evaluate the ability of MM-GBSA to predict changes in affinity for a steroid binder protein, catalytic turnover for a Kemp eliminase, and catalytic activity for α-Gliadin peptidase variants. Using the enzyme design framework developed here, we accurately rank the most experimentally active enzyme variants, suggesting that this approach could provide enrichment of active variants in real-world enzyme design applications. © 2014 Wiley Periodicals, Inc.
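As a concrete illustration of the ranking use-case described above, the sketch below (with entirely hypothetical variant names, scores, and activities) orders design variants by a physics-based score such as an MM-GBSA energy and checks agreement with measured activity using a Spearman rank correlation; it is not the authors' pipeline.

```python
from scipy.stats import spearmanr

# Hypothetical variants: (name, predicted score in kcal/mol, measured relative activity).
# More negative score = predicted more active.
variants = [
    ("WT",   -42.1, 1.00),
    ("V12A", -38.5, 0.40),
    ("L45F", -45.9, 2.10),
    ("S78T", -40.2, 0.85),
]

ranked = sorted(variants, key=lambda v: v[1])           # best-predicted first
scores = [v[1] for v in variants]
activities = [v[2] for v in variants]

rho, pval = spearmanr(scores, activities)
print("predicted ranking:", [name for name, _, _ in ranked])
print(f"Spearman rho (score vs. activity): {rho:.2f}, p = {pval:.2f}")
```

In a real application, enrichment of active variants among the top-ranked designs would of course be assessed on much larger variant sets.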
A DBR Framework for Designing Mobile Virtual Reality Learning Environments
ERIC Educational Resources Information Center
Cochrane, Thomas Donald; Cook, Stuart; Aiello, Stephen; Christie, Duncan; Sinfield, David; Steagall, Marcus; Aguayo, Claudio
2017-01-01
This paper proposes a design based research (DBR) framework for designing mobile virtual reality learning environments. The application of the framework is illustrated by two design-based research projects that aim to develop more authentic educational experiences and learner-centred pedagogies in higher education. The projects highlight the first…
Cytosolic proteins can exploit membrane localization to trigger functional assembly
2018-01-01
Cell division, endocytosis, and viral budding would not function without the localization and assembly of protein complexes on membranes. What is poorly appreciated, however, is that by localizing to membranes, proteins search in a reduced space that effectively drives up concentration. Here we derive an accurate and practical analytical theory to quantify the significance of this dimensionality reduction in regulating protein assembly on membranes. We define a simple metric, an effective equilibrium constant, that allows for quantitative comparison of protein-protein interactions with and without membrane present. To test the importance of membrane localization for driving protein assembly, we collected the protein-protein and protein-lipid affinities, protein and lipid concentrations, and volume-to-surface-area ratios for 46 interactions between 37 membrane-targeting proteins in human and yeast cells. We find that many of the protein-protein interactions between pairs of proteins involved in clathrin-mediated endocytosis in human and yeast cells can experience enormous increases in effective protein-protein affinity (10–1000 fold) due to membrane localization. Localization of binding partners thus triggers robust protein complexation, suggesting that it can play an important role in controlling the timing of endocytic protein coat formation. Our analysis shows that several other proteins involved in membrane remodeling at various organelles have similar potential to exploit localization. The theory highlights the master role of phosphoinositide lipid concentration, the volume-to-surface-area ratio, and the ratio of 3D to 2D equilibrium constants in triggering (or preventing) constitutive assembly on membranes. Our simple model provides a novel quantitative framework for interpreting or designing in vitro experiments of protein complexation influenced by membrane binding. PMID:29505559
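To convey the dimensionality-reduction argument, here is a minimal back-of-the-envelope sketch (not the authors' exact formalism): when both binding partners are recruited to a membrane, their search volume shrinks from the cytosolic volume V to a thin shell of height h above the membrane area A, which raises the effective local concentration and hence the apparent affinity. The shell height below is an assumed lengthscale, and the cell dimensions and Kd are placeholders.

```python
def membrane_affinity_enhancement(volume_um3, area_um2, shell_height_nm=10.0):
    """Illustrative enhancement of effective protein-protein affinity when both
    partners localize to a membrane: ratio of the cytosolic search volume to the
    membrane-proximal shell volume (dimensionless)."""
    shell_volume_um3 = area_um2 * (shell_height_nm * 1e-3)  # convert nm to um
    return volume_um3 / shell_volume_um3

# Hypothetical yeast-like cell: V ~ 40 um^3, plasma-membrane area ~ 60 um^2
enhancement = membrane_affinity_enhancement(40.0, 60.0)
kd_3d_uM = 1.0                              # assumed solution-phase Kd
kd_effective_uM = kd_3d_uM / enhancement
print(f"~{enhancement:.0f}-fold increase in effective affinity "
      f"(Kd {kd_3d_uM} uM -> {kd_effective_uM:.3f} uM)")
```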
Kamio, Shingo; Komine, Futoshi; Taguchi, Kohei; Iwasaki, Taro; Blatz, Markus B; Matsumura, Hideo
2015-12-01
To evaluate the effects of framework design and layering material on the fracture strength of implant-supported zirconia-based molar crowns. Sixty-six titanium abutments (GingiHue Post) were tightened onto dental implants (Implant Lab Analog). These abutment-implant complexes were randomly divided into three groups (n = 22) according to the design of the zirconia framework (Katana), namely, uniform-thickness (UNI), anatomic (ANA), and supported anatomic (SUP) designs. The specimens in each design group were further divided into two subgroups (n = 11): zirconia-based all-ceramic restorations (ZAC group) and zirconia-based restorations with an indirect composite material (Estenia C&B) layered onto the zirconia framework (ZIC group). All crowns were cemented on implant abutments, after which the specimens were tested for fracture resistance. The data were analyzed with the Kruskal-Wallis test and the Mann-Whitney U-test with the Bonferroni correction (α = 0.05). The following mean fracture strength values (kN) were obtained in UNI design, ANA design, and SUP design, respectively: Group ZAC, 3.78, 6.01, 6.50 and Group ZIC, 3.15, 5.65, 5.83. In both the ZAC and ZIC groups, fracture strength was significantly lower for the UNI design than the other two framework designs (P = 0.001). Fracture strength did not significantly differ (P > 0.420) between identical framework designs in the ZAC and ZIC groups. A framework design with standardized layer thickness and adequate support of veneer by zirconia frameworks, as in the ANA and SUP designs, increases fracture resistance in implant-supported zirconia-based restorations under conditions of chewing attrition. Indirect composite material and porcelain perform similarly as layering materials on zirconia frameworks. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Crops in silico: A community wide multi-scale computational modeling framework of plant canopies
NASA Astrophysics Data System (ADS)
Srinivasan, V.; Christensen, A.; Borkiewic, K.; Yiwen, X.; Ellis, A.; Panneerselvam, B.; Kannan, K.; Shrivastava, S.; Cox, D.; Hart, J.; Marshall-Colon, A.; Long, S.
2016-12-01
Current crop models predict a looming gap between supply and demand for primary foodstuffs over the next 100 years. While significant yield increases were achieved in major food crops during the early years of the green revolution, the current rates of yield increases are insufficient to meet future projected food demand. Furthermore, with projected reduction in arable land, decrease in water availability, and increasing impacts of climate change on future food production, innovative technologies are required to sustainably improve crop yield. To meet these challenges, we are developing Crops in silico (Cis), a biologically informed, multi-scale, computational modeling framework that can facilitate whole plant simulations of crop systems. The Cis framework is capable of linking models of gene networks, protein synthesis, metabolic pathways, physiology, growth, and development in order to investigate crop response to different climate scenarios and resource constraints. This modeling framework will provide the mechanistic details to generate testable hypotheses toward accelerating directed breeding and engineering efforts to increase future food security. A primary objective for building such a framework is to create synergy among an inter-connected community of biologists and modelers to create a realistic virtual plant. This framework advantageously casts the detailed mechanistic understanding of individual plant processes across various scales in a common scalable framework that makes use of current advances in high performance and parallel computing. We are currently designing a user friendly interface that will make this tool equally accessible to biologists and computer scientists. Critically, this framework will provide the community with much needed tools for guiding future crop breeding and engineering, understanding the emergent implications of discoveries at the molecular level for whole plant behavior, and improved prediction of plant and ecosystem responses to the environment.
Zou, J; Saven, J G
2000-02-11
A self-consistent theory is presented that can be used to estimate the number and composition of sequences satisfying a predetermined set of constraints. The theory is formulated so as to examine the features of sequences having a particular value of Δ = E(f) − …
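The entry above is truncated in the source, but the basic idea of estimating how many sequences are compatible with a constraint can be illustrated with a simple entropy calculation (a generic sketch, not Zou and Saven's specific theory): given site-specific amino-acid probabilities consistent with the constraints, the effective number of sequences is roughly the exponential of the total sequence entropy.

```python
import math

def effective_sequence_count(site_probabilities):
    """exp(total sequence entropy) for independent sites; illustrative only.

    site_probabilities: list of dicts, one per position, mapping amino acid -> probability.
    """
    total_entropy = 0.0
    for probs in site_probabilities:
        total_entropy += -sum(p * math.log(p) for p in probs.values() if p > 0)
    return math.exp(total_entropy)

# Hypothetical 3-position example: one fixed site, one 2-way site, one 4-way site
sites = [
    {"G": 1.0},
    {"L": 0.5, "I": 0.5},
    {"D": 0.25, "E": 0.25, "N": 0.25, "Q": 0.25},
]
print(f"~{effective_sequence_count(sites):.0f} effective sequences")  # ~8
```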
Controlling Styrene Maleic Acid Lipid Particles through RAFT.
Smith, Anton A A; Autzen, Henriette E; Laursen, Tomas; Wu, Vincent; Yen, Max; Hall, Aaron; Hansen, Scott D; Cheng, Yifan; Xu, Ting
2017-11-13
The ability of styrene maleic acid copolymers to dissolve lipid membranes into nanosized lipid particles is a facile method of obtaining membrane proteins in solubilized lipid discs while conserving part of their native lipid environment. While the currently used copolymers can readily extract membrane proteins in native nanodiscs, their highly disperse composition is likely to influence the dispersity of the discs as well as the extraction efficiency. In this study, reversible addition-fragmentation chain transfer (RAFT) polymerization was used to control the polymer architecture and the dispersity of molecular weights with high precision. Based on Monte Carlo simulations of the polymerizations, the monomer composition was predicted, allowing a structure-function analysis of the polymer architecture in relation to the copolymers' ability to assemble into lipid nanoparticles. We show that a higher degree of control over the polymer architecture generates more homogeneous samples. We hypothesize that low-dispersity copolymers with controlled polymer architecture are an ideal framework for the rational design of polymers for customized isolation and characterization of integral membrane proteins in native lipid bilayer systems.
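The Monte Carlo prediction of monomer composition mentioned above can be sketched with a simple terminal-model (Mayo-Lewis) chain-growth simulation. The reactivity ratios and feed fraction below are placeholders, not the values used in the study, and the sketch assumes a constant feed (low conversion), which a full simulation would not.

```python
import random

def simulate_copolymer(n_units, f_styrene, r_sty, r_ma, seed=0):
    """Terminal-model Monte Carlo growth of one styrene/maleic-anhydride-like chain.
    Returns the sequence as a string of 'S' and 'M'. Illustrative only; feed is
    assumed constant (no drift with conversion)."""
    rng = random.Random(seed)
    chain = ["S" if rng.random() < f_styrene else "M"]
    for _ in range(n_units - 1):
        f_s, f_m = f_styrene, 1.0 - f_styrene
        if chain[-1] == "S":
            p_add_s = r_sty * f_s / (r_sty * f_s + f_m)   # S-ended chain adds S
        else:
            p_add_s = f_s / (f_s + r_ma * f_m)            # M-ended chain adds S
        chain.append("S" if rng.random() < p_add_s else "M")
    return "".join(chain)

# Hypothetical parameters: styrene-rich feed, strong alternating tendency
chain = simulate_copolymer(n_units=100, f_styrene=0.67, r_sty=0.05, r_ma=0.01)
print(chain)
print("styrene fraction in chain:", chain.count("S") / len(chain))
```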
Discovering Conformational Sub-States Relevant to Protein Function
Ramanathan, Arvind; Savol, Andrej J.; Langmead, Christopher J.; Agarwal, Pratul K.; Chennubhotla, Chakra S.
2011-01-01
Background: Internal motions enable proteins to explore a range of conformations, even in the vicinity of the native state. The role of conformational fluctuations in the designated function of a protein is widely debated. Emerging evidence suggests that sub-groups within the range of conformations (or sub-states) contain properties that may be functionally relevant. However, low populations in these sub-states and the transient nature of conformational transitions between these sub-states present significant challenges for their identification and characterization. Methods and Findings: To overcome these challenges we have developed a new computational technique, quasi-anharmonic analysis (QAA). QAA utilizes higher-order statistics of protein motions to identify sub-states in the conformational landscape. Further, the focus on anharmonicity allows identification of conformational fluctuations that enable transitions between sub-states. QAA applied to equilibrium simulations of human ubiquitin and T4 lysozyme reveals functionally relevant sub-states and protein motions involved in molecular recognition. In combination with a reaction pathway sampling method, QAA characterizes conformational sub-states associated with cis/trans peptidyl-prolyl isomerization catalyzed by the enzyme cyclophilin A. In these three proteins, QAA allows identification of conformational sub-states, with critical structural and dynamical features relevant to protein function. Conclusions: Overall, QAA provides a novel framework to intuitively understand the biophysical basis of conformational diversity and its relevance to protein function. PMID:21297978
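QAA rests on higher-order (beyond-Gaussian) statistics of atomic fluctuations; a closely related, readily available stand-in is independent component analysis of the fluctuation matrix, sketched below on synthetic data. This is an illustrative analogue, not the authors' QAA implementation.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

# Synthetic "trajectory": 2000 frames x 30 coordinates, with one strongly
# non-Gaussian (bimodal) degree of freedom mixed into all coordinates.
rng = np.random.default_rng(0)
n_frames, n_coords = 2000, 30
gaussian_part = rng.normal(size=(n_frames, n_coords))
bimodal_mode = np.where(rng.random(n_frames) < 0.5, -2.0, 2.0) + 0.3 * rng.normal(size=n_frames)
mixing = rng.normal(size=n_coords)
coords = gaussian_part + np.outer(bimodal_mode, mixing)

fluctuations = coords - coords.mean(axis=0)        # remove the mean structure
ica = FastICA(n_components=5, random_state=0)
components = ica.fit_transform(fluctuations)       # frames projected onto independent modes

# Components whose excess kurtosis departs strongly from zero are non-Gaussian
# and are the candidates for separating conformational sub-states.
print("excess kurtosis per component:", np.round(kurtosis(components, axis=0), 2))
```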
2011-12-28
…specify collaboration constraints that occur in Java and XML frameworks and that the collaboration constraints from these frameworks matter in practice. (a)… programming language boundaries, and Chapter 6 and Appendix A demonstrate that Fusion can specify constraints across both Java and XML in practice. (c)… designed JUnit, Josh Bloch designed Java Collections, and Krzysztof Cwalina designed the .NET Framework APIs. While all of these frameworks are very…
Lakatos, Eszter; Salehi-Reyhani, Ali; Barclay, Michael; Stumpf, Michael P H; Klug, David R
2017-01-01
We determine p53 protein abundances and cell-to-cell variation in two human cancer cell lines with single-cell resolution, and show that the fractional width of the distributions is the same in both cases despite a large difference in average protein copy number. We developed a computational framework to identify dominant mechanisms controlling the variation of protein abundance in a simple model of gene expression from the summary statistics of single-cell steady-state protein expression distributions. Our results, based on single-cell data analysed in a Bayesian framework, lend strong support to a model in which variation in the basal p53 protein abundance may be best explained by variations in the rate of p53 protein degradation. This is supported by measurements of the relative average levels of mRNA, which are very similar despite large variation in the level of protein.
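The kind of summary-statistic reasoning described above can be illustrated with the standard two-stage (mRNA/protein) birth-death model, whose steady-state mean and fractional width (coefficient of variation) have well-known closed forms. The rate constants below are placeholders, and this generic model is not necessarily the one fitted by the authors.

```python
def protein_mean_and_cv(k_m, g_m, k_p, g_p):
    """Steady-state protein mean and coefficient of variation for the standard
    two-stage gene-expression model (transcription k_m, mRNA decay g_m,
    translation k_p per mRNA, protein decay g_p). Illustrative only."""
    mean_mrna = k_m / g_m
    mean_protein = mean_mrna * k_p / g_p
    # intrinsic protein noise plus noise transmitted from mRNA fluctuations
    cv2 = 1.0 / mean_protein + (1.0 / mean_mrna) * g_p / (g_m + g_p)
    return mean_protein, cv2 ** 0.5

# Hypothetical rates (per hour): two cell lines differing mainly in protein degradation
for label, g_p in [("slow degradation", 0.05), ("fast degradation", 0.5)]:
    mean_p, cv = protein_mean_and_cv(k_m=2.0, g_m=6.0, k_p=60.0, g_p=g_p)
    print(f"{label}: mean copy number ~{mean_p:.0f}, fractional width (CV) ~{cv:.2f}")
```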
De Novo Design and Experimental Characterization of Ultrashort Self-Associating Peptides
Xue, Bo; Robinson, Robert C.; Hauser, Charlotte A. E.; Floudas, Christodoulos A.
2014-01-01
Self-association is a common phenomenon in biology and one that can have positive and negative impacts, from the construction of the architectural cytoskeleton of cells to the formation of fibrils in amyloid diseases. Understanding the nature and mechanisms of self-association is important for modulating these systems and creating biologically-inspired materials. Here, we present a two-stage de novo peptide design framework that can generate novel self-associating peptide systems. The first stage uses a simulated multimeric template structure as input into the optimization-based Sequence Selection to generate low potential energy sequences. The second stage is a computational validation procedure that calculates Fold Specificity and/or Approximate Association Affinity (K*association) based on metrics that we have devised for multimeric systems. This framework was applied to the design of self-associating tripeptides using the known self-associating tripeptide, Ac-IVD, as a structural template. Six computationally predicted tripeptides (Ac-LVE, Ac-YYD, Ac-LLE, Ac-YLD, Ac-MYD, Ac-VIE) were chosen for experimental validation in order to illustrate the self-association outcomes predicted by the three metrics. Self-association and electron microscopy studies revealed that Ac-LLE formed bead-like microstructures, Ac-LVE and Ac-YYD formed fibrillar aggregates, Ac-VIE and Ac-MYD formed hydrogels, and Ac-YLD crystallized under ambient conditions. An X-ray crystallographic study was carried out on a single crystal of Ac-YLD, which revealed that each molecule adopts a β-strand conformation, with the strands stacking together to form parallel β-sheets. As an additional validation of the approach, the hydrogel-forming sequences of Ac-MYD and Ac-VIE were shuffled. The shuffled sequences were computationally predicted to have lower K*association values and were experimentally verified to not form hydrogels. This illustrates the robustness of the framework in predicting self-associating tripeptides. We expect that this enhanced multimeric de novo peptide design framework will find future application in creating novel self-associating peptides based on unnatural amino acids, and inhibitor peptides of detrimental self-aggregating biological proteins. PMID:25010703
Choices, Frameworks and Refinement
NASA Technical Reports Server (NTRS)
Campbell, Roy H.; Islam, Nayeem; Johnson, Ralph; Kougiouris, Panos; Madany, Peter
1991-01-01
In this paper we present a method for designing operating systems using object-oriented frameworks. A framework can be refined into subframeworks. Constraints specify the interactions between the subframeworks. We describe how we used object-oriented frameworks to design Choices, an object-oriented operating system.
Phaser.MRage: automated molecular replacement
Bunkóczi, Gábor; Echols, Nathaniel; McCoy, Airlie J.; Oeffner, Robert D.; Adams, Paul D.; Read, Randy J.
2013-01-01
Phaser.MRage is a molecular-replacement automation framework that implements a full model-generation workflow and provides several layers of model exploration to the user. It is designed to handle a large number of models and can distribute calculations efficiently onto parallel hardware. In addition, phaser.MRage can identify correct solutions and use this information to accelerate the search. Firstly, it can quickly score all alternative models of a component once a correct solution has been found. Secondly, it can perform extensive analysis of identified solutions to find protein assemblies and can employ assembled models for subsequent searches. Thirdly, it is able to use a priori assembly information (derived from, for example, homologues) to speculatively place and score molecules, thereby customizing the search procedure to a certain class of protein molecule (for example, antibodies) and incorporating additional biological information into molecular replacement. PMID:24189240
The driving regulators of the connectivity protein network of brain malignancies
NASA Astrophysics Data System (ADS)
Tahmassebi, Amirhessam; Pinker-Domenig, Katja; Wengert, Georg; Lobbes, Marc; Stadlbauer, Andreas; Wildburger, Norelle C.; Romero, Francisco J.; Morales, Diego P.; Castillo, Encarnacion; Garcia, Antonio; Botella, Guillermo; Meyer-Bäse, Anke
2017-05-01
An important problem in modern therapeutics at the proteomic level is to identify therapeutic targets in the plenitude of high-throughput data from experiments relevant to a variety of diseases. This paper presents the application of modern control concepts, such as pinning controllability and observability, to the glioma cancer stem cell (GSC) protein graph network with known and novel associations to glioblastoma (GBM). The theoretical framework provides the minimal number of "driver nodes" and their locations needed to attain full control over the obtained graph network, so that the network's dynamics can be driven from an initial state (disease) to a desired state (non-disease). The results will provide biochemists with techniques to identify additional metabolic regions and biological pathways for complex diseases and to design and test novel therapeutic solutions.
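The notion of a minimal driver-node set can be made concrete with the maximum-matching construction from structural controllability theory, one standard way of computing driver nodes; the pinning controllability and observability analysis in the paper is more involved. The sketch below uses a small hypothetical directed interaction graph.

```python
import networkx as nx
from networkx.algorithms import bipartite

def driver_nodes(edges):
    """Driver nodes of a directed network via maximum matching
    (structural-controllability construction); illustrative only."""
    nodes = {u for e in edges for u in e}
    B = nx.Graph()
    out_side = [f"out:{u}" for u in nodes]
    in_side = [f"in:{u}" for u in nodes]
    B.add_nodes_from(out_side, bipartite=0)
    B.add_nodes_from(in_side, bipartite=1)
    B.add_edges_from((f"out:{u}", f"in:{v}") for u, v in edges)
    matching = bipartite.maximum_matching(B, top_nodes=out_side)
    matched_targets = {k for k in matching if k.startswith("in:")}
    drivers = {u for u in nodes if f"in:{u}" not in matched_targets}
    # A perfectly matched network still needs at least one driver node.
    return drivers if drivers else {sorted(nodes)[0]}

# Hypothetical signaling edges (regulator -> target)
edges = [("EGFR", "AKT1"), ("AKT1", "MTOR"), ("EGFR", "STAT3"), ("STAT3", "MYC")]
print("driver nodes:", driver_nodes(edges))
```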
Shears, Rebecca K; Bancroft, Allison J; Sharpe, Catherine; Grencis, Richard K; Thornton, David J
2018-03-14
Trichuris trichiura (whipworm) is one of the four major soil-transmitted helminth infections of man, affecting an estimated 465 million people worldwide. An effective vaccine that induces long-lasting protective immunity against T. trichiura would alleviate the morbidity associated with this intestinal-dwelling parasite; however, the lack of known host-protective antigens has hindered vaccine development. Here, we show that vaccination with T. muris excretory/secretory (ES) products stimulates long-lasting protection against chronic infection in male C57BL/6 mice. We also provide a framework for the identification of immunogenic proteins within T. muris ES, and identify eleven candidates with direct homologues in T. trichiura that warrant further study. Given the extensive homology between T. muris and T. trichiura at both the genomic and transcriptomic levels, this work has the potential to advance vaccine design for T. trichiura.
A Proposed Framework for Collaborative Design in a Virtual Environment
NASA Astrophysics Data System (ADS)
Breland, Jason S.; Shiratuddin, Mohd Fairuz
This paper describes a proposed framework for collaborative design in a virtual environment. The framework consists of components that support true collaborative design in a real-time 3D virtual environment. In support of the proposed framework, a prototype application is being developed. The authors envision that the framework will have, but not be limited to, the following features: (1) real-time manipulation of 3D objects across the network, (2) support for multi-designer activities and information access, and (3) co-existence within the same virtual space, etc. This paper also discusses proposed testing to determine the possible benefits of collaborative design in a virtual environment over other forms of collaboration, and reports results from a pilot test.
Topology-function conservation in protein-protein interaction networks.
Davis, Darren; Yaveroğlu, Ömer Nebil; Malod-Dognin, Noël; Stojmirovic, Aleksandar; Pržulj, Nataša
2015-05-15
Proteins underlie the functioning of a cell, and the wiring of proteins in a protein-protein interaction network (PIN) relates to their biological functions. Proteins with similar wiring in the PIN (topology around them) have been shown to have similar functions. This property has been successfully exploited for predicting protein functions. Topological similarity is also used to guide network alignment algorithms that find similarly wired proteins between PINs of different species; these similarities are used to transfer annotation across PINs, e.g. from model organisms to human. To refine these functional predictions and annotation transfers, we need to gain insight into the variability of the topology-function relationships. For example, a function may be significantly associated with specific topologies, while another function may be weakly associated with several different topologies. Also, the topology-function relationships may differ between different species. To improve our understanding of topology-function relationships and of their conservation among species, we develop a statistical framework that is built upon canonical correlation analysis. Using the graphlet degrees to represent the wiring around proteins in PINs and gene ontology (GO) annotations to describe their functions, our framework: (i) characterizes statistically significant topology-function relationships in a given species, and (ii) uncovers the functions that have conserved topology in PINs of different species, which we term topologically orthologous functions. We apply our framework to PINs of yeast and human, identifying seven biological process and two cellular component GO terms to be topologically orthologous for the two organisms. © The Author 2015. Published by Oxford University Press.
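The core statistical step, relating a protein's local wiring (its graphlet degree vector) to its functional annotations, can be sketched with a canonical correlation analysis on toy matrices. The matrices below are random placeholders standing in for graphlet-degree and GO-annotation data, not the study's inputs.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n_proteins = 200

# Placeholder data: 15 graphlet-degree features and 10 GO-term indicator columns,
# with a planted linear relationship so the first canonical pair is informative.
topology = rng.normal(size=(n_proteins, 15))
signal = topology[:, :3] @ rng.normal(size=(3, 10))
function = (signal + rng.normal(scale=2.0, size=(n_proteins, 10)) > 0).astype(float)

cca = CCA(n_components=2)
topo_scores, func_scores = cca.fit_transform(topology, function)

# Canonical correlations: strength of each topology-function relationship
for i in range(2):
    r = np.corrcoef(topo_scores[:, i], func_scores[:, i])[0, 1]
    print(f"canonical pair {i + 1}: correlation = {r:.2f}")
```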
Modelling proteins’ hidden conformations to predict antibiotic resistance
Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.
2016-01-01
TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM’s specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models’ prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design. PMID:27708258
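One plausible reading of the "Boltzmann docking" idea, weighting per-state docking scores by the equilibrium populations of the MSM states, reduces to a simple population-weighted average. The states, populations, and scores below are hypothetical, and the authors' implementation may differ in detail.

```python
def boltzmann_docking_score(populations, docking_scores):
    """Population-weighted docking score over conformational states (illustrative).

    populations: equilibrium probabilities of each MSM state (must sum to 1)
    docking_scores: docking score of the ligand against each state (lower = better)
    """
    assert abs(sum(populations) - 1.0) < 1e-6
    return sum(p * s for p, s in zip(populations, docking_scores))

# Hypothetical TEM-like variant: a minor "hidden" state binds cefotaxime much better
populations    = [0.80, 0.15, 0.05]     # dominant, intermediate, hidden state
docking_scores = [-5.0, -6.5, -9.0]     # kcal/mol, per state

print(f"weighted score: {boltzmann_docking_score(populations, docking_scores):.2f} kcal/mol")
```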
Applying the Ottawa Charter to inform health promotion programme design.
Fry, Denise; Zask, Avigdor
2017-10-01
There is evidence of a correlation between adoption of the Ottawa Charter's framework of five action areas and health promotion programme effectiveness, but the Charter's framework has not been as fully implemented as hoped, nor is it generally used by formal programme design models. In response, we aimed to translate the Charter's framework into a method to inform programme design. Our resulting design process uses detailed definitions of the Charter's action areas and evidence of predicted effectiveness to prompt greater consideration and use of the Charter's framework. We piloted the process by applying it to the design of four programmes of the Healthy Children's Initiative in New South Wales, Australia; refined the criteria via consensus; and made consensus decisions on the extent to which programme designs reflected the Charter's framework. The design process has broad potential applicability to health promotion programmes, facilitating greater use of the Ottawa Charter framework, which evidence indicates can increase programme effectiveness. © The Author 2016. Published by Oxford University Press. All rights reserved.
Tiller, Thomas; Schuster, Ingrid; Deppe, Dorothée; Siegers, Katja; Strohner, Ralf; Herrmann, Tanja; Berenguer, Marion; Poujol, Dominique; Stehle, Jennifer; Stark, Yvonne; Heßling, Martin; Daubert, Daniela; Felderer, Karin; Kaden, Stefan; Kölln, Johanna; Enzelberger, Markus; Urlinger, Stefanie
2013-01-01
This report describes the design, generation and testing of Ylanthia, a fully synthetic human Fab antibody library with 1.3E+11 clones. Ylanthia comprises 36 fixed immunoglobulin (Ig) variable heavy (VH)/variable light (VL) chain pairs, which cover a broad range of canonical complementarity-determining region (CDR) structures. The variable Ig heavy and Ig light (VH/VL) chain pairs were selected for biophysical characteristics favorable to manufacturing and development. The selection process included multiple parameters, e.g., assessment of protein expression yield, thermal stability and aggregation propensity in fragment antigen binding (Fab) and IgG1 formats, and relative Fab display rate on phage. The framework regions are fixed and the diversified CDRs were designed based on a systematic analysis of a large set of rearranged human antibody sequences. Care was taken to minimize the occurrence of potential posttranslational modification sites within the CDRs. Phage selection was performed against various antigens and unique antibodies with excellent biophysical properties were isolated. Our results confirm that quality can be built into an antibody library by prudent selection of unmodified, fully human VH/VL pairs as scaffolds. PMID:23571156
Evidence-Based mHealth Chronic Disease Mobile App Intervention Design: Development of a Framework.
Wilhide Iii, Calvin C; Peeples, Malinda M; Anthony Kouyaté, Robin C
2016-02-16
Mobile technology offers new capabilities that can help to drive important aspects of chronic disease management at both an individual and population level, including the ability to deliver real-time interventions that can be connected to a health care team. A framework that supports both development and evaluation is needed to understand the aspects of mHealth that work for specific diseases, populations, and in the achievement of specific outcomes in real-world settings. This framework should incorporate design structure and process, which are important to translate clinical and behavioral evidence, user interface, experience design and technical capabilities into scalable, replicable, and evidence-based mobile health (mHealth) solutions to drive outcomes. The purpose of this paper is to discuss the identification and development of an app intervention design framework, and its subsequent refinement through development of various types of mHealth apps for chronic disease. The process of developing the framework was conducted between June 2012 and June 2014. Informed by clinical guidelines, standards of care, clinical practice recommendations, evidence-based research, best practices, and translated by subject matter experts, a framework for mobile app design was developed and the refinement of the framework across seven chronic disease states and three different product types is described. The result was the development of the Chronic Disease mHealth App Intervention Design Framework. This framework allowed for the integration of clinical and behavioral evidence for intervention and feature design. The application to different diseases and implementation models guided the design of mHealth solutions for varying levels of chronic disease management. The framework and its design elements enable replicable product development for mHealth apps and may provide a foundation for the digital health industry to systematically expand mobile health interventions and validate their effectiveness across multiple implementation settings and chronic diseases.
The Effect of Framework Design on Stress Distribution in Implant-Supported FPDs: A 3-D FEM Study
Eraslan, Oguz; Inan, Ozgur; Secilmis, Asli
2010-01-01
Objectives: The biomechanical behavior of the superstructure plays an important role in the functional longevity of dental implants. However, information about the influence of framework design on stresses transmitted to the implants and supporting tissues is limited. The purpose of this study was to evaluate the effects of framework designs on stress distribution at the supporting bone and supporting implants. Methods: In this study, the three-dimensional (3D) finite element stress analysis method was used. Three types of 3D mathematical models simulating three different framework designs for implant-supported 3-unit posterior fixed partial dentures were prepared with supporting structures. Convex (1), concave (2), and conventional (3) pontic framework designs were simulated. A 300-N static vertical occlusal load was applied on the node at the center of occlusal surface of the pontic to calculate the stress distributions. As a second condition, frameworks were directly loaded to evaluate the effect of the framework design clearly. The Solidworks/Cosmosworks structural analysis programs were used for finite element modeling/analysis. Results: The analysis of the von Mises stress values revealed that maximum stress concentrations were located at the loading areas for all models. The pontic side marginal edges of restorations and the necks of implants were other stress concentration regions. There was no clear difference among models when the restorations were loaded at occlusal surfaces. When the veneering porcelain was removed, and load was applied directly to the framework, there was a clear increase in stress concentration with a concave design on supporting implants and bone structure. Conclusions: The present study showed that the use of a concave design in the pontic frameworks of fixed partial dentures increases the von Mises stress levels on implant abutments and supporting bone structure. However, the veneering porcelain element reduces the effect of the framework and compensates for design weaknesses. PMID:20922156
2014-01-01
Background: The complement protein C5a acts primarily by binding and activating the G-protein coupled C5a receptor C5aR (CD88), and is implicated in many inflammatory diseases. The cyclic hexapeptide PMX53 (sequence Ace-Phe-[Orn-Pro-dCha-Trp-Arg]) is a full C5aR antagonist of nanomolar potency, and is widely used to study C5aR function in disease. Results: We construct for the first time molecular models for the C5aR:PMX53 complex without the a priori use of experimental constraints, via a computational framework of molecular dynamics (MD) simulations, docking, conformational clustering and free energy filtering. The models agree with experimental data, and are used to propose important intermolecular interactions contributing to binding, and to develop a hypothesis for the mechanism of PMX53 antagonism. Conclusion: This work forms the basis for the design of improved C5aR antagonists, as well as for atomic-detail mechanistic studies of complement activation and function. Our computational framework can be widely used to develop GPCR-ligand structural models in membrane environments, peptidomimetics and other chemical compounds with potential clinical use. PMID:25170421
Design and analysis of quantitative differential proteomics investigations using LC-MS technology.
Bukhman, Yury V; Dharsee, Moyez; Ewing, Rob; Chu, Peter; Topaloglou, Thodoros; Le Bihan, Thierry; Goh, Theo; Duewel, Henry; Stewart, Ian I; Wisniewski, Jacek R; Ng, Nancy F
2008-02-01
Liquid chromatography-mass spectrometry (LC-MS)-based proteomics is becoming an increasingly important tool in characterizing the abundance of proteins in biological samples of various types and across conditions. Effects of disease or drug treatments on protein abundance are of particular interest for the characterization of biological processes and the identification of biomarkers. Although state-of-the-art instrumentation is available to make high-quality measurements and commercially available software is available to process the data, the complexity of the technology and data presents challenges for bioinformaticians and statisticians. Here, we describe a pipeline for the analysis of quantitative LC-MS data. Key components of this pipeline include experimental design (sample pooling, blocking, and randomization) as well as deconvolution and alignment of mass chromatograms to generate a matrix of molecular abundance profiles. An important challenge in LC-MS-based quantitation is to be able to accurately identify and assign abundance measurements to members of protein families. To address this issue, we implement a novel statistical method for inferring the relative abundance of related members of protein families from tryptic peptide intensities. This pipeline has been used to analyze quantitative LC-MS data from multiple biomarker discovery projects. We illustrate our pipeline here with examples from two of these studies, and show that the pipeline constitutes a complete workable framework for LC-MS-based differential quantitation. Supplementary material is available at http://iec01.mie.utoronto.ca/~thodoros/Bukhman/.
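The protein-family inference step, assigning abundance to closely related proteins that share tryptic peptides, can be posed as a small non-negative least-squares problem, as sketched below with a hypothetical membership matrix. This is a generic formulation, not the authors' specific statistical model.

```python
import numpy as np
from scipy.optimize import nnls

# Rows: observed tryptic peptides; columns: candidate protein family members.
# A[i, j] = 1 if peptide i is contained in protein j (hypothetical membership).
A = np.array([
    [1, 0],   # peptide unique to protein 1
    [0, 1],   # peptide unique to protein 2
    [1, 1],   # peptide shared by both
    [1, 1],   # another shared peptide
], dtype=float)

peptide_intensities = np.array([120.0, 40.0, 155.0, 165.0])

protein_abundance, residual = nnls(A, peptide_intensities)
print("inferred relative abundances:", np.round(protein_abundance, 1))
print("fit residual:", round(residual, 1))
```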
Curto, Lucrecia María; Caramelo, Julio Javier; Franchini, Gisela Raquel; Delfino, José María
2009-01-01
The design of β-barrels has always been a formidable challenge for de novo protein design. For instance, a persistent problem is posed by the intrinsic tendency to associate given by free edges. From the opposite standpoint provided by the redesign of natural motifs, we believe that the intestinal fatty acid binding protein (IFABP) framework allows room for intervention, giving rise to abridged forms from which lessons on β-barrel architecture and stability could be learned. In this context, Δ98Δ (encompassing residues 29–126 of IFABP) emerges as a monomeric variant that folds properly, retaining functional activity, despite lacking extensive stretches involved in the closure of the β-barrel. Spectroscopic probes (fluorescence and circular dichroism) support the existence of a form preserving the essential determinants of the parent structure, albeit endowed with enhanced flexibility. Chemical and physical perturbants reveal cooperative unfolding transitions, with evidence of significant population of intermediate species in equilibrium, structurally akin to those transiently observed in IFABP. The recognition by the natural ligand oleic acid exerts a mild stabilizing effect, being of a greater magnitude than that found for IFABP. In summary, Δ98Δ adopts a monomeric state with a compact core and a loose periphery, thus pointing to the nonintuitive notion that the integrity of the β-barrel can indeed be compromised with no consequence on the ability to attain a native-like and functional fold. PMID:19309727
Review of Literature on Environmentally Conscious Design.
1995-12-01
…framework for a demonstration project for a business phone (Keoleian, et al.). Pitney Bowes has developed a framework for implementing a Design for… …developed for the U.S. EPA by the principal author and the University of Michigan, was used as a framework for this demonstration project for an… AT&T business phone. The purpose of the project was to explore the feasibility and applicability of the life cycle design framework…
Parallels in Computer-Aided Design Framework and Software Development Environment Efforts.
1992-05-01
…design kits, and tool and design management frameworks. Also, books about software engineering environments [Long 91] and electronic design… tool integration [Zarrella 90], and agreement upon a universal design automation framework, such as the CAD Framework Initiative (CFI) [Malasky 91]… …ments: identification, control, status accounting, and audit and review. The paper by Dart extracts 15 CM concepts from existing SDEs and tools…
Vulnerability detection using data-flow graphs and SMT solvers
2016-10-31
…concerns. The framework is modular and pipelined to allow scalable analysis on distributed systems. Our vulnerability detection framework employs machine… Design: We designed the framework to be modular to enable flexible reuse and extendibility. In its current form, our framework performs the following…
Sockolow, Paulina; Joppa, Meredith; Zhu, Jichen
2018-01-01
Adolescent sexual risk behavior (SRB), a major public health problem, affects urban Black adolescent girls, increasing their health disparities and risks for sexually transmitted infections. Collaborating with these adolescents, we designed a game for smartphones that incorporates elements of trauma-informed care and social cognitive theory to reduce SRB. Game researchers promote the use of a comprehensive, multipurpose framework for the development and evaluation of games for health applications. Our first step in game development was framework selection and identification of measurable health outcomes. A literature search identified two health game frameworks, both incorporating pedagogical theory, learning theory, and gaming requirements. Arnab used the IM + LM-GM framework to develop and implement a game in a school intervention program. Yusoff's framework was developed for use during game design. We investigated concordance and discordance between our SRB game design characteristics and each framework's components. Findings indicated that Arnab's framework was sufficiently comprehensive to guide development of our game and selection of outcome measures.
Bernini, Andrea; Henrici De Angelis, Lucia; Morandi, Edoardo; Spiga, Ottavia; Santucci, Annalisa; Assfalg, Michael; Molinari, Henriette; Pillozzi, Serena; Arcangeli, Annarosa; Niccolai, Neri
2014-03-01
Hotspot delineation on protein surfaces represents a fundamental step for targeting protein-protein interfaces. Disruptors of protein-protein interactions can be designed provided that the steric features of binding pockets, including the transient ones, can be defined. Molecular Dynamics (MD) simulations have been used as a reliable framework for identifying transient pocket openings on the protein surface. Accessible surface area and intramolecular H-bond involvement of protein backbone amides are proposed as descriptors for characterizing binding pocket occurrence and evolution along MD trajectories. TEMPOL-induced paramagnetic perturbations on (1)H-(15)N HSQC signals of protein backbone amides have been analyzed as a fragment-based search for surface hotspots, in order to validate MD-predicted pockets. This procedure has been applied to CXCL12, a small chemokine responsible for tumor progression and proliferation. From combined analysis of MD data and paramagnetic profiles, two CXCL12 sites suitable for the binding of small molecules were identified. One of these sites is the already well characterized CXCL12 region involved in the binding to the CXCR4 receptor. The other one is a transient pocket predicted by Molecular Dynamics simulations, which could not be observed from static analysis of CXCL12 PDB structures. The present results indicate how TEMPOL, instrumental in identifying this transient pocket, can be a powerful tool to delineate minor conformations which can be highly relevant in dynamic discovery of antitumoral drugs. Copyright © 2013 Elsevier B.V. All rights reserved.
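The two descriptors used here, per-residue solvent accessibility and backbone hydrogen bonding along the trajectory, map directly onto standard trajectory-analysis calls. The sketch below uses MDTraj on hypothetical file names, and the exposure threshold is an assumption for illustration, not the authors' exact criterion.

```python
import mdtraj as md
import numpy as np

# Hypothetical input files
traj = md.load("cxcl12_md.xtc", top="cxcl12.pdb")

# Per-frame, per-residue solvent-accessible surface area (nm^2)
sasa = md.shrake_rupley(traj, mode="residue")

# Flag residues whose SASA transiently rises well above their own median:
# a crude indicator of transient pocket opening (0.3 nm^2 threshold is an assumption).
median_sasa = np.median(sasa, axis=0)
opening = sasa > (median_sasa + 0.3)
fraction_open = opening.mean(axis=0)

# Hydrogen bonds that persist in at least 10% of frames (Baker-Hubbard criterion)
hbonds = md.baker_hubbard(traj, freq=0.1)

for idx in np.argsort(fraction_open)[::-1][:5]:
    print(f"{traj.topology.residue(idx)}: open in {fraction_open[idx]:.0%} of frames")
print(f"{len(hbonds)} persistent hydrogen bonds detected")
```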
Combs, Steven A; Mueller, Benjamin K; Meiler, Jens
2018-05-29
Partial covalent interactions (PCIs) in proteins, which include hydrogen bonds, salt bridges, cation-π, and π-π interactions, contribute to thermodynamic stability and facilitate interactions with other biomolecules. Several score functions have been developed within the Rosetta protein modeling framework that identify and evaluate these PCIs through analyzing the geometry between participating atoms. However, we hypothesize that PCIs can be unified through a simplified electron orbital representation. To test this hypothesis, we have introduced orbital based chemical descriptors for PCIs into Rosetta, called the PCI score function. Optimal geometries for the PCIs are derived from a statistical analysis of high-quality protein structures obtained from the Protein Data Bank (PDB), and the relative orientation of electron deficient hydrogen atoms and electron-rich lone pair or π orbitals are evaluated. We demonstrate that nativelike geometries of hydrogen bonds, salt bridges, cation-π, and π-π interactions are recapitulated during minimization of protein conformation. The packing density of tested protein structures increased from the standard score function from 0.62 to 0.64, closer to the native value of 0.70. Overall, rotamer recovery improved when using the PCI score function (75%) as compared to the standard Rosetta score function (74%). The PCI score function represents an improvement over the standard Rosetta score function for protein model scoring; in addition, it provides a platform for future directions in the analysis of small molecule to protein interactions, which depend on partial covalent interactions.
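Score terms of this kind reduce to geometric measurements between the participating atoms. As a simplified illustration (not the Rosetta PCI implementation, and with placeholder parameters), the sketch below scores a hydrogen bond from donor-hydrogen-acceptor geometry using a distance falloff and an angular preference for linearity.

```python
import numpy as np

def hydrogen_bond_score(donor, hydrogen, acceptor,
                        ideal_dist=1.9, dist_sigma=0.3, min_angle_deg=120.0):
    """Toy geometric hydrogen-bond score in [0, 1]; higher = more ideal geometry.
    Arguments are 3D coordinates in Angstroms. Parameters are illustrative."""
    donor, hydrogen, acceptor = map(np.asarray, (donor, hydrogen, acceptor))
    h_to_acc = acceptor - hydrogen
    h_to_don = donor - hydrogen
    dist = np.linalg.norm(h_to_acc)
    cos_angle = np.dot(h_to_don, h_to_acc) / (np.linalg.norm(h_to_don) * dist)
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    if angle < min_angle_deg:                      # too bent to count as an H-bond
        return 0.0
    dist_term = np.exp(-((dist - ideal_dist) ** 2) / (2 * dist_sigma ** 2))
    angle_term = (angle - min_angle_deg) / (180.0 - min_angle_deg)
    return float(dist_term * angle_term)

# Near-ideal geometry: ~1.9 A H...acceptor distance, ~180 degree D-H...A angle
print(hydrogen_bond_score(donor=(0.0, 0.0, 0.0),
                          hydrogen=(1.0, 0.0, 0.0),
                          acceptor=(2.9, 0.0, 0.0)))
```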
A framework for classification of prokaryotic protein kinases.
Tyagi, Nidhi; Anamika, Krishanpal; Srinivasan, Narayanaswamy
2010-05-26
The overwhelming majority of the Serine/Threonine protein kinases identified by gleaning archaeal and eubacterial genomes could not be classified into any of the well-known Hanks and Hunter subfamilies of protein kinases. This is because the Hanks and Hunter classification scheme was developed for eukaryotic protein kinases, which are highly divergent from their prokaryotic homologues. A large dataset of prokaryotic Serine/Threonine protein kinases recognized from prokaryotic genomes has been used to develop a classification framework for prokaryotic Ser/Thr protein kinases. We used traditional sequence alignment and phylogenetic approaches and clustered the prokaryotic kinases into 72 subfamilies with at least 4 members in each. Such a clustering enables classification of prokaryotic Ser/Thr kinases and can be used as a framework to classify newly identified prokaryotic Ser/Thr kinases. After a series of searches in a comprehensive sequence database, we recognized that 38 subfamilies of prokaryotic protein kinases are associated with a specific taxonomic level. For example, 4, 6 and 3 subfamilies have been identified that are currently specific to the phyla proteobacteria, cyanobacteria and actinobacteria, respectively. Similarly, subfamilies specific to an order, sub-order, class, family and genus have also been identified. In addition, we also identify organism-diverse subfamilies, whose members come from organisms at different taxonomic levels, such as archaea, bacteria, eukaryotes and viruses. Interestingly, the occurrence of several taxonomic-level-specific subfamilies of prokaryotic kinases contrasts with the classification of eukaryotic protein kinases, in which most of the popular subfamilies occur diversely across several eukaryotes. Many prokaryotic Ser/Thr kinases exhibit a wide variety of modular organization, which indicates a degree of complexity and protein-protein interaction in the signaling pathways of these microbes.
A Design Framework for Online Teacher Professional Development Communities
ERIC Educational Resources Information Center
Liu, Katrina Yan
2012-01-01
This paper provides a design framework for building online teacher professional development communities for preservice and inservice teachers. The framework is based on a comprehensive literature review on the latest technology and epistemology of online community and teacher professional development, comprising four major design factors and three…
Computer-Aided Design of RNA Origami Structures.
Sparvath, Steffen L; Geary, Cody W; Andersen, Ebbe S
2017-01-01
RNA nanostructures can be used as scaffolds to organize, combine, and control molecular functionalities, with great potential for applications in nanomedicine and synthetic biology. The single-stranded RNA origami method allows RNA nanostructures to be folded as they are transcribed by the RNA polymerase. RNA origami structures provide a stable framework that can be decorated with functional RNA elements such as riboswitches, ribozymes, interaction sites, and aptamers for binding small molecules or protein targets. The rich library of RNA structural and functional elements combined with the possibility to attach proteins through aptamer-based binding creates virtually limitless possibilities for constructing advanced RNA-based nanodevices. In this chapter we provide a detailed protocol for the single-stranded RNA origami design method using a simple 2-helix tall structure as an example. The first step involves 3D modeling of a double-crossover between two RNA double helices, followed by decoration with tertiary motifs. The second step deals with the construction of a 2D blueprint describing the secondary structure and sequence constraints that serves as the input for computer programs. In the third step, computer programs are used to design RNA sequences that are compatible with the structure, and the resulting outputs are evaluated and converted into DNA sequences to order.
Automated Design Framework for Synthetic Biology Exploiting Pareto Optimality.
Otero-Muras, Irene; Banga, Julio R
2017-07-21
In this work we consider Pareto optimality for automated design in synthetic biology. We present a generalized framework based on a mixed-integer dynamic optimization formulation that, given design specifications, allows the computation of Pareto optimal sets of designs, that is, the set of best trade-offs for the metrics of interest. We show how this framework can be used for (i) forward design, that is, finding the Pareto optimal set of synthetic designs for implementation, and (ii) reverse design, that is, analyzing and inferring motifs and/or design principles of gene regulatory networks from the Pareto set of optimal circuits. Finally, we illustrate the capabilities and performance of this framework considering four case studies. In the first problem we consider the forward design of an oscillator. In the remaining problems, we illustrate how to apply the reverse design approach to find motifs for stripe formation, rapid adaptation, and fold-change detection, respectively.
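The core operation in any such Pareto-based workflow is extracting the non-dominated set of designs. The short Python sketch below shows that step in isolation, assuming all metrics are to be minimized; it stands in for, and is far simpler than, the mixed-integer dynamic optimization formulation used by the authors.

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors.

    All objectives are assumed to be minimized.  A point p is dominated if
    some other point q is no worse in every objective and differs from p.
    """
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

if __name__ == "__main__":
    # Hypothetical circuit designs scored on (cost, response time).
    designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 3.5), (4.0, 1.0), (2.5, 2.5)]
    # The dominated design (3.0, 3.5) is filtered out of the returned set.
    print(pareto_front(designs))
```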
Designing effective human-automation-plant interfaces: a control-theoretic perspective.
Jamieson, Greg A; Vicente, Kim J
2005-01-01
In this article, we propose the application of a control-theoretic framework to human-automation interaction. The framework consists of a set of conceptual distinctions that should be respected in automation research and design. We demonstrate how existing automation interface designs in some nuclear plants fail to recognize these distinctions. We further show the value of the approach by applying it to modes of automation. The design guidelines that have been proposed in the automation literature are evaluated from the perspective of the framework. This comparison shows that the framework reveals insights that are frequently overlooked in this literature. A new set of design guidelines is introduced that builds upon the contributions of previous research and draws complementary insights from the control-theoretic framework. The result is a coherent and systematic approach to the design of human-automation-plant interfaces that will yield more concrete design criteria and a broader set of design tools. Applications of this research include improving the effectiveness of human-automation interaction design and the relevance of human-automation interaction research.
ELPSA as a Lesson Design Framework
ERIC Educational Resources Information Center
Lowrie, Tom; Patahuddin, Sitti Maesuri
2015-01-01
This paper offers a framework for a mathematics lesson design that is consistent with the way we learn about, and discover, most things in life. In addition, the framework provides a structure for identifying how mathematical concepts and understanding are acquired and developed. This framework is called ELPSA and represents five learning…
Lin, Guo; Gao, Chaohong; Zheng, Qiong; Lei, Zhixian; Geng, Huijuan; Lin, Zian; Yang, Huanghao; Cai, Zongwei
2017-03-28
Core-shell structured magnetic covalent organic frameworks (Fe3O4@COFs) were synthesized via a facile approach at room temperature. Combining the advantages of high porosity, magnetic responsiveness, chemical stability and selectivity, Fe3O4@COFs can serve as an ideal adsorbent for the highly efficient enrichment of peptides and the simultaneous exclusion of proteins from complex biological samples.
Gouvea, Julia Svoboda; Sawtelle, Vashti; Geller, Benjamin D.; Turpen, Chandra
2013-01-01
The national conversation around undergraduate science instruction is calling for increased interdisciplinarity. As these calls increase, there is a need to consider the learning objectives of interdisciplinary science courses and how to design curricula to support those objectives. We present a framework that can help support interdisciplinary design research. We developed this framework in an introductory physics for life sciences majors (IPLS) course for which we designed a series of interdisciplinary tasks that bridge physics and biology. We illustrate how this framework can be used to describe the variation in the nature and degree of interdisciplinary interaction in tasks, to aid in redesigning tasks to better align with interdisciplinary learning objectives, and finally, to articulate design conjectures that posit how different characteristics of these tasks might support or impede interdisciplinary learning objectives. This framework will be useful for both curriculum designers and education researchers seeking to understand, in more concrete terms, what interdisciplinary learning means and how integrated science curricula can be designed to support interdisciplinary learning objectives. PMID:23737627
50 CFR 86.102 - How did the Service design the National Framework?
Code of Federal Regulations, 2013 CFR
2013-10-01
... design the National Framework? The Framework divides the survey into two components: boater survey, and boat access provider survey. (a) The purpose of the boater survey component is to identify boat user...
50 CFR 86.102 - How did the Service design the National Framework?
Code of Federal Regulations, 2011 CFR
2011-10-01
... design the National Framework? The Framework divides the survey into two components: boater survey, and boat access provider survey. (a) The purpose of the boater survey component is to identify boat user...
50 CFR 86.102 - How did the Service design the National Framework?
Code of Federal Regulations, 2012 CFR
2012-10-01
... design the National Framework? The Framework divides the survey into two components: boater survey, and boat access provider survey. (a) The purpose of the boater survey component is to identify boat user...
50 CFR 86.102 - How did the Service design the National Framework?
Code of Federal Regulations, 2010 CFR
2010-10-01
... design the National Framework? The Framework divides the survey into two components: boater survey, and boat access provider survey. (a) The purpose of the boater survey component is to identify boat user...
50 CFR 86.102 - How did the Service design the National Framework?
Code of Federal Regulations, 2014 CFR
2014-10-01
... design the National Framework? The Framework divides the survey into two components: boater survey, and boat access provider survey. (a) The purpose of the boater survey component is to identify boat user...
Building a Framework for Engineering Design Experiences in High School
ERIC Educational Resources Information Center
Denson, Cameron D.; Lammi, Matthew
2014-01-01
In this article, Denson and Lammi put forth a conceptual framework that will help promote the successful infusion of engineering design experiences into high school settings. When considering a conceptual framework of engineering design in high school settings, it is important to consider the complex issue at hand. For the purposes of this…
Slater, Garett P.; Rajamohan, Arun; Yocum, George D.; Greenlee, Kendra J.; Bowsher, Julia H.
2017-01-01
In holometabolous insects, larval nutrition affects adult body size, a life history trait with a profound influence on performance and fitness. Individual nutritional components of larval diets are often complex and may interact with one another, necessitating the use of a geometric framework for elucidating nutritional effects. In the honey bee, Apis mellifera, nurse bees provision food to developing larvae, directly moderating growth rates and caste development. However, the eusocial nature of honey bees makes nutritional studies challenging, because diet components cannot be systematically manipulated in the hive. Using in vitro rearing, we investigated the roles and interactions between carbohydrate and protein content on larval survival, growth, and development in A. mellifera. We applied a geometric framework to determine how these two nutritional components interact across nine artificial diets. Honey bees successfully completed larval development under a wide range of protein and carbohydrate contents, with the medium protein (∼5%) diet having the highest survival. Protein and carbohydrate both had significant and non-linear effects on growth rate, with the highest growth rates observed on a medium-protein, low-carbohydrate diet. Diet composition did not have a statistically significant effect on development time. These results confirm previous findings that protein and carbohydrate content affect the growth of A. mellifera larvae. However, this study identified an interaction between carbohydrate and protein content that indicates a low-protein, high-carbohydrate diet has a negative effect on larval growth and survival. These results imply that worker recruitment in the hive would decline under low protein conditions, even when nectar abundance or honey stores are sufficient. PMID:28396492
ERIC Educational Resources Information Center
Games, Ivan Alex
2008-01-01
This article discusses a framework for the analysis and assessment of twenty-first-century language and literacy practices in game and design-based contexts. It presents the framework in the context of game design within "Gamestar Mechanic", an innovative game-based learning environment where children learn the Discourse of game design. It…
Ergonomics action research II: a framework for integrating HF into work system design.
Neumann, W P; Village, J
2012-01-01
This paper presents a conceptual framework that can support efforts to integrate human factors (HF) into the work system design process, where improved and cost-effective application of HF is possible. The framework advocates strategies of broad stakeholder participation, linking of performance and health goals, and process-focused change tools that can help practitioners engage in improvements to embed HF into a firm's work system design process. Recommended tools include business process mapping of the design process, implementing design criteria, using cognitive mapping to connect to managers' strategic goals, tactical use of training and adopting virtual HF (VHF) tools to support the integration effort. Consistent with organisational change research, the framework provides guidance but does not suggest a strict set of steps. This allows more adaptability for the practitioner who must navigate within a particular organisational context to secure support for embedding HF into the design process for improved operator wellbeing and system performance. There has been little scientific literature about how a practitioner might integrate HF into a company's work system design process. This paper proposes a framework for this effort by presenting a coherent conceptual framework, process tools, design tools and procedural advice that can be adapted for a target organisation.
Biana: a software framework for compiling biological interactions and analyzing networks.
Garcia-Garcia, Javier; Guney, Emre; Aragues, Ramon; Planas-Iglesias, Joan; Oliva, Baldo
2010-01-27
The analysis and usage of biological data is hindered by the spread of information across multiple repositories and the difficulties posed by different nomenclature systems and storage formats. In particular, there is an important need for data unification in the study and use of protein-protein interactions. Without good integration strategies, it is difficult to analyze the whole set of available data and its properties. We introduce BIANA (Biologic Interactions and Network Analysis), a tool for biological information integration and network management. BIANA is a Python framework designed to achieve two major goals: i) the integration of multiple sources of biological information, including biological entities and their relationships, and ii) the management of biological information as a network where entities are nodes and relationships are edges. Moreover, BIANA uses properties of proteins and genes to infer latent biomolecular relationships by transferring edges to entities sharing similar properties. BIANA is also provided as a plugin for Cytoscape, which allows users to visualize and interactively manage the data. A web interface to BIANA providing basic functionalities is also available. The software can be downloaded under GNU GPL license from http://sbi.imim.es/web/BIANA.php. BIANA's approach to data unification solves many of the nomenclature issues common to systems dealing with biological data. BIANA can easily be extended to handle new specific data repositories and new specific data types. The unification protocol allows BIANA to be a flexible tool suitable for different user requirements: non-expert users can use a suggested unification protocol while expert users can define their own specific unification rules.
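The edge-transfer idea described above, inferring latent relationships between entities that share properties, can be illustrated in a few lines of networkx. The sketch below is not BIANA's actual API; the entities, the similarity grouping, and the "source" attribute are hypothetical.

```python
import networkx as nx

# Toy entity network: nodes are bio-entities, edges are known relationships.
g = nx.Graph()
g.add_edge("proteinA", "proteinB", source="experimental")
g.add_edge("proteinC", "geneX", source="literature")

# Hypothetical grouping of entities sharing a property (e.g. high sequence
# similarity); within a group, relationships may be transferred.
similar = {"proteinA": "group1", "proteinC": "group1"}

def transfer_edges(graph, groups):
    """Add inferred edges: if u is related to t, every member of u's group
    gains an inferred edge to t as well."""
    inferred = nx.Graph(graph)
    by_group = {}
    for node, grp in groups.items():
        by_group.setdefault(grp, []).append(node)
    for members in by_group.values():
        for u in members:
            for t in graph.neighbors(u):
                for v in members:
                    if v != u and not inferred.has_edge(v, t):
                        inferred.add_edge(v, t, source="inferred")
    return inferred

expanded = transfer_edges(g, similar)
print(sorted(expanded.edges(data="source")))
```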
MISTIC2: comprehensive server to study coevolution in protein families.
Colell, Eloy A; Iserte, Javier A; Simonetti, Franco L; Marino-Buslje, Cristina
2018-06-14
Correlated mutations between residue pairs in evolutionarily related proteins arise from constraints needed to maintain a functional and stable protein. Identifying these inter-related positions narrows down the search for structurally or functionally important sites. MISTIC is a server designed to assist users to calculate covariation in protein families and provide them with an interactive tool to visualize the results. Here, we present MISTIC2, an update to the previous server that allows users to calculate four covariation methods (MIp, mfDCA, plmDCA and gaussianDCA). The results visualization framework has been reworked for improved performance, compatibility and user experience. It includes a circos representation of the information contained in the alignment, an interactive covariation network, a 3D structure viewer and a sequence logo. Other components provide additional information, such as residue annotations, a ROC curve for assessing contact prediction, data tables and different ways of filtering the data and exporting figures. Comparison of different methods is easily done, and combining scores is also possible. A newly implemented web service allows users to access MISTIC2 programmatically using an API to calculate covariation and retrieve results. MISTIC2 is available at: https://mistic2.leloir.org.ar.
A Complex Prime Numerical Representation of Amino Acids for Protein Function Comparison.
Chen, Duo; Wang, Jiasong; Yan, Ming; Bao, Forrest Sheng
2016-08-01
Computationally assessing the functional similarity between proteins is an important task of bioinformatics research. It can help molecular biologists transfer knowledge on certain proteins to others and hence reduce the amount of tedious and costly benchwork. Representation of amino acids, the building blocks of proteins, plays an important role in achieving this goal. Compared with symbolic representation, representing amino acids numerically can expand our ability to analyze proteins, including comparing their functional similarity. Among the state-of-the-art methods, the electro-ion interaction pseudopotential (EIIP) is widely adopted for the numerical representation of amino acids. However, it can suffer from degeneracy, in which two different amino acid sequences have the same numerical representation, owing to the design of EIIP. In light of this challenge, we propose a complex prime numerical representation (CPNR) of amino acids, inspired by the similarity between a pattern among prime numbers and the number of codons of amino acids. To empirically assess the effectiveness of the proposed method, we compare CPNR against EIIP. Experimental results demonstrate that the proposed method CPNR always achieves better performance than EIIP. We also develop a framework to combine the advantages of CPNR and EIIP, which enables us to improve the performance and study the unique characteristics of different representations.
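The abstract does not give the CPNR mapping itself, but the general idea of a codon-count-informed, complex-valued encoding can be sketched as follows. The prime assignment and the similarity measure below are entirely hypothetical and are not the published CPNR scheme; only the codon counts per amino acid are factual.

```python
import numpy as np

# Standard-genetic-code codon counts per amino acid (factual).
CODON_COUNTS = {
    "A": 4, "R": 6, "N": 2, "D": 2, "C": 2, "Q": 2, "E": 2, "G": 4,
    "H": 2, "I": 3, "L": 6, "K": 2, "M": 1, "F": 2, "P": 4, "S": 6,
    "T": 4, "W": 1, "Y": 2, "V": 4,
}

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29,
          31, 37, 41, 43, 47, 53, 59, 61, 67, 71]

# Hypothetical complex-valued code: real part is a distinct prime per residue
# type, imaginary part is the codon count.  This is NOT the published CPNR
# mapping; it only illustrates a degeneracy-free numerical representation.
ENCODING = {
    aa: complex(PRIMES[i], CODON_COUNTS[aa])
    for i, aa in enumerate(sorted(CODON_COUNTS))
}

def encode(sequence):
    """Map an amino-acid sequence to a complex-valued signal."""
    return np.array([ENCODING[aa] for aa in sequence])

def similarity(seq1, seq2):
    """Toy similarity: correlation of the magnitude spectra of the signals."""
    n = max(len(seq1), len(seq2))
    s1 = np.abs(np.fft.fft(encode(seq1), n))
    s2 = np.abs(np.fft.fft(encode(seq2), n))
    return float(np.corrcoef(s1, s2)[0, 1])

print(similarity("MKTAYIAKQR", "MKTAYVAKQR"))
```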
Staritzbichler, René; Anselmi, Claudio; Forrest, Lucy R.; Faraldo-Gómez, José D.
2014-01-01
As new atomic structures of membrane proteins are resolved, they reveal increasingly complex transmembrane topologies, and highly irregular surfaces with crevices and pores. In many cases, specific interactions formed with the lipid membrane are functionally crucial, as is the overall lipid composition. Compounded with increasing protein size, these characteristics pose a challenge for the construction of simulation models of membrane proteins in lipid environments; clearly, that these models are sufficiently realistic bears upon the reliability of simulation-based studies of these systems. Here, we introduce GRIFFIN, which uses a versatile framework to automate and improve a widely used membrane-embedding protocol. Initially, GRIFFIN carves out lipid and water molecules from a volume equivalent to that of the protein, so as to conserve the system density. In the subsequent optimization phase GRIFFIN adds an implicit grid-based protein force field to a molecular dynamics simulation of the pre-carved membrane. In this force field, atoms inside the implicit protein volume experience an outward force that will expel them from that volume, whereas those outside are subject to electrostatic and van der Waals interactions with the implicit protein. At each step of the simulation, these forces are updated by GRIFFIN and combined with the intermolecular forces of the explicit lipid-water system. This procedure enables the construction of realistic and reproducible starting configurations of the protein-membrane interface within a reasonable timeframe and with minimal intervention. GRIFFIN is a standalone tool designed to work alongside any existing molecular dynamics package, such as NAMD or GROMACS. PMID:24707227
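The initial carving step has a simple geometric core: drop any solvent or lipid molecule that overlaps the protein volume. The numpy sketch below illustrates that step only, using a fixed distance cutoff as a crude stand-in for the protein volume; it is not GRIFFIN itself, and the coordinates are randomly generated for illustration.

```python
import numpy as np

def carve(solvent_xyz, solvent_mol_ids, protein_xyz, cutoff=1.5):
    """Return a per-atom mask of solvent/lipid atoms to keep.

    A whole molecule is removed if any of its atoms lies within `cutoff`
    Angstroms of any protein atom (a stand-in for the protein volume).
    """
    # Pairwise distances between every solvent atom and every protein atom.
    diff = solvent_xyz[:, None, :] - protein_xyz[None, :, :]
    min_dist = np.sqrt((diff ** 2).sum(-1)).min(axis=1)
    clashing_atoms = min_dist < cutoff
    clashing_mols = set(solvent_mol_ids[clashing_atoms].tolist())
    return np.array([m not in clashing_mols for m in solvent_mol_ids])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    protein = rng.uniform(0, 10, size=(50, 3))    # assumed coordinates
    solvent = rng.uniform(-5, 15, size=(200, 3))
    mol_ids = np.repeat(np.arange(50), 4)         # 50 molecules x 4 atoms
    keep = carve(solvent, mol_ids, protein)
    print(f"kept {keep.sum()} of {len(keep)} solvent atoms")
```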
Structural basis for precursor protein-directed ribosomal peptide macrocyclization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Kunhua; Condurso, Heather L.; Li, Gengnan
Macrocyclization is a common feature of natural product biosynthetic pathways including the diverse family of ribosomal peptides. Microviridins are architecturally complex cyanobacterial ribosomal peptides that target proteases with potent reversible inhibition. The product structure is constructed via three macrocyclizations catalyzed sequentially by two members of the ATP-grasp family, a unique strategy for ribosomal peptide macrocyclization. Here we describe in detail the structural basis for the enzyme-catalyzed macrocyclizations in the microviridin J pathway of Microcystis aeruginosa. The macrocyclases MdnC and MdnB interact with a conserved α-helix of the precursor peptide using a novel precursor-peptide recognition mechanism. The results provide insight into the unique protein–protein interactions that are key to the chemistry, suggest an origin for the natural combinatorial synthesis of microviridin peptides, and provide a framework for future engineering efforts to generate designed compounds.
Yoon, Chun Hong; Yurkov, Mikhail V.; Schneidmiller, Evgeny A.; Samoylova, Liubov; Buzmakov, Alexey; Jurek, Zoltan; Ziaja, Beata; Santra, Robin; Loh, N. Duane; Tschentscher, Thomas; Mancuso, Adrian P.
2016-01-01
The advent of newer, brighter, and more coherent X-ray sources, such as X-ray Free-Electron Lasers (XFELs), represents a tremendous growth in the potential to apply coherent X-rays to determine the structure of materials from the micron-scale down to the Angstrom-scale. There is a significant need for a multi-physics simulation framework to perform source-to-detector simulations for a single particle imaging experiment, including (i) the multidimensional simulation of the X-ray source; (ii) simulation of the wave-optics propagation of the coherent XFEL beams; (iii) atomistic modelling of photon-material interactions; (iv) simulation of the time-dependent diffraction process, including incoherent scattering; (v) assembling noisy and incomplete diffraction intensities into a three-dimensional data set using the Expansion-Maximisation-Compression (EMC) algorithm and (vi) phase retrieval to obtain structural information. We demonstrate the framework by simulating a single-particle experiment for a nitrogenase iron protein using parameters of the SPB/SFX instrument of the European XFEL. This exercise demonstrably yields interpretable consequences for structure determination that are crucial yet currently unavailable for experiment design. PMID:27109208
NASA Technical Reports Server (NTRS)
Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn
2002-01-01
One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document proposes a strawman framework design for the climate community based on the integration of Cactus, from the relativistic physics community, and UCLA/UCB Distributed Data Broker (DDB) from the climate community. This design is the result of an extensive survey of climate models and frameworks in the climate community as well as frameworks from many other scientific communities. The design addresses fundamental development and runtime needs using Cactus, a framework with interfaces for FORTRAN and C-based languages, and high-performance model communication needs using DDB. This document also specifically explores object-oriented design issues in the context of climate modeling as well as climate modeling issues in terms of object-oriented design.
A Systematic Approach for Quantitative Analysis of Multidisciplinary Design Optimization Framework
NASA Astrophysics Data System (ADS)
Kim, Sangho; Park, Jungkeun; Lee, Jeong-Oog; Lee, Jae-Woo
An efficient Multidisciplinary Design and Optimization (MDO) framework for an aerospace engineering system should use and integrate distributed resources such as various analysis codes, optimization codes, Computer Aided Design (CAD) tools, Data Base Management Systems (DBMS), etc. in a heterogeneous environment, and needs to provide user-friendly graphical user interfaces. In this paper, we propose a systematic approach for determining a reference MDO framework and for evaluating MDO frameworks. The proposed approach incorporates two well-known methods, Analytic Hierarchy Process (AHP) and Quality Function Deployment (QFD), in order to provide a quantitative analysis of the qualitative criteria of MDO frameworks. Identification and hierarchy of the framework requirements and the corresponding solutions for the reference MDO frameworks, the general one and the aircraft-oriented one, were carefully investigated. The reference frameworks were also quantitatively identified using AHP and QFD. An assessment of three in-house frameworks was then performed. The results produced clear and useful guidelines for improvement of the in-house MDO frameworks and showed the feasibility of the proposed approach for evaluating an MDO framework without human interference.
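For reference, the AHP step reduces to computing priority weights from a pairwise comparison matrix via its principal eigenvector. A minimal numpy sketch is given below; the three criteria and the comparison values are assumptions for illustration, not those used in the paper.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise comparison matrix.

    The weights are the normalized principal eigenvector of the matrix.
    """
    values, vectors = np.linalg.eig(pairwise)
    principal = np.argmax(values.real)
    w = np.abs(vectors[:, principal].real)
    return w / w.sum()

# Illustrative reciprocal matrix for three framework criteria, e.g.
# usability, extensibility, performance (values are assumptions).
A = np.array([
    [1.0, 3.0, 0.5],
    [1 / 3.0, 1.0, 0.25],
    [2.0, 4.0, 1.0],
])

print(ahp_weights(A))  # here the third criterion receives the largest weight
```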
A novel framework for virtual prototyping of rehabilitation exoskeletons.
Agarwal, Priyanshu; Kuo, Pei-Hsin; Neptune, Richard R; Deshpande, Ashish D
2013-06-01
Human-worn rehabilitation exoskeletons have the potential to make therapeutic exercises increasingly accessible to disabled individuals while reducing the cost and labor involved in rehabilitation therapy. In this work, we propose a novel human-model-in-the-loop framework for virtual prototyping (design, control and experimentation) of rehabilitation exoskeletons by merging computational musculoskeletal analysis with simulation-based design techniques. The framework allows one to iteratively optimize the design and control algorithm of an exoskeleton using simulation. We introduce biomechanical, morphological, and controller measures to quantify the performance of the device for the optimization study. Furthermore, the framework allows one to carry out virtual experiments for testing specific "what-if" scenarios to quantify device performance and recovery progress. To illustrate the application of the framework, we present a case study wherein the design and analysis of an index-finger exoskeleton is carried out using the proposed framework.
ERIC Educational Resources Information Center
Klebansky, Anna; Fraser, Sharon P.
2013-01-01
This paper details a conceptual framework that situates curriculum design for information literacy and lifelong learning, through a cohesive developmental information literacy based model for learning, at the core of teacher education courses at UTAS. The implementation of the framework facilitates curriculum design that systematically,…
Application of Frameworks in the Analysis and (Re)design of Interactive Visual Learning Tools
ERIC Educational Resources Information Center
Liang, Hai-Ning; Sedig, Kamran
2009-01-01
Interactive visual learning tools (IVLTs) are software environments that encode and display information visually and allow learners to interact with the visual information. This article examines the application and utility of frameworks in the analysis and design of IVLTs at the micro level. Frameworks play an important role in any design. They…
Integrated Bio-Entity Network: A System for Biological Knowledge Discovery
Bell, Lindsey; Chowdhary, Rajesh; Liu, Jun S.; Niu, Xufeng; Zhang, Jinfeng
2011-01-01
A significant part of our biological knowledge is centered on relationships between biological entities (bio-entities) such as proteins, genes, small molecules, pathways, gene ontology (GO) terms and diseases. Accumulated at an increasing speed, the information on bio-entity relationships is archived in different forms at scattered places. Most of such information is buried in scientific literature as unstructured text. Organizing heterogeneous information in a structured form not only facilitates study of biological systems using integrative approaches, but also allows discovery of new knowledge in an automatic and systematic way. In this study, we performed a large scale integration of bio-entity relationship information from both databases containing manually annotated, structured information and automatic information extraction of unstructured text in scientific literature. The relationship information we integrated in this study includes protein–protein interactions, protein/gene regulations, protein–small molecule interactions, protein–GO relationships, protein–pathway relationships, and pathway–disease relationships. The relationship information is organized in a graph data structure, named integrated bio-entity network (IBN), where the vertices are the bio-entities and edges represent their relationships. Under this framework, graph theoretic algorithms can be designed to perform various knowledge discovery tasks. We designed breadth-first search with pruning (BFSP) and most probable path (MPP) algorithms to automatically generate hypotheses—the indirect relationships with high probabilities in the network. We show that IBN can be used to generate plausible hypotheses, which not only help to better understand the complex interactions in biological systems, but also provide guidance for experimental designs. PMID:21738677
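The breadth-first search with pruning (BFSP) idea can be sketched compactly: expand paths outward from a source entity and discard any partial path whose cumulative probability drops below a threshold. The toy graph, edge probabilities, and threshold below are illustrative, not taken from IBN.

```python
from collections import deque

# Toy bio-entity network: edge weights are assumed relationship probabilities.
EDGES = {
    "geneA": {"proteinB": 0.9, "pathwayC": 0.6},
    "proteinB": {"diseaseD": 0.8},
    "pathwayC": {"diseaseD": 0.5},
    "diseaseD": {},
}

def bfs_with_pruning(start, target, min_prob=0.4, max_depth=4):
    """Enumerate indirect start->target paths whose probability (product of
    edge probabilities) stays above min_prob, pruning weak branches early."""
    hypotheses = []
    queue = deque([([start], 1.0)])
    while queue:
        path, prob = queue.popleft()
        node = path[-1]
        if node == target and len(path) > 2:   # indirect relationships only
            hypotheses.append((path, prob))
            continue
        if len(path) > max_depth:
            continue
        for nxt, p in EDGES.get(node, {}).items():
            if nxt in path:
                continue
            if prob * p >= min_prob:           # pruning step
                queue.append((path + [nxt], prob * p))
    return sorted(hypotheses, key=lambda h: -h[1])

print(bfs_with_pruning("geneA", "diseaseD"))
```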
Tran, Hoang T.; Pappu, Rohit V.
2006-01-01
Our focus is on an appropriate theoretical framework for describing highly denatured proteins. In high concentrations of denaturants, proteins behave like polymers in a good solvent and ensembles for denatured proteins can be modeled by ignoring all interactions except excluded volume (EV) effects. To assay conformational preferences of highly denatured proteins, we quantify a variety of properties for EV-limit ensembles of 23 two-state proteins. We find that modeled denatured proteins can be best described as follows. Average shapes are consistent with prolate ellipsoids. Ensembles are characterized by large correlated fluctuations. Sequence-specific conformational preferences are restricted to local length scales that span five to nine residues. Beyond local length scales, chain properties follow well-defined power laws that are expected for generic polymers in the EV limit. The average available volume is filled inefficiently, and cavities of all sizes are found within the interiors of denatured proteins. All properties characterized from simulated ensembles match predictions from rigorous field theories. We use our results to resolve between conflicting proposals for structure in ensembles for highly denatured states. PMID:16766618
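The power laws referred to here are presumably the standard excluded-volume (self-avoiding chain) scaling, which for the radius of gyration reads

```latex
R_g \simeq R_0 \, N^{\nu}, \qquad \nu \approx 0.588 \ (\text{Flory estimate } \nu = 3/5),
```

where N is the number of residues and the prefactor R_0 depends on local chain stiffness.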
Putting proteins back into water
NASA Astrophysics Data System (ADS)
de Los Rios, Paolo; Caldarelli, Guido
2000-12-01
We introduce a simplified protein model where the solvent (water) degrees of freedom appear explicitly (although in an extremely simplified fashion). Using this model we are able to recover the thermodynamic phenomenology of proteins over a wide range of temperatures. In particular we describe both the warm and the cold protein denaturation within a single framework, while addressing important issues about the structure of model proteins.
Computational methods in sequence and structure prediction
NASA Astrophysics Data System (ADS)
Lang, Caiyi
This dissertation is organized into two parts. In the first part, we will discuss three computational methods for cis-regulatory element recognition in three different gene regulatory networks as the following: (a) Using a comprehensive "Phylogenetic Footprinting Comparison" method, we will investigate the promoter sequence structures of three enzymes (PAL, CHS and DFR) that catalyze sequential steps in the pathway from phenylalanine to anthocyanins in plants. Our result shows there exists a putative cis-regulatory element "AC(C/G)TAC(C)" in the upstream of these enzyme genes. We propose this cis-regulatory element to be responsible for the genetic regulation of these three enzymes and this element might also be the binding site for the MYB-class transcription factor PAP1. (b) We will investigate the role of the Arabidopsis gene glutamate receptor 1.1 (AtGLR1.1) in C and N metabolism by utilizing the microarray data we obtained from AtGLR1.1-deficient lines (antiAtGLR1.1). We focus our investigation on the putatively co-regulated transcript profile of 876 genes we have collected in antiAtGLR1.1 lines. By (a) scanning the occurrence of several groups of known abscisic acid (ABA)-related cis-regulatory elements in the upstream regions of 876 Arabidopsis genes; and (b) exhaustive scanning of all possible 6-10 bp motif occurrences in the upstream regions of the same set of genes, we are able to make a quantitative estimation of the enrichment level of each of the cis-regulatory element candidates. We finally conclude that one specific cis-regulatory element group, called "ABRE" elements, is statistically highly enriched within the 876-gene group as compared to their occurrence within the genome. (c) We will introduce a new general purpose algorithm, called "fuzzy REDUCE1", which we have developed recently for automated cis-regulatory element identification. In the second part, we will discuss our newly devised protein design framework. With this framework we have developed a software package which is capable of designing novel protein structures at atomic resolution. This software package allows us to perform protein structure design with a flexible backbone. The backbone flexibility includes loop region relaxation as well as a secondary structure collective mode relaxation scheme. (Abstract shortened by UMI.)
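The exhaustive motif scan in (b) boils down to counting every k-mer in a set of upstream regions and comparing its frequency with a background set. A toy Python sketch of that enrichment estimate follows; the sequences, k, and pseudocount are made up for illustration.

```python
from collections import Counter
from itertools import chain

def kmer_counts(sequences, k):
    """Count occurrences of every k-mer across a list of DNA sequences."""
    return Counter(chain.from_iterable(
        (seq[i:i + k] for i in range(len(seq) - k + 1)) for seq in sequences
    ))

def enrichment(target_seqs, background_seqs, k=6, pseudo=1.0):
    """Per-k-mer enrichment ratio of target vs background frequencies."""
    t, b = kmer_counts(target_seqs, k), kmer_counts(background_seqs, k)
    t_total, b_total = sum(t.values()), sum(b.values())
    return {
        kmer: ((t[kmer] + pseudo) / t_total) / ((b.get(kmer, 0) + pseudo) / b_total)
        for kmer in t
    }

# Made-up upstream regions; a real analysis would use the 876-gene set and a
# genome-wide background.
target = ["TTACGTGTCAACGTGTC", "GGACGTGTCCT"]
background = ["TTTTTTTTTTTTTTTTT", "ACGTACGTACGTACGT"]
scores = enrichment(target, background, k=6)
print(max(scores.items(), key=lambda kv: kv[1]))  # most enriched k-mer
```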
Design of Mobile Augmented Reality in Health Care Education: A Theory-Driven Framework.
Zhu, Egui; Lilienthal, Anneliese; Shluzas, Lauren Aquino; Masiello, Italo; Zary, Nabil
2015-09-18
Augmented reality (AR) is increasingly used across a range of subject areas in health care education as health care settings partner to bridge the gap between knowledge and practice. As the first contact with patients, general practitioners (GPs) are important in the battle against a global health threat, the spread of antibiotic resistance. AR has potential as a practical tool for GPs to combine learning and practice in the rational use of antibiotics. This paper was driven by learning theory to develop a mobile augmented reality education (MARE) design framework. The primary goal of the framework is to guide the development of AR educational apps. This study focuses on (1) identifying suitable learning theories for guiding the design of AR education apps, (2) integrating learning outcomes and learning theories to support health care education through AR, and (3) applying the design framework in the context of improving GPs' rational use of antibiotics. The design framework was first constructed with the conceptual framework analysis method. Data were collected from multidisciplinary publications and reference materials and were analyzed with directed content analysis to identify key concepts and their relationships. Then the design framework was applied to a health care educational challenge. The proposed MARE framework consists of three hierarchical layers: the foundation, function, and outcome layers. Three learning theories-situated, experiential, and transformative learning-provide foundational support based on differing views of the relationships among learning, practice, and the environment. The function layer depends upon the learners' personal paradigms and indicates how health care learning could be achieved with MARE. The outcome layer analyzes different learning abilities, from knowledge to the practice level, to clarify learning objectives and expectations and to avoid teaching pitched at the wrong level. Suggestions for learning activities and the requirements of the learning environment form the foundation for AR to fill the gap between learning outcomes and medical learners' personal paradigms. With the design framework, the expected rational use of antibiotics by GPs is described and is easy to execute and evaluate. The comparison of specific expected abilities with the GP personal paradigm helps solidify the GP practical learning objectives and helps design the learning environment and activities. The learning environment and activities were supported by learning theories. This paper describes a framework for guiding the design, development, and application of mobile AR for medical education in the health care setting. The framework is theory driven with an understanding of the characteristics of AR and specific medical disciplines toward helping medical education improve professional development from knowledge to practice. Future research will use the framework as a guide for developing AR apps in practice to validate and improve the design framework.
Arighi, Cecilia; Shamovsky, Veronica; Masci, Anna Maria; Ruttenberg, Alan; Smith, Barry; Natale, Darren A; Wu, Cathy; D'Eustachio, Peter
2015-01-01
The Protein Ontology (PRO) provides terms for and supports annotation of species-specific protein complexes in an ontology framework that relates them both to their components and to species-independent families of complexes. Comprehensive curation of experimentally known forms and annotations thereof is expected to expose discrepancies, differences, and gaps in our knowledge. We have annotated the early events of innate immune signaling mediated by Toll-Like Receptor 3 and 4 complexes in human, mouse, and chicken. The resulting ontology and annotation data set has allowed us to identify species-specific gaps in experimental data and possible functional differences between species, and to employ inferred structural and functional relationships to suggest plausible resolutions of these discrepancies and gaps.
SCOWLP classification: Structural comparison and analysis of protein binding regions
Teyra, Joan; Paszkowski-Rogacz, Maciej; Anders, Gerd; Pisabarro, M Teresa
2008-01-01
Background Detailed information about protein interactions is critical for our understanding of the principles governing protein recognition mechanisms. The structures of many proteins have been experimentally determined in complex with different ligands bound either in the same or different binding regions. Thus, the structural interactome requires the development of tools to classify protein binding regions. A proper classification may provide a general view of the regions that a protein uses to bind others and also facilitate a detailed comparative analysis of the interacting information for specific protein binding regions at the atomic level. Such classification might be of potential use for deciphering protein interaction networks, understanding protein function, rational engineering and design. Description Protein binding regions (PBRs) might be ideally described as well-defined separated regions that share no interacting residues with one another. However, PBRs are often irregular, discontinuous and can share a wide range of interacting residues among them. The criteria to define an individual binding region can often be arbitrary and may differ from other binding regions within a protein family. Therefore, the rationale behind protein interface classification should aim to fulfil the requirements of the analysis to be performed. We extract detailed interaction information of protein domains, peptides and interfacial solvent from the SCOWLP database and we classify the PBRs of each domain family. For this purpose, we define a similarity index based on the overlapping of interacting residues mapped in pair-wise structural alignments. We perform our classification with agglomerative hierarchical clustering using the complete-linkage method. Our classification is calculated at different similarity cut-offs to allow flexibility in the analysis of PBRs, a feature especially interesting for those protein families with conflictive binding regions. The hierarchical classification of PBRs is implemented into the SCOWLP database and extends the SCOP classification with three additional family sub-levels: Binding Region, Interface and Contacting Domains. SCOWLP contains 9,334 binding regions distributed within 2,561 families. In 65% of the cases we observe families containing more than one binding region. Moreover, 22% of the regions form complexes with more than one different protein family. Conclusion The current SCOWLP classification and its web application represent a framework for the study of protein interfaces and comparative analysis of protein family binding regions. This comparison can be performed at the atomic level and allows the user to study interactome conservation and variability. The new SCOWLP classification may be of great utility for reconstruction of protein complexes, understanding protein networks and ligand design. SCOWLP will be updated with every SCOP release. The web application is available at . PMID:18182098
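The clustering step described here can be sketched with scipy: convert a residue-overlap similarity matrix into distances and apply complete-linkage agglomerative clustering, cutting the dendrogram at a chosen similarity cut-off. The similarity values below are illustrative, not SCOWLP data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Illustrative pairwise similarity (fraction of shared interacting residues)
# between five binding regions of a hypothetical domain family.
similarity = np.array([
    [1.0, 0.8, 0.7, 0.1, 0.2],
    [0.8, 1.0, 0.6, 0.2, 0.1],
    [0.7, 0.6, 1.0, 0.1, 0.1],
    [0.1, 0.2, 0.1, 1.0, 0.9],
    [0.2, 0.1, 0.1, 0.9, 1.0],
])

distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)

# Complete-linkage agglomerative clustering on the condensed distance matrix.
Z = linkage(squareform(distance, checks=False), method="complete")

# Cut the dendrogram at a similarity cut-off of 0.5 (i.e. distance 0.5).
labels = fcluster(Z, t=0.5, criterion="distance")
print(labels)  # regions 1-3 fall in one cluster, regions 4-5 in another
```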
ERIC Educational Resources Information Center
Bozkurt, Ipek; Helm, James
2013-01-01
This paper develops a systems engineering-based framework to assist in the design of an online engineering course. Specifically, the purpose of the framework is to provide a structured methodology for the design, development and delivery of a fully online course, either brand new or modified from an existing face-to-face course. The main strength…
A modular toolset for recombination transgenesis and neurogenetic analysis of Drosophila.
Wang, Ji-Wu; Beck, Erin S; McCabe, Brian D
2012-01-01
Transgenic Drosophila have contributed extensively to our understanding of nervous system development, physiology and behavior in addition to being valuable models of human neurological disease. Here, we have generated a novel series of modular transgenic vectors designed to optimize and accelerate the production and analysis of transgenes in Drosophila. We constructed a novel vector backbone, pBID, that allows both phiC31 targeted transgene integration and incorporates insulator sequences to ensure specific and uniform transgene expression. Upon this framework, we have built a series of constructs that are either backwards compatible with existing restriction enzyme based vectors or utilize Gateway recombination technology for high-throughput cloning. These vectors allow for endogenous promoter or Gal4 targeted expression of transgenic proteins with or without fluorescent protein or epitope tags. In addition, we have generated constructs that facilitate transgenic splice isoform specific RNA inhibition of gene expression. We demonstrate the utility of these constructs to analyze proteins involved in nervous system development, physiology and neurodegenerative disease. We expect that these reagents will facilitate the proficiency and sophistication of Drosophila genetic analysis in both the nervous system and other tissues.
Quantitative analysis of intra-Golgi transport shows intercisternal exchange for all cargo
Dmitrieff, Serge; Rao, Madan; Sens, Pierre
2013-01-01
The mechanisms controlling the transport of proteins through the Golgi stack of mammalian and plant cells are the subject of intense debate, with two models, cisternal progression and intercisternal exchange, emerging as major contenders. A variety of transport experiments have claimed support for each of these models. We reevaluate these experiments using a single quantitative coarse-grained framework of intra-Golgi transport that accounts for both transport models and their many variants. Our analysis makes a definitive case for the existence of intercisternal exchange both for small membrane proteins and large protein complexes; this implies that membrane structures larger than the typical protein-coated vesicles must be involved in transport. Notwithstanding, we find that current observations on protein transport cannot rule out cisternal progression as contributing significantly to the transport process. To discriminate between the different models of intra-Golgi transport, we suggest experiments and an analysis based on our extended theoretical framework that compare the dynamics of transiting and resident proteins. PMID:24019488
Framework to Delay Corn Rootworm Resistance
This proposed framework is intended to delay the corn rootworm pest becoming resistant to corn genetically engineered to produce Bt proteins, which kill corn rootworms but do not affect people or wildlife. It includes requirements on Bt corn manufacturers.
Jiang, Guoqian; Wang, Chen; Zhu, Qian; Chute, Christopher G
2013-01-01
Knowledge-driven text mining is becoming an important research area for identifying pharmacogenomics target genes. However, few such studies have focused on the pharmacogenomics targets of adverse drug events (ADEs). The objective of the present study is to build a framework of knowledge integration and discovery that aims to support pharmacogenomics target prediction for ADEs. We integrate a semantically annotated literature corpus, Semantic MEDLINE, with a semantically coded ADE knowledgebase known as ADEpedia using a semantic-web-based framework. We developed a knowledge discovery approach combining a network analysis of a protein-protein interaction (PPI) network and a gene functional classification approach. We performed a case study of drug-induced long QT syndrome to demonstrate the usefulness of the framework in predicting potential pharmacogenomics targets of ADEs.
Structure-Based Virtual Screening for Drug Discovery: Principles, Applications and Recent Advances
Lionta, Evanthia; Spyrou, George; Vassilatis, Demetrios K.; Cournia, Zoe
2014-01-01
Structure-based drug discovery (SBDD) is becoming an essential tool in assisting fast and cost-efficient lead discovery and optimization. The application of rational, structure-based drug design is proven to be more efficient than the traditional way of drug discovery since it aims to understand the molecular basis of a disease and utilizes the knowledge of the three-dimensional structure of the biological target in the process. In this review, we focus on the principles and applications of Virtual Screening (VS) within the context of SBDD and examine different procedures ranging from the initial stages of the process that include receptor and library pre-processing, to docking, scoring and post-processing of top-scoring hits. Recent improvements in structure-based virtual screening (SBVS) efficiency through ensemble docking, induced fit and consensus docking are also discussed. The review highlights advances in the field within the framework of several success studies that have led to nM inhibition directly from VS and provides recent trends in library design as well as discusses limitations of the method. Applications of SBVS in the design of substrates for engineered proteins that enable the discovery of new metabolic and signal transduction pathways and the design of inhibitors of multifunctional proteins are also reviewed. Finally, we contribute two promising VS protocols recently developed by us that aim to increase inhibitor selectivity. In the first protocol, we describe the discovery of micromolar inhibitors through SBVS designed to inhibit the mutant H1047R PI3Kα kinase. Second, we discuss a strategy for the identification of selective binders for the RXRα nuclear receptor. In this protocol, a set of target structures is constructed for ensemble docking based on binding site shape characterization and clustering, aiming to enhance the hit rate of selective inhibitors for the desired protein target through the SBVS process. PMID:25262799
Solving Homeland Security’s Wicked Problems: A Design Thinking Approach
2015-09-01
This thesis provides a framework for how S&T can incorporate design-thinking principles that are working well in other domains to spur solutions. Galbraith's Star Model was used to analyze how DHS S&T, MindLab, and DARPA apply design-thinking principles and to inform the framework.
AlBader, Bader; AlHelal, Abdulaziz; Proussaefs, Periklis; Garbacea, Antonela; Kattadiyil, Mathew T; Lozada, Jaime
Implant-supported fixed complete dentures, often referred to as hybrid prostheses, have been associated with high implant survival rates but also with a high incidence of mechanical prosthetic complications. The most frequent of these complications have been fracture and wear of the veneering material. The proposed design concept incorporates the occlusal surfaces of the posterior teeth as part of a digitally milled metal framework by designing the posterior first molars in full contour as part of the framework. The framework can be designed, scanned, and milled from a titanium blank using a milling machine. Acrylic resin teeth can then be placed on the framework following conventional protocol. The metal occlusal surfaces of the titanium-contoured molars will be in centric occlusion. It is hypothesized that metal occlusal surfaces in the posterior region may reduce occlusal wear in these types of prostheses. When the proposed design protocol is followed, the connection between the metal frame and the cantilever part of the prosthesis is reinforced, which may lead to fewer fractures of the metal framework.
Martinez, Carlos A.; Barr, Kenneth; Kim, Ah-Ram; Reinitz, John
2013-01-01
Synthetic biology offers novel opportunities for elucidating transcriptional regulatory mechanisms and enhancer logic. Complex cis-regulatory sequences—like the ones driving expression of the Drosophila even-skipped gene—have proven difficult to design from existing knowledge, presumably because of the large number of protein-protein interactions needed to drive the correct expression patterns of genes in multicellular organisms. This work discusses two novel computational methods for the custom design of enhancers that employ a sophisticated, empirically validated transcriptional model, optimization algorithms, and synthetic biology. These synthetic elements have both utilitarian and academic value, including improving existing regulatory models and addressing evolutionary questions. The first method uses simulated annealing to explore the sequence space for synthetic enhancers whose expression output fits a given search criterion. The second method uses a novel optimization algorithm to find functionally accessible pathways between two enhancer sequences. These paths describe a set of mutations along which the predicted expression pattern does not significantly vary at any point. Both methods rely on a predictive mathematical framework that maps the enhancer sequence space to functional output. PMID:23732772
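A minimal sketch of the simulated-annealing idea behind the first method is given below; the motif-counting score function is a toy stand-in, not the empirically validated transcriptional model used in the study.

# Minimal sketch of simulated annealing over a DNA sequence. The score is a
# toy motif count, a placeholder for a real sequence-to-expression model.
import math
import random

BASES = "ACGT"

def score(seq, motif="CACGTG"):
    return seq.count(motif)

def anneal(length=60, steps=5000, t0=1.0, cooling=0.999):
    seq = "".join(random.choice(BASES) for _ in range(length))
    best, temp = seq, t0
    for _ in range(steps):
        pos = random.randrange(length)
        cand = seq[:pos] + random.choice(BASES) + seq[pos + 1:]
        delta = score(cand) - score(seq)
        if delta >= 0 or random.random() < math.exp(delta / temp):
            seq = cand
            if score(seq) > score(best):
                best = seq
        temp *= cooling
    return best

print(anneal())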
Applying systems thinking to inform studies of wildlife trade in primates.
Blair, Mary E; Le, Minh D; Thạch, Hoàng M; Panariello, Anna; Vũ, Ngọc B; Birchette, Mark G; Sethi, Gautam; Sterling, Eleanor J
2017-11-01
Wildlife trade presents a major threat to primate populations, which are in demand from local to international scales for a variety of uses from food and traditional medicine to the exotic pet trade. We argue that an interdisciplinary framework to facilitate integration of socioeconomic, anthropological, and biological data across multiple spatial and temporal scales is essential to guide the study of wildlife trade dynamics and its impacts on primate populations. Here, we present a new way to design research on wildlife trade in primates using a systems thinking framework. We discuss how we constructed our framework, which follows a social-ecological system framework, to design an ongoing study of local, regional, and international slow loris (Nycticebus spp.) trade in Vietnam. We outline the process of iterative variable exploration and selection via this framework to inform study design. Our framework, guided by systems thinking, enables recognition of complexity in study design, from which the results can inform more holistic, site-appropriate, and effective trade management practices. We place our framework in the context of other approaches to studying wildlife trade and discuss options to address foreseeable challenges to implementing this new framework. © 2017 Wiley Periodicals, Inc.
Sharp, Phillip P; Garnier, Jean-Marc; Hatfaludi, Tamas; Xu, Zhen; Segal, David; Jarman, Kate E; Jousset, Hélène; Garnham, Alexandra; Feutrill, John T; Cuzzupe, Anthony; Hall, Peter; Taylor, Scott; Walkley, Carl R; Tyler, Dean; Dawson, Mark A; Czabotar, Peter; Wilks, Andrew F; Glaser, Stefan; Huang, David C S; Burns, Christopher J
2017-12-14
A number of diazepines are known to inhibit bromo- and extra-terminal domain (BET) proteins. Their BET inhibitory activity derives from the fusion of an acetyl-lysine mimetic heterocycle onto the diazepine framework. Herein we describe a straightforward, modular synthesis of novel 1,2,3-triazolobenzodiazepines and show that the 1,2,3-triazole acts as an effective acetyl-lysine mimetic heterocycle. Structure-based optimization of this series of compounds led to the development of potent BET bromodomain inhibitors with excellent activity against leukemic cells, concomitant with a reduction in c-MYC expression. These novel benzodiazepines therefore represent a promising class of therapeutic BET inhibitors.
Fey, E G; Wan, K M; Penman, S
1984-06-01
Madin-Darby canine kidney (MDCK) cells grow as differentiated, epithelial colonies that display tissue-like organization. We examined the structural elements underlying the colony morphology in situ using three consecutive extractions that produce well-defined fractions for both microscopy and biochemical analysis. First, soluble proteins and phospholipid were removed with Triton X-100 in a physiological buffer. The resulting skeletal framework retained nuclei, dense cytoplasmic filament networks, intercellular junctional complexes, and apical microvillar structures. Scanning electron microscopy showed that the apical cell morphology is largely unaltered by detergent extraction. Residual desmosomes, as can be seen in thin sections, were also well-preserved. The skeletal framework was visualized in three dimensions as an unembedded whole mount that revealed the filament networks that were masked in Epon-embedded thin sections of the same preparation. The topography of cytoskeletal filaments was relatively constant throughout the epithelial sheet, particularly across intercellular borders. This ordering of epithelial skeletal filaments across contiguous cell boundaries was in sharp contrast to the more independent organization of networks in autonomous cells such as fibroblasts. Further extraction removed the proteins of the salt-labile cytoskeleton and the chromatin as separate fractions, and left the nuclear matrix-intermediate filament (NM-IF) scaffold. The NM-IF contained only 5% of total cellular protein, but whole mount transmission electron microscopy and immunofluorescence showed that this scaffold was organized as in the intact epithelium. Immunoblots demonstrate that vimentin, cytokeratins, desmosomal proteins, and a 52,000-mol-wt nuclear matrix protein were found almost exclusively in the NM-IF scaffold. Vimentin was largely perinuclear while the cytokeratins were localized at the cell borders. The 52,000-mol-wt nuclear matrix protein was confined to the chromatin-depleted matrix and the desmosomal proteins were observed in punctate polygonal arrays at intercellular junctions. The filaments of the NM-IF were seen to be interconnected, via the desmosomes, over the entire epithelial colony. The differentiated epithelial morphology was reflected in both the cytoskeletal framework and the NM-IF scaffold.
PMID:6202700
A stochastic context free grammar based framework for analysis of protein sequences
Dyrka, Witold; Nebel, Jean-Christophe
2009-01-01
Background In the last decade, there have been many applications of formal language theory in bioinformatics, such as RNA structure prediction and detection of patterns in DNA. However, in the field of proteomics, the size of the protein alphabet and the complexity of the relationships between amino acids have largely limited the application of formal language theory to the production of grammars whose expressive power is no higher than that of stochastic regular grammars. Such grammars, like other state-of-the-art methods, cannot capture higher-order dependencies such as the nested and crossing relationships that are common in proteins. To overcome some of these limitations, we propose a Stochastic Context Free Grammar based framework for the analysis of protein sequences in which grammars are induced using a genetic algorithm. Results This framework was implemented in a system aimed at the production of binding site descriptors. These descriptors not only allow detection of protein regions that are involved in these sites, but also provide insight into their structure. Grammars were induced using quantitative properties of amino acids to deal with the size of the protein alphabet. Moreover, we imposed structural constraints on grammars to reduce the extent of the rule search space. Finally, grammars based on different properties were combined to convey as much information as possible. Evaluation was performed on sites of various sizes and complexity described either by PROSITE patterns, domain profiles or a set of patterns. Results show that the produced binding site descriptors are human-readable and, hence, highlight biologically meaningful features. Moreover, they achieve good accuracy in both annotation and detection. In addition, findings suggest that, unlike current state-of-the-art methods, our system may be particularly suited to deal with patterns shared by non-homologous proteins. Conclusion A new Stochastic Context Free Grammar based framework has been introduced that allows the production of binding site descriptors for the analysis of protein sequences. Experiments have shown that not only is this new approach valid, but it also produces human-readable descriptors for binding sites that have been beyond the capability of current machine learning techniques. PMID:19814800
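To make the grammar formalism concrete, the following minimal sketch scores a sequence of amino-acid property symbols with a toy stochastic context-free grammar using the inside algorithm; the grammar and symbols are illustrative assumptions, not grammars induced by the paper's genetic algorithm.

# Minimal sketch of scoring a sequence with a stochastic context-free grammar
# in Chomsky normal form (inside algorithm). The toy grammar uses two
# amino-acid property symbols: h = hydrophobic, p = polar.
from collections import defaultdict

# Rules: (lhs, rhs, probability); rhs is a terminal or a pair of nonterminals.
rules = [
    ("S", ("A", "B"), 0.6), ("S", ("B", "A"), 0.4),
    ("A", "h", 0.7), ("A", "p", 0.3),
    ("B", "p", 0.8), ("B", "h", 0.2),
]

def inside_probability(seq, start="S"):
    n = len(seq)
    table = defaultdict(float)              # (i, j, nonterminal) -> probability
    for i, sym in enumerate(seq):
        for lhs, rhs, p in rules:
            if rhs == sym:
                table[(i, i + 1, lhs)] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for lhs, rhs, p in rules:
                    if isinstance(rhs, tuple):
                        left, right = rhs
                        table[(i, j, lhs)] += p * table[(i, k, left)] * table[(k, j, right)]
    return table[(0, n, start)]

print(inside_probability("hp"))             # 0.6*0.7*0.8 + 0.4*0.2*0.3 = 0.36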
A Framework for the Design of Service Systems
NASA Astrophysics Data System (ADS)
Tan, Yao-Hua; Hofman, Wout; Gordijn, Jaap; Hulstijn, Joris
We propose a framework for the design and implementation of service systems, especially to design controls for long-term sustainable value co-creation. The framework is based on the software support tool e3-control. To illustrate the framework we use a large-scale case study, the Beer Living Lab, for simplification of customs procedures in international trade. The BeerLL shows how value co-creation can be achieved by reduction of administrative burden in international beer export due to electronic customs. Participants in the BeerLL are Heineken, IBM and Dutch Tax & Customs.
Toward a More Flexible Web-Based Framework for Multidisciplinary Design
NASA Technical Reports Server (NTRS)
Rogers, J. L.; Salas, A. O.
1999-01-01
In today's competitive environment, both industry and government agencies are under pressure to reduce the time and cost of multidisciplinary design projects. New tools have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. One such tool, a framework for multidisciplinary design, is defined as a hardware-software architecture that enables integration, execution, and communication among diverse disciplinary processes. An examination of current frameworks reveals weaknesses in various areas, such as sequencing, monitoring, controlling, and displaying the design process. The objective of this research is to explore how Web technology can improve these areas of weakness and lead toward a more flexible framework. This article describes a Web-based system that optimizes and controls the execution sequence of design processes in addition to monitoring the project status and displaying the design results.
NASA Technical Reports Server (NTRS)
Axdahl, Erik L.
2015-01-01
Removing human interaction from design processes by using automation may lead to gains in both productivity and design precision. This memorandum describes efforts to incorporate high-fidelity numerical analysis tools into an automated framework and to apply that framework to applications of practical interest. The purpose of this effort was to integrate VULCAN-CFD into an automated, DAKOTA-enabled framework, with a proof-of-concept application being the optimization of supersonic test facility nozzles. It was shown that the optimization framework could be deployed on a high performance computing cluster with the flow of information handled effectively to guide the optimization process. Furthermore, the application of the framework to supersonic test facility nozzle flowpath design and optimization was demonstrated using multiple optimization algorithms.
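The integration pattern described above, an optimizer repeatedly driving an external analysis code, can be sketched as follows; the nozzle parameters and the analytic stand-in for the flow solver are assumptions for illustration only, not the VULCAN-CFD/DAKOTA setup.

# Minimal sketch of wrapping an external analysis as an objective function.
# run_nozzle_analysis is a cheap analytic placeholder for launching a CFD job
# and parsing its output; parameter names are illustrative only.
from scipy.optimize import minimize

def run_nozzle_analysis(throat_radius, exit_radius):
    target_ratio = 4.0
    return (exit_radius / throat_radius - target_ratio) ** 2

def objective(x):
    throat_radius, exit_radius = x
    return run_nozzle_analysis(throat_radius, exit_radius)

result = minimize(objective, x0=[0.5, 1.5],
                  bounds=[(0.1, 1.0), (0.5, 5.0)], method="L-BFGS-B")
print(result.x, result.fun)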
Lee, Joseph G L; Averett, Paige E; Blanchflower, Tiffany; Gregory, Kyle R
2018-02-01
Researchers and regulators need to know how changes to cigarette packages can influence population health. We sought to advance research on the role of cigarette packaging by assessing a theory-informed framework from the fields of design and consumer research. The selected Context of Consumption Framework posits cognitive, affective, and behavioral responses to visual design. To assess the Framework's potential for guiding research on the visual design of cigarette packaging in the U.S., this study seeks to understand to what extent the Context of Consumption Framework converges with how adult smokers think and talk about cigarette pack designs. Data for this qualitative study came from six telephone-based focus groups conducted in March 2017. Two groups consisted of lesbian, gay, and bisexual (LGB) participants; two groups consisted of participants with less than four years of college education; one group included both LGB and straight participants; and one group was drawn from the general population. All groups were selected for regional, gender, and racial/ethnic diversity. Participants (n=33) represented all nine U.S. Census divisions. We conducted a deductive qualitative analysis. Cigarette package designs captured the participants' attention, suggested the characteristics of the product, and reflected (or could be leveraged to convey) multiple dimensions of consumer identity. With respect to affective responses to design, our participants shared that cigarette packaging conveyed how the pack could be used to particular ends, created an emotional response to the designs, complied with normative expectations of a cigarette, elicited interest when designs change, and prompted fascination when unique design characteristics are used. Use of the Context of Consumption Framework for cigarette product packaging design can inform regulatory research on tobacco product packaging. Researchers and regulators should consider multiple cognitive, affective, and behavioral responses to cigarette pack design.
A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints.
Sundharam, Sakthivel Manikandan; Navet, Nicolas; Altmeyer, Sebastian; Havet, Lionel
2018-02-20
Model-Driven Engineering (MDE) is widely applied in industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints such as control theory, software engineering, safety, etc. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he neglects, or gives less weight to, the other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on a timing tolerance contract to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter margin analysis along with co-simulation; software design, verified by a novel schedulability analysis; and run-time verification by monitoring the execution of the models on the target. This framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model interpretation, which enforces a timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system.
A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints
Navet, Nicolas; Havet, Lionel
2018-01-01
Model-Driven Engineering (MDE) is widely applied in industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints such as control theory, software engineering, safety, etc. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he neglects, or gives less weight to, the other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on a timing tolerance contract to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter margin analysis along with co-simulation; software design, verified by a novel schedulability analysis; and run-time verification by monitoring the execution of the models on the target. This framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model interpretation, which enforces a timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system. PMID:29461489
A Novel Design Framework for Structures/Materials with Enhanced Mechanical Performance
Liu, Jie; Fan, Xiaonan; Wen, Guilin; Qing, Qixiang; Wang, Hongxin; Zhao, Gang
2018-01-01
Structures and materials require simultaneous consideration of both their design and manufacturing processes to dramatically enhance manufacturability, assembly and maintainability. In this work, a novel design framework for structures/materials with a desired mechanical performance and compelling topological design properties, achieved using origami techniques, is presented. The framework comprises four procedures: topological design, unfold, reduction manufacturing, and fold. The topological design method, i.e., the solid isotropic material penalization (SIMP) method, serves to optimize the structure in order to achieve the preferred mechanical characteristics, and the origami technique is exploited to allow the structure to be rapidly and easily fabricated. The topological design and unfold procedures can be conveniently completed on a computer; reduction manufacturing, i.e., cutting, is then performed to remove material from the unfolded flat plate; and the final structure is obtained by folding the plate from the previous procedure. A series of cantilevers, consisting of origami parallel creases and Miura-ori (usually regarded as a metamaterial) and made of paperboard, are designed with the least weight and the required stiffness by using the proposed framework. The findings here furnish an alternative design framework for engineering structures that could be better than the 3D-printing technique, especially for large structures made of thin metal materials. PMID:29642555
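Two ingredients of the SIMP-based topological design step can be sketched as follows: the penalized stiffness interpolation and an optimality-criteria-style density update under a volume constraint; the element sensitivities are placeholder numbers, not output of a real finite element model.

import numpy as np

def simp_modulus(rho, e0=1.0, emin=1e-9, penal=3.0):
    # Penalized Young's modulus interpolation used by SIMP.
    return emin + rho ** penal * (e0 - emin)

def oc_update(rho, dc, vol_frac=0.5, move=0.2):
    # Optimality-criteria-style update, bisecting on the Lagrange multiplier.
    lo, hi = 1e-9, 1e9
    while (hi - lo) / (hi + lo) > 1e-4:
        lam = 0.5 * (lo + hi)
        new = rho * np.sqrt(np.maximum(-dc, 0.0) / lam)
        new = np.clip(new, np.maximum(rho - move, 0.001), np.minimum(rho + move, 1.0))
        if new.mean() > vol_frac:
            lo = lam
        else:
            hi = lam
    return new

rho = np.full(8, 0.5)                     # element densities
dc = -np.linspace(1.0, 0.1, 8)            # placeholder compliance sensitivities
print(simp_modulus(rho))
print(oc_update(rho, dc))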
A concept ideation framework for medical device design.
Hagedorn, Thomas J; Grosse, Ian R; Krishnamurty, Sundar
2015-06-01
Medical device design is a challenging process, often requiring collaboration between medical and engineering domain experts. This collaboration can be best institutionalized through systematic knowledge transfer between the two domains coupled with effective knowledge management throughout the design innovation process. Toward this goal, we present the development of a semantic framework for medical device design that unifies a large medical ontology with detailed engineering functional models along with the repository of design innovation information contained in the US Patent Database. As part of our development, existing medical, engineering, and patent document ontologies were modified and interlinked to create a comprehensive medical device innovation and design tool with appropriate properties and semantic relations to facilitate knowledge capture, enrich existing knowledge, and enable effective knowledge reuse for different scenarios. The result is a Concept Ideation Framework for Medical Device Design (CIFMeDD). Key features of the resulting framework include function-based searching and automated inter-domain reasoning to uniquely enable identification of functionally similar procedures, tools, and inventions from multiple domains based on simple semantic searches. The significance and usefulness of the resulting framework for aiding in conceptual design and innovation in the medical realm are explored via two case studies examining medical device design problems. Copyright © 2015 Elsevier Inc. All rights reserved.
A Novel Design Framework for Structures/Materials with Enhanced Mechanical Performance.
Liu, Jie; Fan, Xiaonan; Wen, Guilin; Qing, Qixiang; Wang, Hongxin; Zhao, Gang
2018-04-09
Structures and materials require simultaneous consideration of both their design and manufacturing processes to dramatically enhance manufacturability, assembly and maintainability. In this work, a novel design framework for structures/materials with a desired mechanical performance and compelling topological design properties, achieved using origami techniques, is presented. The framework comprises four procedures: topological design, unfold, reduction manufacturing, and fold. The topological design method, i.e., the solid isotropic material penalization (SIMP) method, serves to optimize the structure in order to achieve the preferred mechanical characteristics, and the origami technique is exploited to allow the structure to be rapidly and easily fabricated. The topological design and unfold procedures can be conveniently completed on a computer; reduction manufacturing, i.e., cutting, is then performed to remove material from the unfolded flat plate; and the final structure is obtained by folding the plate from the previous procedure. A series of cantilevers, consisting of origami parallel creases and Miura-ori (usually regarded as a metamaterial) and made of paperboard, are designed with the least weight and the required stiffness by using the proposed framework. The findings here furnish an alternative design framework for engineering structures that could be better than the 3D-printing technique, especially for large structures made of thin metal materials.
Use of theoretical and conceptual frameworks in qualitative research.
Green, Helen Elise
2014-07-01
To debate the definition and use of theoretical and conceptual frameworks in qualitative research. There is a paucity of literature to help the novice researcher to understand what theoretical and conceptual frameworks are and how they should be used. This paper acknowledges the interchangeable usage of these terms and researchers' confusion about the differences between the two. It discusses how researchers have used theoretical and conceptual frameworks and the notion of conceptual models. Detail is given about how one researcher incorporated a conceptual framework throughout a research project, the purpose for doing so and how this led to a resultant conceptual model. Concepts from Abbott (1988) and Witz (1992) were used to provide a framework for research involving two case study sites. The framework was used to determine research questions and give direction to interviews and discussions to focus the research. Some research methods do not overtly use a theoretical framework or conceptual framework in their design, but this is implicit and underpins the method design, for example in grounded theory. Other qualitative methods use one or the other to frame the design of a research project or to explain the outcomes. An example is given of how a conceptual framework was used throughout a research project. Theoretical and conceptual frameworks are terms that are regularly used in research but rarely explained. Textbooks should discuss what they are and how they can be used, so novice researchers understand how they can help with research design. Theoretical and conceptual frameworks need to be more clearly understood by researchers and correct terminology used to ensure clarity for novice researchers.
A Markov Random Field Framework for Protein Side-Chain Resonance Assignment
NASA Astrophysics Data System (ADS)
Zeng, Jianyang; Zhou, Pei; Donald, Bruce Randall
Nuclear magnetic resonance (NMR) spectroscopy plays a critical role in structural genomics, and serves as a primary tool for determining protein structures, dynamics and interactions in physiologically-relevant solution conditions. The current speed of protein structure determination via NMR is limited by the lengthy time required in resonance assignment, which maps spectral peaks to specific atoms and residues in the primary sequence. Although numerous algorithms have been developed to address the backbone resonance assignment problem [68,2,10,37,14,64,1,31,60], little work has been done to automate side-chain resonance assignment [43, 48, 5]. Most previous attempts in assigning side-chain resonances depend on a set of NMR experiments that record through-bond interactions with side-chain protons for each residue. Unfortunately, these NMR experiments have low sensitivity and limited performance on large proteins, which makes it difficult to obtain enough side-chain resonance assignments. On the other hand, it is essential to obtain almost all of the side-chain resonance assignments as a prerequisite for high-resolution structure determination. To overcome this deficiency, we present a novel side-chain resonance assignment algorithm based on alternative NMR experiments measuring through-space interactions between protons in the protein, which also provide crucial distance restraints and are normally required in high-resolution structure determination. We cast the side-chain resonance assignment problem into a Markov Random Field (MRF) framework, and extend and apply combinatorial protein design algorithms to compute the optimal solution that best interprets the NMR data. Our MRF framework captures the contact map information of the protein derived from NMR spectra, and exploits the structural information available from the backbone conformations determined by orientational restraints and a set of discretized side-chain conformations (i.e., rotamers). A Hausdorff-based computation is employed in the scoring function to evaluate the probability of side-chain resonance assignments to generate the observed NMR spectra. The complexity of the assignment problem is first reduced by using a dead-end elimination (DEE) algorithm, which prunes side-chain resonance assignments that are provably not part of the optimal solution. Then an A* search algorithm is used to find a set of optimal side-chain resonance assignments that best fit the NMR data. We have tested our algorithm on NMR data for five proteins, including the FF Domain 2 of human transcription elongation factor CA150 (FF2), the B1 domain of Protein G (GB1), human ubiquitin, the ubiquitin-binding zinc finger domain of the human Y-family DNA polymerase Eta (pol η UBZ), and the human Set2-Rpb1 interacting domain (hSRI). Our algorithm assigns resonances for more than 90% of the protons in the proteins, and achieves about 80% correct side-chain resonance assignments. The final structures computed using distance restraints resulting from the set of assigned side-chain resonances have backbone RMSD 0.5 - 1.4 Å and all-heavy-atom RMSD 1.0 - 2.2 Å from the reference structures that were determined by X-ray crystallography or traditional NMR approaches. These results demonstrate that our algorithm can be successfully applied to automate side-chain resonance assignment and high-quality protein structure determination. 
Since our algorithm does not require any specific NMR experiments for measuring the through-bond interactions with side-chain protons, it can save a significant amount of both experimental cost and spectrometer time, and hence accelerate the NMR structure determination process.
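A minimal sketch of the dead-end elimination step is given below, using the standard Goldstein criterion; the variables, options, and energies are invented placeholders, and the Hausdorff-based NMR scoring from the paper is not reproduced.

def goldstein_prune(self_energy, pair_energy):
    # self_energy[v][r]: self energy of option r at variable v.
    # pair_energy[v][w][(r, s)]: pair energy, stored for both orders of (v, w).
    pruned = {v: set() for v in self_energy}
    for v, options in self_energy.items():
        for r in options:
            for t in options:
                if r == t or t in pruned[v]:
                    continue
                bound = options[r] - options[t]
                for w in self_energy:
                    if w == v:
                        continue
                    bound += min(pair_energy[v][w][(r, s)] - pair_energy[v][w][(t, s)]
                                 for s in self_energy[w])
                if bound > 0:             # r can never beat t, so eliminate r
                    pruned[v].add(r)
                    break
    return pruned

self_e = {"res1": {"a": 2.0, "b": 0.5}, "res2": {"x": 1.0, "y": 1.2}}
pair_12 = {("a", "x"): 1.5, ("a", "y"): 2.0, ("b", "x"): 0.2, ("b", "y"): 0.1}
pair_e = {"res1": {"res2": pair_12},
          "res2": {"res1": {(s, r): e for (r, s), e in pair_12.items()}}}
print(goldstein_prune(self_e, pair_e))    # expected: {'res1': {'a'}, 'res2': {'y'}}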
Research design: the methodology for interdisciplinary research framework.
Tobi, Hilde; Kampen, Jarl K
2018-01-01
Many of today's global scientific challenges require the joint involvement of researchers from different disciplinary backgrounds (social sciences, environmental sciences, climatology, medicine, etc.). Such interdisciplinary research teams face many challenges resulting from differences in training and scientific culture. Interdisciplinary education programs are required to train truly interdisciplinary scientists with respect to the critical factors of skills and competences. For that purpose this paper presents the Methodology for Interdisciplinary Research (MIR) framework. The MIR framework was developed to help cross disciplinary borders, especially those between the natural sciences and the social sciences. The framework has been specifically constructed to facilitate the design of interdisciplinary scientific research, and can be applied in an educational program, as a reference for monitoring the phases of interdisciplinary research, and as a tool to design such research in a process approach. It is suitable for research projects of different sizes and levels of complexity, and it allows for a range of method combinations (case study, mixed methods, etc.). The different phases of designing interdisciplinary research in the MIR framework are described and illustrated by real-life applications in teaching and research. We further discuss the framework's utility for research design in landscape architecture and in mixed methods research, and provide an outlook on the framework's potential in inclusive interdisciplinary research and, last but not least, research integrity.
Kinjo, Akira R.; Suzuki, Hirofumi; Yamashita, Reiko; Ikegawa, Yasuyo; Kudou, Takahiro; Igarashi, Reiko; Kengaku, Yumiko; Cho, Hasumi; Standley, Daron M.; Nakagawa, Atsushi; Nakamura, Haruki
2012-01-01
The Protein Data Bank Japan (PDBj, http://pdbj.org) is a member of the worldwide Protein Data Bank (wwPDB) and accepts and processes the deposited data of experimentally determined macromolecular structures. While maintaining the archive in collaboration with other wwPDB partners, PDBj also provides a wide range of services and tools for analyzing structures and functions of proteins, which are summarized in this article. To enhance the interoperability of the PDB data, we have recently developed PDB/RDF, PDB data in the Resource Description Framework (RDF) format, along with its ontology in the Web Ontology Language (OWL) based on the PDB mmCIF Exchange Dictionary. Being in the standard format for the Semantic Web, the PDB/RDF data provide a means to integrate the PDB with other biological information resources. PMID:21976737
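A minimal sketch of querying RDF-formatted structure metadata is shown below using rdflib; the namespace and predicate names are invented for illustration and are not the actual PDB/RDF (mmCIF-derived) ontology terms.

# Minimal sketch of a SPARQL query over a toy RDF graph of structure metadata.
# The ex: namespace and its predicates are placeholders, not PDB/RDF terms.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/pdb/")
g = Graph()
g.add((EX["1abc"], EX.resolution, Literal(1.8)))
g.add((EX["1abc"], EX.experimentalMethod, Literal("X-RAY DIFFRACTION")))
g.add((EX["2xyz"], EX.resolution, Literal(3.5)))

query = """
PREFIX ex: <http://example.org/pdb/>
SELECT ?entry ?res WHERE {
  ?entry ex:resolution ?res .
  FILTER(?res < 2.5)
}
"""
for entry, res in g.query(query):
    print(entry, res)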
Interior Design Education within a Human Ecological Framework
ERIC Educational Resources Information Center
Kaup, Migette L.; Anderson, Barbara G.; Honey, Peggy
2007-01-01
An education based in human ecology can greatly benefit interior designers as they work to understand and improve the human condition. Design programs housed in colleges focusing on human ecology can improve the interior design profession by taking advantage of their home base and emphasizing the human ecological framework in the design curricula.…
[Computer aided design for fixed partial denture framework based on reverse engineering technology].
Sun, Yu-chun; Lü, Pei-jun; Wang, Yong
2006-03-01
To explore a computer aided design (CAD) route for the framework of a domestic fixed partial denture (FPD) and to confirm a suitable method of 3-D CAD. The working area of a dentition model was scanned with a 3-D mechanical scanner. Using reverse engineering (RE) software, margin and border curves were extracted and several reference curves were created to determine the dimension and location of the pontic framework, which was taken from the standard database. The shoulder parts of the retainers were created after the axial surfaces were constructed. The connecting areas, axial line and curved surface of the framework connector were finally created. The framework of a three-unit FPD was designed with RE technology, showing smooth surfaces and continuous contours. The design route is practical. The results of this study are significant in theory and practice and will provide a reference for establishing a computer aided design/computer aided manufacture (CAD/CAM) system for domestic FPDs.
A Framework for Teaching Tactical Game Knowledge.
ERIC Educational Resources Information Center
Wilson, Gail E.
2002-01-01
Provides an example of a framework of generic knowledge, designed for teachers, that describes and explains the foundational tactical aspects of invasive team-game play. The framework consists of four modules: participants and their roles, objectives, action principles, and action options. Guidelines to help instructors design practical activities…
Protein fold recognition using geometric kernel data fusion.
Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves
2014-07-01
Various approaches based on features extracted from protein sequences, often combined with machine learning methods, have been used to predict protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features, including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼86.7%, an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. By using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
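The geometry-inspired combination referred to above can be illustrated with the matrix geometric mean of two symmetric positive-definite kernel matrices; the small kernels below are toy examples, and this sketch is not the authors' MATLAB implementation.

# Minimal sketch of the matrix geometric mean of two SPD kernel matrices,
# A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2), as an alternative to a
# convex linear combination. The 2x2 kernels are toy examples.
import numpy as np
from scipy.linalg import sqrtm

def geometric_mean(a, b):
    a_half = np.real(sqrtm(a))
    a_half_inv = np.linalg.inv(a_half)
    inner = np.real(sqrtm(a_half_inv @ b @ a_half_inv))
    return a_half @ inner @ a_half

k_seq = np.array([[1.0, 0.3], [0.3, 1.0]])     # e.g. a sequence-based kernel
k_str = np.array([[1.0, 0.6], [0.6, 1.0]])     # e.g. a structure-based kernel
print(geometric_mean(k_seq, k_str))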
NASA Technical Reports Server (NTRS)
McGowan, Anna-Maria Rivas; Papalambros, Panos Y.; Baker, Wayne E.
2015-01-01
This paper examines four primary methods of working across disciplines during R&D and early design of large-scale complex engineered systems such as aerospace systems. A conceptualized framework, called the Combining System Elements framework, is presented to delineate several aspects of cross-discipline and system integration practice. The framework is derived from a theoretical and empirical analysis of current work practices in actual operational settings and is informed by theories from organization science and engineering. The explanatory framework may be used by teams to clarify assumptions and associated work practices, which may reduce ambiguity in understanding diverse approaches to early systems research, development and design. The framework also highlights that very different engineering results may be obtained depending on work practices, even when the goals for the engineered system are the same.
NASA Astrophysics Data System (ADS)
Halbe, Johannes; Pahl-Wostl, Claudia; Adamowski, Jan
2018-01-01
Multiple barriers constrain the widespread application of participatory methods in water management, including the more technical focus of most water agencies, additional cost and time requirements for stakeholder involvement, as well as institutional structures that impede collaborative management. This paper presents a stepwise methodological framework that addresses the challenges of context-sensitive initiation, design and institutionalization of participatory modeling processes. The methodological framework consists of five successive stages: (1) problem framing and stakeholder analysis, (2) process design, (3) individual modeling, (4) group model building, and (5) institutionalized participatory modeling. The Management and Transition Framework is used for problem diagnosis (Stage One), context-sensitive process design (Stage Two) and analysis of requirements for the institutionalization of participatory water management (Stage Five). Conceptual modeling is used to initiate participatory modeling processes (Stage Three) and ensure a high compatibility with quantitative modeling approaches (Stage Four). This paper describes the proposed participatory model building (PMB) framework and provides a case study of its application in Québec, Canada. The results of the Québec study demonstrate the applicability of the PMB framework for initiating and designing participatory model building processes and analyzing barriers towards institutionalization.
ERIC Educational Resources Information Center
Cumming, Brett
2012-01-01
The three concepts of Approach, Design and Procedure, as proposed in Rodgers' framework, are considered a particularly effective framework for second language teaching, with the specific aim of developing communication, as well as for better understanding the methodology of communicative language use.
ERIC Educational Resources Information Center
Wang, Zhijun; Anderson, Terry; Chen, Li; Barbera, Elena
2017-01-01
Connectivist learning is interaction-centered learning. A framework describing interaction and cognitive engagement in connectivist learning was constructed using logical reasoning techniques. The framework and analysis were designed to help researchers and learning designers understand and adapt the characteristics and principles of interaction in…
A Graphics Design Framework to Visualize Multi-Dimensional Economic Datasets
ERIC Educational Resources Information Center
Chandramouli, Magesh; Narayanan, Badri; Bertoline, Gary R.
2013-01-01
This study implements a prototype graphics visualization framework to visualize multidimensional data. This graphics design framework serves as a "visual analytical database" for visualization and simulation of economic models. One of the primary goals of any kind of visualization is to extract useful information from colossal volumes of…
Adventure Learning and Learner-Engagement: Frameworks for Designers and Educators
ERIC Educational Resources Information Center
Henrickson, Jeni; Doering, Aaron
2013-01-01
There is a recognized need for theoretical frameworks that can guide designers and educators in the development of engagement-rich learning experiences that incorporate emerging technologies in pedagogically sound ways. This study investigated one such promising framework, adventure learning (AL). Data were gathered via surveys, interviews, direct…
A human-oriented framework for developing assistive service robots.
McGinn, Conor; Cullinan, Michael F; Culleton, Mark; Kelly, Kevin
2018-04-01
Multipurpose robots that can perform a range of useful tasks have the potential to increase the quality of life for many people living with disabilities. Owing to factors such as high system complexity, as-yet unresolved research questions and current technology limitations, there is a need for effective strategies to coordinate the development process. Integrating established methodologies based on human-centred design and universal design, a framework was formulated to coordinate the robot design process over successive iterations of prototype development. An account is given of how the framework was practically applied to the problem of developing a personal service robot. Application of the framework led to the formation of several design goals which addressed a wide range of identified user needs. The resultant prototype solution, which consisted of several component elements, succeeded in demonstrating the performance stipulated by all of the proposed metrics. Application of the framework resulted in the development of a complex prototype that addressed many aspects of the functional and usability requirements of a personal service robot. Following the process led to several important insights which directly benefit the development of subsequent prototypes. Implications for Rehabilitation: This research shows how universal design might be used to formulate usability requirements for assistive service robots. A framework is presented that guides the process of designing service robots in a human-centred way. Through practical application of the framework, a prototype robot system that addressed a range of identified user needs was developed.
Applying a Conceptual Design Framework to Study Teachers' Use of Educational Technology
ERIC Educational Resources Information Center
Holmberg, Jörgen
2017-01-01
Theoretical outcomes of design-based research (DBR) are often presented in the form of local theory design principles. This article suggests a complementary theoretical construction in DBR, in the form of a "design framework" at a higher abstract level, to study and inform educational design with ICT in different situated contexts.…
NASA Astrophysics Data System (ADS)
El-Wardany, Tahany; Lynch, Mathew; Gu, Wenjiong; Hsu, Arthur; Klecka, Michael; Nardi, Aaron; Viens, Daniel
This paper proposes an optimization framework enabling the integration of multi-scale / multi-physics simulation codes to perform structural optimization design for additively manufactured components. Cold spray was selected as the additive manufacturing (AM) process and its constraints were identified and included in the optimization scheme. The developed framework first utilizes topology optimization to maximize stiffness for conceptual design. The subsequent step applies shape optimization to refine the design for stress-life fatigue. The component weight was reduced by 20% while stresses were reduced by 75% and the rigidity was improved by 37%. The framework and analysis codes were implemented using Altair software as well as an in-house loading code. The optimized design was subsequently produced by the cold spray process.
From pull-down data to protein interaction networks and complexes with biological relevance.
Zhang, Bing; Park, Byung-Hoon; Karpinets, Tatiana; Samatova, Nagiza F
2008-04-01
Recent improvements in high-throughput Mass Spectrometry (MS) technology have expedited genome-wide discovery of protein-protein interactions by providing the capability to detect protein complexes in a physiological setting. Computational inference of protein interaction networks and protein complexes from MS data is challenging. Advances are required in developing robust and seamlessly integrated procedures for assessing protein-protein interaction affinities, mathematically representing protein interaction networks, discovering protein complexes and evaluating their biological relevance. A multi-step but easy-to-follow framework for identifying protein complexes from MS pull-down data is introduced. It assesses interaction affinity between two proteins based on the similarity of their co-purification patterns derived from MS data. It constructs a protein interaction network by adopting a knowledge-guided threshold selection method. Based on the network, it identifies protein complexes and infers their core components using a graph-theoretical approach. It deploys a statistical evaluation procedure to assess the biological relevance of each found complex. On Saccharomyces cerevisiae pull-down data, the framework outperformed other, more complicated schemes by at least 10% in F1-measure and identified 610 protein complexes with high functional homogeneity based on enrichment in Gene Ontology (GO) annotation. Manual examination of the complexes brought forward hypotheses on the causes of false identifications. Namely, co-purification of different protein complexes mediated by a common non-protein molecule, such as DNA, might be a source of false positives. Protein identification bias in pull-down technology, such as the hydrophilic bias, could result in false negatives.
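The overall pipeline can be sketched in a few lines: score pairwise co-purification similarity, keep pairs above a threshold as network edges, and read candidate complexes off the graph; the pull-down matrix, the Jaccard similarity, and the fixed threshold are illustrative stand-ins for the paper's affinity measure and knowledge-guided threshold selection.

# Minimal sketch: co-purification similarity -> thresholded PPI graph ->
# candidate complexes as connected components. All data are placeholders.
from itertools import combinations
import networkx as nx

purifications = {                     # which pull-down experiments detected each protein
    "P1": {"exp1", "exp2", "exp3"},
    "P2": {"exp1", "exp2"},
    "P3": {"exp1", "exp2", "exp4"},
    "P4": {"exp5"},
    "P5": {"exp5", "exp6"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

graph = nx.Graph()
graph.add_nodes_from(purifications)
for p, q in combinations(purifications, 2):
    if jaccard(purifications[p], purifications[q]) >= 0.5:
        graph.add_edge(p, q)

complexes = [c for c in nx.connected_components(graph) if len(c) > 1]
print(complexes)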
Nutrient compensatory foraging in a free-living social insect
NASA Astrophysics Data System (ADS)
Christensen, Keri L.; Gallacher, Anthony P.; Martin, Lizzie; Tong, Desmond; Elgar, Mark A.
2010-10-01
The geometric framework model predicts that animal foraging decisions are influenced by dietary history, with animals targeting a combination of essential nutrients through compensatory foraging. We provide experimental confirmation of nutrient-specific compensatory foraging in a natural, free-living population of social insects by supplementing their diet with sources of protein- or carbohydrate-rich food. Colonies of the ant Iridomyrmex suchieri were provided with feeders containing food rich in either carbohydrate or protein for 6 days, and were then provided with a feeder containing the same or a different diet. The patterns of recruitment were consistent with the geometric framework: while feeders with a carbohydrate diet typically attracted more workers than did feeders with a protein diet, the difference in recruitment between the two nutrients was smaller if the colonies had had prior access to carbohydrate than to protein. Further, fewer ants visited feeders if the colony had had prior access to protein than to carbohydrates, suggesting that the larvae play a role in worker foraging behaviour.
Protein Polymerization into Fibrils from the Viewpoint of Nucleation Theory.
Kashchiev, Dimo
2015-11-17
The assembly of various proteins into fibrillar aggregates is an important phenomenon with wide implications ranging from human disease to nanoscience. Using general kinetic results of nucleation theory, we analyze the polymerization of protein into linear or helical fibrils in the framework of the Oosawa-Kasai (OK) model. We show that while within the original OK model of linear polymerization the process does not involve nucleation, within a modified OK model it is nucleation-mediated. Expressions are derived for the size of the fibril nucleus, the work for fibril formation, the nucleation barrier, the equilibrium and stationary fibril size distributions, and the stationary fibril nucleation rate. Under otherwise equal conditions, this rate decreases considerably when the short (subnucleus) fibrils lose monomers much more frequently than the long (supernucleus) fibrils, a feature that should be borne in mind when designing a strategy for stymying or stimulating fibril nucleation. The obtained dependence of the nucleation rate on the concentration of monomeric protein is convenient for experimental verification and for use in rate equations accounting for nucleation-mediated fibril formation. The analysis and the results obtained for linear fibrils are fully applicable to helical fibrils whose formation is describable by a simplified OK model. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
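For orientation, the generic classical-nucleation-theory form of the quantities mentioned above can be written as follows; these are the standard textbook expressions, not the specific OK-model results derived in the paper.

\[ J = z f^{*} C_{1} \exp\!\left(-\frac{W^{*}}{k_{\mathrm{B}} T}\right), \qquad W^{*} = W(n^{*}) \ \text{with} \ \left.\frac{\partial W}{\partial n}\right|_{n = n^{*}} = 0, \]
where $C_{1}$ is the monomer concentration, $f^{*}$ the frequency of monomer attachment to the nucleus, $z$ the Zeldovich factor, $n^{*}$ the nucleus size, and $W(n)$ the work to form an $n$-mer fibril.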
NASA Astrophysics Data System (ADS)
Tarumi, Shinya; Kozaki, Kouji; Kitamura, Yoshinobu; Mizoguchi, Riichiro
In recent materials research, much work aims at the realization of "functional materials" by changing structure and/or manufacturing process with nanotechnology. However, knowledge about the relationships among function, structure and manufacturing process is not well organized, so material designers have to consider many things at the same time. It would be very helpful to support their design process with a computer system. In this article, we discuss a conceptual design support system for nano-materials. Firstly, we consider a framework for representing functional structures and manufacturing processes of nano-materials together with the relationships among them. We expand our former framework for representing functional knowledge based on our investigation through discussion with experts on nano-materials. The extended framework has two features: 1) it represents functional structures and manufacturing processes comprehensively, and 2) it expresses parameters of functions and ways together with their dependencies, because these are important for material design. Next, we describe a conceptual design support system we developed based on the framework, together with its functionalities. Lastly, we evaluate the utility of our system in terms of its functionality for design support. For this purpose, we represented two real examples of material design and then carried out an evaluation experiment on conceptual material design using our system in collaboration with domain experts.
A Unified Framework for Analyzing and Designing for Stationary Arterial Networks
DOT National Transportation Integrated Search
2017-05-17
This research aims to develop a unified theoretical and simulation framework for analyzing and designing signals for stationary arterial networks. Existing traffic flow models used in design and analysis of signal control strategies are either too si...
Lewis, Ioni; Watson, Barry; White, Katherine M
2016-12-01
This paper provides an important and timely overview of a conceptual framework designed to assist with the development of message content, as well as the evaluation, of persuasive health messages. While an earlier version of this framework was presented in a prior publication by the authors in 2009, important refinements to the framework have seen it evolve in recent years, warranting the need for an updated review. This paper outlines the Step approach to Message Design and Testing (or SatMDT) in accordance with the theoretical evidence which underpins, as well as empirical evidence which demonstrates the relevance and feasibility of, each of the framework's steps. The development and testing of the framework have thus far been based exclusively within the road safety advertising context; however, the view expressed herein is that the framework may have broader appeal and application to the health persuasion context. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, Wes; Sanders, Les
1991-01-01
The design of the Framework Processor (FP) component of the Framework Programmable Software Development Platform (FFP) is described. The FFP is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by the model, this Framework Processor will take advantage of an integrated operating environment to provide automated support for the management and control of the software development process so that costly mistakes during the development phase can be eliminated.
How does symmetry impact the flexibility of proteins?
Schulze, Bernd; Sljoka, Adnan; Whiteley, Walter
2014-02-13
It is well known that (i) the flexibility and rigidity of proteins are central to their function, (ii) a number of oligomers with several copies of individual protein chains assemble with symmetry in the native state and (iii) added symmetry sometimes leads to added flexibility in structures. We observe that the most common symmetry classes of protein oligomers are also the symmetry classes that lead to increased flexibility in certain three-dimensional structures, and we investigate the possible significance of this coincidence. This builds on the well-developed theory of generic rigidity of body-bar frameworks, which permits an analysis of the rigidity and flexibility of molecular structures such as proteins via fast combinatorial algorithms. In particular, we outline some very simple counting rules and possible algorithmic extensions that allow us to predict continuous symmetry-preserving motions in body-bar frameworks that possess non-trivial point-group symmetry. For simplicity, we focus on dimers, which typically assemble with twofold rotational axes, and often have allosteric function that requires motions to link distant sites on the two protein chains.
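The flavour of the counting rules mentioned above can be conveyed by the naive Maxwell-type count for body-bar frameworks, sketched below; it ignores redundant bars and the symmetry-refined rules developed in the paper, and the numbers are placeholders.

# Minimal sketch of the naive body-bar count: 6 degrees of freedom per rigid
# body, 6 trivial rigid-body motions overall, and one removed per independent bar.
def bodybar_internal_dof(num_bodies, num_bars):
    return 6 * num_bodies - 6 - num_bars   # can go negative when bars are redundant

# A dimer modeled as two rigid bodies linked by a handful of interface bars:
print(bodybar_internal_dof(num_bodies=2, num_bars=4))   # -> 2 internal motions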
How does symmetry impact the flexibility of proteins?
Schulze, Bernd; Sljoka, Adnan; Whiteley, Walter
2014-01-01
It is well known that (i) the flexibility and rigidity of proteins are central to their function, (ii) a number of oligomers with several copies of individual protein chains assemble with symmetry in the native state and (iii) added symmetry sometimes leads to added flexibility in structures. We observe that the most common symmetry classes of protein oligomers are also the symmetry classes that lead to increased flexibility in certain three-dimensional structures—and investigate the possible significance of this coincidence. This builds on the well-developed theory of generic rigidity of body–bar frameworks, which permits an analysis of the rigidity and flexibility of molecular structures such as proteins via fast combinatorial algorithms. In particular, we outline some very simple counting rules and possible algorithmic extensions that allow us to predict continuous symmetry-preserving motions in body–bar frameworks that possess non-trivial point-group symmetry. For simplicity, we focus on dimers, which typically assemble with twofold rotational axes, and often have allosteric function that requires motions to link distant sites on the two protein chains. PMID:24379431
Design of 240,000 orthogonal 25mer DNA barcode probes
Xu, Qikai; Schlabach, Michael R.; Hannon, Gregory J.; Elledge, Stephen J.
2009-01-01
DNA barcodes linked to genetic features greatly facilitate screening these features in pooled formats using microarray hybridization, and new tools are needed to design large sets of barcodes to allow construction of large barcoded mammalian libraries such as shRNA libraries. Here we report a framework for designing large sets of orthogonal barcode probes. We demonstrate the utility of this framework by designing 240,000 barcode probes and testing their performance by hybridization. From the test hybridizations, we also discovered new probe design rules that significantly reduce cross-hybridization after their introduction into the framework of the algorithm. These rules should improve the performance of DNA microarray probe designs for many applications. PMID:19171886
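The abstract above does not spell out its probe design rules, so the following is only a hedged sketch of the general idea of filtering a large candidate pool of 25-mers for orthogonality, using two generic criteria (a GC-content window and a shared-k-mer proxy for cross-hybridization). The thresholds and the greedy acceptance strategy are assumptions, not the published algorithm.

```python
# Illustrative only (not the published design rules): greedily accept 25-mer candidates
# that fall in a GC-content window and share no long exact k-mer with any accepted
# probe, a crude proxy for low cross-hybridization.
import random

BASES = "ACGT"

def gc_fraction(seq: str) -> float:
    return (seq.count("G") + seq.count("C")) / len(seq)

def kmers(seq: str, k: int) -> set:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def design_probes(n_candidates=5000, n_wanted=100, length=25, k=12, seed=0):
    rng = random.Random(seed)
    accepted, used_kmers = [], set()
    for _ in range(n_candidates):
        cand = "".join(rng.choice(BASES) for _ in range(length))
        if not (0.4 <= gc_fraction(cand) <= 0.6):
            continue
        cand_kmers = kmers(cand, k)
        if cand_kmers & used_kmers:          # shares a 12-mer with an accepted probe
            continue
        accepted.append(cand)
        used_kmers |= cand_kmers
        if len(accepted) == n_wanted:
            break
    return accepted

if __name__ == "__main__":
    probes = design_probes()
    print(f"designed {len(probes)} candidate probes")
```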
Mining protein database using machine learning techniques.
Camargo, Renata da Silva; Niranjan, Mahesan
2008-08-25
With a large amount of information relating to proteins accumulating in databases widely available online, it is of interest to apply machine learning techniques that, by extracting underlying statistical regularities in the data, make predictions about the functional and evolutionary characteristics of unseen proteins. Such predictions can help in achieving a reduction in the space over which experiment designers need to search in order to improve our understanding of the biochemical properties. Previously it has been suggested that an integration of features computable by comparing a pair of proteins can be achieved by an artificial neural network, hence predicting the degree to which they may be evolutionary related and homologous.
We compiled two datasets of pairs of proteins, each pair characterised by seven distinct features. We performed an exhaustive search through all possible combinations of features for the problem of separating remote homologous from analogous pairs, and we note that a significant performance gain was obtained by the inclusion of sequence and structure information. We find that the use of a linear classifier was enough to discriminate a protein pair at the family level. At the superfamily level, however, detecting remote homologous pairs was a relatively harder problem, and we find that the use of nonlinear classifiers achieves significantly higher accuracies.
In this paper, we compare three different pattern classification methods on two problems formulated as detecting evolutionary and functional relationships between pairs of proteins, and, from extensive cross-validation and feature-selection studies, quantify the average limits and uncertainties with which such predictions may be made. Feature selection points to a "knowledge gap" in currently available functional annotations. We demonstrate how the scheme may be employed in a framework to associate an individual protein with an existing family of evolutionarily related proteins.
Antibody mimetics: promising complementary agents to animal-sourced antibodies.
Baloch, Abdul Rasheed; Baloch, Abdul Wahid; Sutton, Brian J; Zhang, Xiaoying
2016-01-01
Despite their wide use as therapeutic, diagnostic and detection agents, the limitations of polyclonal and monoclonal antibodies have inspired scientists to design the next generation biomedical agents, so-called antibody mimetics that offer many advantages over conventional antibodies. Antibody mimetics can be constructed by protein-directed evolution or fusion of complementarity-determining regions through intervening framework regions. Substantial progress in exploiting human, butterfly (Pieris brassicae) and bacterial systems to design and select mimetics using display technologies has been made in the past 10 years, and one of these mimetics [Kalbitor® (Dyax)] has made its way to market. Many challenges lie ahead to develop mimetics for various biomedical applications, especially those for which conventional antibodies are ineffective, and this review describes the current characteristics, construction and applications of antibody mimetics compared to animal-sourced antibodies. The possible limitations of mimetics and future perspectives are also discussed.
Apgar, James R; Mader, Michelle; Agostinelli, Rita; Benard, Susan; Bialek, Peter; Johnson, Mark; Gao, Yijie; Krebs, Mark; Owens, Jane; Parris, Kevin; St Andre, Michael; Svenson, Kris; Morris, Carl; Tchistiakova, Lioudmila
2016-10-01
Antibodies are an important class of biotherapeutics that offer specificity to their antigen, long half-life, effector function interaction and good manufacturability. The immunogenicity of non-human-derived antibodies, which can be a major limitation to development, has been partially overcome by humanization through complementarity-determining region (CDR) grafting onto human acceptor frameworks. The retention of foreign content in the CDR regions, however, is still a potential immunogenic liability. Here, we describe the humanization of an anti-myostatin antibody utilizing a 2-step process of traditional CDR-grafting onto a human acceptor framework, followed by a structure-guided approach to further reduce the murine content of CDR-grafted antibodies. To accomplish this, we solved the co-crystal structures of myostatin with the chimeric (Protein Databank (PDB) id 5F3B) and CDR-grafted anti-myostatin antibody (PDB id 5F3H), allowing us to computationally predict the structurally important CDR residues as well as those making significant contacts with the antigen. Structure-based rational design enabled further germlining of the CDR-grafted antibody, reducing the murine content of the antibody without affecting antigen binding. The overall "humanness" was increased for both the light and heavy chain variable regions.
Kinjo, Akira R.; Bekker, Gert-Jan; Suzuki, Hirofumi; Tsuchiya, Yuko; Kawabata, Takeshi; Ikegawa, Yasuyo; Nakamura, Haruki
2017-01-01
The Protein Data Bank Japan (PDBj, http://pdbj.org), a member of the worldwide Protein Data Bank (wwPDB), accepts and processes the deposited data of experimentally determined macromolecular structures. While maintaining the archive in collaboration with other wwPDB partners, PDBj also provides a wide range of services and tools for analyzing structures and functions of proteins. We herein outline the updated web user interfaces together with RESTful web services and the backend relational database that support the former. To enhance the interoperability of the PDB data, we have previously developed PDB/RDF, PDB data in the Resource Description Framework (RDF) format, which is now a wwPDB standard called wwPDB/RDF. We have enhanced the connectivity of the wwPDB/RDF data by incorporating various external data resources. Services for searching, comparing and analyzing the ever-increasing large structures determined by hybrid methods are also described. PMID:27789697
QoS Composition and Decomposition Model in Uniframe
2003-08-01
Excerpt (table-of-contents and snippet residue removed): the report discusses the analysis of non-functional requirements at the early design phase, including the Parmenides Framework, an architecture-based framework proposed in [22] for this purpose.
Towards a Theory-Based Design Framework for an Effective E-Learning Computer Programming Course
ERIC Educational Resources Information Center
McGowan, Ian S.
2016-01-01
Built on Dabbagh (2005), this paper presents a four component theory-based design framework for an e-learning session in introductory computer programming. The framework, driven by a body of exemplars component, emphasizes the transformative interaction between the knowledge building community (KBC) pedagogical model, a mixed instructional…
Assessing Higher-Order Cognitive Constructs by Using an Information-Processing Framework
ERIC Educational Resources Information Center
Dickison, Philip; Luo, Xiao; Kim, Doyoung; Woo, Ada; Muntean, William; Bergstrom, Betty
2016-01-01
Designing a theory-based assessment with sound psychometric qualities to measure a higher-order cognitive construct is a highly desired yet challenging task for many practitioners. This paper proposes a framework for designing a theory-based assessment to measure a higher-order cognitive construct. This framework results in a modularized yet…
Evidence-Based Leadership Development: The 4L Framework
ERIC Educational Resources Information Center
Scott, Shelleyann; Webber, Charles F.
2008-01-01
Purpose: This paper aims to use the results of three research initiatives to present the life-long learning leader 4L framework, a model for leadership development intended for use by designers and providers of leadership development programming. Design/methodology/approach: The 4L model is a conceptual framework that emerged from the analysis of…
ERIC Educational Resources Information Center
Milne, Louise; Eames, Chris
2011-01-01
This paper describes teacher responses to a framework designed to support teacher planning for technology. It includes a learning experience outside the classroom [LEOTC] and is designed specifically for five-year-old students. The planning framework draws together characteristics of technology education, junior primary classrooms and LEOTC to…
ERIC Educational Resources Information Center
Stolk, Machiel Johan; Bulte, Astrid; De Jong, Onno; Pilot, Albert
2012-01-01
Even experienced chemistry teachers require professional development when they are encouraged to become actively engaged in the design of new context-based education. This study briefly describes the development of a framework consisting of goals, learning phases, strategies and instructional functions, and how the framework was translated into a…
Model-theoretic framework for sensor data fusion
NASA Astrophysics Data System (ADS)
Zavoleas, Kyriakos P.; Kokar, Mieczyslaw M.
1993-09-01
The main goal of our research in sensory data fusion (SDF) is the development of a systematic approach (a methodology) to designing systems for interpreting sensory information and for reasoning about the situation based upon this information and upon available data bases and knowledge bases. To achieve such a goal, two kinds of subgoals have been set: (1) develop a theoretical framework in which rational design/implementation decisions can be made, and (2) design a prototype SDF system along the lines of the framework. Our initial design of the framework has been described in our previous papers. In this paper we concentrate on the model-theoretic aspects of this framework. We postulate that data are embedded in data models, and information processing mechanisms are embedded in model operators. The paper is devoted to analyzing the classes of model operators and their significance in SDF. We investigate transformation, abstraction and fusion operators. A prototype SDF system, fusing data from range and intensity sensors, is presented, exemplifying the structures introduced. Our framework is justified by the fact that it provides modularity, traceability of information flow, and a basis for a specification language for SDF.
2012-01-01
Background Despite computational challenges, elucidating conformations that a protein system assumes under physiologic conditions for the purpose of biological activity is a central problem in computational structural biology. While these conformations are associated with low energies in the energy surface that underlies the protein conformational space, few existing conformational search algorithms focus on explicitly sampling low-energy local minima in the protein energy surface. Methods This work proposes a novel probabilistic search framework, PLOW, that explicitly samples low-energy local minima in the protein energy surface. The framework combines algorithmic ingredients from evolutionary computation and computational structural biology to effectively explore the subspace of local minima. A greedy local search maps a conformation sampled in conformational space to a nearby local minimum. A perturbation move jumps out of a local minimum to obtain a new starting conformation for the greedy local search. The process repeats in an iterative fashion, resulting in a trajectory-based exploration of the subspace of local minima. Results and conclusions The analysis of PLOW's performance shows that, by navigating only the subspace of local minima, PLOW is able to sample conformations near a protein's native structure, either more effectively or as well as state-of-the-art methods that focus on reproducing the native structure for a protein system. Analysis of the actual subspace of local minima shows that PLOW samples this subspace more effectively than a naive sampling approach. Additional theoretical analysis reveals that the perturbation function employed by PLOW is key to its ability to sample a diverse set of low-energy conformations. This analysis also suggests directions for further research and novel applications for the proposed framework. PMID:22759582
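As a hedged illustration of the iterate described above (greedy local search to a nearby minimum, then a perturbation jump to escape it), the sketch below runs the same loop on a toy two-dimensional energy surface rather than a protein energy surface; the step sizes, perturbation scale, and function names are illustrative assumptions, not PLOW's actual implementation.

```python
# Toy version of the "greedy local search + perturbation" loop described above,
# applied to a rugged 2-D test energy instead of a protein energy surface.
import numpy as np

def energy(x):
    # Rastrigin-like surface: many local minima, global minimum at the origin.
    return np.sum(x**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x)))

def greedy_local_search(x, step=0.05, max_iter=500):
    """Simple pattern search: accept axis-aligned moves that lower the energy."""
    x = x.copy()
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                if energy(trial) < energy(x):
                    x, improved = trial, True
        if not improved:
            break
    return x

def plow_like_search(n_rounds=50, perturb_scale=1.5, seed=1):
    rng = np.random.default_rng(seed)
    best = greedy_local_search(rng.uniform(-5, 5, size=2))
    for _ in range(n_rounds):
        start = best + rng.normal(scale=perturb_scale, size=2)  # perturbation jump
        candidate = greedy_local_search(start)                   # map to a local minimum
        if energy(candidate) < energy(best):
            best = candidate
    return best, energy(best)

if __name__ == "__main__":
    x_best, e_best = plow_like_search()
    print("best minimum found:", np.round(x_best, 3), "energy:", round(e_best, 3))
```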
Glusman, Gustavo; Rose, Peter W; Prlić, Andreas; Dougherty, Jennifer; Duarte, José M; Hoffman, Andrew S; Barton, Geoffrey J; Bendixen, Emøke; Bergquist, Timothy; Bock, Christian; Brunk, Elizabeth; Buljan, Marija; Burley, Stephen K; Cai, Binghuang; Carter, Hannah; Gao, JianJiong; Godzik, Adam; Heuer, Michael; Hicks, Michael; Hrabe, Thomas; Karchin, Rachel; Leman, Julia Koehler; Lane, Lydie; Masica, David L; Mooney, Sean D; Moult, John; Omenn, Gilbert S; Pearl, Frances; Pejaver, Vikas; Reynolds, Sheila M; Rokem, Ariel; Schwede, Torsten; Song, Sicheng; Tilgner, Hagen; Valasatava, Yana; Zhang, Yang; Deutsch, Eric W
2017-12-18
The translation of personal genomics to precision medicine depends on the accurate interpretation of the multitude of genetic variants observed for each individual. However, even when genetic variants are predicted to modify a protein, their functional implications may be unclear. Many diseases are caused by genetic variants affecting important protein features, such as enzyme active sites or interaction interfaces. The scientific community has catalogued millions of genetic variants in genomic databases and thousands of protein structures in the Protein Data Bank. Mapping mutations onto three-dimensional (3D) structures enables atomic-level analyses of protein positions that may be important for the stability or formation of interactions; these may explain the effect of mutations and in some cases even open a path for targeted drug development. To accelerate progress in the integration of these data types, we held a two-day Gene Variation to 3D (GVto3D) workshop to report on the latest advances and to discuss unmet needs. The overarching goal of the workshop was to address the question: what can be done together as a community to advance the integration of genetic variants and 3D protein structures that could not be done by a single investigator or laboratory? Here we describe the workshop outcomes, review the state of the field, and propose the development of a framework with which to promote progress in this arena. The framework will include a set of standard formats, common ontologies, a common application programming interface to enable interoperation of the resources, and a Tool Registry to make it easy to find and apply the tools to specific analysis problems. Interoperability will enable integration of diverse data sources and tools and collaborative development of variant effect prediction methods.
78 FR 9633 - Policy Statement on the Scenario Design Framework for Stress Testing
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-11
... Statement on the Scenario Design Framework for Stress Testing AGENCY: Board of Governors of the Federal... design for stress testing that would be used in connection with the supervisory and company-run stress...) requesting public comment on a policy statement on the approach to scenario design for stress testing that...
ERIC Educational Resources Information Center
Lee, Sung Heum; Boling, Elizabeth
1999-01-01
Identifies guidelines from the literature relating to screen design and design of interactive instructional materials. Describes two types of guidelines--those aimed at enhancing motivation and those aimed at preventing loss of motivation--for typography, graphics, color, and animation and audio. Proposes a framework for considering motivation in…
Towards a Framework for Evolvable Network Design
NASA Astrophysics Data System (ADS)
Hassan, Hoda; Eltarras, Ramy; Eltoweissy, Mohamed
The layered Internet architecture that had long guided network design and protocol engineering was an "interconnection architecture" defining a framework for interconnecting networks rather than a model for generic network structuring and engineering. We claim that abstracting the network in terms of an internetwork hinders a thorough understanding of the network's salient characteristics and emergent behavior, impeding the design evolution required to address extreme scale, heterogeneity, and complexity. This paper reports on our work in progress, which aims to: 1) investigate the problem space in terms of the factors and decisions that influenced the design and development of computer networks; 2) sketch the core principles for designing complex computer networks; and 3) propose a model and related framework for building evolvable, adaptable and self-organizing networks. We adopt a bottom-up strategy primarily focusing on the building unit of the network model, which we call the "network cell". The model is inspired by natural complex systems. A network cell is intrinsically capable of specialization, adaptation and evolution. Subsequently, we propose CellNet, a framework for evolvable network design. We outline scenarios for using the CellNet framework to enhance the legacy Internet protocol stack.
Development and evaluation of task-specific NLP framework in China.
Ge, Caixia; Zhang, Yinsheng; Huang, Zhenzhen; Jia, Zheng; Ju, Meizhi; Duan, Huilong; Li, Haomin
2015-01-01
Natural language processing (NLP) has been designed to convert narrative text into structured data. Although some general NLP architectures have been developed, a task-specific NLP framework to facilitate the effective use of data is still a challenge in lexical resource limited regions, such as China. The purpose of this study is to design and develop a task-specific NLP framework to extract targeted information from particular documents by adopting dedicated algorithms on current limited lexical resources. In this framework, a shared and evolving ontology mechanism was designed. The result has shown that such a free text driven platform will accelerate the NLP technology acceptance in China.
Recent advances in automated protein design and its future challenges.
Setiawan, Dani; Brender, Jeffrey; Zhang, Yang
2018-04-25
Protein function is determined by protein structure which is in turn determined by the corresponding protein sequence. If the rules that cause a protein to adopt a particular structure are understood, it should be possible to refine or even redefine the function of a protein by working backwards from the desired structure to the sequence. Automated protein design attempts to calculate the effects of mutations computationally with the goal of more radical or complex transformations than are accessible by experimental techniques. Areas covered: The authors give a brief overview of the recent methodological advances in computer-aided protein design, showing how methodological choices affect final design and how automated protein design can be used to address problems considered beyond traditional protein engineering, including the creation of novel protein scaffolds for drug development. Also, the authors address specifically the future challenges in the development of automated protein design. Expert opinion: Automated protein design holds potential as a protein engineering technique, particularly in cases where screening by combinatorial mutagenesis is problematic. Considering solubility and immunogenicity issues, automated protein design is initially more likely to make an impact as a research tool for exploring basic biology in drug discovery than in the design of protein biologics.
Hanson-Smith, Victor; Johnson, Alexander
2016-01-01
The method of phylogenetic ancestral sequence reconstruction is a powerful approach for studying evolutionary relationships among protein sequence, structure, and function. In particular, this approach allows investigators to (1) reconstruct and “resurrect” (that is, synthesize in vivo or in vitro) extinct proteins to study how they differ from modern proteins, (2) identify key amino acid changes that, over evolutionary timescales, have altered the function of the protein, and (3) order historical events in the evolution of protein function. Widespread use of this approach has been slow among molecular biologists, in part because the methods require significant computational expertise. Here we present PhyloBot, a web-based software tool that makes ancestral sequence reconstruction easy. Designed for non-experts, it integrates all the necessary software into a single user interface. Additionally, PhyloBot provides interactive tools to explore evolutionary trajectories between ancestors, enabling the rapid generation of hypotheses that can be tested using genetic or biochemical approaches. Early versions of this software were used in previous studies to discover genetic mechanisms underlying the functions of diverse protein families, including V-ATPase ion pumps, DNA-binding transcription regulators, and serine/threonine protein kinases. PhyloBot runs in a web browser, and is available at the following URL: http://www.phylobot.com. The software is implemented in Python using the Django web framework, and runs on elastic cloud computing resources from Amazon Web Services. Users can create and submit jobs on our free server (at the URL listed above), or use our open-source code to launch their own PhyloBot server. PMID:27472806
EBprot: Statistical analysis of labeling-based quantitative proteomics data.
Koh, Hiromi W L; Swa, Hannah L F; Fermin, Damian; Ler, Siok Ghee; Gunaratne, Jayantha; Choi, Hyungwon
2015-08-01
Labeling-based proteomics is a powerful method for detection of differentially expressed proteins (DEPs). The current data analysis platform typically relies on protein-level ratios, which are obtained by summarizing peptide-level ratios for each protein. In shotgun proteomics, however, some proteins are quantified with more peptides than others, and this reproducibility information is not incorporated into the differential expression (DE) analysis. Here, we propose a novel probabilistic framework EBprot that directly models the peptide-protein hierarchy and rewards the proteins with reproducible evidence of DE over multiple peptides. To evaluate its performance with known DE states, we conducted a simulation study to show that the peptide-level analysis of EBprot provides better receiver-operating characteristic and more accurate estimation of the false discovery rates than the methods based on protein-level ratios. We also demonstrate superior classification performance of peptide-level EBprot analysis in a spike-in dataset. To illustrate the wide applicability of EBprot in different experimental designs, we applied EBprot to a dataset for lung cancer subtype analysis with biological replicates and another dataset for time course phosphoproteome analysis of EGF-stimulated HeLa cells with multiplexed labeling. Through these examples, we show that the peptide-level analysis of EBprot is a robust alternative to the existing statistical methods for the DE analysis of labeling-based quantitative datasets. The software suite is freely available on the Sourceforge website http://ebprot.sourceforge.net/. All MS data have been deposited in the ProteomeXchange with identifier PXD001426 (http://proteomecentral.proteomexchange.org/dataset/PXD001426/).
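To make the contrast drawn above concrete in a much-simplified form, the sketch below compares a protein-level ratio summary with a peptide-level score that rewards reproducibility across peptides. It uses a plain one-sample t-statistic, not EBprot's empirical-Bayes model, and the example ratios are invented.

```python
# Simplified contrast (not EBprot's actual model): for each protein, compare a
# protein-level summary of peptide log-ratios with a peptide-level score that also
# rewards reproducibility across peptides (a crude one-sample t-statistic here).
import math

def protein_level_ratio(peptide_log_ratios):
    return sum(peptide_log_ratios) / len(peptide_log_ratios)

def peptide_level_score(peptide_log_ratios):
    n = len(peptide_log_ratios)
    mean = sum(peptide_log_ratios) / n
    if n < 2:
        return 0.0                      # no reproducibility evidence from one peptide
    var = sum((x - mean) ** 2 for x in peptide_log_ratios) / (n - 1)
    return mean / math.sqrt(var / n + 1e-9)

if __name__ == "__main__":
    # Two hypothetical proteins with the same mean ratio but different consistency:
    consistent = [0.9, 1.1, 1.0, 0.95]     # reproducible shift across peptides
    noisy = [3.95, 0.0, 0.0, 0.0]          # same mean, driven by one peptide
    for name, ratios in [("consistent", consistent), ("noisy", noisy)]:
        print(name, round(protein_level_ratio(ratios), 2),
              round(peptide_level_score(ratios), 2))
```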
Jeong, Hyundoo; Qian, Xiaoning; Yoon, Byung-Jun
2016-10-06
Comparative analysis of protein-protein interaction (PPI) networks provides an effective means of detecting conserved functional network modules across different species. Such modules typically consist of orthologous proteins with conserved interactions, which can be exploited to computationally predict the modules through network comparison. In this work, we propose a novel probabilistic framework for comparing PPI networks and effectively predicting the correspondence between proteins, represented as network nodes, that belong to conserved functional modules across the given PPI networks. The basic idea is to estimate the steady-state network flow between nodes that belong to different PPI networks based on a Markov random walk model. The random walker is designed to make random moves to adjacent nodes within a PPI network as well as cross-network moves between potential orthologous nodes with high sequence similarity. Based on this Markov random walk model, we estimate the steady-state network flow - or the long-term relative frequency of the transitions that the random walker makes - between nodes in different PPI networks, which can be used as a probabilistic score measuring their potential correspondence. Subsequently, the estimated scores can be used for detecting orthologous proteins in conserved functional modules through network alignment. Through evaluations based on multiple real PPI networks, we demonstrate that the proposed scheme leads to improved alignment results that are biologically more meaningful at reduced computational cost, outperforming the current state-of-the-art algorithms. The source code and datasets can be downloaded from http://www.ece.tamu.edu/~bjyoon/CUFID .
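A hedged toy version of the idea described above (not the published CUFID code): combine within-network moves with similarity-weighted cross-network jumps, estimate the stationary distribution by power iteration on a lazy walk, and score candidate ortholog pairs by the steady-state flux on their cross edges. The two small networks and the similarity weights are invented.

```python
# Toy cross-network random walk: two tiny hypothetical PPI networks joined by
# similarity-weighted cross edges; the steady-state flow on each cross edge serves
# as a correspondence score for the node pair.
import numpy as np

def stationary(P, tol=1e-12, max_iter=10000):
    # A lazy walk (mix with staying put) is aperiodic, so power iteration converges
    # and the stationary distribution is unchanged.
    P_lazy = 0.5 * np.eye(P.shape[0]) + 0.5 * P
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        new = pi @ P_lazy
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new
    return pi

if __name__ == "__main__":
    # Nodes 0-2 belong to network A, nodes 3-5 to network B (hypothetical graphs).
    W = np.zeros((6, 6))
    A_edges = [(0, 1), (1, 2)]
    B_edges = [(3, 4), (4, 5)]
    cross = {(0, 3): 0.9, (1, 4): 0.8, (2, 5): 0.2}   # sequence-similarity weights
    for i, j in A_edges + B_edges:
        W[i, j] = W[j, i] = 1.0
    for (i, j), s in cross.items():
        W[i, j] = W[j, i] = s
    P = W / W.sum(axis=1, keepdims=True)               # row-stochastic transitions
    pi = stationary(P)
    for (i, j), s in cross.items():
        flux = pi[i] * P[i, j] + pi[j] * P[j, i]        # steady-state flow on the edge
        print(f"pair ({i},{j}) similarity={s:.1f} flux={flux:.3f}")
```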
Multi-tasking arbitration and behaviour design for human-interactive robots
NASA Astrophysics Data System (ADS)
Kobayashi, Yuichi; Onishi, Masaki; Hosoe, Shigeyuki; Luo, Zhiwei
2013-05-01
Robots that interact with humans in household environments are required to handle multiple real-time tasks simultaneously, such as carrying objects, collision avoidance and conversation with humans. This article presents a design framework for the control and recognition processes to meet these requirements, taking into account stochastic human behaviour. The proposed design method first introduces a Petri net for synchronisation of multiple tasks. The Petri net formulation is converted to Markov decision processes and processed in an optimal control framework. Three tasks (safety confirmation, object conveyance and conversation) interact and are expressed by the Petri net. Using the proposed framework, tasks that normally tend to be designed by integrating many if-then rules can be designed in a systematic manner within a state estimation and optimisation framework from the viewpoint of shortest-time optimal control. The proposed arbitration method was verified by simulations and experiments using RI-MAN, which was developed for interactive tasks with humans.
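The abstract above converts a Petri net task model into Markov decision processes solved by optimal control. The sketch below shows only the last step, value iteration on a tiny invented MDP; the states, actions, transition probabilities and rewards are hypothetical and are not taken from the paper.

```python
# Toy value iteration illustrating the "task model -> MDP -> optimal control" step
# in a highly simplified form; all numbers are invented for illustration.
import numpy as np

# states: 0 = idle, 1 = carrying object, 2 = conversing   (hypothetical)
# actions: 0 = continue current task, 1 = switch task
# P[a][s, s'] = transition probability, R[s, a] = immediate reward
P = np.array([
    [[0.9, 0.1, 0.0], [0.0, 0.9, 0.1], [0.2, 0.0, 0.8]],   # action 0
    [[0.1, 0.6, 0.3], [0.3, 0.1, 0.6], [0.6, 0.3, 0.1]],   # action 1
])
R = np.array([[0.0, 0.5], [1.0, 0.2], [0.6, 0.3]])
gamma = 0.95

V = np.zeros(3)
for _ in range(500):                                        # value iteration
    Q = R + gamma * np.einsum("ast,t->sa", P, V)            # Q[s,a] = R + gamma*E[V]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=1)
print("values:", np.round(V, 3), "policy:", policy)
```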
RIPOSTE: a framework for improving the design and analysis of laboratory-based research.
Masca, Nicholas Gd; Hensor, Elizabeth Ma; Cornelius, Victoria R; Buffa, Francesca M; Marriott, Helen M; Eales, James M; Messenger, Michael P; Anderson, Amy E; Boot, Chris; Bunce, Catey; Goldin, Robert D; Harris, Jessica; Hinchliffe, Rod F; Junaid, Hiba; Kingston, Shaun; Martin-Ruiz, Carmen; Nelson, Christopher P; Peacock, Janet; Seed, Paul T; Shinkins, Bethany; Staples, Karl J; Toombs, Jamie; Wright, Adam Ka; Teare, M Dawn
2015-05-07
Lack of reproducibility is an ongoing problem in some areas of the biomedical sciences. Poor experimental design and a failure to engage with experienced statisticians at key stages in the design and analysis of experiments are two factors that contribute to this problem. The RIPOSTE (Reducing IrreProducibility in labOratory STudiEs) framework has been developed to support early and regular discussions between scientists and statisticians in order to improve the design, conduct and analysis of laboratory studies and, therefore, to reduce irreproducibility. This framework is intended for use during the early stages of a research project, when specific questions or hypotheses are proposed. The essential points within the framework are explained and illustrated using three examples (a medical equipment test, a macrophage study and a gene expression study). Sound study design minimises the possibility of bias being introduced into experiments and leads to higher quality research with more reproducible results.
Ratwani, Raj M; Zachary Hettinger, A; Kosydar, Allison; Fairbanks, Rollin J; Hodgkins, Michael L
2017-04-01
Currently, there are few resources for electronic health record (EHR) purchasers and end users to understand the usability processes employed by EHR vendors during product design and development. We developed a framework, based on human factors literature and industry standards, to systematically evaluate the user-centered design processes and usability testing methods used by EHR vendors. We reviewed current usability certification requirements and the human factors literature to develop a 15-point framework for evaluating EHR products. The framework is based on 3 dimensions: user-centered design process, summative testing methodology, and summative testing results. Two vendor usability reports were retrieved from the Office of the National Coordinator's Certified Health IT Product List and were evaluated using the framework. One vendor scored low on the framework (5 pts) while the other vendor scored high on the framework (15 pts). The 2 scored vendor reports demonstrate the framework's ability to discriminate between the variabilities in vendor processes and to determine which vendors are meeting best practices. The framework provides a method to more easily comprehend EHR vendors' usability processes and serves to highlight where EHR vendors may be falling short in terms of best practices. The framework provides a greater level of transparency for both purchasers and end users of EHRs. The framework highlights the need for clearer certification requirements and suggests that the authorized certification bodies that examine vendor usability reports may need to be provided with clearer guidance.
Multidisciplinary Optimization Branch Experience Using iSIGHT Software
NASA Technical Reports Server (NTRS)
Padula, S. L.; Korte, J. J.; Dunn, H. J.; Salas, A. O.
1999-01-01
The Multidisciplinary Optimization (MDO) Branch at NASA Langley Research Center is investigating frameworks for supporting multidisciplinary analysis and optimization research. An optimization framework can improve the design process while reducing time and costs. A framework provides software and system services to integrate computational tasks and allows the researcher to concentrate more on the application and less on the programming details. A framework also provides a common working environment and a full range of optimization tools, and so increases the productivity of multidisciplinary research teams. Finally, a framework enables staff members to develop applications for use by disciplinary experts in other organizations. Since the release of version 4.0, the MDO Branch has gained experience with the iSIGHT framework developed by Engineous Software, Inc. This paper describes experiences with four aerospace applications: (1) reusable launch vehicle sizing, (2) aerospike nozzle design, (3) low-noise rotorcraft trajectories, and (4) acoustic liner design. All applications have been successfully tested using the iSIGHT framework, except for the aerospike nozzle problem, which is in progress. Brief overviews of each problem are provided. The problem descriptions include the number and type of disciplinary codes, as well as an estimate of the multidisciplinary analysis execution time. In addition, the optimization methods, objective functions, design variables, and design constraints are described for each problem. Discussions on the experience gained and lessons learned are provided for each problem. These discussions include the advantages and disadvantages of using the iSIGHT framework for each case as well as the ease of use of various advanced features. Potential areas of improvement are identified.
Hleap, Jose Sergio; Blouin, Christian
2018-01-01
The Glycoside Hydrolase Family 13 (GH13) is both evolutionarily diverse and relevant to many industrial applications. Its members hydrolyze starch into smaller carbohydrates and members of the family have been bioengineered to improve catalytic function under industrial environments. We introduce a framework to analyze the response to selection of GH13 protein structures given some phylogenetic and simulated dynamic information. We find that the TIM-barrel (a conserved protein fold consisting of eight α-helices and eight parallel β-strands that alternate along the peptide backbone, common to all amylases) is not selectable since it is under purifying selection. We also show a method to rank important residues with higher inferred response to selection. These residues can be altered to effect change in properties. In this work, we define fitness as inferred thermodynamic stability. We show that under the developed framework, residues 112Y, 122K, 124D, 125W, and 126P are good candidates to increase the stability of the truncated α-amylase protein from Geobacillus thermoleovorans (PDB code: 4E2O; α-1,4-glucan-4-glucanohydrolase; EC 3.2.1.1). Overall, this paper demonstrates the feasibility of a framework for the analysis of protein structures for any other fitness landscape.
Agarwal, Shashank; Liu, Feifan; Yu, Hong
2011-10-03
Protein-protein interaction (PPI) is an important biomedical phenomenon. Automatically detecting PPI-relevant articles and identifying methods that are used to study PPI are important text mining tasks. In this study, we have explored domain independent features to develop two open source machine learning frameworks. One performs binary classification to determine whether the given article is PPI relevant or not, named "Simple Classifier", and the other one maps the PPI relevant articles with corresponding interaction method nodes in a standardized PSI-MI (Proteomics Standards Initiative-Molecular Interactions) ontology, named "OntoNorm". We evaluated our system in the context of BioCreative challenge competition using the standardized data set. Our systems are amongst the top systems reported by the organizers, attaining 60.8% F1-score for identifying relevant documents, and 52.3% F1-score for mapping articles to interaction method ontology. Our results show that domain-independent machine learning frameworks can perform competitively well at the tasks of detecting PPI relevant articles and identifying the methods that were used to study the interaction in such articles. Simple Classifier is available at http://sourceforge.net/p/simpleclassify/home/ and OntoNorm at http://sourceforge.net/p/ontonorm/home/.
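The published Simple Classifier relies on its own domain-independent feature set, so the sketch below is only an analogous baseline for the same binary task (PPI-relevant or not): a TF-IDF bag-of-words model with logistic regression in scikit-learn, trained on a tiny set of invented labelled sentences.

```python
# Generic baseline for the PPI-relevance task (not the published Simple Classifier):
# TF-IDF features plus logistic regression on a tiny hypothetical labelled set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "We report a direct interaction between protein A and protein B by co-immunoprecipitation.",
    "Yeast two-hybrid screening identified novel binding partners of the kinase.",
    "The crystal structure of the isolated enzyme was refined to 2.0 angstrom resolution.",
    "We measured mRNA expression levels across tissues using RNA sequencing.",
]
train_labels = [1, 1, 0, 0]        # 1 = PPI-relevant, 0 = not relevant (toy labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

test = ["Pull-down assays confirmed that the two proteins form a stable complex."]
print(clf.predict(test), clf.predict_proba(test))
```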
Integration of a CAD System Into an MDO Framework
NASA Technical Reports Server (NTRS)
Townsend, J. C.; Samareh, J. A.; Weston, R. P.; Zorumski, W. E.
1998-01-01
NASA Langley has developed a heterogeneous distributed computing environment, called the Framework for Inter-disciplinary Design Optimization, or FIDO. Its purpose has been to demonstrate framework technical feasibility and usefulness for optimizing the preliminary design of complex systems and to provide a working environment for testing optimization schemes. Its initial implementation has been for a simplified model of preliminary design of a high-speed civil transport. Upgrades being considered for the FIDO system include a more complete geometry description, required by high-fidelity aerodynamics and structures codes and based on a commercial Computer Aided Design (CAD) system. This report presents the philosophy behind some of the decisions that have shaped the FIDO system and gives a brief case study of the problems and successes encountered in integrating a CAD system into the FIDO framework.
De Silva, Mary J; Breuer, Erica; Lee, Lucy; Asher, Laura; Chowdhary, Neerja; Lund, Crick; Patel, Vikram
2014-07-05
The Medical Research Councils' framework for complex interventions has been criticized for not including theory-driven approaches to evaluation. Although the framework does include broad guidance on the use of theory, it contains little practical guidance for implementers and there have been calls to develop a more comprehensive approach. A prospective, theory-driven process of intervention design and evaluation is required to develop complex healthcare interventions which are more likely to be effective, sustainable and scalable. We propose a theory-driven approach to the design and evaluation of complex interventions by adapting and integrating a programmatic design and evaluation tool, Theory of Change (ToC), into the MRC framework for complex interventions. We provide a guide to what ToC is, how to construct one, and how to integrate its use into research projects seeking to design, implement and evaluate complex interventions using the MRC framework. We test this approach by using ToC within two randomized controlled trials and one non-randomized evaluation of complex interventions. Our application of ToC in three research projects has shown that ToC can strengthen key stages of the MRC framework. It can aid the development of interventions by providing a framework for enhanced stakeholder engagement and by explicitly designing an intervention that is embedded in the local context. For the feasibility and piloting stage, ToC enables the systematic identification of knowledge gaps to generate research questions that strengthen intervention design. ToC may improve the evaluation of interventions by providing a comprehensive set of indicators to evaluate all stages of the causal pathway through which an intervention achieves impact, combining evaluations of intervention effectiveness with detailed process evaluations into one theoretical framework. Incorporating a ToC approach into the MRC framework holds promise for improving the design and evaluation of complex interventions, thereby increasing the likelihood that the intervention will be ultimately effective, sustainable and scalable. We urge researchers developing and evaluating complex interventions to consider using this approach, to evaluate its usefulness and to build an evidence base to further refine the methodology. Clinical trials.gov: NCT02160249.
Design and Performance Frameworks for Constructing Problem-Solving Simulations
Stevens, Ron; Palacio-Cayetano, Joycelin
2003-01-01
Rapid advancements in hardware, software, and connectivity are helping to shorten the times needed to develop computer simulations for science education. These advancements, however, have not been accompanied by corresponding theories of how best to design and use these technologies for teaching, learning, and testing. Such design frameworks ideally would be guided less by the strengths/limitations of the presentation media and more by cognitive analyses detailing the goals of the tasks, the needs and abilities of students, and the resulting decision outcomes needed by different audiences. This article describes a problem-solving environment and associated theoretical framework for investigating how students select and use strategies as they solve complex science problems. A framework is first described for designing on-line problem spaces that highlights issues of content, scale, cognitive complexity, and constraints. While this framework was originally designed for medical education, it has proven robust and has been successfully applied to learning environments from elementary school through medical school. Next, a similar framework is detailed for collecting student performance and progress data that can provide evidence of students' strategic thinking and that could potentially be used to accelerate student progress. Finally, experimental validation data are presented that link strategy selection and use with other metrics of scientific reasoning and student achievement. PMID:14506505
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pachuilo, Andrew R; Ragan, Eric; Goodall, John R
Visualization tools can take advantage of multiple coordinated views to support analysis of large, multidimensional data sets. Effective design of such views and layouts can be challenging, but understanding users' analysis strategies can inform design improvements. We outline an approach for intelligent design configuration of visualization tools with multiple coordinated views, and we discuss a proposed software framework to support the approach. The proposed software framework could capture and learn from user interaction data to automate new compositions of views and widgets. Such a framework could reduce the time needed for meta-analysis of visualization use and lead to more effective visualization design.
Planning for Program Design and Assessment Using Value Creation Frameworks
ERIC Educational Resources Information Center
Whisler, Laurel; Anderson, Rachel; Brown, Jenai
2017-01-01
This article explains a program design and planning process using the Value Creation Framework (VCF) developed by Wenger, Trayner, and de Laat (2011). The framework involves identifying types of value or benefit for those involved in the program, conditions and activities that support creation of that value, data that measure whether the value was…
ERIC Educational Resources Information Center
Angkananon, Kewalin; Wald, Mike; Gilbert, Lester
2014-01-01
This paper focuses on the development and evaluation of a Technology Enhanced Interaction Framework and Method that can help with designing accessible mobile learning interactions involving disabled people. This new framework and method were developed to help design technological support for communication and interactions between people,…
A Framework for the Design and Integration of Collaborative Classroom Games
ERIC Educational Resources Information Center
Echeverria, Alejandro; Garcia-Campo, Cristian; Nussbaum, Miguel; Gil, Francisca; Villalta, Marco; Amestica, Matias; Echeverria, Sebastian
2011-01-01
The progress registered in the use of video games as educational tools has not yet been successfully transferred to the classroom. In an attempt to close this gap, a framework was developed that assists in the design and classroom integration of educational games. The framework addresses both the educational dimension and the ludic dimension. The…
Feliciano, Patricia R; Drennan, Catherine L; Nonato, M Cristina
2016-08-30
Fumarate hydratases (FHs) are essential metabolic enzymes grouped into two classes. Here, we present the crystal structure of a class I FH, the cytosolic FH from Leishmania major, which reveals a previously undiscovered protein fold that coordinates a catalytically essential [4Fe-4S] cluster. Our 2.05 Å resolution data further reveal a dimeric architecture for this FH that resembles a heart, with each lobe comprised of two domains that are arranged around the active site. Besides the active site, where the substrate S-malate is bound bidentate to the unique iron of the [4Fe-4S] cluster, other binding pockets are found near the dimeric enzyme interface, some of which are occupied by malonate, shown here to be a weak inhibitor of this enzyme. Taken together, these data provide a framework both for investigations of the class I FH catalytic mechanism and for drug design aimed at fighting neglected tropical diseases.
An audience-channel-message-evaluation (ACME) framework for health communication campaigns.
Noar, Seth M
2012-07-01
Recent reviews of the literature have indicated that a number of health communication campaigns continue to fail to adhere to principles of effective campaign design. The lack of an integrated, organizing framework for the design, implementation, and evaluation of health communication campaigns may contribute to this state of affairs. The current article introduces an audience-channel-message-evaluation (ACME) framework that organizes the major principles of health campaign design, implementation, and evaluation. ACME also explicates the relationships and linkages between the varying principles. Insights from ACME include the following: The choice of audience segment(s) to focus on in a campaign affects all other campaign design choices, including message strategy and channel/component options. Although channel selection influences options for message design, choice of message design also influences channel options. Evaluation should not be thought of as a separate activity, but rather should be infused and integrated throughout the campaign design and implementation process, including formative, process, and outcome evaluation activities. Overall, health communication campaigns that adhere to this integrated set of principles of effective campaign design will have a greater chance of success than those using principles idiosyncratically. These design, implementation, and evaluation principles are embodied in the ACME framework.
NASA Astrophysics Data System (ADS)
Liang, Likai; Bi, Yushen
Considering the distributed network management system's demands for high distribution, extensibility and reusability, a framework model for a three-tier distributed network management system based on COM/COM+ and DNA is proposed, adopting software component technology and the N-tier application software framework design approach. We also give the concrete design plan for each layer of this model. Finally, we discuss the internal running process of each layer in the distributed network management system's framework model.
ERIC Educational Resources Information Center
Black, Robert D.; Weinberg, Lois A.; Brodwin, Martin G.
2015-01-01
Universal design in education is a framework of instruction that aims to be inclusive of different learning preferences and learners, and helps to reduce barriers for students with disabilities. The principles of Universal Design for Learning (UDL) and Universal Design for Instruction (UDI) were used as the framework for this study. The purposes…
ERIC Educational Resources Information Center
Lee, Young S.
2014-01-01
The article focuses on a systematic approach to the instructional framework to incorporate three aspects of sustainable design. It also aims to provide an instruction model for sustainable design stressing a collective effort to advance knowledge creation as a community. It develops a framework conjoining the concept of integrated process in…
Geometry of proteins: hydrogen bonding, sterics, and marginally compact tubes.
Banavar, Jayanth R; Cieplak, Marek; Flammini, Alessandro; Hoang, Trinh X; Kamien, Randall D; Lezon, Timothy; Marenduzzo, Davide; Maritan, Amos; Seno, Flavio; Snir, Yehuda; Trovato, Antonio
2006-03-01
The functionality of proteins is governed by their structure in the native state. Protein structures are made up of emergent building blocks of helices and almost planar sheets. A simple coarse-grained geometrical model of a flexible tube barely subject to compaction provides a unified framework for understanding the common character of globular proteins. We argue that a recent critique of the tube idea is not well founded.
Classification of Dynamical Diffusion States in Single Molecule Tracking Microscopy
Bosch, Peter J.; Kanger, Johannes S.; Subramaniam, Vinod
2014-01-01
Single molecule tracking of membrane proteins by fluorescence microscopy is a promising method to investigate dynamic processes in live cells. Translating the trajectories of proteins to biological implications, such as protein interactions, requires the classification of protein motion within the trajectories. Spatial information of protein motion may reveal where the protein interacts with cellular structures, because binding of proteins to such structures often alters their diffusion speed. For dynamic diffusion systems, we provide an analytical framework to determine in which diffusion state a molecule is residing during the course of its trajectory. We compare different methods for the quantification of motion to utilize this framework for the classification of two diffusion states (two populations with different diffusion speed). We found that a gyration quantification method and a Bayesian statistics-based method are the most accurate in diffusion-state classification for realistic experimentally obtained datasets, of which the gyration method is much less computationally demanding. After classification of the diffusion, the lifetime of the states can be determined, and images of the diffusion states can be reconstructed at high resolution. Simulations validate these applications. We apply the classification and its applications to experimental data to demonstrate the potential of this approach to obtain further insights into the dynamics of cell membrane proteins. PMID:25099798
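As a rough illustration of the gyration-based quantification mentioned above, the Python sketch below labels each point of a 2-D single-molecule trajectory as slow or fast diffusion from the radius of gyration of a sliding window; the window size, threshold and synthetic trajectory are illustrative assumptions, not values from the paper.

    import numpy as np

    def gyration_radius(window):
        """Radius of gyration of a window of 2-D positions (N x 2 array)."""
        centered = window - window.mean(axis=0)
        return np.sqrt((centered ** 2).sum(axis=1).mean())

    def classify_states(trajectory, window_size=10, threshold=0.05):
        """Label each trajectory point as 0 (slow) or 1 (fast) diffusion.

        trajectory : (T, 2) array of x, y positions
        threshold  : gyration radius separating the two states (assumed units)
        """
        half = window_size // 2
        labels = np.zeros(len(trajectory), dtype=int)
        for i in range(len(trajectory)):
            lo, hi = max(0, i - half), min(len(trajectory), i + half + 1)
            labels[i] = int(gyration_radius(trajectory[lo:hi]) > threshold)
        return labels

    # Example: a synthetic trajectory that switches from slow to fast diffusion
    rng = np.random.default_rng(0)
    slow = np.cumsum(rng.normal(0, 0.01, size=(200, 2)), axis=0)
    fast = slow[-1] + np.cumsum(rng.normal(0, 0.05, size=(200, 2)), axis=0)
    labels = classify_states(np.vstack([slow, fast]))

Once the points are labelled, state lifetimes and per-state localization maps follow directly from the label sequence, which is the post-classification analysis the abstract describes.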
Hidden Markov model approach for identifying the modular framework of the protein backbone.
Camproux, A C; Tuffery, P; Chevrolat, J P; Boisvieux, J F; Hazout, S
1999-12-01
The hidden Markov model (HMM) was used to identify recurrent short 3D structural building blocks (SBBs) describing protein backbones, independently of any a priori knowledge. Polypeptide chains are decomposed into a series of short segments defined by their inter-alpha-carbon distances. Basically, the model takes into account the sequentiality of the observed segments and assumes that each one corresponds to one of several possible SBBs. Fitting the model to a database of non-redundant proteins allowed us to decode proteins in terms of 12 distinct SBBs with different roles in protein structure. Some SBBs correspond to classical regular secondary structures. Others correspond to a significant subdivision of their bounding regions previously considered to be a single pattern. The major contribution of the HMM is that this model implicitly takes into account the sequential connections between SBBs and thus describes the most probable pathways by which the blocks are connected to form the framework of the protein structures. Validation of the SBB code was performed by extracting SBB series repeated in the recoded proteins and examining their structural similarities. Preliminary results on the sequence specificity of SBBs suggest promising perspectives for the prediction of SBBs or series of SBBs from the protein sequences.
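A minimal sketch of the HMM decoding idea using the hmmlearn library; the segment descriptors below are placeholder random data, whereas the actual analysis would derive them from the inter-alpha-carbon distances of overlapping backbone fragments as described above.

    import numpy as np
    from hmmlearn import hmm  # pip install hmmlearn

    # Each protein is reduced to a sequence of short-segment descriptors,
    # e.g. inter-C-alpha distances of overlapping fragments. Here
    # `descriptors` is a list of (n_segments, n_features) arrays, one per
    # protein (placeholder data for illustration only).
    rng = np.random.default_rng(1)
    descriptors = [rng.normal(size=(60, 3)) for _ in range(20)]

    X = np.vstack(descriptors)
    lengths = [d.shape[0] for d in descriptors]   # keeps chains separate

    # One hidden state per structural building block (12 SBBs in the paper)
    model = hmm.GaussianHMM(n_components=12, covariance_type="diag", n_iter=100)
    model.fit(X, lengths)

    # Decode a protein into its most probable series of SBBs (Viterbi path)
    sbb_series = model.predict(descriptors[0])

The transition matrix learned by the model is what encodes the "most probable pathways" between blocks that the abstract emphasizes.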
BAYESIAN PROTEIN STRUCTURE ALIGNMENT.
Rodriguez, Abel; Schmidler, Scott C
The analysis of the three-dimensional structure of proteins is an important topic in molecular biochemistry. Structure plays a critical role in defining the function of proteins and is more strongly conserved than amino acid sequence over evolutionary timescales. A key challenge is the identification and evaluation of structural similarity between proteins; such analysis can aid in understanding the role of newly discovered proteins and help elucidate evolutionary relationships between organisms. Computational biologists have developed many clever algorithmic techniques for comparing protein structures; however, all are based on heuristic optimization criteria, making statistical interpretation somewhat difficult. Here we present a fully probabilistic framework for pairwise structural alignment of proteins. Our approach has several advantages, including the ability to capture alignment uncertainty and to estimate key "gap" parameters which critically affect the quality of the alignment. We show that several existing alignment methods arise as maximum a posteriori estimates under specific choices of prior distributions and error models. Our probabilistic framework is also easily extended to incorporate additional information, which we demonstrate by including primary sequence information to generate simultaneous sequence-structure alignments that can resolve ambiguities obtained using structure alone. This combined model also provides a natural approach for the difficult task of estimating evolutionary distance based on structural alignments. The model is illustrated by comparison with well-established methods on several challenging protein alignment examples.
Initial Multidisciplinary Design and Analysis Framework
NASA Technical Reports Server (NTRS)
Ozoroski, L. P.; Geiselhart, K. A.; Padula, S. L.; Li, W.; Olson, E. D.; Campbell, R. L.; Shields, E. W.; Berton, J. J.; Gray, J. S.; Jones, S. M.;
2010-01-01
Within the Supersonics (SUP) Project of the Fundamental Aeronautics Program (FAP), an initial multidisciplinary design & analysis framework has been developed. A set of low- and intermediate-fidelity discipline design and analysis codes were integrated within a multidisciplinary design and analysis framework and demonstrated on two challenging test cases. The first test case demonstrates an initial capability to design for low boom and performance. The second test case demonstrates rapid assessment of a well-characterized design. The current system has been shown to greatly increase the design and analysis speed and capability, and many future areas for development were identified. This work has established a state-of-the-art capability for immediate use by supersonic concept designers and systems analysts at NASA, while also providing a strong base to build upon for future releases as more multifidelity capabilities are developed and integrated.
Tucker, George; Loh, Po-Ru; Berger, Bonnie
2013-10-04
Comprehensive protein-protein interaction (PPI) maps are a powerful resource for uncovering the molecular basis of genetic interactions and providing mechanistic insights. Over the past decade, high-throughput experimental techniques have been developed to generate PPI maps at proteome scale, first using yeast two-hybrid approaches and more recently via affinity purification combined with mass spectrometry (AP-MS). Unfortunately, data from both protocols are prone to both high false positive and false negative rates. To address these issues, many methods have been developed to post-process raw PPI data. However, with few exceptions, these methods only analyze binary experimental data (in which each potential interaction tested is deemed either observed or unobserved), neglecting quantitative information available from AP-MS such as spectral counts. We propose a novel method for incorporating quantitative information from AP-MS data into existing PPI inference methods that analyze binary interaction data. Our approach introduces a probabilistic framework that models the statistical noise inherent in observations of co-purifications. Using a sampling-based approach, we model the uncertainty of interactions with low spectral counts by generating an ensemble of possible alternative experimental outcomes. We then apply the existing method of choice to each alternative outcome and aggregate results over the ensemble. We validate our approach on three recent AP-MS data sets and demonstrate performance comparable to or better than state-of-the-art methods. Additionally, we provide an in-depth discussion comparing the theoretical bases of existing approaches and identify common aspects that may be key to their performance. Our sampling framework extends the existing body of work on PPI analysis using binary interaction data to apply to the richer quantitative data now commonly available through AP-MS assays. This framework is quite general, and many enhancements are likely possible. Fruitful future directions may include investigating more sophisticated schemes for converting spectral counts to probabilities and applying the framework to direct protein complex prediction methods.
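The sampling scheme described above can be sketched in Python as follows; the function that converts spectral counts into interaction probabilities is an illustrative assumption, and `binary_method` stands in for whichever existing binary-data post-processing method is being wrapped.

    import numpy as np

    def counts_to_probability(counts, scale=5.0):
        """Map spectral counts to interaction probabilities.
        The saturating form below is an illustrative assumption, not the
        conversion scheme used in the paper."""
        return 1.0 - np.exp(-np.asarray(counts, dtype=float) / scale)

    def ensemble_scores(counts, binary_method, n_samples=100, seed=0):
        """Sample binary interaction matrices from count-derived probabilities,
        score each sample with an existing binary-data method, and average the
        results over the ensemble."""
        rng = np.random.default_rng(seed)
        p = counts_to_probability(counts)
        total = None
        for _ in range(n_samples):
            binary = rng.random(p.shape) < p          # one alternative outcome
            scores = binary_method(binary.astype(int))
            total = scores if total is None else total + scores
        return total / n_samples

    # Example with a trivial stand-in for the binary post-processing method
    counts = np.array([[0, 2, 15], [2, 0, 1], [15, 1, 0]])
    mean_scores = ensemble_scores(counts, binary_method=lambda b: b.astype(float))

Interactions backed by many spectral counts are present in nearly every sampled outcome, while low-count interactions appear only sporadically, which is how the uncertainty propagates into the aggregated scores.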
A Modular Toolset for Recombination Transgenesis and Neurogenetic Analysis of Drosophila
Wang, Ji-Wu; Beck, Erin S.; McCabe, Brian D.
2012-01-01
Transgenic Drosophila have contributed extensively to our understanding of nervous system development, physiology and behavior in addition to being valuable models of human neurological disease. Here, we have generated a novel series of modular transgenic vectors designed to optimize and accelerate the production and analysis of transgenes in Drosophila. We constructed a novel vector backbone, pBID, that allows both phiC31 targeted transgene integration and incorporates insulator sequences to ensure specific and uniform transgene expression. Upon this framework, we have built a series of constructs that are either backwards compatible with existing restriction enzyme based vectors or utilize Gateway recombination technology for high-throughput cloning. These vectors allow for endogenous promoter or Gal4 targeted expression of transgenic proteins with or without fluorescent protein or epitope tags. In addition, we have generated constructs that facilitate transgenic splice isoform specific RNA inhibition of gene expression. We demonstrate the utility of these constructs to analyze proteins involved in nervous system development, physiology and neurodegenerative disease. We expect that these reagents will facilitate the proficiency and sophistication of Drosophila genetic analysis in both the nervous system and other tissues. PMID:22848718
Pey, Jon; Valgepea, Kaspar; Rubio, Angel; Beasley, John E; Planes, Francisco J
2013-12-08
The study of cellular metabolism in the context of high-throughput -omics data has allowed us to decipher novel mechanisms of importance in biotechnology and health. To continue with this progress, it is essential to efficiently integrate experimental data into metabolic modeling. We present here an in-silico framework to infer relevant metabolic pathways for a particular phenotype under study based on its gene/protein expression data. This framework is based on the Carbon Flux Path (CFP) approach, a mixed-integer linear program that expands classical path finding techniques by considering additional biophysical constraints. In particular, the objective function of the CFP approach is amended to account for gene/protein expression data and to influence the obtained paths. This approach is termed integrative Carbon Flux Path (iCFP). We show that gene/protein expression data also influences the stoichiometric balancing of CFPs, which provides a more accurate picture of active metabolic pathways. This is illustrated in both a theoretical and real scenario. Finally, we apply this approach to find novel pathways relevant in the regulation of acetate overflow metabolism in Escherichia coli. As a result, several targets which could be relevant for better understanding of the phenomenon leading to impaired acetate overflow are proposed. A novel mathematical framework that determines functional pathways based on gene/protein expression data is presented and validated. We show that our approach is able to provide new insights into complex biological scenarios such as acetate overflow in Escherichia coli.
Embo, M; Driessen, E; Valcke, M; van der Vleuten, C P M
2015-02-01
Although competency-based education is well established in health care education, research shows that the competencies do not always match the reality of clinical workplaces. Therefore, there is a need to design feasible and evidence-based competency frameworks that fit the workplace reality. This theoretical paper outlines a competency-based framework, designed to facilitate learning, assessment and supervision in clinical workplace education. Integration is the cornerstone of this holistic competency framework. Copyright © 2014 Elsevier Ltd. All rights reserved.
A general observatory control software framework design for existing small and mid-size telescopes
NASA Astrophysics Data System (ADS)
Ge, Liang; Lu, Xiao-Meng; Jiang, Xiao-Jun
2015-07-01
A general framework for observatory control software would help to improve the efficiency of observation and operation of telescopes, and would also be advantageous for remote and joint observations. We describe a general framework for observatory control software, which considers principles of flexibility and inheritance to meet the expectations from observers and technical personnel. This framework includes observation scheduling, device control and data storage. The design is based on a finite state machine that controls the whole process.
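A minimal sketch of the finite-state-machine idea underlying such a controller; the states, events and transition table below are hypothetical and are not taken from the described system.

    from enum import Enum, auto

    class State(Enum):
        IDLE = auto()
        SCHEDULED = auto()
        SLEWING = auto()
        EXPOSING = auto()
        STORING = auto()

    # Allowed transitions: event name -> (from_state, to_state)
    TRANSITIONS = {
        "schedule":  (State.IDLE,      State.SCHEDULED),
        "slew":      (State.SCHEDULED, State.SLEWING),
        "on_target": (State.SLEWING,   State.EXPOSING),
        "readout":   (State.EXPOSING,  State.STORING),
        "stored":    (State.STORING,   State.IDLE),
    }

    class ObservatoryController:
        def __init__(self):
            self.state = State.IDLE

        def handle(self, event):
            src, dst = TRANSITIONS[event]
            if self.state is not src:
                raise RuntimeError(f"event '{event}' not allowed in {self.state}")
            self.state = dst
            return self.state

    # One full observe cycle: schedule -> slew -> expose -> store -> idle
    ctl = ObservatoryController()
    for event in ["schedule", "slew", "on_target", "readout", "stored"]:
        ctl.handle(event)

Keeping the transition table as data rather than code is one way such a framework can stay general across telescopes: scheduling, device control and data storage modules only need to emit and consume the agreed events.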
Aryloxyalkanoic Acids as Non-Covalent Modifiers of the Allosteric Properties of Hemoglobin
Omar, Abdelsattar M.; Mahran, Mona A.; Ghatge, Mohini S.; Bamane, Faida H. A.; Ahmed, Mostafa H.; El-Araby, Moustafa E.; Abdulmalik, Osheiza; Safo, Martin K.
2017-01-01
Hemoglobin (Hb) modifiers that stereospecifically inhibit sickle hemoglobin polymer formation and/or allosterically increase Hb affinity for oxygen have been shown to prevent the primary pathophysiology of sickle cell disease (SCD), specifically, Hb polymerization and red blood cell sickling. Several such compounds are currently being clinically studied for the treatment of SCD. Based on the previously reported non-covalent Hb binding characteristics of substituted aryloxyalkanoic acids that exhibited antisickling properties, we designed, synthesized and evaluated 18 new compounds (KAUS II series) for enhanced antisickling activities. Surprisingly, select test compounds showed no antisickling effects or promoted erythrocyte sickling. Additionally, the compounds showed no significant effect on Hb oxygen affinity (or in some cases, even decreased the affinity for oxygen). The X-ray structure of deoxygenated Hb in complex with a prototype compound, KAUS-23, revealed that the effector bound in the central water cavity of the protein, providing atomic level explanations for the observed functional and biological activities. Although the structural modification did not lead to the anticipated biological effects, the findings provide important direction for designing candidate antisickling agents, as well as a framework for novel Hb allosteric effectors that conversely, decrease the protein affinity for oxygen for potential therapeutic use for hypoxic- and/or ischemic-related diseases. PMID:27529207
Cervera-Padrell, Albert E; Skovby, Tommy; Kiil, Søren; Gani, Rafiqul; Gernaey, Krist V
2012-10-01
A systematic framework is proposed for the design of continuous pharmaceutical manufacturing processes. Specifically, the design framework focuses on organic chemistry based, active pharmaceutical ingredient (API) synthetic processes, but could potentially be extended to biocatalytic and fermentation-based products. The method exploits the synergic combination of continuous flow technologies (e.g., microfluidic techniques) and process systems engineering (PSE) methods and tools for faster process design and increased process understanding throughout the whole drug product and process development cycle. The design framework structures the many different and challenging design problems (e.g., solvent selection, reactor design, and design of separation and purification operations), driving the user from the initial drug discovery steps--where process knowledge is very limited--toward the detailed design and analysis. Examples from the literature of PSE methods and tools applied to pharmaceutical process design and novel pharmaceutical production technologies are provided along the text, assisting in the accumulation and interpretation of process knowledge. Different criteria are suggested for the selection of batch and continuous processes so that the whole design results in low capital and operational costs as well as low environmental footprint. The design framework has been applied to the retrofit of an existing batch-wise process used by H. Lundbeck A/S to produce an API: zuclopenthixol. Some of its batch operations were successfully converted into continuous mode, obtaining higher yields that allowed a significant simplification of the whole process. The material and environmental footprint of the process--evaluated through the process mass intensity index, that is, kg of material used per kg of product--was reduced to half of its initial value, with potential for further reduction. The case-study includes reaction steps typically used by the pharmaceutical industry featuring different characteristic reaction times, as well as L-L separation and distillation-based solvent exchange steps, and thus constitutes a good example of how the design framework can be useful to efficiently design novel or already existing API manufacturing processes taking advantage of continuous processes. Copyright © 2012 Elsevier B.V. All rights reserved.
Definition and classification of cancer cachexia: an international consensus.
Fearon, Kenneth; Strasser, Florian; Anker, Stefan D; Bosaeus, Ingvar; Bruera, Eduardo; Fainsinger, Robin L; Jatoi, Aminah; Loprinzi, Charles; MacDonald, Neil; Mantovani, Giovanni; Davis, Mellar; Muscaritoli, Maurizio; Ottery, Faith; Radbruch, Lukas; Ravasco, Paula; Walsh, Declan; Wilcock, Andrew; Kaasa, Stein; Baracos, Vickie E
2011-05-01
To develop a framework for the definition and classification of cancer cachexia a panel of experts participated in a formal consensus process, including focus groups and two Delphi rounds. Cancer cachexia was defined as a multifactorial syndrome defined by an ongoing loss of skeletal muscle mass (with or without loss of fat mass) that cannot be fully reversed by conventional nutritional support and leads to progressive functional impairment. Its pathophysiology is characterised by a negative protein and energy balance driven by a variable combination of reduced food intake and abnormal metabolism. The agreed diagnostic criterion for cachexia was weight loss greater than 5%, or weight loss greater than 2% in individuals already showing depletion according to current bodyweight and height (body-mass index [BMI] <20 kg/m(2)) or skeletal muscle mass (sarcopenia). An agreement was made that the cachexia syndrome can develop progressively through various stages--precachexia to cachexia to refractory cachexia. Severity can be classified according to degree of depletion of energy stores and body protein (BMI) in combination with degree of ongoing weight loss. Assessment for classification and clinical management should include the following domains: anorexia or reduced food intake, catabolic drive, muscle mass and strength, functional and psychosocial impairment. Consensus exists on a framework for the definition and classification of cancer cachexia. After validation, this should aid clinical trial design, development of practice guidelines, and, eventually, routine clinical management. Copyright © 2011 Elsevier Ltd. All rights reserved.
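The agreed diagnostic criterion can be transcribed directly into a small helper for illustration; sarcopenia assessment is outside the scope of the abstract, so it is passed in as a precomputed flag, and this sketch is not intended as a clinical tool.

    def meets_cachexia_criterion(weight_loss_pct, bmi, sarcopenia=False):
        """Consensus diagnostic criterion: weight loss > 5%, or weight loss > 2%
        with BMI < 20 kg/m^2 or with sarcopenia (assessed separately)."""
        if weight_loss_pct > 5.0:
            return True
        if weight_loss_pct > 2.0 and (bmi < 20.0 or sarcopenia):
            return True
        return False

    assert meets_cachexia_criterion(6.0, 24.0) is True
    assert meets_cachexia_criterion(3.0, 19.0) is True
    assert meets_cachexia_criterion(3.0, 23.0, sarcopenia=False) is False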
A UML profile for framework modeling.
Xu, Xiao-liang; Wang, Le-yu; Zhou, Hong
2004-01-01
The current standard Unified Modeling Language (UML) could not adequately model framework flexibility and extensibility due to the lack of appropriate constructs to distinguish framework hot-spots from kernel elements. A new UML profile that customizes UML for framework modeling was presented using the extension mechanisms of UML, providing a group of UML extensions to meet the needs of framework modeling. In this profile, extended class diagrams and sequence diagrams were defined to straightforwardly identify the hot-spots and describe their instantiation restrictions. A transformation model based on design patterns was also put forward, such that the profile-based framework design diagrams could be automatically mapped to the corresponding implementation diagrams. It was shown that the presented profile makes framework modeling more straightforward and therefore easier to understand and instantiate.
Students' Construction of External Representations in Design-Based Learning Situations
ERIC Educational Resources Information Center
de Vries, Erica
2006-01-01
This article develops a theoretical framework for the study of students' construction of mixed multiple external representations in design-based learning situations involving an adaptation of professional tasks and tools to a classroom setting. The framework draws on research on professional design processes and on learning with multiple external…
Learning Experience as Transaction: A Framework for Instructional Design
ERIC Educational Resources Information Center
Parrish, Patrick E.; Wilson, Brent G.; Dunlap, Joanna C.
2011-01-01
This article presents a framework for understanding learning experience as an object for instructional design--as an object for design as well as research and understanding. Compared to traditional behavioral objectives or discrete cognitive skills, the object of experience is more holistic, requiring simultaneous attention to cognition, behavior,…
Designing and Evaluating Representations to Model Pedagogy
ERIC Educational Resources Information Center
Masterman, Elizabeth; Craft, Brock
2013-01-01
This article presents the case for a theory-informed approach to designing and evaluating representations for implementation in digital tools to support Learning Design, using the framework of epistemic efficacy as an example. This framework, which is rooted in the literature of cognitive psychology, is operationalised through dimensions of fit…
Scientific Arguments as Learning Artifacts: Designing for Learning from the Web with KIE.
ERIC Educational Resources Information Center
Bell, Philip; Linn, Marcia C.
2000-01-01
Examines how students use evidence, determines when they add further ideas and claims, and measures progress in understanding light propagation. Uses the Scaffolded Knowledge Integration (SKI) instructional framework for design decisions. Discusses design studies that test and elaborate on the instructional framework. (Contains 33 references.)…
Learning to Design Collaboratively: Participation of Student Designers in a Community of Innovation
ERIC Educational Resources Information Center
West, Richard E.; Hannafin, Michael J.
2011-01-01
Creativity researchers have drawn on cognitive principles to characterize individual innovation. However, few comprehensive frameworks have been developed to relate social innovation to social cognition research. This article introduces the Communities of Innovation (COI) framework and examines its applications in a culture designed to promote…
Data Intensive Analysis of Biomolecular Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Straatsma, TP; Soares, Thereza A.
2007-12-01
The advances in biomolecular modeling and simulation made possible by the availability of increasingly powerful high performance computing resources are extending molecular simulations to more biologically relevant system sizes and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations that are being carried out for biomolecular systems with more realistic models, multi-component environments, and for longer simulation times. Just as in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to the progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories is carried out using a novel, more integrative and systematic approach. We are developing a much needed rigorous computer science based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the required tools for access and analysis of a distributed library of generated trajectories. Our research is focusing on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool to address the challenges of post-genomic biological research. The strategy to deliver the required data intensive computing applications that can effectively deal with the volume of simulation data that will become available is based on taking advantage of the capabilities offered by large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to process trajectories sequentially, time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and it was designed this way to be able to run on workstation computers and other architectures whose aggregate memory would not allow entire trajectories to be held in core. The consequence of this approach is an I/O dominated solution that scales very poorly on parallel machines.
We are currently developing tools specifically intended for use on large scale machines with sufficient main memory that entire trajectories can be held in core. This greatly reduces the cost of I/O, as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to the appropriate entries within the trajectory, which is typically spread across multiple files, and reads its frames independently of all other processors.
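A simplified sketch of the per-processor frame selection described for DIANA: each rank independently works out which frames it owns and seeks straight to them. The round-robin assignment and the fixed-size flat binary layout are illustrative assumptions rather than the NWChem or DIANA file format; in a real MPI job the rank and size would come from the communicator (for example mpi4py's MPI.COMM_WORLD).

    import numpy as np

    def frames_for_rank(n_frames, rank, size):
        """Round-robin assignment of trajectory frames to one processor.
        Each rank computes its own list independently, so no coordination
        or shared I/O is needed."""
        return list(range(rank, n_frames, size))

    def read_frames(path, frame_indices, n_atoms, dtype=np.float32):
        """Seek directly to the assigned frames of a flat binary trajectory.
        Assumes fixed-size frames of n_atoms x 3 coordinates, an illustrative
        layout only."""
        frame_bytes = n_atoms * 3 * np.dtype(dtype).itemsize
        frames = []
        with open(path, "rb") as fh:
            for i in frame_indices:
                fh.seek(i * frame_bytes)
                data = np.frombuffer(fh.read(frame_bytes), dtype=dtype)
                frames.append(data.reshape(n_atoms, 3))
        return np.stack(frames)

    # In an MPI job, rank and size would come from the communicator,
    # e.g. MPI.COMM_WORLD.Get_rank() / Get_size() with mpi4py.
    my_frames = frames_for_rank(n_frames=10_000, rank=3, size=64)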
A business rules design framework for a pharmaceutical validation and alert system.
Boussadi, A; Bousquet, C; Sabatier, B; Caruba, T; Durieux, P; Degoulet, P
2011-01-01
Several alert systems have been developed to improve the patient safety aspects of clinical information systems (CIS). Most studies have focused on the evaluation of these systems, with little information provided about the methodology leading to system implementation. We propose here an 'agile' business rule design framework (BRDF) supporting both the design of alerts for the validation of drug prescriptions and the incorporation of the end user into the design process. We analyzed the unified process (UP) design life cycle and defined the activities, subactivities, actors and UML artifacts that could be used to enhance the agility of the proposed framework. We then applied the proposed framework to two different sets of data in the context of the Georges Pompidou University Hospital (HEGP) CIS. We introduced two new subactivities into UP: business rule specification and business rule instantiation. The pharmacist made an effective contribution to five of the eight BRDF design activities. Validation of the two new subactivities was effected in the context of drug dosage adaptation to the patients' clinical and biological contexts. A pilot experiment shows that business rules modeled with BRDF and implemented as an alert system triggered an alert for 5824 of the 71,413 prescriptions considered (8.16%). A business rule design framework approach meets one of the strategic objectives for decision support design by taking into account three important criteria posing a particular challenge to system designers: 1) business processes, 2) knowledge modeling of the context of application, and 3) the agility of the various design steps.
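For illustration only, a business rule instantiated by such a framework might look like the sketch below; the drug name, dosage limit and renal-function threshold are hypothetical and are not taken from the paper.

    from dataclasses import dataclass

    @dataclass
    class Prescription:
        drug: str
        daily_dose_mg: float
        creatinine_clearance: float  # mL/min, the patient's biological context

    def dose_alert_rule(p: Prescription):
        """Hypothetical dosage-adaptation rule: flag a renally cleared drug
        whose dose was not reduced despite impaired renal function."""
        if p.drug == "drug_x" and p.creatinine_clearance < 30 and p.daily_dose_mg > 500:
            return f"ALERT: reduce {p.drug} dose (CrCl {p.creatinine_clearance} mL/min)"
        return None

    alert = dose_alert_rule(Prescription("drug_x", 1000, 25))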
Wang, Mingming; Sweetapple, Chris; Fu, Guangtao; Farmani, Raziyeh; Butler, David
2017-10-01
This paper presents a new framework for decision making in sustainable drainage system (SuDS) scheme design. It integrates resilience, hydraulic performance, pollution control, rainwater usage, energy analysis, greenhouse gas (GHG) emissions and costs, and has 12 indicators. The multi-criteria analysis methods of entropy weight and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) were selected to support SuDS scheme selection. The effectiveness of the framework is demonstrated with a SuDS case in China. Indicators used include flood volume, flood duration, a hydraulic performance indicator, cost and resilience. Resilience is an important design consideration, and it supports scheme selection in the case study. The proposed framework will help a decision maker to choose an appropriate design scheme for implementation without subjectivity. Copyright © 2017 Elsevier Ltd. All rights reserved.
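A compact sketch of the entropy-weight and TOPSIS steps named above; the scheme scores and the choice of benefit/cost criteria below are illustrative assumptions rather than the case-study data.

    import numpy as np

    def entropy_weights(X):
        """Objective criterion weights from the entropy method.
        X: (alternatives x criteria) matrix of non-negative scores."""
        P = X / X.sum(axis=0)
        P = np.where(P == 0, 1e-12, P)                 # avoid log(0)
        k = 1.0 / np.log(X.shape[0])
        entropy = -k * (P * np.log(P)).sum(axis=0)
        divergence = 1.0 - entropy
        return divergence / divergence.sum()

    def topsis(X, weights, benefit):
        """Closeness of each alternative to the ideal solution.
        benefit: boolean array, True where larger criterion values are better."""
        R = X / np.sqrt((X ** 2).sum(axis=0))          # vector normalisation
        V = R * weights
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
        d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
        return d_neg / (d_pos + d_neg)                 # higher is better

    # Three hypothetical SuDS schemes scored on flood volume, flood duration,
    # cost (all "smaller is better") and resilience ("larger is better")
    X = np.array([[120.0, 6.0, 1.8, 0.72],
                  [ 95.0, 5.0, 2.4, 0.80],
                  [140.0, 8.0, 1.2, 0.65]])
    benefit = np.array([False, False, False, True])
    closeness = topsis(X, entropy_weights(X), benefit)
    best_scheme = int(np.argmax(closeness))

Because the entropy weights are derived from the data rather than elicited from experts, the ranking avoids the subjectivity the framework is designed to remove.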
ERIC Educational Resources Information Center
Sweet, Shauna J.; Rupp, Andre A.
2012-01-01
The "evidence-centered design" (ECD) framework is a powerful tool that supports careful and critical thinking about the identification and accumulation of evidence in assessment contexts. In this paper, we demonstrate how the ECD framework provides critical support for designing simulation studies to investigate statistical methods…
A Framework for Designing Cluster Randomized Trials with Binary Outcomes
ERIC Educational Resources Information Center
Spybrook, Jessaca; Martinez, Andres
2011-01-01
The purpose of this paper is to provide a framework for approaching a power analysis for a CRT (cluster randomized trial) with a binary outcome. The authors suggest a framework in the context of a simple CRT and then extend it to a blocked design, or a multi-site cluster randomized trial (MSCRT). The framework is based on proportions, an…
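A standard variance-inflation (design effect) approximation gives a feel for the kind of calculation such a framework supports; this is a generic textbook formula for a simple two-arm CRT with a binary outcome, not necessarily the authors' exact approach, and the example numbers are hypothetical.

    from math import ceil
    from statistics import NormalDist

    def clusters_per_arm(p1, p2, m, icc, alpha=0.05, power=0.8):
        """Approximate number of clusters per arm for a two-arm CRT with a
        binary outcome, using the design-effect inflation of the usual
        two-proportion sample size formula.
        p1, p2 : outcome proportions in the two arms
        m      : average cluster size
        icc    : intraclass correlation coefficient
        """
        z = NormalDist()
        z_a = z.inv_cdf(1 - alpha / 2)
        z_b = z.inv_cdf(power)
        deff = 1 + (m - 1) * icc                       # design effect
        var = p1 * (1 - p1) + p2 * (1 - p2)
        return ceil((z_a + z_b) ** 2 * var * deff / (m * (p1 - p2) ** 2))

    # e.g. detect a drop from 30% to 20% with 50 students per school, ICC = 0.02
    k = clusters_per_arm(p1=0.30, p2=0.20, m=50, icc=0.02)   # about 12 per arm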
Biomimetic mineralization of metal-organic frameworks around polysaccharides.
Liang, Kang; Wang, Ru; Boutter, Manon; Doherty, Cara M; Mulet, Xavier; Richardson, Joseph J
2017-01-19
Biomimetic mineralization exploits natural biomineralization processes for the design and fabrication of synthetic functional materials. Here, we report for the first time the use of carbohydrates (polysaccharides) for the biomimetic crystallization of metal-organic frameworks. This discovery greatly expands the potential and diversity of biomimetic approaches for the design, synthesis, and functionalization of new bio-metal-organic framework composite materials.
A Conceptual Framework for Educational Design at Modular Level to Promote Transfer of Learning
ERIC Educational Resources Information Center
Botma, Yvonne; Van Rensburg, G. H.; Coetzee, I. M.; Heyns, T.
2015-01-01
Students bridge the theory-practice gap when they apply in practice what they have learned in class. A conceptual framework was developed that can serve as foundation to design for learning transfer at modular level. The framework is based on an adopted and adapted systemic model of transfer of learning, existing learning theories, constructive…
ERIC Educational Resources Information Center
Guerra-Lopez, Ingrid; Toker, Sacip
2012-01-01
This article illustrates the application of the Impact Evaluation Process for the design of a performance measurement and evaluation framework for an urban high school. One of the key aims of this framework is to enhance decision-making by providing timely feedback about the effectiveness of various performance improvement interventions. The…
ERIC Educational Resources Information Center
Ifenthaler, Dirk; Gosper, Maree
2014-01-01
This paper introduces the MAPLET framework that was developed to map and link teaching aims, learning processes, learner expertise and technologies. An experimental study with 65 participants is reported to test the effectiveness of the framework as a guide to the design of lessons embedded within larger units of study. The findings indicate the…
Redwood-Campbell, Lynda; Pakes, Barry; Rouleau, Katherine; MacDonald, Colla J; Arya, Neil; Purkey, Eva; Schultz, Karen; Dhatt, Reena; Wilson, Briana; Hadi, Abdullahel; Pottie, Kevin
2011-07-22
Recognizing the growing demand from medical students and residents for more comprehensive global health training, and the paucity of explicit curricula on such issues, global health and curriculum experts from the six Ontario Family Medicine Residency Programs worked together to design a framework for global health curricula in family medicine training programs. A working group comprised of global health educators from Ontario's six medical schools conducted a scoping review of global health curricula, competencies, and pedagogical approaches. The working group then hosted a full day meeting, inviting experts in education, clinical care, family medicine and public health, and developed a consensus process and draft framework to design global health curricula. Through a series of weekly teleconferences over the next six months, the framework was revised and used to guide the identification of enabling global health competencies (behaviours, skills and attitudes) for Canadian Family Medicine training. The main outcome was an evidence-informed interactive framework http://globalhealth.ennovativesolution.com/ to provide a shared foundation to guide the design, delivery and evaluation of global health education programs for Ontario's family medicine residency programs. The curriculum framework blended a definition and mission for global health training, core values and principles, global health competencies aligning with the Canadian Medical Education Directives for Specialists (CanMEDS) competencies, and key learning approaches. The framework guided the development of subsequent enabling competencies. The shared curriculum framework can support the design, delivery and evaluation of global health curriculum in Canada and around the world, lay the foundation for research and development, provide consistency across programmes, and support the creation of learning and evaluation tools to align with the framework. The process used to develop this framework can be applied to other aspects of residency curriculum development.
Building a Semantic Framework for eScience
NASA Astrophysics Data System (ADS)
Movva, S.; Ramachandran, R.; Maskey, M.; Li, X.
2009-12-01
The e-Science vision focuses on the use of advanced computing technologies to support scientists. Recent research efforts in this area have focused primarily on “enabling” use of infrastructure resources for both data and computational access, especially in the Geosciences. One of the gaps in existing e-Science efforts has been the failure to incorporate stable semantic technologies within the design process itself. In this presentation, we describe our effort in designing a framework for e-Science built using Service Oriented Architecture. Our framework provides users with capabilities to create science workflows and mine distributed data. Our e-Science framework is being designed around a mass market tool to promote reusability across many projects. Semantics is an integral part of this framework and our design goal is to leverage the latest stable semantic technologies. The use of these stable semantic technologies will provide the users of our framework with useful features such as: allowing search engines to find their content with RDFa tags; creating an RDF triple data store for their content; creating RDF endpoints to share with others; and semantically mashing their content with other online content available as RDF endpoints.
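A minimal sketch of the kind of RDF plumbing such a framework would rely on, using rdflib; the namespace, resource names and properties below are placeholders rather than the project's actual vocabulary.

    from rdflib import Graph, Literal, Namespace, RDF, URIRef

    EX = Namespace("http://example.org/escience/")   # placeholder namespace

    g = Graph()
    dataset = URIRef(EX + "dataset/modis-ndvi-2009")
    g.add((dataset, RDF.type, EX.Dataset))
    g.add((dataset, EX.variable, Literal("NDVI")))
    g.add((dataset, EX.region, Literal("Alabama")))

    # Serialize the triples so they can be shared or mashed with other endpoints
    turtle = g.serialize(format="turtle")

    # Query the local triple store with SPARQL
    results = g.query(
        "SELECT ?d WHERE { ?d <http://example.org/escience/variable> 'NDVI' } "
    )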
SoMIR framework for designing high-NDBP photonic crystal waveguides.
Mirjalili, Seyed Mohammad
2014-06-20
This work proposes a modularized framework for designing the structure of photonic crystal waveguides (PCWs) and reducing human involvement during the design process. The proposed framework consists of three main modules: parameters module, constraints module, and optimizer module. The first module is responsible for defining the structural parameters of a given PCW. The second module defines various limitations in order to achieve desirable optimum designs. The third module is the optimizer, in which a numerical optimization method is employed to perform optimization. As case studies, two new structures called Ellipse PCW (EPCW) and Hypoellipse PCW (HPCW) with different shape of holes in each row are proposed and optimized by the framework. The calculation results show that the proposed framework is able to successfully optimize the structures of the new EPCW and HPCW. In addition, the results demonstrate the applicability of the proposed framework for optimizing different PCWs. The results of the comparative study show that the optimized EPCW and HPCW provide 18% and 9% significant improvements in normalized delay-bandwidth product (NDBP), respectively, compared to the ring-shape-hole PCW, which has the highest NDBP in the literature. Finally, the simulations of pulse propagation confirm the manufacturing feasibility of both optimized structures.
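For reference, the normalized delay-bandwidth product is usually defined in the slow-light waveguide literature as the average group index over the usable bandwidth multiplied by the fractional bandwidth; assuming the paper uses this conventional figure of merit:

    \mathrm{NDBP} = \bar{n}_g \, \frac{\Delta\omega}{\omega_0}

where \bar{n}_g is the group index averaged over the bandwidth \Delta\omega and \omega_0 is the center frequency of that band.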
Origin and Consequences of the Relationship between Protein Mean and Variance
Vallania, Francesco Luigi Massimo; Sherman, Marc; Goodwin, Zane; Mogno, Ilaria; Cohen, Barak Alon; Mitra, Robi David
2014-01-01
Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome. PMID:25062021
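The reported scaling can be recovered from mean-variance data with a simple log-log fit; the sketch below uses synthetic data generated with an exponent of 1.69 purely to illustrate the estimation, not the authors' stochastic model.

    import numpy as np

    def fit_noise_scaling(means, variances):
        """Fit variance = a * mean^b on log-log axes and return (a, b).
        The exponent b is the quantity reported as ~1.69 (experiment)
        and ~1.6 (model) in the abstract."""
        logm, logv = np.log(means), np.log(variances)
        b, log_a = np.polyfit(logm, logv, 1)
        return np.exp(log_a), b

    # Synthetic illustration: proteins whose variance follows mean^1.69
    rng = np.random.default_rng(2)
    mu = np.exp(rng.uniform(0, 8, size=500))
    sigma2 = 0.5 * mu ** 1.69 * np.exp(rng.normal(0, 0.2, size=500))
    a, b = fit_noise_scaling(mu, sigma2)   # b should come out close to 1.69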
Presence+Experience: A Framework for the Purposeful Design of Presence in Online Courses
ERIC Educational Resources Information Center
Dunlap, Joanna C.; Verma, Geeta; Johnson, Heather Lynn
2016-01-01
In this article, we share a framework for the purposeful design of presence in online courses. Instead of developing something new, we looked at two models that have helped us with previous instructional design projects, providing us with some assurance that the design decisions we were making were fundamentally sound. As we began to work with the…
Universal Design for Learning: A Collaborative Framework for Designing Inclusive Curriculum
ERIC Educational Resources Information Center
Wu, Xiuwen
2010-01-01
The purpose of this article is twofold: (1) to introduce the concept of Universal Design for Learning (UDL) by going to its origination in the field of architecture--the Universal Design, in order to illustrate the inclusive nature of UDL; and (2) to shed light on one of the most important aspects of UDL--collaboration. The UDL framework provides…
6-D, A Process Framework for the Design and Development of Web-based Systems.
ERIC Educational Resources Information Center
Christian, Phillip
2001-01-01
Explores how the 6-D framework can form the core of a comprehensive systemic strategy and help provide a supporting structure for more robust design and development while allowing organizations to support whatever methods and models best suit their purpose. 6-D stands for the phases of Web design and development: Discovery, Definition, Design,…
A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2005-07-01
The goal of this proposed research is to provide an efficient and user friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with UTCHEM production output. Task 4 concerns the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.
Social research design: framework for integrating philosophical and practical elements.
Cunningham, Kathryn Burns
2014-09-01
To provide and elucidate a comprehensible framework for the design of social research. An abundance of information exists concerning the process of designing social research. The overall message that can be gleaned is that numerable elements - both philosophical (ontological and epistemological assumptions and theoretical perspective) and practical (issue to be addressed, purpose, aims and research questions) - are influential in the process of selecting a research methodology and methods, and that these elements and their inter-relationships must be considered and explicated to ensure a coherent research design that enables well-founded and meaningful conclusions. There is a lack of guidance concerning the integration of practical and philosophical elements, hindering their consideration and explication. The author's PhD research into loneliness and cancer. This is a methodology paper. A guiding framework that incorporates all of the philosophical and practical elements influential in social research design is presented. The chronological and informative relationships between the elements are discussed. The framework presented can be used by social researchers to consider and explicate the practical and philosophical elements influential in the selection of a methodology and methods. It is hoped that the framework presented will aid social researchers with the design and the explication of the design of their research, thereby enhancing the credibility of their projects and enabling their research to establish well-founded and meaningful conclusions.
NASA Astrophysics Data System (ADS)
Alfadhlani; Samadhi, T. M. A. Ari; Ma’ruf, Anas; Setiasyah Toha, Isa
2018-03-01
Assembly is a part of the manufacturing process that must be considered at the product design stage. Design for Assembly (DFA) is a method to evaluate a product design in order to make it simpler, easier and quicker to assemble, so that assembly cost is reduced. This article discusses a framework for developing a computer-based DFA method. The method is expected to aid the product designer in extracting data, evaluating the assembly process, and providing recommendations for improving the product design. These three steps should ideally be performed without an interactive process or user intervention, so that product design evaluation can be done automatically. The input for the proposed framework is a 3D solid engineering drawing. Product design evaluation is performed by: minimizing the number of components; generating assembly sequence alternatives; selecting the best assembly sequence based on the minimum number of assembly reorientations; and providing suggestions for design improvement.
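A toy sketch of the last two evaluation steps (enumerating assembly sequences and picking the one with the fewest reorientations); the parts, assembly directions and precedence constraints below are hypothetical stand-ins for information that the intended system would extract from the 3D solid model.

    from itertools import permutations

    # Hypothetical assembly directions per part (would come from the 3D model)
    directions = {"base": "+z", "shaft": "+z", "gear": "+x", "cover": "+z"}

    # Precedence constraints: part -> parts that must already be assembled
    precedence = {"shaft": {"base"}, "gear": {"shaft"}, "cover": {"gear", "base"}}

    def is_feasible(seq):
        placed = set()
        for part in seq:
            if not precedence.get(part, set()) <= placed:
                return False
            placed.add(part)
        return True

    def reorientations(seq):
        """Number of times the assembly direction changes along the sequence."""
        return sum(1 for a, b in zip(seq, seq[1:]) if directions[a] != directions[b])

    feasible = [s for s in permutations(directions) if is_feasible(s)]
    best = min(feasible, key=reorientations)   # sequence with fewest reorientations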
Planning in context: A situated view of children's management of science projects
NASA Astrophysics Data System (ADS)
Marshall, Susan Katharine
This study investigated children's collaborative planning of a complex, long-term software design project. Using sociocultural methods, it examined over time the development of design teams' planning negotiations and tools to document the coconstruction of cultural frameworks to organize teams' shared understanding of what and how to plan. Results indicated that student teams developed frameworks to address a set of common planning functions that included design planning, project metaplanning (things such as division of labor or sharing of computer resources) and team collaboration management planning. There were also some between-team variations in planning frameworks, within a bandwidth of options. Teams engaged in opportunistic planning, which reflected shifts in strategies in response to new circumstances over time. Team members with past design project experience ("oldtimers") demonstrated the transfer of their planning framework to the current design task, and they supported the developing participation of "newcomers." Teams constructed physical tools (e.g. planning boards) that acted as visual representations of teams' planning frameworks, and inscriptions of team thinking. The assigned functions of the tools also shifted over time with changing project circumstances. The discussion reexamines current approaches to the study of planning and discusses their educational implications.
Protein design to understand peptide ligand recognition by tetratricopeptide repeat proteins.
Cortajarena, Aitziber L; Kajander, Tommi; Pan, Weilan; Cocco, Melanie J; Regan, Lynne
2004-04-01
Protein design aims to understand the fundamentals of protein structure by creating novel proteins with pre-specified folds. An equally important goal is to understand protein function by creating novel proteins with pre-specified activities. Here we describe the design and characterization of a tetratricopeptide (TPR) protein, which binds to the C-terminal peptide of the eukaryotic chaperone Hsp90. The design emphasizes the importance of both direct, short-range protein-peptide interactions and of long-range electrostatic optimization. We demonstrate that the designed protein binds specifically to the desired peptide and discriminates between it and the similar C-terminal peptide of Hsp70.
Understanding cancer complexome using networks, spectral graph theory and multilayer framework
NASA Astrophysics Data System (ADS)
Rai, Aparna; Pradhan, Priodyuti; Nagraj, Jyothi; Lohitesh, K.; Chowdhury, Rajdeep; Jalan, Sarika
2017-02-01
Cancer complexome comprises a heterogeneous and multifactorial milieu that varies in cytology, physiology, signaling mechanisms and response to therapy. The combined framework of network theory and spectral graph theory, along with multilayer analysis, provides a comprehensive approach to analyze the proteomic data of seven different cancers, namely, breast, oral, ovarian, cervical, lung, colon and prostate. Our analysis demonstrates that the protein-protein interaction networks of the normal and the cancerous tissues associated with the seven cancers have overall similar structural and spectral properties. However, a few of these properties indicate unsystematic changes from the normal to the disease networks, depicting differences in the interactions and highlighting changes in the complexity of different cancers. Importantly, analysis of the proteins common to all the cancer networks reveals a few proteins, namely the sensors, which not only occupy significant positions in all the layers but also have direct involvement in causing cancer. The prediction and analysis of miRNAs targeting these sensor proteins hint towards a possible role of these proteins in tumorigenesis. This novel approach helps in understanding cancer at the fundamental level and provides a clue to develop the promising and nascent concept of single-drug therapy for multiple diseases as well as personalized medicine.
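As an illustration of the kind of structural and spectral properties compared in such an analysis, the sketch below builds a toy protein-protein interaction network and reports its adjacency spectrum with networkx and numpy. The edge lists are invented placeholders, not the seven-cancer proteomic data used in the study.

```python
import networkx as nx
import numpy as np

def spectral_summary(edges):
    """Build a protein-protein interaction network and report basic
    structural and spectral properties of its adjacency matrix."""
    g = nx.Graph(edges)
    eigenvalues = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(g)))
    return {
        "n_proteins": g.number_of_nodes(),
        "n_interactions": g.number_of_edges(),
        "mean_degree": 2 * g.number_of_edges() / g.number_of_nodes(),
        "clustering": nx.average_clustering(g),
        "largest_eigenvalue": eigenvalues[-1],
        "spectral_gap": eigenvalues[-1] - eigenvalues[-2],
        "zero_eigenvalues": int(np.sum(np.isclose(eigenvalues, 0.0))),
    }

# Hypothetical toy edge lists standing in for normal and disease PPI data.
normal = [("TP53", "MDM2"), ("MDM2", "UBC"), ("TP53", "EP300"), ("EP300", "UBC")]
cancer = [("TP53", "MDM2"), ("MDM2", "UBC"), ("TP53", "EP300"), ("TP53", "UBC")]

for label, edges in [("normal", normal), ("cancer", cancer)]:
    print(label, spectral_summary(edges))
```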
Ibrahim, Wisam; Abadeh, Mohammad Saniee
2017-05-21
Protein fold recognition is an important problem in bioinformatics for predicting the three-dimensional structure of a protein. One of the most challenging tasks in the protein fold recognition problem is the extraction of efficient features from amino-acid sequences to obtain better classifiers. In this paper, we propose six descriptors to extract features from protein sequences. These descriptors are applied in the first stage of a three-stage framework, PCA-DELM-LDA, to extract feature vectors from the amino-acid sequences. Principal Component Analysis (PCA) is used to reduce the number of extracted features. The extracted feature vectors are used together with the original features to improve the performance of the Deep Extreme Learning Machine (DELM) in the second stage. Four new features extracted from the second stage are used in the third stage by Linear Discriminant Analysis (LDA) to classify the instances into 27 folds. The proposed framework is implemented on the independent and combined feature sets in the SCOP datasets. The experimental results show that the feature vectors extracted in the first stage improve the ability of the DELM to extract new useful features in the second stage. Copyright © 2017 Elsevier Ltd. All rights reserved.
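A minimal sketch of the three-stage idea follows, under stated assumptions: the data are synthetic, and a single random-projection hidden layer stands in for the Deep Extreme Learning Machine, whose implementation is not given in the abstract; PCA and LDA come from scikit-learn.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 300 sequences x 120 descriptor features, 27 folds.
X = rng.normal(size=(300, 120))
y = rng.integers(0, 27, size=300)

# Stage 1: reduce the descriptor features with PCA.
X_pca = PCA(n_components=40).fit_transform(X)

# Stage 2: a single random-projection hidden layer stands in for the DELM;
# its four hidden activations play the role of the newly extracted features.
W = rng.normal(size=(X_pca.shape[1], 4))
hidden = np.tanh(X_pca @ W)

# Stage 3: LDA classifies instances into the 27 folds using the reduced
# features augmented with the stage-2 features.
features = np.hstack([X_pca, hidden])
lda = LinearDiscriminantAnalysis().fit(features, y)
print("training accuracy:", lda.score(features, y))
```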
Stochastic calculus of protein filament formation under spatial confinement
NASA Astrophysics Data System (ADS)
Michaels, Thomas C. T.; Dear, Alexander J.; Knowles, Tuomas P. J.
2018-05-01
The growth of filamentous aggregates from precursor proteins is a process of central importance to both normal and aberrant biology, for instance as the driver of devastating human disorders such as Alzheimer's and Parkinson's diseases. The conventional theoretical framework for describing this class of phenomena in bulk is based upon the mean-field limit of the law of mass action, which implicitly assumes deterministic dynamics. However, protein filament formation processes under spatial confinement, such as in microdroplets or in the cellular environment, show intrinsic variability due to the molecular noise associated with small-volume effects. To account for this effect, in this paper we introduce a stochastic differential equation approach for investigating protein filament formation processes under spatial confinement. Using this framework, we study the statistical properties of stochastic aggregation curves, as well as the distribution of reaction lag-times. Moreover, we establish the gradual breakdown of the correlation between lag-time and normalized growth rate under spatial confinement. Our results establish the key role of spatial confinement in determining the onset of stochasticity in protein filament formation and offer a formalism for studying protein aggregation kinetics in small volumes in terms of the kinetic parameters describing the aggregation dynamics in bulk.
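To illustrate the small-volume variability the abstract describes, here is a minimal discrete stochastic (Gillespie-type) simulation of primary nucleation and elongation rather than the stochastic differential equation formalism of the paper; the rate constants, nucleus size, and the half-completion definition of the lag time are illustrative assumptions.

```python
import numpy as np

def gillespie_aggregation(m0, nc=2, kn=1e-5, ke=1e-3, rng=None):
    """Stochastic simulation of filament formation in a small volume:
    primary nucleation of size nc and elongation by monomer addition.
    Rate constants are illustrative and already folded with the volume."""
    rng = rng or np.random.default_rng()
    t, m, fibrils, mass = 0.0, m0, 0, 0
    trajectory = [(t, mass)]
    while m >= nc or (m > 0 and fibrils > 0):
        a_nuc = kn * m ** nc if m >= nc else 0.0
        a_elo = ke * fibrils * m
        a_tot = a_nuc + a_elo
        if a_tot == 0.0:
            break
        t += rng.exponential(1.0 / a_tot)
        if rng.random() < a_nuc / a_tot:        # nucleation event
            m, fibrils, mass = m - nc, fibrils + 1, mass + nc
        else:                                    # elongation event
            m, mass = m - 1, mass + 1
        trajectory.append((t, mass))
    return np.array(trajectory)

# Repeating the simulation mimics droplet-to-droplet variability in the
# aggregation curves; the lag time is crudely approximated by the time to
# reach half of the final aggregated mass.
lag_times = []
for seed in range(20):
    traj = gillespie_aggregation(500, rng=np.random.default_rng(seed))
    half = 0.5 * traj[-1, 1]
    lag_times.append(traj[np.argmax(traj[:, 1] >= half), 0])
print("mean lag time:", np.mean(lag_times), "std:", np.std(lag_times))
```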
Proteomics profiling of interactome dynamics by colocalisation analysis (COLA).
Mardakheh, Faraz K; Sailem, Heba Z; Kümper, Sandra; Tape, Christopher J; McCully, Ryan R; Paul, Angela; Anjomani-Virmouni, Sara; Jørgensen, Claus; Poulogiannis, George; Marshall, Christopher J; Bakal, Chris
2016-12-20
Localisation and protein function are intimately linked in eukaryotes, as proteins are localised to specific compartments where they come into proximity of other functionally relevant proteins. Significant co-localisation of two proteins can therefore be indicative of their functional association. We here present COLA, a proteomics based strategy coupled with a bioinformatics framework to detect protein-protein co-localisations on a global scale. COLA reveals functional interactions by matching proteins with significant similarity in their subcellular localisation signatures. The rapid nature of COLA allows mapping of interactome dynamics across different conditions or treatments with high precision.
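A minimal sketch of the matching step, under the assumption that each protein's localisation signature is a vector of relative abundances across subcellular fractions and that similarity is scored by Pearson correlation; the protein names, fraction values, and threshold are hypothetical.

```python
from itertools import combinations
from scipy.stats import pearsonr

# Hypothetical localisation signatures: relative abundance of each protein
# across five subcellular fractions (columns), as quantified by proteomics.
signatures = {
    "ProteinA": [0.70, 0.10, 0.05, 0.10, 0.05],
    "ProteinB": [0.65, 0.15, 0.05, 0.10, 0.05],
    "ProteinC": [0.05, 0.05, 0.60, 0.20, 0.10],
    "ProteinD": [0.10, 0.05, 0.55, 0.20, 0.10],
}

def colocalised_pairs(signatures, r_threshold=0.95):
    """Return protein pairs whose subcellular localisation signatures are
    highly similar (here: Pearson r above a fixed threshold)."""
    pairs = []
    for a, b in combinations(signatures, 2):
        r, p = pearsonr(signatures[a], signatures[b])
        if r >= r_threshold:
            pairs.append((a, b, round(r, 3), round(p, 4)))
    return pairs

print(colocalised_pairs(signatures))
```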
Computational approaches for rational design of proteins with novel functionalities
Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul
2012-01-01
Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein design has recently succeeded in engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes. PMID:24688643
RIPOSTE: a framework for improving the design and analysis of laboratory-based research
Masca, Nicholas GD; Hensor, Elizabeth MA; Cornelius, Victoria R; Buffa, Francesca M; Marriott, Helen M; Eales, James M; Messenger, Michael P; Anderson, Amy E; Boot, Chris; Bunce, Catey; Goldin, Robert D; Harris, Jessica; Hinchliffe, Rod F; Junaid, Hiba; Kingston, Shaun; Martin-Ruiz, Carmen; Nelson, Christopher P; Peacock, Janet; Seed, Paul T; Shinkins, Bethany; Staples, Karl J; Toombs, Jamie; Wright, Adam KA; Teare, M Dawn
2015-01-01
Lack of reproducibility is an ongoing problem in some areas of the biomedical sciences. Poor experimental design and a failure to engage with experienced statisticians at key stages in the design and analysis of experiments are two factors that contribute to this problem. The RIPOSTE (Reducing IrreProducibility in labOratory STudiEs) framework has been developed to support early and regular discussions between scientists and statisticians in order to improve the design, conduct and analysis of laboratory studies and, therefore, to reduce irreproducibility. This framework is intended for use during the early stages of a research project, when specific questions or hypotheses are proposed. The essential points within the framework are explained and illustrated using three examples (a medical equipment test, a macrophage study and a gene expression study). Sound study design minimises the possibility of bias being introduced into experiments and leads to higher quality research with more reproducible results. DOI: http://dx.doi.org/10.7554/eLife.05519.001 PMID:25951517
Onyx-Advanced Aeropropulsion Simulation Framework Created
NASA Technical Reports Server (NTRS)
Reed, John A.
2001-01-01
The Numerical Propulsion System Simulation (NPSS) project at the NASA Glenn Research Center is developing a new software environment for analyzing and designing aircraft engines and, eventually, space transportation systems. Its purpose is to dramatically reduce the time, effort, and expense necessary to design and test jet engines by creating sophisticated computer simulations of an aerospace object or system (refs. 1 and 2). Through a university grant as part of that effort, researchers at the University of Toledo have developed Onyx, an extensible Java-based (Sun Microsystems, Inc.), object-oriented simulation framework, to investigate how advanced software design techniques can be successfully applied to aeropropulsion system simulation (refs. 3 and 4). The design of Onyx's architecture enables users to customize and extend the framework to add new functionality or adapt simulation behavior as required. It exploits object-oriented technologies, such as design patterns, domain frameworks, and software components, to develop a modular system in which users can dynamically replace components with others having different functionality.
77 FR 70124 - Policy Statement on the Scenario Design Framework for Stress Testing
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-23
... Statement on the Scenario Design Framework for Stress Testing AGENCY: Board of Governors of the Federal... Board is requesting public comment on a policy statement on the approach to scenario design for stress testing that would be used in connection with the supervisory and company-run stress tests conducted under...
Framework for Implementing Engineering Senior Design Capstone Courses and Design Clinics
ERIC Educational Resources Information Center
Franchetti, Matthew; Hefzy, Mohamed Samir; Pourazady, Mehdi; Smallman, Christine
2012-01-01
Senior design capstone projects for engineering students are essential components of an undergraduate program that enhances communication, teamwork, and problem solving skills. Capstone projects with industry are well established in management, but not as heavily utilized in engineering. This paper outlines a general framework that can be used by…
Design-Based Research: Case of a Teaching Sequence on Mechanics
ERIC Educational Resources Information Center
Tiberghien, Andree; Vince, Jacques; Gaidioz, Pierre
2009-01-01
Design-based research, and particularly its theoretical status, is a subject of debate in the science education community. In the first part of this paper, a theoretical framework drawn up to develop design-based research will be presented. This framework is mainly based on epistemological analysis of physics modelling, learning and teaching…
A Framework for the Flexible Content Packaging of Learning Objects and Learning Designs
ERIC Educational Resources Information Center
Lukasiak, Jason; Agostinho, Shirley; Burnett, Ian; Drury, Gerrard; Goodes, Jason; Bennett, Sue; Lockyer, Lori; Harper, Barry
2004-01-01
This paper presents a platform-independent method for packaging learning objects and learning designs. The method, entitled a Smart Learning Design Framework, is based on the MPEG-21 standard, and uses IEEE Learning Object Metadata (LOM) to provide bibliographic, technical, and pedagogical descriptors for the retrieval and description of learning…
Rouwette, Tom; Sondermann, Julia; Avenali, Luca; Gomez-Varela, David; Schmidt, Manuela
2016-06-01
Chronic pain is a complex disease with limited treatment options. Several profiling efforts have been employed with the aim to dissect its molecular underpinnings. However, generated results are often inconsistent and nonoverlapping, which is largely because of inherent technical constraints. Emerging data-independent acquisition (DIA)-mass spectrometry (MS) has the potential to provide unbiased, reproducible and quantitative proteome maps - a prerequisite for standardization among experiments. Here, we designed a DIA-based proteomics workflow to profile changes in the abundance of dorsal root ganglia (DRG) proteins in two mouse models of chronic pain, inflammatory and neuropathic. We generated a DRG-specific spectral library containing 3067 DRG proteins, which enables their standardized quantification by means of DIA-MS in any laboratory. Using this resource, we profiled 2526 DRG proteins in each biological replicate of both chronic pain models and respective controls with unprecedented reproducibility. We detected numerous differentially regulated proteins, the majority of which exhibited pain model-specificity. Our approach recapitulates known biology and discovers dozens of proteins that have not been characterized in the somatosensory system before. Functional validation experiments and analysis of mouse pain behaviors demonstrate that indeed meaningful protein alterations were discovered. These results illustrate how the application of DIA-MS can open new avenues to achieve the long-awaited standardization in the molecular dissection of pathologies of the somatosensory system. Therefore, our findings provide a valuable framework to qualitatively extend our understanding of chronic pain and somatosensation. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
Framework for Development of Object-Oriented Software
NASA Technical Reports Server (NTRS)
Perez-Poveda, Gus; Ciavarella, Tony; Nieten, Dan
2004-01-01
The Real-Time Control (RTC) Application Framework is a high-level software framework written in C++ that supports the rapid design and implementation of object-oriented application programs. This framework provides built-in functionality that solves common software development problems within distributed client-server, multi-threaded, and embedded programming environments. When using the RTC Framework to develop software for a specific domain, designers and implementers can focus entirely on the details of the domain-specific software rather than on creating custom solutions, utilities, and frameworks for the complexities of the programming environment. The RTC Framework was originally developed as part of a Space Shuttle Launch Processing System (LPS) replacement project called Checkout and Launch Control System (CLCS). As a result of the framework's development, CLCS software development time was reduced by 66 percent. The framework is generic enough for developing applications outside of the launch-processing system domain. Other applicable high-level domains include command and control systems and simulation/training systems.
Browne, Fiona; Wang, Haiying; Zheng, Huiru; Azuaje, Francisco
2010-03-01
This study applied a knowledge-driven data integration framework for the inference of protein-protein interactions (PPI). Evidence from diverse genomic features is integrated using a knowledge-driven Bayesian network (KD-BN). Receiver operating characteristic (ROC) curves may not be the optimal assessment method to evaluate a classifier's performance in PPI prediction, as the majority of the area under the curve (AUC) may not represent biologically meaningful results. It may be of benefit to interpret the AUC of a partial ROC curve in which biologically interesting results are represented. Therefore, the novel application of the assessment method referred to as the partial ROC has been employed in this study to assess the predictive performance of PPI predictions, along with calculating the true positive/false positive rate and true positive/positive rate. By incorporating domain knowledge into the construction of the KD-BN, we demonstrate improvement in predictive performance compared with previous studies based upon the Naive Bayesian approach. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
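For the assessment step, the standardised partial AUC restricted to a low false-positive-rate region can be computed directly with scikit-learn, as sketched below on synthetic prediction scores; the knowledge-driven Bayesian network itself is not implemented here, and the score generator is purely illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical PPI predictions: 1 = true interaction, 0 = non-interaction,
# with continuous scores standing in for classifier output.
y_true = rng.integers(0, 2, size=1000)
y_score = y_true * 0.3 + rng.normal(0.0, 0.4, size=1000)

full_auc = roc_auc_score(y_true, y_score)
# Standardised partial AUC restricted to the low false-positive-rate region,
# the biologically interesting part of the ROC curve for PPI prediction.
partial_auc = roc_auc_score(y_true, y_score, max_fpr=0.1)
print(f"full AUC = {full_auc:.3f}, partial AUC (FPR <= 0.1) = {partial_auc:.3f}")
```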
ssbio: a Python framework for structural systems biology.
Mih, Nathan; Brunk, Elizabeth; Chen, Ke; Catoiu, Edward; Sastry, Anand; Kavvas, Erol; Monk, Jonathan M; Zhang, Zhen; Palsson, Bernhard O
2018-06-15
Working with protein structures at the genome-scale has been challenging in a variety of ways. Here, we present ssbio, a Python package that provides a framework to easily work with structural information in the context of genome-scale network reconstructions, which can contain thousands of individual proteins. The ssbio package provides an automated pipeline to construct high quality genome-scale models with protein structures (GEM-PROs), wrappers to popular third-party programs to compute associated protein properties, and methods to visualize and annotate structures directly in Jupyter notebooks, thus lowering the barrier of linking 3D structural data with established systems workflows. ssbio is implemented in Python and available to download under the MIT license at http://github.com/SBRG/ssbio. Documentation and Jupyter notebook tutorials are available at http://ssbio.readthedocs.io/en/latest/. Interactive notebooks can be launched using Binder at https://mybinder.org/v2/gh/SBRG/ssbio/master?filepath=Binder.ipynb. Supplementary data are available at Bioinformatics online.
A Proteome-wide Fission Yeast Interactome Reveals Network Evolution Principles from Yeasts to Human.
Vo, Tommy V; Das, Jishnu; Meyer, Michael J; Cordero, Nicolas A; Akturk, Nurten; Wei, Xiaomu; Fair, Benjamin J; Degatano, Andrew G; Fragoza, Robert; Liu, Lisa G; Matsuyama, Akihisa; Trickey, Michelle; Horibata, Sachi; Grimson, Andrew; Yamano, Hiroyuki; Yoshida, Minoru; Roth, Frederick P; Pleiss, Jeffrey A; Xia, Yu; Yu, Haiyuan
2016-01-14
Here, we present FissionNet, a proteome-wide binary protein interactome for S. pombe, comprising 2,278 high-quality interactions, of which ∼ 50% were previously not reported in any species. FissionNet unravels previously unreported interactions implicated in processes such as gene silencing and pre-mRNA splicing. We developed a rigorous network comparison framework that accounts for assay sensitivity and specificity, revealing extensive species-specific network rewiring between fission yeast, budding yeast, and human. Surprisingly, although genes are better conserved between the yeasts, S. pombe interactions are significantly better conserved in human than in S. cerevisiae. Our framework also reveals that different modes of gene duplication influence the extent to which paralogous proteins are functionally repurposed. Finally, cross-species interactome mapping demonstrates that coevolution of interacting proteins is remarkably prevalent, a result with important implications for studying human disease in model organisms. Overall, FissionNet is a valuable resource for understanding protein functions and their evolution. Copyright © 2016 Elsevier Inc. All rights reserved.
USDA-ARS?s Scientific Manuscript database
In holometabolous insects, larval nutrition affects adult body size, a life history trait with a profound influence on performance and fitness. Individual nutritional components of larval diet are often complex and may interact with one another, necessitating the use of a geometric framework for und...
A framework for designing hand hygiene educational interventions in schools.
Appiah-Brempong, Emmanuel; Harris, Muriel J; Newton, Samuel; Gulis, Gabriel
2018-03-01
Hygiene education appears to be the commonest school-based intervention for preventing infectious diseases, especially in the developing world. Nevertheless, there remains a gap in the literature regarding a school-specific, theory-based framework for designing a hand hygiene educational intervention in schools. We sought to suggest a framework underpinned by psychosocial theories towards bridging this knowledge gap. Furthermore, we sought to propound a more comprehensive definition of hand hygiene which could guide the conceptualisation of hand hygiene interventions in varied settings. The literature search was guided by a standardized tool, and literature was retrieved on the basis of predetermined inclusion criteria. Databases consulted include PubMed, ERIC, and EBSCO host (Medline, CINAHL, PsycINFO, etc.). Evidence concerning a theoretical framework to aid the design of school-based hand hygiene educational interventions is summarized narratively. School-based hand hygiene educational interventions seeking to positively influence behavioural outcomes could consider enhancing psychosocial variables including behavioural capacity, attitudes and subjective norms (normative beliefs and motivation to comply). A framework underpinned by formalized psychosocial theories has relevance and could enhance the design of hand hygiene educational interventions, especially in schools.
Computational protein design-the next generation tool to expand synthetic biology applications.
Gainza-Cirauqui, Pablo; Correia, Bruno Emanuel
2018-05-02
One powerful approach to engineer synthetic biology pathways is the assembly of proteins sourced from one or more natural organisms. However, synthetic pathways often require custom functions or biophysical properties not displayed by natural proteins, limitations that could be overcome through modern protein engineering techniques. Structure-based computational protein design is a powerful tool to engineer new functional capabilities in proteins, and it is beginning to have a profound impact in synthetic biology. Here, we review efforts to increase the capabilities of synthetic biology using computational protein design. We focus primarily on computationally designed proteins not only validated in vitro, but also shown to modulate different activities in living cells. Efforts made to validate computational designs in cells can illustrate both the challenges and opportunities in the intersection of protein design and synthetic biology. We also highlight protein design approaches, which although not validated as conveyors of new cellular function in situ, may have rapid and innovative applications in synthetic biology. We foresee that in the near-future, computational protein design will vastly expand the functional capabilities of synthetic cells. Copyright © 2018. Published by Elsevier Ltd.
DNA origami-based standards for quantitative fluorescence microscopy.
Schmied, Jürgen J; Raab, Mario; Forthmann, Carsten; Pibiri, Enrico; Wünsch, Bettina; Dammeyer, Thorben; Tinnefeld, Philip
2014-01-01
Validating and testing a fluorescence microscope or a microscopy method requires defined samples that can be used as standards. DNA origami is a new tool that provides a framework to place defined numbers of small molecules such as fluorescent dyes or proteins in a programmed geometry with nanometer precision. The flexibility and versatility in the design of DNA origami microscopy standards makes them ideally suited for the broad variety of emerging super-resolution microscopy methods. As DNA origami structures are durable and portable, they can become a universally available specimen to check the everyday functionality of a microscope. The standards are immobilized on a glass slide, and they can be imaged without further preparation and can be stored for up to 6 months. We describe a detailed protocol for the design, production and use of DNA origami microscopy standards, and we introduce a DNA origami rectangle, bundles and a nanopillar as fluorescent nanoscopic rulers. The protocol provides procedures for the design and realization of fluorescent marks on DNA origami structures, their production and purification, quality control, handling, immobilization, measurement and data analysis. The procedure can be completed in 1-2 d.
A user-centered model for designing consumer mobile health (mHealth) applications (apps).
Schnall, Rebecca; Rojas, Marlene; Bakken, Suzanne; Brown, William; Carballo-Dieguez, Alex; Carry, Monique; Gelaude, Deborah; Mosley, Jocelyn Patterson; Travers, Jasmine
2016-04-01
Mobile technologies are a useful platform for the delivery of health behavior interventions. Yet little work has been done to create a rigorous and standardized process for the design of mobile health (mHealth) apps. This project sought to explore the use of the Information Systems Research (ISR) framework as a guide for the design of mHealth apps. Our work was guided by the ISR framework, which comprises three cycles: Relevance, Rigor and Design. In the Relevance cycle, we conducted 5 focus groups with 33 targeted end-users. In the Rigor cycle, we performed a review to identify technology-based interventions for meeting the health prevention needs of our target population. In the Design cycle, we employed usability evaluation methods to iteratively develop and refine mock-ups for a mHealth app. Through an iterative process, we identified barriers and facilitators to the use of mHealth technology for HIV prevention for high-risk MSM, developed 'use cases' and identified relevant functional content and features for inclusion in a design document to guide future app development. Findings from our work support the use of the ISR framework as a guide for designing future mHealth apps. Results from this work provide detailed descriptions of the user-centered design and system development and have heuristic value for those venturing into the area of technology-based intervention work. Findings from this study support the use of the ISR framework as a guide for future mHealth app development. Use of the ISR framework is a potentially useful approach for the design of a mobile app that incorporates end-users' design preferences. Copyright © 2016 Elsevier Inc. All rights reserved.
Amanullah, Ayeman; Upadhyay, Arun; Joshi, Vibhuti; Mishra, Ribhav; Jana, Nihar Ranjan; Mishra, Amit
2017-12-01
Proteins are ordered, functional cellular entities required for normal health and organismal survival. The proteome is the complete set of cellularly expressed proteins, which regulates a wide range of physiological functions across all domains of life. In aging cells or under unfavorable cellular conditions, protein misfolding generates common pathological events linked with neurodegenerative diseases and aging. Current advances in proteome studies are steadily improving our knowledge of how protein misfolding, or the accumulation of misfolded proteins, can contribute to the impairment or depletion of proteome functions. Still, the underlying causes of this unrecoverable loss remain unclear, as does the question of how such transitions give rise to multifactorial degenerative pathological conditions in neurodegeneration. In this review, we specifically focus on and systematically summarize various molecular mechanisms of proteostasis maintenance, and discuss emerging neurobiological strategies and promising natural and pharmacological candidates that may be useful to counteract proteopathies. Our article emphasizes the urgent need to understand the fundamentals of proteostasis in order to design a new molecular framework and fruitful strategies for uncovering how proteome defects are associated with aging and neurodegenerative diseases. An enhanced understanding of the links between the proteome and neurobiological challenges may provide new basic concepts in the near future for pharmacological agents targeting impaired proteostasis and neurodegenerative diseases. Copyright © 2017 Elsevier Ltd. All rights reserved.
The role of proline substitutions within flexible regions on thermostability of luciferase.
Yu, Haoran; Zhao, Yang; Guo, Chao; Gan, Yiru; Huang, He
2015-01-01
Improving the stability of firefly luciferase has been a critical issue for its wider industrial applications. Studies of hyperthermophile proteins show that flexibility can be an effective indicator for identifying weak spots when engineering protein thermostability. However, the relationship among flexibility, activity and stability is unclear for most proteins. Proline is the most rigid residue and can be introduced to rigidify flexible regions and thereby enhance protein thermostability. We first apply three different methods, molecular dynamics (MD) simulation, B-FITTER and the framework rigidity optimized dynamics algorithm (FRODA), to determine the flexible regions of Photinus pyralis luciferase: Fragment 197-207, Fragment 471-481 and Fragment 487-495. Proline substitutions are then introduced to rigidify these flexible regions, yielding two mutants, D476P and H489P, within the most flexible regions. The H489P mutant shows improved thermostability while maintaining its catalytic efficiency compared with wild-type luciferase. Flexibility analysis confirms that the overall and local rigidity of the H489P mutant are greatly strengthened. The D476P mutant shows decreased thermostability, and the reason for this is elucidated at the molecular level. The S307P mutation, chosen at random outside the flexible regions, serves as a control; thermostability analysis shows that it has decreased kinetic stability and enhanced thermodynamic stability. Copyright © 2014 Elsevier B.V. All rights reserved.
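A minimal sketch of a B-factor-based flexibility scan (the quantity exploited by B-FITTER-style analyses): per-residue values are smoothed with a sliding window and residues above a simple threshold are flagged. The B-factor values, window size, and cutoff are hypothetical; a real analysis would read them from the luciferase crystal structure (e.g. via Biopython's PDB parser).

```python
import numpy as np

def flexible_regions(residue_bfactors, window=7, z_cutoff=1.0):
    """Flag candidate flexible residues from per-residue Calpha B-factors:
    a residue is flagged when its windowed mean B-factor exceeds
    mean + z_cutoff * std over the whole chain."""
    numbers = np.array([n for n, _ in residue_bfactors])
    bvals = np.array([b for _, b in residue_bfactors], dtype=float)
    smoothed = np.convolve(bvals, np.ones(window) / window, mode="same")
    threshold = smoothed.mean() + z_cutoff * smoothed.std()
    return numbers[smoothed > threshold]

# Hypothetical per-residue B-factors around one putative flexible fragment.
demo = [(i, 20.0) for i in range(190, 197)] + \
       [(i, 55.0) for i in range(197, 208)] + \
       [(i, 22.0) for i in range(208, 215)]
print("flexible residues:", flexible_regions(demo))
```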
Berenson, Daniel F; Weiss, Allison R; Wan, Zhu-Li; Weiss, Michael A
2011-12-01
The engineering of insulin analogs represents a triumph of structure-based protein design. A framework has been provided by structures of insulin hexamers. Containing a zinc-coordinated trimer of dimers, such structures represent a storage form of the active insulin monomer. Initial studies focused on destabilization of subunit interfaces. Because disassembly facilitates capillary absorption, such targeted destabilization enabled development of rapid-acting insulin analogs. Converse efforts were undertaken to stabilize the insulin hexamer and promote higher-order self-assembly within the subcutaneous depot toward the goal of enhanced basal glycemic control with reduced risk of hypoglycemia. Current products either operate through isoelectric precipitation (insulin glargine, the active component of Lantus(®); Sanofi-Aventis) or employ an albumin-binding acyl tether (insulin detemir, the active component of Levemir(®); Novo-Nordisk). To further improve pharmacokinetic properties, modified approaches are presently under investigation. Novel strategies have recently been proposed based on subcutaneous supramolecular assembly coupled to (a) large-scale allosteric reorganization of the insulin hexamer (the TR transition), (b) pH-dependent binding of zinc ions to engineered His-X(3)-His sites at hexamer surfaces, or (c) the long-range vision of glucose-responsive polymers for regulated hormone release. Such designs share with wild-type insulin and current insulin products a susceptibility to degradation above room temperature, and so their delivery, storage, and use require the infrastructure of an affluent society. Given the global dimensions of the therapeutic supply chain, we envisage that concurrent engineering of ultra-stable protein analog formulations would benefit underprivileged patients in the developing world.
Kamminga, Tjerko; Koehorst, Jasper J.; Vermeij, Paul; Slagman, Simen-Jan; Martins dos Santos, Vitor A. P.; Bijlsma, Jetta J. E.; Schaap, Peter J.
2017-01-01
Mycoplasmas are the smallest self-replicating organisms and obligate parasites of a specific vertebrate host. An in-depth analysis of the functional capabilities of mycoplasma species is fundamental to understand how some of the simplest forms of life on Earth succeeded in subverting complex hosts with highly sophisticated immune systems. In this study we present a genome-scale comparison, focused on identification of functional protein domains, of 80 publicly available mycoplasma genomes which were consistently re-annotated using a standardized annotation pipeline embedded in a semantic framework to keep track of the data provenance. We examined the pan- and core-domainome and studied predicted functional capability in relation to host specificity and phylogenetic distance. We show that the pan- and core-domainome of mycoplasma species is closed. A comparison with the proteome of the “minimal” synthetic bacterium JCVI-Syn3.0 allowed us to classify domains and proteins essential for minimal life. Many of those essential protein domains, essential Domains of Unknown Function (DUFs) and essential hypothetical proteins are not persistent across mycoplasma genomes suggesting that mycoplasma species support alternative domain configurations that bypass their essentiality. Based on the protein domain composition, we could separate mycoplasma species infecting blood and tissue. For selected genomes of tissue infecting mycoplasmas, we could also predict whether the host is ruminant, pig or human. Functionally closely related mycoplasma species, which have a highly similar protein domain repertoire, but different hosts could not be separated. This study provides a concise overview of the functional capabilities of mycoplasma species, which can be used as a basis to further understand host-pathogen interaction or to design synthetic minimal life. PMID:28224116
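A minimal sketch of the pan- and core-domainome computation under the assumption that each genome is represented by a set of functional domain identifiers; the genome names and domain accessions are placeholders, and the group split is purely illustrative.

```python
from functools import reduce

# Hypothetical domain annotations: each genome reduced to the set of
# functional protein domain identifiers (e.g. Pfam accessions) assigned
# by a standardised annotation pipeline.
domainome = {
    "genome_A": {"PF00005", "PF00587", "PF01740", "PF03840"},
    "genome_B": {"PF00005", "PF00587", "PF01740", "PF09999"},
    "genome_C": {"PF00005", "PF00587", "PF02543", "PF03840"},
}

pan_domainome = reduce(set.union, domainome.values())          # in any genome
core_domainome = reduce(set.intersection, domainome.values())  # in all genomes
print("pan-domainome size:", len(pan_domainome))
print("core-domainome:", sorted(core_domainome))

# The same set algebra can look for domains that separate groups, e.g. domains
# present in every genome of one (illustrative) group and absent from the other.
group_1, group_2 = {"genome_B"}, {"genome_A", "genome_C"}
group_1_specific = (reduce(set.intersection, (domainome[g] for g in group_1))
                    - reduce(set.union, (domainome[g] for g in group_2)))
print("group-specific domains:", sorted(group_1_specific))
```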
RAI14 (retinoic acid induced protein 14) is an F-actin regulator
Qian, Xiaojing; Mruk, Dolores D.; Cheng, Yan-ho; Cheng, C. Yan
2013-01-01
RAI14 (retinoic acid induced protein 14) is an actin-binding protein first identified in the liver. In the testis, RAI14 is expressed by both Sertoli and germ cells in the seminiferous epithelium. Besides binding to actin in the testis, RAI14 is also a binding protein for palladin, an actin cross-linking and bundling protein. A recent report has shown that RAI14 displays stage-specific and spatiotemporal expression at the ES [ectoplasmic specialization, a testis-specific filamentous (F)-actin-rich adherens junction] in the seminiferous epithelium of adult rat testes during the epithelial cycle of spermatogenesis, illustrating its likely involvement in F-actin organization at the ES. Functional studies in which RAI14 was knocked down by RNAi in Sertoli cells in vitro and also in testicular cells in vivo have illustrated its role in conferring the integrity of actin filament bundles at the ES, perturbing the Sertoli cell tight junction (TJ)-permeability barrier function in vitro, and also spermatid polarity and adhesion in vivo, thereby regulating spermatid transport at spermiation. Herein, we critically evaluate these earlier findings and also provide a likely hypothetical model based on the functional role of RAI14 at the ES, and how RAI14 is working with palladin and other actin regulatory proteins in the testis to regulate the transport of (1) spermatids and (2) preleptotene spermatocytes across the seminiferous epithelium and the blood-testis barrier (BTB), respectively, during spermatogenesis. This model should serve as a framework upon which functional experiments can be designed to better understand the biology of RAI14 and other actin-binding and regulatory proteins in the testis. PMID:23885305
Chakravorty, Dhruva K.; Wang, Bing; Lee, Chul Won; Guerra, Alfredo J.; Giedroc, David P.; Merz, Kenneth M.
2013-01-01
Correctly calculating the structure of metal coordination sites in a protein during the process of nuclear magnetic resonance (NMR) structure determination and refinement continues to be a challenging task. In this study, we present an accurate and convenient means by which to include metal ions in the NMR structure determination process using molecular dynamics (MD) constrained by NMR-derived data to obtain a realistic and physically viable description of the metal binding site(s). This method provides the framework to accurately portray the metal ions and its binding residues in a pseudo-bond or dummy-cation like approach, and is validated by quantum mechanical/molecular mechanical (QM/MM) MD calculations constrained by NMR-derived data. To illustrate this approach, we refine the zinc coordination complex structure of the zinc sensing transcriptional repressor protein Staphylococcus aureus CzrA, generating over 130 ns of MD and QM/MM MD NMR-data compliant sampling. In addition to refining the first coordination shell structure of the Zn(II) ion, this protocol benefits from being performed in a periodically replicated solvation environment including long-range electrostatics. We determine that unrestrained (not based on NMR data) MD simulations correlated to the NMR data in a time-averaged ensemble. The accurate solution structure ensemble of the metal-bound protein accurately describes the role of conformational dynamics in allosteric regulation of DNA binding by zinc and serves to validate our previous unrestrained MD simulations of CzrA. This methodology has potentially broad applicability in the structure determination of metal ion bound proteins, protein folding and metal template protein-design studies. PMID:23609042
An Information Technology Framework for Strengthening Telehealthcare Service Delivery
Chen, Chi-Wen; Weng, Yung-Ching; Shang, Rung-Ji; Yu, Hui-Chu; Chung, Yufang; Lai, Feipei
2012-01-01
Objective: Telehealthcare has been used to provide healthcare service, and information technology infrastructure appears to be essential while providing telehealthcare service. Insufficiencies have been identified, such as lack of integration, need of accommodation of diverse biometric sensors, and accessing diverse networks as different houses have varying facilities, which challenge the promotion of telehealthcare. This study designs an information technology framework to strengthen telehealthcare delivery. Materials and Methods: The proposed framework consists of a system architecture design and a network transmission design. The aim of the framework is to integrate data from existing information systems, to adopt medical informatics standards, to integrate diverse biometric sensors, and to provide different data transmission networks to support a patient's house network despite the facilities. The proposed framework has been evaluated with a case study of two telehealthcare programs, with and without the adoption of the framework. Results: The proposed framework facilitates the functionality of the program and enables steady patient enrollments. The overall patient participations are increased, and the patient outcomes appear positive. The attitudes toward the service and self-improvement also are positive. Conclusions: The findings of this study add up to the construction of a telehealthcare system. Implementing the proposed framework further assists the functionality of the service and enhances the availability of the service and patient acceptances. PMID:23061641
An information technology framework for strengthening telehealthcare service delivery.
Chen, Li-Chin; Chen, Chi-Wen; Weng, Yung-Ching; Shang, Rung-Ji; Yu, Hui-Chu; Chung, Yufang; Lai, Feipei
2012-10-01
Telehealthcare has been used to provide healthcare service, and information technology infrastructure appears to be essential while providing telehealthcare service. Insufficiencies have been identified, such as lack of integration, need of accommodation of diverse biometric sensors, and accessing diverse networks as different houses have varying facilities, which challenge the promotion of telehealthcare. This study designs an information technology framework to strengthen telehealthcare delivery. The proposed framework consists of a system architecture design and a network transmission design. The aim of the framework is to integrate data from existing information systems, to adopt medical informatics standards, to integrate diverse biometric sensors, and to provide different data transmission networks to support a patient's house network despite the facilities. The proposed framework has been evaluated with a case study of two telehealthcare programs, with and without the adoption of the framework. The proposed framework facilitates the functionality of the program and enables steady patient enrollments. The overall patient participations are increased, and the patient outcomes appear positive. The attitudes toward the service and self-improvement also are positive. The findings of this study add up to the construction of a telehealthcare system. Implementing the proposed framework further assists the functionality of the service and enhances the availability of the service and patient acceptances.
Computational Design of Self-Assembling Protein Nanomaterials with Atomic Level Accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Neil P.; Sheffler, William; Sawaya, Michael R.
2015-09-17
We describe a general computational method for designing proteins that self-assemble to a desired symmetric architecture. Protein building blocks are docked together symmetrically to identify complementary packing arrangements, and low-energy protein-protein interfaces are then designed between the building blocks in order to drive self-assembly. We used trimeric protein building blocks to design a 24-subunit, 13-nm diameter complex with octahedral symmetry and a 12-subunit, 11-nm diameter complex with tetrahedral symmetry. The designed proteins assembled to the desired oligomeric states in solution, and the crystal structures of the complexes revealed that the resulting materials closely match the design models. The method can be used to design a wide variety of self-assembling protein nanomaterials.
A framework for development of an intelligent system for design and manufacturing of stamping dies
NASA Astrophysics Data System (ADS)
Hussein, H. M. A.; Kumar, S.
2014-07-01
An integration of computer aided design (CAD), computer aided process planning (CAPP) and computer aided manufacturing (CAM) is required for development of an intelligent system to design and manufacture stamping dies in sheet metal industries. In this paper, a framework for development of an intelligent system for design and manufacturing of stamping dies is proposed. In the proposed framework, the intelligent system is structured in form of various expert system modules for different activities of design and manufacturing of dies. All system modules are integrated with each other. The proposed system takes its input in form of a CAD file of sheet metal part, and then system modules automate all tasks related to design and manufacturing of stamping dies. Modules are coded using Visual Basic (VB) and developed on the platform of AutoCAD software.
Review article: A systematic review of emergency department incident classification frameworks.
Murray, Matthew; McCarthy, Sally
2018-06-01
As in any part of the hospital system, safety incidents can occur in the ED. These incidents arguably have a distinct character, as the ED involves unscheduled flows of urgent patients who require disparate services. To aid understanding of safety issues and support risk management of the ED, a comparison of published ED specific incident classification frameworks was performed. A review of emergency medicine, health management and general medical publications, using Ovid SP to interrogate Medline (1976-2016) was undertaken to identify any type of taxonomy or classification-like framework for ED related incidents. These frameworks were then analysed and compared. The review identified 17 publications containing an incident classification framework. Comparison of factors and themes making up the classification constituent elements revealed some commonality, but no overall consistency, nor evolution towards an ideal framework. Inconsistency arises from differences in the evidential basis and design methodology of classifications, with design itself being an inherently subjective process. It was not possible to identify an 'ideal' incident classification framework for ED risk management, and there is significant variation in the selection of categories used by frameworks. The variation in classification could risk an unbalanced emphasis in findings through application of a particular framework. Design of an ED specific, ideal incident classification framework should be informed by a much wider range of theories of how organisations and systems work, in addition to clinical and human factors. © 2017 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
Implementation of sustainability in bridge design, construction and maintenance.
DOT National Transportation Integrated Search
2012-12-01
The focus of this research is to develop a framework for more sustainable design and construction processes for new bridges, and sustainable maintenance practices for existing bridges. The framework includes a green rating system for bridges. The...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Jian-Bo; Ji, Nan; Pan, Wen
2014-01-01
Drugs may induce adverse drug reactions (ADRs) when they unexpectedly bind to proteins other than their therapeutic targets. Identification of these undesired protein binding partners, called off-targets, can facilitate toxicity assessment in the early stages of drug development. In this study, a computational framework was introduced for the exploration of idiosyncratic mechanisms underlying analgesic-induced severe adverse drug reactions (SADRs). The putative analgesic-target interactions were predicted by performing reverse docking of analgesics or their active metabolites against human/mammal protein structures in a high-throughput manner. Subsequently, bioinformatics analyses were undertaken to identify ADR-associated proteins (ADRAPs) and pathways. Using the pathways and ADRAPs that this analysis identified, the mechanisms of SADRs such as cardiac disorders were explored. For instance, 53 putative ADRAPs and 24 pathways were linked with cardiac disorders, of which 10 ADRAPs were confirmed by previous experiments. Moreover, it was inferred that pathways such as base excision repair, glycolysis/gluconeogenesis, ErbB signaling, calcium signaling, and phosphatidyl inositol signaling likely play pivotal roles in drug-induced cardiac disorders. In conclusion, our framework offers an opportunity to globally understand SADRs at the molecular level, which has been difficult to realize through experiments. It also provides some valuable clues for drug repurposing. - Highlights: • A novel computational framework was developed for mechanistic study of SADRs. • Off-targets of drugs were identified in large scale and in a high-throughput manner. • SADRs like cardiac disorders were systematically explored in molecular networks. • A number of ADR-associated proteins were identified.
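A minimal sketch of the post-docking filtering idea: proteins whose predicted binding energy passes a cutoff and that carry an adverse-reaction annotation are flagged as putative ADRAPs. The protein names, energies, annotations, and cutoff are hypothetical; the docking step itself is not shown.

```python
# Hypothetical reverse-docking output: predicted binding energies (kcal/mol,
# more negative = stronger) of one analgesic against a panel of human proteins.
docking_scores = {
    "ProteinKinaseX": -9.4,
    "IonChannelY": -8.7,
    "EnzymeZ": -5.1,
    "ReceptorW": -9.9,
}
# Hypothetical annotation linking proteins to adverse-reaction categories.
adr_annotation = {
    "ProteinKinaseX": "cardiac disorders",
    "ReceptorW": "hepatic disorders",
}

def putative_adraps(docking_scores, adr_annotation, energy_cutoff=-8.0):
    """Flag ADR-associated proteins (ADRAPs): predicted off-targets whose
    binding energy passes the cutoff and that carry an ADR annotation."""
    hits = {}
    for protein, energy in docking_scores.items():
        if energy <= energy_cutoff and protein in adr_annotation:
            hits[protein] = (energy, adr_annotation[protein])
    return hits

print(putative_adraps(docking_scores, adr_annotation))
```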
Numerical simulation of the casting process of titanium removable partial denture frameworks.
Wu, Menghuai; Wagner, Ingo; Sahm, Peter R; Augthun, Michael
2002-03-01
The objective of this work was to study filling incompleteness and porosity defects in titanium removable partial denture frameworks by means of numerical simulation. Two frameworks, one for the lower jaw and one for the upper jaw, were chosen for simulation according to dentists' recommendations. The geometry of the frameworks was laser-digitized and imported into simulation software (MAGMASOFT). Both mold filling and solidification of the castings with different sprue designs (e.g. tree, ball, and runner-bar) were numerically calculated. The shrinkage porosity was quantitatively predicted by a feeding criterion, and the potential filling defects and gas pore sensitivity were estimated based on the filling and solidification results. A satisfactory sprue design with process parameters was finally recommended for real casting trials (four replicas for each framework). All the frameworks were successfully cast. X-ray radiographic inspections showed that all the castings were acceptably sound except for one case in which gas bubbles were detected in the grasp region of the frame. It is concluded that numerical simulation aids understanding of the casting process and defect formation in titanium frameworks, and hence helps to minimize the risk of producing defective castings by improving the sprue design and process parameters.
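The abstract does not name the feeding criterion used; a commonly applied choice in casting simulation is the Niyama criterion, and the sketch below assumes it: cells where the thermal gradient divided by the square root of the cooling rate falls below a threshold are flagged as porosity-prone. The grid values and threshold are illustrative, not MAGMASOFT output.

```python
import numpy as np

def niyama(temperature_gradient, cooling_rate):
    """Niyama feeding criterion Ny = G / sqrt(dT/dt); low values indicate
    regions prone to shrinkage porosity. Units here: G in K/mm, rate in K/s."""
    return temperature_gradient / np.sqrt(cooling_rate)

# Hypothetical solidification results on a coarse grid of the framework.
gradient = np.array([[1.2, 0.8, 0.3],
                     [1.0, 0.5, 0.2],
                     [0.9, 0.4, 0.1]])      # K/mm
cooling_rate = np.array([[2.0, 2.5, 3.0],
                         [2.2, 2.8, 3.2],
                         [2.4, 3.0, 3.5]])  # K/s

ny = niyama(gradient, cooling_rate)
porosity_risk = ny < 0.3                     # threshold is alloy dependent
print(np.round(ny, 2))
print("cells at risk of shrinkage porosity:", int(porosity_risk.sum()))
```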
Aesthetic taste versus utility: the emotional and rational of the individual.
Mourthé, Claudia; Dejean, Pierre-Henri
2012-01-01
This article explores the development of an aesthetics framework that aims to provide designers with parameters to understand emotion, taste, and aesthetic judgment under their own cultural influence. This framework will equip designers with tangible criteria for judging cultural influences that have an impact on industrial design while preventing designers from adopting subjective options or being "followers of the current trend." To address the complexity of the topic, a systemic approach is taken so as to be able to capture its several elements. Therefore, the aesthetics framework adopts a systemic approach, which enables its constituents to be compared and the interplay or "links" between these different elements to be identified.
Scaffolding Students' Development of Creative Design Skills: A Curriculum Reference Model
ERIC Educational Resources Information Center
Lee, Chien-Sing; Kolodner, Janet L.
2011-01-01
This paper provides a framework for promoting creative design capabilities in the context of achieving community goals pertaining to sustainable development among high school students. The framework can be used as a reference model to design formal or out-of-school curriculum units in any geographical region. This theme is chosen due to its…
The Skills Framework for the Information Age: Engaging Stakeholders in Curriculum Design
ERIC Educational Resources Information Center
von Konsky, Brian R.; Miller, Charlynn; Jones, Asheley
2016-01-01
This paper reports on a research project, examining the role of the Skills Framework for the Information Age (SFIA) in Information and Communications Technology (ICT) curriculum design and management. A goal was to investigate how SFIA informs a top-down approach to curriculum design, beginning with a set of skills that define a particular career…
Towards a Conceptual Framework of GBL Design for Engagement and Learning of Curriculum-Based Content
ERIC Educational Resources Information Center
Jabbar, Azita Iliya Abdul; Felicia, Patrick
2016-01-01
This paper aims to show best practices of GBL design for engagement. It intends to show how teachers can implement GBL in a collaborative, comprehensive and systematic way, in the classrooms, and probably outside the classrooms, based on empirical evidence and theoretical framework designed accordingly. This paper presents the components needed to…
Designing Online Management Education Courses Using the Community of Inquiry Framework
ERIC Educational Resources Information Center
Weyant, Lee E.
2013-01-01
Online learning has grown as a program delivery option for many colleges and programs of business. The Community of Inquiry (CoI) framework, consisting of three interrelated elements--social presence, cognitive presence, and teaching presence--provides a model to guide business faculty in their online course design. The course design of an online…
The Fidelity and Usability of 5-DIE: A Design Study of Enacted Cyberlearning
ERIC Educational Resources Information Center
Kern, Cindy L.; Crippen, Kent J.; Skaza, Heather
2014-01-01
This paper describes a design study of a cyberlearning instructional unit about climate change created with a new inquiry-based design framework, the 5-featured Dynamic Inquiry Enterprise (5-DIE). The 5-DIE framework was created to address the need for authentic science inquiry experiences in cyberlearning environments that leverage existing tools…
Supporting Faculty in the Design and Structuring of Web-Based Courses.
ERIC Educational Resources Information Center
Freeman, H.; Ryan, S.; Boys, J.
This paper reports on the development and extension of a concept mapping tool into a complete online course design support framework for academics: CEDOT (Course Elicitation, Development and Output Tool). The tool provides a course design framework that faculty work through in an order of their choosing. It gives context specific help and advice…
Unmanned Tactical Autonomous Control and Collaboration Situation Awareness
2017-06-01
methodology framework using interdependence analysis (IA) tables for informing design requirements based on SA requirements. Future research should seek...requirements of UTACC. The authors then apply SA principles to Coactive Design in order to inform robotic design. The result is a methodology framework using...
A Framework for the Design of Computer-Assisted Simulation Training for Complex Police Situations
ERIC Educational Resources Information Center
Söderström, Tor; Åström, Jan; Anderson, Greg; Bowles, Ron
2014-01-01
Purpose: The purpose of this paper is to report progress concerning the design of a computer-assisted simulation training (CAST) platform for developing decision-making skills in police students. The overarching aim is to outline a theoretical framework for the design of CAST to facilitate police students' development of search techniques in…
Statistical methods for quantitative mass spectrometry proteomic experiments with labeling.
Oberg, Ann L; Mahoney, Douglas W
2012-01-01
Mass spectrometry with labeling allows multiple specimens to be analyzed simultaneously. As a result, between-experiment variability is reduced. Here we describe the use of fundamental concepts of statistical experimental design in the labeling framework in order to minimize variability and avoid biases. We demonstrate how to export data in the format that is most efficient for statistical analysis. We demonstrate how to assess the need for normalization, perform normalization, and check whether it worked. We describe how to build a model explaining the observed values and test for differential protein abundance along with descriptive statistics and measures of reliability of the findings. Concepts are illustrated through the use of three case studies utilizing the iTRAQ 4-plex labeling protocol.
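To make the workflow above concrete, the fragment below sketches a minimal normalization-and-testing pass on a hypothetical iTRAQ-style intensity matrix; the channel layout, simulated data, and use of a simple per-protein t-test are illustrative assumptions rather than the chapter's exact procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical iTRAQ 4-plex style intensity matrix: rows = proteins,
# columns = labeled channels (two per condition), values = reporter intensities.
rng = np.random.default_rng(0)
raw = rng.lognormal(mean=10, sigma=1, size=(200, 4))

# Work on the log2 scale, where multiplicative channel effects become additive.
log2 = np.log2(raw)

# Assess the need for normalization: compare per-channel medians.
print("channel medians before:", np.median(log2, axis=0))

# Median normalization: remove channel-level offsets (a common, simple choice).
normed = log2 - np.median(log2, axis=0, keepdims=True)
print("channel medians after:", np.median(normed, axis=0))

# Test for differential abundance with a per-protein two-sample t-test.
group_a, group_b = normed[:, :2], normed[:, 2:]
t, p = stats.ttest_ind(group_a, group_b, axis=1)
print("proteins with p < 0.05:", int((p < 0.05).sum()))
```

A full analysis would replace the simple t-test with the modeling and reliability measures the chapter describes.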
Verkhivker, Gennady M
2016-01-01
The human protein kinome presents one of the largest protein families that orchestrate functional processes in complex cellular networks, and when perturbed, can cause various cancers. The abundance and diversity of genetic, structural, and biochemical data underlies the complexity of mechanisms by which targeted and personalized drugs can combat mutational profiles in protein kinases. Coupled with the evolution of systems biology approaches, genomic and proteomic technologies are rapidly identifying and characterizing novel resistance mechanisms with the goal to inform rational design of personalized kinase drugs. Integration of experimental and computational approaches can help to bring these data into a unified conceptual framework and develop robust models for predicting clinical drug resistance. In the current study, we employ a battery of synergistic computational approaches that integrate genetic, evolutionary, biochemical, and structural data to characterize the effect of cancer mutations in protein kinases. We provide a detailed structural classification and analysis of genetic signatures associated with oncogenic mutations. By integrating genetic and structural data, we employ network modeling to dissect mechanisms of kinase drug sensitivities to oncogenic EGFR mutations. Using biophysical simulations and analysis of protein structure networks, we show that conformation-specific drug binding of Lapatinib may elicit resistant mutations in the EGFR kinase that are linked with the ligand-mediated changes in the residue interaction networks and global network properties of key residues that are responsible for structural stability of specific functional states. A strong network dependency on high centrality residues in the conformation-specific Lapatinib-EGFR complex may explain vulnerability of drug binding to a broad spectrum of mutations and the emergence of drug resistance. Our study offers a systems-based perspective on drug design by unravelling complex relationships between robustness of targeted kinase genes and binding specificity of targeted kinase drugs. We discuss how these approaches can exploit advances in chemical biology and network science to develop novel strategies for rationally tailored and robust personalized drug therapies.
Rapid development of Proteomic applications with the AIBench framework.
López-Fernández, Hugo; Reboiro-Jato, Miguel; Glez-Peña, Daniel; Méndez Reboredo, José R; Santos, Hugo M; Carreira, Ricardo J; Capelo-Martínez, José L; Fdez-Riverola, Florentino
2011-09-15
In this paper we present two case studies of Proteomics applications development using the AIBench framework, a Java desktop application framework mainly focused on scientific software development. The applications presented in this work are Decision Peptide-Driven, for rapid and accurate protein quantification, and Bacterial Identification, for Tuberculosis biomarker search and diagnosis. Both tools work with mass spectrometry data, specifically with MALDI-TOF spectra, minimizing the time required to process and analyze the experimental data. Copyright 2011 The Author(s). Published by Journal of Integrative Bioinformatics.
Project Management Framework to Organizational Transitions
NASA Technical Reports Server (NTRS)
Kotnour, Tim; Barton, Saul
1996-01-01
This paper describes a project management framework and associated models for organizational transitions. The framework contains an integrated set of steps an organization can take to lead an organizational transition such as downsizing and change in mission or role. The framework is designed to help an organization do the right work the right way with the right people at the right time. The underlying rationale for the steps in the framework is based on a set of findings which include: defining a transition as containing both near-term and long-term actions, designing actions which respond to drivers and achieve desired results, aligning the organization with the external environment, and aligning the internal components of the organization. The framework was developed based on best practices found in the literature, lessons learned from heads of organizations who have completed large-scale organizational changes, and concerns from employees at the Kennedy Space Center (KSC). The framework is described using KSC.
A Standardization Framework for Electronic Government Service Portals
NASA Astrophysics Data System (ADS)
Sarantis, Demetrios; Tsiakaliaris, Christos; Lampathaki, Fenareti; Charalabidis, Yannis
Although most eGovernment interoperability frameworks (eGIFs) adequately cover the technical aspects of developing and supporting the provision of electronic services to citizens and businesses, they do not explicitly address several important areas regarding the organization, presentation, accessibility and security of the content and the electronic services offered through government portals. This chapter extends the scope of existing eGIFs by presenting the overall architecture and the basic concepts of the Greek standardization framework for electronic government service portals which, for the first time in Europe, is part of a country's eGovernment framework. The proposed standardization framework includes standards, guidelines and recommendations regarding the design, development and operation of government portals that support the provision of administrative information and services to citizens and businesses. By applying the guidelines of the framework, the design, development and operation of portals in central, regional and municipal government can be systematically addressed, resulting in an applicable, sustainable and ever-expanding framework.
A comprehensive risk assessment framework for offsite transportation of inflammable hazardous waste.
Das, Arup; Gupta, A K; Mazumder, T N
2012-08-15
A framework for risk assessment due to offsite transportation of hazardous wastes is designed based on the type of event that can be triggered by an accident of a hazardous waste carrier. The objective of this study is to design a framework for computing the risk to population associated with offsite transportation of inflammable and volatile wastes. The framework is based on the traditional definition of risk and is designed for conditions where accident databases are not available. The probability-based variable in the risk assessment framework is substituted by a composite accident index proposed in this study. The framework computes the impacts due to a volatile cloud explosion based on the TNO Multi-Energy model. The methodology also estimates the vulnerable population in terms of disability-adjusted life years (DALY), which takes into consideration the demographic profile of the population and the degree of injury on mortality and morbidity sustained. The methodology is illustrated using a case study of a pharmaceutical industry in the Kolkata metropolitan area. Copyright © 2012 Elsevier B.V. All rights reserved.
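For orientation, the two quantities named in the abstract can be written in their conventional forms; the notation below is assumed for illustration and is not reproduced from the paper.

```latex
% Conventional transport-risk definition (probability times consequence); in this
% framework the probability term is replaced by the proposed composite accident index.
R \;=\; P_{\mathrm{acc}} \cdot C
% Disability-adjusted life years combine years of life lost (mortality)
% and years lived with disability (morbidity):
\mathrm{DALY} \;=\; \mathrm{YLL} + \mathrm{YLD}
```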
Implementation of sustainable and green design and construction practices for bridges.
DOT National Transportation Integrated Search
2012-12-01
The focus of this research is to develop a framework for more sustainable design and construction processes for new bridges, and sustainable maintenance practices for existing bridges. The framework includes a green rating system for bridges. The...
Design of a Model Execution Framework: Repetitive Object-Oriented Simulation Environment (ROSE)
NASA Technical Reports Server (NTRS)
Gray, Justin S.; Briggs, Jeffery L.
2008-01-01
The ROSE framework was designed to facilitate complex system analyses. It completely divorces the model execution process from the model itself. By doing so, ROSE frees the modeler to develop a library of standard modeling processes such as Design of Experiments, optimizers, parameter studies, and sensitivity studies, which can then be applied to any of their available models. The ROSE framework accomplishes this by means of a well-defined API and object structure. Both the API and object structure are presented here with enough detail to implement ROSE in any object-oriented language or modeling tool.
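The separation of model from execution process that the abstract describes can be illustrated with a short, hypothetical sketch; the class and function names below are invented for illustration and do not reflect ROSE's actual API.

```python
from typing import Callable, Dict, List

class Model:
    """Generic model interface: set inputs, run, read outputs."""
    def __init__(self, inputs: Dict[str, float],
                 run_fn: Callable[[Dict[str, float]], Dict[str, float]]):
        self.inputs = dict(inputs)
        self._run_fn = run_fn
        self.outputs: Dict[str, float] = {}

    def execute(self) -> Dict[str, float]:
        self.outputs = self._run_fn(self.inputs)
        return self.outputs

def parameter_study(model: Model, name: str, values: List[float]) -> List[Dict[str, float]]:
    """A reusable 'process' that sweeps one input and records all inputs and outputs."""
    results = []
    for v in values:
        model.inputs[name] = v
        results.append({**model.inputs, **model.execute()})
    return results

# Example model: a toy drag relation (purely illustrative).
drag_model = Model({"velocity": 10.0, "area": 2.0},
                   lambda x: {"drag": 0.5 * 1.225 * x["area"] * x["velocity"] ** 2})
for row in parameter_study(drag_model, "velocity", [10.0, 20.0, 30.0]):
    print(row)
```

Because the parameter study touches only the generic interface, the same process could drive any model object, which is the design point the abstract emphasizes.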
Johannesen, Kasper M; Claxton, Karl; Sculpher, Mark J; Wailoo, Allan J
2018-02-01
This paper presents a conceptual framework to analyse the design of the cost-effectiveness appraisal process of new healthcare technologies. The framework characterises the appraisal processes as a diagnostic test aimed at identifying cost-effective (true positive) and non-cost-effective (true negative) technologies. Using the framework, factors that influence the value of operating an appraisal process, in terms of net gain to population health, are identified. The framework is used to gain insight into current policy questions including (a) how rigorous the process should be, (b) who should have the burden of proof, and (c) how optimal design changes when allowing for appeals, price reductions, resubmissions, and re-evaluations. The paper demonstrates that there is no one optimal appraisal process and the process should be adapted over time and to the specific technology under assessment. Optimal design depends on country-specific features of (future) technologies, for example, effect, price, and size of the patient population, which might explain the difference in appraisal processes across countries. It is shown that burden of proof should be placed on the producers and that the impact of price reductions and patient access schemes on the producer's price setting should be considered when designing the appraisal process. Copyright © 2017 John Wiley & Sons, Ltd.
Insights from molecular dynamics simulations for computational protein design
Childers, Matthew Carter; Daggett, Valerie
2017-01-01
A grand challenge in the field of structural biology is to design and engineer proteins that exhibit targeted functions. Although much success on this front has been achieved, design success rates remain low, an ever-present reminder of our limited understanding of the relationship between amino acid sequences and the structures they adopt. In addition to experimental techniques and rational design strategies, computational methods have been employed to aid in the design and engineering of proteins. Molecular dynamics (MD) is one such method that simulates the motions of proteins according to classical dynamics. Here, we review how insights into protein dynamics derived from MD simulations have influenced the design of proteins. One of the greatest strengths of MD is its capacity to reveal information beyond what is available in the static structures deposited in the Protein Data Bank. In this regard, simulations can be used to directly guide protein design by providing atomistic details of the dynamic molecular interactions contributing to protein stability and function. MD simulations can also be used as a virtual screening tool to rank, select, identify, and assess potential designs. MD is uniquely poised to inform protein design efforts where the application requires realistic models of protein dynamics and atomic level descriptions of the relationship between dynamics and function. Here, we review cases where MD simulations were used to modulate protein stability and protein function by providing information regarding the conformation(s), conformational transitions, interactions, and dynamics that govern stability and function. In addition, we discuss cases where conformations from protein folding/unfolding simulations have been exploited for protein design, yielding novel outcomes that could not be obtained from static structures. PMID:28239489
Protein Engineering Approaches in the Post-Genomic Era.
Singh, Raushan K; Lee, Jung-Kul; Selvaraj, Chandrabose; Singh, Ranjitha; Li, Jinglin; Kim, Sang-Yong; Kalia, Vipin C
2018-01-01
Proteins are one of the most multifaceted macromolecules in living systems. Proteins have evolved to function under physiological conditions and, therefore, are not usually tolerant of harsh experimental and environmental conditions. The growing use of proteins in industrial processes as a greener alternative to chemical catalysts often demands constant innovation to improve their performance. Protein engineering aims to design new proteins or modify the sequence of a protein to create proteins with new or desirable functions. With the emergence of structural and functional genomics, protein engineering has been invigorated in the post-genomic era. The three-dimensional structures of proteins with known functions facilitate protein engineering approaches to design variants with desired properties. There are three major approaches to protein engineering research, namely, directed evolution, rational design, and de novo design. Rational design is an effective method of protein engineering when the three-dimensional structure and mechanism of the protein are well known. In contrast, directed evolution does not require extensive information or a three-dimensional structure of the protein of interest. Instead, it involves random mutagenesis and selection to screen enzymes with desired properties. De novo design uses computational protein design algorithms to tailor synthetic proteins by using the three-dimensional structures of natural proteins and their folding rules. The present review highlights and summarizes recent protein engineering approaches, and their challenges and limitations in the post-genomic era. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan S; Krishnamurthy, Dheepak; Top, Philip
This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.
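One feature named above, co-iteration among federates until the coupled models converge within each time step, can be illustrated with a generic toy example; the two "federates" below are stand-ins, and the code does not use or represent the framework's actual API.

```python
# Two toy "federates" exchange boundary values and iterate to convergence
# within each time step, mimicking the co-iteration idea (illustrative only).
def power_federate(load_signal: float) -> float:
    # Pretend power-flow result: voltage sags slightly with requested load.
    return 1.0 - 0.05 * load_signal

def comms_federate(voltage: float) -> float:
    # Pretend controller behind a network: requests more load when voltage is high.
    return max(0.0, 2.0 * (voltage - 0.9))

def co_iterate(steps: int, tol: float = 1e-6, max_iters: int = 50) -> None:
    load = 0.0
    for t in range(steps):
        for _ in range(max_iters):          # co-iteration loop within one time step
            voltage = power_federate(load)
            new_load = comms_federate(voltage)
            if abs(new_load - load) < tol:  # models agree -> step has converged
                break
            load = new_load
        print(f"t={t}: voltage={voltage:.4f}, load={load:.4f}")

co_iterate(steps=3)
```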
Global analysis of protein folding using massively parallel design, synthesis and testing
Rocklin, Gabriel J.; Chidyausiku, Tamuka M.; Goreshnik, Inna; Ford, Alex; Houliston, Scott; Lemak, Alexander; Carter, Lauren; Ravichandran, Rashmi; Mulligan, Vikram K.; Chevalier, Aaron; Arrowsmith, Cheryl H.; Baker, David
2017-01-01
Proteins fold into unique native structures stabilized by thousands of weak interactions that collectively overcome the entropic cost of folding. Though these forces are “encoded” in the thousands of known protein structures, “decoding” them is challenging due to the complexity of natural proteins that have evolved for function, not stability. Here we combine computational protein design, next-generation gene synthesis, and a high-throughput protease susceptibility assay to measure folding and stability for over 15,000 de novo designed miniproteins, 1,000 natural proteins, 10,000 point-mutants, and 30,000 negative control sequences, identifying over 2,500 new stable designed proteins in four basic folds. This scale—three orders of magnitude greater than that of previous studies of design or folding—enabled us to systematically examine how sequence determines folding and stability in uncharted protein space. Iteration between design and experiment increased the design success rate from 6% to 47%, produced stable proteins unlike those found in nature for topologies where design was initially unsuccessful, and revealed subtle contributions to stability as designs became increasingly optimized. Our approach achieves the long-standing goal of a tight feedback cycle between computation and experiment, and promises to transform computational protein design into a data-driven science. PMID:28706065
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu Lingguang; Gu Lina; Hu Gang
2009-03-15
A modular design method for designing and synthesizing microporous metal-organic frameworks (MOFs) with selective catalytic activity was described. MOFs with both nano-sized channels and potential catalytic activities could be obtained through self-assembly of a framework unit and a catalyst unit. By selecting hexaaquo metal complexes and the ligand BTC (BTC=1,3,5-benzenetricarboxylate) as framework-building blocks and using the metal complex [M(phen)₂(H₂O)₂]²⁺ (phen=1,10-phenanthroline) as a catalyst unit, a series of supramolecular MOFs 1-7 with three-dimensional nano-sized channels, i.e. [M¹(H₂O)₆]·[M²(phen)₂(H₂O)₂]₂·2(BTC)·xH₂O (M¹, M²=Co(II), Ni(II), Cu(II), Zn(II), or Mn(II), phen=1,10-phenanthroline, BTC=1,3,5-benzenetricarboxylate, x=22-24), were synthesized through self-assembly, and their structures were characterized by IR, elemental analysis, and single-crystal X-ray diffraction. These supramolecular microporous MOFs showed significant size and shape selectivity in the catalyzed oxidation of phenols, which is due to catalytic reactions taking place in the channels of the framework. Design strategy, synthesis, and self-assembly mechanism for the construction of these porous MOFs were discussed. - Graphical abstract: A modular design strategy has been developed to synthesize microporous metal-organic frameworks with potential catalytic activity by self-assembly of the framework-building blocks and the catalyst unit.
Blended Interaction Design: A Spatial Workspace Supporting HCI and Design Practice
NASA Astrophysics Data System (ADS)
Geyer, Florian
This research investigates novel methods and techniques along with tool support that result from a conceptual blend of human-computer interaction with design practice. Using blending theory with material anchors as a theoretical framework, we frame both input spaces and explore emerging structures within technical, cognitive, and social aspects. Based on our results, we will describe a framework of the emerging structures and will design and evaluate tool support within a spatial, studio-like workspace to support collaborative creativity in interaction design.
Lee, Joseph G. L.; Averett, Paige E.; Blanchflower, Tiffany; Gregory, Kyle R.
2018-01-01
INTRODUCTION Researchers and regulators need to know how changes to cigarette packages can influence population health. We sought to advance research on the role of cigarette packaging by assessing a theory-informed framework from the fields of design and consumer research. The selected Context of Consumption Framework posits cognitive, affective, and behavioral responses to visual design. To assess the Framework’s potential for guiding research on the visual design of cigarette packaging in the U.S., this study seeks to understand to what extent the Context of Consumption Framework converges with how adult smokers think and talk about cigarette pack designs. METHODS Data for this qualitative study came from six telephone-based focus groups conducted in March 2017. Two groups consisted of lesbian, gay, and bisexual participants; two groups of participants with fewer than four years of college education; one group of LGB and straight identity; and one group from the general population. All groups were selected for regional, gender, and racial/ethnic diversity. Participants (n=33) represented all nine U.S. Census divisions. We conducted a deductive qualitative analysis. RESULTS Cigarette package designs captured the participants’ attention, suggested the characteristics of the product, and reflected (or could be leveraged to convey) multiple dimensions of consumer identity. With regard to affective responses to design, our participants shared that cigarette packaging conveyed how the pack could be used to particular ends, created an emotional response to the designs, complied with normative expectations of a cigarette, elicited interest when designs change, and prompted fascination when unique design characteristics are used. CONCLUSIONS Use of the Context of Consumption Framework for cigarette product packaging design can inform regulatory research on tobacco product packaging. Researchers and regulators should consider multiple cognitive, affective, and behavioral responses to cigarette pack design. PMID:29593883
Roberts, Shirley M; Davies, Gideon J
2012-01-01
The three-dimensional (3-D) structures of cellulases, and other glycoside hydrolases, are a central feature of research in carbohydrate chemistry and biochemistry. 3-D structure is used to inform protein engineering campaigns, both academic and industrial, which are typically used to improve the stability or activity of an enzyme. Examples of classical protein engineering goals include higher thermal stability, reduced metal-ion dependency, detergent and protease resistance, decreased product inhibition, and altered specificity. 3-D structure may also be used to interpret the behavior of enzyme variants that are derived from screening or random mutagenesis approaches, with a view to establishing an iterative design process. In other areas, 3-D structure is used as one of the many tools to probe enzymatic catalysis, typically dovetailing with physical organic chemistry approaches to provide complete reaction mechanisms for enzymes by visualizing catalytic site interactions at different stages of the reaction. Such mechanistic insight is not only fundamentally important, impacting on inhibitor and drug design approaches with ramifications way beyond cellulose hydrolysis, but also provides the framework for the design of enzyme variants to use as biocatalysts for the synthesis of bespoke oligosaccharides. Here we review some of the strategies and tactics that may be applied to the X-ray structure solution of cellulases (and other carbohydrate-active enzymes). The general approach is first to decide why you are doing the work, then to establish correct domain boundaries for truncated constructs (typically the catalytic domain only), and finally to pursue crystallization of pure, homogeneous, and monodisperse protein with appropriate ligand and additive combinations. Cellulase-specific strategies are important for the delineation of domain boundaries, while glycoside hydrolases generally also present challenges and opportunities for the selection and optimization of ligands to both aid crystallization, and also provide structural and mechanistic insight. As the many roles for plant cell wall degrading enzymes increase, so does the need for rapid high-quality structure determination to provide a sound structural foundation for understanding mechanism and specificity, and for future protein engineering strategies. Copyright © 2012 Elsevier Inc. All rights reserved.
A framework for the design, implementation, and evaluation of interprofessional education.
Pardue, Karen T
2015-01-01
The growing emphasis on teamwork and care coordination within health care delivery is sparking interest in interprofessional education (IPE) among nursing and health profession faculty. Faculty often lack firsthand IPE experience, which hinders pedagogical reform. This article proposes a theoretically grounded framework for the design, implementation, and evaluation of IPE. Supporting literature and practical advice are interwoven. The proposed framework guides faculty in the successful creation and evaluation of collaborative learning experiences.
When Playing Meets Learning: Methodological Framework for Designing Educational Games
NASA Astrophysics Data System (ADS)
Linek, Stephanie B.; Schwarz, Daniel; Bopp, Matthias; Albert, Dietrich
Game-based learning builds upon the idea of using the motivational potential of video games in the educational context. Thus, the design of educational games has to address optimizing enjoyment as well as optimizing learning. Within the EC project ELEKTRA, a methodological framework for the conceptual design of educational games was developed. In this framework, state-of-the-art psycho-pedagogical approaches were combined with insights from media psychology and best-practice game design. This science-based interdisciplinary approach was enriched by accompanying empirical research to answer open questions on educational game design. Additionally, several evaluation cycles were implemented to achieve further improvements. The psycho-pedagogical core of the methodology can be summarized by ELEKTRA's 4Ms: Macroadaptivity, Microadaptivity, Metacognition, and Motivation. The conceptual framework is structured in eight phases, which have several interconnections and feedback cycles that enable a close interdisciplinary collaboration between game design, pedagogy, cognitive science and media psychology.
Leveraging advances in biology to design biomaterials
NASA Astrophysics Data System (ADS)
Darnell, Max; Mooney, David J.
2017-12-01
Biomaterials have dramatically increased in functionality and complexity, allowing unprecedented control over the cells that interact with them. From these engineering advances arises the prospect of improved biomaterial-based therapies, yet practical constraints favour simplicity. Tools from the biology community are enabling high-resolution and high-throughput bioassays that, if incorporated into a biomaterial design framework, could help achieve unprecedented functionality while minimizing the complexity of designs by identifying the most important material parameters and biological outputs. However, to avoid data explosions and to effectively match the information content of an assay with the goal of the experiment, material screens and bioassays must be arranged in specific ways. By borrowing methods to design experiments and workflows from the bioprocess engineering community, we outline a framework for the incorporation of next-generation bioassays into biomaterials design to effectively optimize function while minimizing complexity. This framework can inspire biomaterials designs that maximize functionality and translatability.
Kaur, Gagan Deep
2017-05-01
The design process in Kashmiri carpet weaving is distributed over a number of actors and artifacts and is mediated by a weaving notation called talim. The script encodes the entire design in practice-specific symbols. This encoded script is decoded and interpreted via design-specific conventions by weavers to weave the design embedded in it. The cognitive properties of this notational system are described in the paper employing the cognitive dimensions (CDs) framework of Green (People and computers, Cambridge University Press, Cambridge, 1989) and Blackwell et al. (Cognitive technology: instruments of mind-CT 2001, LNAI 2117, Springer, Berlin, 2001). After an introduction to the practice, the design process is described in 'The design process' section, which includes coding and decoding of talim. In the 'Cognitive dimensions of talim' section, after briefly discussing the CDs framework, the specific cognitive dimensions possessed by talim are described in detail.
Framework Requirements for MDO Application Development
NASA Technical Reports Server (NTRS)
Salas, A. O.; Townsend, J. C.
1999-01-01
Frameworks or problem solving environments that support application development form an active area of research. The Multidisciplinary Optimization Branch at NASA Langley Research Center is investigating frameworks for supporting multidisciplinary analysis and optimization research. The Branch has generated a list of framework requirements, based on the experience gained from the Framework for Interdisciplinary Design Optimization project and the information acquired during a framework evaluation process. In this study, four existing frameworks are examined against these requirements. The results of this examination suggest several topics for further framework research.
A multi-fidelity framework for physics based rotor blade simulation and optimization
NASA Astrophysics Data System (ADS)
Collins, Kyle Brian
New helicopter rotor designs are desired that offer increased efficiency, reduced vibration, and reduced noise. Rotor Designers in industry need methods that allow them to use the most accurate simulation tools available to search for these optimal designs. Computer based rotor analysis and optimization have been advanced by the development of industry standard codes known as "comprehensive" rotorcraft analysis tools. These tools typically use table look-up aerodynamics, simplified inflow models and perform aeroelastic analysis using Computational Structural Dynamics (CSD). Due to the simplified aerodynamics, most design studies are performed varying structural related design variables like sectional mass and stiffness. The optimization of shape related variables in forward flight using these tools is complicated and results are viewed with skepticism because rotor blade loads are not accurately predicted. The most accurate methods of rotor simulation utilize Computational Fluid Dynamics (CFD) but have historically been considered too computationally intensive to be used in computer based optimization, where numerous simulations are required. An approach is needed where high fidelity CFD rotor analysis can be utilized in a shape variable optimization problem with multiple objectives. Any approach should be capable of working in forward flight in addition to hover. An alternative is proposed and founded on the idea that efficient hybrid CFD methods of rotor analysis are ready to be used in preliminary design. In addition, the proposed approach recognizes the usefulness of lower fidelity physics based analysis and surrogate modeling. Together, they are used with high fidelity analysis in an intelligent process of surrogate model building of parameters in the high fidelity domain. Closing the loop between high and low fidelity analysis is a key aspect of the proposed approach. This is done by using information from higher fidelity analysis to improve predictions made with lower fidelity models. This thesis documents the development of automated low and high fidelity physics based rotor simulation frameworks. The low fidelity framework uses a comprehensive code with simplified aerodynamics. The high fidelity model uses a parallel processor capable CFD/CSD methodology. Both low and high fidelity frameworks include an aeroacoustic simulation for prediction of noise. A synergistic process is developed that uses both the low and high fidelity frameworks together to build approximate models of important high fidelity metrics as functions of certain design variables. To test the process, a 4-bladed hingeless rotor model is used as a baseline. The design variables investigated include tip geometry and spanwise twist distribution. Approximation models are built for metrics related to rotor efficiency and vibration using the results from 60+ high fidelity (CFD/CSD) experiments and 400+ low fidelity experiments. Optimization using the approximation models found the Pareto Frontier anchor points, or the design having maximum rotor efficiency and the design having minimum vibration. Various Pareto generation methods are used to find designs on the frontier between these two anchor designs. When tested in the high fidelity framework, the Pareto anchor designs are shown to be very good designs when compared with other designs from the high fidelity database. This provides evidence that the process proposed has merit. 
Ultimately, this process can be utilized by industry rotor designers with their existing tools to bring high fidelity analysis into the preliminary design stage of rotors. In conclusion, the methods developed and documented in this thesis have made several novel contributions. First, an automated high fidelity CFD based forward flight simulation framework has been built for use in preliminary design optimization. The framework was built around an integrated, parallel processor capable CFD/CSD/AA process. Second, a novel method of building approximate models of high fidelity parameters has been developed. The method uses a combination of low and high fidelity results and combines Design of Experiments, statistical effects analysis, and aspects of approximation model management. And third, the determination of rotor blade shape variables through optimization using CFD based analysis in forward flight has been performed. This was done using the high fidelity CFD/CSD/AA framework and method mentioned above. While the low and high fidelity predictions methods used in the work still have inaccuracies that can affect the absolute levels of the results, a framework has been successfully developed and demonstrated that allows for an efficient process to improve rotor blade designs in terms of a selected choice of objective function(s). Using engineering judgment, this methodology could be applied today to investigate opportunities to improve existing designs. With improvements in the low and high fidelity prediction components that will certainly occur, this framework could become a powerful tool for future rotorcraft design work. (Abstract shortened by UMI.)
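As a hedged illustration of the multi-fidelity idea described in the abstract (many cheap low-fidelity runs corrected by a few expensive high-fidelity runs), the sketch below fits a simple additive correction; the analytic stand-in models and polynomial correction are assumptions and are not the thesis's rotor analyses or surrogate method.

```python
import numpy as np

# Toy multi-fidelity setup: a cheap, biased model and an expensive, accurate one.
def low_fidelity(x):
    return np.sin(2 * np.pi * x) - 0.2 * x ** 2 + 0.1

def high_fidelity(x):
    return np.sin(2 * np.pi * x) + 0.3 * x

# Many low-fidelity samples, only a few high-fidelity samples.
x_lo = np.linspace(0.0, 1.0, 41)
x_hi = np.linspace(0.0, 1.0, 6)

# Fit a simple polynomial correction delta(x) = high(x) - low(x) at the
# high-fidelity points (a stand-in for Kriging / surrogate model management).
delta = np.polyfit(x_hi, high_fidelity(x_hi) - low_fidelity(x_hi), deg=2)

def corrected(x):
    return low_fidelity(x) + np.polyval(delta, x)

err_before = np.max(np.abs(high_fidelity(x_lo) - low_fidelity(x_lo)))
err_after = np.max(np.abs(high_fidelity(x_lo) - corrected(x_lo)))
print(f"max error, low fidelity alone: {err_before:.3f}; with correction: {err_after:.3f}")
```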
ERIC Educational Resources Information Center
Hirumi, Atsusi
2013-01-01
Advances in technology offer a vast array of opportunities for facilitating elearning. However, difficulties may arise if elearning research and design, including the use of emerging technologies, are based primarily on past practices, fads, or political agendas. This article describes refinements made to a framework for designing and sequencing…
ERIC Educational Resources Information Center
Cardenas-Claros, Monica Stella; Gruba, Paul A.
2013-01-01
This paper proposes a theoretical framework for the conceptualization and design of help options in computer-based second language (L2) listening. Based on four empirical studies, it aims at clarifying both conceptualization and design (CoDe) components. The elements of conceptualization consist of a novel four-part classification of help options:…
12 CFR Appendix A to Part 252 - Policy Statement on the Scenario Design Framework for Stress Testing
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 4 2014-01-01 2014-01-01 false Policy Statement on the Scenario Design... YY) Pt. 252, App. A Appendix A to Part 252—Policy Statement on the Scenario Design Framework for... (stress test rules) implementing section 165(i) of the Dodd-Frank Wall Street Reform and Consumer...
The Customer Flow Toolkit: A Framework for Designing High Quality Customer Services.
ERIC Educational Resources Information Center
New York Association of Training and Employment Professionals, Albany.
This document presents a toolkit to assist staff involved in the design and development of New York's one-stop system. Section 1 describes the preplanning issues to be addressed and the intended outcomes that serve as the framework for creation of the customer flow toolkit. Section 2 outlines the following strategies to assist in designing local…
ERIC Educational Resources Information Center
Zapata-Rivera, Diego; VanWinkle, Waverely; Doyle, Bryan; Buteux, Alyssa; Bauer, Malcolm
2009-01-01
Purpose: The purpose of this paper is to propose and demonstrate an evidence-based scenario design framework for assessment-based computer games. Design/methodology/approach: The evidence-based scenario design framework is presented and demonstrated by using BELLA, a new assessment-based gaming environment aimed at supporting student learning of…
ERIC Educational Resources Information Center
Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.
This document, which is intended for use by community and junior colleges throughout Mississippi, contains curriculum frameworks for the two course sequences of the state's postsecondary-level drafting and design technology program: architectural drafting technology and drafting and design technology. Presented first are a program description and…
NASA Astrophysics Data System (ADS)
Stolk, Machiel J.; de Jong, Onno; Bulte, Astrid M. W.; Pilot, Albert
2011-05-01
Involving teachers in early stages of context-based curriculum innovations requires a professional development programme that actively engages teachers in the design of new context-based units. This study considers the implementation of a teacher professional development framework aiming to investigate processes of professional development. The framework is based on Galperin's theory of the internalisation of actions and it is operationalised into a professional development programme to empower chemistry teachers for designing new context-based units. The programme consists of the teaching of an educative context-based unit, followed by the designing of an outline of a new context-based unit. Six experienced chemistry teachers participated in the instructional meetings and practical teaching in their respective classrooms. Data were obtained from meetings, classroom discussions, and observations. The findings indicated that teachers became only partially empowered for designing a new context-based chemistry unit. Moreover, the process of professional development leading to teachers' empowerment was not carried out as intended. It is concluded that the elaboration of the framework needs improvement. The implications for a new programme are discussed.
Tsao, Liuxing; Ma, Liang
2016-11-01
Digital human modelling enables ergonomists and designers to consider ergonomic concerns and design alternatives in a timely and cost-efficient manner in the early stages of design. However, the reliability of the simulation could be limited due to the percentile-based approach used in constructing the digital human model. To enhance the accuracy of the size and shape of the models, we proposed a framework to generate digital human models using three-dimensional (3D) anthropometric data. The 3D scan data from specific subjects' hands were segmented based on the estimated centres of rotation. The segments were then driven in forward kinematics to perform several functional postures. The constructed hand models were then verified, thereby validating the feasibility of the framework. The proposed framework helps generate accurate subject-specific digital human models, which can be utilised to guide product design and workspace arrangement. Practitioner Summary: Subject-specific digital human models can be constructed under the proposed framework based on three-dimensional (3D) anthropometry. This approach enables more reliable digital human simulation to guide product design and workspace arrangement.
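The forward-kinematics step described above can be sketched for a single planar finger chain; the segment lengths, joint angles, and two-dimensional simplification are illustrative assumptions, not the subjects' measured data.

```python
import numpy as np

# Drive a planar chain of finger segments in forward kinematics:
# each joint rotates all of its distal segments about the joint centre.
def forward_kinematics(link_lengths, joint_angles):
    """Return joint positions for a planar kinematic chain."""
    positions = [np.zeros(2)]
    angle = 0.0
    for length, theta in zip(link_lengths, joint_angles):
        angle += theta  # angles accumulate along the chain
        step = length * np.array([np.cos(angle), np.sin(angle)])
        positions.append(positions[-1] + step)
    return np.array(positions)

# Illustrative proximal/middle/distal phalanx lengths (cm) and a grasp-like posture.
lengths = [4.5, 2.7, 1.8]
flexion = np.radians([30.0, 45.0, 20.0])
print(forward_kinematics(lengths, flexion))  # joint centres; the fingertip is the last row
```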
Liu, Xiaojun; Zeng, Shimei; Dong, Shaojian; Jin, Can; Li, Jiale
2015-01-01
In this study, we clone and characterize a novel matrix protein, hic31, from the mantle of Hyriopsis cumingii. The amino acid composition of hic31 consists of a high proportion of glycine residues (26.67%). Tissue expression detection by RT-PCR indicates that hic31 is expressed specifically at the mantle edge. In situ hybridization results reveal strong signals from the dorsal epithelial cells of the outer fold at the mantle edge, and weak signals from inner epithelial cells of the same fold, indicating that hic31 is a prismatic-layer matrix protein. Although BLASTP results identify no shared homology with other shell-matrix proteins or any other known proteins, the hic31 tertiary structure is similar to that of collagen I, alpha 1 and alpha 2. It is well established that collagen forms the basic organic framework in the form of collagen fibrils, with minerals present within or outside these fibrils. Therefore, hic31 might be a framework-matrix protein involved in prismatic-layer biomineralization. In addition, the expression of hic31 increases in the early stages of pearl sac development, indicating that hic31 may play important roles in biomineralization of the pearl prismatic layer.
Book Club Plus: A Conceptual Framework To Organize Literacy Instruction.
ERIC Educational Resources Information Center
Raphael, Taffy E.; Florio-Ruane, Susan; George, MariAnne
2001-01-01
Notes that finding time for skills instruction without replacing literature discussion and writers' workshop requires a strong organizational framework for literacy instruction. Suggests that teachers need principled, conceptual frameworks to guide their thoughts and actions. Describes a framework, Book Club Plus, designed by a practitioner…
Computational Approaches to the Chemical Equilibrium Constant in Protein-ligand Binding.
Montalvo-Acosta, Joel José; Cecchini, Marco
2016-12-01
The physiological role played by protein-ligand recognition has motivated the development of several computational approaches to the ligand binding affinity. Some of them, termed rigorous, have a strong theoretical foundation but involve too much computation to be generally useful. Some others alleviate the computational burden by introducing strong approximations and/or empirical calibrations, which also limit their general use. Most importantly, there is no straightforward correlation between the predictive power and the level of approximation introduced. Here, we present a general framework for the quantitative interpretation of protein-ligand binding based on statistical mechanics. Within this framework, we re-derive self-consistently the fundamental equations of some popular approaches to the binding constant and pinpoint the inherent approximations. Our analysis represents a first step towards the development of variants with optimum accuracy/efficiency ratio for each stage of the drug discovery pipeline. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
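For context (this is the textbook relation, not a restatement of the paper's own derivation), the equilibrium binding constant and the standard binding free energy are related by:

```latex
K_b \;=\; \frac{[PL]}{[P]\,[L]}, \qquad
\Delta G^{\circ}_{\mathrm{bind}} \;=\; -\,k_B T \,\ln\!\bigl(K_b\, C^{\circ}\bigr),
```

where C° is the standard concentration (1 M), so that the argument of the logarithm is dimensionless; the computational approaches surveyed in the paper differ in how they estimate this quantity from molecular models.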
NASA Technical Reports Server (NTRS)
Depenbrock, Brett T.; Balint, Tibor S.; Sheehy, Jeffrey A.
2014-01-01
Research and development organizations that push the innovation edge of technology frequently encounter challenges when attempting to identify an investment strategy and to accurately forecast the cost and schedule performance of selected projects. Fast-moving and complex environments require managers to quickly analyze and diagnose the value of returns on investment versus allocated resources. Our Project Assessment Framework through Design (PAFTD) tool facilitates decision making for NASA senior leadership to enable more strategic and consistent technology development investment analysis, beginning at implementation and continuing through the project life cycle. The framework takes an integrated approach by leveraging design principles of usability, feasibility, and viability and aligns them with methods employed by NASA's Independent Program Assessment Office for project performance assessment. The need exists to periodically revisit the justification and prioritization of technology development investments as changes occur over project life cycles. The framework informs management rapidly and comprehensively about diagnosed internal and external root causes of project performance.
Bacteriophages as scaffolds for bipartite display: designing swiss army knives on a nanoscale.
Molek, Peter; Bratkovič, Tomaž
2015-03-18
Bacteriophages have been exploited as cloning vectors and display vehicles for decades owing to their genetic and structural simplicity. In bipartite display setting, phage takes on the role of a handle to which two modules are attached, each endowing it with specific functionality, much like the Swiss army knife. This concept offers unprecedented potential for phage applications in nanobiotechnology. Here, we compare common phage display platforms and discuss approaches to simultaneously append two or more different (poly)peptides or synthetic compounds to phage coat using genetic fusions, chemical or enzymatic conjugations, and in vitro noncovalent decoration techniques. We also review current reports on design of phage frameworks to link multiple effectors, and their use in diverse scientific disciplines. Bipartite phage display had left its mark in development of biosensors, vaccines, and targeted delivery vehicles. Furthermore, multifunctionalized phages have been utilized to template assembly of inorganic materials and protein complexes, showing promise as scaffolds in material sciences and structural biology, respectively.
Structural Plasticity of Helical Nanotubes Based on Coiled-Coil Assemblies
Egelman, Edward H.; Xu, C.; DiMaio, F.; ...
2015-01-22
Numerous instances can be seen in evolution in which protein quaternary structures have diverged while the sequences of the building blocks have remained fairly conserved. However, the path through which such divergence has taken place is usually not known. We have designed two synthetic 29-residue α-helical peptides, based on the coiled-coil structural motif, that spontaneously self-assemble into helical nanotubes in vitro. Using electron cryomicroscopy with a newly available direct electron detection capability, we can achieve near-atomic resolution of these thin structures. We show how conservative changes of only one or two amino acids result in dramatic changes in quaternary structure, in which the assemblies can be switched between two very different forms. This system provides a framework for understanding how small sequence changes in evolution can translate into very large changes in supramolecular structure, a phenomenon that may have significant implications for the de novo design of synthetic peptide assemblies.
Morgenstern, Hai; Rafaely, Boaz; Noisternig, Markus
2017-03-01
Spherical microphone arrays (SMAs) and spherical loudspeaker arrays (SLAs) facilitate the study of room acoustics due to the three-dimensional analysis they provide. More recently, systems that combine both arrays, referred to as multiple-input multiple-output (MIMO) systems, have been proposed due to the added spatial diversity they facilitate. The literature provides frameworks for designing SMAs and SLAs separately, including error analysis from which the operating frequency range (OFR) of an array is defined. However, such a framework does not exist for the joint design of a SMA and a SLA that comprise a MIMO system. This paper develops a design framework for MIMO systems based on a model that addresses errors and highlights the importance of a matched design. Expanding on a free-field assumption, errors are incorporated separately for each array and error bounds are defined, facilitating error analysis for the system. The dependency of the error bounds on the SLA and SMA parameters is studied and it is recommended that parameters should be chosen to assure matched OFRs of the arrays in MIMO system design. A design example is provided, demonstrating the superiority of a matched system over an unmatched system in the synthesis of directional room impulse responses.
Grandison, Scott; Roberts, Carl; Morris, Richard J
2009-03-01
Protein structures are not static entities consisting of equally well-determined atomic coordinates. Proteins undergo continuous motion, and for these catalytic machines such movements can be highly relevant to understanding function. In addition to this strong biological motivation for considering shape changes is the necessity to correctly capture different levels of detail and error in protein structures. Some parts of a structural model are often poorly defined, and the atomic displacement parameters provide an excellent means to characterize the confidence in an atom's spatial coordinates. A mathematical framework for studying these shape changes and handling positional variance is therefore of high importance. We present an approach for capturing various protein structure properties in a concise mathematical framework that allows us to compare features in a highly efficient manner. We demonstrate how three-dimensional Zernike moments can be employed to describe functions, not only on the surface of a protein but throughout the entire molecule. A number of proof-of-principle examples are given which demonstrate how this approach may be used in practice for the representation of movement and uncertainty.
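For reference, the standard definition of 3D Zernike moments from the shape-description literature (not a restatement of this paper's specific formulation) expands a function f supported in the unit ball as:

```latex
\Omega_{nl}^{m} \;=\; \frac{3}{4\pi} \int_{\lVert \mathbf{x} \rVert \le 1} f(\mathbf{x})\, \overline{Z_{nl}^{m}(\mathbf{x})}\, d\mathbf{x},
\qquad
F_{nl} \;=\; \sqrt{\sum_{m=-l}^{l} \left| \Omega_{nl}^{m} \right|^{2}},
```

where the Z_{nl}^m are 3D Zernike polynomials and the norms F_{nl} are rotation-invariant descriptors, which is what makes moment-based comparison of shapes, and of fields defined throughout a molecule, efficient.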
Kuziemsky, Craig E; Borycki, Elizabeth M; Purkis, Mary Ellen; Black, Fraser; Boyle, Michael; Cloutier-Fisher, Denise; Fox, Lee Ann; MacKenzie, Patricia; Syme, Ann; Tschanz, Coby; Wainwright, Wendy; Wong, Helen
2009-09-15
There are few studies that examine the processes that interdisciplinary teams engage in and how we can design health information systems (HIS) to support those team processes. This was an exploratory study with two purposes: (1) To develop a framework for interdisciplinary team communication based on structures, processes and outcomes that were identified as having occurred during weekly team meetings. (2) To use the framework to guide 'e-teams' HIS design to support interdisciplinary team meeting communication. An ethnographic approach was used to collect data on two interdisciplinary teams. Qualitative content analysis was used to analyze the data according to structures, processes and outcomes. We present details for team meta-concepts of structures, processes and outcomes and the concepts and sub concepts within each meta-concept. We also provide an exploratory framework for interdisciplinary team communication and describe how the framework can guide HIS design to support 'e-teams'. The structures, processes and outcomes that describe interdisciplinary teams are complex and often occur in a non-linear fashion. Electronic data support, process facilitation and team video conferencing are three HIS tools that can enhance team function.
Introduction into the Virtual Olympic Games Framework for online communities.
Stoilescu, Dorian
2009-06-01
This paper presents the design of the Virtual Olympic Games Framework (VOGF), a computer application designed for athletics, health care, general well-being, nutrition and fitness, which offers multiple benefits for its participants. A special interest in starting the design of the framework was in exploring how people can connect and participate together using existing computer technologies (i.e. gaming consoles, exercise equipment with computer interfaces, devices for measuring health, speed, force and distance, and Web 2.0 applications). A stationary bike set-up offering information to users about their individual health and athletic performances has been considered a starting model. While this model is in the design stage, some preliminary findings are encouraging, suggesting the potential for various fields: sports, medicine, theories of learning, technologies and cybercultural studies. First, this framework would allow participants to perform a variety of sports and improve their health. Second, this would involve creating an online environment able to store health information and sport performances correlated with accessing multi-media data and research about performing sports. Third, participants could share experiences with other athletes, coaches and researchers. Fourth, this framework also provides support for the research community in their future investigations.
Lanzas, C; Broderick, G A; Fox, D G
2008-12-01
Adequate predictions of rumen-degradable protein (RDP) and rumen-undegradable protein (RUP) supplies are necessary to optimize performance while minimizing losses of excess nitrogen (N). The objectives of this study were to evaluate the original Cornell Net Carbohydrate Protein System (CNCPS) protein fractionation scheme and to develop and evaluate alternatives designed to improve its adequacy in predicting RDP and RUP. The CNCPS version 5 fractionates CP into 5 fractions based on solubility in protein precipitant agents, buffers, and detergent solutions: A represents the soluble nonprotein N, B1 is the soluble true protein, B2 represents protein with intermediate rates of degradation, B3 is the CP insoluble in neutral detergent solution but soluble in acid detergent solution, and C is the unavailable N. Model predictions were evaluated with studies that measured N flow data at the omasum. The N fractionation scheme in version 5 of the CNCPS explained 78% of the variation in RDP with a root mean square prediction error (RMSPE) of 275 g/d, and 51% of the RUP variation with RMSPE of 248 g/d. Neutral detergent insoluble CP flows were overpredicted with a mean bias of 128 g/d (40% of the observed mean). The greatest improvements in the accuracy of RDP and RUP predictions were obtained with the following 2 alternative schemes. Alternative 1 used the inhibitory in vitro system to measure the fractional rate of degradation for the insoluble protein fraction in which A = nonprotein N, B1 = true soluble protein, B2 = insoluble protein, C = unavailable protein (RDP: R(2) = 0.84 and RMSPE = 167 g/d; RUP: R(2) = 0.61 and RMSPE = 209 g/d), whereas alternative 2 redefined A and B1 fractions as the non-amino-N and amino-N in the soluble fraction respectively (RDP: R(2) = 0.79 with RMSPE = 195 g/d and RUP: R(2) = 0.54 with RMSPE = 225 g/d). We concluded that implementing alternative 1 or 2 will improve the accuracy of predicting RDP and RUP within the CNCPS framework.
Sustainable Supply Chain Design by the P-Graph Framework
The present work proposes a computer-aided methodology for designing sustainable supply chains in terms of sustainability metrics by resorting to the P-graph framework. The methodology is an outcome of the collaboration between the Office of Research and Development (ORD) of the ...
Designing Business Games for the Service Industries.
ERIC Educational Resources Information Center
Sculli, Domenic; Ng, Wing Cheong
1985-01-01
Presents a conceptual framework for design of business games in which output is in the form of service. The framework is presented as three separate systems--the physical, the financial, and the external environment. A hotel management game is used to illustrate the discussion. (Author/MBR)
McDonald, Steve; Turner, Tari; Chamberlain, Catherine; Lumbiganon, Pisake; Thinkhamrop, Jadsada; Festin, Mario R; Ho, Jacqueline J; Mohammad, Hakimi; Henderson-Smart, David J; Short, Jacki; Crowther, Caroline A; Martis, Ruth; Green, Sally
2010-07-01
Background Rates of maternal and perinatal mortality remain high in developing countries despite the existence of effective interventions. Efforts to strengthen evidence-based approaches to improve health in these settings are partly hindered by restricted access to the best available evidence, limited training in evidence-based practice and concerns about the relevance of existing evidence. South East Asia - Optimising Reproductive and Child Health in Developing Countries (SEA-ORCHID) was a five-year project that aimed to determine whether a multifaceted intervention designed to strengthen the capacity for research synthesis, evidence-based care and knowledge implementation improved clinical practice and led to better health outcomes for mothers and babies. This paper describes the development and design of the SEA-ORCHID intervention plan using a logical framework approach. Methods SEA-ORCHID used a before-and-after design to evaluate the impact of a multifaceted tailored intervention at nine sites across Thailand, Malaysia, Philippines and Indonesia, supported by three centres in Australia. We used a logical framework approach to systematically prepare and summarise the project plan in a clear and logical way. The development and design of the SEA-ORCHID project was based around the three components of a logical framework (problem analysis, project plan and evaluation strategy). Results The SEA-ORCHID logical framework defined the project's goal and purpose (To improve the health of mothers and babies in South East Asia and To improve clinical practice in reproductive health in South East Asia), and outlined a series of project objectives and activities designed to achieve these. The logical framework also established outcome and process measures appropriate to each level of the project plan, and guided project work in each of the participating countries and hospitals. Conclusions Development of a logical framework in the SEA-ORCHID project enabled a reasoned, logical approach to the project design that ensured the project activities would achieve the desired outcomes and that the evaluation plan would assess both the process and outcome of the project. The logical framework was also valuable over the course of the project to facilitate communication, assess progress and build a shared understanding of the project activities, purpose and goal. PMID:20594325
Parametric estimation for reinforced concrete relief shelter for Aceh cases
NASA Astrophysics Data System (ADS)
Atthaillah; Saputra, Eri; Iqbal, Muhammad
2018-05-01
This paper presents work in progress (WIP) toward a rapid parametric framework for estimating the materials of post-disaster permanent shelters. The intended shelters are of reinforced concrete construction with brick walls. In post-disaster cases, design variations are inevitably needed to suit the conditions of individual victims, and satisfying every beneficiary with the conventional estimation method is practically impossible. This study offers a parametric framework to overcome the slow construction-material estimation caused by design variations. The work integrates the parametric tool Grasshopper to establish algorithms that simultaneously model, visualize, calculate, and write the calculated data to a spreadsheet in real time. Some customized Grasshopper components were created using GHPython scripting to further optimize the algorithm. The result of this study is a partial framework that successfully performs modeling, visualization, calculation, and writing of the calculated data simultaneously, so that design alterations do not escalate the time needed for modeling, visualization, and material estimation. Future development of the parametric framework will be made open source.
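As an illustrative aside, the core parametric take-off loop the abstract describes can be sketched in plain Python; the shelter geometry, material factors, and output columns below are hypothetical stand-ins rather than the authors' actual Grasshopper/GHPython definition.

```python
# Illustrative sketch only: a plain-Python stand-in for the kind of parametric
# material take-off the paper implements in Grasshopper/GHPython. All dimensions,
# material factors, and the output schema here are hypothetical.
import csv

BRICKS_PER_M2 = 60.0          # assumed bricks per square metre of wall
COLUMN_SECTION = 0.15 * 0.15  # assumed 150 mm x 150 mm column section (m^2)

def estimate_materials(length, width, height, n_columns):
    """Return a rough bill of quantities for one shelter variant."""
    wall_area = 2.0 * (length + width) * height            # gross wall area, openings ignored
    bricks = wall_area * BRICKS_PER_M2
    column_concrete = n_columns * COLUMN_SECTION * height   # column concrete volume (m^3)
    return {"length_m": length, "width_m": width, "height_m": height,
            "bricks": round(bricks), "column_concrete_m3": round(column_concrete, 3)}

def write_variants(variants, path="estimates.csv"):
    """Write one row per design variant, mimicking the real-time spreadsheet output."""
    rows = [estimate_materials(**v) for v in variants]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    write_variants([{"length": 6.0, "width": 4.0, "height": 3.0, "n_columns": 6},
                    {"length": 7.0, "width": 5.0, "height": 3.0, "n_columns": 8}])
```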
Design and Applications of a Multimodality Image Data Warehouse Framework
Wong, Stephen T.C.; Hoo, Kent Soo; Knowlton, Robert C.; Laxer, Kenneth D.; Cao, Xinhau; Hawkins, Randall A.; Dillon, William P.; Arenson, Ronald L.
2002-01-01
A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications—namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains. PMID:11971885
Tacir, Ibrahim H; Dirihan, Roda S; Polat, Zelal Seyfioglu; Salman, Gizem Ön; Vallittu, Pekka; Lassila, Lippo; Ayna, Emrah
2018-06-28
BACKGROUND The aim of this study was to investigate and compare the load-bearing capacities of three-unit direct resin-bonded fiber-reinforced composite fixed dental prostheses with different framework designs. MATERIAL AND METHODS Sixty mandibular premolar and molar teeth without caries were collected, and direct glass fiber-resin FDPs were divided into 6 groups (n=10). Each group was restored via the direct technique with a different design. In Group 1, the inlay-retained bridges formed 2 unidirectional FRC frameworks and pontic-reinforced transversal FRC. In Group 2, the inlay-retained bridges were supported by unidirectional lingual and occlusal FRC frameworks. Group 3 had buccal and lingual unidirectional FRC frameworks without the inlay cavities. Group 4 had reinforced inlay cavities and buccal-lingual FRC with unidirectional FRC frameworks. Group 5 had a circular form of fiber reinforcement around cusps in addition to buccal-lingual FRC frameworks. Group 6 had a circular form of fiber reinforcement around cusps with 2 bidirectional FRC frameworks in the inlay cavities. All groups were loaded until final fracture using a universal testing machine at a crosshead speed of 1 mm/min. RESULTS Mean values of the groups were compared with ANOVA and Tukey HSD tests. When all data were evaluated, Group 6 had the highest load-bearing capacity and revealed significant differences from Group 3 and Group 4. Group 6 also had the highest strain (p>0.05). When the fracture patterns were investigated, Group 6 had the durability to sustain fracture propagation within the restoration. CONCLUSIONS The efficiency of fiber reinforcement of the restorations is altered not only by the amount of fiber, but also by the design of the restoration with fibers.
Biomolecular engineering for nanobio/bionanotechnology
NASA Astrophysics Data System (ADS)
Nagamune, Teruyuki
2017-04-01
Biomolecular engineering can be used to purposefully manipulate biomolecules, such as peptides, proteins, nucleic acids and lipids, within the framework of the relations among their structures, functions and properties, as well as their applicability to such areas as developing novel biomaterials, biosensing, bioimaging, and clinical diagnostics and therapeutics. Nanotechnology can also be used to design and tune the sizes, shapes, properties and functionality of nanomaterials. As such, there are considerable overlaps between nanotechnology and biomolecular engineering, in that both are concerned with the structure and behavior of materials on the nanometer scale or smaller. Therefore, in combination with nanotechnology, biomolecular engineering is expected to open up new fields of nanobio/bionanotechnology and to contribute to the development of novel nanobiomaterials, nanobiodevices and nanobiosystems. This review highlights recent studies using engineered biological molecules (e.g., oligonucleotides, peptides, proteins, enzymes, polysaccharides, lipids, biological cofactors and ligands) combined with functional nanomaterials in nanobio/bionanotechnology applications, including therapeutics, diagnostics, biosensing, bioanalysis and biocatalysts. Furthermore, this review focuses on five areas of recent advances in biomolecular engineering: (a) nucleic acid engineering, (b) gene engineering, (c) protein engineering, (d) chemical and enzymatic conjugation technologies, and (e) linker engineering. Precisely engineered nanobiomaterials, nanobiodevices and nanobiosystems are anticipated to emerge as next-generation platforms for bioelectronics, biosensors, biocatalysts, molecular imaging modalities, biological actuators, and biomedical applications.
Simakov, Nikolay A.
2010-01-01
A soft repulsion (SR) model of short-range interactions between mobile ions and protein atoms is introduced in the framework of a continuum representation of the protein and solvent. The Poisson-Nernst-Planck (PNP) theory of ion transport through biological channels is modified to incorporate this soft-wall protein model. Two sets of SR parameters are introduced: the first is parameterized for all essential amino acid residues using all-atom molecular dynamics simulations; the second is a truncated Lennard-Jones potential. We have further designed an energy-based algorithm for the determination of the ion-accessible volume, which is appropriate for a particular system discretization. The effects of these models of short-range interaction were tested by computing current-voltage characteristics of the α-hemolysin channel. The introduced SR potentials significantly improve prediction of channel selectivity. In addition, we studied the effect of choice of some space-dependent diffusion coefficient distributions on the predicted current-voltage properties. We conclude that the diffusion coefficient distributions largely affect total currents and have little effect on rectifications, selectivity or reversal potential. The PNP-SR algorithm is implemented in a new efficient parallel Poisson, Poisson-Boltzmann and PNP equation solver, also incorporated in the graphical molecular modeling package HARLEM. PMID:21028776
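For readers unfamiliar with the second parameter set mentioned above, a truncated (purely repulsive) Lennard-Jones potential can be sketched as follows; the cutoff convention, parameter values, and units are assumptions for illustration, not those of the paper.

```python
# Illustrative sketch of a truncated, purely repulsive (WCA-style) Lennard-Jones
# potential of the kind used as a soft-repulsion model between ions and protein
# atoms. Parameter values and the shift/cutoff convention are assumptions.
import numpy as np

def truncated_lj(r, epsilon=0.5, sigma=3.0):
    """Repulsive LJ branch, shifted to zero at the minimum r_min = 2^(1/6)*sigma."""
    r = np.asarray(r, dtype=float)
    r_min = 2.0 ** (1.0 / 6.0) * sigma
    sr6 = (sigma / r) ** 6
    v = 4.0 * epsilon * (sr6 ** 2 - sr6) + epsilon   # shifted so V(r_min) = 0
    return np.where(r < r_min, v, 0.0)               # zero beyond the cutoff

if __name__ == "__main__":
    for r in (2.8, 3.2, 3.6):
        print(f"r = {r:.1f}  V(r) = {float(truncated_lj(r)):.3f} (illustrative units)")
```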
Jatana, Nidhi; Thukral, Lipi; Latha, N
2016-01-01
Human Dopamine Receptor D4 (DRD4) orchestrates several neurological functions and represents a target for many psychological disorders. Here, we examined two rare variants in DRD4; V194G and R237L, which elicit functional alterations leading to disruption of ligand binding and G protein coupling, respectively. Using atomistic molecular dynamics (MD) simulations, we provide in-depth analysis to reveal structural signatures of wild and mutant complexes with their bound agonist and antagonist ligands. We constructed intra-protein network graphs to discriminate the global conformational changes induced by mutations. The simulations also allowed us to elucidate the local side-chain dynamical variations in ligand-bound mutant receptors. The data suggest that the mutation in transmembrane V (V194G) drastically disrupts the organization of ligand binding site and causes disorder in the native helical arrangement. Interestingly, the R237L mutation leads to significant rewiring of side-chain contacts in the intracellular loop 3 (site of mutation) and also affects the distant transmembrane topology. Additionally, these mutations lead to compact ICL3 region compared to the wild type, indicating that the receptor would be inaccessible for G protein coupling. Our findings thus reveal unreported structural determinants of the mutated DRD4 receptor and provide a robust framework for design of effective novel drugs.
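The intra-protein network graphs mentioned here follow a common pattern: residues become nodes and spatial contacts become edges. A generic sketch, assuming Cα coordinates and a simple distance cutoff rather than the authors' exact network definition, is shown below.

```python
# Generic sketch of an intra-protein network graph built from C-alpha coordinates:
# residues are nodes, and an edge joins residues whose C-alpha atoms lie within a
# distance cutoff. This illustrates the approach only, not the authors' exact
# network construction or parameters.
import numpy as np
import networkx as nx

def contact_network(ca_coords, cutoff=8.0):
    """Build a residue contact graph from an (N, 3) array of C-alpha coordinates."""
    coords = np.asarray(ca_coords, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    g = nx.Graph()
    g.add_nodes_from(range(len(coords)))
    for i in range(len(coords)):
        for j in range(i + 2, len(coords)):          # skip trivially adjacent residues
            if dist[i, j] <= cutoff:
                g.add_edge(i, j)
    return g

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_ca = np.cumsum(rng.normal(scale=2.5, size=(50, 3)), axis=0)  # toy chain
    g = contact_network(fake_ca)
    hubs = sorted(nx.betweenness_centrality(g).items(), key=lambda kv: -kv[1])[:5]
    print("Top betweenness residues (toy data):", hubs)
```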
Zadeh, Rana; Sadatsafavi, Hessam; Xue, Ryan
2015-01-01
This study describes a vision and framework that can facilitate the implementation of evidence-based design (EBD), scientific knowledge base into the process of the design, construction, and operation of healthcare facilities and clarify the related safety and quality outcomes for the stakeholders. The proposed framework pairs EBD with value-driven decision making and aims to improve communication among stakeholders by providing a common analytical language. Recent EBD research indicates that the design and operation of healthcare facilities contribute to an organization's operational success by improving safety, quality, and efficiency. However, because little information is available about the financial returns of evidence-based investments, such investments are readily eliminated during the capital-investment decision-making process. To model the proposed framework, we used engineering economy tools to evaluate the return on investments in six successful cases, identified by a literature review, in which facility design and operation interventions resulted in reductions in hospital-acquired infections, patient falls, staff injuries, and patient anxiety. In the evidence-based cases, calculated net present values, internal rates of return, and payback periods indicated that the long-term benefits of interventions substantially outweighed the intervention costs. This article explained a framework to develop a research-based and value-based communication language on specific interventions along the planning, design and construction, operation, and evaluation stages. Evidence-based and value-based design frameworks can be applied to communicate the life-cycle costs and savings of EBD interventions to stakeholders, thereby contributing to more informed decision makings and the optimization of healthcare infrastructures. © The Author(s) 2015.
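The engineering-economy quantities named above (net present value, internal rate of return, payback period) can be computed with a few lines of code. The sketch below uses hypothetical cash flows, not figures from the six reviewed cases.

```python
# Illustrative engineering-economy helpers of the kind used to evaluate such cases:
# net present value, internal rate of return (found by bisection), and simple
# payback period. The cash-flow numbers below are hypothetical.

def npv(rate, cash_flows):
    """Net present value of cash flows, where cash_flows[0] occurs at time 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return via bisection on NPV; assumes one sign change in flows."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

def payback_period(cash_flows):
    """First period in which cumulative cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None

if __name__ == "__main__":
    flows = [-250_000] + [60_000] * 8    # hypothetical intervention cost and annual savings
    print(f"NPV @ 5%: {npv(0.05, flows):,.0f}")
    print(f"IRR: {irr(flows):.1%}")
    print(f"Payback: {payback_period(flows)} years")
```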
Two theories/a sharper lens: the staff nurse voice in the workplace.
DeMarco, Rosanna
2002-06-01
This paper (1) introduces the two theoretical frameworks, Silencing the Self and the Framework of Systemic Organization, (2) briefly describes the design and findings of a study exploring spillover in nurses utilizing the frameworks, and (3) discusses the process and value of theory triangulation when conducting research in the context of complex nursing systems phenomena where gender, professional work, and gender identity merge. A research study was designed to analyse the actual workplace behaviours of nurses in the context of their lives at work and outside work. An exploration of theoretical frameworks that could direct the measurement of the phenomena in question led to the use of two frameworks, the Framework of Systemic Organization (Friedemann 1995) and the Silencing the Self Theory (Jack 1991), and the creation of a valid and reliable summative rating instrument (the Staff Nurse Workplace Behaviours Scale, SNWBS). A descriptive correlational design was used to measure behaviours between work and home. Statistically significant relationships were found between workplace behaviours, family behaviours, and silencing behaviours as measured by the two separate scales measuring framework concepts. Although both theories had different origins and philosophical tenets, the findings of the research study created an opportunity to integrate the concepts of each and unexpectedly increase and broaden the understanding of spillover for women who are often nurses.
A domain-centric solution to functional genomics via dcGO Predictor
2013-01-01
Background Computational/manual annotations of protein functions are one of the first routes to making sense of a newly sequenced genome. Protein domain predictions form an essential part of this annotation process. This is due to the natural modularity of proteins with domains as structural, evolutionary and functional units. Sometimes two, three, or more adjacent domains (called supra-domains) are the operational unit responsible for a function, e.g. via a binding site at the interface. These supra-domains have contributed to functional diversification in higher organisms. Traditionally, functional ontologies have been applied to individual proteins, rather than families of related domains and supra-domains. We expect, however, that functional signals can to some extent be carried by protein domains and supra-domains, and consequently be used in function prediction and functional genomics. Results Here we present a domain-centric Gene Ontology (dcGO) perspective. We generalize a framework for automatically inferring ontological terms associated with domains and supra-domains from full-length sequence annotations. This general framework has been applied specifically to primary protein-level annotations from UniProtKB-GOA, generating GO term associations with SCOP domains and supra-domains. The resulting 'dcGO Predictor' can be used to provide functional annotation to protein sequences. The functional annotation of sequences in the Critical Assessment of Function Annotation (CAFA) has been used as a valuable opportunity to validate our method and to be assessed by the community. The functional annotation of all completely sequenced genomes has demonstrated the potential for domain-centric GO enrichment analysis to yield functional insights into newly sequenced or yet-to-be-annotated genomes. The generalized framework we have presented has also been applied to other domain classifications such as InterPro and Pfam, and other ontologies such as mammalian phenotype and disease ontology. The dcGO and its predictor are available at http://supfam.org/SUPERFAMILY/dcGO, including an enrichment analysis tool. Conclusions As functional units, domains offer a unique perspective on function prediction regardless of whether proteins are multi-domain or single-domain. The 'dcGO Predictor' holds great promise for contributing to a domain-centric functional understanding of genomes in the next generation sequencing era. PMID:23514627
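As a rough illustration of the general idea of transferring protein-level ontology terms to the domains those proteins contain, one could score each (domain, term) pair with a hypergeometric enrichment test, as in the sketch below; this is a simplified stand-in, not the dcGO inference procedure itself.

```python
# Rough illustration of domain-centric term association: given protein-level GO
# annotations and protein-to-domain assignments, score each (domain, term) pair
# with a hypergeometric over-representation test. Simplified stand-in for the
# dcGO procedure, not a reimplementation of it.
from scipy.stats import hypergeom

def domain_term_pvalues(protein2terms, protein2domains):
    """Return {(domain, term): p-value} for over-representation of the term
    among proteins containing the domain."""
    proteins = set(protein2terms) & set(protein2domains)
    n_total = len(proteins)
    domains = {d for p in proteins for d in protein2domains[p]}
    terms = {t for p in proteins for t in protein2terms[p]}
    pvals = {}
    for d in domains:
        with_d = {p for p in proteins if d in protein2domains[p]}
        for t in terms:
            with_t = {p for p in proteins if t in protein2terms[p]}
            k = len(with_d & with_t)            # proteins carrying both domain and term
            pvals[(d, t)] = hypergeom.sf(k - 1, n_total, len(with_t), len(with_d))
    return pvals

if __name__ == "__main__":
    p2t = {"P1": {"GO:0005215"}, "P2": {"GO:0005215"}, "P3": {"GO:0003677"}}
    p2d = {"P1": {"PF00001"}, "P2": {"PF00001"}, "P3": {"PF00096"}}
    for pair, p in sorted(domain_term_pvalues(p2t, p2d).items(), key=lambda kv: kv[1]):
        print(pair, f"p = {p:.3g}")
```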
Collaborative Metaliteracy: Putting the New Information Literacy Framework into (Digital) Practice
ERIC Educational Resources Information Center
Gersch, Beate; Lampner, Wendy; Turner, Dudley
2016-01-01
This article describes a course-integrated collaborative project between a subject librarian, a communication professor, and an instructional designer that illustrates how the TPACK (Technological Pedagogical Content Knowledge) framework, developed by Mishra and Koehler (2006), and the new ACRL Framework for Information Literacy (Framework)…
Multistate approaches in computational protein design
Davey, James A; Chica, Roberto A
2012-01-01
Computational protein design (CPD) is a useful tool for protein engineers. It has been successfully applied towards the creation of proteins with increased thermostability, improved binding affinity, novel enzymatic activity, and altered ligand specificity. Traditionally, CPD calculations search and rank sequences using a single fixed protein backbone template in an approach referred to as single-state design (SSD). While SSD has enjoyed considerable success, certain design objectives require the explicit consideration of multiple conformational and/or chemical states. Cases where a “multistate” approach may be advantageous over the SSD approach include designing conformational changes into proteins, using native ensembles to mimic backbone flexibility, and designing ligand or oligomeric association specificities. These design objectives can be efficiently tackled using multistate design (MSD), an emerging methodology in CPD that considers any number of protein conformational or chemical states as inputs instead of a single protein backbone template, as in SSD. In this review article, recent examples of the successful design of a desired property into proteins using MSD are described. These studies employing MSD are divided into two categories—those that utilized multiple conformational states, and those that utilized multiple chemical states. In addition, the scoring of competing states during negative design is discussed as a current challenge for MSD. PMID:22811394
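A minimal sketch of the difference between single-state and multi-state scoring is given below; per-state energies are assumed to come from an external scoring function, and the aggregation rules shown are generic illustrative choices rather than those of any particular MSD implementation.

```python
# Minimal illustration of single-state vs. multi-state scoring of one candidate
# sequence. Per-state energies are assumed to come from some external scoring
# function; the aggregation rules (average over positive states, margin against
# negative states) are generic choices, not those of a specific MSD method.
def single_state_score(energy_of_state):
    return energy_of_state                      # rank sequences on one fixed backbone

def multi_state_score(positive_energies, negative_energies, weight=1.0):
    """Lower is better: favour sequences that score well on all desired (positive)
    states and poorly on undesired (negative) states."""
    pos = sum(positive_energies) / len(positive_energies)
    neg = min(negative_energies) if negative_energies else 0.0
    return pos - weight * neg

if __name__ == "__main__":
    # Hypothetical energies of one sequence threaded onto an ensemble of backbones.
    ensemble = [-310.2, -305.8, -298.4]         # desired conformations
    competitor = [-295.0]                       # undesired (negative design) state
    print("single-state:", single_state_score(ensemble[0]))
    print("multi-state :", multi_state_score(ensemble, competitor))
```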
Henkel, Marius; Zwick, Michaela; Beuker, Janina; Willenbacher, Judit; Baumann, Sandra; Oswald, Florian; Neumann, Anke; Siemann-Herzberg, Martin; Syldatk, Christoph; Hausmann, Rudolf
2015-01-01
Bioprocess engineering is a highly interdisciplinary field of study which is strongly benefited by practical courses where students can actively experience the interconnection between biology, engineering, and physical sciences. This work describes a lab course developed for 2nd year undergraduate students of bioprocess engineering and related disciplines, where students are challenged with a real-life bioprocess-engineering application, the production of recombinant protein in a fed-batch process. The lab course was designed to introduce students to the subject of operating and supervising an experiment in a bioreactor, along with the analysis of collected data and a final critical evaluation of the experiment. To provide visual feedback of the experimental outcome, the organism used during class was Escherichia coli which carried a plasmid to recombinantly produce enhanced green fluorescent protein (eGFP) upon induction. This can easily be visualized in both the bioreactor and samples by using ultraviolet light. The lab course is performed with bioreactors of the simplest design, and is therefore highly flexible, robust and easy to reproduce. As part of this work the implementation and framework, the results, the evaluation and assessment of student learning combined with opinion surveys are presented, which provides a basis for instructors intending to implement a similar lab course at their respective institution. © 2015 by the International Union of Biochemistry and Molecular Biology.
A pluggable framework for parallel pairwise sequence search.
Archuleta, Jeremy; Feng, Wu-chun; Tilevich, Eli
2007-01-01
The current and near future of the computing industry is one of multi-core and multi-processor technology. Most existing sequence-search tools have been designed with a focus on single-core, single-processor systems. This discrepancy between software design and hardware architecture substantially hinders sequence-search performance by not allowing full utilization of the hardware. This paper presents a novel framework that will aid the conversion of serial sequence-search tools into a parallel version that can take full advantage of the available hardware. The framework, which is based on a software architecture called mixin layers with refined roles, enables modules to be plugged into the framework with minimal effort. The inherent modular design improves maintenance and extensibility, thus opening up a plethora of opportunities for advanced algorithmic features to be developed and incorporated while routine maintenance of the codebase persists.
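The 'mixin layers with refined roles' idea is a class-composition idiom; a loose Python analogue (the framework itself targets existing compiled sequence-search tools, and all class and method names here are illustrative) might compose a search pipeline from pluggable layers like this:

```python
# Loose Python analogue of composing a sequence-search pipeline from pluggable
# layers via cooperative inheritance. Class and method names are illustrative only.
class BaseSearch:
    def search(self, query, database):
        # Naive exact-match stand-in for a real alignment kernel.
        return [seq for seq in database if query in seq]

class ChunkingLayer(BaseSearch):
    """Splits the database so chunks could be dispatched to separate workers."""
    chunk_size = 2
    def search(self, query, database):
        hits = []
        for i in range(0, len(database), self.chunk_size):
            hits.extend(super().search(query, database[i:i + self.chunk_size]))
        return hits

class LoggingLayer(BaseSearch):
    """Cross-cutting concern plugged in without touching the search kernel."""
    def search(self, query, database):
        hits = super().search(query, database)
        print(f"query={query!r}: {len(hits)} hit(s)")
        return hits

class ParallelSearchTool(LoggingLayer, ChunkingLayer, BaseSearch):
    pass                                        # layers composed by MRO order

if __name__ == "__main__":
    db = ["ACGTACGT", "TTGACC", "GGGACGTT", "ACCA"]
    ParallelSearchTool().search("ACGT", db)
```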
ICT Design for Collaborative and Community Driven Disaster Management.
Kuziemsky, Craig E
2017-01-01
Information and communication technologies (ICT) have the potential to greatly enhance our ability to develop community resilience and sustainability to support disaster management. However, developing community resilience requires the sharing of numerous resources and the development of collaborative capacity, both of which make ICT design a challenge. This paper presents a framework that integrates community-based participatory research (CBPR) and participatory design (PD). We discuss how the framework provides bounding to support community-driven ICT design and evaluation.
Haberland, M; Kim, S
2015-02-02
When millions of years of evolution suggest a particular design solution, we may be tempted to abandon traditional design methods and copy the biological example. However, biological solutions do not often translate directly into the engineering domain, and even when they do, copying eliminates the opportunity to improve. A better approach is to extract design principles relevant to the task of interest, incorporate them in engineering designs, and vet these candidates against others. This paper presents the first general framework for determining whether biologically inspired relationships between design input variables and output objectives and constraints are applicable to a variety of engineering systems. Using optimization and statistics to generalize the results beyond a particular system, the framework overcomes shortcomings observed of ad hoc methods, particularly those used in the challenging study of legged locomotion. The utility of the framework is demonstrated in a case study of the relative running efficiency of rotary-kneed and telescoping-legged robots.
Computational protein design with backbone plasticity
MacDonald, James T.; Freemont, Paul S.
2016-01-01
The computational algorithms used in the design of artificial proteins have become increasingly sophisticated in recent years, producing a series of remarkable successes. The most dramatic of these is the de novo design of artificial enzymes. The majority of these designs have reused naturally occurring protein structures as ‘scaffolds’ onto which novel functionality can be grafted without having to redesign the backbone structure. The incorporation of backbone flexibility into protein design is a much more computationally challenging problem due to the greatly increased search space, but promises to remove the limitations of reusing natural protein scaffolds. In this review, we outline the principles of computational protein design methods and discuss recent efforts to consider backbone plasticity in the design process. PMID:27911735
Comparing Freshman and doctoral engineering students in design: mapping with a descriptive framework
NASA Astrophysics Data System (ADS)
Carmona Marques, P.
2017-11-01
This paper reports the results of a study of engineering students' approaches to an open-ended design problem. To carry this out, sketches and interviews were collected from 9 freshman (first-year) and 10 doctoral engineering students as they designed solutions for orange squeezers. Sketches and interviews were analysed and mapped with a descriptive 'ideation framework' (IF) of the design process, to document and compare their design creativity (Carmona Marques, P., A. Silva, E. Henriques, and C. Magee. 2014. "A Descriptive Framework of the Design Process from a Dual Cognitive Engineering Perspective." International Journal of Design Creativity and Innovation 2 (3): 142-164). The results show that the designers worked in a manner largely consistent with the IF for generalisation and specialisation loops. Also, doctoral students produced more alternative solutions during the ideation process. In addition, compared to freshmen, doctoral students used the generalisation loop of the IF, working at higher levels of abstraction. The iterative nature of design is highlighted during this study, a potential contribution to decreasing the gap between both groups in engineering education.
Design Methodology for Automated Construction Machines
1987-12-11
Demsetz, Laura A.; Levy, David H.; Schena, Bruce
...are discussed along with the design of a pair of machines which automate framework installation. Preliminary analysis and testing indicate that these...
ERIC Educational Resources Information Center
Yoon, Susan A.; Klopfer, Eric
2006-01-01
This paper reports on the efficacy of a professional development framework premised on four complex systems design principles: Feedback, Adaptation, Network Growth and Self-organization (FANS). The framework is applied to the design and delivery of the first 2 years of a 3-year study aimed at improving teacher and student understanding of…
Scaling Agile Methods for Department of Defense Programs
2016-12-01
concepts that drive the design of scaling frameworks, the contextual drivers that shape implementation, and widely known frameworks available today...Barlow probably governs some of the design choices you make. Barlow’s formula helps us understand the relationship between the outside diameter of a...encouraged to cross-train engineering staff and move away from a team structure where people focus on only one specialty, such as design
Morris, Gerwyn; Puri, Basant K; Walder, Ken; Berk, Michael; Stubbs, Brendon; Maes, Michael; Carvalho, André F
2018-03-29
The endoplasmic reticulum (ER) is the main cellular organelle involved in protein synthesis, assembly and secretion. Accumulating evidence shows that across several neurodegenerative and neuroprogressive diseases, ER stress ensues, which is accompanied by over-activation of the unfolded protein response (UPR). Although the UPR could initially serve adaptive purposes in conditions associated with higher cellular demands and after exposure to a range of pathophysiological insults, over time the UPR may become detrimental, thus contributing to neuroprogression. Herein, we propose that immune-inflammatory, neuro-oxidative, neuro-nitrosative, as well as mitochondrial pathways may reciprocally interact with aberrations in UPR pathways. Furthermore, ER stress may contribute to a deregulation in calcium homoeostasis. The common denominator of these pathways is a decrease in neuronal resilience, synaptic dysfunction and even cell death. This review also discusses how mechanisms related to ER stress could be explored as a source for novel therapeutic targets for neurodegenerative and neuroprogressive diseases. The design of randomised controlled trials testing compounds that target aberrant UPR-related pathways within the emerging framework of precision psychiatry is warranted.
Gardner, Thomas J; Stein, Kathryn R; Duty, J Andrew; Schwarz, Toni M; Noriega, Vanessa M; Kraus, Thomas; Moran, Thomas M; Tortorella, Domenico
2016-12-14
The prototypic β-herpesvirus human cytomegalovirus (CMV) establishes life-long persistence within its human host. The CMV envelope consists of various protein complexes that enable wide viral tropism. More specifically, the glycoprotein complex gH/gL/gO (gH-trimer) is required for infection of all cell types, while the gH/gL/UL128/130/131a (gH-pentamer) complex imparts specificity in infecting epithelial, endothelial and myeloid cells. Here we utilize state-of-the-art robotics and a high-throughput neutralization assay to screen and identify monoclonal antibodies (mAbs) targeting the gH glycoproteins that display broad-spectrum properties to inhibit virus infection and dissemination. Subsequent biochemical characterization reveals that the mAbs bind to gH-trimer and gH-pentamer complexes and identify the antibodies' epitope as an 'antigenic hot spot' critical for virus entry. The mAbs inhibit CMV infection at a post-attachment step by interacting with a highly conserved central alpha helix-rich domain. The platform described here provides the framework for development of effective CMV biologics and vaccine design strategies.
Steps in the bacterial flagellar motor.
Mora, Thierry; Yu, Howard; Sowa, Yoshiyuki; Wingreen, Ned S
2009-10-01
The bacterial flagellar motor is a highly efficient rotary machine used by many bacteria to propel themselves. It has recently been shown that at low speeds its rotation proceeds in steps. Here we propose a simple physical model, based on the storage of energy in protein springs, that accounts for this stepping behavior as a random walk in a tilted corrugated potential that combines torque and contact forces. We argue that the absolute angular position of the rotor is crucial for understanding step properties and show this hypothesis to be consistent with the available data, in particular the observation that backward steps are smaller on average than forward steps. We also predict a sublinear speed versus torque relationship for fixed load at low torque, and a peak in rotor diffusion as a function of torque. Our model provides a comprehensive framework for understanding and analyzing stepping behavior in the bacterial flagellar motor and proposes novel, testable predictions. More broadly, the storage of energy in protein springs by the flagellar motor may provide useful general insights into the design of highly efficient molecular machines.
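The stepping model described above (a random walk in a tilted corrugated potential) can be illustrated with a minimal overdamped Brownian-dynamics simulation; the functional form, parameter values, and units below are assumptions, and the paper's model additionally includes contact forces and is fitted to motor data.

```python
# Minimal Brownian-dynamics sketch of a rotor coordinate in a tilted corrugated
# potential V(theta) = -torque*theta + A*cos(n*theta). Functional form, parameters,
# and units are illustrative only.
import numpy as np

def simulate(n_steps=200_000, dt=1e-6, torque=5.0, amplitude=8.0,
             n_periods=26, gamma=1.0, kT=1.0, seed=0):
    """Euler-Maruyama integration of overdamped motion in the tilted potential."""
    rng = np.random.default_rng(seed)
    theta = np.empty(n_steps)
    theta[0] = 0.0
    noise = np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal(n_steps - 1)
    for i in range(1, n_steps):
        # Force = -dV/dtheta = torque + A*n*sin(n*theta)
        force = torque + amplitude * n_periods * np.sin(n_periods * theta[i - 1])
        theta[i] = theta[i - 1] + (force / gamma) * dt + noise[i - 1]
    return theta

if __name__ == "__main__":
    traj = simulate()
    print(f"net rotation: {traj[-1] - traj[0]:.2f} rad over {len(traj)} steps")
```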
Pyicos: a versatile toolkit for the analysis of high-throughput sequencing data.
Althammer, Sonja; González-Vallinas, Juan; Ballaré, Cecilia; Beato, Miguel; Eyras, Eduardo
2011-12-15
High-throughput sequencing (HTS) has revolutionized gene regulation studies and is now fundamental for the detection of protein-DNA and protein-RNA binding, as well as for measuring RNA expression. With increasing variety and sequencing depth of HTS datasets, the need for more flexible and memory-efficient tools to analyse them is growing. We describe Pyicos, a powerful toolkit for the analysis of mapped reads from diverse HTS experiments: ChIP-Seq, either punctuated or broad signals, CLIP-Seq and RNA-Seq. We prove the effectiveness of Pyicos to select for significant signals and show that its accuracy is comparable and sometimes superior to that of methods specifically designed for each particular type of experiment. Pyicos facilitates the analysis of a variety of HTS datatypes through its flexibility and memory efficiency, providing a useful framework for data integration into models of regulatory genomics. Open-source software, with tutorials and protocol files, is available at http://regulatorygenomics.upf.edu/pyicos or as a Galaxy server at http://regulatorygenomics.upf.edu/galaxy eduardo.eyras@upf.edu Supplementary data are available at Bioinformatics online.
Stranges, P Benjamin; Kuhlman, Brian
2013-01-01
The accurate design of new protein-protein interactions is a longstanding goal of computational protein design. However, most computationally designed interfaces fail to form experimentally. This investigation compares five previously described successful de novo interface designs with 158 failures. Both sets of proteins were designed with the molecular modeling program Rosetta. Designs were considered a success if a high-resolution crystal structure of the complex closely matched the design model and the equilibrium dissociation constant for binding was less than 10 μM. The successes and failures represent a wide variety of interface types and design goals including heterodimers, homodimers, peptide-protein interactions, one-sided designs (i.e., where only one of the proteins was mutated) and two-sided designs. The most striking feature of the successful designs is that they have fewer polar atoms at their interfaces than many of the failed designs. Designs that attempted to create extensive sets of interface-spanning hydrogen bonds resulted in no detectable binding. In contrast, polar atoms make up more than 40% of the interface area of many natural dimers, and native interfaces often contain extensive hydrogen bonding networks. These results suggest that Rosetta may not be accurately balancing hydrogen bonding and electrostatic energies against desolvation penalties and that design processes may not include sufficient sampling to identify side chains in preordered conformations that can fully satisfy the hydrogen bonding potential of the interface. Copyright © 2012 The Protein Society.
Membrane-spanning α-helical barrels as tractable protein-design targets.
Niitsu, Ai; Heal, Jack W; Fauland, Kerstin; Thomson, Andrew R; Woolfson, Derek N
2017-08-05
The rational (de novo) design of membrane-spanning proteins lags behind that for water-soluble globular proteins. This is due to gaps in our knowledge of membrane-protein structure, and experimental difficulties in studying such proteins compared to water-soluble counterparts. One limiting factor is the small number of experimentally determined three-dimensional structures for transmembrane proteins. By contrast, many tens of thousands of globular protein structures provide a rich source of 'scaffolds' for protein design, and the means to garner sequence-to-structure relationships to guide the design process. The α-helical coiled coil is a protein-structure element found in both globular and membrane proteins, where it cements a variety of helix-helix interactions and helical bundles. Our deep understanding of coiled coils has enabled a large number of successful de novo designs. For one class, the α-helical barrels (that is, symmetric bundles of five or more helices with central accessible channels), there are both water-soluble and membrane-spanning examples. Recent computational designs of water-soluble α-helical barrels with five to seven helices have advanced the design field considerably. Here we identify and classify analogous and more complicated membrane-spanning α-helical barrels from the Protein Data Bank. These provide tantalizing but tractable targets for protein engineering and de novo protein design. This article is part of the themed issue 'Membrane pores: from structure and assembly, to medicine and technology'. © 2017 The Author(s).
Efficient Characterization of Protein Cavities within Molecular Simulation Trajectories: trj_cavity.
Paramo, Teresa; East, Alexandra; Garzón, Diana; Ulmschneider, Martin B; Bond, Peter J
2014-05-13
Protein cavities and tunnels are critical in determining phenomena such as ligand binding, molecular transport, and enzyme catalysis. Molecular dynamics (MD) simulations enable the exploration of the flexibility and conformational plasticity of protein cavities, extending the information available from static experimental structures relevant to, for example, drug design. Here, we present a new tool (trj_cavity) implemented within the GROMACS ( www.gromacs.org ) framework for the rapid identification and characterization of cavities detected within MD trajectories. trj_cavity is optimized for usability and computational efficiency and is applicable to the time-dependent analysis of any cavity topology, and optional specialized descriptors can be used to characterize, for example, protein channels. Its novel grid-based algorithm performs an efficient neighbor search whose calculation time is linear with system size, and a comparison of performance with other widely used cavity analysis programs reveals an orders-of-magnitude improvement in the computational cost. To demonstrate its potential for revealing novel mechanistic insights, trj_cavity has been used to analyze long-time scale simulation trajectories for three diverse protein cavity systems. This has helped to reveal, respectively, the lipid binding mechanism in the deep hydrophobic cavity of a soluble mite-allergen protein, Der p 2; a means for shuttling carbohydrates between the surface-exposed substrate-binding and catalytic pockets of a multidomain, membrane-proximal pullulanase, PulA; and the structural basis for selectivity in the transmembrane pore of a voltage-gated sodium channel (NavMs), embedded within a lipid bilayer environment. trj_cavity is available for download under an open-source license ( http://sourceforge.net/projects/trjcavity ). A simplified, GROMACS-independent version may also be compiled.
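The grid-based idea behind such cavity detection can be sketched simply: voxels near atoms are marked occupied, and empty voxels buried by protein along all axis directions are reported as cavity voxels. The sketch below illustrates the general approach only; trj_cavity's actual neighbor search, cutoffs, and descriptors differ.

```python
# Simplified sketch of grid-based cavity detection. Occupancy marking uses a cubic
# neighbourhood around each atom for simplicity, and burial is tested along the six
# axis directions. Illustrative only; not the trj_cavity algorithm.
import numpy as np

def cavity_voxels(atom_coords, spacing=1.0, probe=2.0):
    coords = np.asarray(atom_coords, dtype=float)
    lo = coords.min(axis=0) - probe
    shape = np.ceil((coords.max(axis=0) + probe - lo) / spacing).astype(int) + 1
    occupied = np.zeros(shape, dtype=bool)
    r = int(np.ceil(probe / spacing))
    for atom in coords:                                    # linear in number of atoms
        idx = np.round((atom - lo) / spacing).astype(int)
        s = tuple(slice(max(i - r, 0), i + r + 1) for i in idx)
        occupied[s] = True
    cavities = []
    for i in range(1, shape[0] - 1):
        for j in range(1, shape[1] - 1):
            for k in range(1, shape[2] - 1):
                if not occupied[i, j, k] and (
                    occupied[:i, j, k].any() and occupied[i + 1:, j, k].any() and
                    occupied[i, :j, k].any() and occupied[i, j + 1:, k].any() and
                    occupied[i, j, :k].any() and occupied[i, j, k + 1:].any()):
                    cavities.append((i, j, k))
    return cavities, lo, spacing

if __name__ == "__main__":
    # Toy "protein": a hollow shell of atoms on a sphere of radius 6.
    phi = np.linspace(0, np.pi, 20)[1:-1]
    psi = np.linspace(0, 2 * np.pi, 40)
    shell = np.array([[6 * np.sin(p) * np.cos(q), 6 * np.sin(p) * np.sin(q), 6 * np.cos(p)]
                      for p in phi for q in psi])
    voxels, _, _ = cavity_voxels(shell)
    print(f"{len(voxels)} buried cavity voxels found in the toy shell")
```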
Detection of significant protein coevolution.
Ochoa, David; Juan, David; Valencia, Alfonso; Pazos, Florencio
2015-07-01
The evolution of proteins cannot be fully understood without taking into account the coevolutionary linkages entangling them. From a practical point of view, coevolution between protein families has been used as a way of detecting protein interactions and functional relationships from genomic information. The most common approach to inferring protein coevolution involves the quantification of phylogenetic tree similarity using a family of methodologies termed mirrortree. In spite of their success, a fundamental problem of these approaches is the lack of an adequate statistical framework to assess the significance of a given coevolutionary score (tree similarity). As a consequence, a number of ad hoc filters and arbitrary thresholds are required in an attempt to obtain a final set of confident coevolutionary signals. In this work, we developed a method for associating confidence estimators (P values) to the tree-similarity scores, using a null model specifically designed for the tree comparison problem. We show how this approach largely improves the quality and coverage (number of pairs that can be evaluated) of the detected coevolution in all the stages of the mirrortree workflow, independently of the starting genomic information. This not only leads to a better understanding of protein coevolution and its biological implications, but also to obtain a highly reliable and comprehensive network of predicted interactions, as well as information on the substructure of macromolecular complexes using only genomic information. The software and datasets used in this work are freely available at: http://csbg.cnb.csic.es/pMT/. pazos@cnb.csic.es Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
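The underlying mirrortree signal is the similarity between two families' inter-species distance matrices. The sketch below computes that similarity as a Pearson correlation and attaches an empirical p-value by permuting species labels; note that the paper's null model is specifically designed for tree comparison and is not this simple permutation scheme.

```python
# Sketch of a mirrortree-style score: Pearson correlation between the upper
# triangles of two families' inter-species distance matrices, with an empirical
# p-value from shuffling species labels. Illustration only.
import numpy as np

def mirrortree_score(dist_a, dist_b):
    iu = np.triu_indices_from(dist_a, k=1)
    return np.corrcoef(dist_a[iu], dist_b[iu])[0, 1]

def permutation_pvalue(dist_a, dist_b, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    observed = mirrortree_score(dist_a, dist_b)
    count = 0
    n = dist_a.shape[0]
    for _ in range(n_perm):
        perm = rng.permutation(n)
        if mirrortree_score(dist_a, dist_b[np.ix_(perm, perm)]) >= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    base = rng.random((12, 12)); base = (base + base.T) / 2; np.fill_diagonal(base, 0)
    noisy = base + 0.05 * rng.random((12, 12)); noisy = (noisy + noisy.T) / 2
    np.fill_diagonal(noisy, 0)
    r, p = permutation_pvalue(base, noisy)
    print(f"tree-similarity r = {r:.3f}, permutation p = {p:.3g}")
```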
Bagdonaite, Ieva; Nordén, Rickard; Joshi, Hiren J.; King, Sarah L.; Vakhrushev, Sergey Y.; Olofsson, Sigvard; Wandall, Hans H.
2016-01-01
Herpesviruses are among the most complex and widespread viruses, infection and propagation of which depend on envelope proteins. These proteins serve as mediators of cell entry as well as modulators of the immune response and are attractive vaccine targets. Although envelope proteins are known to carry glycans, little is known about the distribution, nature, and functions of these modifications. This is particularly true for O-glycans; thus we have recently developed a “bottom up” mass spectrometry-based technique for mapping O-glycosylation sites on herpes simplex virus type 1. We found wide distribution of O-glycans on herpes simplex virus type 1 glycoproteins and demonstrated that elongated O-glycans were essential for the propagation of the virus. Here, we applied our proteome-wide discovery platform for mapping O-glycosites on representative and clinically significant members of the herpesvirus family: varicella zoster virus, human cytomegalovirus, and Epstein-Barr virus. We identified a large number of O-glycosites distributed on most envelope proteins in all viruses and further demonstrated conserved patterns of O-glycans on distinct homologous proteins. Because glycosylation is highly dependent on the host cell, we tested varicella zoster virus-infected cell lysates and clinically isolated virus and found evidence of consistent O-glycosites. These results present a comprehensive view of herpesvirus O-glycosylation and point to the widespread occurrence of O-glycans in regions of envelope proteins important for virus entry, formation, and recognition by the host immune system. This knowledge enables dissection of specific functional roles of individual glycosites and, moreover, provides a framework for design of glycoprotein vaccines with representative glycosylation. PMID:27129252
ERIC Educational Resources Information Center
Linn, Marcia C.
1995-01-01
Describes a framework called scaffolded knowledge integration and illustrates how it guided the design of two successful course enhancements in the field of computer science and engineering: the LISP Knowledge Integration Environment and the spatial reasoning environment. (101 references) (Author/MKR)
2008-11-01
is particularly important in order to design a network that is realistically deployable. The goal of this project is the design of a theoretical ... framework to assess and predict the effectiveness and performance of networks and their loads.
A computer-aided methodology for designing sustainable supply chains is presented using the P-graph framework to develop supply chain structures which are analyzed using cost, the cost of producing electricity, and two sustainability metrics: ecological footprint and emergy. They...
Network Analysis Reveals the Recognition Mechanism for Mannose-binding Lectins
NASA Astrophysics Data System (ADS)
Zhao, Yunjie; Jian, Yiren; Zeng, Chen; Computational Biophysics Lab Team
The specific carbohydrate binding of mannose-binding lectin (MBL) protein in plants makes it a very useful molecular tool for cancer cell detection and other applications. The biological states of most MBL proteins are dimeric. Using dynamics network analysis on molecular dynamics (MD) simulations on the model protein of MBL, we elucidate the short- and long-range driving forces behind the dimer formation. The results are further supported by sequence coevolution analysis. We propose a general framework for deciphering the recognition mechanism underlying protein-protein interactions that may have potential applications in signaling pathways.
Computational Exploration of a Protein Receptor Binding Space with Student Proposed Peptide Ligands
King, Matthew D.; Phillips, Paul; Turner, Matthew W.; Katz, Michael; Lew, Sarah; Bradburn, Sarah; Andersen, Tim; Mcdougal, Owen M.
2017-01-01
Computational molecular docking is a fast and effective in silico method for the analysis of binding between a protein receptor model and a ligand. The visualization and manipulation of protein-to-ligand binding in three-dimensional space represents a powerful tool in the biochemistry curriculum to enhance student learning. The DockoMatic tutorial described herein provides a framework by which instructors can guide students through a drug screening exercise. Using receptor models derived from readily available protein crystal structures, docking programs have the ability to predict ligand binding properties, such as preferential binding orientations and binding affinities. The use of computational studies can significantly enhance complementary wet chemical experimentation by providing insight into the important molecular interactions within the system of interest, as well as guide the design of new candidate ligands based on observed binding motifs and energetics. In this laboratory tutorial, the graphical user interface, DockoMatic, facilitates docking job submissions to the docking engine, AutoDock 4.2. The purpose of this exercise is to successfully dock a 17-amino acid peptide, α-conotoxin TxIA, to the acetylcholine binding protein from Aplysia californica-AChBP to determine the most stable binding configuration. Each student will then propose two specific amino acid substitutions of α-conotoxin TxIA to enhance peptide binding affinity, create the mutant in DockoMatic, and perform docking calculations to compare their results with the class. Students will also compare intermolecular forces, binding energy, and geometric orientation of their prepared analog to their initial α-conotoxin TxIA docking results. PMID:26537635