Template-based structure modeling of protein-protein interactions
Szilagyi, Andras; Zhang, Yang
2014-01-01
The structure of protein-protein complexes can be constructed by using the known structure of other protein complexes as a template. The complex structure templates are generally detected either by homology-based sequence alignments or, given the structure of monomer components, by structure-based comparisons. Critical improvements have been made in recent years by utilizing interface recognition and by recombining monomer and complex template libraries. Encouraging progress has also been witnessed in genome-wide applications of template-based modeling, with modeling accuracy comparable to high-throughput experimental data. Nevertheless, bottlenecks exist due to the incompleteness of the protein-protein complex structure library and the lack of methods for distant homologous template identification and full-length complex structure refinement. PMID:24721449
High-throughput sequencing: a failure mode analysis.
Yang, George S; Stott, Jeffery M; Smailus, Duane; Barber, Sarah A; Balasundaram, Miruna; Marra, Marco A; Holt, Robert A
2005-01-04
Basic manufacturing principles are becoming increasingly important in high-throughput sequencing facilities where there is a constant drive to increase quality, increase efficiency, and decrease operating costs. While high-throughput centres report failure rates typically on the order of 10%, the causes of sporadic sequencing failures are seldom analyzed in detail and have not, in the past, been formally reported. Here we report the results of a failure mode analysis of our production sequencing facility based on detailed evaluation of 9,216 ESTs generated from two cDNA libraries. Two categories of failures are described: process-related failures (failures due to equipment or sample handling) and template-related failures (failures that are revealed by close inspection of electropherograms and are likely due to properties of the template DNA sequence itself). Preventative action based on a detailed understanding of failure modes is likely to improve the performance of other production sequencing pipelines.
The opportunity and challenge of spin coat based nanoimprint lithography
NASA Astrophysics Data System (ADS)
Jung, Wooyung; Cho, Jungbin; Choi, Eunhyuk; Lim, Yonghyun; Bok, Cheolkyu; Tsuji, Masatoshi; Kobayashi, Kei; Kono, Takuya; Nakasugi, Tetsuro
2017-03-01
Since multi-patterning with spacers was introduced in NAND flash memory [1], it has been a promising solution to overcome the resolution limit. However, the increasing process cost of multi-patterning with spacers becomes a serious burden to device manufacturers as the half pitch of patterns gets smaller [2,3]. Even though Nano Imprint Lithography (NIL) has been considered one of the strongest candidates for avoiding the cost issue of multi-patterning with spacers, there are still negative viewpoints: template damage induced by particles between template and wafer, overlay degradation induced by shear force between template and wafer, and throughput loss induced by dispensing and spreading resist droplets. Jet and Flash Imprint Lithography (J-FIL) [4-6] has contributed to throughput improvement, but still has the above problems. J-FIL consists of five steps: dispensing resist droplets on the wafer, imprinting the template on the wafer, filling the gap between template and wafer with resist, UV curing, and separating the template from the wafer. If dispensing resist droplets by inkjet is replaced with coating resist on a spin coater, additional progress in NIL can be achieved. Template damage from particles can be suppressed by a thick spin-coated resist that covers most of the particles on the wafer; shear force between template and wafer can be minimized with the thick resist; and additional throughput enhancement can be achieved by skipping the dispensing of resist droplets on the wafer. On the other hand, spin-coat-based NIL has side effects such as pattern collapse, which comes from the high separation energy of the resist. It is expected that pattern collapse can be improved by developing resists with low separation energy.
Hu, E; Liao, T. W.; Tiersch, T. R.
2013-01-01
Emerging commercial-level technology for aquatic sperm cryopreservation has not been modeled by computer simulation. Commercially available software (ARENA, Rockwell Automation, Inc. Milwaukee, WI) was applied to simulate high-throughput sperm cryopreservation of blue catfish (Ictalurus furcatus) based on existing processing capabilities. The goal was to develop a simulation model suitable for production planning and decision making. The objectives were to: 1) predict the maximum output for 8-hr workday; 2) analyze the bottlenecks within the process, and 3) estimate operational costs when run for daily maximum output. High-throughput cryopreservation was divided into six major steps modeled with time, resources and logic structures. The modeled production processed 18 fish and produced 1164 ± 33 (mean ± SD) 0.5-ml straws containing one billion cryopreserved sperm. Two such production lines could support all hybrid catfish production in the US and 15 such lines could support the entire channel catfish industry if it were to adopt artificial spawning techniques. Evaluations were made to improve efficiency, such as increasing scale, optimizing resources, and eliminating underutilized equipment. This model can serve as a template for other aquatic species and assist decision making in industrial application of aquatic germplasm in aquaculture, stock enhancement, conservation, and biomedical model fishes. PMID:25580079
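The bottleneck analysis described in this abstract follows the standard serial-production-line rule that steady-state output is limited by the slowest step. A minimal sketch is below; the step names and per-batch timings are hypothetical placeholders for illustration, not the paper's measured ARENA parameters:

```python
# Hypothetical step times (minutes per batch) and batch sizes.
# These numbers are illustrative only, not the study's measurements.
steps = {
    "sperm collection":  {"min_per_batch": 30, "straws_per_batch": 100},
    "dilution/analysis": {"min_per_batch": 20, "straws_per_batch": 100},
    "straw filling":     {"min_per_batch": 45, "straws_per_batch": 100},
    "freezing":          {"min_per_batch": 40, "straws_per_batch": 100},
    "sorting/storage":   {"min_per_batch": 15, "straws_per_batch": 100},
}

def bottleneck_throughput(steps, workday_min=480):
    """Daily output of a serial line is capped by the slowest step."""
    rates = {name: s["straws_per_batch"] / s["min_per_batch"]  # straws/min
             for name, s in steps.items()}
    bottleneck = min(rates, key=rates.get)
    return bottleneck, rates[bottleneck] * workday_min

name, per_day = bottleneck_throughput(steps)
print(f"bottleneck: {name}, max ~{per_day:.0f} straws per 8-h day")
```

With these toy numbers the filling step limits the line, which mirrors how the simulation model is used to locate bottlenecks before changing scale or resources.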
A versatile coupled cell-free transcription-translation system based on tobacco BY-2 cell lysates.
Buntru, Matthias; Vogel, Simon; Stoff, Katrin; Spiegel, Holger; Schillberg, Stefan
2015-05-01
Cell-free protein synthesis is a powerful method for the high-throughput production of recombinant proteins, especially proteins that are difficult to express in living cells. Here we describe a coupled cell-free transcription-translation system based on tobacco BY-2 cell lysates (BYLs). Using a combination of fractional factorial designs and response surface models, we developed a cap-independent system that produces more than 250 μg/mL of functional enhanced yellow fluorescent protein (eYFP) and about 270 μg/mL of firefly luciferase using plasmid templates, and up to 180 μg/mL eYFP using linear templates (PCR products) in 18 h batch reactions. The BYL contains actively-translocating microsomal vesicles derived from the endoplasmic reticulum, promoting the formation of disulfide bonds, glycosylation and the cotranslational integration of membrane proteins. This was demonstrated by expressing a functional full-size antibody (∼ 150 μg/mL), the model enzyme glucose oxidase (GOx) (∼ 7.3 U/mL), and a transmembrane growth factor (∼ 25 μg/mL). Subsequent in vitro treatment of GOx with peptide-N-glycosidase F confirmed the presence of N-glycans. Our results show that the BYL can be used as a high-throughput expression and screening platform that is particularly suitable for complex and cytotoxic proteins. © 2014 Wiley Periodicals, Inc.
Protein docking by the interface structure similarity: how much structure is needed?
Sinha, Rohita; Kundrotas, Petras J; Vakser, Ilya A
2012-01-01
The increasing availability of co-crystallized protein-protein complexes provides an opportunity to use template-based modeling for protein-protein docking. Structure alignment techniques are useful in detection of remote target-template similarities. The size of the structure involved in the alignment is important for the success in modeling. This paper describes a systematic large-scale study to find the optimal definition/size of the interfaces for the structure alignment-based docking applications. The results showed that structural areas corresponding to the cutoff values <12 Å across the interface inadequately represent structural details of the interfaces. With the increase of the cutoff beyond 12 Å, the success rate for the benchmark set of 99 protein complexes did not increase significantly for higher accuracy models, and decreased for lower-accuracy models. The 12 Å cutoff was optimal in our interface alignment-based docking, and a likely best choice for the large-scale (e.g., on the scale of the entire genome) applications to protein interaction networks. The results provide guidelines for the docking approaches, including high-throughput applications to modeled structures.
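The distance-cutoff interface definition studied here can be sketched in a few lines, assuming one representative atom (e.g. C-alpha) per residue; the residue ids and coordinates below are toy values, and a real application would read them from a PDB structure:

```python
from math import dist  # Euclidean distance, Python >= 3.8

def interface_residues(chain_a, chain_b, cutoff=12.0):
    """Return the residues of each chain lying within `cutoff` angstroms
    of any residue of the partner chain.

    chain_a / chain_b: dicts mapping residue id -> (x, y, z) of one
    representative atom per residue.
    """
    iface_a = {ra for ra, xa in chain_a.items()
               if any(dist(xa, xb) <= cutoff for xb in chain_b.values())}
    iface_b = {rb for rb, xb in chain_b.items()
               if any(dist(xa, xb) <= cutoff for xa in chain_a.values())}
    return iface_a, iface_b

# Toy coordinates: residues 1-2 of chain a face chain b; residue 3 is remote.
a = {1: (0.0, 0.0, 0.0), 2: (4.0, 0.0, 0.0), 3: (60.0, 0.0, 0.0)}
b = {1: (0.0, 5.0, 0.0), 2: (40.0, 5.0, 0.0)}
print(interface_residues(a, b))
```

Varying `cutoff` in such a routine is exactly the experiment the paper performs: too small a cutoff drops structural context, while beyond 12 Å little additional signal is gained.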
Sources of PCR-induced distortions in high-throughput sequencing data sets
Kebschull, Justus M.; Zador, Anthony M.
2015-01-01
PCR permits the exponential and sequence-specific amplification of DNA, even from minute starting quantities. PCR is a fundamental step in preparing DNA samples for high-throughput sequencing. However, there are errors associated with PCR-mediated amplification. Here we examine the effects of four important sources of error—bias, stochasticity, template switches and polymerase errors—on sequence representation in low-input next-generation sequencing libraries. We designed a pool of diverse PCR amplicons with a defined structure, and then used Illumina sequencing to search for signatures of each process. We further developed quantitative models for each process, and compared predictions of these models to our experimental data. We find that PCR stochasticity is the major force skewing sequence representation after amplification of a pool of unique DNA amplicons. Polymerase errors become very common in later cycles of PCR but have little impact on the overall sequence distribution as they are confined to small copy numbers. PCR template switches are rare and confined to low copy numbers. Our results provide a theoretical basis for removing distortions from high-throughput sequencing data. In addition, our findings on PCR stochasticity will have particular relevance to quantification of results from single cell sequencing, in which sequences are represented by only one or a few molecules. PMID:26187991
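The stochastic skew the authors identify can be reproduced qualitatively with a Galton-Watson branching-process simulation: chance copying events in the first few cycles snowball into large copy-number differences between initially equal amplicons. This is a generic sketch with illustrative parameters, not the authors' fitted model:

```python
import random
import statistics

def pcr_pool(n_amplicons=200, cycles=12, efficiency=0.8, seed=1):
    """Amplify a pool of unique single-copy templates.

    Each cycle, every molecule is copied with probability `efficiency`
    (a Galton-Watson branching process). Early-cycle randomness is then
    amplified exponentially, skewing the final sequence representation.
    """
    rng = random.Random(seed)
    counts = [1] * n_amplicons  # one starting molecule per amplicon
    for _ in range(cycles):
        counts = [c + sum(rng.random() < efficiency for _ in range(c))
                  for c in counts]
    return counts

counts = pcr_pool()
mean = statistics.mean(counts)
cv = statistics.stdev(counts) / mean  # spread between equally-abundant inputs
print(f"mean copies per amplicon: {mean:.0f}, CV: {cv:.2f}")
```

Even though every amplicon starts from exactly one molecule, the coefficient of variation after amplification is substantial, consistent with stochasticity dominating the distortions in low-input libraries.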
Li, Xiaofei; Wu, Yuhua; Li, Jun; Li, Yunjing; Long, Likun; Li, Feiwu; Wu, Gang
2015-01-05
The rapid increase in the number of genetically modified (GM) varieties has led to a demand for high-throughput methods to detect genetically modified organisms (GMOs). We describe a new dynamic array-based high-throughput method to simultaneously detect 48 targets in 48 samples on a Fluidigm system. The test targets included species-specific genes, common screening elements, most of the Chinese-approved GM events, and several unapproved events. The 48 TaqMan assays successfully amplified products from both single-event samples and complex samples with a GMO DNA amount of 0.05 ng, and displayed high specificity. To improve the sensitivity of detection, a preamplification step for 48 pooled targets was added to enrich the amount of template before performing dynamic chip assays. This dynamic chip-based method allowed the synchronous high-throughput detection of multiple targets in multiple samples. Thus, it represents an efficient, qualitative method for GMO multi-detection. PMID:25556930
Stubbs, Samuel; Oura, Chris A L; Henstock, Mark; Bowden, Timothy R; King, Donald P; Tuppurainen, Eeva S M
2012-02-01
Capripoxviruses, which are endemic in much of Africa and Asia, are the aetiological agents of economically devastating poxviral diseases in cattle, sheep and goats. The aim of this study was to validate a high-throughput real-time PCR assay for routine diagnostic use in a capripoxvirus reference laboratory. The performance of two previously published real-time PCR methods were compared using commercially available reagents including the amplification kits recommended in the original publication. Furthermore, both manual and robotic extraction methods used to prepare template nucleic acid were evaluated using samples collected from experimentally infected animals. The optimised assay had an analytical sensitivity of at least 63 target DNA copies per reaction, displayed a greater diagnostic sensitivity compared to conventional gel-based PCR, detected capripoxviruses isolated from outbreaks around the world and did not amplify DNA from related viruses in the genera Orthopoxvirus or Parapoxvirus. The high-throughput robotic DNA extraction procedure did not adversely affect the sensitivity of the assay compared to manual preparation of PCR templates. This laboratory-based assay provides a rapid and robust method to detect capripoxviruses following suspicion of disease in endemic or disease-free countries. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Vairo, Daniel M.
1998-01-01
The removal and installation of sting-mounted wind tunnel models in the National Transonic Facility (NTF) is a multi-task process having a large impact on the annual throughput of the facility. Approximately ten model removal and installation cycles occur annually at the NTF, with each cycle requiring slightly over five days to complete. The various tasks of the model changeover process were modeled in Microsoft Project as a template to provide a planning, tracking, and management tool. The template can also be used as a tool to evaluate improvements to this process. This document describes the development of the template and provides step-by-step instructions on its use as a planning and tracking tool. A secondary role of this document is to provide an overview of the model changeover process and briefly describe the tasks associated with it.
Mohammadi, Erfan; Zhao, Chuankai; Meng, Yifei; Qu, Ge; Zhang, Fengjiao; Zhao, Xikang; Mei, Jianguo; Zuo, Jian-Min; Shukla, Diwakar; Diao, Ying
2017-01-01
Solution processable semiconducting polymers have been under intense investigations due to their diverse applications from printed electronics to biomedical devices. However, controlling the macromolecular assembly across length scales during solution coating remains a key challenge, largely due to the disparity in timescales of polymer assembly and high-throughput printing/coating. Herein we propose the concept of dynamic templating to expedite polymer nucleation and the ensuing assembly process, inspired by biomineralization templates capable of surface reconfiguration. Molecular dynamic simulations reveal that surface reconfigurability is key to promoting template–polymer interactions, thereby lowering polymer nucleation barrier. Employing ionic-liquid-based dynamic template during meniscus-guided coating results in highly aligned, highly crystalline donor–acceptor polymer thin films over large area (>1 cm2) and promoted charge transport along both the polymer backbone and the π–π stacking direction in field-effect transistors. We further demonstrate that the charge transport anisotropy can be reversed by tuning the degree of polymer backbone alignment. PMID:28703136
Ordered three-dimensional interconnected nanoarchitectures in anodic porous alumina
Martín, Jaime; Martín-González, Marisol; Fernández, Jose Francisco; Caballero-Calero, Olga
2014-01-01
Three-dimensional nanostructures combine properties of nanoscale materials with the advantages of being macro-sized pieces when the time comes to manipulate, measure their properties, or make a device. However, the amount of compounds with the ability to self-organize in ordered three-dimensional nanostructures is limited. Therefore, template-based fabrication strategies become the key approach towards three-dimensional nanostructures. Here we report the simple fabrication of a template based on anodic aluminum oxide, having a well-defined, ordered, tunable, homogeneous 3D nanotubular network in the sub 100 nm range. The three-dimensional templates are then employed to achieve three-dimensional, ordered nanowire-networks in Bi2Te3 and polystyrene. Lastly, we demonstrate the photonic crystal behavior of both the template and the polystyrene three-dimensional nanostructure. Our approach may establish the foundations for future high-throughput, cheap, photonic materials and devices made of simple commodity plastics, metals, and semiconductors. PMID:25342247
NASA Astrophysics Data System (ADS)
Alexander, Kristen; Hampton, Meredith; Lopez, Rene; Desimone, Joseph
2009-03-01
When a pair of noble metal nanoparticles are brought close together, the plasmonic properties of the pair (known as a ``dimer'') give rise to intense electric field enhancements in the interstitial gap. These fields present a simple yet exquisitely sensitive system for performing single molecule surface-enhanced Raman spectroscopy (SM-SERS). Problems associated with current fabrication methods of SERS-active substrates include reproducibility issues, high cost of production and low throughput. In this study, we present a novel method for the high throughput fabrication of high quality SERS substrates. Using a polymer templating technique followed by the placement of thiolated nanoparticles through meniscus force deposition, we are able to fabricate large arrays of identical, uniformly spaced dimers in a quick, reproducible manner. Subsequent theoretical and experimental studies have confirmed the strong dependence of the SERS enhancement on both substrate geometry (e.g. dimer size, shape and gap size) and the polarization of the excitation source.
Machine learning in computational biology to accelerate high-throughput protein expression.
Sastry, Anand; Monk, Jonathan; Tegel, Hanna; Uhlen, Mathias; Palsson, Bernhard O; Rockberg, Johan; Brunk, Elizabeth
2017-08-15
The Human Protein Atlas (HPA) enables the simultaneous characterization of thousands of proteins across various tissues to pinpoint their spatial location in the human body. This has been achieved through transcriptomics and high-throughput immunohistochemistry-based approaches, where over 40 000 unique human protein fragments have been expressed in E. coli. These datasets enable quantitative tracking of entire cellular proteomes and present new avenues for understanding molecular-level properties influencing expression and solubility. Combining computational biology and machine learning identifies protein properties that hinder the HPA high-throughput antibody production pipeline. We predict protein expression and solubility with accuracies of 70% and 80%, respectively, based on a subset of key properties (aromaticity, hydropathy and isoelectric point). We guide the selection of protein fragments based on these characteristics to optimize high-throughput experimentation. We present the machine learning workflow as a series of IPython notebooks hosted on GitHub (https://github.com/SBRG/Protein_ML). The workflow can be used as a template for analysis of further expression and solubility datasets. Contact: ebrunk@ucsd.edu or johanr@biotech.kth.se. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
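Two of the three key properties named above (aromaticity and hydropathy) are simple sequence statistics. The sketch below computes them with the standard library only; a real pipeline would more likely use an established tool such as Biopython's ProtParam, and the isoelectric point (which needs pKa bookkeeping) is omitted here. The example sequence is arbitrary:

```python
# Kyte-Doolittle hydropathy values for the 20 standard amino acids.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def features(seq):
    """Aromaticity (fraction of F/W/Y) and GRAVY hydropathy
    (mean Kyte-Doolittle score over the sequence)."""
    seq = seq.upper()
    aromaticity = sum(seq.count(a) for a in "FWY") / len(seq)
    gravy = sum(KD[a] for a in seq) / len(seq)
    return aromaticity, gravy

# Arbitrary 33-residue example fragment, not from the HPA dataset.
arom, gravy = features("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
print(f"aromaticity={arom:.3f}, GRAVY={gravy:.3f}")
```

Feature vectors like `(aromaticity, gravy, pI)` per fragment are the kind of input a classifier in such a workflow would train on to predict expression and solubility.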
Smith, Thomas M; Lim, Siew Pheng; Yue, Kimberley; Busby, Scott A; Arora, Rishi; Seh, Cheah Chen; Wright, S Kirk; Nutiu, Razvan; Niyomrattanakit, Pornwaratt; Wan, Kah Fei; Beer, David; Shi, Pei-Yong; Benson, Timothy E
2015-01-01
Dengue virus (DENV) is the most significant mosquito-borne viral pathogen in the world and is the cause of dengue fever. The DENV RNA-dependent RNA polymerase (RdRp) is conserved among the four viral serotypes and is an attractive target for antiviral drug development. During initiation of viral RNA synthesis, the polymerase switches from a "closed" to "open" conformation to accommodate the viral RNA template. Inhibitors that lock the "closed" or block the "open" conformation would prevent viral RNA synthesis. Herein, we describe a screening campaign that employed two biochemical assays to identify inhibitors of RdRp initiation and elongation. Using a DENV subgenomic RNA template that promotes RdRp de novo initiation, the first assay measures cytosine nucleotide analogue (Atto-CTP) incorporation. Liberated Atto fluorophore allows for quantification of RdRp activity via fluorescence. The second assay uses the same RNA template but is label free and directly detects RdRp-mediated liberation of pyrophosphates of native ribonucleotides via liquid chromatography-mass spectrometry. The ability of inhibitors to bind and stabilize a "closed" conformation of the DENV RdRp was further assessed in a differential scanning fluorimetry assay. Last, active compounds were evaluated in a renilla luciferase-based DENV replicon cell-based assay to monitor cellular efficacy. All assays described herein are medium to high throughput, are robust and reproducible, and allow identification of inhibitors of the open and closed forms of DENV RNA polymerase. © 2014 Society for Laboratory Automation and Screening.
CHENG, JIANLIN; EICKHOLT, JESSE; WANG, ZHENG; DENG, XIN
2013-01-01
After decades of research, protein structure prediction remains a very challenging problem. In order to address the different levels of complexity of structural modeling, two types of modeling techniques — template-based modeling and template-free modeling — have been developed. Template-based modeling can often generate a moderate- to high-resolution model when a similar, homologous template structure is found for a query protein but fails if no template or only incorrect templates are found. Template-free modeling, such as fragment-based assembly, may generate models of moderate resolution for small proteins of low topological complexity. Seldom have the two techniques been integrated together to improve protein modeling. Here we develop a recursive protein modeling approach to selectively and collaboratively apply template-based and template-free modeling methods to model template-covered (i.e. certain) and template-free (i.e. uncertain) regions of a protein. A preliminary implementation of the approach was tested on a number of hard modeling cases during the 9th Critical Assessment of Techniques for Protein Structure Prediction (CASP9) and successfully improved the quality of modeling in most of these cases. Recursive modeling can significantly reduce the complexity of protein structure modeling and integrate template-based and template-free modeling to improve the quality and efficiency of protein structure prediction. PMID:22809379
Ozer, Abdullah; Tome, Jacob M.; Friedman, Robin C.; Gheba, Dan; Schroth, Gary P.; Lis, John T.
2016-01-01
Because RNA-protein interactions play a central role in a wide array of biological processes, methods that enable a quantitative assessment of these interactions in a high-throughput manner are in great demand. Recently, we developed the High Throughput Sequencing-RNA Affinity Profiling (HiTS-RAP) assay, which couples sequencing on an Illumina GAIIx with the quantitative assessment of one or several proteins' interactions with millions of different RNAs in a single experiment. We have successfully used HiTS-RAP to analyze interactions of EGFP and NELF-E proteins with their corresponding canonical and mutant RNA aptamers. Here, we provide a detailed protocol for HiTS-RAP, which can be completed in about a month (8 days hands-on time) including the preparation and testing of recombinant proteins and DNA templates, clustering DNA templates on a flowcell, high-throughput sequencing and protein binding with GAIIx, and finally data analysis. We also highlight aspects of HiTS-RAP that can be further improved and points of comparison between HiTS-RAP and two other recently developed methods, RNA-MaP and RBNS. A successful HiTS-RAP experiment provides the sequence and binding curves for approximately 200 million RNAs in a single experiment. PMID:26182240
Self-aligned grating couplers on template-stripped metal pyramids via nanostencil lithography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klemme, Daniel J.; Johnson, Timothy W.; Mohr, Daniel A.
2016-05-23
We combine nanostencil lithography and template stripping to create self-aligned patterns about the apex of ultrasmooth metal pyramids with high throughput. Three-dimensional patterns such as spiral and asymmetric linear gratings, which can couple incident light into a hot spot at the tip, are presented as examples of this fabrication method. Computer simulations demonstrate that spiral and linear diffraction grating patterns are both effective at coupling light to the tip. The self-aligned stencil lithography technique can be useful for integrating plasmonic couplers with sharp metallic tips for applications such as near-field optical spectroscopy, tip-based optical trapping, plasmonic sensing, and heat-assisted magnetic recording.
Chen, Rong; Zhou, Jingjing; Qin, Lingyun; Chen, Yao; Huang, Yongqi; Liu, Huili; Su, Zhengding
2017-06-27
In nearly half of cancers, the anticancer activity of p53 protein is often impaired by the overexpressed oncoprotein Mdm2 and its homologue, MdmX, demanding efficient therapeutics to disrupt the aberrant p53-MdmX/Mdm2 interactions to restore the p53 activity. While many potent Mdm2-specific inhibitors have already undergone clinical investigations, searching for MdmX-specific inhibitors has become very attractive, requiring a more efficient screening strategy for evaluating potential scaffolds or leads. In this work, considering that the intrinsic fluorescence residue Trp23 in the p53 transactivation domain (p53p) plays an important role in determining the p53-MdmX/Mdm2 interactions, we constructed a fusion protein to utilize this intrinsic fluorescence signal to monitor high-throughput screening of a compound library. The fusion protein was composed of the p53p followed by the N-terminal domain of MdmX (N-MdmX) through a flexible amino acid linker, while the whole fusion protein contained a sole intrinsic fluorescence probe. The fusion protein was then evaluated using fluorescence spectroscopy against model compounds. Our results revealed that the variation of the fluorescence signal was highly correlated with the concentration of the ligand within 65 μM. The fusion protein was further evaluated with respect to its feasibility for use in high-throughput screening using a model compound library, including controls. We found that the imidazo-indole scaffold was a bona fide scaffold for template-based design of MdmX inhibitors. Thus, the p53p-N-MdmX fusion protein we designed provides a convenient and efficient tool for high-throughput screening of new MdmX inhibitors. The strategy described in this work should be applicable for other protein targets to accelerate drug discovery.
Shulman, Nick; Bellew, Matthew; Snelling, George; Carter, Donald; Huang, Yunda; Li, Hongli; Self, Steven G.; McElrath, M. Juliana; De Rosa, Stephen C.
2008-01-01
Background Intracellular cytokine staining (ICS) by multiparameter flow cytometry is one of the primary methods for determining T cell immunogenicity in HIV-1 clinical vaccine trials. Data analysis requires considerable expertise and time. The amount of data is quickly increasing as more and larger trials are performed, and thus there is a critical need for high throughput methods of data analysis. Methods A web based flow cytometric analysis system, LabKey Flow, was developed for analyses of data from standardized ICS assays. A gating template was created manually in commercially-available flow cytometric analysis software. Using this template, the system automatically compensated and analyzed all data sets. Quality control queries were designed to identify potentially incorrect sample collections. Results Comparison of the semi-automated analysis performed by LabKey Flow and the manual analysis performed using FlowJo software demonstrated excellent concordance (concordance correlation coefficient >0.990). Manual inspection of the analyses performed by LabKey Flow for 8-color ICS data files from several clinical vaccine trials indicates that template gates can appropriately be used for most data sets. Conclusions The semi-automated LabKey Flow analysis system can analyze accurately large ICS data files. Routine use of the system does not require specialized expertise. This high-throughput analysis will provide great utility for rapid evaluation of complex multiparameter flow cytometric measurements collected from large clinical trials. PMID:18615598
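The automatic compensation step mentioned above amounts to inverting a spillover matrix that describes how much of each dye's signal bleeds into the other detectors. A minimal two-detector sketch with hypothetical spillover fractions is below; the real 8-color ICS panels analyzed here would invert an 8x8 matrix instead:

```python
def compensate(raw, spill):
    """Undo spectral spillover for a 2-detector system.

    spill[i][j] = fraction of dye i's signal read by detector j, so the
    observed row vector is raw = true @ spill; compensation applies the
    inverse of spill, computed analytically for the 2x2 case.
    """
    (a, b), (c, d) = spill
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    x, y = raw
    return (x * inv[0][0] + y * inv[1][0],
            x * inv[0][1] + y * inv[1][1])

spill = [[1.0, 0.10],   # 10% of dye 0 bleeds into detector 1
         [0.05, 1.0]]   # 5% of dye 1 bleeds into detector 0
raw = (1025.0, 600.0)   # observed signals for true intensities (1000, 500)
print(compensate(raw, spill))
```

Applying one fixed spillover matrix and one gating template across all samples is what makes this kind of batch analysis reproducible without per-file manual adjustment.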
Park, Jong Hyuk; Nagpal, Prashant; McPeak, Kevin M; Lindquist, Nathan C; Oh, Sang-Hyun; Norris, David J
2013-10-09
The template-stripping method can yield smooth patterned films without surface contamination. However, the process is typically limited to coinage metals such as silver and gold because other materials cannot be readily stripped from silicon templates due to strong adhesion. Herein, we report a more general template-stripping method that is applicable to a larger variety of materials, including refractory metals, semiconductors, and oxides. To address the adhesion issue, we introduce a thin gold layer between the template and the deposited materials. After peeling off the combined film from the template, the gold layer can be selectively removed via wet etching to reveal a smooth patterned structure of the desired material. Further, we demonstrate template-stripped multilayer structures that have potential applications for photovoltaics and solar absorbers. An entire patterned device, which can include a transparent conductor, semiconductor absorber, and back contact, can be fabricated. Since our approach can also produce many copies of the patterned structure with high fidelity by reusing the template, a low-cost and high-throughput process in micro- and nanofabrication is provided that is useful for electronics, plasmonics, and nanophotonics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ton, H.; Yeung, E.S.
1997-02-15
An integrated on-line prototype for coupling a microreactor to capillary electrophoresis for DNA sequencing has been demonstrated. A dye-labeled terminator cycle-sequencing reaction is performed in a fused-silica capillary. Subsequently, the sequencing ladder is directly injected into a size-exclusion chromatographic column operated at nearly 95 °C for purification. On-line injection to a capillary for electrophoresis is accomplished at a junction set at nearly 70 °C. High temperature at the purification column and injection junction prevents the renaturation of DNA fragments during on-line transfer without affecting the separation. The high solubility of DNA in, and the relatively low ionic strength of, 1× TE buffer permit both effective purification and electrokinetic injection of the DNA sample. The system is compatible with highly efficient separations by a replaceable poly(ethylene oxide) polymer solution in uncoated capillary tubes. Future automation and adaptation to a multiple-capillary array system should allow high-speed, high-throughput DNA sequencing from templates to called bases in one step. 32 refs., 5 figs.
Arraycount, an algorithm for automatic cell counting in microwell arrays.
Kachouie, Nezamoddin; Kang, Lifeng; Khademhosseini, Ali
2009-09-01
Microscale technologies have emerged as a powerful tool for studying and manipulating biological systems and miniaturizing experiments. However, the lack of software complementing these techniques has made it difficult to apply them for many high-throughput experiments. This work establishes Arraycount, an approach to automatically count cells in microwell arrays. The procedure consists of fluorescent microscope imaging of cells that are seeded in microwells of a microarray system and then analyzing images via computer to recognize the array and count cells inside each microwell. To start counting, green and red fluorescent images (representing live and dead cells, respectively) are extracted from the original image and processed separately. A template-matching algorithm is proposed in which pre-defined well and cell templates are matched against the red and green images to locate microwells and cells. Subsequently, local maxima in the correlation maps are determined and local maxima maps are thresholded. At the end, the software records the cell counts for each detected microwell on the original image in high-throughput. The automated counting was shown to be accurate compared with manual counting, with a difference of approximately 1-2 cells per microwell: based on cell concentration, the absolute difference between manual and automatic counting measurements was 2.5-13%.
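The template-matching step described above can be sketched in miniature: slide a cell template over the image, build a normalized correlation map, then threshold its local maxima to count cells. This is a toy reconstruction under stated assumptions, not the Arraycount implementation; the 3×3 cross template and the image are invented.

```python
# Toy version of the Arraycount counting step: normalized cross-correlation
# of a small cell template against an intensity image, followed by counting
# thresholded local maxima in the correlation map. Template, image, and
# threshold are illustrative, not taken from the paper.

import numpy as np

def correlation_map(image, template):
    th, tw = template.shape
    t = template - template.mean()
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            out[i, j] = (p * t).sum() / denom if denom > 0 else 0.0
    return out

def count_peaks(corr, threshold=0.9):
    """Count local maxima above threshold; each peak is one detected cell."""
    count = 0
    for i in range(corr.shape[0]):
        for j in range(corr.shape[1]):
            if corr[i, j] < threshold:
                continue
            neigh = corr[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            if corr[i, j] >= neigh.max():
                count += 1
    return count

# Two cross-shaped "cells" on a dark background; the matching template
# fires exactly once on each.
img = np.zeros((9, 9))
for (r, c) in [(2, 2), (6, 6)]:
    img[r, c] = 2.0
    img[r - 1, c] = img[r + 1, c] = img[r, c - 1] = img[r, c + 1] = 1.0
tmpl = np.array([[0.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 0.0]])
n_cells = count_peaks(correlation_map(img, tmpl))
```

In the real pipeline this would run separately on the green (live) and red (dead) channels, with a second, larger template used first to locate the microwells themselves.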
Automated one-step DNA sequencing based on nanoliter reaction volumes and capillary electrophoresis.
Pang, H M; Yeung, E S
2000-08-01
An integrated system with a nano-reactor for cycle-sequencing reaction coupled to on-line purification and capillary gel electrophoresis has been demonstrated. Fifty nanoliters of reagent solution, which includes dye-labeled terminators, polymerase, BSA and template, was aspirated and mixed with the template inside the nano-reactor followed by cycle-sequencing reaction. The reaction products were then purified by a size-exclusion chromatographic column operated at 50 degrees C followed by room temperature on-line injection of the DNA fragments into a capillary for gel electrophoresis. Over 450 bases of DNA can be separated and identified. As little as 25 nl reagent solution can be used for the cycle-sequencing reaction with a slightly shorter read length. Significant savings in reagent cost are achieved because the remaining stock solution can be reused without contamination. The steps of cycle sequencing, on-line purification, injection, DNA separation, capillary regeneration, gel-filling and fluidic manipulation were performed with complete automation. This system can be readily multiplexed for high-throughput DNA sequencing or PCR analysis directly from templates or even biological materials.
Regis, David P.; Dobaño, Carlota; Quiñones-Olson, Paola; Liang, Xiaowu; Graber, Norma L.; Stefaniak, Maureen E.; Campo, Joseph J.; Carucci, Daniel J.; Roth, David A.; He, Huaping; Felgner, Philip L.; Doolan, Denise L.
2009-01-01
We have evaluated a technology called Transcriptionally Active PCR (TAP) for high throughput identification and prioritization of novel target antigens from genomic sequence data using the Plasmodium parasite, the causative agent of malaria, as a model. First, we adapted the TAP technology for the highly AT-rich Plasmodium genome, using well-characterized P. falciparum and P. yoelii antigens and a small panel of uncharacterized open reading frames from the P. falciparum genome sequence database. We demonstrated that TAP fragments encoding six well-characterized P. falciparum antigens and five well-characterized P. yoelii antigens could be amplified in an equivalent manner from both plasmid DNA and genomic DNA templates, and that uncharacterized open reading frames could also be amplified from genomic DNA template. Second, we showed that the in vitro expression of the TAP fragments was equivalent or superior to that of supercoiled plasmid DNA encoding the same antigen. Third, we evaluated the in vivo immunogenicity of TAP fragments encoding a subset of the model P. falciparum and P. yoelii antigens. We found that antigen-specific antibody and cellular immune responses induced by the TAP fragments in mice were equivalent or superior to those induced by the corresponding plasmid DNA vaccines. Finally, we developed and demonstrated proof-of-principle for an in vitro humoral immunoscreening assay for down-selection of novel target antigens. These data support the potential of a TAP approach for rapid high throughput functional screening and identification of potential candidate vaccine antigens from genomic sequence data. PMID:18164079
Webb, Thomas R; Jiang, Luyong; Sviridov, Sergey; Venegas, Ruben E; Vlaskina, Anna V; McGrath, Douglas; Tucker, John; Wang, Jian; Deschenes, Alain; Li, Rongshi
2007-01-01
We report the further application of a novel approach to template and ligand design by the synthesis of agonists of the melanocortin receptor. This design method uses the conserved structural data from the three-dimensional conformations of beta-turn peptides to design rigid nonpeptide templates that mimic the orientation of the main chain C-alpha atoms in a peptide beta-turn. We report details on a new synthesis of derivatives of template 1 that are useful for the synthesis of exploratory libraries. The utility of this technique is further exemplified by several iterative rounds of high-throughput synthesis and screening, which result in new partially optimized nonpeptide agonists for several melanocortin receptors.
Validation of high-throughput single cell analysis methodology.
Devonshire, Alison S; Baradez, Marc-Olivier; Morley, Gary; Marshall, Damian; Foy, Carole A
2014-05-01
High-throughput quantitative polymerase chain reaction (qPCR) approaches enable profiling of multiple genes in single cells, bringing new insights to complex biological processes and offering opportunities for single cell-based monitoring of cancer cells and stem cell-based therapies. However, workflows with well-defined sources of variation are required for clinical diagnostics and testing of tissue-engineered products. In a study of neural stem cell lines, we investigated the performance of lysis, reverse transcription (RT), preamplification (PA), and nanofluidic qPCR steps at the single cell level in terms of efficiency, precision, and limit of detection. We compared protocols using a separate lysis buffer with cell capture directly in RT-PA reagent. The two methods were found to have similar lysis efficiencies, whereas the direct RT-PA approach showed improved precision. Digital PCR was used to relate preamplified template copy numbers to Cq values and reveal where low-quality signals may affect the analysis. We investigated the impact of calibration and data normalization strategies as a means of minimizing the impact of inter-experimental variation on gene expression values and found that both approaches can improve data comparability. This study provides validation and guidance for the application of high-throughput qPCR workflows for gene expression profiling of single cells. Copyright © 2014 Elsevier Inc. All rights reserved.
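The calibration idea described above, using digital PCR to anchor Cq values to absolute copy numbers, reduces to a simple relation: an amplification efficiency E implies copies = ref_copies × (1 + E)^(ref_cq − cq). The sketch below uses invented numbers purely to illustrate that relation.

```python
# Sketch of Cq-to-copy-number calibration: digital PCR gives an absolute
# copy number for one reference sample, and the qPCR efficiency E then
# converts any other Cq into estimated preamplified template copies.
# All values here are illustrative, not from the study.

def copies_from_cq(cq, ref_cq, ref_copies, efficiency=1.0):
    """Estimate template copies from a Cq value and a calibrated reference.

    efficiency is the per-cycle amplification efficiency (1.0 = perfect
    doubling), so each cycle multiplies the template by (1 + efficiency).
    """
    return ref_copies * (1.0 + efficiency) ** (ref_cq - cq)

# With perfect efficiency (E = 1), each extra cycle needed to reach the
# threshold halves the estimated starting copy number.
ref = copies_from_cq(20.0, 20.0, 1000.0)
one_cycle_later = copies_from_cq(21.0, 20.0, 1000.0)
```

A calibration of this form also makes it easy to flag low-quality signals: Cq values implying well under one template copy are where the abstract notes the analysis may be affected.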
Liu, Shu; Hossinger, André; Göbbels, Sarah; Vorberg, Ina M
2017-03-04
Extracellular vesicles (EVs) are actively secreted, membrane-bound communication vehicles that exchange biomolecules between cells. EVs also serve as dissemination vehicles for pathogens, including prions, proteinaceous infectious agents that cause transmissible spongiform encephalopathies (TSEs) in mammals. Increasing evidence indicates that diverse protein aggregates associated with common neurodegenerative diseases are packaged into EVs as well. Vesicle-mediated intercellular transmission of protein aggregates can induce aggregation of homotypic proteins in acceptor cells and might thereby contribute to disease progression. Our knowledge of how protein aggregates are sorted into EVs and how these vesicles adhere to and fuse with target cells is limited. Here we review how TSE prions exploit EVs for intercellular transmission and compare this to the transmission behavior of self-templating cytosolic protein aggregates derived from the yeast prion domain Sup35 NM. Artificial NM prions are non-toxic to mammalian cell cultures and do not cause loss-of-function phenotypes. Importantly, NM particles are also secreted in association with exosomes that horizontally transmit the prion phenotype to naive bystander cells, a process that can be monitored with high accuracy by automated high throughput confocal microscopy. The high abundance of mammalian proteins with amino acid stretches compositionally similar to yeast prion domains makes the NM cell model an attractive system for studying the self-templating and dissemination properties of proteins with prion-like domains in the mammalian context.
Region Templates: Data Representation and Management for High-Throughput Image Analysis
Pan, Tony; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Klasky, Scott; Saltz, Joel
2015-01-01
We introduce a region template abstraction and framework for the efficient storage, management and processing of common data types in analysis of large datasets of high resolution images on clusters of hybrid computing nodes. The region template abstraction provides a generic container template for common data structures, such as points, arrays, regions, and object sets, within a spatial and temporal bounding box. It allows for different data management strategies and I/O implementations, while providing a homogeneous, unified interface to applications for data storage and retrieval. A region template application is represented as a hierarchical dataflow in which each computing stage may be represented as another dataflow of finer-grain tasks. The execution of the application is coordinated by a runtime system that implements optimizations for hybrid machines, including performance-aware scheduling for maximizing the utilization of computing devices and techniques to reduce the impact of data transfers between CPUs and GPUs. An experimental evaluation on a state-of-the-art hybrid cluster using a microscopy imaging application shows that the abstraction adds negligible overhead (about 3%) and achieves good scalability and high data transfer rates. Optimizations in a high speed disk based storage implementation of the abstraction to support asynchronous data transfers and computation result in an application performance gain of about 1.13×. Finally, a processing rate of 11,730 4K×4K tiles per minute was achieved for the microscopy imaging application on a cluster with 100 nodes (300 GPUs and 1,200 CPU cores). This computation rate enables studies with very large datasets. PMID:26139953
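The core abstraction described above, a generic container for heterogeneous data objects within a spatial/temporal bounding box behind one uniform interface, can be sketched in a few lines. The class and field names below are illustrative placeholders, not the framework's actual API.

```python
# Minimal sketch of the "region template" idea: named data objects
# (points, arrays, masks, object sets) stored behind a single put/get
# interface, scoped to a spatial and temporal bounding box. The real
# framework layers data-management strategies and I/O backends beneath
# this interface; here a plain dict stands in for all of that.

from dataclasses import dataclass, field

@dataclass
class BoundingBox:
    x0: int
    y0: int
    x1: int
    y1: int
    t0: int = 0
    t1: int = 0

@dataclass
class RegionTemplate:
    bbox: BoundingBox
    data: dict = field(default_factory=dict)

    def put(self, name, obj):
        self.data[name] = obj

    def get(self, name):
        return self.data[name]

# One 4K x 4K image tile's worth of analysis results, as in the paper's
# microscopy workload (contents invented).
rt = RegionTemplate(BoundingBox(0, 0, 4096, 4096))
rt.put("nuclei_points", [(10, 12), (300, 41)])
rt.put("tissue_mask", [[0, 1], [1, 1]])
n_points = len(rt.get("nuclei_points"))
```

The value of the indirection is that a pipeline stage asks only for "nuclei_points" in a region; whether that lives in GPU memory, CPU memory, or on disk is the runtime's decision.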
Tome, Jacob M; Ozer, Abdullah; Pagano, John M; Gheba, Dan; Schroth, Gary P; Lis, John T
2014-06-01
RNA-protein interactions play critical roles in gene regulation, but methods to quantitatively analyze these interactions at a large scale are lacking. We have developed a high-throughput sequencing-RNA affinity profiling (HiTS-RAP) assay by adapting a high-throughput DNA sequencer to quantify the binding of fluorescently labeled protein to millions of RNAs anchored to sequenced cDNA templates. Using HiTS-RAP, we measured the affinity of mutagenized libraries of GFP-binding and NELF-E-binding aptamers to their respective targets and identified critical regions of interaction. Mutations additively affected the affinity of the NELF-E-binding aptamer, whose interaction depended mainly on a single-stranded RNA motif, but not that of the GFP aptamer, whose interaction depended primarily on secondary structure.
Prediction of Protein Structure by Template-Based Modeling Combined with the UNRES Force Field.
Krupa, Paweł; Mozolewska, Magdalena A; Joo, Keehyoung; Lee, Jooyoung; Czaplewski, Cezary; Liwo, Adam
2015-06-22
A new approach to the prediction of protein structures that uses distance and backbone virtual-bond dihedral angle restraints derived from template-based models and simulations with the united residue (UNRES) force field is proposed. The approach combines the accuracy and reliability of template-based methods for the segments of the target sequence with high similarity to those having known structures with the ability of UNRES to pack the domains correctly. Multiplexed replica-exchange molecular dynamics with restraints derived from template-based models of a given target, in which each restraint is weighted according to the accuracy of the prediction of the corresponding section of the molecule, is used to search the conformational space, and the weighted histogram analysis method and cluster analysis are applied to determine the families of the most probable conformations, from which candidate predictions are selected. To test the capability of the method to recover template-based models from restraints, five single-domain proteins with structures that have been well-predicted by template-based methods were used; it was found that the resulting structures were of the same quality as the best of the original models. To assess whether the new approach can improve template-based predictions with incorrectly predicted domain packing, four such targets were selected from the CASP10 targets; for three of them the new approach resulted in significantly better predictions compared with the original template-based models. The new approach can be used to predict the structures of proteins for which good templates can be found for sections of the sequence or an overall good template can be found for the entire sequence but the prediction quality is markedly weaker in putative domain-linker regions.
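The weighted-restraint idea above can be illustrated with a toy energy term: each template-derived distance restraint contributes a harmonic penalty scaled by a per-restraint weight reflecting local prediction confidence. The functional form and all numbers below are illustrative assumptions, not the actual UNRES restraint potential.

```python
# Sketch of confidence-weighted distance restraints: a harmonic penalty
# w * (d - d0)^2 per restraint, where w encodes how much the template-based
# model is trusted for that part of the chain. Form and values invented.

def restraint_energy(distances, restraints):
    """distances: {(i, j): current distance}; restraints: (i, j, d0, w)."""
    e = 0.0
    for i, j, d0, w in restraints:
        d = distances[(i, j)]
        e += w * (d - d0) ** 2
    return e

# A well-predicted contact gets full weight; a contact in a putative
# domain-linker region gets a low weight so simulation can override it.
dists = {(1, 5): 7.2, (2, 9): 12.5}
restr = [
    (1, 5, 7.0, 1.0),    # confident template region
    (2, 9, 11.0, 0.25),  # uncertain linker region
]
energy = restraint_energy(dists, restr)
```

With this weighting, conformational search is pulled strongly toward the template where it is reliable while remaining free to repack poorly predicted linkers, which is exactly the failure mode the paper targets.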
Periodic nanostructural materials for nanoplasmonics
NASA Astrophysics Data System (ADS)
Choi, Dukhyun
2017-02-01
Nanoscale periodic material design and fabrication are fundamental requirements for basic scientific research and for industrial applications of nanoscience and engineering. Innovative, effective, reproducible, large-area uniform, tunable, and robust nanostructure/material syntheses remain challenging. Here, I introduce novel periodic nanostructural materials, particularly those with uniformly ordered nanoporous or nanoflower structures, which are fabricated by simple, cost-effective, and high-throughput wet chemical methods. I also report large-area periodic plasmonic nanostructures based on template-based nanolithography. The surface morphology and optical properties are characterized by SEM and UV-vis spectroscopy. Furthermore, their enhancement factor is evaluated by using SERS signals.
High-throughput methods for characterizing the mechanical properties of coatings
NASA Astrophysics Data System (ADS)
Siripirom, Chavanin
The characterization of mechanical properties in a combinatorial and high-throughput workflow has been a bottleneck that reduced the speed of the materials development process. High-throughput characterization of the mechanical properties was applied in this research in order to reduce the amount of sample handling and to accelerate the output. A puncture tester was designed and built to evaluate the toughness of materials using an innovative template design coupled with automation. The test is in the form of a circular free-film indentation. A single template contains 12 samples which are tested in a rapid serial approach. Next, the operational principles of a novel parallel dynamic mechanical-thermal analysis instrument were analyzed in detail for potential sources of errors. The test uses a model of a circular bilayer fixed-edge plate deformation. A total of 96 samples can be analyzed simultaneously which provides a tremendous increase in efficiency compared with a conventional dynamic test. The modulus values determined by the system had considerable variation. The errors were observed and improvements to the system were made. A finite element analysis was used to analyze the accuracy given by the closed-form solution with respect to testing geometries, such as thicknesses of the samples. A good control of the thickness of the sample was proven to be crucial to the accuracy and precision of the output. Then, the attempt to correlate the high-throughput experiments and conventional coating testing methods was made. Automated nanoindentation in dynamic mode was found to provide information on the near-surface modulus and could potentially correlate with the pendulum hardness test using the loss tangent component. Lastly, surface characterization of stratified siloxane-polyurethane coatings was carried out with X-ray photoelectron spectroscopy, Rutherford backscattering spectroscopy, transmission electron microscopy, and nanoindentation. 
The siloxane component segregates to the surface during curing. The distribution of siloxane as a function of thickness into the sample showed differences depending on the formulation parameters. The coatings which had higher siloxane content near the surface were those coatings found to perform well in field tests.
Słomka, Marcin; Sobalska-Kwapis, Marta; Wachulec, Monika; Bartosz, Grzegorz; Strapagiel, Dominik
2017-11-03
High resolution melting (HRM) is a convenient method for gene scanning as well as genotyping of individual and multiple single nucleotide polymorphisms (SNPs). This rapid, simple, closed-tube, homogenous, and cost-efficient approach has the capacity for high specificity and sensitivity, while allowing easy transition to high-throughput scale. In this paper, we provide examples from our laboratory practice of some problematic issues which can affect the performance and data analysis of HRM results, especially with regard to reference curve-based targeted genotyping. We present those examples in order of the typical experimental workflow, and discuss the crucial significance of the respective experimental errors and limitations for the quality and analysis of results. The experimental details which have a decisive impact on correct execution of a HRM genotyping experiment include type and quality of DNA source material, reproducibility of isolation method and template DNA preparation, primer and amplicon design, automation-derived preparation and pipetting inconsistencies, as well as physical limitations in melting curve distinction for alternative variants and careful selection of samples for validation by sequencing. We provide a case-by-case analysis and discussion of actual problems we encountered and solutions that should be taken into account by researchers newly attempting HRM genotyping, especially in a high-throughput setup.
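The core HRM computation behind the genotyping discussed above is simple: a melting peak is the maximum of the negative derivative −dF/dT of fluorescence against temperature, and its position is the melting temperature Tm. The sketch below uses a synthetic sigmoid in place of real instrument data.

```python
# Illustrative HRM melting-peak detection: compute -dF/dT by central
# differences and report the temperature of the largest peak as Tm.
# The synthetic logistic melt curve stands in for instrument output;
# real HRM analysis adds normalization and curve-shape comparison.

import math

def melt_peak(temps, fluor):
    """Return (Tm, peak height) from -dF/dT using central differences."""
    best_t, best_v = None, float("-inf")
    for i in range(1, len(temps) - 1):
        dfdt = (fluor[i + 1] - fluor[i - 1]) / (temps[i + 1] - temps[i - 1])
        if -dfdt > best_v:
            best_t, best_v = temps[i], -dfdt
    return best_t, best_v

# Synthetic melt: fluorescence drops sigmoidally around 80 degrees C.
temps = [70.0 + 0.5 * i for i in range(41)]  # 70-90 C in 0.5 C steps
fluor = [1.0 / (1.0 + math.exp((t - 80.0) / 1.5)) for t in temps]
tm, height = melt_peak(temps, fluor)
```

Genotype calls then rest on small Tm shifts or curve-shape differences between variants, which is why the abstract stresses pipetting consistency and template quality: both perturb exactly these subtle features.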
Automated sample-preparation technologies in genome sequencing projects.
Hilbert, H; Lauber, J; Lubenow, H; Düsterhöft, A
2000-01-01
A robotic workstation system (BioRobot 9600, QIAGEN) and a 96-well UV spectrophotometer (Spectramax 250, Molecular Devices) were integrated into the process of high-throughput automated sequencing of double-stranded plasmid DNA templates. An automated 96-well miniprep kit protocol (QIAprep Turbo, QIAGEN) provided high-quality plasmid DNA from shotgun clones. The DNA prepared by this procedure was used to generate more than two megabases of final sequence data for two genomic projects (Arabidopsis thaliana and Schizosaccharomyces pombe), three thousand expressed sequence tags (ESTs) plus half a megabase of human full-length cDNA clones, and approximately 53,000 single reads for a whole-genome shotgun project (Pseudomonas putida).
Minari, Jusaku; Shirai, Tetsuya; Kato, Kazuto
2014-12-01
As evidenced by high-throughput sequencers, genomic technologies have recently undergone radical advances. These technologies enable comprehensive sequencing of personal genomes considerably more efficiently and less expensively than heretofore. These developments present a challenge to the conventional framework of biomedical ethics; under these changing circumstances, each research project has to develop a pragmatic research policy. Based on the experience with a new large-scale project, the Genome Science Project, this article presents a novel approach to implementing a specific policy for personal genome research in the Japanese context. In creating an original informed-consent form template for the project, we present a two-tiered process: making the draft of the template following an analysis of national and international policies, and refining the draft template in conjunction with genome project researchers for practical application. Through practical use of the template, we have gained valuable experience in addressing challenges in the ethical review process, such as the importance of sharing details of the latest developments in genomics with members of research ethics committees. We discuss certain limitations of the conventional concept of informed consent and its governance system and suggest the potential of an alternative process using information technology.
An Automated, High-Throughput Method for Interpreting the Tandem Mass Spectra of Glycosaminoglycans
NASA Astrophysics Data System (ADS)
Duan, Jiana; Jonathan Amster, I.
2018-05-01
The biological interactions between glycosaminoglycans (GAGs) and other biomolecules are heavily influenced by structural features of the glycan. The structure of GAGs can be assigned using tandem mass spectrometry (MS2), but analysis of these data, to date, requires manual interpretation, a slow process that presents a bottleneck to the broader deployment of this approach to solving biologically relevant problems. Automated interpretation remains a challenge, as GAG biosynthesis is not template-driven, and therefore, one cannot predict structures from genomic data, as is done with proteins. The lack of a structure database, a consequence of the non-template biosynthesis, requires a de novo approach to interpretation of the mass spectral data. We propose a model for rapid, high-throughput GAG analysis by using an approach in which candidate structures are scored for the likelihood that they would produce the features observed in the mass spectrum. To make this approach tractable, a genetic algorithm is used to greatly reduce the search-space of isomeric structures that are considered. The time required for analysis is significantly reduced compared to an approach in which every possible isomer is considered and scored. The model is coded in a software package using the MATLAB environment. This approach was tested on tandem mass spectrometry data for long-chain, moderately sulfated chondroitin sulfate oligomers that were derived from the proteoglycan bikunin. The bikunin data were previously interpreted manually. Our approach examines glycosidic fragments to localize SO3 modifications to specific residues and yields the same structures reported in literature, only much more quickly.
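The search strategy described above can be sketched compactly: encode candidate sulfation patterns as bit vectors (one bit per potential SO3 site) and let a small genetic algorithm evolve them toward the best agreement with observed fragments, instead of enumerating every isomer. The scoring target, parameters, and encoding below are invented for illustration; the published tool works on real fragment masses in MATLAB.

```python
# Toy genetic-algorithm search over sulfation patterns. Fitness is a
# stand-in for fragment-matching: the fraction of sites whose assignment
# agrees with a hidden "true" pattern. All parameters are illustrative.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # hidden "true" sulfation pattern

def score(candidate):
    return sum(c == t for c, t in zip(candidate, TARGET)) / len(TARGET)

def evolve(pop_size=20, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:               # occasional point mutation
                i = rng.randrange(len(TARGET))
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=score)

best = evolve()
best_score = score(best)
```

The payoff is the same as in the paper: only a tiny fraction of the 2^n isomer space is ever scored, yet high-fitness structures are found quickly.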
Mapping DNA polymerase errors by single-molecule sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, David F.; Lu, Jenny; Chang, Seungwoo
Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. This allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.
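The barcoding strategy described above can be sketched as a consensus step: reads sharing a barcode are copies of one replication product, so a per-position majority vote suppresses sequencing errors while true polymerase errors, present in every copy, survive. The barcodes and sequences below are invented.

```python
# Minimal barcode-consensus sketch: group reads by barcode, then take a
# per-column majority vote within each group. Independent sequencing
# errors are voted away; a real polymerase error would appear in all
# reads of the group and therefore persist in the consensus.

from collections import Counter, defaultdict

def consensus(reads):
    return "".join(
        Counter(col).most_common(1)[0][0] for col in zip(*reads)
    )

def collapse_by_barcode(tagged_reads):
    groups = defaultdict(list)
    for barcode, read in tagged_reads:
        groups[barcode].append(read)
    return {bc: consensus(rs) for bc, rs in groups.items()}

# Three reads of one product; each carries one sequencing error.
reads = [
    ("BC01", "ACGTACGT"),
    ("BC01", "ACGAACGT"),  # sequencing error at index 3
    ("BC01", "ACGTACGA"),  # sequencing error at index 7
]
collapsed = collapse_by_barcode(reads)
```

Comparing each barcode's consensus against the known template then yields the error spectrum at base resolution, the quantity the assay measures.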
Machine-learning-assisted materials discovery using failed experiments
NASA Astrophysics Data System (ADS)
Raccuglia, Paul; Elbert, Katherine C.; Adler, Philip D. F.; Falk, Casey; Wenny, Malia B.; Mollo, Aurelio; Zeller, Matthias; Friedler, Sorelle A.; Schrier, Joshua; Norquist, Alexander J.
2016-05-01
Inorganic-organic hybrid materials such as organically templated metal oxides, metal-organic frameworks (MOFs) and organohalide perovskites have been studied for decades, and hydrothermal and (non-aqueous) solvothermal syntheses have produced thousands of new materials that collectively contain nearly all the metals in the periodic table. Nevertheless, the formation of these compounds is not fully understood, and development of new compounds relies primarily on exploratory syntheses. Simulation- and data-driven approaches (promoted by efforts such as the Materials Genome Initiative) provide an alternative to experimental trial-and-error. Three major strategies are: simulation-based predictions of physical properties (for example, charge mobility, photovoltaic properties, gas adsorption capacity or lithium-ion intercalation) to identify promising target candidates for synthetic efforts; determination of the structure-property relationship from large bodies of experimental data, enabled by integration with high-throughput synthesis and measurement tools; and clustering on the basis of similar crystallographic structure (for example, zeolite structure classification or gas adsorption properties). Here we demonstrate an alternative approach that uses machine-learning algorithms trained on reaction data to predict reaction outcomes for the crystallization of templated vanadium selenites. We used information on ‘dark’ reactions—failed or unsuccessful hydrothermal syntheses—collected from archived laboratory notebooks from our laboratory, and added physicochemical property descriptions to the raw notebook information using cheminformatics techniques. We used the resulting data to train a machine-learning model to predict reaction success. 
When carrying out hydrothermal synthesis experiments using previously untested, commercially available organic building blocks, our machine-learning model outperformed traditional human strategies, and successfully predicted conditions for new organically templated inorganic product formation with a success rate of 89 per cent. Inverting the machine-learning model reveals new hypotheses regarding the conditions for successful product formation.
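The core idea above, training a model on archived reaction outcomes (including the "dark" failed reactions) to predict whether new conditions will crystallize, can be illustrated with a deliberately tiny sketch. The actual work used cheminformatics descriptors and a trained machine-learning model; the descriptor values, reaction set, and nearest-neighbour vote below are hypothetical placeholders for illustration only.

```python
import math

# Hypothetical descriptors for archived hydrothermal reactions:
# (amine polarizability, pH, reaction temperature in C), with the
# recorded outcome (1 = crystalline product, 0 = "dark"/failed).
archived_reactions = [
    ((10.2, 2.0, 110.0), 0),
    ((10.5, 2.5, 120.0), 0),
    ((25.1, 6.5, 180.0), 1),
    ((24.3, 7.0, 170.0), 1),
]

def predict_outcome(descriptor, training=archived_reactions, k=3):
    """k-nearest-neighbour vote over archived reaction descriptors.

    The failed ("dark") reactions carry as much predictive information
    as the successes, which is the point of mining old notebooks.
    """
    dists = sorted(
        (math.dist(descriptor, x), label) for x, label in training
    )
    votes = [label for _, label in dists[:k]]
    return round(sum(votes) / len(votes))

# Conditions resembling the past successes are predicted to work.
print(predict_outcome((24.0, 6.8, 175.0)))  # 1
```

In the published study the model class was more expressive and, importantly, invertible enough to be interrogated for chemical hypotheses; a nearest-neighbour vote only conveys the data-driven framing.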
Ozer, Abdullah; Tome, Jacob M; Friedman, Robin C; Gheba, Dan; Schroth, Gary P; Lis, John T
2015-08-01
Because RNA-protein interactions have a central role in a wide array of biological processes, methods that enable a quantitative assessment of these interactions in a high-throughput manner are in great demand. Recently, we developed the high-throughput sequencing-RNA affinity profiling (HiTS-RAP) assay that couples sequencing on an Illumina GAIIx genome analyzer with the quantitative assessment of protein-RNA interactions. This assay is able to analyze interactions between one or possibly several proteins with millions of different RNAs in a single experiment. We have successfully used HiTS-RAP to analyze interactions of the EGFP and negative elongation factor subunit E (NELF-E) proteins with their corresponding canonical and mutant RNA aptamers. Here we provide a detailed protocol for HiTS-RAP that can be completed in about a month (8 d hands-on time). This includes the preparation and testing of recombinant proteins and DNA templates, clustering DNA templates on a flowcell, HiTS and protein binding with a GAIIx instrument, and finally data analysis. We also highlight aspects of HiTS-RAP that can be further improved and points of comparison between HiTS-RAP and two other recently developed methods, quantitative analysis of RNA on a massively parallel array (RNA-MaP) and RNA Bind-n-Seq (RBNS), for quantitative analysis of RNA-protein interactions.
Cui, Yang; Hanley, Luke
2015-06-01
ChiMS is an open-source data acquisition and control software program written within LabVIEW for high-speed imaging and depth-profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at a high repetition rate, save data to hard disk at high throughput, and perform high-speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to imaging with non-laser-based mass spectrometers and to various other experiments in laser physics, physical chemistry, and surface science.
A Template-Based Protein Structure Reconstruction Method Using Deep Autoencoder Learning.
Li, Haiou; Lyu, Qiang; Cheng, Jianlin
2016-12-01
Protein structure prediction is an important problem in computational biology, and is widely applied to various biomedical problems such as protein function study, protein design, and drug design. In this work, we developed a novel deep learning approach based on a deeply stacked denoising autoencoder for protein structure reconstruction. We applied our approach to a template-based protein structure prediction using only the 3D structural coordinates of homologous template proteins as input. The templates were identified for a target protein by a PSI-BLAST search. 3DRobot (a program that automatically generates diverse and well-packed protein structure decoys) was used to generate initial decoy models for the target from the templates. A stacked denoising autoencoder was trained on the decoys to obtain a deep learning model for the target protein. The trained deep model was then used to reconstruct the final structural model for the target sequence. With target proteins that have highly similar template proteins as benchmarks, the GDT-TS score of the predicted structures is greater than 0.7, suggesting that the deep autoencoder is a promising method for protein structure reconstruction.
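The training step described above, corrupting decoy structures and teaching an autoencoder to reconstruct them, can be sketched with one denoising layer (the paper stacks several) on a toy stand-in for the decoy coordinates. The dimensions, learning rate, and synthetic data below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for structural decoys: flattened 3D coordinates of a
# 10-atom "protein" (30 features). Decoys = native coordinates plus
# noise, loosely mimicking 3DRobot-style decoy generation.
native = rng.normal(size=30)
decoys = native + 0.1 * rng.normal(size=(200, 30))

# One denoising-autoencoder layer: corrupt each decoy, train weights
# to reconstruct the uncorrupted decoy from the corrupted input.
n_hidden = 16
W1 = 0.1 * rng.normal(size=(30, n_hidden))
W2 = 0.1 * rng.normal(size=(n_hidden, 30))
lr = 0.01

def forward(x):
    h = np.tanh(x @ W1)
    return h, h @ W2

def recon_error(x):
    _, out = forward(x)
    return float(np.mean((out - x) ** 2))

mse_before = recon_error(native)
for epoch in range(300):
    noisy = decoys + 0.05 * rng.normal(size=decoys.shape)
    h, out = forward(noisy)
    err = out - decoys                   # reconstruct the clean decoy
    gW2 = h.T @ err / len(decoys)        # gradient w.r.t. decoder weights
    gh = (err @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    gW1 = noisy.T @ gh / len(decoys)
    W1 -= lr * gW1
    W2 -= lr * gW2
mse_after = recon_error(native)

print(f"before {mse_before:.3f} after {mse_after:.3f}")  # error drops
```

Stacking such layers and training on decoys of a specific target is what lets the deep model act as a target-specific reconstruction function for the final structural model.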
Development of a High Angular Resolution Diffusion Imaging Human Brain Template
Varentsova, Anna; Zhang, Shengwei; Arfanakis, Konstantinos
2014-01-01
Brain diffusion templates contain rich information about the microstructure of the brain, and are used as references in spatial normalization or in the development of brain atlases. The accuracy of diffusion templates constructed based on the diffusion tensor (DT) model is limited in regions with complex neuronal micro-architecture. High angular resolution diffusion imaging (HARDI) overcomes limitations of the DT model and is capable of resolving intravoxel heterogeneity. However, when HARDI is combined with multiple-shot sequences to minimize image artifacts, the scan time becomes inappropriate for human brain imaging. In this work, an artifact-free HARDI template of the human brain was developed from low angular resolution multiple-shot diffusion data. The resulting HARDI template was produced in ICBM-152 space based on Turboprop diffusion data, was shown to resolve complex neuronal micro-architecture in regions with intravoxel heterogeneity, and contained fiber orientation information consistent with known human brain anatomy. PMID:24440528
IAOseq: inferring abundance of overlapping genes using RNA-seq data.
Sun, Hong; Yang, Shuang; Tun, Liangliang; Li, Yixue
2015-01-01
Overlapping transcription constitutes a common mechanism for regulating gene expression. A major limitation of overlapping transcription assays is the lack of high-throughput expression data. We developed a new tool (IAOseq) that is based on read distributions along the transcribed regions to identify the expression levels of overlapping genes from standard RNA-seq data. Compared with five commonly used quantification methods, IAOseq showed better performance in the estimation accuracy of overlapping transcription levels. For same-strand overlapping transcription, existing high-throughput methods can rarely distinguish which strand was present in the original mRNA template. The IAOseq results showed that the commonly used methods overestimate the expression levels of same-strand overlapping genes by an average of 1.6-fold. This work provides a useful tool for mining overlapping transcription levels from standard RNA-seq libraries. IAOseq could be used to help us understand the complex regulatory mechanisms mediated by overlapping transcripts. IAOseq is freely available at http://lifecenter.sgst.cn/main/en/IAO_seq.jsp.
NASA Astrophysics Data System (ADS)
Khan, Muhammad Ibrahim
Limitations on the near-future scaling of conventional silicon technology have stimulated the quest for alternative technologies based on nanometer-scale materials and devices. Since the discovery of carbon nanotubes, there has been great interest in the synthesis and characterization of other one-dimensional materials. Nanorods, wires, belts, and tubes make up one particular class of anisotropic nanomaterials, which are considered quasi-one-dimensional structures. Nanowires are promising materials for many novel applications, ranging from chemical and biological sensors to optical and electronic devices. This is not only because of their unique geometry, but also because they possess many unique physical properties, including electrical, magnetic, optical, and mechanical properties. In this dissertation, we describe the synthesis, structure, and properties of nanowires of various inorganic materials fabricated simply by filling up pores or vias in a template by means of electrochemical deposition (ECD). The architecture of the porous template defines the wire shape, direction, and size. Because of the extreme aspect ratios of these 3D porous membranes, most physical and chemical vapor deposition techniques are ill-suited for this template-directed growth technique, and template-directed fabrication is found to be superior in terms of low cost, high throughput, high volume, and ease of production. Multicomponent nanowires can also be grown simply by switching the solution composition or, in some cases, even in the same solution by switching the deposition potential. The nanowires can be released from the template matrix by chemical dissolution of the template. Based on the successful fabrication of elemental and multicomponent nanowires, we have designed and fabricated InSb nanowire based field-effect transistor (FET) devices on a Si substrate.
InSb is well known for its narrow direct band gap (0.18 eV at 300 K) and has the highest electron mobility (8×10⁴ cm² V⁻¹ s⁻¹ at 300 K), electron velocity, and ballistic length (up to 0.7 μm at 300 K) of any known semiconductor. We demonstrated InSb nanowire devices with diameters ranging from 30 nm to 200 nm using the template-directed technique, which promises smaller feature sizes and an alternate, more economical path to atomic-scale computing structures than top-down lithography.
NASA Astrophysics Data System (ADS)
Cherala, Anshuman; Sreenivasan, S. V.
2018-12-01
Complex nanoshaped structures (nanoshape structures here are defined as shapes enabled by sharp corners with radius of curvature <5 nm) have been shown to enable emerging nanoscale applications in energy, electronics, optics, and medicine. This nanoshaped fabrication at high throughput is well beyond the capabilities of advanced optical lithography. While the highest-resolution e-beam processes (Gaussian beam tools with non-chemically amplified resists) can achieve <5 nm resolution, this is only available at very low throughputs. Large-area e-beam processes, needed for photomasks and imprint templates, are limited to 18 nm half-pitch lines and spaces and 20 nm half-pitch hole patterns. Using nanoimprint lithography, we have previously demonstrated the ability to fabricate precise diamond-like nanoshapes with 3 nm radius corners over large areas. An exemplary shaped silicon nanowire ultracapacitor device was fabricated with these nanoshaped structures, wherein the half-pitch was 100 nm. The device significantly exceeded standard nanowire capacitor performance (by 90%) due to relative increase in surface area per unit projected area, enabled by the nanoshape. Going beyond the previous work, in this paper we explore the scaling of these nanoshaped structures to 10 nm half-pitch and below. At these scales a new "shape retention" resolution limit is observed due to polymer relaxation in imprint resists, which cannot be predicted with a linear elastic continuum model. An all-atom molecular dynamics model of the nanoshape structure was developed here to study this shape retention phenomenon and accurately predict the polymer relaxation. The atomistic framework is an essential modeling and design tool to extend the capability of imprint lithography to sub-10 nm nanoshapes. This framework has been used here to propose process refinements that maximize shape retention, and design template assist features (design for nanoshape retention) to achieve targeted nanoshapes.
Piatkowski, Pawel; Kasprzak, Joanna M; Kumar, Deepak; Magnus, Marcin; Chojnowski, Grzegorz; Bujnicki, Janusz M
2016-01-01
RNA encompasses an essential part of all known forms of life. The functions of many RNA molecules are dependent on their ability to form complex three-dimensional (3D) structures. However, experimental determination of RNA 3D structures is laborious and challenging, and therefore, the majority of known RNAs remain structurally uncharacterized. To address this problem, computational structure prediction methods were developed that either utilize information derived from known structures of other RNA molecules (by way of template-based modeling) or attempt to simulate the physical process of RNA structure formation (by way of template-free modeling). All computational methods suffer from various limitations that make theoretical models less reliable than high-resolution experimentally determined structures. This chapter provides a protocol for computational modeling of RNA 3D structure that overcomes major limitations by combining two complementary approaches: template-based modeling that is capable of predicting global architectures based on similarity to other molecules but often fails to predict local unique features, and template-free modeling that can predict the local folding, but is limited to modeling the structure of relatively small molecules. Here, we combine the use of a template-based method ModeRNA with a template-free method SimRNA. ModeRNA requires a sequence alignment of the target RNA sequence to be modeled with a template of the known structure; it generates a model that predicts the structure of a conserved core and provides a starting point for modeling of variable regions. SimRNA can be used to fold small RNAs (<80 nt) without any additional structural information, and to refold parts of models for larger RNAs that have a correctly modeled core. ModeRNA can be either downloaded, compiled and run locally or run through a web interface at http://genesilico.pl/modernaserver/ . 
SimRNA is currently available to download for local use as a precompiled software package at http://genesilico.pl/software/stand-alone/simrna and as a web server at http://genesilico.pl/SimRNAweb . For model optimization we use QRNAS, available at http://genesilico.pl/qrnas .
Pyrosequencing for Microbial Identification and Characterization
Cummings, Patrick J.; Ahmed, Ray; Durocher, Jeffrey A.; Jessen, Adam; Vardi, Tamar; Obom, Kristina M.
2013-01-01
Pyrosequencing is a versatile microbial genome sequencing technique that can be used to identify bacterial species, discriminate bacterial strains, and detect genetic mutations that confer resistance to anti-microbial agents. The advantages of pyrosequencing for microbiology applications include rapid and reliable high-throughput screening and accurate identification of microbes and microbial genome mutations. Pyrosequencing involves sequencing DNA by synthesizing the complementary strand a single base at a time, while determining the specific nucleotide being incorporated during the synthesis reaction. The reaction occurs on immobilized single-stranded template DNA, where the four deoxyribonucleotides (dNTPs) are added sequentially and the unincorporated dNTPs are enzymatically degraded before addition of the next dNTP to the synthesis reaction. Incorporation of a specific base is detected through the generation of chemiluminescent signals, and the order of dNTPs that produce these signals determines the DNA sequence of the template. The real-time sequencing capability of pyrosequencing technology enables rapid microbial identification in a single assay. In addition, the pyrosequencing instrument can analyze the full genetic diversity of anti-microbial drug resistance, including typing of SNPs, point mutations, insertions, and deletions, as well as quantification of multiple gene copies that may occur in some anti-microbial resistance patterns. PMID:23995536
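The read-out described above lends itself to a short simulation: synthesis advances only when the dispensed dNTP pairs with the next template base, and a homopolymer run incorporates several nucleotides at once, giving a proportionally larger chemiluminescent flash. This is a simplified sketch (strand directionality and the enzyme cascade are ignored); the function names are illustrative.

```python
def pyrogram(template, dispensation):
    """Simulate a pyrosequencing run.

    Returns (dNTP, flash_intensity) per dispensation; intensity is the
    number of bases incorporated (0 = no flash, dNTP is degraded).
    """
    pair = {"A": "T", "C": "G", "G": "C", "T": "A"}
    # The synthesized strand is the complement of the template.
    target = "".join(pair[b] for b in template)
    pos, signals = 0, []
    for dntp in dispensation:
        n = 0
        while pos < len(target) and target[pos] == dntp:
            n += 1          # homopolymer run: several incorporations,
            pos += 1        # one proportionally brighter flash
        signals.append((dntp, n))
    return signals

def decode(signals):
    """Read the synthesized sequence back from the flash order."""
    return "".join(dntp * n for dntp, n in signals)

# Cyclic dispensation over a short template: the two leading A flashes
# come from the TT homopolymer in the template.
sig = pyrogram("TTAC", "ACGTACGT")
print(decode(sig))  # AATG, the complement of the template
```

The proportionality of flash intensity to run length is also the technique's known weak spot: long homopolymers saturate, which is why real base-callers fit intensity distributions rather than counting exactly as this sketch does.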
High-Throughput Thermodynamic Modeling and Uncertainty Quantification for ICME
NASA Astrophysics Data System (ADS)
Otis, Richard A.; Liu, Zi-Kui
2017-05-01
One foundational component of the integrated computational materials engineering (ICME) and Materials Genome Initiative is the computational thermodynamics based on the calculation of phase diagrams (CALPHAD) method. The CALPHAD method pioneered by Kaufman has enabled the development of thermodynamic, atomic mobility, and molar volume databases of individual phases in the full space of temperature, composition, and sometimes pressure for technologically important multicomponent engineering materials, along with sophisticated computational tools for using the databases. In this article, our recent efforts will be presented in terms of developing new computational tools for high-throughput modeling and uncertainty quantification based on high-throughput, first-principles calculations and the CALPHAD method along with their potential propagations to downstream ICME modeling and simulations.
Tiersch, Terrence R.; Yang, Huiping; Hu, E.
2011-01-01
With the development of genomic research technologies, comparative genome studies among vertebrate species are becoming commonplace in human biomedical research. Fish offer unlimited versatility for biomedical research. Extensive studies are done using these fish models, yielding tens of thousands of specific strains and lines, and the number is increasing every day. Thus, high-throughput sperm cryopreservation is urgently needed to preserve these genetic resources. Although high-throughput processing has been widely applied for sperm cryopreservation in livestock for decades, application in biomedical model fishes is still in the concept-development stage because of the limited sample volumes and the biological characteristics of fish sperm. High-throughput processing in livestock was developed based on advances made in the laboratory and was scaled up for increased processing speed, capability for mass production, and uniformity and quality assurance. Cryopreserved germplasm combined with high-throughput processing constitutes an independent industry encompassing animal breeding, preservation of genetic diversity, and medical research. Currently, there is no specifically engineered system available for high-throughput processing of cryopreserved germplasm for aquatic species. This review discusses the concepts and needs for high-throughput technology for model fishes, proposes approaches for technical development, and surveys future directions of this approach. PMID:21440666
Kolls, Brad J; Lai, Amy H; Srinivas, Anang A; Reid, Robert R
2014-06-01
The purpose of this study was to determine the relative cost reductions achievable within different staffing models for a continuous video-electroencephalography (cvEEG) service by introducing a template system for 10/20 lead application. We compared six staffing models using decision-tree modeling based on historical service-line utilization data from the cvEEG service at our center. Templates were integrated into technologist-based service lines in six different ways. The six models studied were: templates for all studies, templates for intensive care unit (ICU) studies, templates for on-call studies, templates for studies of ≤24-hour duration, technologists for on-call studies, and technologists for all studies. Cost was linearly related to study volume for all models, with the "templates for all" model incurring the lowest cost and the "technologists for all" model carrying the greatest cost. Direct cost comparison shows that any introduction of templates results in cost savings; templates used for patients located in the ICU were the second most cost-efficient option and the most practical of the combined models to implement. The cost difference between the highest- and lowest-cost models under the base case produced an annual estimated savings of $267,574. Implementation of the ICU template model at our institution under base-case conditions would result in a $205,230 savings over our current "technologist for all" model. Any implementation of templates into a technologist-based cvEEG service line results in cost savings, with the most significant annual savings coming from using templates for all studies; the most practical approach, with the second-highest cost reduction, is using templates in the ICU. The lowered costs determined in this work suggest that a template-based cvEEG service could be supported at smaller centers at significantly reduced cost, allowing broader use of cvEEG patient monitoring.
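Since the abstract reports that cost is linear in study volume for every staffing model, the comparison can be sketched with a simple linear cost function. All dollar parameters below are hypothetical placeholders, not the study's figures; only the qualitative ordering (templates for all cheapest, technologists for all most expensive) is taken from the abstract.

```python
def annual_cost(n_studies, cost_per_study, fixed_overhead):
    """Linear cost model: the abstract reports cost scales linearly
    with study volume for every staffing model considered."""
    return fixed_overhead + n_studies * cost_per_study

# Hypothetical per-study and overhead costs for three of the six
# staffing models, evaluated at an assumed annual volume.
n = 1500
models = {
    "templates for all": annual_cost(n, 40.0, 20000.0),
    "ICU templates, technologist otherwise": annual_cost(n, 90.0, 30000.0),
    "technologists for all": annual_cost(n, 160.0, 50000.0),
}
best = min(models, key=models.get)
print(best)  # templates for all
```

Because every model is linear in volume, the ranking is volume-independent once one line lies below another everywhere, which is why the study can report a single cheapest model rather than a crossover threshold.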
Template-Based Modeling of Protein-RNA Interactions.
Zheng, Jinfang; Kundrotas, Petras J; Vakser, Ilya A; Liu, Shiyong
2016-09-01
Protein-RNA complexes formed by specific recognition between RNA and RNA-binding proteins play an important role in biological processes. More than a thousand such proteins in humans have been curated, and many novel RNA-binding proteins remain to be discovered. Due to limitations of experimental approaches, computational techniques are needed to characterize protein-RNA interactions. Although much progress has been made, adequate methodologies that reliably provide atomic-resolution structural details are still lacking. Although protein-RNA free docking approaches have proved useful, template-based approaches generally provide higher-quality predictions. Templates are key to building a high-quality model. Sequence/structure relationships were studied based on a representative set of binary protein-RNA complexes from the PDB. Several approaches were tested for pairwise target/template alignment. The analysis revealed a transition point between random and correct binding modes. The results showed that structural alignment is better than sequence alignment at identifying good templates, suitable for generating protein-RNA complexes close to the native structure, and outperforms free docking, successfully predicting complexes where free docking fails, including cases of significant conformational change upon binding. A template-based protein-RNA interaction modeling protocol, PRIME, was developed and benchmarked on a representative set of complexes.
Stranges, P. Benjamin; Palla, Mirkó; Kalachikov, Sergey; Nivala, Jeff; Dorwart, Michael; Trans, Andrew; Kumar, Shiv; Porel, Mintu; Chien, Minchen; Tao, Chuanjuan; Morozova, Irina; Li, Zengmin; Shi, Shundi; Aberra, Aman; Arnold, Cleoma; Yang, Alexander; Aguirre, Anne; Harada, Eric T.; Korenblum, Daniel; Pollard, James; Bhat, Ashwini; Gremyachinskiy, Dmitriy; Bibillo, Arek; Chen, Roger; Davis, Randy; Russo, James J.; Fuller, Carl W.; Roever, Stefan; Ju, Jingyue; Church, George M.
2016-01-01
Scalable, high-throughput DNA sequencing is a prerequisite for precision medicine and biomedical research. Recently, we presented a nanopore-based sequencing-by-synthesis (Nanopore-SBS) approach, which used a set of nucleotides with polymer tags that allow discrimination of the nucleotides in a biological nanopore. Here, we designed and covalently coupled a DNA polymerase to an α-hemolysin (αHL) heptamer using the SpyCatcher/SpyTag conjugation approach. These porin–polymerase conjugates were inserted into lipid bilayers on a complementary metal oxide semiconductor (CMOS)-based electrode array for high-throughput electrical recording of DNA synthesis. The designed nanopore construct successfully detected the capture of tagged nucleotides complementary to a DNA base on a provided template. We measured over 200 tagged-nucleotide signals for each of the four bases and developed a classification method to uniquely distinguish them from each other and background signals. The probability of falsely identifying a background event as a true capture event was less than 1.2%. In the presence of all four tagged nucleotides, we observed sequential additions in real time during polymerase-catalyzed DNA synthesis. Single-polymerase coupling to a nanopore, in combination with the Nanopore-SBS approach, can provide the foundation for a low-cost, single-molecule, electronic DNA-sequencing platform. PMID:27729524
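The classification step described above, assigning each capture event to one of the four tag signals or to background, can be illustrated with a nearest-centroid rule over signal amplitudes. The amplitude values below are invented for illustration; the actual classifier operates on richer features of the measured tag currents.

```python
# Hypothetical mean signal amplitudes (arbitrary units) for the four
# polymer tags and for background events.
centroids = {"A": 10.0, "C": 20.0, "G": 30.0, "T": 40.0, "background": 2.0}

def classify(amplitude):
    """Assign a capture event to the nearest tag level or background.

    A real pipeline would also threshold on event duration and the
    distance to the nearest centroid to bound the false-capture rate
    (reported as <1.2% in the abstract).
    """
    return min(centroids, key=lambda base: abs(centroids[base] - amplitude))

events = [9.4, 21.0, 2.5, 38.0]
print([classify(a) for a in events])  # ['A', 'C', 'background', 'T']
```

Reading sequential classifications in time order is then what turns polymerase-catalyzed tagged-nucleotide additions into a base-called sequence.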
NASA Astrophysics Data System (ADS)
Wang, Min; Cui, Qi; Sun, Yujie; Wang, Qiao
2018-07-01
In object-based image analysis (OBIA), object classification performance is jointly determined by image segmentation, sample or rule setting, and classifiers. Typically, as a crucial step to obtain object primitives, image segmentation quality significantly influences subsequent feature extraction and analyses. By contrast, template matching extracts specific objects from images and prevents shape defects caused by image segmentation. However, creating or editing templates is tedious and sometimes results in incomplete or inaccurate templates. In this study, we combine OBIA and template matching techniques to address these problems and aim for accurate photovoltaic panel (PVP) extraction from very high-resolution (VHR) aerial imagery. The proposed method is based on the previously proposed region-line primitive association framework, in which complementary information between region (segment) and line (straight line) primitives is utilized to achieve a more powerful performance than routine OBIA. Several novel concepts, including the mutual fitting ratio and best-fitting template based on region-line primitive association analyses, are proposed. Automatic template generation and matching method for PVP extraction from VHR imagery are designed for concept and model validation. Results show that the proposed method can successfully extract PVPs without any user-specified matching template or training sample. High user independency and accuracy are the main characteristics of the proposed method in comparison with routine OBIA and template matching techniques.
NASA Astrophysics Data System (ADS)
Mondal, Sudip; Hegarty, Evan; Martin, Chris; Gökçe, Sertan Kutal; Ghorashian, Navid; Ben-Yakar, Adela
2016-10-01
Next generation drug screening could benefit greatly from in vivo studies, using small animal models such as Caenorhabditis elegans for hit identification and lead optimization. Current in vivo assays can operate either at low throughput with high resolution or with low resolution at high throughput. To enable both high-throughput and high-resolution imaging of C. elegans, we developed an automated microfluidic platform. This platform can image 15 z-stacks of ~4,000 C. elegans from 96 different populations using a large-scale chip with a micron resolution in 16 min. Using this platform, we screened ~100,000 animals of the poly-glutamine aggregation model on 25 chips. We tested the efficacy of ~1,000 FDA-approved drugs in improving the aggregation phenotype of the model and identified four confirmed hits. This robust platform now enables high-content screening of various C. elegans disease models at the speed and cost of in vitro cell-based assays.
In vitro based assays are used to identify potential endocrine disrupting chemicals. Thyroperoxidase (TPO), an enzyme essential for thyroid hormone (TH) synthesis, is a target site for disruption of the thyroid axis for which a high-throughput screening (HTPS) assay has recently ...
Efficient and accurate adverse outcome pathway (AOP) based high-throughput screening (HTS) methods use a systems biology based approach to computationally model in vitro cellular and molecular data for rapid chemical prioritization; however, not all HTS assays are grounded by rel...
Design, fabrication, and integration testing of the Garden Banks 388 subsea production template
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ledbetter, W.R.; Smith, D.W.; Pierce, D.M.
1995-12-31
Enserch Exploration's Garden Banks 388 development has a production scheme based around a floating drilling and production facility and a subsea drilling/production template. The Floating Production Facility (FPF) is a converted semisubmersible drilling rig that will drill and produce through a 24-well-slot subsea template. This development is located in Block 388 of the Garden Banks area in the Gulf of Mexico, approximately 200 miles southwest of New Orleans, Louisiana. The production system is being installed in an area of known oil and gas reserves and will produce to a shallow-water platform 54 miles away at Eugene Island 315. The FPF will be permanently moored above the template. The subsea template has been installed in 2,190 feet of water and will produce through a 2,000-foot free-standing production riser system to the FPF. The produced fluids are partially separated on the FPF before oil and gas are pumped through the template to export gathering lines that are connected to the shallow-water facility. The system's designed throughput is 40,000 BOPD of oil and 120 MMSCFD of gas.
Haufe, Stefan; Huang, Yu; Parra, Lucas C
2015-08-01
In electroencephalographic (EEG) source imaging as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element models (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization is the high acquisition cost of structural magnetic resonance images. Here, we build a highly detailed (0.5 mm³ resolution, 6-tissue-type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is similarly accurate as individual BEMs. Moreover, by using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.
FUNGAL BIOGEOGRAPHY. Response to Comment on "Global diversity and geography of soil fungi".
Tedersoo, Leho; Bahram, Mohammad; Põlme, Sergei; Anslan, Sten; Riit, Taavi; Kõljalg, Urmas; Nilsson, R Henrik; Hildebrand, Falk; Abarenkov, Kessy
2015-08-28
Schadt and Rosling (Technical Comment, 26 June 2015, p. 1438) argue that primer-template mismatches caused the fungal class Archaeorhizomycetes to be overlooked in a global soil survey. Amplicon-based metabarcoding with nine barcode-primer pair combinations and polymerase chain reaction (PCR)-free shotgun metagenomics revealed that barcode and primer choice and PCR bias drive the observed diversity and composition of microorganisms in general, but the Archaeorhizomycetes were little affected in the global study. We urge that careful choice of DNA markers and primers is essential for ecological studies using high-throughput sequencing for identification. Copyright © 2015, American Association for the Advancement of Science.
Template-based protein structure modeling using the RaptorX web server.
Källberg, Morten; Wang, Haipeng; Wang, Sheng; Peng, Jian; Wang, Zhiyong; Lu, Hui; Xu, Jinbo
2012-07-19
A key challenge of modern biology is to uncover the functional role of the protein entities that compose cellular proteomes. To this end, the availability of reliable three-dimensional atomic models of proteins is often crucial. This protocol presents a community-wide web-based method using RaptorX (http://raptorx.uchicago.edu/) for protein secondary structure prediction, template-based tertiary structure modeling, alignment quality assessment and sophisticated probabilistic alignment sampling. RaptorX distinguishes itself from other servers by the quality of the alignment between a target sequence and one or multiple distantly related template proteins (especially those with sparse sequence profiles) and by a novel nonlinear scoring function and a probabilistic-consistency algorithm. Consequently, RaptorX delivers high-quality structural models for many targets with only remote templates. At present, it takes RaptorX ~35 min to finish processing a sequence of 200 amino acids. Since its official release in August 2011, RaptorX has processed ~6,000 sequences submitted by ~1,600 users from around the world.
Template-based protein structure modeling using the RaptorX web server
Källberg, Morten; Wang, Haipeng; Wang, Sheng; Peng, Jian; Wang, Zhiyong; Lu, Hui; Xu, Jinbo
2016-01-01
A key challenge of modern biology is to uncover the functional role of the protein entities that compose cellular proteomes. To this end, the availability of reliable three-dimensional atomic models of proteins is often crucial. This protocol presents a community-wide web-based method using RaptorX (http://raptorx.uchicago.edu/) for protein secondary structure prediction, template-based tertiary structure modeling, alignment quality assessment and sophisticated probabilistic alignment sampling. RaptorX distinguishes itself from other servers by the quality of the alignment between a target sequence and one or multiple distantly related template proteins (especially those with sparse sequence profiles) and by a novel nonlinear scoring function and a probabilistic-consistency algorithm. Consequently, RaptorX delivers high-quality structural models for many targets with only remote templates. At present, it takes RaptorX ~35 min to finish processing a sequence of 200 amino acids. Since its official release in August 2011, RaptorX has processed ~6,000 sequences submitted by ~1,600 users from around the world. PMID:22814390
Template-Based Modeling of Protein-RNA Interactions
Zheng, Jinfang; Kundrotas, Petras J.; Vakser, Ilya A.
2016-01-01
Protein-RNA complexes formed by specific recognition between RNA and RNA-binding proteins play an important role in biological processes. More than a thousand such human proteins are curated, and many novel RNA-binding proteins remain to be discovered. Due to the limitations of experimental approaches, computational techniques are needed to characterize protein-RNA interactions. Although much progress has been made, methodologies that reliably provide atomic-resolution structural details are still lacking. Free protein-RNA docking approaches have proved useful, but template-based approaches generally provide higher-quality predictions. Templates are key to building a high-quality model. Sequence/structure relationships were studied based on a representative set of binary protein-RNA complexes from the PDB. Several approaches were tested for pairwise target/template alignment. The analysis revealed a transition point between random and correct binding modes. The results showed that structural alignment is better than sequence alignment at identifying good templates, suitable for generating protein-RNA complexes close to the native structure, and outperforms free docking, successfully predicting complexes where free docking fails, including cases of significant conformational change upon binding. A template-based protein-RNA interaction modeling protocol, PRIME, was developed and benchmarked on a representative set of complexes. PMID:27662342
Synthetic Molecular Evolution of Membrane-Active Peptides
NASA Astrophysics Data System (ADS)
Wimley, William
The physical chemistry of membrane partitioning largely determines the function of membrane active peptides. Membrane-active peptides have potential utility in many areas, including in the cellular delivery of polar compounds, cancer therapy, biosensor design, and in antibacterial, antiviral and antifungal therapies. Yet, despite decades of research on thousands of known examples, useful sequence-structure-function relationships are essentially unknown. Because peptide-membrane interactions within the highly fluid bilayer are dynamic and heterogeneous, accounts of mechanism are necessarily vague and descriptive, and have little predictive power. This creates a significant roadblock to advances in the field. We are bypassing that roadblock with synthetic molecular evolution: iterative peptide library design and orthogonal high-throughput screening. We start with template sequences that have at least some useful activity, and create small, focused libraries using structural and biophysical principles to design the sequence space around the template. Orthogonal high-throughput screening is used to identify gain-of-function peptides by simultaneously selecting for several different properties (e.g. solubility, activity and toxicity). Multiple generations of iterative library design and screening have enabled the identification of membrane-active sequences with heretofore unknown properties, including clinically relevant, broad-spectrum activity against drug-resistant bacteria and enveloped viruses as well as pH-triggered macromolecular poration.
Some pharmaceuticals and environmental chemicals bind the thyroid peroxidase (TPO) enzyme and disrupt thyroid hormone production. The potential for TPO inhibition is a function of both the binding affinity and concentration of the chemical within the thyroid gland. The former can...
Templated dewetting: designing entirely self-organized platforms for photocatalysis.
Altomare, Marco; Nguyen, Nhat Truong; Schmuki, Patrik
2016-12-01
Formation and dispersion of metal nanoparticles on oxide surfaces in site-specific or even arrayed configurations are key in various technological processes such as catalysis, photonics, and electrochemistry, and for fabricating electrodes, sensors, memory devices, and magnetic, optical, and plasmonic platforms. A crucial aspect of the efficient performance of many of these metal/metal oxide arrangements is a reliable fabrication approach. Since the early works on graphoepitaxy in the 1970s, solid-state dewetting of metal films on patterned surfaces has been much explored and is regarded as a most effective tool to form defined arrays of ordered metal particles on a desired substrate. While templated dewetting has been studied in detail, particularly from a mechanistic perspective on lithographically patterned Si surfaces, the outstanding potential of its applications on metal oxide semiconductors, such as titania, has received only limited attention. In this perspective we illustrate how dewetting, and particularly templated dewetting, can be used to fabricate highly efficient metal/TiO2 photocatalyst assemblies, e.g. for green hydrogen evolution. A remarkable advantage is that the synthesis of such photocatalysts is completely based on self-ordering principles: anodic self-organized TiO2 nanotube arrays that self-align to the highest degree of hexagonal ordering are an ideal topographical substrate for a second self-ordering process, that is, templated dewetting of sputter-deposited metal thin films. The controllable metal/semiconductor coupling delivers intriguing features and functionalities. We review concepts inherent to dewetting, and particularly templated dewetting, and outline a series of effective tools that can be synergistically interlaced to reach fine control with nanoscopic precision over the resulting metal/TiO2 structures (in terms of, e.g., high ordering, size distribution, site-specific placement, and alloy formation) to maximize their photocatalytic efficiency. These processes are easy to scale up, offer high throughput, and hold great potential for fabricating not only (photo)catalytic materials but also a large palette of other functional nanostructured elements and devices.
Templated dewetting: designing entirely self-organized platforms for photocatalysis
Altomare, Marco; Nguyen, Nhat Truong
2016-01-01
Formation and dispersion of metal nanoparticles on oxide surfaces in site-specific or even arrayed configurations are key in various technological processes such as catalysis, photonics, and electrochemistry, and for fabricating electrodes, sensors, memory devices, and magnetic, optical, and plasmonic platforms. A crucial aspect of the efficient performance of many of these metal/metal oxide arrangements is a reliable fabrication approach. Since the early works on graphoepitaxy in the 1970s, solid-state dewetting of metal films on patterned surfaces has been much explored and is regarded as a most effective tool to form defined arrays of ordered metal particles on a desired substrate. While templated dewetting has been studied in detail, particularly from a mechanistic perspective on lithographically patterned Si surfaces, the outstanding potential of its applications on metal oxide semiconductors, such as titania, has received only limited attention. In this perspective we illustrate how dewetting, and particularly templated dewetting, can be used to fabricate highly efficient metal/TiO2 photocatalyst assemblies, e.g. for green hydrogen evolution. A remarkable advantage is that the synthesis of such photocatalysts is completely based on self-ordering principles: anodic self-organized TiO2 nanotube arrays that self-align to the highest degree of hexagonal ordering are an ideal topographical substrate for a second self-ordering process, that is, templated dewetting of sputter-deposited metal thin films. The controllable metal/semiconductor coupling delivers intriguing features and functionalities. We review concepts inherent to dewetting, and particularly templated dewetting, and outline a series of effective tools that can be synergistically interlaced to reach fine control with nanoscopic precision over the resulting metal/TiO2 structures (in terms of, e.g., high ordering, size distribution, site-specific placement, and alloy formation) to maximize their photocatalytic efficiency. These processes are easy to scale up, offer high throughput, and hold great potential for fabricating not only (photo)catalytic materials but also a large palette of other functional nanostructured elements and devices. PMID:28567258
Assessment of Template-Based Modeling of Protein Structure in CASP11
Modi, Vivek; Xu, Qifang; Adhikari, Sam; Dunbrack, Roland L.
2016-01-01
We present the assessment of predictions submitted in the template-based modeling (TBM) category of CASP11 (Critical Assessment of Protein Structure Prediction). Model quality was judged on the basis of global and local measures of accuracy on all atoms, including side chains. The top groups on 39 human-server targets, based on model 1 predictions, were LEER, Zhang, LEE, MULTICOM, and Zhang-Server. The top server groups on 81 targets, based on model 1 predictions, were Zhang-Server, nns, BAKER-ROSETTASERVER, QUARK, and myprotein-me. In CASP11, the best models for most targets were equal to or better than the best template available in the Protein Data Bank, even for targets with poor templates. The overall performance in CASP11 is similar to that of predictors in CASP10, with slightly better performance on the hardest targets. For most targets, assessment measures exhibited bimodal probability density distributions. Multi-dimensional scaling of an RMSD matrix for each target typically revealed a single cluster of models similar to the target structure, with a mode in the GDT-TS density between 40 and 90, and a wide distribution of models highly divergent from each other and from the experimental structure, with a density mode at a GDT-TS value of ~20. The models in this second density peak were either compact models with entirely the wrong fold or highly non-compact models. The results argue for a density-driven approach in future CASP TBM assessments that accounts for the bimodal nature of these distributions, instead of Z-scores, which assume a unimodal Gaussian distribution. PMID:27081927
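The bimodality described above, and why a Z-score (which assumes a single Gaussian) misreads it while a density estimate recovers both modes, can be illustrated with a short sketch. The scores, cluster parameters, and kernel bandwidth below are synthetic assumptions for illustration only, not CASP data:

```python
import numpy as np

# Synthetic GDT-TS scores mimicking the two clusters described above:
# a near-native cluster (mode between 40 and 90) and a divergent
# wrong-fold / non-compact cluster (mode around 20).
rng = np.random.default_rng(0)
gdt = np.concatenate([
    rng.normal(70.0, 8.0, 60),   # near-native models
    rng.normal(20.0, 5.0, 40),   # highly divergent models
])

# Z-scores assume one Gaussian: the divergent cluster drags the mean
# down, inflating Z for ordinary near-native models.
z = (gdt - gdt.mean()) / gdt.std()

# A simple Gaussian kernel density estimate instead reveals both modes
# (bandwidth 4.0 is a hypothetical, untuned choice).
grid = np.linspace(0.0, 100.0, 401)
bw = 4.0
dens = np.exp(-0.5 * ((grid[:, None] - gdt[None, :]) / bw) ** 2).sum(axis=1)

# Interior local maxima of the density correspond to the cluster modes.
interior = (dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])
modes = grid[1:-1][interior]
print(len(modes))  # two clusters -> (at least) two modes
```

A density-driven score could then rank models against the near-native mode rather than against a pooled mean that neither cluster actually follows.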
Here, we present results of an approach for risk-based prioritization using the Threshold of Toxicological Concern (TTC) combined with high-throughput exposure (HTE) modelling. We started with 7968 chemicals with calculated population median oral daily intakes characterized by an...
Spitzer, James D; Hupert, Nathaniel; Duckart, Jonathan; Xiong, Wei
2007-01-01
Community-based mass prophylaxis is a core public health operational competency, but staffing needs may overwhelm the local trained health workforce. Just-in-time (JIT) training of emergency staff and computer modeling of workforce requirements represent two complementary approaches to address this logistical problem. Multnomah County, Oregon, conducted a high-throughput point of dispensing (POD) exercise to test JIT training and computer modeling to validate POD staffing estimates. The POD had 84% non-health-care worker staff and processed 500 patients per hour. Post-exercise modeling replicated observed staff utilization levels and queue formation, including development and amelioration of a large medical evaluation queue caused by lengthy processing times and understaffing in the first half-hour of the exercise. The exercise confirmed the feasibility of using JIT training for high-throughput antibiotic dispensing clinics staffed largely by nonmedical professionals. Patient processing times varied over the course of the exercise, with important implications for both staff reallocation and future POD modeling efforts. Overall underutilization of staff revealed the opportunity for greater efficiencies and even higher future throughputs.
A Novel BA Complex Network Model on Color Template Matching
Han, Risheng; Yue, Guangxue; Ding, Hui
2014-01-01
A novel BA complex network model of color space is proposed based on the two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based and SAD-based matching. Experiments show that color template matching performance can be improved with the proposed algorithm. To the best of our knowledge, this is the first study of how to model the color space of images using a proper complex network model and apply that model to template matching. PMID:25243235
A novel BA complex network model on color template matching.
Han, Risheng; Shen, Shigen; Yue, Guangxue; Ding, Hui
2014-01-01
A novel BA complex network model of color space is proposed based on the two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based and SAD-based matching. Experiments show that color template matching performance can be improved with the proposed algorithm. To the best of our knowledge, this is the first study of how to model the color space of images using a proper complex network model and apply that model to template matching.
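The weighting idea in the abstract above, giving the pixels that matter most a larger effect in a sum-of-squared-differences (SSD) match score, can be sketched minimally. The grayscale toy data and uniform weights below are illustrative assumptions, not the authors' BA-network-derived weights:

```python
import numpy as np

def weighted_ssd(image, template, weights):
    """Slide `template` over `image`, returning the weighted SSD map.

    `weights` has the template's shape; larger entries make those pixels
    count more in the score (a stand-in for network-importance weights).
    """
    H, W = image.shape
    h, w = template.shape
    out = np.empty((H - h + 1, W - w + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            diff = image[y:y+h, x:x+w] - template
            out[y, x] = (weights * diff * diff).sum()
    return out

# Toy data: the template is an exact copy of the patch at (2, 3), so the
# score there is exactly 0; uniform weights reduce to plain SSD.
rng = np.random.default_rng(1)
image = rng.random((12, 12))
template = image[2:6, 3:7].copy()
score = weighted_ssd(image, template, np.ones_like(template))
best = np.unravel_index(score.argmin(), score.shape)
print(best)  # -> (2, 3)
```

In the paper's setting, the weight matrix would instead up-weight the color pixels that the BA network analysis flags as important, leaving the sliding-window machinery unchanged.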
Template Synthesis of Nanostructured Polymeric Membranes by Inkjet Printing.
Gao, Peng; Hunter, Aaron; Benavides, Sherwood; Summe, Mark J; Gao, Feng; Phillip, William A
2016-02-10
The fabrication of functional nanomaterials with complex structures is of great scientific and practical interest, but current fabrication and patterning methods are generally costly and laborious. Here, we introduce a versatile, reliable, and rapid method for fabricating nanostructured polymeric materials. The novel method is based on a combination of inkjet printing and template synthesis, and its utility and advantages in the fabrication of polymeric nanomaterials are demonstrated through three examples: the generation of polymeric nanotubes, nanowires, and thin films. Layer-by-layer-assembled nanotubes can be synthesized in a polycarbonate track-etched (PCTE) membrane by printing poly(allylamine hydrochloride) and poly(styrenesulfonate) sequentially. This sequential deposition of polyelectrolyte ink enables control over the surface charge within the nanotubes. By a simple change of the printing conditions, polymeric nanotubes or nanowires were prepared by printing poly(vinyl alcohol) in a PCTE template. In this case, the high-throughput nature of the method enables functional nanomaterials to be generated in under 3 min. Furthermore, we demonstrate that inkjet printing paired with template synthesis can be used to generate patterns comprised of chemically distinct nanomaterials. Thin polymeric films of layer-by-layer-assembled poly(allylamine hydrochloride) and poly(styrenesulfonate) are printed on a PCTE membrane. Track-etched membranes covered with the deposited thin films reject ions and can potentially be utilized as nanofiltration membranes. In demonstrating the fabrication of these different classes of nanostructured materials, the advantages of pairing template synthesis with inkjet printing, which include fast and reliable deposition, judicious use of the deposited materials, and the ability to design chemically patterned surfaces, are highlighted.
AOPs and Biomarkers: Bridging High Throughput Screening and Regulatory Decision Making
As high throughput screening (HTS) plays a larger role in toxicity testing, computational toxicology has emerged as a critical component in interpreting the large volume of data produced. Computational models designed to quantify potential adverse effects based on HTS data will b...
Assessing the applicability of template-based protein docking in the twilight zone.
Negroni, Jacopo; Mosca, Roberto; Aloy, Patrick
2014-09-02
The structural modeling of protein interactions in the absence of close homologous templates is a challenging task. Recently, template-based docking methods have emerged that exploit local structural similarities to help ab initio protocols provide reliable 3D models of protein interactions. In this work, we critically assess the performance of template-based docking in the twilight zone. Our results show that, while it is possible to find templates for nearly all known interactions, the quality of the obtained models is rather limited. We can increase the precision of the models at the expense of coverage, but this drastically reduces the potential applicability of the method, as illustrated by the whole-interactome modeling of nine organisms. Template-based docking is likely to play an important role in the structural characterization of the interaction space, but we still need to improve the repertoire of structural templates onto which we can reliably model protein complexes. Copyright © 2014 Elsevier Ltd. All rights reserved.
Masseroli, Marco; Stella, Andrea; Meani, Natalia; Alcalay, Myriam; Pinciroli, Francesco
2004-12-12
High-throughput technologies create the need to mine large amounts of gene annotations from diverse databanks and to integrate the resulting data. Most databanks can be interrogated only via the Web, for a single gene at a time, and query results are generally available only in HTML format. Although some databanks provide batch retrieval of data via FTP, this requires expertise and resources for locally reimplementing the databank. We developed MyWEST, a tool aimed at researchers without extensive informatics skills or resources, which exploits user-defined templates to easily mine selected annotations from different Web-interfaced databanks, and aggregates and structures the results in an automatically updated database. Using microarray results from a model system of retinoic acid-induced differentiation, MyWEST effectively gathered relevant annotations from various biomolecular databanks, highlighted significant biological characteristics, and supported a global approach to the understanding of complex cellular mechanisms. MyWEST is freely available for non-profit use at http://www.medinfopoli.polimi.it/MyWEST/
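Template-driven mining of the kind described above can be sketched minimally: a user-defined template maps annotation names to patterns applied to the HTML a databank returns. The HTML snippet, field names, and regex patterns below are hypothetical stand-ins, not MyWEST's actual templates:

```python
import re

# Toy stand-in for the HTML returned by a Web-interfaced databank.
html = """
<tr><td>Gene symbol</td><td>RARB</td></tr>
<tr><td>Chromosome</td><td>3p24.2</td></tr>
<tr><td>GO term</td><td>retinoic acid receptor activity</td></tr>
"""

# The user-defined "template": one capture group per annotation of interest.
template = {
    "symbol":     r"<td>Gene symbol</td><td>([^<]+)</td>",
    "chromosome": r"<td>Chromosome</td><td>([^<]+)</td>",
    "go_term":    r"<td>GO term</td><td>([^<]+)</td>",
}

def mine(page: str, tmpl: dict) -> dict:
    """Apply each template pattern to the page; None if a field is absent."""
    out = {}
    for field, pattern in tmpl.items():
        m = re.search(pattern, page)
        out[field] = m.group(1) if m else None
    return out

record = mine(html, template)
print(record["symbol"])  # -> RARB
```

Records mined this way, one per gene per databank, could then be aggregated into a local database and refreshed automatically, which is the workflow the abstract describes.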
SapTrap, a Toolkit for High-Throughput CRISPR/Cas9 Gene Modification in Caenorhabditis elegans.
Schwartz, Matthew L; Jorgensen, Erik M
2016-04-01
In principle, clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 allows genetic tags to be inserted at any locus. However, throughput is limited by the laborious construction of repair templates and guide RNA constructs and by the identification of modified strains. We have developed a reagent toolkit and plasmid assembly pipeline, called "SapTrap," that streamlines the production of targeting vectors for tag insertion, as well as the selection of modified Caenorhabditis elegans strains. SapTrap is a high-efficiency modular plasmid assembly pipeline that produces single plasmid targeting vectors, each of which encodes both a guide RNA transcript and a repair template for a particular tagging event. The plasmid is generated in a single tube by cutting modular components with the restriction enzyme SapI, which are then "trapped" in a fixed order by ligation to generate the targeting vector. A library of donor plasmids supplies a variety of protein tags, a selectable marker, and regulatory sequences that allow cell-specific tagging at either the N or the C termini. All site-specific sequences, such as guide RNA targeting sequences and homology arms, are supplied as annealed synthetic oligonucleotides, eliminating the need for PCR or molecular cloning during plasmid assembly. Each tag includes an embedded Cbr-unc-119 selectable marker that is positioned to allow concurrent expression of both the tag and the marker. We demonstrate that SapTrap targeting vectors direct insertion of 3- to 4-kb tags at six different loci in 10-37% of injected animals. Thus SapTrap vectors introduce the possibility for high-throughput generation of CRISPR/Cas9 genome modifications. Copyright © 2016 by the Genetics Society of America.
Whiter, Richard A.; Boughey, Chess; Smith, Michael
2018-01-01
Nanowires of the ferroelectric co-polymer poly(vinylidenefluoride-co-trifluoroethylene) [P(VDF-TrFE)] are fabricated from solution within nanoporous templates of both "hard" anodic aluminium oxide (AAO) and "soft" polyimide (PI) through a facile and scalable template-wetting process. The confined geometry afforded by the pores of the templates leads directly to highly crystalline P(VDF-TrFE) nanowires in a macroscopic "poled" state that precludes the need for the external electrical poling procedure typically required for piezoelectric performance. The energy-harvesting performance of nanogenerators based on these template-grown nanowires is extensively studied and analyzed in combination with finite element modelling. Both experimental results and computational models probing the role of the templates in determining overall nanogenerator performance, including both material and device efficiencies, are presented. It is found that although P(VDF-TrFE) nanowires grown in PI templates exhibit a lower material efficiency due to lower crystallinity compared to nanowires grown in AAO templates, the overall device efficiency is higher for the PI-template-based nanogenerator because of the lower stiffness of the PI template compared to the AAO template. This work provides a clear framework to assess the energy conversion efficiency of template-grown piezoelectric nanowires and paves the way towards the optimization of template-based nanogenerator devices.
High-Throughput Models for Exposure-Based Chemical Prioritization in the ExpoCast Project
The United States Environmental Protection Agency (U.S. EPA) must characterize potential risks to human health and the environment associated with manufacture and use of thousands of chemicals. High-throughput screening (HTS) for biological activity allows the ToxCast research pr...
Use of High-Throughput Testing and Approaches for Evaluating Chemical Risk-Relevance to Humans
ToxCast is profiling the bioactivity of thousands of chemicals based on high-throughput screening (HTS) and computational models that integrate knowledge of biological systems and in vivo toxicities. Many of these assays probe signaling pathways and cellular processes critical to...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, H.
1999-03-31
The purpose of this research is to develop a multiplexed sample processing system in conjunction with multiplexed capillary electrophoresis for high-throughput DNA sequencing. The concept, from DNA template to called bases, was first demonstrated with a manually operated single-capillary system. Later, an automated microfluidic system with 8 channels based on the same principle was successfully constructed. The instrument automatically processes 8 templates through reaction, purification, denaturation, pre-concentration, injection, separation, and detection in a parallel fashion. A multiplexed freeze/thaw switching principle and a distribution network were implemented to manage flow direction and sample transportation. Dye-labeled terminator cycle-sequencing reactions are performed in an 8-capillary array in a hot-air thermal cycler. Subsequently, the sequencing ladders are directly loaded into a corresponding size-exclusion chromatographic column operated at ~60 C for purification. On-line denaturation and stacking injection for capillary electrophoresis are simultaneously accomplished at a cross assembly set at ~70 C. Not only the separation capillary array but also the reaction capillary array and purification columns can be regenerated after every run. DNA sequencing data from this system allow base calling up to 460 bases with an accuracy of 98%.
Shibata, Kazuhiro; Itoh, Masayoshi; Aizawa, Katsunori; Nagaoka, Sumiharu; Sasaki, Nobuya; Carninci, Piero; Konno, Hideaki; Akiyama, Junichi; Nishi, Katsuo; Kitsunai, Tokuji; Tashiro, Hideo; Itoh, Mari; Sumi, Noriko; Ishii, Yoshiyuki; Nakamura, Shin; Hazama, Makoto; Nishine, Tsutomu; Harada, Akira; Yamamoto, Rintaro; Matsumoto, Hiroyuki; Sakaguchi, Sumito; Ikegami, Takashi; Kashiwagi, Katsuya; Fujiwake, Syuji; Inoue, Kouji; Togawa, Yoshiyuki; Izawa, Masaki; Ohara, Eiji; Watahiki, Masanori; Yoneda, Yuko; Ishikawa, Tomokazu; Ozawa, Kaori; Tanaka, Takumi; Matsuura, Shuji; Kawai, Jun; Okazaki, Yasushi; Muramatsu, Masami; Inoue, Yorinao; Kira, Akira; Hayashizaki, Yoshihide
2000-01-01
The RIKEN high-throughput 384-format sequencing pipeline (RISA system) including a 384-multicapillary sequencer (the so-called RISA sequencer) was developed for the RIKEN mouse encyclopedia project. The RISA system consists of colony picking, template preparation, sequencing reaction, and the sequencing process. A novel high-throughput 384-format capillary sequencer system (RISA sequencer system) was developed for the sequencing process. This system consists of a 384-multicapillary auto sequencer (RISA sequencer), a 384-multicapillary array assembler (CAS), and a 384-multicapillary casting device. The RISA sequencer can simultaneously analyze 384 independent sequencing products. The optical system is a scanning system chosen after careful comparison with an image detection system for the simultaneous detection of the 384-capillary array. This scanning system can be used with any fluorescent-labeled sequencing reaction (chain termination reaction), including transcriptional sequencing based on RNA polymerase, which was originally developed by us, and cycle sequencing based on thermostable DNA polymerase. For long-read sequencing, 380 out of 384 sequences (99.2%) were successfully analyzed and the average read length, with more than 99% accuracy, was 654.4 bp. A single RISA sequencer can analyze 216 kb with >99% accuracy in 2.7 h (90 kb/h). For short-read sequencing to cluster the 3′ end and 5′ end sequencing by reading 350 bp, 384 samples can be analyzed in 1.5 h. We have also developed a RISA inoculator, RISA filtrator and densitometer, RISA plasmid preparator which can handle throughput of 40,000 samples in 17.5 h, and a high-throughput RISA thermal cycler which has four 384-well sites. The combination of these technologies allowed us to construct the RISA system consisting of 16 RISA sequencers, which can process 50,000 DNA samples per day. 
The shotgun sequence of one haploid genome of a higher organism (e.g., human, mouse, rat, domestic animals, or plants) could be obtained by seven RISA systems within one month. PMID:11076861
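A quick sanity check of the throughput figures quoted above (a sketch only; the capillary count, run time, and sequencer count are taken from the abstract, and back-to-back operation around the clock is assumed):

```python
# Back-of-envelope check of the RISA throughput figures quoted in the abstract.
# All input numbers come from the abstract; only the arithmetic is added here.

def samples_per_day(n_sequencers, capillaries_per_run, run_hours):
    """Samples processed per day, assuming back-to-back runs."""
    runs_per_day = 24.0 / run_hours
    return n_sequencers * capillaries_per_run * runs_per_day

# Long-read mode: 384 capillaries per 2.7 h run, 16 sequencers.
daily = samples_per_day(16, 384, 2.7)
print(round(daily))  # 54613 -- consistent with the stated 50,000 samples/day
```

The estimate slightly exceeds the quoted 50,000 samples/day, as expected once setup time between runs is accounted for.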
High-throughput Screening Identification of Poliovirus RNA-dependent RNA Polymerase Inhibitors
Campagnola, Grace; Gong, Peng; Peersen, Olve B.
2011-01-01
Viral RNA-dependent RNA polymerase (RdRP) enzymes are essential for the replication of positive-strand RNA viruses and established targets for the development of selective antiviral therapeutics. In this work we have carried out a high-throughput screen of 154,267 compounds to identify poliovirus polymerase inhibitors using a fluorescence-based RNA elongation assay. Screening and subsequent validation experiments using kinetic methods and RNA product analysis resulted in the identification of seven inhibitors that affect the RNA binding, initiation, or elongation activity of the polymerase. X-ray crystallography data show clear density for five of the compounds in the active site of the poliovirus polymerase elongation complex. The inhibitors occupy the NTP binding site by stacking on the priming nucleotide and interacting with the templating base, yet competition studies show fairly weak IC50 values in the low μM range. A comparison with nucleotide bound structures suggests that weak binding is likely due to the lack of a triphosphate group on the inhibitors. Consequently, the inhibitors are primarily effective at blocking polymerase initiation and do not effectively compete with NTP binding during processive elongation. These findings are discussed in the context of the polymerase elongation complex structure and allosteric control of the viral RdRP catalytic cycle. PMID:21722674
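As a hedged illustration of how IC50 values like those reported above are typically extracted from a dose-response screen (this is not the authors' actual analysis; the single-site inhibition model and the log-linear interpolation are assumptions):

```python
import numpy as np

def ic50_from_dose_response(conc_uM, activity):
    """Estimate IC50 by log-linear interpolation at 50% residual activity.
    Assumes activity decreases monotonically with inhibitor concentration."""
    logc = np.log10(conc_uM)
    for i in range(len(activity) - 1):
        if activity[i] >= 0.5 >= activity[i + 1]:
            frac = (activity[i] - 0.5) / (activity[i] - activity[i + 1])
            return 10 ** (logc[i] + frac * (logc[i + 1] - logc[i]))
    raise ValueError("50% inhibition not bracketed by the data")

# Synthetic single-site inhibition curve with a true IC50 of 5 uM
conc = np.array([0.1, 0.5, 1, 5, 25, 125, 625.0])
act = 1.0 / (1.0 + conc / 5.0)
print(ic50_from_dose_response(conc, act))  # recovers ~5 uM
```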
Barrow, James C; Stauffer, Shaun R; Rittle, Kenneth E; Ngo, Phung L; Yang, ZhiQiang; Selnick, Harold G; Graham, Samuel L; Munshi, Sanjeev; McGaughey, Georgia B; Holloway, M Katharine; Simon, Adam J; Price, Eric A; Sankaranarayanan, Sethu; Colussi, Dennis; Tugusheva, Katherine; Lai, Ming-Tain; Espeseth, Amy S; Xu, Min; Huang, Qian; Wolfe, Abigail; Pietrak, Beth; Zuck, Paul; Levorse, Dorothy A; Hazuda, Daria; Vacca, Joseph P
2008-10-23
A high-throughput screen at 100 microM inhibitor concentration for the BACE-1 enzyme revealed a novel spiropiperidine iminohydantoin aspartyl protease inhibitor template. An X-ray cocrystal structure with BACE-1 revealed a novel mode of binding whereby the inhibitor interacts with the catalytic aspartates via bridging water molecules. Using the crystal structure as a guide, potent compounds with good brain penetration were designed.
One use of alternative methods is to target animal use at only those chemicals and tests that are absolutely necessary. We discuss prioritization of testing based on high-throughput screening assays (HTS), QSAR modeling, high-throughput toxicokinetics (HTTK), and exposure modelin...
HPC AND GRID COMPUTING FOR INTEGRATIVE BIOMEDICAL RESEARCH
Kurc, Tahsin; Hastings, Shannon; Kumar, Vijay; Langella, Stephen; Sharma, Ashish; Pan, Tony; Oster, Scott; Ervin, David; Permar, Justin; Narayanan, Sivaramakrishnan; Gil, Yolanda; Deelman, Ewa; Hall, Mary; Saltz, Joel
2010-01-01
Integrative biomedical research projects query, analyze, and integrate many different data types and make use of datasets obtained from measurements or simulations of structure and function at multiple biological scales. With the increasing availability of high-throughput and high-resolution instruments, integrative biomedical research imposes many challenging requirements on software middleware systems. In this paper, we look at some of these requirements using example research pattern templates. We then discuss how middleware systems, which incorporate Grid and high-performance computing, could be employed to address the requirements. PMID:20107625
Rizvi, Imran; Moon, Sangjun; Hasan, Tayyaba; Demirci, Utkan
2013-01-01
In vitro 3D cancer models that provide a more accurate representation of disease in vivo are urgently needed to improve our understanding of cancer pathology and to develop better cancer therapies. However, development of 3D models based on manual ejection of cells from micropipettes suffers from inherent limitations such as poor control over cell density, limited repeatability, low throughput, and, in the case of coculture models, lack of reproducible control over the spatial distance between cell types (e.g., cancer and stromal cells). In this study, we build on a recently introduced 3D model in which human ovarian cancer (OVCAR-5) cells overlaid on Matrigel™ spontaneously form multicellular acini. We introduce a high-throughput automated cell printing system to bioprint a 3D coculture model using cancer cells and normal fibroblasts micropatterned on Matrigel™. Two cell types were patterned within a spatially controlled microenvironment (e.g., cell density, cell-cell distance) in a high-throughput and reproducible manner; both cell types remained viable during printing and continued to proliferate following patterning. This approach enables the miniaturization of an established macro-scale 3D culture model and would allow systematic investigation into the multiple unknown regulatory feedback mechanisms between tumor and stromal cells and provide a tool for high-throughput drug screening. PMID:21298805
[Synthesis of hollow titania microspheres by using microfluidic droplet-template].
Ma, Jingyun; Jiang, Lei; Qin, Jianhu
2011-09-01
Droplet-based microfluidics is of great interest due to its advantages over conventional methods, such as reduced reagent consumption, rapid mixing, high throughput, and shape control. A novel method using microfluidic droplets as soft templates for the synthesis of hollow titania microspheres was developed. A typical polydimethylsiloxane (PDMS) microfluidic device containing a "flow-focusing" geometry was used to generate water/oil (W/O) droplets. The mechanism of hollow structure formation was based on the interfacial hydrolysis reaction between the continuous phase containing the titanium butoxide precursor and the dispersed phase containing water. The continuous phase, mixed with butanol, was added downstream of the channel after the hydrolysis reaction; this step drew water out of the microgels for further hydrolysis. The microgels, collected through an integrated glass pipe, were washed, dried under vacuum, and calcined after aging for a certain time. Fluorescence and scanning electron microscopy (SEM) images of the microspheres revealed the hollow structure and the thickness of the shell. Microspheres with a thin shell (about 2 μm) were apt to rupture and collapse; droplet-based microfluidics offered a gentle and size-controllable way to mitigate this problem. Moreover, the method has potential applications in photocatalysis, combined with modifications realized on the chip simultaneously.
Droplet-based pyrosequencing using digital microfluidics.
Boles, Deborah J; Benton, Jonathan L; Siew, Germaine J; Levy, Miriam H; Thwar, Prasanna K; Sandahl, Melissa A; Rouse, Jeremy L; Perkins, Lisa C; Sudarsan, Arjun P; Jalili, Roxana; Pamula, Vamsee K; Srinivasan, Vijay; Fair, Richard B; Griffin, Peter B; Eckhardt, Allen E; Pollack, Michael G
2011-11-15
The feasibility of implementing pyrosequencing chemistry within droplets using electrowetting-based digital microfluidics is reported. An array of electrodes patterned on a printed-circuit board was used to control the formation, transportation, merging, mixing, and splitting of submicroliter-sized droplets contained within an oil-filled chamber. A three-enzyme pyrosequencing protocol was implemented in which individual droplets contained enzymes, deoxyribonucleotide triphosphates (dNTPs), and DNA templates. The DNA templates were anchored to magnetic beads which enabled them to be thoroughly washed between nucleotide additions. Reagents and protocols were optimized to maximize signal over background, linearity of response, cycle efficiency, and wash efficiency. As an initial demonstration of feasibility, a portion of a 229 bp Candida parapsilosis template was sequenced using both a de novo protocol and a resequencing protocol. The resequencing protocol generated over 60 bp of sequence with 100% sequence accuracy based on raw pyrogram levels. Excellent linearity was observed for all of the homopolymers (two, three, or four nucleotides) contained in the C. parapsilosis sequence. With improvements in microfluidic design it is expected that longer reads, higher throughput, and improved process integration (i.e., "sample-to-sequence" capability) could eventually be achieved using this low-cost platform.
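Pyrogram interpretation relies on the light signal of each nucleotide flow scaling linearly with homopolymer length, which is why homopolymer linearity is emphasized above. A minimal sketch of that base-calling logic (illustrative only; the flow order, signal values, and unit-intensity normalization below are assumptions, not details from the paper):

```python
def call_bases_from_pyrogram(flow_order, intensities, unit=1.0):
    """Naive pyrogram base calling: each flow's signal, divided by the
    single-incorporation unit intensity, gives the homopolymer length."""
    seq = []
    for base, signal in zip(flow_order, intensities):
        n = int(round(signal / unit))
        seq.append(base * n)
    return "".join(seq)

# Cyclic flow order T-A-C-G; signals near 2x or 3x indicate homopolymers.
flows = "TACGTACG"
signals = [1.02, 0.0, 2.05, 0.98, 0.0, 3.01, 0.0, 1.0]
print(call_bases_from_pyrogram(flows, signals))  # TCCGAAAG
```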
Automated antibody structure prediction using Accelrys tools: Results and best practices
Fasnacht, Marc; Butenhof, Ken; Goupil-Lamy, Anne; Hernandez-Guzman, Francisco; Huang, Hongwei; Yan, Lisa
2014-01-01
We describe the methodology and results from our participation in the second Antibody Modeling Assessment experiment. During the experiment we predicted the structure of eleven unpublished antibody Fv fragments. Our prediction methods centered on template-based modeling; potential templates were selected from an antibody database based on their sequence similarity to the target in the framework regions. Depending on the quality of the templates, we constructed models of the antibody framework regions using either a single, chimeric, or multiple template approach. The hypervariable loop regions in the initial models were rebuilt by grafting the corresponding regions from suitable templates onto the model. For the H3 loop region, we further refined models using ab initio methods. The final models were subjected to constrained energy minimization to resolve severe local structural problems. Analysis of the submitted models shows that Accelrys tools allow for the construction of quite accurate models for the framework and the canonical CDR regions, with RMSDs to the X-ray structure on average below 1 Å for most of these regions. The results show that accurate prediction of the H3 hypervariable loops remains a challenge. Furthermore, model quality assessment of the submitted models shows that the models are of quite high quality, with local geometry assessment scores similar to those of the target X-ray structures. Proteins 2014; 82:1583–1598. © 2014 The Authors. Proteins published by Wiley Periodicals, Inc. PMID:24833271
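The sub-1 Å figures above refer to coordinate RMSD between the model and the X-ray structure. A minimal sketch of that metric (it assumes the two structures are already optimally superposed; the Kabsch alignment step that precedes a real RMSD calculation is omitted):

```python
import numpy as np

def rmsd(model, xray):
    """Root-mean-square deviation between matched coordinate sets,
    assuming the structures have already been superposed."""
    model, xray = np.asarray(model, float), np.asarray(xray, float)
    return float(np.sqrt(np.mean(np.sum((model - xray) ** 2, axis=1))))

# Two 3-atom toy "structures" offset by 1 A along x:
print(rmsd([[0, 0, 0], [1, 0, 0], [2, 0, 0]],
           [[1, 0, 0], [2, 0, 0], [3, 0, 0]]))  # 1.0
```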
Short template switch events explain mutation clusters in the human genome.
Löytynoja, Ari; Goldman, Nick
2017-06-01
Resequencing efforts are uncovering the extent of genetic variation in humans and provide data to study the evolutionary processes shaping our genome. One recurring puzzle in both intra- and inter-species studies is the high frequency of complex mutations comprising multiple nearby base substitutions or insertion-deletions. We devised a generalized mutation model of template switching during replication that extends existing models of genome rearrangement and used this to study the role of template switch events in the origin of short mutation clusters. Applied to the human genome, our model detects thousands of template switch events during the evolution of human and chimp from their common ancestor and hundreds of events between two independently sequenced human genomes. Although many of these are consistent with a template switch mechanism previously proposed for bacteria, our model also identifies new types of mutations that create short inversions, some flanked by paired inverted repeats. The local template switch process can create numerous complex mutation patterns, including hairpin loop structures, and explains multinucleotide mutations and compensatory substitutions without invoking positive selection, speculative mechanisms, or implausible coincidence. Clustered sequence differences are challenging for current mapping and variant calling methods, and we show that many erroneous variant annotations exist in human reference data. Local template switch events may have been neglected as an explanation for complex mutations because of biases in commonly used analyses. Incorporation of our model into reference-based analysis pipelines and comparisons of de novo assembled genomes will lead to improved understanding of genome variation and evolution. © 2017 Löytynoja and Goldman; Published by Cold Spring Harbor Laboratory Press.
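A hedged sketch of the kind of signature such a model looks for: a cluster of nearby apparent substitutions that is exactly explained by copying the opposite strand, i.e., a short inversion (illustrative only; the authors' generalized template-switch model is considerably more elaborate than this reverse-complement check):

```python
def revcomp(s):
    """Reverse complement of a DNA string."""
    return s.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def looks_like_template_switch(ref_segment, alt_segment):
    """A run of apparent substitutions that exactly matches the reverse
    complement of the reference segment is consistent with a short
    inversion created by switching to the opposite strand during
    replication."""
    return alt_segment == revcomp(ref_segment)

ref = "ACCGTTA"
alt = revcomp(ref)            # "TAACGGT": every position differs from ref
print(looks_like_template_switch(ref, alt))  # True
```

Naive variant calling would report such a segment as a dense cluster of independent point mutations, which is the mis-annotation the abstract warns about.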
Kumar, Avishek; Campitelli, Paul; Thorpe, M F; Ozkan, S Banu
2015-12-01
The most successful protein structure prediction methods to date have been template-based modeling (TBM) or homology modeling, which predicts protein structure based on experimental structures. These high-accuracy predictions sometimes retain structural errors due to incorrect templates, or due to a lack of accurate templates in the case of low sequence similarity, making these structures inadequate for drug-design studies or molecular dynamics simulations. We have developed a new physics-based approach to the protein refinement problem by mimicking the mechanism of chaperones that rehabilitate misfolded proteins. The template structure is unfolded by selectively (targeted) pulling on different portions of the protein using the geometry-based technique FRODA, and then refolded using hierarchically restrained replica exchange molecular dynamics simulations (hr-REMD). FRODA unfolding is used to create a diverse set of topologies for surveying near native-like structures from a template and to provide a set of persistent contacts to be employed during re-folding. We have tested our approach on 13 previous CASP targets and observed that this method of folding an ensemble of partially unfolded structures, through the hierarchical addition of contact restraints (that is, first local and then nonlocal interactions), leads to a refolding of the structure along with refinement in most cases (12/13). Although this approach yields refined models through advancement in sampling, the task of blind selection of the best refined models still needs to be solved. Overall, the method can be useful for improved sampling of low-resolution models where certain portions of the structure are incorrectly modeled. © 2015 Wiley Periodicals, Inc.
SPIM-fluid: open source light-sheet based platform for high-throughput imaging
Gualda, Emilio J.; Pereira, Hugo; Vale, Tiago; Estrada, Marta Falcão; Brito, Catarina; Moreno, Nuno
2015-01-01
Light sheet fluorescence microscopy has recently emerged as the technique of choice for obtaining high-quality 3D images of whole organisms/embryos with low photodamage and fast acquisition rates. Here we present an open source unified implementation based on Arduino and Micromanager, which is capable of operating light sheet microscopes for automated 3D high-throughput imaging of three-dimensional cell cultures and model organisms like zebrafish, oriented to massive drug screening. PMID:26601007
Ko, Junsu; Park, Hahnbeom; Seok, Chaok
2012-08-10
Protein structures can be reliably predicted by template-based modeling (TBM) when experimental structures of homologous proteins are available. However, it is challenging to obtain structures more accurate than the single best templates by either combining information from multiple templates or by modeling regions that vary among templates or are not covered by any templates. We introduce GalaxyTBM, a new TBM method in which the more reliable core region is modeled first from multiple templates and less reliable, variable local regions, such as loops or termini, are then detected and re-modeled by an ab initio method. This TBM method is based on "Seok-server," which was tested in CASP9 and assessed to be amongst the top TBM servers. The accuracy of the initial core modeling is enhanced by focusing on more conserved regions in the multiple-template selection and multiple sequence alignment stages. Additional improvement is achieved by ab initio modeling of up to 3 unreliable local regions in the fixed framework of the core structure. Overall, GalaxyTBM reproduced the performance of Seok-server, with GalaxyTBM and Seok-server resulting in average GDT-TS of 68.1 and 68.4, respectively, when tested on 68 single-domain CASP9 TBM targets. For application to multi-domain proteins, GalaxyTBM must be combined with domain-splitting methods. Application of GalaxyTBM to CASP9 targets demonstrates that accurate protein structure prediction is possible by use of a multiple-template-based approach, and ab initio modeling of variable regions can further enhance the model quality.
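The GDT-TS scores quoted above average, over four distance cutoffs, the percentage of residues modeled close to the experimental structure. A simplified sketch of the metric (the real score maximizes each fraction over rigid-body superpositions; that search is omitted here):

```python
import numpy as np

def gdt_ts(model, native):
    """Simplified GDT-TS: mean, over 1/2/4/8 A cutoffs, of the fraction
    of residues whose model-native distance is within the cutoff, scaled
    to 0-100. Assumes the structures are pre-superposed."""
    d = np.linalg.norm(np.asarray(model, float) - np.asarray(native, float), axis=1)
    return 100.0 * np.mean([(d <= c).mean() for c in (1.0, 2.0, 4.0, 8.0)])

# Four toy residues at 0.5, 1.5, 3, and 9 A from their native positions:
model = [[0.5, 0, 0], [1.5, 0, 0], [3, 0, 0], [9, 0, 0]]
native = [[0, 0, 0]] * 4
print(gdt_ts(model, native))  # 56.25
```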
Condor-COPASI: high-throughput computing for biochemical networks
2012-01-01
Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage. PMID:22834945
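A minimal sketch of the transparent task splitting described above (generic round-robin chunking; the function name and splitting scheme are illustrative assumptions, not Condor-COPASI's actual implementation):

```python
def split_scan(values, n_jobs):
    """Split a parameter scan into near-equal chunks, one per batch job.
    Round-robin assignment keeps chunk sizes within one of each other."""
    chunks = [[] for _ in range(n_jobs)]
    for i, v in enumerate(values):
        chunks[i % n_jobs].append(v)
    return [c for c in chunks if c]  # drop empty chunks

jobs = split_scan(list(range(10)), 4)
print(jobs)  # [[0, 4, 8], [1, 5, 9], [2, 6], [3, 7]]
```

Each chunk would then be submitted as an independent job to the Condor pool, and the per-chunk results concatenated afterwards.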
Real-Time Tracking by Double Templates Matching Based on Timed Motion History Image with HSV Feature
Li, Zhiyong; Li, Pengfei; Yu, Xiaoping; Hashem, Mervat
2014-01-01
It is a challenge to represent the target appearance model for moving object tracking in complex environments. This study presents a novel method with an appearance model described by double templates based on the timed motion history image with HSV color histogram feature (tMHI-HSV). The main components include offline and online template initialization, calculation of tMHI-HSV-based candidate patch feature histograms, double-template matching (DTM) for object location, and template updating. Firstly, we initialize the target object region and calculate its HSV color histogram feature as the offline template and the online template. Secondly, the tMHI-HSV is used to segment the motion region and calculate the candidate object patches' color histograms to represent their appearance models. Finally, we utilize the DTM method to track the target and update the offline and online templates in real time. The experimental results show that the proposed method can efficiently handle the scale variation and pose change of rigid and nonrigid objects, even under illumination change and occlusion. PMID:24592185
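A hedged sketch of the double-template idea: score each candidate patch against both a fixed offline template histogram and a continuously updated online one (the Bhattacharyya similarity and the blending weight below are illustrative assumptions, not necessarily the paper's exact formulation):

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Similarity of two histograms after normalization (1.0 = identical)."""
    h1 = np.asarray(h1, float); h1 = h1 / h1.sum()
    h2 = np.asarray(h2, float); h2 = h2 / h2.sum()
    return float(np.sum(np.sqrt(h1 * h2)))

def dtm_score(candidate_hist, offline_hist, online_hist, alpha=0.5):
    """Double-template matching sketch: blend similarity to the fixed
    offline template with similarity to the updated online template."""
    return (alpha * bhattacharyya(candidate_hist, offline_hist)
            + (1 - alpha) * bhattacharyya(candidate_hist, online_hist))

print(dtm_score([1, 2, 1], [1, 2, 1], [1, 2, 1]))  # 1.0
```

The candidate patch with the highest blended score would be taken as the new object location, after which the online template histogram is refreshed.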
A Memory Efficient Network Encryption Scheme
El-Fotouh, Mohamed Abo; Diepold, Klaus
In this paper, we studied two widely used encryption schemes in network applications. Both schemes have shortcomings: they consume either more memory to achieve high throughput, or less memory at the cost of low throughput. As the number of internet users increases each day, the need has arisen for a scheme that has low memory requirements and at the same time possesses high speed. We used the SSM model [1] to construct an encryption scheme based on the AES. The proposed scheme possesses high throughput together with low memory requirements.
RaptorX server: a resource for template-based protein structure modeling.
Källberg, Morten; Margaryan, Gohar; Wang, Sheng; Ma, Jianzhu; Xu, Jinbo
2014-01-01
Assigning functional properties to a newly discovered protein is a key challenge in modern biology. To this end, computational modeling of the three-dimensional atomic arrangement of the amino acid chain is often crucial in determining the role of the protein in biological processes. We present a community-wide web-based protocol, RaptorX server (http://raptorx.uchicago.edu), for automated protein secondary structure prediction, template-based tertiary structure modeling, and probabilistic alignment sampling. Given a target sequence, RaptorX server is able to detect even remotely related template sequences by means of a novel nonlinear context-specific alignment potential and probabilistic consistency algorithm. Using the protocol presented here it is thus possible to obtain high-quality structural models for many target protein sequences when only distantly related protein domains have experimentally solved structures. At present, RaptorX server can perform secondary and tertiary structure prediction of a 200 amino acid target sequence in approximately 30 min.
Chan, Kamfai; Wong, Pui-Yan; Parikh, Chaitanya; Wong, Season
2018-03-15
Traditionally, the majority of nucleic acid amplification-based molecular diagnostic tests are done in centralized settings. In recent years, point-of-care tests have been developed for use in low-resource settings away from central laboratories. While most experts agree that point-of-care molecular tests are greatly needed, their availability as cost-effective and easy-to-operate tests remains an unmet goal. In this article, we discuss our efforts to develop a recombinase polymerase amplification reaction-based test that will meet these criteria. First, we describe our efforts in repurposing a low-cost 3D printer as a platform that can carry out medium-throughput, rapid, and high-performing nucleic acid extraction. Next, we address how these purified templates can be rapidly amplified and analyzed using the 3D printer's heated bed or the deconstructed, low-cost thermal cycler we have developed. In both approaches, real-time isothermal amplification and detection of template DNA or RNA can be accomplished using a low-cost portable detector or smartphone camera. Last, we demonstrate the capability of our technologies using foodborne pathogens and the Zika virus. Our low-cost approach does not employ complicated and high-cost components, making it suitable for resource-limited settings. When integrated and commercialized, it will offer simple sample-to-answer molecular diagnostics. Copyright © 2018 Elsevier Inc. All rights reserved.
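Real-time isothermal amplification, as described above, is typically read out as a time-to-positive: the moment the fluorescence signal crosses a detection threshold. A minimal sketch of that readout (the threshold choice, sampling interval, and linear interpolation are assumptions for illustration):

```python
def time_to_positive(times_min, fluorescence, threshold):
    """Return the first time the signal crosses the threshold, using
    linear interpolation between readings, or None if it never does."""
    for i in range(1, len(fluorescence)):
        if fluorescence[i - 1] < threshold <= fluorescence[i]:
            frac = ((threshold - fluorescence[i - 1])
                    / (fluorescence[i] - fluorescence[i - 1]))
            return times_min[i - 1] + frac * (times_min[i] - times_min[i - 1])
    return None

t = [0, 2, 4, 6, 8, 10]                 # minutes
f = [50, 52, 60, 120, 400, 800]         # arbitrary fluorescence units
print(time_to_positive(t, f, 100))      # ~5.33 min
```

In a low-cost detector or smartphone-camera setup, `fluorescence` would simply be the mean pixel intensity of the reaction tube over time.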
Zhang, Xiaoyan; Kim, Daeseung; Shen, Shunyao; Yuan, Peng; Liu, Siting; Tang, Zhen; Zhang, Guangming; Zhou, Xiaobo; Gateno, Jaime
2017-01-01
Accurate surgical planning and prediction of craniomaxillofacial surgery outcome requires simulation of soft tissue changes following osteotomy. This can only be achieved by using an anatomically detailed facial soft tissue model. The current state-of-the-art of model generation is not appropriate for clinical applications due to the time-intensive nature of manual segmentation and volumetric mesh generation. The conventional patient-specific finite element (FE) mesh generation methods deform a template FE mesh to match the shape of a patient based on registration. However, these methods commonly produce element distortion. Additionally, the mesh density for patients depends on that of the template model and cannot be adjusted to conduct mesh density sensitivity analysis. In this study, we propose a new framework of patient-specific facial soft tissue FE mesh generation. The goal of the developed method is to efficiently generate a high-quality patient-specific hexahedral FE mesh with adjustable mesh density while preserving the accuracy in anatomical structure correspondence. Our FE mesh is generated by eFace template deformation followed by volumetric parametrization. First, the patient-specific anatomically detailed facial soft tissue model (including skin, mucosa, and muscles) is generated by deforming an eFace template model. The adaptation of the eFace template model is achieved by using a hybrid landmark-based morphing and dense surface fitting approach followed by a thin-plate spline interpolation. Then, a high-quality hexahedral mesh is constructed by using volumetric parameterization. The user can control the resolution of the hexahedron mesh to best reflect clinicians' need. Our approach was validated using 30 patient models and 4 visible human datasets. The generated patient-specific FE mesh showed high surface matching accuracy, element quality, and internal structure matching accuracy.
They can be directly and effectively used for clinical simulation of facial soft tissue change. PMID:29027022
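The thin-plate spline interpolation step mentioned above has a compact closed form. A hedged 2-D scalar-field sketch (the actual pipeline warps 3-D surface meshes; this only illustrates the interpolation machinery, and the landmark data are made up):

```python
import numpy as np

def _tps_kernel(d):
    # U(r) = r^2 log r, with U(0) = 0
    safe = np.where(d > 0, d, 1.0)
    return np.where(d > 0, d * d * np.log(safe), 0.0)

def tps_fit(points, values):
    """Solve for the thin-plate spline interpolating scalar `values` at
    2-D `points` via the standard [[K P], [P^T 0]] linear system."""
    p = np.asarray(points, float)
    n = len(p)
    K = _tps_kernel(np.linalg.norm(p[:, None] - p[None, :], axis=2))
    P = np.hstack([np.ones((n, 1)), p])
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.concatenate([np.asarray(values, float), np.zeros(3)])
    return p, np.linalg.solve(A, b)

def tps_eval(model, q):
    """Evaluate a fitted spline at a query point q."""
    p, coef = model
    q = np.asarray(q, float)
    w, a = coef[:len(p)], coef[len(p):]
    U = _tps_kernel(np.linalg.norm(q - p, axis=1))
    return float(U @ w + a[0] + a[1] * q[0] + a[2] * q[1])

# Four landmarks carrying the affine field x + 2y; the spline reproduces
# it exactly, since the bending weights vanish for affine data.
model = tps_fit([[0, 0], [1, 0], [0, 1], [1, 1]], [0, 1, 2, 3])
print(tps_eval(model, [0.5, 0.5]))  # ~1.5
```

In the mesh-generation setting, the interpolated "values" are the x, y, and z displacements of the template surface at corresponding landmarks, fitted as three separate splines.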
Greenough, Lucia; Schermerhorn, Kelly M.; Mazzola, Laurie; Bybee, Joanna; Rivizzigno, Danielle; Cantin, Elizabeth; Slatko, Barton E.; Gardner, Andrew F.
2016-01-01
Detailed biochemical characterization of nucleic acid enzymes is fundamental to understanding nucleic acid metabolism, genome replication and repair. We report the development of a rapid, high-throughput fluorescence capillary gel electrophoresis method as an alternative to traditional polyacrylamide gel electrophoresis to characterize nucleic acid metabolic enzymes. The principles of assay design described here can be applied to nearly any enzyme system that acts on a fluorescently labeled oligonucleotide substrate. Herein, we describe several assays using this core capillary gel electrophoresis methodology to accelerate study of nucleic acid enzymes. First, assays were designed to examine DNA polymerase activities including nucleotide incorporation kinetics, strand displacement synthesis and 3′-5′ exonuclease activity. Next, DNA repair activities of DNA ligase, flap endonuclease and RNase H2 were monitored. In addition, a multicolor assay that uses four different fluorescently labeled substrates in a single reaction was implemented to characterize GAN nuclease specificity. Finally, a dual-color fluorescence assay to monitor coupled enzyme reactions during Okazaki fragment maturation is described. These assays serve as a template to guide further technical development for enzyme characterization or nucleoside and non-nucleoside inhibitor screening in a high-throughput manner. PMID:26365239
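Nucleotide incorporation kinetics of the kind measured above are commonly summarized by Michaelis-Menten parameters. A hedged sketch of one simple estimation route (Hanes-Woolf linearization of noise-free synthetic data; this is not the authors' analysis, and real data would call for a nonlinear fit):

```python
import numpy as np

def fit_michaelis_menten(s, v):
    """Estimate Vmax and Km from rate data via the Hanes-Woolf
    linearization: s/v = s/Vmax + Km/Vmax, fitted by linear regression."""
    s = np.asarray(s, float)
    v = np.asarray(v, float)
    slope, intercept = np.polyfit(s, s / v, 1)
    vmax = 1.0 / slope
    km = intercept * vmax
    return vmax, km

# Noise-free synthetic data with Vmax = 10 and Km = 2 (units arbitrary)
s = np.array([0.5, 1, 2, 4, 8, 16.0])
v = 10.0 * s / (2.0 + s)
print(fit_michaelis_menten(s, v))  # ~(10.0, 2.0)
```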
High-Throughput Physiologically Based Toxicokinetic Models for ToxCast Chemicals
Physiologically based toxicokinetic (PBTK) models aid in predicting exposure doses needed to create tissue concentrations equivalent to those identified as bioactive by ToxCast. We have implemented four empirical and physiologically-based toxicokinetic (TK) models within a new R ...
Computational Design of DNA-Binding Proteins.
Thyme, Summer; Song, Yifan
2016-01-01
Predicting the outcome of engineered and naturally occurring sequence perturbations to protein-DNA interfaces requires accurate computational modeling technologies. It has been well established that computational design to accommodate small numbers of DNA target site substitutions is possible. This chapter details the basic method of design used in the Rosetta macromolecular modeling program that has been successfully used to modulate the specificity of DNA-binding proteins. More recently, combining computational design and directed evolution has become a common approach for increasing the success rate of protein engineering projects. The power of such high-throughput screening depends on computational methods producing multiple potential solutions. Therefore, this chapter describes several protocols for increasing the diversity of designed output. Lastly, we describe an approach for building comparative models of protein-DNA complexes in order to utilize information from homologous sequences. These models can be used to explore how nature modulates specificity of protein-DNA interfaces and potentially can even be used as starting templates for further engineering.
Toxicokinetic and Dosimetry Modeling Tools for Exposure ...
New technologies and in vitro testing approaches have been valuable additions to risk assessments that have historically relied solely on in vivo test results. Compared to in vivo methods, in vitro high throughput screening (HTS) assays are less expensive, faster and can provide mechanistic insights on chemical action. However, extrapolating from in vitro chemical concentrations to target tissue or blood concentrations in vivo is fraught with uncertainties, and modeling is dependent upon pharmacokinetic variables not measured in in vitro assays. To address this need, new tools have been created for characterizing, simulating, and evaluating chemical toxicokinetics. Physiologically-based pharmacokinetic (PBPK) models provide estimates of chemical exposures that produce potentially hazardous tissue concentrations, while tissue microdosimetry PK models relate whole-body chemical exposures to cell-scale concentrations. These tools rely on high-throughput in vitro measurements, and successful methods exist for pharmaceutical compounds that determine PK from limited in vitro measurements and chemical structure-derived property predictions. These high throughput (HT) methods provide a more rapid and less resource–intensive alternative to traditional PK model development. We have augmented these in vitro data with chemical structure-based descriptors and mechanistic tissue partitioning models to construct HTPBPK models for over three hundred environmental and pharmace
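The reverse-dosimetry arithmetic that underlies such HT-PK tools can be sketched in a few lines. This is a deliberately minimal steady-state model under assumed restrictive (unbound-fraction-limited) clearance, not the actual HTPBPK implementation described above; all function names, parameters, and values are illustrative:

```python
def steady_state_css(dose_mg_per_kg_day, bw_kg, fub, clint_l_per_day, gfr_l_per_day):
    """Plasma concentration (mg/L) at steady state for continuous oral dosing.
    Assumes complete absorption and restrictive hepatic + renal clearance
    acting on the unbound fraction (fub) -- a simplified sketch."""
    dose_rate = dose_mg_per_kg_day * bw_kg               # mg/day entering the body
    clearance = fub * (clint_l_per_day + gfr_l_per_day)  # L/day of plasma cleared
    return dose_rate / clearance

def oral_equivalent_dose(bioactive_conc_mg_per_l, bw_kg, fub, clint_l_per_day, gfr_l_per_day):
    """Dose rate (mg/kg/day) whose steady-state plasma concentration equals an
    in vitro bioactive concentration (reverse dosimetry)."""
    css_per_unit_dose = steady_state_css(1.0, bw_kg, fub, clint_l_per_day, gfr_l_per_day)
    return bioactive_conc_mg_per_l / css_per_unit_dose
```

Because the model is linear in dose, the oral equivalent dose scales proportionally with the bioactive concentration, which is what makes the approach tractable at high throughput.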
Day, Ryan; Joo, Hyun; Chavan, Archana; Lennox, Kristin P.; Chen, Ann; Dahl, David B.; Vannucci, Marina; Tsai, Jerry W.
2012-01-01
As an alternative to common template-based protein structure prediction methods built on main-chain position, a novel side-chain-centric approach has been developed. Together with a Bayesian loop modeling procedure and a combination scoring function, the Stone Soup algorithm was applied to the CASP9 set of template-based modeling targets. Although the method did not generate perturbations of the template structures as large as necessary, analysis of the results gives unique insights into the differences in packing between the target structures and their templates. Considerable variation in packing is found between target and template structures even when the structures are close, and this variation is due to 2- and 3-body packing interactions. Outside the inherent restrictions in the packing representation of the PDB, the first steps in correctly defining those regions of variable packing have been mapped primarily to local interactions, as packing at the secondary and tertiary structure levels is largely conserved. Of the scoring functions used, a loop scoring function based on water structure exhibited some promise for discrimination. These results present a clear structural path for further development of a side-chain-centered approach to template-based modeling. PMID:23266765
High Throughput PBTK: Open-Source Data and Tools for ...
Presentation on High Throughput PBTK at the PBK Modelling in Risk Assessment meeting in Ispra, Italy.
Developing High-Throughput HIV Incidence Assay with Pyrosequencing Platform
Park, Sung Yong; Goeken, Nolan; Lee, Hyo Jin; Bolan, Robert; Dubé, Michael P.
2014-01-01
Human immunodeficiency virus (HIV) incidence is an important measure for monitoring the epidemic and evaluating the efficacy of intervention and prevention trials. This study developed a high-throughput, single-measure incidence assay by implementing a pyrosequencing platform. We devised a signal-masking bioinformatics pipeline, which yielded a process error rate of 5.8 × 10−4 per base. The pipeline was then applied to analyze 18,434 envelope gene segments (HXB2 7212 to 7601) obtained from 12 incident and 24 chronic patients who had documented HIV-negative and/or -positive tests. The pyrosequencing data were cross-checked by using the single-genome-amplification (SGA) method to independently obtain 302 sequences from 13 patients. Using two genomic biomarkers that probe for the presence of similar sequences, the pyrosequencing platform correctly classified all 12 incident subjects (100% sensitivity) and 23 of 24 chronic subjects (96% specificity). One misclassified subject's chronic infection was correctly classified by conducting the same analysis with SGA data. The biomarkers were statistically associated across the two platforms, suggesting the assay's reproducibility and robustness. Sampling simulations showed that the biomarkers were tolerant of sequencing errors and template resampling, two factors most likely to affect the accuracy of pyrosequencing results. We observed comparable biomarker scores between AIDS and non-AIDS chronic patients (multivariate analysis of variance [MANOVA], P = 0.12), indicating that the stage of HIV disease itself does not affect the classification scheme. This high-throughput genomic HIV incidence assay marks a significant step toward determining incidence from a single measure in cross-sectional surveys. IMPORTANCE Annual HIV incidence, the number of newly infected individuals within a year, is the key measure for monitoring the epidemic's rise and decline.
Developing reliable assays differentiating recent from chronic infections has been a long-standing quest in the HIV community. Over the past 15 years, these assays have traditionally measured various HIV-specific antibodies, but recent technological advancements have expanded the diversity of proposed accurate, user-friendly, and financially viable tools. Here we designed a high-throughput genomic HIV incidence assay based on the signature imprinted in the HIV gene sequence population. By combining next-generation sequencing techniques with bioinformatics analysis, we demonstrated that genomic fingerprints are capable of distinguishing recently infected patients from chronically infected patients with high precision. Our high-throughput platform is expected to allow us to process many patients' samples from a single experiment, permitting the assay to be cost-effective for routine surveillance. PMID:24371062
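A toy illustration of the underlying idea (not the actual biomarkers or threshold used in the study): within-host sequence diversity is low shortly after infection and grows over time, so a simple mean pairwise distance over aligned reads can separate incident from chronic samples. The threshold below is hypothetical:

```python
from itertools import combinations

def mean_pairwise_diversity(seqs):
    """Average per-site Hamming distance among aligned, equal-length sequences."""
    pairs = list(combinations(seqs, 2))
    if not pairs:
        return 0.0
    return sum(sum(a != b for a, b in zip(s, t)) / len(s) for s, t in pairs) / len(pairs)

def classify_infection(seqs, threshold=0.05):
    """Low within-host diversity suggests a recent (incident) infection."""
    return "incident" if mean_pairwise_diversity(seqs) < threshold else "chronic"
```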
Nonaminoglycoside compounds induce readthrough of nonsense mutations
Damoiseaux, Robert; Nahas, Shareef; Gao, Kun; Hu, Hailiang; Pollard, Julianne M.; Goldstine, Jimena; Jung, Michael E.; Henning, Susanne M.; Bertoni, Carmen
2009-01-01
Large numbers of genetic disorders are caused by nonsense mutations for which compound-induced readthrough of premature termination codons (PTCs) might be exploited as a potential treatment strategy. We have successfully developed a sensitive and quantitative high-throughput screening (HTS) assay, protein transcription/translation (PTT)–enzyme-linked immunosorbent assay (ELISA), for identifying novel PTC-readthrough compounds using ataxia-telangiectasia (A-T) as a genetic disease model. This HTS PTT-ELISA assay is based on a coupled PTT that uses plasmid templates containing prototypic A-T mutated (ATM) mutations for HTS. The assay is luciferase independent. We screened ∼34,000 compounds and identified 12 low-molecular-mass nonaminoglycosides with potential PTC-readthrough activity. From these, two leading compounds consistently induced functional ATM protein in ATM-deficient cells containing disease-causing nonsense mutations, as demonstrated by direct measurement of ATM protein, restored ATM kinase activity, and colony survival assays for cellular radiosensitivity. The two compounds also demonstrated readthrough activity in mdx mouse myotube cells carrying a nonsense mutation and induced significant amounts of dystrophin protein. PMID:19770270
Baumann, Pascal; Hahn, Tobias; Hubbuch, Jürgen
2015-10-01
Upstream processes are complex to design, and the productivity of cells under given cultivation conditions is hard to predict. The method of choice for examining the design space is to execute high-throughput cultivation screenings in micro-scale format. Various predictive in silico models have been developed for many downstream processes, leading to a reduction of time and material costs. This paper presents a combined optimization approach based on high-throughput micro-scale cultivation experiments and chromatography modeling. The overall optimal system is not necessarily the one with the highest product titers, but the one resulting in superior overall process performance across up- and downstream steps. The methodology is presented in a case study for the Cherry-tagged enzyme Glutathione-S-Transferase from Escherichia coli SE1. The Cherry-Tag™ (Delphi Genetics, Belgium), which can be fused to any target protein, allows for direct product analytics by simple VIS absorption measurements. High-throughput cultivations were carried out in a 48-well format in a BioLector micro-scale cultivation system (m2p-Labs, Germany). The downstream process optimization for a set of randomly picked upstream conditions producing high yields was performed in silico using a chromatography modeling software developed in-house (ChromX). The suggested in silico-optimized operational modes for product capture were validated subsequently. The overall best system was chosen based on a combination of excellent up- and downstream performance. © 2015 Wiley Periodicals, Inc.
Knowlton, Michelle N; Li, Tongbin; Ren, Yongliang; Bill, Brent R; Ellis, Lynda Bm; Ekker, Stephen C
2008-01-07
The zebrafish is a powerful model vertebrate amenable to high-throughput in vivo genetic analyses. Examples include reverse genetic screens using morpholino knockdown, expression-based screening using enhancer trapping and forward genetic screening using transposon insertional mutagenesis. We have created a database to facilitate web-based distribution of data from such genetic studies. The MOrpholino DataBase is a MySQL relational database with an online, PHP interface. Multiple quality control levels allow differential access to data in raw and finished formats. MODBv1 includes sequence information relating to almost 800 morpholinos and their targets and phenotypic data regarding the dose effect of each morpholino (mortality, toxicity and defects). To improve the searchability of this database, we have incorporated a fixed-vocabulary defect ontology that allows for the organization of morpholino effects based on the anatomical structure affected and the defect produced. This also allows comparison between species utilizing Phenotypic Attribute Trait Ontology (PATO)-designated terminology. MODB is also cross-linked with ZFIN, allowing full searches between the two databases. MODB offers users the ability to retrieve morpholino data by sequence of morpholino or target, name of target, anatomical structure affected and defect produced. MODB data can be used for functional genomic analysis of morpholino design to maximize efficacy and minimize toxicity. MODB also serves as a template for future sequence-based functional genetic screen databases, and it is currently being used as a model for the creation of a mutagenic insertional transposon database.
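A database with this shape can be sketched with a toy schema and query. The tables, columns, and example records below are illustrative guesses, not MODB's actual MySQL schema (sqlite is used here for a self-contained example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE morpholino (
    id       INTEGER PRIMARY KEY,
    sequence TEXT NOT NULL,   -- antisense oligo sequence
    target   TEXT NOT NULL    -- gene targeted for knockdown
);
CREATE TABLE phenotype (
    morpholino_id INTEGER REFERENCES morpholino(id),
    dose_ng       REAL,       -- injected dose
    structure     TEXT,       -- anatomical structure affected (ontology term)
    defect        TEXT        -- fixed-vocabulary defect term
);
""")
conn.executemany("INSERT INTO morpholino VALUES (?, ?, ?)",
                 [(1, "GCTAGCTAGCTAGCTAGCTAGCTAG", "ntla"),
                  (2, "ATCGATCGATCGATCGATCGATCGA", "shha")])
conn.executemany("INSERT INTO phenotype VALUES (?, ?, ?, ?)",
                 [(1, 4.5, "notochord", "absent"),
                  (2, 2.0, "floor plate", "reduced")])

# Retrieve morpholino records by anatomical structure affected,
# mirroring the kind of search the database offers.
rows = conn.execute("""
    SELECT m.target, p.defect
    FROM morpholino m JOIN phenotype p ON p.morpholino_id = m.id
    WHERE p.structure = ?
""", ("notochord",)).fetchall()
```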
Binladen, Jonas; Gilbert, M Thomas P; Bollback, Jonathan P; Panitz, Frank; Bendixen, Christian; Nielsen, Rasmus; Willerslev, Eske
2007-02-14
The invention of the Genome Sequence 20 DNA Sequencing System (454 parallel sequencing platform) has enabled the rapid and high-volume production of sequence data. Until now, however, individual emulsion PCR (emPCR) reactions and subsequent sequencing runs have been unable to combine template DNA from multiple individuals, as homologous sequences cannot be subsequently assigned to their original sources. We use conventional PCR with 5'-nucleotide-tagged primers to generate homologous DNA amplification products from multiple specimens, followed by sequencing through the high-throughput Genome Sequence 20 DNA Sequencing System (GS20, Roche/454 Life Sciences). Each DNA sequence is subsequently traced back to its individual source through 5' tag analysis. We demonstrate that this new approach enables the assignment of virtually all the generated DNA sequences to the correct source once sequencing anomalies are accounted for (misassignment rate < 0.4%). Therefore, the method enables accurate sequencing and assignment of homologous DNA sequences from multiple sources in a single high-throughput GS20 run. We observe a bias in the distribution of the differently tagged primers that is dependent on the 5' nucleotide of the tag. In particular, primers 5'-labelled with a cytosine are heavily overrepresented among the final sequences, while those 5'-labelled with a thymine are strongly underrepresented. A weaker bias also exists with regard to the distribution of the sequences as sorted by the second nucleotide of the dinucleotide tags. As the results are based on a single GS20 run, the general applicability of the approach requires confirmation. However, our experiments demonstrate that 5' primer tagging is a useful method in which the sequencing power of the GS20 can be applied to PCR-based assays of multiple homologous PCR products.
The new approach will be of value to a broad range of research areas, such as comparative genomics, complete mitochondrial analyses, population genetics, and phylogenetics.
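The tag-based source assignment described above amounts to simple demultiplexing by 5' prefix. A minimal sketch, with hypothetical dinucleotide tags and sample names:

```python
def demultiplex(reads, tag_to_sample, tag_len=2):
    """Assign each read to its source sample via its 5' tag, then strip the tag."""
    by_sample, unassigned = {}, []
    for read in reads:
        sample = tag_to_sample.get(read[:tag_len])
        if sample is None:
            unassigned.append(read)  # tag unreadable or not in the design
        else:
            by_sample.setdefault(sample, []).append(read[tag_len:])
    return by_sample, unassigned
```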
Lazinski, David W; Camilli, Andrew
2013-01-01
The amplification of DNA fragments, cloned between user-defined 5' and 3' end sequences, is a prerequisite step in the use of many current applications including massively parallel sequencing (MPS). Here we describe an improved method, called homopolymer tail-mediated ligation PCR (HTML-PCR), that requires very little starting template, minimal hands-on effort, is cost-effective, and is suited for use in high-throughput and robotic methodologies. HTML-PCR starts with the addition of homopolymer tails of controlled lengths to the 3' termini of a double-stranded genomic template. The homopolymer tails enable the annealing-assisted ligation of a hybrid oligonucleotide to the template's recessed 5' ends. The hybrid oligonucleotide has a user-defined sequence at its 5' end. This primer, together with a second primer composed of a longer region complementary to the homopolymer tail and fused to a second 5' user-defined sequence, are used in a PCR reaction to generate the final product. The user-defined sequences can be varied to enable compatibility with a wide variety of downstream applications. We demonstrate our new method by constructing MPS libraries starting from nanogram and sub-nanogram quantities of Vibrio cholerae and Streptococcus pneumoniae genomic DNA.
Application of ToxCast High-Throughput Screening and ...
Slide presentation at the SETAC annual meeting on High-Throughput Screening and Modeling Approaches to Identify Steroidogenesis Disruptors.
An in vivo MRI Template Set for Morphometry, Tissue Segmentation, and fMRI Localization in Rats
Valdés-Hernández, Pedro Antonio; Sumiyoshi, Akira; Nonaka, Hiroi; Haga, Risa; Aubert-Vásquez, Eduardo; Ogawa, Takeshi; Iturria-Medina, Yasser; Riera, Jorge J.; Kawashima, Ryuta
2011-01-01
Over the last decade, several papers have focused on the construction of highly detailed mouse high-field magnetic resonance image (MRI) templates via non-linear registration to unbiased reference spaces, allowing for a variety of neuroimaging applications such as robust morphometric analyses. However, work in rats has only provided medium-field MRI averages based on linear registration to biased spaces with the sole purpose of approximate functional MRI (fMRI) localization. This precludes any morphometric analysis, despite the need to explore in detail the neuroanatomical substrates of diseases given the recent advent of rat models. In this paper we present a new in vivo rat T2 MRI template set, comprising average images of both intensity and shape, obtained via non-linear registration. Also, unlike previous rat template sets, we include white and gray matter probabilistic segmentations, expanding its use to those applications demanding prior-based tissue segmentation, e.g., statistical parametric mapping (SPM) voxel-based morphometry. We also provide a preliminary digitalization of the latest Paxinos and Watson atlas for anatomical and functional interpretations within the cerebral cortex. We confirmed that, as with previous templates, forepaw and hindpaw fMRI activations can be correctly localized in the expected atlas structure. To exemplify the use of our new MRI template set, we report the volumes of brain tissues and cortical structures and probe their relationships with ontogenetic development. Other in vivo applications in the near future can be tensor-, deformation-, or voxel-based morphometry, morphological connectivity, and diffusion tensor-based anatomical connectivity. Our template set, freely available through the SPM extension website, could be an important tool for future longitudinal and/or functional extensive preclinical studies. PMID:22275894
Lee, Hasup; Baek, Minkyung; Lee, Gyu Rie; Park, Sangwoo; Seok, Chaok
2017-03-01
Many proteins function as homo- or hetero-oligomers; therefore, attempts to understand and regulate protein functions require knowledge of protein oligomer structures. The number of available experimental protein structures is increasing, and oligomer structures can be predicted using the experimental structures of related proteins as templates. However, template-based models may have errors due to sequence differences between the target and template proteins, which can lead to functional differences. Such structural differences may be predicted by loop modeling of local regions or refinement of the overall structure. In CAPRI (Critical Assessment of PRotein Interactions) round 30, we used recently developed features of the GALAXY protein modeling package, including template-based structure prediction, loop modeling, model refinement, and protein-protein docking to predict protein complex structures from amino acid sequences. Out of the 25 CAPRI targets, medium and acceptable quality models were obtained for 14 and 1 target(s), respectively, for which proper oligomer or monomer templates could be detected. Symmetric interface loop modeling on oligomer model structures successfully improved model quality, while loop modeling on monomer model structures failed. Overall refinement of the predicted oligomer structures consistently improved the model quality, in particular in interface contacts. Proteins 2017; 85:399-407. © 2016 Wiley Periodicals, Inc.
Fuller, Carl W.; Kumar, Shiv; Porel, Mintu; Chien, Minchen; Bibillo, Arek; Stranges, P. Benjamin; Dorwart, Michael; Tao, Chuanjuan; Li, Zengmin; Guo, Wenjing; Shi, Shundi; Korenblum, Daniel; Trans, Andrew; Aguirre, Anne; Liu, Edward; Harada, Eric T.; Pollard, James; Bhat, Ashwini; Cech, Cynthia; Yang, Alexander; Arnold, Cleoma; Palla, Mirkó; Hovis, Jennifer; Chen, Roger; Morozova, Irina; Kalachikov, Sergey; Russo, James J.; Kasianowicz, John J.; Davis, Randy; Roever, Stefan; Church, George M.; Ju, Jingyue
2016-01-01
DNA sequencing by synthesis (SBS) offers a robust platform to decipher nucleic acid sequences. Recently, we reported a single-molecule nanopore-based SBS strategy that accurately distinguishes four bases by electronically detecting and differentiating four different polymer tags attached to the 5′-phosphate of the nucleotides during their incorporation into a growing DNA strand catalyzed by DNA polymerase. Further developing this approach, we report here the use of nucleotides tagged at the terminal phosphate with oligonucleotide-based polymers to perform nanopore SBS on an α-hemolysin nanopore array platform. We designed and synthesized several polymer-tagged nucleotides using tags that produce different electrical current blockade levels and verified they are active substrates for DNA polymerase. A highly processive DNA polymerase was conjugated to the nanopore, and the conjugates were complexed with primer/template DNA and inserted into lipid bilayers over individually addressable electrodes of the nanopore chip. When an incoming complementary-tagged nucleotide forms a tight ternary complex with the primer/template and polymerase, the tag enters the pore, and the current blockade level is measured. The levels displayed by the four nucleotides tagged with four different polymers captured in the nanopore in such ternary complexes were clearly distinguishable and sequence-specific, enabling continuous sequence determination during the polymerase reaction. Thus, real-time single-molecule electronic DNA sequencing data with single-base resolution were obtained. The use of these polymer-tagged nucleotides, combined with polymerase tethering to nanopores and multiplexed nanopore sensors, should lead to new high-throughput sequencing methods. PMID:27091962
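Base calling in such a scheme reduces to mapping each measured blockade level to the nearest reference level among the four tags. A minimal sketch with hypothetical blockade values and tolerance (the actual platform's levels and decision logic are not published in this abstract):

```python
# Hypothetical reference blockade levels for the four polymer tags
# (fractions of open-pore current) -- illustrative values only.
TAG_LEVELS = {"A": 0.20, "C": 0.35, "G": 0.50, "T": 0.65}

def call_bases(blockades, tag_levels, tolerance=0.05):
    """Map each measured fractional current blockade to the base whose tag
    produces the nearest reference level; emit 'N' when nothing is close enough."""
    calls = []
    for level in blockades:
        base, ref = min(tag_levels.items(), key=lambda kv: abs(kv[1] - level))
        calls.append(base if abs(ref - level) <= tolerance else "N")
    return "".join(calls)
```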
NASA Astrophysics Data System (ADS)
Seoud, Ahmed; Kim, Juhwan; Ma, Yuansheng; Jayaram, Srividya; Hong, Le; Chae, Gyu-Yeol; Lee, Jeong-Woo; Park, Dae-Jin; Yune, Hyoung-Soon; Oh, Se-Young; Park, Chan-Ha
2018-03-01
Sub-resolution assist feature (SRAF) insertion techniques have been effectively used for a long time now to increase process latitude in the lithography patterning process. Rule-based SRAF and model-based SRAF are complementary solutions, and each has its own benefits, depending on the objectives of applications and the criticality of the impact on manufacturing yield, efficiency, and productivity. Rule-based SRAF provides superior geometric output consistency and faster runtime performance, but the associated recipe development time can be of concern. Model-based SRAF provides better coverage for more complicated pattern structures in terms of shapes and sizes, with considerably less time required for recipe development, although consistency and performance may be impacted. In this paper, we introduce a new model-assisted template extraction (MATE) SRAF solution, which employs decision tree learning in a model-based solution to provide the benefits of both rule-based and model-based SRAF insertion approaches. The MATE solution is designed to automate the creation of rules/templates for SRAF insertion, and is based on the SRAF placement predicted by model-based solutions. The MATE SRAF recipe provides optimum lithographic quality in relation to various manufacturing aspects in a very short time, compared to traditional methods of rule optimization. Experiments were done using memory device pattern layouts to compare the MATE solution to existing model-based SRAF and pixelated SRAF approaches, based on lithographic process window quality, runtime performance, and geometric output consistency.
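The decision-tree idea behind MATE can be illustrated at its smallest scale: learn a single threshold rule (a depth-1 tree) that reproduces model-based SRAF placement decisions from one geometric feature. The feature, values, and labels below are hypothetical, not the MATE feature set:

```python
def learn_gap_threshold(gap_widths_nm, model_inserts_sraf):
    """Learn a one-feature rule: insert an SRAF when the gap to the neighboring
    main feature exceeds a threshold. Training labels come from model-based
    SRAF placement decisions, as in the MATE approach."""
    best_t, best_correct = None, -1
    for t in sorted(set(gap_widths_nm)):
        # Count how many model decisions the rule "gap > t" reproduces.
        correct = sum((g > t) == y for g, y in zip(gap_widths_nm, model_inserts_sraf))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t
```

Once learned, such rules run at rule-based speed while approximating model-based placements, which is the benefit the MATE solution targets.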
Straightforward and effective protein encapsulation in polypeptide-based artificial cells.
Zhi, Zheng-Liang; Haynie, Donald T
2006-01-01
A simple and straightforward approach to encapsulating an enzyme and preserving its function in polypeptide-based artificial cells is demonstrated. A model enzyme, glucose oxidase (GOx), was encapsulated by repeated stepwise adsorption of poly(L-lysine) and poly(L-glutamic acid) onto GOx-coated CaCO3 templates. These polypeptides are known from previous research to exhibit nanometer-scale organization in multilayer films. Templates were dissolved by ethylenediaminetetraacetic acid (EDTA) at neutral pH. Addition of polyethylene glycol (PEG) to the polypeptide assembly solutions greatly increased enzyme retention on the templates, resulting in high-capacity, high-activity loading of the enzyme into artificial cells. Assay of enzyme activity showed that over 80 mg/mL GOx was retained in artificial cells after polypeptide multilayer film formation and template dissolution in the presence of PEG, but only one-fifth as much was retained in the absence of PEG. Encapsulation is a means of improving the availability of therapeutic macromolecules in biomedicine. This work therefore represents a means of developing polypeptide-based artificial cells for use as therapeutic biomacromolecule delivery vehicles.
Graph-based signal integration for high-throughput phenotyping
2012-01-01
Background Electronic Health Records aggregated in Clinical Data Warehouses (CDWs) promise to revolutionize Comparative Effectiveness Research and suggest new avenues of research. However, the effectiveness of CDWs is diminished by the lack of properly labeled data. We present a novel approach that integrates knowledge from the CDW, the biomedical literature, and the Unified Medical Language System (UMLS) to perform high-throughput phenotyping. In this paper, we automatically construct a graphical knowledge model and then use it to phenotype breast cancer patients. We compare the performance of this approach to using MetaMap when labeling records. Results MetaMap's overall accuracy at identifying breast cancer patients was 51.1% (n=428); recall=85.4%, precision=26.2%, and F1=40.1%. Our unsupervised graph-based high-throughput phenotyping had accuracy of 84.1%; recall=46.3%, precision=61.2%, and F1=52.8%. Conclusions We conclude that our approach is a promising alternative for unsupervised high-throughput phenotyping. PMID:23320851
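The reported metrics are internally consistent, which can be checked from the definition of F1 as the harmonic mean of precision and recall:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)
```

Plugging in the reported precision and recall for MetaMap (26.2%, 85.4%) and the graph-based approach (61.2%, 46.3%) reproduces the stated F1 values of 40.1 and 52.8 within rounding.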
Qian, Kun; Zhou, Huixin; Wang, Bingjian; Song, Shangzhen; Zhao, Dong
2017-11-01
Infrared dim and small target tracking is a highly challenging task. The main challenge is to account for the appearance change of an object that is submerged in a cluttered background. An efficient appearance model that exploits both the global template and local representation over infrared image sequences is constructed for dim moving target tracking. A Sparsity-based Discriminative Classifier (SDC) and a Convolutional Network-based Generative Model (CNGM) are combined with a prior model. In the SDC model, a sparse representation-based algorithm is adopted to calculate the confidence value that assigns more weights to target templates than negative background templates. In the CNGM model, simple cell feature maps are obtained by calculating the convolution between target templates and fixed filters, which are extracted from the target region in the first frame. These maps measure similarities between each filter and local intensity patterns across the target template, thereby encoding its local structural information. Then, all the maps form a representation preserving the inner geometric layout of a candidate template. Furthermore, the fixed target template set is processed via an efficient prior model. The same operation is applied to candidate templates in the CNGM model. The online update scheme not only accounts for appearance variations but also alleviates the migration problem. Finally, collaborative confidence values of particles are utilized to generate the particles' importance weights. Experiments on various infrared sequences have validated the tracking capability of the presented algorithm. Experimental results show that this algorithm runs in real time and provides higher accuracy than state-of-the-art algorithms.
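The simple-cell feature maps described above are plain 2-D correlations between a template patch and fixed filters. A minimal pure-Python sketch in valid mode (a real tracker would use an optimized array library; the toy inputs below are illustrative):

```python
def feature_map(image, kernel):
    """Valid-mode cross-correlation of a 2-D intensity patch with a filter,
    producing a map of local similarities (higher value = stronger match)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out
```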
High-throughput screening (HTS) and modeling of the retinoid ...
Presentation at the Retinoids Review 2nd workshop in Brussels, Belgium, on the application of high-throughput screening and modeling to the retinoid system.
Pandey, Ram Vinay; Pulverer, Walter; Kallmeyer, Rainer; Beikircher, Gabriel; Pabinger, Stephan; Kriegner, Albert; Weinhäusel, Andreas
2016-01-01
Bisulfite (BS) conversion-based and methylation-sensitive restriction enzyme (MSRE)-based PCR methods have been the most commonly used techniques for locus-specific DNA methylation analysis. However, both methods have advantages and limitations. Thus, an integrated approach would be extremely useful to quantify the DNA methylation status successfully with great sensitivity and specificity. Designing specific and optimized primers for target regions is the most critical and challenging step in obtaining adequate DNA methylation results using PCR-based methods. Currently, no integrated, optimized, high-throughput methylation-specific primer design software is available for both BS- and MSRE-based methods. An integrated, powerful, and easy-to-use methylation-specific primer design pipeline with great accuracy and a high success rate would therefore be very useful. We have developed a new web-based pipeline, called MSP-HTPrimer, to design primer pairs for MSP, BSP, pyrosequencing, COBRA, and MSRE assays on both genomic strands. First, our pipeline converts all target sequences into bisulfite-treated templates for both the forward and reverse strand and designs all possible primer pairs, followed by filtering for single nucleotide polymorphisms (SNPs) and known repeat regions. Next, each primer pair is annotated with the upstream and downstream RefSeq genes, CpG island, and cut sites (for COBRA and MSRE). Finally, MSP-HTPrimer selects specific primers from both strands based on custom and user-defined hierarchical selection criteria. MSP-HTPrimer produces a primer pair summary output table in TXT and HTML format for display and UCSC custom tracks for the resulting primer pairs in GTF format. MSP-HTPrimer is an integrated, web-based, high-throughput pipeline with no limitation on the number and size of target sequences, and it designs MSP, BSP, pyrosequencing, COBRA, and MSRE assays.
It is the only pipeline that automatically designs primers on both genomic strands to increase the success rate. It is a standalone web-based pipeline that is fully configured within a virtual machine and thus can be readily used without any configuration. We have experimentally validated primer pairs designed by our pipeline and shown a very high success rate: out of 66 BSP primer pairs, 63 were successfully validated without any further optimization steps, using the same qPCR conditions. The MSP-HTPrimer pipeline is freely available from http://sourceforge.net/p/msp-htprimer.
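The bisulfite template-conversion step such pipelines perform first can be sketched as follows, under the simplifying assumption of a fully unmethylated template (real designs must also account for methylated CpG cytosines, which resist conversion; this sketch omits that):

```python
def bisulfite_convert(seq, strand="+"):
    """In silico bisulfite conversion of a fully unmethylated template:
    C -> T on the forward strand; equivalently G -> A when designing
    primers against the reverse strand."""
    seq = seq.upper()
    return seq.replace("C", "T") if strand == "+" else seq.replace("G", "A")
```

Primer candidates are then enumerated against both converted strands, which is why designing on both strands (as the pipeline does) roughly doubles the search space for workable primers.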
Hassane, Duane C.; Guzman, Monica L.; Corbett, Cheryl; Li, Xiaojie; Abboud, Ramzi; Young, Fay; Liesveld, Jane L.; Carroll, Martin
2008-01-01
Increasing evidence indicates that malignant stem cells are important for the pathogenesis of acute myelogenous leukemia (AML) and represent a reservoir of cells that drive the development of AML and relapse. Therefore, new treatment regimens are necessary to prevent relapse and improve therapeutic outcomes. Previous studies have shown that the sesquiterpene lactone, parthenolide (PTL), ablates bulk, progenitor, and stem AML cells while causing no appreciable toxicity to normal hematopoietic cells. Thus, PTL must evoke cellular responses capable of mediating AML selective cell death. Given recent advances in chemical genomics such as gene expression-based high-throughput screening (GE-HTS) and the Connectivity Map, we hypothesized that the gene expression signature resulting from treatment of primary AML with PTL could be used to search for similar signatures in publicly available gene expression profiles deposited into the Gene Expression Omnibus (GEO). We therefore devised a broad in silico screen of the GEO database using the PTL gene expression signature as a template and discovered 2 new agents, celastrol and 4-hydroxy-2-nonenal, that effectively eradicate AML at the bulk, progenitor, and stem cell level. These findings suggest the use of multicenter collections of high-throughput data to facilitate discovery of leukemia drugs and drug targets. PMID:18305216
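The signature-matching idea behind such Connectivity Map-style screens can be illustrated with a crude rank-based score (the real Connectivity Map uses a Kolmogorov-Smirnov enrichment statistic; the gene names in the test are placeholders):

```python
def connectivity_score(sig_up, sig_down, ranked_genes):
    """Crude connectivity score: approaches +1 when the query signature's
    up-genes sit at the top of a profile's most-induced-first ranking and its
    down-genes at the bottom; approaches -1 for the reverse pattern."""
    n = len(ranked_genes)
    pos = {g: i / (n - 1) for i, g in enumerate(ranked_genes)}  # 0 = top
    up = sum(pos[g] for g in sig_up if g in pos) / len(sig_up)
    down = sum(pos[g] for g in sig_down if g in pos) / len(sig_down)
    return down - up
```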
Synthetic spike-in standards for high-throughput 16S rRNA gene amplicon sequencing
Tourlousse, Dieter M.; Yoshiike, Satowa; Ohashi, Akiko; Matsukura, Satoko; Noda, Naohiro
2017-01-01
Abstract High-throughput sequencing of 16S rRNA gene amplicons (16S-seq) has become a widely deployed method for profiling complex microbial communities but technical pitfalls related to data reliability and quantification remain to be fully addressed. In this work, we have developed and implemented a set of synthetic 16S rRNA genes to serve as universal spike-in standards for 16S-seq experiments. The spike-ins represent full-length 16S rRNA genes containing artificial variable regions with negligible identity to known nucleotide sequences, permitting unambiguous identification of spike-in sequences in 16S-seq read data from any microbiome sample. Using defined mock communities and environmental microbiota, we characterized the performance of the spike-in standards and demonstrated their utility for evaluating data quality on a per-sample basis. Further, we showed that staggered spike-in mixtures added at the point of DNA extraction enable concurrent estimation of absolute microbial abundances suitable for comparative analysis. Results also underscored that template-specific Illumina sequencing artifacts may lead to biases in the perceived abundance of certain taxa. Taken together, the spike-in standards represent a novel bioanalytical tool that can substantially improve 16S-seq-based microbiome studies by enabling comprehensive quality control along with absolute quantification. PMID:27980100
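The staggered spike-in strategy for absolute quantification reduces to a simple back-calculation: the known input copies of each spike-in anchor a reads-per-copy recovery factor, which then rescales the read counts of the natural taxa. A minimal sketch of that calculation, with hypothetical variable names and idealized inputs (not the paper's actual pipeline):

```python
def absolute_abundances(taxon_reads, spikein_reads, spikein_copies_added):
    """Estimate absolute copy numbers from spike-in recovery (sketch).

    taxon_reads: dict mapping taxon -> observed read count
    spikein_reads: dict mapping spike-in standard -> observed read count
    spikein_copies_added: dict mapping spike-in standard -> known input copies
    """
    # Reads recovered per input copy, averaged over the spike-in set
    efficiencies = [spikein_reads[s] / spikein_copies_added[s]
                    for s in spikein_reads]
    reads_per_copy = sum(efficiencies) / len(efficiencies)
    # Rescale each taxon's read count by the spike-in-derived factor
    return {t: reads / reads_per_copy for t, reads in taxon_reads.items()}
```

Because the factor is estimated per sample, abundances become comparable across samples with different sequencing depths, which is what enables the comparative analysis described above.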
Lamm, Ayelet T; Stadler, Michael R; Zhang, Huibin; Gent, Jonathan I; Fire, Andrew Z
2011-02-01
We have used a combination of three high-throughput RNA capture and sequencing methods to refine and augment the transcriptome map of a well-studied genetic model, Caenorhabditis elegans. The three methods include a standard (non-directional) library preparation protocol relying on cDNA priming and foldback that has been used in several previous studies for transcriptome characterization in this species, and two directional protocols, one involving direct capture of single-stranded RNA fragments and one involving circular-template PCR (CircLigase). We find that each RNA-seq approach shows specific limitations and biases, with the application of multiple methods providing a more complete map than was obtained from any single method. Of particular note in the analysis were substantial advantages of CircLigase-based and ssRNA-based capture for defining sequences and structures of the precise 5' ends (which were lost using the double-strand cDNA capture method). Of the three methods, ssRNA capture was most effective in defining sequences to the poly(A) junction. Using data sets from a spectrum of C. elegans strains and stages and the UCSC Genome Browser, we provide a series of tools that facilitate rapid visualization and assignment of gene structures.
Im, Hyungsoon; Lee, Si Hoon; Wittenberg, Nathan J.; Johnson, Timothy W.; Lindquist, Nathan C.; Nagpal, Prashant; Norris, David J.; Oh, Sang-Hyun
2011-01-01
Inexpensive, reproducible and high-throughput fabrication of nanometric apertures in metallic films can benefit many applications in plasmonics, sensing, spectroscopy, lithography and imaging. Here we use template stripping to pattern periodic nanohole arrays in optically thick, smooth Ag films with a silicon template made via nanoimprint lithography. Ag is a low-cost material with good optical properties, but it suffers from poor chemical stability and biocompatibility. However, a thin silica shell encapsulating our template-stripped Ag nanoholes facilitates biosensing applications by protecting the Ag from oxidation as well as providing a robust surface that can be readily modified with a variety of biomolecules using well-established silane chemistry. The thickness of the conformal silica shell can be precisely tuned by atomic layer deposition, and a 15-nm-thick silica shell can effectively prevent fluorophore quenching. The Ag nanohole arrays with silica shells can also be bonded to polydimethylsiloxane (PDMS) microfluidic channels for fluorescence imaging, formation of supported lipid bilayers, and real-time, label-free SPR sensing. Additionally, the smooth surfaces of the template-stripped Ag films enhance refractive index sensitivity compared with as-deposited, rough Ag films. Because nearly centimeter-sized nanohole arrays can be produced inexpensively without using any additional lithography, etching or lift-off, this method can facilitate widespread applications of metallic nanohole arrays for plasmonics and biosensing. PMID:21770414
A high-throughput in vitro ring assay for vasoactivity using magnetic 3D bioprinting
Tseng, Hubert; Gage, Jacob A.; Haisler, William L.; Neeley, Shane K.; Shen, Tsaiwei; Hebel, Chris; Barthlow, Herbert G.; Wagoner, Matthew; Souza, Glauco R.
2016-01-01
Vasoactive liabilities are typically assayed using wire myography, which is limited by its high cost and low throughput. To meet the demand for higher throughput in vitro alternatives, this study introduces a magnetic 3D bioprinting-based vasoactivity assay. The principle behind this assay is the magnetic printing of vascular smooth muscle cells into 3D rings that functionally represent blood vessel segments, whose contraction can be altered by vasodilators and vasoconstrictors. A cost-effective imaging modality employing a mobile device is used to capture contraction with high throughput. The goal of this study was to validate ring contraction as a measure of vasoactivity, using a small panel of known vasoactive drugs. In vitro responses of the rings matched outcomes predicted by in vivo pharmacology, and were supported by immunohistochemistry. Altogether, this ring assay robustly models vasoactivity, which could meet the need for higher throughput in vitro alternatives. PMID:27477945
Yan, Yumeng; Wen, Zeyu; Wang, Xinxiang; Huang, Sheng-You
2017-03-01
Protein-protein docking is an important computational tool for predicting protein-protein interactions. With the rapid development of proteomics projects, more and more experimental binding information, ranging from mutagenesis data to three-dimensional structures of protein complexes, is becoming available. Therefore, how to appropriately incorporate this biological information into traditional ab initio docking has been an important issue and challenge in the field of protein-protein docking. To address these challenges, we have developed a Hybrid DOCKing protocol of template-based and template-free approaches, referred to as HDOCK. The basic procedure of HDOCK is to model the structures of the individual components based on the template complex by a template-based method if a template is available; otherwise, the component structures are modeled from monomer proteins by regular homology modeling. Then, the complex structure of the component models is predicted by traditional protein-protein docking. With the HDOCK protocol, we have participated in the CAPRI experiment for rounds 28-35. Out of the 25 CASP-CAPRI targets for oligomer modeling, our HDOCK protocol predicted correct models for 16 targets, ranking among the top algorithms in this challenge. Our docking method also made correct predictions on other CAPRI challenges, such as protein-peptide binding for 6 out of 8 targets and water predictions for 2 out of 2 targets. The advantage of our hybrid docking approach over pure template-based docking was further confirmed by a comparative evaluation on 20 CASP-CAPRI targets. Proteins 2017; 85:497-512. © 2016 Wiley Periodicals, Inc.
Preparation of kinase-biased compounds in the search for lead inhibitors of kinase targets.
Lai, Justine Y Q; Langston, Steven; Adams, Ruth; Beevers, Rebekah E; Boyce, Richard; Burckhardt, Svenja; Cobb, James; Ferguson, Yvonne; Figueroa, Eva; Grimster, Neil; Henry, Andrew H; Khan, Nawaz; Jenkins, Kerry; Jones, Mark W; Judkins, Robert; Major, Jeremy; Masood, Abid; Nally, James; Payne, Helen; Payne, Lloyd; Raphy, Gilles; Raynham, Tony; Reader, John; Reader, Valérie; Reid, Alison; Ruprah, Parminder; Shaw, Michael; Sore, Hannah; Stirling, Matthew; Talbot, Adam; Taylor, Jess; Thompson, Stephen; Wada, Hiroki; Walker, David
2005-05-01
This work describes the preparation of approximately 13,000 compounds for rapid identification of hits in high-throughput screening (HTS). These compounds were designed as potential serine/threonine or tyrosine kinase inhibitors. The library consists of various scaffolds, e.g., purines, oxindoles, and imidazoles, whereby each core scaffold generally includes the hydrogen bond acceptor/donor properties known to be important for kinase binding. Several of these are based upon literature kinase templates, or adaptations of them to provide novelty. The routes to their preparation are outlined. A variety of automation techniques were used to prepare >500 compounds per scaffold. Where applicable, scavenger resins were employed to remove excess reagents, and when necessary, preparative high performance liquid chromatography (HPLC) was used for purification. These compounds were screened against an 'in-house' kinase panel. The success rate in HTS was significantly higher than that of the corporate compound collection. Copyright (c) 2004 Wiley Periodicals, Inc.
The U.S. EPA, under its ExpoCast program, is developing high-throughput near-field modeling methods to estimate human chemical exposure and to provide real-world context to high-throughput screening (HTS) hazard data. These novel modeling methods include reverse methods to infer ...
Modeling complexes of modeled proteins.
Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A
2017-03-01
Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å Cα RMSD. Many template-based docking predictions fall into the acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and that template-based docking is much less sensitive to inaccuracies of protein models than free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.
Lu, Pinyi; Hontecillas, Raquel; Horne, William T; Carbo, Adria; Viladomiu, Monica; Pedragosa, Mireia; Bevan, David R; Lewis, Stephanie N; Bassaganya-Riera, Josep
2012-01-01
Lanthionine synthetase component C-like protein 2 (LANCL2) is a member of the eukaryotic lanthionine synthetase component C-like protein family involved in signal transduction and insulin sensitization. Recently, LANCL2 was identified as a target for the binding and signaling of abscisic acid (ABA), a plant hormone with anti-diabetic and anti-inflammatory effects. The goal of this study was to determine the role of LANCL2 as a potential therapeutic target for developing novel drugs and nutraceuticals against inflammatory diseases. Previously, we performed homology modeling to construct a three-dimensional structure of LANCL2 using the crystal structure of lanthionine synthetase component C-like protein 1 (LANCL1) as a template. Using this model, structure-based virtual screening was performed with compounds from the NCI (National Cancer Institute) Diversity Set II, ChemBridge, ZINC natural products, and FDA-approved drug databases. Several potential ligands were identified using molecular docking. To validate the anti-inflammatory efficacy of the top-ranked compound (NSC61610) in the NCI Diversity Set II, a series of in vitro and preclinical efficacy studies were performed using a mouse model of dextran sodium sulfate (DSS)-induced colitis. Our findings showed that the lead compound, NSC61610, activated peroxisome proliferator-activated receptor gamma in a LANCL2- and adenylate cyclase/cAMP-dependent manner in vitro and ameliorated experimental colitis by down-modulating colonic inflammatory gene expression and favoring regulatory T cell responses. LANCL2 is a novel therapeutic target for inflammatory diseases. High-throughput, structure-based virtual screening is an effective computational drug design method for discovering anti-inflammatory LANCL2-based drug candidates.
Lu, Pinyi; Hontecillas, Raquel; Horne, William T.; Carbo, Adria; Viladomiu, Monica; Pedragosa, Mireia; Bevan, David R.; Lewis, Stephanie N.; Bassaganya-Riera, Josep
2012-01-01
Background Lanthionine synthetase component C-like protein 2 (LANCL2) is a member of the eukaryotic lanthionine synthetase component C-like protein family involved in signal transduction and insulin sensitization. Recently, LANCL2 was identified as a target for the binding and signaling of abscisic acid (ABA), a plant hormone with anti-diabetic and anti-inflammatory effects. Methodology/Principal Findings The goal of this study was to determine the role of LANCL2 as a potential therapeutic target for developing novel drugs and nutraceuticals against inflammatory diseases. Previously, we performed homology modeling to construct a three-dimensional structure of LANCL2 using the crystal structure of lanthionine synthetase component C-like protein 1 (LANCL1) as a template. Using this model, structure-based virtual screening was performed with compounds from the NCI (National Cancer Institute) Diversity Set II, ChemBridge, ZINC natural products, and FDA-approved drug databases. Several potential ligands were identified using molecular docking. To validate the anti-inflammatory efficacy of the top-ranked compound (NSC61610) in the NCI Diversity Set II, a series of in vitro and preclinical efficacy studies were performed using a mouse model of dextran sodium sulfate (DSS)-induced colitis. Our findings showed that the lead compound, NSC61610, activated peroxisome proliferator-activated receptor gamma in a LANCL2- and adenylate cyclase/cAMP-dependent manner in vitro and ameliorated experimental colitis by down-modulating colonic inflammatory gene expression and favoring regulatory T cell responses. Conclusions/Significance LANCL2 is a novel therapeutic target for inflammatory diseases. High-throughput, structure-based virtual screening is an effective computational drug design method for discovering anti-inflammatory LANCL2-based drug candidates. PMID:22509338
Structural protein descriptors in 1-dimension and their sequence-based predictions.
Kurgan, Lukasz; Disfani, Fatemeh Miri
2011-09-01
The last few decades have seen increasing interest in the development and application of 1-dimensional (1D) descriptors of protein structure. These descriptors project 3D structural features onto 1D strings of residue-wise structural assignments. They cover a wide range of structural aspects, including conformation of the backbone, burying depth/solvent exposure and flexibility of residues, and inter-chain residue-residue contacts. We perform a first-of-its-kind comprehensive comparative review of the existing 1D structural descriptors. We define, review and categorize ten structural descriptors, and we also describe, summarize and contrast over eighty computational models that are used to predict these descriptors from protein sequences. We show that the majority of the recent sequence-based predictors utilize machine learning models, the most popular being neural networks, support vector machines, hidden Markov models, and support vector and linear regressions. These methods provide high-throughput predictions, and most of them are accessible to a non-expert user via web servers and/or stand-alone software packages. We empirically evaluate several recent sequence-based predictors of secondary structure, disorder, and solvent accessibility descriptors using a benchmark set based on CASP8 targets. Our analysis shows that secondary structure can be predicted with over 80% accuracy and segment overlap (SOV), disorder with over 0.9 AUC, 0.6 Matthews Correlation Coefficient (MCC), and 75% SOV, and relative solvent accessibility with PCC of 0.7 and MCC of 0.6 (0.86 when homology is used). We demonstrate that the secondary structure predicted from sequence without the use of homology modeling is as good as the structure extracted from the 3D folds predicted by top-performing template-based methods.
Cruz, Rochelle E.; Shokoples, Sandra E.; Manage, Dammika P.; Yanow, Stephanie K.
2010-01-01
Mutations within the Plasmodium falciparum dihydrofolate reductase gene (Pfdhfr) contribute to resistance to antimalarials such as sulfadoxine-pyrimethamine (SP). Of particular importance are the single nucleotide polymorphisms (SNPs) within codons 51, 59, 108, and 164 in the Pfdhfr gene that are associated with SP treatment failure. Given that traditional genotyping methods are time-consuming and laborious, we developed an assay that provides the rapid, high-throughput analysis of parasite DNA isolated from clinical samples. This assay is based on asymmetric real-time PCR and melt-curve analysis (MCA) performed on the LightCycler platform. Unlabeled probes specific to each SNP are included in the reaction mixture and hybridize differentially to the mutant and wild-type sequences within the amplicon, generating distinct melting curves. Since the probe is present throughout PCR and MCA, the assay proceeds seamlessly with no further addition of reagents. This assay was validated for analytical sensitivity and specificity using plasmids, purified genomic DNA from reference strains, and parasite cultures. For all four SNPs, correct genotypes were identified with 100 copies of the template. The performance of the assay was evaluated with a blind panel of clinical isolates from travelers with low-level parasitemia. The concordance between our assay and DNA sequencing ranged from 84 to 100% depending on the SNP. We also directly compared our MCA assay to a published TaqMan real-time PCR assay and identified major issues with the specificity of the TaqMan probes. Our assay provides a number of technical improvements that facilitate the high-throughput screening of patient samples to identify SP-resistant malaria. PMID:20631115
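The genotype call in such an unlabeled-probe melt-curve assay hinges on the observed melting temperature of the probe-amplicon duplex: a perfectly matched duplex melts at a higher, characteristic Tm than a mismatched one. A minimal sketch of that classification step, using hypothetical Tm calibration values and a hypothetical tolerance (not the assay's actual thresholds):

```python
def call_genotype(observed_tm, tm_wild, tm_mutant, tolerance=0.5):
    """Assign a genotype from an unlabeled-probe melting peak (sketch).

    tm_wild and tm_mutant are calibration Tm values (deg C) for the
    probe hybridized to wild-type and mutant sequence, respectively;
    a sample yielding peaks near both would be called mixed.
    """
    calls = []
    if abs(observed_tm - tm_wild) <= tolerance:
        calls.append("wild-type")
    if abs(observed_tm - tm_mutant) <= tolerance:
        calls.append("mutant")
    # A peak matching neither reference Tm is left undetermined
    return calls or ["undetermined"]
```

In the published assay this comparison is done per SNP against Tm values established with plasmid and reference-strain controls; the numbers above are placeholders only.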
Multiplex High-Throughput Targeted Proteomic Assay To Identify Induced Pluripotent Stem Cells.
Baud, Anna; Wessely, Frank; Mazzacuva, Francesca; McCormick, James; Camuzeaux, Stephane; Heywood, Wendy E; Little, Daniel; Vowles, Jane; Tuefferd, Marianne; Mosaku, Olukunbi; Lako, Majlinda; Armstrong, Lyle; Webber, Caleb; Cader, M Zameel; Peeters, Pieter; Gissen, Paul; Cowley, Sally A; Mills, Kevin
2017-02-21
Induced pluripotent stem cells have great potential as a human model system in regenerative medicine, disease modeling, and drug screening. However, their use in medical research is hampered by laborious reprogramming procedures that yield low numbers of induced pluripotent stem cells. For further applications in research, only the best, competent clones should be used. The standard assays for pluripotency are based on genomic approaches, which take up to 1 week to perform and incur significant cost. Therefore, there is a need for a rapid and cost-effective assay able to distinguish between pluripotent and nonpluripotent cells. Here, we describe a novel multiplexed, high-throughput, and sensitive peptide-based multiple reaction monitoring mass spectrometry assay, allowing for the identification and absolute quantitation of multiple core transcription factors and pluripotency markers. This assay provides simpler, higher-throughput classification of cells as pluripotent or nonpluripotent in a 7-minute analysis, while being more cost-effective than conventional genomic tests.
The protein structure prediction problem could be solved using the current PDB library
Zhang, Yang; Skolnick, Jeffrey
2005-01-01
For single-domain proteins, we examine the completeness of the structures in the current Protein Data Bank (PDB) library for use in full-length model construction of unknown sequences. To address this issue, we employ a comprehensive benchmark set of 1,489 medium-size proteins that cover the PDB at the level of 35% sequence identity and identify templates by structure alignment. With homologous proteins excluded, we can always find similar folds to native with an average rms deviation (RMSD) from native of 2.5 Å with ≈82% alignment coverage. These template structures often contain a significant number of insertions/deletions. The TASSER algorithm was applied to build full-length models, where continuous fragments are excised from the top-scoring templates and reassembled under the guide of an optimized force field, which includes consensus restraints taken from the templates and knowledge-based statistical potentials. For almost all targets (except for 2/1,489), the resultant full-length models have an RMSD to native below 6 Å (97% of them below 4 Å). On average, the RMSD of full-length models is 2.25 Å, with aligned regions improved from 2.5 Å to 1.88 Å, comparable with the accuracy of low-resolution experimental structures. Furthermore, starting from state-of-the-art structural alignments, we demonstrate a methodology that can consistently bring template-based alignments closer to native. These results strongly suggest that the protein-folding problem can in principle be solved based on the current PDB library by developing efficient fold recognition algorithms that can recover such initial alignments. PMID:15653774
HDOCK: a web server for protein–protein and protein–DNA/RNA docking based on a hybrid strategy
Yan, Yumeng; Zhang, Di; Zhou, Pei; Li, Botong
2017-01-01
Abstract Protein–protein and protein–DNA/RNA interactions play a fundamental role in a variety of biological processes. Determining the complex structures of these interactions is valuable, and molecular docking has played an important role in this task. To automatically make use of the binding information from the PDB in docking, we here present HDOCK, a novel web server implementing our hybrid docking algorithm of template-based modeling and free docking, in which cases with misleading templates can be rescued by the free docking protocol. The server supports protein–protein and protein–DNA/RNA docking and accepts both sequence and structure inputs for proteins. The docking process is fast, taking about 10–20 min for a docking run. Tested on cases with weakly homologous complexes of <30% sequence identity from five docking benchmarks, the HDOCK pipeline tied with template-based modeling on the protein–protein and protein–DNA benchmarks and performed better than template-based modeling on the three protein–RNA benchmarks when the top 10 predictions were considered. The performance of HDOCK became better when more predictions were considered. Combining the results of HDOCK and template-based modeling, with the template-based model ranked first, further improved the predictive power of the server. The HDOCK web server is available at http://hdock.phys.hust.edu.cn/. PMID:28521030
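The hybrid strategy's control flow can be sketched simply: build a template-based model when a complex template is found, and always run free docking so that a misleading template can be rescued by the free-docking models it competes against. The callables below are hypothetical stand-ins for the server's internal components, not its actual API:

```python
def hybrid_dock(receptor, ligand, find_template, template_based, free_dock):
    """Hybrid docking control flow (illustrative sketch).

    find_template, template_based and free_dock are injected functions
    standing in for the template search, template-based modeling and
    free docking stages; their names and signatures are assumptions.
    """
    models = []
    template = find_template(receptor, ligand)
    if template is not None:
        # A complex template exists: build a template-based model on it
        models.append(template_based(receptor, ligand, template))
    # Free docking always runs, so misleading templates cannot dominate:
    # their models simply compete with the free-docking predictions
    models.extend(free_dock(receptor, ligand))
    return models
```

Placing the template-based model first in the returned list mirrors the ranking scheme the abstract reports as improving the server's predictive power.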
NASA Astrophysics Data System (ADS)
Chaudhari, Rajan; Heim, Andrew J.; Li, Zhijun
2015-05-01
As evidenced by the three rounds of the G-protein coupled receptor (GPCR) Dock competitions, improving homology modeling methods for helical transmembrane proteins, including the GPCRs, based on templates of low sequence identity remains an eminent challenge. Current approaches addressing this challenge adopt the philosophy of "modeling first, refinement next". In the present work, we developed an alternative modeling approach through the novel application of available multiple templates. First, conserved inter-residue interactions are derived from each additional template through conservation analysis of each template-target pairwise alignment. Then, these interactions are converted into distance restraints and incorporated in the homology modeling process. This approach was applied to modeling of the human β2 adrenergic receptor using bovine rhodopsin and the human protease-activated receptor 1 as templates, and improved model quality was demonstrated compared to the homology models generated by standard single-template and multiple-template methods. This method of "refined restraints first, modeling next" provides a fast and complementary alternative to the current modeling approaches. It allows rational identification and implementation of additional conserved distance restraints extracted from multiple templates and/or experimental data, and has the potential to be applicable to modeling of all helical transmembrane proteins.
Greenough, Lucia; Schermerhorn, Kelly M; Mazzola, Laurie; Bybee, Joanna; Rivizzigno, Danielle; Cantin, Elizabeth; Slatko, Barton E; Gardner, Andrew F
2016-01-29
Detailed biochemical characterization of nucleic acid enzymes is fundamental to understanding nucleic acid metabolism, genome replication and repair. We report the development of a rapid, high-throughput fluorescence capillary gel electrophoresis method as an alternative to traditional polyacrylamide gel electrophoresis to characterize nucleic acid metabolic enzymes. The principles of assay design described here can be applied to nearly any enzyme system that acts on a fluorescently labeled oligonucleotide substrate. Herein, we describe several assays using this core capillary gel electrophoresis methodology to accelerate study of nucleic acid enzymes. First, assays were designed to examine DNA polymerase activities including nucleotide incorporation kinetics, strand displacement synthesis and 3'-5' exonuclease activity. Next, DNA repair activities of DNA ligase, flap endonuclease and RNase H2 were monitored. In addition, a multicolor assay that uses four different fluorescently labeled substrates in a single reaction was implemented to characterize GAN nuclease specificity. Finally, a dual-color fluorescence assay to monitor coupled enzyme reactions during Okazaki fragment maturation is described. These assays serve as a template to guide further technical development for enzyme characterization or nucleoside and non-nucleoside inhibitor screening in a high-throughput manner. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Microengineering methods for cell-based microarrays and high-throughput drug-screening applications.
Xu, Feng; Wu, JinHui; Wang, ShuQi; Durmus, Naside Gozde; Gurkan, Umut Atakan; Demirci, Utkan
2011-09-01
Screening for effective therapeutic agents from millions of drug candidates is costly, time consuming, and often faces concerns due to the extensive use of animals. To improve cost effectiveness, and to minimize animal testing in pharmaceutical research, in vitro monolayer cell microarrays with multiwell plate assays have been developed. Integration of cell microarrays with microfluidic systems has facilitated automated and controlled component loading, significantly reducing the consumption of the candidate compounds and the target cells. Even though these methods significantly increased the throughput compared to conventional in vitro testing systems and in vivo animal models, the cost associated with these platforms remains prohibitively high. Besides, there is a need for three-dimensional (3D) cell-based drug-screening models which can mimic the in vivo microenvironment and the functionality of the native tissues. Here, we present the state-of-the-art microengineering approaches that can be used to develop 3D cell-based drug-screening assays. We highlight the 3D in vitro cell culture systems with live cell-based arrays, microfluidic cell culture systems, and their application to high-throughput drug screening. We conclude that among the emerging microengineering approaches, bioprinting holds great potential to provide repeatable 3D cell-based constructs with high temporal, spatial control and versatility.
Microengineering Methods for Cell Based Microarrays and High-Throughput Drug Screening Applications
Xu, Feng; Wu, JinHui; Wang, ShuQi; Durmus, Naside Gozde; Gurkan, Umut Atakan; Demirci, Utkan
2011-01-01
Screening for effective therapeutic agents from millions of drug candidates is costly, time-consuming and often faces ethical concerns due to extensive use of animals. To improve cost-effectiveness, and to minimize animal testing in pharmaceutical research, in vitro monolayer cell microarrays with multiwell plate assays have been developed. Integration of cell microarrays with microfluidic systems has facilitated automated and controlled component loading, significantly reducing the consumption of the candidate compounds and the target cells. Even though these methods significantly increased the throughput compared to conventional in vitro testing systems and in vivo animal models, the cost associated with these platforms remains prohibitively high. Besides, there is a need for three-dimensional (3D) cell-based drug-screening models that can mimic the in vivo microenvironment and the functionality of the native tissues. Here, we present the state-of-the-art microengineering approaches that can be used to develop 3D cell-based drug-screening assays. We highlight the 3D in vitro cell culture systems with live cell-based arrays, microfluidic cell culture systems, and their application to high-throughput drug screening. We conclude that among the emerging microengineering approaches, bioprinting holds great potential to provide repeatable 3D cell-based constructs with high temporal, spatial control and versatility. PMID:21725152
Hill, Jeff W.; Thompson, Jeffrey F.; Carter, Mark B.; Edwards, Bruce S.; Sklar, Larry A.; Rosenberg, Gary A.
2014-01-01
Stroke is a leading cause of death and disability and treatment options are limited. A promising approach to accelerate the development of new therapeutics is the use of high-throughput screening of chemical libraries. Using a cell-based high-throughput oxygen-glucose deprivation (OGD) model, we evaluated 1,200 small molecules for repurposed application in stroke therapy. Isoxsuprine hydrochloride was identified as a potent neuroprotective compound in primary neurons exposed to OGD. Isoxsuprine, a β2-adrenergic agonist and NR2B subtype-selective N-methyl-D-aspartate (NMDA) receptor antagonist, demonstrated no loss of efficacy when administered up to an hour after reoxygenation in an in vitro stroke model. In an animal model of transient focal ischemia, isoxsuprine significantly reduced infarct volume compared to vehicle (137±18 mm3 versus 279±25 mm3, p<0.001). Isoxsuprine, a peripheral vasodilator, was FDA approved for the treatment of cerebrovascular insufficiency and peripheral vascular disease. Our demonstration of the significant and novel neuroprotective action of isoxsuprine hydrochloride in an in vivo stroke model and its history of human use suggest that isoxsuprine may be an ideal candidate for further investigation as a potential stroke therapeutic. PMID:24804769
Ahmed, Wamiq M; Lenz, Dominik; Liu, Jia; Paul Robinson, J; Ghafoor, Arif
2008-03-01
High-throughput biological imaging uses automated imaging devices to collect a large number of microscopic images for analysis of biological systems and validation of scientific hypotheses. Efficient manipulation of these datasets for knowledge discovery requires high-performance computational resources, efficient storage, and automated tools for extracting and sharing such knowledge among different research sites. Newly emerging grid technologies provide powerful means for exploiting the full potential of these imaging techniques. Efficient utilization of grid resources requires the development of knowledge-based tools and services that combine domain knowledge with analysis algorithms. In this paper, we first investigate how grid infrastructure can facilitate high-throughput biological imaging research, and present an architecture for providing knowledge-based grid services for this field. We identify two levels of knowledge-based services: the first level provides tools for extracting spatiotemporal knowledge from image sets, and the second level provides high-level knowledge management and reasoning services. We then present Cellular Imaging Markup Language, an XML-based language for the modeling of biological images and the representation of spatiotemporal knowledge. This scheme can be used for spatiotemporal event composition, matching, and automated knowledge extraction and representation for large biological imaging datasets. We demonstrate the expressive power of this formalism by means of different examples and extensive experimental results.
A faculty development workshop in narrative-based reflective writing.
Boudreau, J Donald; Liben, Stephen; Fuks, Abraham
2012-08-01
Narrative approaches are used increasingly in the health professions with a range of objectives. We must acquaint educators with this burgeoning field and prepare them for the incorporation of story-telling in their pedagogical practices. The authors describe a template for a faculty development workshop designed to foster self-reflection through the use of narrative techniques and prepare clinical teachers to deploy such approaches. The design is based on a six-year experience in delivering introductory workshops in narrative approaches to medical teachers. The workshops, which served as a model for the template, have been offered to a total of 92 clinicians being trained to mentor medical students. A generic template is described. It includes a table of core concepts from narrative theory, a set of probing questions useful in a basic technical analysis of texts and a list of initiating prompts for exercises in reflective writing. A workshop organized and deployed using this template is deliverable over a half-day. The model has proven to be feasible and highly valued by participants. It can be adapted for other contexts by educators across the continuum of health professional education.
Automatic Prediction of Protein 3D Structures by Probabilistic Multi-template Homology Modeling.
Meier, Armin; Söding, Johannes
2015-10-01
Homology modeling predicts the 3D structure of a query protein based on the sequence alignment with one or more template proteins of known structure. Its great importance for biological research is owed to its speed, simplicity, reliability and wide applicability, covering more than half of the residues in protein sequence space. Although multiple templates have been shown to generally increase model quality over single templates, the information from multiple templates has so far been combined using empirically motivated, heuristic approaches. We present here a rigorous statistical framework for multi-template homology modeling. First, we find that the query proteins' atomic distance restraints can be accurately described by two-component Gaussian mixtures. This insight allowed us to apply the standard laws of probability theory to combine restraints from multiple templates. Second, we derive theoretically optimal weights to correct for the redundancy among related templates. Third, a heuristic template selection strategy is proposed. We improve the average GDT-ha model quality score by 11% over single template modeling and by 6.5% over a conventional multi-template approach on a set of 1000 query proteins. Robustness with respect to wrong constraints is likewise improved. We have integrated our multi-template modeling approach with the popular MODELLER homology modeling software in our free HHpred server http://toolkit.tuebingen.mpg.de/hhpred and also offer open source software for running MODELLER with the new restraints at https://bitbucket.org/soedinglab/hh-suite.
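The probabilistic combination of restraints described above can be illustrated with a toy sketch (all numbers, weights and function names here are hypothetical; this is not the HHpred/MODELLER implementation): each template contributes a two-component Gaussian mixture over an inter-atomic distance, per-template log-densities are summed with redundancy-corrected weights, and the restrained distance is taken at the minimum of the combined negative log-likelihood.

```python
import numpy as np

def mixture_logpdf(d, mu1, sigma1, mu2, sigma2, w):
    """Log-density of a two-component Gaussian mixture at distance d."""
    g = lambda mu, s: np.exp(-0.5 * ((d - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return np.log(w * g(mu1, sigma1) + (1 - w) * g(mu2, sigma2))

def combined_neg_loglik(d, restraints, weights):
    """Weighted sum of per-template mixture log-densities (sign-flipped)."""
    return -sum(wt * mixture_logpdf(d, *r) for r, wt in zip(restraints, weights))

# Two hypothetical templates suggesting CA-CA distances near 5.0 and 5.4 Å;
# each mixture has a sharp "signal" component and a broad "background" one.
restraints = [(5.0, 0.3, 8.0, 2.0, 0.9),
              (5.4, 0.4, 8.0, 2.0, 0.8)]
weights = [0.6, 0.4]  # illustrative redundancy-corrected template weights

# Grid search for the distance minimizing the combined negative log-likelihood
grid = np.linspace(3.0, 10.0, 701)
d_opt = grid[np.argmin([combined_neg_loglik(d, restraints, weights) for d in grid])]
```

Under these made-up parameters the optimum falls between the two template distances, pulled toward the sharper, more heavily weighted restraint.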
WholePathwayScope: a comprehensive pathway-based analysis tool for high-throughput data
Yi, Ming; Horton, Jay D; Cohen, Jonathan C; Hobbs, Helen H; Stephens, Robert M
2006-01-01
Background: Analysis of high-throughput (HTP) data such as microarray and proteomics data has provided a powerful methodology for studying patterns of gene regulation at genome scale. A major unresolved problem in the post-genomic era is to assemble the large amounts of data generated into a meaningful biological context. We have developed a comprehensive software tool, WholePathwayScope (WPS), for deriving biological insights from analysis of HTP data. Results: WPS extracts gene lists with shared biological themes through color cue templates. WPS statistically evaluates global functional category enrichment of gene lists and pathway-level pattern enrichment of data. WPS incorporates well-known biological pathways from KEGG (Kyoto Encyclopedia of Genes and Genomes) and Biocarta, GO (Gene Ontology) terms, as well as user-defined pathways or relevant gene clusters or groups, and explores gene-term relationships within the derived gene-term association networks (GTANs). WPS simultaneously compares multiple datasets within biological contexts, either as pathways or as association networks. WPS also integrates the Genetic Association Database and Partial MedGene Database for disease-association information. We have used this program to analyze and compare microarray and proteomics datasets derived from a variety of biological systems. Application examples demonstrated the capacity of WPS to significantly facilitate the analysis of HTP data for integrative discovery. Conclusion: This tool represents a pathway-based platform for discovery integration to maximize analysis power. The tool is freely available at . PMID:16423281
I-TASSER: fully automated protein structure prediction in CASP8.
Zhang, Yang
2009-01-01
The I-TASSER algorithm for 3D protein structure prediction was tested in CASP8, with the procedure fully automated in both the Server and Human sections. The quality of the server models is close to that of the human ones, but the human predictions incorporate more diverse templates from other servers, which improves the predictions for some of the distant-homology targets. For the first time, sequence-based contact predictions from machine learning techniques were found helpful for both template-based modeling (TBM) and template-free modeling (FM). In TBM, although the accuracy of the sequence-based contact predictions is on average lower than that of the template-based ones, the novel contacts in the sequence-based predictions, which are complementary to the threading templates in the weakly aligned or unaligned regions, are important for improving the global and local packing in these regions. Moreover, the newly developed atomic structural refinement algorithm was tested in CASP8 and found to improve the hydrogen-bonding networks and the overall TM-score, mainly owing to its ability to remove steric clashes, which allows models to be generated from cluster centroids. Nevertheless, one of the major issues of the I-TASSER pipeline is model selection, where the best models could not be appropriately recognized when the correct templates were detected by only a minority of the threading algorithms. There are also problems related to domain splitting and mirror-image recognition, which mainly influence the performance of I-TASSER modeling in the FM-based structure predictions. Copyright 2009 Wiley-Liss, Inc.
Optimizing multi-dimensional high throughput screening using zebrafish
Truong, Lisa; Bugel, Sean M.; Chlebowski, Anna; Usenko, Crystal Y.; Simonich, Michael T.; Massey Simonich, Staci L.; Tanguay, Robert L.
2016-01-01
The use of zebrafish for high throughput screening (HTS) for chemical bioactivity assessments is becoming routine in the fields of drug discovery and toxicology. Here we report current recommendations from our experiences in zebrafish HTS. We compared the effects of different high throughput chemical delivery methods on nominal water concentration, chemical sorption to multi-well polystyrene plates, transcription responses, and resulting whole animal responses. We demonstrate that digital dispensing consistently yields higher data quality and reproducibility compared to standard plastic tip-based liquid handling. Additionally, we illustrate the challenges in using this sensitive model for chemical assessment when test chemicals have trace impurities. Adaptation of these better practices for zebrafish HTS should increase reproducibility across laboratories. PMID:27453428
Microfluidic Bead Suspension Hopper
2014-01-01
Many high-throughput analytical platforms, from next-generation DNA sequencing to drug discovery, rely on beads as carriers of molecular diversity. Microfluidic systems are ideally suited to handle and analyze such bead libraries with high precision and at minute volume scales; however, the challenge of introducing bead suspensions into devices before they sediment usually confounds microfluidic handling and analysis. We developed a bead suspension hopper that exploits sedimentation to load beads into a microfluidic droplet generator. A suspension hopper continuously delivered synthesis resin beads (17 μm diameter, 112,000 over 2.67 h) functionalized with a photolabile linker and pepstatin A into picoliter-scale droplets of an HIV-1 protease activity assay to model ultraminiaturized compound screening. Likewise, trypsinogen template DNA-coated magnetic beads (2.8 μm diameter, 176,000 over 5.5 h) were loaded into droplets of an in vitro transcription/translation system to model a protein evolution experiment. The suspension hopper should effectively remove any barriers to using suspensions as sample inputs, paving the way for microfluidic automation to replace robotic library distribution. PMID:24761972
Template-based protein-protein docking exploiting pairwise interfacial residue restraints.
Xue, Li C; Rodrigues, João P G L M; Dobbs, Drena; Honavar, Vasant; Bonvin, Alexandre M J J
2017-05-01
Although many advanced and sophisticated ab initio approaches for modeling protein-protein complexes have been proposed in past decades, template-based modeling (TBM) remains the most accurate and widely used approach, given a reliable template is available. However, there are many different ways to exploit template information in the modeling process. Here, we systematically evaluate and benchmark a TBM method that uses conserved interfacial residue pairs as docking distance restraints [referred to as alpha carbon-alpha carbon (CA-CA)-guided docking]. We compare it with two other template-based protein-protein modeling approaches, including a conserved non-pairwise interfacial residue restrained docking approach [referred to as the ambiguous interaction restraint (AIR)-guided docking] and a simple superposition-based modeling approach. Our results show that, for most cases, the CA-CA-guided docking method outperforms both superposition with refinement and the AIR-guided docking method. We emphasize the superiority of the CA-CA-guided docking on cases with medium to large conformational changes, and interactions mediated through loops, tails or disordered regions. Our results also underscore the importance of a proper refinement of superimposition models to reduce steric clashes. In summary, we provide a benchmarked TBM protocol that uses conserved pairwise interface distance as restraints in generating realistic 3D protein-protein interaction models, when reliable templates are available. The described CA-CA-guided docking protocol is based on the HADDOCK platform, which allows users to incorporate additional prior knowledge of the target system to further improve the quality of the resulting models. © The Author 2016. Published by Oxford University Press.
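As a minimal sketch of the CA-CA restraint idea (illustrative only; HADDOCK's actual restraint files and refinement protocol are far richer than this), conserved interfacial residue pairs observed in a template can be turned into target distances with a tolerance:

```python
import numpy as np

def ca_ca_restraints(ca_a, ca_b, interface_pairs, tol=1.0):
    """Turn conserved interfacial residue pairs from a template into
    CA-CA distance restraints: (res_a, res_b, lower, upper) in Å."""
    restraints = []
    for i, j in interface_pairs:
        d = float(np.linalg.norm(ca_a[i] - ca_b[j]))  # template CA-CA distance
        restraints.append((i, j, d - tol, d + tol))
    return restraints

# Toy template: CA coordinates for chains A and B, two conserved pairs
ca_a = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0]])
ca_b = np.array([[0.0, 5.0, 0.0], [3.8, 5.0, 0.0]])
restraints = ca_ca_restraints(ca_a, ca_b, [(0, 0), (1, 1)])
```

A docking engine would then penalize models whose corresponding CA-CA distances fall outside each (lower, upper) window.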
HDOCK: a web server for protein-protein and protein-DNA/RNA docking based on a hybrid strategy.
Yan, Yumeng; Zhang, Di; Zhou, Pei; Li, Botong; Huang, Sheng-You
2017-07-03
Protein-protein and protein-DNA/RNA interactions play a fundamental role in a variety of biological processes. Determining the complex structures of these interactions is valuable, and molecular docking has played an important role in this task. To automatically make use of the binding information from the PDB in docking, here we have presented HDOCK, a novel web server for our hybrid docking algorithm of template-based modeling and free docking, in which cases with misleading templates can be rescued by the free docking protocol. The server supports protein-protein and protein-DNA/RNA docking and accepts both sequence and structure inputs for proteins. The docking process is fast and takes about 10-20 min for a docking run. Tested on the cases with weakly homologous complexes of <30% sequence identity from five docking benchmarks, the HDOCK pipeline tied with template-based modeling on the protein-protein and protein-DNA benchmarks and performed better than template-based modeling on the three protein-RNA benchmarks when the top 10 predictions were considered. The performance of HDOCK became better when more predictions were considered. Combining the results of HDOCK and template-based modeling, with the template-based model ranked first, further improved the predictive power of the server. The HDOCK web server is available at http://hdock.phys.hust.edu.cn/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Khan, Ferdous; Tare, Rahul S; Kanczler, Janos M; Oreffo, Richard O C; Bradley, Mark
2010-03-01
A combination of high-throughput material formulation and microarray techniques was applied synergistically for the efficient analysis of the biological functionality of 135 binary polymer blends. This allowed the identification of cell-compatible biopolymers permissive for human skeletal stem cell growth in both in vitro and in vivo applications. The blended polymeric materials were developed from commercially available, inexpensive and well-characterised biodegradable polymers, which on their own lacked both the structural requirements of a scaffold material and, critically, the ability to facilitate cell growth. The blends identified here proved excellent templates for cell attachment, and in addition, a number of blends displayed remarkable bone-like architecture and facilitated bone regeneration by providing 3D biomimetic scaffolds for skeletal cell growth and osteogenic differentiation. This study demonstrates a unique strategy to generate and identify innovative materials with widespread application in cell biology, as well as offering a new reparative platform strategy applicable to skeletal tissues. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Lazinski, David W.; Camilli, Andrew
2013-01-01
The amplification of DNA fragments, cloned between user-defined 5′ and 3′ end sequences, is a prerequisite step in the use of many current applications including massively parallel sequencing (MPS). Here we describe an improved method, called homopolymer tail-mediated ligation PCR (HTML-PCR), that requires very little starting template, minimal hands-on effort, is cost-effective, and is suited for use in high-throughput and robotic methodologies. HTML-PCR starts with the addition of homopolymer tails of controlled lengths to the 3′ termini of a double-stranded genomic template. The homopolymer tails enable the annealing-assisted ligation of a hybrid oligonucleotide to the template's recessed 5′ ends. The hybrid oligonucleotide has a user-defined sequence at its 5′ end. This primer, together with a second primer composed of a longer region complementary to the homopolymer tail and fused to a second 5′ user-defined sequence, are used in a PCR reaction to generate the final product. The user-defined sequences can be varied to enable compatibility with a wide variety of downstream applications. We demonstrate our new method by constructing MPS libraries starting from nanogram and sub-nanogram quantities of Vibrio cholerae and Streptococcus pneumoniae genomic DNA. PMID:23311318
NASA Astrophysics Data System (ADS)
Jian, Wei; Estevez, Claudio; Chowdhury, Arshad; Jia, Zhensheng; Wang, Jianxin; Yu, Jianguo; Chang, Gee-Kung
2010-12-01
This paper presents an energy-efficient Medium Access Control (MAC) protocol for very-high-throughput millimeter-wave (mm-wave) wireless sensor communication networks (VHT-MSCNs) based on hybrid multiple access techniques of frequency division multiple access (FDMA) and time division multiple access (TDMA). An energy-efficient superframe for wireless sensor communication networks employing directional mm-wave wireless access technologies is proposed for systems that require very high throughput, such as high-definition video signals, for sensing, processing, transmitting, and actuating functions. Energy consumption modeling for each network element and comparisons among various multi-access technologies in terms of power and MAC-layer operations are investigated to evaluate the energy-efficiency improvement of the proposed MAC protocol.
Zhou, Haiying; Purdie, Jennifer; Wang, Tongtong; Ouyang, Anli
2010-01-01
The number of therapeutic proteins produced by cell culture in the pharmaceutical industry continues to increase. During the early stages of manufacturing process development, hundreds of clones and various cell culture conditions are evaluated to develop a robust process and to identify and select cell lines with high productivity. It is highly desirable to establish a high-throughput system to accelerate process development and reduce cost. Multiwell plates and shake flasks are widely used in the industry as scale-down models for large-scale bioreactors. However, one limitation of these two systems is the inability to measure and control pH in a high-throughput manner. As pH is an important process parameter for cell culture, this could limit the applications of these scale-down model vessels. An economical, rapid, and robust pH measurement method was developed at Eli Lilly and Company by employing SNARF-4F 5-(and-6)-carboxylic acid. The method demonstrated the ability to measure the pH values of cell culture samples in a high-throughput manner. Based upon the chemical equilibrium of CO₂, HCO₃⁻, and the buffer system (i.e., HEPES), we established a mathematical model to regulate pH in multiwell plates and shake flasks. The model calculates the %CO₂ required from the incubator and the amount of sodium bicarbonate to be added to adjust pH to a preset value. The model was validated by experimental data, and pH was accurately regulated by this method. The feasibility of studying the pH effect on cell culture in 96-well plates and shake flasks was also demonstrated in this study. This work sheds light on mini-bioreactor scale-down model construction and paves the way for cell culture process development to improve productivity or product quality using high-throughput systems. Copyright 2009 American Institute of Chemical Engineers
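The bicarbonate side of such a pH model follows the Henderson-Hasselbalch relation pH = pKa + log₁₀([HCO₃⁻]/(s·pCO₂)). A minimal sketch, omitting the HEPES term of the paper's full model and using standard textbook constants (pKa ≈ 6.1 and CO₂ solubility ≈ 0.0301 mmol/L/mmHg at 37 °C; the dry-gas approximation for incubator pressure is also an assumption):

```python
import math

PKA_CO2 = 6.1    # apparent pKa of the CO2/HCO3- system at 37 °C
S_CO2 = 0.0301   # CO2 solubility, mmol/L per mmHg
P_ATM = 760.0    # total pressure in mmHg (dry-gas approximation)

def medium_ph(hco3_mM, pct_co2):
    """Henderson-Hasselbalch pH of a bicarbonate-buffered medium
    given [HCO3-] in mM and the incubator CO2 setting in percent."""
    pco2 = pct_co2 / 100.0 * P_ATM  # partial pressure of CO2, mmHg
    return PKA_CO2 + math.log10(hco3_mM / (S_CO2 * pco2))

def required_pct_co2(hco3_mM, target_ph):
    """Invert the relation: the %CO2 needed to hold a target pH."""
    pco2 = hco3_mM / (S_CO2 * 10 ** (target_ph - PKA_CO2))
    return pco2 / P_ATM * 100.0
```

For example, 24 mM bicarbonate under roughly 5.3% CO₂ gives a pH near the physiological 7.4, matching the familiar 40 mmHg pCO₂ setpoint.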
Environmental Learning Centers: A Template.
ERIC Educational Resources Information Center
Vozick, Eric
1999-01-01
Provides a working model, or template, for community-based environmental learning centers (ELCs). The template presents a philosophy as well as a plan for staff and administration operations, educational programming, and financial support. The template also addresses "green" construction and maintenance of buildings and grounds and…
You, Zhu-Hong; Li, Shuai; Gao, Xin; Luo, Xin; Ji, Zhen
2014-01-01
Protein-protein interactions are the basis of biological functions, and studying these interactions at the molecular level is of crucial importance for understanding the functionality of a living cell. During the past decade, biosensors have emerged as an important tool for the high-throughput identification of proteins and their interactions. However, high-throughput experimental methods for identifying PPIs are both time-consuming and expensive, and high-throughput PPI data are often associated with high false-positive and false-negative rates. To address these problems, we propose a method for PPI detection that integrates biosensor-based PPI data with a novel computational model. The method is based on the extreme learning machine algorithm combined with a novel protein sequence descriptor. When applied to a large-scale human protein interaction dataset, the proposed method achieved 84.8% prediction accuracy, with 84.08% sensitivity at a specificity of 85.53%. More extensive experiments comparing the proposed method with a state-of-the-art technique, the support vector machine, demonstrate that our approach is promising for detecting new PPIs and can serve as a helpful supplement to biosensor-based PPI detection.
Nagarajan, Mahesh B; Raman, Steven S; Lo, Pechin; Lin, Wei-Chan; Khoshnoodi, Pooria; Sayre, James W; Ramakrishna, Bharath; Ahuja, Preeti; Huang, Jiaoti; Margolis, Daniel J A; Lu, David S K; Reiter, Robert E; Goldin, Jonathan G; Brown, Matthew S; Enzmann, Dieter R
2018-02-19
We present a method for generating a T2 MR-based probabilistic model of tumor occurrence in the prostate to guide the selection of anatomical sites for targeted biopsies and serve as a diagnostic tool to aid radiological evaluation of prostate cancer. In our study, the prostate and any radiological findings within were segmented retrospectively on 3D T2-weighted MR images of 266 subjects who underwent radical prostatectomy. Subsequent histopathological analysis determined both the ground truth and the Gleason grade of the tumors. A randomly chosen subset of 19 subjects was used to generate a multi-subject-derived prostate template. Subsequently, a cascading registration algorithm involving both affine and non-rigid B-spline transforms was used to register the prostate of every subject to the template. Corresponding transformation of radiological findings yielded a population-based probabilistic model of tumor occurrence. The quality of our probabilistic model building approach was statistically evaluated by measuring the proportion of correct placements of tumors in the prostate template, i.e., the number of tumors that maintained their anatomical location within the prostate after their transformation into the prostate template space. Probabilistic model built with tumors deemed clinically significant demonstrated a heterogeneous distribution of tumors, with higher likelihood of tumor occurrence at the mid-gland anterior transition zone and the base-to-mid-gland posterior peripheral zones. Of 250 MR lesions analyzed, 248 maintained their original anatomical location with respect to the prostate zones after transformation to the prostate. We present a robust method for generating a probabilistic model of tumor occurrence in the prostate that could aid clinical decision making, such as selection of anatomical sites for MR-guided prostate biopsies.
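Once each subject's lesion masks are warped into template space, the population model reduces to a voxel-wise occurrence frequency. A minimal sketch of that final step (assuming the masks are already registered; the paper's cascade of affine and non-rigid B-spline registrations is not reproduced here):

```python
import numpy as np

def occurrence_map(registered_masks):
    """Voxel-wise tumor occurrence probability from binary lesion
    masks already warped into a common prostate template space."""
    stack = np.stack([m.astype(float) for m in registered_masks])
    return stack.mean(axis=0)  # fraction of subjects with tumor per voxel

# Toy 4x4 "template space": two subjects with partially overlapping lesions
m1 = np.zeros((4, 4)); m1[1:3, 1:3] = 1
m2 = np.zeros((4, 4)); m2[2:4, 2:4] = 1
prob = occurrence_map([m1, m2])
```

The overlap voxel carries probability 1.0, singly covered voxels 0.5, and the rest 0.0; thresholding such a map is one way to highlight high-likelihood zones for targeted biopsy.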
Ramlee, Muhammad Khairul; Wang, Jing; Cheung, Alice M S; Li, Shang
2017-04-08
The development of programmable genome-editing tools has facilitated the use of reverse genetics to understand the roles specific genomic sequences play in the functioning of cells and whole organisms. This cause has been tremendously aided by the recent introduction of the CRISPR/Cas9 system, a versatile tool that allows researchers to manipulate the genome and transcriptome in order to, among other things, knock out, knock down, or knock in genes in a targeted manner. For the purpose of knocking out a gene, CRISPR/Cas9-mediated double-strand breaks recruit the non-homologous end-joining DNA repair pathway to introduce frameshift-causing insertions or deletions of nucleotides at the break site. However, an individual guide RNA may cause undesirable off-target effects, and ruling these out requires the use of multiple guide RNAs. This multiplicity of targets also means that high-volume screening of clones is required, which in turn calls for an efficient high-throughput technique for genotyping the knockout clones. Current genotyping techniques either suffer from inherent limitations or incur high cost, rendering them unsuitable for high-throughput purposes. Here, we detail a protocol for fluorescent PCR, which uses genomic DNA from crude cell lysate as a template, followed by resolution of the PCR fragments via capillary gel electrophoresis. This technique is accurate enough to resolve a one-base-pair difference between fragments and is therefore sufficient to indicate the presence or absence of a frameshift in the coding sequence of the targeted gene. This precise knowledge effectively precludes the need for a confirmatory sequencing step, saving users time and cost. Moreover, the technique has proven versatile in genotyping various mammalian cells of various tissue origins targeted by guide RNAs against numerous genes, as shown here and elsewhere.
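The genotyping call itself reduces to checking whether the measured indel length is a multiple of three. A hedged sketch of that logic (fragment sizes are illustrative, and the PCR and capillary electrophoresis steps are not modeled; single-base-pair sizing resolution is assumed, as the abstract states):

```python
def classify_allele(fragment_len, wildtype_len):
    """Classify a capillary-electrophoresis fragment length relative
    to the wild-type amplicon length (1-bp resolution assumed)."""
    indel = fragment_len - wildtype_len
    if indel == 0:
        return "wild-type or in-frame substitution"
    # An indel length that is not a multiple of 3 shifts the reading frame
    return "frameshift" if indel % 3 != 0 else "in-frame indel"

# A clone yielding fragments of 251 and 247 bp against a 250 bp wild type:
calls = [classify_allele(n, 250) for n in (251, 247)]
```

Here the +1 bp allele is called a frameshift (likely knockout), while the -3 bp allele is an in-frame deletion that may retain function.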
Lin, Zeming; He, Bingwei; Chen, Jiang; Du, Zhibin; Zheng, Jingyi; Li, Yanqin
2012-08-01
To help surgeons position implants precisely during surgery, a new method for producing minimally invasive implant guide templates was presented. The patient's mandible was scanned by a CT scanner, and a three-dimensional jaw bone model was constructed from the CT image data. The professional dental implant software Simplant was used to simulate implant placement on the three-dimensional CT model and determine the location and depth of the implants. At the same time, dental plaster models were scanned by a stereo vision system to build a model of the oral mucosa. Next, curvature registration was used to fuse the oral mucosa model with the CT model, so that the planned implant position could be located relative to the oral mucosa. The minimally invasive implant guide template was then designed in 3-Matic software according to the planned implant position and the oral mucosa model. Finally, the template was produced by rapid prototyping. The three-dimensional registration technique successfully fused the CT data with the dental plaster data, and the resulting template was accurate enough to guide surgeons during implantation without cutting the mucosa. Guide templates fabricated through the combined use of three-dimensional registration, Simplant simulation and rapid prototyping are accurate, enable minimally invasive and precise implant surgery, and are worthy of clinical use.
Wood, Maree; Fonseca, Amara; Sampson, David; Kovendy, Andrew; Westhuyzen, Justin; Shakespeare, Thomas; Turnbull, Kirsty
2016-01-01
The aim of this retrospective study was to develop a planning class solution for prostate intensity-modulated radiotherapy (IMRT) that achieved target and organs-at-risk (OAR) doses within acceptable departmental protocol criteria using the Monaco treatment planning system (Elekta-CMS Software, MO, USA). Advances in radiation therapy technology have led to a re-evaluation of work practices. Class solutions have the potential to produce highly conformal plans in a time-efficient manner. Using data from intermediate- and high-risk prostate cancer patients, a stepwise quality improvement model was employed. Stage 1 involved the development of a broadly based treatment template developed across 10 patients; Stage 2 involved template refinement and clinical audit (n = 20); Stage 3, template review (n = 50); and Stage 4, an assessment of a revised template against the actual treatment plan involving 72 patients. The computer algorithm that comprised the Stage 4 template met clinical treatment criteria for 82% of patients. Minor template changes were required for a further 13% of patients. Major changes were required in 4%; one patient could not be assessed. The average calculation time was 13 min and involved seven mouse clicks by the planner. Thus, the new template met treatment criteria or required only minor changes in 95% of prostate patients; this is an encouraging result suggesting improvements in planning efficiency and consistency. It is feasible to develop a class solution for prostate IMRT using a stepwise quality improvement model which delivers clinically acceptable plans in the great majority of prostate cases.
Pietiainen, Vilja; Saarela, Jani; von Schantz, Carina; Turunen, Laura; Ostling, Paivi; Wennerberg, Krister
2014-05-01
The High Throughput Biomedicine (HTB) unit at the Institute for Molecular Medicine Finland (FIMM) was established in 2010 to serve as a national and international academic screening unit providing access to state-of-the-art instrumentation for chemical and RNAi-based high-throughput screening. The initial focus of the unit was multiwell-plate-based chemical screening and high-content microarray-based siRNA screening. However, over the first four years of operation, the unit has moved to a more flexible service platform where both chemical and siRNA screening are performed at different scales, primarily in multiwell-plate-based assays with a wide range of readout possibilities and a focus on ultraminiaturization to allow for affordable screening for academic users. In addition to high-throughput screening, the equipment of the unit is also used to support miniaturized, multiplexed and high-throughput applications for other types of research, such as genomics, sequencing and biobanking operations. Importantly, in line with the translational research goals at FIMM, an increasing part of the operations at the HTB unit is being focused on high-throughput systems biology platforms for functional profiling of patient cells in personalized and precision medicine projects.
Zhang, Yang
2014-01-01
We develop and test a new pipeline in CASP10 to predict protein structures based on an interplay of I-TASSER and QUARK for both free-modeling (FM) and template-based modeling (TBM) targets. The most noteworthy observation is that sorting through the threading template pool using the QUARK-based ab initio models as probes allows the detection of distant-homology templates which might be ignored by the traditional sequence profile-based threading alignment algorithms. Further template assembly refinement by I-TASSER resulted in successful folding of two medium-sized FM targets with >150 residues. For TBM, the multiple threading alignments from LOMETS are, for the first time, incorporated into the ab initio QUARK simulations, which were further refined by I-TASSER assembly refinement. Compared with the traditional threading assembly refinement procedures, the inclusion of the threading-constrained ab initio folding models can consistently improve the quality of the full-length models as assessed by the GDT-HA and hydrogen-bonding scores. Despite the success, significant challenges still exist in domain boundary prediction and consistent folding of medium-size proteins (especially beta-proteins) for nonhomologous targets. Further developments of sensitive fold-recognition and ab initio folding methods are critical for solving these problems. PMID:23760925
Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms
Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas
2016-01-01
Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding how to integrate the collected information. Context-specific reconstruction based on generic genome-scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context-specific reconstruction algorithms have been published in the last 10 years, only a fraction of them are suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or from arbitrary thresholding. This review describes and analyses common validation methods used for testing model-building algorithms. Two major approaches can be distinguished: consistency testing and comparison-based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility of distinguishing between the signal and the background of non-specific probe binding in a microarray experiment, and with whether distinct sets of input expressed genes corresponding to, e.g., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, comparison with existing networks, or comparison with additional databases. We test these methods on several available algorithms and deduce properties of the algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640
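Consistency testing of the kind described, i.e., robustness of the reconstructed reaction set against random dropout of input genes, can be sketched as follows (the `mock_reconstruction` function is a deliberately naive stand-in for a real context-specific reconstruction algorithm, and all names are illustrative):

```python
import random

def jaccard(a, b):
    """Jaccard similarity between two reaction sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def mock_reconstruction(expressed_genes, gene_to_reactions):
    """Naive stand-in for a context-specific reconstruction algorithm:
    keep every reaction linked to at least one expressed gene."""
    return {r for g in expressed_genes for r in gene_to_reactions.get(g, ())}

def robustness_to_dropout(genes, gene_to_reactions, dropout=0.1,
                          trials=20, seed=0):
    """Average Jaccard overlap between the full model and models rebuilt
    after randomly dropping a fraction of the input genes."""
    rng = random.Random(seed)
    full = mock_reconstruction(genes, gene_to_reactions)
    keep = max(1, int(len(genes) * (1 - dropout)))
    scores = [jaccard(full,
                      mock_reconstruction(rng.sample(genes, keep),
                                          gene_to_reactions))
              for _ in range(trials)]
    return sum(scores) / len(scores)
```

With no dropout the score is exactly 1.0; a real benchmark would replace the mock with the algorithm under test and report how quickly the score degrades as noise grows.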
Droplet-based microfluidic analysis and screening of single plant cells.
Yu, Ziyi; Boehm, Christian R; Hibberd, Julian M; Abell, Chris; Haseloff, Jim; Burgess, Steven J; Reyna-Llorens, Ivan
2018-01-01
Droplet-based microfluidics has been used to facilitate high-throughput analysis of individual prokaryote and mammalian cells. However, there is a scarcity of similar workflows applicable to rapid phenotyping of plant systems where phenotyping analyses typically are time-consuming and low-throughput. We report on-chip encapsulation and analysis of protoplasts isolated from the emergent plant model Marchantia polymorpha at processing rates of >100,000 cells per hour. We use our microfluidic system to quantify the stochastic properties of a heat-inducible promoter across a population of transgenic protoplasts to demonstrate its potential for assessing gene expression activity in response to environmental conditions. We further demonstrate on-chip sorting of droplets containing YFP-expressing protoplasts from wild type cells using dielectrophoresis force. This work opens the door to droplet-based microfluidic analysis of plant cells for applications ranging from high-throughput characterisation of DNA parts to single-cell genomics to selection of rare plant phenotypes.
Archetype-based conversion of EHR content models: pilot experience with a regional EHR system.
Chen, Rong; Klein, Gunnar O; Sundvall, Erik; Karlsson, Daniel; Ahlfeldt, Hans
2009-07-01
Exchange of Electronic Health Record (EHR) data between systems from different suppliers is a major challenge. EHR communication based on archetype methodology has been developed by openEHR and CEN/ISO. The experience of using archetypes in deployed EHR systems is quite limited today. Currently deployed EHR systems with large user bases have their own proprietary way of representing clinical content using various models. This study was designed to investigate the feasibility of representing EHR content models from a regional EHR system as openEHR archetypes and inversely to convert archetypes to the proprietary format. The openEHR EHR Reference Model (RM) and Archetype Model (AM) specifications were used. The template model of the Cambio COSMIC, a regional EHR product from Sweden, was analyzed and compared to the openEHR RM and AM. This study was focused on the convertibility of the EHR semantic models. A semantic mapping between the openEHR RM/AM and the COSMIC template model was produced and used as the basis for developing prototype software that performs automated bi-directional conversion between openEHR archetypes and COSMIC templates. Automated bi-directional conversion between openEHR archetype format and COSMIC template format has been achieved. Several archetypes from the openEHR Clinical Knowledge Repository have been imported into COSMIC, preserving most of the structural and terminology related constraints. COSMIC templates from a large regional installation were successfully converted into the openEHR archetype format. The conversion from the COSMIC templates into archetype format preserves nearly all structural and semantic definitions of the original content models. A strategy of gradually adding archetype support to legacy EHR systems was formulated in order to allow sharing of clinical content models defined using different formats. 
The openEHR RM and AM are expressive enough to represent the existing clinical content models from the template based EHR system tested and legacy content models can automatically be converted to archetype format for sharing of knowledge. With some limitations, internationally available archetypes could be converted to the legacy EHR models. Archetype support can be added to legacy EHR systems in an incremental way allowing a migration path to interoperability based on standards.
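The bi-directional conversion rests on a semantic mapping between the two content-model formats. A toy sketch of such a round-trip (the field names are purely illustrative, not the real COSMIC template or openEHR RM/AM attribute names):

```python
# Hypothetical field mapping between a proprietary template node and an
# archetype-style node; a real mapping also covers nesting and terminology.
TEMPLATE_TO_ARCHETYPE = {
    "label": "name",
    "datatype": "rm_type",
    "required": "occurrences_min",
}

def to_archetype(node):
    """Convert a proprietary template node to archetype-style fields."""
    return {TEMPLATE_TO_ARCHETYPE[k]: v
            for k, v in node.items() if k in TEMPLATE_TO_ARCHETYPE}

def to_template(node):
    """Inverse conversion; round-tripping preserves every mapped field."""
    inverse = {v: k for k, v in TEMPLATE_TO_ARCHETYPE.items()}
    return {inverse[k]: v for k, v in node.items() if k in inverse}
```

Fields outside the mapping are dropped, which mirrors the paper's observation that conversion preserves "most" or "nearly all" constraints rather than all of them.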
A model-based 3D template matching technique for pose acquisition of an uncooperative space object.
Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele
2015-03-16
This paper presents a customized three-dimensional template matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Distinctive features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant is also introduced aimed at further accelerating the pose acquisition time and reducing the computational cost. Technique performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation and various target-chaser relative dynamics scenarios, relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable to initialize the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the proposed techniques is introduced.
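The core of a 3D template matching step can be sketched as scoring each candidate pose's template point set against the measured cloud and keeping the best fit. A brute-force illustration only (a real implementation would use k-d trees for the nearest-neighbour search and a proper pose parameterization):

```python
from math import dist

def match_score(cloud, template):
    """Mean nearest-neighbour distance from each measured LIDAR point to the
    posed template point set; lower means a better fit."""
    return sum(min(dist(p, t) for t in template) for p in cloud) / len(cloud)

def best_pose(cloud, posed_templates):
    """Pick the candidate pose whose template best explains the point cloud.
    `posed_templates` maps a pose label to the template rendered in that pose."""
    return min(posed_templates,
               key=lambda pose: match_score(cloud, posed_templates[pose]))
```

The on-line database idea in the abstract corresponds to generating `posed_templates` on demand from the target model rather than storing a large precomputed set.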
Synthetic spike-in standards for high-throughput 16S rRNA gene amplicon sequencing.
Tourlousse, Dieter M; Yoshiike, Satowa; Ohashi, Akiko; Matsukura, Satoko; Noda, Naohiro; Sekiguchi, Yuji
2017-02-28
High-throughput sequencing of 16S rRNA gene amplicons (16S-seq) has become a widely deployed method for profiling complex microbial communities but technical pitfalls related to data reliability and quantification remain to be fully addressed. In this work, we have developed and implemented a set of synthetic 16S rRNA genes to serve as universal spike-in standards for 16S-seq experiments. The spike-ins represent full-length 16S rRNA genes containing artificial variable regions with negligible identity to known nucleotide sequences, permitting unambiguous identification of spike-in sequences in 16S-seq read data from any microbiome sample. Using defined mock communities and environmental microbiota, we characterized the performance of the spike-in standards and demonstrated their utility for evaluating data quality on a per-sample basis. Further, we showed that staggered spike-in mixtures added at the point of DNA extraction enable concurrent estimation of absolute microbial abundances suitable for comparative analysis. Results also underscored that template-specific Illumina sequencing artifacts may lead to biases in the perceived abundance of certain taxa. Taken together, the spike-in standards represent a novel bioanalytical tool that can substantially improve 16S-seq-based microbiome studies by enabling comprehensive quality control along with absolute quantification. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
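The absolute-quantification step follows from a simple proportion: spike-ins added at a known copy number before DNA extraction calibrate reads per copy for the whole sample. A sketch, assuming equal amplification efficiency for spike-in and natural templates (a simplification; the paper's staggered mixtures help assess exactly such biases):

```python
def absolute_abundance(taxon_reads, spikein_reads, spikein_copies):
    """Estimate absolute 16S copies for a taxon: reads scale with template
    copies, so the taxon/spike-in read ratio times the known number of
    spike-in copies added gives the taxon's copy number."""
    return taxon_reads / spikein_reads * spikein_copies
```

For example, a taxon with 5,000 reads in a sample where 1,000 reads map to a spike-in added at 20,000 copies is estimated at 100,000 copies.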
Automated image-based phenotypic analysis in zebrafish embryos
Vogt, Andreas; Cholewinski, Andrzej; Shen, Xiaoqiang; Nelson, Scott; Lazo, John S.; Tsang, Michael; Hukriede, Neil A.
2009-01-01
Presently, the zebrafish is the only vertebrate model compatible with contemporary paradigms of drug discovery. Zebrafish embryos are amenable to the automation necessary for high-throughput chemical screens, and their optical transparency makes them potentially suited for image-based screening. However, the lack of tools for automated analysis of complex images presents an obstacle to utilizing the zebrafish as a high-throughput screening model. We have developed an automated system for imaging and analyzing zebrafish embryos in multi-well plates regardless of embryo orientation and without user intervention. Images of fluorescent embryos were acquired on a high-content reader and analyzed using an artificial intelligence-based image analysis method termed Cognition Network Technology (CNT). CNT reliably detected transgenic fluorescent embryos (Tg(fli1:EGFP)y1) arrayed in 96-well plates and quantified intersegmental blood vessel development in embryos treated with small-molecule inhibitors of angiogenesis. The results demonstrate it is feasible to adapt image-based high-content screening methodology to measure complex whole-organism phenotypes. PMID:19235725
The Grid[Way] Job Template Manager, a tool for parameter sweeping
NASA Astrophysics Data System (ADS)
Lorca, Alejandro; Huedo, Eduardo; Llorente, Ignacio M.
2011-04-01
Parameter sweeping is a widely used algorithmic technique in computational science. It is especially suited for high-throughput computing since the jobs evaluating the parameter space are loosely coupled or independent. A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. The main task is to facilitate the creation and deletion of job templates, which are the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. It supports interesting features like a multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value-skipping and automatic indexation of job templates. The use of this tool increases the reliability of the parameter sweep study thanks to the systematic bookkeeping of job templates and respective job statuses. Furthermore, it simplifies the porting of the target application to the grid, reducing the required amount of time and effort.
Program summary
Program title: Grid[Way] Job Template Manager (version 1.0)
Catalogue identifier: AEIE_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIE_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Apache license 2.0
No. of lines in distributed program, including test data, etc.: 3545
No. of bytes in distributed program, including test data, etc.: 126 879
Distribution format: tar.gz
Programming language: Perl 5.8.5 and above
Computer: Any (tested on PC x86 and x86_64)
Operating system: Unix, GNU/Linux (tested on Ubuntu 9.04, Scientific Linux 4.7, CentOS 5.4), Mac OS X (tested on Snow Leopard 10.6)
RAM: 10 MB
Classification: 6.5
External routines: The GridWay Metascheduler [1].
Nature of problem: To parameterize and manage an application running on a grid or cluster.
Solution method: Generation of job templates as a cross product of the input parameter sets, together with management of the job template files, including job submission to the grid, control and information retrieval.
Restrictions: The parameter sweep is limited by disk space during generation of the job templates. The wildcarding of parameters cannot be done in decreasing order. Job submission, control and information retrieval are delegated to the GridWay Metascheduler.
Running time: From half a second for the simplest operation to a few minutes for thousands of exponential sampling parameters.
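The generation step, job templates as a cross product of the input parameter sets, can be sketched in a few lines (field names are illustrative, not the actual GridWay template syntax):

```python
from itertools import product

def job_templates(base, **param_sets):
    """Yield one job-template dict per point of the cross product of the
    named parameter sets, each starting from the common `base` fields."""
    names = list(param_sets)
    for values in product(*(param_sets[n] for n in names)):
        job = dict(base)
        job.update(zip(names, values))
        yield job
```

Sweeping `alpha` over 2 values and `beta` over 3 values yields 6 templates, with the last-named parameter varying fastest, exactly as `itertools.product` orders its output.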
Wedge, David C; Rowe, William; Kell, Douglas B; Knowles, Joshua
2009-03-07
We model the process of directed evolution (DE) in silico using genetic algorithms. Making use of the NK fitness landscape model, we analyse the effects of mutation rate, crossover and selection pressure on the performance of DE. A range of values of K, the epistatic interaction of the landscape, are considered, and high- and low-throughput modes of evolution are compared. Our findings suggest that for runs of around ten generations' duration, as is typical in DE, there is little difference between the way in which DE needs to be configured in the high- and low-throughput regimes, nor across different degrees of landscape epistasis. In all cases, a high selection pressure (but not an extreme one) combined with a moderately high mutation rate works best, while crossover provides some benefit, but only on the less rugged landscapes. These genetic algorithms were also compared with a "model-based approach" from the literature, which uses sequential fixing of the problem parameters based on fitting a linear model. Overall, we find that purely evolutionary techniques fare better than model-based approaches across all but the smoothest landscapes.
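In the NK landscape model used here, each of the N loci contributes a fitness value that depends on its own state and on the states of its K neighbours; larger K means more epistasis and a more rugged landscape. A minimal sketch, using a deterministic pseudo-random table keyed by the locus and its neighbourhood state (one common variant; the paper's exact construction may differ in details such as neighbour choice):

```python
import random

def nk_fitness(genome, k, seed=0):
    """NK landscape fitness: mean over loci of a contribution that depends on
    the locus state and its K cyclic neighbours.  Contributions are drawn
    reproducibly by seeding a PRNG with (seed, locus, neighbourhood state)."""
    n = len(genome)
    total = 0.0
    for i in range(n):
        neighbourhood = tuple(genome[(i + j) % n] for j in range(k + 1))
        total += random.Random(f"{seed}-{i}-{neighbourhood}").random()
    return total / n
```

Because the table is regenerated from the seed, the same genome always maps to the same fitness, which is what a genetic algorithm needs when re-evaluating candidates.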
Bláha, Benjamin A F; Morris, Stephen A; Ogonah, Olotu W; Maucourant, Sophie; Crescente, Vincenzo; Rosenberg, William; Mukhopadhyay, Tarit K
2018-01-01
The time and cost benefits of miniaturized fermentation platforms can only be gained by employing complementary techniques facilitating high throughput at small sample volumes. Microbial cell disruption is a major bottleneck in experimental throughput and is often restricted to large processing volumes. Moreover, for rigid yeast species, such as Pichia pastoris, no effective high-throughput disruption methods exist. The development of an automated, miniaturized, high-throughput, noncontact, scalable platform based on adaptive focused acoustics (AFA) to disrupt P. pastoris and recover intracellular heterologous protein is described. Augmented modes of AFA were established by investigating vessel designs and a novel enzymatic pretreatment step. Three different modes of AFA were studied and compared to the performance of high-pressure homogenization. For each of these modes of cell disruption, response models were developed to account for five different performance criteria. Using multiple responses not only demonstrated that different operating parameters are required for different response optima, with the highest product purity requiring suboptimal values for other criteria, but also allowed AFA-based methods to mimic large-scale homogenization processes. These results demonstrate that AFA-mediated cell disruption can be used for a wide range of applications including buffer development, strain selection, fermentation process development, and whole bioprocess integration. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 34:130-140, 2018. © 2017 American Institute of Chemical Engineers.
Sokkar, Pandian; Mohandass, Shylajanaciyar; Ramachandran, Murugesan
2011-07-01
We present a comparative account of 3D structures of the human type-1 receptor (AT1) for angiotensin II (AngII), modeled using three different methodologies. AngII activates a wide spectrum of signaling responses via the AT1 receptor that mediates physiological control of blood pressure and diverse pathological actions in cardiovascular, renal, and other cell types. Availability of a 3D model of the AT1 receptor would significantly enhance the development of new drugs for cardiovascular diseases. However, templates of the AT1 receptor with low sequence similarity increase the complexity of straightforward homology modeling, and hence there is a need to evaluate different modeling methodologies in order to use the models for sensitive applications such as rational drug design. Three models were generated for the AT1 receptor by (1) homology modeling with bovine rhodopsin as template, (2) homology modeling with multiple templates, and (3) threading using the I-TASSER web server. Molecular dynamics (MD) simulation (15 ns) of the models in an explicit membrane-water system, Ramachandran plot analysis and molecular docking with antagonists led to the conclusion that multiple-template-based homology modeling outperforms the other methodologies for AT1 modeling.
Optimizing multi-dimensional high throughput screening using zebrafish.
Truong, Lisa; Bugel, Sean M; Chlebowski, Anna; Usenko, Crystal Y; Simonich, Michael T; Simonich, Staci L Massey; Tanguay, Robert L
2016-10-01
The use of zebrafish for high throughput screening (HTS) for chemical bioactivity assessments is becoming routine in the fields of drug discovery and toxicology. Here we report current recommendations from our experiences in zebrafish HTS. We compared the effects of different high throughput chemical delivery methods on nominal water concentration, chemical sorption to multi-well polystyrene plates, transcription responses, and resulting whole animal responses. We demonstrate that digital dispensing consistently yields higher data quality and reproducibility compared to standard plastic tip-based liquid handling. Additionally, we illustrate the challenges in using this sensitive model for chemical assessment when test chemicals have trace impurities. Adaptation of these better practices for zebrafish HTS should increase reproducibility across laboratories. Copyright © 2016 Elsevier Inc. All rights reserved.
Segovia, Romulo; Shen, Yaoqing; Lujan, Scott A; Jones, Steven J M; Stirling, Peter C
2017-03-07
Gene-gene or gene-drug interactions are typically quantified using fitness as a readout because the data are continuous and easily measured in high throughput. However, to what extent fitness captures the range of other phenotypes that show synergistic effects is usually unknown. Using Saccharomyces cerevisiae and focusing on a matrix of DNA repair mutants and genotoxic drugs, we quantify 76 gene-drug interactions based on both mutation rate and fitness and find that these parameters are not connected. Independent of fitness defects, we identified six cases of synthetic hypermutation, where the combined effect of the drug and mutant on mutation rate was greater than predicted. One example occurred when yeast lacking RAD1 were exposed to cisplatin, and we characterized this interaction using whole-genome sequencing. Our sequencing results indicate mutagenesis by cisplatin in rad1Δ cells appeared to depend almost entirely on interstrand cross-links at GpCpN motifs. Interestingly, our data suggest that the following base on the template strand dictates the addition of the mutated base. This result differs from cisplatin mutation signatures in XPF-deficient Caenorhabditis elegans and supports a model in which translesion synthesis polymerases perform a slippage and realignment extension across from the damaged base. Accordingly, DNA polymerase ζ activity was essential for mutagenesis in cisplatin-treated rad1Δ cells. Together these data reveal the potential to gain new mechanistic insights from nonfitness measures of gene-drug interactions and extend the use of mutation accumulation and whole-genome sequencing analysis to define DNA repair mechanisms.
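The "greater than predicted" criterion for synthetic hypermutation presupposes a null model for the combined effect. A common choice is the multiplicative model sketched below; the fold-change threshold is purely illustrative and stands in for the paper's actual statistical test:

```python
def expected_rate(wt, mutant, drug):
    """Multiplicative null model for a gene-drug interaction: the combined
    mutation rate expected if the mutant and drug effects (each expressed as
    a fold change over wild type) act independently."""
    return wt * (mutant / wt) * (drug / wt)

def is_synthetic_hypermutation(wt, mutant, drug, combined, fold=2.0):
    """Flag combinations whose observed combined rate exceeds the
    multiplicative expectation by at least `fold` (illustrative cutoff)."""
    return combined >= fold * expected_rate(wt, mutant, drug)
```

With a wild-type rate of 1, a mutant rate of 2 and a drug-treated rate of 3, the independent expectation is 6; an observed combined rate of 20 would be scored as synergistic, while 8 would not.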
CABS-fold: Server for the de novo and consensus-based prediction of protein structure.
Blaszczyk, Maciej; Jamroz, Michal; Kmiecik, Sebastian; Kolinski, Andrzej
2013-07-01
The CABS-fold web server provides tools for protein structure prediction from sequence only (de novo modeling) and also using alternative templates (consensus modeling). The web server is based on the CABS modeling procedures, ranked in previous Critical Assessment of techniques for protein Structure Prediction competitions as one of the leading approaches for de novo and template-based modeling. In addition to template data, fragmentary distance restraints can also be incorporated into the modeling process. The web server output is a coarse-grained trajectory of generated conformations, its Jmol representation and predicted models in all-atom resolution (together with accompanying analysis). CABS-fold can be freely accessed at http://biocomp.chem.uw.edu.pl/CABSfold.
CABS-fold: server for the de novo and consensus-based prediction of protein structure
Blaszczyk, Maciej; Jamroz, Michal; Kmiecik, Sebastian; Kolinski, Andrzej
2013-01-01
The CABS-fold web server provides tools for protein structure prediction from sequence only (de novo modeling) and also using alternative templates (consensus modeling). The web server is based on the CABS modeling procedures, ranked in previous Critical Assessment of techniques for protein Structure Prediction competitions as one of the leading approaches for de novo and template-based modeling. In addition to template data, fragmentary distance restraints can also be incorporated into the modeling process. The web server output is a coarse-grained trajectory of generated conformations, its Jmol representation and predicted models in all-atom resolution (together with accompanying analysis). CABS-fold can be freely accessed at http://biocomp.chem.uw.edu.pl/CABSfold. PMID:23748950
High-throughput electrical characterization for robust overlay lithography control
NASA Astrophysics Data System (ADS)
Devender, Devender; Shen, Xumin; Duggan, Mark; Singh, Sunil; Rullan, Jonathan; Choo, Jae; Mehta, Sohan; Tang, Teck Jung; Reidy, Sean; Holt, Jonathan; Kim, Hyung Woo; Fox, Robert; Sohn, D. K.
2017-03-01
Realizing sensitive, high-throughput and robust overlay measurement is a challenge in current 14nm and upcoming advanced nodes, with the transition to 300mm and upcoming 450mm semiconductor manufacturing, where slight deviations in overlay have a significant impact on reliability and yield [1]. The exponentially increasing number of critical masks in multi-patterning litho-etch, litho-etch (LELE) and subsequent LELELE semiconductor processes requires even tighter overlay specifications [2]. Here, we discuss limitations of current image- and diffraction-based overlay measurement techniques in meeting these stringent processing requirements, due to sensitivity, throughput and low contrast [3]. We demonstrate a new electrical-measurement-based technique where resistance is measured for a macro with intentional misalignment between two layers. Overlay is quantified by fitting a parabolic model to the resistance, from which the minimum and inflection points are extracted to characterize overlay control and the process window, respectively. Analyses using transmission electron microscopy show good correlation between actual overlay performance and overlay obtained from fitting. Additionally, excellent correlation of overlay from electrical measurements to existing image- and diffraction-based techniques is found. We also discuss challenges of integrating the electrical-measurement-based approach in semiconductor manufacturing from a Back End of Line (BEOL) perspective. Our findings open up a new pathway for assessing overlay as well as process window and margins with a robust, high-throughput electrical measurement approach.
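The parabolic fitting step can be sketched directly: least-squares fit of resistance versus programmed misalignment, with the vertex -b/(2a) giving the overlay estimate. A plain normal-equations solver is used here for illustration (no external libraries assumed):

```python
def fit_parabola(x, y):
    """Least-squares fit of y = a*x^2 + b*x + c via the 3x3 normal equations,
    solved by Gaussian elimination with partial pivoting."""
    n = len(x)
    s = lambda p: sum(v ** p for v in x)
    sy = lambda p: sum((v ** p) * w for v, w in zip(x, y))
    A = [[s(4), s(3), s(2), sy(2)],
         [s(3), s(2), s(1), sy(1)],
         [s(2), s(1), n,    sy(0)]]
    for i in range(3):                      # forward elimination
        piv = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [v - f * u for v, u in zip(A[r], A[i])]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        coef[i] = (A[i][3] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef                             # a, b, c

def overlay_from_resistance(misalignment, resistance):
    """Overlay estimate = abscissa of the resistance minimum, -b / (2a)."""
    a, b, _ = fit_parabola(misalignment, resistance)
    return -b / (2 * a)
```

For resistance data following (x - 2)^2 + const, the recovered overlay is 2, i.e. the layers are offset by 2 units from the nominal zero of the programmed misalignment sweep.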
G-LoSA for Prediction of Protein-Ligand Binding Sites and Structures.
Lee, Hui Sun; Im, Wonpil
2017-01-01
Recent advances in high-throughput structure determination and computational protein structure prediction have significantly enriched the universe of protein structures. However, there is still a large gap between the number of available protein structures and the number of proteins whose function is annotated with high accuracy. Computational structure-based protein function prediction has emerged to reduce this knowledge gap. The identification of a ligand binding site and its structure is critical to the determination of a protein's molecular function. We present a computational methodology for predicting small-molecule ligand binding sites and ligand structure using G-LoSA, our protein local structure alignment and similarity measurement tool. All the computational procedures described here can be easily implemented using G-LoSA Toolkit, a package of standalone software programs and preprocessed PDB structure libraries. G-LoSA and G-LoSA Toolkit are freely available to academic users at http://compbio.lehigh.edu/GLoSA. We also illustrate a case study to show the potential of our template-based approach harnessing G-LoSA for protein function prediction.
A Sensitive Assay for Virus Discovery in Respiratory Clinical Samples
de Vries, Michel; Deijs, Martin; Canuti, Marta; van Schaik, Barbera D. C.; Faria, Nuno R.; van de Garde, Martijn D. B.; Jachimowski, Loes C. M.; Jebbink, Maarten F.; Jakobs, Marja; Luyf, Angela C. M.; Coenjaerts, Frank E. J.; Claas, Eric C. J.; Molenkamp, Richard; Koekkoek, Sylvie M.; Lammens, Christine; Leus, Frank; Goossens, Herman; Ieven, Margareta; Baas, Frank; van der Hoek, Lia
2011-01-01
In 5–40% of respiratory infections in children, the diagnostics remain negative, suggesting that the patients might be infected with a yet unknown pathogen. Virus discovery cDNA-AFLP (VIDISCA) is a virus discovery method based on recognition of restriction enzyme cleavage sites, ligation of adaptors and subsequent amplification by PCR. However, direct discovery of unknown pathogens in nasopharyngeal swabs is difficult due to the high concentration of ribosomal RNA (rRNA) that acts as a competitor. In the current study we optimized VIDISCA by adjusting the reverse transcription enzymes and decreasing rRNA amplification in the reverse transcription, using hexamer oligonucleotides that do not anneal to rRNA. Residual cDNA synthesis on rRNA templates was further reduced with oligonucleotides that anneal to rRNA but cannot be extended due to a 3′-dideoxy-C6 modification. With these modifications, a >90% reduction of rRNA amplification was established. Further improvement of the VIDISCA sensitivity was obtained by high-throughput sequencing (VIDISCA-454). Eighteen nasopharyngeal swabs were analysed, all containing known respiratory viruses. We could identify the proper virus in the majority of samples tested (11/18). The median load in the VIDISCA-454-positive samples was 7.2 × 10^5 viral genome copies/ml (range 1.4 × 10^3 to 7.7 × 10^6). Our results show that optimization of VIDISCA and subsequent high-throughput sequencing enhances sensitivity drastically and provides the opportunity to perform virus discovery directly in patient material. PMID:21283679
Visual programming for next-generation sequencing data analytics.
Milicchio, Franco; Rose, Rebecca; Bian, Jiang; Min, Jae; Prosperi, Mattia
2016-01-01
High-throughput or next-generation sequencing (NGS) technologies have become an established and affordable experimental framework in biological and medical sciences for all basic and translational research. Processing and analyzing NGS data is challenging. NGS data are big, heterogeneous, sparse, and error prone. Although a plethora of tools for NGS data analysis has emerged in the past decade, (i) software development is still lagging behind data generation capabilities, and (ii) there is a 'cultural' gap between the end user and the developer. Generic software template libraries specifically developed for NGS can help in dealing with the former problem, whilst coupling template libraries with visual programming may help with the latter. Here we scrutinize the state-of-the-art low-level software libraries implemented specifically for NGS and graphical tools for NGS analytics. An ideal developing environment for NGS should be modular (with a native library interface), scalable in computational methods (i.e. serial, multithread, distributed), transparent (platform-independent), interoperable (with external software interface), and usable (via an intuitive graphical user interface). These characteristics should facilitate both the run of standardized NGS pipelines and the development of new workflows based on technological advancements or users' needs. We discuss in detail the potential of a computational framework blending generic template programming and visual programming that addresses all of the current limitations. In the long term, a proper, well-developed (although not necessarily unique) software framework will bridge the current gap between data generation and hypothesis testing. This will eventually facilitate the development of novel diagnostic tools embedded in routine healthcare.
Prediction of Chemical Function: Model Development and Application
The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (...
Fletcher, E; Carmichael, O; Decarli, C
2012-01-01
We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer's disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions.
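The patch-ratio idea can be sketched in one dimension: local mean intensities of the deformed template and the subject give a bias ratio at each control point, and a smooth interpolant (a thin-plate spline in the paper) turns those ratios into a full field that is divided out of the subject image. A minimal 1-D illustration that omits the spline and deformation steps:

```python
def local_means(img, centers, radius):
    """Mean intensity in a patch of +/- radius voxels around each control
    point (1-D stand-in for the local patch statistics)."""
    out = []
    for c in centers:
        patch = img[max(0, c - radius): c + radius + 1]
        out.append(sum(patch) / len(patch))
    return out

def estimate_bias(subject, template, centers, radius):
    """Bias ratio subject/template at each control point; in the full method
    a thin-plate spline interpolates these ratios into a smooth bias field."""
    s = local_means(subject, centers, radius)
    t = local_means(template, centers, radius)
    return [si / ti for si, ti in zip(s, t)]

def correct(subject, bias_at_voxels):
    """Divide out the (interpolated) bias field to recover unbiased
    intensities; in the full algorithm this alternates with re-registration."""
    return [v / b for v, b in zip(subject, bias_at_voxels)]
```

A subject uniformly twice as bright as the template yields a bias ratio of 2, and dividing it out restores the template intensity level, mirroring one iteration of the interleaved scheme described above.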
Fletcher, E.; Carmichael, O.; DeCarli, C.
2013-01-01
We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer’s disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions. PMID:23365843
Population based MRI and DTI templates of the adult ferret brain and tools for voxelwise analysis.
Hutchinson, E B; Schwerin, S C; Radomski, K L; Sadeghi, N; Jenkins, J; Komlosh, M E; Irfanoglu, M O; Juliano, S L; Pierpaoli, C
2017-05-15
Non-invasive imaging has the potential to play a crucial role in the characterization and translation of experimental animal models used to investigate human brain development and disorders, especially when employed to study animal models that more accurately represent features of human neuroanatomy. The purpose of this study was to build and make available MRI and DTI templates and analysis tools for the ferret brain, as the ferret is a well-suited species for pre-clinical MRI studies, with a folded cortical surface, relatively high white matter volume and body dimensions that allow imaging with pre-clinical MRI scanners. Four ferret brain templates were built in this study (in vivo MRI and DTI, and ex vivo MRI and DTI) using brain images from many ferrets, and region of interest (ROI) masks corresponding to established ferret neuroanatomy were generated by semi-automatic and manual segmentation. The templates and ROI masks were used to create a web-based ferret brain viewing software for browsing the MRI and DTI volumes with annotations based on the ROI masks. A second objective of this study was to provide a careful description of the imaging methods used for acquisition, processing, registration and template building, and to demonstrate several voxelwise analysis methods, including Jacobian analysis of morphometry differences between the female and male brain and bias-free identification of DTI abnormalities in an injured ferret brain. The templates, tools and methodological optimization presented in this study are intended to advance non-invasive imaging approaches for animal species with human-similar neuroanatomy, enabling the use of pre-clinical MRI studies for understanding and treating brain disorders. Published by Elsevier Inc.
Infrared radiation scene generation of stars and planets in celestial background
NASA Astrophysics Data System (ADS)
Guo, Feng; Hong, Yaohui; Xu, Xiaojian
2014-10-01
An infrared (IR) radiation generation model of stars and planets in a celestial background is proposed in this paper. Cohen's spectral template is modified for high spectral resolution and accuracy. Based on the improved spectral template for stars and the blackbody assumption for planets, an IR radiation model is developed which is able to generate the celestial IR background for stars and planets appearing in the sensor's field of view (FOV) for a specified observing date and time, location, viewpoint and spectral band over 1.2 μm to 35 μm. In the current model, the initial locations of stars are calculated based on the midcourse space experiment (MSX) IR astronomical catalogue (MSX-IRAC), while the initial locations of planets are calculated using the secular variations of the planetary orbits (VSOP) theory. Simulation results show that the new IR radiation model has higher resolution and accuracy than the common model.
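For the planets, modeled as blackbodies, in-band radiance over the stated 1.2-35 μm range follows directly from the Planck law. A minimal sketch (numerical trapezoidal integration over wavelength; not the paper's implementation):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0
    return a / b

def band_radiance(temp_k, lo_um=1.2, hi_um=35.0, n=2000):
    """In-band radiance (W m^-2 sr^-1) by trapezoidal integration."""
    lo, hi = lo_um * 1e-6, hi_um * 1e-6
    dlam = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * planck_radiance(lo + i * dlam, temp_k)
    return total * dlam
```

At 300 K the Wien peak sits near 9.7 μm, well inside the simulated band, so most of a planet-like blackbody's emission is captured by the 1.2-35 μm window.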
Refinement of protein termini in template-based modeling using conformational space annealing.
Park, Hahnbeom; Ko, Junsu; Joo, Keehyoung; Lee, Julian; Seok, Chaok; Lee, Jooyoung
2011-09-01
The rapid increase in the number of experimentally determined protein structures in recent years enables us to obtain more reliable protein tertiary structure models than ever by template-based modeling. However, refinement of template-based models beyond the limit available from the best templates is still needed for understanding protein function in atomic detail. In this work, we develop a new method for protein terminus modeling that can be applied to refinement of models with unreliable terminus structures. The energy function for terminus modeling consists of both physics-based and knowledge-based potential terms with carefully optimized relative weights. Effective sampling of both the framework and terminus is performed using the conformational space annealing technique. This method has been tested on a set of termini derived from a nonredundant structure database and two sets of termini from the CASP8 targets. The performance of the terminus modeling method is significantly improved over our previous method that does not employ terminus refinement. It is also comparable to or superior to the best server methods tested in CASP8. The success of the current approach suggests that a similar strategy may be applied to other types of refinement problems such as loop modeling or secondary structure rearrangement. Copyright © 2011 Wiley-Liss, Inc.
Microfluidics-based digital quantitative PCR for single-cell small RNA quantification.
Yu, Tian; Tang, Chong; Zhang, Ying; Zhang, Ruirui; Yan, Wei
2017-09-01
Quantitative analyses of small RNAs at the single-cell level have been challenging because of limited sensitivity and specificity of conventional real-time quantitative PCR methods. A digital quantitative PCR (dqPCR) method for miRNA quantification has been developed, but it requires the use of proprietary stem-loop primers and only applies to miRNA quantification. Here, we report a microfluidics-based dqPCR (mdqPCR) method, which takes advantage of the Fluidigm BioMark HD system for both template partition and the subsequent high-throughput dqPCR. Our mdqPCR method demonstrated excellent sensitivity and reproducibility suitable for quantitative analyses of not only miRNAs but also all other small RNA species at the single-cell level. Using this method, we discovered that each sperm has a unique miRNA profile. © The Authors 2017. Published by Oxford University Press on behalf of Society for the Study of Reproduction. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
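Digital PCR quantification of this kind rests on Poisson statistics over the partitions: the mean number of template copies per partition is recovered from the fraction of partitions that show no amplification. A minimal sketch of the standard estimator (generic; not tied to the Fluidigm BioMark software):

```python
import math

def copies_per_partition(total_partitions, negative_partitions):
    """Mean template copies per partition, estimated from the fraction of
    negative (no-amplification) partitions under Poisson loading:
    lambda = -ln(P_negative)."""
    p_negative = negative_partitions / total_partitions
    return -math.log(p_negative)

def total_template_copies(total_partitions, negative_partitions):
    """Estimated total copies loaded across all partitions."""
    return total_partitions * copies_per_partition(
        total_partitions, negative_partitions)
```

For example, if 368 of 1000 partitions are negative, the load is very close to one copy per partition on average, since e^-1 ≈ 0.368.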
ASIC-based architecture for the real-time computation of 2D convolution with large kernel size
NASA Astrophysics Data System (ADS)
Shao, Rui; Zhong, Sheng; Yan, Luxin
2015-12-01
Two-dimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-to-large kernels. To improve the efficiency of on-chip storage resources and reduce the required off-chip bandwidth, a data-reuse cache structure is proposed: multi-block SPRAM caches image tiles, and on-chip ping-pong buffering takes full advantage of data reuse in the convolution calculation, around which a new ASIC data-scheduling scheme and overall architecture are designed. Experimental results show that the architecture achieves real-time convolution with kernels up to 40 × 32, improves the utilization of on-chip memory bandwidth and on-chip memory resources, maximizes data throughput, and reduces the need for off-chip memory bandwidth.
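For reference, direct 2-D convolution costs O(H·W·Kh·Kw) multiply-accumulates, which is what makes a 40 × 32 kernel demanding in real time, and every input pixel is re-read by up to Kh·Kw overlapping windows, which is the data reuse an on-chip cache is built to exploit. A plain software sketch of the operation being accelerated:

```python
def conv2d(image, kernel):
    """Direct 2-D convolution over the valid region of `image`.

    Cost is O(H * W * Kh * Kw); the overlapping windows re-read each
    input pixel up to Kh * Kw times, motivating on-chip data reuse."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = [[0.0] * (w - kw + 1) for _ in range(h - kh + 1)]
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    # kernel indices are flipped for true convolution
                    acc += image[i + u][j + v] * kernel[kh - 1 - u][kw - 1 - v]
            out[i][j] = acc
    return out
```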
Methods and Applications of CRISPR-Mediated Base Editing in Eukaryotic Genomes.
Hess, Gaelen T; Tycko, Josh; Yao, David; Bassik, Michael C
2017-10-05
The past several years have seen an explosion in development of applications for the CRISPR-Cas9 system, from efficient genome editing, to high-throughput screening, to recruitment of a range of DNA and chromatin-modifying enzymes. While homology-directed repair (HDR) coupled with Cas9 nuclease cleavage has been used with great success to repair and re-write genomes, recently developed base-editing systems present a useful orthogonal strategy to engineer nucleotide substitutions. Base editing relies on recruitment of cytidine deaminases to introduce changes (rather than double-stranded breaks and donor templates) and offers potential improvements in efficiency while limiting damage and simplifying the delivery of editing machinery. At the same time, these systems enable novel mutagenesis strategies to introduce sequence diversity for engineering and discovery. Here, we review the different base-editing platforms, including their deaminase recruitment strategies and editing outcomes, and compare them to other CRISPR genome-editing technologies. Additionally, we discuss how these systems have been applied in therapeutic, engineering, and research settings. Lastly, we explore future directions of this emerging technology. Copyright © 2017 Elsevier Inc. All rights reserved.
As defined by Wikipedia (https://en.wikipedia.org/wiki/Metamodeling), “(a) metamodel or surrogate model is a model of a model, and metamodeling is the process of generating such metamodels.” The goals of metamodeling include, but are not limited to (1) developing func...
Greil, Stefanie; Rahman, Atikur; Liu, Mingzhao; ...
2017-10-10
Here, we report the fabrication of ultrathin, nanoporous silicon nitride membranes made from templates of regular, nanoscale features in self-assembled block copolymer thin films. The inorganic membranes feature thicknesses less than 50 nm and volume porosities over 30%, with straight-through pores that offer high throughput for gas transport and separation applications. As fabricated, the pores are uniformly around 20 nm in diameter, but they can be controllably and continuously tuned to single-digit nanometer dimensions by atomic layer deposition of conformal coatings. A deviation from expected Knudsen diffusion is revealed for transport characteristics of saturated vapors of organic solvents across the membrane, which becomes more significant for membranes of smaller pores. We attribute this to capillary condensation of saturated vapors within membrane pores, which reduces membrane throughput by over 1 order of magnitude but significantly improves the membrane’s selectivity. Between vapors of acetone and ethyl acetate, we measure selectivities as high as 7:1 at ambient pressure and temperature, 4 times more than the Knudsen selectivity.
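The Knudsen baseline against which the measured acetone/ethyl acetate selectivity is compared follows from the inverse-square-root molar-mass scaling of Knudsen flux. A quick sketch (the molar masses are standard handbook values, not taken from the paper):

```python
import math

def knudsen_selectivity(molar_mass_light, molar_mass_heavy):
    """Ideal Knudsen selectivity between two gases: flux through a pore
    in the Knudsen regime scales as 1/sqrt(molar mass), so the lighter
    species is favored by sqrt(M_heavy / M_light)."""
    return math.sqrt(molar_mass_heavy / molar_mass_light)

# acetone (58.08 g/mol) vs. ethyl acetate (88.11 g/mol)
knudsen_baseline = knudsen_selectivity(58.08, 88.11)
```

The Knudsen baseline for this pair is only about 1.23, so a measured 7:1 selectivity greatly exceeds it, consistent with the capillary-condensation mechanism the authors propose.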
Zhang, Weiwei; Huang, Guoyou; Ng, Kelvin; Ji, Yuan; Gao, Bin; Huang, Liqing; Zhou, Jinxiong; Lu, Tian Jian; Xu, Feng
2018-03-26
Hydrogel particles that can be engineered to compartmentally culture cells in a three-dimensional (3D) and high-throughput manner have attracted increasing interest in the biomedical area. However, the ability to generate hydrogel particles with specially designed structures and their potential biomedical applications need to be further explored. This work introduces a method for fabricating hydrogel particles in an ellipsoidal cap-like shape (i.e., ellipsoidal cap-like hydrogel particles) by employing an open-pore anodic aluminum oxide membrane. Hydrogel particles of different sizes are fabricated. The ability to produce ellipsoidal cap-like magnetic hydrogel particles with controlled distribution of magnetic nanoparticles is demonstrated. Encapsulated cells show high viability, indicating the potential for using these hydrogel particles as structure- and remote-controllable building blocks for tissue engineering application. Moreover, the hydrogel particles are also used as sacrificial templates for fabricating ellipsoidal cap-like concave wells, which are further applied for producing size controllable cell aggregates. The results are beneficial for the development of hydrogel particles and their applications in 3D cell culture.
These novel modeling approaches for screening, evaluating and classifying chemicals based on the potential for biologically-relevant human exposures will inform toxicity testing and prioritization for chemical risk assessment. The new modeling approach is derived from the Stocha...
ORBDA: An openEHR benchmark dataset for performance assessment of electronic health record servers.
Teodoro, Douglas; Sundvall, Erik; João Junior, Mario; Ruch, Patrick; Miranda Freire, Sergio
2018-01-01
The openEHR specifications are designed to support implementation of flexible and interoperable Electronic Health Record (EHR) systems. Despite the increasing number of solutions based on the openEHR specifications, it is difficult to find publicly available healthcare datasets in the openEHR format that can be used to test, compare and validate different data persistence mechanisms for openEHR. To foster research on openEHR servers, we present the openEHR Benchmark Dataset, ORBDA, a very large healthcare benchmark dataset encoded using the openEHR formalism. To construct ORBDA, we extracted and cleaned a de-identified dataset from the Brazilian National Healthcare System (SUS) containing hospitalisation and high complexity procedures information and formalised it using a set of openEHR archetypes and templates. Then, we implemented a tool to enrich the raw relational data and convert it into the openEHR model using the openEHR Java reference model library. The ORBDA dataset is available in composition, versioned composition and EHR openEHR representations in XML and JSON formats. In total, the dataset contains more than 150 million composition records. We describe the dataset and provide means to access it. Additionally, we demonstrate the usage of ORBDA for evaluating the insert throughput and query latency of several NoSQL database management systems. We believe that ORBDA is a valuable asset for assessing storage models for openEHR-based information systems during the software engineering process. It may also be a suitable component in future standardised benchmarking of available openEHR storage platforms. PMID:29293556
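The insert-throughput and query-latency measurements described here can be framed, independentlyently of any particular openEHR server, as a small timing harness; the `insert_fn` and `query_fn` callables below are hypothetical stand-ins for a real store's API:

```python
import time

def measure_insert_throughput(insert_fn, records):
    """Records per second achieved by `insert_fn` (a hypothetical
    one-record persistence callable) over the given record list."""
    start = time.perf_counter()
    for record in records:
        insert_fn(record)
    elapsed = time.perf_counter() - start
    return len(records) / elapsed if elapsed > 0 else float("inf")

def query_latencies(query_fn, queries):
    """Per-query latencies in seconds for a hypothetical `query_fn`."""
    latencies = []
    for q in queries:
        t0 = time.perf_counter()
        query_fn(q)
        latencies.append(time.perf_counter() - t0)
    return latencies
```

Reporting a latency distribution (e.g., median and 95th percentile) rather than a single mean is the usual practice in such benchmarks, since tail latency often dominates user-perceived performance.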
Mismatch cleavage by single-strand specific nucleases
Till, Bradley J.; Burtner, Chris; Comai, Luca; Henikoff, Steven
2004-01-01
We have investigated the ability of single-strand specific (sss) nucleases from different sources to cleave single base pair mismatches in heteroduplex DNA templates used for mutation and single-nucleotide polymorphism analysis. The TILLING (Targeting Induced Local Lesions IN Genomes) mismatch cleavage protocol was used with the LI-COR gel detection system to assay cleavage of amplified heteroduplexes derived from a variety of induced mutations and naturally occurring polymorphisms. We found that purified nucleases derived from celery (CEL I), mung bean sprouts and Aspergillus (S1) were able to specifically cleave nearly all single base pair mismatches tested. Optimal nicking of heteroduplexes for mismatch detection was achieved using higher pH, temperature and divalent cation conditions than are routinely used for digestion of single-stranded DNA. Surprisingly, crude plant extracts performed as well as the highly purified preparations for this application. These observations suggest that diverse members of the S1 family of sss nucleases act similarly in cleaving non-specifically at bulges in heteroduplexes, and single-base mismatches are the least accessible because they present the smallest single-stranded region for enzyme binding. We conclude that a variety of sss nucleases and extracts can be effectively used for high-throughput mutation and polymorphism discovery. PMID:15141034
Automated side-chain model building and sequence assignment by template matching.
Terwilliger, Thomas C
2003-01-01
An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
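The sequence-assignment step, turning a per-position matrix of residue probabilities into alignment probabilities against the protein sequence, can be sketched as follows. This is a simplified version assuming a uniform prior over offsets and independence between positions; it is not the RESOLVE implementation.

```python
import math

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def alignment_posteriors(prob_matrix, sequence):
    """Posterior probability that a main-chain segment starts at each
    offset of the protein sequence.

    `prob_matrix[pos][a]` is the probability (from density-template
    matching) that segment position `pos` holds amino acid
    AMINO_ACIDS[a]. Assumes a uniform prior over offsets."""
    n, m = len(prob_matrix), len(sequence)
    log_likelihoods = []
    for offset in range(m - n + 1):
        ll = 0.0
        for pos in range(n):
            p = prob_matrix[pos][AMINO_ACIDS.index(sequence[offset + pos])]
            ll += math.log(max(p, 1e-12))  # floor avoids log(0)
        log_likelihoods.append(ll)
    # normalize in log space for numerical stability
    top = max(log_likelihoods)
    weights = [math.exp(ll - top) for ll in log_likelihoods]
    z = sum(weights)
    return [w / z for w in weights]
```

Keeping only high-confidence matches then amounts to thresholding the maximum posterior before committing side-chain identities to the model.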
Archetype-based conversion of EHR content models: pilot experience with a regional EHR system
2009-01-01
Background Exchange of Electronic Health Record (EHR) data between systems from different suppliers is a major challenge. EHR communication based on archetype methodology has been developed by openEHR and CEN/ISO. The experience of using archetypes in deployed EHR systems is quite limited today. Currently deployed EHR systems with large user bases have their own proprietary way of representing clinical content using various models. This study was designed to investigate the feasibility of representing EHR content models from a regional EHR system as openEHR archetypes and inversely to convert archetypes to the proprietary format. Methods The openEHR EHR Reference Model (RM) and Archetype Model (AM) specifications were used. The template model of the Cambio COSMIC, a regional EHR product from Sweden, was analyzed and compared to the openEHR RM and AM. This study was focused on the convertibility of the EHR semantic models. A semantic mapping between the openEHR RM/AM and the COSMIC template model was produced and used as the basis for developing prototype software that performs automated bi-directional conversion between openEHR archetypes and COSMIC templates. Results Automated bi-directional conversion between openEHR archetype format and COSMIC template format has been achieved. Several archetypes from the openEHR Clinical Knowledge Repository have been imported into COSMIC, preserving most of the structural and terminology related constraints. COSMIC templates from a large regional installation were successfully converted into the openEHR archetype format. The conversion from the COSMIC templates into archetype format preserves nearly all structural and semantic definitions of the original content models. A strategy of gradually adding archetype support to legacy EHR systems was formulated in order to allow sharing of clinical content models defined using different formats. 
Conclusion The openEHR RM and AM are expressive enough to represent the existing clinical content models from the template based EHR system tested and legacy content models can automatically be converted to archetype format for sharing of knowledge. With some limitations, internationally available archetypes could be converted to the legacy EHR models. Archetype support can be added to legacy EHR systems in an incremental way allowing a migration path to interoperability based on standards. PMID:19570196
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elkin, Christopher; Kapur, Hitesh; Smith, Troy
2001-09-15
We have developed an automated purification method for terminator sequencing products based on a magnetic bead technology. This 384-well protocol generates labeled DNA fragments that are essentially free of contaminants for less than $0.005 per reaction. In comparison to laborious ethanol precipitation protocols, this method increases the phred20 read length by forty bases with various DNA templates such as PCR fragments, plasmids, cosmids and RCA products. Our method eliminates centrifugation and is compatible with both the MegaBACE 1000 and ABI Prism 3700 capillary instruments. As of September 2001, this method has produced over 1.6 million samples with 93 percent averaging 620 phred20 bases as part of the Joint Genome Institute's production process.
Screening Chemicals for Estrogen Receptor Bioactivity Using a Computational Model.
Browne, Patience; Judson, Richard S; Casey, Warren M; Kleinstreuer, Nicole C; Thomas, Russell S
2015-07-21
The U.S. Environmental Protection Agency (EPA) is considering high-throughput and computational methods to evaluate the endocrine bioactivity of environmental chemicals. Here we describe a multistep, performance-based validation of new methods and demonstrate that these new tools are sufficiently robust to be used in the Endocrine Disruptor Screening Program (EDSP). Results from 18 estrogen receptor (ER) ToxCast high-throughput screening assays were integrated into a computational model that can discriminate bioactivity from assay-specific interference and cytotoxicity. Model scores range from 0 (no activity) to 1 (bioactivity of 17β-estradiol). ToxCast ER model performance was evaluated for reference chemicals, as well as results of EDSP Tier 1 screening assays in current practice. The ToxCast ER model accuracy was 86% to 93% when compared to reference chemicals and predicted results of EDSP Tier 1 guideline and other uterotrophic studies with 84% to 100% accuracy. The performance of high-throughput assays and ToxCast ER model predictions demonstrates that these methods correctly identify active and inactive reference chemicals, provide a measure of relative ER bioactivity, and rapidly identify chemicals with potential endocrine bioactivities for additional screening and testing. EPA is accepting ToxCast ER model data for 1812 chemicals as alternatives for EDSP Tier 1 ER binding, ER transactivation, and uterotrophic assays.
Model-based high-throughput design of ion exchange protein chromatography.
Khalaf, Rushd; Heymann, Julia; LeSaout, Xavier; Monard, Florence; Costioli, Matteo; Morbidelli, Massimo
2016-08-12
This work describes the development of a model-based high-throughput design (MHD) tool for the operating space determination of a chromatographic cation-exchange protein purification process. Based on a previously developed thermodynamic mechanistic model, the MHD tool generates a large amount of system knowledge and thereby permits minimizing the required experimental workload. In particular, each new experiment is designed to generate information needed to help refine and improve the model. Unnecessary experiments that do not increase system knowledge are avoided. Instead of aspiring to a perfectly parameterized model, the goal of this design tool is to use early model parameter estimates to find interesting experimental spaces, and to refine the model parameter estimates with each new experiment until a satisfactory set of process parameters is found. The MHD tool is split into four sections: (1) prediction, high throughput experimentation using experiments in (2) diluted conditions and (3) robotic automated liquid handling workstations (robotic workstation), and (4) operating space determination and validation. (1) Protein and resin information, in conjunction with the thermodynamic model, is used to predict protein resin capacity. (2) The predicted model parameters are refined based on gradient experiments in diluted conditions. (3) Experiments on the robotic workstation are used to further refine the model parameters. (4) The refined model is used to determine operating parameter space that allows for satisfactory purification of the protein of interest on the HPLC scale. Each section of the MHD tool is used to define the adequate experimental procedures for the next section, thus avoiding any unnecessary experimental work. 
We used the MHD tool to design a polishing step for two proteins, a monoclonal antibody and a fusion protein, on two chromatographic resins, in order to demonstrate it has the ability to strongly accelerate the early phases of process development. Copyright © 2016 Elsevier B.V. All rights reserved.
Chan, Leo Li-Ying; Smith, Tim; Kumph, Kendra A; Kuksin, Dmitry; Kessel, Sarah; Déry, Olivier; Cribbes, Scott; Lai, Ning; Qiu, Jean
2016-10-01
To ensure cell-based assays are performed properly, both cell concentration and viability have to be determined so that the data can be normalized to generate meaningful and comparable results. Cell-based assays performed in immuno-oncology, toxicology, or bioprocessing research often require measuring of multiple samples and conditions, thus the current automated cell counter that uses single disposable counting slides is not practical for high-throughput screening assays. In the recent years, a plate-based image cytometry system has been developed for high-throughput biomolecular screening assays. In this work, we demonstrate a high-throughput AO/PI-based cell concentration and viability method using the Celigo image cytometer. First, we validate the method by comparing directly to Cellometer automated cell counter. Next, cell concentration dynamic range, viability dynamic range, and consistency are determined. The high-throughput AO/PI method described here allows for 96-well to 384-well plate samples to be analyzed in less than 7 min, which greatly reduces the time required for the single sample-based automated cell counter. In addition, this method can improve the efficiency for high-throughput screening assays, where multiple cell counts and viability measurements are needed prior to performing assays such as flow cytometry, ELISA, or simply plating cells for cell culture.
Mapping monomeric threading to protein-protein structure prediction.
Guerler, Aysam; Govindarajoo, Brandon; Zhang, Yang
2013-03-25
The key step of template-based protein-protein structure prediction is the recognition of complexes from experimental structure libraries that have a similar quaternary fold. Maintaining two monomer and dimer structure libraries is however laborious, and inappropriate library construction can degrade template recognition coverage. We propose a novel strategy, SPRING, to identify complexes by mapping monomeric threading alignments to protein-protein interactions based on the original oligomer entries in the PDB, which does not rely on library construction and increases the efficiency and quality of complex template recognition. SPRING was tested on 1838 nonhomologous protein complexes; it recognizes correct quaternary template structures with a TM-score >0.5 in 1115 cases after excluding homologous proteins. The average TM-score of the first model is 60% and 17% higher than that by HHsearch and COTH, respectively, while the number of targets with an interface RMSD <2.5 Å by SPRING is 134% and 167% higher than these competing methods. SPRING was benchmarked against ZDOCK on 77 docking benchmark proteins. Although the relative performance of SPRING and ZDOCK depends on the level of homology filters, a combination of the two methods can result in a significantly higher model quality than ZDOCK at all homology thresholds. These data demonstrate a new efficient approach to quaternary structure recognition that is ready to use for genome-scale modeling of protein-protein interactions due to its high speed and accuracy.
Predicting dermal penetration for ToxCast chemicals using in silico estimates for diffusion in combination with physiologically based pharmacokinetic (PBPK) modeling. Evans, M.V., Sawyer, M.E., Isaacs, K.K., and Wambaugh, J. With the development of efficient high-throughput (HT) in ...
Species-specific predictive models of developmental toxicity using the ToxCast chemical library
EPA’s ToxCast™ project is profiling the in vitro bioactivity of chemicals to generate predictive models that correlate with observed in vivo toxicity. In vitro profiling methods are based on ToxCast data, consisting of over 600 high-throughput screening (HTS) and high-content sc...
DNA polymerase preference determines PCR priming efficiency.
Pan, Wenjing; Byrne-Steele, Miranda; Wang, Chunlin; Lu, Stanley; Clemmons, Scott; Zahorchak, Robert J; Han, Jian
2014-01-30
Polymerase chain reaction (PCR) is one of the most important developments in modern biotechnology. However, PCR is known to introduce biases, especially during multiplex reactions. Recent studies have implicated the DNA polymerase as the primary source of bias, particularly initiation of polymerization on the template strand. In our study, amplification from a synthetic library containing a 12 nucleotide random portion was used to provide an in-depth characterization of DNA polymerase priming bias. The synthetic library was amplified with three commercially available DNA polymerases using an anchored primer with a random 3' hexamer end. After normalization, the next generation sequencing (NGS) results of the amplified libraries were directly compared to the unamplified synthetic library. Here, high throughput sequencing was used to systematically demonstrate and characterize DNA polymerase priming bias. We demonstrate that certain sequence motifs are preferred over others as primers where the six nucleotide sequences at the 3' end of the primer, as well as the sequences four base pairs downstream of the priming site, may influence priming efficiencies. DNA polymerases in the same family from two different commercial vendors prefer similar motifs, while another commercially available enzyme from a different DNA polymerase family prefers different motifs. Furthermore, the preferred priming motifs are GC-rich. The DNA polymerase preference for certain sequence motifs was verified by amplification from single-primer templates. We incorporated the observed DNA polymerase preference into a primer-design program that guides the placement of the primer to an optimal location on the template. DNA polymerase priming bias was characterized using a synthetic library amplification system and NGS. 
The characterization of DNA polymerase priming bias was then utilized to guide the primer-design process and demonstrate varying amplification efficiencies among three commercially available DNA polymerases. The results suggest that the interaction of the DNA polymerase with the primer:template junction during the initiation of DNA polymerization is very important in terms of overall amplification bias and has broader implications for both the primer design process and multiplex PCR.
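The kind of motif-preference analysis described, comparing k-mer frequencies at priming sites between amplified and unamplified libraries, can be sketched with a simple pseudocount-smoothed enrichment ratio. The function and variable names are illustrative, not from the paper's pipeline:

```python
from collections import Counter

def motif_enrichment(amplified_reads, reference_reads, k=6):
    """Enrichment of the k-mer at each read's start (the primer's 3'
    hexamer footprint) in amplified reads relative to the unamplified
    reference library, with +1 pseudocount smoothing."""
    amp = Counter(read[:k] for read in amplified_reads)
    ref = Counter(read[:k] for read in reference_reads)
    motifs = set(amp) | set(ref)
    amp_total = sum(amp.values()) + len(motifs)
    ref_total = sum(ref.values()) + len(motifs)
    return {m: ((amp[m] + 1) / amp_total) / ((ref[m] + 1) / ref_total)
            for m in motifs}

def gc_fraction(motif):
    """Fraction of G/C bases in a motif."""
    return sum(base in "GC" for base in motif) / len(motif)
```

Ranking motifs by this ratio and inspecting their GC fraction is one way to reproduce the paper's qualitative observation that the preferred priming motifs are GC-rich.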
Jiang, Guoqian; Evans, Julie; Endle, Cory M; Solbrig, Harold R; Chute, Christopher G
2016-01-01
The Biomedical Research Integrated Domain Group (BRIDG) model is a formal domain analysis model for protocol-driven biomedical research, and serves as a semantic foundation for application and message development in the standards developing organizations (SDOs). The increasing sophistication and complexity of the BRIDG model requires new approaches to the management and utilization of the underlying semantics to harmonize domain-specific standards. The objective of this study is to develop and evaluate a Semantic Web-based approach that integrates the BRIDG model with ISO 21090 data types to generate domain-specific templates to support clinical study metadata standards development. We developed a template generation and visualization system based on an open source Resource Description Framework (RDF) store backend, a SmartGWT-based web user interface, and a "mind map" based tool for the visualization of generated domain-specific templates. We also developed a RESTful Web Service informed by the Clinical Information Modeling Initiative (CIMI) reference model for access to the generated domain-specific templates. A preliminary usability study was performed, and all reviewers (n = 3) responded very positively to the evaluation questions on usability and on the system's capability to meet the requirements (with an average score of 4.6). Semantic Web technologies provide a scalable infrastructure and have great potential to enable computable semantic interoperability of models in the intersection of health care and clinical research.
Towards a high performance geometry library for particle-detector simulations
Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; ...
2015-05-22
Thread-parallelization and single-instruction multiple data (SIMD) "vectorisation" of software components in HEP computing have become a necessity to fully benefit from current and future computing hardware. In this context, the Geant-Vector/GPU simulation project aims to re-engineer current software for the simulation of the passage of particles through detectors in order to increase the overall event throughput. As one of the core modules in this area, the geometry library plays a central role, and vectorising its algorithms will be one of the cornerstones towards achieving good CPU performance. Here, we report on the progress made in vectorising the shape primitives, as well as in applying new C++ template-based optimizations of existing code available in the Geant4, ROOT or USolids geometry libraries. We focus on a presentation of our software development approach that aims to provide optimized code for all use cases of the library (e.g., single-particle and many-particle APIs) and to support different architectures (CPU and GPU) while keeping the code base small, manageable and maintainable. We report on a generic and templated C++ geometry library as a continuation of the AIDA USolids project. The experience gained with these developments will be beneficial to other parts of the simulation software, such as the optimization of the physics library, and possibly to other parts of the experiment software stack, such as reconstruction and analysis.
High-throughput screening based on label-free detection of small molecule microarrays
NASA Astrophysics Data System (ADS)
Zhu, Chenggang; Fei, Yiyan; Zhu, Xiangdong
2017-02-01
Based on small-molecule microarrays (SMMs) and an oblique-incidence reflectivity difference (OI-RD) scanner, we have developed a novel high-throughput preliminary drug screening platform based on label-free monitoring of direct interactions between target proteins and immobilized small molecules. The screening platform is especially attractive for screening compounds against targets of unknown function and/or structure that are not compatible with functional assay development. In this platform, the OI-RD scanner serves as a label-free detection instrument able to monitor about 15,000 biomolecular interactions in a single experiment without the need to label any biomolecule. In addition, SMMs serve as a novel format for high-throughput screening, immobilizing tens of thousands of different compounds on a single phenyl-isocyanate-functionalized glass slide. Using this platform, we sequentially screened five target proteins (purified target proteins or cell lysate containing the target protein) in high-throughput, label-free mode. We found hits for each target protein, and the inhibitory effects of some hits were confirmed by subsequent functional assays. Compared with traditional high-throughput screening assays, this platform has many advantages, including minimal sample consumption, minimal distortion of interactions through label-free detection, and multi-target screening analysis, giving it great potential as a complementary screening platform in the field of drug discovery.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-01-01
Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed to recognize specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and, consequently, tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM), based on the massive integration of 14 diverse complementary quality assessment methods, that was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of the Cα trace, local all-atom fitness, side-chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods in identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. PMID:26369671
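The integration of complementary quality-assessment scores described above can be reduced to a simple idea: standardize each method's scores and sum them per model. Below is a minimal, hypothetical sketch of that idea in Python; the method names and score values are illustrative, and the actual MULTICOM pipeline combines 14 methods with model clustering, which is not reproduced here.

```python
# Hedged sketch: rank models by summing per-method z-scores.
# Method names ("qa_single", "qa_cluster") and scores are made up.

def zscores(values):
    """Standardize a list of scores to zero mean, unit variance."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5 or 1.0
    return [(v - mean) / sd for v in values]

def consensus_rank(score_table):
    """score_table: {method: {model: score}} -> models sorted best-first."""
    combined = {}
    for method, scores in score_table.items():
        models = list(scores)
        for model, z in zip(models, zscores([scores[m] for m in models])):
            combined[model] = combined.get(model, 0.0) + z
    return sorted(combined, key=combined.get, reverse=True)

scores = {
    "qa_single": {"model_a": 0.71, "model_b": 0.64, "model_c": 0.80},
    "qa_cluster": {"model_a": 0.55, "model_b": 0.70, "model_c": 0.75},
}
print(consensus_rank(scores))  # model_c ranks first
```

Z-score summation is only one of several simple aggregation schemes (rank averaging and weighted sums are common alternatives).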
Handfield, Louis-François; Chong, Yolanda T.; Simmons, Jibril; Andrews, Brenda J.; Moses, Alan M.
2013-01-01
Protein subcellular localization has been systematically characterized in budding yeast using fluorescently tagged proteins. Based on the fluorescence microscopy images, the subcellular localization of many proteins can be classified automatically using supervised machine learning approaches that have been trained to recognize predefined image classes based on statistical features. Here, we present an unsupervised analysis of protein expression patterns in a set of high-resolution, high-throughput microscope images. Our analysis is based on 7 biologically interpretable features which are evaluated on automatically identified cells, and whose cell-stage dependency is captured by a continuous model for cell growth. We show that it is possible to identify most previously identified localization patterns in a cluster analysis based on these features and that similarities between the inferred expression patterns contain more information about protein function than can be explained by a previous manual categorization of subcellular localization. Furthermore, the inferred cell stage associated with each fluorescence measurement allows us to visualize large groups of proteins entering the bud at specific stages of bud growth. These correspond to proteins localized to organelles, revealing that the organelles must enter the bud in a stereotypical order. We also identify and organize a smaller group of proteins that show subtle differences in the way they move around the bud during growth. Our results suggest that biologically interpretable features based on explicit models of cell morphology will yield unprecedented power for pattern discovery in high-resolution, high-throughput microscopy images. PMID:23785265
As defined by Wikipedia (https://en.wikipedia.org/wiki/Metamodeling), “(a) metamodel or surrogate model is a model of a model, and metamodeling is the process of generating such metamodels.” The goals of metamodeling include, but are not limited to (1) developing functional or st...
3D shape analysis of the brain's third ventricle using a midplane encoded symmetric template model
Kim, Jaeil; Valdés Hernández, Maria del C.; Royle, Natalie A.; Maniega, Susana Muñoz; Aribisala, Benjamin S.; Gow, Alan J.; Bastin, Mark E.; Deary, Ian J.; Wardlaw, Joanna M.; Park, Jinah
2016-01-01
Background Structural changes of the brain's third ventricle have been acknowledged as an indicative measure of the brain atrophy progression in neurodegenerative and endocrinal diseases. To investigate the ventricular enlargement in relation to the atrophy of the surrounding structures, shape analysis is a promising approach. However, there are hurdles in modeling the third ventricle shape. First, it has topological variations across individuals due to the inter-thalamic adhesion. In addition, as an interhemispheric structure, it needs to be aligned to the midsagittal plane to assess its asymmetric and regional deformation. Method To address these issues, we propose a model-based shape assessment. Our template model of the third ventricle consists of a midplane and a symmetric mesh of generic shape. By mapping the template's midplane to the individuals’ brain midsagittal plane, we align the symmetric mesh on the midline of the brain before quantifying the third ventricle shape. To build the vertex-wise correspondence between the individual third ventricle and the template mesh, we employ a minimal-distortion surface deformation framework. In addition, to account for topological variations, we implement geometric constraints guiding the template mesh to have zero width where the inter-thalamic adhesion passes through, preventing vertices crossing between left and right walls of the third ventricle. The individual shapes are compared using a vertex-wise deformity from the symmetric template. Results Experiments on imaging and demographic data from a study of aging showed that our model was sensitive in assessing morphological differences between individuals in relation to brain volume (i.e. proxy for general brain atrophy), gender and the fluid intelligence at age 72. It also revealed that the proposed method can detect the regional and asymmetrical deformation unlike the conventional measures: volume (median 1.95 ml, IQR 0.96 ml) and width of the third ventricle. 
Similarity measures between binary masks and the shape model showed that the latter reconstructed shape details with high accuracy (Dice coefficient ≥0.9, mean distance 0.5 mm and Hausdorff distance 2.7 mm). Conclusions We have demonstrated that our approach is suitable to morphometrical analyses of the third ventricle, providing high accuracy and inter-subject consistency in the shape quantification. This shape modeling method with geometric constraints based on anatomical landmarks could be extended to other brain structures which require a consistent measurement basis in the morphometry. PMID:27084320
Evaluating High Throughput Toxicokinetics and Toxicodynamics for IVIVE (WC10)
High-throughput screening (HTS) generates in vitro data for characterizing potential chemical hazard. TK models are needed to allow in vitro to in vivo extrapolation (IVIVE) to real world situations. The U.S. EPA has created a public tool (R package “httk” for high throughput tox...
Electromechanical behavior of [001]-textured Pb(Mg1/3Nb2/3)O3-PbTiO3 ceramics
NASA Astrophysics Data System (ADS)
Yan, Yongke; Wang, Yu. U.; Priya, Shashank
2012-05-01
[001]-textured Pb(Mg1/3Nb2/3)O3-PbTiO3 (PMN-PT) ceramics were synthesized using the templated grain growth method. A high degree of [001] texture, corresponding to a Lotgering factor of 0.98, was achieved with 1 vol. % BaTiO3 template. Electromechanical properties of the [001]-textured PMN-PT ceramics with 1 vol. % BaTiO3 were found to be d33 = 1000 pC/N, d31 = 371 pC/N, ɛr = 2591, and tanδ = ~0.6%. Elastoelectric composite-based modeling showed that a higher volume fraction of template reduces the overall dielectric constant and thus adversely affects the piezoelectric response. The clamping effect was modeled by deriving the change in free energy as a function of applied electric field and microstructural boundary condition.
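The Lotgering factor quoted above (0.98) is a standard texture measure computed from XRD peak intensities: f = (P - P0)/(1 - P0), where P is the summed intensity of the oriented (00l) reflections over the total summed intensity for the textured sample, and P0 is the same ratio for a randomly oriented powder. A short sketch, with made-up intensity values rather than data from this study:

```python
# Sketch of the Lotgering factor f = (P - P0) / (1 - P0).
# The (hkl) intensity values below are illustrative only.

def lotgering_factor(I_textured, I_random, oriented=("001", "002", "003")):
    """Texture degree from summed XRD intensities (1 = fully textured)."""
    def p(I):
        return sum(I[h] for h in oriented if h in I) / sum(I.values())
    P, P0 = p(I_textured), p(I_random)
    return (P - P0) / (1.0 - P0)

textured = {"001": 90, "110": 5, "002": 95, "111": 10}
random_powder = {"001": 10, "110": 60, "002": 15, "111": 40}
f = lotgering_factor(textured, random_powder)
```

With these toy numbers P = 0.925 and P0 = 0.2, so f is close to the highly textured regime reported in the abstract.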
Chappell, James; Jensen, Kirsten; Freemont, Paul S.
2013-01-01
A bottleneck in our capacity to rationally and predictably engineer biological systems is the limited number of well-characterized genetic elements from which to build. Current characterization methods are tied to measurements in living systems, the transformation and culturing of which are inherently time-consuming. To address this, we have validated a completely in vitro approach for the characterization of DNA regulatory elements using Escherichia coli extract cell-free systems. Importantly, we demonstrate that characterization in cell-free systems correlates and is reflective of performance in vivo for the most frequently used DNA regulatory elements. Moreover, we devise a rapid and completely in vitro method to generate DNA templates for cell-free systems, bypassing the need for DNA template generation and amplification from living cells. This in vitro approach is significantly quicker than current characterization methods and is amenable to high-throughput techniques, providing a valuable tool for rapidly prototyping libraries of DNA regulatory elements for synthetic biology. PMID:23371936
Synthetic aperture radar target detection, feature extraction, and image formation techniques
NASA Technical Reports Server (NTRS)
Li, Jian
1994-01-01
This report presents new algorithms for target detection, feature extraction, and image formation with the synthetic aperture radar (SAR) technology. For target detection, we consider target detection with SAR and coherent subtraction. We also study how the image false alarm rates are related to the target template false alarm rates when target templates are used for target detection. For feature extraction from SAR images, we present a computationally efficient eigenstructure-based 2D-MODE algorithm for two-dimensional frequency estimation. For SAR image formation, we present a robust parametric data model for estimating high resolution range signatures of radar targets and for forming high resolution SAR images.
Shirai, Hiroki; Ikeda, Kazuyoshi; Yamashita, Kazuo; Tsuchiya, Yuko; Sarmiento, Jamica; Liang, Shide; Morokata, Tatsuaki; Mizuguchi, Kenji; Higo, Junichi; Standley, Daron M; Nakamura, Haruki
2014-08-01
In the second antibody modeling assessment, we used a semiautomated template-based structure modeling approach for 11 blinded antibody variable region (Fv) targets. The structural modeling method involved several steps, including template selection for framework and canonical structures of complementarity-determining regions (CDRs), homology modeling, energy minimization, and expert inspection. The submitted models for Fv modeling in Stage 1 had the lowest average backbone root mean square deviation (RMSD) (1.06 Å). Comparison to crystal structures showed that the most accurate Fv models were generated for 4 out of 11 targets. We found that the successful modeling in Stage 1 was mainly due to expert-guided template selection for CDRs, especially for CDR-H3, based on our previously proposed empirical method (H3-rules) and the use of position-specific scoring matrix-based scoring. Loop refinement using fragment assembly and multicanonical molecular dynamics (McMD) was applied to CDR-H3 loop modeling in Stage 2. Fragment assembly and McMD produced putative structural ensembles with low free energy values that were scored based on the OSCAR all-atom force field and conformation density in principal component analysis space, respectively, as well as the degree of consensus between the two sampling methods. The quality of 8 out of 10 targets improved as compared with Stage 1. For 4 out of 10 Stage-2 targets, our method generated top-scoring models with RMSD values of less than 1 Å. In this article, we discuss the strengths and weaknesses of our approach as well as possible directions for improvement to generate better predictions in the future. © 2014 Wiley Periodicals, Inc.
Template-based automatic extraction of the joint space of foot bones from CT scan
NASA Astrophysics Data System (ADS)
Park, Eunbi; Kim, Taeho; Park, Jinah
2016-03-01
Clean bone segmentation is critical in studying joint anatomy for measuring the spacing between bones. However, separation of coupled bones in CT images is sometimes difficult due to ambiguous gray values arising from noise and the heterogeneity of bone materials, as well as narrowing of the joint space. For fine reconstruction of the individual local boundaries, manual operation is common practice, and segmentation remains a bottleneck. In this paper, we present an automatic method for extracting the joint space by applying graph cut on a Markov random field model to a region of interest (ROI) identified by a template of 3D bone structures. The template includes an encoded articular surface which identifies the tight region of the high-intensity bone boundaries together with the fuzzy joint area of interest. The localized shape information from the template model within the ROI effectively separates the nearby bones. By narrowing the ROI down to a region including two types of tissue, the object extraction problem was reduced to binary segmentation and solved via graph cut. Based on the shape of the joint space marked by the template, hard constraints were set by initial seeds generated automatically from thresholding and morphological operations. The performance and robustness of the proposed method are evaluated on 12 volumes of ankle CT data, where each volume includes a set of 4 tarsal bones (calcaneus, talus, navicular and cuboid).
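The automatic seed-generation step described in this record (thresholding followed by morphological operations) can be sketched in a few lines. The toy 2D "slice" and threshold below are illustrative, not the paper's actual data or parameters, and a real pipeline would operate on 3D CT volumes:

```python
# Hedged sketch: intensity thresholding + one step of binary erosion
# to produce conservative interior seeds for a graph-cut hard constraint.

def threshold(image, level):
    """Binary mask of pixels at or above an intensity level."""
    return [[1 if v >= level else 0 for v in row] for row in image]

def erode(mask):
    """4-neighbourhood binary erosion; border pixels are dropped."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(mask[y][x] and mask[y-1][x] and mask[y+1][x]
                            and mask[y][x-1] and mask[y][x+1])
    return out

slice_ = [
    [10,  10,  10,  10, 10],
    [10, 200, 210, 205, 10],
    [10, 215, 220, 208, 10],
    [10, 207, 212, 209, 10],
    [10,  10,  10,  10, 10],
]
seeds = erode(threshold(slice_, 100))  # only the innermost pixel survives
```

Eroding the thresholded mask keeps seeds away from ambiguous boundary voxels, which is why such seeds are safe to use as hard constraints.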
Rethinking the Educator Portfolio: An Innovative Criteria-Based Model.
Shinkai, Kanade; Chen, Chen Amy; Schwartz, Brian S; Loeser, Helen; Ashe, Cynthia; Irby, David M
2017-11-07
Academic medical centers struggle to achieve parity in advancement and promotions between educators and discovery-oriented researchers in part because of narrow definitions of scholarship, lack of clear criteria for measuring excellence, and barriers to making educational contributions available for peer review. Despite recent progress in expanding scholarship definitions and identifying excellence criteria, these advances are not integrated into educator portfolio (EP) templates or curriculum vitae platforms. From 2013 to 2015, a working group from the Academy of Medical Educators (AME) at the University of California, San Francisco (UCSF) designed a streamlined, criteria-based EP (EP 2.0) template highlighting faculty members' recent activities in education and setting rigorous evaluation methods to enable educational scholarship to be objectively evaluated for academic advancement, AME membership, and professional development. The EP 2.0 template was integrated into the AME application, resulting in high overall satisfaction among candidates and the selection committee and positive feedback on the template's transparency, ease of use, and streamlined format. In 2016, the EP 2.0 template was integrated into the campus-wide curriculum vitae platform and academic advancement system. The authors plan to increase awareness of the EP 2.0 template by educating promotions committees and faculty at UCSF and partnering with other institutions to disseminate it for use. They also plan to study the impact of the template on supporting educators by making their important scholarly contributions available for peer review, providing guidance for professional development, and decreasing disparities in promotions.
Bai, Zhi-Ru; Fei, Hong-Qiang; Li, Na; Cao, Liang; Zhang, Chen-Feng; Wang, Tuan-Jie; Ding, Gang; Wang, Zhen-Zhong; Xiao, Wei
2016-02-01
Prostaglandin (PG) E2 is an active substance in pathological and physiological processes such as inflammation and pain. An in vitro high-throughput assay for screening inhibitors that reduce PGE2 production is a useful method for identifying anti-inflammatory and analgesic candidates. The assay was based on an LPS-induced PGE2 production model using a homogeneous time-resolved fluorescence (HTRF) PGE2 testing kit combined with liquid-handling automation and detection instruments. The critical steps, including cell density optimization and IC50 determination for a positive compound, were taken to verify the stability and sensitivity of the assay. Low intra-plate, inter-plate and day-to-day variability were observed in this 384-well, high-throughput format assay. In total, 5,121 samples were selected from the company's traditional Chinese medicine (TCM) material base library and used to screen for PGE2 inhibitors. In this model, the cell plating density was 2,000 cells per well; the average IC50 value for positive compounds was (7.3±0.1) μmol; and the Z' factor for test plates exceeded 0.5, averaging 0.7. Among the 5,121 samples, 228 components exhibited a PGE2 production inhibition rate of more than 50%, and 23 components more than 80%. The model reached the expected standards of data stability and accuracy, indicating the reliability and authenticity of the screening results. An automated screening system was introduced to make the model fast and efficient, with an average daily screening throughput exceeding 14,000 data points, providing a new model for discovering new anti-inflammatory and analgesic drugs and quickly screening effective constituents of TCM at an early stage. Copyright© by the Chinese Pharmaceutical Association.
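The Z' factor used above to judge plate quality has a standard definition based on the means and standard deviations of the positive and negative controls; a plate is conventionally considered acceptable when Z' > 0.5. A minimal sketch with illustrative control readings (not data from this study):

```python
# Sketch of the Z' (Z-prime) assay quality statistic:
# Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|
# Control readings below are made-up numbers.

def z_prime(pos, neg):
    """Z' factor from positive- and negative-control readings."""
    def mean_sd(xs):
        m = sum(xs) / len(xs)
        sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
        return m, sd
    mp, sp = mean_sd(pos)
    mn, sn = mean_sd(neg)
    return 1.0 - 3.0 * (sp + sn) / abs(mp - mn)

positives = [100.0, 102.0, 98.0, 100.0]
negatives = [10.0, 12.0, 8.0, 10.0]
print(round(z_prime(positives, negatives), 3))  # → 0.906
```

A Z' near 1 means the control distributions are well separated relative to their spread, matching the averaged 0.7 reported above.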
Evaluating Rapid Models for High-Throughput Exposure Forecasting (SOT)
High throughput exposure screening models can provide quantitative predictions for thousands of chemicals; however these predictions must be systematically evaluated for predictive ability. Without the capability to make quantitative, albeit uncertain, forecasts of exposure, the ...
Template-free modeling by LEE and LEER in CASP11.
Joung, InSuk; Lee, Sun Young; Cheng, Qianyi; Kim, Jong Yun; Joo, Keehyoung; Lee, Sung Jong; Lee, Jooyoung
2016-09-01
For the template-free modeling of human targets of CASP11, we utilized two of our modeling protocols, LEE and LEER. The LEE protocol took CASP11-released server models as the input and used some of them as templates for 3D (three-dimensional) modeling. The template selection procedure was based on the clustering of the server models aided by a community detection method applied to a server-model network. Restraining energy terms generated from the selected templates together with physical and statistical energy terms were used to build 3D models. Side-chains of the 3D models were rebuilt using a target-specific consensus side-chain library along with the SCWRL4 rotamer library, which completed the LEE protocol. The first success factor of the LEE protocol was efficient server model screening: the average backbone accuracy of selected server models was similar to that of the top 30% of server models. The second factor was that a proper energy function along with our optimization method guided us to generate better quality models than the input template models. In 10 out of 24 cases, better backbone structures than the best of the input template structures were generated. LEE models were further refined by performing restrained molecular dynamics simulations to generate LEER models. CASP11 results indicate that LEE models were better than the average template models in terms of both backbone structures and side-chain orientations. LEER models were of improved physical realism and stereochemistry compared to LEE models, and were comparable to LEE models in backbone accuracy. Proteins 2016; 84(Suppl 1):118-130. © 2015 Wiley Periodicals, Inc.
Ralph, Duncan K; Matsen, Frederick A
2016-01-01
VDJ rearrangement and somatic hypermutation work together to produce antibody-coding B cell receptor (BCR) sequences for a remarkable diversity of antigens. It is now possible to sequence these BCRs in high throughput; analysis of these sequences is bringing new insight into how antibodies develop, in particular for broadly-neutralizing antibodies against HIV and influenza. A fundamental step in such sequence analysis is to annotate each base as coming from a specific one of the V, D, or J genes, or from an N-addition (a.k.a. non-templated insertion). Previous work has used simple parametric distributions to model transitions from state to state in a hidden Markov model (HMM) of VDJ recombination, and assumed that mutations occur via the same process across sites. However, codon frame and other effects have been observed to violate these parametric assumptions for such coding sequences, suggesting that a non-parametric approach to modeling the recombination process could be useful. In our paper, we find that large modern data sets indeed support a model using parameter-rich per-allele categorical distributions for HMM transition probabilities and per-allele-per-position mutation probabilities, and that using such a model for inference leads to significantly improved results. We present an accurate and efficient BCR sequence annotation software package using a novel HMM "factorization" strategy. This package, called partis (https://github.com/psathyrella/partis/), is built on a new general-purpose HMM compiler that can perform efficient inference given a simple text description of an HMM.
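The move from a single shared mutation rate to per-position probabilities can be illustrated with a toy estimator. The germline and read sequences below are invented and far shorter than real BCR data, and this is a conceptual sketch, not code from the partis package:

```python
# Hedged sketch: estimate a categorical per-position mutation frequency
# against a germline gene, instead of assuming one rate for all sites.
# Sequences are toy data.

def per_position_mutation_freqs(germline, observed_seqs):
    """For each germline position, the fraction of reads that differ."""
    n = len(observed_seqs)
    return [sum(seq[i] != base for seq in observed_seqs) / n
            for i, base in enumerate(germline)]

germline = "ACGT"
reads = ["ACGT", "ACGA", "TCGA", "ACGT"]
freqs = per_position_mutation_freqs(germline, reads)
# position 0 is mutated in 1/4 reads, position 3 in 2/4
```

In a full HMM these empirical frequencies would parameterize per-allele-per-position emission probabilities, typically with smoothing for sparsely observed positions.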
Penaluna, Brooke E.; Railsback, Steve F.; Dunham, Jason B.; Johnson, S.; Bilby, Richard E.; Skaugset, Arne E.
2015-01-01
The importance of multiple processes and instream factors to aquatic biota has been explored extensively, but questions remain about how local spatiotemporal variability of aquatic biota is tied to environmental regimes and the geophysical template of streams. We used an individual-based trout model to explore the relative role of the geophysical template versus environmental regimes on biomass of trout (Oncorhynchus clarkii clarkii). We parameterized the model with observed data from each of the four headwater streams (their local geophysical template and environmental regime) and then ran 12 simulations where we replaced environmental regimes (stream temperature, flow, turbidity) of a given stream with values from each neighboring stream while keeping the geophysical template fixed. We also performed single-parameter sensitivity analyses on the model results from each of the four streams. Although our modeled findings show that trout biomass is most responsive to changes in the geophysical template of streams, they also reveal that biomass is restricted by available habitat during seasonal low flow, which is a product of both the stream’s geophysical template and flow regime. Our modeled results suggest that differences in the geophysical template among streams render trout more or less sensitive to environmental change, emphasizing the importance of local fish–habitat relationships in streams.
Kastner, Elisabeth; Kaur, Randip; Lowry, Deborah; Moghaddam, Behfar; Wilkinson, Alexander; Perrie, Yvonne
2014-12-30
Microfluidics has recently emerged as a new method of manufacturing liposomes, allowing reproducible mixing in milliseconds on the nanoliter scale. Here we investigate microfluidics-based manufacturing of liposomes. The aim of these studies was to assess the parameters in a microfluidic process by varying the total flow rate (TFR) and the flow rate ratio (FRR) of the solvent and aqueous phases. Design of experiments and multivariate data analysis were used for increased process understanding and development of predictive and correlative models. A high FRR led to the bottom-up synthesis of liposomes, with a strong correlation with vesicle size, demonstrating the ability to control liposome size in-process; liposomes of 50 nm were reproducibly manufactured. Furthermore, we demonstrate the potential of high-throughput manufacturing of liposomes using microfluidics with a four-fold increase in the volumetric flow rate while maintaining liposome characteristics. The efficacy of these liposomes was demonstrated in transfection studies and was modelled using predictive modelling. Mathematical modelling identified FRR as the key variable in the microfluidic process, with the highest impact on liposome size, polydispersity and transfection efficiency. This study demonstrates microfluidics as a robust and high-throughput method for the scalable and highly reproducible manufacture of size-controlled liposomes. Furthermore, the application of statistically based process control increases understanding and allows the generation of a design space for controlled particle characteristics. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
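A correlative model relating FRR to liposome size can be illustrated by an ordinary least-squares fit; the FRR/size pairs below are made-up numbers for illustration, not data from this study, and the real work used multivariate design-of-experiments models rather than a single-variable line:

```python
# Hedged sketch: fit size = a*FRR + b by ordinary least squares.
# The data points are illustrative only.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

frr = [1, 2, 3, 4, 5]              # aqueous:solvent flow rate ratio
size_nm = [180, 140, 100, 75, 50]  # hypothetical liposome diameters
a, b = fit_line(frr, size_nm)
print(round(a * 3 + b, 1))         # predicted size at FRR = 3 → 109.0
```

The negative slope encodes the trend described above: a higher FRR yields smaller vesicles, which is what makes size controllable in-process.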
Structure based drug design: development of potent and selective factor IXa (FIXa) inhibitors.
Wang, Shouming; Beck, Richard; Burd, Andrew; Blench, Toby; Marlin, Frederic; Ayele, Tenagne; Buxton, Stuart; Dagostin, Claudio; Malic, Maja; Joshi, Rina; Barry, John; Sajad, Mohammed; Cheung, Chiming; Shaikh, Shaheda; Chahwala, Suresh; Chander, Chaman; Baumgartner, Christine; Holthoff, Hans-Peter; Murray, Elizabeth; Blackney, Michael; Giddings, Amanda
2010-02-25
On the basis of our understanding of the binding interactions of the benzothiophene template within the FIXa active site, gained from X-ray crystallography and molecular modeling studies, we developed our SAR strategy by targeting the 4-position of the template to access the S1 beta and S2-S4 sites. A number of highly selective and potent factor Xa (FXa) and FIXa inhibitors were identified by a simple switch of functional groups with conformational changes toward the S2-S4 sites.
Incorporating User Input in Template-Based Segmentation
Vidal, Camille; Beggs, Dale; Younes, Laurent; Jain, Sanjay K.; Jedynak, Bruno
2015-01-01
We present a simple and elegant method to incorporate user input in a template-based segmentation method for diseased organs. The user provides a partial segmentation of the organ of interest, which is used to guide the template towards its target. The user also highlights some elements of the background that should be excluded from the final segmentation. We derive by likelihood maximization a registration algorithm from a simple statistical image model in which the user labels are modeled as Bernoulli random variables. The resulting registration algorithm minimizes the sum of square differences between the binary template and the user labels, while preventing the template from shrinking, and penalizing for the inclusion of background elements into the final segmentation. We assess the performance of the proposed algorithm on synthetic images in which the amount of user annotation is controlled. We demonstrate our algorithm on the segmentation of the lungs of Mycobacterium tuberculosis infected mice from μCT images. PMID:26146532
Modeling limb-bud dysmorphogenesis in a predictive virtual embryo model
ToxCast is profiling the bioactivity of thousands of chemicals based on high-throughput screening (HTS) and computational methods that integrate knowledge of biological systems and in vivo toxicities (www.epa.gov/ncct/toxcast/). Many ToxCast assays assess signaling pathways and c...
Associating putative molecular initiating events (MIE) with downstream cell signaling pathways and modeling fetal exposure kinetics is an important challenge for integration in developmental systems toxicology. Here, we describe an integrative systems toxicology model for develop...
Sugano, Shigeo S; Suzuki, Hiroko; Shimokita, Eisuke; Chiba, Hirofumi; Noji, Sumihare; Osakabe, Yuriko; Osakabe, Keishi
2017-04-28
Mushroom-forming basidiomycetes produce a wide range of metabolites and have great value not only as food but also as an important global natural resource. Here, we demonstrate CRISPR/Cas9-based genome editing in the model species Coprinopsis cinerea. Using a high-throughput reporter assay with cryopreserved protoplasts, we identified a novel promoter, CcDED1 pro , with seven times stronger activity in this assay than the conventional promoter GPD2. To develop highly efficient genome editing using CRISPR/Cas9 in C. cinerea, we used the CcDED1 pro to express Cas9 and a U6-snRNA promoter from C. cinerea to express gRNA. Finally, CRISPR/Cas9-mediated GFP mutagenesis was performed in a stable GFP expression line. Individual genome-edited lines were isolated, and loss of GFP function was detected in hyphae and fruiting body primordia. This novel method of high-throughput CRISPR/Cas9-based genome editing using cryopreserved protoplasts should be a powerful tool in the study of edible mushrooms.
Statistical tools for transgene copy number estimation based on real-time PCR.
Yuan, Joshua S; Burris, Jason; Stewart, Nathan R; Mentewab, Ayalew; Stewart, C Neal
2007-11-01
Compared with traditional transgene copy number detection technologies such as Southern blot analysis, real-time PCR provides a fast, inexpensive and high-throughput alternative. However, real-time PCR-based transgene copy number estimation tends to be ambiguous and subjective, owing to the lack of proper statistical analysis and data quality control needed to render a reliable copy number estimate with a prediction value. Despite recent progress in the statistical analysis of real-time PCR, few publications have integrated these advances into real-time PCR-based transgene copy number determination. Three experimental designs and four statistical models with integrated data quality control are presented. For the first design, external calibration curves are established for the transgene based on serially diluted templates. The Ct values from a control transgenic event and a putative transgenic event are then compared to derive the transgene copy number or zygosity estimate. Simple linear regression and two-group t-test procedures were combined to model the data from this design. For the second experimental design, standard curves were generated for both an internal reference gene and the transgene, and the copy number of the transgene was compared with that of the internal reference gene. Multiple regression and ANOVA models can be employed to analyze the data and perform quality control for this approach. In the third experimental design, transgene copy number is compared with that of the reference gene without a standard curve, based directly on fluorescence data. Two different multiple regression models were proposed to analyze the data, based on two different approaches to integrating amplification efficiency. Our results highlight the importance of proper statistical treatment and integrated quality control in real-time PCR-based transgene copy number determination.
These statistical methods allow real-time PCR-based transgene copy number estimation to be more reliable and precise. Proper confidence intervals are necessary for unambiguous prediction of transgene copy number. The four statistical methods are compared for their advantages and disadvantages. Moreover, these methods can also be applied to other real-time PCR-based quantification assays, including transfection efficiency analysis and pathogen quantification.
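The first design (an external calibration curve from serially diluted templates, followed by a Ct comparison against a known single-copy calibrator event) can be sketched as below. The helper names and the assumption of near-perfect, equal amplification efficiencies are illustrative, not the authors' implementation.

```python
def fit_standard_curve(log10_quantities, ct_values):
    """Least-squares fit of Ct = slope * log10(quantity) + intercept
    for a serially diluted template (the external calibration curve).
    Returns slope, intercept, and the implied amplification
    efficiency (1.0 = perfect doubling per cycle)."""
    n = len(log10_quantities)
    mx = sum(log10_quantities) / n
    my = sum(ct_values) / n
    sxx = sum((x - mx) ** 2 for x in log10_quantities)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(log10_quantities, ct_values))
    slope = sxy / sxx
    intercept = my - slope * mx
    efficiency = 10 ** (-1.0 / slope) - 1.0
    return slope, intercept, efficiency

def copy_number_ratio(ct_transgene, ct_reference,
                      calib_ct_transgene, calib_ct_reference):
    """2^-ddCt estimate of transgene copies relative to a known
    single-copy calibrator event (assumes equal, near-perfect
    amplification efficiencies for both genes)."""
    ddct = ((ct_transgene - ct_reference)
            - (calib_ct_transgene - calib_ct_reference))
    return 2.0 ** (-ddct)
```

A Ct one cycle lower than the calibrator (at equal reference-gene Ct) yields a ratio of 2, i.e. an estimated two copies; the paper's point is that such estimates need regression-based confidence intervals before they can be called unambiguous.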
High Throughput Determination of Critical Human Dosing Parameters (SOT)
High throughput toxicokinetics (HTTK) is a rapid approach that uses in vitro data to estimate TK for hundreds of environmental chemicals. Reverse dosimetry (i.e., reverse toxicokinetics or RTK) based on HTTK data converts high throughput in vitro toxicity screening (HTS) data int...
High Throughput Determinations of Critical Dosing Parameters (IVIVE workshop)
High throughput toxicokinetics (HTTK) is an approach that allows for rapid estimations of TK for hundreds of environmental chemicals. HTTK-based reverse dosimetry (i.e., reverse toxicokinetics or RTK) is used to convert high throughput in vitro toxicity screening (HTS) da...
A template-based approach for responsibility management in executable business processes
NASA Astrophysics Data System (ADS)
Cabanillas, Cristina; Resinas, Manuel; Ruiz-Cortés, Antonio
2018-05-01
Process-oriented organisations need to manage the different types of responsibilities their employees may have with respect to the activities involved in their business processes. Although several approaches provide support for responsibility modelling, in current Business Process Management Systems (BPMS) the only responsibility considered at runtime is the one related to performing the work required for activity completion. Others, like accountability or consultation, must be implemented by manually adding activities to the executable process model, which is time-consuming and error-prone. In this paper, we address this limitation by enabling current BPMS to execute processes in which people with different responsibilities interact to complete the activities. We introduce a metamodel based on Responsibility Assignment Matrices (RAM) to model the responsibility assignment for each activity, and a flexible template-based mechanism that automatically transforms such information into BPMN elements, which can be interpreted and executed by a BPMS. Thus, our approach does not enforce any specific behaviour for the different responsibilities; instead, new templates can be modelled to specify the interaction that best suits the activity requirements. Furthermore, libraries of templates can be created and reused in different processes. We provide a reference implementation and build a library of templates for a well-known set of responsibilities.
Inkjet-Printed Nanocavities on a Photonic Crystal Template.
Brossard, Frederic S F; Pecunia, Vincenzo; Ramsay, Andrew J; Griffiths, Jonathan P; Hugues, Maxime; Sirringhaus, Henning
2017-12-01
The last decade has witnessed the rapid development of inkjet printing as an attractive bottom-up microfabrication technology due to its simplicity and potentially low cost. The wealth of printable materials has been key to its widespread adoption in organic optoelectronics and biotechnology. However, its implementation in nanophotonics has so far been limited by the coarse resolution of conventional inkjet-printing methods. In addition, the low refractive index of organic materials prevents the use of "soft photonics" in applications where strong light confinement is required. This study introduces a hybrid approach for creating and fine-tuning high-Q nanocavities, involving the local deposition of an organic ink on the surface of an inorganic 2D photonic crystal template using a commercially available high-resolution inkjet printer. The controllability of this approach is demonstrated by tuning the resonance of the printed nanocavities through the number of printer passes and by the fabrication of photonic crystal molecules with controllable splitting. The versatility of this method is evidenced by the realization of nanocavities obtained by surface deposition on a blank photonic crystal. A free-form, high-density, material-independent, and high-throughput fabrication technique is thus established, opening up manifold opportunities in photonic applications. © 2017 Hitachi Cambridge Laboratory. Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Gu, Zefeng; Cao, Zhijuan
2018-06-07
A novel assay for histidine and cysteine has been constructed based on the modulation of fluorescent copper nanoclusters (CuNCs) by molecular switches. In our previous work, a dumbbell DNA template with a poly-T (thymine) loop was developed as an excellent template for the formation of strongly fluorescent CuNCs. Herein, for the first time, we established this biosensor for sensing two amino acids by using dumbbell DNA-templated CuNCs as the single probe. Among the 20 natural amino acids, only histidine and cysteine selectively quench the fluorescence emission of CuNCs, because of the specific interaction of these compounds with copper ions. Furthermore, by using nickel ions (Ni 2+ ) and N-ethylmaleimide as masking agents for histidine and cysteine, respectively, an integrated logic gate system was designed by coupling with the fluorescent CuNCs and demonstrated selective and sensitive detection of cysteine and histidine. Under optimal conditions, cysteine can be detected in the concentration range of 0.01-10.0 μM with a detection limit (DL) as low as 98 pM, while histidine can be detected in the range of 0.05-40.0 μM with a DL of 1.6 nM. In addition, histidine and cysteine can be observed with the naked eye under a hand-held UV lamp (DL, 50 nM), which can be easily adapted to automated high-throughput screening. Finally, the strategy has been successfully applied to biological fluids. The proposed system can be conducted in homogeneous solution, eliminating the need for organic cosolvents, separation of nanomaterials, or any chemical modification. Overall, the assay provides an alternative method for the simultaneous detection of cysteine and histidine, offering high speed, no requirement for labels or enzymes, and good sensitivity and specificity, and will satisfy the great demand for the determination of amino acids in fields such as food processing, biochemistry, pharmaceuticals, and clinical analysis.
We demonstrate a computational network model that integrates 18 in vitro, high-throughput screening assays measuring estrogen receptor (ER) binding, dimerization, chromatin binding, transcriptional activation and ER-dependent cell proliferation. The network model uses activity pa...
Wang, Xixian; Ren, Lihui; Su, Yetian; Ji, Yuetong; Liu, Yaoping; Li, Chunyu; Li, Xunrong; Zhang, Yi; Wang, Wei; Hu, Qiang; Han, Danxiang; Xu, Jian; Ma, Bo
2017-11-21
Raman-activated cell sorting (RACS) has attracted increasing interest, yet throughput remains one major factor limiting its broader application. Here we present an integrated Raman-activated droplet sorting (RADS) microfluidic system for functional screening of live cells in a label-free and high-throughput manner, by employing the AXT-synthetic industrial microalga Haematococcus pluvialis (H. pluvialis) as a model. Raman microspectroscopy analysis of individual cells is carried out prior to their microdroplet encapsulation, which is then directly coupled to DEP-based droplet sorting. To validate the system, H. pluvialis cells containing different levels of AXT were mixed and underwent RADS. Those AXT-hyperproducing cells were sorted with an accuracy of 98.3%, an enrichment ratio of eightfold, and a throughput of ∼260 cells/min. Of the RADS-sorted cells, 92.7% remained alive and able to proliferate, which is equivalent to the unsorted cells. Thus, RADS achieves a much higher throughput than existing RACS systems, preserves the vitality of cells, and facilitates seamless coupling with downstream manipulations such as single-cell sequencing and cultivation.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2016-09-01
Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed for recognizing some specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and consequently tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods that was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering the global topology of the Cα trace, local all-atom fitness, side-chain quality, and the physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods in identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. Proteins 2016; 84(Suppl 1):247-259. © 2015 Wiley Periodicals, Inc.
Koppstein, David; Ashour, Joseph; Bartel, David P.
2015-01-01
The influenza polymerase cleaves host RNAs ∼10–13 nucleotides downstream of their 5′ ends and uses this capped fragment to prime viral mRNA synthesis. To better understand this process of cap snatching, we used high-throughput sequencing to determine the 5′ ends of A/WSN/33 (H1N1) influenza mRNAs. The sequences provided clear evidence for nascent-chain realignment during transcription initiation and revealed a strong influence of the viral template on the frequency of realignment. After accounting for the extra nucleotides inserted through realignment, analysis of the capped fragments indicated that the different viral mRNAs were each prepended with a common set of sequences and that the polymerase often cleaved host RNAs after a purine and often primed transcription on a single base pair to either the terminal or penultimate residue of the viral template. We also developed a bioinformatic approach to identify the targeted host transcripts despite limited information content within snatched fragments and found that small nuclear RNAs and small nucleolar RNAs contributed the most abundant capped leaders. These results provide insight into the mechanism of viral transcription initiation and reveal the diversity of the cap-snatched repertoire, showing that noncoding transcripts as well as mRNAs are used to make influenza mRNAs. PMID:25901029
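The leader-to-host-transcript assignment described above could be approximated by simple prefix matching of snatched leaders against annotated transcript 5′ ends, with a minimum-length cutoff to guard against the limited information content of short fragments. This is a crude stand-in for the paper's bioinformatic approach; the sequences, transcript names, and cutoff below are hypothetical.

```python
def assign_leaders(leaders, host_five_prime_ends, min_len=8):
    """Map each cap-snatched leader to candidate host transcripts
    whose annotated 5' end begins with the leader sequence.

    leaders              : list of snatched-leader strings
    host_five_prime_ends : dict transcript name -> 5' end sequence
    min_len              : leaders shorter than this carry too little
                           information to assign and get no hits
    """
    hits = {}
    for leader in leaders:
        if len(leader) < min_len:
            hits[leader] = []  # ambiguous: too short to assign
            continue
        hits[leader] = [name
                        for name, seq in host_five_prime_ends.items()
                        if seq.startswith(leader)]
    return hits
```

The real analysis must also handle the realignment-inserted nucleotides the abstract describes, which would be stripped from the leader before matching.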
Massana, Ramon; Gobet, Angélique; Audic, Stéphane; Bass, David; Bittner, Lucie; Boutte, Christophe; Chambouvet, Aurélie; Christen, Richard; Claverie, Jean-Michel; Decelle, Johan; Dolan, John R; Dunthorn, Micah; Edvardsen, Bente; Forn, Irene; Forster, Dominik; Guillou, Laure; Jaillon, Olivier; Kooistra, Wiebe H C F; Logares, Ramiro; Mahé, Frédéric; Not, Fabrice; Ogata, Hiroyuki; Pawlowski, Jan; Pernice, Massimo C; Probert, Ian; Romac, Sarah; Richards, Thomas; Santini, Sébastien; Shalchian-Tabrizi, Kamran; Siano, Raffaele; Simon, Nathalie; Stoeck, Thorsten; Vaulot, Daniel; Zingone, Adriana; de Vargas, Colomban
2015-10-01
Although protists are critical components of marine ecosystems, they are still poorly characterized. Here we analysed the taxonomic diversity of planktonic and benthic protist communities collected in six distant European coastal sites. Environmental deoxyribonucleic acid (DNA) and ribonucleic acid (RNA) from three size fractions (pico-, nano- and micro/mesoplankton), as well as from dissolved DNA and surface sediments were used as templates for tag pyrosequencing of the V4 region of the 18S ribosomal DNA. Beta-diversity analyses split the protist community structure into three main clusters: picoplankton-nanoplankton-dissolved DNA, micro/mesoplankton and sediments. Within each cluster, protist communities from the same site and time clustered together, while communities from the same site but different seasons were unrelated. Both DNA and RNA-based surveys provided similar relative abundances for most class-level taxonomic groups. Yet, particular groups were overrepresented in one of the two templates, such as marine alveolates (MALV)-I and MALV-II that were much more abundant in DNA surveys. Overall, the groups displaying the highest relative contribution were Dinophyceae, Diatomea, Ciliophora and Acantharia. Also, well represented were Mamiellophyceae, Cryptomonadales, marine alveolates and marine stramenopiles in the picoplankton, and Monadofilosa and basal Fungi in sediments. Our extensive and systematic sequencing of geographically separated sites provides the most comprehensive molecular description of coastal marine protist diversity to date. © 2015 Society for Applied Microbiology and John Wiley & Sons Ltd.
Shin, Hyeong-Moo; Ernstoff, Alexi; Arnot, Jon A; Wetmore, Barbara A; Csiszar, Susan A; Fantke, Peter; Zhang, Xianming; McKone, Thomas E; Jolliet, Olivier; Bennett, Deborah H
2015-06-02
We present a risk-based high-throughput screening (HTS) method to identify chemicals for potential health concerns or for which additional information is needed. The method is applied to 180 organic chemicals as a case study. We first obtain information on how the chemical is used and identify relevant use scenarios (e.g., dermal application, indoor emissions). For each chemical and use scenario, exposure models are then used to calculate a chemical intake fraction, or a product intake fraction, accounting for chemical properties and the exposed population. We then combine these intake fractions with use scenario-specific estimates of chemical quantity to calculate daily intake rates (iR; mg/kg/day). These intake rates are compared to oral equivalent doses (OED; mg/kg/day), calculated from a suite of ToxCast in vitro bioactivity assays using in vitro-to-in vivo extrapolation and reverse dosimetry. Bioactivity quotients (BQs) are calculated as iR/OED to obtain estimates of potential impact associated with each relevant use scenario. Of the 180 chemicals considered, 38 had maximum iRs exceeding minimum OEDs (i.e., BQs > 1). For most of these compounds, exposures are associated with direct intake, food/oral contact, or dermal exposure. The method provides high-throughput estimates of exposure and important input for decision makers to identify chemicals of concern for further evaluation with additional information or more refined models.
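The screening arithmetic described (BQ = iR/OED, with BQ > 1 flagging a chemical for further evaluation) reduces to a one-line computation per chemical once the scenario-specific intake rates and assay-derived OEDs are in hand. The chemical names and numbers below are hypothetical.

```python
def bioactivity_quotients(intake_rates, oeds):
    """Per chemical: BQ = maximum daily intake rate (mg/kg/day) over
    all use scenarios, divided by the minimum oral equivalent dose
    (mg/kg/day) across the in vitro assays.  BQ > 1 flags a chemical
    as a potential concern needing further evaluation."""
    return {chem: max(irs) / min(oeds[chem])
            for chem, irs in intake_rates.items()}

# Hypothetical inputs: iR per use scenario, OED per assay.
iR = {"chem_A": [0.5, 2.0], "chem_B": [0.01]}
OED = {"chem_A": [1.0, 4.0], "chem_B": [0.5]}
flagged = sorted(c for c, bq in bioactivity_quotients(iR, OED).items()
                 if bq > 1)
```

Using the maximum intake and minimum OED makes the quotient deliberately conservative, which suits a first-pass prioritization screen.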
Staged anticonvulsant screening for chronic epilepsy.
Berdichevsky, Yevgeny; Saponjian, Yero; Park, Kyung-Il; Roach, Bonnie; Pouliot, Wendy; Lu, Kimberly; Swiercz, Waldemar; Dudek, F Edward; Staley, Kevin J
2016-12-01
Current anticonvulsant screening programs are based on seizures evoked in normal animals. One-third of epileptic patients do not respond to the anticonvulsants discovered with these models. We evaluated a tiered program based on chronic epilepsy and spontaneous seizures, with compounds advancing from high-throughput in vitro models to low-throughput in vivo models. Epileptogenesis in organotypic hippocampal slice cultures was quantified by lactate production and lactate dehydrogenase release into culture media as rapid assays for seizure-like activity and cell death, respectively. Compounds that reduced these biochemical measures were retested with in vitro electrophysiological confirmation (i.e., second stage). The third stage involved crossover testing in the kainate model of chronic epilepsy, with blinded analysis of spontaneous seizures after continuous electrographic recordings. We screened 407 compound-concentration combinations. The cyclooxygenase inhibitor, celecoxib, had no effect on seizures evoked in normal brain tissue but demonstrated robust antiseizure activity in all tested models of chronic epilepsy. The use of organotypic hippocampal cultures, where epileptogenesis occurs on a compressed time scale, and where seizure-like activity and seizure-induced cell death can be easily quantified with biomarker assays, allowed us to circumvent the throughput limitations of in vivo chronic epilepsy models. Ability to rapidly screen compounds in a chronic model of epilepsy allowed us to find an anticonvulsant that would be missed by screening in acute models.
NASA Astrophysics Data System (ADS)
Eggers, Georg; Cosgarea, Raluca; Rieker, Marcus; Kress, Bodo; Dickhaus, Hartmut; Mühling, Joachim
2009-02-01
An oral imaging template was developed to address the shortcomings of MR image data for image guided dental implant planning and placement. The template was constructed as a gadolinium-filled plastic shell to give contrast to the dentition and also to be accurately re-attachable for use in image guided dental implant placement. The result of segmentation and modelling of the dentition from MR image data with the template was compared to plaster casts of the dentition. In a phantom study, dental implant placement was performed based on MR image data. MR imaging with the contrast template allowed complete representation of the existing dentition. In the phantom study, a commercially available system for image guided dental implant placement was used. Transformation of the imaging contrast template into a surgical drill guide based on the MR image data resulted in pilot burr hole placement with an accuracy of 2 mm. MRI-based imaging of the existing dentition for proper image guided planning is possible with the proposed template. Using the image data and the template resulted in less accurate pilot burr hole placement than CT-based image guided implant placement.
ERIC Educational Resources Information Center
Hermann, Ronald S.; Miranda, Rommel J.
2010-01-01
This article provides an instructional approach to helping students generate open-inquiry research questions, which the authors call the "open-inquiry question template." This template was created based on their experience teaching high school science and preservice university methods courses. To help teachers implement this template, they…
Elliott, Lydia; DeCristofaro, Claire; Carpenter, Alesia
2012-09-01
This article describes the development and implementation of integrated use of personal handheld devices (personal digital assistants, PDAs) and high-fidelity simulation in an advanced health assessment course in a graduate family nurse practitioner (NP) program. A teaching tool was developed that can be utilized as a template for clinical case scenarios blending these separate technologies. Review of the evidence-based literature, including peer-reviewed articles and reviews. Blending the technologies of high-fidelity simulation and handheld devices (PDAs) provided a positive learning experience for graduate NP students in a teaching laboratory setting. Combining both technologies in clinical case scenarios offered a more real-world learning experience, with a focus on point-of-care service and integration of interview and physical assessment skills with existing standards of care and external clinical resources. Faculty modeling and advance training with PDA technology was crucial to success. Faculty developed a general template tool and systems-based clinical scenarios integrating PDA and high-fidelity simulation. Faculty observations, the general template tool, and one scenario example are included in this article. ©2012 The Author(s) Journal compilation ©2012 American Academy of Nurse Practitioners.
Multi-pathway exposure modelling of chemicals in cosmetics with application to shampoo
We present a novel multi-pathway, mass balance based, fate and exposure model compatible with life cycle and high-throughput screening assessments of chemicals in cosmetic products. The exposures through product use as well as post-use emissions and environmental media were quant...
White, David T; Eroglu, Arife Unal; Wang, Guohua; Zhang, Liyun; Sengupta, Sumitra; Ding, Ding; Rajpurohit, Surendra K; Walker, Steven L; Ji, Hongkai; Qian, Jiang; Mumm, Jeff S
2017-01-01
The zebrafish has emerged as an important model for whole-organism small-molecule screening. However, most zebrafish-based chemical screens have achieved only mid-throughput rates. Here we describe a versatile whole-organism drug discovery platform that can achieve true high-throughput screening (HTS) capacities. This system combines our automated reporter quantification in vivo (ARQiv) system with customized robotics, and is termed ‘ARQiv-HTS’. We detail the process of establishing and implementing ARQiv-HTS: (i) assay design and optimization, (ii) calculation of sample size and hit criteria, (iii) large-scale egg production, (iv) automated compound titration, (v) dispensing of embryos into microtiter plates, and (vi) reporter quantification. We also outline what we see as best practice strategies for leveraging the power of ARQiv-HTS for zebrafish-based drug discovery, and address technical challenges of applying zebrafish to large-scale chemical screens. Finally, we provide a detailed protocol for a recently completed inaugural ARQiv-HTS effort, which involved the identification of compounds that elevate insulin reporter activity. Compounds that increased the number of insulin-producing pancreatic beta cells represent potential new therapeutics for diabetic patients. For this effort, individual screening sessions took 1 week to conclude, and sessions were performed iteratively approximately every other day to increase throughput. At the conclusion of the screen, more than a half million drug-treated larvae had been evaluated. Beyond this initial example, however, the ARQiv-HTS platform is adaptable to almost any reporter-based assay designed to evaluate the effects of chemical compounds in living small-animal models. ARQiv-HTS thus enables large-scale whole-organism drug discovery for a variety of model species and from numerous disease-oriented perspectives. PMID:27831568
Mis, Emily K.; Liem, Karel F.; Kong, Yong; Schwartz, Nancy B.; Domowicz, Miriam; Weatherbee, Scott D.
2014-01-01
The long bones of the vertebrate body are built by the initial formation of a cartilage template that is later replaced by mineralized bone. The proliferation and maturation of the skeletal precursor cells (chondrocytes) within the cartilage template and their replacement by bone is a highly coordinated process which, if misregulated, can lead to a number of defects including dwarfism and other skeletal deformities. This is exemplified by the fact that abnormal bone development is one of the most common types of human birth defects. Yet, many of the factors that initiate and regulate chondrocyte maturation are not known. We identified a recessive dwarf mouse mutant (pug) from an N-ethyl-N-nitrosourea (ENU) mutagenesis screen. pug mutant skeletal elements are patterned normally during development, but display a ~20% length reduction compared to wild-type embryos. We show that the pug mutation does not lead to changes in chondrocyte proliferation but instead promotes premature maturation and early ossification, which ultimately leads to disproportionate dwarfism. Using sequence capture and high-throughput sequencing, we identified a missense mutation in the Xylosyltransferase 1 (Xylt1) gene in pug mutants. Xylosyltransferases catalyze the initial step in glycosaminoglycan (GAG) chain addition to proteoglycan core proteins, and these modifications are essential for normal proteoglycan function. We show that the pug mutation disrupts Xylt1 activity and subcellular localization, leading to a reduction in GAG chains in pug mutants. The pug mutant serves as a novel model for mammalian dwarfism and identifies a key role for proteoglycan modification in the initiation of chondrocyte maturation. PMID:24161523
[Current applications of high-throughput DNA sequencing technology in antibody drug research].
Yu, Xin; Liu, Qi-Gang; Wang, Ming-Rong
2012-03-01
Since the publication in 2005 of a high-throughput DNA sequencing technology based on PCR carried out in oil emulsions, high-throughput DNA sequencing platforms have evolved into a robust technology for sequencing genomes and diverse DNA libraries. Antibody libraries with vast numbers of members currently serve as a foundation for discovering novel antibody drugs, and high-throughput DNA sequencing technology makes it possible to rapidly identify functional antibody variants with desired properties. Herein we present a review of current applications of high-throughput DNA sequencing technology in the analysis of antibody library diversity, sequencing of CDR3 regions, identification of potent antibodies based on sequence frequency, discovery of functional genes, and combination with various display technologies, so as to provide an alternative approach to the discovery and development of antibody drugs.
Template based protein structure modeling by global optimization in CASP11.
Joo, Keehyoung; Joung, InSuk; Lee, Sun Young; Kim, Jong Yun; Cheng, Qianyi; Manavalan, Balachandran; Joung, Jong Young; Heo, Seungryong; Lee, Juyong; Nam, Mikyung; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung
2016-09-01
For the template-based modeling (TBM) of CASP11 targets, we have developed three new protein modeling protocols (nns for server prediction and LEE and LEER for human prediction) by improving upon our previous CASP protocols (CASP7 through CASP10). We applied the powerful global optimization method of conformational space annealing to three stages of optimization, including multiple sequence-structure alignment, three-dimensional (3D) chain building, and side-chain remodeling. For more successful fold recognition, a new alignment method called CRFalign was developed. It can incorporate sensitive positional and environmental dependence in alignment scores as well as strong nonlinear correlations among various features. Modifications and adjustments were made to the form of the energy function and weight parameters pertaining to the chain building procedure. For the side-chain remodeling step, residue-type dependence was introduced to the cutoff value that determines the entry of a rotamer to the side-chain modeling library. The improved performance of the nns server method is attributed to successful fold recognition achieved by combining several methods including CRFalign and to the current modeling formulation that can incorporate native-like structural aspects present in multiple templates. The LEE protocol is identical to the nns one except that CASP11-released server models are used as templates. The success of LEE in utilizing CASP11 server models indicates that proper template screening and template clustering assisted by appropriate cluster ranking promises a new direction to enhance protein 3D modeling. Proteins 2016; 84(Suppl 1):221-232. © 2015 Wiley Periodicals, Inc.
High Throughput Exposure Estimation Using NHANES Data (SOT)
In the ExpoCast project, high throughput (HT) exposure models enable rapid screening of large numbers of chemicals for exposure potential. Evaluation of these models requires empirical exposure data and due to the paucity of human metabolism/exposure data such evaluations includ...
Economic consequences of high throughput maskless lithography
NASA Astrophysics Data System (ADS)
Hartley, John G.; Govindaraju, Lakshmi
2005-11-01
Many people in the semiconductor industry bemoan the high cost of masks and view mask cost as one of the significant barriers to bringing new chip designs to market. All that is needed is a viable maskless technology and the problem will go away. Numerous sites around the world are working on maskless lithography but inevitably, the question asked is "Wouldn't a one wafer per hour maskless tool make a really good mask writer?" Of course, the answer is yes; the hesitation you hear in the answer isn't based on technology concerns, it's financial. The industry needs maskless lithography because mask costs are too high. Mask costs are too high because mask pattern generators (PGs) are slow and expensive. If mask PGs become much faster, mask costs go down, the maskless market goes away, and the PG supplier is faced with an even smaller tool demand from the mask shops. Technical success becomes financial suicide - or does it? In this paper we will present the results of a model that examines some of the consequences of introducing high throughput maskless pattern generation. Specific features in the model include tool throughput for masks and wafers, market segmentation by node for masks and wafers, and mask cost as an entry barrier to new chip designs. How does the availability of low-cost masks and maskless tools affect the industry's tool makeup, and what is the ultimate potential market for high throughput maskless pattern generators?
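The entry-barrier argument in the abstract above can be illustrated with a toy break-even calculation (this is not the authors' actual model; all costs and parameters below are invented for illustration): a design pays off a mask set only above a certain wafer volume, so faster pattern generators lower mask cost and shrink the niche in which maskless exposure wins.

```python
# Toy break-even model (illustrative only; not the paper's model).
# A design either buys a mask set (large fixed cost, cheap wafers) or uses
# a maskless tool (no fixed cost, expensive wafers). Faster pattern
# generators (PGs) lower the mask-set cost and thus the break-even volume.

def mask_set_cost(pg_masks_per_hour, write_cost_per_hour=2000.0, layers=30):
    """Mask-set cost dominated by PG write time: layers / throughput * rate."""
    return layers / pg_masks_per_hour * write_cost_per_hour

def breakeven_wafers(pg_masks_per_hour, wafer_cost_masked=100.0,
                     wafer_cost_maskless=600.0):
    """Wafer volume above which buying a mask set beats maskless exposure."""
    extra_per_wafer = wafer_cost_maskless - wafer_cost_masked
    return mask_set_cost(pg_masks_per_hour) / extra_per_wafer
```

With these invented numbers, doubling PG throughput halves the mask-set cost and the break-even volume, so more low-volume designs choose masks: the self-cannibalization effect the paper examines.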
Neto, A I; Correia, C R; Oliveira, M B; Rial-Hermida, M I; Alvarez-Lorenzo, C; Reis, R L; Mano, J F
2015-04-01
We propose a novel hanging spherical drop system for anchoring arrays of droplets of cell suspension, based on the use of biomimetic superhydrophobic flat substrates with controlled positional adhesion and minimum contact with a solid substrate. By turning the platform face down, it was possible to generate independent spheroid bodies in a high-throughput manner, in order to mimic in vivo tumour models on the lab-on-chip scale. To validate this system for drug screening purposes, the toxicity of the anti-cancer drug doxorubicin was tested in cell spheroids and compared to cells in 2D culture. The advantages presented by this platform, such as the feasibility of the system and the ability to control the size uniformity of the spheroids, emphasize its potential to be used as a new low-cost toolbox for high-throughput drug screening and in cell or tissue engineering.
Lo Cicero, Alessandra; Jaskowiak, Anne-Laure; Egesipe, Anne-Laure; Tournois, Johana; Brinon, Benjamin; Pitrez, Patricia R.; Ferreira, Lino; de Sandre-Giovannoli, Annachiara; Levy, Nicolas; Nissan, Xavier
2016-01-01
Hutchinson-Gilford progeria syndrome (HGPS) is a rare fatal genetic disorder that causes systemic accelerated aging in children. Thanks to the pluripotency and self-renewal properties of induced pluripotent stem cells (iPSC), HGPS iPSC-based modeling opens up the possibility of access to different relevant cell types for pharmacological approaches. In this study, 2800 small molecules were explored using high-throughput screening, looking for compounds that could potentially reduce the alkaline phosphatase activity of HGPS mesenchymal stem cells (MSCs) committed into osteogenic differentiation. Results revealed seven compounds that normalized the osteogenic differentiation process and, among these, all-trans retinoic acid and 13-cis-retinoic acid, that also decreased progerin expression. This study highlights the potential of high-throughput drug screening using HGPS iPS-derived cells, in order to find therapeutic compounds for HGPS and, potentially, for other aging-related disorders. PMID:27739443
High-Throughput Models for Exposure-Based Chemical ...
The United States Environmental Protection Agency (U.S. EPA) must characterize potential risks to human health and the environment associated with manufacture and use of thousands of chemicals. High-throughput screening (HTS) for biological activity allows the ToxCast research program to prioritize chemical inventories for potential hazard. Similar capabilities for estimating exposure potential would support rapid risk-based prioritization for chemicals with limited information; here, we propose a framework for high-throughput exposure assessment. To demonstrate application, an analysis was conducted that predicts human exposure potential for chemicals and estimates uncertainty in these predictions by comparison to biomonitoring data. We evaluated 1936 chemicals using far-field mass balance human exposure models (USEtox and RAIDAR) and an indicator for indoor and/or consumer use. These predictions were compared to exposures inferred by Bayesian analysis from urine concentrations for 82 chemicals reported in the National Health and Nutrition Examination Survey (NHANES). Joint regression on all factors provided a calibrated consensus prediction, the variance of which serves as an empirical determination of uncertainty for prioritization on absolute exposure potential. Information on use was found to be most predictive; generally, chemicals above the limit of detection in NHANES had consumer/indoor use. Coupled with hazard HTS, exposure HTS can place risk earlie
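The calibration step described above — a joint regression of inferred exposures on several model predictors, with the residual variance serving as an empirical measure of uncertainty — can be sketched as follows. All data below are synthetic stand-ins; the actual study used USEtox and RAIDAR model outputs, a consumer/indoor-use indicator, and exposures inferred from NHANES urine concentrations.

```python
import numpy as np

# Synthetic sketch of a calibrated consensus prediction (illustrative data).
rng = np.random.default_rng(0)
n = 82                                      # chemicals with NHANES inference
usetox = rng.normal(size=n)                 # far-field model prediction 1
raidar = 0.8 * usetox + rng.normal(scale=0.5, size=n)   # prediction 2
indoor = rng.integers(0, 2, size=n).astype(float)       # use indicator
inferred = 0.5 * usetox + 0.3 * raidar + 1.2 * indoor \
    + rng.normal(scale=0.4, size=n)         # "ground truth" exposures

# Joint least-squares regression on all factors -> consensus prediction.
X = np.column_stack([np.ones(n), usetox, raidar, indoor])
coef, *_ = np.linalg.lstsq(X, inferred, rcond=None)
consensus = X @ coef
residual_var = np.var(inferred - consensus)  # empirical uncertainty estimate
```

The residual variance of the regression is what allows prioritization on absolute exposure potential with an honest error bar, rather than on raw model output.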
Chen, Dijun; Neumann, Kerstin; Friedel, Swetlana; Kilian, Benjamin; Chen, Ming; Altmann, Thomas; Klukas, Christian
2014-01-01
Significantly improved crop varieties are urgently needed to feed the rapidly growing human population under changing climates. While genome sequence information and excellent genomic tools are in place for major crop species, the systematic quantification of phenotypic traits or components thereof in a high-throughput fashion remains an enormous challenge. In order to help bridge the genotype to phenotype gap, we developed a comprehensive framework for high-throughput phenotype data analysis in plants, which enables the extraction of an extensive list of phenotypic traits from nondestructive plant imaging over time. As a proof of concept, we investigated the phenotypic components of the drought responses of 18 different barley (Hordeum vulgare) cultivars during vegetative growth. We analyzed dynamic properties of trait expression over growth time based on 54 representative phenotypic features. The data are highly valuable to understand plant development and to further quantify growth and crop performance features. We tested various growth models to predict plant biomass accumulation and identified several relevant parameters that support biological interpretation of plant growth and stress tolerance. These image-based traits and model-derived parameters are promising for subsequent genetic mapping to uncover the genetic basis of complex agronomic traits. Taken together, we anticipate that the analytical framework and analysis results presented here will be useful to advance our views of phenotypic trait components underlying plant development and their responses to environmental cues. PMID:25501589
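One family of growth models of the kind the authors test for predicting biomass accumulation is the logistic curve; a minimal sketch with made-up parameters (the paper fits several models and does not specify these values):

```python
import math

# Logistic growth model for biomass accumulation (illustrative parameters):
# K  - carrying capacity (maximum biomass)
# r  - intrinsic growth rate per day
# t0 - inflection point (day of fastest growth)
def logistic_biomass(t, K=100.0, r=0.25, t0=20.0):
    """Predicted biomass at time t (days after sowing)."""
    return K / (1.0 + math.exp(-r * (t - t0)))
```

Fitting K, r, and t0 per cultivar to image-derived biomass time series yields exactly the kind of interpretable parameters (maximum size, growth rate, timing) that support genetic mapping of growth and stress tolerance.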
TCP Throughput Profiles Using Measurements over Dedicated Connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Liu, Qiang; Sen, Satyabrata
Wide-area data transfers in high-performance computing infrastructures are increasingly being carried over dynamically provisioned dedicated network connections that provide high capacities with no competing traffic. We present extensive TCP throughput measurements and time traces over a suite of physical and emulated 10 Gbps connections with 0-366 ms round-trip times (RTTs). Contrary to the general expectation, they show significant statistical and temporal variations, in addition to the overall dependencies on the congestion control mechanism, buffer size, and the number of parallel streams. We analyze several throughput profiles that have highly desirable concave regions wherein the throughput decreases slowly with RTTs, in stark contrast to the convex profiles predicted by various TCP analytical models. We present a generic throughput model that abstracts the ramp-up and sustainment phases of TCP flows, which provides insights into qualitative trends observed in measurements across TCP variants: (i) slow-start followed by well-sustained throughput leads to concave regions; (ii) large buffers and multiple parallel streams expand the concave regions in addition to improving the throughput; and (iii) stable throughput dynamics, indicated by a smoother Poincare map and smaller Lyapunov exponents, lead to wider concave regions. These measurements and analytical results together enable us to select a TCP variant and its parameters for a given connection to achieve high throughput with statistical guarantees.
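The ramp-up/sustainment abstraction can be sketched as a simple average-throughput formula (parameter values here are illustrative, not the authors' exact model): a flow spends a slow-start phase whose duration grows with RTT, then sustains a fraction of the peak rate, so for a large transfer the average throughput degrades only slowly with RTT — the concave regime described above.

```python
import math

# Sketch of a ramp-up + sustainment throughput model (illustrative).
# Slow-start takes ~log2(peak * rtt / mss) rounds of one RTT each; after
# that the flow sustains a fraction of the peak rate until the transfer
# completes. Average throughput = total bits / total time.
def avg_throughput(rtt_s, size_bits=8e10, peak_bps=1e10,
                   mss_bits=1.2e4, sustain=0.95):
    rounds = max(1.0, math.log2(peak_bps * rtt_s / mss_bits))
    t_ramp = rounds * rtt_s                   # slow-start duration
    ramp_bits = mss_bits * (2 ** rounds - 1)  # data moved during ramp-up
    t_sustain = max(0.0, size_bits - ramp_bits) / (sustain * peak_bps)
    return size_bits / (t_ramp + t_sustain)
```

For a 10 GB transfer over a 10 Gbps path, the sustainment phase dominates, so the average throughput falls off gently (concavely) as RTT rises from 10 ms toward 366 ms, consistent with the concave profiles reported in the measurements.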
NASA Astrophysics Data System (ADS)
Balta, Christiana; Bouwman, Ramona W.; Sechopoulos, Ioannis; Broeders, Mireille J. M.; Karssemeijer, Nico; van Engen, Ruben E.; Veldkamp, Wouter J. H.
2017-03-01
Model observers (MOs) are being investigated for image quality assessment in full-field digital mammography (FFDM). Signal templates for the non-prewhitening MO with eye filter (NPWE) were formed from acquired FFDM images: a low-noise signal template was generated by averaging multiple exposures. Noise elimination while preserving the signal was investigated, and a methodology that results in a noise-free template is proposed. To deal with signal location uncertainty, template shifting was implemented. The procedure to generate the template was evaluated on images of an anthropomorphic breast phantom containing microcalcification-related signals. Optimal reduction of the background noise was achieved without changing the signal. Based on a validation study in simulated images, the difference (bias) in MO performance from the ground truth signal was calculated and found to be <1%. As template generation is a building block of the entire image quality assessment framework, the proposed method to construct templates from acquired images facilitates the use of the NPWE MO with acquired images.
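The averaging step — repeated exposures of the same signal combined into a low-noise template — rests on the 1/sqrt(N) noise reduction of the sample mean. A synthetic sketch (not the authors' data or code):

```python
import numpy as np

# Synthetic illustration: a fixed signal plus independent noise per
# exposure. Averaging N exposures preserves the signal while shrinking the
# noise standard deviation by ~1/sqrt(N).
rng = np.random.default_rng(1)
signal = np.zeros((32, 32))
signal[12:20, 12:20] = 10.0                # microcalcification-like blob
exposures = [signal + rng.normal(scale=2.0, size=signal.shape)
             for _ in range(100)]          # 100 noisy acquisitions
template = np.mean(exposures, axis=0)      # low-noise signal template
residual_std = np.std(template - signal)   # ~ 2.0 / sqrt(100) = 0.2
```

In practice the residual noise must then be eliminated without distorting the signal, which is what the proposed noise-free template methodology addresses.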
This presentation discusses methods used to extrapolate from in vitro high-throughput screening (HTS) toxicity data for an endocrine pathway to in vivo for early life stages in humans, and the use of a life stage PBPK model to address rapidly changing physiological parameters. A...
Matta, Ragai-Edward; Bergauer, Bastian; Adler, Werner; Wichmann, Manfred; Nickenig, Hans-Joachim
2017-06-01
The use of a surgical template is a well-established method in advanced implantology. In addition to conventional fabrication, computer-aided design and computer-aided manufacturing (CAD/CAM) work-flow provides an opportunity to engineer implant drilling templates via a three-dimensional printer. In order to transfer the virtual planning to the oral situation, a highly accurate surgical guide is needed. The aim of this study was to evaluate the impact of the fabrication method on the three-dimensional accuracy. The same virtual planning based on a scanned plaster model was used to fabricate a conventional thermo-formed and a three-dimensional printed surgical guide for each of 13 patients (single tooth implants). Both templates were acquired individually on the respective plaster model using an optical industrial white-light scanner (ATOS II, GOM mbh, Braunschweig, Germany), and the virtual datasets were superimposed. Using the three-dimensional geometry of the implant sleeve, the deviation between both surgical guides was evaluated. The mean discrepancy of the angle was 3.479° (standard deviation, 1.904°) based on data from 13 patients. Concerning the three-dimensional position of the implant sleeve, the highest deviation was in the Z-axis at 0.594 mm. The mean deviation of the Euclidian distance, dxyz, was 0.864 mm. Although the two different fabrication methods delivered statistically significantly different templates, the deviations ranged within a decimillimeter span. Both methods are appropriate for clinical use. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Autonomous system for Web-based microarray image analysis.
Bozinov, Daniel
2003-12-01
Software-based feature extraction from DNA microarray images still requires human intervention on various levels. Manual adjustment of grid and metagrid parameters, precise alignment of superimposed grid templates and gene spots, or simply identification of large-scale artifacts have to be performed beforehand to reliably analyze DNA signals and correctly quantify their expression values. Ideally, a Web-based system with input solely confined to a single microarray image, and a data table as output containing measurements for all gene spots, would directly transform raw image data into abstracted gene expression tables. Sophisticated algorithms with iterative correction procedures can overcome inherent challenges in image processing. Here we introduce an integrated software system with a Java-based interface on the client side that allows for decentralized access and enables the scientist to instantly employ the most up-to-date software version at any given time. This software tool extends PixClust, as used in Extractiff, and incorporates Java Web Start deployment technology. Ultimately, this setup is destined for high-throughput pipelines in genome-wide medical diagnostics labs or microarray core facilities aimed at providing fully automated service to their users.
Automated image alignment for 2D gel electrophoresis in a high-throughput proteomics pipeline.
Dowsey, Andrew W; Dunn, Michael J; Yang, Guang-Zhong
2008-04-01
The quest for high-throughput proteomics has revealed a number of challenges in recent years. Whilst substantial improvements in automated protein separation with liquid chromatography and mass spectrometry (LC/MS), aka 'shotgun' proteomics, have been achieved, large-scale open initiatives such as the Human Proteome Organization (HUPO) Brain Proteome Project have shown that maximal proteome coverage is only possible when LC/MS is complemented by 2D gel electrophoresis (2-DE) studies. Moreover, both separation methods require automated alignment and differential analysis to relieve the bioinformatics bottleneck and so make high-throughput protein biomarker discovery a reality. The purpose of this article is to describe a fully automatic image alignment framework for the integration of 2-DE into a high-throughput differential expression proteomics pipeline. The proposed method is based on robust automated image normalization (RAIN) to circumvent the drawbacks of traditional approaches. These use symbolic representation at the very early stages of the analysis, which introduces persistent errors due to inaccuracies in modelling and alignment. In RAIN, a third-order volume-invariant B-spline model is incorporated into a multi-resolution schema to correct for geometric and expression inhomogeneity at multiple scales. The normalized images can then be compared directly in the image domain for quantitative differential analysis. Through evaluation against an existing state-of-the-art method on real and synthetically warped 2D gels, the proposed analysis framework demonstrates substantial improvements in matching accuracy and differential sensitivity. High-throughput analysis is established through an accelerated GPGPU (general purpose computation on graphics cards) implementation. Supplementary material, software and images used in the validation are available at http://www.proteomegrid.org/rain/.
Conformational Sampling in Template-Free Protein Loop Structure Modeling: An Overview
Li, Yaohang
2013-01-01
Accurately modeling protein loops is an important step to predict three-dimensional structures as well as to understand functions of many proteins. Because of their high flexibility, modeling the three-dimensional structures of loops is difficult and is usually treated as a “mini protein folding problem” under geometric constraints. In the past decade, there has been remarkable progress in template-free loop structure modeling due to advances of computational methods as well as stably increasing number of known structures available in PDB. This mini review provides an overview on the recent computational approaches for loop structure modeling. In particular, we focus on the approaches of sampling loop conformation space, which is a critical step to obtain high resolution models in template-free methods. We review the potential energy functions for loop modeling, loop buildup mechanisms to satisfy geometric constraints, and loop conformation sampling algorithms. The recent loop modeling results are also summarized. PMID:24688696
Conboy, Michael J; Karasov, Ariela O; Rando, Thomas A
2007-05-01
Decades ago, the "immortal strand hypothesis" was proposed as a means by which stem cells might limit acquiring mutations that could give rise to cancer, while continuing to proliferate for the life of an organism. Originally based on observations in embryonic cells, and later studied in terms of stem cell self-renewal, this hypothesis has remained largely unaccepted because of few additional reports, the rarity of the cells displaying template strand segregation, and alternative interpretations of experiments involving single labels or different types of labels to follow template strands. Using sequential pulses of halogenated thymidine analogs (bromodeoxyuridine [BrdU], chlorodeoxyuridine [CldU], and iododeoxyuridine [IdU]), and analyzing stem cell progeny during induced regeneration in vivo, we observed extraordinarily high frequencies of segregation of older and younger template strands during a period of proliferative expansion of muscle stem cells. Furthermore, template strand co-segregation was strongly associated with asymmetric cell divisions yielding daughters with divergent fates. Daughter cells inheriting the older templates retained the more immature phenotype, whereas daughters inheriting the newer templates acquired a more differentiated phenotype. These data provide compelling evidence of template strand co-segregation based on template age and associated with cell fate determination, suggest that template strand age is monitored during stem cell lineage progression, and raise important caveats for the interpretation of label-retaining cells.
Development and evaluation of human AP endonuclease inhibitors in melanoma and glioma cell lines.
Mohammed, M Z; Vyjayanti, V N; Laughton, C A; Dekker, L V; Fischer, P M; Wilson, D M; Abbotts, R; Shah, S; Patel, P M; Hickson, I D; Madhusudan, S
2011-02-15
Modulation of DNA base excision repair (BER) has the potential to enhance response to chemotherapy and improve outcomes in tumours such as melanoma and glioma. APE1, a critical protein in BER that processes potentially cytotoxic abasic sites (AP sites), is a promising new target in cancer. In the current study, we aimed to develop small molecule inhibitors of APE1 for cancer therapy. An industry-standard high throughput virtual screening strategy was adopted. The Sybyl8.0 (Tripos, St Louis, MO, USA) molecular modelling software suite was used to build inhibitor templates. Similarity searching strategies were then applied using ROCS 2.3 (Open Eye Scientific, Santa Fe, NM, USA) to extract pharmacophorically related subsets of compounds from a chemically diverse database of 2.6 million compounds. The compounds in these subsets were subjected to docking against the active site of the APE1 model, using the genetic algorithm-based programme GOLD2.7 (CCDC, Cambridge, UK). Predicted ligand poses were ranked on the basis of several scoring functions. The top virtual hits with promising pharmaceutical properties underwent detailed in vitro analyses using fluorescence-based APE1 cleavage assays and counter screened using endonuclease IV cleavage assays, fluorescence quenching assays and radiolabelled oligonucleotide assays. Biochemical APE1 inhibitors were then subjected to detailed cytotoxicity analyses. Several specific APE1 inhibitors were isolated by this approach. The IC(50) for APE1 inhibition ranged between 30 nM and 50 μM. We demonstrated that APE1 inhibitors lead to accumulation of AP sites in genomic DNA and potentiated the cytotoxicity of alkylating agents in melanoma and glioma cell lines. Our study provides evidence that APE1 is an emerging drug target and could have therapeutic application in patients with melanoma and glioma.
Autonomous control of production networks using a pheromone approach
NASA Astrophysics Data System (ADS)
Armbruster, D.; de Beer, C.; Freitag, M.; Jagalski, T.; Ringhofer, C.
2006-04-01
The flow of parts through a production network is usually pre-planned by a central control system. Such central control fails in the presence of highly fluctuating demand and/or unforeseen disturbances. To manage such dynamic networks with low work-in-progress and short throughput times, an autonomous control approach is proposed. Autonomous control means decentralized routing by the autonomous parts themselves. The parts' decisions are based on backward-propagated information about the throughput times of finished parts on different routes, so routes with shorter throughput times attract subsequent parts to use them again. This process can be compared to ants leaving pheromones on their way to communicate with following ants. The paper focuses on a mathematical description of such autonomously controlled production networks. A fluid model with limited service rates in a general network topology is derived and compared to a discrete-event simulation model. Whereas the discrete-event simulation of production networks is straightforward, the formulation of the addressed scenario in terms of a fluid model is challenging. Here it is shown how several problems in a fluid model formulation (e.g. discontinuities) can be handled mathematically. Finally, some simulation results for the pheromone-based control with both the discrete-event simulation model and the fluid model are presented for a time-dependent influx.
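The routing rule described above — parts prefer routes whose recently reported throughput times are short, with older information gradually "evaporating" — can be written in a few lines. This is an illustrative sketch of the pheromone idea only, not the paper's fluid model; the class name and update rule are inventions for demonstration.

```python
import random

# Illustrative pheromone-style router: finished parts report throughput
# times; each route keeps an exponentially smoothed average (evaporation),
# and new parts pick a route with probability inversely proportional to
# that average, so fast routes attract more traffic.
class PheromoneRouter:
    def __init__(self, routes, alpha=0.3):
        self.avg = {r: 1.0 for r in routes}  # smoothed throughput times
        self.alpha = alpha                   # update/evaporation rate

    def report(self, route, throughput_time):
        a = self.alpha
        self.avg[route] = (1 - a) * self.avg[route] + a * throughput_time

    def choose(self, rng=random):
        weights = {r: 1.0 / t for r, t in self.avg.items()}
        x = rng.random() * sum(weights.values())
        for r, w in weights.items():
            x -= w
            if x <= 0:
                return r
        return r  # numerical slop: fall back to the last route
```

The probabilistic choice (rather than always picking the currently fastest route) keeps some traffic on slower routes, which is what lets the system rediscover a route once its congestion clears.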
An image analysis toolbox for high-throughput C. elegans assays
Wählby, Carolina; Kamentsky, Lee; Liu, Zihan H.; Riklin-Raviv, Tammy; Conery, Annie L.; O’Rourke, Eyleen J.; Sokolnicki, Katherine L.; Visvikis, Orane; Ljosa, Vebjorn; Irazoqui, Javier E.; Golland, Polina; Ruvkun, Gary; Ausubel, Frederick M.; Carpenter, Anne E.
2012-01-01
We present a toolbox for high-throughput screening of image-based Caenorhabditis elegans phenotypes. The image analysis algorithms measure morphological phenotypes in individual worms and are effective for a variety of assays and imaging systems. This WormToolbox is available via the open-source CellProfiler project and enables objective scoring of whole-animal high-throughput image-based assays of C. elegans for the study of diverse biological pathways relevant to human disease. PMID:22522656
High-throughput, image-based screening of pooled genetic variant libraries
Emanuel, George; Moffitt, Jeffrey R.; Zhuang, Xiaowei
2018-01-01
Image-based, high-throughput screening of genetic perturbations will advance both biology and biotechnology. We report a high-throughput screening method that allows diverse genotypes and corresponding phenotypes to be imaged in numerous individual cells. We achieve genotyping by introducing barcoded genetic variants into cells and using massively multiplexed FISH to measure the barcodes. We demonstrated this method by screening mutants of the fluorescent protein YFAST, yielding brighter and more photostable YFAST variants. PMID:29083401
Multi-step high-throughput conjugation platform for the development of antibody-drug conjugates.
Andris, Sebastian; Wendeler, Michaela; Wang, Xiangyang; Hubbuch, Jürgen
2018-07-20
Antibody-drug conjugates (ADCs) form a rapidly growing class of biopharmaceuticals which attracts a lot of attention throughout the industry due to its high potential for cancer therapy. They combine the specificity of a monoclonal antibody (mAb) and the cell-killing capacity of highly cytotoxic small molecule drugs. Site-specific conjugation approaches involve a multi-step process for covalent linkage of antibody and drug via a linker. Despite the range of parameters that have to be investigated, high-throughput methods are scarcely used so far in ADC development. In this work an automated high-throughput platform for a site-specific multi-step conjugation process on a liquid-handling station is presented by use of a model conjugation system. A high-throughput solid-phase buffer exchange was successfully incorporated for reagent removal by utilization of a batch cation exchange step. To ensure accurate screening of conjugation parameters, an intermediate UV/Vis-based concentration determination was established including feedback to the process. For conjugate characterization, a high-throughput compatible reversed-phase chromatography method with a runtime of 7 min and no sample preparation was developed. Two case studies illustrate the efficient use for mapping the operating space of a conjugation process. Due to the degree of automation and parallelization, the platform is capable of significantly reducing process development efforts and material demands and shorten development timelines for antibody-drug conjugates. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Yan, Zongkai; Zhang, Xiaokun; Li, Guang; Cui, Yuxing; Jiang, Zhaolian; Liu, Wen; Peng, Zhi; Xiang, Yong
2018-01-01
Conventional wet-process methods for designing and preparing thin films remain challenging because they are time-consuming and inefficient, which hinders the development of novel materials. Herein, we present a high-throughput combinatorial technique for continuous thin film preparation based on chemical bath deposition (CBD). The method is ideally suited to preparing high-throughput combinatorial material libraries with low decomposition temperatures and high water or oxygen sensitivity at relatively high temperature. To test this system, a Cu(In, Ga)Se (CIGS) thin-film library doped with 0-19.04 at.% of antimony (Sb) was taken as an example to systematically evaluate the effect of varying Sb doping concentration on the grain growth, structure, morphology, and electrical properties of CIGS thin films. Combined with Energy Dispersive Spectrometry (EDS), X-ray Photoelectron Spectroscopy (XPS), automated X-ray Diffraction (XRD) for rapid screening, and Localized Electrochemical Impedance Spectroscopy (LEIS), it was confirmed that this combinatorial high-throughput system can systematically identify the composition with the optimal grain orientation growth, microstructure, and electrical properties by accurately monitoring the doping content and material composition. Based on the characterization results, an Sb2Se3 quasi-liquid-phase-promoted CIGS film-growth model is put forward. Beyond the CIGS thin films reported here, combinatorial CBD could also be applied to the high-throughput screening of other sulfide thin-film material systems.
TipMT: Identification of PCR-based taxon-specific markers.
Rodrigues-Luiz, Gabriela F; Cardoso, Mariana S; Valdivia, Hugo O; Ayala, Edward V; Gontijo, Célia M F; Rodrigues, Thiago de S; Fujiwara, Ricardo T; Lopes, Robson S; Bartholomeu, Daniella C
2017-02-11
Molecular genetic markers are among the most informative and widely used genome features in clinical and environmental diagnostic studies. A polymerase chain reaction (PCR)-based molecular marker is very attractive because it is suitable for high-throughput automation and confers high specificity. However, the design of taxon-specific primers may be difficult and time-consuming due to the need to identify appropriate genomic regions for annealing primers and to evaluate primer specificity. Here, we report the development of a Tool for Identification of Primers for Multiple Taxa (TipMT), a web application to search for and design primers for genotyping based on genomic data. The tool identifies and targets single sequence repeats (SSR) or orthologous/taxon-specific genes for genotyping using multiplex PCR. This pipeline was applied to the genomes of four species of Leishmania (L. amazonensis, L. braziliensis, L. infantum and L. major) and validated by PCR using artificial genomic DNA mixtures of the Leishmania species as templates. This experimental validation demonstrates the reliability of TipMT, because the amplification profiles discriminated genomic DNA samples from the different Leishmania species. The TipMT web tool allows for large-scale identification and design of taxon-specific primers and is freely available to the scientific community at http://200.131.37.155/tipMT/ .
Ensembler: Enabling High-Throughput Molecular Simulations at the Superfamily Scale.
Parton, Daniel L; Grinaway, Patrick B; Hanson, Sonya M; Beauchamp, Kyle A; Chodera, John D
2016-06-01
The rapidly expanding body of available genomic and protein structural data provides a rich resource for understanding protein dynamics with biomolecular simulation. While computational infrastructure has grown rapidly, simulations on an omics scale are not yet widespread, primarily because software infrastructure to enable simulations at this scale has not kept pace. It should now be possible to study protein dynamics across entire (super)families, exploiting both available structural biology data and conformational similarities across homologous proteins. Here, we present a new tool for enabling high-throughput simulation in the genomics era. Ensembler takes any set of sequences - from a single sequence to an entire superfamily - and shepherds them through various stages of modeling and refinement to produce simulation-ready structures. This includes comparative modeling to all relevant PDB structures (which may span multiple conformational states of interest), reconstruction of missing loops, addition of missing atoms, culling of nearly identical structures, assignment of appropriate protonation states, solvation in explicit solvent, and refinement and filtering with molecular simulation to ensure stable simulation. The output of this pipeline is an ensemble of structures ready for subsequent molecular simulations using computer clusters, supercomputers, or distributed computing projects like Folding@home. Ensembler thus automates much of the time-consuming process of preparing protein models suitable for simulation, while allowing scalability up to entire superfamilies. A particular advantage of this approach can be found in the construction of kinetic models of conformational dynamics - such as Markov state models (MSMs) - which benefit from a diverse array of initial configurations that span the accessible conformational states to aid sampling. We demonstrate the power of this approach by constructing models for all catalytic domains in the human tyrosine kinase family, using all available kinase catalytic domain structures from any organism as structural templates. Ensembler is free and open source software licensed under the GNU General Public License (GPL) v2. It is compatible with Linux and OS X. The latest release can be installed via the conda package manager, and the latest source can be downloaded from https://github.com/choderalab/ensembler.
Digital Biomass Accumulation Using High-Throughput Plant Phenotype Data Analysis.
Rahaman, Md Matiur; Ahsan, Md Asif; Gillani, Zeeshan; Chen, Ming
2017-09-01
Biomass is an important phenotypic trait in functional ecology and growth analysis. The typical methods for measuring biomass are destructive, and they require numerous individuals to be cultivated for repeated measurements. With the advent of image-based high-throughput plant phenotyping facilities, non-destructive biomass measuring methods have attempted to overcome this problem. Thus, the estimation of the biomass of individual plants from their digital images is becoming more important. In this paper, we propose an approach to biomass estimation based on image-derived phenotypic traits. Several image-based biomass studies estimate plant biomass as a linear function of the projected plant area in images alone. However, we modeled the plant volume as a function of plant area, plant compactness, and plant age to generalize the linear biomass model. The obtained results confirm the proposed model, which can explain most of the observed variance during image-derived biomass estimation. Moreover, only a small difference was observed between actual and estimated digital biomass, which indicates that our proposed approach can estimate digital biomass accurately.
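The generalized model described above is, at heart, an ordinary least-squares fit of volume against several image-derived predictors. A minimal sketch, assuming a linear form volume ≈ b0 + b1·area + b2·compactness + b3·age and solved via the normal equations; the variable names and toy data are illustrative, not the paper's:

```python
# Sketch of a generalized linear biomass model: fit
#   volume ~ b0 + b1*area + b2*compactness + b3*age
# by ordinary least squares (normal equations), using a small
# pure-Python Gaussian elimination. Illustrative only.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_biomass(rows):
    """rows: (area, compactness, age, volume) tuples; returns [b0, b1, b2, b3]."""
    X = [[1.0, a, c, t] for a, c, t, _ in rows]
    y = [v for *_, v in rows]
    k = 4
    # Normal equations: (X^T X) b = X^T y
    XtX = [[sum(X[i][p] * X[i][q] for i in range(len(X))) for q in range(k)]
           for p in range(k)]
    Xty = [sum(X[i][p] * y[i] for i in range(len(X))) for p in range(k)]
    return solve(XtX, Xty)
```

Dropping the compactness and age columns recovers the simpler "biomass is linear in projected area" model that the paper generalizes.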
Microfluidics for cell-based high throughput screening platforms - A review.
Du, Guansheng; Fang, Qun; den Toonder, Jaap M J
2016-01-15
In recent decades, the basic microfluidic techniques for the study of cells, such as cell culture, cell separation, and cell lysis, have been well developed. Building on these cell handling techniques, microfluidics has been widely applied in the fields of PCR (Polymerase Chain Reaction), immunoassays, organ-on-a-chip systems, stem cell research, and the analysis and identification of circulating tumor cells. As a major step in drug discovery, high-throughput screening allows rapid analysis of thousands of chemical, biochemical, genetic, or pharmacological tests in parallel. In this review, we summarize the application of microfluidics in cell-based high-throughput screening. The screening methods discussed include approaches using the perfusion flow mode, the droplet mode, and the microarray mode. We also discuss the future development of microfluidic-based high-throughput screening platforms for drug discovery. Copyright © 2015 Elsevier B.V. All rights reserved.
Forecasting Container Throughput at the Doraleh Port in Djibouti through Time Series Analysis
NASA Astrophysics Data System (ADS)
Mohamed Ismael, Hawa; Vandyck, George Kobina
The Doraleh Container Terminal (DCT) located in Djibouti has been noted as the most technologically advanced container terminal on the African continent. DCT's strategic location at the crossroads of the main shipping lanes connecting Asia, Africa, and Europe puts it in a unique position to provide important shipping services to vessels plying that route. This paper aims to forecast container throughput through the Doraleh Container Port in Djibouti by time series analysis. A selection of univariate forecasting models has been used, namely the Triple Exponential Smoothing Model, the Grey Model, and the Linear Regression Model. By utilizing these three models and their combination, a forecast of container throughput through the Doraleh port was produced. A comparison of the forecasting results of the three models and of the combination forecast was then undertaken, based on the commonly used evaluation criteria of Mean Absolute Deviation (MAD) and Mean Absolute Percentage Error (MAPE). The study found that the Linear Regression Model was the best method for forecasting container throughput, since its forecast error was the smallest. Based on the regression model, a ten (10) year forecast of container throughput at DCT was made.
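The model-selection step described above reduces to computing MAD and MAPE for each candidate forecast and keeping the model with the smallest error. A minimal sketch; the throughput numbers and model names below are illustrative placeholders, not DCT data:

```python
# MAD/MAPE evaluation of competing forecasts against observed values,
# as used to rank the candidate models. Illustrative only.

def mad(actual, forecast):
    """Mean Absolute Deviation."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error (in %); assumes no zero actuals."""
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

def best_model(actual, forecasts):
    """forecasts: dict mapping model name -> forecast series; rank by MAPE."""
    return min(forecasts, key=lambda name: mape(actual, forecasts[name]))
```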
Automatic Segmentation of High-Throughput RNAi Fluorescent Cellular Images
Yan, Pingkum; Zhou, Xiaobo; Shah, Mubarak; Wong, Stephen T. C.
2010-01-01
High-throughput genome-wide RNA interference (RNAi) screening is emerging as an essential tool to assist biologists in understanding complex cellular processes. The large number of images produced in each study makes manual analysis intractable; hence, automatic cellular image analysis becomes an urgent need, where segmentation is the first and one of the most important steps. In this paper, a fully automatic method for segmentation of cells from genome-wide RNAi screening images is proposed. Nuclei are first extracted from the DNA channel by using a modified watershed algorithm. Cells are then extracted by modeling the interaction between them as well as combining both gradient and region information in the Actin and Rac channels. A new energy functional is formulated based on a novel interaction model for segmenting tightly clustered cells with significant intensity variance and specific phenotypes. The energy functional is minimized by using a multiphase level set method, which leads to a highly effective cell segmentation method. Promising experimental results demonstrate that automatic segmentation of high-throughput genome-wide multichannel screening images can be achieved by using the proposed method, which may also be extended to other multichannel image segmentation problems. PMID:18270043
Renkecz, Tibor; László, Krisztina; Horváth, Viola
2012-06-01
There is a growing need in membrane separations for novel membrane materials providing selective retention. Molecularly imprinted polymers (MIPs) are promising candidates for membrane functionalization. In this work, a novel approach is described to prepare composite membrane adsorbers by incorporating molecularly imprinted microparticles or nanoparticles into commercially available macroporous filtration membranes. The polymerization is carried out in highly viscous polymerization solvents, and the particles are formed in situ in the pores of the support membrane. MIP particle composite membranes selective for terbutylazine were prepared and characterized by scanning electron microscopy and N₂ porosimetry. By varying the polymerization solvent, microparticles or nanoparticles with diameters ranging from several hundred nanometers to 1 µm could be embedded into the support. The permeability of the membranes was in the range of 1000 to 20,000 L m⁻² h⁻¹ bar⁻¹. The imprinted composite membranes showed high MIP/NIP (non-imprinted polymer) selectivity for the template in organic media, both in equilibrium rebinding measurements and in filtration experiments. The solid-phase extraction of a mixture of the template, its analogs, and an unrelated compound demonstrated both the MIP/NIP selectivity and the substance selectivity of the new molecularly imprinted membrane. The synthesis technique offers potential for the cost-effective production of selective membrane adsorbers with high capacity and high throughput. Copyright © 2012 John Wiley & Sons, Ltd.
High-throughput bioinformatics with the Cyrille2 pipeline system
Fiers, Mark WEJ; van der Burgt, Ate; Datema, Erwin; de Groot, Joost CW; van Ham, Roeland CHJ
2008-01-01
Background: Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results: We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: 1) a web-based, graphical user interface (GUI) that enables a pipeline operator to manage the system; 2) the Scheduler, which forms the functional core of the system, tracks what data enters the system, and determines what jobs must be scheduled for execution; and 3) the Executor, which searches for scheduled jobs and executes them on a compute cluster. Conclusion: The Cyrille2 system is an extensible, modular system, implementing the stated requirements. Cyrille2 enables easy creation and execution of high-throughput, flexible bioinformatics pipelines. PMID:18269742
Heiger-Bernays, Wendy J; Wegner, Susanna; Dix, David J
2018-01-16
The presence of industrial chemicals, consumer product chemicals, and pharmaceuticals is well documented in waters in the U.S. and globally. Most of these chemicals lack health-protective guidelines and many have been shown to have endocrine bioactivity. There is currently no systematic or national prioritization for monitoring waters for chemicals with endocrine disrupting activity. We propose ambient water bioactivity concentrations (AWBCs) generated from high throughput data as a health-based screen for endocrine bioactivity of chemicals in water. The U.S. EPA ToxCast program has screened over 1800 chemicals for estrogen receptor (ER) and androgen receptor (AR) pathway bioactivity. AWBCs are calculated for 110 ER and 212 AR bioactive chemicals using high throughput ToxCast data from in vitro screening assays and predictive pathway models, high-throughput toxicokinetic data, and data-driven assumptions about consumption of water. Chemical-specific AWBCs are compared with measured water concentrations in data sets from the greater Denver area, Minnesota lakes, and Oregon waters, demonstrating a framework for identifying endocrine bioactive chemicals. This approach can be used to screen potential cumulative endocrine activity in drinking water and to inform prioritization of future monitoring, chemical testing and pollution prevention efforts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chanda, M.; Rempel, G.L.
A new process has been developed for making granular gel-type sorbents from chelating resins using a metal ion as template. Named templated gel-filling, the process uses the chosen metal as a templating host ion on high-surface-area silica to build a templated gel layer from a solution of the chelating resin in a suitable solvent in which the resin is soluble but its metal complex is insoluble. After cross-linking the templated gel layer, the silica support is removed by alkali to produce a hollow shell of the templated gel. The shells are then soaked in a concentrated aqueous solution of the same metal ion and suspended in the same resin solution to afford gel-filling. The shells thus filled with metal-templated gel are treated with cross-linking agent, followed by acid to remove the template ion and activate the resin for metal sorption. Poly(ethyleneimine) and its partially ethylated derivative have been used to produce granular gel-type sorbents by this process, with Cu(II) as the template ion. These sorbents are found to offer high capacity and selectivity for copper over nickel, cobalt, and zinc in both acidic and alkaline media. Containing a relatively high fraction of imbibed water, the sorbents exhibit markedly enhanced rate behavior, in both sorption and stripping.
20180312 - Applying a High-Throughput PBTK Model for IVIVE (SOT)
The ability to link in vitro and in vivo toxicity enables the use of high-throughput in vitro assays as an alternative to resource intensive animal studies. Toxicokinetics (TK) should help describe this link, but prior work found weak correlation when using a TK model for in vitr...
In vitro, high-throughput approaches have been widely recommended as an approach to screen chemicals for the potential to cause developmental neurotoxicity and prioritize them for additional testing. The choice of cellular models for such an approach will have important ramificat...
High-throughput exposure modeling to support prioritization of chemicals in personal care products
We demonstrate the application of a high-throughput modeling framework to estimate exposure to chemicals used in personal care products (PCPs). As a basis for estimating exposure, we use the product intake fraction (PiF), defined as the mass of chemical taken by an individual or ...
Single and Multiple Object Tracking Using a Multi-Feature Joint Sparse Representation.
Hu, Weiming; Li, Wei; Zhang, Xiaoqin; Maybank, Stephen
2015-04-01
In this paper, we propose a tracking algorithm based on a multi-feature joint sparse representation. The templates for the sparse representation can include pixel values, textures, and edges. In the multi-feature joint optimization, noise or occlusion is dealt with using a set of trivial templates. A sparse weight constraint is introduced to dynamically select the relevant templates from the full set of templates. A variance ratio measure is adopted to adaptively adjust the weights of different features. The multi-feature template set is updated adaptively. We further propose an algorithm for tracking multiple objects with occlusion handling, based on the multi-feature joint sparse reconstruction. The observation model based on sparse reconstruction automatically focuses on the visible parts of an occluded object by using the information in the trivial templates. The multi-object tracking is simplified into a joint Bayesian inference. The experimental results show the superiority of our algorithm over several state-of-the-art tracking algorithms.
An Agent-Based Modeling Template for a Cohort of Veterans with Diabetic Retinopathy.
Day, Theodore Eugene; Ravi, Nathan; Xian, Hong; Brugh, Ann
2013-01-01
Agent-based models are valuable for examining systems where large numbers of discrete individuals interact with each other, or with some environment. Diabetic Veterans seeking eye care at a Veterans Administration hospital represent one such cohort. The objective of this study was to develop an agent-based template to be used as a model for a patient with diabetic retinopathy (DR). This template may be replicated arbitrarily many times in order to generate a large cohort which is representative of a real-world population, upon which in silico experimentation may be conducted. Agent-based template development was performed in the Java-based computer simulation suite AnyLogic Professional 6.6. The model was informed by medical data abstracted from 535 patient records representing a retrospective cohort of current patients of the VA St. Louis Healthcare System Eye Clinic. Logistic regression was performed to determine the predictors associated with advancing stages of DR. Predicted probabilities obtained from logistic regression were used to generate the stage of DR in the simulated cohort. The simulated cohort of DR patients exhibited no significant deviation from the test population of real-world patients in the proportion of stages of DR, duration of diabetes mellitus (DM), or the other abstracted predictors. After 10 simulated years, patients were significantly more likely to exhibit proliferative DR (P<0.001). Agent-based modeling is an emerging platform, capable of simulating large cohorts of individuals based on manageable data abstraction efforts. The modeling method described may be useful in simulating many different conditions whose course is described in categorical stages.
Pathak, Jyotishman; Bailey, Kent R; Beebe, Calvin E; Bethard, Steven; Carrell, David S; Chen, Pei J; Dligach, Dmitriy; Endle, Cory M; Hart, Lacey A; Haug, Peter J; Huff, Stanley M; Kaggal, Vinod C; Li, Dingcheng; Liu, Hongfang; Marchant, Kyle; Masanz, James; Miller, Timothy; Oniki, Thomas A; Palmer, Martha; Peterson, Kevin J; Rea, Susan; Savova, Guergana K; Stancl, Craig R; Sohn, Sunghwan; Solbrig, Harold R; Suesse, Dale B; Tao, Cui; Taylor, David P; Westberg, Les; Wu, Stephen; Zhuo, Ning; Chute, Christopher G
2013-01-01
Research objective: To develop scalable informatics infrastructure for normalization of both structured and unstructured electronic health record (EHR) data into a unified, concept-based model for high-throughput phenotype extraction. Materials and methods: Software tools and applications were developed to extract information from EHRs. Representative and convenience samples of both structured and unstructured data from two EHR systems—Mayo Clinic and Intermountain Healthcare—were used for development and validation. Extracted information was standardized and normalized to meaningful use (MU) conformant terminology and value set standards using Clinical Element Models (CEMs). These resources were used to demonstrate semi-automatic execution of MU clinical-quality measures modeled using the Quality Data Model (QDM) and an open-source rules engine. Results: Using CEMs and open-source natural language processing and terminology services engines—namely, Apache clinical Text Analysis and Knowledge Extraction System (cTAKES) and Common Terminology Services (CTS2)—we developed a data-normalization platform that ensures data security, end-to-end connectivity, and reliable data flow within and across institutions. We demonstrated the applicability of this platform by executing a QDM-based MU quality measure that determines the percentage of patients between 18 and 75 years with diabetes whose most recent low-density lipoprotein cholesterol test result during the measurement year was <100 mg/dL on a randomly selected cohort of 273 Mayo Clinic patients. The platform identified 21 and 18 patients for the denominator and numerator of the quality measure, respectively. Validation results indicate that all identified patients meet the QDM-based criteria.
Conclusions: End-to-end automated systems for extracting clinical information from diverse EHR systems require extensive use of standardized vocabularies and terminologies, as well as robust information models for storing, discovering, and processing that information. This study demonstrates the application of modular and open-source resources for enabling secondary use of EHR data through normalization into standards-based, comparable, and consistent format for high-throughput phenotyping to identify patient cohorts. PMID:24190931
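The quality-measure logic described above (denominator: patients aged 18-75 with diabetes; numerator: those whose most recent LDL-C result is <100 mg/dL) can be sketched over normalized records. The record layout below is a hypothetical simplification, not the actual CEM schema:

```python
# Sketch of the QDM-style quality-measure computation on normalized
# patient records. Field names ("age", "has_diabetes", "ldl_results")
# are illustrative assumptions, not the CEM model itself.

def quality_measure(patients):
    """Return (denominator, numerator) counts for the LDL-C measure."""
    denominator = [p for p in patients
                   if 18 <= p["age"] <= 75 and p["has_diabetes"]]
    numerator = [p for p in denominator
                 if p["ldl_results"] and
                 max(p["ldl_results"], key=lambda r: r["date"])["value"] < 100]
    return len(denominator), len(numerator)
```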
Structure and folding of the Tetrahymena telomerase RNA pseudoknot
Cash, Darian D.; Feigon, Juli
2016-11-28
Telomerase maintains telomere length at the ends of linear chromosomes using an integral telomerase RNA (TER) and telomerase reverse transcriptase (TERT). An essential part of TER is the template/pseudoknot domain (t/PK), which includes the template for adding telomeric repeats, the template boundary element (TBE), and the pseudoknot, enclosed in a circle by stem 1. The Tetrahymena telomerase holoenzyme catalytic core (p65-TER-TERT) was recently modeled in our 9 Å resolution cryo-electron microscopy map by fitting protein and TER domains, including a solution NMR structure of the Tetrahymena pseudoknot. Here, we describe in detail the structure and folding of the isolated pseudoknot, which forms a compact structure with major-groove U•A-U and novel C•G-A+ base triples. Base substitutions that disrupt the base triples reduce telomerase activity in vitro. NMR studies also reveal that the pseudoknot does not form in the context of full-length TER in the absence of TERT, due to formation of a competing structure that sequesters pseudoknot residues. The residues around the TBE remain unpaired, potentially providing access by TERT to this high-affinity binding site during an early step in TERT-TER assembly. A model for the assembly pathway of the catalytic core is proposed.
Effects of Channel Modification on Detection and Dating of Fault Scarps
NASA Astrophysics Data System (ADS)
Sare, R.; Hilley, G. E.
2016-12-01
Template matching of scarp-like features could potentially generate morphologic age estimates for individual scarps over entire regions, but data noise and scarp modification limit the detection of fault scarps by this method. Template functions based on diffusion in the cross-scarp direction may fail to accurately date scarps near channel boundaries. Where channels reduce scarp amplitudes, or where cross-scarp noise is significant, signal-to-noise ratios decrease and the scarp may be poorly resolved. In this contribution, we explore the bias in the morphologic age of a complex scarp produced by systematic changes in fault scarp curvature. For example, fault scarps may be modified by encroaching channel banks and mass failure, lateral diffusion of material into a channel, or undercutting parallel to the base of a scarp. We quantify such biases on morphologic age estimates using a block offset model subject to two-dimensional linear diffusion. We carry out a synthetic study of the effects of two-dimensional transport on morphologic age calculated using a profile model, and compare these results to a well-studied and constrained site along the San Andreas Fault at Wallace Creek, CA. This study serves as a first step towards defining regions of high confidence in template matching results based on scarp length, channel geometry, and near-scarp topography.
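The profile model the abstract builds on is the standard single-event scarp degradation solution: an initially vertical scarp of total offset a, degraded by linear diffusion, has elevation u(x) = (a/2)·erf(x / (2√(κt))), so the "morphologic age" κt can be recovered by inverting the maximum slope a / (2√(π·κt)). This one-dimensional form is stated here as an assumption (the paper's block offset model is two-dimensional):

```python
import math

# Standard 1-D fault-scarp diffusion model (single event, linear
# diffusion): elevation profile and inversion of the maximum slope
# for the morphologic age kt (diffusivity * time). Illustrative sketch.

def scarp_profile(x, offset, kt):
    """Elevation at cross-scarp distance x for total offset and age kt."""
    return 0.5 * offset * math.erf(x / (2.0 * math.sqrt(kt)))

def morphologic_age(offset, max_slope):
    """Invert max_slope = offset / (2*sqrt(pi*kt)) for kt."""
    return (offset / (2.0 * max_slope)) ** 2 / math.pi
```

Channel modification that reduces the apparent offset or steepens/flattens the measured maximum slope biases the recovered kt, which is the effect the synthetic study above quantifies.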
High throughput single cell counting in droplet-based microfluidics.
Lu, Heng; Caen, Ouriel; Vrignon, Jeremy; Zonta, Eleonora; El Harrak, Zakaria; Nizard, Philippe; Baret, Jean-Christophe; Taly, Valérie
2017-05-02
Droplet-based microfluidics is extensively and increasingly used for high-throughput single-cell studies. However, the accuracy of the cell counting method directly impacts the robustness of such studies. We describe here a simple and precise method to accurately count large numbers of adherent and non-adherent human cells as well as bacteria. Our microfluidic hemocytometer provides statistically relevant data on large populations of cells at high throughput, and was used to characterize cell encapsulation and cell viability during incubation in droplets.
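Cell encapsulation in droplets is commonly modeled as a Poisson process: with a mean of λ cells per droplet, the fraction of droplets holding exactly k cells is λᵏe^(−λ)/k!. The sketch below uses this standard model (an assumption about the field's usual framework, not this paper's specific calibration) to show why dilute loading is needed for single-cell occupancy:

```python
import math

# Poisson statistics of cell encapsulation in droplets:
# P(k cells) = lam**k * exp(-lam) / k!  for mean occupancy lam.

def poisson_fraction(lam, k):
    """Fraction of droplets containing exactly k cells."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

def singlet_purity(lam):
    """Among occupied droplets, the fraction containing exactly one cell."""
    occupied = 1.0 - poisson_fraction(lam, 0)
    return poisson_fraction(lam, 1) / occupied
```

At λ = 0.1, about 90% of droplets are empty, but over 95% of the occupied droplets hold a single cell, which is the usual trade-off in droplet-based single-cell work.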
Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER.
Ferreira, Miguel; Roma, Nuno; Russo, Luis M S
2014-05-30
HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar's striped processing pattern with the Intel SSE2 instruction set extension. A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of cache locality. This optimization, together with an improved loading of the emission scores, achieves a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder on DNA and protein datasets, proving to be a competitive alternative implementation. It is always faster than the already highly optimized ViterbiFilter implementation of HMMER3, provides a constant throughput, and offers a speedup of up to two times, depending on the model's size.
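For readers unfamiliar with the dynamic program being vectorized, here is a minimal scalar Viterbi decoder. HMMER's actual filter operates on profile HMMs in a striped SIMD layout; this plain-Python version only illustrates the underlying recurrence, with a textbook toy model rather than anything HMMER-specific:

```python
# Minimal scalar Viterbi decoder: for each time step and state, keep
# the highest-probability path ending in that state. Illustrative only;
# real implementations work in log space and, in HMMER, in striped SIMD.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (best_score, best_path) for an observation sequence."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for t in range(1, len(obs)):
        col = {}
        for s in states:
            score, path = max(
                (V[t - 1][prev][0] * trans_p[prev][s] * emit_p[s][obs[t]],
                 V[t - 1][prev][1])
                for prev in states)
            col[s] = (score, path + [s])
        V.append(col)
    return max(V[-1].values())
```

The inter-task parallelization described above amounts to running many such decodings simultaneously, one per SIMD lane, instead of vectorizing within a single decoding.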
Chen, Cheng; Wang, Wei; Ozolek, John A.; Rohde, Gustavo K.
2013-01-01
We describe a new supervised learning-based template matching approach for segmenting cell nuclei from microscopy images. The method uses examples selected by a user to build a statistical model which captures the texture and shape variations of the nuclear structures from a given dataset to be segmented. Segmentation of subsequent, unlabeled, images is then performed by finding the model instance that best matches (in the normalized cross-correlation sense) the local neighborhood in the input image. We demonstrate the application of our method to segmenting nuclei from a variety of imaging modalities, and quantitatively compare our results to several other methods. Quantitative results using both simulated and real image data show that, while certain methods may work well for certain imaging modalities, our software is able to obtain high accuracy across the several imaging modalities studied. Results also demonstrate that, relative to several existing methods, the proposed template-based method is more robust: it better handles variations in illumination and in texture from different imaging modalities, provides smoother and more accurate segmentation borders, and better handles cluttered nuclei. PMID:23568787
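The matching criterion named above, normalized cross-correlation (NCC), scores a template against each same-sized image window after subtracting means and normalizing by the standard deviations, which makes it insensitive to uniform illumination shifts. A minimal, unoptimized sketch of that step (not the paper's full statistical-model pipeline):

```python
import math

# Normalized cross-correlation template matching: slide a template over
# an image (stored row-major as a flat list) and return the location of
# the best-scoring window. Pure-Python and illustrative only.

def ncc(patch, template):
    """NCC of two equal-length pixel lists; 0.0 if either is flat."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    den = math.sqrt(sum((p - mp) ** 2 for p in patch) *
                    sum((t - mt) ** 2 for t in template))
    return num / den if den else 0.0

def best_match(image, template, h, w, th, tw):
    """image: h*w pixels row-major; template: th*tw pixels. Returns (row, col)."""
    best = (-2.0, (0, 0))
    for r in range(h - th + 1):
        for c in range(w - tw + 1):
            patch = [image[(r + i) * w + (c + j)]
                     for i in range(th) for j in range(tw)]
            best = max(best, (ncc(patch, template), (r, c)))
    return best[1]
```

Because both patch and template are mean-centered and variance-normalized, a perfect match scores exactly 1.0 regardless of brightness offset or contrast scaling.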
NASA Astrophysics Data System (ADS)
Maillard, Philippe; Gomes, Marília F.
2016-06-01
This article presents an original algorithm created to detect and count trees in orchards using very high resolution images. The algorithm is based on an adaptation of the "template matching" image processing approach, in which the template is based on a "geometrical-optical" model created from a series of parameters, such as illumination angles, maximum and ambient radiance, and tree size specifications. The algorithm is tested on four images from different regions of the world and different crop types. These images all have < 1 meter spatial resolution and were downloaded from the Google Earth application. Results show that the algorithm is very efficient at detecting and counting trees as long as their spectral and spatial characteristics are relatively constant. For walnut, mango and orange trees, the overall accuracy was clearly above 90%. However, the overall success rate for apple trees fell under 75%. It appears that the openness of the apple tree crown is most probably responsible for this poorer result. The algorithm is fully explained with a step-by-step description. At this stage, the algorithm still requires quite a bit of user interaction. The automatic determination of most of the required parameters is under development.
Comparative modeling without implicit sequence alignments.
Kolinski, Andrzej; Gront, Dominik
2007-10-01
The number of known protein sequences is about a thousand times larger than the number of experimentally solved 3D structures. For more than half of the protein sequences, a close or distant structural analog can be identified. The key starting point in classical comparative modeling is to generate the best possible sequence alignment with a template or templates. With decreasing sequence similarity, the number of errors in the alignments increases, and these errors are the main causes of the decreasing accuracy of the molecular models generated. Here we propose a new approach to comparative modeling which does not require the implicit alignment: the model building phase explores geometric, evolutionary and physical properties of a template (or templates). The proposed method requires prior identification of a template, although the initial sequence alignment is ignored. The model is built using a very efficient reduced-representation search engine, CABS, to find the best possible superposition of the query protein onto the template represented as a 3D multi-featured scaffold. The criteria used include: sequence similarity, predicted secondary structure consistency, local geometric features and hydrophobicity profile. For more difficult cases, the new method qualitatively outperforms existing schemes of comparative modeling. The algorithm unifies de novo modeling, 3D threading and sequence-based methods. The main idea is general and could easily be combined with other efficient modeling tools such as Rosetta, UNRES and others.
Shin, Hyeong-Moo; Ernstoff, Alexi; Arnot, Jon A.; ...
2015-05-01
We present a risk-based high-throughput screening (HTS) method to identify chemicals for potential health concerns or for which additional information is needed. The method is applied to 180 organic chemicals as a case study. We first obtain information on how the chemical is used and identify relevant use scenarios (e.g., dermal application, indoor emissions). For each chemical and use scenario, exposure models are then used to calculate a chemical intake fraction, or a product intake fraction, accounting for chemical properties and the exposed population. We then combine these intake fractions with use scenario-specific estimates of chemical quantity to calculate daily intake rates (iR; mg/kg/day). These intake rates are compared to oral equivalent doses (OED; mg/kg/day), calculated from a suite of ToxCast in vitro bioactivity assays using in vitro-to-in vivo extrapolation and reverse dosimetry. Bioactivity quotients (BQs) are calculated as iR/OED to obtain estimates of potential impact associated with each relevant use scenario. Of the 180 chemicals considered, 38 had maximum iRs exceeding minimum OEDs (i.e., BQs > 1). For most of these compounds, exposures are associated with direct intake, food/oral contact, or dermal exposure. The method provides high-throughput estimates of exposure and important input for decision makers to identify chemicals of concern for further evaluation with additional information or more refined models.
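The screening arithmetic above reduces to the ratio BQ = iR/OED, with BQ > 1 flagging a chemical for further evaluation. A minimal sketch; the chemical names and values below are illustrative placeholders, not ToxCast data:

```python
# Bioactivity quotient screen: BQ = intake rate / oral equivalent dose,
# both in mg/kg/day. Chemicals with BQ > 1 are flagged. Illustrative only.

def bioactivity_quotient(intake_rate, oed):
    """BQ > 1 means estimated intake exceeds the bioactive dose."""
    return intake_rate / oed

def chemicals_of_concern(table):
    """table: dict mapping chemical name -> (max intake rate, min OED)."""
    return sorted(name for name, (ir, oed) in table.items()
                  if bioactivity_quotient(ir, oed) > 1.0)
```

Using the maximum iR and minimum OED per chemical, as in the study, makes the screen deliberately conservative: a chemical is flagged if any use scenario could plausibly exceed any bioactive dose estimate.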
Volkmann, Niels
2004-01-01
Reduced representation templates are used in a real-space pattern matching framework to facilitate automatic particle picking from electron micrographs. The procedure consists of five parts. First, reduced templates are constructed either from models or directly from the data. Second, a real-space pattern matching algorithm is applied using the reduced representations as templates. Third, peaks are selected from the resulting score map using peak-shape characteristics. Fourth, the surviving peaks are tested for distance constraints. Fifth, a correlation-based outlier screening is applied. Test applications to a data set of keyhole limpet hemocyanin particles indicate that the method is robust and reliable.
Stepping into the omics era: Opportunities and challenges for biomaterials science and engineering.
Groen, Nathalie; Guvendiren, Murat; Rabitz, Herschel; Welsh, William J; Kohn, Joachim; de Boer, Jan
2016-04-01
The research paradigm in biomaterials science and engineering is evolving from using low-throughput and iterative experimental designs towards high-throughput experimental designs for materials optimization and the evaluation of materials properties. Computational science plays an important role in this transition. With the emergence of the omics approach in the biomaterials field, referred to as materiomics, high-throughput approaches hold the promise of tackling the complexity of materials and understanding correlations between material properties and their effects on complex biological systems. The intrinsic complexity of biological systems is an important factor that is often oversimplified when characterizing biological responses to materials and establishing property-activity relationships. Indeed, in vitro tests designed to predict in vivo performance of a given biomaterial are largely lacking as we are not able to capture the biological complexity of whole tissues in an in vitro model. In this opinion paper, we explain how we reached our opinion that converging genomics and materiomics into a new field would enable a significant acceleration of the development of new and improved medical devices. The use of computational modeling to correlate high-throughput gene expression profiling with high throughput combinatorial material design strategies would add power to the analysis of biological effects induced by material properties. We believe that this extra layer of complexity on top of high-throughput material experimentation is necessary to tackle the biological complexity and further advance the biomaterials field. Copyright © 2016. Published by Elsevier Ltd.
CyTOF workflow: differential discovery in high-throughput high-dimensional cytometry datasets
Nowicka, Malgorzata; Krieg, Carsten; Weber, Lukas M.; Hartmann, Felix J.; Guglietta, Silvia; Becher, Burkhard; Levesque, Mitchell P.; Robinson, Mark D.
2017-01-01
High-dimensional mass and flow cytometry (HDCyto) experiments have become a method of choice for high-throughput interrogation and characterization of cell populations. Here, we present an R-based pipeline for differential analyses of HDCyto data, largely based on Bioconductor packages. We computationally define cell populations using FlowSOM clustering, and facilitate an optional but reproducible strategy for manual merging of algorithm-generated clusters. Our workflow offers different analysis paths, including association of cell type abundance with a phenotype or changes in signaling markers within specific subpopulations, or differential analyses of aggregated signals. Importantly, the differential analyses we show are based on regression frameworks where the HDCyto data is the response; thus, we are able to model arbitrary experimental designs, such as those with batch effects, paired designs and so on. In particular, we apply generalized linear mixed models to analyses of cell population abundance or cell-population-specific analyses of signaling markers, allowing overdispersion in cell count or aggregated signals across samples to be appropriately modeled. To support the formal statistical analyses, we encourage exploratory data analysis at every step, including quality control (e.g. multi-dimensional scaling plots), reporting of clustering results (dimensionality reduction, heatmaps with dendrograms) and differential analyses (e.g. plots of aggregated signals). PMID:28663787
Comparative Protein Structure Modeling Using MODELLER
Webb, Benjamin; Sali, Andrej
2016-01-01
Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. PMID:27322406
SnapDock—template-based docking by Geometric Hashing
Estrin, Michael; Wolfson, Haim J.
2017-01-01
Motivation: A highly efficient template-based protein–protein docking algorithm, nicknamed SnapDock, is presented. It employs a Geometric Hashing-based structural alignment scheme to align the target proteins to the interfaces of non-redundant protein–protein interface libraries. Docking of a pair of proteins utilizing the 22,600-interface PIFACE library is performed in < 2 min on average. A flexible version of the algorithm allowing hinge motion in one of the proteins is presented as well. Results: To evaluate the performance of the algorithm, a blind re-modelling of 3547 PDB complexes uploaded after the PIFACE publication was performed, with a success ratio of about 35%. Interestingly, a similar experiment with the template-free PatchDock docking algorithm yielded a success rate of about 23%, with roughly 1/3 of the solutions different from those of SnapDock. Consequently, the combination of the two methods gave a 42% success ratio. Availability and implementation: A web server of the application is under development. Contact: michaelestrin@gmail.com or wolfson@tau.ac.il PMID:28881968
Rational design of mesoporous metals and related nanomaterials by a soft-template approach.
Yamauchi, Yusuke; Kuroda, Kazuyuki
2008-04-07
We review recent developments in the preparation of mesoporous metals and related metal-based nanomaterials. Among the many types of mesoporous materials, mesoporous metals hold promise for a wide range of potential applications, such as in electronic devices, magnetic recording media, and metal catalysts, owing to their metallic frameworks. Mesoporous metals with highly ordered networks and narrow pore-size distributions have traditionally been produced by using mesoporous silica as a hard template. This method involves the formation of an original template followed by deposition of metals within the mesopores and subsequent removal of the template. Another synthetic method is the direct-template approach from lyotropic liquid crystals (LLCs) made of nonionic surfactants at high concentrations. Direct-template synthesis creates a novel avenue for the production of mesoporous metals as well as related metal-based nanomaterials. Many mesoporous metals have been prepared by the chemical or electrochemical reduction of metal salts dissolved in aqueous LLC domains. As a soft template, LLCs are more versatile and therefore more advantageous than hard templates. It is possible to produce various nanostructures (e.g., lamellar, 2D hexagonal (p6mm), and 3D cubic (Ia-3d)), nanoparticles, and nanotubes simply by controlling the composition of the reaction bath.
Day, Ryan; Qu, Xiaotao; Swanson, Rosemarie; Bohannan, Zach; Bliss, Robert
2011-01-01
Most current template-based structure prediction methods concentrate on finding the correct backbone conformation and then packing sidechains within that backbone. Our packing-based method derives distance constraints from conserved relative packing groups (RPGs). In our refinement approach, the RPGs provide a level of resolution that restrains global topology while allowing conformational sampling. In this study, we test our template-based structure prediction method using 51 prediction units from CASP7 experiments. RPG-based constraints are able to substantially improve approximately two-thirds of starting templates. Upon deeper investigation, we find that true positive spatial constraints, especially those non-local in sequence, derived from the RPGs were important to building nearer native models. Surprisingly, the fraction of incorrect or false positive constraints does not strongly influence the quality of the final candidate. This result indicates that our RPG-based true positive constraints sample the self-consistent, cooperative interactions of the native structure. The lack of such reinforcing cooperativity explains the weaker effect of false positive constraints. Generally, these findings are encouraging indications that RPGs will improve template-based structure prediction. PMID:21210729
Yu, Jian-Hong; Lo, Lun-Jou; Hsu, Pin-Hsin
2017-01-01
This study integrates cone-beam computed tomography (CBCT)/laser scan image superposition, computer-aided design (CAD), and 3D printing (3DP) to develop a technology for producing customized dental (orthodontic) miniscrew surgical templates using polymer material. Maxillary bone solid models, with the bone and teeth reconstructed from CBCT images and the outer profile of the teeth and mucosa acquired by laser scanning, were superimposed to allow visual planning of miniscrew insertion and to permit surgical template fabrication. The customized surgical template CAD model was built by offsetting the teeth/mucosa/bracket contour profiles in the superimposition model and was exported to fabricate the plastic template using the 3DP technique and polymer material. An anterior retraction and intrusion clinical test on the maxillary canines/incisors showed that two miniscrews were placed safely and did not produce inflammation or other discomfort symptoms one week after surgery. The fit between the mucosa and the template showed average gap sizes smaller than 0.5 mm, confirming that the surgical template provided good holding power and a well-fitting adaptation. This study demonstrated that integrating CBCT/laser scan image superposition with CAD and 3DP techniques can produce an accurate customized surgical template for dental orthodontic miniscrews. PMID:28280726
GPCR-SSFE 2.0 - a fragment-based molecular modeling web tool for Class A G-protein coupled receptors.
Worth, Catherine L; Kreuchwig, Franziska; Tiemann, Johanna K S; Kreuchwig, Annika; Ritschel, Michele; Kleinau, Gunnar; Hildebrand, Peter W; Krause, Gerd
2017-07-03
G-protein coupled receptors (GPCRs) are key players in signal transduction and therefore a large proportion of pharmaceutical drugs target these receptors. Structural data of GPCRs are sparse yet important for elucidating the molecular basis of GPCR-related diseases and for performing structure-based drug design. To ameliorate this problem, GPCR-SSFE 2.0 (http://www.ssfa-7tmr.de/ssfe2/), an intuitive web server dedicated to providing three-dimensional Class A GPCR homology models has been developed. The updated web server includes 27 inactive template structures and incorporates various new functionalities. Uniquely, it uses a fingerprint correlation scoring strategy for identifying the optimal templates, which we demonstrate captures structural features that sequence similarity alone is unable to do. Template selection is carried out separately for each helix, allowing both single-template models and fragment-based models to be built. Additionally, GPCR-SSFE 2.0 stores a comprehensive set of pre-calculated and downloadable homology models and also incorporates interactive loop modeling using the tool SL2, allowing knowledge-based input by the user to guide the selection process. For visual analysis, the NGL viewer is embedded into the result pages. Finally, blind-testing using two recently published structures shows that GPCR-SSFE 2.0 performs comparably or better than other state-of-the art GPCR modeling web servers. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
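The per-helix template selection described above amounts to choosing, for each of the seven transmembrane helices, the template with the best fingerprint-correlation score. A hedged Python sketch of that selection step follows; the PDB codes and scores are invented for illustration and the real server's fingerprint scoring is more involved:

```python
def pick_templates(scores):
    """scores: {helix: {template: fingerprint-correlation score}}.
    Returns the best-scoring template for each helix, so different
    helices may draw on different template structures."""
    return {helix: max(t, key=t.get) for helix, t in scores.items()}

# Hypothetical PDB codes and scores, for illustration only:
scores = {"TM1": {"5ZHP": 0.91, "2RH1": 0.84},
          "TM2": {"5ZHP": 0.62, "2RH1": 0.88}}
print(pick_templates(scores))  # {'TM1': '5ZHP', 'TM2': '2RH1'}
```

When every helix picks the same template, this degenerates to a single-template model; otherwise the result is a fragment-based model stitched from several structures.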
Wang, Jiguang; Sun, Yidan; Zheng, Si; Zhang, Xiang-Sun; Zhou, Huarong; Chen, Luonan
2013-01-01
Synergistic interactions among transcription factors (TFs) and their cofactors collectively determine gene expression in complex biological systems. In this work, we develop a novel graphical model, called Active Protein-Gene (APG) network model, to quantify regulatory signals of transcription in complex biomolecular networks through integrating both TF upstream-regulation and downstream-regulation high-throughput data. Firstly, we theoretically and computationally demonstrate the effectiveness of APG by comparing with the traditional strategy based only on TF downstream-regulation information. We then apply this model to study spontaneous type 2 diabetic Goto-Kakizaki (GK) and Wistar control rats. Our biological experiments validate the theoretical results. In particular, SP1 is found to be a hidden TF with changed regulatory activity, and the loss of SP1 activity contributes to the increased glucose production during diabetes development. APG model provides theoretical basis to quantitatively elucidate transcriptional regulation by modelling TF combinatorial interactions and exploiting multilevel high-throughput information. PMID:23346354
Jamal, Salma; Scaria, Vinod
2013-11-19
Leishmaniasis is a neglected tropical disease that affects approximately 12 million individuals worldwide and is caused by parasites of the genus Leishmania. The current drugs used to treat leishmaniasis are highly toxic, and the widespread emergence of drug-resistant strains necessitates the development of new therapeutic options. The available high-throughput screening data have made it possible to generate computational predictive models that can assess the active scaffolds in a chemical library ahead of evaluation of their ADME/toxicity properties in biological trials. In the present study, we used publicly available high-throughput screening datasets of chemical moieties that have been adjudged to target the pyruvate kinase enzyme of L. mexicana (LmPK). A machine learning approach was used to create computational models capable of predicting the biological activity of novel antileishmanial compounds. Further, we evaluated the molecules using a substructure-based approach to identify the common substructures contributing to their activity. We generated computational models based on machine learning methods and evaluated their performance using various statistical figures of merit. The random forest approach was determined to be the most sensitive and achieved the best accuracy and ROC. We further added a substructure-based analysis to identify potentially enriched substructures in the active dataset. We believe that the models developed in the present study would reduce the cost and length of clinical studies, so that new drugs would reach the market faster, providing better healthcare options to patients.
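The substructure-enrichment idea mentioned in this abstract can be sketched as a frequency ratio between active and inactive sets. The toy below treats SMILES strings naively as text; a real workflow would use proper substructure matching (e.g. RDKit), and all molecules and substructures here are invented:

```python
def enrichment(actives, inactives, substructures):
    """Frequency ratio of each substructure in actives vs. inactives,
    with add-one smoothing to avoid division by zero. Substring
    containment is a naive stand-in for chemical substructure search."""
    result = {}
    for sub in substructures:
        a = sum(sub in s for s in actives)      # occurrences in actives
        i = sum(sub in s for s in inactives)    # occurrences in inactives
        result[sub] = ((a + 1) / (len(actives) + 1)) / \
                      ((i + 1) / (len(inactives) + 1))
    return result

# Invented toy molecules, not from the screening dataset:
actives = ["c1ccccc1N", "c1ccccc1O", "CCN"]
inactives = ["CCC", "CCO", "CCCl"]
print(enrichment(actives, inactives, ["c1ccccc1", "CC"]))
# benzene-ring substring is enriched in actives (3.0); "CC" is depleted (0.5)
```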
Connecting Earth observation to high-throughput biodiversity data.
Bush, Alex; Sollmann, Rahel; Wilting, Andreas; Bohmann, Kristine; Cole, Beth; Balzter, Heiko; Martius, Christopher; Zlinszky, András; Calvignac-Spencer, Sébastien; Cobbold, Christina A; Dawson, Terence P; Emerson, Brent C; Ferrier, Simon; Gilbert, M Thomas P; Herold, Martin; Jones, Laurence; Leendertz, Fabian H; Matthews, Louise; Millington, James D A; Olson, John R; Ovaskainen, Otso; Raffaelli, Dave; Reeve, Richard; Rödel, Mark-Oliver; Rodgers, Torrey W; Snape, Stewart; Visseren-Hamakers, Ingrid; Vogler, Alfried P; White, Piran C L; Wooster, Martin J; Yu, Douglas W
2017-06-22
Understandably, given the fast pace of biodiversity loss, there is much interest in using Earth observation technology to track biodiversity, ecosystem functions and ecosystem services. However, because most biodiversity is invisible to Earth observation, indicators based on Earth observation could be misleading and reduce the effectiveness of nature conservation and even unintentionally decrease conservation effort. We describe an approach that combines automated recording devices, high-throughput DNA sequencing and modern ecological modelling to extract much more of the information available in Earth observation data. This approach is achievable now, offering efficient and near-real-time monitoring of management impacts on biodiversity and its functions and services.
Brooke E. Penaluna; Steve F. Railsback; Jason B. Dunham; Sherri Johnson; Robert E. Bilby; Arne E. Skaugset; Michael Bradford
2015-01-01
The importance of multiple processes and instream factors to aquatic biota has been explored extensively, but questions remain about how local spatiotemporal variability of aquatic biota is tied to environmental regimes and the geophysical template of streams. We used an individual-based trout model to explore the relative role of the geophysical template versus...
Huang, Ke-Jung; Huang, Chun-Kai; Lin, Pei-Chun
2014-10-07
We report on the development of a robot's dynamic locomotion based on a template which fits the robot's natural dynamics. The developed template is a low degree-of-freedom planar model for running with rolling contact, which we call rolling spring loaded inverted pendulum (R-SLIP). Originating from a reduced-order model of the RHex-style robot with compliant circular legs, the R-SLIP model also acts as the template for general dynamic running. The model has a torsional spring and a large circular arc as the distributed foot, so during locomotion it rolls on the ground with varied equivalent linear stiffness. This differs from the well-known spring loaded inverted pendulum (SLIP) model with fixed stiffness and ground contact points. Through dimensionless steps-to-fall and return map analysis, within a wide range of parameter spaces, the R-SLIP model is revealed to have self-stable gaits and a larger stability region than that of the SLIP model. The R-SLIP model is then embedded as the reduced-order 'template' in a more complex 'anchor', the RHex-style robot, via various mapping definitions between the template and the anchor. Experimental validation confirms that by merely deploying the stable running gaits of the R-SLIP model on the empirical robot with a simple open-loop control strategy, the robot can easily initiate its dynamic running behaviors with a flight phase and can move with similar body state profiles to those of the model at all five testing speeds. The robot, embedded with the SLIP model but performing walking locomotion, further confirms the importance of finding an adequate template of the robot for dynamic locomotion.
Computational Modeling of Human Metabolism and Its Application to Systems Biomedicine.
Aurich, Maike K; Thiele, Ines
2016-01-01
Modern high-throughput techniques offer immense opportunities to investigate whole-systems behavior, such as those underlying human diseases. However, the complexity of the data presents challenges in interpretation, and new avenues are needed to address the complexity of both diseases and data. Constraint-based modeling is one formalism applied in systems biology. It relies on a genome-scale reconstruction that captures extensive biochemical knowledge regarding an organism. The human genome-scale metabolic reconstruction is increasingly used to understand normal cellular and disease states because metabolism is an important factor in many human diseases. The application of human genome-scale reconstruction ranges from mere querying of the model as a knowledge base to studies that take advantage of the model's topology and, most notably, to functional predictions based on cell- and condition-specific metabolic models built based on omics data. An increasing number and diversity of biomedical questions are being addressed using constraint-based modeling and metabolic models. One of the most successful biomedical applications to date is cancer metabolism, but constraint-based modeling also holds great potential for inborn errors of metabolism or obesity. In addition, it offers great prospects for individualized approaches to diagnostics and the design of disease prevention and intervention strategies. Metabolic models support this endeavor by providing easy access to complex high-throughput datasets. Personalized metabolic models have been introduced. Finally, constraint-based modeling can be used to model whole-body metabolism, which will enable the elucidation of metabolic interactions between organs and disturbances of these interactions as either causes or consequence of metabolic diseases. This chapter introduces constraint-based modeling and describes some of its contributions to systems biomedicine.
Worldwide initiatives to screen for toxicity potential among the thousands of chemicals currently in use require inexpensive and high-throughput in vitro models to meet their goals. The devTOX quickPredict platform is an in vitro human pluripotent stem cell-based assay used to as...
2013-01-01
Background As for other major crops, achieving a complete wheat genome sequence is essential for the application of genomics to breeding new and improved varieties. To overcome the complexities of the large, highly repetitive and hexaploid wheat genome, the International Wheat Genome Sequencing Consortium established a chromosome-based strategy that was validated by the construction of the physical map of chromosome 3B. Here, we present improved strategies for the construction of highly integrated and ordered wheat physical maps, using chromosome 1BL as a template, and illustrate their potential for evolutionary studies and map-based cloning. Results Using a combination of novel high throughput marker assays and an assembly program, we developed a high quality physical map representing 93% of wheat chromosome 1BL, anchored and ordered with 5,489 markers including 1,161 genes. Analysis of the gene space organization and evolution revealed that gene distribution and conservation along the chromosome results from the superimposition of the ancestral grass and recent wheat evolutionary patterns, leading to a peak of synteny in the central part of the chromosome arm and an increased density of non-collinear genes towards the telomere. With a density of about 11 markers per Mb, the 1BL physical map provides 916 markers, including 193 genes, for fine mapping the 40 QTLs mapped on this chromosome. Conclusions Here, we demonstrate that high marker density physical maps can be developed in complex genomes such as wheat to accelerate map-based cloning, gain new insights into genome evolution, and provide a foundation for reference sequencing. PMID:23800011
2013-01-01
Background A major hindrance to the development of high yielding biofuel feedstocks is the ability to rapidly assess large populations for fermentable sugar yields. Whilst recent advances have outlined methods for the rapid assessment of biomass saccharification efficiency, none take into account the total biomass, or the soluble sugar fraction of the plant. Here we present a holistic high-throughput methodology for assessing sweet Sorghum bicolor feedstocks at 10 days post-anthesis for total fermentable sugar yields including stalk biomass, soluble sugar concentrations, and cell wall saccharification efficiency. Results A mathematical method for assessing whole S. bicolor stalks using the fourth internode from the base of the plant proved to be an effective high-throughput strategy for assessing stalk biomass, soluble sugar concentrations, and cell wall composition and allowed calculation of total stalk fermentable sugars. A high-throughput method for measuring soluble sucrose, glucose, and fructose using partial least squares (PLS) modelling of juice Fourier transform infrared (FTIR) spectra was developed. The PLS prediction was shown to be highly accurate with each sugar attaining a coefficient of determination (R²) of 0.99 with a root mean squared error of prediction (RMSEP) of 11.93, 5.52, and 3.23 mM for sucrose, glucose, and fructose, respectively, which constitutes an error of <4% in each case. The sugar PLS model correlated well with gas chromatography–mass spectrometry (GC-MS) and brix measures. Similarly, a high-throughput method for predicting enzymatic cell wall digestibility using PLS modelling of FTIR spectra obtained from S. bicolor bagasse was developed. The PLS prediction was shown to be accurate with an R² of 0.94 and RMSEP of 0.64 μg mgDW⁻¹ h⁻¹.
Conclusions This methodology has been demonstrated as an efficient and effective way to screen large biofuel feedstock populations for biomass, soluble sugar concentrations, and cell wall digestibility simultaneously allowing a total fermentable yield calculation. It unifies and simplifies previous screening methodologies to produce a holistic assessment of biofuel feedstock potential. PMID:24365407
Ultrasmooth Patterned Metals for Plasmonics and Metamaterials
NASA Astrophysics Data System (ADS)
Nagpal, Prashant; Lindquist, Nathan C.; Oh, Sang-Hyun; Norris, David J.
2009-07-01
Surface plasmons are electromagnetic waves that can exist at metal interfaces because of coupling between light and free electrons. Restricted to travel along the interface, these waves can be channeled, concentrated, or otherwise manipulated by surface patterning. However, because surface roughness and other inhomogeneities have so far limited surface-plasmon propagation in real plasmonic devices, simple high-throughput methods are needed to fabricate high-quality patterned metals. We combined template stripping with precisely patterned silicon substrates to obtain ultrasmooth pure metal films with grooves, bumps, pyramids, ridges, and holes. Measured surface-plasmon-propagation lengths on the resulting surfaces approach theoretical values for perfectly flat films. With the use of our method, we demonstrated structures that exhibit Raman scattering enhancements above 10⁷ for sensing applications and multilayer films for optical metamaterials.
The Stochastic Human Exposure and Dose Simulation Model – High-Throughput (SHEDS-HT) is a U.S. Environmental Protection Agency research tool for predicting screening-level (low-tier) exposures to chemicals in consumer products. This course will present an overview of this m...
The past five years have witnessed a rapid shift in the exposure science and toxicology communities towards high-throughput (HT) analyses of chemicals as potential stressors of human and ecological health. Modeling efforts have largely led the charge in the exposure science field...
Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of bioche...
PLASMA PROTEIN PROFILING AS A HIGH THROUGHPUT TOOL FOR CHEMICAL SCREENING USING A SMALL FISH MODEL
Hudson, R. Tod, Michael J. Hemmer, Kimberly A. Salinas, Sherry S. Wilkinson, James Watts, James T. Winstead, Peggy S. Harris, Amy Kirkpatrick and Calvin C. Walker. In press. Plasma Protein Profiling as a High Throughput Tool for Chemical Screening Using a Small Fish Model (Abstra...
USDA-ARS's Scientific Manuscript database
High-throughput phenotyping (HTP) platforms can be used to measure traits that are genetically correlated with wheat (Triticum aestivum L.) grain yield across time. Incorporating such secondary traits in the multivariate pedigree and genomic prediction models would be desirable to improve indirect s...
A Method for Preparing DNA Sequencing Templates Using a DNA-Binding Microplate
Yang, Yu; Hebron, Haroun R.; Hang, Jun
2009-01-01
A DNA-binding matrix was immobilized on the surface of a 96-well microplate and used for plasmid DNA preparation for DNA sequencing. The same DNA-binding plate was used for bacterial growth, cell lysis, DNA purification, and storage. In a single step using one buffer, bacterial cells were lysed by enzymes, and released DNA was captured on the plate simultaneously. After two wash steps, DNA was eluted and stored in the same plate. Inclusion of phosphates in the culture medium was found to enhance the yield of plasmid significantly. Purified DNA samples were used successfully in DNA sequencing with high consistency and reproducibility. Eleven vectors and nine libraries were tested using this method. In 10 μl sequencing reactions using 3 μl sample and 0.25 μl BigDye Terminator v3.1, the results from a 3730xl sequencer gave a success rate of 90–95% and read-lengths of 700 bases or more. The method is fully automatable and convenient for manual operation as well. It enables reproducible, high-throughput, rapid production of DNA with purity and yields sufficient for high-quality DNA sequencing at a substantially reduced cost. PMID:19568455
Evolutionary Patterns and Processes: Lessons from Ancient DNA.
Leonardi, Michela; Librado, Pablo; Der Sarkissian, Clio; Schubert, Mikkel; Alfarhan, Ahmed H; Alquraishi, Saleh A; Al-Rasheid, Khaled A S; Gamba, Cristina; Willerslev, Eske; Orlando, Ludovic
2017-01-01
Ever since its emergence in 1984, the field of ancient DNA has struggled to overcome the challenges related to the decay of DNA molecules in the fossil record. With the recent development of high-throughput DNA sequencing technologies and molecular techniques tailored to ultra-damaged templates, it has now come of age, merging together approaches in phylogenomics, population genomics, epigenomics, and metagenomics. Leveraging on complete temporal sample series, ancient DNA provides direct access to the most important dimension in evolution—time, allowing a wealth of fundamental evolutionary processes to be addressed at unprecedented resolution. This review taps into the most recent findings in ancient DNA research to present analyses of ancient genomic and metagenomic data. PMID:28173586
Performing label-fusion-based segmentation using multiple automatically generated templates.
Chakravarty, M Mallar; Steadman, Patrick; van Eede, Matthijs C; Calcott, Rebecca D; Gu, Victoria; Shaw, Philip; Raznahan, Armin; Collins, D Louis; Lerch, Jason P
2013-10-01
Classically, model-based segmentation procedures match magnetic resonance imaging (MRI) volumes to an expertly labeled atlas using nonlinear registration. The accuracy of these techniques is limited by atlas biases, misregistration, and resampling error. Multi-atlas-based approaches are used as a remedy and involve matching each subject to a number of manually labeled templates. This approach yields numerous independent segmentations that are fused using a voxel-by-voxel label-voting procedure. In this article, we demonstrate how the multi-atlas approach can be extended to work with input atlases that are unique and extremely time consuming to construct by generating a library of multiple automatically generated templates of different brains (MAGeT Brain). We demonstrate the efficacy of our method for the mouse and human using two different nonlinear registration algorithms (ANIMAL and ANTs). The input atlases consist of a high-resolution mouse brain atlas and an atlas of the human basal ganglia and thalamus derived from serial histological data. MAGeT Brain segmentation improves the identification of the mouse anterior commissure (mean Dice Kappa value κ = 0.801), but may be encountering a ceiling effect for hippocampal segmentations. Applying MAGeT Brain to human subcortical structures improves segmentation accuracy for all structures compared to regular model-based techniques (κ = 0.845, 0.752, and 0.861 for the striatum, globus pallidus, and thalamus, respectively). Experiments performed with three manually derived input templates suggest that MAGeT Brain can approach or exceed the accuracy of multi-atlas label-fusion segmentation (κ = 0.894, 0.815, and 0.895 for the striatum, globus pallidus, and thalamus, respectively). Copyright © 2012 Wiley Periodicals, Inc.
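The voxel-by-voxel label-voting fusion step described in this abstract can be illustrated with a minimal sketch. This is a generic majority-vote implementation, not the MAGeT Brain code; the function name and toy label maps are illustrative assumptions.

```python
import numpy as np

def majority_vote_fusion(label_maps):
    """Fuse candidate segmentations by voxel-wise majority voting.

    label_maps: list of integer label volumes of equal shape, one per
    (automatically generated) template. Returns the modal label per voxel.
    """
    stack = np.stack(label_maps)          # shape: (n_templates, *volume_shape)
    n_labels = int(stack.max()) + 1
    # Count votes for each label at every voxel, then take the winning label.
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Toy example: three 2x2 "segmentations" of a two-label structure.
maps = [np.array([[0, 1], [1, 1]]),
        np.array([[0, 1], [0, 1]]),
        np.array([[0, 0], [1, 1]])]
fused = majority_vote_fusion(maps)
print(fused)  # → [[0 1] [1 1]]
```

In practice, more sophisticated fusion schemes weight each template by its local registration similarity rather than voting uniformly.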
An automated method for modeling proteins on known templates using distance geometry.
Srinivasan, S; March, C J; Sudarsanam, S
1993-02-01
We present an automated method incorporated into a software package, FOLDER, to fold a protein sequence on a given three-dimensional (3D) template. Starting with the sequence alignment of a family of homologous proteins, tertiary structures are modeled using the known 3D structure of one member of the family as a template. Homologous interatomic distances from the template are used as constraints. For nonhomologous regions in the model protein, the lower and the upper bounds for the interatomic distances are imposed by steric constraints and the globular dimensions of the template, respectively. Distance geometry is used to embed an ensemble of structures consistent with these distance bounds. Structures are selected from this ensemble based on minimal distance error criteria, after a penalty function optimization step. These structures are then refined using energy optimization methods. The method is tested by simulating the alpha-chain of horse hemoglobin using the alpha-chain of human hemoglobin as the template and by comparing the generated models with the crystal structure of the alpha-chain of horse hemoglobin. We also test the packing efficiency of this method by reconstructing the atomic positions of the interior side chains beyond C beta atoms of a protein domain from a known 3D structure. In both test cases, models retain the template constraints and any additionally imposed constraints while the packing of the interior residues is optimized with no short contacts or bond deformations. To demonstrate the use of this method in simulating structures of proteins with nonhomologous disulfides, we construct a model of murine interleukin (IL)-4 using the NMR structure of human IL-4 as the template. The resulting geometry of the nonhomologous disulfide in the model structure for murine IL-4 is consistent with standard disulfide geometry.
Streamlined approaches that use in vitro experimental data to predict chemical toxicokinetics (TK) are increasingly being used to perform risk-based prioritization based upon dosimetric adjustment of high-throughput screening (HTS) data across thousands of chemicals. However, ass...
Xu, Dong; Zhang, Jian; Roy, Ambrish; Zhang, Yang
2011-01-01
I-TASSER is an automated pipeline for protein tertiary structure prediction using multiple threading alignments and iterative structure assembly simulations. In CASP9 experiments, two new algorithms, QUARK and FG-MD, were added to the I-TASSER pipeline for improving the structural modeling accuracy. QUARK is a de novo structure prediction algorithm used for structure modeling of proteins that lack detectable template structures. For distantly homologous targets, QUARK models are found useful as a reference structure for selecting good threading alignments and guiding the I-TASSER structure assembly simulations. FG-MD is an atomic-level structural refinement program that uses structural fragments collected from the PDB structures to guide molecular dynamics simulation and improve the local structure of the predicted models, including hydrogen-bonding networks, torsion angles and steric clashes. Despite considerable progress in both template-based and template-free structure modeling, significant improvements on protein target classification, domain parsing, model selection, and ab initio folding of beta-proteins are still needed to further improve the I-TASSER pipeline. PMID:22069036
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernán-Caballero, Antonio; Alonso-Herrero, Almudena; Hatziminaoglou, Evanthia
2015-04-20
We present results on the spectral decomposition of 118 Spitzer Infrared Spectrograph (IRS) spectra from local active galactic nuclei (AGNs) using a large set of Spitzer/IRS spectra as templates. The templates are themselves IRS spectra from extreme cases where a single physical component (stellar, interstellar, or AGN) completely dominates the integrated mid-infrared emission. We show that a linear combination of one template for each physical component reproduces the observed IRS spectra of AGN hosts with unprecedented fidelity for a template fitting method with no need to model extinction separately. We use full probability distribution functions to estimate expectation values and uncertainties for observables, and find that the decomposition results are robust against degeneracies. Furthermore, we compare the AGN spectra derived from the spectral decomposition with sub-arcsecond resolution nuclear photometry and spectroscopy from ground-based observations. We find that the AGN component derived from the decomposition closely matches the nuclear spectrum with a 1σ dispersion of 0.12 dex in luminosity and typical uncertainties of ∼0.19 in the spectral index and ∼0.1 in the silicate strength. We conclude that the emission from the host galaxy can be reliably removed from the IRS spectra of AGNs. This allows for unbiased studies of the AGN emission in intermediate- and high-redshift galaxies—currently inaccessible to ground-based observations—with archival Spitzer/IRS data and in the future with the Mid-InfraRed Instrument of the James Webb Space Telescope. The decomposition code and templates are available at http://denebola.org/ahc/deblendIRS.
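The core idea of fitting an observed spectrum as a linear combination of one template per physical component can be sketched with non-negative least squares. The templates below are synthetic placeholders standing in for real IRS spectra; the wavelength grid and functional forms are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.optimize import nnls

# Build three mock component templates on a mid-infrared wavelength grid.
wave = np.linspace(5.0, 35.0, 200)                    # microns (placeholder grid)
stellar = np.exp(-wave / 10.0)                        # falling stellar continuum
pah     = np.exp(-0.5 * ((wave - 11.3) / 0.5) ** 2)   # interstellar PAH-like feature
agn     = (wave / 10.0) ** 1.5                        # rising AGN power law

templates = np.column_stack([stellar, pah, agn])      # (n_wavelengths, n_components)
true_weights = np.array([0.3, 0.5, 0.2])
observed = templates @ true_weights                   # noiseless mock spectrum

# Non-negative least squares yields one physical weight per component.
weights, resid = nnls(templates, observed)
print(np.round(weights, 3))  # → [0.3 0.5 0.2]
```

With noisy data the fit would be repeated over many template triples to build the probability distributions the abstract describes.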
Model-Based Design of Long-Distance Tracer Transport Experiments in Plants.
Bühler, Jonas; von Lieres, Eric; Huber, Gregor J
2018-01-01
Studies of long-distance transport of tracer isotopes in plants offer a high potential for functional phenotyping, but so far measurement time is a bottleneck because continuous time series of at least 1 h are required to obtain reliable estimates of transport properties. Hence, usual throughput values are between 0.5 and 1 samples h^-1. Here, we propose to increase sample throughput by introducing temporal gaps in the data acquisition of each plant sample and measuring multiple plants one after another in a rotating scheme. In contrast to common time series analysis methods, mechanistic tracer transport models allow the analysis of interrupted time series. The uncertainties of the model parameter estimates are used as a measure of how much information was lost compared to complete time series. A case study was set up to systematically investigate different experimental schedules for different throughput scenarios ranging from 1 to 12 samples h^-1. Selected designs with only a small amount of data points were found to be sufficient for an adequate parameter estimation, implying that the presented approach enables a substantial increase of sample throughput. The presented general framework for automated generation and evaluation of experimental schedules allows the determination of a maximal sample throughput and the respective optimal measurement schedule depending on the required statistical reliability of data acquired by future experiments.
Mis-modelling in Gravitational Wave Astronomy: The Trouble with Templates
NASA Astrophysics Data System (ADS)
Sampson, Laura; Cornish, Neil; Yunes, Nicolas
2014-03-01
Waveform templates are a powerful tool for extracting and characterizing gravitational wave signals. There are, however, attendant dangers in using these highly restrictive signal priors. If strong field gravity is not accurately described by General Relativity (GR), then using GR templates may result in fundamental bias in the recovered parameters, or worse - a complete failure to detect signals. Here we study such dangers, concentrating on three distinct possibilities. First, we show that there exist modified theories compatible with all existing tests that would fail to be detected by the LIGO/Virgo network using searches based on GR templates, but which would be detected using a one parameter post-Einsteinian extension. Second, we study modified theories that produce departures from GR that do not naively fit into the simplest parameterized post-Einsteinian (ppE) scheme. We show that even the simplest ppE templates are still capable of picking up these strange signals and diagnosing a departure from GR. Third, we study how using inspiral-only ppE waveforms for signals that include merger and ringdown can lead to problems in misidentifying a GR departure. We present an easy technique that allows us to self-consistently identify the inspiral portion of the signal.
Application of image flow cytometry for the characterization of red blood cell morphology
NASA Astrophysics Data System (ADS)
Pinto, Ruben N.; Sebastian, Joseph A.; Parsons, Michael; Chang, Tim C.; Acker, Jason P.; Kolios, Michael C.
2017-02-01
Red blood cells (RBCs) stored in hypothermic environments for the purpose of transfusion have been documented to undergo structural and functional changes over time. One sign of the so-called RBC storage lesion is irreversible damage to the cell membrane. Consequently, RBCs undergo a morphological transformation from regular, deformable biconcave discocytes to rigid spheroechinocytes. The spherically shaped RBCs lack the deformability to efficiently enter microvasculature, thereby reducing the capacity of RBCs to oxygenate tissue. Blood banks currently rely on microscope techniques that include fixing, staining and cell counting in order to morphologically characterize RBC samples; these methods are labor intensive and highly subjective. This study presents a novel, high-throughput RBC morphology characterization technique using image flow cytometry (IFC). An image segmentation template was developed to process 100,000 images acquired from the IFC system and output the relative spheroechinocyte percentage. The technique was applied on samples extracted from two blood bags to monitor the morphological changes of the RBCs during in vitro hypothermic storage. The study found that, for a given sample of RBCs, the IFC method was twice as fast in data acquisition, and analyzed 250-350 times more RBCs than the conventional method. Over the lifespan of the blood bags, the mean spheroechinocyte population increased by 37%. Future work will focus on expanding the template to segregate RBC images into more subpopulations for the validation of the IFC method against conventional techniques; the expanded template will aid in establishing quantitative links between spheroechinocyte increase and other RBC storage lesion characteristics.
NASA Astrophysics Data System (ADS)
Storm, Emma; Weniger, Christoph; Calore, Francesca
2017-08-01
We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |l| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as basis for future studies of diffuse emission in and outside the Galactic disk.
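The penalized Poisson likelihood regression at the heart of this approach can be sketched in miniature: pixel counts are modeled as a fixed template times per-pixel nuisance modulations, with a penalty pulling the modulations toward unity. This is a heavily simplified stand-in, not the SkyFACT code; the quadratic penalty substitutes for its maximum-entropy regularizers, and the template, modulation pattern, and `lam` value are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
template = np.full(50, 20.0)                    # imperfect model prediction per pixel
true_mod = 1.0 + 0.2 * np.sin(np.arange(50))    # the real sky deviates from it
counts = rng.poisson(template * true_mod)       # observed photon counts

lam = 10.0  # regularization strength (illustrative)

def objective(log_mod):
    mod = np.exp(log_mod)                       # keep modulations positive
    mu = template * mod                         # expected counts
    nll = np.sum(mu - counts * np.log(mu))      # Poisson negative log-likelihood
    penalty = lam * np.sum((mod - 1.0) ** 2)    # pull nuisance params toward 1
    return nll + penalty

# L-BFGS-B handles the many nuisance parameters efficiently, as in the paper.
res = minimize(objective, np.zeros(50), method="L-BFGS-B")
mod_hat = np.exp(res.x)
print(res.success, float(np.abs(mod_hat - true_mod).mean()))
```

The real problem has ≳10^5 such parameters across spatial and spectral bins, which is why an efficient quasi-Newton solver and sparse covariance handling matter.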
Hybridizing Gravitational Waveforms of Inspiralling Binary Neutron Star Systems
NASA Astrophysics Data System (ADS)
Cullen, Torrey; LIGO Collaboration
2016-03-01
Gravitational waves are ripples in space and time that Albert Einstein predicted would be produced by astrophysical systems such as binary neutron stars. These are key targets for the Laser Interferometer Gravitational-Wave Observatory (LIGO), which uses template waveforms to find weak signals. The simplified template models are known to break down at high frequency, so I wrote code that constructs hybrid waveforms from numerical simulations to accurately cover a large range of frequencies. These hybrid waveforms use Post Newtonian template models at low frequencies and numerical data from simulations at high frequencies. They are constructed by reading in existing Post Newtonian models with the same masses as simulated stars, reading in the numerical data from simulations, and finding the ideal frequency and alignment to ``stitch'' these waveforms together.
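The "stitching" step described above can be sketched as a windowed blend between two already-aligned segments. Both signals here are synthetic stand-ins (the analytic model and "numerical" data are the same toy sinusoid), and the linear blending window is an illustrative assumption; real hybridization also optimizes the time and phase alignment first.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
analytic = np.sin(2 * np.pi * 10 * t)        # post-Newtonian-like early-time model
numerical = np.sin(2 * np.pi * 10 * t)       # pretend NR data, already aligned

t0, t1 = 0.4, 0.6                            # hybridization (overlap) window
w = np.clip((t - t0) / (t1 - t0), 0.0, 1.0)  # blend weight ramps 0 -> 1
hybrid = (1.0 - w) * analytic + w * numerical

# Outside the window the hybrid matches each input exactly.
print(np.allclose(hybrid[t < t0], analytic[t < t0]),
      np.allclose(hybrid[t > t1], numerical[t > t1]))  # → True True
```

The quality of a real hybrid is judged by how insensitive the result is to the choice of window location within the region where both descriptions are valid.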
To address this need, new tools have been created for characterizing, simulating, and evaluating chemical biokinetics. Physiologically-based pharmacokinetic (PBPK) models provide estimates of chemical exposures that produce potentially hazardous tissue concentrations, while tissu...
Collaborative Core Research Program for Chemical-Biological Warfare Defense
2015-01-04
Discovery through High Throughput Screening (HTS) and Fragment-Based Drug Design (FBDD). Current pharmaceutical approaches involving drug discovery...structural analysis and docking program generally known as fragment based drug design (FBDD). The main advantage of using these approaches is that
Mis, Emily K; Liem, Karel F; Kong, Yong; Schwartz, Nancy B; Domowicz, Miriam; Weatherbee, Scott D
2014-01-01
The long bones of the vertebrate body are built by the initial formation of a cartilage template that is later replaced by mineralized bone. The proliferation and maturation of the skeletal precursor cells (chondrocytes) within the cartilage template and their replacement by bone is a highly coordinated process which, if misregulated, can lead to a number of defects including dwarfism and other skeletal deformities. This is exemplified by the fact that abnormal bone development is one of the most common types of human birth defects. Yet, many of the factors that initiate and regulate chondrocyte maturation are not known. We identified a recessive dwarf mouse mutant (pug) from an N-ethyl-N-nitrosourea (ENU) mutagenesis screen. pug mutant skeletal elements are patterned normally during development, but display a ~20% length reduction compared to wild-type embryos. We show that the pug mutation does not lead to changes in chondrocyte proliferation but instead promotes premature maturation and early ossification, which ultimately leads to disproportionate dwarfism. Using sequence capture and high-throughput sequencing, we identified a missense mutation in the Xylosyltransferase 1 (Xylt1) gene in pug mutants. Xylosyltransferases catalyze the initial step in glycosaminoglycan (GAG) chain addition to proteoglycan core proteins, and these modifications are essential for normal proteoglycan function. We show that the pug mutation disrupts Xylt1 activity and subcellular localization, leading to a reduction in GAG chains in pug mutants. The pug mutant serves as a novel model for mammalian dwarfism and identifies a key role for proteoglycan modification in the initiation of chondrocyte maturation. © 2013 Published by Elsevier Inc.
Drug Discovery in Fish, Flies, and Worms
Strange, Kevin
2016-01-01
Abstract Nonmammalian model organisms such as the nematode Caenorhabditis elegans, the fruit fly Drosophila melanogaster, and the zebrafish Danio rerio provide numerous experimental advantages for drug discovery including genetic and molecular tractability, amenability to high-throughput screening methods and reduced experimental costs and increased experimental throughput compared to traditional mammalian models. An interdisciplinary approach that strategically combines the study of nonmammalian and mammalian animal models with diverse experimental tools has and will continue to provide deep molecular and genetic understanding of human disease and will significantly enhance the discovery and application of new therapies to treat those diseases. This review will provide an overview of C. elegans, Drosophila, and zebrafish biology and husbandry and will discuss how these models are being used for phenotype-based drug screening and for identification of drug targets and mechanisms of action. The review will also describe how these and other nonmammalian model organisms are uniquely suited for the discovery of drug-based regenerative medicine therapies. PMID:28053067
Near-common-path interferometer for imaging Fourier-transform spectroscopy in wide-field microscopy
Wadduwage, Dushan N.; Singh, Vijay Raj; Choi, Heejin; Yaqoob, Zahid; Heemskerk, Hans; Matsudaira, Paul; So, Peter T. C.
2017-01-01
Imaging Fourier-transform spectroscopy (IFTS) is a powerful method for biological hyperspectral analysis based on various imaging modalities, such as fluorescence or Raman. Since the measurements are taken in the Fourier space of the spectrum, it can also take advantage of compressed sensing strategies. IFTS has been readily implemented in high-throughput, high-content microscope systems based on wide-field imaging modalities. However, there are limitations in existing wide-field IFTS designs. Non-common-path approaches are less phase-stable. Alternatively, designs based on the common-path Sagnac interferometer are stable, but incompatible with high-throughput imaging. They require exhaustive sequential scanning over large interferometric path delays, making compressive strategic data acquisition impossible. In this paper, we present a novel phase-stable, near-common-path interferometer enabling high-throughput hyperspectral imaging based on strategic data acquisition. Our results suggest that this approach can improve throughput over those of many other wide-field spectral techniques by more than an order of magnitude without compromising phase stability. PMID:29392168
High-throughput screening of filamentous fungi using nanoliter-range droplet-based microfluidics
NASA Astrophysics Data System (ADS)
Beneyton, Thomas; Wijaya, I. Putu Mahendra; Postros, Prexilia; Najah, Majdi; Leblond, Pascal; Couvent, Angélique; Mayot, Estelle; Griffiths, Andrew D.; Drevelle, Antoine
2016-06-01
Filamentous fungi are an extremely important source of industrial enzymes because of their capacity to secrete large quantities of proteins. Currently, functional screening of fungi is associated with low throughput and high costs, which severely limits the discovery of novel enzymatic activities and better production strains. Here, we describe a nanoliter-range droplet-based microfluidic system specially adapted for the high-throughput screening (HTS) of large filamentous fungi libraries for secreted enzyme activities. The platform allowed (i) compartmentalization of single spores in ~10 nl droplets, (ii) germination and mycelium growth and (iii) high-throughput sorting of fungi based on enzymatic activity. A 10^4-clone UV-mutated library of Aspergillus niger was screened based on α-amylase activity in just 90 minutes. Active clones were enriched 196-fold after a single round of microfluidic HTS. The platform is a powerful tool for the development of new production strains with low cost, space and time footprint and should bring enormous benefit for improving the viability of biotechnological processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Martin L.; Choi, C. L.; Hattrick-Simpers, J. R.
2017-03-28
The Materials Genome Initiative, a national effort to introduce new materials into the market faster and at lower cost, has made significant progress in computational simulation and modeling of materials. To build on this progress, a large amount of experimental data for validating these models, and informing more sophisticated ones, will be required. High-throughput experimentation generates large volumes of experimental data using combinatorial materials synthesis and rapid measurement techniques, making it an ideal experimental complement to bring the Materials Genome Initiative vision to fruition. This paper reviews the state-of-the-art results, opportunities, and challenges in high-throughput experimentation for materials design. As a result, a major conclusion is that an effort to deploy a federated network of high-throughput experimental (synthesis and characterization) tools, which are integrated with a modern materials data infrastructure, is needed.
Niche-based screening identifies small-molecule inhibitors of leukemia stem cells.
Hartwell, Kimberly A; Miller, Peter G; Mukherjee, Siddhartha; Kahn, Alissa R; Stewart, Alison L; Logan, David J; Negri, Joseph M; Duvet, Mildred; Järås, Marcus; Puram, Rishi; Dancik, Vlado; Al-Shahrour, Fatima; Kindler, Thomas; Tothova, Zuzana; Chattopadhyay, Shrikanta; Hasaka, Thomas; Narayan, Rajiv; Dai, Mingji; Huang, Christina; Shterental, Sebastian; Chu, Lisa P; Haydu, J Erika; Shieh, Jae Hung; Steensma, David P; Munoz, Benito; Bittker, Joshua A; Shamji, Alykhan F; Clemons, Paul A; Tolliday, Nicola J; Carpenter, Anne E; Gilliland, D Gary; Stern, Andrew M; Moore, Malcolm A S; Scadden, David T; Schreiber, Stuart L; Ebert, Benjamin L; Golub, Todd R
2013-12-01
Efforts to develop more effective therapies for acute leukemia may benefit from high-throughput screening systems that reflect the complex physiology of the disease, including leukemia stem cells (LSCs) and supportive interactions with the bone marrow microenvironment. The therapeutic targeting of LSCs is challenging because LSCs are highly similar to normal hematopoietic stem and progenitor cells (HSPCs) and are protected by stromal cells in vivo. We screened 14,718 compounds in a leukemia-stroma co-culture system for inhibition of cobblestone formation, a cellular behavior associated with stem-cell function. Among those compounds that inhibited malignant cells but spared HSPCs was the cholesterol-lowering drug lovastatin. Lovastatin showed anti-LSC activity in vitro and in an in vivo bone marrow transplantation model. Mechanistic studies demonstrated that the effect was on target, via inhibition of HMG-CoA reductase. These results illustrate the power of merging physiologically relevant models with high-throughput screening.
Automated MRI Cerebellar Size Measurements Using Active Appearance Modeling
Price, Mathew; Cardenas, Valerie A.; Fein, George
2014-01-01
Although the human cerebellum has been increasingly identified as an important hub that shows potential for helping in the diagnosis of a large spectrum of disorders, such as alcoholism, autism, and fetal alcohol spectrum disorder, the high costs associated with manual segmentation, and low availability of reliable automated cerebellar segmentation tools, has resulted in a limited focus on cerebellar measurement in human neuroimaging studies. We present here the CATK (Cerebellar Analysis Toolkit), which is based on the Bayesian framework implemented in FMRIB’s FIRST. This approach involves training Active Appearance Models (AAM) using hand-delineated examples. CATK can currently delineate the cerebellar hemispheres and three vermal groups (lobules I–V, VI–VII, and VIII–X). Linear registration with the low-resolution MNI152 template is used to provide initial alignment, and Point Distribution Models (PDM) are parameterized using stellar sampling. The Bayesian approach models the relationship between shape and texture through computation of conditionals in the training set. Our method varies from the FIRST framework in that initial fitting is driven by 1D intensity profile matching, and the conditional likelihood function is subsequently used to refine fitting. The method was developed using T1-weighted images from 63 subjects that were imaged and manually labeled: 43 subjects were scanned once and were used for training models, and 20 subjects were imaged twice (with manual labeling applied to both runs) and used to assess reliability and validity. Intraclass correlation analysis shows that CATK is highly reliable (average test-retest ICCs of 0.96), and offers excellent agreement with the gold standard (average validity ICC of 0.87 against manual labels). 
Comparisons against an alternative atlas-based approach, SUIT (Spatially Unbiased Infratentorial Template), that registers images with a high-resolution template of the cerebellum, show that our AAM approach offers superior reliability and validity. Extensions of CATK to cerebellar hemisphere parcels is envisioned. PMID:25192657
Toxico-Cheminformatics: New and Expanding Public ...
High-throughput screening (HTS) technologies, along with efforts to improve public access to chemical toxicity information resources and to systematize older toxicity studies, have the potential to significantly improve information gathering efforts for chemical assessments and predictive capabilities in toxicology. Important developments include: 1) large and growing public resources that link chemical structures to biological activity and toxicity data in searchable format, and that offer more nuanced and varied representations of activity; 2) standardized relational data models that capture relevant details of chemical treatment and effects of published in vivo experiments; and 3) the generation of large amounts of new data from public efforts that are employing HTS technologies to probe a wide range of bioactivity and cellular processes across large swaths of chemical space. By annotating toxicity data with associated chemical structure information, these efforts link data across diverse study domains (e.g., 'omics', HTS, traditional toxicity studies), toxicity domains (carcinogenicity, developmental toxicity, neurotoxicity, immunotoxicity, etc.) and database sources (EPA, FDA, NCI, DSSTox, PubChem, GEO, ArrayExpress, etc.). Public initiatives are developing systematized data models of toxicity study areas and introducing standardized templates, controlled vocabularies, hierarchical organization, and powerful relational searching capability across capt
High-throughput gene mapping in Caenorhabditis elegans.
Swan, Kathryn A; Curtis, Damian E; McKusick, Kathleen B; Voinov, Alexander V; Mapa, Felipa A; Cancilla, Michael R
2002-07-01
Positional cloning of mutations in model genetic systems is a powerful method for the identification of targets of medical and agricultural importance. To facilitate the high-throughput mapping of mutations in Caenorhabditis elegans, we have identified a further 9602 putative new single nucleotide polymorphisms (SNPs) between two C. elegans strains, Bristol N2 and the Hawaiian mapping strain CB4856, by sequencing inserts from a CB4856 genomic DNA library and using an informatics pipeline to compare sequences with the canonical N2 genomic sequence. When combined with data from other laboratories, our marker set of 17,189 SNPs provides even coverage of the complete worm genome. To date, we have confirmed >1099 evenly spaced SNPs (one every 91 +/- 56 kb) across the six chromosomes and validated the utility of our SNP marker set and new fluorescence polarization-based genotyping methods for systematic and high-throughput identification of genes in C. elegans by cloning several proprietary genes. We illustrate our approach by recombination mapping and confirmation of the mutation in the cloned gene, dpy-18.
Anderson, Jeffrey A.; Teufel, Ronald J.; Yin, Philip D.; Hu, Wei-Shau
1998-01-01
Two models for the mechanism of retroviral recombination have been proposed: forced copy choice (minus-strand recombination) and strand displacement-assimilation (plus-strand recombination). Each minus-strand recombination event results in one template switch, whereas each plus-strand recombination event results in two template switches. Recombinant proviruses with one and more than one template switches were previously observed. Recombinants with one template switch were generated by minus-strand recombination, while recombinants containing more than one template switch may have been generated by plus-strand recombination or by correlated minus-strand recombination. We recently observed that retroviral recombination exhibits high negative interference whereby the frequency of recombinants containing multiple template-switching events is higher than expected. To delineate the mechanism that generates recombinants with more than one template switch, we devised a system that permits only minus-strand recombination. Two highly homologous vectors, WH204 and WH221, containing eight different restriction site markers were used. The primer binding site (PBS) of WH221 was deleted; although reverse transcription cannot initiate from WH221 RNA, it can serve as a template for DNA synthesis in heterozygotic virions. After one round of retroviral replication, the structures of the recombinant proviruses were examined. Recombinants containing two, three, four, and five template switches were observed at 1.4-, 10-, 65-, and 50-fold-higher frequencies, respectively, than expected. This indicates that minus-strand recombination events are correlated and can generate proviruses with multiple template switches efficiently. The frequencies of recombinants containing multiple template switches were similar to those observed in the previous system, which allowed both minus- and plus-strand recombination. 
Thus, the previously reported high negative interference during retroviral recombination can be caused by correlated template switches during minus-strand DNA synthesis. In addition, all examined recombinants contained an intact PBS, indicating that most of the plus-strand DNA transfer occurs after completion of the strong-stop DNA. PMID:9445017
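The notion of recombinants "observed at N-fold-higher frequency than expected" can be made concrete with a small null-model calculation. Assuming independent template switches with a fixed per-interval probability (a simplification of the actual marker analysis; the numbers below are illustrative, not the paper's data), the independence expectation and the fold excess might be computed as:

```python
from math import comb

def expected_multi_switch_freq(p_single, n_intervals, k):
    """Probability of exactly k template switches across n marker intervals
    if switches were independent, each occurring with per-interval
    probability p_single (the null hypothesis of no interference)."""
    return comb(n_intervals, k) * p_single**k * (1 - p_single)**(n_intervals - k)

def fold_excess(observed_freq, p_single, n_intervals, k):
    """Observed frequency over the independence expectation; values well
    above 1 indicate negative interference (correlated switches)."""
    return observed_freq / expected_multi_switch_freq(p_single, n_intervals, k)

# Illustrative: with 7 marker intervals and a 10% per-interval switch
# probability, double switches are expected ~12.4% of the time; observing
# them at 24.8% would be a 2-fold excess.
print(round(expected_multi_switch_freq(0.1, 7, 2), 4))  # → 0.124
print(round(fold_excess(0.248, 0.1, 7, 2), 2))          # → 2.0
```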
Constraints on Galactic Neutrino Emission with Seven Years of IceCube Data
NASA Astrophysics Data System (ADS)
Aartsen, M. G.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Ahrens, M.; Samarai, I. Al; Altmann, D.; Andeen, K.; Anderson, T.; Ansseau, I.; Anton, G.; Argüelles, C.; Auffenberg, J.; Axani, S.; Bagherpour, H.; Bai, X.; Barron, J. P.; Barwick, S. W.; Baum, V.; Bay, R.; Beatty, J. J.; Becker Tjus, J.; Becker, K.-H.; BenZvi, S.; Berley, D.; Bernardini, E.; Besson, D. Z.; Binder, G.; Bindig, D.; Blaufuss, E.; Blot, S.; Bohm, C.; Börner, M.; Bos, F.; Bose, D.; Böser, S.; Botner, O.; Bourbeau, J.; Bradascio, F.; Braun, J.; Brayeur, L.; Brenzke, M.; Bretz, H.-P.; Bron, S.; Burgman, A.; Carver, T.; Casey, J.; Casier, M.; Cheung, E.; Chirkin, D.; Christov, A.; Clark, K.; Classen, L.; Coenders, S.; Collin, G. H.; Conrad, J. M.; Cowen, D. F.; Cross, R.; Day, M.; de André, J. P. A. M.; De Clercq, C.; DeLaunay, J. J.; Dembinski, H.; De Ridder, S.; Desiati, P.; de Vries, K. D.; de Wasseige, G.; de With, M.; DeYoung, T.; Díaz-Vélez, J. C.; di Lorenzo, V.; Dujmovic, H.; Dumm, J. P.; Dunkman, M.; Eberhardt, B.; Ehrhardt, T.; Eichmann, B.; Eller, P.; Evenson, P. A.; Fahey, S.; Fazely, A. R.; Felde, J.; Filimonov, K.; Finley, C.; Flis, S.; Franckowiak, A.; Friedman, E.; Fuchs, T.; Gaisser, T. K.; Gallagher, J.; Gerhardt, L.; Ghorbani, K.; Giang, W.; Glauch, T.; Glüsenkamp, T.; Goldschmidt, A.; Gonzalez, J. G.; Grant, D.; Griffith, Z.; Haack, C.; Hallgren, A.; Halzen, F.; Hanson, K.; Hebecker, D.; Heereman, D.; Helbing, K.; Hellauer, R.; Hickford, S.; Hignight, J.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Hokanson-Fasig, B.; Hoshina, K.; Huang, F.; Huber, M.; Hultqvist, K.; In, S.; Ishihara, A.; Jacobi, E.; Japaridze, G. S.; Jeong, M.; Jero, K.; Jones, B. J. P.; Kalacynski, P.; Kang, W.; Kappes, A.; Karg, T.; Karle, A.; Katz, U.; Kauer, M.; Keivani, A.; Kelley, J. L.; Kheirandish, A.; Kim, J.; Kim, M.; Kintscher, T.; Kiryluk, J.; Kittler, T.; Klein, S. R.; Kohnen, G.; Koirala, R.; Kolanoski, H.; Köpke, L.; Kopper, C.; Kopper, S.; Koschinsky, J. P.; Koskinen, D. 
J.; Kowalski, M.; Krings, K.; Kroll, M.; Krückl, G.; Kunnen, J.; Kunwar, S.; Kurahashi, N.; Kuwabara, T.; Kyriacou, A.; Labare, M.; Lanfranchi, J. L.; Larson, M. J.; Lauber, F.; Lennarz, D.; Lesiak-Bzdak, M.; Leuermann, M.; Liu, Q. R.; Lu, L.; Lünemann, J.; Luszczak, W.; Madsen, J.; Maggi, G.; Mahn, K. B. M.; Mancina, S.; Maruyama, R.; Mase, K.; Maunu, R.; McNally, F.; Meagher, K.; Medici, M.; Meier, M.; Menne, T.; Merino, G.; Meures, T.; Miarecki, S.; Micallef, J.; Momenté, G.; Montaruli, T.; Moore, R. W.; Moulai, M.; Nahnhauer, R.; Nakarmi, P.; Naumann, U.; Neer, G.; Niederhausen, H.; Nowicki, S. C.; Nygren, D. R.; Obertacke Pollmann, A.; Olivas, A.; O'Murchadha, A.; Palczewski, T.; Pandya, H.; Pankova, D. V.; Peiffer, P.; Pepper, J. A.; Pérez de los Heros, C.; Pieloth, D.; Pinat, E.; Plum, M.; Price, P. B.; Przybylski, G. T.; Raab, C.; Rädel, L.; Rameez, M.; Rawlins, K.; Reimann, R.; Relethford, B.; Relich, M.; Resconi, E.; Rhode, W.; Richman, M.; Robertson, S.; Rongen, M.; Rott, C.; Ruhe, T.; Ryckbosch, D.; Rysewyk, D.; Sälzer, T.; Sanchez Herrera, S. E.; Sandrock, A.; Sandroos, J.; Sarkar, S.; Sarkar, S.; Satalecka, K.; Schlunder, P.; Schmidt, T.; Schneider, A.; Schoenen, S.; Schöneberg, S.; Schumacher, L.; Seckel, D.; Seunarine, S.; Soldin, D.; Song, M.; Spiczak, G. M.; Spiering, C.; Stachurska, J.; Stanev, T.; Stasik, A.; Stettner, J.; Steuer, A.; Stezelberger, T.; Stokstad, R. G.; Stößl, A.; Strotjohann, N. L.; Sullivan, G. W.; Sutherland, M.; Taboada, I.; Tatar, J.; Tenholt, F.; Ter-Antonyan, S.; Terliuk, A.; Tešić, G.; Tilav, S.; Toale, P. A.; Tobin, M. N.; Toscano, S.; Tosi, D.; Tselengidou, M.; Tung, C. F.; Turcati, A.; Turley, C. F.; Ty, B.; Unger, E.; Usner, M.; Vandenbroucke, J.; Van Driessche, W.; van Eijndhoven, N.; Vanheule, S.; van Santen, J.; Vehring, M.; Vogel, E.; Vraeghe, M.; Walck, C.; Wallace, A.; Wallraff, M.; Wandler, F. D.; Wandkowsky, N.; Waza, A.; Weaver, C.; Weiss, M. J.; Wendt, C.; Westerhoff, S.; Whelan, B. 
J.; Wickmann, S.; Wiebe, K.; Wiebusch, C. H.; Wille, L.; Williams, D. R.; Wills, L.; Wolf, M.; Wood, J.; Wood, T. R.; Woolsey, E.; Woschnagg, K.; Xu, D. L.; Xu, X. W.; Xu, Y.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Yuan, T.; Zoll, M.; IceCube Collaboration
2017-11-01
The origins of high-energy astrophysical neutrinos remain a mystery despite extensive searches for their sources. We present constraints from seven years of IceCube Neutrino Observatory muon data on the neutrino flux coming from the Galactic plane. This flux is expected from cosmic-ray interactions with the interstellar medium or near localized sources. Two methods were developed to test for a spatially extended flux from the entire plane, both of which are maximum likelihood fits but with different signal and background modeling techniques. We consider three templates for Galactic neutrino emission based primarily on gamma-ray observations and models that cover a wide range of possibilities. Based on these templates and in the benchmark case of an unbroken E^−2.5 power-law energy spectrum, we set 90% confidence level upper limits, constraining the possible Galactic contribution to the diffuse neutrino flux to be relatively small, less than 14% of the flux reported in Aartsen et al. above 1 TeV. A stacking method is also used to test catalogs of known high-energy Galactic gamma-ray sources.
From cancer genomes to cancer models: bridging the gaps
Baudot, Anaïs; Real, Francisco X.; Izarzugaza, José M. G.; Valencia, Alfonso
2009-01-01
Cancer genome projects are now being expanded in an attempt to provide complete landscapes of the mutations that exist in tumours. Although the importance of cataloguing genome variations is well recognized, there are obvious difficulties in bridging the gaps between high-throughput resequencing information and the molecular mechanisms of cancer evolution. Here, we describe the current status of the high-throughput genomic technologies, and the current limitations of the associated computational analysis and experimental validation of cancer genetic variants. We emphasize how the current cancer-evolution models will be influenced by the high-throughput approaches, in particular through efforts devoted to monitoring tumour progression, and how, in turn, the integration of data and models will be translated into mechanistic knowledge and clinical applications. PMID:19305388
NASA Astrophysics Data System (ADS)
Yamanaka, Eiji; Taniguchi, Rikiya; Itoh, Masamitsu; Omote, Kazuhiko; Ito, Yoshiyasu; Ogata, Kiyoshi; Hayashi, Naoya
2016-05-01
Nanoimprint lithography (NIL) is one of the most promising candidates for next-generation semiconductor lithography, offering high resolution at low cost. The resolution of NIL is determined by a high-definition template, since nanoimprinting faithfully transfers the pattern of the NIL template to the wafer. Because the cross-sectional profile of the template pattern strongly affects the resist profile on the wafer, management of the cross-sectional profile is essential. The grazing-incidence small-angle x-ray scattering (GI-SAXS) technique has been proposed as a method for measuring the cross-sectional profile of periodic nanostructure patterns. Incident x-rays strike the sample surface at a very low glancing angle, close to the critical angle for total reflection of the x-rays, and the x-rays scattered from the surface structure are recorded on a two-dimensional detector. The observed intensity is discrete in the horizontal (2θ) direction because of the periodicity of the structure: diffraction is observed only when the diffraction condition is satisfied. In the vertical (β) direction, the diffraction intensity pattern shows interference fringes reflecting the height and shape of the structure. A key advantage of x-ray measurement is that the optical constants of the materials are well known, so a specific diffraction intensity pattern can be calculated from a given model of the cross-sectional profile. The surface structure is then estimated by comparing calculated diffraction patterns, generated while sequentially varying the model parameters, against the measured diffraction pattern. Furthermore, GI-SAXS can measure an object non-destructively, suggesting its potential as an effective tool for product quality assurance. We have developed a cross-sectional profile measurement of quartz template patterns using the GI-SAXS technique. 
In this report, we describe the measurement capabilities of the GI-SAXS technique as a cross-sectional profile measurement tool for NIL quartz template patterns.
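The estimate-by-comparison procedure (calculate a pattern from a model of the profile, vary the model parameters, match against the measurement) can be sketched with a toy one-parameter forward model. The sinc²-type fringe function below is a crude stand-in for a real scattering calculation, and all numbers are illustrative:

```python
import math

def fringe_model(q_values, height):
    """Toy vertical fringe pattern for a box profile of the given height:
    |sin(qh/2)/(qh/2)|^2. A real GI-SAXS analysis would use a full
    scattering simulation instead of this stand-in."""
    out = []
    for q in q_values:
        x = q * height / 2.0
        out.append((math.sin(x) / x) ** 2 if x != 0.0 else 1.0)
    return out

def fit_height(q_values, measured, candidate_heights):
    """Grid search over the model parameter: return the candidate height
    whose simulated pattern best matches the measurement in the
    least-squares sense, mirroring the compare-while-varying-parameters
    procedure described in the abstract."""
    def chi2(h):
        sim = fringe_model(q_values, h)
        return sum((s - m) ** 2 for s, m in zip(sim, measured))
    return min(candidate_heights, key=chi2)

q = [0.05 * i for i in range(1, 60)]           # scattering vectors, 1/nm (toy)
measured = fringe_model(q, 60.0)               # synthetic "measurement": 60 nm
print(fit_height(q, measured, range(40, 81)))  # → 60
```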
Controlling high-throughput manufacturing at the nano-scale
NASA Astrophysics Data System (ADS)
Cooper, Khershed P.
2013-09-01
Interest in nano-scale manufacturing research and development is growing. The reason is to accelerate the translation of discoveries and inventions of nanoscience and nanotechnology into products that would benefit industry, economy and society. Ongoing research in nanomanufacturing is focused primarily on developing novel nanofabrication techniques for a variety of applications: materials, energy, electronics, photonics, biomedical, etc. Our goal is to foster the development of high-throughput methods of fabricating nano-enabled products. Large-area parallel processing and high-speed continuous processing are high-throughput means for mass production. An example of large-area processing is step-and-repeat nanoimprinting, by which nanostructures are reproduced again and again over a large area, such as a 12 in. wafer. Roll-to-roll processing is an example of continuous processing, by which it is possible to print and imprint multi-level nanostructures and nanodevices on a moving flexible substrate. The big pay-off is high-volume production and low unit cost. However, the anticipated cost benefits can only be realized if the increased production rate is accompanied by high yields of high quality products. To ensure product quality, we need to design and construct manufacturing systems such that the processes can be closely monitored and controlled. One approach is to bring cyber-physical systems (CPS) concepts to nanomanufacturing. CPS involves the control of a physical system such as manufacturing through modeling, computation, communication and control. Such a closely coupled system will involve in-situ metrology and closed-loop control of the physical processes guided by physics-based models and driven by appropriate instrumentation, sensing and actuation. This paper will discuss these ideas in the context of controlling high-throughput manufacturing at the nano-scale.
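The closed-loop idea (in-situ metrology feeding back into process actuation) can be illustrated with a minimal proportional-control sketch. The setpoint, gain, and process model are invented for the example and are far simpler than a physics-model-driven CPS controller:

```python
def run_closed_loop(setpoint, initial, gain, steps):
    """Minimal closed-loop sketch: each cycle, in-situ metrology reads the
    current feature size and a proportional controller nudges the process
    toward the setpoint. All names and numbers are illustrative."""
    value, history = initial, []
    for _ in range(steps):
        error = setpoint - value   # metrology feedback
        value += gain * error      # control actuation
        history.append(value)
    return history

# E.g. driving a feature dimension from 90 nm toward a 100 nm target
trace = run_closed_loop(setpoint=100.0, initial=90.0, gain=0.5, steps=20)
print(round(trace[-1], 3))  # → 100.0 (converges toward the setpoint)
```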
VieSLAF Framework: Enabling Adaptive and Versatile SLA-Management
NASA Astrophysics Data System (ADS)
Brandic, Ivona; Music, Dejan; Leitner, Philipp; Dustdar, Schahram
Novel computing paradigms like Grid and Cloud computing demand guarantees on non-functional requirements such as application execution time or price. Such requirements are usually negotiated following a specific Quality of Service (QoS) model and are expressed using Service Level Agreements (SLAs). Currently available QoS models assume either that service provider and consumer have matching SLA templates and a common understanding of the negotiated terms, or they provide public templates, which can be downloaded and utilized by the end users. On the one hand, matching SLA templates represent an unrealistic assumption in systems where service consumer and provider meet dynamically and on demand. On the other hand, handling of public templates seems to be a rather challenging issue, especially if the templates do not reflect users’ needs. In this paper we present VieSLAF, a novel framework for the specification and management of SLA mappings. Using VieSLAF, users may specify, manage, and apply SLA mappings, bridging the gap between non-matching SLA templates. Moreover, based on predefined learning functions and accumulated SLA mappings, domain-specific public SLA templates can be derived that reflect users’ needs.
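Conceptually, an SLA mapping translates the terms of a consumer's template into a provider's non-matching vocabulary. The dictionary-based sketch below only illustrates the idea; VieSLAF itself manages mappings between SLA documents, and the term names here are hypothetical:

```python
def apply_sla_mapping(consumer_terms, mapping):
    """Translate consumer SLA terms into the provider's vocabulary,
    bridging non-matching templates; terms without a mapping rule are
    passed through unchanged."""
    return {mapping.get(term, term): value for term, value in consumer_terms.items()}

# Hypothetical term names, for illustration only
consumer = {"RunTime": "3600s", "Cost": "5 EUR", "Region": "EU"}
mapping = {"RunTime": "ExecutionTime", "Cost": "Price"}
print(apply_sla_mapping(consumer, mapping))
# → {'ExecutionTime': '3600s', 'Price': '5 EUR', 'Region': 'EU'}
```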
NASA Astrophysics Data System (ADS)
Mok, Aaron T. Y.; Lee, Kelvin C. M.; Wong, Kenneth K. Y.; Tsia, Kevin K.
2018-02-01
Biophysical properties of cells could complement and correlate with biochemical markers to characterize a multitude of cellular states. Changes in cell size, dry mass and subcellular morphology, for instance, are relevant to cell-cycle progression, which is prevalently evaluated by DNA-targeted fluorescence measurements. Quantitative-phase microscopy (QPM) is among the effective biophysical phenotyping tools that can quantify cell sizes and sub-cellular dry mass density distribution of single cells at high spatial resolution. However, limited camera frame rate, and thus imaging throughput, makes QPM incompatible with high-throughput flow cytometry - a gold standard in multiparametric cell-based assays. Here we present a high-throughput approach for label-free analysis of cell cycle based on quantitative-phase time-stretch imaging flow cytometry at a throughput of > 10,000 cells/s. Our time-stretch QPM system enables sub-cellular resolution even at high speed, allowing us to extract a multitude (at least 24) of single-cell biophysical phenotypes (from both amplitude and phase images). Those phenotypes can be combined to track cell-cycle progression based on a t-distributed stochastic neighbor embedding (t-SNE) algorithm. Using multivariate analysis of variance (MANOVA) discriminant analysis, cell-cycle phases can also be predicted label-free with high accuracy at >90% in G1 and G2 phase, and >80% in S phase. We anticipate that high-throughput label-free cell cycle characterization could open new approaches for large-scale single-cell analysis, bringing new mechanistic insights into complex biological processes including disease pathogenesis.
Defining the taxonomic domain of applicability for mammalian-based high-throughput screening assays
Cell-based high throughput screening (HTS) technologies are becoming mainstream in chemical safety evaluations. The US Environmental Protection Agency (EPA) Toxicity Forecaster (ToxCastTM) and the multi-agency Tox21 Programs have been at the forefront in advancing this science, m...
Design and fabrication of asymmetric nanopores using pulsed PECVD
NASA Astrophysics Data System (ADS)
Kelkar, Sanket S.
Manipulating matter at nanometric length scales is important for many electronic, chemical and biological applications. Structures such as nanopores demonstrate a phenomenon known as hindered transport which can be exploited in analytical applications such as DNA sequencing, ionic transistors, and molecular sieving. Precisely controlling the size, geometry and surface characteristics of the nanopores is important for realizing these applications. In this work, we employ relatively large template structures (˜ 100 nm) produced by track-etching or electron beam lithography. The pore size is then reduced to the desired level by deposition of material using pulsed plasma enhanced chemical vapor deposition (PECVD). Pulsed PECVD has been developed as a high throughput alternative to atomic layer deposition (ALD) to deliver self-limiting growth of thin films. The goal of this thesis is to extend the application of pulsed PECVD to fabricate asymmetric nanopores. In contrast to ALD, pulsed PECVD does not result in perfectly conformal deposition profiles, and predicting the final nanostructure is more complicated. A two dimensional feature scale model was developed to predict film profile evolution. The model was built in COMSOL, and is based on a diffusion reaction framework with a spatially varying Knudsen diffusion coefficient to account for the molecular transport inside the features. A scaling analysis was used to account for ALD exposure limitations that commonly occur when coating these extremely high aspect ratio features. The model was verified by cross-section microscopy of deposition profiles on patterned cylinders and trenches. The model shows that it is possible to obtain unique nanopore morphologies in pulsed PECVD that are distinct from either steady state deposition processes such as physical vapor deposition (PVD) or conventional ALD. 
Polymeric track-etched (TE) membrane supports with a nominal size of 100 nm were employed as model template structures to demonstrate the capability of pulsed PECVD for precise pore size reduction. The efficacy of pulsed PECVD for nanopore fabrication was compared to both ALD and PVD. Flux and solute rejection measurements demonstrate that the pulsed PECVD-modified TE membranes exhibit higher selectivity without compromising flux, due to their asymmetric structure. For example, the TiO2-modified supports were demonstrated to deliver high retention (˜ 75%) of bovine serum albumin (BSA) protein while maintaining 70% of their initial pure water flux. PVD also forms asymmetric membranes that enable high flux, but due to morphological instabilities, reproducibility and control were poor in the PVD-modified membranes, and it was not possible to optimize the flux and the selectivity of the membranes simultaneously. Excellent agreement between measured flux and model predictions based on feature scale simulations provided further validation of the tool's fidelity. Since surface energetics can often dominate hindered transport, the kinetics and thermodynamics of octadecyltrichlorosilane (OTS) attachment were investigated in depth as an approach to convert hydrophilic metal oxides into hydrophobic surfaces. It was shown that a simple ozone treatment was a satisfactory alternative to hazardous acids to create the highly hydroxylated surface required for OTS attachment, and that using heptane as the solvent enabled the process to be conducted under ambient conditions without the need for a glovebox. The kinetics of OTS self-assembled monolayer (SAM) formation and the saturation contact angle (˜100°) on alumina are comparable to what has been observed for OTS attachment on silicon. The OTS SAMs also demonstrated excellent thermal stability, and the modified surface showed a critical surface tension of 21.4 dyne/cm.
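The spatially varying Knudsen diffusion coefficient used in the feature-scale model follows the standard kinetic-theory expression D_K = (d/3)·sqrt(8RT/(πM)). A sketch with illustrative numbers (the temperature and precursor molar mass are assumptions for the example, not values from the thesis):

```python
import math

def knudsen_diffusivity(pore_diameter_m, temperature_k, molar_mass_kg_mol):
    """Knudsen diffusion coefficient D_K = (d/3) * sqrt(8RT/(pi*M)), the
    standard kinetic-theory form for gas transport in features much
    smaller than the mean free path."""
    R = 8.314  # gas constant, J/(mol*K)
    mean_speed = math.sqrt(8 * R * temperature_k / (math.pi * molar_mass_kg_mol))
    return (pore_diameter_m / 3.0) * mean_speed

# Illustrative inputs: a 100 nm pore, 150 °C, and a 100 g/mol precursor
d_k = knudsen_diffusivity(100e-9, 423.0, 0.100)
print(f"{d_k:.1e} m^2/s")  # → 1.0e-05 m^2/s
```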
High-Throughput Pharmacokinetics for Environmental Chemicals (SOT)
High throughput screening (HTS) promises to allow prioritization of thousands of environmental chemicals with little or no in vivo information. For bioactivity identified by HTS, toxicokinetic (TK) models are essential to predict exposure thresholds below which no significant bio...
In-field High Throughput Phenotyping and Cotton Plant Growth Analysis Using LiDAR.
Sun, Shangpeng; Li, Changying; Paterson, Andrew H; Jiang, Yu; Xu, Rui; Robertson, Jon S; Snider, John L; Chee, Peng W
2018-01-01
Plant breeding programs and a wide range of plant science applications would greatly benefit from the development of in-field high throughput phenotyping technologies. In this study, a terrestrial LiDAR-based high throughput phenotyping system was developed. A 2D LiDAR was applied to scan plants from overhead in the field, and an RTK-GPS was used to provide spatial coordinates. Precise 3D models of scanned plants were reconstructed based on the LiDAR and RTK-GPS data. The ground plane of the 3D model was separated by the RANSAC algorithm, and a Euclidean clustering algorithm was applied to remove noise generated by weeds. After that, clean 3D surface models of cotton plants were obtained, from which three plot-level morphologic traits including canopy height, projected canopy area, and plant volume were derived. Canopy heights ranging from the 85th percentile to the maximum were computed based on the histogram of the z coordinates of all measured points; projected canopy area was derived by projecting all points onto a ground plane; and a trapezoidal-rule-based algorithm was proposed to estimate plant volume. Results of validation experiments showed good agreement between LiDAR measurements and manual measurements for maximum canopy height, projected canopy area, and plant volume, with R²-values of 0.97, 0.97, and 0.98, respectively. The developed system was used to scan the whole field repeatedly over the period from 43 to 109 days after planting. Growth trends and growth rate curves for all three derived morphologic traits were established over the monitoring period for each cultivar. Overall, four different cultivars showed similar growth trends and growth rate patterns. Each cultivar continued to grow until ~88 days after planting, and from then on varied little. However, the actual values were cultivar specific. Correlation analysis between morphologic traits and final yield was conducted over the monitoring period. 
When considering each cultivar individually, the three traits showed the best correlations with final yield during the period between around 67 and 109 days after planting, with maximum R²-values of 0.84, 0.88, and 0.85, respectively. The developed system demonstrated relatively high-throughput data collection and analysis.
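The trapezoidal-rule volume estimate can be sketched as a double integration over a rasterized canopy height grid. The grid below stands in for the gridded LiDAR point cloud, and the sampling spacings are illustrative:

```python
def trapz(samples, spacing):
    """Composite trapezoidal rule over uniformly spaced samples."""
    return spacing * (sum(samples) - 0.5 * (samples[0] + samples[-1]))

def plant_volume(height_grid, dx, dy):
    """Integrate each scan line of a canopy height grid with the
    trapezoidal rule to get cross-sectional areas, then integrate the
    areas across scan lines to get a plot-level volume."""
    areas = [trapz(row, dx) for row in height_grid]
    return trapz(areas, dy)

# A uniform 1 m canopy over a 2 m x 2 m plot, sampled every 0.5 m
grid = [[1.0] * 5 for _ in range(5)]
print(plant_volume(grid, 0.5, 0.5))  # → 4.0 (m^3)
```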
NASA Astrophysics Data System (ADS)
Moreland, Blythe; Oman, Kenji; Curfman, John; Yan, Pearlly; Bundschuh, Ralf
Methyl-binding domain (MBD) protein pulldown experiments have been a valuable tool in measuring the levels of methylated CpG dinucleotides. Due to the frequent use of this technique, high-throughput sequencing data sets are available that allow a detailed quantitative characterization of the underlying interaction between methylated DNA and MBD proteins. Analyzing such data sets, we first found that two such proteins cannot bind closer to each other than 2 bp, consistent with structural models of the DNA-protein interaction. Second, the large amount of sequencing data allowed us to find rather weak but nevertheless clearly statistically significant sequence preferences for several bases around the required CpG. These results demonstrate that pulldown sequencing is a high-precision tool in characterizing DNA-protein interactions. This material is based upon work supported by the National Science Foundation under Grant No. DMR-1410172.
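The 2 bp minimum-spacing observation amounts to a hard floor in the distribution of distances between adjacent inferred binding sites. A toy sketch of that computation (the site positions are invented for illustration):

```python
def adjacent_spacings(site_positions):
    """Distances (bp) between adjacent inferred binding sites on one
    molecule. Steric exclusion between neighboring bound proteins shows
    up as a hard floor in this distribution (>= 2 bp, per the text)."""
    sites = sorted(site_positions)
    return [b - a for a, b in zip(sites, sites[1:])]

# Toy inferred site positions on one DNA fragment
sites = [40, 10, 12, 19]
spacings = adjacent_spacings(sites)
print(spacings, min(spacings))  # → [2, 7, 21] 2
```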
Template-Based Geometric Simulation of Flexible Frameworks
Wells, Stephen A.; Sartbaeva, Asel
2012-01-01
Specialised modelling and simulation methods implementing simplified physical models are valuable generators of insight. Template-based geometric simulation is a specialised method for modelling flexible framework structures made up of rigid units. We review the background, development and implementation of the method, and its applications to the study of framework materials such as zeolites and perovskites. The “flexibility window” property of zeolite frameworks is a particularly significant discovery made using geometric simulation. Software implementing geometric simulation of framework materials, “GASP”, is freely available to researchers. PMID:28817055
Predicting hepatotoxicity using ToxCast in vitro bioactivity and chemical structure
Background: The U.S. EPA ToxCastTM program is screening thousands of environmental chemicals for bioactivity using hundreds of high-throughput in vitro assays to build predictive models of toxicity. We represented chemicals based on bioactivity and chemical structure descriptors ...
Identifying Structural Alerts Based on Zebrafish Developmental Morphological Toxicity (TDS)
Zebrafish constitute a powerful alternative animal model for chemical hazard evaluation. To provide an in vivo complement to high-throughput screening data from the ToxCast program, zebrafish developmental toxicity screens were conducted on the ToxCast Phase I (Padilla et al., 20...
Evaluating imputation algorithms for low-depth genotyping-by-sequencing (GBS) data
USDA-ARS?s Scientific Manuscript database
Well-powered genomic studies require genome-wide marker coverage across many individuals. For non-model species with few genomic resources, high-throughput sequencing (HTS) methods, such as Genotyping-By-Sequencing (GBS), offer an inexpensive alternative to array-based genotyping. Although affordabl...
Yin, Zheng; Zhou, Xiaobo; Bakal, Chris; Li, Fuhai; Sun, Youxian; Perrimon, Norbert; Wong, Stephen TC
2008-01-01
Background The recent emergence of high-throughput automated image acquisition technologies has forever changed how cell biologists collect and analyze data. Historically, the interpretation of cellular phenotypes in different experimental conditions has been dependent upon the expert opinions of well-trained biologists. Such qualitative analysis is particularly effective in detecting subtle, but important, deviations in phenotypes. However, while the rapid and continuing development of automated microscope-based technologies now facilitates the acquisition of trillions of cells in thousands of diverse experimental conditions, such as in the context of RNA interference (RNAi) or small-molecule screens, the massive size of these datasets precludes human analysis. Thus, the development of automated methods which aim to identify novel and biologically relevant phenotypes online is one of the major challenges in high-throughput image-based screening. Ideally, phenotype discovery methods should be designed to utilize prior/existing information and tackle three challenging tasks: recovering pre-defined, biologically meaningful phenotypes; differentiating novel phenotypes from known ones; and distinguishing novel phenotypes from each other. Arbitrarily extracted information causes biased analysis, while combining the complete existing datasets with each new image is intractable in high-throughput screens. Results Here we present the design and implementation of a novel and robust online phenotype discovery method with broad applicability that can be used in diverse experimental contexts, especially high-throughput RNAi screens. This method features phenotype modelling and iterative cluster merging using improved gap statistics. A Gaussian Mixture Model (GMM) is employed to estimate the distribution of each existing phenotype, and then used as the reference distribution in the gap statistics. 
This method is broadly applicable to a number of different types of image-based datasets derived from a wide spectrum of experimental conditions and is suitable for adaptively processing new images which are continuously added to existing datasets. Validations were carried out on different datasets, including a published RNAi screen using Drosophila embryos [Additional files 1, 2], a dataset for cell-cycle phase identification using HeLa cells [Additional files 1, 3, 4], and a synthetic dataset using polygons; our method tackled the three aforementioned tasks effectively with an accuracy range of 85%–90%. When our method is implemented in the context of a Drosophila genome-scale RNAi image-based screen of cultured cells aimed at identifying the contribution of individual genes towards the regulation of cell shape, it efficiently discovers meaningful new phenotypes and provides novel biological insight. We also propose a two-step procedure to modify the novelty detection method based on one-class SVM, so that it can be used for online phenotype discovery. In different conditions, we compared the SVM-based method with our method using various datasets, and our method consistently outperformed the SVM-based method in at least two of the three tasks by 2% to 5%. These results demonstrate that our method can be used to better identify novel phenotypes in image-based datasets from a wide range of conditions and organisms. Conclusion We demonstrate that our method can detect various novel phenotypes effectively in complex datasets. Experimental results also validate that our method performs consistently under different orders of image input, variation of starting conditions including the number and composition of existing phenotypes, and datasets from different screens. In our findings, the proposed method is suitable for online phenotype discovery in diverse high-throughput image-based genetic and chemical screens. PMID:18534020
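The core idea, scoring each cell against reference distributions of known phenotypes and flagging poorly explained cells as candidate novel phenotypes, can be sketched in one dimension. This is a drastic simplification of the paper's GMM-plus-gap-statistics method; the phenotype parameters and threshold are invented for illustration:

```python
import math

def gaussian_logpdf(x, mean, var):
    """Log density of a 1-D Gaussian, standing in for the per-phenotype
    GMM reference distributions of the method above."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def classify_or_novel(feature, phenotypes, log_threshold=-8.0):
    """Assign a cell to the best-scoring known phenotype, or flag it as a
    candidate novel phenotype when no reference distribution explains it
    well. The fixed threshold is an invented stand-in for the paper's
    gap-statistic cluster-merging criterion."""
    best_name = max(phenotypes, key=lambda n: gaussian_logpdf(feature, *phenotypes[n]))
    best_lp = gaussian_logpdf(feature, *phenotypes[best_name])
    return best_name if best_lp >= log_threshold else "novel"

# Two known phenotypes, parameterized by (mean, variance) of a shape feature
known = {"round": (1.0, 0.04), "spread": (3.0, 0.25)}
print(classify_or_novel(1.1, known))  # → round
print(classify_or_novel(9.0, known))  # → novel
```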
A high throughput respirometric assay for mitochondrial biogenesis and toxicity
Beeson, Craig C.; Beeson, Gyda C.; Schnellmann, Rick G.
2010-01-01
Mitochondria are a common target of toxicity for drugs and other chemicals; such toxicity results in decreased aerobic metabolism and cell death. In contrast, mitochondrial biogenesis restores cell vitality, and there is a need for new agents to induce biogenesis. Current cell-based models of mitochondrial biogenesis or toxicity are inadequate because cultured cell lines are highly glycolytic with minimal aerobic metabolism and altered mitochondrial physiology. In addition, there are no high-throughput, real-time assays that assess mitochondrial function. We adapted primary cultures of renal proximal tubular cells (RPTC) that exhibit in vivo levels of aerobic metabolism, are not glycolytic, and retain higher levels of differentiated functions, and used the Seahorse Biosciences analyzer to measure mitochondrial function in real time in multi-well plates. Using uncoupled respiration as a marker of electron transport chain (ETC) integrity, the nephrotoxicants cisplatin, HgCl2 and gentamicin exhibited mitochondrial toxicity prior to decreases in basal respiration and cell death. Conversely, using FCCP-uncoupled respiration as a marker of maximal ETC activity, 1-(2,5-dimethoxy-4-iodophenyl)-2-aminopropane (DOI), SRT1720, resveratrol, daidzein, and metformin produced mitochondrial biogenesis in RPTC. The merger of the RPTC model and multi-well respirometry results in a single high-throughput assay to measure mitochondrial biogenesis and toxicity, and nephrotoxic potential. PMID:20465991
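The respirometric readouts referred to above (basal and FCCP-uncoupled respiration) combine into standard derived markers. A sketch with toy oxygen consumption rate (OCR) numbers; the oligomycin step and the marker definitions follow common mitochondrial stress-test conventions and are assumptions, not data or protocol details from this paper:

```python
def respirometry_markers(basal, post_oligomycin, post_fccp):
    """Standard derived markers from an OCR trace: ATP-linked respiration,
    proton leak, maximal (FCCP-uncoupled) respiration, and spare
    respiratory capacity. All inputs are toy numbers (pmol O2/min)."""
    return {
        "atp_linked": basal - post_oligomycin,
        "proton_leak": post_oligomycin,
        "maximal": post_fccp,
        "spare_capacity": post_fccp - basal,
    }

markers = respirometry_markers(basal=100.0, post_oligomycin=20.0, post_fccp=180.0)
print(markers["spare_capacity"])  # → 80.0
```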
Probabilistic Assessment of High-Throughput Wireless Sensor Networks
Kim, Robin E.; Mechitov, Kirill; Sim, Sung-Han; Spencer, Billie F.; Song, Junho
2016-01-01
Structural health monitoring (SHM) using wireless smart sensors (WSS) has the potential to provide rich information on the state of a structure. However, because of their distributed nature, maintaining highly robust and reliable networks can be challenging. Assessing WSS network communication quality before and after finalizing a deployment is critical to achieving a successful WSS network for SHM purposes. Early studies on WSS network reliability mostly used temporal signal indicators, composed of a smaller number of packets, to assess network reliability. However, because WSS networks for SHM purposes often require high data throughput, i.e., a larger number of packets delivered within the communication, such an approach is not sufficient. Instead, in this study, a model that can assess, probabilistically, the long-term performance of the network is proposed. The proposed model is based on readily available measured data sets that represent communication quality during high-throughput data transfer. Then, an empirical limit-state function is determined, which is further used to estimate the probability of network communication failure. Monte Carlo simulation is adopted in this paper and applied to a small-scale and a full-bridge wireless network. By performing the proposed analysis on complex sensor networks, an optimized sensor topology can be achieved. PMID:27258270
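The probability-of-failure estimate described above can be sketched with a small Monte Carlo routine. The Beta-distributed packet-delivery ratio and the 0.9 threshold below are assumptions made for illustration, not values from the paper; in practice both the limit-state function and the sampling distribution would be fitted to measured data.

```python
import numpy as np

def failure_probability(limit_state, sampler, n=100_000, seed=0):
    """Monte Carlo estimate of P[g(X) < 0] for an empirical limit-state function g."""
    rng = np.random.default_rng(seed)
    x = sampler(rng, n)
    return np.mean(limit_state(x) < 0.0)

# Hypothetical limit state: communication fails when the measured
# packet-delivery ratio drops below a required threshold of 0.9.
def g(pdr):
    return pdr - 0.9

def sample_pdr(rng, n):
    # assumed Beta-distributed delivery ratio, as if fitted from field data
    return rng.beta(18, 2, size=n)

pf = failure_probability(g, sample_pdr, n=200_000)
print(round(pf, 3))
```

The estimator's standard error shrinks as 1/sqrt(n), which is why a large sample count is used for small failure probabilities.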
Litfin, Thomas; Zhou, Yaoqi; Yang, Yuedong
2017-04-15
The high cost of drug discovery motivates the development of accurate virtual screening tools. Binding-homology, which takes advantage of known protein-ligand binding pairs, has emerged as a powerful discrimination technique. In order to exploit all available binding data, modelled structures of ligand-binding sequences may be used to create an expanded structural binding template library. SPOT-Ligand 2 has demonstrated significantly improved screening performance over its previous version by expanding the template library to 15 times its previous size. It also performed better than or similarly to other binding-homology approaches on the DUD and DUD-E benchmarks. The server is available online at http://sparks-lab.org. Contact: yaoqi.zhou@griffith.edu.au or yuedong.yang@griffith.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved.
High-Throughput, Motility-Based Sorter for Microswimmers such as C. elegans
Yuan, Jinzhou; Zhou, Jessie; Raizen, David M.; Bau, Haim H.
2015-01-01
Animal motility varies with genotype, disease, aging, and environmental conditions. In many studies, it is desirable to carry out high throughput motility-based sorting to isolate rare animals for, among other things, forward genetic screens to identify genetic pathways that regulate phenotypes of interest. Many commonly used screening processes are labor-intensive, lack sensitivity, and require extensive investigator training. Here, we describe a sensitive, high throughput, automated, motility-based method for sorting nematodes. Our method is implemented in a simple microfluidic device capable of sorting thousands of animals per hour per module, and is amenable to parallelism. The device successfully enriches for known C. elegans motility mutants. Furthermore, using this device, we isolate low-abundance mutants capable of suppressing the somnogenic effects of the flp-13 gene, which regulates C. elegans sleep. By performing genetic complementation tests, we demonstrate that our motility-based sorting device efficiently isolates mutants for the same gene identified by tedious visual inspection of behavior on an agar surface. Therefore, our motility-based sorter is capable of performing high throughput gene discovery approaches to investigate fundamental biological processes. PMID:26008643
Systems metabolic engineering: genome-scale models and beyond.
Blazeck, John; Alper, Hal
2010-07-01
The advent of high-throughput genome-scale bioinformatics has led to an exponential increase in available cellular system data. Systems metabolic engineering attempts to use data-driven approaches--based on the data collected with high-throughput technologies--to identify gene targets and optimize phenotypic properties on a systems level. Current systems metabolic engineering tools are limited for predicting and defining complex phenotypes such as chemical tolerances and other global, multigenic traits. The most pragmatic systems-based tool for metabolic engineering to arise is the in silico genome-scale metabolic reconstruction. This tool has seen wide adoption for modeling cell growth and predicting beneficial gene knockouts, and we examine here how this approach can be expanded for novel organisms. This review highlights advances in the systems metabolic engineering approach, with a focus on de novo development and use of genome-scale metabolic reconstructions for metabolic engineering applications. We then discuss the challenges and prospects for this emerging field to enable model-based metabolic engineering. Specifically, we argue that current state-of-the-art systems metabolic engineering techniques represent a viable first step for improving product yield that still must be followed by combinatorial techniques or random strain mutagenesis to achieve optimal cellular systems.
Theory of adsorption in a polydisperse templated porous material: Hard sphere systems
NASA Astrophysics Data System (ADS)
Rżysko, Wojciech; Sokołowski, Stefan; Pizio, Orest
2002-03-01
A theoretical description of adsorption in a templated porous material, formed by an equilibrium quench of a polydisperse fluid composed of matrix and template particles and subsequent removal of the template particles is presented. The approach is based on the solution of the replica Ornstein-Zernike equations with Percus-Yevick and hypernetted chain closures. The method of solution uses expansions of size-dependent correlation functions into Fourier series, as described by Lado [J. Chem. Phys. 108, 6441 (1998)]. Specific calculations have been carried out for model systems, composed of hard spheres.
NASA Astrophysics Data System (ADS)
Wan, Mimi; Zhao, Wenbo; Peng, Fang; Wang, Qi; Xu, Ping; Mao, Chun; Shen, Jian
2016-08-01
A new kind of high-quality Ag/PS coaxial nanocable can be facilely synthesized using a soft/hard template method. In order to effectively introduce Ag sources into porous polystyrene (PS) nanotubes trapped in a porous anodic aluminum oxide (AAO) hard template, Pluronic F127 (F127) was used as guiding agent, soft template, and reductant. Meanwhile, an ethylene glycol solution was also used as solvent and co-reducing agent to assist in the formation of silver nanowires. The influences of the F127 concentration and the reduction reaction time on the formation of Ag/PS coaxial nanocables were discussed. Results indicated that high-quality Ag/PS coaxial nanocables can be obtained by the mixed mode of soft/hard templates under optimized conditions. This strategy is expected to be extendable to the design of more metal/polymer coaxial nanocables, enabling the creation of complex and functional nanoarchitectures and components.
THTM: A template matching algorithm based on HOG descriptor and two-stage matching
NASA Astrophysics Data System (ADS)
Jiang, Yuanjie; Ruan, Li; Xiao, Limin; Liu, Xi; Yuan, Feng; Wang, Haitao
2018-04-01
We propose a novel method for template matching named THTM, a template matching algorithm based on the HOG (histogram of oriented gradients) descriptor and two-stage matching. We rely on the fast construction of HOG and a two-stage matching procedure that jointly lead to a high-accuracy approach for matching. THTM makes full use of HOG and introduces a second matching stage, whereas traditional methods match only once. Our contribution is to apply HOG to template matching successfully and to present two-stage matching, which markedly improves matching accuracy based on the HOG descriptor. We analyze the key features of THTM and compare it to other commonly used alternatives on challenging real-world datasets. Experiments show that our method outperforms the comparison methods.
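A minimal sketch of HOG-based two-stage matching is shown below. It uses a simplified per-cell orientation histogram without block normalization and a toy image, so it is an illustration of the general idea, not the authors' exact THTM implementation.

```python
import numpy as np

def hog_descriptor(img, cells=2, bins=9):
    """Simplified HOG: per-cell gradient-orientation histograms, concatenated
    and L2-normalized (no block normalization, unlike the full descriptor)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientations
    h, w = img.shape
    feats = []
    for i in range(cells):
        for j in range(cells):
            rows = slice(i * h // cells, (i + 1) * h // cells)
            cols = slice(j * w // cells, (j + 1) * w // cells)
            hist, _ = np.histogram(ang[rows, cols], bins=bins,
                                   range=(0, np.pi), weights=mag[rows, cols])
            feats.append(hist)
    v = np.concatenate(feats)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def two_stage_match(image, template, coarse_step=4):
    """Stage 1: coarse grid scan by HOG distance; stage 2: dense re-scan of
    the neighbourhood around the best coarse hit."""
    th, tw = template.shape
    tdesc = hog_descriptor(template)
    H, W = image.shape

    def score(y, x):
        return np.linalg.norm(hog_descriptor(image[y:y + th, x:x + tw]) - tdesc)

    _, cy, cx = min((score(y, x), y, x)
                    for y in range(0, H - th + 1, coarse_step)
                    for x in range(0, W - tw + 1, coarse_step))
    best = min((score(y, x), y, x)
               for y in range(max(0, cy - coarse_step), min(H - th, cy + coarse_step) + 1)
               for x in range(max(0, cx - coarse_step), min(W - tw, cx + coarse_step) + 1))
    return best[1], best[2]

# toy scene: a bright square planted at (12, 24) in an otherwise flat image
template = np.zeros((8, 8))
template[2:6, 2:6] = 1.0
scene = np.zeros((40, 40))
scene[12:20, 24:32] = template
print(two_stage_match(scene, template))   # -> (12, 24)
```

The coarse stage keeps the scan cheap; the dense second pass recovers the accuracy lost to the coarse grid, which is the essence of the two-stage design.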
In response to a proposed vision and strategy for toxicity testing in the 21st century nascent high throughput toxicology (HTT) programs have tested thousands of chemicals in hundreds of pathway-based biological assays. Although, to date, use of HTT data for safety assessment of ...
Cavity-Type DNA Origami-Based Plasmonic Nanostructures for Raman Enhancement.
Zhao, Mengzhen; Wang, Xu; Ren, Shaokang; Xing, Yikang; Wang, Jun; Teng, Nan; Zhao, Dongxia; Liu, Wei; Zhu, Dan; Su, Shao; Shi, Jiye; Song, Shiping; Wang, Lihua; Chao, Jie; Wang, Lianhui
2017-07-05
DNA origami has been established as an addressable template for site-specific anchoring of gold nanoparticles (AuNPs). Given that AuNPs are assembled via charged DNA oligonucleotides, it is important to reduce the charge repulsion between AuNP-DNA conjugates and the template to realize high yields. Herein, we developed a cavity-type DNA origami as a template to organize 30 nm AuNPs, which formed dimer and tetramer plasmonic nanostructures. Transmission electron microscopy images showed that high yields of dimer and tetramer plasmonic nanostructures were obtained by using the cavity-type DNA origami as the template. More importantly, we observed significant Raman signal enhancement from molecules covalently attached to the plasmonic nanostructures, which provides a new route to high-sensitivity Raman sensing.
Protein structure modeling for CASP10 by multiple layers of global optimization.
Joo, Keehyoung; Lee, Juyong; Sim, Sangjin; Lee, Sun Young; Lee, Kiho; Heo, Seungryong; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung
2014-02-01
In the template-based modeling (TBM) category of CASP10 experiment, we introduced a new protocol called protein modeling system (PMS) to generate accurate protein structures in terms of side-chains as well as backbone trace. In the new protocol, a global optimization algorithm, called conformational space annealing (CSA), is applied to the three layers of TBM procedure: multiple sequence-structure alignment, 3D chain building, and side-chain re-modeling. For 3D chain building, we developed a new energy function which includes new distance restraint terms of Lorentzian type (derived from multiple templates), and new energy terms that combine (physical) energy terms such as dynamic fragment assembly (DFA) energy, DFIRE statistical potential energy, hydrogen bonding term, etc. These physical energy terms are expected to guide the structure modeling especially for loop regions where no template structures are available. In addition, we developed a new quality assessment method based on random forest machine learning algorithm to screen templates, multiple alignments, and final models. For TBM targets of CASP10, we find that, due to the combination of three stages of CSA global optimizations and quality assessment, the modeling accuracy of PMS improves at each additional stage of the protocol. It is especially noteworthy that the side-chains of the final PMS models are far more accurate than the models in the intermediate steps. Copyright © 2013 Wiley Periodicals, Inc.
Simple fluorescence-based high throughput cell viability assay for filamentous fungi.
Chadha, S; Kale, S P
2015-09-01
Filamentous fungi are important model organisms for understanding eukaryotic processes and have been frequently exploited in research and industry. These fungi are also causative agents of serious diseases in plants and humans. Disease management strategies include in vitro susceptibility testing of fungal pathogens to environmental conditions and antifungal agents. Conventional methods used for antifungal susceptibility testing are cumbersome, time-consuming, and not suitable for large-scale analysis. Here, we report a rapid, high-throughput, microplate-based fluorescence method for investigating the toxicity of antifungal and stress (osmotic, salt and oxidative) agents on Magnaporthe oryzae and compare it with the agar dilution method. This bioassay is optimized for the reduction of resazurin to fluorescent resorufin by the fungal hyphae. The resazurin bioassay showed inhibitory rates and IC50 values comparable to the agar dilution method and to previously reported IC50 or MIC values for M. oryzae and other fungi. The present method can screen a range of test agents from different chemical classes with different modes of action for antifungal activity in a simple, sensitive, time- and cost-effective manner. A simple fluorescence-based high-throughput method is thus available to test the effects of stress and antifungal agents on the viability of the filamentous fungus Magnaporthe oryzae. This resazurin fluorescence assay can detect inhibitory effects comparable to those obtained using the growth inhibition assay, with the added advantages of simplicity and time and cost effectiveness. This high-throughput viability assay has great potential for large-scale screening of chemical libraries of antifungal agents, for evaluating the effects of environmental conditions, and for hyphal kinetic studies in mutant and natural populations of filamentous fungi. © 2015 The Society for Applied Microbiology.
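IC50 values of the kind reported by such assays can be estimated from normalized dose-response readings. The sketch below uses simple log-linear interpolation around the 50% point; the fluorescence readings are invented for illustration, and a real analysis would typically fit a four-parameter logistic curve instead.

```python
import numpy as np

def ic50(conc, response):
    """Estimate IC50 by log-linear interpolation of a normalized dose-response
    curve (response = 1 means full growth, 0 means full inhibition)."""
    conc, response = np.asarray(conc, float), np.asarray(response, float)
    above = np.where(response >= 0.5)[0][-1]   # last dose still above 50%
    below = above + 1                          # first dose below 50%
    x0, x1 = np.log10(conc[above]), np.log10(conc[below])
    y0, y1 = response[above], response[below]
    frac = (0.5 - y0) / (y1 - y0)
    return 10 ** (x0 + frac * (x1 - x0))

# hypothetical fluorescence readings normalized to the untreated control
doses = [0.1, 1.0, 10.0, 100.0]
resp = [0.95, 0.80, 0.20, 0.05]
print(round(ic50(doses, resp), 2))   # -> 3.16
```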
Model-Based Assurance Case+ (MBAC+): Tutorial on Modeling Radiation Hardness Assurance Activities
NASA Technical Reports Server (NTRS)
Austin, Rebekah; Label, Ken A.; Sampson, Mike J.; Evans, John; Witulski, Art; Sierawski, Brian; Karsai, Gabor; Mahadevan, Nag; Schrimpf, Ron; Reed, Robert A.
2017-01-01
This presentation will cover why modeling is useful for radiation hardness assurance cases, and also provide information on Model-Based Assurance Case+ (MBAC+), NASA's Reliability and Maintainability Template, and Fault Propagation Modeling.
Huang, Ming-Wei; Liu, Shu-Ming; Zheng, Lei; Shi, Yan; Zhang, Jie; Li, Yan-Sheng; Yu, Guang-Yan; Zhang, Jian-Guo
2012-11-01
To enhance the accuracy of radioactive seed implants in the head and neck, a digital-model individual template, containing information on both the needle pathway and facial features, was designed to guide implantation with CT imaging. Thirty-one patients with recurrent and locally advanced malignant tumors of the head and neck after prior surgery and radiotherapy were involved in this study. Before (125)I implantation, patients received CT scans at 0.75 mm slice thickness. The brachytherapy treatment planning system (BTPS) software was used to make the implantation plan based on the CT images. Mimics software and Geomagic software were used to read the data containing the CT images and implantation plan, and to design the individual template. The individual template, containing information on the needle pathway and facial features simultaneously, was then fabricated by a rapid prototyping (RP) technique. All patients received (125)I seed interstitial implantation under the guidance of the individual template and CT. The individual templates were positioned easily and accurately, and were stable. After implantation, a treatment quality evaluation was made by CT and TPS. The seed and dose distributions (D(90), V(100), V(150)) met the treatment requirements well. Clinical practice confirms that this approach can facilitate easier and more accurate implantation.
NASA Astrophysics Data System (ADS)
Xu, Shicai; Zhan, Jian; Man, Baoyuan; Jiang, Shouzhen; Yue, Weiwei; Gao, Shoubao; Guo, Chengang; Liu, Hanping; Li, Zhenhua; Wang, Jihua; Zhou, Yaoqi
2017-03-01
Reliable determination of binding kinetics and affinity of DNA hybridization and single-base mismatches plays an essential role in systems biology, personalized and precision medicine. The standard tools are optical-based sensors that are difficult to operate at low cost and to miniaturize for high-throughput measurement. Biosensors based on nanowire field-effect transistors have been developed, but reliable and cost-effective fabrication remains a challenge. Here, we demonstrate that a graphene single-crystal domain patterned into multiple channels can measure time- and concentration-dependent DNA hybridization kinetics and affinity reliably and sensitively, with a detection limit of 10 pM for DNA. It can distinguish single-base mutations quantitatively in real time. An analytical model is developed to estimate probe density, efficiency of hybridization and the maximum sensor response. The results suggest a promising future for cost-effective, high-throughput screening of drug candidates, genetic variations and disease biomarkers by using an integrated, miniaturized, all-electrical multiplexed, graphene-based DNA array.
Liu, Ning; Tian, Ru; Loeb, Daniel D
2003-02-18
Synthesis of the relaxed-circular (RC) DNA genome of hepadnaviruses requires two template switches during plus-strand DNA synthesis: primer translocation and circularization. Although primer translocation and circularization use different donor and acceptor sequences, and are distinct temporally, they share the common theme of switching from one end of the minus-strand template to the other end. Studies of duck hepatitis B virus have indicated that, in addition to the donor and acceptor sequences, three other cis-acting sequences, named 3E, M, and 5E, are required for the synthesis of RC DNA by contributing to primer translocation and circularization. The mechanism by which 3E, M, and 5E act was not known. We present evidence that these sequences function by base pairing with each other within the minus-strand template. 3E base-pairs with one portion of M (M3) and 5E base-pairs with an adjacent portion of M (M5). We found that disrupting base pairing between 3E and M3 and between 5E and M5 inhibited primer translocation and circularization. More importantly, restoring base pairing with mutant sequences restored the production of RC DNA. These results are consistent with the model that, within duck hepatitis B virus capsids, the ends of the minus-strand template are juxtaposed via base pairing to facilitate the two template switches during plus-strand DNA synthesis.
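The base-pairing argument above can be checked mechanically for any candidate pair of sequence elements: two elements can pair when one is the reverse complement of the other. The sequences below are hypothetical stand-ins for illustration, not the actual 3E/M3 elements.

```python
def reverse_complement(seq):
    """Reverse complement of a DNA sequence (uppercase A/C/G/T)."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def can_base_pair(a, b):
    """True if sequence a can pair with sequence b in antiparallel orientation."""
    return a == reverse_complement(b)

# hypothetical 3E / M3 sequences, for illustration only
three_e = "GATTC"
m3 = "GAATC"
print(can_base_pair(three_e, m3))   # -> True
```

Disrupting one element (a point substitution) breaks the check, and restoring complementarity in the partner restores it, mirroring the mutant/compensatory-mutant logic of the experiments.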
AOPs & Biomarkers: Bridging High Throughput Screening and Regulatory Decision Making.
As high throughput screening (HTS) approaches play a larger role in toxicity testing, computational toxicology has emerged as a critical component in interpreting the large volume of data produced. Computational models for this purpose are becoming increasingly more sophisticated...
A comparison of different functions for predicted protein model quality assessment.
Li, Juan; Fang, Huisheng
2016-07-01
In protein structure prediction, a considerable number of models are usually produced by either the Template-Based Method (TBM) or ab initio prediction. The purpose of this study is to find the critical parameter for assessing the quality of the predicted models. A non-redundant template library was developed and 138 target sequences were modeled. The target sequences were all distant from the proteins in the template library and were aligned with template library proteins on the basis of the transformation matrix. The quality of each model was first assessed with QMEAN and its six parameters, which are C_β interaction energy (C_beta), all-atom pairwise energy (PE), solvation energy (SE), torsion angle energy (TAE), secondary structure agreement (SSA), and solvent accessibility agreement (SAE). Finally, the alignment score (score) was also used to assess the quality of each model. Hence, a total of eight parameters (i.e., QMEAN, C_beta, PE, SE, TAE, SSA, SAE, score) were independently used to assess the quality of each model. The results indicate that SSA is the best parameter for estimating the quality of a model.
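Picking the "critical parameter" among several candidate quality scores amounts to ranking them by how well each one tracks the models' true quality. The sketch below ranks parameters by absolute Pearson correlation on toy data; the parameter names are borrowed from the abstract, but the data and noise levels are invented for illustration.

```python
import numpy as np

def best_assessment_parameter(scores, true_quality):
    """Rank candidate quality-assessment parameters by the absolute Pearson
    correlation between each score and the models' true quality."""
    ranked = sorted(
        scores.items(),
        key=lambda kv: abs(np.corrcoef(kv[1], true_quality)[0, 1]),
        reverse=True,
    )
    return [name for name, _ in ranked]

# toy data: SSA tracks quality closely, the others are noisier (illustrative only)
rng = np.random.default_rng(2)
quality = rng.random(50)
scores = {
    "SSA":    quality + rng.normal(0, 0.05, 50),
    "QMEAN":  quality + rng.normal(0, 0.3, 50),
    "C_beta": rng.normal(0, 1.0, 50),
}
print(best_assessment_parameter(scores, quality)[0])   # -> SSA
```

In practice a rank correlation (Spearman) is often preferred over Pearson, since quality scores need only order the models correctly.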
Chedjou, Jean Chamberlain; Kyamakya, Kyandoghere
2015-04-01
This paper develops and validates a comprehensive and universally applicable computational concept for solving nonlinear differential equations (NDEs) through a neurocomputing approach based on cellular neural networks (CNNs). High precision, stability, convergence, and the lowest possible memory requirements are ensured by the CNN processor architecture. A significant challenge solved in this paper is that all these computing features are ensured in all system states (regular or chaotic) and under all bifurcation conditions that may be experienced by NDEs. One key aim of this paper is to develop and demonstrate a solver concept showing that CNN processors (realized either in hardware or in software) are universal solvers of NDE models. The solving logic or algorithm of given NDEs (possible examples are: Duffing, Mathieu, Van der Pol, Jerk, Chua, Rössler, Lorenz, Burgers, and the transport equations) through a CNN processor system is provided by a set of templates that are computed by our comprehensive template calculation technique, which we call nonlinear adaptive optimization. This paper therefore represents a significant contribution and a cutting-edge real-time computational engineering approach, especially considering the various scientific and engineering applications of this ultrafast, energy- and memory-efficient, and high-precision NDE solver concept. For illustration purposes, three NDE models are demonstratively solved, and the related CNN templates are derived and used: the periodically excited Duffing equation, the Mathieu equation, and the transport equation.
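As a conventional reference point (not the CNN-template solver itself), the periodically excited Duffing equation mentioned above can be integrated with a classical Runge-Kutta scheme. The parameter values are typical textbook choices for the forced double-well Duffing oscillator, not those of the paper.

```python
import numpy as np

def duffing_rhs(t, y, delta=0.2, alpha=-1.0, beta=1.0, gamma=0.3, omega=1.2):
    """Forced Duffing oscillator: x'' + delta*x' + alpha*x + beta*x^3 = gamma*cos(omega*t)."""
    x, v = y
    return np.array([v, gamma * np.cos(omega * t) - delta * v - alpha * x - beta * x ** 3])

def rk4(f, y0, t0, t1, n):
    """Classical 4th-order Runge-Kutta integration over n fixed steps."""
    h = (t1 - t0) / n
    t, y = t0, np.array(y0, float)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

x_final, v_final = rk4(duffing_rhs, [1.0, 0.0], 0.0, 50.0, 5000)
print(np.isfinite(x_final) and abs(x_final) < 10)   # trajectory stays bounded -> True
```

A fixed-step explicit scheme like this is exactly what struggles near stiff or chaotic regimes, which is the motivation the abstract gives for a template-based CNN solver with guaranteed stability across bifurcations.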
Mathematical and Computational Modeling in Complex Biological Systems
Ji, Zhiwei; Yan, Ke; Li, Wenyang; Hu, Haigen; Zhu, Xiaoliang
2017-01-01
The biological process and molecular functions involved in the cancer progression remain difficult to understand for biologists and clinical doctors. Recent developments in high-throughput technologies urge the systems biology to achieve more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the developments of high-throughput technologies and systemic modeling of the biological process in cancer research. In this review, we firstly studied several typical mathematical modeling approaches of biological systems in different scales and deeply analyzed their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling were summarized. To conclude, this review provides an update of important solutions using computational modeling approaches in systems biology. PMID:28386558
Understanding GPU Power. A Survey of Profiling, Modeling, and Simulation Methods
Bridges, Robert A.; Imam, Neena; Mintz, Tiffany M.
2016-09-01
Modern graphics processing units (GPUs) have complex architectures that admit exceptional performance and energy efficiency for high-throughput applications. Though GPUs consume large amounts of power, their use for high-throughput applications facilitates state-of-the-art energy efficiency and performance. Consequently, continued development relies on understanding their power consumption. Our work is a survey of GPU power modeling and profiling methods with increased detail on noteworthy efforts. Moreover, as direct measurement of GPU power is necessary for model evaluation and parameter initiation, internal and external power sensors are discussed. Hardware counters, which are low-level tallies of hardware events, share strong correlation to power use and performance. Statistical correlation between power and performance counters has yielded worthwhile GPU power models, yet the complexity inherent to GPU architectures presents new hurdles for power modeling. Developments and challenges of counter-based GPU power modeling are discussed. Often building on the counter-based models, research efforts for GPU power simulation, which make power predictions from input code and hardware knowledge, provide opportunities for optimization in programming or architectural design. Noteworthy strides in power simulations for GPUs are included along with their performance or functional simulator counterparts when appropriate. Lastly, possible directions for future research are discussed.
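The counter-based power modeling described above is, in its simplest form, a linear least-squares fit of measured power against hardware-counter readings. Everything below is synthetic: the two "counters" and the coefficients are invented to show the fitting step, not taken from any real GPU.

```python
import numpy as np

def fit_power_model(counters, power):
    """Least-squares linear power model: P ≈ w · counters + b,
    the simplest form of counter-based power modeling."""
    X = np.column_stack([counters, np.ones(len(counters))])
    coef, *_ = np.linalg.lstsq(X, power, rcond=None)
    return coef[:-1], coef[-1]

# synthetic training samples: power driven by two hypothetical counter rates
rng = np.random.default_rng(3)
counters = rng.random((100, 2))                 # normalized counter readings
true_w, true_b = np.array([30.0, 55.0]), 40.0   # watts per unit activity, idle watts
power = counters @ true_w + true_b + rng.normal(0, 0.5, 100)  # noisy sensor

w, b = fit_power_model(counters, power)
print(np.allclose(w, true_w, atol=1.0), abs(b - true_b) < 1.0)   # -> True True
```

Real counter-based models often extend this with nonlinear terms or per-kernel segmentation, since GPU power does not scale linearly with every event type.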
Rapid formation of size-controllable multicellular spheroids via 3D acoustic tweezers.
Chen, Kejie; Wu, Mengxi; Guo, Feng; Li, Peng; Chan, Chung Yu; Mao, Zhangming; Li, Sixing; Ren, Liqiang; Zhang, Rui; Huang, Tony Jun
2016-07-05
The multicellular spheroid is an important 3D cell culture model for drug screening, tissue engineering, and fundamental biological research. Although several spheroid formation methods have been reported, the field still lacks high-throughput, simple fabrication methods, which slows adoption in the drug development industry. Surface acoustic wave (SAW) based cell manipulation methods, which are known to be non-invasive, flexible, and high-throughput, have not previously been developed successfully for fabricating 3D cell assemblies or spheroids, owing to the limited understanding of SAW-based vertical levitation. In this work, we demonstrated the capability of fabricating multicellular spheroids with the 3D acoustic tweezers platform. Our method used the drag force from microstreaming to levitate cells in the vertical direction, and the radiation force from the Gor'kov potential to aggregate cells in the horizontal plane. After optimizing the device geometry and input power, we demonstrated the rapid and high-throughput nature of our method by continuously fabricating more than 150 size-controllable spheroids and transferring them to Petri dishes every 30 minutes. The spheroids fabricated by our 3D acoustic tweezers can be cultured for a week with good cell viability. We further demonstrated that spheroids fabricated by this method could be used for drug testing. Unlike the 2D monolayer model, HepG2 spheroids fabricated by the 3D acoustic tweezers manifested distinct drug resistance, which matched existing reports. The 3D acoustic tweezers based method can serve as a novel bio-manufacturing tool to fabricate complex 3D cell assemblies for biological research, tissue engineering, and drug development.
Towards sensitive, high-throughput, biomolecular assays based on fluorescence lifetime
NASA Astrophysics Data System (ADS)
Ioanna Skilitsi, Anastasia; Turko, Timothé; Cianfarani, Damien; Barre, Sophie; Uhring, Wilfried; Hassiepen, Ulrich; Léonard, Jérémie
2017-09-01
Time-resolved fluorescence detection for robust sensing of biomolecular interactions is developed by implementing time-correlated single photon counting under high-throughput conditions. Droplet microfluidics is used as a promising platform for the very fast handling of low-volume samples. We illustrate the potential of this very sensitive and cost-effective technology in the context of an enzymatic activity assay based on fluorescently-labeled biomolecules. Fluorescence lifetime detection by time-correlated single photon counting is shown to enable reliable discrimination between positive and negative control samples at a throughput as high as several hundred samples per second.
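The core computation behind lifetime-based discrimination is fitting an exponential decay to a TCSPC histogram. The sketch below uses a log-linear least-squares fit on a synthetic noiseless decay; real photon-counting data would call for Poisson-weighted or maximum-likelihood fitting, and often multi-exponential models.

```python
import numpy as np

def fit_lifetime(t, counts):
    """Estimate a single-exponential fluorescence lifetime tau from TCSPC
    histogram counts via a log-linear fit: ln N(t) = ln N0 - t/tau."""
    mask = counts > 0                       # log only defined for positive bins
    slope, _ = np.polyfit(t[mask], np.log(counts[mask]), 1)
    return -1.0 / slope

# synthetic noiseless decay with tau = 2.5 ns
t = np.linspace(0, 20, 200)                 # nanoseconds
counts = 1000.0 * np.exp(-t / 2.5)
print(round(fit_lifetime(t, counts), 2))    # -> 2.5
```

Because the lifetime, unlike raw intensity, is insensitive to concentration and excitation fluctuations, this single fitted number is what separates positive from negative droplets in the assay.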
ToxCast Workflow: High-throughput screening assay data processing, analysis and management (SOT)
US EPA’s ToxCast program is generating data in high-throughput screening (HTS) and high-content screening (HCS) assays for thousands of environmental chemicals, for use in developing predictive toxicity models. Currently the ToxCast screening program includes over 1800 unique c...
NASA Astrophysics Data System (ADS)
Wang, Youwei; Zhang, Wenqing; Chen, Lidong; Shi, Siqi; Liu, Jianjun
2017-12-01
Li-ion batteries are a key technology for addressing the global challenges of clean renewable energy and environmental pollution. Their contemporary applications, for portable electronic devices, electric vehicles, and large-scale power grids, stimulate the development of high-performance battery materials with high energy density, high power, good safety, and long lifetime. High-throughput calculations provide a practical strategy to discover new battery materials and optimize the performance of currently known materials. Most cathode materials screened by previous high-throughput calculations cannot meet the requirements of practical applications because only the capacity, voltage and volume change of the bulk were considered. It is important to include more structure-property relationships, such as point defects, surface and interface effects, doping and metal mixing, and nanosize effects, in high-throughput calculations. In this review, we establish a quantitative description of structure-property relationships in Li-ion battery materials in terms of intrinsic bulk parameters, which can be applied in future high-throughput calculations to screen Li-ion battery materials. Based on these parameterized structure-property relationships, a possible high-throughput computational screening workflow is proposed to obtain high-performance battery materials.
Process in manufacturing high efficiency AlGaAs/GaAs solar cells by MO-CVD
NASA Technical Reports Server (NTRS)
Yeh, Y. C. M.; Chang, K. I.; Tandon, J.
1984-01-01
Manufacturing technology for mass-producing high-efficiency GaAs solar cells is discussed. Progress in using a high-throughput MO-CVD reactor to produce high-efficiency GaAs solar cells is described, along with the thickness and doping-concentration uniformity of metalorganic chemical vapor deposition (MO-CVD) GaAs and AlGaAs layer growth. In addition, new tooling designs are given which increase the throughput of solar cell processing. To date, 2 cm x 2 cm AlGaAs/GaAs solar cells with efficiencies up to 16.5% have been produced. In order to meet throughput goals for mass-producing GaAs solar cells, a large MO-CVD system (Cambridge Instrument Model MR-200) was installed, with a susceptor initially capable of processing 20 wafers (up to 75 mm diameter) in a single growth run. In the MR-200, the sequencing of the gases and the heating power are controlled by a microprocessor-based programmable control console; hence, operator errors can be reduced, leading to a more reproducible production sequence.
Development of rapid and sensitive high throughput pharmacologic assays for marine phycotoxins.
Van Dolah, F M; Finley, E L; Haynes, B L; Doucette, G J; Moeller, P D; Ramsdell, J S
1994-01-01
The lack of rapid, high-throughput assays is a major obstacle to many aspects of research on marine phycotoxins. Here we describe the application of microplate scintillation technology to develop high-throughput assays for several classes of marine phycotoxins based on their differential pharmacologic actions. High-throughput "drug discovery" format microplate receptor binding assays developed for brevetoxins/ciguatoxins and for domoic acid are described. Analysis of brevetoxins/ciguatoxins is carried out by binding competition with [3H]PbTx-3 for site 5 on the voltage-dependent sodium channel in rat brain synaptosomes. Analysis of domoic acid is based on binding competition with [3H]kainic acid for the kainate/quisqualate glutamate receptor using frog brain synaptosomes. In addition, a high-throughput microplate 45Ca flux assay for the determination of maitotoxins is described. These microplate assays can be completed within 3 hours, have sensitivities below 1 ng, and can analyze dozens of samples simultaneously. The assays have been demonstrated to be useful for assessing algal toxicity and for assay-guided purification of toxins, and are applicable to the detection of biotoxins in seafood.
Chromosome rearrangements via template switching between diverged repeated sequences
Anand, Ranjith P.; Tsaponina, Olga; Greenwell, Patricia W.; Lee, Cheng-Sheng; Du, Wei; Petes, Thomas D.
2014-01-01
Recent high-resolution genome analyses of cancer and other diseases have revealed the occurrence of microhomology-mediated chromosome rearrangements and copy number changes. Although some of these rearrangements appear to involve nonhomologous end-joining, many must have involved mechanisms requiring new DNA synthesis. Models such as microhomology-mediated break-induced replication (MM-BIR) have been invoked to explain these rearrangements. We examined BIR and template switching between highly diverged sequences in Saccharomyces cerevisiae, induced during repair of a site-specific double-strand break (DSB). Our data show that such template switches are robust mechanisms that give rise to complex rearrangements. Template switches between highly divergent sequences appear to be mechanistically distinct from the initial strand invasions that establish BIR. In particular, such jumps are less constrained by sequence divergence and exhibit a different pattern of microhomology junctions. BIR traversing repeated DNA sequences frequently results in complex translocations analogous to those seen in mammalian cells. These results suggest that template switching among repeated genes is a potent driver of genome instability and evolution. PMID:25367035
Toxicity pathway-based mode of action modeling for risk assessment
In response to the 2007 NRC report on toxicity testing in the 21st century, the USEPA has entered into a memorandum of understanding with the National Human Genome Research Institute and the national Toxicology Program to jointly pursue ways to incorporate high throughput methods...
Identifying chemicals that provide a specific function within a product, yet have minimal impact on the human body or environment, is the goal of most formulation chemists and engineers practicing green chemistry. We present a methodology to identify potential chemical functional...
New methods are needed to screen thousands of environmental chemicals for toxicity, including developmental neurotoxicity. In vitro, cell-based assays that model key cellular events have been proposed for high throughput screening of chemicals for developmental neurotoxicity. Whi...
Won, Jonghun; Lee, Gyu Rie; Park, Hahnbeom; Seok, Chaok
2018-06-07
The second extracellular loops (ECL2s) of G-protein-coupled receptors (GPCRs) are often involved in GPCR functions, and their structures have important implications in drug discovery. However, structure prediction of ECL2 is difficult because of its long length and the structural diversity among different GPCRs. In this study, a new ECL2 conformational sampling method involving both template-based and ab initio sampling was developed. Inspired by the observation of similar ECL2 structures of closely related GPCRs, a template-based sampling method employing loop structure templates selected from the structure database was developed. A new metric for evaluating similarity of the target loop to templates was introduced for template selection. An ab initio loop sampling method was also developed to treat cases without highly similar templates. The ab initio method is based on the previously developed fragment assembly and loop closure method. A new sampling component that takes advantage of secondary structure prediction was added. In addition, a conserved disulfide bridge restraining ECL2 conformation was predicted and analytically incorporated into sampling, reducing the effective dimension of the conformational search space. The sampling method was combined with an existing energy function for comparison with previously reported loop structure prediction methods, and the benchmark test demonstrated outstanding performance.
Hu, W S; Bowman, E H; Delviks, K A; Pathak, V K
1997-01-01
Homologous recombination and deletions occur during retroviral replication when reverse transcriptase switches templates. While recombination occurs solely by intermolecular template switching (between copackaged RNAs), deletions can occur by an intermolecular or an intramolecular template switch (within the same RNA). To directly compare the rates of intramolecular and intermolecular template switching, two spleen necrosis virus-based vectors were constructed. Each vector contained a 110-bp direct repeat that was previously shown to delete at a high rate. The 110-bp direct repeat was flanked by two different sets of restriction site markers. These vectors were used to form heterozygotic virions containing RNAs of each parental vector, from which recombinant viruses were generated. By analyses of the markers flanking the direct repeats in recombinant and nonrecombinant proviruses, the rates of intramolecular and intermolecular template switching were determined. The results of these analyses indicate that intramolecular template switching is much more efficient than intermolecular template switching and that direct repeat deletions occur primarily through intramolecular template switching events. These studies also indicate that retroviral recombination occurs within a distinct viral subpopulation and exhibits high negative interference, whereby the selection of one recombination event increases the probability that a second recombination event will be observed. PMID:9223494
Creating Shape Templates for Patient Specific Biventricular Modeling in Congenital Heart Disease
Gilbert, Kathleen; Farrar, Genevieve; Cowan, Brett R.; Suinesiaputra, Avan; Occleshaw, Christopher; Pontré, Beau; Perry, James; Hegde, Sanjeet; Marsden, Alison; Omens, Jeff; McCulloch, Andrew; Young, Alistair A.
2018-01-01
Survival rates for infants with congenital heart disease (CHD) are improving, resulting in a growing population of adults with CHD. However, the analysis of left and right ventricular function is very time-consuming owing to the variety of congenital morphologies. Efficient customization of patient geometry and function depends on high quality shape templates specifically designed for the application. In this paper, we combine a method for creating finite element shape templates with an interactive template customization to patient MRI examinations. This enables different templates to be chosen depending on patient morphology. To demonstrate this pipeline, a new biventricular template with 162 elements was created and tested in place of an existing 82-element template. The method was able to provide fast interactive biventricular analysis with 0.31 sec per edit response time. The new template was customized to 13 CHD patients with similar biventricular topology, showing improved performance over the previous template and good agreement with clinical indices. PMID:26736353
High-Throughput Lectin Microarray-Based Analysis of Live Cell Surface Glycosylation
Li, Yu; Tao, Sheng-ce; Zhu, Heng; Schneck, Jonathan P.
2011-01-01
Lectins, plant-derived glycan-binding proteins, have long been used to detect glycans on cell surfaces. However, the techniques used to characterize serum or cells have largely been limited to mass spectrometry, blots, flow cytometry, and immunohistochemistry. While these lectin-based approaches are well established and can discriminate a limited number of sugar isomers by concurrently using a limited number of lectins, they are not amenable to adaptation to a high-throughput platform. Fortunately, given the commercial availability of lectins with a variety of glycan specificities, lectins can be printed on a glass substrate in a microarray format to profile accessible cell-surface glycans. This method is an inviting alternative for the analysis of a broad range of glycans in a high-throughput fashion and has been demonstrated to be a feasible method of identifying binding-accessible cell-surface glycosylation on living cells. The current unit presents a lectin-based microarray approach for analyzing cell-surface glycosylation in a high-throughput fashion. PMID:21400689
High-throughput determination of structural phase diagram and constituent phases using GRENDEL
NASA Astrophysics Data System (ADS)
Kusne, A. G.; Keller, D.; Anderson, A.; Zaban, A.; Takeuchi, I.
2015-11-01
Advances in high-throughput materials fabrication and characterization techniques have resulted in faster rates of data collection and rapidly growing volumes of experimental data. To convert this mass of information into actionable knowledge of material process-structure-property relationships requires high-throughput data analysis techniques. This work explores the use of the graph-based endmember extraction and labeling (GRENDEL) algorithm as a high-throughput method for analyzing structural data from combinatorial libraries, specifically, to determine phase diagrams and constituent phases from both x-ray diffraction and Raman spectral data. The GRENDEL algorithm utilizes a set of physical constraints to optimize results and provides a framework by which additional physics-based constraints can be easily incorporated. GRENDEL also permits the integration of database data, as shown by the use of critically evaluated data from the Inorganic Crystal Structure Database in the x-ray diffraction data analysis. The Sunburst radial tree map is also demonstrated as a tool to visualize material structure-property relationships found through graph-based analysis.
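The endmember-extraction idea above, recovering constituent phase patterns from mixed measurements across a combinatorial library, can be illustrated with plain non-negative matrix factorization. To be clear, this is a simpler relative of GRENDEL, not the graph-based algorithm itself, and the synthetic "diffraction" peaks are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

def nmf(X, k, n_iter=500, eps=1e-9):
    # Factor X ~ W @ H with non-negative W (per-sample phase fractions)
    # and H (endmember patterns), via Lee-Seung multiplicative updates.
    n, m = X.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Two synthetic "pure phase" diffraction-like patterns (Gaussian peaks).
x = np.linspace(0.0, 1.0, 200)
p1 = np.exp(-((x - 0.3) / 0.02) ** 2)
p2 = np.exp(-((x - 0.7) / 0.02) ** 2)

# Composition-spread samples: linear mixtures of the two phases.
fracs = np.linspace(0.0, 1.0, 30)
X = np.outer(fracs, p1) + np.outer(1.0 - fracs, p2)

W, H = nmf(X, k=2)
recon_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The non-negativity constraint is the simplest of the "physical constraints" mentioned above: phase fractions and diffraction intensities cannot be negative, which is what lets the factorization recover interpretable endmembers.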
Congenital limb malformations are among the most frequent malformations occurring in humans, with a frequency of about 1 in 500 to 1 in 1000 live births. ToxCast is profiling the bioactivity of thousands of chemicals based on high-throughput screening (HTS) and computational methods that...
Inter-Individual Variability in High-Throughput Risk Prioritization of Environmental Chemicals (Sot)
We incorporate realistic human variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most of which have...
High-throughput screening, predictive modeling and computational embryology - Abstract
High-throughput screening (HTS) studies are providing a rich source of data that can be applied to chemical profiling to address sensitivity and specificity of molecular targets, biological pathways, cellular and developmental processes. EPA’s ToxCast project is testing 960 uniq...
High-throughput purification of recombinant proteins using self-cleaving intein tags.
Coolbaugh, M J; Shakalli Tang, M J; Wood, D W
2017-01-01
High throughput methods for recombinant protein production using E. coli typically involve the use of affinity tags for simple purification of the protein of interest. One drawback of these techniques is the occasional need for tag removal before study, which can be hard to predict. In this work, we demonstrate two high throughput purification methods for untagged protein targets based on simple and cost-effective self-cleaving intein tags. Two model proteins, E. coli beta-galactosidase (βGal) and superfolder green fluorescent protein (sfGFP), were purified using self-cleaving versions of the conventional chitin-binding domain (CBD) affinity tag and the nonchromatographic elastin-like-polypeptide (ELP) precipitation tag in a 96-well filter plate format. Initial tests with shake flask cultures confirmed that the intein purification scheme could be scaled down, with >90% pure product generated in a single step using both methods. The scheme was then validated in a high throughput expression platform using 24-well plate cultures followed by purification in 96-well plates. For both tags and with both target proteins, the purified product was consistently obtained in a single-step, with low well-to-well and plate-to-plate variability. This simple method thus allows the reproducible production of highly pure untagged recombinant proteins in a convenient microtiter plate format. Copyright © 2016 Elsevier Inc. All rights reserved.
Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi
2016-01-01
Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested if using aerial measurements of canopy temperature, and green and red normalized difference vegetation index as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross validation accuracies were estimated within and across-environment, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362
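The benefit of secondary traits described above can be sketched numerically. The sketch below is a deliberate simplification: the multivariate pedigree/genomic BLUP models of the study are replaced by ridge regression with the secondary trait as an extra predictor, and all simulation parameters (marker counts, heritabilities, noise levels) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 250, 150
M = rng.integers(0, 3, size=(n, p)).astype(float)  # marker scores 0/1/2
M = M - M.mean(axis=0)                              # center marker columns
g = M @ rng.normal(0.0, 1.0, p)
g = (g - g.mean()) / g.std()                        # true genetic values
grain_yield = g + rng.normal(0.0, 1.0, n)           # heritability ~ 0.5
canopy_temp = g + rng.normal(0.0, 0.7, n)           # genetically correlated secondary trait

train, test = np.arange(150), np.arange(150, 250)

def ridge_predict(X, y, X_new, lam=10.0):
    # Ridge regression, a simple stand-in for BLUP marker effects.
    b = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return X_new @ b

# Markers only (univariate analogue).
pred_uni = ridge_predict(M[train], grain_yield[train], M[test])

# Markers plus the secondary trait, which high-throughput phenotyping makes
# available on selection candidates before yield itself can be measured.
A = np.column_stack([M, canopy_temp])
pred_sec = ridge_predict(A[train], grain_yield[train], A[test])

acc_uni = np.corrcoef(pred_uni, grain_yield[test])[0, 1]
acc_sec = np.corrcoef(pred_sec, grain_yield[test])[0, 1]
```

Because the secondary trait is observed on the test lines themselves, it contributes individual-level information the markers alone cannot, which is the mechanism behind the accuracy gains reported above.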
Optimization and high-throughput screening of antimicrobial peptides.
Blondelle, Sylvie E; Lohner, Karl
2010-01-01
While a well-established process for lead-compound discovery in for-profit companies, high-throughput screening is becoming more popular in basic and applied research settings in academia. The development of combinatorial libraries, combined with easier and less expensive access to new technologies, has greatly contributed to the implementation of high-throughput screening in academic laboratories. While such techniques were earlier applied to simple assays involving single targets or based on binding affinity, they have now been extended to more complex systems such as whole-cell-based assays. In particular, the urgent need for new antimicrobial compounds that would overcome the rapid rise of drug-resistant microorganisms, where multiple-target assays or cell-based assays are often required, has forced scientists to focus on high-throughput technologies. Based on their existence in natural host defense systems and their different mode of action relative to commercial antibiotics, antimicrobial peptides represent a new hope for discovering novel antibiotics against multi-resistant bacteria. The ease of generating peptide libraries in different formats has allowed a rapid adaptation of high-throughput assays to the search for novel antimicrobial peptides. Similarly, the availability nowadays of high-quantity and high-quality antimicrobial peptide data has permitted the development of predictive algorithms to facilitate the optimization process. This review summarizes the various library formats that lead to de novo antimicrobial peptide sequences, as well as the latest structural knowledge and optimization processes aimed at improving the selectivity of these peptides.
Ghedira, Rim; Papazova, Nina; Vuylsteke, Marnik; Ruttink, Tom; Taverniers, Isabel; De Loose, Marc
2009-10-28
GMO quantification, based on real-time PCR, relies on the amplification of an event-specific transgene assay and a species-specific reference assay. The uniformity of the nucleotide sequences targeted by both assays across various transgenic varieties is an important prerequisite for correct quantification. Single nucleotide polymorphisms (SNPs) frequently occur in the maize genome and might lead to nucleotide variation in regions used to design primers and probes for reference assays. Further, they may affect the annealing of the primer to the template and reduce the efficiency of DNA amplification. We assessed the effect of a minor DNA template modification, such as a single base pair mismatch in the primer attachment site, on real-time PCR quantification. A model system was used based on the introduction of artificial mismatches between the forward primer and the DNA template in the reference assay targeting the maize starch synthase (SSIIb) gene. The results show that the presence of a mismatch between the primer and the DNA template causes partial to complete failure of the amplification of the initial DNA template depending on the type and location of the nucleotide mismatch. With this study, we show that the presence of a primer/template mismatch affects the estimated total DNA quantity to a varying degree.
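The quantification bias caused by a primer/template mismatch can be made concrete with the textbook exponential amplification model, N = N0 * (1 + E)^c, where E is the per-cycle efficiency. The efficiency values below are hypothetical, chosen only to show how a modest efficiency loss in the reference assay propagates into a several-fold error in the estimated template quantity.

```python
import math

def threshold_cycle(n0, efficiency, n_threshold=1e10):
    # Cycle at which the amplicon count crosses the detection threshold:
    # n0 * (1 + E)^Ct = n_threshold.
    return math.log(n_threshold / n0) / math.log(1.0 + efficiency)

true_copies = 1e4
ct_matched = threshold_cycle(true_copies, 1.00)   # perfect primer, E = 100%
ct_mismatch = threshold_cycle(true_copies, 0.85)  # hypothetical mismatch penalty

# A standard curve built with matched templates assumes E = 100%, so the
# delayed Ct of the mismatched reaction reads out as fewer starting copies.
apparent_copies = 1e10 / 2.0 ** ct_mismatch
fold_underestimate = true_copies / apparent_copies  # roughly 5- to 6-fold here
```

This is why a single base-pair mismatch in the reference assay, which lowers E without abolishing amplification, can distort the transgene/reference ratio on which GMO percentage estimates depend.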
Evaluation of High-Throughput Chemical Exposure Models ...
The U.S. EPA, under its ExpoCast program, is developing high-throughput near-field modeling methods to estimate human chemical exposure and to provide real-world context to high-throughput screening (HTS) hazard data. These novel modeling methods include reverse methods to infer parent chemical exposures from biomonitoring measurements and forward models to predict multi-pathway exposures from chemical use information and/or residential media concentrations. Here, both forward and reverse modeling methods are used to characterize the relationship between matched near-field environmental (air and dust) and biomarker measurements. Indoor air, house dust, and urine samples from a sample of 120 females (aged 60 to 80 years) were analyzed. In the measured data, 78% of the residential media measurements (across 80 chemicals) and 54% of the urine measurements (across 21 chemicals) were censored, i.e. below the limit of quantification (LOQ). Because of the degree of censoring, we applied a Bayesian approach to impute censored values for 69 chemicals having at least 15% of measurements above LOQ. This resulted in 10 chemicals (5 phthalates, 5 pesticides) with matched air, dust, and urine metabolite measurements. The population medians of indoor air and dust concentrations were compared to population median exposures inferred from urine metabolites concentrations using a high-throughput reverse-dosimetry approach. Median air and dust concentrations were found to be correl
Mouse EEG spike detection based on the adapted continuous wavelet transform
NASA Astrophysics Data System (ADS)
Tieng, Quang M.; Kharatishvili, Irina; Chen, Min; Reutens, David C.
2016-04-01
Objective. Electroencephalography (EEG) is an important tool in the diagnosis of epilepsy. Interictal spikes on EEG are used to monitor the development of epilepsy and the effects of drug therapy. EEG recordings are generally long and the data voluminous. Thus developing a sensitive and reliable automated algorithm for analyzing EEG data is necessary. Approach. A new algorithm for detecting and classifying interictal spikes in mouse EEG recordings is proposed, based on the adapted continuous wavelet transform (CWT). The construction of the adapted mother wavelet is founded on a template obtained from a sample comprising the first few minutes of an EEG data set. Main Result. The algorithm was tested with EEG data from a mouse model of epilepsy and experimental results showed that the algorithm could distinguish EEG spikes from other transient waveforms with a high degree of sensitivity and specificity. Significance. Differing from existing approaches, the proposed approach combines wavelet denoising, to isolate transient signals, with adapted CWT-based template matching, to detect true interictal spikes. Using the adapted wavelet constructed from a predefined template, the adapted CWT is calculated on small EEG segments to fit dynamical changes in the EEG recording.
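The template-matching component of the approach above can be sketched in numpy. This is not the adapted-CWT algorithm itself; it is a simpler normalized cross-correlation matcher against a data-derived spike template, and the synthetic trace, spike shape, positions, and threshold are all invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)

def detect_spikes(signal, template, threshold=0.7):
    # Slide the template over the trace and flag local maxima of the
    # normalized cross-correlation that exceed the threshold.
    t = (template - template.mean()) / template.std()
    w = len(t)
    scores = np.zeros(len(signal) - w + 1)
    for i in range(len(scores)):
        seg = signal[i:i + w]
        seg = (seg - seg.mean()) / (seg.std() + 1e-12)
        scores[i] = np.dot(seg, t) / w
    return [i for i in range(1, len(scores) - 1)
            if scores[i] > threshold
            and scores[i] >= scores[i - 1] and scores[i] >= scores[i + 1]]

# Synthetic EEG-like trace: Gaussian noise plus three stereotyped spikes.
n = 2000
eeg = rng.normal(0.0, 0.2, n)
# Spike template, standing in for one extracted from the first minutes of data.
spike = np.exp(-0.5 * ((np.arange(40) - 20) / 4.0) ** 2)
for pos in (300, 900, 1500):
    eeg[pos:pos + 40] += spike

hits = detect_spikes(eeg, spike)
```

The adapted-CWT method differs in that the template is turned into a mother wavelet and matched across scales after wavelet denoising, which is what gives it robustness to amplitude and duration variation that a fixed-width matcher lacks.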
Song, Zewei; Schlatter, Dan; Kennedy, Peter; Kinkel, Linda L.; Kistler, H. Corby; Nguyen, Nhu; Bates, Scott T.
2015-01-01
Next generation fungal amplicon sequencing is being used with increasing frequency to study fungal diversity in various ecosystems; however, the influence of sample preparation on the characterization of fungal communities is poorly understood. We investigated the effects of four procedural modifications to library preparation for high-throughput sequencing (HTS). The following treatments were considered: 1) the amount of soil used in DNA extraction, 2) the inclusion of additional steps (freeze/thaw cycles, sonication, or hot water bath incubation) in the extraction procedure, 3) the amount of DNA template used in PCR, and 4) the effect of sample pooling, either physically or computationally. Soils from two different ecosystems in Minnesota, USA, one prairie and one forest site, were used to assess the generality of our results. The first three treatments did not significantly influence observed fungal OTU richness or community structure at either site. Physical pooling captured more OTU richness compared to individual samples, but total OTU richness at each site was highest when individual samples were computationally combined. We conclude that standard extraction kit protocols are well optimized for fungal HTS surveys, but because sample pooling can significantly influence OTU richness estimates, it is important to carefully consider the study aims when planning sampling procedures. PMID:25974078
Vállez Garcia, David; Casteels, Cindy; Schwarz, Adam J; Dierckx, Rudi A J O; Koole, Michel; Doorduin, Janine
2015-01-01
High-resolution anatomical image data in preclinical brain PET and SPECT studies is often not available, and inter-modality spatial normalization to an MRI brain template is frequently performed. However, this procedure can be challenging for tracers where substantial anatomical structures present limited tracer uptake. Therefore, we constructed and validated strain- and tracer-specific rat brain templates in Paxinos space to allow intra-modal registration. PET [18F]FDG, [11C]flumazenil, [11C]MeDAS, [11C]PK11195 and [11C]raclopride, and SPECT [99mTc]HMPAO brain scans were acquired from healthy male rats. Tracer-specific templates were constructed by averaging the scans, and by spatial normalization to a widely used MRI-based template. The added value of tracer-specific templates was evaluated by quantification of the residual error between original and realigned voxels after random misalignments of the data set. Additionally, the impact of strain differences, disease uptake patterns (focal and diffuse lesion), and the effect of image and template size on the registration errors were explored. Mean registration errors were 0.70 ± 0.32 mm for [18F]FDG (n = 25), 0.23 ± 0.10 mm for [11C]flumazenil (n = 13), 0.88 ± 0.20 mm for [11C]MeDAS (n = 15), 0.64 ± 0.28 mm for [11C]PK11195 (n = 19), 0.34 ± 0.15 mm for [11C]raclopride (n = 6), and 0.40 ± 0.13 mm for [99mTc]HMPAO (n = 15). These values were smallest with tracer-specific templates, when compared to the use of [18F]FDG as reference template (p < 0.001). Additionally, registration errors were smallest with strain-specific templates (p < 0.05), and when images and templates had the same size (p ≤ 0.001). Moreover, highest registration errors were found for the focal lesion group (p < 0.005) and the diffuse lesion group (p = n.s.). In the voxel-based analysis, the reported coordinates of the focal lesion model are consistent with the stereotaxic injection procedure.
The use of PET/SPECT strain- and tracer-specific templates allows accurate registration of functional rat brain data, independent of disease-specific uptake patterns and with registration errors below the spatial resolution of the cameras. The templates and the SAMIT package will be freely available to the research community. PMID:25823005
Handheld Fluorescence Microscopy based Flow Analyzer.
Saxena, Manish; Jayakumar, Nitin; Gorthi, Sai Siva
2016-03-01
Fluorescence microscopy has the intrinsic advantages of favourable contrast characteristics and a high degree of specificity. Consequently, it has been a mainstay in modern biological inquiry and clinical diagnostics. Despite its reliable nature, fluorescence-based clinical microscopy and diagnostics is a manual, labour-intensive and time-consuming procedure. This article outlines a cost-effective, high-throughput alternative to conventional fluorescence imaging techniques. With system-level integration of custom-designed microfluidics and optics, we demonstrate a fluorescence microscopy-based imaging flow analyzer. Using this system we have imaged more than 2900 FITC-labeled fluorescent beads per minute, demonstrating the high-throughput characteristics of our flow analyzer in comparison with conventional fluorescence microscopy. The issue of motion blur at high flow rates limits the achievable throughput in image-based flow analyzers. Here we address this issue by computationally deblurring the images, and show that this restores the morphological features otherwise affected by motion blur. By further optimizing the concentration of the sample solution and the flow speeds, along with imaging multiple channels simultaneously, the system is capable of providing a throughput of about 480 beads per second.
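Computational removal of motion blur, as used above, is commonly done by deconvolution; one standard choice is the Wiener filter. The 1-D sketch below is a generic illustration under a known box-shaped motion kernel, not the article's actual processing pipeline, and the profile, kernel length, and regularization constant are invented.

```python
import numpy as np

def wiener_deblur(blurred, kernel, k_reg=1e-2):
    # Frequency-domain Wiener filter: conj(H) / (|H|^2 + k) regularizes
    # the inverse so near-zero kernel frequencies do not amplify noise.
    n = len(blurred)
    H = np.fft.fft(kernel, n)
    W = np.conj(H) / (np.abs(H) ** 2 + k_reg)
    return np.real(np.fft.ifft(W * np.fft.fft(blurred)))

# Sharp 1-D intensity profile of a bead, and a horizontal motion blur
# modeled as circular convolution with a 9-sample box kernel.
n = 256
x = np.arange(n)
sharp = np.exp(-0.5 * ((x - 128) / 3.0) ** 2)
kernel = np.ones(9) / 9.0
blurred = np.real(np.fft.ifft(np.fft.fft(sharp) * np.fft.fft(kernel, n)))

restored = wiener_deblur(blurred, kernel)
err_blurred = np.linalg.norm(blurred - sharp) / np.linalg.norm(sharp)
err_restored = np.linalg.norm(restored - sharp) / np.linalg.norm(sharp)
```

The restored profile recovers most of the morphology lost to the blur, which is the property that lets image-based flow analyzers run at higher flow speeds without sacrificing feature measurements.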
View-Invariant Gait Recognition Through Genetic Template Segmentation
NASA Astrophysics Data System (ADS)
Isaac, Ebenezer R. H. P.; Elias, Susan; Rajagopalan, Srinivasan; Easwarakumar, K. S.
2017-08-01
The template-based, model-free approach provides by far the most successful solution to the gait recognition problem in the literature. Recent work discusses how isolating the head and leg portions of the template increases the performance of a gait recognition system, making it robust against covariates such as clothing and carrying conditions. However, most methods involve a manual definition of the boundaries. The method we propose, genetic template segmentation (GTS), employs the genetic algorithm to automate the boundary selection process. The method was tested on the GEI, GEnI and AEI templates. GEI exhibits the best result when segmented with our approach. Experimental results show that our approach significantly outperforms the existing implementations of view-invariant gait recognition.
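A toy version of the automated boundary search can illustrate the idea: a genetic algorithm evolving (head, leg) cut rows of a gait template to maximize a caller-supplied fitness. The operators, parameters, and fitness here are illustrative assumptions, not the GTS design.

```python
import random

def genetic_boundary_search(fitness, n_rows, pop=20, gens=40, seed=0):
    """Evolve (head_cut, leg_cut) row pairs for a template with n_rows rows;
    `fitness` scores a candidate segmentation, higher is better."""
    rng = random.Random(seed)

    def rand_pair():
        a, b = sorted(rng.sample(range(1, n_rows), 2))
        return (a, b)

    popn = [rand_pair() for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        elite = popn[: pop // 4]                        # elitism: keep best quarter
        children = []
        while len(elite) + len(children) < pop:
            pa, pb = rng.choice(elite), rng.choice(elite)
            child = [rng.choice(pa), rng.choice(pb)]    # crossover: mix parents' cuts
            if rng.random() < 0.3:                      # mutation: jitter one cut
                i = rng.randrange(2)
                child[i] = min(max(child[i] + rng.randint(-3, 3), 1), n_rows - 1)
            children.append(tuple(sorted(child)))
        popn = elite + children
    return max(popn, key=fitness)
```

In GTS the fitness would be derived from recognition performance of the segmented template; here any scoring function can be plugged in.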
Dawes, Timothy D; Turincio, Rebecca; Jones, Steven W; Rodriguez, Richard A; Gadiagellan, Dhireshan; Thana, Peter; Clark, Kevin R; Gustafson, Amy E; Orren, Linda; Liimatta, Marya; Gross, Daniel P; Maurer, Till; Beresini, Maureen H
2016-02-01
Acoustic droplet ejection (ADE) as a means of transferring library compounds has had a dramatic impact on the way in which high-throughput screening campaigns are conducted in many laboratories. Two Labcyte Echo ADE liquid handlers form the core of the compound transfer operation in our 1536-well based ultra-high-throughput screening (uHTS) system. Use of these instruments has promoted flexibility in compound formatting in addition to minimizing waste and eliminating compound carryover. We describe the use of ADE for the generation of assay-ready plates for primary screening as well as for follow-up dose-response evaluations. Custom software has enabled us to harness the information generated by the ADE instrumentation. Compound transfer via ADE also contributes to the screening process outside of the uHTS system. A second fully automated ADE-based system has been used to augment the capacity of the uHTS system as well as to permit efficient use of previously picked compound aliquots for secondary assay evaluations. Essential to the utility of ADE in the high-throughput screening process is the high quality of the resulting data. Examples of data generated at various stages of high-throughput screening campaigns are provided. Advantages and disadvantages of the use of ADE in high-throughput screening are discussed. © 2015 Society for Laboratory Automation and Screening.
Tschiersch, Henning; Junker, Astrid; Meyer, Rhonda C; Altmann, Thomas
2017-01-01
Automated plant phenotyping has been established as a powerful new tool in studying plant growth, development and response to various types of biotic or abiotic stressors. Respective facilities mainly apply non-invasive imaging based methods, which enable the continuous quantification of the dynamics of plant growth and physiology during developmental progression. However, especially for plants of larger size, integrative, automated and high throughput measurements of complex physiological parameters such as photosystem II efficiency determined through kinetic chlorophyll fluorescence analysis remain a challenge. We present the technical installations and the establishment of experimental procedures that allow the integrated high throughput imaging of all commonly determined PSII parameters for small and large plants using kinetic chlorophyll fluorescence imaging systems (FluorCam, PSI) integrated into automated phenotyping facilities (Scanalyzer, LemnaTec). Besides determination of the maximum PSII efficiency, we focused on implementation of high throughput amenable protocols recording PSII operating efficiency (ΦPSII). Using the presented setup, this parameter is shown to be reproducibly measured in differently sized plants despite the corresponding variation in distance between plants and light source that caused small differences in incident light intensity. Values of ΦPSII obtained with the automated chlorophyll fluorescence imaging setup correlated very well with conventionally determined data using a spot-measuring chlorophyll fluorometer. The established high throughput operating protocols enable the screening of up to 1080 small and 184 large plants per hour, respectively. The application of the implemented high throughput protocols is demonstrated in screening experiments performed with large Arabidopsis and maize populations assessing natural variation in PSII efficiency.
The incorporation of imaging systems suitable for kinetic chlorophyll fluorescence analysis leads to a substantial extension of the feature spectrum to be assessed in the presented high throughput automated plant phenotyping platforms, thus enabling the simultaneous assessment of plant architectural and biomass-related traits and their relations to physiological features such as PSII operating efficiency. The implemented high throughput protocols are applicable to a broad spectrum of model and crop plants of different sizes (up to 1.80 m height) and architectures. The deeper understanding of the relation of plant architecture, biomass formation and photosynthetic efficiency has a great potential with respect to crop and yield improvement strategies.
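The PSII operating efficiency discussed above is conventionally computed per pixel from light-adapted fluorescence as ΦPSII = (Fm′ − F′)/Fm′ (the standard Genty parameter). A small sketch, with illustrative array names:

```python
import numpy as np

def phi_psii(f_prime, fm_prime, eps=1e-9):
    """PSII operating efficiency per pixel: (Fm' - F') / Fm', where F' is the
    steady-state and Fm' the maximal fluorescence in the light-adapted state.
    Values are clipped to [0, 1]; eps guards against division by zero."""
    f = np.asarray(f_prime, dtype=float)
    fm = np.asarray(fm_prime, dtype=float)
    return np.clip((fm - f) / np.maximum(fm, eps), 0.0, 1.0)
```

Applied to whole fluorescence image pairs, this yields the per-plant ΦPSII maps that the platform averages into screening read-outs.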
Modularity of Protein Folds as a Tool for Template-Free Modeling of Structures.
Vallat, Brinda; Madrid-Aliste, Carlos; Fiser, Andras
2015-08-01
Predicting the three-dimensional structure of proteins from their amino acid sequences remains a challenging problem in molecular biology. While the current structural coverage of proteins is almost exclusively provided by template-based techniques, the modeling of the rest of the protein sequences increasingly requires template-free methods. However, template-free modeling methods are much less reliable and are usually applicable only to smaller proteins, leaving much room for improvement. We present here a novel computational method that uses a library of supersecondary structure fragments, known as Smotifs, to model protein structures. The library of Smotifs has saturated over time, providing a theoretical foundation for efficient modeling. The method relies on weak sequence signals from remotely related protein structures to create a library of Smotif fragments specific to the target protein sequence. This Smotif library is exploited in a fragment assembly protocol to sample decoys, which are assessed by a composite scoring function. Since the Smotif fragments are larger in size compared to the ones used in other fragment-based methods, the proposed modeling algorithm, SmotifTF, can employ an exhaustive sampling during decoy assembly. SmotifTF successfully predicts the overall fold of the target proteins in about 50% of the test cases and performs competitively when compared to other state-of-the-art prediction methods, especially when the sequence signal to remote homologs is diminishing. Smotif-based modeling is complementary to current prediction methods and provides a promising direction in addressing the structure prediction problem, especially when targeting larger proteins for modeling.
Design, Fabrication, Characterization and Modeling of Integrated Functional Materials
2014-10-01
oxide (AAO) membranes were fabricated from high-purity aluminum foil (99.999%) by an electrochemical route using a controlled two-step anodization ... deposition of Fe and Co in anodized alumina templates. We used commercially prepared AAO templates which had pore diameters of 100 nm (300 nm), an ... a thermal decomposition method. The final product was suspended in high-purity hexane to create a ferrofluid. Custom highly ordered anodic aluminum
Development of forensic-quality full mtGenome haplotypes: success rates with low template specimens.
Just, Rebecca S; Scheible, Melissa K; Fast, Spence A; Sturk-Andreaggi, Kimberly; Higginbotham, Jennifer L; Lyons, Elizabeth A; Bush, Jocelyn M; Peck, Michelle A; Ring, Joseph D; Diegoli, Toni M; Röck, Alexander W; Huber, Gabriela E; Nagl, Simone; Strobl, Christina; Zimmermann, Bettina; Parson, Walther; Irwin, Jodi A
2014-05-01
Forensic mitochondrial DNA (mtDNA) testing requires appropriate, high quality reference population data for estimating the rarity of questioned haplotypes and, in turn, the strength of the mtDNA evidence. Available reference databases (SWGDAM, EMPOP) currently include information from the mtDNA control region; however, novel methods that quickly and easily recover mtDNA coding region data are becoming increasingly available. Though these assays promise to both facilitate the acquisition of mitochondrial genome (mtGenome) data and maximize the general utility of mtDNA testing in forensics, the appropriate reference data and database tools required for their routine application in forensic casework are lacking. To address this deficiency, we have undertaken an effort to: (1) increase the large-scale availability of high-quality entire mtGenome reference population data, and (2) improve the information technology infrastructure required to access/search mtGenome data and employ them in forensic casework. Here, we describe the application of a data generation and analysis workflow to the development of more than 400 complete, forensic-quality mtGenomes from low DNA quantity blood serum specimens as part of a U.S. National Institute of Justice funded reference population databasing initiative. We discuss the minor modifications made to a published mtGenome Sanger sequencing protocol to maintain a high rate of throughput while minimizing manual reprocessing with these low template samples. The successful use of this semi-automated strategy on forensic-like samples provides practical insight into the feasibility of producing complete mtGenome data in a routine casework environment, and demonstrates that large (>2kb) mtDNA fragments can regularly be recovered from high quality but very low DNA quantity specimens. 
Further, the detailed empirical data we provide on the amplification success rates across a range of DNA input quantities will be useful moving forward as PCR-based strategies for mtDNA enrichment are considered for targeted next-generation sequencing workflows. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Polymeric lithography editor: Editing lithographic errors with nanoporous polymeric probes
Rajasekaran, Pradeep Ramiah; Zhou, Chuanhong; Dasari, Mallika; Voss, Kay-Obbe; Trautmann, Christina; Kohli, Punit
2017-01-01
A new lithographic editing system with an ability to erase and rectify errors in microscale with real-time optical feedback is demonstrated. The erasing probe is a conically shaped hydrogel (tip size, ca. 500 nm) template-synthesized from track-etched conical glass wafers. The “nanosponge” hydrogel probe “erases” patterns by hydrating and absorbing molecules into a porous hydrogel matrix via diffusion, analogous to a wet sponge. The presence of an interfacial liquid water layer between the hydrogel tip and the substrate during erasing enables frictionless, uninterrupted translation of the eraser on the substrate. The erasing capacity of the hydrogel is extremely high because of the large free volume of the hydrogel matrix. The fast frictionless translocation and interfacial hydration resulted in an extremely high erasing rate (~785 μm2/s), which is two to three orders of magnitude higher in comparison with the atomic force microscopy–based erasing (~0.1 μm2/s) experiments. The high precision and accuracy of the polymeric lithography editor (PLE) system stemmed from coupling piezoelectric actuators to an inverted optical microscope. After erasing the patterns using agarose erasers, a polydimethylsiloxane probe fabricated from the same conical track-etched template was used to precisely redeposit molecules of interest at the erased spots. PLE also provides a continuous optical feedback throughout the entire molecular editing process—writing, erasing, and rewriting. To demonstrate its potential in device fabrication, we used PLE to electrochemically erase metallic copper thin film, forming an interdigitated array of microelectrodes for the fabrication of a functional microphotodetector device. High-throughput dot and line erasing, writing with the conical “wet nanosponge,” and continuous optical feedback make PLE complementary to the existing catalog of nanolithographic/microlithographic and three-dimensional printing techniques.
This new PLE technique will potentially open up many new and exciting avenues in lithography, which remain unexplored due to the inherent limitations in error rectification capabilities of the existing lithographic techniques. PMID:28630898
Mismodeling in gravitational-wave astronomy: The trouble with templates
NASA Astrophysics Data System (ADS)
Sampson, Laura; Cornish, Neil; Yunes, Nicolás
2014-03-01
Waveform templates are a powerful tool for extracting and characterizing gravitational wave signals, acting as highly restrictive priors on the signal morphologies that allow us to extract weak events buried deep in the instrumental noise. The templates map the waveform shapes to physical parameters, thus allowing us to produce posterior probability distributions for these parameters. However, there are attendant dangers in using highly restrictive signal priors. If strong field gravity is not accurately described by general relativity (GR), then using GR templates may result in fundamental bias in the recovered parameters, or even worse, a complete failure to detect signals. Here we study such dangers, concentrating on three distinct possibilities. First, we show that there exist modified theories compatible with all existing observations that would fail to be detected by the LIGO/Virgo network using searches based on GR templates, but which would be detected using a one parameter post-Einsteinian extension. Second, we study modified theories that produce departures from GR that turn on suddenly at a critical frequency, producing waveforms that do not directly fit into the simplest parametrized post-Einsteinian (ppE) scheme. We show that even the simplest ppE templates are still capable of picking up these strange signals and diagnosing a departure from GR. Third, we study whether using inspiral-only ppE waveforms for signals that include merger and ringdown can lead to problems in misidentifying a GR departure. We present a simple technique that allows us to self-consistently identify the inspiral portion of the signal, and thus remove these potential biases, allowing GR tests to be performed on higher mass signals that merge within the detector band. We close by studying a parametrized waveform model that may allow us to test GR using the full inspiral-merger-ringdown signal.
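The simplest ppE construction referenced above multiplies a GR frequency-domain waveform by parametrized amplitude and phase corrections, h̃(f) = h̃_GR(f)(1 + α uᵃ)exp(iβ uᵇ) with u = (πMf)^(1/3). A sketch of that modification (variable names and the geometric-units convention are assumptions, not taken from this paper):

```python
import numpy as np

def ppe_modify(h_gr, freqs, total_mass_s, alpha, a, beta, b):
    """Leading-order parametrized post-Einsteinian (ppE) correction:
    h(f) = h_GR(f) * (1 + alpha*u**a) * exp(1j*beta*u**b),
    with u = (pi*M*f)**(1/3) and the total mass M in seconds.
    alpha = beta = 0 recovers the GR template exactly."""
    u = (np.pi * total_mass_s * freqs) ** (1.0 / 3.0)
    return h_gr * (1.0 + alpha * u ** a) * np.exp(1j * beta * u ** b)
```

Searching over (α, a, β, b) alongside the physical parameters is what lets a one-parameter ppE template flag departures that pure GR templates would miss.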
Fu, Wei; Zhu, Pengyu; Wei, Shuang; Zhixin, Du; Wang, Chenguang; Wu, Xiyang; Li, Feiwu; Zhu, Shuifang
2017-04-01
Among all of the high-throughput detection methods, PCR-based methodologies are regarded as the most cost-efficient and feasible compared with next-generation sequencing or ChIP-based methods. However, PCR-based methods can only achieve multiplex detection up to 15-plex due to limitations imposed by multiplex primer interactions. This detection throughput cannot meet the demands of high-throughput detection, such as SNP or gene expression analysis. Therefore, in our study, we have developed a new high-throughput PCR-based detection method, multiplex enrichment quantitative PCR (ME-qPCR), which is a combination of qPCR and nested PCR. The GMO content detection results in our study showed that ME-qPCR could achieve high-throughput detection up to 26-plex. Compared to the original qPCR, the Ct values of ME-qPCR were lower for the same group, showing that the sensitivity of ME-qPCR is higher than that of the original qPCR. The absolute limit of detection of ME-qPCR was as low as a single copy of the plant genome. Moreover, the specificity results showed that no cross-amplification occurred for irrelevant GMO events. After evaluation of all of the parameters, a practical test was performed with different foods. The amplification results, more stable than those of qPCR, showed that ME-qPCR is suitable for GMO detection in foods. In conclusion, ME-qPCR achieved sensitive, high-throughput GMO detection in complex substrates, such as crops or food samples. In the future, ME-qPCR-based GMO content identification may positively impact SNP analysis or multiplex gene expression analysis of food or agricultural samples. Graphical abstract: For the first-step amplification, four primers (A, B, C, and D) are added to the reaction volume, generating four kinds of amplicons. All four amplicons can serve as targets for the second-step PCR.
For the second-step amplification, three parallel reactions are run for the final evaluation, yielding the final amplification curves and melting curves.
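The link between Ct values and template amount invoked above (lower Ct implies more starting template) is usually quantified through a log-linear standard curve, Ct = slope·log10(copies) + intercept. A sketch with an assumed, assay-specific intercept:

```python
import math

def copies_from_ct(ct, slope=-3.32, intercept=37.0):
    """Estimate starting copy number from a qPCR Ct via the standard curve
    Ct = slope*log10(copies) + intercept.
    slope = -3.32 corresponds to ~100% amplification efficiency; the
    intercept (Ct of a single copy) is an illustrative, assay-specific value."""
    return 10 ** ((ct - intercept) / slope)
```

With these conventions, each ~3.32-cycle drop in Ct corresponds to a tenfold increase in template, which is why ME-qPCR's lower Ct values indicate higher sensitivity.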
DOE Office of Scientific and Technical Information (OSTI.GOV)
Storm, Emma; Weniger, Christoph; Calore, Francesca
We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |ℓ| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as basis for future studies of diffuse emission in and outside the Galactic disk.
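The core numerical step named above, penalized Poisson likelihood regression solved with L-BFGS-B, can be sketched for a toy two-template model. The quadratic penalty here is a simple stand-in for SkyFACT's maximum-entropy-motivated regularizers, and the scale of the problem (two coefficients rather than ~10^5 nuisance parameters) is purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def fit_templates(counts, templates, lam=0.0):
    """Fit nonnegative coefficients theta, with model mu = templates @ theta,
    by minimizing the Poisson negative log-likelihood
    sum(mu - counts*log(mu)) plus a quadratic penalty, via L-BFGS-B."""
    def nll(theta):
        mu = templates @ theta + 1e-12          # keep log() finite
        f = np.sum(mu - counts * np.log(mu)) + 0.5 * lam * np.sum(theta ** 2)
        g = templates.T @ (1.0 - counts / mu) + lam * theta  # analytic gradient
        return f, g
    theta0 = np.ones(templates.shape[1])
    res = minimize(nll, theta0, jac=True, method="L-BFGS-B",
                   bounds=[(0.0, None)] * templates.shape[1])
    return res.x
```

Because the Poisson likelihood is convex in the linear coefficients, the bound-constrained quasi-Newton solver recovers the generating coefficients on clean synthetic data.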
Fishing on chips: up-and-coming technological advances in analysis of zebrafish and Xenopus embryos.
Zhu, Feng; Skommer, Joanna; Huang, Yushi; Akagi, Jin; Adams, Dany; Levin, Michael; Hall, Chris J; Crosier, Philip S; Wlodkowic, Donald
2014-11-01
Biotests performed on small vertebrate model organisms provide significant investigative advantages as compared with bioassays that employ cell lines, isolated primary cells, or tissue samples. The main advantage offered by whole-organism approaches is that the effects under study occur in the context of an intact physiological milieu, with all its intercellular and multisystem interactions. The gap between the high-throughput cell-based in vitro assays and low-throughput, disproportionately expensive and ethically controversial mammal in vivo tests can be closed by small model organisms such as zebrafish or Xenopus. The optical transparency of their tissues, the ease of genetic manipulation and straightforward husbandry explain the growing popularity of these model organisms. Nevertheless, despite the potential for miniaturization, automation and subsequent increase in throughput of experimental setups, the manipulation, dispensing and analysis of living fish and frog embryos remain labor-intensive. Recently, a new generation of miniaturized chip-based devices has been developed for zebrafish and Xenopus embryo on-chip culture and experimentation. In this work, we review the critical developments in the field of Lab-on-a-Chip devices designed to alleviate the limits of traditional platforms for studies on zebrafish and clawed frog embryos and larvae. © 2014 International Society for Advancement of Cytometry.
Stepanauskas, Ramunas; Fergusson, Elizabeth A; Brown, Joseph; Poulton, Nicole J; Tupper, Ben; Labonté, Jessica M; Becraft, Eric D; Brown, Julia M; Pachiadaki, Maria G; Povilaitis, Tadas; Thompson, Brian P; Mascena, Corianna J; Bellows, Wendy K; Lubys, Arvydas
2017-07-20
Microbial single-cell genomics can be used to provide insights into the metabolic potential, interactions, and evolution of uncultured microorganisms. Here we present WGA-X, a method based on multiple displacement amplification of DNA that utilizes a thermostable mutant of the phi29 polymerase. WGA-X enhances genome recovery from individual microbial cells and viral particles while maintaining ease of use and scalability. The greatest improvements are observed when amplifying high G+C content templates, such as those belonging to the predominant bacteria in agricultural soils. By integrating WGA-X with calibrated index-cell sorting and high-throughput genomic sequencing, we are able to analyze genomic sequences and cell sizes of hundreds of individual, uncultured bacteria, archaea, protists, and viral particles, obtained directly from marine and soil samples, in a single experiment. This approach may find diverse applications in microbiology and in biomedical and forensic studies of humans and other multicellular organisms. Single-cell genomics can be used to study uncultured microorganisms. Here, Stepanauskas et al. present a method combining improved multiple displacement amplification and FACS, to obtain genomic sequences and cell size information from uncultivated microbial cells and viral particles in environmental samples.
NASA Astrophysics Data System (ADS)
Maes, Pieter-Jan; Amelynck, Denis; Leman, Marc
2012-12-01
In this article, a computational platform is presented, entitled "Dance-the-Music", that can be used in a dance educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teachers' models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps in the correct manner. Moreover, recognition algorithms, based on a template matching method, can determine the quality of a student's performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students master the basics of dance figures.
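Template matching for performance assessment can be sketched with a normalized cross-correlation score between a student's motion template and each teacher model. The data layout (flattened templates keyed by figure name) and function names are illustrative assumptions, not the platform's actual pipeline.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two flattened motion templates;
    1.0 means identical up to an overall gain and offset."""
    a = np.ravel(a).astype(float) - np.mean(a)
    b = np.ravel(b).astype(float) - np.mean(b)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def best_match(sample, teacher_models):
    """Label of the teacher template most correlated with the sample."""
    return max(teacher_models, key=lambda name: ncc(sample, teacher_models[name]))
```

The correlation value itself can double as a quality score for real-time feedback, since it is invariant to overall scaling and offset of the recorded motion.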
Fun with High Throughput Toxicokinetics (CalEPA webinar)
Thousands of chemicals have been profiled by high-throughput screening (HTS) programs such as ToxCast and Tox21. These chemicals are tested in part because there are limited or no data on hazard, exposure, or toxicokinetics (TK). TK models aid in predicting tissue concentrations ...
High-Throughput Dietary Exposure Predictions for Chemical Migrants from Food Packaging Materials
United States Environmental Protection Agency researchers have developed a Stochastic Human Exposure and Dose Simulation High-Throughput (SHEDS-HT) model for use in prioritization of chemicals under the ExpoCast program. In this research, new methods were implemented in SHEDS-HT...
We incorporate inter-individual variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most which hav...
HTTK: R Package for High-Throughput Toxicokinetics
Thousands of chemicals have been profiled by high-throughput screening programs such as ToxCast and Tox21; these chemicals are tested in part because most of them have limited or no data on hazard, exposure, or toxicokinetics. Toxicokinetic models aid in predicting tissue concent...
High-throughput screening, predictive modeling and computational embryology
High-throughput screening (HTS) studies are providing a rich source of data that can be applied to profile thousands of chemical compounds for biological activity and potential toxicity. EPA’s ToxCast™ project, and the broader Tox21 consortium, in addition to projects worldwide,...
NASA Astrophysics Data System (ADS)
Kelkboom, Emile J. C.; Breebaart, Jeroen; Buhan, Ileana; Veldhuis, Raymond N. J.
2010-04-01
Template protection techniques are used within biometric systems in order to protect the stored biometric template against privacy and security threats. A large portion of template protection techniques is based on extracting a key from, or binding a key to, a biometric sample. The achieved protection depends on the size of the key and its closeness to being random. In the literature it can be observed that there is a large variation in the reported key lengths at similar classification performance of the same template protection system, even when based on the same biometric modality and database. In this work we determine the analytical relationship between the system performance and the theoretical maximum key size given a biometric source modeled by parallel Gaussian channels. We consider the case where the source capacity is evenly distributed across all channels and the channels are independent. We also determine the effect of parameters such as the source capacity, the number of enrolment and verification samples, and the operating point selection on the maximum key size. We show that a trade-off exists between the privacy protection of the biometric system and its convenience for its users.
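Under the parallel-Gaussian-channel model above, the theoretical maximum key size is bounded by the total source capacity; for n independent, identical channels that is n times the per-channel Shannon capacity ½·log2(1 + SNR). A minimal sketch of this bound (a simplified reading of the setting, not the paper's full derivation, which also accounts for enrolment/verification sample counts and the operating point):

```python
import math

def max_key_bits(n_channels, snr):
    """Capacity upper bound on extractable key bits for n independent,
    identically distributed Gaussian channels with the given SNR."""
    return n_channels * 0.5 * math.log2(1.0 + snr)
```

The trade-off the abstract mentions appears here directly: tightening the operating point (effectively lowering the usable SNR per channel) shrinks the maximum key size.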
In silico study of in vitro GPCR assays by QSAR modeling
The U.S. EPA is screening thousands of chemicals of environmental interest in hundreds of in vitro high-throughput screening (HTS) assays (the ToxCast program). One goal is to prioritize chemicals for more detailed analyses based on activity in molecular initiating events (MIE) o...
Predictive models of prenatal developmental toxicity from ToxCast high-throughput screening data
EPA's ToxCast™ project is profiling the in vitro bioactivity of chemicals to assess pathway-level and cell-based signatures that correlate with observed in vivo toxicity. We hypothesized that developmental toxicity in guideline animal studies captured in the ToxRefDB database wou...
Defining a predictive model of developmental toxicity from in vitro and high-throughput screening (HTS) assays can be limited by the availability of developmental defects data. ToxRefDB (www.epa.gov/ncct/todrefdb) was built from animal studies on data-rich environmental chemicals...
Mixture toxicology in the 21st century: Pathway-based concepts and tools
The past decade has witnessed notable evolution of approaches focused on predicting chemical hazards and risks in the absence of empirical data from resource-intensive in vivo toxicity tests. In silico models, in vitro high-throughput toxicity assays, and short-term in vivo tests...
The rapidly expanding field of nanotechnology is introducing a large number and diversity of engineered nanomaterials into research and commerce with concordant uncertainty regarding the potential adverse health and ecological effects. With costs and time of traditional animal to...
Hybrid pregnant reference phantom series based on adult female ICRP reference phantom
NASA Astrophysics Data System (ADS)
Rafat-Motavalli, Laleh; Miri-Hakimabad, Hashem; Hoseinian-Azghadi, Elie
2018-03-01
This paper presents boundary representation (BREP) models of a pregnant female and her fetus at the end of each trimester. The International Commission on Radiological Protection (ICRP) female reference voxel phantom was used as a base template in the development process of the pregnant hybrid phantom series. The differences in shape and location of the displaced maternal organs caused by the enlarging uterus were also taken into account. CT and MR images of fetus specimens and pregnant patients of various ages were used to replace the maternal abdominal and pelvic organs of the template phantom and insert the fetus inside the gravid uterus. Each fetal model contains 21 different organs and tissues. The skeletal model of the fetus also includes age-dependent cartilaginous and ossified skeletal components. The replaced maternal organ models were converted to NURBS surfaces and then modified to conform to the reference values of ICRP Publication 89. The particular feature of the current series, compared to previously developed pregnant phantoms, is that it is constructed on the basis of the ICRP reference phantom. The replaced maternal organ models are NURBS surfaces, which gives them the potential to be converted to high-quality polygon mesh phantoms.
Using structure to explore the sequence alignment space of remote homologs.
Kuziemko, Andrew; Honig, Barry; Petrey, Donald
2011-10-01
Protein structure modeling by homology requires an accurate sequence alignment between the query protein and its structural template. However, sequence alignment methods based on dynamic programming (DP) are typically unable to generate accurate alignments for remote sequence homologs, thus limiting the applicability of modeling methods. A central problem is that the alignment that is "optimal" in terms of the DP score does not necessarily correspond to the alignment that produces the most accurate structural model. That is, the correct alignment based on structural superposition will generally have a lower score than the optimal alignment obtained from sequence. Variations of the DP algorithm have been developed that generate alternative alignments that are "suboptimal" in terms of the DP score, but these still encounter difficulties in detecting the correct structural alignment. We present here a new alternative sequence alignment method that relies heavily on the structure of the template. By initially aligning the query sequence to individual fragments in secondary structure elements and combining high-scoring fragments that pass basic tests for "modelability", we can generate accurate alignments within a small ensemble. Our results suggest that the set of sequences that can currently be modeled by homology can be greatly extended.
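The dynamic-programming alignment underlying the discussion above is, in its simplest global form, the Needleman-Wunsch recurrence: each cell takes the best of a (mis)match, or a gap in either sequence. A minimal score-only sketch with assumed toy scoring values (real alignment uses substitution matrices and affine gaps):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Global (Needleman-Wunsch) dynamic-programming alignment score."""
    m, n = len(a), len(b)
    F = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        F[i][0] = i * gap                      # leading gaps in b
    for j in range(1, n + 1):
        F[0][j] = j * gap                      # leading gaps in a
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,  # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,    # gap in b
                          F[i][j - 1] + gap)    # gap in a
    return F[m][n]
```

The paper's point is precisely that the alignment maximizing this score need not be the structurally correct one for remote homologs, which motivates its fragment-based, structure-guided alternative.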
Integration of QUARK and I-TASSER for ab initio protein structure prediction in CASP11
Zhang, Wenxuan; Yang, Jianyi; He, Baoji; Walker, Sara Elizabeth; Zhang, Hongjiu; Govindarajoo, Brandon; Virtanen, Jouko; Xue, Zhidong; Shen, Hong-Bin; Zhang, Yang
2015-01-01
We tested two pipelines developed for template-free protein structure prediction in the CASP11 experiment. First, the QUARK pipeline constructs structure models by reassembling fragments of continuously distributed lengths excised from unrelated proteins. Five free-modeling (FM) targets have the model successfully constructed by QUARK with a TM-score above 0.4, including the first model of T0837-D1, which has a TM-score=0.736 and RMSD=2.9 Å to the native. Detailed analysis showed that the success is partly attributed to the high-resolution contact map prediction derived from fragment-based distance-profiles, which are mainly located between regular secondary structure elements and loops/turns and help guide the orientation of secondary structure assembly. In the Zhang-Server pipeline, weakly scoring threading templates are re-ordered by the structural similarity to the ab initio folding models, which are then reassembled by I-TASSER based structure assembly simulations; 60% more domains with length up to 204 residues, compared to the QUARK pipeline, were successfully modeled by the I-TASSER pipeline with a TM-score above 0.4. The robustness of the I-TASSER pipeline can stem from the composite fragment-assembly simulations that combine structures from both ab initio folding and threading template refinements. Despite the promising cases, challenges still exist in long-range beta-strand folding, domain parsing, and the uncertainty of secondary structure prediction; the latter of which was found to affect nearly all aspects of FM structure predictions, from fragment identification, target classification, structure assembly, to final model selection. Significant efforts are needed to solve these problems before real progress on FM could be made. PMID:26370505
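The TM-score threshold used throughout this entry (above 0.4 as an indicator of an approximately correct fold) comes from the standard definition: for aligned residue distances d_i and target length L, TM = (1/L)·Σ 1/(1 + (d_i/d0)²) with d0 = 1.24·(L − 15)^(1/3) − 1.8. A sketch for one fixed superposition (the full score maximizes over superpositions):

```python
def tm_score(distances, target_len):
    """TM-score contribution of a fixed superposition: mean of
    1/(1 + (d/d0)**2) over aligned residue pairs, normalized by the
    target length; d0 is the length-dependent distance scale."""
    d0 = max(1.24 * (target_len - 15) ** (1.0 / 3.0) - 1.8, 0.5)
    return sum(1.0 / (1.0 + (d / d0) ** 2) for d in distances) / target_len
```

Unlike RMSD, the score is length-normalized and bounded in (0, 1], so a fixed cutoff such as 0.4 is meaningful across targets of different sizes.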
Integration of QUARK and I-TASSER for Ab Initio Protein Structure Prediction in CASP11.
Zhang, Wenxuan; Yang, Jianyi; He, Baoji; Walker, Sara Elizabeth; Zhang, Hongjiu; Govindarajoo, Brandon; Virtanen, Jouko; Xue, Zhidong; Shen, Hong-Bin; Zhang, Yang
2016-09-01
We tested two pipelines developed for template-free protein structure prediction in the CASP11 experiment. First, the QUARK pipeline constructs structure models by reassembling fragments of continuously distributed lengths excised from unrelated proteins. Five free-modeling (FM) targets have the model successfully constructed by QUARK with a TM-score above 0.4, including the first model of T0837-D1, which has a TM-score = 0.736 and RMSD = 2.9 Å to the native. Detailed analysis showed that the success is partly attributed to the high-resolution contact map prediction derived from fragment-based distance-profiles, which are mainly located between regular secondary structure elements and loops/turns and help guide the orientation of secondary structure assembly. In the Zhang-Server pipeline, weakly scoring threading templates are re-ordered by the structural similarity to the ab initio folding models, which are then reassembled by I-TASSER based structure assembly simulations; 60% more domains with length up to 204 residues, compared to the QUARK pipeline, were successfully modeled by the I-TASSER pipeline with a TM-score above 0.4. The robustness of the I-TASSER pipeline can stem from the composite fragment-assembly simulations that combine structures from both ab initio folding and threading template refinements. Despite the promising cases, challenges still exist in long-range beta-strand folding, domain parsing, and the uncertainty of secondary structure prediction; the latter of which was found to affect nearly all aspects of FM structure predictions, from fragment identification, target classification, structure assembly, to final model selection. Significant efforts are needed to solve these problems before real progress on FM could be made. Proteins 2016; 84(Suppl 1):76-86. © 2015 Wiley Periodicals, Inc.
How to Choose the Suitable Template for Homology Modelling of GPCRs: 5-HT7 Receptor as a Test Case.
Shahaf, Nir; Pappalardo, Matteo; Basile, Livia; Guccione, Salvatore; Rayan, Anwar
2016-09-01
G protein-coupled receptors (GPCRs) are a super-family of membrane proteins that attract great pharmaceutical interest due to their involvement in almost every physiological activity, including extracellular stimuli, neurotransmission, and hormone regulation. Currently, structural information on many GPCRs is mainly obtained by the techniques of computer modelling in general and by homology modelling in particular. Based on a quantitative analysis of eighteen antagonist-bound, resolved structures of rhodopsin family "A" receptors - also used as templates to build 153 homology models - it was concluded that a higher sequence identity between two receptors does not guarantee a lower RMSD between their structures, especially when their pair-wise sequence identity (within trans-membrane domain and/or in binding pocket) lies between 25 % and 40 %. This study suggests that we should consider all template receptors having a sequence identity ≤50 % with the query receptor. In fact, most of the GPCRs, compared to the currently available resolved structures of GPCRs, fall within this range and lack a correlation between structure and sequence. When testing suitability for structure-based drug design, it was found that choosing as a template the most similar resolved protein, based on sequence resemblance only, led to unsound results in many cases. Molecular docking analyses were carried out, and enrichment factors as well as attrition rates were utilized as criteria for assessing suitability for structure-based drug design. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
High-throughput assays that can quantify chemical-induced changes at the cellular and molecular level have been recommended for use in chemical safety assessment. High-throughput, high content imaging assays for the key cellular events of neurodevelopment have been proposed to ra...
Stochastic model of template-directed elongation processes in biology.
Schilstra, Maria J; Nehaniv, Chrystopher L
2010-10-01
We present a novel modular, stochastic model for biological template-based linear chain elongation processes. In this model, elongation complexes (ECs; DNA polymerase, RNA polymerase, or ribosomes associated with nascent chains) that span a finite number of template units step along the template, one after another, with semaphore constructs preventing overtaking. The central elongation module is readily extended with modules that represent initiation and termination processes. The model was used to explore the effect of EC span on motor velocity and dispersion, and the effect of initiation activator and repressor binding kinetics on the overall elongation dynamics. The results demonstrate that (1) motors that move smoothly are able to travel at a greater velocity and closer together than motors that move more erratically, and (2) the rate at which completed chains are released is proportional to the occupancy or vacancy of activator or repressor binding sites only when initiation or activator/repressor dissociation is slow in comparison with elongation. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
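The no-overtaking rule enforced by the semaphore constructs above can be caricatured in a few lines of Gillespie-style simulation. This is an illustrative toy, not the authors' modular model: the function name, rate constants, and footprint convention are all invented for the sketch.

```python
import random

def simulate_elongation(template_len=100, footprint=10, k_step=1.0,
                        k_init=0.5, t_end=200.0, seed=1):
    """Toy Gillespie model: elongation complexes (ECs) spanning
    `footprint` template units step forward one unit at a time and
    never overtake the EC ahead of them. Illustrative only."""
    random.seed(seed)
    ecs = []                    # leading-edge positions, front EC first
    t, completed = 0.0, 0
    while t < t_end:
        events = []             # (rate, event) pairs
        # initiation allowed only when the first `footprint` sites are clear
        if not ecs or ecs[-1] >= 2 * footprint:
            events.append((k_init, "init"))
        for i, pos in enumerate(ecs):
            ahead = ecs[i - 1] if i > 0 else None
            # stepping allowed unless it would overlap the EC ahead
            if ahead is None or ahead - pos > footprint:
                events.append((k_step, i))
        total = sum(rate for rate, _ in events)
        if total == 0:
            break
        t += random.expovariate(total)
        pick = random.uniform(0.0, total)
        for rate, event in events:
            pick -= rate
            if pick <= 0:
                break
        if event == "init":
            ecs.append(footprint)        # new EC covers sites 1..footprint
        else:
            ecs[event] += 1
            if ecs[event] >= template_len:
                ecs.pop(event)           # chain completed and released
                completed += 1
    return completed
```

The exclusion test `ahead - pos > footprint` plays the role of the paper's semaphore: a trailing motor simply has no stepping event available until the motor ahead moves on.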
Incorporating High-Throughput Exposure Predictions with ...
We previously integrated dosimetry and exposure with high-throughput screening (HTS) to enhance the utility of ToxCast™ HTS data by translating in vitro bioactivity concentrations to oral equivalent doses (OEDs) required to achieve these levels internally. These OEDs were compared against regulatory exposure estimates, providing an activity-to-exposure ratio (AER) useful for a risk-based ranking strategy. As ToxCast™ efforts expand (i.e., Phase II) beyond food-use pesticides towards a wider chemical domain that lacks exposure and toxicity information, prediction tools become increasingly important. In this study, in vitro hepatic clearance and plasma protein binding were measured to estimate OEDs for a subset of Phase II chemicals. OEDs were compared against high-throughput (HT) exposure predictions generated using probabilistic modeling and Bayesian approaches generated by the U.S. EPA ExpoCast™ program. This approach incorporated chemical-specific use and national production volume data with biomonitoring data to inform the exposure predictions. This HT exposure modeling approach provided predictions for all Phase II chemicals assessed in this study whereas estimates from regulatory sources were available for only 7% of chemicals. Of the 163 chemicals assessed in this study, three or 13 chemicals possessed AERs <1 or <100, respectively. Diverse bioactivities across a range of assays and concentrations were also noted across the wider chemical space...
Xing, Jie; Zang, Meitong; Zhang, Haiying; Zhu, Mingshe
2015-10-15
Patients are usually exposed to multiple drugs, and metabolite profiling of each drug in complex biological matrices is a big challenge. This study presented a new application of improved high-resolution mass spectrometry (HRMS)-based data-mining tools in tandem for fast and comprehensive metabolite identification of combination drugs in humans. The model drug combination was metronidazole-pantoprazole-clarithromycin (MET-PAN-CLAR), which is widely used in the clinic to treat ulcers caused by Helicobacter pylori. First, mass defect filter (MDF), as a targeted data processing tool, was able to recover all relevant metabolites of MET-PAN-CLAR in human plasma and urine from the full-scan MS dataset when appropriate MDF templates for each drug were defined. Second, the accurate mass-based background subtraction (BS), as an untargeted data-mining tool, worked effectively except for several trace metabolites, which were buried in the remaining background signals. Third, an integrated strategy, i.e., untargeted BS followed by improved MDF, was effective for metabolite identification of MET-PAN-CLAR. Most metabolites except for trace ones were found in the first step of BS-processed datasets, and the results led to the setup of appropriate metabolite MDF templates for the subsequent MDF data processing. Trace metabolites were further recovered by MDF, which used both common MDF templates and the novel metabolite-based MDF templates. As a result, a total of 44 metabolites or related components were found for MET-PAN-CLAR in human plasma and urine using the integrated strategy. New metabolic pathways such as N-glucuronidation of PAN and dehydrogenation of CLAR were found. This study demonstrated that the combination of accurate mass-based multiple data-mining techniques in tandem, i.e., untargeted background subtraction followed by targeted mass defect filtering, can be a valuable tool for rapid metabolite profiling of combination drugs in vivo. Copyright © 2015 Elsevier B.V. All rights reserved.
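A mass defect filter of the kind described keeps only ions whose fractional mass lies close to that of a template ion within a bounded m/z range. A minimal sketch, with invented function and parameter names (real MDF implementations use drug-specific templates and scale the defect window with mass):

```python
def mass_defect_filter(peaks, template_mz, mz_window=50.0, mdf_window=0.050):
    """Keep ions whose mass defect (fractional part of m/z) lies within
    ±mdf_window of the template's defect, for m/z within ±mz_window.
    Illustrative sketch, not a production MDF implementation."""
    t_defect = template_mz - int(template_mz)
    kept = []
    for mz in peaks:
        if abs(mz - template_mz) > mz_window:
            continue                       # outside the mass range of interest
        defect = mz - int(mz)
        d = abs(defect - t_defect)
        # wrap around at 1.0 so defects of 0.999 and 0.001 count as close
        if min(d, 1.0 - d) <= mdf_window:
            kept.append(mz)
    return kept
```

For example, with a hypothetical template ion at m/z 300.10, a peak at 310.12 passes (defect differs by 0.02) while 345.60 is rejected (defect differs by 0.50).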
Vision-based measurement for rotational speed by improving Lucas-Kanade template tracking algorithm.
Guo, Jie; Zhu, Chang'an; Lu, Siliang; Zhang, Dashan; Zhang, Chunyu
2016-09-01
Rotational angle and speed are important parameters for condition monitoring and fault diagnosis of rotating machineries, and their measurement is useful in precision machining and early warning of faults. In this study, a novel vision-based measurement algorithm is proposed to complete this task. A high-speed camera is first used to capture the video of the rotational object. To extract the rotational angle, the template-based Lucas-Kanade algorithm is introduced to complete motion tracking by aligning the template image in the video sequence. Given the special case of nonplanar surface of the cylinder object, a nonlinear transformation is designed for modeling the rotation tracking. In spite of the unconventional and complex form, the transformation can realize angle extraction concisely with only one parameter. A simulation is then conducted to verify the tracking effect, and a practical tracking strategy is further proposed to track consecutively the video sequence. Based on the proposed algorithm, instantaneous rotational speed (IRS) can be measured accurately and efficiently. Finally, the effectiveness of the proposed algorithm is verified on a brushless direct current motor test rig through the comparison with results obtained by the microphone. Experimental results demonstrate that the proposed algorithm can extract accurately rotational angles and can measure IRS with the advantage of noncontact and effectiveness.
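The one-parameter rotation tracking can be illustrated on a 1-D circular intensity profile sampled around the cylinder's circumference: find the circular shift that best aligns the current frame to the template, then convert the shift to degrees. This brute-force search is only a stand-in for the paper's iterative Lucas-Kanade update (the function name and discretization are invented):

```python
def estimate_rotation(template, frame):
    """Estimate the rotation angle (degrees) aligning a 1-D circular
    intensity profile `frame` to `template`, by exhaustive search over
    integer circular shifts minimizing the sum of squared differences.
    Illustrative stand-in for a one-parameter Lucas-Kanade update."""
    n = len(template)
    best_shift, best_err = 0, float("inf")
    for k in range(n):
        # SSD between the template and the frame shifted by k samples
        err = sum((frame[(i + k) % n] - template[i]) ** 2 for i in range(n))
        if err < best_err:
            best_err, best_shift = err, k
    return 360.0 * best_shift / n
```

Differencing the angle across consecutive frames at a known camera frame rate then yields the instantaneous rotational speed.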
A high-throughput assay for quantifying appetite and digestive dynamics.
Jordi, Josua; Guggiana-Nilo, Drago; Soucy, Edward; Song, Erin Yue; Lei Wee, Caroline; Engert, Florian
2015-08-15
Food intake and digestion are vital functions, and their dysregulation is fundamental for many human diseases. Current methods do not support their dynamic quantification on large scales in unrestrained vertebrates. Here, we combine an infrared macroscope with fluorescently labeled food to quantify feeding behavior and intestinal nutrient metabolism with high temporal resolution, sensitivity, and throughput in naturally behaving zebrafish larvae. Using this method and rate-based modeling, we demonstrate that zebrafish larvae match nutrient intake to their bodily demand and that larvae adjust their digestion rate, according to the ingested meal size. Such adaptive feedback mechanisms make this model system amenable to identify potential chemical modulators. As proof of concept, we demonstrate that nicotine, l-lysine, ghrelin, and insulin have analogous impact on food intake as in mammals. Consequently, the method presented here will promote large-scale translational research of food intake and digestive function in a naturally behaving vertebrate. Copyright © 2015 the American Physiological Society.
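The rate-based modeling referred to can be as simple as a single balance equation for labeled gut content G, with intake rate r and first-order digestion constant k: dG/dt = r - k*G. A hedged forward-Euler sketch (all parameter names and values are invented, not taken from the paper):

```python
def simulate_gut(r_intake, k_digest, t_end, dt=0.01):
    """Forward-Euler integration of dG/dt = r_intake - k_digest * G,
    a minimal one-compartment model of fluorescently labeled food in
    the gut. Illustrative only; not the authors' fitted model."""
    g = 0.0
    for _ in range(round(t_end / dt)):
        g += (r_intake - k_digest * g) * dt
    return g
```

Run long enough, the model settles at the steady state r/k, which is the kind of balance point at which intake would match digestive clearance.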
A high-throughput assay for quantifying appetite and digestive dynamics
Guggiana-Nilo, Drago; Soucy, Edward; Song, Erin Yue; Lei Wee, Caroline; Engert, Florian
2015-01-01
Food intake and digestion are vital functions, and their dysregulation is fundamental for many human diseases. Current methods do not support their dynamic quantification on large scales in unrestrained vertebrates. Here, we combine an infrared macroscope with fluorescently labeled food to quantify feeding behavior and intestinal nutrient metabolism with high temporal resolution, sensitivity, and throughput in naturally behaving zebrafish larvae. Using this method and rate-based modeling, we demonstrate that zebrafish larvae match nutrient intake to their bodily demand and that larvae adjust their digestion rate, according to the ingested meal size. Such adaptive feedback mechanisms make this model system amenable to identify potential chemical modulators. As proof of concept, we demonstrate that nicotine, l-lysine, ghrelin, and insulin have analogous impact on food intake as in mammals. Consequently, the method presented here will promote large-scale translational research of food intake and digestive function in a naturally behaving vertebrate. PMID:26108871
Liu, Ning; Tian, Ru; Loeb, Daniel D.
2003-01-01
Synthesis of the relaxed-circular (RC) DNA genome of hepadnaviruses requires two template switches during plus-strand DNA synthesis: primer translocation and circularization. Although primer translocation and circularization use different donor and acceptor sequences, and are distinct temporally, they share the common theme of switching from one end of the minus-strand template to the other end. Studies of duck hepatitis B virus have indicated that, in addition to the donor and acceptor sequences, three other cis-acting sequences, named 3E, M, and 5E, are required for the synthesis of RC DNA by contributing to primer translocation and circularization. The mechanism by which 3E, M, and 5E act was not known. We present evidence that these sequences function by base pairing with each other within the minus-strand template. 3E base-pairs with one portion of M (M3) and 5E base-pairs with an adjacent portion of M (M5). We found that disrupting base pairing between 3E and M3 and between 5E and M5 inhibited primer translocation and circularization. More importantly, restoring base pairing with mutant sequences restored the production of RC DNA. These results are consistent with the model that, within duck hepatitis B virus capsids, the ends of the minus-strand template are juxtaposed via base pairing to facilitate the two template switches during plus-strand DNA synthesis. PMID:12578983
Wave Energy Prize - General Information
Scharmen, Wesley
2016-12-01
All the informational files, templates, rules and guidelines for the Wave Energy Prize (WEP), including the Wave Energy Prize Rules, Participant Terms and Conditions Template, WEC Prize Name, Logo, Branding, WEC Publicity, Technical Submission Template, Numerical Modeling Template, SSTF Submission Template, 1/20th Scale Model Design and Construction Plan Template, Final Report Template, and Webinars.
High-throughput analysis of yeast replicative aging using a microfluidic system
Jo, Myeong Chan; Liu, Wei; Gu, Liang; Dang, Weiwei; Qin, Lidong
2015-01-01
Saccharomyces cerevisiae has been an important model for studying the molecular mechanisms of aging in eukaryotic cells. However, the laborious and low-throughput methods of current yeast replicative lifespan assays limit their usefulness as a broad genetic screening platform for research on aging. We address this limitation by developing an efficient, high-throughput microfluidic single-cell analysis chip in combination with high-resolution time-lapse microscopy. This innovative design enables, to our knowledge for the first time, the determination of the yeast replicative lifespan in a high-throughput manner. Morphological and phenotypical changes during aging can also be monitored automatically with a much higher throughput than previous microfluidic designs. We demonstrate highly efficient trapping and retention of mother cells, determination of the replicative lifespan, and tracking of yeast cells throughout their entire lifespan. Using the high-resolution and large-scale data generated from the high-throughput yeast aging analysis (HYAA) chips, we investigated particular longevity-related changes in cell morphology and characteristics, including critical cell size, terminal morphology, and protein subcellular localization. In addition, because of the significantly improved retention rate of yeast mother cell, the HYAA-Chip was capable of demonstrating replicative lifespan extension by calorie restriction. PMID:26170317
I describe research on high throughput exposure and toxicokinetics. These tools provide context for data generated by high throughput toxicity screening to allow risk-based prioritization of thousands of chemicals.
MIPHENO: Data normalization for high throughput metabolic analysis.
High throughput methodologies such as microarrays, mass spectrometry and plate-based small molecule screens are increasingly used to facilitate discoveries from gene function to drug candidate identification. These large-scale experiments are typically carried out over the course...
Ontology-based reusable clinical document template production system.
Nam, Sejin; Lee, Sungin; Kim, James G Boram; Kim, Hong-Gee
2012-01-01
Clinical documents embody professional clinical knowledge. This paper shows an effective clinical document template (CDT) production system that uses a clinical description entity (CDE) model, a CDE ontology, and a knowledge management system called STEP that manages ontology-based clinical description entities. The ontology represents CDEs and their inter-relations, and the STEP system stores and manages CDE ontology-based information regarding CDTs. The system also provides Web Services interfaces for search and reasoning over clinical entities. The system was populated with entities and relations extracted from 35 CDTs that were used in admission, discharge, and progress reports, as well as those used in nursing and operation functions. A clinical document template editor is shown that uses STEP.
USDA-ARS's Scientific Manuscript database
High-throughput phenotyping platforms (HTPPs) provide novel opportunities to more effectively dissect the genetic basis of drought-adaptive traits. This genome-wide association study (GWAS) compares the results obtained with two Unmanned Aerial Vehicles (UAVs) and a ground-based platform used to mea...
Inhibition of Retinoblastoma Protein Inactivation
2016-09-01
Retinoblastoma protein, E2F transcription factor, high throughput screen, drug discovery, x-ray crystallography ...developed a method to perform fragment-based screening by x-ray crystallography. Keywords: Retinoblastoma (Rb) pathway, E2F transcription factor...cancer, cell-cycle inhibition, activation, modulation, inhibition, high throughput screening, fragment-based screening, x-ray crystallography
NASA Astrophysics Data System (ADS)
Karabat, Cagatay; Kiraz, Mehmet Sabir; Erdogan, Hakan; Savas, Erkay
2015-12-01
In this paper, we introduce a new biometric verification and template protection system which we call THRIVE. The system includes novel enrollment and authentication protocols based on threshold homomorphic encryption where a private key is shared between a user and a verifier. In the THRIVE system, only encrypted binary biometric templates are stored in a database and verification is performed via homomorphically randomized templates, thus, original templates are never revealed during authentication. Due to the underlying threshold homomorphic encryption scheme, a malicious database owner cannot perform full decryption on encrypted templates of the users in the database. In addition, security of the THRIVE system is enhanced using a two-factor authentication scheme involving user's private key and biometric data. Using simulation-based techniques, the proposed system is proven secure in the malicious model. The proposed system is suitable for applications where the user does not want to reveal her biometrics to the verifier in plain form, but needs to prove her identity by using biometrics. The system can be used with any biometric modality where a feature extraction method yields a fixed size binary template and a query template is verified when its Hamming distance to the database template is less than a threshold. The overall connection time for the proposed THRIVE system is estimated to be 336 ms on average for 256-bit biometric templates on a desktop PC running with quad core 3.2 GHz CPUs at 10 Mbit/s up/down link connection speed. Consequently, the proposed system can be efficiently used in real-life applications.
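The matching rule that THRIVE ultimately evaluates is a Hamming-distance threshold on fixed-size binary templates. The sketch below shows only that plaintext rule; the actual protocol evaluates it under threshold homomorphic encryption so that the raw templates are never revealed, which is not reproduced here (the function name is invented):

```python
def hamming_verify(enrolled_bits, query_bits, threshold):
    """Accept when the Hamming distance between two fixed-size binary
    templates is below `threshold`. Plaintext sketch of the matching
    rule only; THRIVE computes this under homomorphic encryption."""
    if len(enrolled_bits) != len(query_bits):
        raise ValueError("templates must have equal length")
    dist = sum(a != b for a, b in zip(enrolled_bits, query_bits))
    return dist < threshold
```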
Erickson, Heidi S
2012-09-28
The future of personalized medicine depends on the ability to efficiently and rapidly elucidate a reliable set of disease-specific molecular biomarkers. High-throughput molecular biomarker analysis methods have been developed to identify disease risk, diagnostic, prognostic, and therapeutic targets in human clinical samples. Currently, high throughput screening allows us to analyze thousands of markers from one sample or one marker from thousands of samples and will eventually allow us to analyze thousands of markers from thousands of samples. Unfortunately, the inherent nature of current high throughput methodologies, clinical specimens, and cost of analysis is often prohibitive for extensive high throughput biomarker analysis. This review summarizes the current state of high throughput biomarker screening of clinical specimens applicable to genetic epidemiology and longitudinal population-based studies with a focus on considerations related to biospecimens, laboratory techniques, and sample pooling. Copyright © 2012 John Wiley & Sons, Ltd.
Wang, Youwei; Zhang, Wenqing; Chen, Lidong; Shi, Siqi; Liu, Jianjun
2017-01-01
Li-ion batteries are a key technology for addressing the global challenge of clean renewable energy and environment pollution. Their contemporary applications, for portable electronic devices, electric vehicles, and large-scale power grids, stimulate the development of high-performance battery materials with high energy density, high power, good safety, and long lifetime. High-throughput calculations provide a practical strategy to discover new battery materials and optimize currently known material performances. Most cathode materials screened by the previous high-throughput calculations cannot meet the requirement of practical applications because only capacity, voltage and volume change of bulk were considered. It is important to include more structure–property relationships, such as point defects, surface and interface, doping and metal-mixture and nanosize effects, in high-throughput calculations. In this review, we established a quantitative description of structure–property relationships in Li-ion battery materials by the intrinsic bulk parameters, which can be applied in future high-throughput calculations to screen Li-ion battery materials. Based on these parameterized structure–property relationships, a possible high-throughput computational screening flow path is proposed to obtain high-performance battery materials. PMID:28458737
Identifying apicoplast-targeting antimalarials using high-throughput compatible approaches
Ekland, Eric H.; Schneider, Jessica; Fidock, David A.
2011-01-01
Malarial parasites have evolved resistance to all previously used therapies, and recent evidence suggests emerging resistance to the first-line artemisinins. To identify antimalarials with novel mechanisms of action, we have developed a high-throughput screen targeting the apicoplast organelle of Plasmodium falciparum. Antibiotics known to interfere with this organelle, such as azithromycin, exhibit an unusual phenotype whereby the progeny of drug-treated parasites die. Our screen exploits this phenomenon by assaying for “delayed death” compounds that exhibit a higher potency after two cycles of intraerythrocytic development compared to one. We report a primary assay employing parasites with an integrated copy of a firefly luciferase reporter gene and a secondary flow cytometry-based assay using a nucleic acid stain paired with a mitochondrial vital dye. Screening of the U.S. National Institutes of Health Clinical Collection identified known and novel antimalarials including kitasamycin. This inexpensive macrolide, used for agricultural applications, exhibited an in vitro IC50 in the 50 nM range, comparable to the 30 nM activity of our control drug, azithromycin. Imaging and pharmacologic studies confirmed kitasamycin action against the apicoplast, and in vivo activity was observed in a murine malaria model. These assays provide the foundation for high-throughput campaigns to identify novel chemotypes for combination therapies to treat multidrug-resistant malaria.—Ekland, E. H., Schneider, J., Fidock, D. A. Identifying apicoplast-targeting antimalarials using high-throughput compatible approaches. PMID:21746861
Wan, Mimi; Zhao, Wenbo; Peng, Fang; Wang, Qi; Xu, Ping; Mao, Chun; Shen, Jian
2016-01-01
A new kind of high-quality Ag/PS coaxial nanocables can be facilely synthesized by using soft/hard templates method. In order to effectively introduce Ag sources into porous polystyrene (PS) nanotubes which were trapped in porous anodic aluminum oxide (AAO) hard template, Pluronic F127 (F127) was used as guiding agent, soft template and reductant. Meanwhile, ethylene glycol solution was also used as solvent and co-reducing agent to assist in the formation of silver nanowires. The influences of concentration of F127 and reducing reaction time on the formation of Ag/PS coaxial nanocables were discussed. Results indicated that the high-quality Ag/PS coaxial nanocables can be obtained by the mixed mode of soft/hard templates under optimized conditions. This strategy is expected to be extended to design more metal/polymer coaxial nanocables for the benefit of creation of complex and functional nanoarchitectures and components. PMID:27477888
Nemenman, Ilya; Escola, G Sean; Hlavacek, William S; Unkefer, Pat J; Unkefer, Clifford J; Wall, Michael E
2007-12-01
We investigate the ability of algorithms developed for reverse engineering of transcriptional regulatory networks to reconstruct metabolic networks from high-throughput metabolite profiling data. For benchmarking purposes, we generate synthetic metabolic profiles based on a well-established model for red blood cell metabolism. A variety of data sets are generated, accounting for different properties of real metabolic networks, such as experimental noise, metabolite correlations, and temporal dynamics. These data sets are made available online. We use ARACNE, a mainstream algorithm for reverse engineering of transcriptional regulatory networks from gene expression data, to predict metabolic interactions from these data sets. We find that the performance of ARACNE on metabolic data is comparable to that on gene expression data.
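ARACNE's core pruning step is the data-processing inequality (DPI): in any fully connected triplet, the edge with the smallest mutual information is treated as an indirect interaction and removed. A hedged sketch of just that step, applied to a precomputed symmetric MI matrix (the real ARACNE also estimates MI from expression or metabolite profiles and applies a DPI tolerance; names here are invented):

```python
def aracne_dpi(mi, eps=0.0):
    """Prune edges using ARACNE's data-processing inequality on a
    symmetric mutual-information matrix `mi` (list of lists).
    Returns the kept edges as sorted (i, j) pairs. Sketch only."""
    n = len(mi)
    keep = [[mi[i][j] > 0 for j in range(n)] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(n):
                if k in (i, j):
                    continue
                # DPI: an indirect i-j interaction via k satisfies
                # MI(i,j) <= min(MI(i,k), MI(k,j)); drop such edges
                if mi[i][j] < min(mi[i][k], mi[k][j]) - eps:
                    keep[i][j] = keep[j][i] = False
    return [(i, j) for i in range(n) for j in range(i + 1, n) if keep[i][j]]
```

On a chain X-Y-Z, for instance, the weak X-Z edge is pruned while the two direct edges survive, which is exactly the behavior being benchmarked against metabolic correlations in the paper.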
High-throughput screening in two dimensions: binding intensity and off-rate on a peptide microarray.
Greving, Matthew P; Belcher, Paul E; Cox, Conor D; Daniel, Douglas; Diehnelt, Chris W; Woodbury, Neal W
2010-07-01
We report a high-throughput two-dimensional microarray-based screen, incorporating both target binding intensity and off-rate, which can be used to analyze thousands of compounds in a single binding assay. Relative binding intensities and time-resolved dissociation are measured for labeled tumor necrosis factor alpha (TNF-alpha) bound to a peptide microarray. The time-resolved dissociation is fitted to a one-component exponential decay model, from which relative dissociation rates are determined for all peptides with binding intensities above background. We show that most peptides with the slowest off-rates on the microarray also have the slowest off-rates when measured by surface plasmon resonance (SPR). 2010 Elsevier Inc. All rights reserved.
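The one-component decay model I(t) = I0·exp(-k_off·t) can be fitted in closed form by linear regression of log intensity against time. This is a minimal stand-in for the paper's fitting procedure (function name invented; real microarray data would also need background handling and a baseline term):

```python
import math

def fit_off_rate(times, intensities):
    """Least-squares estimate of k_off from I(t) = I0 * exp(-k_off * t),
    via linear regression of ln(I) on t. Sketch: assumes positive,
    background-corrected intensities."""
    n = len(times)
    logs = [math.log(i) for i in intensities]
    t_mean = sum(times) / n
    y_mean = sum(logs) / n
    # slope of the ln(I)-vs-t regression line equals -k_off
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    return -slope
```

Ranking peptides by the fitted k_off alongside their binding intensities reproduces the two screening dimensions described above.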
Rioualen, Claire; Da Costa, Quentin; Chetrit, Bernard; Charafe-Jauffret, Emmanuelle; Ginestier, Christophe
2017-01-01
High-throughput RNAi screenings (HTS) allow quantifying the impact of the deletion of each gene in any particular function, from virus-host interactions to cell differentiation. However, there has been less development for functional analysis tools dedicated to RNAi analyses. HTS-Net, a network-based analysis program, was developed to identify gene regulatory modules impacted in high-throughput screenings, by integrating transcription factors-target genes interaction data (regulome) and protein-protein interaction networks (interactome) on top of screening z-scores. HTS-Net produces exhaustive HTML reports for results navigation and exploration. HTS-Net is a new pipeline for RNA interference screening analyses that proves better performance than simple gene rankings by z-scores, by re-prioritizing genes and replacing them in their biological context, as shown by the three studies that we reanalyzed. Formatted input data for the three studied datasets, source code and web site for testing the system are available from the companion web site at http://htsnet.marseille.inserm.fr/. We also compared our program with existing algorithms (CARD and hotnet2). PMID:28949986
Role of APOE Isoforms in the Pathogenesis of TBI induced Alzheimer’s Disease
2016-10-01
deletion, APOE targeted replacement, complex breeding, CCI model optimization, mRNA library generation, high throughput massive parallel sequencing...demonstrate that the lack of Abca1 increases amyloid plaques and decreases APOE protein levels in AD-model mice. In this proposal we will test the hypothesis...injury, inflammatory reaction, transcriptome, high throughput massive parallel sequencing, mRNA-seq., behavioral testing, memory impairment, recovery
2011-01-01
The increasing popularity of systems-based approaches to plant research has resulted in a demand for high throughput (HTP) methods to be developed. RNA extraction from multiple samples in an experiment is a significant bottleneck in performing systems-level genomic studies. Therefore we have established a high throughput method of RNA extraction from Arabidopsis thaliana to facilitate gene expression studies in this widely used plant model. We present optimised manual and automated protocols for the extraction of total RNA from 9-day-old Arabidopsis seedlings in a 96 well plate format using silica membrane-based methodology. Consistent and reproducible yields of high quality RNA are isolated averaging 8.9 μg total RNA per sample (~20 mg plant tissue). The purified RNA is suitable for subsequent qPCR analysis of the expression of over 500 genes in triplicate from each sample. Using the automated procedure, 192 samples (2 × 96 well plates) can easily be fully processed (samples homogenised, RNA purified and quantified) in less than half a day. Additionally we demonstrate that plant samples can be stored in RNAlater at -20°C (but not 4°C) for 10 months prior to extraction with no significant effect on RNA yield or quality. Additionally, disrupted samples can be stored in the lysis buffer at -20°C for at least 6 months prior to completion of the extraction procedure providing a flexible sampling and storage scheme to facilitate complex time series experiments. PMID:22136293
Salvo-Chirnside, Eliane; Kane, Steven; Kerr, Lorraine E
2011-12-02
The increasing popularity of systems-based approaches to plant research has resulted in a demand for high throughput (HTP) methods to be developed. RNA extraction from multiple samples in an experiment is a significant bottleneck in performing systems-level genomic studies. Therefore we have established a high throughput method of RNA extraction from Arabidopsis thaliana to facilitate gene expression studies in this widely used plant model. We present optimised manual and automated protocols for the extraction of total RNA from 9-day-old Arabidopsis seedlings in a 96 well plate format using silica membrane-based methodology. Consistent and reproducible yields of high quality RNA are isolated averaging 8.9 μg total RNA per sample (~20 mg plant tissue). The purified RNA is suitable for subsequent qPCR analysis of the expression of over 500 genes in triplicate from each sample. Using the automated procedure, 192 samples (2 × 96 well plates) can easily be fully processed (samples homogenised, RNA purified and quantified) in less than half a day. Additionally we demonstrate that plant samples can be stored in RNAlater at -20°C (but not 4°C) for 10 months prior to extraction with no significant effect on RNA yield or quality. Additionally, disrupted samples can be stored in the lysis buffer at -20°C for at least 6 months prior to completion of the extraction procedure providing a flexible sampling and storage scheme to facilitate complex time series experiments.
From scores to face templates: a model-based approach.
Mohanty, Pranab; Sarkar, Sudeep; Kasturi, Rangachar
2007-12-01
Regeneration of templates from match scores has security and privacy implications related to any biometric authentication system. We propose a novel paradigm to reconstruct face templates from match scores using a linear approach. It proceeds by first modeling the behavior of the given face recognition algorithm by an affine transformation. The goal of the modeling is to approximate the distances computed by a face recognition algorithm between two faces by distances between points, representing these faces, in an affine space. Given this space, templates from an independent image set (break-in) are matched only once with the enrolled template of the targeted subject and match scores are recorded. These scores are then used to embed the targeted subject in the approximating affine (non-orthogonal) space. Given the coordinates of the targeted subject in the affine space, the original template of the targeted subject is reconstructed using the inverse of the affine transformation. We demonstrate our ideas using three, fundamentally different, face recognition algorithms: Principal Component Analysis (PCA) with Mahalanobis cosine distance measure, Bayesian intra-extrapersonal classifier (BIC), and a feature-based commercial algorithm. To demonstrate the independence of the break-in set with the gallery set, we select face templates from two different databases: Face Recognition Grand Challenge (FRGC) and Facial Recognition Technology (FERET) Database (FERET). With an operational point set at 1 percent False Acceptance Rate (FAR) and 99 percent True Acceptance Rate (TAR) for 1,196 enrollments (FERET gallery), we show that at most 600 attempts (score computations) are required to achieve a 73 percent chance of breaking in as a randomly chosen target subject for the commercial face recognition system. With similar operational set up, we achieve a 72 percent and 100 percent chance of breaking in for the Bayesian and PCA based face recognition systems, respectively. 
With three different levels of score quantization, we achieve 69 percent, 68 percent and 49 percent probability of break-in, indicating the robustness of the proposed scheme to score quantization. We also show that the proposed reconstruction scheme has a 47 percent higher probability of breaking in as a randomly chosen target subject for the commercial system than a hill-climbing approach with the same number of attempts. Given that the proposed template reconstruction method uses distinct face templates to reconstruct faces, this work exposes a more severe form of vulnerability than a hill-climbing attack, in which incrementally modified versions of the same face are used. The ability of the proposed approach to reconstruct actual face templates of the users also raises privacy concerns in biometric systems.
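The core geometric step above, placing the targeted subject in the modeled affine space from its recorded match scores and then inverting the transformation, amounts to recovering a point from its distances to known points. A minimal numpy sketch of that distance-embedding step, assuming exact Euclidean distances; the function name and the least-squares linearization are illustrative, not the authors' implementation:

```python
import numpy as np

def embed_from_scores(X, d):
    """Recover the coordinates of an unknown point from its distances d
    to known points X (n x k), e.g. break-in templates embedded in the
    approximating affine space. Linearizes ||y - x_i||^2 = d_i^2 against
    the first anchor and solves the system in the least-squares sense."""
    x0, d0 = X[0], d[0]
    # Subtracting the first equation cancels the quadratic ||y||^2 term:
    # 2 (x_i - x0) . y = d0^2 - d_i^2 + ||x_i||^2 - ||x0||^2
    A = 2.0 * (X[1:] - x0)
    b = (d0**2 - d[1:]**2) + np.sum(X[1:]**2, axis=1) - np.sum(x0**2)
    y, *_ = np.linalg.lstsq(A, b, rcond=None)
    return y
```

With exact distances and more anchors than dimensions the linearized system recovers the point exactly; with noisy match scores the least-squares solution gives the best estimate in the modeled space, which the affine transformation's inverse then maps back to a template.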
Adaptive Packet Combining Scheme in Three State Channel Model
NASA Astrophysics Data System (ADS)
Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak
2018-01-01
Two popular packet-combining-based error correction techniques are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits: the PC scheme achieves better throughput than APC, but suffers from a higher packet error rate. Because the state of a wireless channel is random and time-varying, individual application of the SR ARQ, PC, or APC scheme cannot deliver the desired throughput. Better throughput can be achieved if the transmission scheme is chosen according to the channel condition. Based on this approach, an adaptive packet combining scheme is proposed: it adapts to the channel condition, carrying out transmission with the PC, APC, or SR ARQ scheme as appropriate. Experimentally, the error correction capability and throughput of the proposed scheme were observed to be significantly better than those of the SR ARQ, PC, and APC schemes applied individually.
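The selection logic described above, choosing SR ARQ, PC, or APC according to the current channel state, can be sketched as a simple dispatcher. The thresholds below and the bit-wise majority vote commonly used in APC are illustrative assumptions, not values from the paper:

```python
def select_scheme(packet_error_rate):
    """Map an estimated packet error rate to a retransmission scheme.
    Thresholds are hypothetical, chosen only to illustrate the idea of
    a three-state (good / moderate / bad) channel model."""
    if packet_error_rate < 0.01:
        return "SR-ARQ"   # good channel: plain selective-repeat ARQ
    elif packet_error_rate < 0.1:
        return "PC"       # moderate channel: packet combining
    else:
        return "APC"      # bad channel: aggressive packet combining

def apc_majority(a, b, c):
    """Bit-wise majority vote over three received copies of a packet,
    the combining step typically associated with APC. A bit is taken
    as 1 if it is 1 in at least two of the three copies."""
    return bytes((x & y) | (y & z) | (x & z) for x, y, z in zip(a, b, c))
```

For example, three copies of the byte 0x0F that each suffered a single, differently located bit flip still vote back to 0x0F, which is why APC tolerates worse channels than plain PC at the cost of extra transmissions.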
The development of a general purpose ARM-based processing unit for the ATLAS TileCal sROD
NASA Astrophysics Data System (ADS)
Cox, M. A.; Reed, R.; Mellado, B.
2015-01-01
After the Phase-II upgrades in 2022, the data output from the LHC ATLAS Tile Calorimeter will increase significantly. ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high-data-throughput Processing Unit (PU) can be developed by combining several consumer ARM processors in a cluster configuration, aggregating their processing performance and data throughput while keeping software design simple for the end-user. This PU could perform a variety of high-level functions on the high-throughput raw data, such as spectral analysis and histogramming, to detect possible detector issues at a low level. High-throughput I/O interfaces are not typical in consumer ARM Systems on Chip, but high data throughput is feasible through the novel use of PCI-Express as the I/O interface to the ARM processors. An overview of the PU is given, and results of performance and throughput testing of four different ARM Cortex Systems on Chip are presented.
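As a rough illustration of the aggregation argument (several consumer SoCs, each attached over PCI-Express, combining into one high-throughput PU), a back-of-the-envelope estimate might look like the following; the function, its parameters and the 80% link-efficiency figure are hypothetical, not taken from the paper:

```python
def cluster_throughput(n_socs, lanes_per_soc, gbps_per_lane, efficiency=0.8):
    """Rough aggregate data throughput (Gb/s) for a PU built from a
    cluster of ARM SoCs, each attached over a PCI-Express link.
    All parameters are illustrative; 'efficiency' stands in for
    protocol overhead on the PCIe link."""
    return n_socs * lanes_per_soc * gbps_per_lane * efficiency
```

For instance, four SoCs with four 5 Gb/s lanes each would give on the order of 64 Gb/s aggregate under this (assumed) 80% efficiency, which is the kind of scaling the cluster configuration is meant to exploit.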
NASA Astrophysics Data System (ADS)
Surekha, Kanagarajan; Nachiappan, Mutharasappan; Prabhu, Dhamodharan; Choubey, Sanjay Kumar; Biswal, Jayashree; Jeyakanthan, Jeyaraman
2017-01-01
Dihydroorotate dehydrogenase (DHODH) catalyzes the rate-limiting step of the de novo pyrimidine biosynthesis pathway and has been proposed as a novel target for anticancer drug development. The currently available drugs against DHODH are ineffective and have various side effects. The three-dimensional structure of the target protein was constructed using a molecular modeling approach, followed by 100 ns molecular dynamics simulations. In this study, High Throughput Virtual Screening (HTVS) was performed against various compound libraries to identify pharmacologically potential molecules. The top four lead molecules identified were NCI_47074, HitFinder_7630, Binding_66981 and Specs_108872, with high docking scores of -9.45, -8.29, -8.04 and -8.03 kcal/mol; the corresponding binding free energies were -16.25, -56.37, -26.93 and -48.04 kcal/mol, respectively. Arg122, Arg185, Glu255 and Gly257 were the key residues found to interact with the ligands. Molecular dynamics simulations of the DHODH-inhibitor complexes were performed to assess the stability of various conformations from the TtDHODH complex structures. Furthermore, the stereoelectronic features of the ligands were explored with a Density Functional Theory approach to characterize charge transfer during the protein-ligand interactions. Based on this in silico analysis, the ligand NCI_47074 ((2Z)-3-({6-[(2Z)-3-carboxylatoprop-2-enamido]pyridin-2-yl}carbamoyl)prop-2-enoate) was found to be the most potent lead molecule, validated using energetic and electronic parameters, and could serve as a template for designing effective anticancer drug molecules.
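The ranking step of such a screen, ordering hits so that the most negative docking score or binding free energy comes first, can be reproduced from the values reported above; the dictionary layout, key names and helper function are illustrative, not part of the screening software:

```python
# Values as reported in the abstract (kcal/mol); "dock" is the docking
# score, "dG" the corresponding binding free energy. More negative = better.
leads = {
    "NCI_47074":      {"dock": -9.45, "dG": -16.25},
    "HitFinder_7630": {"dock": -8.29, "dG": -56.37},
    "Binding_66981":  {"dock": -8.04, "dG": -26.93},
    "Specs_108872":   {"dock": -8.03, "dG": -48.04},
}

def rank_by(leads, key):
    """Return lead names sorted best-first for the given metric
    (ascending, since more negative values indicate tighter binding)."""
    return sorted(leads, key=lambda name: leads[name][key])
```

Note that the two metrics disagree here: NCI_47074 tops the docking-score ranking, while HitFinder_7630 has the most favorable binding free energy, which is why the abstract draws on additional energetic and electronic parameters to pick the final lead.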