Sample records for sample screening pipeline

  1. An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.

    PubMed

    Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao

    2016-09-01

    The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require a fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed that auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community. © 2016 Society for Laboratory Automation and Screening.
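
The record above describes the plate-level corrections only in general terms. A common concrete instance of correcting interplate systematic errors is a median polish, which strips additive row and column (e.g. edge or dispenser) effects from each plate; the sketch below is a generic illustration of that idea, not the paper's auto-QC algorithm, and all names are mine.

```python
# Illustrative median polish for one plate of raw HTS readings.
# Not the paper's auto-QC algorithm; a generic way to remove additive
# row/column systematic effects so residuals reflect candidate signal.

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def median_polish(plate, iters=10):
    """Iteratively sweep out row and column medians; return residuals."""
    rows, cols = len(plate), len(plate[0])
    resid = [row[:] for row in plate]
    for _ in range(iters):
        for r in range(rows):  # remove each row's median
            m = median(resid[r])
            for c in range(cols):
                resid[r][c] -= m
        for c in range(cols):  # remove each column's median
            m = median([resid[r][c] for r in range(rows)])
            for r in range(rows):
                resid[r][c] -= m
    return resid
```

On a plate whose readings are a pure row-plus-column gradient, the residuals come out flat, which is the point: what survives the polish is candidate biological signal plus random artifacts.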

  2. Accelerating root system phenotyping of seedlings through a computer-assisted processing pipeline.

    PubMed

    Dupuy, Lionel X; Wright, Gladys; Thompson, Jacqueline A; Taylor, Anna; Dekeyser, Sebastien; White, Christopher P; Thomas, William T B; Nightingale, Mark; Hammond, John P; Graham, Neil S; Thomas, Catherine L; Broadley, Martin R; White, Philip J

    2017-01-01

    There are numerous systems and techniques to measure the growth of plant roots. However, phenotyping large numbers of plant roots for breeding and genetic analyses remains challenging. One major difficulty is to achieve high throughput and resolution at a reasonable cost per plant sample. Here we describe a cost-effective root phenotyping pipeline, on which we perform time and accuracy benchmarking to identify bottlenecks in such pipelines and strategies for their acceleration. Our root phenotyping pipeline was assembled with custom software and low-cost materials and equipment. Results show that sample preparation and handling of samples during screening are the most time-consuming tasks in root phenotyping. Algorithms can be used to speed up the extraction of root traits from image data, but when applied to large numbers of images, there is a trade-off between time of processing the data and errors contained in the database. Scaling up root phenotyping to large numbers of genotypes will require not only automation of sample preparation and sample handling, but also efficient algorithms for error detection for more reliable replacement of manual interventions.

  3. 1. FIRST SECTION OF PIPELINE BETWEEN CONFLUENCE POOL AND FISH ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. FIRST SECTION OF PIPELINE BETWEEN CONFLUENCE POOL AND FISH SCREEN. NOTE RETAINING WALL BESIDE PIPE. VIEW TO NORTH-NORTHEAST. - Santa Ana River Hydroelectric System, Pipeline to Fish Screen, Redlands, San Bernardino County, CA

  4. High-throughput Screening of Recalcitrance Variations in Lignocellulosic Biomass: Total Lignin, Lignin Monomers, and Enzymatic Sugar Release

    PubMed Central

    Decker, Stephen R.; Sykes, Robert W.; Turner, Geoffrey B.; Lupoi, Jason S.; Doepkke, Crissa; Tucker, Melvin P.; Schuster, Logan A.; Mazza, Kimberly; Himmel, Michael E.; Davis, Mark F.; Gjersing, Erica

    2015-01-01

    The conversion of lignocellulosic biomass to fuels, chemicals, and other commodities has been explored as one possible pathway toward reductions in the use of non-renewable energy sources. In order to identify which plants, out of a diverse pool, have the desired chemical traits for downstream applications, attributes, such as cellulose and lignin content, or monomeric sugar release following an enzymatic saccharification, must be compared. The experimental and data analysis protocols of the standard methods of analysis can be time-consuming, thereby limiting the number of samples that can be measured. High-throughput (HTP) methods alleviate the shortcomings of the standard methods, and permit the rapid screening of available samples to isolate those possessing the desired traits. This study illustrates the HTP sugar release and pyrolysis-molecular beam mass spectrometry pipelines employed at the National Renewable Energy Lab. These pipelines have enabled the efficient assessment of thousands of plants while decreasing experimental time and costs through reductions in labor and consumables. PMID:26437006

  5. High-Throughput Screening of Recalcitrance Variations in Lignocellulosic Biomass: Total Lignin, Lignin Monomers, and Enzymatic Sugar Release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Decker, Stephen R.; Sykes, Robert W.; Turner, Geoffrey B.

    The conversion of lignocellulosic biomass to fuels, chemicals, and other commodities has been explored as one possible pathway toward reductions in the use of non-renewable energy sources. In order to identify which plants, out of a diverse pool, have the desired chemical traits for downstream applications, attributes, such as cellulose and lignin content, or monomeric sugar release following an enzymatic saccharification, must be compared. The experimental and data analysis protocols of the standard methods of analysis can be time-consuming, thereby limiting the number of samples that can be measured. High-throughput (HTP) methods alleviate the shortcomings of the standard methods, and permit the rapid screening of available samples to isolate those possessing the desired traits. This study illustrates the HTP sugar release and pyrolysis-molecular beam mass spectrometry pipelines employed at the National Renewable Energy Lab. These pipelines have enabled the efficient assessment of thousands of plants while decreasing experimental time and costs through reductions in labor and consumables.

  6. Building a virtual ligand screening pipeline using free software: a survey.

    PubMed

    Glaab, Enrico

    2016-03-01

    Virtual screening, the search for bioactive compounds via computational methods, provides a wide range of opportunities to speed up drug development and reduce the associated risks and costs. While virtual screening is already a standard practice in pharmaceutical companies, its applications in preclinical academic research still remain under-exploited, in spite of an increasing availability of dedicated free databases and software tools. In this survey, an overview of recent developments in this field is presented, focusing on free software and data repositories for screening as alternatives to their commercial counterparts, and outlining how available resources can be interlinked into a comprehensive virtual screening pipeline using typical academic computing facilities. Finally, to facilitate the set-up of corresponding pipelines, a downloadable software system is provided, using platform virtualization to integrate pre-installed screening tools and scripts for reproducible application across different operating systems. © The Author 2015. Published by Oxford University Press.
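
A minimal building block of such a free-software pipeline is driving a docking engine once per library ligand. The sketch below assembles an AutoDock Vina command line (a free tool of the kind the survey covers); the file names and box coordinates are placeholders, and the flags assume Vina's standard CLI.

```python
# Hypothetical driver step for a free-software virtual screening pipeline:
# build an AutoDock Vina command for one ligand. File names, box center,
# and size are placeholder values, not taken from the survey.
import shlex

def vina_command(receptor, ligand, out, center, size=(20, 20, 20), exhaustiveness=8):
    """Assemble a `vina` CLI call docking `ligand` into a search box on `receptor`."""
    cx, cy, cz = center
    sx, sy, sz = size
    return [
        "vina",
        "--receptor", receptor, "--ligand", ligand, "--out", out,
        "--center_x", str(cx), "--center_y", str(cy), "--center_z", str(cz),
        "--size_x", str(sx), "--size_y", str(sy), "--size_z", str(sz),
        "--exhaustiveness", str(exhaustiveness),
    ]

cmd = vina_command("target.pdbqt", "lig0001.pdbqt", "lig0001_out.pdbqt", (12.5, 4.0, -7.3))
print(shlex.join(cmd))  # one shell-safe line per library ligand
```

Generating the commands in Python (rather than hand-writing shell loops) makes it easy to fan the jobs out over typical academic computing facilities, as the survey suggests.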

  7. Building a virtual ligand screening pipeline using free software: a survey

    PubMed Central

    2016-01-01

    Virtual screening, the search for bioactive compounds via computational methods, provides a wide range of opportunities to speed up drug development and reduce the associated risks and costs. While virtual screening is already a standard practice in pharmaceutical companies, its applications in preclinical academic research still remain under-exploited, in spite of an increasing availability of dedicated free databases and software tools. In this survey, an overview of recent developments in this field is presented, focusing on free software and data repositories for screening as alternatives to their commercial counterparts, and outlining how available resources can be interlinked into a comprehensive virtual screening pipeline using typical academic computing facilities. Finally, to facilitate the set-up of corresponding pipelines, a downloadable software system is provided, using platform virtualization to integrate pre-installed screening tools and scripts for reproducible application across different operating systems. PMID:26094053

  8. tcpl: The ToxCast Pipeline for High-Throughput Screening Data

    EPA Science Inventory

    Motivation: The large and diverse high-throughput chemical screening efforts carried out by the US EPA ToxCast program require an efficient, transparent, and reproducible data pipeline. Summary: The tcpl R package and its associated MySQL database provide a generalized platform fo...

  9. Cellular and Biophysical Pipeline for the Screening of Peroxisome Proliferator-Activated Receptor Beta/Delta Agonists: Avoiding False Positives

    PubMed Central

    Batista, Fernanda Aparecida Heleno

    2018-01-01

    Peroxisome proliferator-activated receptor beta/delta (PPARβ/δ) is considered a therapeutic target for metabolic disorders, cancer, and cardiovascular diseases. Here, we developed one pipeline for the screening of PPARβ/δ agonists, which reduces the cost, time, and false-positive hits. The first step is an optimized 3-day long cellular transactivation assay based on reporter-gene technology, which is supported by automated liquid-handlers. This primary screening is followed by a confirmatory transactivation assay and by two biophysical validation methods (thermal shift assay (TSA) and ANS fluorescence quenching), which allow the calculation of the affinity constant, giving more information about the selected hits. All of the assays were validated using well-known commercial agonists providing trustworthy data. Furthermore, to validate and test this pipeline, we screened a natural extract library (560 extracts), and we found one plant extract that might be interesting for PPARβ/δ modulation. In conclusion, our results suggested that we developed a cheaper and more robust pipeline that goes beyond the single activation screening, as it also evaluates PPARβ/δ tertiary structure stabilization and the ligand affinity constant, selecting only molecules that directly bind to the receptor. Moreover, this approach might improve the effectiveness of the screening for agonists that target PPARβ/δ for drug development.

  10. CRISPR-DAV: CRISPR NGS data analysis and visualization pipeline.

    PubMed

    Wang, Xuning; Tilford, Charles; Neuhaus, Isaac; Mintier, Gabe; Guo, Qi; Feder, John N; Kirov, Stefan

    2017-12-01

    The simplicity and precision of the CRISPR/Cas9 system have brought in a new era of gene editing. Screening for desired clones with CRISPR-mediated genomic edits in a large number of samples is made possible by next generation sequencing (NGS) due to its multiplexing capability. Here we present the CRISPR-DAV (CRISPR Data Analysis and Visualization) pipeline to analyze CRISPR NGS data in a high-throughput manner. In the pipeline, the Burrows-Wheeler Aligner and Assembly Based ReAlignment are used for small and large indel detection, and results are presented in a comprehensive set of charts and an interactive alignment view. CRISPR-DAV is available at GitHub and Docker Hub repositories: https://github.com/pinetree1/crispr-dav.git and https://hub.docker.com/r/pinetree1/crispr-dav/. xuning.wang@bms.com. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
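
CRISPR-DAV's own implementation is in the repository linked above; as a toy illustration of the core signal it extracts, indels can be read directly out of an aligned read's CIGAR string. This regex-based helper is mine, not part of CRISPR-DAV.

```python
# Toy illustration (not CRISPR-DAV code): pull indels out of an aligned
# read's CIGAR string, the raw signal behind CRISPR edit detection.
import re

def indels_from_cigar(cigar):
    """Return [(op, length)] for insertions ('I') and deletions ('D')."""
    ops = re.findall(r"(\d+)([MIDNSHP=X])", cigar)
    return [(op, int(n)) for n, op in ops if op in "ID"]
```

For example, `indels_from_cigar("50M2D48M")` reports a single 2-bp deletion, while a perfectly matching read (`"100M"`) reports nothing.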

  11. A Ligand-observed Mass Spectrometry Approach Integrated into the Fragment Based Lead Discovery Pipeline

    PubMed Central

    Chen, Xin; Qin, Shanshan; Chen, Shuai; Li, Jinlong; Li, Lixin; Wang, Zhongling; Wang, Quan; Lin, Jianping; Yang, Cheng; Shui, Wenqing

    2015-01-01

    In fragment-based lead discovery (FBLD), a cascade combining multiple orthogonal technologies is required for reliable detection and characterization of fragment binding to the target. Given the limitations of the mainstream screening techniques, we presented a ligand-observed mass spectrometry approach to expand the toolkits and increase the flexibility of building an FBLD pipeline especially for tough targets. In this study, this approach was integrated into an FBLD program targeting the HCV RNA polymerase NS5B. Our ligand-observed mass spectrometry analysis resulted in the discovery of 10 hits from a 384-member fragment library through two independent screens of complex cocktails and a follow-up validation assay. Moreover, this MS-based approach enabled quantitative measurement of weak binding affinities of fragments, which was in general consistent with SPR analysis. Five out of the ten hits were then successfully translated to X-ray structures of fragment-bound complexes to lay a foundation for structure-based inhibitor design. With distinctive strengths in terms of high capacity and speed, minimal method development, easy sample preparation, low material consumption and quantitative capability, this MS-based assay is anticipated to be a valuable addition to the repertoire of current fragment screening techniques. PMID:25666181
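
The quantitative capability of such a readout rests on simple 1:1 binding algebra: the bound fraction at ligand concentration L is L/(Kd + L), so one measured bound fraction yields a Kd estimate. A sketch under that assumption (not the paper's actual fitting procedure):

```python
# For 1:1 binding, bound fraction f = L / (Kd + L) at ligand concentration L.
# Inverting one measured fraction gives a Kd estimate; a stand-in for the
# paper's MS-based quantification. Units follow whatever `conc` is given in.

def kd_from_bound_fraction(conc, bound_frac):
    """Solve f = conc / (Kd + conc) for Kd (requires 0 < bound_frac < 1)."""
    return conc * (1.0 - bound_frac) / bound_frac
```

At 100 µM ligand with half the target bound, Kd comes out to 100 µM, matching the familiar rule that Kd is the half-saturating concentration.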

  12. Preparation of Protein Samples for NMR Structure, Function, and Small Molecule Screening Studies

    PubMed Central

    Acton, Thomas B.; Xiao, Rong; Anderson, Stephen; Aramini, James; Buchwald, William A.; Ciccosanti, Colleen; Conover, Ken; Everett, John; Hamilton, Keith; Huang, Yuanpeng Janet; Janjua, Haleema; Kornhaber, Gregory; Lau, Jessica; Lee, Dong Yup; Liu, Gaohua; Maglaqui, Melissa; Ma, Lichung; Mao, Lei; Patel, Dayaban; Rossi, Paolo; Sahdev, Seema; Shastry, Ritu; Swapna, G.V.T.; Tang, Yeufeng; Tong, Saichiu; Wang, Dongyan; Wang, Huang; Zhao, Li; Montelione, Gaetano T.

    2014-01-01

    In this chapter, we concentrate on the production of high quality protein samples for NMR studies. In particular, we provide an in-depth description of recent advances in the production of NMR samples and their synergistic use with recent advancements in NMR hardware. We describe the protein production platform of the Northeast Structural Genomics Consortium, and outline our high-throughput strategies for producing high quality protein samples for nuclear magnetic resonance (NMR) studies. Our strategy is based on the cloning, expression and purification of 6X-His-tagged proteins using T7-based Escherichia coli systems and isotope enrichment in minimal media. We describe 96-well ligation-independent cloning and analytical expression systems, parallel preparative scale fermentation, and high-throughput purification protocols. The 6X-His affinity tag allows for a similar two-step purification procedure implemented in a parallel high-throughput fashion that routinely results in purity levels sufficient for NMR studies (> 97% homogeneity). Using this platform, the protein open reading frames of over 17,500 different targeted proteins (or domains) have been cloned as over 28,000 constructs. Nearly 5,000 of these proteins have been purified to homogeneity in tens of milligram quantities (see Summary Statistics, http://nesg.org/statistics.html), resulting in more than 950 new protein structures, including more than 400 NMR structures, deposited in the Protein Data Bank. The Northeast Structural Genomics Consortium pipeline has been effective in producing protein samples of both prokaryotic and eukaryotic origin. Although this paper describes our entire pipeline for producing isotope-enriched protein samples, it focuses on the major updates introduced during the last 5 years (Phase 2 of the National Institute of General Medical Sciences Protein Structure Initiative). 
Our advanced automated and/or parallel cloning, expression, purification, and biophysical screening technologies are suitable for implementation in a large individual laboratory or by a small group of collaborating investigators for structural biology, functional proteomics, ligand screening and structural genomics research. PMID:21371586

  13. 40 CFR 761.250 - Sample site selection for pipeline section abandonment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 Sample site selection for pipeline... Disposal of Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe Samples § 761.250 Sample site selection for pipeline section abandonment. This procedure...

  14. 40 CFR 761.250 - Sample site selection for pipeline section abandonment.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 31 2011-07-01 Sample site selection for pipeline... Disposal of Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe Samples § 761.250 Sample site selection for pipeline section abandonment. This procedure...

  15. A cross docking pipeline for improving pose prediction and virtual screening performance

    NASA Astrophysics Data System (ADS)

    Kumar, Ashutosh; Zhang, Kam Y. J.

    2018-01-01

    Pose prediction and virtual screening performance of a molecular docking method depend on the choice of protein structures used for docking. Multiple structures for a target protein are often used to take into account the receptor flexibility and problems associated with a single receptor structure. However, the use of multiple receptor structures is computationally expensive when docking a large library of small molecules. Here, we propose a new cross-docking pipeline suitable to dock a large library of molecules while taking advantage of multiple target protein structures. Our method involves the selection of a suitable receptor for each ligand in a screening library utilizing ligand 3D shape similarity with crystallographic ligands. We have prospectively evaluated our method in D3R Grand Challenge 2 and demonstrated that our cross-docking pipeline can achieve similar or better performance than using either single or multiple-receptor structures. Moreover, our method displayed not only decent pose prediction performance but also better virtual screening performance over several other methods.
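
The receptor-selection step can be pictured as a nearest-neighbor lookup: score each screening ligand against every receptor's crystallographic ligand and dock into the best match. The sketch below uses Tanimoto similarity over precomputed fingerprint bit sets as a stand-in for the 3D shape similarity the paper uses; receptor ids and bits are illustrative.

```python
# Stand-in for the paper's receptor selection: pick, for each screening
# ligand, the receptor whose co-crystallized ligand is most similar.
# Tanimoto over fingerprint bit sets replaces the paper's 3D shape score.

def tanimoto(a, b):
    """Tanimoto similarity of two bit sets: |a & b| / |a | b|."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def pick_receptor(ligand_fp, xtal_ligand_fps):
    """xtal_ligand_fps: {receptor_id: fingerprint bits of its co-crystal ligand}."""
    return max(xtal_ligand_fps, key=lambda rid: tanimoto(ligand_fp, xtal_ligand_fps[rid]))
```

Because each ligand is docked into only its best-matched structure, the library-wide cost stays close to single-receptor docking while still exploiting the structural ensemble.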

  16. AutoDrug: fully automated macromolecular crystallography workflows for fragment-based drug discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsai, Yingssu; McPhillips, Scott E.

    New software has been developed for automating the experimental and data-processing stages of fragment-based drug discovery at a macromolecular crystallography beamline. A new workflow-automation framework orchestrates beamline-control and data-analysis software while organizing results from multiple samples. AutoDrug is software based upon the scientific workflow paradigm that integrates the Stanford Synchrotron Radiation Lightsource macromolecular crystallography beamlines and third-party processing software to automate the crystallography steps of the fragment-based drug-discovery process. AutoDrug screens a cassette of fragment-soaked crystals, selects crystals for data collection based on screening results and user-specified criteria and determines optimal data-collection strategies. It then collects and processes diffraction data, performs molecular replacement using provided models and detects electron density that is likely to arise from bound fragments. All processes are fully automated, i.e. are performed without user interaction or supervision. Samples can be screened in groups corresponding to particular proteins, crystal forms and/or soaking conditions. A single AutoDrug run is only limited by the capacity of the sample-storage dewar at the beamline: currently 288 samples. AutoDrug was developed in conjunction with RestFlow, a new scientific workflow-automation framework. RestFlow simplifies the design of AutoDrug by managing the flow of data and the organization of results and by orchestrating the execution of computational pipeline steps. It also simplifies the execution and interaction of third-party programs and the beamline-control system. Modeling AutoDrug as a scientific workflow enables multiple variants that meet the requirements of different user groups to be developed and supported. A workflow tailored to mimic the crystallography stages comprising the drug-discovery pipeline of CoCrystal Discovery Inc. has been deployed and successfully demonstrated. This workflow was run once on the same 96 samples that the group had examined manually and the workflow cycled successfully through all of the samples, collected data from the same samples that were selected manually and located the same peaks of unmodeled density in the resulting difference Fourier maps.

  17. VecScreen_plus_taxonomy: imposing a tax(onomy) increase on vector contamination screening.

    PubMed

    Schäffer, Alejandro A; Nawrocki, Eric P; Choi, Yoon; Kitts, Paul A; Karsch-Mizrachi, Ilene; McVeigh, Richard

    2018-03-01

    Nucleic acid sequences in public databases should not contain vector contamination, but many sequences in GenBank do (or did) contain vectors. The National Center for Biotechnology Information uses the program VecScreen to screen submitted sequences for contamination. Additional tools are needed to distinguish true-positive (contamination) from false-positive (not contamination) VecScreen matches. A principal reason for false-positive VecScreen matches is that the sequence and the matching vector subsequence originate from closely related or identical organisms (for example, both originate in Escherichia coli). We collected information on the taxonomy of sources of vector segments in the UniVec database used by VecScreen. We used that information in two overlapping software pipelines for retrospective analysis of contamination in GenBank and for prospective analysis of contamination in new sequence submissions. Using the retrospective pipeline, we identified and corrected over 8000 contaminated sequences in the nonredundant nucleotide database. The prospective analysis pipeline has been in production use since April 2017 to evaluate some new GenBank submissions. Data on the sources of UniVec entries were included in release 10.0 (ftp://ftp.ncbi.nih.gov/pub/UniVec/). The main software is freely available at https://github.com/aaschaffer/vecscreen_plus_taxonomy. aschaffe@helix.nih.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.
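
The taxonomy test at the heart of the pipeline is easy to state: a VecScreen match is a likely false positive when the vector segment's source organism matches the submitted sequence's organism (the E. coli example above). A minimal sketch of that decision, with illustrative names (the real pipeline works from UniVec source annotations and full taxonomic lineages):

```python
# Minimal sketch of the taxonomy decision; plain organism strings stand in
# for the taxonomic lineages and UniVec source annotations the pipeline uses.

def classify_match(seq_organism, segment_source_organisms):
    """A vector match whose segment derives from the sequence's own organism
    is a likely false positive rather than genuine contamination."""
    if seq_organism in segment_source_organisms:
        return "false_positive"  # e.g. E. coli sequence vs E. coli-derived vector segment
    return "contamination"
```

The real decision also has to handle closely related (not just identical) taxa, which is why the pipeline carries full lineages rather than single names.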

  18. Open PHACTS computational protocols for in silico target validation of cellular phenotypic screens: knowing the knowns.

    PubMed Central

    Zdrazil, B.; Neefs, J.-M.; Van Vlijmen, H.; Herhaus, C.; Caracoti, A.; Brea, J.; Roibás, B.; Loza, M. I.; Queralt-Rosinach, N.; Furlong, L. I.; Gaulton, A.; Bartek, L.; Senger, S.; Chichester, C.; Engkvist, O.; Evelo, C. T.; Franklin, N. I.; Marren, D.; Ecker, G. F.

    2016-01-01

    Phenotypic screening is in a renaissance phase and is expected by many academic and industry leaders to accelerate the discovery of new drugs for new biology. Given that phenotypic screening is by definition target agnostic, the emphasis of in silico and in vitro follow-up work is on the exploration of possible molecular mechanisms and efficacy targets underlying the biological processes interrogated by the phenotypic screening experiments. Herein, we present six exemplar computational protocols for the interpretation of cellular phenotypic screens based on the integration of compound, target, pathway, and disease data established by the IMI Open PHACTS project. The protocols annotate phenotypic hit lists and allow follow-up experiments and mechanistic conclusions. The annotations included are from ChEMBL, ChEBI, GO, WikiPathways and DisGeNET. Also provided are protocols which select, from the IUPHAR/BPS Guide to PHARMACOLOGY interaction file, selective compounds to probe potential targets, and a correlation robot which systematically aims to identify an overlap of active compounds in both the phenotypic as well as any kinase assay. The protocols are applied to a phenotypic pre-lamin A/C splicing assay selected from the ChEMBL database to illustrate the process. The computational protocols make use of the Open PHACTS API and data and are built within the Pipeline Pilot and KNIME workflow tools. PMID:27774140

  19. tcpl: the ToxCast pipeline for high-throughput screening data.

    PubMed

    Filer, Dayne L; Kothiya, Parth; Setzer, R Woodrow; Judson, Richard S; Martin, Matthew T

    2017-02-15

    Large high-throughput screening (HTS) efforts are widely used in drug development and chemical toxicity screening. Wide use and integration of these data can benefit from an efficient, transparent and reproducible data pipeline. Summary: The tcpl R package and its associated MySQL database provide a generalized platform for efficiently storing, normalizing and dose-response modeling of large high-throughput and high-content chemical screening data. The novel dose-response modeling algorithm has been tested against millions of diverse dose-response series, and robustly fits data with outliers and cytotoxicity-related signal loss. tcpl is freely available on the Comprehensive R Archive Network under the GPL-2 license. martin.matt@epa.gov. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
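
tcpl's model suite and fitting code live in the R package itself; as a language-neutral sketch of the dose-response-modeling step it automates, here is a Hill model fit by coarse grid search, a deliberately simplified stand-in for tcpl's robust fitting.

```python
# Simplified stand-in for tcpl's dose-response modeling: fit a Hill curve
# by coarse grid search over candidate top and AC50 values. tcpl's real
# fitting (multiple models, robustness to outliers) is in the R package.

def hill(conc, top, ac50, gw=1.0):
    """Hill response at concentration conc > 0 (same units as ac50)."""
    return top / (1.0 + (ac50 / conc) ** gw)

def fit_hill(concs, resps, tops, ac50s):
    """Return the (top, ac50) grid point minimizing squared error."""
    best, best_err = None, float("inf")
    for top in tops:
        for ac50 in ac50s:
            err = sum((hill(c, top, ac50) - r) ** 2 for c, r in zip(concs, resps))
            if err < best_err:
                best, best_err = (top, ac50), err
    return best
```

A grid search is enough to show the shape of the problem; production pipelines like tcpl use proper optimizers and compare several candidate models before declaring a curve active.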

  20. SAMPL4 & DOCK3.7: lessons for automated docking procedures

    NASA Astrophysics Data System (ADS)

    Coleman, Ryan G.; Sterling, Teague; Weiss, Dahlia R.

    2014-03-01

    The SAMPL4 challenges were used to test current automated methods for solvation energy, virtual screening, pose and affinity prediction of the molecular docking pipeline DOCK 3.7. Additionally, first-order models of binding affinity were proposed as milestones for any method predicting binding affinity. Several important discoveries about the molecular docking software were made during the challenge: (1) solvation energies of ligands were five-fold worse than those of any other method used in SAMPL4, including methods that were similarly fast; (2) HIV Integrase is a challenging target, but automated docking on the correct allosteric site performed well in terms of virtual screening and pose prediction (compared to other methods), while affinity prediction, as expected, was very poor; (3) molecular docking grid sizes can be very important; serious errors were discovered with default settings, which have been adjusted for all future work. Overall, lessons from SAMPL4 suggest many changes to molecular docking tools, not just DOCK 3.7, that could improve the state of the art. Future difficulties and projects will be discussed.

  1. Identification of missing variants by combining multiple analytic pipelines.

    PubMed

    Ren, Yingxue; Reddy, Joseph S; Pottier, Cyril; Sarangi, Vivekananda; Tian, Shulan; Sinnwell, Jason P; McDonnell, Shannon K; Biernacka, Joanna M; Carrasquillo, Minerva M; Ross, Owen A; Ertekin-Taner, Nilüfer; Rademakers, Rosa; Hudson, Matthew; Mainzer, Liudmila Sergeevna; Asmann, Yan W

    2018-04-16

    After decades of identifying risk factors using array-based genome-wide association studies (GWAS), genetic research of complex diseases has shifted to sequencing-based rare variant discovery. This requires large sample sizes for statistical power and has brought up questions about whether the current variant calling practices are adequate for large cohorts. It is well-known that there are discrepancies between variants called by different pipelines, and that using a single pipeline always misses true variants exclusively identifiable by other pipelines. Nonetheless, it is common practice today to call variants by one pipeline due to computational cost and assume that false negative calls are a small percentage of the total. We analyzed 10,000 exomes from the Alzheimer's Disease Sequencing Project (ADSP) using multiple analytic pipelines consisting of different read aligners and variant calling strategies. We compared variants identified by using two aligners in 50, 100, 200, 500, 1000, and 1952 samples; and compared variants identified by adding single-sample genotyping to the default multi-sample joint genotyping in 50, 100, 500, 2000, 5000, and 10,000 samples. We found that using a single pipeline missed increasing numbers of high-quality variants as sample size grew. By combining two read aligners and two variant calling strategies, we rescued 30% of pass-QC variants at a sample size of 2000, and 56% at 10,000 samples. The rescued variants had higher proportions of low frequency (minor allele frequency [MAF] 1-5%) and rare (MAF < 1%) variants, which are the very type of variants of interest. In 660 Alzheimer's disease cases with earlier onset ages of ≤65, 4 out of 13 (31%) previously published rare pathogenic and protective mutations in APP, PSEN1, and PSEN2 genes were undetected by the default one-pipeline approach but recovered by the multi-pipeline approach.
Identification of the complete variant set from sequencing data is the prerequisite of genetic association analyses. The current analytic practice of calling genetic variants from sequencing data using a single bioinformatics pipeline is no longer adequate with the increasingly large projects. The number and percentage of quality variants that passed quality filters but are missed by the one-pipeline approach rapidly increased with sample size.
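
The combining step itself is set arithmetic once calls are normalized: key each pass-QC call by (chrom, pos, ref, alt), union across pipelines, and the rescued variants are those absent from the single-pipeline set. A minimal sketch (illustrative, not the ADSP workflow):

```python
# Set-arithmetic sketch of the combining step (illustrative, not the ADSP
# workflow): union pass-QC calls from two pipelines keyed by
# (chrom, pos, ref, alt) and flag what a single-pipeline run would miss.

def combine_calls(pipeline_a, pipeline_b):
    """Each argument: iterable of (chrom, pos, ref, alt) tuples passing QC."""
    a, b = set(pipeline_a), set(pipeline_b)
    union = a | b
    rescued = union - a  # present only in pipeline B: missed by an A-only analysis
    return union, rescued
```

Normalization matters in practice: the same indel can be left-aligned differently by different callers, so real merging tools canonicalize representations before taking the union.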

  2. Early-type galaxies: Automated reduction and analysis of ROSAT PSPC data

    NASA Technical Reports Server (NTRS)

    Mackie, G.; Fabbiano, G.; Harnden, F. R., Jr.; Kim, D.-W.; Maggio, A.; Micela, G.; Sciortino, S.; Ciliegi, P.

    1996-01-01

    Preliminary results for early-type galaxies that will be part of a galaxy catalog to be derived from the complete ROSAT database are presented. The stored data were reduced and analyzed by an automatic pipeline based on a command language script. The important features of the pipeline include new data time screening to maximize the signal-to-noise ratio of faint point-like sources, source detection via a wavelet algorithm, and the identification of sources with objects from existing catalogs. The pipeline outputs include reduced images, contour maps, surface brightness profiles, spectra, color and hardness ratios.

  3. Development of an Automated Imaging Pipeline for the Analysis of the Zebrafish Larval Kidney

    PubMed Central

    Westhoff, Jens H.; Giselbrecht, Stefan; Schmidts, Miriam; Schindler, Sebastian; Beales, Philip L.; Tönshoff, Burkhard; Liebel, Urban; Gehrig, Jochen

    2013-01-01

    The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom designed tool, cavities were generated in agarose coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line in which the developing pronephros is highlighted by GFP expression. The consistent image data that was acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration dependent effects of several compounds on nephrogenesis. In addition, applicability of the imaging pipeline was further confirmed in a morpholino based model for cilia-associated human genetic disorders associated with different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects in zebrafish kidney research, and can be readily adapted for the analysis of other organ systems. PMID:24324758

  4. Development of an automated imaging pipeline for the analysis of the zebrafish larval kidney.

    PubMed

    Westhoff, Jens H; Giselbrecht, Stefan; Schmidts, Miriam; Schindler, Sebastian; Beales, Philip L; Tönshoff, Burkhard; Liebel, Urban; Gehrig, Jochen

    2013-01-01

    The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom designed tool, cavities were generated in agarose coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line in which the developing pronephros is highlighted by GFP expression. The consistent image data that was acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration dependent effects of several compounds on nephrogenesis. In addition, applicability of the imaging pipeline was further confirmed in a morpholino based model for cilia-associated human genetic disorders associated with different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects in zebrafish kidney research, and can be readily adapted for the analysis of other organ systems.

  5. 2. PLANK WALKWAY ATOP PIPE, ALSO SHOWING OVERFLOW CONTROL BOX ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. PLANK WALKWAY ATOP PIPE, ALSO SHOWING OVERFLOW CONTROL BOX AT JUNCTION OF PIPE WITH CONCRETE CHANNEL TO FISH SCREEN. VIEW TO NORTHEAST. - Santa Ana River Hydroelectric System, Pipeline to Fish Screen, Redlands, San Bernardino County, CA

  6. ENVIRONMENTAL BENIGN MITIGATION OF MICROBIOLOGICALLY INFLUENCED CORROSION (MIC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J.R. Paterek; G. Husmillo; V. Trbovic

The overall program objective is to develop and evaluate environmentally benign agents or products that are effective in the prevention, inhibition, and mitigation of microbially influenced corrosion (MIC) on the internal surfaces of metallic natural gas pipelines. The goal is one or more environmentally benign (a.k.a. "green") products that can be applied to maintain the structure and dependability of the natural gas infrastructure. The technical approach for this quarter was the isolation and cultivation of MIC-causing microorganisms from corroded pipeline samples, optimization of parameters in the laboratory-scale corrosion test loop system, and testing of the effective concentrations of Capsicum sp. extracts to verify the extent of corrosion on metal coupons by the batch culture method. A total of 22 strains from the group of heterotrophic, acid-producing, denitrifying and sulfate-reducing bacteria were isolated from the gas pipeline samples obtained from Northern Indiana Public Service Company in Trenton, Indiana. They were purified and will be sent out for identification. Bacterial strains of interest were used in antimicrobial screenings and test loop experiments. Parameters for the laboratory-scale test loop system, such as gas and culture medium flow rates, temperature, inoculation period, and length of incubation, were established. A batch culture corrosion study against Desulfovibrio vulgaris showed that one (S₁M) of the four Capsicum sp. extracts tested reduced the corrosion rate of metal coupons by 33.33% compared to the untreated group.

  7. Application of the Attagene FACTORIAL™ assay to ...

    EPA Pesticide Factsheets

Bioassays can be used to evaluate the integrated effects of complex mixtures of both known and unidentified contaminants present in environmental samples. However, such bio-monitoring approaches have typically focused on only one or a few pathways (e.g. estrogen receptor, androgen receptor), despite the fact that the chemicals in a mixture may exhibit a range of biological activities. High-throughput screening approaches that can rapidly assess samples for a broad diversity of biological activities offer a means to provide a more comprehensive characterization of complex mixtures. The Attagene Factorial™ platform is a high-throughput, cell-based assay utilized by US EPA's ToxCast Program, which provides high-content assessment of over 90 different gene regulatory pathways and all 48 human nuclear receptors (NRs). This assay has previously been used in a preliminary screening of surface water extracts from sites across the Great Lakes. In the current study, surface water samples from 38 sites were collected, extracted, and screened through the Factorial assay as part of a USGS nationwide stream assessment. All samples were evaluated in a six-point, 3-fold dilution series and analyzed using the ToxCast Data Pipeline (TCPL) to generate dose-response curves and corresponding half-maximal activity concentration (AC50) estimates. A total of 27 assay endpoints responded to extracts from one or more sites, with up to 14 assays active for a single extract.
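    The dose-response step described above (a six-point, 3-fold dilution series reduced to an AC50 estimate) can be sketched as follows. This toy log-linear interpolation is not the TCPL's actual curve-fitting algorithm, and all names and values are illustrative:

```python
import math

def ac50_from_dilutions(top_conc_uM, n_points, responses, fold=3.0):
    """Estimate the half-maximal activity concentration (AC50) for a
    fold-dilution series by log-linear interpolation -- a simplified
    stand-in for real dose-response curve fitting.

    responses: fractional activity, ordered lowest -> highest concentration.
    """
    # Reconstruct the concentration series: the top concentration diluted
    # `fold`-fold (n_points - 1) times, ordered lowest -> highest.
    concs = [top_conc_uM / fold ** i for i in range(n_points)][::-1]
    half = 0.5 * max(responses)
    for (c_lo, r_lo), (c_hi, r_hi) in zip(
            zip(concs, responses), zip(concs[1:], responses[1:])):
        if r_hi > r_lo and r_lo <= half <= r_hi:
            # Interpolate in log-concentration space between the two points.
            frac = (half - r_lo) / (r_hi - r_lo)
            return 10 ** (math.log10(c_lo)
                          + frac * (math.log10(c_hi) - math.log10(c_lo)))
    return None  # response never crosses half-maximal: call inactive
```

    A monotonically rising response crossing 50% activity yields a finite AC50; a flat response returns None, which a screening pipeline would report as inactive.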

  8. Natural Language Processing Accurately Calculates Adenoma and Sessile Serrated Polyp Detection Rates.

    PubMed

    Nayor, Jennifer; Borges, Lawrence F; Goryachev, Sergey; Gainer, Vivian S; Saltzman, John R

    2018-07-01

    ADR is a widely used colonoscopy quality indicator. Calculation of ADR is labor-intensive and cumbersome using current electronic medical databases. Natural language processing (NLP) is a method used to extract meaning from unstructured or free text data. (1) To develop and validate an accurate automated process for calculation of adenoma detection rate (ADR) and serrated polyp detection rate (SDR) on data stored in widely used electronic health record systems, specifically Epic electronic health record system, Provation ® endoscopy reporting system, and Sunquest PowerPath pathology reporting system. Screening colonoscopies performed between June 2010 and August 2015 were identified using the Provation ® reporting tool. An NLP pipeline was developed to identify adenomas and sessile serrated polyps (SSPs) on pathology reports corresponding to these colonoscopy reports. The pipeline was validated using a manual search. Precision, recall, and effectiveness of the natural language processing pipeline were calculated. ADR and SDR were then calculated. We identified 8032 screening colonoscopies that were linked to 3821 pathology reports (47.6%). The NLP pipeline had an accuracy of 100% for adenomas and 100% for SSPs. Mean total ADR was 29.3% (range 14.7-53.3%); mean male ADR was 35.7% (range 19.7-62.9%); and mean female ADR was 24.9% (range 9.1-51.0%). Mean total SDR was 4.0% (0-9.6%). We developed and validated an NLP pipeline that accurately and automatically calculates ADRs and SDRs using data stored in Epic, Provation ® and Sunquest PowerPath. This NLP pipeline can be used to evaluate colonoscopy quality parameters at both individual and practice levels.
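    A minimal sketch of the kind of text matching such an NLP pipeline performs is shown below. The actual pipeline's rules, synonym lists, and negation handling are not described here; the patterns and report texts below are hypothetical:

```python
import re

# Hypothetical term patterns; a production clinical NLP pipeline would
# handle negation, report sections, and a much larger synonym set.
ADENOMA_PAT = re.compile(
    r"\b(tubular|tubulovillous|villous) adenoma\b", re.IGNORECASE)
SSP_PAT = re.compile(
    r"\bsessile serrated (polyp|adenoma|lesion)\b", re.IGNORECASE)

def detection_rates(pathology_reports):
    """ADR / SDR = fraction of screening colonoscopies whose linked
    pathology report mentions at least one adenoma / sessile serrated polyp."""
    n = len(pathology_reports)
    adr = sum(bool(ADENOMA_PAT.search(r)) for r in pathology_reports) / n
    sdr = sum(bool(SSP_PAT.search(r)) for r in pathology_reports) / n
    return adr, sdr
```

    Running this over a set of linked pathology reports yields per-endoscopist or practice-level rates without manual chart review.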

  9. Drug Development Pipeline

    MedlinePlus


  10. Lateral instability of high temperature pipelines, the 20-in. Sleipner Vest pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saevik, S.; Levold, E.; Johnsen, O.K.

    1996-12-01

The present paper addresses methods to control the snaking behavior of high temperature pipelines resting on a flat sea bed. A case study is presented based on the detailed engineering of the 12.5 km long 20 inch gas pipeline connecting the Sleipner Vest wellhead platform to the Sleipner T processing platform in the North Sea. The study includes screening and evaluation of alternative expansion control methods, ending with a recommended method. The methodology and philosophy used as the basis for ensuring sufficient structural strength throughout the lifetime of the pipeline are thereafter presented. The results show that in order to find the optimum technical solution to control snaking behavior, many aspects need to be considered, such as process requirements, allowable strain, hydrodynamic stability, vertical profile, pipelay installation and trawlboard loading. It is concluded that by proper consideration of all the above aspects, the high temperature pipeline can be designed to obtain a sufficient safety level.

  11. Understanding gene functions and disease mechanisms: Phenotyping pipelines in the German Mouse Clinic.

    PubMed

    Fuchs, Helmut; Aguilar-Pimentel, Juan Antonio; Amarie, Oana V; Becker, Lore; Calzada-Wack, Julia; Cho, Yi-Li; Garrett, Lillian; Hölter, Sabine M; Irmler, Martin; Kistler, Martin; Kraiger, Markus; Mayer-Kuckuk, Philipp; Moreth, Kristin; Rathkolb, Birgit; Rozman, Jan; da Silva Buttkus, Patricia; Treise, Irina; Zimprich, Annemarie; Gampe, Kristine; Hutterer, Christine; Stöger, Claudia; Leuchtenberger, Stefanie; Maier, Holger; Miller, Manuel; Scheideler, Angelika; Wu, Moya; Beckers, Johannes; Bekeredjian, Raffi; Brielmeier, Markus; Busch, Dirk H; Klingenspor, Martin; Klopstock, Thomas; Ollert, Markus; Schmidt-Weber, Carsten; Stöger, Tobias; Wolf, Eckhard; Wurst, Wolfgang; Yildirim, Ali Önder; Zimmer, Andreas; Gailus-Durner, Valérie; Hrabě de Angelis, Martin

    2017-09-29

For decades, model organisms have provided an important approach for understanding the mechanistic basis of human diseases. The German Mouse Clinic (GMC) was the first phenotyping facility to establish a collaboration-based platform for the phenotypic characterization of mouse lines. To address individual projects with a tailor-made phenotyping strategy, the GMC has developed a series of pipelines with tests for the analysis of specific disease areas. For a general broad analysis, there is a screening pipeline that covers the key parameters for the most relevant disease areas. For hypothesis-driven phenotypic analyses, there are thirteen additional pipelines with a focus on neurological and behavioral disorders, metabolic dysfunction, respiratory system malfunctions, immune-system disorders and imaging techniques. In this article, we give an overview of the pipelines and describe the scientific rationale behind the different test combinations. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Investigating the viral ecology of global bee communities with high-throughput metagenomics.

    PubMed

    Galbraith, David A; Fuller, Zachary L; Ray, Allyson M; Brockmann, Axel; Frazier, Maryann; Gikungu, Mary W; Martinez, J Francisco Iturralde; Kapheim, Karen M; Kerby, Jeffrey T; Kocher, Sarah D; Losyev, Oleksiy; Muli, Elliud; Patch, Harland M; Rosa, Cristina; Sakamoto, Joyce M; Stanley, Scott; Vaudo, Anthony D; Grozinger, Christina M

    2018-06-11

Bee viral ecology is a fascinating emerging area of research: viruses exert a range of effects on their hosts, exacerbate impacts of other environmental stressors, and, importantly, are readily shared across multiple bee species in a community. However, our understanding of bee viral communities is limited, as it is primarily derived from studies of North American and European Apis mellifera populations. Here, we examined viruses in populations of A. mellifera and 11 other bee species from 9 countries, across 4 continents and Oceania. We developed a novel pipeline to rapidly and inexpensively screen for bee viruses. This pipeline includes purification of encapsulated RNA/DNA viruses, sequence-independent amplification, high throughput sequencing, integrated assembly of contigs, and filtering to identify contigs specifically corresponding to viral sequences. We identified sequences for (+)ssRNA, (-)ssRNA, dsRNA, and ssDNA viruses. Overall, we found 127 contigs corresponding to novel viruses (i.e. previously not observed in bees), with 27 represented by >0.1% of the reads in a given sample, and 7 containing an RdRp or replicase sequence which could be used for robust phylogenetic analysis. This study provides a sequence-independent pipeline for viral metagenomics analysis, and greatly expands our understanding of the diversity of viruses found in bee communities.
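    The abundance filter described above (keeping contigs backed by more than 0.1% of a sample's reads) can be sketched as follows; the function and contig names are illustrative, not part of the authors' pipeline:

```python
def flag_abundant_viral_contigs(read_counts, total_reads, min_fraction=0.001):
    """Return the contigs whose mapped reads exceed min_fraction (default
    0.1%) of the sample's total reads, sorted by name.

    read_counts: {contig_id: number of reads mapping to that contig}
    """
    return sorted(
        contig_id
        for contig_id, n_reads in read_counts.items()
        if n_reads / total_reads > min_fraction
    )
```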

  13. Remedial Investigation Report on the Abandoned Nitric Acid Pipeline at the Oak Ridge Y-12 Plant, Oak Ridge, Tennessee. Energy Systems Environmental Restoration Program; Y-12 Environmental Restoration Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-02-01

Upper East Fork Poplar Creek Operable Unit 2 consists of the Abandoned Nitric Acid Pipeline (ANAP). This pipeline was installed in 1951 to transport liquid wastes approximately 4800 ft from Buildings 9212, 9215, and 9206 to the S-3 Ponds. Materials known to have been discharged through the pipeline include nitric acid, depleted and enriched uranium, various metal nitrates, salts, and lead skimmings. During the mid-1980s, sections of the pipeline were removed during various construction projects. A total of 19 locations were chosen to be investigated along the pipeline for the first phase of this Remedial Investigation. Sampling consisted of drilling down to obtain a soil sample at a depth immediately below the pipeline. Additional samples were obtained deeper in the subsurface depending upon the depth of the pipeline, the depth of the water table, and the point of auger refusal. The 19 samples collected below the pipeline were analyzed by the Oak Ridge Y-12 Plant's laboratory for metals, nitrate/nitrite, and isotopic uranium. Samples collected from three boreholes were also analyzed for volatile organic compounds because these samples produced a response with organic vapor monitoring equipment. Uranium activities in the soil samples ranged from 0.53 to 13.0 pCi/g for ²³⁸U, from 0.075 to 0.75 pCi/g for ²³⁵U, and from 0.71 to 5.0 pCi/g for ²³⁴U. Maximum total values for lead, chromium, and nickel were 75.1 mg/kg, 56.3 mg/kg, and 53.0 mg/kg, respectively. The maximum nitrate/nitrite value detected was 32.0 mg-N/kg. One sample obtained adjacent to a sewer line contained various organic compounds, at least some of which were tentatively identified as fragrance chemicals commonly associated with soaps and cleaning solutions. The results of the baseline human health risk assessment for the ANAP contaminants of potential concern show no unacceptable risks to human health.

  14. ENVIRONMENTALLY BENIGN MITIGATION OF MICROBIOLOGICALLY INFLUENCED CORROSION (MIC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Robert Paterek; Gemma Husmillo; Amrutha Daram

The overall program objective is to develop and evaluate environmentally benign agents or products that are effective in the prevention, inhibition, and mitigation of microbially influenced corrosion (MIC) on the internal surfaces of metallic natural gas pipelines. The goal is to develop one or more environmentally benign (a.k.a. "green") products that can be applied to maintain the structure and dependability of the natural gas infrastructure. The technical approach for this quarter includes the application of new methods of Capsicum sp. (pepper) extraction by the soxhlet method and analysis of a new set of extracts by thin layer chromatography (TLC) and high performance liquid chromatography (HPLC); isolation and cultivation of MIC-causing microorganisms from corroded pipeline samples; and evaluation of the antimicrobial activities of the old set of pepper extracts in comparison with major components of known biocides and corrosion inhibitors. Twelve new extracts from three varieties of Capsicum sp. (Serrano, Habanero, and Chile de Arbol) were obtained by soxhlet extraction using 4 different solvents. TLC on these extracts showed the presence of capsaicin and some phenolic compounds, while HPLC detected capsaicin and dihydrocapsaicin peaks. More tests will be done to determine specific components. Additional isolates from the group of heterotrophic, acid-producing, denitrifying and sulfate-reducing bacteria were obtained from the pipeline samples submitted by gas companies. Isolates of interest will be used in subsequent antimicrobial testing and test-loop simulation system experiments. Antimicrobial screening of Capsicum sp. extracts and components of known commercial biocides showed comparable activities when tested against two strains of sulfate-reducing bacteria.

  15. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform.

    PubMed

    Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N

    2017-03-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.
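    Pipelines built on Jenkins-CI in this way can be triggered remotely through Jenkins's documented remote access API (the buildWithParameters endpoint for parameterized jobs). A minimal sketch of constructing such a trigger URL; the server, job, and parameter names are hypothetical:

```python
from urllib.parse import urlencode

def build_trigger_url(jenkins_url, job, params):
    """Construct the remote-trigger URL for a parameterized Jenkins job.

    Jenkins exposes /job/<name>/buildWithParameters for this; a POST to
    the resulting URL (with appropriate credentials/crumb) queues a build.
    """
    query = urlencode(params)
    return f"{jenkins_url.rstrip('/')}/job/{job}/buildWithParameters?{query}"
```

    For example, an image-processing job parameterized by plate ID could be queued for each plate coming off the screening instrument.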

  16. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform

    PubMed Central

    Moutsatsos, Ioannis K.; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J.; Jenkins, Jeremy L.; Holway, Nicholas; Tallarico, John; Parker, Christian N.

    2016-01-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an “off-the-shelf,” open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community. PMID:27899692

  17. GMOseek: a user friendly tool for optimized GMO testing.

    PubMed

    Morisset, Dany; Novak, Petra Kralj; Zupanič, Darko; Gruden, Kristina; Lavrač, Nada; Žel, Jana

    2014-08-01

With the increasing pace at which new Genetically Modified Organisms (GMOs) are authorized or in the pipeline for commercialization worldwide, the task of the laboratories charged with testing the compliance of food, feed or seed samples with the relevant regulations has become difficult and costly. Many of them have already adopted the so-called "matrix approach" to rationalize resources and increase their efficiency within a limited budget. Most of the time, the "matrix approach" is implemented using limited information and, at best, a proprietary computational tool to make efficient use of the available data. The GMOseek software is designed to support decision making in all phases of routine GMO laboratory testing, including the interpretation of wet-lab results. The tool makes use of a tabulated matrix of GM events and their genetic elements, of the laboratory's analysis history, and of the available information about the sample at hand. It uses an optimization approach to suggest the screening assays best suited to the given sample. The practical GMOseek user interface allows the user to customize the search for a cost-efficient combination of screening assays to be employed on a given sample. It further guides the user in selecting appropriate analyses to determine the presence of individual GM events in the analyzed sample, and it helps in taking a final decision regarding the GMO composition of the sample. GMOseek can also be used to evaluate new, previously unused GMO screening targets and to estimate the profitability of developing new GMO screening methods. The presented freely available software tool offers GMO testing laboratories the possibility to select the combinations of assays (e.g. quantitative real-time PCR tests) needed for their task, allowing the expert to express his or her preferences in terms of multiplexing and cost. The utility of GMOseek is exemplified by analyzing selected food, feed and seed samples from a national reference laboratory for GMO testing and by comparing its performance to existing tools that use the matrix approach. GMOseek proves superior when tested on real samples in terms of GMO coverage and the cost efficiency of its screening strategies, including its capacity for simple interpretation of the testing results.
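    Choosing screening assays that jointly cover all candidate GM events is essentially a set-cover optimization. A greedy sketch of the idea follows; this is not GMOseek's actual algorithm, and the assay/event names (common screening elements such as the p35S promoter and tNOS terminator) are used purely for illustration:

```python
def choose_screening_assays(assay_targets, events_to_cover):
    """Greedy set-cover: repeatedly pick the assay that detects the most
    still-uncovered GM events, until all events are covered or no assay
    helps. Returns (chosen assays, events left uncovered).

    assay_targets: {assay_name: set of GM events the assay detects}
    """
    uncovered = set(events_to_cover)
    chosen = []
    while uncovered:
        best = max(assay_targets,
                   key=lambda a: len(assay_targets[a] & uncovered))
        gained = assay_targets[best] & uncovered
        if not gained:
            break  # remaining events are not detectable by any listed assay
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered
```

    A real tool would additionally weight assays by cost and multiplexing preferences rather than by coverage alone.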

  18. Sub-soil contamination due to oil spills in zones surrounding oil pipeline-pump stations and oil pipeline right-of-ways in Southwest-Mexico.

    PubMed

    Iturbe, Rosario; Flores, Carlos; Castro, Alejandrina; Torres, Luis G

    2007-10-01

    Oil spills due to oil pipelines is a very frequent problem in Mexico. Petroleos Mexicanos (PEMEX), very concerned with the environmental agenda, has been developing inspection and correction plans for zones around oil pipelines pumping stations and pipeline right-of-way. These stations are located at regular intervals of kilometres along the pipelines. In this study, two sections of an oil pipeline and two pipeline pumping stations zones are characterized in terms of the presence of Total Petroleum Hydrocarbons (TPHs) and Polycyclic Aromatic Hydrocarbons (PAHs). The study comprehends sampling of the areas, delimitation of contamination in the vertical and horizontal extension, analysis of the sampled soils regarding TPHs content and, in some cases, the 16 PAHs considered as priority by USEPA, calculation of areas and volumes contaminated (according to Mexican legislation, specifically NOM-EM-138-ECOL-2002) and, finally, a proposal for the best remediation techniques suitable for the contamination levels and the localization of contaminants.

  19. Pipeline for illumination correction of images for high-throughput microscopy.

    PubMed

    Singh, S; Bray, M-A; Jones, T R; Carpenter, A E

    2014-12-01

The presence of systematic noise in images in high-throughput microscopy experiments can significantly impact the accuracy of downstream results. Among the most common sources of systematic noise is non-homogeneous illumination across the image field. This often adds an unacceptable level of noise, obscures true quantitative differences and precludes biological experiments that rely on accurate fluorescence intensity measurements. In this paper, we seek to quantify the improvement in the quality of high-content screen readouts due to software-based illumination correction. We present a straightforward illumination correction pipeline that has been used by our group across many experiments. We test the pipeline on real-world high-throughput image sets and evaluate the performance of the pipeline at two levels: (a) Z'-factor to evaluate the effect of the image correction on a univariate readout, representative of a typical high-content screen, and (b) classification accuracy on phenotypic signatures derived from the images, representative of an experiment involving more complex data mining. We find that applying the proposed post-hoc correction method improves performance in both experiments, even when illumination correction has already been applied using software associated with the instrument. To facilitate the ready application and future development of illumination correction methods, we have made our complete test data sets as well as open-source image analysis pipelines publicly available. This software-based solution has the potential to improve outcomes for a wide variety of image-based HTS experiments. © 2014 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
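    A minimal sketch of retrospective illumination correction in the spirit described above: estimate the illumination field as the per-pixel mean over the batch of images, normalize it to mean 1, and divide each image by it. This is a simplified illustration, not the authors' pipeline, which would also smooth the estimated field:

```python
def illumination_correct(images):
    """Correct non-homogeneous illumination across a batch of images.

    images: list of equally sized 2-D lists (rows of pixel intensities).
    Returns the batch with each image divided by the normalized per-pixel
    mean illumination estimate.
    """
    h, w = len(images[0]), len(images[0][0])
    # Illumination estimate: per-pixel mean over all images in the batch.
    field = [[sum(img[r][c] for img in images) / len(images)
              for c in range(w)] for r in range(h)]
    # Normalize the field to mean 1 so overall intensity scale is preserved.
    mean = sum(map(sum, field)) / (h * w)
    norm = [[v / mean for v in row] for row in field]
    return [[[img[r][c] / norm[r][c] for c in range(w)] for r in range(h)]
            for img in images]
```

    Given a flat true signal multiplied by an uneven illumination field, the corrected images come out flat again, which is exactly the property intensity-based readouts depend on.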

  20. The HEASARC Swift Gamma-Ray Burst Archive: The Pipeline and the Catalog

    NASA Technical Reports Server (NTRS)

    Donato, Davide; Angelini, Lorella; Padgett, C.A.; Reichard, T.; Gehrels, Neil; Marshall, Francis E.; Sakamoto, Takanori

    2012-01-01

Since its launch in late 2004, the Swift satellite triggered or observed an average of one gamma-ray burst (GRB) every 3 days, for a total of 771 GRBs by 2012 January. Here, we report the development of a pipeline that semi-automatically performs the data-reduction and data-analysis processes for the three instruments on board Swift (BAT, XRT, UVOT). The pipeline is written in Perl, and it uses only HEAsoft tools and can be used to perform the analysis of a majority of the point-like objects (e.g., GRBs, active galactic nuclei, pulsars) observed by Swift. We run the pipeline on the GRBs, and we present a database containing the screened data, the output products, and the results of our ongoing analysis. Furthermore, we created a catalog summarizing some GRB information, collected either by running the pipeline or from the literature. The Perl script, the database, and the catalog are available for downloading and querying at the HEASARC Web site.

  1. The HEASARC Swift Gamma-Ray Burst Archive: The Pipeline and the Catalog

    NASA Astrophysics Data System (ADS)

    Donato, D.; Angelini, L.; Padgett, C. A.; Reichard, T.; Gehrels, N.; Marshall, F. E.; Sakamoto, T.

    2012-11-01

    Since its launch in late 2004, the Swift satellite triggered or observed an average of one gamma-ray burst (GRB) every 3 days, for a total of 771 GRBs by 2012 January. Here, we report the development of a pipeline that semi-automatically performs the data-reduction and data-analysis processes for the three instruments on board Swift (BAT, XRT, UVOT). The pipeline is written in Perl, and it uses only HEAsoft tools and can be used to perform the analysis of a majority of the point-like objects (e.g., GRBs, active galactic nuclei, pulsars) observed by Swift. We run the pipeline on the GRBs, and we present a database containing the screened data, the output products, and the results of our ongoing analysis. Furthermore, we created a catalog summarizing some GRB information, collected either by running the pipeline or from the literature. The Perl script, the database, and the catalog are available for downloading and querying at the HEASARC Web site.

  2. Identification of a Novel Class of BRD4 Inhibitors by Computational Screening and Binding Simulations

    PubMed Central

    2017-01-01

    Computational screening is a method to prioritize small-molecule compounds based on the structural and biochemical attributes built from ligand and target information. Previously, we have developed a scalable virtual screening workflow to identify novel multitarget kinase/bromodomain inhibitors. In the current study, we identified several novel N-[3-(2-oxo-pyrrolidinyl)phenyl]-benzenesulfonamide derivatives that scored highly in our ensemble docking protocol. We quantified the binding affinity of these compounds for BRD4(BD1) biochemically and generated cocrystal structures, which were deposited in the Protein Data Bank. As the docking poses obtained in the virtual screening pipeline did not align with the experimental cocrystal structures, we evaluated the predictions of their precise binding modes by performing molecular dynamics (MD) simulations. The MD simulations closely reproduced the experimentally observed protein–ligand cocrystal binding conformations and interactions for all compounds. These results suggest a computational workflow to generate experimental-quality protein–ligand binding models, overcoming limitations of docking results due to receptor flexibility and incomplete sampling, as a useful starting point for the structure-based lead optimization of novel BRD4(BD1) inhibitors. PMID:28884163

  3. The Visual Uncertainty Paradigm for Controlling Screen-Space Information in Visualization

    ERIC Educational Resources Information Center

    Dasgupta, Aritra

    2012-01-01

    The information visualization pipeline serves as a lossy communication channel for presentation of data on a screen-space of limited resolution. The lossy communication is not just a machine-only phenomenon due to information loss caused by translation of data, but also a reflection of the degree to which the human user can comprehend visual…

  4. Abnormal plasma DNA profiles in early ovarian cancer using a non-invasive prenatal testing platform: implications for cancer screening.

    PubMed

    Cohen, Paul A; Flowers, Nicola; Tong, Stephen; Hannan, Natalie; Pertile, Mark D; Hui, Lisa

    2016-08-24

Non-invasive prenatal testing (NIPT) identifies fetal aneuploidy by sequencing cell-free DNA in the maternal plasma. Pre-symptomatic maternal malignancies have been incidentally detected during NIPT based on abnormal genomic profiles. This low coverage sequencing approach could have potential for ovarian cancer screening in the non-pregnant population. Our objective was to investigate whether plasma DNA sequencing with a clinical whole genome NIPT platform can detect early- and late-stage high-grade serous ovarian carcinomas (HGSOC). This is a case-control study of prospectively collected biobank samples comprising preoperative plasma from 32 women with HGSOC (16 'early cancer' (FIGO I-II) and 16 'advanced cancer' (FIGO III-IV)) and 32 benign controls. Plasma DNA from cases and controls were sequenced using a commercial NIPT platform and chromosome dosage measured. Sequencing data were blindly analyzed with two methods: (1) Subchromosomal changes were called using an open source algorithm WISECONDOR (WIthin-SamplE COpy Number aberration DetectOR). Genomic gains or losses ≥ 15 Mb were prespecified as "screen positive" calls, and mapped to recurrent copy number variations reported in an ovarian cancer genome atlas. (2) Selected whole chromosome gains or losses were reported using the routine NIPT pipeline for fetal aneuploidy. We detected 13/32 cancer cases using the subchromosomal analysis (sensitivity 40.6 %, 95 % CI, 23.7-59.4 %), including 6/16 early and 7/16 advanced HGSOC cases. Two of 32 benign controls had subchromosomal gains ≥ 15 Mb (specificity 93.8 %, 95 % CI, 79.2-99.2 %). Twelve of the 13 true positive cancer cases exhibited specific recurrent changes reported in HGSOC tumors. The NIPT pipeline resulted in one "monosomy 18" call from the cancer group, and two "monosomy X" calls in the controls. Low coverage plasma DNA sequencing used for prenatal testing detected 40.6 % of all HGSOC, including 38 % of early stage cases. 
Our findings demonstrate the potential of a high throughput sequencing platform to screen for early HGSOC in plasma based on characteristic multiple segmental chromosome gains and losses. The performance of this approach may be further improved by refining bioinformatics algorithms and targeting selected cancer copy number variations.
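The prespecified "screen positive" rule described above lends itself to a compact sketch: any subchromosomal gain or loss spanning at least 15 Mb triggers a positive call. The segment representation and helper name below are illustrative, not part of the WISECONDOR interface:

```python
# Sketch of the prespecified screen-positive rule: a segment tuple is
# (chrom, start, end, kind), with kind 'gain' or 'loss'. The helper name
# and tuple layout are assumptions for illustration only.

MIN_SEGMENT_BP = 15_000_000  # 15 Mb threshold prespecified in the study

def screen_positive(segments):
    """Return (is_positive, qualifying_segments) for one sample."""
    hits = [s for s in segments if s[2] - s[1] >= MIN_SEGMENT_BP]
    return len(hits) > 0, hits

calls = [("chr8", 10_000_000, 45_000_000, "gain"),   # 35 Mb gain: qualifies
         ("chr17", 1_000_000, 4_000_000, "loss")]    # 3 Mb loss: below threshold
positive, hits = screen_positive(calls)
```

A sample is then counted as a true positive only if it also carries recurrent HGSOC copy number changes, which the sketch does not model.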

  5. Remaining Sites Verification Package for the 100-F-44:2, Discovery Pipeline Near 108-F Building, Waste Site Reclassification Form 2007-006

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. M. Capron

    2008-05-30

The 100-F-44:2 waste site is a steel pipeline that was discovered in a junction box during confirmatory sampling of the 100-F-26:4 pipeline from December 2004 through January 2005. The 100-F-44:2 pipeline feeds into the 100-F-26:4 subsite vitrified clay pipe (VCP) process sewer pipeline from the 108-F Biology Laboratory at the junction box. In accordance with this evaluation, the confirmatory sampling results support a reclassification of this site to No Action. The current site conditions achieve the remedial action objectives and the corresponding remedial action goals established in the Remaining Sites ROD. The results of confirmatory sampling show that residual contaminant concentrations do not preclude any future uses and allow for unrestricted use of shallow zone soils. The results also demonstrate that residual contaminant concentrations are protective of groundwater and the Columbia River.

  6. Use of microwaves for the detection of corrosion under insulation: The effect of bends

    NASA Astrophysics Data System (ADS)

    Jones, R. E.; Simonetti, F.; Lowe, M. J. S.; Bradley, I. P.

    2012-05-01

    The detection of corrosion under insulation is an ongoing challenge in the oil and gas industry. An early warning of areas of pipe at risk of corrosion can be obtained by screening along the length of the pipeline to inspect the insulation layer for the presence of water, as water is a necessary precursor to corrosion. Long-range detection of water volumes can be achieved with microwave signals, using the structure of the clad and insulated pipeline as a coaxial waveguide, with water volumes presenting an impedance contrast and producing reflections of the incident microwave signal. An investigation into what effect bends in the pipeline will have on this inspection technique is presented here.
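The coaxial-waveguide picture above can be made concrete with the standard characteristic-impedance formula for a coaxial line: a water-filled section has a much higher permittivity than dry insulation, so its impedance drops and part of the incident microwave signal is reflected. The geometry and permittivity values below are rough assumptions for illustration, not measurements from the study:

```python
import math

# Sketch of the impedance contrast that produces a reflection at a water
# volume. Z0 = (138 / sqrt(eps_r)) * log10(D/d) ohms is the standard coaxial
# line approximation; diameters (mm) and permittivities are assumed values.

def coax_impedance(outer_d, inner_d, eps_r):
    """Characteristic impedance (ohms) of a coaxial line."""
    return (138.0 / math.sqrt(eps_r)) * math.log10(outer_d / inner_d)

z_dry = coax_impedance(300.0, 100.0, 1.5)   # dry insulation, eps_r ~ 1.5
z_wet = coax_impedance(300.0, 100.0, 40.0)  # partly water-filled, eps_r assumed

# Reflection coefficient at the dry/wet boundary
gamma = (z_wet - z_dry) / (z_wet + z_dry)
```

The strongly negative reflection coefficient is what makes long-range detection of water volumes feasible; bends perturb the waveguide geometry and hence this simple picture, which is the subject of the investigation.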

  7. Accounting Artifacts in High-Throughput Toxicity Assays.

    PubMed

    Hsieh, Jui-Hua

    2016-01-01

    Compound activity identification is the primary goal in high-throughput screening (HTS) assays. However, assay artifacts including both systematic (e.g., compound auto-fluorescence) and nonsystematic (e.g., noise) complicate activity interpretation. In addition, other than the traditional potency parameter, half-maximal effect concentration (EC50), additional activity parameters (e.g., point-of-departure, POD) could be derived from HTS data for activity profiling. A data analysis pipeline has been developed to handle the artifacts and to provide compound activity characterization with either binary or continuous metrics. This chapter outlines the steps in the pipeline using Tox21 glucocorticoid receptor (GR) β-lactamase assays, including the formats to identify either agonists or antagonists, as well as the counter-screen assays for identifying artifacts as examples. The steps can be applied to other lower-throughput assays with concentration-response data.
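The point-of-departure (POD) parameter mentioned above can be sketched as the lowest tested concentration whose response exceeds a noise band estimated from baseline wells. The threshold rule and numbers here are illustrative, not the exact definition used in the Tox21 pipeline:

```python
import statistics

# Illustrative POD derivation: flag the first concentration whose response
# exceeds mean + k * sd of vehicle-control (baseline) responses. The k = 3
# cutoff is an assumption, not the pipeline's published rule.

def point_of_departure(concs, responses, baseline, k=3.0):
    """Return the lowest concentration exceeding the baseline noise band."""
    cutoff = statistics.mean(baseline) + k * statistics.stdev(baseline)
    for c, r in sorted(zip(concs, responses)):
        if r > cutoff:
            return c
    return None  # inactive across the tested range

baseline = [1.0, 1.2, 0.9, 1.1]   # vehicle-control responses
concs = [0.1, 1.0, 10.0, 100.0]   # test concentrations (uM)
responses = [1.1, 1.3, 5.0, 20.0]
pod = point_of_departure(concs, responses, baseline)
```

Unlike EC50, which requires a full curve fit, a POD of this kind is defined even for partial concentration-response curves, which is why it is useful as an additional activity parameter.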

  8. High-Content Microscopy Analysis of Subcellular Structures: Assay Development and Application to Focal Adhesion Quantification.

    PubMed

    Kroll, Torsten; Schmidt, David; Schwanitz, Georg; Ahmad, Mubashir; Hamann, Jana; Schlosser, Corinne; Lin, Yu-Chieh; Böhm, Konrad J; Tuckermann, Jan; Ploubidou, Aspasia

    2016-07-01

High-content analysis (HCA) converts raw light microscopy images to quantitative data through the automated extraction, multiparametric analysis, and classification of the relevant information content. Combined with automated high-throughput image acquisition, HCA applied to the screening of chemicals or RNAi reagents is termed high-content screening (HCS). Its power in quantifying cell phenotypes makes HCA applicable also to routine microscopy. However, developing effective HCA and bioinformatic analysis pipelines for acquisition of biologically meaningful data in HCS is challenging. Here, the step-by-step development of an HCA assay protocol and an HCS bioinformatics analysis pipeline are described. The protocol's power is demonstrated by application to focal adhesion (FA) detection, quantitative analysis of multiple FA features, and functional annotation of signaling pathways regulating FA size, using primary data of a published RNAi screen. The assay and the underlying strategy are aimed at researchers performing microscopy-based quantitative analysis of subcellular features, on a small scale or in large HCS experiments. © 2016 by John Wiley & Sons, Inc.

  9. High speed quantitative digital microscopy

    NASA Technical Reports Server (NTRS)

    Castleman, K. R.; Price, K. H.; Eskenazi, R.; Ovadya, M. M.; Navon, M. A.

    1984-01-01

    Modern digital image processing hardware makes possible quantitative analysis of microscope images at high speed. This paper describes an application to automatic screening for cervical cancer. The system uses twelve MC6809 microprocessors arranged in a pipeline multiprocessor configuration. Each processor executes one part of the algorithm on each cell image as it passes through the pipeline. Each processor communicates with its upstream and downstream neighbors via shared two-port memory. Thus no time is devoted to input-output operations as such. This configuration is expected to be at least ten times faster than previous systems.
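The pipeline-multiprocessor arrangement described above can be sketched in software: each stage runs in its own worker and hands results downstream through a shared queue, analogous to the two-port memories linking the MC6809 processors. The stage functions below are placeholders for the per-cell analysis steps:

```python
import queue
import threading

# Sketch of pipeline parallelism: one worker per stage, connected by queues.
# A None sentinel shuts each stage down and propagates downstream.

def stage(fn, q_in, q_out):
    while True:
        item = q_in.get()
        if item is None:
            q_out.put(None)
            return
        q_out.put(fn(item))

q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
workers = [threading.Thread(target=stage, args=(f, qi, qo))
           for f, qi, qo in [(lambda x: x * 2, q0, q1),    # placeholder stage 1
                             (lambda x: x + 1, q1, q2)]]   # placeholder stage 2
for w in workers:
    w.start()
for item in [1, 2, 3]:   # cell images entering the pipeline
    q0.put(item)
q0.put(None)

results = []
while (out := q2.get()) is not None:
    results.append(out)
for w in workers:
    w.join()
```

As in the hardware version, throughput is set by the slowest stage rather than by the sum of all stages, since every stage works on a different cell image concurrently.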

  10. Ligand Fishing: A Remarkable Strategy for Discovering Bioactive Compounds from Complex Mixture of Natural Products.

    PubMed

    Zhuo, Rongjie; Liu, Hao; Liu, Ningning; Wang, Yi

    2016-11-11

Identification of active compounds from natural products is a critical and challenging task in drug discovery pipelines. Besides commonly used bio-guided screening approaches, affinity selection strategies coupled with liquid chromatography or mass spectrometry, known as ligand fishing, have been gaining increasing interest from researchers. In this review, we summarized this emerging strategy and categorized the methods as off-line or on-line according to their features. The separation principles of ligand fishing were introduced based on distinct analytical techniques, including biochromatography, capillary electrophoresis, ultrafiltration, equilibrium dialysis, microdialysis, and magnetic beads. The applications of ligand fishing approaches in the discovery of lead compounds were reviewed. Most ligand fishing methods display specificity and high efficiency, and require less sample pretreatment, which makes them especially suitable for screening active compounds from complex mixtures of natural products. We also summarized the applications of ligand fishing in the modernization of Traditional Chinese Medicine (TCM), and propose some perspectives on this remarkable technique.

  11. Framework for computationally-predicted AOPs

    EPA Science Inventory

    Framework for computationally-predicted AOPs Given that there are a vast number of existing and new chemicals in the commercial pipeline, emphasis is placed on developing high throughput screening (HTS) methods for hazard prediction. Adverse Outcome Pathways (AOPs) represent a...

  12. Stepwise high-throughput virtual screening of Rho kinase inhibitors from natural product library and potential therapeutics for pulmonary hypertension.

    PubMed

    Su, Hao; Yan, Ji; Xu, Jian; Fan, Xi-Zhen; Sun, Xian-Lin; Chen, Kang-Yu

    2015-08-01

Pulmonary hypertension (PH) is a devastating disease characterized by progressive elevation of pulmonary arterial pressure and vascular resistance due to pulmonary vasoconstriction and vessel remodeling. The activation of the RhoA/Rho-kinase (ROCK) pathway plays a central role in the pathologic progression of PH, and thus Rho kinase, an essential effector of the ROCK pathway, is considered a potential therapeutic target to attenuate PH. In the current study, a synthetic pipeline was used to discover new potent Rho kinase inhibitors from various natural products. In the pipeline, stepwise high-throughput virtual screening, quantitative structure-activity relationship (QSAR)-based rescoring, and a kinase assay were integrated. The screening was performed against a structurally diverse, drug-like natural product library, from which six identified compounds were tested to determine their inhibitory potencies against Rho kinase using a standard kinase assay protocol. With this scheme, we successfully identified two potent Rho kinase inhibitors, namely phloretin and baicalein, with activity values of IC50 = 0.22 and 0.95 μM, respectively. Structural examination suggested that complicated networks of non-bonded interactions such as hydrogen bonding, hydrophobic forces, and van der Waals contacts are formed across the complex interfaces between Rho kinase and the screened compounds.
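The final step of such a stepwise screen, combining a docking score with a QSAR-based rescore into one ranking, might look as follows. The weights, score values, and the second compound name are hypothetical, not from the study:

```python
# Illustrative rescoring sketch: candidates that pass the docking filter are
# re-ranked by a weighted combination of docking score (lower = better) and
# QSAR-predicted activity (higher = better). Weights and data are assumed.

def rescore(candidates, w_dock=0.5, w_qsar=0.5):
    """candidates: dicts with 'name', 'dock', 'qsar'; returns ranked names."""
    ranked = sorted(candidates,
                    key=lambda c: w_dock * (-c["dock"]) + w_qsar * c["qsar"],
                    reverse=True)
    return [c["name"] for c in ranked]

hits = [{"name": "phloretin", "dock": -9.1, "qsar": 0.8},
        {"name": "cmpdX",     "dock": -7.0, "qsar": 0.3}]  # hypothetical
order = rescore(hits)
```

Only the top-ranked candidates from such a combined score would then be forwarded to the kinase assay for experimental confirmation.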

  13. A Pipeline for High-Throughput Concentration Response Modeling of Gene Expression for Toxicogenomics

    PubMed Central

    House, John S.; Grimm, Fabian A.; Jima, Dereje D.; Zhou, Yi-Hui; Rusyn, Ivan; Wright, Fred A.

    2017-01-01

    Cell-based assays are an attractive option to measure gene expression response to exposure, but the cost of whole-transcriptome RNA sequencing has been a barrier to the use of gene expression profiling for in vitro toxicity screening. In addition, standard RNA sequencing adds variability due to variable transcript length and amplification. Targeted probe-sequencing technologies such as TempO-Seq, with transcriptomic representation that can vary from hundreds of genes to the entire transcriptome, may reduce some components of variation. Analyses of high-throughput toxicogenomics data require renewed attention to read-calling algorithms and simplified dose–response modeling for datasets with relatively few samples. Using data from induced pluripotent stem cell-derived cardiomyocytes treated with chemicals at varying concentrations, we describe here and make available a pipeline for handling expression data generated by TempO-Seq to align reads, clean and normalize raw count data, identify differentially expressed genes, and calculate transcriptomic concentration–response points of departure. The methods are extensible to other forms of concentration–response gene-expression data, and we discuss the utility of the methods for assessing variation in susceptibility and the diseased cellular state. PMID:29163636
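One of the cleaning and normalization steps such a pipeline performs can be sketched as converting raw probe counts to counts-per-million (CPM), so samples sequenced to different depths become comparable. The gene names are placeholders, and this is only one standard normalization, not necessarily the pipeline's exact method:

```python
# Minimal CPM normalization sketch: scale each gene's raw count by the
# sample's total count, per million reads. Gene names are placeholders.

def cpm(counts):
    """counts: dict of gene -> raw count for one sample; returns CPM values."""
    total = sum(counts.values())
    return {g: c * 1e6 / total for g, c in counts.items()}

sample = {"GENE_A": 500, "GENE_B": 1500}
norm = cpm(sample)
```

Downstream steps (differential expression, concentration-response modeling) then operate on normalized values rather than raw counts, which removes depth-driven variation between libraries.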

  14. Plant cell wall glycosyltransferases: High-throughput recombinant expression screening and general requirements for these challenging enzymes

    DOE PAGES

    Welner, Ditte Hededam; Shin, David; Tomaleri, Giovani P.; ...

    2017-06-09

Molecular characterization of plant cell wall glycosyltransferases is a critical step towards understanding the biosynthesis of the complex plant cell wall, and ultimately for efficient engineering of biofuel and agricultural crops. The majority of these enzymes have proven very difficult to obtain in the needed amount and purity for such molecular studies, and recombinant cell wall glycosyltransferase production efforts have largely failed. A daunting number of strategies can be employed to overcome this challenge, including optimization of DNA and protein sequences, choice of expression organism, expression conditions, coexpression partners, purification methods, and optimization of protein solubility and stability. Hence researchers are presented with thousands of potential conditions to test. Ultimately, the subset of conditions that will be sampled depends on practical considerations and prior knowledge of the enzyme(s) being studied. We have developed a rational approach to this process. We devise a pipeline comprising in silico selection of targets and construct design, high-throughput expression screening, target enrichment, and hit identification. We have applied this pipeline to a test set of Arabidopsis thaliana cell wall glycosyltransferases known to be challenging to obtain in soluble form, as well as to a library of cell wall glycosyltransferases from other plants including agricultural and biofuel crops. The screening results suggest that recombinant cell wall glycosyltransferases in general have a very low soluble:insoluble ratio in lysates from heterologous expression cultures, and that co-expression of chaperones as well as lysis buffer optimization can increase this ratio. We have applied the identified preferred conditions to Reversibly Glycosylated Polypeptide 1 from Arabidopsis thaliana, and processed this enzyme to near-purity in unprecedented milligram amounts.
The obtained preparation of Reversibly Glycosylated Polypeptide 1 has the expected arabinopyranose mutase and autoglycosylation activities. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

  15. Plant cell wall glycosyltransferases: High-throughput recombinant expression screening and general requirements for these challenging enzymes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welner, Ditte Hededam; Shin, David; Tomaleri, Giovani P.

Molecular characterization of plant cell wall glycosyltransferases is a critical step towards understanding the biosynthesis of the complex plant cell wall, and ultimately for efficient engineering of biofuel and agricultural crops. The majority of these enzymes have proven very difficult to obtain in the needed amount and purity for such molecular studies, and recombinant cell wall glycosyltransferase production efforts have largely failed. A daunting number of strategies can be employed to overcome this challenge, including optimization of DNA and protein sequences, choice of expression organism, expression conditions, coexpression partners, purification methods, and optimization of protein solubility and stability. Hence researchers are presented with thousands of potential conditions to test. Ultimately, the subset of conditions that will be sampled depends on practical considerations and prior knowledge of the enzyme(s) being studied. We have developed a rational approach to this process. We devise a pipeline comprising in silico selection of targets and construct design, high-throughput expression screening, target enrichment, and hit identification. We have applied this pipeline to a test set of Arabidopsis thaliana cell wall glycosyltransferases known to be challenging to obtain in soluble form, as well as to a library of cell wall glycosyltransferases from other plants including agricultural and biofuel crops. The screening results suggest that recombinant cell wall glycosyltransferases in general have a very low soluble:insoluble ratio in lysates from heterologous expression cultures, and that co-expression of chaperones as well as lysis buffer optimization can increase this ratio. We have applied the identified preferred conditions to Reversibly Glycosylated Polypeptide 1 from Arabidopsis thaliana, and processed this enzyme to near-purity in unprecedented milligram amounts.
The obtained preparation of Reversibly Glycosylated Polypeptide 1 has the expected arabinopyranose mutase and autoglycosylation activities. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

  16. Reducing the Bottleneck in Discovery of Novel Antibiotics.

    PubMed

    Jones, Marcus B; Nierman, William C; Shan, Yue; Frank, Bryan C; Spoering, Amy; Ling, Losee; Peoples, Aaron; Zullo, Ashley; Lewis, Kim; Nelson, Karen E

    2017-04-01

Most antibiotics were discovered by screening soil actinomycetes, but the efficiency of the discovery platform collapsed in the 1960s. By now, more than 3000 antibiotics have been described, and most of the current discovery effort is focused on the rediscovery of known compounds, making the approach impractical. The last marketed broad-spectrum antibiotics discovered were daptomycin, linezolid, and fidaxomicin. The current state of the art in the development of new anti-infectives is a non-existent pipeline in the absence of a discovery platform. This is particularly troubling given the emergence of pan-resistant pathogens. The current practice for dealing with the background of known compounds is chemical dereplication of extracts to assess the relative novelty of the compounds they contain. Dereplication typically requires scale-up, extraction, and often fractionation before an accurate mass and structure can be produced by MS analysis in combination with 2D NMR. Here, we describe a transcriptome analysis approach using RNA sequencing (RNASeq) to identify promising novel antimicrobial compounds from microbial extracts. Our pipeline permits identification of antimicrobial compounds that produce distinct transcription profiles using unfractionated cell extracts. This efficient pipeline will eliminate the requirement for purification and structure determination of compounds from extracts and will facilitate high-throughput screening of cell extracts for the identification of novel compounds.

  17. Implementation of Cloud based next generation sequencing data analysis in a clinical laboratory.

    PubMed

    Onsongo, Getiria; Erdmann, Jesse; Spears, Michael D; Chilton, John; Beckman, Kenneth B; Hauge, Adam; Yohe, Sophia; Schomaker, Matthew; Bower, Matthew; Silverstein, Kevin A T; Thyagarajan, Bharat

    2014-05-23

The introduction of next generation sequencing (NGS) has revolutionized molecular diagnostics, though several challenges remain that limit the widespread adoption of NGS testing in clinical practice. One such difficulty is the development of a robust bioinformatics pipeline that can handle the volume of data generated by high-throughput sequencing in a cost-effective manner. Analysis of sequencing data typically requires a substantial level of computing power that is often cost-prohibitive to most clinical diagnostics laboratories. To address this challenge, our institution has developed a Galaxy-based data analysis pipeline which relies on a web-based, cloud-computing infrastructure to process NGS data and identify genetic variants. It provides the additional flexibility needed to control storage costs, resulting in a pipeline that is cost-effective on a per-sample basis, and it does not require EBS disk usage to run a sample. We demonstrate the validation and feasibility of implementing this bioinformatics pipeline in a molecular diagnostics laboratory. Four samples were analyzed in duplicate pairs and showed 100% concordance in the mutations identified. This pipeline is currently being used in the clinic, and all identified pathogenic variants were confirmed using Sanger sequencing, further validating the software.
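The duplicate-pair concordance check described above reduces to comparing the variant sets called in two runs of the same sample. The variant identifiers below are illustrative:

```python
# Sketch of duplicate-pair concordance: fraction of the union of variant
# calls shared by both replicate runs. Variant strings are illustrative.

def concordance(run_a, run_b):
    """Jaccard-style concordance between two sets of variant calls."""
    a, b = set(run_a), set(run_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

rep1 = {"KRAS:p.G12D", "TP53:p.R175H"}
rep2 = {"KRAS:p.G12D", "TP53:p.R175H"}
full = concordance(rep1, rep2)   # identical call sets
```

A value of 1.0 corresponds to the 100% concordance reported for the four validation samples; anything lower would flag discordant calls for manual review.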

  18. Remaining Sites Verification Package for the 100-F-26:12, 1.8-m (72-in.) Main Process Sewer Pipeline, Waste Site Reclassification Form 2007-034

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. M. Capron

    2008-04-29

    The 100-F-26:12 waste site was an approximately 308-m-long, 1.8-m-diameter east-west-trending reinforced concrete pipe that joined the North Process Sewer Pipelines (100-F-26:1) and the South Process Pipelines (100-F-26:4) with the 1.8-m reactor cooling water effluent pipeline (100-F-19). In accordance with this evaluation, the verification sampling results support a reclassification of this site to Interim Closed Out. The results of verification sampling show that residual contaminant concentrations do not preclude any future uses and allow for unrestricted use of shallow zone soils. The results also demonstrate that residual contaminant concentrations are protective of groundwater and the Columbia River.

  19. IMPACT: a whole-exome sequencing analysis pipeline for integrating molecular profiles with actionable therapeutics in clinical samples

    PubMed Central

    Hintzsche, Jennifer; Kim, Jihye; Yadav, Vinod; Amato, Carol; Robinson, Steven E; Seelenfreund, Eric; Shellman, Yiqun; Wisell, Joshua; Applegate, Allison; McCarter, Martin; Box, Neil; Tentler, John; De, Subhajyoti

    2016-01-01

Objective Currently, there is a disconnect between finding a patient’s relevant molecular profile and predicting actionable therapeutics. Here we develop and implement the Integrating Molecular Profiles with Actionable Therapeutics (IMPACT) analysis pipeline, linking variants detected from whole-exome sequencing (WES) to actionable therapeutics. Methods and materials The IMPACT pipeline contains 4 analytical modules: detecting somatic variants, calling copy number alterations, predicting drugs against deleterious variants, and analyzing tumor heterogeneity. We tested the IMPACT pipeline on whole-exome sequencing data in The Cancer Genome Atlas (TCGA) lung adenocarcinoma samples with known EGFR mutations. We also used IMPACT to analyze melanoma patient tumor samples before treatment, after BRAF-inhibitor treatment, and after BRAF- and MEK-inhibitor treatment. Results IMPACT correctly identified known EGFR mutations in the TCGA lung adenocarcinoma samples and linked them to the appropriate Food and Drug Administration (FDA)-approved EGFR inhibitors. For the melanoma patient samples, we identified NRAS p.Q61K as an acquired resistance mutation to BRAF-inhibitor treatment. We also identified CDKN2A deletion as a novel acquired resistance mutation to BRAFi/MEKi inhibition. The IMPACT analysis pipeline links these somatic variants to actionable therapeutics. We observed the clonal dynamics in the tumor samples after various treatments. We showed that IMPACT not only helped in successful prioritization of clinically relevant variants but also linked these variations to possible targeted therapies. Conclusion IMPACT provides a new bioinformatics strategy to delineate candidate somatic variants and actionable therapies. This approach can be applied to other patient tumor samples to discover effective drug targets for personalized medicine. IMPACT is publicly available at http://tanlab.ucdenver.edu/IMPACT. PMID:27026619
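The variant-to-therapeutic linking step can be sketched as a lookup of detected somatic variants in a curated gene/variant-to-approved-drug table. The tiny table below is an illustrative stand-in, not IMPACT's actual database:

```python
# Sketch of linking somatic variants to approved therapeutics via a curated
# lookup table. The table is a minimal illustrative stand-in; a real pipeline
# would query a maintained drug-gene knowledge base.

DRUG_TABLE = {
    ("EGFR", "p.L858R"): ["erlotinib", "gefitinib"],
    ("BRAF", "p.V600E"): ["vemurafenib", "dabrafenib"],
}

def actionable(variants):
    """variants: iterable of (gene, protein_change); returns matched drugs."""
    return {v: DRUG_TABLE[v] for v in variants if v in DRUG_TABLE}

hits = actionable([("EGFR", "p.L858R"), ("NRAS", "p.Q61K")])
```

Variants with no table entry (such as the NRAS resistance mutation here) fall through the lookup and would instead inform resistance interpretation rather than drug selection.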

  20. IMPACT: a whole-exome sequencing analysis pipeline for integrating molecular profiles with actionable therapeutics in clinical samples.

    PubMed

    Hintzsche, Jennifer; Kim, Jihye; Yadav, Vinod; Amato, Carol; Robinson, Steven E; Seelenfreund, Eric; Shellman, Yiqun; Wisell, Joshua; Applegate, Allison; McCarter, Martin; Box, Neil; Tentler, John; De, Subhajyoti; Robinson, William A; Tan, Aik Choon

    2016-07-01

Currently, there is a disconnect between finding a patient's relevant molecular profile and predicting actionable therapeutics. Here we develop and implement the Integrating Molecular Profiles with Actionable Therapeutics (IMPACT) analysis pipeline, linking variants detected from whole-exome sequencing (WES) to actionable therapeutics. The IMPACT pipeline contains 4 analytical modules: detecting somatic variants, calling copy number alterations, predicting drugs against deleterious variants, and analyzing tumor heterogeneity. We tested the IMPACT pipeline on whole-exome sequencing data in The Cancer Genome Atlas (TCGA) lung adenocarcinoma samples with known EGFR mutations. We also used IMPACT to analyze melanoma patient tumor samples before treatment, after BRAF-inhibitor treatment, and after BRAF- and MEK-inhibitor treatment. IMPACT correctly identified known EGFR mutations in the TCGA lung adenocarcinoma samples and linked them to the appropriate Food and Drug Administration (FDA)-approved EGFR inhibitors. For the melanoma patient samples, we identified NRAS p.Q61K as an acquired resistance mutation to BRAF-inhibitor treatment. We also identified CDKN2A deletion as a novel acquired resistance mutation to BRAFi/MEKi inhibition. The IMPACT analysis pipeline links these somatic variants to actionable therapeutics. We observed the clonal dynamics in the tumor samples after various treatments. We showed that IMPACT not only helped in successful prioritization of clinically relevant variants but also linked these variations to possible targeted therapies. IMPACT provides a new bioinformatics strategy to delineate candidate somatic variants and actionable therapies. This approach can be applied to other patient tumor samples to discover effective drug targets for personalized medicine. IMPACT is publicly available at http://tanlab.ucdenver.edu/IMPACT. © The Author 2016.
Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. An Exploratory Study of the Effect of Screen Size and Resolution on the Legibility of Graphics in Automated Job Performance Aids. Final Report.

    ERIC Educational Resources Information Center

    Dwyer, Daniel J.

    Designed to assess the effect of alternative display (CRT) screen sizes and resolution levels on user ability to identify and locate printed circuit (PC) board points, this study is the first in a protracted research program on the legibility of graphics in computer-based job aids. Air Force maintenance training pipeline students (35 male and 1…

  2. DEVELOPMENT AND VERIFICATION OF A SCREENING MODEL FOR SURFACE SPREADING OF PETROLEUM

    EPA Science Inventory

    Overflows and leakage from aboveground storage tanks and pipelines carrying crude oil and petroleum products occur frequently. The spilled hydrocarbons pose environmental threats by contaminating the surrounding soil and the underlying ground water. Predicting the fate and transp...

  3. Routine DNA testing

    USDA-ARS?s Scientific Manuscript database

Routine DNA testing. It’s done once you’ve Marker-Assisted Breeding Pipelined promising Quantitative Trait Loci within your own breeding program and thereby established the performance-predictive power of each DNA test for your germplasm under your conditions. By then you are ready to screen your par...

  4. Transforming microbial genotyping: a robotic pipeline for genotyping bacterial strains.

    PubMed

    O'Farrell, Brian; Haase, Jana K; Velayudhan, Vimalkumar; Murphy, Ronan A; Achtman, Mark

    2012-01-01

Microbial genotyping increasingly deals with large numbers of samples, and data are commonly evaluated by unstructured approaches, such as spreadsheets. The efficiency, reliability and throughput of genotyping would benefit from the automation of manual manipulations within the context of sophisticated data storage. We developed a medium-throughput genotyping pipeline for MultiLocus Sequence Typing (MLST) of bacterial pathogens. This pipeline was implemented through a combination of four automated liquid handling systems, a Laboratory Information Management System (LIMS) consisting of a variety of dedicated commercial operating systems and programs, including a Sample Management System, plus numerous Python scripts. All tubes and microwell racks were bar-coded and their locations and status were recorded in the LIMS. We also created a hierarchical set of items that could be used to represent bacterial species, their products and experiments. The LIMS allowed reliable, semi-automated, traceable bacterial genotyping from initial single colony isolation and sub-cultivation through DNA extraction and normalization to PCRs, sequencing and MLST sequence trace evaluation. We also describe robotic sequencing to facilitate cherry-picking of sequence dropouts. This pipeline is user-friendly, with a throughput of 96 strains within 10 working days at a total cost of < €25 per strain. Since developing this pipeline, >200,000 items were processed by two to three people. Our sophisticated automated pipeline can be implemented by a small microbiology group without extensive external support, and provides a general framework for semi-automated bacterial genotyping of large numbers of samples at low cost.
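The final step such an MLST pipeline automates can be sketched as follows: the allele numbers called at the seven housekeeping loci form an allelic profile, which is looked up in a sequence-type (ST) table. The profiles and ST labels below are illustrative, not from a real MLST scheme:

```python
# Sketch of MLST sequence-type assignment: a 7-locus allelic profile is
# looked up in an ST table; unseen profiles denote a novel sequence type.
# Profiles and ST names are hypothetical.

ST_TABLE = {
    (1, 1, 1, 1, 1, 1, 1): "ST-1",
    (2, 1, 3, 1, 1, 1, 5): "ST-42",
}

def sequence_type(alleles):
    """alleles: tuple of 7 allele numbers in locus order."""
    return ST_TABLE.get(tuple(alleles), "novel ST")

st = sequence_type((2, 1, 3, 1, 1, 1, 5))
```

In the pipeline, the allele numbers themselves come from the automated trace evaluation step, so the lookup closes the loop from colony to genotype without manual spreadsheet handling.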

  5. Transforming Microbial Genotyping: A Robotic Pipeline for Genotyping Bacterial Strains

    PubMed Central

    Velayudhan, Vimalkumar; Murphy, Ronan A.; Achtman, Mark

    2012-01-01

Microbial genotyping increasingly deals with large numbers of samples, and data are commonly evaluated by unstructured approaches, such as spreadsheets. The efficiency, reliability and throughput of genotyping would benefit from the automation of manual manipulations within the context of sophisticated data storage. We developed a medium-throughput genotyping pipeline for MultiLocus Sequence Typing (MLST) of bacterial pathogens. This pipeline was implemented through a combination of four automated liquid handling systems, a Laboratory Information Management System (LIMS) consisting of a variety of dedicated commercial operating systems and programs, including a Sample Management System, plus numerous Python scripts. All tubes and microwell racks were bar-coded and their locations and status were recorded in the LIMS. We also created a hierarchical set of items that could be used to represent bacterial species, their products and experiments. The LIMS allowed reliable, semi-automated, traceable bacterial genotyping from initial single colony isolation and sub-cultivation through DNA extraction and normalization to PCRs, sequencing and MLST sequence trace evaluation. We also describe robotic sequencing to facilitate cherry-picking of sequence dropouts. This pipeline is user-friendly, with a throughput of 96 strains within 10 working days at a total cost of < €25 per strain. Since developing this pipeline, >200,000 items were processed by two to three people. Our sophisticated automated pipeline can be implemented by a small microbiology group without extensive external support, and provides a general framework for semi-automated bacterial genotyping of large numbers of samples at low cost. PMID:23144721

  6. @TOME-2: a new pipeline for comparative modeling of protein-ligand complexes.

    PubMed

    Pons, Jean-Luc; Labesse, Gilles

    2009-07-01

@TOME 2.0 is a new web pipeline dedicated to protein structure modeling and small ligand docking based on comparative analyses. @TOME 2.0 allows fold recognition, template selection, structural alignment editing, structure comparisons, 3D-model building and evaluation. These tasks are routinely used in sequence analyses for structure prediction. In our pipeline the necessary software is efficiently interconnected in an original manner to accelerate all the processes. Furthermore, we have also connected comparative docking of small ligands, which is performed using protein-protein superposition. The input is a simple protein sequence in one-letter code with no comment. The resulting 3D model, protein-ligand complexes and structural alignments can be visualized through dedicated Web interfaces or can be downloaded for further studies. These original features will aid in the functional annotation of proteins and the selection of templates for molecular modeling and virtual screening. Several examples are described to highlight some of the new functionalities provided by this pipeline. The server and its documentation are freely available at http://abcis.cbs.cnrs.fr/AT2/

  8. Biocorrosive activity analysis of the oil pipeline soil in the Khanty-Mansiysk Autonomous Region of Ugra and the Krasnodar Territory of the Russian Federation

    NASA Astrophysics Data System (ADS)

    Chesnokova, M. G.; Shalay, V. V.; Kriga, A. S.

    2017-08-01

    The purpose of the study was to assess the biocorrosive activity of soil along oil pipelines in the Khanty-Mansiysk Autonomous Region of Yugra and the Krasnodar Territory of the Russian Federation, which results from a complex of interacting factors, and to analyze the content of sulfate-reducing and thionic bacteria. The number of sulfur-cycle bacteria (autotrophic thionic and sulfate-reducing bacteria), the total concentrations of sulfur and iron in soil samples adjacent to the surface of underground pipelines, and the specific electrical resistivity of the soil were determined. A criterion for the biocorrosive activity of the soil (CBA) was established. The study revealed distinct features of soil biocorrosive activity in the pipeline construction areas of the two territories. In the soil along the Krasnodar Territory pipeline, aggressive samples were recorded in 5.75% of cases, samples with moderate aggressiveness in 49.43%, samples with weak aggressiveness in 42.53%, and samples with potential aggressiveness in 2.30%. In the Khanty-Mansiysk Autonomous Region of Yugra, samples with weak soil aggressiveness prevailed (55.17% of cases), followed by samples with moderate aggressiveness (34.5% of cases). Multiple regression analysis in the system of variables "factors of soil biocorrosive activity" showed that the indicator "content of thiobacteria in soil" could be modeled informatively. The results demonstrate the need for dynamic monitoring and the development of preventive measures against biocorrosion.

  9. SIMPLEX: Cloud-Enabled Pipeline for the Comprehensive Analysis of Exome Sequencing Data

    PubMed Central

    Fischer, Maria; Snajder, Rene; Pabinger, Stephan; Dander, Andreas; Schossig, Anna; Zschocke, Johannes; Trajanoski, Zlatko; Stocker, Gernot

    2012-01-01

    In recent studies, exome sequencing has proven to be a successful screening tool for the identification of candidate genes causing rare genetic diseases. Although underlying targeted sequencing methods are well established, necessary data handling and focused, structured analysis still remain demanding tasks. Here, we present a cloud-enabled autonomous analysis pipeline, which comprises the complete exome analysis workflow. The pipeline combines several in-house developed and published applications to perform the following steps: (a) initial quality control, (b) intelligent data filtering and pre-processing, (c) sequence alignment to a reference genome, (d) SNP and DIP detection, (e) functional annotation of variants using different approaches, and (f) detailed report generation during various stages of the workflow. The pipeline connects the selected analysis steps, exposes all available parameters for customized usage, performs required data handling, and distributes computationally expensive tasks either on a dedicated high-performance computing infrastructure or on the Amazon cloud environment (EC2). The presented application has already been used in several research projects including studies to elucidate the role of rare genetic diseases. The pipeline is continuously tested and is publicly available under the GPL as a VirtualBox or Cloud image at http://simplex.i-med.ac.at; additional supplementary data is provided at http://www.icbi.at/exome. PMID:22870267
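    The staged workflow (a)-(f) described in this record can be sketched as a simple sequential pipeline driver. The step names mirror the abstract's stages, but the function bodies are placeholders, not the actual SIMPLEX implementation:

```python
# Illustrative sketch of a staged exome-analysis driver; step names follow
# stages (a)-(f) from the abstract, with stage (f) modeled as per-stage reporting.
def run_pipeline(data, steps):
    """Run each step in order, threading a result dict through and logging each stage."""
    for name, step in steps:
        data = step(data)
        data.setdefault("report", []).append(name)  # (f) report generation per stage
    return data

steps = [
    ("quality_control",   lambda d: {**d, "qc": "passed"}),               # (a)
    ("filter_preprocess", lambda d: {**d, "reads": d["reads"][:2]}),      # (b)
    ("align",             lambda d: {**d, "bam": "aligned.bam"}),         # (c)
    ("call_variants",     lambda d: {**d, "variants": ["chr1:123A>G"]}),  # (d)
    ("annotate",          lambda d: {**d, "annotated": True}),            # (e)
]

result = run_pipeline({"reads": ["r1", "r2", "r3"], "report": []}, steps)
print(result["report"])
```

    Exposing the step list as data, rather than hard-coding the call sequence, is what lets a real pipeline expose all parameters for customized usage and dispatch expensive stages to a cluster or cloud backend.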

  10. 75 FR 33883 - Notice of Extension of Public Comment Period for the Proposed Keystone XL Pipeline Project; Draft...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-15

    ... mail can be delayed due to security screening. Fax to: (202) 647-1052, attention Elizabeth Orlando. FOR... by fax at (202) 647-1052. You may also visit the Project Web site: http://www.keystonepipeline-xl...

  11. 75 FR 22890 - Notice of Extension of Public Comment Period for the Proposed Keystone XL Pipeline Project Draft...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-30

    ... screening. Fax to: (202) 647-1052, attention Elizabeth Orlando. FOR FURTHER INFORMATION CONTACT: For... of State, Washington, DC 20520, or by telephone (202) 647- 4284, or by fax at (202) 647-1052. You may...

  12. Developing science gateways for drug discovery in a grid environment.

    PubMed

    Pérez-Sánchez, Horacio; Rezaei, Vahid; Mezhuyev, Vitaliy; Man, Duhu; Peña-García, Jorge; den-Haan, Helena; Gesing, Sandra

    2016-01-01

    Methods for in silico screening of large databases of molecules increasingly complement and replace experimental techniques to discover novel compounds to combat diseases. As these techniques become more complex and computationally costly, providing the life sciences research community with a convenient tool for high-throughput virtual screening on distributed computing resources becomes an increasing challenge. To this end, we recently integrated the biophysics-based drug-screening program FlexScreen into a service, applicable for large-scale parallel screening and reusable in the context of scientific workflows. Our implementation is based on Pipeline Pilot and the Simple Object Access Protocol and provides an easy-to-use graphical user interface to construct complex workflows, which can be executed on distributed computing resources, thus accelerating the throughput by several orders of magnitude.

  13. Development of a Pipeline for Exploratory Metabolic Profiling of Infant Urine

    PubMed Central

    Jackson, Frances; Georgakopoulou, Nancy; Kaluarachchi, Manuja; Kyriakides, Michael; Andreas, Nicholas; Przysiezna, Natalia; Hyde, Matthew J.; Modi, Neena; Nicholson, Jeremy K.; Wijeyesekera, Anisha; Holmes, Elaine

    2017-01-01

    Numerous metabolic profiling pipelines have been developed to characterize the composition of human biofluids and tissues, the vast majority of these being for studies in adults. To accommodate limited sample volume and to take into account the compositional differences between adult and infant biofluids, we developed and optimized sample handling and analytical procedures for studying urine from newborns. A robust pipeline for metabolic profiling using NMR spectroscopy was established, encompassing sample collection, preparation, spectroscopic measurement, and computational analysis. Longitudinal samples were collected from five infants from birth until 14 months of age. Methods of extraction and effects of freezing and sample dilution were assessed, and urinary contaminants from breakdown of polymers in a range of diapers and cotton wool balls were identified and compared, including propylene glycol, acrylic acid, and tert-butanol. Finally, assessment of urinary profiles obtained over the first few weeks of life revealed a dramatic change in composition, with concentrations of phenols, amino acids, and betaine altering systematically over the first few months of life. Therefore, neonatal samples require more stringent standardization of experimental design, sample handling, and analysis compared to that of adult samples to accommodate the variability and limited sample volume. PMID:27476583

  14. Microbial Methane Production Associated with Carbon Steel Corrosion in a Nigerian Oil Field

    PubMed Central

    Mand, Jaspreet; Park, Hyung S.; Okoro, Chuma; Lomans, Bart P.; Smith, Seun; Chiejina, Leo; Voordouw, Gerrit

    2016-01-01

    Microbially influenced corrosion (MIC) in oil field pipeline systems can be attributed to many different types of hydrogenotrophic microorganisms including sulfate reducers, methanogens and acetogens. Samples from a low temperature oil reservoir in Nigeria were analyzed using DNA pyrotag sequencing. The microbial community compositions of these samples revealed an abundance of anaerobic methanogenic archaea. Activity of methanogens was demonstrated by incubating samples anaerobically in a basal salts medium, in the presence of carbon steel and carbon dioxide. Methane formation was measured in all enrichments and correlated with metal weight loss. Methanogens were prominently represented in pipeline solids samples, scraped from the inside of a pipeline, comprising over 85% of all pyrosequencing reads. Methane production was only witnessed when carbon steel beads were added to these pipeline solids samples, indicating that no methane was formed as a result of degradation of the oil organics present in these samples. These results were compared to those obtained for samples taken from a low temperature oil field in Canada, which had been incubated with oil, either in the presence or in the absence of carbon steel. Again, methanogens present in these samples catalyzed methane production only when carbon steel was present. Moreover, acetate production was also found in these enrichments only in the presence of carbon steel. From these studies it appears that carbon steel, not oil organics, was the predominant electron donor for acetate production and methane formation in these low temperature oil fields, indicating that the methanogens and acetogens found may contribute significantly to MIC. PMID:26793176

  15. CloVR-ITS: Automated internal transcribed spacer amplicon sequence analysis pipeline for the characterization of fungal microbiota

    PubMed Central

    2013-01-01

    Background: Besides the development of comprehensive tools for high-throughput 16S ribosomal RNA amplicon sequence analysis, there exists a growing need for protocols emphasizing alternative phylogenetic markers such as those representing eukaryotic organisms. Results: Here we introduce CloVR-ITS, an automated pipeline for comparative analysis of internal transcribed spacer (ITS) pyrosequences amplified from metagenomic DNA isolates and representing fungal species. This pipeline performs a variety of steps similar to those commonly used for 16S rRNA amplicon sequence analysis, including preprocessing for quality, chimera detection, clustering of sequences into operational taxonomic units (OTUs), taxonomic assignment (at class, order, family, genus, and species levels) and statistical analysis of sample groups of interest based on user-provided information. Using ITS amplicon pyrosequencing data from a previous human gastric fluid study, we demonstrate the utility of CloVR-ITS for fungal microbiota analysis and provide runtime and cost examples, including analysis of extremely large datasets on the cloud. We show that the largest fractions of reads from the stomach fluid samples were assigned to Dothideomycetes, Saccharomycetes, Agaricomycetes and Sordariomycetes but that all samples were dominated by sequences that could not be taxonomically classified. Representatives of the Candida genus were identified in all samples, most notably C. quercitrusa, while sequence reads assigned to the Aspergillus genus were only identified in a subset of samples. CloVR-ITS is made available as a pre-installed, automated, and portable software pipeline for cloud-friendly execution as part of the CloVR virtual machine package (http://clovr.org). Conclusion: The CloVR-ITS pipeline provides fungal microbiota analysis that can be complementary to bacterial 16S rRNA and total metagenome sequence analysis, allowing for more comprehensive studies of environmental and host-associated microbial communities. PMID:24451270

  16. A 9-Bit 50 MSPS Quadrature Parallel Pipeline ADC for Communication Receiver Application

    NASA Astrophysics Data System (ADS)

    Roy, Sounak; Banerjee, Swapna

    2018-03-01

    This paper presents the design and implementation of a pipeline Analog-to-Digital Converter (ADC) for superheterodyne receiver application. Several enhancement techniques have been applied in implementing the ADC, in order to relax the target specifications of its building blocks. The concepts of time interleaving and double sampling have been used simultaneously to enhance the sampling speed and to reduce the number of amplifiers used in the ADC. Removal of a front-end sample-and-hold amplifier is possible by employing dynamic comparators with switched-capacitor-based comparison of the input signal and reference voltage. Each module of the ADC comprises two 2.5-bit stages followed by two 1.5-bit stages and a 3-bit flash stage. Four such pipeline ADC modules are time interleaved using two pairs of non-overlapping clock signals. These two pairs of clock signals are in phase quadrature with each other. Hence the term quadrature parallel pipeline ADC has been used. These configurations ensure that the entire ADC contains only eight operational transconductance amplifiers. The ADC is implemented in a 0.18-μm CMOS process and supply voltage of 1.8 V. The prototype is tested at sampling frequencies of 50 and 75 MSPS, producing an Effective Number of Bits (ENOB) of 6.86 and 6.11 bits, respectively. At peak sampling speed, the core ADC consumes only 65 mW of power.
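    The reported effective resolution follows the widely used relation ENOB = (SINAD - 1.76 dB)/6.02. A quick check is sketched below; the SINAD values are back-computed from the reported ENOB figures, not taken from the paper:

```python
def enob(sinad_db):
    """Effective Number of Bits from SINAD in dB (standard ADC-testing relation)."""
    return (sinad_db - 1.76) / 6.02

def sinad(enob_bits):
    """Inverse: SINAD in dB implied by a given ENOB."""
    return enob_bits * 6.02 + 1.76

# SINAD implied by the reported ENOB at each sampling rate
print(round(sinad(6.86), 2))  # at 50 MSPS → 43.06
print(round(sinad(6.11), 2))  # at 75 MSPS → 38.54
```

    The ~0.75-bit ENOB drop from 50 to 75 MSPS corresponds to roughly 4.5 dB of SINAD degradation at the higher clock rate.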

  18. Characterization and Expression of Drug Resistance Genes in MDROs Originating from Combat Wound Infections

    DTIC Science & Technology

    2016-09-01

    assigned a classification. MLST analysis MLST was determined using an in-house automated pipeline that first searches for homologs of each gene of...and virulence mechanism contributing to their success as pathogens in the wound environment. A novel bioinformatics pipeline was used to incorporate...monitored in two ways: read-based genome QC and assembly based metrics. The JCVI Genome QC pipeline samples sequence reads and performs BLAST

  19. Using Hierarchical Virtual Screening To Combat Drug Resistance of the HIV-1 Protease.

    PubMed

    Li, Nan; Ainsworth, Richard I; Ding, Bo; Hou, Tingjun; Wang, Wei

    2015-07-27

    Human immunodeficiency virus (HIV) protease inhibitors (PIs) are important components of highly active anti-retroviral therapy (HAART) that block the catalytic site of HIV protease, thus preventing maturation of the HIV virion. However, with two decades of PI prescriptions in clinical practice, drug-resistant HIV mutants have now been found for all of the PI drugs. Therefore, the continuous development of new PI drugs is crucial both to combat the existing drug-resistant HIV strains and to provide treatments for future patients. Here we propose an HIV PI drug design strategy to select candidate PIs with binding energy distributions dominated by interactions with conserved protease residues in both wild-type and various drug-resistant mutants. On the basis of this strategy, we have constructed a virtual screening pipeline including combinatorial library construction, combinatorial docking, MM/GBSA-based rescoring, and reranking on the basis of the binding energy distribution. We have tested our strategy on lopinavir by modifying its two functional groups. From an initial 751,689 candidate molecules, 18 candidate inhibitors were selected using the pipeline for experimental validation. IC50 measurements and drug resistance predictions successfully identified two ligands with both HIV protease inhibitor activity and an improved drug resistance profile on 2382 HIV mutants. This study provides a proof of concept for the integration of MM/GBSA energy analysis and drug resistance information at the stage of virtual screening and sheds light on future HIV drug design and the use of virtual screening to combat drug resistance.
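    The reranking idea, prioritizing ligands whose binding energy concentrates on conserved residues, can be sketched as follows. The per-residue energies, residue names, and threshold-free scoring are illustrative; the study used MM/GBSA decompositions:

```python
def conserved_fraction(per_residue_energy, conserved):
    """Fraction of total favorable binding energy contributed by conserved residues.

    per_residue_energy: {residue_name: energy_kcal_mol} (negative = favorable).
    """
    total = sum(per_residue_energy.values())
    kept = sum(e for r, e in per_residue_energy.items() if r in conserved)
    return kept / total if total else 0.0

# Hypothetical candidates with per-residue energy decompositions
candidates = {
    "lig_A": {"ASP25": -4.0, "GLY27": -2.0, "VAL82": -2.0},  # leans on mutable VAL82
    "lig_B": {"ASP25": -5.5, "GLY27": -2.0, "VAL82": -0.5},
}
conserved = {"ASP25", "GLY27"}  # catalytic/structural residues rarely mutated

# Rank candidates so that energy dominated by conserved residues comes first
ranked = sorted(candidates,
                key=lambda c: conserved_fraction(candidates[c], conserved),
                reverse=True)
print(ranked)  # → ['lig_B', 'lig_A']
```

    A ligand like lig_B, whose affinity barely depends on the mutation-prone residue, is expected to retain activity across resistant variants even if its total binding energy is similar to a competitor's.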

  20. The Center for Optimized Structural Studies (COSS) platform for automation in cloning, expression, and purification of single proteins and protein-protein complexes.

    PubMed

    Mlynek, Georg; Lehner, Anita; Neuhold, Jana; Leeb, Sarah; Kostan, Julius; Charnagalov, Alexej; Stolt-Bergner, Peggy; Djinović-Carugo, Kristina; Pinotsis, Nikos

    2014-06-01

    Expression in Escherichia coli represents the simplest and most cost-effective means for the production of recombinant proteins. This is a routine task in structural biology and biochemistry where milligrams of the target protein are required in high purity and monodispersity. To achieve these criteria, the user often needs to screen several constructs in different expression and purification conditions in parallel. We describe a pipeline, implemented in the Center for Optimized Structural Studies, that enables the systematic screening of expression and purification conditions for recombinant proteins and relies on a series of logical decisions. We first use bioinformatics tools to design a series of protein fragments, which we clone in parallel, and subsequently screen in small scale for optimal expression and purification conditions. Based on a scoring system that assesses soluble expression, we then select the top-ranking targets for large-scale purification. In establishing our pipeline, emphasis was put on streamlining the processes so that they can be easily, though not necessarily, automated. In a typical run of about 2 weeks, we are able to prepare and perform small-scale expression screens for 20-100 different constructs followed by large-scale purification of at least 4-6 proteins. The major advantage of our approach is its flexibility, which allows for easy adoption, either partially or entirely, by any average hypothesis-driven laboratory in a manual or robot-assisted manner.

  1. A Pipeline To Enhance Ligand Virtual Screening: Integrating Molecular Dynamics and Fingerprints for Ligand and Proteins.

    PubMed

    Spyrakis, Francesca; Benedetti, Paolo; Decherchi, Sergio; Rocchia, Walter; Cavalli, Andrea; Alcaro, Stefano; Ortuso, Francesco; Baroni, Massimo; Cruciani, Gabriele

    2015-10-26

    The importance of taking into account protein flexibility in drug design and virtual ligand screening (VS) has been widely debated in the literature, and molecular dynamics (MD) has been recognized as one of the most powerful tools for investigating intrinsic protein dynamics. Nevertheless, deciphering the amount of information hidden in MD simulations and recognizing a significant minimal set of states to be used in virtual screening experiments can be quite complicated. Here we present an integrated MD-FLAP (molecular dynamics-fingerprints for ligand and proteins) approach, comprising a pipeline of molecular dynamics, clustering and linear discriminant analysis, for enhancing accuracy and efficacy in VS campaigns. We first extracted a limited number of representative structures from tens of nanoseconds of MD trajectories by means of the k-medoids clustering algorithm as implemented in the BiKi Life Science Suite ( http://www.bikitech.com [accessed July 21, 2015]). Then, instead of applying arbitrary selection criteria, that is, RMSD, pharmacophore properties, or enrichment performances, we allowed the linear discriminant analysis algorithm implemented in FLAP ( http://www.moldiscovery.com [accessed July 21, 2015]) to automatically choose the best performing conformational states among medoids and X-ray structures. Retrospective virtual screenings confirmed that ensemble receptor protocols outperform single rigid receptor approaches, proved that computationally generated conformations comprise the same quantity/quality of information included in X-ray structures, and pointed to the MD-FLAP approach as a valuable tool for improving VS performances.
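    The medoid-extraction step in this record can be sketched with a minimal k-medoids pass over a pairwise-distance (e.g. RMSD) matrix. This is a toy PAM-style implementation for illustration, not the algorithm shipped in the BiKi Life Science Suite:

```python
import random

def k_medoids(dist, k, iters=100, seed=0):
    """Toy k-medoids clustering on a symmetric distance matrix (list of lists).

    Returns the sorted indices of the final medoids (representative frames).
    """
    rng = random.Random(seed)
    n = len(dist)
    medoids = rng.sample(range(n), k)
    for _ in range(iters):
        # assign each point to its nearest medoid
        clusters = {m: [] for m in medoids}
        for i in range(n):
            clusters[min(medoids, key=lambda m: dist[i][m])].append(i)
        # move each medoid to the member minimizing total intra-cluster distance
        new_medoids = [
            min(members, key=lambda c: sum(dist[c][j] for j in members))
            for members in clusters.values()
        ]
        if set(new_medoids) == set(medoids):
            break  # converged
        medoids = new_medoids
    return sorted(medoids)

# Synthetic "trajectory": frames 0-2 are mutually close, frames 3-5 are close,
# and the two groups are far apart (as if from two conformational basins).
d = [[abs((i // 3) - (j // 3)) * 10 + abs(i - j) for j in range(6)]
     for i in range(6)]
print(k_medoids(d, 2))  # → [1, 4]
```

    The returned medoids are actual trajectory frames, which is why they can be fed directly into an ensemble-docking or FLAP-style screening step alongside X-ray structures.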

  2. MEASURING TRANSIT SIGNAL RECOVERY IN THE KEPLER PIPELINE. I. INDIVIDUAL EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christiansen, Jessie L.; Clarke, Bruce D.; Burke, Christopher J.

    The Kepler mission was designed to measure the frequency of Earth-size planets in the habitable zone of Sun-like stars. A crucial component for recovering the underlying planet population from a sample of detected planets is understanding the completeness of that sample: the fraction of the planets that could have been discovered in a given data set that actually were detected. Here, we outline the information required to determine the sample completeness, and describe an experiment to address a specific aspect of that question, i.e., the issue of transit signal recovery. We investigate the extent to which the Kepler pipeline preserves individual transit signals by injecting simulated transits into the pixel-level data, processing the modified pixels through the pipeline, and comparing the measured transit signal-to-noise ratio (S/N) to that expected without perturbation by the pipeline. We inject simulated transit signals across the full focal plane for a set of observations for a duration of 89 days. On average, we find that the S/N of the injected signal is recovered at MS = 0.9973(±0.0012) × BS - 0.0151(±0.0049), where MS is the measured S/N and BS is the baseline, or expected, S/N. The 1σ width of the distribution around this correlation is ±2.64%. This indicates an extremely high fidelity in reproducing the expected detection statistics for single transit events, and provides teams performing their own periodic transit searches the confidence that there is no systematic reduction in transit signal strength introduced by the pipeline. We discuss the pipeline processes that cause the measured S/N to deviate significantly from the baseline S/N for a small fraction of targets; these are primarily the handling of data adjacent to spacecraft re-pointings and the removal of harmonics prior to the measurement of the S/N. Finally, we outline the further work required to characterize the completeness of the Kepler pipeline.
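    The recovery relation is a simple linear calibration of measured versus injected S/N; a sketch using the coefficients quoted in the abstract (function names are illustrative):

```python
def measured_snr(baseline_snr, slope=0.9973, offset=-0.0151):
    """Expected measured S/N given the baseline (injected) S/N,
    using the linear recovery relation MS = 0.9973 * BS - 0.0151."""
    return slope * baseline_snr + offset

def fractional_loss(baseline_snr):
    """Fractional S/N attenuation introduced by the pipeline at a given baseline."""
    return 1.0 - measured_snr(baseline_snr) / baseline_snr

print(round(measured_snr(10.0), 4))          # → 9.9579
print(round(100 * fractional_loss(10.0), 2))  # percent loss at BS = 10
```

    At a typical detection-threshold S/N of ~10 the implied attenuation is well under one percent, which is the basis for the abstract's claim that the pipeline introduces no systematic reduction in transit signal strength.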

  3. Feasibility Studies on Pipeline Disposal of Concentrated Copper Tailings Slurry for Waste Minimization

    NASA Astrophysics Data System (ADS)

    Senapati, Pradipta Kumar; Mishra, Barada Kanta

    2017-06-01

    The conventional lean-phase copper tailings slurry disposal systems create pollution all around the disposal area through seepage and flooding of waste slurry water. In order to reduce water consumption and minimize pollution, the pipeline disposal of these waste slurries at high solids concentrations may be considered a viable option. The paper presents the rheological and pipeline flow characteristics of copper tailings samples in the solids concentration range of 65-72% by weight. The tailings slurry exhibited non-Newtonian behaviour at these solids concentrations, and the rheological data were best fitted by the Bingham plastic model. The influence of solids concentration on yield stress and plastic viscosity for the copper tailings samples was discussed. Using a high concentration test loop, pipeline experiments were conducted in a 50 mm nominal bore (NB) pipe by varying the pipe flow velocity from 1.5 to 3.5 m/s. A non-Newtonian Bingham plastic pressure drop model predicted the experimental data reasonably well for the concentrated tailings slurry. The pressure drop model was used for larger pipe sizes, and the operating conditions for pipeline disposal of concentrated copper tailings slurry in a 200 mm NB pipe with respect to specific power consumption were discussed.
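    For laminar pipe flow of a Bingham plastic, flow rate and pressure gradient are related by the Buckingham-Reiner equation. The sketch below uses illustrative property values, not the measured tailings parameters from the study:

```python
import math

def buckingham_reiner_q(dp_per_l, radius, tau_y, mu_p):
    """Volumetric flow rate (m^3/s) for laminar Bingham-plastic pipe flow.

    dp_per_l : pressure gradient (Pa/m)
    radius   : pipe radius (m)
    tau_y    : yield stress (Pa)
    mu_p     : plastic viscosity (Pa.s)
    """
    tau_w = dp_per_l * radius / 2.0  # wall shear stress
    if tau_w <= tau_y:
        return 0.0                   # slurry unyielded: no flow
    r = tau_y / tau_w
    # Buckingham-Reiner: Q = (pi R^3 tau_w / 4 mu_p) * (1 - 4r/3 + r^4/3)
    return (math.pi * radius**3 * tau_w / (4.0 * mu_p)) * (1 - 4*r/3 + r**4/3)

# Sanity check: with tau_y = 0 this must reduce to Hagen-Poiseuille,
# Q = pi R^4 (dP/L) / (8 mu).  Illustrative 50 mm bore pipe:
R, dpl, mu = 0.025, 2000.0, 0.05
q_newtonian = math.pi * R**4 * dpl / (8 * mu)
assert abs(buckingham_reiner_q(dpl, R, 0.0, mu) - q_newtonian) < 1e-12
print(buckingham_reiner_q(dpl, R, 5.0, mu))  # reduced flow once yield stress acts
```

    The same relation, solved in reverse for the pressure gradient at a target velocity, is the basis of the kind of pressure-drop scale-up to a 200 mm NB pipe discussed in the record.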

  4. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and cognitive value of visualizations of resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline around the example of regular sampling. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results. Furthermore, it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we performed, and we conducted user studies in Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.
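    Regular sampling of an unstructured point set onto a uniform grid can be sketched as nearest-point gathering. This is a deliberately simple brute-force baseline in the spirit of the record's rationale; the function and variable names are illustrative, not the study's code:

```python
def regular_sample(points, values, nx, ny, bounds):
    """Sample scattered (x, y) -> value data onto an nx-by-ny uniform grid
    by nearest scattered point (brute force, written for clarity not speed)."""
    (x0, x1), (y0, y1) = bounds
    grid = []
    for j in range(ny):
        gy = y0 + (j + 0.5) * (y1 - y0) / ny  # cell-center y coordinate
        row = []
        for i in range(nx):
            gx = x0 + (i + 0.5) * (x1 - x0) / nx  # cell-center x coordinate
            nearest = min(range(len(points)),
                          key=lambda k: (points[k][0] - gx) ** 2
                                      + (points[k][1] - gy) ** 2)
            row.append(values[nearest])
        grid.append(row)
    return grid

pts = [(0.1, 0.1), (0.9, 0.9)]
vals = [1.0, 2.0]
print(regular_sample(pts, vals, 2, 2, ((0.0, 1.0), (0.0, 1.0))))
```

    Once the data lives on a regular grid, storage, compression, and rendering costs become predictable, which is what makes this a useful baseline for energy/quality trade-off measurements.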

  5. Ub-ISAP: a streamlined UNIX pipeline for mining unique viral vector integration sites from next generation sequencing data.

    PubMed

    Kamboj, Atul; Hallwirth, Claus V; Alexander, Ian E; McCowage, Geoffrey B; Kramer, Belinda

    2017-06-17

    The analysis of viral vector genomic integration sites is an important component in assessing the safety and efficiency of patient treatment using gene therapy. Alongside this clinical application, integration site identification is a key step in the genetic mapping of viral elements in mutagenesis screens that aim to elucidate gene function. We have developed Ub-ISAP, a UNIX-based vector integration site analysis pipeline that automates integration site identification and annotation for both single and paired-end sequencing reads. Reads that contain viral sequences of interest are selected and aligned to the host genome, and unique integration sites are then classified as transcription start site-proximal, intragenic or intergenic. Ub-ISAP provides a reliable and efficient pipeline to generate large datasets for assessing the safety and efficiency of integrating vectors in clinical settings, with broader applications in cancer research. Ub-ISAP is available as an open source software package at https://sourceforge.net/projects/ub-isap/.
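    The three-way classification of integration sites can be sketched as interval logic over a gene annotation. The coordinates, the 5 kb TSS window, and the data layout are illustrative assumptions, not Ub-ISAP's actual parameters:

```python
def classify_site(pos, genes, tss_window=5000):
    """Classify an integration site as 'tss_proximal', 'intragenic' or 'intergenic'.

    genes: list of (start, end, tss) tuples on the same chromosome.
    A site within tss_window bp of any TSS is TSS-proximal; otherwise a site
    inside a gene body is intragenic; anything else is intergenic.
    """
    if any(abs(pos - tss) <= tss_window for _, _, tss in genes):
        return "tss_proximal"
    if any(start <= pos <= end for start, end, _ in genes):
        return "intragenic"
    return "intergenic"

# Two hypothetical genes whose TSS coincides with the gene start
genes = [(10_000, 30_000, 10_000), (50_000, 80_000, 50_000)]
print(classify_site(12_000, genes))  # → tss_proximal
print(classify_site(70_000, genes))  # → intragenic
print(classify_site(40_000, genes))  # → intergenic
```

    Checking TSS proximity before gene-body membership matters: a site 2 kb into a gene is both intragenic and TSS-proximal, and safety analyses typically report the more promoter-relevant label.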

  6. Effect of Sodium Bisulfite Injection on the Microbial Community Composition in a Brackish-Water-Transporting Pipeline

    PubMed Central

    Park, Hyung Soo; Chatterjee, Indranil; Dong, Xiaoli; Wang, Sheng-Hung; Sensen, Christoph W.; Caffrey, Sean M.; Jack, Thomas R.; Boivin, Joe; Voordouw, Gerrit

    2011-01-01

    Pipelines transporting brackish subsurface water, used in the production of bitumen by steam-assisted gravity drainage, are subject to frequent corrosion failures despite the addition of the oxygen scavenger sodium bisulfite (SBS). Pyrosequencing of 16S rRNA genes was used to determine the microbial community composition for planktonic samples of transported water and for sessile samples of pipe-associated solids (PAS) scraped from pipeline cutouts representing corrosion failures. These were obtained from upstream (PAS-616P) and downstream (PAS-821TP and PAS-821LP, collected under rapid-flow and stagnant conditions, respectively) of the SBS injection point. Most transported water samples had a large fraction (1.8% to 97% of pyrosequencing reads) of Pseudomonas not found in sessile pipe samples. The sessile population of PAS-616P had methanogens (Methanobacteriaceae) as the main (56%) community component, whereas Deltaproteobacteria of the genera Desulfomicrobium and Desulfocapsa were not detected. In contrast, PAS-821TP and PAS-821LP had lower fractions (41% and 0.6%) of Methanobacteriaceae archaea but increased fractions of sulfate-reducing Desulfomicrobium (18% and 48%) and of bisulfite-disproportionating Desulfocapsa (35% and 22%) bacteria. Hence, SBS injection strongly changed the sessile microbial community populations. X-ray diffraction analysis of pipeline scale indicated that iron carbonate was present both upstream and downstream, whereas iron sulfide and sulfur were found only downstream of the SBS injection point, suggesting a contribution of the bisulfite-disproportionating and sulfate-reducing bacteria in the scale to iron corrosion. Incubation of iron coupons with pipeline waters indicated iron corrosion coupled to the formation of methane. Hence, both methanogenic and sulfidogenic microbial communities contributed to corrosion of pipelines transporting these brackish waters. PMID:21856836

  7. Anchoring protein crystals to mounting loops with hydrogel using inkjet technology.

    PubMed

    Shinoda, Akira; Tanaka, Yoshikazu; Yao, Min; Tanaka, Isao

    2014-11-01

    X-ray crystallography is an important technique for structure-based drug discovery, mainly because it is the only technique that can reveal whether a ligand binds to the target protein as well as where and how it binds. However, ligand screening by X-ray crystallography involves a crystal-soaking experiment, which is usually performed manually. Thus, the throughput is not satisfactory for screening large numbers of candidate ligands. In this study, a technique to anchor protein crystals to mounting loops by using gel and inkjet technology has been developed; the method allows soaking of the mounted crystals in ligand-containing solution. This new technique may assist in the design of a fully automated drug-screening pipeline.

  8. [Comparison of gut microbiotal compositional analysis of patients with irritable bowel syndrome through different bioinformatics pipelines].

    PubMed

    Zhu, S W; Liu, Z J; Li, M; Zhu, H Q; Duan, L P

    2018-04-18

    To assess whether the same biological conclusions, and the same diagnostic or curative implications, regarding the microbial composition of irritable bowel syndrome (IBS) patients could be reached through different bioinformatics pipelines, we used two common bioinformatics pipelines (Uparse V2.0 and Mothur V1.39.5) to analyze the same fecal microbial 16S rRNA high-throughput sequencing data. The two pipelines were used to analyze the diversity and richness of fecal microbial 16S rRNA high-throughput sequencing data of 27 samples, including 9 healthy controls (HC group) and 9 diarrhea IBS patients before (IBS group) and after Rifaximin treatment (IBS-treatment, IBSt group). Analyses such as microbial diversity, principal co-ordinates analysis (PCoA), nonmetric multidimensional scaling (NMDS) and linear discriminant analysis effect size (LEfSe) were used to identify the microbial differences between the HC and IBS groups and between the IBS and IBSt groups. (1) Microbial composition comparison of the 27 samples in the two pipelines showed significant variation at the family and genus levels but no significant variation at the phylum level; (2) There was no significant difference in the comparison of HC vs. IBS or IBS vs. IBSt (Uparse: HC vs. IBS, F=0.98, P=0.445; IBS vs. IBSt, F=0.47, P=0.926; Mothur: HC vs. IBS, F=0.82, P=0.646; IBS vs. IBSt, F=0.37, P=0.961). The Shannon index was significantly decreased in IBSt; (3) Both pipelines distinguished the significantly enriched genera between the HC and IBS groups. For example, Nitrosomonas and Paraprevotella increased while Pseudoalteromonadaceae and Anaerotruncus decreased in the HC group through the Uparse pipeline, whereas Roseburia 62 increased while Butyricicoccus and Moraxellaceae decreased in the HC group through the Mothur pipeline. Only the Uparse pipeline could pick out significant genera between IBS and IBSt, such as Pseudobutyricibrio, Clostridiaceae 1 and Clostridium sensu stricto 1. There were taxonomic and phylogenetic diversity differences between the two pipelines; Mothur recovered more taxonomic detail because its counts at each taxonomic level were higher. Both pipelines could distinguish the significantly enriched genera between the HC and IBS groups, but Uparse was more capable of identifying differences between the IBS and IBSt groups. To increase reproducibility and reliability, and to retain consistency among similar studies, it is very important to consider the impact of the choice of pipeline.
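Both pipelines report alpha-diversity statistics such as the Shannon index discussed above. A minimal sketch of that calculation, assuming hypothetical genus-level read counts for a single sample:

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical genus-level read counts for one fecal sample
sample_counts = [500, 300, 150, 40, 10]
print(round(shannon_index(sample_counts), 3))  # → 1.167
```

A lower value for the IBSt samples, as reported above, would indicate reduced community diversity after treatment.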

  9. Robust, high-throughput solution structural analyses by small angle X-ray scattering (SAXS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hura, Greg L.; Menon, Angeli L.; Hammel, Michal

    2009-07-20

    We present an efficient pipeline enabling high-throughput analysis of protein structure in solution with small angle X-ray scattering (SAXS). Our SAXS pipeline combines automated sample handling of microliter volumes, temperature and anaerobic control, rapid data collection and data analysis, and couples structural analysis with automated archiving. We subjected 50 representative proteins, mostly from Pyrococcus furiosus, to this pipeline and found that 30 were multimeric structures in solution. SAXS analysis allowed us to distinguish aggregated and unfolded proteins, define global structural parameters and oligomeric states for most samples, identify shapes and similar structures for 25 unknown structures, and determine envelopes for 41 proteins. We believe that high-throughput SAXS is an enabling technology that may change the way that structural genomics research is done.
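Among the global structural parameters such SAXS pipelines report is the radius of gyration, commonly estimated from the Guinier approximation at low scattering angles. A minimal sketch of that estimate on synthetic, noise-free data (not the pipeline's actual code):

```python
import math

def guinier_rg(q, intensity):
    """Estimate the radius of gyration from the Guinier approximation
    ln I(q) = ln I(0) - (Rg^2 / 3) q^2, via a least-squares line fitted
    to ln I versus q^2 (valid only in the low-q regime)."""
    x = [qi * qi for qi in q]
    y = [math.log(v) for v in intensity]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
            / sum((xi - mx) ** 2 for xi in x)
    return math.sqrt(-3.0 * slope)

# Synthetic, noise-free low-q profile for a particle with Rg = 20 Å
rg_true = 20.0
q = [0.005 * i for i in range(1, 11)]
I = [100.0 * math.exp(-(rg_true ** 2 / 3.0) * qi ** 2) for qi in q]
print(round(guinier_rg(q, I), 2))  # → 20.0
```

Real pipelines additionally check that the fitted region satisfies q·Rg ≲ 1.3 and use the Guinier fit to flag the aggregated samples mentioned above (which show upturns at low q).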

  10. Characterization of Microbial Communities in Gas Industry Pipelines

    PubMed Central

    Zhu, Xiang Y.; Lubeck, John; Kilbane, John J.

    2003-01-01

    Culture-independent techniques, denaturing gradient gel electrophoresis (DGGE) analysis, and random cloning of 16S rRNA gene sequences amplified from community DNA were used to determine the diversity of microbial communities in gas industry pipelines. Samples obtained from natural gas pipelines were used directly for DNA extraction, inoculated into sulfate-reducing bacterium medium, or used to inoculate a reactor that simulated a natural gas pipeline environment. The variable V2-V3 (average size, 384 bp) and V3-V6 (average size, 648 bp) regions of bacterial and archaeal 16S rRNA genes, respectively, were amplified from genomic DNA isolated from nine natural gas pipeline samples and analyzed. A total of 106 bacterial 16S rDNA sequences were derived from DGGE bands, and these formed three major clusters: beta and gamma subdivisions of Proteobacteria and gram-positive bacteria. The most frequently encountered bacterial species was Comamonas denitrificans, which was not previously reported to be associated with microbial communities found in gas pipelines or with microbially influenced corrosion. The 31 archaeal 16S rDNA sequences obtained in this study were all related to those of methanogens and phylogenetically fall into three clusters: order I, Methanobacteriales; order III, Methanomicrobiales; and order IV, Methanosarcinales. Further microbial ecology studies are needed to better understand the relationship among bacterial and archaeal groups and the involvement of these groups in the process of microbially influenced corrosion in order to develop improved ways of monitoring and controlling microbially influenced corrosion. PMID:12957923

  11. A Data Analysis Pipeline Accounting for Artifacts in Tox21 Quantitative High-Throughput Screening Assays.

    PubMed

    Hsieh, Jui-Hua; Sedykh, Alexander; Huang, Ruili; Xia, Menghang; Tice, Raymond R

    2015-08-01

    A main goal of the U.S. Tox21 program is to profile a 10K-compound library for activity against a panel of stress-related and nuclear receptor signaling pathway assays using a quantitative high-throughput screening (qHTS) approach. However, assay artifacts, including nonreproducible signals and assay interference (e.g., autofluorescence), complicate compound activity interpretation. To address these issues, we have developed a data analysis pipeline that includes an updated signal noise-filtering/curation protocol and an assay interference flagging system. To better characterize various types of signals, we adopted a weighted version of the area under the curve (wAUC) to quantify the amount of activity across the tested concentration range in combination with the assay-dependent point-of-departure (POD) concentration. Based on the 32 Tox21 qHTS assays analyzed, we demonstrate that signal profiling using wAUC affords the best reproducibility (Pearson's r = 0.91) in comparison with the POD (0.82) only or the AC₅₀ (i.e., half-maximal activity concentration, 0.81). Among the activity artifacts characterized, cytotoxicity is the major confounding factor; on average, about 8% of Tox21 compounds are affected, whereas autofluorescence affects less than 0.5%. To facilitate data evaluation, we implemented two graphical user interface applications, allowing users to rapidly evaluate the in vitro activity of Tox21 compounds. © 2015 Society for Laboratory Automation and Screening.
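The area-under-the-curve idea can be illustrated with plain trapezoidal integration over the log-concentration range; this is a simplified stand-in for the paper's wAUC (the Tox21 weighting scheme is not reproduced here), and the response values are hypothetical:

```python
def trapezoid_auc(log_conc, response):
    """Area under a concentration-response curve by the trapezoidal rule.
    A simplified stand-in for the paper's weighted AUC (wAUC); the actual
    Tox21 weighting scheme is not reproduced here."""
    area = 0.0
    for i in range(1, len(log_conc)):
        area += 0.5 * (response[i] + response[i - 1]) * (log_conc[i] - log_conc[i - 1])
    return area

# Hypothetical % activity at half-log molar concentration steps
log_conc = [-9.0, -8.5, -8.0, -7.5, -7.0]
response = [0.0, 5.0, 20.0, 60.0, 90.0]
print(trapezoid_auc(log_conc, response))  # → 65.0
```

An inactive compound (flat response) yields an AUC near zero, which is why an AUC-style summary is robust against the single-point noise that distorts AC₅₀ fits.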

  12. A systematic approach to prioritize drug targets using machine learning, a molecular descriptor-based classification model, and high-throughput screening of plant derived molecules: a case study in oral cancer.

    PubMed

    Randhawa, Vinay; Kumar Singh, Anil; Acharya, Vishal

    2015-12-01

    Systems-biology inspired identification of drug targets and machine learning-based screening of small molecules which modulate their activity have the potential to revolutionize modern drug discovery by complementing conventional methods. To utilize the effectiveness of such pipelines, we first analyzed the dysregulated gene pairs between control and tumor samples and then implemented an ensemble-based feature selection approach to prioritize targets in oral squamous cell carcinoma (OSCC) for therapeutic exploration. Based on the structural information of known inhibitors of CXCR4-one of the best targets identified in this study-a feature selection was implemented for the identification of optimal structural features (molecular descriptor) based on which a classification model was generated. Furthermore, the CXCR4-centered descriptor-based classification model was finally utilized to screen a repository of plant derived small-molecules to obtain potential inhibitors. The application of our methodology may assist effective selection of the best targets which may have previously been overlooked, that in turn will lead to the development of new oral cancer medications. The small molecules identified in this study can be ideal candidates for trials as potential novel anti-oral cancer agents. Importantly, distinct steps of this whole study may provide reference for the analysis of other complex human diseases.
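As an illustration of descriptor-based classification, the sketch below trains a minimal nearest-centroid model on invented two-descriptor vectors; the paper's actual feature selection and classifier are more elaborate:

```python
import math

def nearest_centroid_fit(X, y):
    """Average the descriptor vectors of each class into a centroid
    (a minimal stand-in for the paper's descriptor-based classifier)."""
    centroids = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Assign the class whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda label: math.dist(centroids[label], x))

# Hypothetical 2-descriptor vectors (e.g., lipophilicity, polar surface area)
# for known actives and inactives; all values are invented for illustration.
X = [[2.1, 60.0], [2.4, 55.0], [0.3, 120.0], [0.1, 130.0]]
y = ["active", "active", "inactive", "inactive"]
model = nearest_centroid_fit(X, y)
print(nearest_centroid_predict(model, [2.0, 58.0]))  # → active
```

In the screening setting described above, each candidate plant-derived molecule would be converted to its descriptor vector and classified the same way.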

  13. Efficient Detection of Copy Number Mutations in PMS2 Exons with a Close Homolog.

    PubMed

    Herman, Daniel S; Smith, Christina; Liu, Chang; Vaughn, Cecily P; Palaniappan, Selvi; Pritchard, Colin C; Shirts, Brian H

    2018-07-01

    Detection of 3' PMS2 copy-number mutations that cause Lynch syndrome is difficult because of highly homologous pseudogenes. To improve the accuracy and efficiency of clinical screening for these mutations, we developed a new method to analyze standard capture-based, next-generation sequencing data to identify deletions and duplications in PMS2 exons 9 to 15. The approach captures sequences using PMS2 targets, maps sequences randomly among regions with equal mapping quality, counts reads aligned to homologous exons and introns, and flags read count ratios outside of empirically derived reference ranges. The method was trained on 1352 samples, including 8 known positives, and tested on 719 samples, including 17 known positives. Clinical implementation of the first version of this method detected new mutations in the training (N = 7) and test (N = 2) sets that had not been identified by our initial clinical testing pipeline. The described final method showed complete sensitivity in both sample sets and false-positive rates of 5% (training) and 7% (test), dramatically decreasing the number of cases needing additional mutation evaluation. This approach leveraged the differences between gene and pseudogene to distinguish between PMS2 and PMS2CL copy-number mutations. These methods enable efficient and sensitive Lynch syndrome screening for 3' PMS2 copy-number mutations and may be applied similarly to other genomic regions with highly homologous pseudogenes. Copyright © 2018 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
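The flagging step described above can be sketched as comparing per-exon normalized read-count ratios against empirical reference ranges; the exon names, ratios and ranges below are illustrative, not values from the paper:

```python
def flag_copy_number(sample_ratios, reference_ranges):
    """Flag exons whose normalized read-count ratio falls outside an
    empirically derived reference range (schematic version of the described
    approach; exon names, ratios and ranges are illustrative)."""
    flags = {}
    for exon, ratio in sample_ratios.items():
        lo, hi = reference_ranges[exon]
        flags[exon] = "ok" if lo <= ratio <= hi else "flag"
    return flags

# Hypothetical per-exon ratios of sample read depth to batch median depth
reference_ranges = {"exon11": (0.85, 1.15), "exon13": (0.85, 1.15)}
sample = {"exon11": 0.52, "exon13": 1.02}  # exon11 suggests a heterozygous deletion
print(flag_copy_number(sample, reference_ranges))  # → {'exon11': 'flag', 'exon13': 'ok'}
```

Because reads are mapped randomly among gene and pseudogene regions of equal mapping quality, a deletion in either PMS2 or PMS2CL shifts the combined ratio, and the gene/pseudogene differences are then used to localize it.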

  14. Surface Transportation Security Priority Assessment

    DTIC Science & Technology

    2010-03-01

    intercity buses), and pipelines, and related infrastructure (including roads and highways), that are within the territory of the United States... Modernizing the information technology infrastructure used to vet the identity of travelers and transportation workers... Using terrorist databases to... examination of persons travelling, surface transportation modes tend to operate in a much more open environment, making it difficult to screen workers

  15. 20180312 - Profiling the ToxCast library with a pluripotent human (H9) embryonic stem cell assay (SOT)

    EPA Science Inventory

    The Stemina devTOX quickPredict platform (STM) is a human pluripotent H9 stem cell-based assay that predicts developmental toxicants. Using the STM model, we screened 1065 ToxCast chemicals and entered the data into the ToxCast data analysis pipeline. Model performance was 83.3% ...

  16. The presence of opportunistic pathogens, Legionella spp., L. pneumophila and Mycobacterium avium complex, in South Australian reuse water distribution pipelines.

    PubMed

    Whiley, H; Keegan, A; Fallowfield, H; Bentham, R

    2015-06-01

    Water reuse has become increasingly important for sustainable water management. Currently, its application is primarily constrained by the potential health risks. Presently there is limited knowledge regarding the presence and fate of opportunistic pathogens along reuse water distribution pipelines. In this study opportunistic human pathogens Legionella spp., L. pneumophila and Mycobacterium avium complex were detected using real-time polymerase chain reaction along two South Australian reuse water distribution pipelines at maximum concentrations of 10⁵, 10³ and 10⁵ copies/mL, respectively. During the summer period of sampling the concentration of all three organisms significantly increased (P < 0.05) along the pipeline, suggesting multiplication and hence viability. No seasonality in the decrease in chlorine residual along the pipelines was observed. This suggests that the combination of reduced chlorine residual and increased water temperature promoted the presence of these opportunistic pathogens.

  17. An efficient and scalable analysis framework for variant extraction and refinement from population-scale DNA sequence data.

    PubMed

    Jun, Goo; Wing, Mary Kate; Abecasis, Gonçalo R; Kang, Hyun Min

    2015-06-01

    The analysis of next-generation sequencing data is computationally and statistically challenging because of the massive volume of data and imperfect data quality. We present GotCloud, a pipeline for efficiently detecting and genotyping high-quality variants from large-scale sequencing data. GotCloud automates sequence alignment, sample-level quality control, variant calling, filtering of likely artifacts using machine-learning techniques, and genotype refinement using haplotype information. The pipeline can process thousands of samples in parallel and requires less computational resources than current alternatives. Experiments with whole-genome and exome-targeted sequence data generated by the 1000 Genomes Project show that the pipeline provides effective filtering against false positive variants and high power to detect true variants. Our pipeline has already contributed to variant detection and genotyping in several large-scale sequencing projects, including the 1000 Genomes Project and the NHLBI Exome Sequencing Project. We hope it will now prove useful to many medical sequencing studies. © 2015 Jun et al.; Published by Cold Spring Harbor Laboratory Press.

  18. Review of environmental exposure concentrations of chemical warfare agent residues and the associated fish community risk following the construction and completion of the Nord Stream gas pipeline between Russia and Germany.

    PubMed

    Sanderson, Hans; Fauser, Patrik; Rahbek, Malene; Larsen, Jørn Bo

    2014-08-30

    This paper compiles all the measured chemical warfare agent (CWA) concentrations found in relation to the Nord Stream pipeline work in Danish waters over the past 5 years. Sediment and biota sampling were performed along the pipeline route in four campaigns: prior to (in 2008 and 2010), during (in 2011) and after (in 2012) the construction work. No parent CWAs were detected in the sediments. Patchy residues of CWA degradation products of Adamsite, Clark I, phenyldichloroarsine, trichloroarsine and Lewisite II were detected in a total of 29 of the 391 sediment samples collected and analyzed over the past 5 years. The cumulative fish community risk quotient for the different locations, calculated as a sum of background and added risk, ranged between 0 and 0.017, suggesting a negligible acute CWA risk toward the fish community. The added risk from sediment disturbance in relation to construction of the pipelines represents less than 2% of the total risk in the areas with the highest calculated risk. The analyses of benthic infauna corroborate the finding of low CWA-related risk across the years. There was no significant difference in CWA risk before (2008) and after the pipeline construction (2012). Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Detection of Legionella, L. pneumophila and Mycobacterium Avium Complex (MAC) along Potable Water Distribution Pipelines

    PubMed Central

    Whiley, Harriet; Keegan, Alexandra; Fallowfield, Howard; Bentham, Richard

    2014-01-01

    Inhalation of potable water presents a potential route of exposure to opportunistic pathogens and hence warrants significant public health concern. This study used qPCR to detect opportunistic pathogens Legionella spp., L. pneumophila and MAC at multiple points along two potable water distribution pipelines. One used chlorine disinfection and the other chloramine disinfection. Samples were collected four times over the year to provide seasonal variation and the chlorine or chloramine residual was measured during collection. Legionella spp., L. pneumophila and MAC were detected in both distribution systems throughout the year and were all detected at a maximum concentration of 10³ copies/mL in the chlorine disinfected system and 10⁶, 10³ and 10⁴ copies/mL respectively in the chloramine disinfected system. The concentrations of these opportunistic pathogens were primarily controlled throughout the distribution network through the maintenance of disinfection residuals. At a dead-end and when the disinfection residual was not maintained significant (p < 0.05) increases in concentration were observed when compared to the concentration measured closest to the processing plant in the same pipeline and sampling period. Total coliforms were not present in any water sample collected. This study demonstrates the ability of Legionella spp., L. pneumophila and MAC to survive the potable water disinfection process and highlights the need for greater measures to control these organisms along the distribution pipeline and at point of use. PMID:25046636

  1. A high efficiency, high quality and low cost internal regulated bioanalytical laboratory to support drug development needs.

    PubMed

    Song, Yan; Dhodda, Raj; Zhang, Jun; Sydor, Jens

    2014-05-01

    In the recent past, we have seen an increase in the outsourcing of bioanalysis in pharmaceutical companies in support of their drug development pipeline. This trend is largely driven by the effort to reduce internal cost, especially in support of late-stage pipeline assets where established bioanalytical assays are used to analyze a large volume of samples. This article will highlight our perspective of how bioanalytical laboratories within pharmaceutical companies can be developed into the best partner in the advancement of drug development pipelines with high-quality support at competitive cost.

  2. Protein-Protein Interaction Assays with Effector-GFP Fusions in Nicotiana benthamiana.

    PubMed

    Petre, Benjamin; Win, Joe; Menke, Frank L H; Kamoun, Sophien

    2017-01-01

    Plant parasites secrete proteins known as effectors into host tissues to manipulate host cell structures and functions. One of the major goals in effector biology is to determine the host cell compartments and the protein complexes in which effectors accumulate. Here, we describe a five-step pipeline that we routinely use in our lab to achieve this goal, which consists of (1) Golden Gate assembly of pathogen effector-green fluorescent protein (GFP) fusions into binary vectors, (2) Agrobacterium-mediated heterologous protein expression in Nicotiana benthamiana leaf cells, (3) laser-scanning confocal microscopy assay, (4) anti-GFP coimmunoprecipitation-liquid chromatography-tandem mass spectrometry (coIP/MS) assay, and (5) anti-GFP western blotting. This pipeline is suitable for rapid, cost-effective, and medium-throughput screening of pathogen effectors in planta.

  3. Streamlining workflow and automation to accelerate laboratory scale protein production.

    PubMed

    Konczal, Jennifer; Gray, Christopher H

    2017-05-01

    Protein production facilities are often required to produce diverse arrays of proteins for demanding methodologies including crystallography, NMR, ITC and other reagent intensive techniques. It is common for these teams to find themselves a bottleneck in the pipeline of ambitious projects. This pressure to deliver has resulted in the evolution of many novel methods to increase capacity and throughput at all stages in the pipeline for generation of recombinant proteins. This review aims to describe current and emerging options to accelerate the success of protein production in Escherichia coli. We emphasize technologies that have been evaluated and implemented in our laboratory, including innovative molecular biology and expression vectors, small-scale expression screening strategies and the automation of parallel and multidimensional chromatography. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Remaining Sites Verification Package for the 100-F-26:13, 108-F Drain Pipelines, Waste Site Reclassification Form 2005-011

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    L. M. Dittmer

    2008-03-03

    The 100-F-26:13 waste site is the network of process sewer pipelines that received effluent from the 108-F Biological Laboratory and discharged it to the 188-F Ash Disposal Area (126-F-1 waste site). The pipelines included one 0.15-m (6-in.)-, two 0.2-m (8-in.)-, and one 0.31-m (12-in.)-diameter vitrified clay pipe segments encased in concrete. In accordance with this evaluation, the verification sampling results support a reclassification of this site to Interim Closed Out. The results of verification sampling demonstrated that residual contaminant concentrations do not preclude any future uses and allow for unrestricted use of shallow zone soils. The results also showed that residual contaminant concentrations are protective of groundwater and the Columbia River.

  5. A novel identification approach for discovery of 5-HydroxyTryptamine 2A antagonists: combination of 2D/3D similarity screening, molecular docking and molecular dynamics.

    PubMed

    Kumar, Rakesh; Jade, Dhananjay; Gupta, Dinesh

    2018-03-05

    5-HydroxyTryptamine 2A antagonists are potential targets for treatment of various cerebrovascular and cardiovascular disorders. In this study, we have developed and performed a unique screening pipeline for filtering ZINC database compounds on the basis of similarities to known antagonists to determine novel small molecule antagonists of 5-HydroxyTryptamine 2A. The screening pipeline is based on 2D similarity, 3D dissimilarity and a combination of 2D/3D similarity. The shortlisted compounds were docked to a 5-HydroxyTryptamine 2A homology-based model, and complexes with low binding energies (287 complexes) were selected for molecular dynamics (MD) simulations in a lipid bilayer. The MD simulations of the shortlisted compounds in complex with 5-HydroxyTryptamine 2A confirmed the stability of the complexes and revealed novel interaction insights. The receptor residues S239, N343, S242, S159, Y370 and D155 predominantly participate in hydrogen bonding. π-π stacking is observed in F339, F340, F234, W151 and W336, whereas hydrophobic interactions are observed amongst V156, F339, F234, V362, V366, F340, V235, I152 and W151. The known and potential antagonists shortlisted by us have similar overlapping molecular interaction patterns. The 287 potential 5-HydroxyTryptamine 2A antagonists may be experimentally verified.
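2D similarity screening of this kind typically ranks database compounds by the Tanimoto coefficient between fingerprints of candidates and known antagonists. A minimal sketch over toy bit sets (not real fingerprints or the study's actual screening code):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient |A ∩ B| / |A ∪ B| between two fingerprint bit
    sets, a common basis for 2D similarity screening."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

# Toy fingerprints as sets of on-bit indices (illustrative, not real ECFP bits)
known_antagonist = {1, 4, 9, 16, 25, 36}
candidate = {1, 4, 9, 16, 49}
print(round(tanimoto(known_antagonist, candidate), 3))  # → 0.571
```

In practice a ZINC-scale screen would compute this score for every database compound against each known antagonist and keep those above a similarity cutoff for docking.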

  6. siRNA screen identifies QPCT as a druggable target for Huntington's disease.

    PubMed

    Jimenez-Sanchez, Maria; Lam, Wun; Hannus, Michael; Sönnichsen, Birte; Imarisio, Sara; Fleming, Angeleen; Tarditi, Alessia; Menzies, Fiona; Dami, Teresa Ed; Xu, Catherine; Gonzalez-Couto, Eduardo; Lazzeroni, Giulia; Heitz, Freddy; Diamanti, Daniela; Massai, Luisa; Satagopam, Venkata P; Marconi, Guido; Caramelli, Chiara; Nencini, Arianna; Andreini, Matteo; Sardone, Gian Luca; Caradonna, Nicola P; Porcari, Valentina; Scali, Carla; Schneider, Reinhard; Pollio, Giuseppe; O'Kane, Cahir J; Caricasole, Andrea; Rubinsztein, David C

    2015-05-01

    Huntington's disease (HD) is a currently incurable neurodegenerative condition caused by an abnormally expanded polyglutamine tract in huntingtin (HTT). We identified new modifiers of mutant HTT toxicity by performing a large-scale 'druggable genome' siRNA screen in human cultured cells, followed by hit validation in Drosophila. We focused on glutaminyl cyclase (QPCT), which had one of the strongest effects on mutant HTT-induced toxicity and aggregation in the cell-based siRNA screen and also rescued these phenotypes in Drosophila. We found that QPCT inhibition induced the levels of the molecular chaperone αB-crystallin and reduced the aggregation of diverse proteins. We generated new QPCT inhibitors using in silico methods followed by in vitro screening, which rescued the HD-related phenotypes in cell, Drosophila and zebrafish HD models. Our data reveal a new HD druggable target affecting mutant HTT aggregation and provide proof of principle for a discovery pipeline from druggable genome screen to drug development.

  7. MetaSRA: normalized human sample-specific metadata for the Sequence Read Archive.

    PubMed

    Bernstein, Matthew N; Doan, AnHai; Dewey, Colin N

    2017-09-15

    The NCBI's Sequence Read Archive (SRA) promises great biological insight if one could analyze the data in the aggregate; however, the data remain largely underutilized, in part, due to the poor structure of the metadata associated with each sample. The rules governing submissions to the SRA do not dictate a standardized set of terms that should be used to describe the biological samples from which the sequencing data are derived. As a result, the metadata include many synonyms, spelling variants and references to outside sources of information. Furthermore, manual annotation of the data remains intractable due to the large number of samples in the archive. For these reasons, it has been difficult to perform large-scale analyses that study the relationships between biomolecular processes and phenotype across diverse diseases, tissues and cell types present in the SRA. We present MetaSRA, a database of normalized SRA human sample-specific metadata following a schema inspired by the metadata organization of the ENCODE project. This schema involves mapping samples to terms in biomedical ontologies, labeling each sample with a sample-type category, and extracting real-valued properties. We automated these tasks via a novel computational pipeline. The MetaSRA is available at metasra.biostat.wisc.edu via both a searchable web interface and bulk downloads. Software implementing our computational pipeline is available at http://github.com/deweylab/metasra-pipeline. cdewey@biostat.wisc.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
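The core normalization step can be sketched as resolving free-text metadata values to controlled ontology terms through a synonym table; MetaSRA's actual pipeline is far richer, and the table and term IDs below are invented for illustration:

```python
def normalize_term(raw_value, synonym_map):
    """Resolve a free-text metadata value to controlled ontology terms
    (schematic version of the mapping idea; the synonym table and term IDs
    are invented for illustration)."""
    return synonym_map.get(raw_value.strip().lower(), [])

# Hypothetical synonym table collapsing spelling variants onto one term
synonym_map = {
    "hepatocellular carcinoma": ["DOID:hypothetical_hcc"],
    "hcc": ["DOID:hypothetical_hcc"],
    "liver cancer": ["DOID:hypothetical_hcc"],
}
print(normalize_term("  HCC ", synonym_map))  # → ['DOID:hypothetical_hcc']
```

Collapsing synonyms and spelling variants onto shared ontology terms is what makes aggregate queries across SRA samples possible.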

  8. A high-throughput pipeline for the production of synthetic antibodies for analysis of ribonucleoprotein complexes

    PubMed Central

    Na, Hong; Laver, John D.; Jeon, Jouhyun; Singh, Fateh; Ancevicius, Kristin; Fan, Yujie; Cao, Wen Xi; Nie, Kun; Yang, Zhenglin; Luo, Hua; Wang, Miranda; Rissland, Olivia; Westwood, J. Timothy; Kim, Philip M.; Smibert, Craig A.; Lipshitz, Howard D.; Sidhu, Sachdev S.

    2016-01-01

    Post-transcriptional regulation of mRNAs plays an essential role in the control of gene expression. mRNAs are regulated in ribonucleoprotein (RNP) complexes by RNA-binding proteins (RBPs) along with associated protein and noncoding RNA (ncRNA) cofactors. A global understanding of post-transcriptional control in any cell type requires identification of the components of all of its RNP complexes. We have previously shown that these complexes can be purified by immunoprecipitation using anti-RBP synthetic antibodies produced by phage display. To develop the large number of synthetic antibodies required for a global analysis of RNP complex composition, we have established a pipeline that combines (i) a computationally aided strategy for design of antigens located outside of annotated domains, (ii) high-throughput antigen expression and purification in Escherichia coli, and (iii) high-throughput antibody selection and screening. Using this pipeline, we have produced 279 antibodies against 61 different protein components of Drosophila melanogaster RNPs. Together with those produced in our low-throughput efforts, we have a panel of 311 antibodies for 67 RNP complex proteins. Tests of a subset of our antibodies demonstrated that 89% immunoprecipitate their endogenous target from embryo lysate. This panel of antibodies will serve as a resource for global studies of RNP complexes in Drosophila. Furthermore, our high-throughput pipeline permits efficient production of synthetic antibodies against any large set of proteins. PMID:26847261

  9. Subsoil TPH contamination in two oil pipeline pumping stations and one pipeline right-of-way in north Mexico.

    PubMed

    Iturbe, R; Flores-Serrano, R M; Castro, A; Flores, C; Torres, L G

    2010-11-01

    This investigation deals with the characterization carried out in zones around two pipeline pumping stations and one pipeline right-of-way in the north of Mexico. In particular, contamination was evaluated in the following areas: (a) the south area of the separation ditch in the Avalos station, (b) the area between the separation ditch at the Avalos station, (c) km 194+420 of the Moctezuma station, and (d) km 286+900 in the Candelaria station. Results showed that only four samples had TPH values higher than the 2004 Mexican limit: AVA 1B, with 21,191 mg kg⁻¹; AVA 1C, with 9,348 mg kg⁻¹; AVA 2B, with 13,970 mg kg⁻¹; and MOC 2A, with 4,108 mg kg⁻¹. None of the sampled points showed the presence of PAHs at values higher than those found in the Mexican or American legislation. PAHs were detected in the range of 0.0004 to 13.05 mg kg⁻¹. It is suggested to implement surfactant soil washing as a remediation technique for the approximately 600 m³ that need to be treated. Copyright 2010 Elsevier Ltd. All rights reserved.

  10. The HTS barcode checker pipeline, a tool for automated detection of illegally traded species from high-throughput sequencing data.

    PubMed

    Lammers, Youri; Peelen, Tamara; Vos, Rutger A; Gravendeel, Barbara

    2014-02-06

    Mixtures of internationally traded organic substances can contain parts of species protected by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). These mixtures often raise the suspicion of border control and customs offices, which can lead to confiscation, for example in the case of Traditional Chinese medicines (TCMs). High-throughput sequencing of DNA barcoding markers obtained from such samples provides insight into species constituents of mixtures, but manual cross-referencing of results against the CITES appendices is labor intensive. Matching DNA barcodes against NCBI GenBank using BLAST may yield misleading results both as false positives, due to incorrectly annotated sequences, and false negatives, due to spurious taxonomic re-assignment. Incongruence between the taxonomies of CITES and NCBI GenBank can result in erroneous estimates of illegal trade. The HTS barcode checker pipeline is an application for automated processing of sets of 'next generation' barcode sequences to determine whether these contain DNA barcodes obtained from species listed on the CITES appendices. This analytical pipeline builds upon and extends existing open-source applications for BLAST matching against the NCBI GenBank reference database and for taxonomic name reconciliation. In a single operation, reads are converted into taxonomic identifications matched with names on the CITES appendices. By inclusion of a blacklist and additional names databases, the HTS barcode checker pipeline prevents false positives and resolves taxonomic heterogeneity. The HTS barcode checker pipeline can detect and correctly identify DNA barcodes of CITES-protected species from reads obtained from TCM samples in just a few minutes. The pipeline facilitates and improves molecular monitoring of trade in endangered species, and can aid in safeguarding these species from extinction in the wild. 
The HTS barcode checker pipeline is available at https://github.com/naturalis/HTS-barcode-checker.
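
    The cross-referencing step described above can be illustrated schematically: taxonomic identifications from BLAST are checked against a CITES name set after discarding hits to blacklisted (misannotated) reference accessions and reconciling synonyms. All names, accessions, and data structures here are illustrative assumptions about the approach, not code from the actual pipeline.

```python
def flag_cites_hits(blast_hits, cites_names, blacklist, synonyms=None):
    """Return the subset of hits whose (resolved) taxon is CITES-listed.

    blast_hits: list of (accession, taxon_name) pairs from BLAST matching.
    """
    synonyms = synonyms or {}
    flagged = []
    for accession, taxon in blast_hits:
        if accession in blacklist:             # skip known bad reference sequences
            continue
        resolved = synonyms.get(taxon, taxon)  # taxonomic name reconciliation
        if resolved in cites_names:
            flagged.append((accession, resolved))
    return flagged
```

    The blacklist suppresses false positives from incorrectly annotated GenBank records, while the synonym map resolves the CITES-versus-GenBank taxonomic incongruence the abstract mentions.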

  11. The HTS barcode checker pipeline, a tool for automated detection of illegally traded species from high-throughput sequencing data

    PubMed Central

    2014-01-01

    Background Mixtures of internationally traded organic substances can contain parts of species protected by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). These mixtures often raise the suspicion of border control and customs offices, which can lead to confiscation, for example in the case of Traditional Chinese medicines (TCMs). High-throughput sequencing of DNA barcoding markers obtained from such samples provides insight into species constituents of mixtures, but manual cross-referencing of results against the CITES appendices is labor intensive. Matching DNA barcodes against NCBI GenBank using BLAST may yield misleading results both as false positives, due to incorrectly annotated sequences, and false negatives, due to spurious taxonomic re-assignment. Incongruence between the taxonomies of CITES and NCBI GenBank can result in erroneous estimates of illegal trade. Results The HTS barcode checker pipeline is an application for automated processing of sets of 'next generation’ barcode sequences to determine whether these contain DNA barcodes obtained from species listed on the CITES appendices. This analytical pipeline builds upon and extends existing open-source applications for BLAST matching against the NCBI GenBank reference database and for taxonomic name reconciliation. In a single operation, reads are converted into taxonomic identifications matched with names on the CITES appendices. By inclusion of a blacklist and additional names databases, the HTS barcode checker pipeline prevents false positives and resolves taxonomic heterogeneity. Conclusions The HTS barcode checker pipeline can detect and correctly identify DNA barcodes of CITES-protected species from reads obtained from TCM samples in just a few minutes. The pipeline facilitates and improves molecular monitoring of trade in endangered species, and can aid in safeguarding these species from extinction in the wild. 
The HTS barcode checker pipeline is available at https://github.com/naturalis/HTS-barcode-checker. PMID:24502833

  12. MRI-compatible pipeline for three-dimensional MALDI imaging mass spectrometry using PAXgene fixation.

    PubMed

    Oetjen, Janina; Aichler, Michaela; Trede, Dennis; Strehlow, Jan; Berger, Judith; Heldmann, Stefan; Becker, Michael; Gottschalk, Michael; Kobarg, Jan Hendrik; Wirtz, Stefan; Schiffler, Stefan; Thiele, Herbert; Walch, Axel; Maass, Peter; Alexandrov, Theodore

    2013-09-02

    MALDI imaging mass spectrometry (MALDI-imaging) has emerged as a spatially resolved, label-free bioanalytical technique for direct analysis of biological samples and was recently introduced for the analysis of 3D tissue specimens. We present a new experimental and computational pipeline for molecular analysis of tissue specimens which integrates 3D MALDI-imaging, magnetic resonance imaging (MRI), and histological staining and microscopy, and evaluate the pipeline by applying it to the analysis of a mouse kidney. To ensure sample integrity and reproducible sectioning, we utilized PAXgene fixation and paraffin embedding and proved their compatibility with MRI. Altogether, 122 serial sections of the kidney were analyzed using MALDI-imaging, resulting in a 3D dataset of 200 GB comprising 2 million spectra. We show that elastic image registration compensates better for local distortions of tissue sections. The computational analysis of the 3D MALDI-imaging data was performed using our spatial segmentation pipeline, which determines regions of distinct molecular composition and finds m/z-values co-localized with these regions. For facilitated interpretation of the 3D distribution of ions, we evaluated isosurfaces, which provide a simplified visualization. We present the data in a multimodal fashion combining 3D MALDI-imaging with the MRI volume rendering and with light microscopic images of histologically stained sections. Our novel experimental and computational pipeline for 3D MALDI-imaging can be applied to clinical questions such as proteomic analysis of tumor morphologic heterogeneity. Examining the protein distribution as well as the drug distribution throughout an entire tumor using our pipeline will facilitate understanding of the molecular mechanisms of carcinogenesis. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. TH-AB-207A-05: A Fully-Automated Pipeline for Generating CT Images Across a Range of Doses and Reconstruction Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, S; Lo, P; Hoffman, J

    Purpose: To evaluate the robustness of CAD or quantitative imaging methods, the methods should be tested on a variety of cases and under a variety of image acquisition and reconstruction conditions that represent the heterogeneity encountered in clinical practice. The purpose of this work was to develop a fully automated pipeline for generating CT images that represent a wide range of dose and reconstruction conditions. Methods: The pipeline consists of three main modules: reduced-dose simulation, image reconstruction, and quantitative analysis. The first two modules can be operated in a completely automated fashion, using configuration files and running the modules in a batch queue. The input to the pipeline is raw projection CT data; these data are used to simulate different levels of dose reduction using a previously published algorithm. Filtered-backprojection reconstructions are then performed using FreeCT-wFBP, a freely available reconstruction software package for helical CT. We also added support for an in-house, model-based iterative reconstruction algorithm using iterative coordinate-descent optimization, which may be run in tandem with the more conventional reconstruction methods. The reduced-dose simulations and image reconstructions are controlled automatically by a single script, and they can be run in parallel on our research cluster. The pipeline was tested on phantom and lung screening datasets from a clinical scanner (Definition AS, Siemens Healthcare). Results: The images generated from our test datasets appeared to represent a realistic range of acquisition and reconstruction conditions that we would expect to find clinically. The time to generate images was approximately 30 minutes per dose/reconstruction combination on a hybrid CPU/GPU architecture.
Conclusion: The automated research pipeline promises to be a useful tool for either training or evaluating performance of quantitative imaging software such as classifiers and CAD algorithms across the range of acquisition and reconstruction parameters present in the clinical environment. Funding support: NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.
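
    The batch-automation idea described above (configuration files driving every dose x reconstruction combination) can be sketched as a toy job generator. The config keys, command template, and `run_recon` tool name are invented for illustration; the abstract does not publish the pipeline's actual configuration format.

```python
import itertools

def build_jobs(config):
    """Emit one queue command per dose-level / reconstruction-method combination."""
    jobs = []
    for dose, recon in itertools.product(config["dose_levels"],
                                         config["recon_methods"]):
        jobs.append(f"run_recon --input {config['raw_data']} "
                    f"--dose {dose} --method {recon}")
    return jobs
```

    With three dose levels and two reconstruction methods, a single raw projection dataset expands into six independent jobs that can run in parallel on a cluster, exactly the fan-out pattern the abstract describes.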

  14. Computational Biomarker Pipeline from Discovery to Clinical Implementation: Plasma Proteomic Biomarkers for Cardiac Transplantation

    PubMed Central

    Cohen Freue, Gabriela V.; Meredith, Anna; Smith, Derek; Bergman, Axel; Sasaki, Mayu; Lam, Karen K. Y.; Hollander, Zsuzsanna; Opushneva, Nina; Takhar, Mandeep; Lin, David; Wilson-McManus, Janet; Balshaw, Robert; Keown, Paul A.; Borchers, Christoph H.; McManus, Bruce; Ng, Raymond T.; McMaster, W. Robert

    2013-01-01

    Recent technical advances in the field of quantitative proteomics have stimulated a large number of biomarker discovery studies of various diseases, providing avenues for new treatments and diagnostics. However, inherent challenges have limited the successful translation of candidate biomarkers into clinical use, thus highlighting the need for a robust analytical methodology to transition from biomarker discovery to clinical implementation. We have developed an end-to-end computational proteomic pipeline for biomarker studies. At the discovery stage, the pipeline emphasizes different aspects of experimental design, appropriate statistical methodologies, and quality assessment of results. At the validation stage, the pipeline focuses on the migration of the results to a platform appropriate for external validation, and the development of a classifier score based on corroborated protein biomarkers. At the last stage towards clinical implementation, the main aims are to develop and validate an assay suitable for clinical deployment, and to calibrate the biomarker classifier using the developed assay. The proposed pipeline was applied to a biomarker study in cardiac transplantation aimed at developing a minimally invasive clinical test to monitor acute rejection. Starting with an untargeted screening of the human plasma proteome, five candidate biomarker proteins were identified. Rejection-regulated proteins reflect cellular and humoral immune responses, acute phase inflammatory pathways, and lipid metabolism biological processes. A multiplex multiple reaction monitoring mass-spectrometry (MRM-MS) assay was developed for the five candidate biomarkers and validated by enzyme-linked immunosorbent assays (ELISA) and immunonephelometric assays (INA). A classifier score based on corroborated proteins demonstrated that the developed MRM-MS assay provides an appropriate methodology for an external validation, which is still in progress. 
Plasma proteomic biomarkers of acute cardiac rejection may offer a relevant post-transplant monitoring tool to effectively guide clinical care. The proposed computational pipeline is highly applicable to a wide range of biomarker proteomic studies. PMID:23592955
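
    A classifier score of the kind described above is typically a weighted combination of (log-scaled) protein concentrations passed through a logistic function. The sketch below is generic: the protein names, weights, and use of a log transform are placeholders, since the study's actual coefficients and model are not given in the abstract.

```python
import math

def classifier_score(levels, weights, intercept=0.0):
    """levels/weights: dicts keyed by protein name (levels must be positive).

    Returns a probability-like score in (0, 1) from a logistic model.
    """
    z = intercept + sum(weights[p] * math.log(levels[p]) for p in weights)
    return 1.0 / (1.0 + math.exp(-z))
```

    Calibrating such a score on a new assay platform (the pipeline's last stage) amounts to re-estimating the weights and intercept from measurements made with the deployed assay.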

  15. Design and analysis of quantitative differential proteomics investigations using LC-MS technology.

    PubMed

    Bukhman, Yury V; Dharsee, Moyez; Ewing, Rob; Chu, Peter; Topaloglou, Thodoros; Le Bihan, Thierry; Goh, Theo; Duewel, Henry; Stewart, Ian I; Wisniewski, Jacek R; Ng, Nancy F

    2008-02-01

    Liquid chromatography-mass spectrometry (LC-MS)-based proteomics is becoming an increasingly important tool in characterizing the abundance of proteins in biological samples of various types and across conditions. Effects of disease or drug treatments on protein abundance are of particular interest for the characterization of biological processes and the identification of biomarkers. Although state-of-the-art instrumentation can make high-quality measurements and commercial software can process the data, the complexity of the technology and data presents challenges for bioinformaticians and statisticians. Here, we describe a pipeline for the analysis of quantitative LC-MS data. Key components of this pipeline include experimental design (sample pooling, blocking, and randomization) as well as deconvolution and alignment of mass chromatograms to generate a matrix of molecular abundance profiles. An important challenge in LC-MS-based quantitation is to accurately identify and assign abundance measurements to members of protein families. To address this issue, we implement a novel statistical method for inferring the relative abundance of related members of protein families from tryptic peptide intensities. This pipeline has been used to analyze quantitative LC-MS data from multiple biomarker discovery projects. We illustrate our pipeline here with examples from two of these studies, and show that the pipeline constitutes a complete, workable framework for LC-MS-based differential quantitation. Supplementary material is available at http://iec01.mie.utoronto.ca/~thodoros/Bukhman/.
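
    The family-inference problem mentioned above can be posed in a simplified form: each tryptic peptide's intensity is modeled as the sum of the abundances of the family members that contain it, and member abundances are recovered by ordinary least squares via the normal equations. This toy is an illustration of the problem structure, not the authors' statistical method, which the abstract does not fully specify.

```python
def solve_family_abundance(peptide_map, intensities, proteins):
    """peptide_map: peptide -> set of proteins containing it.
    intensities: peptide -> observed intensity. Returns protein -> abundance."""
    # Design matrix A (peptides x proteins) and response vector b.
    A = [[1.0 if p in peptide_map[pep] else 0.0 for p in proteins]
         for pep in intensities]
    b = [intensities[pep] for pep in intensities]
    n = len(proteins)
    # Normal equations (A^T A) x = A^T b, solved by Gauss-Jordan elimination.
    M = [[sum(A[k][i] * A[k][j] for k in range(len(b))) for j in range(n)]
         + [sum(A[k][i] * b[k] for k in range(len(b)))] for i in range(n)]
    for i in range(n):
        pivot = M[i][i]
        M[i] = [v / pivot for v in M[i]]
        for r in range(n):
            if r != i:
                M[r] = [rv - M[r][i] * iv for rv, iv in zip(M[r], M[i])]
    return {p: M[i][n] for i, p in enumerate(proteins)}
```

    For two proteins with one unique peptide each plus one shared peptide, the shared peptide's intensity is apportioned consistently with the unique-peptide evidence, which is the essence of assigning measurements within a protein family.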

  16. Optimized planning of in-service inspections of local flow-accelerated corrosion of pipeline elements used in the secondary coolant circuit of the VVER-440-based units at the Novovoronezh NPP

    NASA Astrophysics Data System (ADS)

    Tomarov, G. V.; Povarov, V. P.; Shipkov, A. A.; Gromov, A. F.; Budanov, V. A.; Golubeva, T. N.

    2015-03-01

    The efficient use of the information-analytical system on the flow-accelerated corrosion problem in organizing in-service examination of the metal of pipeline elements operating in the secondary coolant circuit of the VVER-440-based power units at the Novovoronezh NPP is considered. The principles used to select samples of pipeline elements when planning ultrasonic thickness measurements, aimed at timely detection of metal thinning due to flow-accelerated corrosion while reducing the total number of measurements in the condensate-feedwater path, are discussed.

  17. 40 CFR 761.257 - Determining the regulatory status of sampled pipe.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... COMMERCE, AND USE PROHIBITIONS Determining a PCB Concentration for Purposes of Abandonment or Disposal of Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe... disposal of a pipe segment that has been sampled, the sample results for that segment determines its PCB...

  18. Domain selection combined with improved cloning strategy for high throughput expression of higher eukaryotic proteins

    PubMed Central

    Chen, Yunjia; Qiu, Shihong; Luan, Chi-Hao; Luo, Ming

    2007-01-01

    Background Expression of higher eukaryotic genes as soluble, stable recombinant proteins is still a bottleneck step in biochemical and structural studies of novel proteins today. Correct identification of stable domains/fragments within the open reading frame (ORF), combined with proper cloning strategies, can greatly enhance the success rate when higher eukaryotic proteins are expressed as these domains/fragments. Furthermore, a HTP cloning pipeline incorporating bioinformatics domain/fragment selection methods will benefit studies of structural and functional genomics/proteomics. Results With bioinformatics tools, we developed a domain/domain boundary prediction (DDBP) method, which was trained on available experimental data. Combined with an improved cloning strategy, DDBP was applied to 57 proteins from C. elegans. Expression and purification results showed a 10-fold increase in the number of purified proteins obtained. Based on the DDBP method, the improved GATEWAY cloning strategy and a robotic platform, we constructed a high throughput (HTP) cloning pipeline, including PCR primer design, PCR, BP reaction, transformation, plating, colony picking and entry-clone extraction, which has been successfully applied to 90 C. elegans genes, 88 Brucella genes, and 188 human genes. More than 97% of the targeted genes were obtained as entry clones. This pipeline has a modular design and can adopt different operations for a variety of cloning/expression strategies. Conclusion The DDBP method and improved cloning strategy were satisfactory. The cloning pipeline, combined with our recombinant protein HTP expression pipeline and the crystal screening robots, constitutes a complete platform for structural genomics/proteomics. This platform will increase the success rate of purification and crystallization dramatically and promote the further advancement of structural genomics/proteomics. PMID:17663785
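
    The batch primer-design step in the pipeline above can be sketched schematically: for each selected domain/fragment, take its first and last bases as annealing regions and prepend recombination-site adapters. The adapter strings below are PLACEHOLDERS, not the real GATEWAY attB sequences, and the whole function is an illustration of the idea rather than the authors' software.

```python
COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

def design_primers(orf, start, end, fwd_adapter="NNNNATTB1",
                   rev_adapter="NNNNATTB2", anneal_len=18):
    """Primers for the sub-sequence orf[start:end] (0-based, end-exclusive)."""
    fragment = orf[start:end]
    fwd = fwd_adapter + fragment[:anneal_len]
    # Reverse primer anneals to the other strand: reverse-complement the 3' end.
    rev_anneal = "".join(COMP[b] for b in reversed(fragment[-anneal_len:]))
    rev = rev_adapter + rev_anneal
    return fwd, rev
```

    Running this over every DDBP-selected fragment for hundreds of genes is what turns fragment selection plus primer design into an automatable, robot-friendly batch operation.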

  19. Development of Biomarkers for Screening Hepatocellular Carcinoma Using Global Data Mining and Multiple Reaction Monitoring

    PubMed Central

    Yu, Su Jong; Jang, Eun Sun; Yu, Jiyoung; Cho, Geunhee; Yoon, Jung-Hwan; Kim, Youngsoo

    2013-01-01

    Hepatocellular carcinoma (HCC) is one of the most common and aggressive cancers and is associated with a poor survival rate. Clinically, the level of alpha-fetoprotein (AFP) has been used as a biomarker for the diagnosis of HCC. The discovery of useful biomarkers for HCC, focused solely on the proteome, has been difficult; thus, wide-ranging global data mining of genomic and proteomic databases from previous reports would be valuable in screening biomarker candidates. Further, multiple reaction monitoring (MRM), based on triple quadrupole mass spectrometry, has been effective with regard to high-throughput verification, complementing antibody-based verification pipelines. In this study, global data mining was performed using 5 types of HCC data to screen for candidate biomarker proteins: cDNA microarray, copy number variation, somatic mutation, epigenetic, and quantitative proteomics data. Next, we applied MRM to verify HCC candidate biomarkers in individual serum samples from 3 groups: a healthy control group, patients who have been diagnosed with HCC (Before HCC treatment group), and HCC patients who underwent locoregional therapy (After HCC treatment group). After determining the relative quantities of the candidate proteins by MRM, we compared their expression levels between the 3 groups, identifying 4 potential biomarkers: the actin-binding protein anillin (ANLN), filamin-B (FLNB), complement C4-A (C4A), and AFP. The combination of 2 markers (ANLN, FLNB) improved the discrimination of the before HCC treatment group from the healthy control group compared with AFP. We conclude that the combination of global data mining and MRM verification enhances the screening and verification of potential HCC biomarkers. This efficacious integrative strategy is applicable to the development of markers for cancer and other diseases. PMID:23717429

  20. Development of biomarkers for screening hepatocellular carcinoma using global data mining and multiple reaction monitoring.

    PubMed

    Kim, Hyunsoo; Kim, Kyunggon; Yu, Su Jong; Jang, Eun Sun; Yu, Jiyoung; Cho, Geunhee; Yoon, Jung-Hwan; Kim, Youngsoo

    2013-01-01

    Hepatocellular carcinoma (HCC) is one of the most common and aggressive cancers and is associated with a poor survival rate. Clinically, the level of alpha-fetoprotein (AFP) has been used as a biomarker for the diagnosis of HCC. The discovery of useful biomarkers for HCC, focused solely on the proteome, has been difficult; thus, wide-ranging global data mining of genomic and proteomic databases from previous reports would be valuable in screening biomarker candidates. Further, multiple reaction monitoring (MRM), based on triple quadrupole mass spectrometry, has been effective with regard to high-throughput verification, complementing antibody-based verification pipelines. In this study, global data mining was performed using 5 types of HCC data to screen for candidate biomarker proteins: cDNA microarray, copy number variation, somatic mutation, epigenetic, and quantitative proteomics data. Next, we applied MRM to verify HCC candidate biomarkers in individual serum samples from 3 groups: a healthy control group, patients who have been diagnosed with HCC (Before HCC treatment group), and HCC patients who underwent locoregional therapy (After HCC treatment group). After determining the relative quantities of the candidate proteins by MRM, we compared their expression levels between the 3 groups, identifying 4 potential biomarkers: the actin-binding protein anillin (ANLN), filamin-B (FLNB), complement C4-A (C4A), and AFP. The combination of 2 markers (ANLN, FLNB) improved the discrimination of the before HCC treatment group from the healthy control group compared with AFP. We conclude that the combination of global data mining and MRM verification enhances the screening and verification of potential HCC biomarkers. This efficacious integrative strategy is applicable to the development of markers for cancer and other diseases.
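
    The marker-combination comparison described above (ANLN + FLNB versus AFP alone) is usually quantified with a rank-based AUC. The sketch below shows how such a comparison could be computed; the combination rule, an unweighted sum of per-marker z-scores, is an illustrative placeholder, not the study's actual model.

```python
def auc(cases, controls):
    """Probability that a random case scores above a random control (ties = 0.5)."""
    wins = sum(1.0 if c > k else 0.5 if c == k else 0.0
               for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

def zscores(values):
    """Standardize a list of marker levels (population SD; guards constant input)."""
    m = sum(values) / len(values)
    s = (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5 or 1.0
    return [(v - m) / s for v in values]
```

    A combined score is then e.g. `[a + f for a, f in zip(zscores(anln), zscores(flnb))]`, and `auc(combined_cases, combined_controls)` can be compared against the single-marker AUC.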

  1. Characterization of bacterial community associated to biofilms of corroded oil pipelines from the southeast of Mexico.

    PubMed

    Neria-González, Isabel; Wang, En Tao; Ramírez, Florina; Romero, Juan M; Hernández-Rodríguez, César

    2006-06-01

    Microbial communities associated with biofilms promote corrosion of oil pipelines. Knowledge of the community structure of bacteria in the biofilms formed in oil pipelines is basic to understanding the complexity and mechanisms of metal corrosion. To assess bacterial diversity, biofilm samples were obtained from X52 steel coupons corroded after 40 days of exposure to normal operation and flow conditions. The biofilm samples were directly used to extract metagenomic DNA, which served as template to amplify the 16S ribosomal gene by PCR. The PCR products of the 16S ribosomal gene were also used as template for sulfate-reducing bacteria (SRB)-specific nested PCR, and both PCR products were used to construct gene libraries. The V3 region of the 16S rRNA gene was also amplified to analyse bacterial diversity by denaturing gradient gel electrophoresis (DGGE). The ribosomal library and DGGE profiles exhibited limited bacterial diversity, basically including Citrobacter spp., Enterobacter spp., and Halanaerobium spp., while Desulfovibrio alaskensis and a novel clade within the genus Desulfonatronovibrio were detected in the nested PCR library. The biofilm samples were also used for the isolation of SRB. Desulfovibrio alaskensis and Desulfovibrio capillatus, as well as some strains related to Citrobacter, were isolated. SRB constituted a very small proportion of the community, and Desulfovibrio spp. were the relatively abundant groups among the SRB. This is the first study directly exploring bacterial diversity in corrosive biofilms associated with steel pipelines subjected to normal operating conditions.

  2. Pipeline corridors through wetlands - impacts on plant communities: Deep Creek and Brandy Branch crossings, Nassau County, Florida

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shem, L.M.; Van Dyke, G.D.; Zimmerman, R.E.

    The goal of the Gas Research Institute Wetland Corridors Program is to document impacts of existing pipelines on the wetlands they traverse. To accomplish this goal, 12 existing wetland crossings were surveyed. These sites varied in elapsed time since pipeline construction, wetland type, pipeline installation techniques, and right-of-way (ROW) management practices. This report presents the results of surveys conducted July 14-18, 1992, at the Deep Creek and the Brandy Branch crossings of a pipeline installed during May 1991 in Nassau County, Florida. Both floodplains supported bottomland hardwood forests. The pipeline at the Deep Creek crossing was installed by means of horizontal directional drilling after the ROW had been clear-cut, while the pipeline at the Brandy Branch crossing was installed by means of conventional open trenching. Neither site was seeded or fertilized. At the time of sampling, a dense vegetative community, made up primarily of native perennial herbaceous species, occupied the ROW within the Deep Creek floodplain. The Brandy Branch ROW was vegetated by a less dense stand of primarily native perennial herbaceous plants. Plant diversity was also lower at the Brandy Branch crossing than at the Deep Creek crossing. The results suggest that some of the differences in plant communities are related to the more hydric conditions at the Brandy Branch floodplain.

  3. 40 CFR 761.247 - Sample site selection for pipe segment removal.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe Samples § 761.247 Sample site selection for pipe segment removal. (a) General. (1) Select the pipe... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Sample site selection for pipe segment...

  4. 40 CFR 761.247 - Sample site selection for pipe segment removal.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 31 2011-07-01 2011-07-01 false Sample site selection for pipe segment... Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe Samples § 761.247 Sample site selection for pipe segment removal. (a) General. (1) Select the pipe...

  5. A Next-Generation Sequencing Data Analysis Pipeline for Detecting Unknown Pathogens from Mixed Clinical Samples and Revealing Their Genetic Diversity.

    PubMed

    Gong, Yu-Nong; Chen, Guang-Wu; Yang, Shu-Li; Lee, Ching-Ju; Shih, Shin-Ru; Tsao, Kuo-Chien

    2016-01-01

    Forty-two cytopathic effect (CPE)-positive isolates were collected from 2008 to 2012. None of the isolates could be identified as a known viral pathogen by routine diagnostic assays. They were pooled into 8 groups of 5-6 isolates to reduce the sequencing cost. Next-generation sequencing (NGS) was conducted for each group of mixed samples, and the proposed data analysis pipeline was used to identify viral pathogens in these mixed samples. Polymerase chain reaction (PCR) or enzyme-linked immunosorbent assay (ELISA) was conducted individually for each of these 42 isolates depending on the predicted viral types in each group. Two isolates remained unknown after these tests. Moreover, iteration mapping was implemented for each of these 2 isolates and predicted human parechovirus (HPeV) in both. In summary, our NGS pipeline detected the following viruses among the 42 isolates: 29 human rhinoviruses (HRVs), 10 HPeVs, 1 human adenovirus (HAdV), 1 echovirus, and 1 rotavirus. We then focused on the 10 identified Taiwanese HPeVs because of their reported clinical significance over HRVs. Their genomes were assembled and their genetic diversity was explored. One novel 6-bp deletion was found in one HPeV-1 virus. In terms of nucleotide heterogeneity, 64 genetic variants were detected in these HPeVs using the mapped NGS reads. Most importantly, a recombination event was found between our HPeV-3 and a known HPeV-4 strain in the database. A similar event was detected in the other HPeV-3 strains in the same clade of the phylogenetic tree. These findings demonstrate that the proposed NGS data analysis pipeline identified unknown viruses from mixed clinical samples, revealed their genetic identity and variants, and characterized their genetic features in terms of viral evolution.
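
    The read-classification idea underlying such pathogen-detection pipelines can be illustrated with a deliberately simplified k-mer vote: assign each read to the candidate reference sharing the most k-mers with it. Real pipelines use BLAST or read mappers plus iterative re-mapping; this stand-in, including all sequences and names, is invented for illustration.

```python
def kmers(seq, k=8):
    """Set of all length-k substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def classify_read(read, references, k=8):
    """references: dict name -> sequence; returns best-matching name or None."""
    read_kmers = kmers(read, k)
    best, best_shared = None, 0
    for name, ref_seq in references.items():
        shared = len(read_kmers & kmers(ref_seq, k))
        if shared > best_shared:
            best, best_shared = name, shared
    return best
```

    A read with no k-mer in common with any reference stays unassigned (`None`), which mirrors how the two initially unidentifiable isolates above required the extra iteration-mapping step.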

  6. Electrochemical Study of Polymer and Ceramic-Based Nanocomposite Coatings for Corrosion Protection of Cast Iron Pipeline

    PubMed Central

    Ammar, Ameen Uddin; Shahid, Muhammad; Ahmed, Muhammad Khitab; Khan, Munawar; Khalid, Amir

    2018-01-01

    Coating is one of the most effective measures to protect metallic materials from corrosion. Various types of coatings such as metallic, ceramic and polymer coatings have been investigated in a quest to find durable coatings to resist electrochemical decay of metals in industrial applications. Many polymeric composite coatings have proved to be resistant against aggressive environments. Two major applications of ferrous materials are in marine environments and in the oil and gas industry. Given the corrosion behavior of ferrous-based materials in these aggressive applications, an effort has been made to protect the material by using polymeric and ceramic-based coatings reinforced with nanomaterials. Uncoated and coated cast iron pipeline material was investigated for corrosion resistance by employing EIS (electrochemical impedance spectroscopy) and electrochemical DC corrosion testing using the “three electrode system”. Cast iron pipeline samples were coated with Polyvinyl Alcohol/Polyaniline/FLG (Few Layers Graphene) and TiO2/GO (graphene oxide) nanocomposite by dip-coating. The EIS data indicated better capacitance and higher impedance values for coated samples compared with the bare metal, depicting enhanced corrosion resistance against seawater and “produce water” of a crude oil sample from a local oil rig; Tafel scans confirmed a significant decrease in corrosion rate of coated samples. PMID:29495339

  7. Electrochemical Study of Polymer and Ceramic-Based Nanocomposite Coatings for Corrosion Protection of Cast Iron Pipeline.

    PubMed

    Ammar, Ameen Uddin; Shahid, Muhammad; Ahmed, Muhammad Khitab; Khan, Munawar; Khalid, Amir; Khan, Zulfiqar Ahmad

    2018-02-25

    Coating is one of the most effective measures to protect metallic materials from corrosion. Various types of coatings such as metallic, ceramic and polymer coatings have been investigated in a quest to find durable coatings to resist electrochemical decay of metals in industrial applications. Many polymeric composite coatings have proved to be resistant against aggressive environments. Two major applications of ferrous materials are in marine environments and in the oil and gas industry. Given the corrosion behavior of ferrous-based materials in these aggressive applications, an effort has been made to protect the material by using polymeric and ceramic-based coatings reinforced with nanomaterials. Uncoated and coated cast iron pipeline material was investigated for corrosion resistance by employing EIS (electrochemical impedance spectroscopy) and electrochemical DC corrosion testing using the "three electrode system". Cast iron pipeline samples were coated with Polyvinyl Alcohol/Polyaniline/FLG (Few Layers Graphene) and TiO₂/GO (graphene oxide) nanocomposite by dip-coating. The EIS data indicated better capacitance and higher impedance values for coated samples compared with the bare metal, depicting enhanced corrosion resistance against seawater and "produce water" of a crude oil sample from a local oil rig; Tafel scans confirmed a significant decrease in corrosion rate of coated samples.
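
    As background to the Tafel and EIS measurements mentioned above, the standard route from measured data to a corrosion rate is the Stern-Geary relation followed by the ASTM G102-style unit conversion. The sketch below is textbook material, not code from this study; verify the constants against the standard before relying on them.

```python
def corrosion_rate_mm_per_year(rp_ohm_cm2, beta_a, beta_c, eq_weight, density):
    """rp in ohm*cm^2, Tafel slopes in V/decade, density in g/cm^3."""
    # Stern-Geary constant B (volts), from anodic and cathodic Tafel slopes.
    b = (beta_a * beta_c) / (2.303 * (beta_a + beta_c))
    # Corrosion current density from polarization resistance, in uA/cm^2.
    i_corr_uA = b / rp_ohm_cm2 * 1e6
    # Penetration rate in mm/year (3.27e-3 is the ASTM G102 K1 constant).
    return 3.27e-3 * i_corr_uA * eq_weight / density
```

    For iron-like parameters (Tafel slopes of 0.12 V/decade, equivalent weight 27.92, density 7.87 g/cm^3), a polarization resistance of 1000 ohm*cm^2 corresponds to roughly 0.3 mm/year; a coating that raises the measured impedance lowers this rate proportionally, which is the trend the EIS data above show.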

  8. From screen to structure with a harvestable microfluidic device.

    PubMed

    Stojanoff, Vivian; Jakoncic, Jean; Oren, Deena A; Nagarajan, V; Poulsen, Jens-Christian Navarro; Adams-Cioaba, Melanie A; Bergfors, Terese; Sommer, Morten O A

    2011-08-01

    Advances in automation have facilitated the widespread adoption of high-throughput vapour-diffusion methods for initial crystallization screening. However, for many proteins, screening thousands of crystallization conditions fails to yield crystals of sufficient quality for structural characterization. Here, the rates of crystal identification for thaumatin, catalase and myoglobin using microfluidic Crystal Former devices and sitting-drop vapour-diffusion plates are compared. It is shown that the Crystal Former results in a greater number of identified initial crystallization conditions compared with vapour diffusion. Furthermore, crystals of thaumatin and lysozyme obtained in the Crystal Former were used directly for structure determination both in situ and upon harvesting and cryocooling. On the basis of these results, a crystallization strategy is proposed that uses multiple methods with distinct kinetic trajectories through the protein phase diagram to increase the output of crystallization pipelines.

  9. An Automated Pipeline for Engineering Many-Enzyme Pathways: Computational Sequence Design, Pathway Expression-Flux Mapping, and Scalable Pathway Optimization.

    PubMed

    Halper, Sean M; Cetnar, Daniel P; Salis, Howard M

    2018-01-01

    Engineering many-enzyme metabolic pathways suffers from the design curse of dimensionality. There are an astronomical number of synonymous DNA sequence choices, though relatively few will express an evolutionarily robust, maximally productive pathway without metabolic bottlenecks. To solve this challenge, we have developed an integrated, automated computational-experimental pipeline that identifies a pathway's optimal DNA sequence without high-throughput screening or many cycles of design-build-test. The first step applies our Operon Calculator algorithm to design a host-specific, evolutionarily robust bacterial operon sequence with maximally tunable enzyme expression levels. The second step applies our RBS Library Calculator algorithm to systematically vary enzyme expression levels with the smallest-sized library. After characterizing a small number of constructed pathway variants, measurements are supplied to our Pathway Map Calculator algorithm, which then parameterizes a kinetic metabolic model that ultimately predicts the pathway's optimal enzyme expression levels and DNA sequences. Altogether, our algorithms provide the ability to efficiently map the pathway's sequence-expression-activity space and predict DNA sequences with desired metabolic fluxes. Here, we provide a step-by-step guide to applying the Pathway Optimization Pipeline to a desired multi-enzyme pathway in a bacterial host.
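    The expression-flux mapping idea above can be caricatured with a toy bottleneck model: steady-state flux is limited by the slowest enzyme-expression product, and a grid search finds the best expression allocation under a fixed budget. This is an illustrative sketch only; the model and all numbers are hypothetical and do not reproduce the authors' Operon Calculator, RBS Library Calculator, or Pathway Map Calculator algorithms.

```python
import itertools

# Toy bottleneck model (not the authors' kinetic model): the flux of a linear
# pathway is limited by its slowest step, v_i = kcat_i * expression_i.
def pathway_flux(expression, kcat):
    return min(k * e for k, e in zip(kcat, expression))

def optimize_expression(kcat, budget, step):
    """Grid-search expression levels summing to `budget` that maximize flux."""
    levels = [i * step for i in range(int(budget / step) + 1)]
    best, best_flux = None, -1.0
    for combo in itertools.product(levels, repeat=len(kcat)):
        if abs(sum(combo) - budget) > 1e-9:
            continue
        f = pathway_flux(combo, kcat)
        if f > best_flux:
            best, best_flux = combo, f
    return best, best_flux

# Slower enzymes should receive proportionally more of the expression budget.
kcat = (1.0, 2.0, 4.0)
alloc, flux = optimize_expression(kcat, budget=7.0, step=1.0)
```

With these toy numbers the optimum allocates expression inversely to enzyme speed, which is the balanced-pathway intuition the pipeline formalizes.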

  10. All-passive pixel super-resolution of time-stretch imaging

    PubMed Central

    Chan, Antony C. S.; Ng, Ho-Cheung; Bogaraju, Sharat C. V.; So, Hayden K. H.; Lam, Edmund Y.; Tsia, Kevin K.

    2017-01-01

    Based on image encoding in a serial-temporal format, optical time-stretch imaging entails a stringent requirement for a state-of-the-art fast data acquisition unit in order to preserve high image resolution at an ultrahigh frame rate, which has hampered the widespread use of the technology. Here, we propose a pixel super-resolution (pixel-SR) technique tailored for time-stretch imaging that preserves pixel resolution at a relaxed sampling rate. It harnesses the subpixel shifts between image frames inherently introduced by asynchronous digital sampling of the continuous time-stretch imaging process. Precise pixel registration is thus accomplished without any active opto-mechanical subpixel-shift control or other additional hardware. We present an experimental pixel-SR image reconstruction pipeline that restores high-resolution time-stretch images of microparticles and biological cells (phytoplankton) at a relaxed sampling rate (≈2–5 GSa/s), more than four times lower than the originally required readout rate (20 GSa/s), and is thus effective for high-throughput, label-free, morphology-based cellular classification down to single-cell precision. Upon integration with high-throughput image processing technology, this pixel-SR time-stretch imaging technique represents a cost-effective and practical solution for large-scale cell-based phenotypic screening in biomedical diagnosis and machine vision for quality control in manufacturing. PMID:28303936
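    The core idea, interleaving asynchronously acquired low-rate frames whose subpixel offsets have been registered, can be sketched in one dimension. This toy assumes the offsets are already known exactly, whereas the actual pipeline must estimate them by subpixel registration of the frames themselves.

```python
import numpy as np

# Toy 1-D pixel super-resolution by interleaving undersampled frames.
fine_n = 400
t_fine = np.arange(fine_n)
signal = np.sin(2 * np.pi * t_fine / 50.0)          # "true" high-res line scan

decim = 4                                            # relaxed sampling: 1/4 rate
offsets = [0, 1, 2, 3]                               # subpixel shifts (fine samples)
frames = [signal[off::decim] for off in offsets]     # four low-rate acquisitions

# Re-register: place each frame's samples back at their fine-grid positions.
recovered = np.empty(fine_n)
for off, frame in zip(offsets, frames):
    recovered[off::decim] = frame
```

Because the four offsets tile the fine grid, the interleaved result recovers the full-resolution line scan even though each individual frame is sampled four times too slowly.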

  11. Weight-of-evidence environmental risk assessment of dumped chemical weapons after WWII along the Nord-Stream gas pipeline in the Bornholm Deep.

    PubMed

    Sanderson, Hans; Fauser, Patrik; Thomsen, Marianne; Larsen, Jørn Bo

    2012-05-15

    In connection with the installation of two natural gas pipelines through the Baltic Sea between Russia and Germany, there has been concern regarding potential re-suspension of historically dumped chemical warfare agents (CWA) in a nearby dump site and the associated potential environmental risks. In 2008 and 2010, 192 sediment and 11 porewater samples were analyzed for CWA residues, both parent compounds and metabolites, along the pipeline corridor next to the dump site. Macrozoobenthos and background variables were also collected and compared to the observed CWA levels and predicted potential risks. Detection frequencies and levels of intact CWA were low, whereas CWA metabolites were found more frequently. Re-suspension of CWA residue-containing sediment from installation of the pipelines contributes marginally to the overall background CWA residue exposure and risk along the pipeline route. The multivariate weight-of-evidence analysis showed that physical and background parameters of the sediment were of higher importance for the biota than the observed CWA levels. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Nonlinear Optical Characterization of Membrane Protein Microcrystals and Nanocrystals.

    PubMed

    Newman, Justin A; Simpson, Garth J

    2016-01-01

    Nonlinear optical methods such as second harmonic generation (SHG) and two-photon excited UV fluorescence (TPE-UVF) imaging are promising approaches to address bottlenecks in the membrane protein structure determination pipeline. The general principles of SHG and TPE-UVF are discussed here along with instrument design considerations. Comparisons to conventional methods in high throughput crystallization condition screening and crystal quality assessment prior to X-ray diffraction are also discussed.

  13. High-Throughput Functional Validation of Progression Drivers in Lung Adenocarcinoma

    DTIC Science & Technology

    2013-09-01

    2) a novel molecular barcoding approach that facilitates cost-effective detection of driver events following in vitro and in vivo functional screens...aberration construction pipeline, which we named High-Throughput Mutagenesis and Molecular Barcoding (HiTMMoB; Fig.1). We have therefore been able...lentiviral vector specially constructed for this project. This vector is compatible with our flexible molecular barcoding technology (Fig. 1), thus each

  14. A Thoroughly Validated Virtual Screening Strategy for Discovery of Novel HDAC3 Inhibitors.

    PubMed

    Hu, Huabin; Xia, Jie; Wang, Dongmei; Wang, Xiang Simon; Wu, Song

    2017-01-18

    Histone deacetylase 3 (HDAC3) has been recently identified as a potential target for the treatment of cancer and other diseases, such as chronic inflammation, neurodegenerative diseases, and diabetes. Virtual screening (VS) is currently a routine technique for hit identification, but its success depends on rational development of VS strategies. To facilitate this process, we applied our previously released benchmarking dataset, i.e., MUBD-HDAC3 to the evaluation of structure-based VS (SBVS) and ligand-based VS (LBVS) combinatorial approaches. We have identified FRED (Chemgauss4) docking against a structural model of HDAC3, i.e., SAHA-3 generated by a computationally inexpensive "flexible docking", as the best SBVS approach and a common feature pharmacophore model, i.e., Hypo1 generated by Catalyst/HipHop as the optimal model for LBVS. We then developed a pipeline that was composed of Hypo1, FRED (Chemgauss4), and SAHA-3 sequentially, and demonstrated that it was superior to other combinations in terms of ligand enrichment. In summary, we present the first highly-validated, rationally-designed VS strategy specific to HDAC3 inhibitor discovery. The constructed pipeline is publicly accessible for the scientific community to identify novel HDAC3 inhibitors in a time-efficient and cost-effective way.

  15. A Thoroughly Validated Virtual Screening Strategy for Discovery of Novel HDAC3 Inhibitors

    PubMed Central

    Hu, Huabin; Xia, Jie; Wang, Dongmei; Wang, Xiang Simon; Wu, Song

    2017-01-01

    Histone deacetylase 3 (HDAC3) has been recently identified as a potential target for the treatment of cancer and other diseases, such as chronic inflammation, neurodegenerative diseases, and diabetes. Virtual screening (VS) is currently a routine technique for hit identification, but its success depends on rational development of VS strategies. To facilitate this process, we applied our previously released benchmarking dataset, i.e., MUBD-HDAC3 to the evaluation of structure-based VS (SBVS) and ligand-based VS (LBVS) combinatorial approaches. We have identified FRED (Chemgauss4) docking against a structural model of HDAC3, i.e., SAHA-3 generated by a computationally inexpensive “flexible docking”, as the best SBVS approach and a common feature pharmacophore model, i.e., Hypo1 generated by Catalyst/HipHop as the optimal model for LBVS. We then developed a pipeline that was composed of Hypo1, FRED (Chemgauss4), and SAHA-3 sequentially, and demonstrated that it was superior to other combinations in terms of ligand enrichment. In summary, we present the first highly-validated, rationally-designed VS strategy specific to HDAC3 inhibitor discovery. The constructed pipeline is publicly accessible for the scientific community to identify novel HDAC3 inhibitors in a time-efficient and cost-effective way. PMID:28106794

  16. [Analysis of different pipe corrosion by ESEM and bacteria identification by API in pilot distribution network].

    PubMed

    Wu, Qing; Zhao, Xinhua; Yu, Qing; Li, Jun

    2008-07-01

    To understand the corrosion of water supply pipelines of different materials and the bacteria present in drinking water and biofilms, a pilot distribution network was built and water quality was examined for commonly used galvanized iron, PPR and ABS plastic pipes by ESEM (environmental scanning electron microscopy). Bacteria in drinking water and biofilms were identified with the API Bacteria Identification System 10S and 20E (Biomerieux, France), and their pathogenicity was assessed. The galvanized iron pipes were seriously corroded, whereas only thin layers formed on the inner surfaces of the PPR and ABS plastic pipes. Of 10 bacteria from water samples identified by API 10S, 7 were opportunistic pathogens. Of 21 bacteria from water and biofilm samples identified by API 20E, 5 were pathogens, 11 were opportunistic pathogens, and 5 had no reported pathogenicity in humans. The bacteriological quality of the drinking water distribution network was poor. Most bacteria in the drinking water and in biofilms on the inner pipe surfaces were opportunistic pathogens, which could cause serious water supply accidents if the bacteria were to spread under suitable conditions. With respect to pipe material, old pipelines should be replaced with pipes of newer materials.

  17. APRICOT: an integrated computational pipeline for the sequence-based identification and characterization of RNA-binding proteins.

    PubMed

    Sharan, Malvika; Förstner, Konrad U; Eulalio, Ana; Vogel, Jörg

    2017-06-20

    RNA-binding proteins (RBPs) have been established as core components of several post-transcriptional gene regulation mechanisms. Experimental techniques such as cross-linking and co-immunoprecipitation have enabled the large-scale identification of RBPs, RNA-binding domains (RBDs) and their regulatory roles in eukaryotic species such as human and yeast. In contrast, our knowledge of the number and potential diversity of RBPs in bacteria is poorer, owing to the technical challenges associated with existing global screening approaches. We introduce APRICOT, a computational pipeline for the sequence-based identification and characterization of proteins using RBDs known from experimental studies. The pipeline identifies functional motifs in protein sequences using position-specific scoring matrices and Hidden Markov Models of the functional domains, and statistically scores them based on a series of sequence-based features. Subsequently, APRICOT identifies putative RBPs and characterizes them by several biological properties. Here we demonstrate the application and adaptability of the pipeline on large-scale protein sets, including the bacterial proteome of Escherichia coli. APRICOT showed better performance on various datasets than other existing tools for the sequence-based prediction of RBPs, achieving an average sensitivity and specificity of 0.90 and 0.91, respectively. The command-line tool and its documentation are available at https://pypi.python.org/pypi/bio-apricot. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
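    A position-specific scoring matrix scan of the kind the pipeline builds on can be illustrated in a few lines. The motif, alphabet and sequence below are invented for illustration; APRICOT itself scores full domain profiles against curated domain databases, not this toy matrix.

```python
import math

ALPHABET = "ACGU"
# Hypothetical 3-column motif that strongly prefers "GUA"; counts per column.
counts = [
    {"A": 1, "C": 1, "G": 7, "U": 1},
    {"A": 1, "C": 1, "G": 1, "U": 7},
    {"A": 7, "C": 1, "G": 1, "U": 1},
]
BACKGROUND = 0.25  # uniform background over the 4-letter alphabet

def pssm(counts):
    """Convert per-column counts to log2-odds scores vs. the background."""
    cols = []
    for col in counts:
        total = sum(col.values())
        cols.append({a: math.log2((col[a] / total) / BACKGROUND) for a in ALPHABET})
    return cols

def best_hit(seq, matrix):
    """Slide the PSSM along seq; return (best_score, start_position)."""
    w = len(matrix)
    scored = [(sum(matrix[i][seq[p + i]] for i in range(w)), p)
              for p in range(len(seq) - w + 1)]
    return max(scored)

score, pos = best_hit("CCGUACC", pssm(counts))
```

The window starting at the "GUA" occurrence scores highest, which is all a motif scan does at heart: rank windows by summed log-odds and report hits above a threshold.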

  18. Comparison of software packages for detecting differential expression in RNA-seq studies

    PubMed Central

    Seyednasrollah, Fatemeh; Laiho, Asta

    2015-01-01

    RNA-sequencing (RNA-seq) has rapidly become a popular tool to characterize transcriptomes. A fundamental research problem in many RNA-seq studies is the identification of reliable molecular markers that show differential expression between distinct sample groups. Together with the growing popularity of RNA-seq, a number of data analysis methods and pipelines have already been developed for this task. Currently, however, there is no clear consensus about the best practices yet, which makes the choice of an appropriate method a daunting task especially for a basic user without a strong statistical or computational background. To assist the choice, we perform here a systematic comparison of eight widely used software packages and pipelines for detecting differential expression between sample groups in a practical research setting and provide general guidelines for choosing a robust pipeline. In general, our results demonstrate how the data analysis tool utilized can markedly affect the outcome of the data analysis, highlighting the importance of this choice. PMID:24300110

  19. Comparison of software packages for detecting differential expression in RNA-seq studies.

    PubMed

    Seyednasrollah, Fatemeh; Laiho, Asta; Elo, Laura L

    2015-01-01

    RNA-sequencing (RNA-seq) has rapidly become a popular tool to characterize transcriptomes. A fundamental research problem in many RNA-seq studies is the identification of reliable molecular markers that show differential expression between distinct sample groups. Together with the growing popularity of RNA-seq, a number of data analysis methods and pipelines have already been developed for this task. Currently, however, there is no clear consensus about the best practices yet, which makes the choice of an appropriate method a daunting task especially for a basic user without a strong statistical or computational background. To assist the choice, we perform here a systematic comparison of eight widely used software packages and pipelines for detecting differential expression between sample groups in a practical research setting and provide general guidelines for choosing a robust pipeline. In general, our results demonstrate how the data analysis tool utilized can markedly affect the outcome of the data analysis, highlighting the importance of this choice. © The Author 2013. Published by Oxford University Press.

  20. Software/hardware optimization for attenuation-based microtomography using SR at PETRA III (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Beckmann, Felix

    2016-10-01

    The Helmholtz-Zentrum Geesthacht, Germany, operates the user experiments for microtomography at the beamlines P05 and P07 using synchrotron radiation produced in the storage ring PETRA III at DESY, Hamburg, Germany. In recent years, the software pipeline and sample-changing hardware for performing high-throughput experiments were developed. In this talk, the current status of the beamlines will be given. Furthermore, the optimisation and automatisation of scanning techniques will be presented. These are required to scan samples that are larger than the field of view defined by the X-ray beam. Their integration into an optimized reconstruction pipeline will be shown.

  1. Remaining Sites Verification Package for the 100-F-26:15 Miscellaneous Pipelines Associated with the 132-F-6, 1608-F Waste Water Pumping Station, Waste Site Reclassification Form 2007-031

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    L. M. Dittmer

    2008-03-18

    The 100-F-26:15 waste site consisted of the remnant portions of underground process effluent and floor drain pipelines that originated at the 105-F Reactor. In accordance with this evaluation, the verification sampling results support a reclassification of this site to Interim Closed Out. The results of verification sampling show that residual contaminant concentrations do not preclude any future uses and allow for unrestricted use of shallow zone soils. The results also demonstrate that residual contaminant concentrations are protective of groundwater and the Columbia River.

  2. Metabolomic and Metagenomic Analysis of Two Crude Oil Production Pipelines Experiencing Differential Rates of Corrosion

    PubMed Central

    Bonifay, Vincent; Wawrik, Boris; Sunner, Jan; Snodgrass, Emily C.; Aydin, Egemen; Duncan, Kathleen E.; Callaghan, Amy V.; Oldham, Athenia; Liengen, Turid; Beech, Iwona

    2017-01-01

    Corrosion processes in two North Sea oil production pipelines were studied by analyzing pig envelope samples via metagenomic and metabolomic techniques. Both production systems have similar physico-chemical properties and injection waters are treated with nitrate, but one pipeline experiences severe corrosion and the other does not. Early and late pigging material was collected to gain insight into the potential causes for differential corrosion rates. Metabolites were extracted and analyzed via ultra-high performance liquid chromatography/high-resolution mass spectrometry with electrospray ionization (ESI) in both positive and negative ion modes. Metabolites were analyzed by comparison with standards indicative of aerobic and anaerobic hydrocarbon metabolism and by comparison to predicted masses for KEGG metabolites. Microbial community structure was analyzed via 16S rRNA gene qPCR, sequencing of 16S PCR products, and MiSeq Illumina shotgun sequencing of community DNA. Metagenomic data were used to reconstruct the full-length 16S rRNA genes and genomes of dominant microorganisms. Sequence data were also interrogated via KEGG annotation and for the presence of genes related to terminal electron accepting (TEA) processes as well as aerobic and anaerobic hydrocarbon degradation. Significant and distinct differences were observed when comparing the ‘high corrosion’ (HC) and the ‘low corrosion’ (LC) pipeline systems, especially with respect to the TEA utilization potential. The HC samples were dominated by sulfate-reducing bacteria (SRB) and archaea known for their ability to utilize simple carbon substrates, whereas LC samples were dominated by pseudomonads with the genetic potential for denitrification and aerobic hydrocarbon degradation. The frequency of aerobic hydrocarbon degradation genes was low in the HC system, and anaerobic hydrocarbon degradation genes were not detected in either pipeline. 
This is in contrast with metabolite analysis, which demonstrated the presence of several succinic acids in HC samples that are diagnostic of anaerobic hydrocarbon metabolism. Identifiable aerobic metabolites were confined to the LC samples, consistent with the metagenomic data. Overall, these data suggest that corrosion management might benefit from a more refined understanding of microbial community resilience in the face of disturbances such as nitrate treatment or pigging, which frequently prove insufficient to alter community structure toward a stable, less-corrosive assemblage. PMID:28197141

  3. A novel multi-scale adaptive sampling-based approach for energy saving in leak detection for WSN-based water pipelines

    NASA Astrophysics Data System (ADS)

    Saqib, Najam us; Faizan Mysorewala, Muhammad; Cheded, Lahouari

    2017-12-01

    In this paper, we propose a novel monitoring strategy for a wireless sensor network (WSN)-based water pipeline network. Our strategy uses a multi-pronged approach to reduce energy consumption, based on two types of vibration sensors and pressure sensors, all with different energy levels, and a hierarchical adaptive sampling mechanism to determine the sampling frequency. The sampling rate of the sensors is adjusted according to the bandwidth of the vibration signal being monitored, using a wavelet-based adaptive thresholding scheme that calculates the new sampling frequency for the following cycle. In this multimodal sensing scheme, a duty-cycling approach is used for all sensors to reduce the number of sampling instances: the high-energy, high-precision (HE-HP) vibration sensors have low duty cycles, and the low-energy, low-precision (LE-LP) vibration sensors have high duty cycles. The low-duty-cycle (HE-HP) vibration sensor adjusts the sampling frequency of the high-duty-cycle (LE-LP) vibration sensor. The simulated test bed considered here consists of a water pipeline network that uses pressure and vibration sensors, the latter with different energy consumptions and precision levels, at various locations in the network; this makes the scheme all the more useful for conserving energy during extended monitoring. It is shown that the novel features of the proposed scheme achieve a significant reduction in energy consumption and that a leak is effectively detected by the sensor node closest to it. Finally, both the total energy consumed by monitoring and the time for a WSN node to detect a leak are computed, demonstrating the superiority of the proposed hierarchical adaptive sampling algorithm over a non-adaptive sampling approach.
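    The rate-adaptation step can be sketched as follows, with an FFT energy threshold substituted for the paper's wavelet-based thresholding (a simplification, not the authors' algorithm): estimate the occupied bandwidth of the last monitoring window, then set the next cycle's sampling frequency slightly above its Nyquist rate.

```python
import numpy as np

def next_sampling_rate(samples, fs, energy_frac=0.99, margin=1.25):
    """Pick the next cycle's sampling rate from an FFT bandwidth estimate:
    smallest frequency containing `energy_frac` of the spectral energy,
    doubled (Nyquist) and padded by a safety margin."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    cumulative = np.cumsum(spectrum) / np.sum(spectrum)
    bandwidth = freqs[np.searchsorted(cumulative, energy_frac)]
    return margin * 2.0 * bandwidth

# Hypothetical monitoring window: a narrowband 20 Hz pipe vibration
# currently oversampled at 1 kHz.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
vibration = np.sin(2 * np.pi * 20 * t)
fs_next = next_sampling_rate(vibration, fs)
```

For this low-bandwidth signal the estimator recommends dropping from 1000 Sa/s to 50 Sa/s for the following cycle, which is exactly the energy-saving lever the hierarchical scheme exploits.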

  4. VizieR Online Data Catalog: Galaxy structural parameters from 3.6um images (Kim+, 2014)

    NASA Astrophysics Data System (ADS)

    Kim, T.; Gadotti, D. A.; Sheth, K.; Athanassoula, E.; Bosma, A.; Lee, M. G.; Madore, B. F.; Elmegreen, B.; Knapen, J. H.; Zaritsky, D.; Ho, L. C.; Comeron, S.; Holwerda, B.; Hinz, J. L.; Munoz-Mateos, J.-C.; Cisternas, M.; Erroz-Ferrer, S.; Buta, R.; Laurikainen, E.; Salo, H.; Laine, J.; Menendez-Delmestre, K.; Regan, M. W.; de Swardt, B.; Gil de Paz, A.; Seibert, M.; Mizusawa, T.

    2016-03-01

    We select our samples from the Spitzer Survey of Stellar Structure in Galaxies (S4G; Sheth et al. 2010, cat. J/PASP/122/1397). We chose galaxies that had already been processed by the first three S4G pipelines (Pipelines 1, 2, and 3; Sheth et al. 2010, cat. J/PASP/122/1397) at the moment of this study (2011 November). In brief, Pipeline 1 processes images and provides science-ready images. Pipeline 2 prepares mask images (to exclude foreground and background objects) for further analysis, and Pipeline 3 derives surface brightness profiles and total magnitudes using IRAF ellipse fits. We excluded highly inclined (b/a<0.5), significantly disturbed, very faint, or irregular galaxies. Galaxies were also discarded if their images are unsuitable for decomposition due to contamination such as a bright foreground star or significant stray light from stars in the IRAC scattering zones. Then we chose barred galaxies from all Hubble types from S0 to Sdm using the numerical Hubble types from Hyperleda (Paturel et al. 2003, cat. VII/237, VII/238). The assessment of the presence of a bar was done visually by K. Sheth, T. Kim, and B. de Swardt. Later, we also confirmed the presence of a bar by checking the mid-infrared classification (Buta et al. 2010, cat. J/ApJS/190/147; Buta et al. 2015, cat. J/ApJS/217/32). A total of 144 barred galaxies were selected that satisfy our criteria, and we list our sample in Table1 with basic information. Table2 presents the measures of structural parameters for all galaxies in the sample obtained from the 2D model fit with the BUDDA (BUlge/Disk Decomposition Analysis; de Souza et al., 2004ApJS..153..411D; Gadotti, 2008MNRAS.384..420G) code. (2 data files).

  5. Arctic ecosystem reaction on permafrost melting as a result of 40 years anthropogenic impact

    NASA Astrophysics Data System (ADS)

    Petrzhik, Nataliya; Matyshak, George; Myshonkov, Alexander; Petrov, Dmitry

    2017-04-01

    Arctic ecosystems are sensitive indicators of environmental change. Increasing anthropogenic impact perturbs the balance of natural ecosystems, with the most significant changes occurring in soil and vegetation. Studying the response of permafrost ecosystems is essential, as permafrost underlies a quarter of the world and more than half of Russia. The oil and gas industry has grown in Russia since 1960. Hydrocarbons can be transported by pipeline only in a heated state, so the main effect of pipeline construction and operation is the heating of soil and the degradation of permafrost. The goal of this study was to estimate the response of landscapes and permafrost ecosystems in the north of West Siberia to the cumulative action of pipelines. The main objective was to investigate the warming impact on the properties and function of soil along pipelines in the permafrost zone. The studied objects were the vegetation and soil cover of ecosystems in the north of Western Siberia affected by pipelines. Areas with the maximum effect of heated pipelines were selected by remote sensing. Ten 50 m transects, with sampling points every 5 m from the pipeline to the undisturbed background area, were established in three different natural zones. The soil and vegetation cover was described and sampled; the active layer of the soil and the thickness of the organic horizon were measured; the hydrothermal regime of the soil in the 0-10 cm layer was recorded; and greenhouse gas emission was studied. In the laboratory, the content of labile carbon and microbial biomass carbon and the basal and substrate-induced respiration were measured. The main effect of the pipelines is active degradation of permafrost and changes in hydrothermal conditions. From background to disturbed areas, the following changes were observed: the depth of thawing increases 10-fold; soil temperature rises from 4 to 10.5 °C in taiga, from 4.5 to 13.5 °C in tundra, and from 5.5 to 12 °C in forest-tundra; soil moisture falls from 20% to 10% in the tundra and forest-tundra and from 45.5% to 7.7% in taiga. As a result, we established a significant transformation of ecosystems along pipelines, primarily due to a change in the hydrothermal regime of soils caused by permafrost degradation. Not only the functioning and properties of the soil change, but also the species composition of the vegetation: its biomass increases and woody vegetation expands along pipelines in the north.

  6. viGEN: An Open Source Pipeline for the Detection and Quantification of Viral RNA in Human Tumors.

    PubMed

    Bhuvaneshwar, Krithika; Song, Lei; Madhavan, Subha; Gusev, Yuriy

    2018-01-01

    An estimated 17% of cancers worldwide are associated with infectious causes. The extent and biological significance of viral presence/infection in actual tumor samples is generally unknown but can be measured using human transcriptome (RNA-seq) data from tumor samples. We present viGEN, an open source bioinformatics pipeline that allows not only the detection and quantification of viral RNA, but also of variants in the viral transcripts. The pipeline includes four major modules: the first aligns and filters out human RNA sequences; the second maps and counts the remaining unaligned reads against the reference genomes of all known, sequenced human viruses; the third quantifies read counts at the individual viral-gene level, allowing downstream differential expression analysis of viral genes between case and control groups; and the fourth calls variants in these viruses. To the best of our knowledge, no publicly available pipeline or package provides this type of complete analysis in one open source package. In this paper, we applied the viGEN pipeline to two case studies. We first demonstrate the working of our pipeline on a large public dataset, the TCGA cervical cancer cohort. In the second case study, we performed an in-depth analysis on a small focused study of TCGA liver cancer patients. In the latter cohort, we performed viral-gene quantification, viral-variant extraction and survival analysis. This allowed us to find differentially expressed viral transcripts and viral variants between the groups of patients, and to connect them to clinical outcome. From our analyses, we show that we were able to successfully detect human papillomavirus among the TCGA cervical cancer patients. We compared the viGEN pipeline with two metagenomics tools and demonstrate similar sensitivity/specificity. We were also able to quantify viral transcripts and extract viral variants using the liver cancer dataset. 
The results presented corresponded with published literature in terms of rate of detection and the impact of several known variants of the HBV genome. This pipeline is generalizable and can be used to provide novel biological insights into microbial infections in complex diseases and tumorigenesis. Our viral pipeline could be used in conjunction with additional types of immuno-oncology analysis based on RNA-seq data of host RNA for cancer immunology applications. The source code, with example data and a tutorial, is available at: https://github.com/ICBI/viGEN/.
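    The first two modules of the pipeline can be caricatured with exact-substring matching in place of a real aligner. All sequences below are invented, and matching a read with `in` is a stand-in for genome alignment, used only to show the filter-then-count flow.

```python
# Toy sketch of the first two viGEN modules (the real pipeline uses genome
# aligners): discard reads matching a host reference, then count the rest
# against viral reference sequences. All references and reads are hypothetical.
host_ref = "ACGTACGTTTGACCA"
viral_refs = {"HPV16-like": "GGGCCCTTAGGCAT", "HBV-like": "TTTAAACCCGGGTT"}

def classify_reads(reads):
    counts = {name: 0 for name in viral_refs}
    unmapped = []
    for read in reads:
        if read in host_ref:                     # module 1: filter host reads
            continue
        for name, ref in viral_refs.items():     # module 2: map to viruses
            if read in ref:
                counts[name] += 1
                break
        else:
            unmapped.append(read)
    return counts, unmapped

reads = ["ACGTACGT", "CCCTTAGG", "AAACCCGG", "GGGGGGGG"]
counts, unmapped = classify_reads(reads)
```

The per-virus counts produced by the second module are what the third module would aggregate per viral gene for differential expression analysis.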

  7. Untargeted UPLC-MS Profiling Pipeline to Expand Tissue Metabolome Coverage: Application to Cardiovascular Disease

    PubMed Central

    2015-01-01

    Metabolic profiling studies aim to achieve broad metabolome coverage in specific biological samples. However, wide metabolome coverage has proven difficult to achieve, mostly because of the diverse physicochemical properties of small molecules, obligating analysts to seek multiplatform and multimethod approaches. Challenges are even greater when it comes to applications to tissue samples, where tissue lysis and metabolite extraction can induce significant systematic variation in composition. We have developed a pipeline for obtaining the aqueous and organic compounds from diseased arterial tissue using two consecutive extractions, followed by a different untargeted UPLC-MS analysis method for each extract. Methods were rationally chosen and optimized to address the different physicochemical properties of each extract: hydrophilic interaction liquid chromatography (HILIC) for the aqueous extract and reversed-phase chromatography for the organic. This pipeline can be generic for tissue analysis as demonstrated by applications to different tissue types. The experimental setup and fast turnaround time of the two methods contributed toward obtaining highly reproducible features with exceptional chromatographic performance (CV % < 0.5%), making this pipeline suitable for metabolic profiling applications. We structurally assigned 226 metabolites from a range of chemical classes (e.g., carnitines, α-amino acids, purines, pyrimidines, phospholipids, sphingolipids, free fatty acids, and glycerolipids) which were mapped to their corresponding pathways, biological functions and known disease mechanisms. The combination of the two untargeted UPLC-MS methods showed high metabolite complementarity. We demonstrate the application of this pipeline to cardiovascular disease, where we show that the analyzed diseased groups (n = 120) of arterial tissue could be distinguished based on their metabolic profiles. PMID:25664760
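    The reproducibility criterion quoted above (feature CV% across replicate injections) is straightforward to compute; a minimal sketch with hypothetical feature intensities, using a looser threshold than the sub-0.5% figure reported:

```python
import statistics

# Reproducibility filter of the kind used in untargeted profiling QC: keep
# features whose coefficient of variation across replicate injections is
# below a threshold. Feature names and intensities are hypothetical.
def cv_percent(values):
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

features = {
    "m/z 104.1070": [1002.0, 1001.0, 1003.0],   # stable feature (CV ~0.1%)
    "m/z 520.3398": [800.0, 1200.0, 400.0],     # irreproducible feature
}
kept = {name: vals for name, vals in features.items()
        if cv_percent(vals) < 5.0}
```

Only features that pass such a replicate-CV filter are carried forward to structural assignment and pathway mapping.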

  8. HTS-Net: An integrated regulome-interactome approach for establishing network regulation models in high-throughput screenings

    PubMed Central

    Rioualen, Claire; Da Costa, Quentin; Chetrit, Bernard; Charafe-Jauffret, Emmanuelle; Ginestier, Christophe

    2017-01-01

    High-throughput RNAi screenings (HTS) make it possible to quantify the impact of knocking down each gene on any particular function, from virus-host interactions to cell differentiation. However, few functional analysis tools have been developed specifically for RNAi analyses. HTS-Net, a network-based analysis program, was developed to identify gene regulatory modules impacted in high-throughput screenings by integrating transcription factor-target gene interaction data (regulome) and protein-protein interaction networks (interactome) on top of screening z-scores. HTS-Net produces exhaustive HTML reports for results navigation and exploration. HTS-Net is a new pipeline for RNA interference screening analyses that performs better than simple z-score gene rankings, re-prioritizing genes by placing them back in their biological context, as shown by the three studies we reanalyzed. Formatted input data for the three studied datasets, source code and a web site for testing the system are available from the companion web site at http://htsnet.marseille.inserm.fr/. We also compared our program with existing algorithms (CARD and HotNet2). PMID:28949986
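
The re-prioritization idea (a gene's screening z-score is tempered or reinforced by its network context) can be illustrated with a toy scorer. This is not the HTS-Net algorithm, only a sketch under the assumption of a simple weighted blend of a gene's own z-score with the average score of its interactome neighbors:

```python
def network_score(z, edges, alpha=0.5):
    """Blend each gene's own screening z-score with the average
    z-score of its neighbors in an interaction network.

    z: dict gene -> z-score; edges: list of (gene, gene) pairs.
    alpha weights the gene's own score; isolated genes get no
    network support (neighbor term 0), which penalizes them.
    """
    nbrs = {}
    for a, b in edges:
        nbrs.setdefault(a, []).append(b)
        nbrs.setdefault(b, []).append(a)
    scores = {}
    for g, zg in z.items():
        neigh = [z[n] for n in nbrs.get(g, []) if n in z]
        zn = sum(neigh) / len(neigh) if neigh else 0.0
        scores[g] = alpha * zg + (1 - alpha) * zn
    return scores

# Hypothetical screen: D has the top raw z-score but no interactions
z = {"A": 0.5, "B": 2.5, "C": 2.0, "D": 3.0}
edges = [("A", "B"), ("A", "C"), ("B", "C")]
s = network_score(z, edges)
ranked = sorted(s, key=s.get, reverse=True)
```

Note how gene D, top-ranked by raw z-score, drops below B and C once its lack of network support is taken into account.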

  9. 40 CFR 761.243 - Standard wipe sample method and size.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., AND USE PROHIBITIONS Determining a PCB Concentration for Purposes of Abandonment or Disposal of Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe.../Rinse Cleanup as Recommended by the Environmental Protection Agency PCB Spill Cleanup Policy,” dated...

  10. 2dx_automator: implementation of a semiautomatic high-throughput high-resolution cryo-electron crystallography pipeline.

    PubMed

    Scherer, Sebastian; Kowal, Julia; Chami, Mohamed; Dandey, Venkata; Arheit, Marcel; Ringler, Philippe; Stahlberg, Henning

    2014-05-01

    The introduction of direct electron detectors (DED) to cryo-electron microscopy has tremendously increased the signal-to-noise ratio (SNR) and quality of the recorded images. We discuss the optimal use of DEDs for cryo-electron crystallography, introduce a new automatic image processing pipeline, and demonstrate the vast improvement in the resolution achieved by the use of both together, especially for highly tilted samples. The new processing pipeline (now included in the software package 2dx) exploits the high SNR and frame readout frequency of DEDs to automatically correct for beam-induced sample movement, and reliably processes individual crystal images without human interaction as data are being acquired. A new graphical user interface (GUI) condenses all information required for quality assessment in one window, allowing the imaging conditions to be verified and adjusted during the data collection session. With this new pipeline an automatically generated unit cell projection map of each recorded 2D crystal is available less than 5 min after the image was recorded. The entire processing procedure yielded a three-dimensional reconstruction of the 2D-crystallized ion-channel membrane protein MloK1 with a much-improved resolution of 5Å in-plane and 7Å in the z-direction, within 2 days of data acquisition and simultaneous processing. The results obtained are superior to those delivered by conventional photographic film-based methodology of the same sample, and demonstrate the importance of drift-correction. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
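
The drift-correction step can be illustrated in one dimension: estimate the shift that best aligns each frame to the first, then sum the aligned frames. The following is a toy sketch only; real DED processing, as in the 2dx pipeline, works on 2D frames with subpixel cross-correlation:

```python
def estimate_shift(ref, frame, max_shift=5):
    """Return the integer shift s that best aligns frame to ref,
    by maximizing the overlap correlation sum(ref[i] * frame[i+s])."""
    best_s, best_c = 0, float("-inf")
    n = len(ref)
    for s in range(-max_shift, max_shift + 1):
        c = sum(ref[i] * frame[i + s]
                for i in range(n) if 0 <= i + s < n)
        if c > best_c:
            best_s, best_c = s, c
    return best_s

def align_and_sum(frames, max_shift=5):
    """Shift every frame onto the first one, then sum them, so the
    signal adds coherently instead of being blurred by drift."""
    ref = frames[0]
    total = list(ref)
    for frame in frames[1:]:
        s = estimate_shift(ref, frame, max_shift)
        for i in range(len(ref)):
            j = i + s
            if 0 <= j < len(frame):
                total[i] += frame[j]
    return total

ref   = [0, 0, 1, 5, 1, 0, 0, 0]     # reference frame (1D toy "image")
drift = [0, 0, 0, 0, 1, 5, 1, 0]     # same signal drifted right by 2
summed = align_and_sum([ref, drift]) # drift is undone before summing
```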

  11. Clostridium difficile Drug Pipeline: Challenges in Discovery and Development of New Agents

    PubMed Central

    2015-01-01

    In the past decade Clostridium difficile has become a bacterial pathogen of global significance. Epidemic strains have spread throughout hospitals, while community acquired infections and other sources ensure a constant inoculation of spores into hospitals. In response to the increasing medical burden, a new C. difficile antibiotic, fidaxomicin, was approved in 2011 for the treatment of C. difficile-associated diarrhea. Rudimentary fecal transplants are also being trialed as effective treatments. Despite these advances, therapies that are more effective against C. difficile spores and less damaging to the resident gastrointestinal microbiome and that reduce recurrent disease are still desperately needed. However, bringing a new treatment for C. difficile infection to market involves particular challenges. This review covers the current drug discovery pipeline, including both small molecule and biologic therapies, and highlights the challenges associated with in vitro and in vivo models of C. difficile infection for drug screening and lead optimization. PMID:25760275

  12. Crucial considerations for pipelines to validate circulating biomarkers for breast cancer.

    PubMed

    Ewaisha, Radwa; Gawryletz, Chelsea D; Anderson, Karen S

    2016-01-01

    Despite decades of progress in breast imaging, breast cancer remains the second most common cause of cancer mortality in women. The rapidly proliferative breast cancers that are associated with high relapse rates and mortality frequently present in younger women, in unscreened individuals, or in the intervals between screening mammography. Biomarkers exist for monitoring metastatic disease, such as CEA, CA27.29 and CA15-3, but there are no circulating biomarkers clinically available for early detection, prognosis, or monitoring for clinical relapse. There has been significant progress in the discovery of potential circulating biomarkers, including proteins, autoantibodies, nucleic acids, exosomes, and circulating tumor cells, but the vast majority of these biomarkers have not progressed beyond initial research discovery, and none have yet been approved for clinical use in early stage disease. Here, the authors review the crucial considerations of developing pipelines for the rapid evaluation of circulating biomarkers for breast cancer.

  13. Analysis of mammalian gene function through broad based phenotypic screens across a consortium of mouse clinics

    PubMed Central

    Adams, David J; Adams, Niels C; Adler, Thure; Aguilar-Pimentel, Antonio; Ali-Hadji, Dalila; Amann, Gregory; André, Philippe; Atkins, Sarah; Auburtin, Aurelie; Ayadi, Abdel; Becker, Julien; Becker, Lore; Bedu, Elodie; Bekeredjian, Raffi; Birling, Marie-Christine; Blake, Andrew; Bottomley, Joanna; Bowl, Mike; Brault, Véronique; Busch, Dirk H; Bussell, James N; Calzada-Wack, Julia; Cater, Heather; Champy, Marie-France; Charles, Philippe; Chevalier, Claire; Chiani, Francesco; Codner, Gemma F; Combe, Roy; Cox, Roger; Dalloneau, Emilie; Dierich, André; Di Fenza, Armida; Doe, Brendan; Duchon, Arnaud; Eickelberg, Oliver; Esapa, Chris T; El Fertak, Lahcen; Feigel, Tanja; Emelyanova, Irina; Estabel, Jeanne; Favor, Jack; Flenniken, Ann; Gambadoro, Alessia; Garrett, Lilian; Gates, Hilary; Gerdin, Anna-Karin; Gkoutos, George; Greenaway, Simon; Glasl, Lisa; Goetz, Patrice; Da Cruz, Isabelle Goncalves; Götz, Alexander; Graw, Jochen; Guimond, Alain; Hans, Wolfgang; Hicks, Geoff; Hölter, Sabine M; Höfler, Heinz; Hancock, John M; Hoehndorf, Robert; Hough, Tertius; Houghton, Richard; Hurt, Anja; Ivandic, Boris; Jacobs, Hughes; Jacquot, Sylvie; Jones, Nora; Karp, Natasha A; Katus, Hugo A; Kitchen, Sharon; Klein-Rodewald, Tanja; Klingenspor, Martin; Klopstock, Thomas; Lalanne, Valerie; Leblanc, Sophie; Lengger, Christoph; le Marchand, Elise; Ludwig, Tonia; Lux, Aline; McKerlie, Colin; Maier, Holger; Mandel, Jean-Louis; Marschall, Susan; Mark, Manuel; Melvin, David G; Meziane, Hamid; Micklich, Kateryna; Mittelhauser, Christophe; Monassier, Laurent; Moulaert, David; Muller, Stéphanie; Naton, Beatrix; Neff, Frauke; Nolan, Patrick M; Nutter, Lauryl MJ; Ollert, Markus; Pavlovic, Guillaume; Pellegata, Natalia S; Peter, Emilie; Petit-Demoulière, Benoit; Pickard, Amanda; Podrini, Christine; Potter, Paul; Pouilly, Laurent; Puk, Oliver; Richardson, David; Rousseau, Stephane; Quintanilla-Fend, Leticia; Quwailid, Mohamed M; Racz, Ildiko; Rathkolb, Birgit; Riet, Fabrice; Rossant, Janet; Roux, 
Michel; Rozman, Jan; Ryder, Ed; Salisbury, Jennifer; Santos, Luis; Schäble, Karl-Heinz; Schiller, Evelyn; Schrewe, Anja; Schulz, Holger; Steinkamp, Ralf; Simon, Michelle; Stewart, Michelle; Stöger, Claudia; Stöger, Tobias; Sun, Minxuan; Sunter, David; Teboul, Lydia; Tilly, Isabelle; Tocchini-Valentini, Glauco P; Tost, Monica; Treise, Irina; Vasseur, Laurent; Velot, Emilie; Vogt-Weisenhorn, Daniela; Wagner, Christelle; Walling, Alison; Weber, Bruno; Wendling, Olivia; Westerberg, Henrik; Willershäuser, Monja; Wolf, Eckhard; Wolter, Anne; Wood, Joe; Wurst, Wolfgang; Yildirim, Ali Önder; Zeh, Ramona; Zimmer, Andreas; Zimprich, Annemarie

    2015-01-01

    The function of the majority of genes in the mouse and human genomes remains unknown. The mouse ES cell knockout resource provides a basis for characterisation of relationships between gene and phenotype. The EUMODIC consortium developed and validated robust methodologies for broad-based phenotyping of knockouts through a pipeline comprising 20 disease-orientated platforms. We developed novel statistical methods for pipeline design and data analysis aimed at detecting reproducible phenotypes with high power. We acquired phenotype data from 449 mutant alleles, representing 320 unique genes, of which half had no prior functional annotation. We captured data from over 27,000 mice, finding that 83% of the mutant lines are phenodeviant, with 65% demonstrating pleiotropy. Surprisingly, we found significant differences in phenotype annotation according to zygosity. Novel phenotypes were uncovered for many genes with unknown function, providing a powerful basis for hypothesis generation and further investigation in diverse systems. PMID:26214591

  14. Finite element modeling simulation-assisted design of integrated microfluidic chips for heavy metal ion stripping analysis

    NASA Astrophysics Data System (ADS)

    Hong, Ying; Zou, Jianhua; Ge, Gang; Xiao, Wanyue; Gao, Ling; Shao, Jinjun; Dong, Xiaochen

    2017-10-01

    In this article, a transparent integrated microfluidic device composed of a 3D-printed thin-layer flow cell (3D-PTLFC) and an S-shaped screen-printed electrode (SPE) has been designed and fabricated for heavy metal ion stripping analysis. A finite element modeling (FEM) simulation is employed to optimize the shape of the electrode, the direction of the inlet pipeline, the thin-layer channel height and the sample flow rate to enhance the electro-enrichment efficiency for stripping analysis. The results demonstrate that the S-shaped SPE configuration matches the channel in the 3D-PTLFC perfectly for the anodic stripping behavior of the heavy metal ions. Under optimized conditions, a wide linear range of 1-80 µg l-1 is achieved for Pb2+ detection, with a detection limit of 0.3 µg l-1 for the microfluidic device. Thus, the integrated microfluidic device proves to be a promising approach for heavy metal ion stripping analysis with low cost and high performance.

  15. 40 CFR 761.240 - Scope and definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PROHIBITIONS Determining a PCB Concentration for Purposes of Abandonment or Disposal of Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe Samples § 761.240 Scope... determine its PCB surface concentration for abandonment-in-place or removal and disposal off-site in...

  16. In-solution hybridization for mammalian mitogenome enrichment: pros, cons and challenges associated with multiplexing degraded DNA.

    PubMed

    Hawkins, Melissa T R; Hofman, Courtney A; Callicrate, Taylor; McDonough, Molly M; Tsuchiya, Mirian T N; Gutiérrez, Eliécer E; Helgen, Kristofer M; Maldonado, Jesus E

    2016-09-01

    Here, we present a set of RNA-based probes for whole mitochondrial genome in-solution enrichment, targeting a diversity of mammalian mitogenomes. This probe set was designed from seven mammalian orders and tested to determine its utility for enriching degraded DNA. We generated 63 mitogenomes representing five orders and 22 genera of mammals that yielded varying coverage, ranging from 0 to >5400X. Based on a threshold of 70% mitogenome recovery and at least 10× average coverage, 32 individuals, or 51% of samples, were considered successful. The estimated sequence divergence of samples from the probe sequences used to construct the array ranged up to nearly 20%. Sample type was more predictive of mitogenome recovery than sample age. The proportion of reads from each individual in multiplexed enrichments was highly skewed, with each pool having one sample that yielded a majority of the reads. Recovery across each mitochondrial gene varied, with most samples exhibiting regions with gaps or ambiguous sites. We estimated the ability of the probes to capture mitogenomes from a diversity of mammalian taxa not included here by performing a clustering analysis of published sequences for 100 taxa representing most mammalian orders. Our study demonstrates that a general array can be cost- and time-effective when there is a need to screen a modest number of individuals from a variety of taxa. We also address the practical concerns for using such a tool with regard to pooling samples and generating high-quality mitogenomes, and detail a pipeline for removing chimeric molecules. © 2015 John Wiley & Sons Ltd.
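
The success criterion used above (at least 70% of the mitogenome recovered at at least 10× average coverage) is easy to express as a filter. A minimal sketch; the sample names and values are hypothetical:

```python
def mitogenome_success(samples, min_recovery=0.70, min_coverage=10.0):
    """Flag samples meeting the paper's success criteria:
    >= 70% of the mitogenome recovered at >= 10x average coverage.

    samples: dict name -> (fraction_recovered, avg_coverage)
    """
    return [name for name, (rec, cov) in sorted(samples.items())
            if rec >= min_recovery and cov >= min_coverage]

# Hypothetical enrichment results
samples = {
    "museum_skin_01": (0.92, 480.0),   # passes both thresholds
    "museum_skin_02": (0.65, 35.0),    # too little of the mitogenome recovered
    "bone_03":        (0.88, 4.2),     # average coverage too low
}
passing = mitogenome_success(samples)
```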

  17. Efficient high-throughput biological process characterization: Definitive screening design with the ambr250 bioreactor system.

    PubMed

    Tai, Mitchell; Ly, Amanda; Leung, Inne; Nayar, Gautam

    2015-01-01

    The burgeoning pipeline for new biologic drugs has increased the need for high-throughput process characterization to efficiently use process development resources. Breakthroughs in highly automated and parallelized upstream process development have led to technologies such as the 250-mL automated mini bioreactor (ambr250™) system. Furthermore, developments in modern design of experiments (DoE) have promoted the use of definitive screening design (DSD) as an efficient method to combine factor screening and characterization. Here we utilize the 24-bioreactor ambr250™ system with a 10-factor DSD to demonstrate a systematic experimental workflow to efficiently characterize an Escherichia coli (E. coli) fermentation process for recombinant protein production. The generated process model is further validated by laboratory-scale experiments and shows how the strategy is useful for quality by design (QbD) approaches to control strategies for late-stage characterization. © 2015 American Institute of Chemical Engineers.
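
The appeal of a DSD here is run economy: 2m + 1 runs for m three-level factors, so a 10-factor screen needs only 21 runs and fits in a 24-bioreactor ambr250 block. The sketch below builds only the characteristic run structure (fold-over pairs, each with one factor at its mid level, plus a centre run); the ±1 fill pattern is a placeholder, not a statistically optimal design, which would derive it from a conference matrix so main effects are unconfounded with two-factor interactions:

```python
def dsd_runs(m):
    """Run matrix of a definitive-screening-style design for m
    factors: m fold-over pairs (each pair has one factor at its
    mid level 0) plus one overall centre run -> 2*m + 1 runs.

    The +/-1 entries below are an arbitrary placeholder pattern;
    published DSDs use conference matrices here.
    """
    runs = []
    for i in range(m):
        row = [0 if j == i else (1 if (i + j) % 2 == 0 else -1)
               for j in range(m)]
        runs.append(row)
        runs.append([-x for x in row])   # fold-over partner of the run
    runs.append([0] * m)                 # centre run
    return runs

design = dsd_runs(10)   # 10 factors -> 21 runs, fits a 24-reactor block
```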

  18. Key aspects of the Novartis compound collection enhancement project for the compilation of a comprehensive chemogenomics drug discovery screening collection.

    PubMed

    Jacoby, Edgar; Schuffenhauer, Ansgar; Popov, Maxim; Azzaoui, Kamal; Havill, Benjamin; Schopfer, Ulrich; Engeloch, Caroline; Stanek, Jaroslav; Acklin, Pierre; Rigollier, Pascal; Stoll, Friederike; Koch, Guido; Meier, Peter; Orain, David; Giger, Rudolph; Hinrichs, Jürgen; Malagu, Karine; Zimmermann, Jürg; Roth, Hans-Joerg

    2005-01-01

    The NIBR (Novartis Institutes for BioMedical Research) compound collection enrichment and enhancement project integrates corporate internal combinatorial compound synthesis and external compound acquisition activities in order to build up a comprehensive screening collection for a modern drug discovery organization. The main purpose of the screening collection is to supply the Novartis drug discovery pipeline with hit-to-lead compounds for the current and future portfolio of drug discovery programs, and to provide tool compounds for the chemogenomics investigation of novel biological pathways and circuits. As such, it integrates designed focused and diversity-based compound sets from the synthetic and natural paradigms, able to cope with druggable and currently deemed undruggable targets and molecular interaction modes. Herein, we summarize the scientific challenges faced and the key approaches taken at NIBR to match the chemical and biological spaces, together with new trends published in the literature.

  19. Selection and optimization of hits from a high-throughput phenotypic screen against Trypanosoma cruzi.

    PubMed

    Keenan, Martine; Alexander, Paul W; Chaplin, Jason H; Abbott, Michael J; Diao, Hugo; Wang, Zhisen; Best, Wayne M; Perez, Catherine J; Cornwall, Scott M J; Keatley, Sarah K; Thompson, R C Andrew; Charman, Susan A; White, Karen L; Ryan, Eileen; Chen, Gong; Ioset, Jean-Robert; von Geldern, Thomas W; Chatelain, Eric

    2013-10-01

    Inhibitors of Trypanosoma cruzi with novel mechanisms of action are urgently required to diversify the current clinical and preclinical pipelines. Increasing the number and diversity of hits available for assessment at the beginning of the discovery process will help to achieve this aim. We report the evaluation of multiple hits generated from a high-throughput screen to identify inhibitors of T. cruzi and from these studies the discovery of two novel series currently in lead optimization. Lead compounds from these series potently and selectively inhibit growth of T. cruzi in vitro and the most advanced compound is orally active in a subchronic mouse model of T. cruzi infection. High-throughput screening of novel compound collections has an important role to play in diversifying the trypanosomatid drug discovery portfolio. A new T. cruzi inhibitor series with good drug-like properties and promising in vivo efficacy has been identified through this process.

  20. Rigorous Screening Technology for Identifying Suitable CO2 Storage Sites II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George J. Koperna Jr.; Vello A. Kuuskraa; David E. Riestenberg

    2009-06-01

    This report serves as the final technical report and users manual for the 'Rigorous Screening Technology for Identifying Suitable CO2 Storage Sites II' SBIR project. Advanced Resources International has developed a screening tool by which users can technically screen, assess the storage capacity and quantify the costs of CO2 storage in four types of CO2 storage reservoirs. These include CO2-enhanced oil recovery reservoirs, depleted oil and gas fields (non-enhanced oil recovery candidates), deep coal seams that are amenable to CO2-enhanced methane recovery, and saline reservoirs. The screening function assesses whether the reservoir could likely serve as a safe, long-term CO2 storage reservoir. The storage capacity assessment uses rigorous reservoir simulation models to determine the timing, ultimate storage capacity, and potential for enhanced hydrocarbon recovery. Finally, the economic assessment function determines both the field-level and pipeline (transportation) costs for CO2 sequestration in a given reservoir. The screening tool was peer reviewed at an Electric Power Research Institute (EPRI) technical meeting in March 2009. A number of useful observations and recommendations emerged from the workshop on the costs of CO2 transport and storage that could be readily incorporated into a commercial version of the screening tool in a Phase III SBIR.

  1. Valve For Extracting Samples From A Process Stream

    NASA Technical Reports Server (NTRS)

    Callahan, Dave

    1995-01-01

    Valve for extracting samples from process stream includes cylindrical body bolted to pipe that contains stream. Opening in valve body matched and sealed against opening in pipe. Used to sample process streams in variety of facilities, including cement plants, plants that manufacture and reprocess plastics, oil refineries, and pipelines.

  2. Comparison study on qualitative and quantitative risk assessment methods for urban natural gas pipeline network.

    PubMed

    Han, Z Y; Weng, W G

    2011-05-15

    In this paper, qualitative and quantitative risk assessment methods for urban natural gas pipeline networks are proposed. The qualitative method is comprised of an index system, which includes a causation index, an inherent risk index, a consequence index and their corresponding weights. The quantitative method consists of a probability assessment, a consequence analysis and a risk evaluation. The outcome of the qualitative method is a qualitative risk value, and for the quantitative method the outcomes are individual risk and social risk. In comparison with previous research, the qualitative method proposed in this paper is particularly suitable for urban natural gas pipeline networks, and the quantitative method takes different accident consequences into consideration, such as toxic gas diffusion, jet flame, fireball combustion and UVCE. Two sample urban natural gas pipeline networks are used to demonstrate the two methods. Both methods can be applied in practice; the choice between them depends on the basic data available for the gas pipelines and the precision requirements of the risk assessment. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
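
The individual-risk side of a quantitative method like this follows the standard QRA form: a sum, over accident scenarios, of scenario frequency times the conditional probability of fatality at the location. A minimal sketch; the frequencies and probabilities below are hypothetical, and the paper's consequence models (toxic gas diffusion, jet flame, fireball, UVCE) are far more detailed:

```python
def individual_risk(scenarios):
    """Location-specific individual risk: sum over accident
    scenarios of (frequency per year) * (conditional probability
    of fatality at that location given the scenario)."""
    return sum(freq * p_fatal for freq, p_fatal in scenarios)

# Hypothetical scenarios at one point near a pipeline segment:
# (frequency / yr, conditional fatality probability)
scenarios = [
    (1e-4, 0.10),   # jet flame
    (5e-5, 0.30),   # fireball
    (2e-5, 0.05),   # toxic gas diffusion
]
risk = individual_risk(scenarios)   # about 2.6e-05 per year
```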

  3. Pipeline corridors through wetlands - impacts on plant communities: Bayou Grand Cane, De Soto Parish, Louisiana. Topical report, August 1991--July 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shem, L.M.; Zimmerman, R.E.; Hayes, D.

    The goal of the Gas Research Institute Wetland Corridors Program is to document impacts of existing pipelines on the wetlands they traverse. To accomplish this goal, 12 existing wetland crossings were surveyed. These sites varied in elapsed time since pipeline construction, wetland type, pipeline installation techniques, and right-of-way (ROW) management practices. This report presents the results of a survey conducted over the period of August 12-13, 1991, at the Bayou Grand Cane crossing in De Soto Parish, Louisiana, where a pipeline constructed three years prior to the survey crosses the bayou through mature bottomland hardwoods. The site was not seeded or fertilized after construction activities. At the time of sampling, a dense herb stratum (composed of mostly native species) covered the 20-m-wide ROW, except within drainage channels. As a result of the creation of the ROW, new habitat was created, plant diversity increased, and forest habitat became fragmented. The ROW must be maintained at an early stage of succession to allow access to the pipeline; however, impacts to the wetland were minimized by decreasing the width of the ROW to 20 m and recreating the drainage channels across the ROW. The canopy trees on the ROW's edge shaded part of the ROW, which helped to minimize the effects of the ROW.

  4. [Character accentuations as a criterion for psychological risks in the professional activity of the builders of main gas pipelines in the conditions of arctic].

    PubMed

    Korneeva, Ia A; Simonova, N N

    2015-01-01

    The article is devoted to the study of character accentuations as a criterion for psychological risks in the professional activity of builders of main gas pipelines in the conditions of the Arctic. The aim was to study the severity of character accentuations in rotation-employed builders of main gas pipelines, as conditioned by their professional activities, as well as the personal resources for overcoming these destructions. The study involved 70 rotation-employed builders of trunk pipelines working in the Tyumen Region (shift duration: 52 days), aged from 23 to 59 (mean age 34.9 ± 8.1) years, with work experience from 0.5 to 14 years (mean 4.42 ± 3.1 years). Methods of the study: questionnaires, psychological testing, participant observation; one-sample Student's t-test, multiple regression analysis, and incremental analysis. The work revealed differences in the expression of character accentuations between builders of trunk pipelines with less and with more than five years of rotation work experience. It was determined that builders of main gas pipelines working on rotation in the Arctic with more pronounced character accentuations mainly use the psychological defenses of compensation, substitution and denial, and have an average level of flexibility as a regulatory process.

  5. Inexpensive and Highly Reproducible Cloud-Based Variant Calling of 2,535 Human Genomes

    PubMed Central

    Shringarpure, Suyash S.; Carroll, Andrew; De La Vega, Francisco M.; Bustamante, Carlos D.

    2015-01-01

    Population scale sequencing of whole human genomes is becoming economically feasible; however, data management and analysis remains a formidable challenge for many research groups. Large sequencing studies, like the 1000 Genomes Project, have improved our understanding of human demography and the effect of rare genetic variation in disease. Variant calling on datasets of hundreds or thousands of genomes is time-consuming, expensive, and not easily reproducible given the myriad components of a variant calling pipeline. Here, we describe a cloud-based pipeline for joint variant calling in large samples using the Real Time Genomics population caller. We deployed the population caller on the Amazon cloud with the DNAnexus platform in order to achieve low-cost variant calling. Using our pipeline, we were able to identify 68.3 million variants in 2,535 samples from Phase 3 of the 1000 Genomes Project. By performing the variant calling in a parallel manner, the data was processed within 5 days at a compute cost of $7.33 per sample (a total cost of $18,590 for completed jobs and $21,805 for all jobs). Analysis of cost dependence and running time on the data size suggests that, given near linear scalability, cloud computing can be a cheap and efficient platform for analyzing even larger sequencing studies in the future. PMID:26110529
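
The near-linear cost scaling reported above is simply per-sample cost times cohort size, plus any spend on failed or restarted jobs. A back-of-the-envelope sketch; the $3,215 failed-job figure is the difference implied by the abstract's two totals, used here for illustration:

```python
def cloud_cost(n_samples, cost_per_sample, failed_job_cost=0.0):
    """Compute cost of a population-scale variant-calling run:
    cost of completed jobs, plus wasted spend on failed jobs."""
    completed = n_samples * cost_per_sample
    return completed, completed + failed_job_cost

# 2,535 genomes at $7.33/sample; $21,805 - $18,590 = $3,215 lost to
# failed jobs per the abstract.  ($7.33 is itself rounded, so the
# product lands near, not exactly at, the quoted $18,590.)
completed, total = cloud_cost(2535, 7.33, failed_job_cost=3215.0)
```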

  6. DEVELOPMENT OF AN ENVIRONMENTALLY BENIGN MICROBIAL INHIBITOR TO CONTROL INTERNAL PIPELINE CORROSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Robert Paterek; Gemma Husmillo

    The overall program objective is to develop and evaluate environmentally benign agents or products that are effective in the prevention, inhibition, and mitigation of microbially influenced corrosion (MIC) on the internal surfaces of metallic natural gas pipelines. The goal is one or more environmentally benign, a.k.a. ''green,'' products that can be applied to maintain the structure and dependability of the natural gas infrastructure. Capsicum sp. extracts and pure compounds were screened for their antimicrobial activity against MIC-causing bacteria. Studies on the ability of these compounds to dissociate biofilm from the substratum were conducted using microtiter plate assays. Tests using laboratory-scale pipeline simulators continued. Preliminary results showed that the natural extracts possess strong antimicrobial activity, comparable to or even better than that of the pure compounds tested against strains of sulfate reducers. Their minimum inhibitory concentrations have been determined. It was also found that they possess bactericidal properties at minimal concentrations. Biofilm dissociation activity, as assessed by microtiter plate assays, demonstrated varying degrees of difference between the treated and untreated groups, with the extracts outperforming the pure compounds. This indicates the possible benefits that could be obtained from these natural products. Confirmatory experiments are underway.

  7. A Data Analysis Pipeline Accounting for Artifacts in Tox21 Quantitative High-Throughput Screening Assays

    PubMed Central

    Hsieh, Jui-Hua; Sedykh, Alexander; Huang, Ruili; Xia, Menghang; Tice, Raymond R.

    2015-01-01

    A main goal of the U.S. Tox21 program is to profile a 10K-compound library for activity against a panel of stress-related and nuclear receptor signaling pathway assays using a quantitative high-throughput screening (qHTS) approach. However, assay artifacts, including nonreproducible signals and assay interference (e.g., autofluorescence), complicate compound activity interpretation. To address these issues, we have developed a data analysis pipeline that includes an updated signal noise–filtering/curation protocol and an assay interference flagging system. To better characterize various types of signals, we adopted a weighted version of the area under the curve (wAUC) to quantify the amount of activity across the tested concentration range in combination with the assay-dependent point-of-departure (POD) concentration. Based on the 32 Tox21 qHTS assays analyzed, we demonstrate that signal profiling using wAUC affords the best reproducibility (Pearson's r = 0.91) in comparison with the POD (0.82) only or the AC50 (i.e., half-maximal activity concentration, 0.81). Among the activity artifacts characterized, cytotoxicity is the major confounding factor; on average, about 8% of Tox21 compounds are affected, whereas autofluorescence affects less than 0.5%. To facilitate data evaluation, we implemented two graphical user interface applications, allowing users to rapidly evaluate the in vitro activity of Tox21 compounds. PMID:25904095
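
The intuition behind an area-based activity summary is that activity integrated across the whole tested concentration range is more stable than any single-point estimate such as the AC50. The sketch below uses a plain trapezoidal AUC over log10 concentration as a simplified stand-in for the paper's weighted wAUC (the weighting scheme itself is not reproduced here, and the concentrations and responses are hypothetical):

```python
import math

def response_auc(concs, responses):
    """Trapezoidal area under a concentration-response curve,
    integrated over log10(concentration) -- a simplified stand-in
    for the weighted AUC (wAUC) used in the Tox21 pipeline."""
    x = [math.log10(c) for c in concs]
    auc = 0.0
    for i in range(len(x) - 1):
        auc += 0.5 * (responses[i] + responses[i + 1]) * (x[i + 1] - x[i])
    return auc

concs    = [0.01, 0.1, 1.0, 10.0, 100.0]   # tested concentrations (uM)
active   = [0, 5, 40, 80, 95]              # clear concentration response
inactive = [0, 1, 0, 2, 1]                 # noise around baseline
```

An active compound accumulates a large area while assay noise contributes almost none, which is why the profile is reproducible across reruns.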

  8. Identification of New Molecular Entities (NMEs) as Potential Leads against Tuberculosis from Open Source Compound Repository.

    PubMed

    Kotapalli, Sudha Sravanti; Nallam, Sri Satya Anila; Nadella, Lavanya; Banerjee, Tanmay; Rode, Haridas B; Mainkar, Prathama S; Ummanni, Ramesh

    2015-01-01

    The purpose of this study was to provide a number of diverse and promising early-lead compounds that will feed into the drug discovery pipeline for developing new antitubercular agents. The results from the phenotypic screening of the open-source compound library against Mycobacterium smegmatis and Mycobacterium bovis (BCG), with hit validation against M. tuberculosis (H37Rv), have identified novel potent hit compounds. To determine their druglikeness, a systematic analysis of the physicochemical properties of the hit compounds was performed using cheminformatics tools. The hit molecules were analysed by clustering based on their chemical fingerprints and structural similarity to determine their chemical diversity. The hit compound library was also filtered for druglikeness based on physicochemical descriptors following Lipinski filters. The robust filtration of hits, followed by secondary screening against BCG and H37Rv and cytotoxicity evaluation, identified 12 compounds with potential against H37Rv (MIC range 0.4 to 12.5 μM). Furthermore, in cytotoxicity assays, the 12 compounds displayed low cytotoxicity against liver and lung cells, providing a high therapeutic index (> 50). To avoid any variations in activity due to the route of chemical synthesis, the hit compounds were resynthesized independently and confirmed for their potential against H37Rv. Taken together, the hits reported here provide copious potential starting points for the generation of new leads and eventually add to the drug discovery pipeline against tuberculosis.
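
The Lipinski filtering mentioned above checks four descriptor cutoffs. A minimal sketch; the descriptor values are hypothetical, and real pipelines compute them with a cheminformatics toolkit rather than by hand:

```python
def passes_lipinski(mw, logp, h_donors, h_acceptors):
    """Lipinski's rule of five: molecular weight <= 500, logP <= 5,
    H-bond donors <= 5, H-bond acceptors <= 10.  One violation is
    often tolerated in practice; this strict version allows none."""
    return (mw <= 500 and logp <= 5
            and h_donors <= 5 and h_acceptors <= 10)

# Hypothetical hit descriptors: (MW, cLogP, HBD, HBA)
hits = {
    "hit_01": (342.4, 2.1, 1, 4),    # drug-like
    "hit_02": (612.7, 6.3, 3, 9),    # too large and too lipophilic
}
druglike = [h for h, d in sorted(hits.items()) if passes_lipinski(*d)]
```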

  9. MetAMOS: a modular and open source metagenomic assembly and analysis pipeline

    PubMed Central

    2013-01-01

    We describe MetAMOS, an open source and modular metagenomic assembly and analysis pipeline. MetAMOS represents an important step towards fully automated metagenomic analysis, starting with next-generation sequencing reads and producing genomic scaffolds, open-reading frames and taxonomic or functional annotations. MetAMOS can aid in reducing assembly errors, commonly encountered when assembling metagenomic samples, and improves taxonomic assignment accuracy while also reducing computational cost. MetAMOS can be downloaded from: https://github.com/treangen/MetAMOS. PMID:23320958

  10. Identifying Novel Molecular Structures for Advanced Melanoma by Ligand-Based Virtual Screening

    PubMed Central

    Wang, Zhao; Lu, Yan; Seibel, William; Miller, Duane D.; Li, Wei

    2009-01-01

    We recently discovered a new class of thiazole analogs that are highly potent against melanoma cells. To expand the structure-activity relationship study and to explore potential new molecular scaffolds, we performed extensive ligand-based virtual screening against a compound library containing 342,910 small molecules. Two different virtual screening approaches were carried out using the structure of our lead molecule: 1) a connectivity-based search using SciTegic Pipeline Pilot from Accelrys and 2) a molecular shape similarity search using Schrodinger software. Using a test compound library, both approaches ranked similar compounds very high and dissimilar compounds very low, thus validating our screening methods. Structures identified from these searches were analyzed, and selected compounds were tested in vitro to assess their activity against melanoma cancer cell lines. Several molecules showed good anticancer activity. While none of the identified compounds showed better activity than our lead compound, they provided important insight into structural modifications for our lead compound and also provided novel platforms on which we can optimize new classes of anticancer compounds. One of the newly synthesized analogs based on this virtual screening has improved potency and selectivity against melanoma. PMID:19445498
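Connectivity-based similarity ranking of the kind described above is commonly scored with a Tanimoto coefficient over molecular fingerprints. This is a generic sketch, not the Pipeline Pilot or Schrodinger workflow the abstract used; fingerprints are modelled as plain Python sets of hashed substructure features, and the compound names are made up.

```python
# Hedged sketch: ranking a library by fingerprint similarity to a lead
# molecule. Real pipelines compute fingerprints with tools such as RDKit
# or Pipeline Pilot; here they are toy feature sets.

def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two set-based binary fingerprints."""
    if not fp_a and not fp_b:
        return 0.0
    shared = len(fp_a & fp_b)
    return shared / (len(fp_a) + len(fp_b) - shared)

def rank_library(lead_fp, library):
    """Sort (name, fingerprint) pairs by similarity to the lead, best first."""
    return sorted(library, key=lambda item: tanimoto(lead_fp, item[1]),
                  reverse=True)

lead = {1, 2, 3, 4, 5}
library = [("cmpd_A", {1, 2, 3, 4, 6}), ("cmpd_B", {7, 8, 9})]
print(rank_library(lead, library)[0][0])  # cmpd_A ranks first
```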

  11. Optimization of a metatranscriptomic approach to study the lignocellulolytic potential of the higher termite gut microbiome.

    PubMed

    Marynowska, Martyna; Goux, Xavier; Sillam-Dussès, David; Rouland-Lefèvre, Corinne; Roisin, Yves; Delfosse, Philippe; Calusinska, Magdalena

    2017-09-01

    Thanks to specific adaptations developed over millions of years, the efficiency of lignin, cellulose and hemicellulose decomposition in the higher termite symbiotic system exceeds that of many other lignocellulose-utilizing environments. In particular, examination of its symbiotic microbes should reveal interesting carbohydrate-active enzymes, which are of primary interest to industry. Previous metatranscriptomic reports (high-throughput mRNA sequencing) highlight the high representation and overexpression of cellulose- and hemicellulose-degrading genes in termite hindgut digestomes, indicating the potential of this technology in the search for new enzymes. Nevertheless, several factors associated with the material sampling and library preparation steps make metatranscriptomic studies of termite gut prokaryotic symbionts challenging. In this study, we first examined the influence of the sampling strategy, including the whole termite gut and luminal fluid, on the diversity and the metatranscriptomic profiles of the higher termite gut symbiotic bacteria. Secondly, we evaluated different commercially available kits, combined in two library preparative pipelines, for the best bacterial mRNA enrichment strategy. We showed that the sampling strategy did not significantly impact the generated results, both in terms of the representation of the microbes and their transcriptomic profiles. Nevertheless, collecting luminal fluid reduces the co-amplification of unwanted RNA species of host origin. Furthermore, for the four studied higher termite species, the library preparative pipeline employing the Ribo-Zero Gold rRNA Removal Kit "Epidemiology" in combination with the Poly(A) Purist MAG kit resulted in more efficient rRNA and poly-A mRNA depletion (up to 98.44% rRNA removed) than the pipeline utilizing the MICROBExpress and MICROBEnrich kits.
High correlation of both Ribo-Zero and MICROBExpress depleted gene expression profiles with total non-depleted RNA-seq data was shown for all studied samples, indicating no systematic skewing by the studied pipelines. We have extensively evaluated the impact of the sampling strategy and library preparation steps on the metatranscriptomic profiles of the higher termite gut symbiotic bacteria. The presented methodological approach has great potential to enhance metatranscriptomic studies of the higher termite intestinal flora and to unravel novel carbohydrate-active enzymes.

  12. Exploring Galaxy Formation and Evolution via Structural Decomposition

    NASA Astrophysics Data System (ADS)

    Kelvin, Lee; Driver, Simon; Robotham, Aaron; Hill, David; Cameron, Ewan

    2010-06-01

    The Galaxy And Mass Assembly (GAMA) structural decomposition pipeline (GAMA-SIGMA, Structural Investigation of Galaxies via Model Analysis) will provide multi-component information for a sample of ~12,000 galaxies across 9 bands ranging from near-UV to near-IR. This will allow the relationship between structural properties and broadband, optical-to-near-IR, spectral energy distributions of bulge, bar, and disk components to be explored, revealing clues as to the history of baryonic mass assembly within a hierarchical clustering framework. Data are initially taken from the SDSS & UKIDSS-LAS surveys to test the robustness of our automated decomposition pipeline. These will eventually be replaced with data from the forthcoming higher-resolution VST & VISTA surveys, expanding the sample to ~30,000 galaxies.

  13. Revelations of an overt water contamination.

    PubMed

    Singh, Gurpreet; Kaushik, S K; Mukherji, S

    2017-07-01

    Contaminated water sources are a major cause of waterborne diseases of public health importance. Usually, contamination is suspected after an increase in patient load. Two health teams investigated the episode: the first conducted a sanitary survey, and the second undertook a water safety and morbidity survey. On-site testing was carried out from the source to the consumer end. An investigation was also undertaken to identify factors that had masked the situation. Prevention and control measures included super-chlorination, provision of alternative drinking water sources, an awareness campaign, laying of a new water pipeline bypassing the place of contamination, repair of sewers, flushing and cleaning of water pipelines, and repeated water sampling and testing. Multiple sources of drinking water supply were detected. Water samples from the consumer end showed 18 coliforms per 100 ml. A sewer cross-connection with active leakage into the water pipeline was found, and this was confirmed by earth excavation. The water safety and morbidity survey found the majority of households receiving contaminated water supply. The survey found no significant difference between households receiving contaminated water supply and those receiving clean water. However, the average proportion of household members with episodes of loose motions, abdominal pain, vomiting, fever, and eye conditions was significantly higher among households receiving contaminated water. The present study documents the detailed methodology of investigation and the control measures to be instituted on receipt of contaminated water samples. Effective surveillance mechanisms for drinking water supplies, such as routine testing of water samples, can identify water contamination at an early stage and prevent an impending outbreak.

  14. Screening and validation of EXTraS data products

    NASA Astrophysics Data System (ADS)

    Carpano, Stefania; Haberl, F.; De Luca, A.; Tiengo, A.; Israel, G.; Rodriguez, G.; Belfiore, A.; Rosen, S.; Read, A.; Wilms, J.; Kreikenbohm, A.; Law-Green, D.

    2015-09-01

    The EXTraS project (Exploring the X-ray Transient and variable Sky) aims to fully explore the serendipitous content of the XMM-Newton EPIC database in the time domain. The project is funded within the EU/FP7-Cooperation Space framework and is carried out by a collaboration including INAF (Italy), IUSS (Italy), CNR/IMATI (Italy), University of Leicester (UK), MPE (Germany) and ECAP (Germany). Its tasks consist of characterising aperiodic variability for all 3XMM sources, searching for short-term periodic variability in hundreds of thousands of sources, detecting new transient sources that are missed by standard source detection and hence do not belong to the 3XMM catalogue, searching for long-term variability by measuring fluxes or upper limits for both pointed and slew observations, and finally performing multiwavelength characterisation and classification. Screening and validation of the different products is essential in order to reject flawed results generated by the automatic pipelines. We present here the screening tool we developed in the form of a Graphical User Interface and our plans for a systematic screening of the different catalogues.

  15. An investigation on mechanical properties of steel fibre reinforced for underwater welded joint

    NASA Astrophysics Data System (ADS)

    Navin, K.; Zakaria, M. S.; Zairi, S.

    2017-09-01

    Underwater pipelines are constantly exposed to water and have a high tendency to corrode, especially at the welded joints. This research investigates the use of glass fibre as a steel-fibre coating on the welded joint to determine its effectiveness in preventing corrosion of the joint. The number of coating layers is varied to determine the optimal number of coats for the pipeline. Some samples were left without immersion in salt water, while others were immersed in salt water with the same salinity as sea water. The material samples were prepared in a dog-bone shape so they could be used in a Universal Tensile Machine (UTM). The prepared material was left immersed for the recommended time and then tested in the Universal Tensile Machine. The results were analysed to determine whether breakage occurred at the welded joint or elsewhere, and to identify the suitable number of coating layers to be used.

  16. Improved discovery of NEON data and samples through vocabularies, workflows, and web tools

    NASA Astrophysics Data System (ADS)

    Laney, C. M.; Elmendorf, S.; Flagg, C.; Harris, T.; Lunch, C. K.; Gulbransen, T.

    2017-12-01

    The National Ecological Observatory Network (NEON) is a continental-scale ecological observation facility sponsored by the National Science Foundation and operated by Battelle. NEON supports research on the impacts of invasive species, land use change, and environmental change on natural resources and ecosystems by gathering and disseminating a full suite of observational, instrumented, and airborne datasets from field sites across the U.S. NEON also collects thousands of samples from soil, water, and organisms every year, and partners with numerous institutions to analyze and archive samples. We have developed numerous new technologies to support processing and discovery of this highly diverse collection of data. These technologies include applications for data collection and sample management, processing pipelines specific to each collection system (field observations, installed sensors, and airborne instruments), and publication pipelines. NEON data and metadata are discoverable and downloadable via both a public API and data portal. We solicit continued engagement and advice from the informatics and environmental research communities, particularly in the areas of data versioning, usability, and visualization.

  17. TPH and PAH concentrations in the subsoil of polyduct segments, oil pipeline pumping stations, and right-of-way pipelines from Central Mexico

    NASA Astrophysics Data System (ADS)

    Iturbe, Rosario; Castro, Alejandrina; Perez, Guillermina; Flores, Carlos; Torres, Luis G.

    2008-10-01

    For the year 1996, 366 incidents related to clandestine poaching of oil products were reported in Mexico, 159 in 1997, and 240 in 1998. For the year 2003 (the most recently reported figure), there were 136 events. Petroleos Mexicanos (PEMEX), very concerned with the environmental agenda, has developed programs oriented to diminishing contamination levels in all of its oil facilities. This work was aimed at characterizing zones around polyduct segments, pipelines, pumping stations, and right-of-way pipelines located in the center of Mexico. The TPH-contaminated sites were, in decreasing order of contamination, polyduct km 39 + 150 > polyduct km 25 + 020 > Zoquital > Tepetitlan > Catalina > Venta Prieta > Ceiba. Most of the sampled points showed the presence of more than one of the 16 PAHs considered priority pollutants by the USEPA. Except for point TEPE 2A, where no PAHs were detected, all the sampled points showed values from low to medium concentrations; however, the values found at the sites did not exceed the limits set by Mexican or American legislation. The place with the largest contaminated area corresponded to the polyduct km 39 + 150, with 130 m2 and 260 m3 to be treated. The least contaminated area was that around the JUAN 4 point at Juandho station, with 20 m2 and 22 m3 of contaminated soil. The total area to be treated is about 230 m2 and 497 m3.

  18. Cost-effectiveness of one versus two sample faecal immunochemical testing for colorectal cancer screening.

    PubMed

    Goede, S Lucas; van Roon, Aafke H C; Reijerink, Jacqueline C I Y; van Vuuren, Anneke J; Lansdorp-Vogelaar, Iris; Habbema, J Dik F; Kuipers, Ernst J; van Leerdam, Monique E; van Ballegooijen, Marjolein

    2013-05-01

    The sensitivity and specificity of a single faecal immunochemical test (FIT) are limited. The performance of FIT screening can be improved by increasing the screening frequency or by providing more than one sample in each screening round. This study aimed to evaluate whether two-sample FIT screening is cost-effective compared with one-sample FIT. The MISCAN-colon microsimulation model was used to estimate the costs and benefits of strategies with either one- or two-sample FIT screening. The FIT cut-off level was varied between 50 and 200 ng haemoglobin/ml, and the screening schedule was varied with respect to age range and interval. In addition, different definitions of positivity for the two-sample FIT were considered: at least one positive sample, two positive samples, or the mean of both samples being positive. Within an exemplary screening strategy, biennial FIT from age 55 to 75 years, one-sample FIT provided 76.0-97.0 life-years gained (LYG) per 1000 individuals, at a cost of € 259,000-264,000 (the range reflects different FIT cut-off levels). Two-sample FIT screening with at least one sample positive provided 7.3-12.4 additional LYG compared with one-sample FIT, at an extra cost of € 50,000-59,000. However, when all screening intervals and age ranges were considered, intensifying screening with one-sample FIT provided equal or more LYG at lower costs compared with two-sample FIT. If attendance does not differ between strategies, it is recommended to increase the number of screening rounds with one-sample FIT screening before considering increasing the number of FIT samples provided per screening round.
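The three two-sample positivity definitions compared in this study reduce to simple decision rules over the two haemoglobin readings. A minimal sketch, with hypothetical sample values (the cut-off is in ng haemoglobin/ml, matching the 50-200 range studied):

```python
# Sketch of the three two-sample FIT positivity definitions: at least one
# sample positive, both samples positive, or the mean of both positive.

def fit_positive(sample1, sample2, cutoff, rule):
    """Classify a participant given two FIT readings (ng haemoglobin/ml)."""
    if rule == "at_least_one":
        return sample1 >= cutoff or sample2 >= cutoff
    if rule == "both":
        return sample1 >= cutoff and sample2 >= cutoff
    if rule == "mean":
        return (sample1 + sample2) / 2 >= cutoff
    raise ValueError(f"unknown rule: {rule}")

# A hypothetical participant with readings of 40 and 120 at a 50 ng/ml cut-off:
print(fit_positive(40, 120, 50, "at_least_one"))  # True
print(fit_positive(40, 120, 50, "both"))          # False
print(fit_positive(40, 120, 50, "mean"))          # True (mean = 80)
```

The "at least one" rule is the most sensitive (and the one whose costs the abstract reports); "both" is the most specific.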

  19. UGbS-Flex, a novel bioinformatics pipeline for imputation-free SNP discovery in polyploids without a reference genome: finger millet as a case study.

    PubMed

    Qi, Peng; Gimode, Davis; Saha, Dipnarayan; Schröder, Stephan; Chakraborty, Debkanta; Wang, Xuewen; Dida, Mathews M; Malmberg, Russell L; Devos, Katrien M

    2018-06-15

    Research on orphan crops is often hindered by a lack of genomic resources. With the advent of affordable sequencing technologies, genotyping an entire genome or, for large-genome species, a representative fraction of the genome has become feasible for any crop. Nevertheless, most genotyping-by-sequencing (GBS) methods are geared towards obtaining large numbers of markers at low sequence depth, which excludes their application in heterozygous individuals. Furthermore, bioinformatics pipelines often lack the flexibility to deal with paired-end reads or to be applied in polyploid species. UGbS-Flex combines publicly available software with in-house Python and Perl scripts to efficiently call SNPs from genotyping-by-sequencing reads irrespective of the species' ploidy level, breeding system and availability of a reference genome. Noteworthy features of the UGbS-Flex pipeline are the ability to use paired-end reads as input, an effective approach to clustering reads across samples with enhanced outputs, and maximization of SNP calling. We demonstrate use of the pipeline for the identification of several thousand high-confidence SNPs with high representation across samples in an F3-derived F2 population in allotetraploid finger millet. Robust high-density genetic maps were constructed using the time-tested mapping program MAPMAKER, which we upgraded to run efficiently and in a semi-automated manner in a Windows Command Prompt environment. We exploited comparative GBS with one of the diploid ancestors of finger millet to assign linkage groups to subgenomes and demonstrate the presence of chromosomal rearrangements. The paper combines GBS protocol modifications, a novel flexible GBS analysis pipeline (UGbS-Flex), recommendations to maximize SNP identification, updated genetic mapping software, and the first high-density maps of finger millet.
The modules used in the UGbS-Flex pipeline and for genetic mapping were applied to finger millet, an allotetraploid selfing species without a reference genome, as a case study. The UGbS-Flex modules, which can be run independently, are easily transferable to species with other breeding systems or ploidy levels.

  20. Continuous Turbidity Monitoring in the Indian Creek Watershed, Tazewell County, Virginia, 2006-08

    USGS Publications Warehouse

    Moyer, Douglas; Hyer, Kenneth

    2009-01-01

    Thousands of miles of natural gas pipelines are installed annually in the United States. These pipelines commonly cross streams, rivers, and other water bodies during pipeline construction. A major concern associated with pipelines crossing water bodies is increased sediment loading and the subsequent impact to the ecology of the aquatic system. Several studies have investigated the techniques used to install pipelines across surface-water bodies and their effect on downstream suspended-sediment concentrations. These studies frequently employ the evaluation of suspended-sediment or turbidity data that were collected using discrete sample-collection methods. No studies, however, have evaluated the utility of continuous turbidity monitoring for identifying real-time sediment input and providing a robust dataset for the evaluation of long-term changes in suspended-sediment concentration as it relates to a pipeline crossing. In 2006, the U.S. Geological Survey, in cooperation with East Tennessee Natural Gas and the U.S. Fish and Wildlife Service, began a study to monitor the effects of construction of the Jewell Ridge Lateral natural gas pipeline on turbidity conditions below pipeline crossings of Indian Creek and an unnamed tributary to Indian Creek, in Tazewell County, Virginia. The potential for increased sediment loading to Indian Creek is of major concern for watershed managers because Indian Creek is listed as one of Virginia's Threatened and Endangered Species Waters and contains critical habitat for two freshwater mussel species, purple bean (Villosa perpurpurea) and rough rabbitsfoot (Quadrula cylindrical strigillata). Additionally, Indian Creek contains the last known reproducing population of the tan riffleshell (Epioblasma florentina walkeri). Therefore, the objectives of the U.S. 
Geological Survey monitoring effort were to (1) develop a continuous turbidity monitoring network that attempted to measure real-time changes in suspended sediment (using turbidity as a surrogate) downstream from the pipeline crossings, and (2) provide continuous turbidity data that enable the development of a real-time turbidity-input warning system and assessment of long-term changes in turbidity conditions. Water-quality conditions were assessed using continuous water-quality monitors deployed upstream and downstream from the pipeline crossings in Indian Creek and the unnamed tributary. These paired upstream and downstream monitors were outfitted with turbidity, pH (for Indian Creek only), specific-conductance, and water-temperature sensors. Water-quality data were collected continuously (every 15 minutes) during three phases of the pipeline construction: pre-construction, during construction, and post-construction. Continuous turbidity data were evaluated at various time steps to determine whether the construction of the pipeline crossings had an effect on downstream suspended-sediment conditions in Indian Creek and the unnamed tributary. These continuous turbidity data were analyzed in real time with the aid of a turbidity-input warning system. A warning occurred when turbidity values downstream from the pipeline were 6 Formazin Nephelometric Units or 15 percent (depending on the observed range) greater than turbidity upstream from the pipeline crossing. Statistical analyses also were performed on monthly and phase-of-construction turbidity data to determine if the pipeline crossing served as a long-term source of sediment. Results of this intensive water-quality monitoring effort indicate that values of turbidity in Indian Creek increased significantly between the upstream and downstream water-quality monitors during the construction of the Jewell Ridge pipeline. 
The magnitude of the significant turbidity increase, however, was small (less than 2 Formazin Nephelometric Units). Patterns in the continuous turbidity data indicate that the actual pipeline crossing of Indian Creek had little influence on downstream water quality.
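The turbidity-input warning rule described above (a warning when the downstream reading exceeds the upstream reading by 6 Formazin Nephelometric Units or by 15 percent, depending on the observed range) can be sketched as a simple threshold check. The report does not specify here how the choice between the two criteria was made, so this sketch assumes the larger of the two thresholds applies; that choice is an assumption, not the USGS implementation.

```python
# Hedged sketch of the paired upstream/downstream turbidity warning rule.
# Assumption: of the +6 FNU and +15 percent criteria, the larger threshold
# governs (the report says the choice depended on the observed range).

def turbidity_warning(upstream_fnu, downstream_fnu):
    """Return True if the downstream reading should trigger a warning."""
    absolute_threshold = upstream_fnu + 6.0    # +6 FNU criterion
    relative_threshold = upstream_fnu * 1.15   # +15 percent criterion
    return downstream_fnu > max(absolute_threshold, relative_threshold)

# Low-turbidity conditions: the +6 FNU criterion governs.
print(turbidity_warning(10.0, 18.0))   # True  (18 > 16)
# High-turbidity conditions: the +15 percent criterion governs.
print(turbidity_warning(100.0, 110.0)) # False (110 < 115)
```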

  1. Regulatory reform for natural gas pipelines: The effect on pipeline and distribution company share prices

    NASA Astrophysics Data System (ADS)

    Jurman, Elisabeth Antonie

    1997-08-01

    The natural gas shortages in the 1970s focused considerable attention on the federal government's role in altering energy consumption. For the natural gas industry these shortages eventually led to the passage of the Natural Gas Policy Act (NGPA) in 1978 as part of the National Energy Plan. A series of events in the 1980s brought about the restructuring of interstate natural gas pipelines, which were transformed by regulators and the courts from monopolies into competitive entities. This transformation also changed their relationship with their downstream customers, the LDCs, who no longer had to deal with pipelines as the only merchants of gas. Regulatory reform made it possible for LDCs to buy directly from producers, using the pipelines only for delivery of their purchases. This study tests for the existence of monopoly rents by analyzing the daily returns of natural gas pipeline and utility industry stock price data from 1982 to 1990, a period of regulatory reform for the natural gas industry. The study's main objective is to investigate the degree of empirical support for claims that regulatory reforms increase profits in the affected industry, as the normative theory of regulation expects, or decrease profits, as advocates of the positive theory of regulation believe. I also test Norton's theory of risk, which predicts that systematic risk will increase for firms undergoing deregulation. Based on a sample of twelve natural gas pipelines and 25 utilities, an event study approach was employed to measure the impact of regulatory event announcements on daily natural gas pipeline or utility industry stock price data using a market model regression equation.
The results of this study provide some evidence that regulatory reforms did not increase the profits of pipeline firms, confirming the expectations of those who claim that excess profits result from regulation and will disappear, once that protection is removed and the firms are operating in competitive markets. The study's empirical findings support the claims of Norton's risk theory that systematic risk is higher in unregulated firms.

  2. Kepler Planet Detection Metrics: Per-Target Flux-Level Transit Injection Tests of TPS for Data Release 25

    NASA Technical Reports Server (NTRS)

    Burke, Christopher J.; Catanzarite, Joseph

    2017-01-01

    Quantifying the ability of a transiting planet survey to recover transit signals has commonly been accomplished through Monte-Carlo injection of transit signals into the observed data and subsequent running of the signal search algorithm (Gilliland et al., 2000; Weldrake et al., 2005; Burke et al., 2006). In order to characterize the performance of the Kepler pipeline (Twicken et al., 2016; Jenkins et al., 2017) on a sample of over 200,000 stars, two complementary injection and recovery tests are utilized: 1) injection of a single transit signal per target into the image or pixel-level data, hereafter referred to as pixel-level transit injection (PLTI), with subsequent processing through the Photometric Analysis (PA), Presearch Data Conditioning (PDC), Transiting Planet Search (TPS), and Data Validation (DV) modules of the Kepler pipeline; the PLTI quantification of the Kepler pipeline's completeness has been described previously by Christiansen et al. (2015, 2016), and the completeness of the final SOC 9.3 Kepler pipeline acting on the Data Release 25 (DR25) light curves is described by Christiansen (2017); and 2) injection of multiple transit signals per target into the normalized flux time series data with a subsequent transit search using a streamlined version of the Transiting Planet Search (TPS) module. This second test, hereafter referred to as flux-level transit injection (FLTI), is the subject of this document. By running a heavily modified version of TPS, FLTI is able to perform many injections on selected targets and determine in some detail which injected signals are recoverable.
Significant numerical efficiency gains are enabled by precomputing the data conditioning steps at the onset of TPS and limiting the search parameter space (i.e., orbital period, transit duration, and ephemeris zero-point) to a small region around each injected transit signal. The PLTI test has the advantage that it follows transit signals through all processing steps of the Kepler pipeline, and the recovered signals can be further classified as planet candidates or false positives in the exact same manner as detections from the nominal (i.e., observed) pipeline run (Twicken et al., 2016; Thompson et al., in preparation). To date, the PLTI test has been the standard means of measuring pipeline completeness averaged over large samples of targets (Christiansen et al., 2015, 2016; Christiansen, 2017). However, since the PLTI test uses only one injection per target, it does not elucidate individual-target variations in pipeline completeness due to differences in stellar properties or astrophysical variability. Thus, we developed the FLTI test to provide a numerically efficient way to fully map individual targets and explore the performance of the pipeline in greater detail. The FLTI tests thereby allow a thorough validation of the pipeline completeness models (such as the window function (Burke and Catanzarite, 2017a), detection efficiency (Burke and Catanzarite, 2017b), etc.) across the spectrum of Kepler targets (i.e., various astrophysical phenomena and differences in instrumental noise). Tests during development of the FLTI capability revealed that there are significant target-to-target variations in the detection efficiency.
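The per-target detection efficiency that an injection-and-recovery test like FLTI measures can be tabulated by binning injected signals by strength and reporting the recovered fraction per bin. The sketch below is a generic illustration of that bookkeeping, not Kepler pipeline code; the bin edges and signal-to-noise values are made up.

```python
# Generic sketch of tabulating detection efficiency from an
# injection-and-recovery experiment: injections are binned by expected
# signal-to-noise ratio, and the recovered fraction is reported per bin.

def detection_efficiency(injections, bin_edges):
    """injections: list of (snr, recovered) pairs.
    Returns the recovered fraction per SNR bin (None for empty bins)."""
    counts = [0] * (len(bin_edges) - 1)
    hits = [0] * (len(bin_edges) - 1)
    for snr, recovered in injections:
        for i in range(len(bin_edges) - 1):
            if bin_edges[i] <= snr < bin_edges[i + 1]:
                counts[i] += 1
                hits[i] += recovered  # bool counts as 0 or 1
                break
    return [h / c if c else None for h, c in zip(hits, counts)]

# Five hypothetical injections: weak signals missed, strong ones recovered.
injections = [(5, False), (7, False), (9, True), (11, True), (13, True)]
print(detection_efficiency(injections, [4, 8, 12, 16]))  # [0.0, 1.0, 1.0]
```

Per-target curves of this kind are what reveal the target-to-target variations in detection efficiency noted above.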

  3. Disentangling methodological and biological sources of gene tree discordance on Oryza (Poaceae) chromosome 3.

    PubMed

    Zwickl, Derrick J; Stein, Joshua C; Wing, Rod A; Ware, Doreen; Sanderson, Michael J

    2014-09-01

    We describe new methods for characterizing gene tree discordance in phylogenomic data sets, which screen for deviations from neutral expectations, summarize variation in statistical support among gene trees, and allow comparison of the patterns of discordance induced by various analysis choices. Using an exceptionally complete set of genome sequences for the short arm of chromosome 3 in Oryza (rice) species, we applied these methods to identify the causes and consequences of differing patterns of discordance in the sets of gene trees inferred using a panel of 20 distinct analysis pipelines. We found that discordance patterns were strongly affected by aspects of data selection, alignment, and alignment masking. Unusual patterns of discordance evident when using certain pipelines were reduced or eliminated by using alternative pipelines, suggesting that they were the product of methodological biases rather than evolutionary processes. In some cases, once such biases were eliminated, evolutionary processes such as introgression could be implicated. Additionally, patterns of gene tree discordance had significant downstream impacts on species tree inference. For example, inference from supermatrices was positively misleading when pipelines that led to biased gene trees were used. Several results may generalize to other data sets: we found that gene tree and species tree inference gave more reasonable results when intron sequence was included during sequence alignment and tree inference, the alignment software PRANK was used, and detectable "block-shift" alignment artifacts were removed. We discuss our findings in the context of well-established relationships in Oryza and continuing controversies regarding the domestication history of O. sativa. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. PMAnalyzer: a new web interface for bacterial growth curve analysis.

    PubMed

    Cuevas, Daniel A; Edwards, Robert A

    2017-06-15

    Bacterial growth curves are essential representations for characterizing bacterial metabolism within a variety of media compositions. Using high-throughput spectrophotometers capable of processing tens of 96-well plates, quantitative phenotypic information can be easily integrated into the current data structures that describe a bacterial organism. The PMAnalyzer pipeline performs a growth curve analysis to parameterize the unique features occurring within microtiter wells containing specific growth media sources. We have expanded the pipeline capabilities and provide a user-friendly, online implementation of this automated pipeline. PMAnalyzer version 2.0 provides fast automatic growth curve parameter analysis, growth identification, and high-resolution figures of sample-replicate growth curves, along with several statistical analyses. PMAnalyzer v2.0 can be found at https://edwards.sdsu.edu/pmanalyzer/ . Source code for the pipeline can be found on GitHub at https://github.com/dacuevas/PMAnalyzer . Source code for the online implementation can be found on GitHub at https://github.com/dacuevas/PMAnalyzerWeb . dcuevas08@gmail.com. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
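Growth curve parameterization of the kind PMAnalyzer performs typically extracts quantities such as the asymptote (maximum density) and the maximum specific growth rate from each well's optical-density time series. The following is a toy illustration of those two parameters, not PMAnalyzer's actual algorithm, and the OD readings are invented.

```python
# Illustrative sketch: estimating two common growth-curve parameters from
# an optical-density (OD) time series. The asymptote is taken as the
# maximum OD, and the maximum specific growth rate as the steepest slope
# of ln(OD) between consecutive readings.
import math

def growth_parameters(times, ods):
    """Return (asymptote, max specific growth rate in 1/h) from OD readings."""
    asymptote = max(ods)
    max_mu = 0.0
    for (t0, od0), (t1, od1) in zip(zip(times, ods), zip(times[1:], ods[1:])):
        if od0 > 0 and od1 > 0:
            mu = (math.log(od1) - math.log(od0)) / (t1 - t0)
            max_mu = max(max_mu, mu)
    return asymptote, max_mu

times = [0, 2, 4, 6, 8]               # hours (hypothetical sampling)
ods = [0.05, 0.08, 0.25, 0.60, 0.70]  # hypothetical OD readings
asymptote, mu = growth_parameters(times, ods)
```

Production pipelines usually fit a parametric model (e.g. logistic or Gompertz) rather than using finite differences, which are sensitive to measurement noise.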

  5. Identification of pathogen genomic variants through an integrated pipeline

    PubMed Central

    2014-01-01

    Background Whole-genome sequencing represents a powerful experimental tool for pathogen research. We present methods for the analysis of small eukaryotic genomes, including a streamlined system (called Platypus) for finding single nucleotide and copy number variants as well as recombination events. Results We have validated our pipeline using four sets of Plasmodium falciparum drug resistant data containing 26 clones from 3D7 and Dd2 background strains, identifying an average of 11 single nucleotide variants per clone. We also identify 8 copy number variants with contributions to resistance, and report for the first time that all analyzed amplification events are in tandem. Conclusions The Platypus pipeline provides malaria researchers with a powerful tool to analyze short read sequencing data. It provides an accurate way to detect SNVs using known software packages, and a novel methodology for detection of CNVs, though it does not currently support detection of small indels. We have validated that the pipeline detects known SNVs in a variety of samples while filtering out spurious data. We bundle the methods into a freely available package. PMID:24589256

  6. SAMSA2: a standalone metatranscriptome analysis pipeline.

    PubMed

    Westreich, Samuel T; Treiber, Michelle L; Mills, David A; Korf, Ian; Lemay, Danielle G

    2018-05-21

    Complex microbial communities are an area of growing interest in biology. Metatranscriptomics allows researchers to quantify microbial gene expression in an environmental sample via high-throughput sequencing. Metatranscriptomic experiments are computationally intensive because the experiments generate a large volume of sequence data and each sequence must be compared with reference sequences from thousands of organisms. SAMSA2 is an upgrade to the original Simple Annotation of Metatranscriptomes by Sequence Analysis (SAMSA) pipeline that has been redesigned for standalone use on a supercomputing cluster. SAMSA2 is faster due to the use of the DIAMOND aligner, and more flexible and reproducible because it uses local databases. SAMSA2 is available with detailed documentation, and example input and output files along with examples of master scripts for full pipeline execution. SAMSA2 is a rapid and efficient metatranscriptome pipeline for analyzing large RNA-seq datasets in a supercomputing cluster environment. SAMSA2 provides simplified output that can be examined directly or used for further analyses, and its reference databases may be upgraded, altered or customized to fit the needs of any experiment.

  7. A cloud-compatible bioinformatics pipeline for ultrarapid pathogen identification from next-generation sequencing of clinical samples

    PubMed Central

    Naccache, Samia N.; Federman, Scot; Veeraraghavan, Narayanan; Zaharia, Matei; Lee, Deanna; Samayoa, Erik; Bouquet, Jerome; Greninger, Alexander L.; Luk, Ka-Cheung; Enge, Barryett; Wadford, Debra A.; Messenger, Sharon L.; Genrich, Gillian L.; Pellegrino, Kristen; Grard, Gilda; Leroy, Eric; Schneider, Bradley S.; Fair, Joseph N.; Martínez, Miguel A.; Isa, Pavel; Crump, John A.; DeRisi, Joseph L.; Sittler, Taylor; Hackett, John; Miller, Steve; Chiu, Charles Y.

    2014-01-01

    Unbiased next-generation sequencing (NGS) approaches enable comprehensive pathogen detection in the clinical microbiology laboratory and have numerous applications for public health surveillance, outbreak investigation, and the diagnosis of infectious diseases. However, practical deployment of the technology is hindered by the bioinformatics challenge of analyzing results accurately and in a clinically relevant timeframe. Here we describe SURPI (“sequence-based ultrarapid pathogen identification”), a computational pipeline for pathogen identification from complex metagenomic NGS data generated from clinical samples, and demonstrate use of the pipeline in the analysis of 237 clinical samples comprising more than 1.1 billion sequences. Deployable on both cloud-based and standalone servers, SURPI leverages two state-of-the-art aligners for accelerated analyses, SNAP and RAPSearch, which are as accurate as existing bioinformatics tools but orders of magnitude faster in performance. In fast mode, SURPI detects viruses and bacteria by scanning data sets of 7–500 million reads in 11 min to 5 h, while in comprehensive mode, all known microorganisms are identified, followed by de novo assembly and protein homology searches for divergent viruses in 50 min to 16 h. SURPI has also directly contributed to real-time microbial diagnosis in acutely ill patients, underscoring its potential key role in the development of unbiased NGS-based clinical assays in infectious diseases that demand rapid turnaround times. PMID:24899342

  8. From Concept to Commerce: Developing a Successful Fungal Endophyte Inoculant for Agricultural Crops.

    PubMed

    Murphy, Brian R; Doohan, Fiona M; Hodkinson, Trevor R

    2018-02-11

    The development of endophyte inoculants for agricultural crops has been bedevilled by the twin problems of a lack of reliability and consistency, with a consequent lack of belief among end users in the efficacy of such treatments. We have developed a successful research pipeline for the production of a reliable, consistent and environmentally targeted fungal endophyte seed-delivered inoculant for barley cultivars. Our approach was developed de novo from an initial concept to source candidate endophyte inoculants from a wild relative of barley, Hordeum murinum (wall barley). A careful screening and selection procedure and extensive controlled environment testing of fungal endophyte strains, followed by multi-year field trials, has resulted in the validation of an endophyte consortium suitable for barley crops grown on relatively dry sites. Our approach can be adapted for any crop or environment, provided that the set of first principles we have developed is followed. Here, we report how we developed the successful pipeline for the production of an economically viable fungal endophyte inoculant for barley cultivars.

  9. Analysis of mammalian gene function through broad-based phenotypic screens across a consortium of mouse clinics.

    PubMed

    de Angelis, Martin Hrabě; Nicholson, George; Selloum, Mohammed; White, Jacqui; Morgan, Hugh; Ramirez-Solis, Ramiro; Sorg, Tania; Wells, Sara; Fuchs, Helmut; Fray, Martin; Adams, David J; Adams, Niels C; Adler, Thure; Aguilar-Pimentel, Antonio; Ali-Hadji, Dalila; Amann, Gregory; André, Philippe; Atkins, Sarah; Auburtin, Aurelie; Ayadi, Abdel; Becker, Julien; Becker, Lore; Bedu, Elodie; Bekeredjian, Raffi; Birling, Marie-Christine; Blake, Andrew; Bottomley, Joanna; Bowl, Mike; Brault, Véronique; Busch, Dirk H; Bussell, James N; Calzada-Wack, Julia; Cater, Heather; Champy, Marie-France; Charles, Philippe; Chevalier, Claire; Chiani, Francesco; Codner, Gemma F; Combe, Roy; Cox, Roger; Dalloneau, Emilie; Dierich, André; Di Fenza, Armida; Doe, Brendan; Duchon, Arnaud; Eickelberg, Oliver; Esapa, Chris T; El Fertak, Lahcen; Feigel, Tanja; Emelyanova, Irina; Estabel, Jeanne; Favor, Jack; Flenniken, Ann; Gambadoro, Alessia; Garrett, Lilian; Gates, Hilary; Gerdin, Anna-Karin; Gkoutos, George; Greenaway, Simon; Glasl, Lisa; Goetz, Patrice; Da Cruz, Isabelle Goncalves; Götz, Alexander; Graw, Jochen; Guimond, Alain; Hans, Wolfgang; Hicks, Geoff; Hölter, Sabine M; Höfler, Heinz; Hancock, John M; Hoehndorf, Robert; Hough, Tertius; Houghton, Richard; Hurt, Anja; Ivandic, Boris; Jacobs, Hughes; Jacquot, Sylvie; Jones, Nora; Karp, Natasha A; Katus, Hugo A; Kitchen, Sharon; Klein-Rodewald, Tanja; Klingenspor, Martin; Klopstock, Thomas; Lalanne, Valerie; Leblanc, Sophie; Lengger, Christoph; le Marchand, Elise; Ludwig, Tonia; Lux, Aline; McKerlie, Colin; Maier, Holger; Mandel, Jean-Louis; Marschall, Susan; Mark, Manuel; Melvin, David G; Meziane, Hamid; Micklich, Kateryna; Mittelhauser, Christophe; Monassier, Laurent; Moulaert, David; Muller, Stéphanie; Naton, Beatrix; Neff, Frauke; Nolan, Patrick M; Nutter, Lauryl Mj; Ollert, Markus; Pavlovic, Guillaume; Pellegata, Natalia S; Peter, Emilie; Petit-Demoulière, Benoit; Pickard, Amanda; Podrini, Christine; Potter, Paul; Pouilly, Laurent; 
Puk, Oliver; Richardson, David; Rousseau, Stephane; Quintanilla-Fend, Leticia; Quwailid, Mohamed M; Racz, Ildiko; Rathkolb, Birgit; Riet, Fabrice; Rossant, Janet; Roux, Michel; Rozman, Jan; Ryder, Ed; Salisbury, Jennifer; Santos, Luis; Schäble, Karl-Heinz; Schiller, Evelyn; Schrewe, Anja; Schulz, Holger; Steinkamp, Ralf; Simon, Michelle; Stewart, Michelle; Stöger, Claudia; Stöger, Tobias; Sun, Minxuan; Sunter, David; Teboul, Lydia; Tilly, Isabelle; Tocchini-Valentini, Glauco P; Tost, Monica; Treise, Irina; Vasseur, Laurent; Velot, Emilie; Vogt-Weisenhorn, Daniela; Wagner, Christelle; Walling, Alison; Weber, Bruno; Wendling, Olivia; Westerberg, Henrik; Willershäuser, Monja; Wolf, Eckhard; Wolter, Anne; Wood, Joe; Wurst, Wolfgang; Yildirim, Ali Önder; Zeh, Ramona; Zimmer, Andreas; Zimprich, Annemarie; Holmes, Chris; Steel, Karen P; Herault, Yann; Gailus-Durner, Valérie; Mallon, Ann-Marie; Brown, Steve Dm

    2015-09-01

    The function of the majority of genes in the mouse and human genomes remains unknown. The mouse embryonic stem cell knockout resource provides a basis for the characterization of relationships between genes and phenotypes. The EUMODIC consortium developed and validated robust methodologies for the broad-based phenotyping of knockouts through a pipeline comprising 20 disease-oriented platforms. We developed new statistical methods for pipeline design and data analysis aimed at detecting reproducible phenotypes with high power. We acquired phenotype data from 449 mutant alleles, representing 320 unique genes, of which half had no previous functional annotation. We captured data from over 27,000 mice, finding that 83% of the mutant lines are phenodeviant, with 65% demonstrating pleiotropy. Surprisingly, we found significant differences in phenotype annotation according to zygosity. New phenotypes were uncovered for many genes with previously unknown function, providing a powerful basis for hypothesis generation and further investigation in diverse systems.

  10. High-throughput characterization for solar fuels materials discovery

    NASA Astrophysics Data System (ADS)

    Mitrovic, Slobodan; Becerra, Natalie; Cornell, Earl; Guevarra, Dan; Haber, Joel; Jin, Jian; Jones, Ryan; Kan, Kevin; Marcin, Martin; Newhouse, Paul; Soedarmadji, Edwin; Suram, Santosh; Xiang, Chengxiang; Gregoire, John; High-Throughput Experimentation Team

    2014-03-01

    In this talk I will present the status of the High-Throughput Experimentation (HTE) project of the Joint Center for Artificial Photosynthesis (JCAP). JCAP is an Energy Innovation Hub of the U.S. Department of Energy with a mandate to deliver a solar fuel generator based on an integrated photoelectrochemical cell (PEC). However, efficient and commercially viable catalysts or light absorbers for the PEC do not yet exist. The mission of HTE is to accelerate discovery through combinatorial synthesis and rapid screening of material properties. The HTE pipeline also features high-throughput material characterization using x-ray diffraction and x-ray photoemission spectroscopy (XPS). I will describe the currently operating pipeline, focusing on our combinatorial XPS efforts to build the largest free database of spectra from mixed-metal oxides, nitrides, sulfides and alloys. This work was performed at the Joint Center for Artificial Photosynthesis, a DOE Energy Innovation Hub, supported through the Office of Science of the U.S. Department of Energy under Award No. DE-SC0004993.

  11. The Calibration Reference Data System

    NASA Astrophysics Data System (ADS)

    Greenfield, P.; Miller, T.

    2016-07-01

    We describe a software architecture and implementation for using rules to determine which calibration files are appropriate for calibrating a given observation. This new system, the Calibration Reference Data System (CRDS), replaces what had been previously used for the Hubble Space Telescope (HST) calibration pipelines, the Calibration Database System (CDBS). CRDS will be used for the James Webb Space Telescope (JWST) calibration pipelines, and is currently being used for HST calibration pipelines. CRDS can be easily generalized for use in similar applications that need a rules-based system for selecting the appropriate item for a given dataset; we give some examples of such generalizations that will likely be used for JWST. The core functionality of the Calibration Reference Data System is available under an Open Source license. CRDS is briefly contrasted with a sampling of other similar systems used at other observatories.
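
    The rules-based selection that CRDS performs can be pictured as matching an observation's header keywords against ordered selection criteria. The following is a minimal generic sketch of that idea, with hypothetical keyword and file names; CRDS's actual rule language (its rmap files) is considerably richer.

```python
def select_reference(rules, header):
    """Return the first reference file whose selection criteria all
    match the observation header. Rules are tried in order, so more
    specific rules should precede more general fallbacks."""
    for criteria, ref_file in rules:
        if all(header.get(key) == value for key, value in criteria.items()):
            return ref_file
    raise LookupError("no matching reference file")

# Hypothetical rules: a filter-specific flat field, then a default.
rules = [
    ({"instrument": "WFC3", "filter": "F606W"}, "wfc3_f606w_flat.fits"),
    ({"instrument": "WFC3"},                    "wfc3_default_flat.fits"),
]
print(select_reference(rules, {"instrument": "WFC3", "filter": "F606W"}))
# wfc3_f606w_flat.fits
```

    Because rule order encodes specificity, a rule with fewer criteria acts as a default, which is one way a rules-based system generalizes across datasets without per-dataset bookkeeping.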

  12. Development and Validation of 2D Difference Intensity Analysis for Chemical Library Screening by Protein-Detected NMR Spectroscopy.

    PubMed

    Egner, John M; Jensen, Davin R; Olp, Michael D; Kennedy, Nolan W; Volkman, Brian F; Peterson, Francis C; Smith, Brian C; Hill, R Blake

    2018-03-02

    An academic chemical screening approach was developed by using 2D protein-detected NMR, and a 352-chemical fragment library was screened against three different protein targets. The approach was optimized against two protein targets with known ligands: CXCL12 and BRD4. Principal component analysis reliably identified compounds that induced nonspecific NMR crosspeak broadening but did not unambiguously identify ligands with specific affinity (hits). For improved hit detection, a novel scoring metric, difference intensity analysis (DIA), was devised that sums all positive and negative intensities from 2D difference spectra. Applying DIA quickly discriminated potential ligands from compounds inducing nonspecific NMR crosspeak broadening and other nonspecific effects. Subsequent NMR titrations validated chemotypes important for binding to CXCL12 and BRD4. A novel target, mitochondrial fission protein Fis1, was screened, and six hits were identified by using DIA. Screening these diverse protein targets identified quinones and catechols that induced nonspecific NMR crosspeak broadening, hampering NMR analyses, but are currently not computationally identified as pan-assay interference compounds. The results established a streamlined screening workflow that can easily be scaled and adapted as part of a larger screening pipeline to identify fragment hits and assess relative binding affinities in the range of 0.3-1.6 mM. DIA could prove useful in library screening and other applications in which NMR chemical shift perturbations are measured. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
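
    The core of the DIA metric as summarized above (summing all positive and negative intensities of a 2D difference spectrum) can be sketched in a few lines. This is an illustrative reimplementation from the abstract's description, not the authors' code, and the exact normalization used in the paper may differ.

```python
def dia_score(reference, with_compound):
    """Difference intensity analysis (DIA) sketch: subtract the
    reference 2D spectrum from the spectrum acquired with compound,
    then sum the magnitudes of all positive and negative intensities
    in the difference spectrum."""
    score = 0.0
    for ref_row, cmp_row in zip(reference, with_compound):
        for r, c in zip(ref_row, cmp_row):
            score += abs(c - r)
    return score

# Toy 3x3 "spectra": a specific binder shifts one crosspeak, which
# shows up as paired lost/gained intensity in the difference spectrum.
reference = [[0.0, 1.0, 0.0],
             [0.0, 0.0, 0.0],
             [0.0, 0.0, 0.0]]
shifted   = [[0.0, 0.0, 1.0],
             [0.0, 0.0, 0.0],
             [0.0, 0.0, 0.0]]
print(dia_score(reference, shifted))  # 2.0: one peak lost, one gained
```

    Summing magnitudes makes the score sensitive both to peaks that move (shift perturbation) and to peaks that broaden away, which is why the metric can flag specific binders and nonspecific broadeners differently when combined with inspection of the difference pattern.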

  13. Irrigation waters and pipe-based biofilms as sources for antibiotic-resistant bacteria.

    PubMed

    Blaustein, Ryan A; Shelton, Daniel R; Van Kessel, Jo Ann S; Karns, Jeffrey S; Stocker, Matthew D; Pachepsky, Yakov A

    2016-01-01

    The presence of antibiotic-resistant bacteria in environmental surface waters has gained recent attention. Wastewater and drinking water distribution systems are known to disseminate antibiotic-resistant bacteria, with the biofilms that form on the inner-surfaces of the pipeline as a hot spot for proliferation and gene exchange. Pipe-based irrigation systems that utilize surface waters may contribute to the dissemination of antibiotic-resistant bacteria in a similar manner. We conducted irrigation events at a perennial stream on a weekly basis for 1 month, and the concentrations of total heterotrophic bacteria, total coliforms, and fecal coliforms, as well as the concentrations of these bacterial groups that were resistant to ampicillin and tetracycline, were monitored at the intake water. Prior to each of the latter three events, residual pipe water was sampled and 6-in. sections of pipeline (coupons) were detached from the system, and biofilm from the inner-wall was removed and analyzed for total protein content and the above bacteria. Isolates of biofilm-associated bacteria were screened for resistance to a panel of seven antibiotics, representing five antibiotic classes. All of the monitored bacteria grew substantially in the residual water between irrigation events, and the biomass of the biofilm steadily increased from week to week. The percentages of biofilm-associated isolates that were resistant to antibiotics on the panel sometimes increased between events. Multiple-drug resistance was observed for all bacterial groups, most often for fecal coliforms, and the distributions of the numbers of antibiotics that the total coliforms and fecal coliforms were resistant to were subject to change from week to week. Results from this study highlight irrigation waters as a potential source for antibiotic-resistant bacteria, which can subsequently become incorporated into and proliferate within irrigation pipe-based biofilms.

  14. Combining functional and structural genomics to sample the essential Burkholderia structome.

    PubMed

    Baugh, Loren; Gallagher, Larry A; Patrapuvich, Rapatbhorn; Clifton, Matthew C; Gardberg, Anna S; Edwards, Thomas E; Armour, Brianna; Begley, Darren W; Dieterich, Shellie H; Dranow, David M; Abendroth, Jan; Fairman, James W; Fox, David; Staker, Bart L; Phan, Isabelle; Gillespie, Angela; Choi, Ryan; Nakazawa-Hewitt, Steve; Nguyen, Mary Trang; Napuli, Alberto; Barrett, Lynn; Buchko, Garry W; Stacy, Robin; Myler, Peter J; Stewart, Lance J; Manoil, Colin; Van Voorhis, Wesley C

    2013-01-01

    The genus Burkholderia includes pathogenic gram-negative bacteria that cause melioidosis, glanders, and pulmonary infections of patients with cancer and cystic fibrosis. Drug resistance has made development of new antimicrobials critical. Many approaches to discovering new antimicrobials, such as structure-based drug design and whole cell phenotypic screens followed by lead refinement, require high-resolution structures of proteins essential to the pathogen. We experimentally identified 406 putative essential genes in B. thailandensis, a low-virulence species phylogenetically similar to B. pseudomallei, the causative agent of melioidosis, using saturation-level transposon mutagenesis and next-generation sequencing (Tn-seq). We selected 315 protein products of these genes based on structure-determination criteria, such as excluding very large and/or integral membrane proteins, and entered them into the Seattle Structural Genomics Center for Infectious Disease (SSGCID) structure determination pipeline. To maximize structural coverage of these targets, we applied an "ortholog rescue" strategy for those producing insoluble or difficult to crystallize proteins, resulting in the addition of 387 orthologs (or paralogs) from seven other Burkholderia species into the SSGCID pipeline. This structural genomics approach yielded structures from 31 putative essential targets from B. thailandensis, and 25 orthologs from other Burkholderia species, yielding an overall structural coverage for 49 of the 406 essential gene families, with a total of 88 depositions into the Protein Data Bank. Of these, 25 proteins have properties of a potential antimicrobial drug target, i.e., no close human homolog, part of an essential metabolic pathway, and a deep binding pocket. We describe the structures of several potential drug targets in detail.
This collection of structures, solubility and experimental essentiality data provides a resource for development of drugs against infections and diseases caused by Burkholderia. All expression clones and proteins created in this study are freely available by request.

  15. MetaMap: An atlas of metatranscriptomic reads in human disease-related RNA-seq data.

    PubMed

    Simon, L M; Karg, S; Westermann, A J; Engel, M; Elbehery, A H A; Hense, B; Heinig, M; Deng, L; Theis, F J

    2018-06-12

    With the advent of the age of big data in bioinformatics, large volumes of data and high performance computing power enable researchers to perform re-analyses of publicly available datasets at an unprecedented scale. Ever more studies implicate the microbiome in both normal human physiology and a wide range of diseases. RNA sequencing technology (RNA-seq) is commonly used to infer global eukaryotic gene expression patterns under defined conditions, including human disease-related contexts, but its generic nature also enables the detection of microbial and viral transcripts. We developed a bioinformatic pipeline to screen existing human RNA-seq datasets for the presence of microbial and viral reads by re-inspecting the non-human-mapping read fraction. We validated this approach by recapitulating outcomes from 6 independent controlled infection experiments of cell line models and comparison with an alternative metatranscriptomic mapping strategy. We then applied the pipeline to close to 150 terabytes of publicly available raw RNA-seq data from >17,000 samples from >400 studies relevant to human disease using state-of-the-art high performance computing systems. The resulting data of this large-scale re-analysis are made available in the presented MetaMap resource. Our results demonstrate that common human RNA-seq data, including those archived in public repositories, might contain valuable information to correlate microbial and viral detection patterns with diverse diseases. The presented MetaMap database thus provides a rich resource for hypothesis generation towards the role of the microbiome in human disease. Additionally, code to process new datasets and perform statistical analyses is made available at https://github.com/theislab/MetaMap.
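
    The screening idea (re-inspect the non-human-mapping read fraction and tally microbial taxon assignments) reduces to a simple filter-and-count over alignment records. The sketch below is illustrative only; the field names and record layout are hypothetical and do not reflect MetaMap's actual data schema or aligner output formats.

```python
def metatranscriptomic_screen(alignments):
    """Keep only reads that failed to map to the human reference and
    received a microbial/viral taxon assignment, then tally counts
    per taxon. Input records are (read_id, mapped_to_human, taxon)."""
    counts = {}
    for read_id, mapped_to_human, taxon in alignments:
        if mapped_to_human or taxon is None:
            continue  # host read, or non-human read with no assignment
        counts[taxon] = counts.get(taxon, 0) + 1
    return counts

sample = [
    ("r1", True,  None),                      # human transcript
    ("r2", False, "Salmonella enterica"),
    ("r3", False, "Salmonella enterica"),
    ("r4", False, None),                      # unassigned non-human read
    ("r5", False, "Influenza A virus"),
]
print(metatranscriptomic_screen(sample))
# {'Salmonella enterica': 2, 'Influenza A virus': 1}
```

    In a real pipeline the host-mapping flag comes from an alignment against the human genome and the taxon label from a metagenomic classifier, but the downstream aggregation into a per-sample taxon profile follows this pattern.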

  16. Tissue Engineered Blood Vessels as Promising Tools for Testing Drug Toxicity (Expert Opinion Editorial)

    PubMed Central

    Truskey, George A.; Fernandez, Cristina E.

    2015-01-01

    Drug-induced vascular injury (DIVI) is a serious problem in preclinical studies of vasoactive molecules and for survivors of pediatric cancers. DIVI is often observed in rodents and some larger animals, primarily with drugs affecting vascular tone, but not in humans; however, DIVI observed in animal studies often precludes a drug candidate from continuing along the development pipeline. Thus, there is great interest by the pharmaceutical industry to identify quantifiable human biomarkers of DIVI. Small-scale endothelialized tissue-engineered blood vessels using human cells represent a promising approach to screen drug candidates and develop alternatives to cancer therapeutics in vitro. We identify several technical challenges that remain to be addressed, including high throughput systems to screen large numbers of candidates, identification of suitable cell sources, and establishing and maintaining a differentiated state of the vessel wall cells. Adequately addressing these challenges should yield novel platforms to screen drugs and develop new therapeutics to treat cardiovascular disease. PMID:26028128

  17. Microfluidics‐based 3D cell culture models: Utility in novel drug discovery and delivery research

    PubMed Central

    Gupta, Nilesh; Liu, Jeffrey R.; Patel, Brijeshkumar; Solomon, Deepak E.; Vaidya, Bhuvaneshwar

    2016-01-01

    Abstract The implementation of microfluidic devices within life sciences has furthered the possibilities of both academic and industrial applications such as rapid genome sequencing, predictive drug studies, and single cell manipulation. In contrast to the preferred two‐dimensional cell‐based screening, three‐dimensional (3D) systems have more in vivo relevance as well as ability to perform as a predictive tool for the success or failure of a drug screening campaign. 3D cell culture has shown an adaptive response to the recent advancements in microfluidic technologies, which has allowed better control over spheroid sizes and subsequent drug screening studies. In this review, we highlight the most significant developments in the field of microfluidic 3D culture over the past half‐decade, with a special focus on their benefits and the challenges that lie ahead. With newer technologies emerging, implementation of microfluidic 3D culture systems into the drug discovery pipeline is right around the corner. PMID:29313007

  18. High-Throughput Method for Automated Colony and Cell Counting by Digital Image Analysis Based on Edge Detection

    PubMed Central

    Choudhry, Priya

    2016-01-01

    Counting cells and colonies is an integral part of high-throughput screens and quantitative cellular assays. Due to its subjective and time-intensive nature, manual counting has hindered the adoption of cellular assays such as tumor spheroid formation in high-throughput screens. The objective of this study was to develop an automated method for quick and reliable counting of cells and colonies from digital images. For this purpose, I developed an ImageJ macro Cell Colony Edge and a CellProfiler Pipeline Cell Colony Counting, and compared them to other open-source digital methods and manual counts. The ImageJ macro Cell Colony Edge is valuable in counting cells and colonies, and measuring their area, volume, morphology, and intensity. In this study, I demonstrate that Cell Colony Edge is superior to other open-source methods, in speed, accuracy and applicability to diverse cellular assays. It can fulfill the need to automate colony/cell counting in high-throughput screens, colony forming assays, and cellular assays. PMID:26848849
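
    The counting step at the heart of tools like the Cell Colony Edge macro can be reduced to thresholding followed by connected-component labeling. The sketch below, assuming a plain nested-list image and 4-connectivity, is a minimal stand-in for that segmentation-and-count stage, not the macro's actual edge-detection algorithm.

```python
from collections import deque

def count_colonies(image, threshold):
    """Count colonies as 4-connected components of above-threshold
    pixels, using breadth-first flood fill."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for i in range(rows):
        for j in range(cols):
            if image[i][j] >= threshold and not seen[i][j]:
                count += 1                      # new colony found
                queue = deque([(i, j)])
                seen[i][j] = True
                while queue:                    # flood-fill the component
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

# Toy intensity image of a plate with two bright blobs
plate = [[0, 9, 9, 0, 0],
         [0, 9, 0, 0, 8],
         [0, 0, 0, 8, 8]]
print(count_colonies(plate, threshold=5))  # 2
```

    Real pipelines add edge detection, hole filling, and size/shape filters on top of this to separate touching colonies and reject debris, which is where most of the accuracy differences between methods arise.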

  19. A Framework for Realistic Modeling and Display of Object Surface Appearance

    NASA Astrophysics Data System (ADS)

    Darling, Benjamin A.

    With advances in screen and video hardware technology, the type of content presented on computers has progressed from text and simple shapes to high-resolution photographs, photorealistic renderings, and high-definition video. At the same time, there have been significant advances in the area of content capture, with the development of devices and methods for creating rich digital representations of real-world objects. Unlike photo or video capture, which provide a fixed record of the light in a scene, these new technologies provide information on the underlying properties of the objects, allowing their appearance to be simulated for novel lighting and viewing conditions. These capabilities provide an opportunity to continue the computer display progression, from high-fidelity image presentations to digital surrogates that recreate the experience of directly viewing objects in the real world. In this dissertation, a framework was developed for representing objects with complex color, gloss, and texture properties and displaying them onscreen to appear as if they are part of the real-world environment. At its core, there is a conceptual shift from a traditional image-based display workflow to an object-based one. Instead of presenting the stored patterns of light from a scene, the objective is to reproduce the appearance attributes of a stored object by simulating its dynamic patterns of light for the real viewing and lighting geometry. This is accomplished using a computational approach where the physical light sources are modeled and the observer and display screen are actively tracked. Surface colors are calculated for the real spectral composition of the illumination with a custom multispectral rendering pipeline. In a set of experiments, the accuracy of color and gloss reproduction was evaluated by measuring the screen directly with a spectroradiometer. 
Gloss reproduction was assessed by comparing gonio measurements of the screen output to measurements of the real samples in the same measurement configuration. A chromatic adaptation experiment was performed to evaluate color appearance in the framework and explore the factors that contribute to differences when viewing self-luminous displays as opposed to reflective objects. A set of sample applications was developed to demonstrate the potential utility of the object display technology for digital proofing, psychophysical testing, and artwork display.

  20. Jet-mixing of initially-stratified liquid-liquid pipe flows: experiments and numerical simulations

    NASA Astrophysics Data System (ADS)

    Wright, Stuart; Ibarra-Hernandes, Roberto; Xie, Zhihua; Markides, Christos; Matar, Omar

    2016-11-01

    Low pipeline velocities lead to stratification and so-called 'phase slip' in horizontal liquid-liquid flows due to differences in liquid densities and viscosities. Stratified flows have no suitable single point for sampling, from which average phase properties (e.g. fractions) can be established. Inline mixing, achieved by static mixers or jets in cross-flow (JICF), is often used to overcome liquid-liquid stratification by establishing unstable two-phase dispersions for sampling. Achieving dispersions in liquid-liquid pipeline flows using JICF is the subject of this experimental and modelling work. The experimental facility involves a matched refractive index liquid-liquid-solid system, featuring an ETFE test section and experimental liquids of silicone oil and a 51-wt% glycerol solution. The matching then allows the dispersed fluid phase fractions and velocity fields to be established through advanced optical techniques, namely PLIF (for phase) and PTV or PIV (for velocity fields). CFD codes using the volume of fluid (VOF) method are then used to demonstrate JICF breakup and dispersion in stratified pipeline flows. A number of simple jet configurations are described and their dispersion effectiveness is compared with the experimental results. Funding from Cameron for Ph.D. studentship (SW) gratefully acknowledged.

  1. MONA – Interactive manipulation of molecule collections

    PubMed Central

    2013-01-01

    Working with small‐molecule datasets is a routine task for cheminformaticians and chemists. The analysis and comparison of vendor catalogues and the compilation of promising candidates as starting points for screening campaigns are but a few very common applications. The workflows applied for this purpose usually consist of multiple basic cheminformatics tasks such as checking for duplicates or filtering by physico‐chemical properties. Pipelining tools make it possible to create and change such workflows without much effort, but usually do not support interventions once the pipeline has been started. In many contexts, however, the best suited workflow is not known in advance, thus making it necessary to take the results of the previous steps into consideration before proceeding. To support intuition‐driven processing of compound collections, we developed MONA, an interactive tool that has been designed to prepare and visualize large small‐molecule datasets. Using an SQL database, common cheminformatics tasks such as analysis and filtering can be performed interactively with various methods for visual support. Great care was taken in creating a simple, intuitive user interface which can be instantly used without any setup steps. MONA combines the interactivity of molecule database systems with the simplicity of pipelining tools, thus enabling the case‐to‐case application of chemistry expert knowledge. The current version is available free of charge for academic use and can be downloaded at http://www.zbh.uni‐hamburg.de/mona. PMID:23985157

  2. Measuring success: Results from a national survey of recruitment and retention initiatives in the nursing workforce

    PubMed Central

    Carthon, J. Margo Brooks; Nguyen, Thai-Huy; Chittams, Jesse; Park, Elizabeth; Guevara, James

    2015-01-01

    Objectives: The purpose of this study was to identify common components of diversity pipeline programs across a national sample of nursing institutions and determine what effect these programs have on increasing underrepresented minority enrollment and graduation. Design: Linked data from an electronic survey conducted November 2012 to March 2013 and American Association of Colleges of Nursing baccalaureate graduation and enrollment data (2008 and 2012). Participants: Academic and administrative staff of 164 nursing schools in 26 states, including Puerto Rico in the United States. Methods: Chi-square statistics were used to (1) describe organizational features of nursing diversity pipeline programs and (2) determine significant trends in underrepresented minorities' graduation and enrollment between nursing schools with and without diversity pipeline programs. Results: Twenty percent (n = 33) of surveyed nursing schools reported a structured diversity pipeline program. The most frequent program measures associated with pipeline programs included mentorship, academic, and psychosocial support. Asian, Hispanic, and Native Hawaiian/Pacific Islander nursing student enrollment increased between 2008 and 2012. Hispanic/Latino graduation rates increased (7.9%–10.4%, p = .001), but they decreased among Black (6.8%–5.0%, p = .004) and Native American/Pacific Islander students (2.1%–0.3%, p ≤ .001). Conclusions: Nursing diversity pipeline programs are associated with increases in nursing school enrollment and graduation for some, although not all, minority students. Future initiatives should build on current trends while creating targeted strategies to reverse downward graduation trends among Black, Native American, and Pacific Island nursing students. PMID:24880900

  3. Measuring success: results from a national survey of recruitment and retention initiatives in the nursing workforce.

    PubMed

    Brooks Carthon, J Margo; Nguyen, Thai-Huy; Chittams, Jesse; Park, Elizabeth; Guevara, James

    2014-01-01

    The purpose of this study was to identify common components of diversity pipeline programs across a national sample of nursing institutions and determine what effect these programs have on increasing underrepresented minority enrollment and graduation. Linked data from an electronic survey conducted November 2012 to March 2013 and American Association of Colleges of Nursing baccalaureate graduation and enrollment data (2008 and 2012) were used. Participants were academic and administrative staff of 164 nursing schools in 26 states, including Puerto Rico, in the United States. Chi-square statistics were used to (1) describe organizational features of nursing diversity pipeline programs and (2) determine significant trends in underrepresented minorities' graduation and enrollment between nursing schools with and without diversity pipeline programs. Twenty percent (n = 33) of surveyed nursing schools reported a structured diversity pipeline program. The most frequent program measures associated with pipeline programs included mentorship, academic, and psychosocial support. Asian, Hispanic, and Native Hawaiian/Pacific Islander nursing student enrollment increased between 2008 and 2012. Hispanic/Latino graduation rates increased (7.9%–10.4%, p = .001), but they decreased among Black (6.8%–5.0%, p = .004) and Native American/Pacific Islander students (2.1%–0.3%, p ≤ .001). Nursing diversity pipeline programs are associated with increases in nursing school enrollment and graduation for some, although not all, minority students. Future initiatives should build on current trends while creating targeted strategies to reverse downward graduation trends among Black, Native American, and Pacific Islander nursing students. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Chemical reaction vector embeddings: towards predicting drug metabolism in the human gut microbiome.

    PubMed

    Mallory, Emily K; Acharya, Ambika; Rensi, Stefano E; Turnbaugh, Peter J; Bright, Roselie A; Altman, Russ B

    2018-01-01

    Bacteria in the human gut have the ability to activate, inactivate, and reactivate drugs with both intended and unintended effects. For example, the drug digoxin is reduced to the inactive metabolite dihydrodigoxin by the gut Actinobacterium E. lenta, and patients colonized with high levels of drug-metabolizing strains may have limited response to the drug. Understanding the complete space of drugs that are metabolized by the human gut microbiome is critical for predicting bacteria-drug relationships and their effects on individual patient response. Discovery and validation of drug metabolism via bacterial enzymes have yielded >50 drugs after nearly a century of experimental research. However, there are limited computational tools for screening drugs for potential metabolism by the gut microbiome. We developed a pipeline for comparing and characterizing chemical transformations using continuous vector representations of molecular structure learned using unsupervised representation learning. We applied this pipeline to chemical reaction data from MetaCyc to characterize the utility of vector representations for chemical reaction transformations. After clustering molecular and reaction vectors, we performed enrichment analyses and queries to characterize the space. We detected enriched enzyme names, Gene Ontology terms, and Enzyme Commission (EC) classes within reaction clusters. In addition, we queried reactions against drug-metabolite transformations known to be metabolized by the human gut microbiome. The top results for these known drug transformations contained similar substructure modifications to the original drug pair. This work enables high-throughput screening of drugs and their resulting metabolites against chemical reactions common to gut bacteria.
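
    The core trick this record describes can be sketched in a few lines: once molecules live in a vector space, a transformation is naturally represented as the difference between product and substrate vectors, so analogous reactions (the same functional-group change on different scaffolds) yield similar difference vectors. The sketch below is a toy under stated assumptions: `embed` is a hypothetical hashed-bigram stand-in for the learned unsupervised embedding, not the authors' model.

```python
import math

def embed(mol, dim=8):
    # Toy stand-in for a learned molecular embedding (hypothetical):
    # hash character bigrams of a SMILES string into a fixed-size count
    # vector. The paper's pipeline learns such vectors with unsupervised
    # representation learning; this only mimics the interface.
    v = [0.0] * dim
    for i in range(len(mol) - 1):
        v[hash(mol[i:i + 2]) % dim] += 1.0
    return v

def reaction_vector(substrate, product, dim=8):
    # A chemical transformation as the difference between product and
    # substrate embeddings: analogous transformations give similar vectors.
    s, p = embed(substrate, dim), embed(product, dim)
    return [pi - si for si, pi in zip(s, p)]

def cosine(a, b):
    # Cosine similarity, the usual way to compare two reaction vectors.
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0.0 or nb == 0.0:
        return 0.0
    return sum(x * y for x, y in zip(a, b)) / (na * nb)
```

    Querying a known drug-metabolite pair against a reaction database then reduces to ranking stored reaction vectors by cosine similarity to the drug pair's vector.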

  5. Molgenis-impute: imputation pipeline in a box.

    PubMed

    Kanterakis, Alexandros; Deelen, Patrick; van Dijk, Freerk; Byelas, Heorhiy; Dijkstra, Martijn; Swertz, Morris A

    2015-08-19

    Genotype imputation is an important procedure in current genomic analyses such as genome-wide association studies, meta-analyses and fine mapping. Although high-quality tools are available that perform the steps of this process, considerable effort and expertise are required to set up and run a best-practice imputation pipeline, particularly for larger genotype datasets, where imputation has to scale out in parallel on computer clusters. Here we present MOLGENIS-impute, an 'imputation in a box' solution that seamlessly and transparently automates the set-up and running of all the steps of the imputation process. These steps include genome build liftover, genotype phasing with SHAPEIT2, quality control, sample and chromosomal chunking/merging, and imputation with IMPUTE2. MOLGENIS-impute builds on MOLGENIS-compute, a simple pipeline management platform for submission and monitoring of bioinformatics tasks in High Performance Computing (HPC) environments like local/cloud servers, clusters and grids. All the required tools, data and scripts are downloaded and installed in a single step. Researchers with diverse backgrounds and expertise have tested MOLGENIS-impute at different locations and have imputed over 30,000 samples so far, using the 1,000 Genomes Project and new Genome of the Netherlands data as the imputation reference. The tests have been performed on PBS/SGE clusters, cloud VMs and in a grid HPC environment. MOLGENIS-impute gives priority to the ease of setting up, configuring and running an imputation. It has minimal dependencies and wraps the pipeline in a simple command-line interface, without sacrificing the flexibility to adapt or limiting the options of the underlying imputation tools. It does not require knowledge of a workflow system or programming, and is targeted at researchers who just want to apply best practices in imputation via simple commands. It is built on the MOLGENIS-compute workflow framework to enable customization with additional computational steps, or it can be included in other bioinformatics pipelines. It is available as open source from: https://github.com/molgenis/molgenis-imputation.
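
    One of the steps this pipeline automates, chromosomal chunking, is easy to illustrate: a chromosome is cut into fixed-size windows, with overlap so that flanking markers inform the chunk edges, and each window is phased/imputed in parallel before merging. A minimal sketch with illustrative sizes, not MOLGENIS-impute's actual defaults:

```python
def chunk_chromosome(chrom_len, chunk_size=5_000_000, overlap=250_000):
    # Split a chromosome into overlapping windows so each chunk can be
    # imputed in parallel; the overlap lets flanking markers inform the
    # edges, and overlapping calls are reconciled at the merge step.
    # Sizes are illustrative, not the pipeline's actual defaults.
    chunks = []
    start = 1
    while start <= chrom_len:
        end = min(start + chunk_size - 1, chrom_len)
        chunks.append((max(1, start - overlap), min(end + overlap, chrom_len)))
        start = end + 1
    return chunks
```

    Each `(start, end)` window becomes one cluster job; merging drops the overlapping flanks so every position is reported exactly once.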

  6. Evaluation of Primers Targeting the Diazotroph Functional Gene and Development of NifMAP – A Bioinformatics Pipeline for Analyzing nifH Amplicon Data

    PubMed Central

    Angel, Roey; Nepel, Maximilian; Panhölzl, Christopher; Schmidt, Hannes; Herbold, Craig W.; Eichorst, Stephanie A.; Woebken, Dagmar

    2018-01-01

    Diazotrophic microorganisms introduce biologically available nitrogen (N) to the global N cycle through the activity of the nitrogenase enzyme. The genetically conserved dinitrogenase reductase (nifH) gene is phylogenetically distributed across four clusters (I–IV) and is widely used as a marker gene for N2 fixation, permitting investigators to study the genetic diversity of diazotrophs in nature and target potential participants in N2 fixation. To date, standardized pipelines for analyzing the nifH functional gene have been limited, in stark contrast to the 16S rRNA gene. Here we present a bioinformatics pipeline for processing nifH amplicon datasets – NifMAP (“NifH MiSeq Illumina Amplicon Analysis Pipeline”), which, as a novel aspect, uses hidden Markov models to filter out nifH homologs. By using this pipeline, we evaluated the broadly inclusive primer pairs (Ueda19F–R6, IGK3–DVV, and F2–R6) that target the nifH gene. To evaluate any systematic biases, the nifH gene was amplified with the aforementioned primer pairs in a diverse collection of environmental samples (soils, rhizosphere and root samples, biological soil crusts and estuarine samples), in addition to a nifH mock community consisting of six phylogenetically diverse members. We noted that all primer pairs co-amplified nifH homologs to varying degrees; up to 90% of the amplicons were nifH homologs with IGK3–DVV in some samples (rhizosphere and roots from tall oat-grass). With regard to specificity, we observed some degree of bias across the primer pairs. For example, primer pair F2–R6 discriminated against cyanobacteria (amongst others), yet captured many sequences from subclusters IIIE and IIIL-N. These subclusters were largely missed by the primer pair IGK3–DVV, which also tended to discriminate against Alphaproteobacteria, but amplified sequences within clusters IIIC (affiliated with Clostridia) and clusters IVB and IVC.
Primer pair Ueda19F–R6 exhibited the least bias and successfully captured diazotrophs in cluster I and subclusters IIIE, IIIL, IIIM, and IIIN, but tended to discriminate against Firmicutes and subcluster IIIC. Taken together, our newly established bioinformatics pipeline, NifMAP, along with our systematic evaluations of nifH primer pairs permit more robust, high-throughput investigations of diazotrophs in diverse environments. PMID:29760683
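
    The homolog-filtering idea at the heart of NifMAP can be sketched as a competitive comparison: each read is scored against the nifH HMM and against HMMs of known homologous genes, and a read is kept only if nifH is its best-scoring model. The sketch below assumes per-model bit scores have already been parsed from hmmsearch output (parsing omitted; the homolog model names are illustrative):

```python
def filter_nifh(read_scores, target="nifH"):
    # Keep reads whose best-scoring HMM is the nifH model rather than a
    # homolog (e.g. the chlorophyllide reductase genes bchX/chlL, which
    # are well-known nifH paralogs). read_scores maps
    # read id -> {model name: bit score}. This mimics competitive
    # filtering against homologous-gene HMMs; parsing of hmmsearch
    # output into this structure is omitted.
    kept = []
    for read, scores in read_scores.items():
        best_model = max(scores, key=scores.get)
        if best_model == target:
            kept.append(read)
    return kept
```

    Reads that score best against a paralog model are discarded before downstream clustering, which is what removes the up-to-90% homolog fraction the abstract reports for some primer pairs.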

  7. Analysis of Bacterial and Archaeal Communities along a High-Molecular-Weight Polyacrylamide Transportation Pipeline System in an Oil Field

    PubMed Central

    Li, Cai-Yun; Li, Jing-Yan; Mbadinga, Serge Maurice; Liu, Jin-Feng; Gu, Ji-Dong; Mu, Bo-Zhong

    2015-01-01

    Viscosity loss of high-molecular-weight partially hydrolyzed polyacrylamide (HPAM) solution was observed in a water injection pipeline before being injected into subterranean oil wells. In order to investigate the possible involvement of microorganisms in HPAM viscosity loss, both bacterial and archaeal community compositions of four samples collected from different points of the transportation pipeline were analyzed using PCR amplification of the 16S rRNA gene and the clone library construction method, together with analysis of the physicochemical properties of the HPAM solution and environmental factors. Further, the relationships of environmental factors and HPAM properties with the microorganisms were delineated by canonical correspondence analysis (CCA). Diverse bacterial and archaeal groups were detected in the four samples. The microbial community of initial solution S1, gathered from the make-up tank, is similar to that of solution S2, gathered from the first filter, and that of solution S3, obtained between the first and the second filter, is similar to that of solution S4, obtained between the second filter and the injection well. Members of the genus Acinetobacter were detected at high abundance in S3 and S4, in which HPAM viscosity was considerably reduced, suggesting that they likely played a considerable role in HPAM viscosity loss. This study presents information on microbial community diversity in the HPAM transportation pipeline and the possible involvement of microorganisms in HPAM viscosity loss and biodegradation. The results will help in understanding the microbial community's contribution to viscosity change and provide useful information for microbial control in oil fields. PMID:25849654

  8. Analysis of bacterial and archaeal communities along a high-molecular-weight polyacrylamide transportation pipeline system in an oil field.

    PubMed

    Li, Cai-Yun; Li, Jing-Yan; Mbadinga, Serge Maurice; Liu, Jin-Feng; Gu, Ji-Dong; Mu, Bo-Zhong

    2015-04-02

    Viscosity loss of high-molecular-weight partially hydrolyzed polyacrylamide (HPAM) solution was observed in a water injection pipeline before being injected into subterranean oil wells. In order to investigate the possible involvement of microorganisms in HPAM viscosity loss, both bacterial and archaeal community compositions of four samples collected from different points of the transportation pipeline were analyzed using PCR amplification of the 16S rRNA gene and the clone library construction method, together with analysis of the physicochemical properties of the HPAM solution and environmental factors. Further, the relationships of environmental factors and HPAM properties with the microorganisms were delineated by canonical correspondence analysis (CCA). Diverse bacterial and archaeal groups were detected in the four samples. The microbial community of initial solution S1, gathered from the make-up tank, is similar to that of solution S2, gathered from the first filter, and that of solution S3, obtained between the first and the second filter, is similar to that of solution S4, obtained between the second filter and the injection well. Members of the genus Acinetobacter were detected at high abundance in S3 and S4, in which HPAM viscosity was considerably reduced, suggesting that they likely played a considerable role in HPAM viscosity loss. This study presents information on microbial community diversity in the HPAM transportation pipeline and the possible involvement of microorganisms in HPAM viscosity loss and biodegradation. The results will help in understanding the microbial community's contribution to viscosity change and provide useful information for microbial control in oil fields.

  9. Thallium-rich rust scales in drinkable water distribution systems: A case study from northern Tuscany, Italy.

    PubMed

    Biagioni, Cristian; D'Orazio, Massimo; Lepore, Giovanni O; d'Acapito, Francesco; Vezzoni, Simone

    2017-06-01

    Following the detection of a severe thallium contamination of the drinkable water from the public distribution system of Valdicastello Carducci-Pietrasanta (northern Tuscany, Italy), and the identification of the source of contamination in the Molini di Sant'Anna spring (average Tl content ≈15 μg L⁻¹), the replacement of the contaminated water with a virtually Tl-free one (Tl < 0.10 μg L⁻¹) caused an increase in Tl concentration in the drinkable water. This suggested that the pipeline interior had become a secondary source of Tl contamination, prompting its mineralogical and geochemical study. Rust scale samples taken from several pipeline segments, as well as leaching products obtained from these samples, were investigated through scanning electron microscopy, X-ray fluorescence chemical analyses, inductively coupled plasma mass spectrometry, X-ray diffraction, and X-ray absorption spectroscopy. Thallium-rich rust scales (up to 5.3 wt% Tl) have been found only in pipeline samples taken downstream of the water treatment plant, whereas the sample taken upstream contains much less Tl (~90 μg g⁻¹). The Tl-rich nature of such scales is related to the occurrence of nano- and micro-spherules of Tl₂O₃ and less abundant nanocrystalline μm-sized encrustations of TlCl. Leaching experiments on Tl-rich rust scales indicate that a fraction of the available Tl is easily dissolved in tap water; X-ray absorption spectroscopy suggests that monovalent thallium occurs in water equilibrated with the rust scales, probably related to the dissolution of TlCl encrustations. Therefore, Tl, dissolved as Tl⁺ only in the water from the Molini di Sant'Anna spring, was partially removed through oxidative precipitation of Tl₂O₃ and precipitation of TlCl. This highlights the critical role played by the addition of chlorine-based oxidants in water treatment plants, which could favour the deposition of Tl-rich coatings within the pipelines, giving rise to unexpected secondary sources of contamination. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Incorporating Virtual Reactions into a Logic-based Ligand-based Virtual Screening Method to Discover New Leads

    PubMed Central

    Reynolds, Christopher R; Muggleton, Stephen H; Sternberg, Michael J E

    2015-01-01

    The use of virtual screening has become increasingly central to the drug development pipeline, with ligand-based virtual screening used to screen databases of compounds to predict their bioactivity against a target. These databases can represent only a small fraction of chemical space, and this paper describes a method of exploring synthetic space by applying virtual reactions to promising compounds within a database and generating focussed libraries of predicted derivatives. The ligand-based virtual screening tool Investigational Novel Drug Discovery by Example (INDDEx) is used as the basis for a system of virtual reactions. The use of virtual reactions is estimated to open up a potential space of 1.21×10¹² potential molecules. A de novo design algorithm known as Partial Logical-Rule Reactant Selection (PLoRRS) is introduced and incorporated into the INDDEx methodology. PLoRRS uses logical rules from the INDDEx model to select reactants for the de novo generation of potentially active products. The PLoRRS method is found to significantly increase the likelihood of retrieving molecules similar to known actives, with a p-value of 0.016. Case studies demonstrate that the virtual reactions produce molecules highly similar to known actives, including known blockbuster drugs. PMID:26583052
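
    The combinatorial fan-out behind estimates like 1.21×10¹² is simple: each virtual reaction applied over lists of compatible reactants multiplies out a product library. The sketch below is a deliberately naive toy, using string templating on simple linear SMILES in place of the reaction SMARTS machinery a real cheminformatics toolkit would provide; the reaction and function names are illustrative only.

```python
def amide_coupling(acid_smiles, amine_smiles):
    # Toy "virtual reaction": condense a carboxylic acid and an amine into
    # an amide by string templating on simple linear SMILES (drop the acid's
    # terminal hydroxyl oxygen, graft the amine's nitrogen on). Real systems
    # apply reaction SMARTS via a cheminformatics toolkit.
    assert acid_smiles.endswith("C(=O)O") and amine_smiles.startswith("N")
    return acid_smiles[:-1] + amine_smiles

def focussed_library(acids, amines):
    # One virtual reaction over two reactant lists fans out multiplicatively,
    # which is how a modest set of reactions and reactants reaches
    # astronomically many candidate products.
    return [amide_coupling(a, m) for a in acids for m in amines]
```

    With R reactions and n reactants per role, the reachable space grows roughly as R·n², so even small reactant libraries quickly exceed what any enumerated database can hold, which is the motivation for rule-driven reactant selection such as PLoRRS.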

  11. When drug discovery meets web search: Learning to Rank for ligand-based virtual screening.

    PubMed

    Zhang, Wei; Ji, Lijuan; Chen, Yanan; Tang, Kailin; Wang, Haiping; Zhu, Ruixin; Jia, Wei; Cao, Zhiwei; Liu, Qi

    2015-01-01

    The rapid increase in the emergence of novel chemical substances creates a substantial demand for more sophisticated computational methodologies for drug discovery. In this study, the idea of Learning to Rank from web search was applied to drug virtual screening, with two unique capabilities: (1) identifying compounds for novel targets when there is not enough training data available for those targets, and (2) integrating heterogeneous data when compound affinities are measured on different platforms. A standard pipeline was designed to carry out Learning to Rank in virtual screening. Six Learning to Rank algorithms were investigated based on two public datasets collected from Binding Database and the newly published Community Structure-Activity Resource benchmark dataset. The results demonstrate that Learning to Rank is an efficient computational strategy for drug virtual screening, particularly due to its novel use in cross-target virtual screening and heterogeneous data integration. To the best of our knowledge, this is the first application of Learning to Rank in virtual screening. The experiment workflow and algorithm assessment designed in this study will provide a standard protocol for other similar studies. All the datasets as well as the implementations of Learning to Rank algorithms are available at http://www.tongji.edu.cn/~qiliu/lor_vs.html. Graphical abstract: The analogy between web search and ligand-based drug discovery.
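
    The pairwise reduction that underlies many Learning to Rank algorithms also explains the two capabilities claimed above: training pairs are formed only within a query (here, a target), so absolute affinities measured on different platforms never need to be directly comparable. A minimal sketch of that pairwise transform, with the training of the downstream binary classifier omitted (the data layout is an assumption, not the paper's actual format):

```python
from itertools import combinations

def pairwise_transform(groups):
    # Reduce ranking to binary classification, as in pairwise Learning to
    # Rank. groups: one list per target, each holding (feature_vector,
    # affinity) tuples. Pairs are formed only within a target, which is
    # what lets data from assays with incomparable absolute affinity
    # scales be mixed in one training set.
    X, y = [], []
    for group in groups:
        for (xa, ra), (xb, rb) in combinations(group, 2):
            if ra == rb:
                continue  # no preference between the two, no training pair
            X.append([a - b for a, b in zip(xa, xb)])
            y.append(1 if ra > rb else -1)
    return X, y
```

    A linear model trained on these difference vectors yields a scoring function whose sign on (x_a − x_b) predicts which compound ranks higher, so at screening time candidates are simply sorted by score.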

  12. Integrated pathway-based transcription regulation network mining and visualization based on gene expression profiles.

    PubMed

    Kibinge, Nelson; Ono, Naoaki; Horie, Masafumi; Sato, Tetsuo; Sugiura, Tadao; Altaf-Ul-Amin, Md; Saito, Akira; Kanaya, Shigehiko

    2016-06-01

    Conventionally, workflows examining transcription regulation networks from gene expression data involve distinct analytical steps. There is a need for pipelines that unify data mining and inference deduction into a singular framework to enhance interpretation and hypothesis generation. We propose a workflow that merges network construction with gene expression data mining, focusing on regulation processes in the context of transcription-factor-driven gene regulation. The pipeline implements pathway-based modularization of expression profiles into functional units to improve biological interpretation. The integrated workflow was implemented as a web application (TransReguloNet) with functions that enable pathway visualization and comparison of transcription factor activity between sample conditions defined in the experimental design. The pipeline merges differential expression, network construction, pathway-based abstraction, clustering and visualization. The framework was applied in the analysis of expression datasets related to lung, breast and prostate cancer. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Launching genomics into the cloud: deployment of Mercury, a next generation sequence analysis pipeline.

    PubMed

    Reid, Jeffrey G; Carroll, Andrew; Veeraraghavan, Narayanan; Dahdouli, Mahmoud; Sundquist, Andreas; English, Adam; Bainbridge, Matthew; White, Simon; Salerno, William; Buhay, Christian; Yu, Fuli; Muzny, Donna; Daly, Richard; Duyk, Geoff; Gibbs, Richard A; Boerwinkle, Eric

    2014-01-29

    Massively parallel DNA sequencing generates staggering amounts of data. Decreasing cost, increasing throughput, and improved annotation have expanded the diversity of genomics applications in research and clinical practice. This expanding scale creates analytical challenges: accommodating peak compute demand, coordinating secure access for multiple analysts, and sharing validated tools and results. To address these challenges, we have developed the Mercury analysis pipeline and deployed it in local hardware and the Amazon Web Services cloud via the DNAnexus platform. Mercury is an automated, flexible, and extensible analysis workflow that provides accurate and reproducible genomic results at scales ranging from individuals to large cohorts. By taking advantage of cloud computing and with Mercury implemented on the DNAnexus platform, we have demonstrated a powerful combination of a robust and fully validated software pipeline and a scalable computational resource that, to date, we have applied to more than 10,000 whole genome and whole exome samples.

  14. Distributed Fiber Optic Sensors for Earthquake Detection and Early Warning

    NASA Astrophysics Data System (ADS)

    Karrenbach, M. H.; Cole, S.

    2016-12-01

    Fiber optic cables placed along pipelines, roads or other infrastructure provide dense sampling of passing seismic wavefields. Laser interrogation units illuminate the fiber over its entire length, and strain at desired points along the fiber can be determined from the reflected signal. Single-mode optical fibers up to 50 km in length can provide a distributed acoustic sensing (DAS) system where the acoustic bandwidth of each channel is limited only by the round-trip time over the length of the cable (0.0005 s for a 50 km cable). Using a 10 m spatial resolution results in 4000 channels sampled at 2.5 kHz spanning a 40 km-long fiber deployed along a pipeline. The inline strain field is averaged along the fiber over a 10 m section of the cable at each desired spatial sample, creating a virtual sensor location. Typically, a dynamic strain sensitivity of sub-nanometers within each gauge along the entire length of the fiber can be achieved. This sensitivity corresponds to a particle acceleration figure of approximately −90 dB m s⁻² Hz⁻½. Such a fiber optic sensor is not as sensitive as long-period seismometers used in earthquake networks, but given the large number of channels, small to medium-sized earthquakes can be detected, depending on distance from the array, and can be located with precision through arrival time inversions. We show several examples of earthquake recordings using distributed fiber optic arrays that were deployed originally for other purposes. A 480 km-long section of a pipeline in Turkey was actively monitored with a DAS fiber optic system for activities in the immediate vicinity of the pipeline. The densely spaced sensor array along the pipeline detected earthquakes in the magnitude range 3.6–7.2, centered near Van, Turkey. 
    Second, a fiber optic system located along a rail line near the Salton Sea in California was used to create a smaller-scale fiber optic sensor array, on which earthquakes with magnitudes 2.2–2.7 were recorded from epicenters up to 65 km away. Our analysis shows that existing fiber optic installations along infrastructure could be combined to form a large-aperture array with tens of thousands of channels for epicenter estimation and for early warning purposes, augmenting existing earthquake sensor networks.
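
    The channel counts and sample rates quoted in this record follow from simple fiber arithmetic, assuming a probe-light group velocity in silica fiber of roughly 2×10⁸ m/s:

```python
def das_limits(fiber_len_m, gauge_m, v_fiber=2.0e8):
    # The round-trip time of a probe pulse sets the maximum interrogation
    # (sample) rate, since only one pulse can be in flight at a time; the
    # gauge length sets the virtual channel count.
    # v_fiber ~ 2e8 m/s: approximate group velocity of light in silica fiber.
    round_trip_s = 2.0 * fiber_len_m / v_fiber
    max_rate_hz = 1.0 / round_trip_s
    channels = int(fiber_len_m / gauge_m)
    return round_trip_s, max_rate_hz, channels
```

    For the 40 km pipeline fiber with 10 m gauge length this gives 4000 channels at a 2500 Hz maximum interrogation rate, matching the 2.5 kHz figure above; a 50 km cable gives the quoted 0.0005 s round trip.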

  15. FANTOM5 CAGE profiles of human and mouse samples.

    PubMed

    Noguchi, Shuhei; Arakawa, Takahiro; Fukuda, Shiro; Furuno, Masaaki; Hasegawa, Akira; Hori, Fumi; Ishikawa-Kato, Sachi; Kaida, Kaoru; Kaiho, Ai; Kanamori-Katayama, Mutsumi; Kawashima, Tsugumi; Kojima, Miki; Kubosaki, Atsutaka; Manabe, Ri-Ichiroh; Murata, Mitsuyoshi; Nagao-Sato, Sayaka; Nakazato, Kenichi; Ninomiya, Noriko; Nishiyori-Sueki, Hiromi; Noma, Shohei; Saijyo, Eri; Saka, Akiko; Sakai, Mizuho; Simon, Christophe; Suzuki, Naoko; Tagami, Michihira; Watanabe, Shoko; Yoshida, Shigehiro; Arner, Peter; Axton, Richard A; Babina, Magda; Baillie, J Kenneth; Barnett, Timothy C; Beckhouse, Anthony G; Blumenthal, Antje; Bodega, Beatrice; Bonetti, Alessandro; Briggs, James; Brombacher, Frank; Carlisle, Ailsa J; Clevers, Hans C; Davis, Carrie A; Detmar, Michael; Dohi, Taeko; Edge, Albert S B; Edinger, Matthias; Ehrlund, Anna; Ekwall, Karl; Endoh, Mitsuhiro; Enomoto, Hideki; Eslami, Afsaneh; Fagiolini, Michela; Fairbairn, Lynsey; Farach-Carson, Mary C; Faulkner, Geoffrey J; Ferrai, Carmelo; Fisher, Malcolm E; Forrester, Lesley M; Fujita, Rie; Furusawa, Jun-Ichi; Geijtenbeek, Teunis B; Gingeras, Thomas; Goldowitz, Daniel; Guhl, Sven; Guler, Reto; Gustincich, Stefano; Ha, Thomas J; Hamaguchi, Masahide; Hara, Mitsuko; Hasegawa, Yuki; Herlyn, Meenhard; Heutink, Peter; Hitchens, Kelly J; Hume, David A; Ikawa, Tomokatsu; Ishizu, Yuri; Kai, Chieko; Kawamoto, Hiroshi; Kawamura, Yuki I; Kempfle, Judith S; Kenna, Tony J; Kere, Juha; Khachigian, Levon M; Kitamura, Toshio; Klein, Sarah; Klinken, S Peter; Knox, Alan J; Kojima, Soichi; Koseki, Haruhiko; Koyasu, Shigeo; Lee, Weonju; Lennartsson, Andreas; Mackay-Sim, Alan; Mejhert, Niklas; Mizuno, Yosuke; Morikawa, Hiromasa; Morimoto, Mitsuru; Moro, Kazuyo; Morris, Kelly J; Motohashi, Hozumi; Mummery, Christine L; Nakachi, Yutaka; Nakahara, Fumio; Nakamura, Toshiyuki; Nakamura, Yukio; Nozaki, Tadasuke; Ogishima, Soichi; Ohkura, Naganari; Ohno, Hiroshi; Ohshima, Mitsuhiro; Okada-Hatakeyama, Mariko; Okazaki, Yasushi; Orlando, Valerio; 
Ovchinnikov, Dmitry A; Passier, Robert; Patrikakis, Margaret; Pombo, Ana; Pradhan-Bhatt, Swati; Qin, Xian-Yang; Rehli, Michael; Rizzu, Patrizia; Roy, Sugata; Sajantila, Antti; Sakaguchi, Shimon; Sato, Hiroki; Satoh, Hironori; Savvi, Suzana; Saxena, Alka; Schmidl, Christian; Schneider, Claudio; Schulze-Tanzil, Gundula G; Schwegmann, Anita; Sheng, Guojun; Shin, Jay W; Sugiyama, Daisuke; Sugiyama, Takaaki; Summers, Kim M; Takahashi, Naoko; Takai, Jun; Tanaka, Hiroshi; Tatsukawa, Hideki; Tomoiu, Andru; Toyoda, Hiroo; van de Wetering, Marc; van den Berg, Linda M; Verardo, Roberto; Vijayan, Dipti; Wells, Christine A; Winteringham, Louise N; Wolvetang, Ernst; Yamaguchi, Yoko; Yamamoto, Masayuki; Yanagi-Mizuochi, Chiyo; Yoneda, Misako; Yonekura, Yohei; Zhang, Peter G; Zucchelli, Silvia; Abugessaisa, Imad; Arner, Erik; Harshbarger, Jayson; Kondo, Atsushi; Lassmann, Timo; Lizio, Marina; Sahin, Serkan; Sengstag, Thierry; Severin, Jessica; Shimoji, Hisashi; Suzuki, Masanori; Suzuki, Harukazu; Kawai, Jun; Kondo, Naoto; Itoh, Masayoshi; Daub, Carsten O; Kasukawa, Takeya; Kawaji, Hideya; Carninci, Piero; Forrest, Alistair R R; Hayashizaki, Yoshihide

    2017-08-29

    In the FANTOM5 project, transcription initiation events across the human and mouse genomes were mapped at a single base-pair resolution and their frequencies were monitored by CAGE (Cap Analysis of Gene Expression) coupled with single-molecule sequencing. Approximately three thousand samples, consisting of a variety of primary cells, tissues, cell lines, and time-series samples during cell activation and development, were subjected to a uniform pipeline of CAGE data production. The analysis pipeline started by measuring RNA extracts to assess their quality, and continued with CAGE library production using a robotic or a manual workflow, single-molecule sequencing, and computational processing to generate frequencies of transcription initiation. The resulting data represent the consequences of transcriptional regulation in each analyzed state of mammalian cells. Non-overlapping peaks over the CAGE profiles, approximately 200,000 and 150,000 peaks for the human and mouse genomes respectively, were identified and annotated to provide the precise locations of known promoters as well as novel ones, and to quantify their activities.

  16. FANTOM5 CAGE profiles of human and mouse samples

    PubMed Central

    Noguchi, Shuhei; Arakawa, Takahiro; Fukuda, Shiro; Furuno, Masaaki; Hasegawa, Akira; Hori, Fumi; Ishikawa-Kato, Sachi; Kaida, Kaoru; Kaiho, Ai; Kanamori-Katayama, Mutsumi; Kawashima, Tsugumi; Kojima, Miki; Kubosaki, Atsutaka; Manabe, Ri-ichiroh; Murata, Mitsuyoshi; Nagao-Sato, Sayaka; Nakazato, Kenichi; Ninomiya, Noriko; Nishiyori-Sueki, Hiromi; Noma, Shohei; Saijyo, Eri; Saka, Akiko; Sakai, Mizuho; Simon, Christophe; Suzuki, Naoko; Tagami, Michihira; Watanabe, Shoko; Yoshida, Shigehiro; Arner, Peter; Axton, Richard A.; Babina, Magda; Baillie, J. Kenneth; Barnett, Timothy C.; Beckhouse, Anthony G.; Blumenthal, Antje; Bodega, Beatrice; Bonetti, Alessandro; Briggs, James; Brombacher, Frank; Carlisle, Ailsa J.; Clevers, Hans C.; Davis, Carrie A.; Detmar, Michael; Dohi, Taeko; Edge, Albert S.B.; Edinger, Matthias; Ehrlund, Anna; Ekwall, Karl; Endoh, Mitsuhiro; Enomoto, Hideki; Eslami, Afsaneh; Fagiolini, Michela; Fairbairn, Lynsey; Farach-Carson, Mary C.; Faulkner, Geoffrey J.; Ferrai, Carmelo; Fisher, Malcolm E.; Forrester, Lesley M.; Fujita, Rie; Furusawa, Jun-ichi; Geijtenbeek, Teunis B.; Gingeras, Thomas; Goldowitz, Daniel; Guhl, Sven; Guler, Reto; Gustincich, Stefano; Ha, Thomas J.; Hamaguchi, Masahide; Hara, Mitsuko; Hasegawa, Yuki; Herlyn, Meenhard; Heutink, Peter; Hitchens, Kelly J.; Hume, David A.; Ikawa, Tomokatsu; Ishizu, Yuri; Kai, Chieko; Kawamoto, Hiroshi; Kawamura, Yuki I.; Kempfle, Judith S.; Kenna, Tony J.; Kere, Juha; Khachigian, Levon M.; Kitamura, Toshio; Klein, Sarah; Klinken, S. 
Peter; Knox, Alan J.; Kojima, Soichi; Koseki, Haruhiko; Koyasu, Shigeo; Lee, Weonju; Lennartsson, Andreas; Mackay-sim, Alan; Mejhert, Niklas; Mizuno, Yosuke; Morikawa, Hiromasa; Morimoto, Mitsuru; Moro, Kazuyo; Morris, Kelly J.; Motohashi, Hozumi; Mummery, Christine L.; Nakachi, Yutaka; Nakahara, Fumio; Nakamura, Toshiyuki; Nakamura, Yukio; Nozaki, Tadasuke; Ogishima, Soichi; Ohkura, Naganari; Ohno, Hiroshi; Ohshima, Mitsuhiro; Okada-Hatakeyama, Mariko; Okazaki, Yasushi; Orlando, Valerio; Ovchinnikov, Dmitry A.; Passier, Robert; Patrikakis, Margaret; Pombo, Ana; Pradhan-Bhatt, Swati; Qin, Xian-Yang; Rehli, Michael; Rizzu, Patrizia; Roy, Sugata; Sajantila, Antti; Sakaguchi, Shimon; Sato, Hiroki; Satoh, Hironori; Savvi, Suzana; Saxena, Alka; Schmidl, Christian; Schneider, Claudio; Schulze-Tanzil, Gundula G.; Schwegmann, Anita; Sheng, Guojun; Shin, Jay W.; Sugiyama, Daisuke; Sugiyama, Takaaki; Summers, Kim M.; Takahashi, Naoko; Takai, Jun; Tanaka, Hiroshi; Tatsukawa, Hideki; Tomoiu, Andru; Toyoda, Hiroo; van de Wetering, Marc; van den Berg, Linda M.; Verardo, Roberto; Vijayan, Dipti; Wells, Christine A.; Winteringham, Louise N.; Wolvetang, Ernst; Yamaguchi, Yoko; Yamamoto, Masayuki; Yanagi-Mizuochi, Chiyo; Yoneda, Misako; Yonekura, Yohei; Zhang, Peter G.; Zucchelli, Silvia; Abugessaisa, Imad; Arner, Erik; Harshbarger, Jayson; Kondo, Atsushi; Lassmann, Timo; Lizio, Marina; Sahin, Serkan; Sengstag, Thierry; Severin, Jessica; Shimoji, Hisashi; Suzuki, Masanori; Suzuki, Harukazu; Kawai, Jun; Kondo, Naoto; Itoh, Masayoshi; Daub, Carsten O.; Kasukawa, Takeya; Kawaji, Hideya; Carninci, Piero; Forrest, Alistair R.R.; Hayashizaki, Yoshihide

    2017-01-01

    In the FANTOM5 project, transcription initiation events across the human and mouse genomes were mapped at a single base-pair resolution and their frequencies were monitored by CAGE (Cap Analysis of Gene Expression) coupled with single-molecule sequencing. Approximately three thousand samples, consisting of a variety of primary cells, tissues, cell lines, and time-series samples during cell activation and development, were subjected to a uniform pipeline of CAGE data production. The analysis pipeline started by measuring RNA extracts to assess their quality, and continued with CAGE library production using a robotic or a manual workflow, single-molecule sequencing, and computational processing to generate frequencies of transcription initiation. The resulting data represent the consequences of transcriptional regulation in each analyzed state of mammalian cells. Non-overlapping peaks over the CAGE profiles, approximately 200,000 and 150,000 peaks for the human and mouse genomes respectively, were identified and annotated to provide the precise locations of known promoters as well as novel ones, and to quantify their activities. PMID:28850106

  17. SSHscreen and SSHdb, generic software for microarray based gene discovery: application to the stress response in cowpea

    PubMed Central

    2010-01-01

    Background Suppression subtractive hybridization is a popular technique for gene discovery from non-model organisms without an annotated genome sequence, such as cowpea (Vigna unguiculata (L.) Walp). We aimed to use this method to enrich for genes expressed during drought stress in a drought-tolerant cowpea line. However, current methods were inefficient for screening libraries and managing the sequence data, and thus there was a need to develop software tools to facilitate the process. Results Forward and reverse cDNA libraries enriched for cowpea drought response genes were screened on microarrays, and the R software package SSHscreen 2.0.1 was developed (i) to normalize the data effectively using spike-in control spot normalization, and (ii) to select clones for sequencing based on the calculation of enrichment ratios with associated statistics. Enrichment ratio 3 values for each clone showed that 62% of the forward library and 34% of the reverse library clones were significantly differentially expressed under drought stress (adjusted p value < 0.05). Enrichment ratio 2 calculations showed that > 88% of the clones in both libraries were derived from rare transcripts in the original tester samples, thus supporting the notion that suppression subtractive hybridization enriches for rare transcripts. A set of 118 clones was chosen for sequencing, and drought-induced cowpea genes were identified, the most interesting of which encode a late embryogenesis abundant Lea5 protein, a glutathione S-transferase, a thaumatin, a universal stress protein, and a wound-induced protein. A lipid transfer protein and several components of photosynthesis were down-regulated by the drought stress. Reverse transcriptase quantitative PCR confirmed the enrichment ratio values for the selected cowpea genes. SSHdb, a web-accessible database, was developed to manage the clone sequences and combine the SSHscreen data with sequence annotations derived from BLAST and Blast2GO. The self-BLAST function within SSHdb grouped redundant clones together and illustrated that the SSHscreen plots are a useful tool for choosing anonymous clones for sequencing, since redundant clones cluster together on the enrichment ratio plots. Conclusions We developed the SSHscreen-SSHdb software pipeline, which greatly facilitates gene discovery using suppression subtractive hybridization by improving the selection of clones for sequencing after screening the library on a small number of microarrays. Annotation of the sequence information and collaboration were further enhanced through the web-based SSHdb database, and we illustrated this through the identification of drought-responsive genes from cowpea, which can now be investigated in gene function studies. SSH is a popular and powerful gene discovery tool, and therefore this pipeline will have application for gene discovery in any biological system, particularly non-model organisms. SSHscreen 2.0.1 and a link to SSHdb are available from http://microarray.up.ac.za/SSHscreen. PMID:20359330
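The clone-selection statistics described above can be sketched in a few lines. This is an illustrative reconstruction, not SSHscreen's actual R code: the replicate layout, the per-clone t-test on log-ratios, and the Benjamini-Hochberg adjustment are assumptions about the kind of statistics involved.

```python
import numpy as np
from scipy import stats

def benjamini_hochberg(p):
    """Adjust p-values for multiple testing (BH step-up procedure)."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    order = np.argsort(p)
    adj = np.empty(n)
    running_min = 1.0
    # walk from the largest p-value down, taking the running minimum
    for rank_from_end, idx in enumerate(order[::-1]):
        rank = n - rank_from_end
        running_min = min(running_min, p[idx] * n / rank)
        adj[idx] = running_min
    return adj

def enrichment_ratios(fwd, rev):
    """Per-clone log2 enrichment ratio with BH-adjusted p-values.

    fwd, rev: arrays of shape (n_clones, n_replicates) holding
    normalized spot intensities from the forward/reverse hybridizations.
    """
    ratio = np.log2(fwd.mean(axis=1) / rev.mean(axis=1))
    logr = np.log2(fwd) - np.log2(rev)
    # test whether each clone's log-ratio differs from zero across replicates
    _, p = stats.ttest_1samp(logr, 0.0, axis=1)
    return ratio, benjamini_hochberg(p)
```

Clones with a large |ratio| and a small adjusted p-value would then be the ones picked for sequencing.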

  18. A cloud-compatible bioinformatics pipeline for ultrarapid pathogen identification from next-generation sequencing of clinical samples.

    PubMed

    Naccache, Samia N; Federman, Scot; Veeraraghavan, Narayanan; Zaharia, Matei; Lee, Deanna; Samayoa, Erik; Bouquet, Jerome; Greninger, Alexander L; Luk, Ka-Cheung; Enge, Barryett; Wadford, Debra A; Messenger, Sharon L; Genrich, Gillian L; Pellegrino, Kristen; Grard, Gilda; Leroy, Eric; Schneider, Bradley S; Fair, Joseph N; Martínez, Miguel A; Isa, Pavel; Crump, John A; DeRisi, Joseph L; Sittler, Taylor; Hackett, John; Miller, Steve; Chiu, Charles Y

    2014-07-01

    Unbiased next-generation sequencing (NGS) approaches enable comprehensive pathogen detection in the clinical microbiology laboratory and have numerous applications for public health surveillance, outbreak investigation, and the diagnosis of infectious diseases. However, practical deployment of the technology is hindered by the bioinformatics challenge of analyzing results accurately and in a clinically relevant timeframe. Here we describe SURPI ("sequence-based ultrarapid pathogen identification"), a computational pipeline for pathogen identification from complex metagenomic NGS data generated from clinical samples, and demonstrate use of the pipeline in the analysis of 237 clinical samples comprising more than 1.1 billion sequences. Deployable on both cloud-based and standalone servers, SURPI leverages two state-of-the-art aligners for accelerated analyses, SNAP and RAPSearch, which are as accurate as existing bioinformatics tools but orders of magnitude faster. In fast mode, SURPI detects viruses and bacteria by scanning data sets of 7-500 million reads in 11 min to 5 h, while in comprehensive mode, all known microorganisms are identified, followed by de novo assembly and protein homology searches for divergent viruses in 50 min to 16 h. SURPI has also directly contributed to real-time microbial diagnosis in acutely ill patients, underscoring its potential key role in the development of unbiased NGS-based clinical assays for infectious diseases that demand rapid turnaround times. © 2014 Naccache et al.; Published by Cold Spring Harbor Laboratory Press.

  19. Integrated shotgun sequencing and bioinformatics pipeline allows ultra-fast mitogenome recovery and confirms substantial gene rearrangements in Australian freshwater crayfishes

    PubMed Central

    2014-01-01

    Background Although it is possible to recover the complete mitogenome directly from shotgun sequencing data, currently reported methods and pipelines are still relatively time consuming and costly. Using a sample of the Australian freshwater crayfish Engaeus lengana, we demonstrate that it is possible to achieve a three-day turnaround time (four hours hands-on time) from tissue sample to NCBI-ready submission file through the integration of the MiSeq sequencing platform, the Nextera sample preparation protocol, the MITObim assembly algorithm and the MITOS annotation pipeline. Results The complete mitochondrial genome of the parastacid freshwater crayfish, Engaeus lengana, was recovered by modest shotgun sequencing (1.2 gigabases) using the Illumina MiSeq benchtop sequencing platform. Genome assembly using the MITObim mitogenome assembler recovered the mitochondrial genome as a single contig with 97-fold mean coverage (min. = 17; max. = 138). The mitogenome consists of 15,934 base pairs and contains the typical 37 mitochondrial genes and a non-coding AT-rich region. The genome arrangement is similar to the only other published parastacid mitogenome, from the Australian genus Cherax. Conclusions We infer that the gene order arrangement found in Cherax destructor is common to Australian crayfish and may be a derived feature of the southern hemisphere family Parastacidae. Further, we report, to our knowledge, the simplest and fastest protocol for the recovery and assembly of complete mitochondrial genomes using the MiSeq benchtop sequencer. PMID:24484414

  20. Ultra-deep sequencing enables high-fidelity recovery of biodiversity for bulk arthropod samples without PCR amplification

    PubMed Central

    2013-01-01

    Background Next-generation sequencing (NGS) technologies combined with a classic DNA barcoding approach have enabled fast and credible measurement of the biodiversity of mixed environmental samples. However, the PCR amplification involved in nearly all existing NGS protocols inevitably introduces taxonomic biases. In the present study, we developed new Illumina pipelines without PCR amplification to analyze terrestrial arthropod communities. Results Mitochondrial enrichment directly followed by Illumina shotgun sequencing, at an ultra-high sequence volume, enabled the recovery of Cytochrome c Oxidase subunit 1 (COI) barcode sequences, which allowed for the estimation of species composition at high fidelity for a terrestrial insect community. With 15.5 Gbp of Illumina data, approximately 97% and 92% of the 37 input Operational Taxonomic Units (OTUs) were detected with and without the reference barcode library, respectively, while only 1 novel OTU was found in the latter case. Additionally, a relatively strong correlation between sequencing volume and total biomass was observed for species in the bulk sample, suggesting a potential solution for revealing relative abundance. Conclusions The ability of the new Illumina PCR-free pipeline for DNA metabarcoding to detect small arthropod specimens and its tendency to avoid most, if not all, false positives suggest its great potential in biodiversity-related surveillance, such as biomonitoring programs. However, further improvement of the mitochondrial enrichment is likely needed before the new pipeline can be applied to arthropod communities of higher diversity. PMID:23587339

  1. Recovery and purification process development for monoclonal antibody production

    PubMed Central

    Ma, Junfen; Winter, Charles; Bayer, Robert

    2010-01-01

    Hundreds of therapeutic monoclonal antibodies (mAbs) are currently in development, and many companies have multiple antibodies in their pipelines. The current methodology used in recovery processes for these molecules is reviewed here. Basic unit operations such as harvest, Protein A affinity chromatography and additional polishing steps are surveyed. Alternative processes such as flocculation, precipitation and membrane chromatography are discussed. We also cover platform approaches to purification methods development, use of high throughput screening methods, and offer a view on future developments in purification methodology as applied to mAbs. PMID:20647768

  2. MetaStorm: A Public Resource for Customizable Metagenomics Annotation

    PubMed Central

    Arango-Argoty, Gustavo; Singh, Gargi; Heath, Lenwood S.; Pruden, Amy; Xiao, Weidong; Zhang, Liqing

    2016-01-01

    Metagenomics is a trending research area that calls for the analysis of large quantities of data generated by next generation DNA sequencing technologies. The need to store, retrieve, analyze, share, and visualize such data challenges current online computational systems. Interpretation and annotation of specific information are especially challenging for metagenomic data sets derived from environmental samples, because current annotation systems only offer broad classification of microbial diversity and function. Moreover, existing resources are not configured to readily address common questions relevant to environmental systems. Here we developed a new online user-friendly metagenomic analysis server called MetaStorm (http://bench.cs.vt.edu/MetaStorm/), which facilitates customization of computational analysis for metagenomic data sets. Users can upload their own reference databases to tailor the metagenomics annotation to focus on various taxonomic and functional gene markers of interest. MetaStorm offers two major analysis pipelines: an assembly-based annotation pipeline and the standard read annotation pipeline used by existing web servers. These pipelines can be selected individually or together. Overall, MetaStorm provides enhanced interactive visualization to allow researchers to explore and manipulate taxonomy and functional annotation at various levels of resolution. PMID:27632579

  3. MetaStorm: A Public Resource for Customizable Metagenomics Annotation.

    PubMed

    Arango-Argoty, Gustavo; Singh, Gargi; Heath, Lenwood S; Pruden, Amy; Xiao, Weidong; Zhang, Liqing

    2016-01-01

    Metagenomics is a trending research area that calls for the analysis of large quantities of data generated by next generation DNA sequencing technologies. The need to store, retrieve, analyze, share, and visualize such data challenges current online computational systems. Interpretation and annotation of specific information are especially challenging for metagenomic data sets derived from environmental samples, because current annotation systems only offer broad classification of microbial diversity and function. Moreover, existing resources are not configured to readily address common questions relevant to environmental systems. Here we developed a new online user-friendly metagenomic analysis server called MetaStorm (http://bench.cs.vt.edu/MetaStorm/), which facilitates customization of computational analysis for metagenomic data sets. Users can upload their own reference databases to tailor the metagenomics annotation to focus on various taxonomic and functional gene markers of interest. MetaStorm offers two major analysis pipelines: an assembly-based annotation pipeline and the standard read annotation pipeline used by existing web servers. These pipelines can be selected individually or together. Overall, MetaStorm provides enhanced interactive visualization to allow researchers to explore and manipulate taxonomy and functional annotation at various levels of resolution.

  4. A Modular Pipelined Processor for High Resolution Gamma-Ray Spectroscopy

    NASA Astrophysics Data System (ADS)

    Veiga, Alejandro; Grunfeld, Christian

    2016-02-01

    The design of a digital signal processor for gamma-ray applications is presented in which a single ADC input can simultaneously provide temporal and energy characterization of gamma radiation for a wide range of applications. Applying pipelining techniques, the processor is able to manage and synchronize very large volumes of streamed real-time data. Its modular user interface provides a flexible environment for experimental design. The processor can fit in a medium-sized FPGA device operating at ADC sampling frequency, providing an efficient solution for multi-channel applications. Two experiments are presented in order to characterize its temporal and energy resolution.

  5. [Gas pipeline leak detection based on tunable diode laser absorption spectroscopy].

    PubMed

    Zhang, Qi-Xing; Wang, Jin-Jun; Liu, Bing-Hai; Cai, Ting-Li; Qiao, Li-Feng; Zhang, Yong-Ming

    2009-08-01

    The principles of tunable diode laser absorption spectroscopy and the harmonic detection technique are introduced. An experimental device was developed for point sampling through a small multi-reflection gas cell. A specific absorption line near 1653.7 nm was targeted for methane measurement, using a distributed feedback diode laser as the tunable light source. The linearity between the intensity of the second harmonic signal and the methane concentration was determined, and the background concentration of methane in air was measured. The results show that gas sensors using tunable diode lasers provide a highly sensitive and highly selective method for city gas pipeline leak detection.
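The reported linearity between second-harmonic (2f) amplitude and methane concentration is what makes such a sensor quantitative: a straight-line calibration can be fitted once and then inverted for field readings. The sketch below illustrates this with invented calibration points; none of the numbers come from the paper.

```python
import numpy as np

# Hypothetical calibration data: 2f signal amplitude (arbitrary units)
# measured for known methane concentrations (ppm) in the gas cell.
conc = np.array([0.0, 500.0, 1000.0, 2000.0, 4000.0])   # ppm CH4
amp2f = np.array([0.02, 1.05, 2.01, 4.03, 8.05])        # a.u.

# Fit the linear relation amp2f = a * conc + b.
a, b = np.polyfit(conc, amp2f, 1)

def concentration_from_2f(amplitude):
    """Invert the calibration to estimate concentration from a 2f reading."""
    return (amplitude - b) / a
```

A leak alarm would then compare `concentration_from_2f(reading)` against the measured atmospheric background level of methane.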

  6. SIMBAD: a sequence-independent molecular-replacement pipeline

    DOE PAGES

    Simpkin, Adam J.; Simkovic, Felix; Thomas, Jens M. H.; ...

    2018-06-08

    The conventional approach to finding structurally similar search models for use in molecular replacement (MR) is to use the sequence of the target to search against those of a set of known structures. Sequence similarity often correlates with structure similarity. Given sufficient similarity, a known structure correctly positioned in the target cell by the MR process can provide an approximation to the unknown phases of the target. An alternative approach to identifying homologous structures suitable for MR is to exploit the measured data directly, comparing the lattice parameters or the experimentally derived structure-factor amplitudes with those of known structures. Here, SIMBAD, a new sequence-independent MR pipeline which implements these approaches, is presented. SIMBAD can identify cases of contaminant crystallization and other mishaps such as mistaken identity (swapped crystallization trays), as well as solving unsequenced targets and providing a brute-force approach where sequence-dependent search-model identification may be nontrivial, for example because of conformational diversity among identifiable homologues. The program implements a three-step pipeline to efficiently identify a suitable search model in a database of known structures. The first step performs a lattice-parameter search against the entire Protein Data Bank (PDB), rapidly determining whether or not a homologue exists in the same crystal form. The second step is designed to screen the target data for the presence of a crystallized contaminant, a not uncommon occurrence in macromolecular crystallography. Solving structures with MR in such cases can remain problematic for many years, since the search models, which are assumed to be similar to the structure of interest, are not necessarily related to the structures that have actually crystallized. To cater for this eventuality, SIMBAD rapidly screens the data against a database of known contaminant structures. Where the first two steps fail to yield a solution, a final step in SIMBAD can be invoked to perform a brute-force search of a nonredundant PDB database provided by the MoRDa MR software. Through early-access usage of SIMBAD, this approach has solved novel cases that have otherwise proved difficult to solve.
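The first step of the pipeline, matching the target's unit cell against cells in the PDB, can be sketched very simply. The tolerances and the raw-parameter comparison below are simplifying assumptions for illustration; the real pipeline works with reduced cells and the full PDB.

```python
# Hypothetical sketch of a lattice-parameter screen in the spirit of
# SIMBAD's first step: flag database entries whose unit cells fall
# within a tolerance of the target's cell.  Cells are given as
# (a, b, c, alpha, beta, gamma): lengths in angstroms, angles in degrees.

def cell_matches(target, candidate, length_tol=0.05, angle_tol=1.5):
    """True if candidate's cell is within tolerance of the target's
    (relative tolerance on lengths, absolute tolerance on angles)."""
    for t, c in zip(target[:3], candidate[:3]):
        if abs(t - c) > length_tol * t:
            return False
    for t, c in zip(target[3:], candidate[3:]):
        if abs(t - c) > angle_tol:
            return False
    return True

def lattice_search(target, database):
    """Return ids of database entries whose cells match the target's."""
    return [pdb_id for pdb_id, cell in database.items()
            if cell_matches(target, cell)]
```

A hit from this screen suggests a structure that crystallized in the same form as the target, and hence a promising MR search model.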

  7. SIMBAD: a sequence-independent molecular-replacement pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpkin, Adam J.; Simkovic, Felix; Thomas, Jens M. H.

    The conventional approach to finding structurally similar search models for use in molecular replacement (MR) is to use the sequence of the target to search against those of a set of known structures. Sequence similarity often correlates with structure similarity. Given sufficient similarity, a known structure correctly positioned in the target cell by the MR process can provide an approximation to the unknown phases of the target. An alternative approach to identifying homologous structures suitable for MR is to exploit the measured data directly, comparing the lattice parameters or the experimentally derived structure-factor amplitudes with those of known structures. Here, SIMBAD, a new sequence-independent MR pipeline which implements these approaches, is presented. SIMBAD can identify cases of contaminant crystallization and other mishaps such as mistaken identity (swapped crystallization trays), as well as solving unsequenced targets and providing a brute-force approach where sequence-dependent search-model identification may be nontrivial, for example because of conformational diversity among identifiable homologues. The program implements a three-step pipeline to efficiently identify a suitable search model in a database of known structures. The first step performs a lattice-parameter search against the entire Protein Data Bank (PDB), rapidly determining whether or not a homologue exists in the same crystal form. The second step is designed to screen the target data for the presence of a crystallized contaminant, a not uncommon occurrence in macromolecular crystallography. Solving structures with MR in such cases can remain problematic for many years, since the search models, which are assumed to be similar to the structure of interest, are not necessarily related to the structures that have actually crystallized. To cater for this eventuality, SIMBAD rapidly screens the data against a database of known contaminant structures. Where the first two steps fail to yield a solution, a final step in SIMBAD can be invoked to perform a brute-force search of a nonredundant PDB database provided by the MoRDa MR software. Through early-access usage of SIMBAD, this approach has solved novel cases that have otherwise proved difficult to solve.

  8. Simultaneous skull-stripping and lateral ventricle segmentation via fast multi-atlas likelihood fusion

    NASA Astrophysics Data System (ADS)

    Tang, Xiaoying; Kutten, Kwame; Ceritoglu, Can; Mori, Susumu; Miller, Michael I.

    2015-03-01

    In this paper, we propose and validate a fully automated pipeline for simultaneous skull-stripping and lateral ventricle segmentation using T1-weighted images. The pipeline is built upon a segmentation algorithm entitled fast multi-atlas likelihood-fusion (MALF) which utilizes multiple T1 atlases that have been pre-segmented into six whole-brain labels - the gray matter, the white matter, the cerebrospinal fluid, the lateral ventricles, the skull, and the background of the entire image. This algorithm, MALF, was designed for estimating brain anatomical structures in the framework of coordinate changes via large diffeomorphisms. In the proposed pipeline, we use a variant of MALF to estimate those six whole-brain labels in the test T1-weighted image. The three tissue labels (gray matter, white matter, and cerebrospinal fluid) and the lateral ventricles are then grouped together to form a binary brain mask to which we apply morphological smoothing so as to create the final mask for brain extraction. For computational purposes, all input images to MALF are down-sampled by a factor of two. In addition, small deformations are used for the changes of coordinates. This substantially reduces the computational complexity, hence we use the term "fast MALF". The skull-stripping performance is qualitatively evaluated on a total of 486 brain scans from a longitudinal study on Alzheimer dementia. Quantitative error analysis is carried out on 36 scans for evaluating the accuracy of the pipeline in segmenting the lateral ventricle. The volumes of the automated lateral ventricle segmentations, obtained from the proposed pipeline, are compared across three different clinical groups. The ventricle volumes from our pipeline are found to be sensitive to the diagnosis.

  9. Cathodic Protection Measurement Through Inline Inspection Technology Uses and Observations

    NASA Astrophysics Data System (ADS)

    Ferguson, Briana Ley

    This research supports the evaluation of an impressed current cathodic protection (CP) system of a buried coated steel pipeline through alternative technology and methods, via an inline inspection device (ILI, CP ILI tool, or tool), in order to prevent and mitigate external corrosion. This thesis investigates the ability to measure the current density of a pipeline's CP system from inside the pipeline rather than manually from outside, and then to convert that CP ILI tool reading into a pipe-to-soil potential as required by regulations and standards. This was demonstrated through a mathematical model that applies Ohm's law, circuit concepts, and attenuation principles to match the results of the ILI sample data by varying the model's parameters (i.e., values for over-potential and coating resistivity). Such research had not been conducted previously to determine whether the protected potential range can be achieved with respect to the current density predicted from the CP ILI device. Kirchhoff's method was explored, but certain principles could not be used in the model because manual measurements were required. The research was therefore based on circuit concepts, which indirectly reflect the underlying electrochemical processes. Through Ohm's law, the results show that a constant current density is possible in the protected potential range, which indicates polarization of the pipeline and, in turn, the development of calcareous deposits. Calcareous deposits are desirable in industry since they increase the resistance of the pipeline coating and lower the current, thus slowing the oxygen diffusion process. This research conveys that an alternative method for CP evaluation from inside the pipeline is possible, in which the pipe-to-soil potential can be estimated (as required by regulations) from the ILI tool's current density measurement.
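The Ohm's-law conversion at the heart of this kind of model can be sketched as below. The function names, the native potential, the effective coating resistance, and the linear polarization form are illustrative assumptions, not values or equations taken from the thesis; only the -850 mV protection criterion is a standard industry reference point.

```python
# Sketch: convert an ILI current-density reading into an estimated
# pipe-to-soil potential with a simple linear Ohm's-law model of the
# kind whose parameters (over-potential, coating resistivity) the
# thesis varies to match field data.  All numbers are illustrative.

def pipe_to_soil_potential(current_density, native_potential=-0.55,
                           coating_resistance=2.0):
    """current_density in A/m^2, coating_resistance in ohm*m^2,
    potentials in volts vs. a Cu/CuSO4 reference electrode
    (more negative = more cathodically polarized)."""
    # Cathodic current drives the potential more negative than native.
    return native_potential - current_density * coating_resistance

def is_protected(potential, criterion=-0.85):
    """Standard -850 mV (vs. Cu/CuSO4) criterion for buried steel."""
    return potential <= criterion
```

With these assumed parameters, a reading of 0.2 A/m² maps to -0.95 V, inside the protected range, while zero current leaves the pipe at its unprotected native potential.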

  10. A capillary electrophoresis coupled to mass spectrometry pipeline for long term comparable assessment of the urinary metabolome.

    PubMed

    Boizard, Franck; Brunchault, Valérie; Moulos, Panagiotis; Breuil, Benjamin; Klein, Julie; Lounis, Nadia; Caubet, Cécile; Tellier, Stéphanie; Bascands, Jean-Loup; Decramer, Stéphane; Schanstra, Joost P; Buffin-Meyer, Bénédicte

    2016-10-03

    Although capillary electrophoresis coupled to mass spectrometry (CE-MS) has potential application in the field of metabolite profiling, very few studies have actually used CE-MS to identify clinically useful body fluid metabolites. Here we present an optimized CE-MS setup and analysis pipeline to reproducibly explore the metabolite content of urine. We show that the use of a beveled-tip capillary improves the sensitivity of detection over a flat tip. We also present a novel normalization procedure based on the use of endogenous stable urinary metabolites identified in the combined metabolome of 75 different urine samples from healthy and diseased individuals. This method allows a highly reproducible comparison of the same sample analyzed nearly 130 times over a period of 4 years. To demonstrate the use of this pipeline in clinical research, we compared the urinary metabolome of 34 newborns with ureteropelvic junction (UPJ) obstruction and 15 healthy newborns. We identified 32 features with differential urinary abundance. Combining the 32 compounds in an SVM classifier predicted UPJ obstruction with 76% sensitivity and 86% specificity in a separate validation cohort of 24 individuals. Thus, this study demonstrates the feasibility of using CE-MS as a tool for the identification of clinically relevant urinary metabolites.
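The normalization idea described above, rescaling each run against endogenous stable metabolites so that runs acquired years apart stay comparable, can be sketched as follows. Selecting the stable features by lowest coefficient of variation across runs is an assumption made for illustration, not the paper's exact selection procedure.

```python
import numpy as np

def normalize_runs(X, n_stable=10):
    """Rescale each CE-MS run against a panel of stable features.

    X: (n_runs, n_features) matrix of feature intensities.
    Returns a scaled copy in which the assumed-stable features have
    near-constant abundance across runs.
    """
    X = np.asarray(X, dtype=float)
    # pick the least variable features across runs as the stable panel
    cv = X.std(axis=0) / X.mean(axis=0)
    stable = np.argsort(cv)[:n_stable]
    # per-run scale factor from the stable panel, kept mean-centered
    factors = np.median(X[:, stable], axis=1)
    factors = factors / factors.mean()
    return X / factors[:, None]
```

After this step, abundance differences between samples reflect biology (or disease status) rather than drift in injection volume or instrument response.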

  11. PANGEA: pipeline for analysis of next generation amplicons

    PubMed Central

    Giongo, Adriana; Crabb, David B; Davis-Richardson, Austin G; Chauliac, Diane; Mobberley, Jennifer M; Gano, Kelsey A; Mukherjee, Nabanita; Casella, George; Roesch, Luiz FW; Walts, Brandon; Riva, Alberto; King, Gary; Triplett, Eric W

    2010-01-01

    High-throughput DNA sequencing can identify organisms and describe population structures in many environmental and clinical samples. Current technologies generate millions of reads in a single run, requiring extensive computational strategies to organize, analyze and interpret those sequences. A series of bioinformatics tools for high-throughput sequencing analysis, including preprocessing, clustering, database matching and classification, have been compiled into a pipeline called PANGEA. The PANGEA pipeline was written in Perl and can be run on Mac OSX, Windows or Linux. With PANGEA, sequences obtained directly from the sequencer can be processed quickly to provide the files needed for sequence identification by BLAST and for comparison of microbial communities. Two different sets of bacterial 16S rRNA sequences were used to show the efficiency of this workflow. The first set of 16S rRNA sequences is derived from various soils from Hawaii Volcanoes National Park. The second set is derived from stool samples collected from diabetes-resistant and diabetes-prone rats. The workflow described here allows the investigator to quickly assess libraries of sequences on personal computers with customized databases. PANGEA is provided for users as individual scripts for each step in the process or as a single script where all processes, except the χ2 step, are joined into one program called the ‘backbone’. PMID:20182525

  12. PANGEA: pipeline for analysis of next generation amplicons.

    PubMed

    Giongo, Adriana; Crabb, David B; Davis-Richardson, Austin G; Chauliac, Diane; Mobberley, Jennifer M; Gano, Kelsey A; Mukherjee, Nabanita; Casella, George; Roesch, Luiz F W; Walts, Brandon; Riva, Alberto; King, Gary; Triplett, Eric W

    2010-07-01

    High-throughput DNA sequencing can identify organisms and describe population structures in many environmental and clinical samples. Current technologies generate millions of reads in a single run, requiring extensive computational strategies to organize, analyze and interpret those sequences. A series of bioinformatics tools for high-throughput sequencing analysis, including pre-processing, clustering, database matching and classification, have been compiled into a pipeline called PANGEA. The PANGEA pipeline was written in Perl and can be run on Mac OSX, Windows or Linux. With PANGEA, sequences obtained directly from the sequencer can be processed quickly to provide the files needed for sequence identification by BLAST and for comparison of microbial communities. Two different sets of bacterial 16S rRNA sequences were used to show the efficiency of this workflow. The first set of 16S rRNA sequences is derived from various soils from Hawaii Volcanoes National Park. The second set is derived from stool samples collected from diabetes-resistant and diabetes-prone rats. The workflow described here allows the investigator to quickly assess libraries of sequences on personal computers with customized databases. PANGEA is provided for users as individual scripts for each step in the process or as a single script where all processes, except the chi(2) step, are joined into one program called the 'backbone'.

  13. System for measuring radioactivity of labelled biopolymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gross, V.

    1980-07-08

    A system is described for measuring radioactivity of labelled biopolymers, comprising: a set of containers adapted for receiving aqueous solutions of biological samples containing biopolymers which are subsequently precipitated in said containers on particles of diatomite in the presence of a coprecipitator, then filtered, dissolved, and mixed with a scintillator; radioactivity measuring means including a detection chamber to which is fed the mixture produced in said set of containers; an electric drive for moving said set of containers in a stepwise manner; means for proportional feeding of said coprecipitator and a suspension of diatomite in an acid solution to said containers which contain the biological sample for forming an acid precipitation of biopolymers; means for the removal of precipitated samples from said containers; precipitated biopolymer filtering means for successively filtering the precipitate, suspending the precipitate, dissolving the biopolymers mixed with said scintillator for feeding of the mixture to said detection chamber; a system of pipelines interconnecting said above-recited means; and said means for measuring radioactivity of labelled biopolymers including, a measuring cell arranged in a detection chamber and communicating with said means for filtering precipitated biopolymers through one pipeline of said system of pipelines; a program unit electrically connected to said electric drive, said means for acid precipitation of biopolymers, said means for the removal of precipitated samples from said containers, said filtering means, and said radioactivity measuring device; said program unit adapted to periodically switch on and off the above-recited means and check the sequence of the radioactivity measuring operations; and a control unit for controlling the initiation of the system and for selecting programs.

  14. Comprehensive processing of high-throughput small RNA sequencing data including quality checking, normalization, and differential expression analysis using the UEA sRNA Workbench

    PubMed Central

    Beckers, Matthew; Mohorianu, Irina; Stocks, Matthew; Applegate, Christopher; Dalmay, Tamas; Moulton, Vincent

    2017-01-01

    Recently, high-throughput sequencing (HTS) has revealed compelling details about the small RNA (sRNA) population in eukaryotes. These 20 to 25 nt noncoding RNAs can influence gene expression by acting as guides for the sequence-specific regulatory mechanism known as RNA silencing. The increase in sequencing depth and number of samples per project enables a better understanding of the role sRNAs play by facilitating the study of expression patterns. However, the intricacy of the biological hypotheses coupled with a lack of appropriate tools often leads to inadequate mining of the available data and thus, an incomplete description of the biological mechanisms involved. To enable a comprehensive study of differential expression in sRNA data sets, we present a new interactive pipeline that guides researchers through the various stages of data preprocessing and analysis. This includes various tools, some of which we specifically developed for sRNA analysis, for quality checking and normalization of sRNA samples as well as tools for the detection of differentially expressed sRNAs and identification of the resulting expression patterns. The pipeline is available within the UEA sRNA Workbench, a user-friendly software package for the processing of sRNA data sets. We demonstrate the use of the pipeline on a H. sapiens data set; additional examples on a B. terrestris data set and on an A. thaliana data set are described in the Supplemental Information. A comparison with existing approaches is also included, which exemplifies some of the issues that need to be addressed for sRNA analysis and how the new pipeline may be used to do this. PMID:28289155
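The differential-expression step in such a pipeline can be illustrated with an offset fold change, a common choice for damping ratios of low-abundance sRNAs; the offset and threshold values below are illustrative assumptions, not the Workbench's defaults.

```python
import numpy as np

def offset_fold_change(a, b, offset=20.0):
    """Log2 ratio of normalized sRNA abundances with an additive offset.
    The offset damps spurious large ratios for low-count sRNAs."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.log2((a + offset) / (b + offset))

def differentially_expressed(a, b, threshold=1.0):
    """Flag sRNAs whose |offset fold change| meets the threshold."""
    return np.abs(offset_fold_change(a, b)) >= threshold
```

For example, an sRNA at 180 vs. 80 normalized reads clears a 1.0 log2 threshold, while 25 vs. 20 reads does not, because the offset keeps small absolute differences from inflating the ratio.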

  15. 77 FR 70543 - Pipeline Safety: Meeting of the Gas Pipeline Advisory Committee and the Liquid Pipeline Advisory...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-26

    .... PHMSA-2009-0203] Pipeline Safety: Meeting of the Gas Pipeline Advisory Committee and the Liquid Pipeline..., and safety policies for natural gas pipelines and for hazardous liquid pipelines. Both committees were...: Notice of advisory committee meeting. SUMMARY: This notice announces a public meeting of the Gas Pipeline...

  16. From Concept to Commerce: Developing a Successful Fungal Endophyte Inoculant for Agricultural Crops

    PubMed Central

    Doohan, Fiona M.; Hodkinson, Trevor R.

    2018-01-01

    The development of endophyte inoculants for agricultural crops has been bedevilled by the twin problems of a lack of reliability and consistency, with a consequent lack of belief among end users in the efficacy of such treatments. We have developed a successful research pipeline for the production of a reliable, consistent and environmentally targeted fungal endophyte seed-delivered inoculant for barley cultivars. Our approach was developed de novo from an initial concept to source candidate endophyte inoculants from a wild relative of barley, Hordeum murinum (wall barley). A careful screening and selection procedure and extensive controlled environment testing of fungal endophyte strains, followed by multi-year field trials, have resulted in the validation of an endophyte consortium suitable for barley crops grown on relatively dry sites. Our approach can be adapted for any crop or environment, provided that the set of first principles we have developed is followed. Here, we report how we developed the successful pipeline for the production of an economically viable fungal endophyte inoculant for barley cultivars. PMID:29439471

  17. Rapid Threat Organism Recognition Pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Kelly P.; Solberg, Owen D.; Schoeniger, Joseph S.

    2013-05-07

    The RAPTOR computational pipeline identifies microbial nucleic acid sequences present in sequence data from clinical samples. It takes as input raw short-read genomic sequence data (in particular, the type generated by the Illumina sequencing platforms) and outputs taxonomic evaluation of detected microbes in various human-readable formats. This software was designed to assist in the diagnosis or characterization of infectious disease, by detecting pathogen sequences in nucleic acid sequence data from clinical samples. It has also been applied in the detection of algal pathogens, when algal biofuel ponds became unproductive. RAPTOR first trims and filters genomic sequence reads based on quality and related considerations, then performs a quick alignment to the human (or other host) genome to filter out host sequences, then performs a deeper search against microbial genomes. Alignment to a protein sequence database is optional. Alignment results are summarized and placed in a taxonomic framework using the Lowest Common Ancestor algorithm.
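The final placement step named above, the Lowest Common Ancestor algorithm, can be illustrated with a minimal sketch: assuming each database hit for a read carries a root-to-leaf taxonomic lineage, the read is assigned to the deepest taxon shared by all of its hits. The lineage data model below is hypothetical, not RAPTOR's actual internals.

```python
# Minimal LCA sketch: lineages are lists ordered root -> leaf, and the LCA
# is their longest common prefix. Illustrative only, not RAPTOR's code.

def lowest_common_ancestor(lineages):
    """Return the longest common prefix of root-to-leaf lineages."""
    if not lineages:
        return []
    lca = list(lineages[0])
    for lineage in lineages[1:]:
        # Truncate the running prefix at the first disagreement.
        i = 0
        while i < min(len(lca), len(lineage)) and lca[i] == lineage[i]:
            i += 1
        lca = lca[:i]
    return lca

hits = [
    ["Bacteria", "Proteobacteria", "Burkholderia", "B. pseudomallei"],
    ["Bacteria", "Proteobacteria", "Burkholderia", "B. mallei"],
]
# Two congeneric hits resolve to the genus, the deepest shared taxon.
print(lowest_common_ancestor(hits))
```

A read with conflicting hits thus moves up the taxonomy until the assignment is unambiguous, which is why deep, specific placements require consistent hits.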

  18. 78 FR 70623 - Pipeline Safety: Meeting of the Gas Pipeline Advisory Committee and the Liquid Pipeline Advisory...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-26

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2009-0203] Pipeline Safety: Meeting of the Gas Pipeline Advisory Committee and the Liquid Pipeline Advisory Committee AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. [[Page...

  19. Guided genetic screen to identify genes essential in the regeneration of hair cells and other tissues.

    PubMed

    Pei, Wuhong; Xu, Lisha; Huang, Sunny C; Pettie, Kade; Idol, Jennifer; Rissone, Alberto; Jimenez, Erin; Sinclair, Jason W; Slevin, Claire; Varshney, Gaurav K; Jones, MaryPat; Carrington, Blake; Bishop, Kevin; Huang, Haigen; Sood, Raman; Lin, Shuo; Burgess, Shawn M

    2018-01-01

    Regenerative medicine holds great promise for both degenerative diseases and traumatic tissue injury, which represent significant challenges to the health care system. Hearing loss, which affects hundreds of millions of people worldwide, is caused primarily by a permanent loss of the mechanosensory receptors of the inner ear known as hair cells. The failure to regenerate hair cells after loss is limited to mammals; all non-mammalian vertebrates tested were able to completely regenerate these mechanosensory receptors after injury. To understand the mechanism of hair cell regeneration and its association with the regeneration of other tissues, we performed a guided mutagenesis screen using zebrafish lateral line hair cells as a screening platform to identify genes essential for hair cell regeneration, and further investigated how genes essential for hair cell regeneration were involved in the regeneration of other tissues. We created genetic mutations either by retroviral insertion or by CRISPR/Cas9 approaches, and developed a high-throughput screening pipeline for analyzing hair cell development and regeneration. We screened 254 gene mutations and identified 7 genes specifically affecting hair cell regeneration. These hair cell regeneration genes fell into distinct and somewhat surprising functional categories. By examining the regeneration of the caudal fin and liver, we found these hair cell regeneration genes often also affected other types of tissue regeneration. Therefore, our results demonstrate that guided screening is an effective approach to discover regeneration candidates, and that hair cell regeneration is associated with other tissue regeneration.

  20. Screening of ground water samples for volatile organic compounds using a portable gas chromatograph

    USGS Publications Warehouse

    Buchmiller, R.C.

    1989-01-01

    A portable gas chromatograph was used to screen 32 ground water samples for volatile organic compounds. Seven screened samples were positive; four of the seven samples had volatile organic substances identified by second-column confirmation. Four of the seven positive, screened samples also tested positive in laboratory analyses of duplicate samples. No volatile organic compounds were detected in laboratory analyses of samples that headspace screening indicated to be negative. Samples that contained volatile organic compounds, as identified by laboratory analysis, and that contained a volatile organic compound present in a standard of selected compounds were correctly identified by using the portable gas chromatograph. Comparisons of screened-sample data with laboratory data indicate the ability to detect selected volatile organic compounds at concentrations of about 1 microgram per liter in the headspace of water samples by use of a portable gas chromatograph. -Author

  1. Leveraging structure determination with fragment screening for infectious disease drug targets: MECP synthase from Burkholderia pseudomallei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Begley, Darren W.; Hartley, Robert C.; Davies, Douglas R.

    As part of the Seattle Structural Genomics Center for Infectious Disease, we seek to enhance structural genomics with ligand-bound structure data which can serve as a blueprint for structure-based drug design. We have adapted fragment-based screening methods to our structural genomics pipeline to generate multiple ligand-bound structures of high priority drug targets from pathogenic organisms. In this study, we report fragment screening methods and structure determination results for 2C-methyl-D-erythritol-2,4-cyclo-diphosphate (MECP) synthase from Burkholderia pseudomallei, the gram-negative bacterium which causes melioidosis. Screening by nuclear magnetic resonance spectroscopy as well as crystal soaking followed by X-ray diffraction led to the identification of several small molecules which bind this enzyme in a critical metabolic pathway. A series of complex structures obtained with screening hits reveal distinct binding pockets and a range of small molecules which form complexes with the target. Additional soaks with these compounds further demonstrate a subset of fragments to only bind the protein when present in specific combinations. This ensemble of fragment-bound complexes illuminates several characteristics of MECP synthase, including a previously unknown binding surface external to the catalytic active site. These ligand-bound structures now serve to guide medicinal chemists and structural biologists in rational design of novel inhibitors for this enzyme.

  2. Pyrazole and imidazo[1,2-b]pyrazole derivatives as new potential anti-tuberculosis agents.

    PubMed

    Meta, Elda; Brullo, Chiara; Tonelli, Michele; Franzblau, Scott G; Wang, Yuehong; Ma, Rui; Baojie, Wan; Orena, Beatrice Silvia; Pasca, Maria Rosalia; Bruno, Olga

    2018-05-23

    We screened a large library of differently decorated imidazo-pyrazole and pyrazole derivatives as possible new antitubercular agents, and this preliminary screening showed that many compounds are able to totally inhibit Mycobacterium growth (>90%). Among the most active compounds, we selected some new possible hits based on their similarities and, at the same time, their novelty with respect to drugs already in the pipeline. In order to increase potency and obtain more information about the structure-activity relationship (SAR), we designed and synthesized three new series of compounds (2a-e, 3a-e, and 4a-l). The tests performed confirmed that both the new pyrazoles and imidazo-pyrazoles could represent a new starting point to obtain more potent compounds, and further work is now underway to identify the protein targets of this new class of anti-TB agents. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  3. A bioinformatic pipeline for identifying informative SNP panels for parentage assignment from RADseq data.

    PubMed

    Andrews, Kimberly R; Adams, Jennifer R; Cassirer, E Frances; Plowright, Raina K; Gardner, Colby; Dwire, Maggie; Hohenlohe, Paul A; Waits, Lisette P

    2018-06-05

    The development of high-throughput sequencing technologies is dramatically increasing the use of single nucleotide polymorphisms (SNPs) across the field of genetics, but most parentage studies of wild populations still rely on microsatellites. We developed a bioinformatic pipeline for identifying SNP panels that are informative for parentage analysis from restriction site-associated DNA sequencing (RADseq) data. This pipeline includes options for analysis with or without a reference genome, and provides methods to maximize genotyping accuracy and select sets of unlinked loci that have high statistical power. We test this pipeline on small populations of Mexican gray wolf and bighorn sheep, for which parentage analyses are expected to be challenging due to low genetic diversity and the presence of many closely related individuals. We compare the results of parentage analysis across SNP panels generated with or without the use of a reference genome, and between SNPs and microsatellites. For Mexican gray wolf, we conducted parentage analyses for 30 pups from a single cohort where samples were available from 64% of possible mothers and 53% of possible fathers, and the accuracy of parentage assignments could be estimated because true identities of parents were known a priori based on field data. For bighorn sheep, we conducted maternity analyses for 39 lambs from five cohorts where 77% of possible mothers were sampled, but true identities of parents were unknown. Analyses with and without a reference genome produced SNP panels with >95% parentage assignment accuracy for Mexican gray wolf, outperforming microsatellites at 78% accuracy. Maternity assignments were completely consistent across all SNP panels for the bighorn sheep, and were 74.4% consistent with assignments from microsatellites. 
Accuracy and consistency of parentage analysis were not reduced when using as few as 284 SNPs for Mexican gray wolf and 142 SNPs for bighorn sheep, indicating our pipeline can be used to develop SNP genotyping assays for parentage analysis with relatively small numbers of loci. This article is protected by copyright. All rights reserved.
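One standard check underlying SNP-based parentage assignment is counting opposing homozygotes: loci where offspring and candidate parent are homozygous for different alleles, a Mendelian impossibility barring genotyping error. The sketch below is illustrative only; the 0/1/2 dosage coding, the -1 no-call code, and the 1% threshold are assumptions, not the authors' pipeline.

```python
# Hedged sketch of a parentage-exclusion check via opposing homozygotes,
# using 0/1/2 allele-dosage genotypes and -1 for missing calls.

def opposing_homozygotes(offspring, candidate, missing=-1):
    """Count Mendelian-incompatible loci between two dosage vectors."""
    conflicts = 0
    for o, p in zip(offspring, candidate):
        if o == missing or p == missing:
            continue  # skip no-calls
        if {o, p} == {0, 2}:
            conflicts += 1  # opposing homozygotes cannot be parent/offspring
    return conflicts

def is_compatible(offspring, candidate, max_rate=0.01):
    """Accept the candidate if conflicts stay under a misconcordance rate."""
    n = sum(1 for o, p in zip(offspring, candidate) if o != -1 and p != -1)
    return n > 0 and opposing_homozygotes(offspring, candidate) / n <= max_rate

kid = [0, 1, 2, 2, 1, 0]
mom = [0, 1, 2, 1, 1, 0]        # no opposing homozygotes
stranger = [2, 1, 0, 0, 1, 2]   # several Mendelian conflicts
print(is_compatible(kid, mom), is_compatible(kid, stranger))
```

Allowing a small nonzero conflict rate, as sketched here, is what accommodates genotyping error while still excluding unrelated candidates.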

  4. MICRA: an automatic pipeline for fast characterization of microbial genomes from high-throughput sequencing data.

    PubMed

    Caboche, Ségolène; Even, Gaël; Loywick, Alexandre; Audebert, Christophe; Hot, David

    2017-12-19

    The increase in available sequence data has advanced the field of microbiology; however, making sense of these data without bioinformatics skills is still problematic. We describe MICRA, an automatic pipeline, available as a web interface, for microbial identification and characterization through read analysis. MICRA uses iterative mapping against reference genomes to identify genes and variations. Additional modules allow the prediction of antibiotic susceptibility and resistance and the comparison of results across several samples. MICRA is fast and produces few false-positive annotations and variant calls compared to current methods, making it a tool of great interest for fully exploiting sequencing data.

  5. 75 FR 24655 - Order Finding That the ICE Waha Financial Basis Contract Traded on the IntercontinentalExchange...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-05

    ... Texas. Pipelines include El Paso Natural Gas, Transwestern Pipeline, Natural Gas Pipeline Co. of America, Northern Natural Gas, Delhi Pipeline, Oasis Pipeline, EPGT Texas and Lone Star Pipeline. The Platt's [[Page... pipelines. These pipelines bring in natural gas from fields in the Gulf Coast region and ship it to major...

  6. ISPyB: an information management system for synchrotron macromolecular crystallography.

    PubMed

    Delagenière, Solange; Brenchereau, Patrice; Launer, Ludovic; Ashton, Alun W; Leal, Ricardo; Veyrier, Stéphanie; Gabadinho, José; Gordon, Elspeth J; Jones, Samuel D; Levik, Karl Erik; McSweeney, Seán M; Monaco, Stéphanie; Nanao, Max; Spruce, Darren; Svensson, Olof; Walsh, Martin A; Leonard, Gordon A

    2011-11-15

    Individual research groups now analyze thousands of samples per year at synchrotron macromolecular crystallography (MX) resources. The efficient management of experimental data is thus essential if the best possible experiments are to be performed and the best possible data used in downstream processes in structure determination pipelines. Information System for Protein crystallography Beamlines (ISPyB), a Laboratory Information Management System (LIMS) with an underlying data model allowing for the integration of analyses downstream of the data collection experiment, was developed to facilitate such data management. ISPyB is now a multisite, generic LIMS for synchrotron-based MX experiments. Its initial functionality has been enhanced to include improved sample tracking and reporting of experimental protocols, the direct ranking of the diffraction characteristics of individual samples and the archiving of raw data and results from ancillary experiments and post-experiment data processing protocols. This latter feature paves the way for ISPyB to play a central role in future macromolecular structure solution pipelines and validates the application of the approach used in ISPyB to other experimental techniques, such as biological solution Small Angle X-ray Scattering and spectroscopy, which have similar sample tracking and data handling requirements.

  7. Launching genomics into the cloud: deployment of Mercury, a next generation sequence analysis pipeline

    PubMed Central

    2014-01-01

    Background Massively parallel DNA sequencing generates staggering amounts of data. Decreasing cost, increasing throughput, and improved annotation have expanded the diversity of genomics applications in research and clinical practice. This expanding scale creates analytical challenges: accommodating peak compute demand, coordinating secure access for multiple analysts, and sharing validated tools and results. Results To address these challenges, we have developed the Mercury analysis pipeline and deployed it in local hardware and the Amazon Web Services cloud via the DNAnexus platform. Mercury is an automated, flexible, and extensible analysis workflow that provides accurate and reproducible genomic results at scales ranging from individuals to large cohorts. Conclusions By taking advantage of cloud computing and with Mercury implemented on the DNAnexus platform, we have demonstrated a powerful combination of a robust and fully validated software pipeline and a scalable computational resource that, to date, we have applied to more than 10,000 whole genome and whole exome samples. PMID:24475911

  8. MToolBox: a highly automated pipeline for heteroplasmy annotation and prioritization analysis of human mitochondrial variants in high-throughput sequencing

    PubMed Central

    Diroma, Maria Angela; Santorsola, Mariangela; Guttà, Cristiano; Gasparre, Giuseppe; Picardi, Ernesto; Pesole, Graziano; Attimonelli, Marcella

    2014-01-01

    Motivation: The increasing availability of mitochondria-targeted and off-target sequencing data in whole-exome and whole-genome sequencing studies (WXS and WGS) has raised the demand for effective pipelines to accurately measure heteroplasmy and to easily recognize the most functionally important mitochondrial variants among a huge number of candidates. To this purpose, we developed MToolBox, a highly automated pipeline to reconstruct and analyze human mitochondrial DNA from high-throughput sequencing data. Results: MToolBox implements an effective computational strategy for mitochondrial genome assembly and haplogroup assignment, also including a prioritization analysis of detected variants. MToolBox provides a Variant Call Format file featuring, for the first time, allele-specific heteroplasmy, and annotation files with prioritized variants. MToolBox was tested on simulated samples and applied on 1000 Genomes WXS datasets. Availability and implementation: MToolBox package is available at https://sourceforge.net/projects/mtoolbox/. Contact: marcella.attimonelli@uniba.it Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25028726
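The allele-specific heteroplasmy measured above reduces, per mitochondrial site, to the fraction of reads supporting a non-majority allele. A minimal sketch with a hypothetical data layout, not MToolBox's actual internals:

```python
# Illustrative heteroplasmy calculation: at one mtDNA position, the
# heteroplasmic fraction is the share of reads carrying any base other
# than the majority allele. The dict-of-base-counts layout is assumed.

def heteroplasmy_fraction(base_counts):
    """Fraction of reads carrying a non-majority allele at one site."""
    depth = sum(base_counts.values())
    if depth == 0:
        return 0.0  # uncovered site: no call
    major = max(base_counts.values())
    return (depth - major) / depth

site = {"A": 180, "G": 20}   # 10% of reads carry the alternate base
print(round(heteroplasmy_fraction(site), 3))
```

In practice such estimates are only trusted above a minimum read depth, since low coverage makes the fraction noisy.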

  9. Neural-Fuzzy model Based Steel Pipeline Multiple Cracks Classification

    NASA Astrophysics Data System (ADS)

    Elwalwal, Hatem Mostafa; Mahzan, Shahruddin Bin Hj.; Abdalla, Ahmed N.

    2017-10-01

    While pipes are cheaper than other means of transportation, this cost saving comes with a major price: pipes are subject to cracks, corrosion, etc., which in turn can cause leakage and environmental damage. In this paper, a neural-fuzzy model for the classification of multiple cracks based on guided Lamb waves is presented. Simulation results for 42 samples were collected using ANSYS software. The research combines numerical simulation and experimental study, aiming at an effective way to detect and localize crack and hole defects in the main body of a pipeline and, considering that multiple cracks and holes may coexist, to determine their respective positions in the steel pipe. The technique used is a guided-Lamb-wave-based structural health monitoring method in which piezoelectric transducers serve as exciting and receiving sensors in a pitch-catch configuration. A simple learning mechanism was developed specifically for the ANN on which the fuzzy system is based.

  10. MetaDB a Data Processing Workflow in Untargeted MS-Based Metabolomics Experiments.

    PubMed

    Franceschi, Pietro; Mylonas, Roman; Shahaf, Nir; Scholz, Matthias; Arapitsas, Panagiotis; Masuero, Domenico; Weingart, Georg; Carlin, Silvia; Vrhovsek, Urska; Mattivi, Fulvio; Wehrens, Ron

    2014-01-01

    Due to their sensitivity and speed, mass-spectrometry based analytical technologies are widely used in metabolomics to characterize biological phenomena. To address issues like metadata organization, quality assessment, data processing, data storage, and, finally, submission to public repositories, bioinformatic pipelines of a non-interactive nature are often employed, complementing the interactive software used for initial inspection and visualization of the data. These pipelines are often created as open-source software, allowing the complete and exhaustive documentation of each step and ensuring the reproducibility of the analysis of extensive and often expensive experiments. In this paper, we review the major steps which constitute such a data processing pipeline, discussing them in the context of an open-source software for untargeted MS-based metabolomics experiments recently developed at our institute. The software has been developed by integrating our metaMS R package with a user-friendly web-based application written in Grails. MetaMS takes care of data pre-processing and annotation, while the interface deals with the creation of the sample lists, the organization of the data storage, and the generation of survey plots for quality assessment. Experimental and biological metadata are stored in the ISA-Tab format, making the proposed pipeline fully integrated with the Metabolights framework.

  11. The influence of the internal microbiome on the materials used for construction of the transmission natural gas pipelines in the Lodz Province

    NASA Astrophysics Data System (ADS)

    Staniszewska, Agnieszka; Jastrzębska, Magdalena; Ziemiński, Krzysztof

    2017-10-01

    This paper presents investigation results of the influence of microbes present in the gas on the biocorrosion rate of the materials used for gas pipeline construction in the Lodz Province. Samples of two types of carbon steel and of cast iron were stored in a laboratory pipeline model reflecting the real conditions of working natural gas pipelines. In the next step, the influence of cathodic protection with parameters recommended for the protection of underground structures was tested. Analyses of biological corrosion products generated on the test surfaces were carried out using a scanning electron microscope with an X-ray analyzer. The level of ATP was measured to confirm the presence of adsorbed microorganisms on the observed structures. Corrosion rates were determined by gravimetric methods. In the course of the study it was revealed that the rate of biocorrosion of steel is lower than that of cast iron. Our results also proved that the gravimetric corrosion rate depends on the number of adhered microorganisms. In addition, it has been found that application of cathodic protection to the carbon steel decreases its gravimetric corrosion rate. The information obtained will help to increase knowledge of the rate of biological corrosion causing pitting inside gas pipelines.
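The gravimetric method mentioned above converts coupon mass loss into a corrosion rate. A common formulation (as in ASTM G1) is CR = K*W/(A*T*D), where K = 8.76e4 yields mm/year for mass loss W in grams, exposed area A in cm^2, exposure time T in hours and density D in g/cm^3. The sketch below assumes that form; the coupon values are illustrative only, not the paper's data.

```python
# Hedged sketch of a gravimetric (mass-loss) corrosion-rate calculation,
# following the common CR = K*W/(A*T*D) form. Sample values are invented.

def corrosion_rate_mm_per_year(mass_loss_g, area_cm2, hours, density_g_cm3):
    """Gravimetric corrosion rate in mm/year."""
    K = 8.76e4  # unit-conversion constant giving mm/year
    return K * mass_loss_g / (area_cm2 * hours * density_g_cm3)

# Hypothetical carbon steel coupon: 0.050 g lost over 30 days, 10 cm^2 area.
rate = corrosion_rate_mm_per_year(0.050, 10.0, 30 * 24, 7.86)
print(round(rate, 4))
```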

  12. Machine-learning-based Brokers for Real-time Classification of the LSST Alert Stream

    NASA Astrophysics Data System (ADS)

    Narayan, Gautham; Zaidi, Tayeb; Soraisam, Monika D.; Wang, Zhe; Lochner, Michelle; Matheson, Thomas; Saha, Abhijit; Yang, Shuo; Zhao, Zhenge; Kececioglu, John; Scheidegger, Carlos; Snodgrass, Richard T.; Axelrod, Tim; Jenness, Tim; Maier, Robert S.; Ridgway, Stephen T.; Seaman, Robert L.; Evans, Eric Michael; Singh, Navdeep; Taylor, Clark; Toeniskoetter, Jackson; Welch, Eric; Zhu, Songzhe; The ANTARES Collaboration

    2018-05-01

    The unprecedented volume and rate of transient events that will be discovered by the Large Synoptic Survey Telescope (LSST) demand that the astronomical community update its follow-up paradigm. Alert brokers (automated software systems that sift through, characterize, annotate, and prioritize events for follow-up) will be critical tools for managing alert streams in the LSST era. The Arizona-NOAO Temporal Analysis and Response to Events System (ANTARES) is one such broker. In this work, we develop a machine learning pipeline to characterize and classify variable and transient sources using only the available multiband optical photometry. We describe three illustrative stages of the pipeline, serving the three goals of early, intermediate, and retrospective classification of alerts. The first takes the form of variable versus transient categorization, the second a multiclass typing of the combined variable and transient data set, and the third a purity-driven subtyping of a transient class. Although several similar algorithms have proven themselves in simulations, we validate their performance on real observations for the first time. We quantitatively evaluate our pipeline on sparse, unevenly sampled, heteroskedastic data from various existing observational campaigns, and demonstrate very competitive classification performance. We describe our progress toward adapting the pipeline developed in this work into a real-time broker working on live alert streams from time-domain surveys.

  13. Latest Development and Application of High Strength and Heavy Gauge Pipeline Steel in China

    NASA Astrophysics Data System (ADS)

    Yongqing, Zhang; Aimin, Guo; Chengjia, Shang; Qingyou, Liu; Gray, J. Malcolm; Barbaro, Frank

    Over the past twenty years, significant advances have been made in the field of microalloying and its applications, among which one of the most successful is the HTP practice for heavy gauge, high strength pipeline steels. Combining the strengthening effects of TMCP with the retardation of austenite recrystallization produced by increased Nb in the austenite region, the HTP concept, with its low carbon and high niobium alloy design, has been successfully applied to develop X80 coil with a thickness of 18.4 mm used for China's Second West-East pipeline. During this process, considerable efforts were made to further develop and enrich the application of microalloying technology, and at the same time the strengthening effects of Nb have been fully unfolded and utilized with improved metallurgical quality and quantitative analysis of microstructure. In this paper, the status and strengthening effect of Nb during reheating, rolling, cooling and welding are analyzed and characterized based on mass production samples and laboratory analysis. As confirmed, grain refinement remains the most basic strengthening measure to reduce the microstructure gradient through the thickness, which in turn enlarges the processing window to improve low temperature toughness, and finally makes it possible to develop heavy gauge, high strength pipeline steels with more challenging fracture toughness requirements.

  14. SNP Data Quality Control in a National Beef and Dairy Cattle System and Highly Accurate SNP Based Parentage Verification and Identification

    PubMed Central

    McClure, Matthew C.; McCarthy, John; Flynn, Paul; McClure, Jennifer C.; Dair, Emma; O'Connell, D. K.; Kearney, John F.

    2018-01-01

    A major use of genetic data is parentage verification and identification, as inaccurate pedigrees negatively affect genetic gain. Since 2012 the international standard for single nucleotide polymorphism (SNP) verification in Bos taurus cattle has been the ISAG SNP panels. While these ISAG panels provide an increased level of parentage accuracy over microsatellite markers (MS), they can validate the wrong parent at ≤1% misconcordance rate levels, indicating that more SNP are needed if a more accurate pedigree is required. With rapidly increasing numbers of cattle being genotyped in Ireland that represent 61 B. taurus breeds from a wide range of farm types (beef/dairy, AI/pedigree/commercial, purebred/crossbred, and large to small herd sizes), the Irish Cattle Breeding Federation (ICBF) analyzed different SNP densities to determine that at a minimum ≥500 SNP are needed to consistently predict only one set of parents at a ≤1% misconcordance rate. For parentage validation and prediction ICBF uses 800 SNP (ICBF800) selected based on SNP clustering quality, ISAG200 inclusion, call rate (CR), and minor allele frequency (MAF) in the Irish cattle population. Large datasets require sample and SNP quality control (QC). Most publications only deal with SNP QC via CR, MAF, parent-progeny conflicts, and Hardy-Weinberg deviation, but not sample QC. We report here parentage, SNP QC, and genomic sample QC pipelines to deal with the unique challenges of >1 million genotypes from a national herd, such as SNP genotype errors from mis-tagging of animals, lab errors, farm errors, and multiple other issues that can arise. We divide the pipeline into two parts: a Genotype QC and an Animal QC pipeline. The Genotype QC identifies samples with low call rate, missing or mixed genotype classes (no BB genotype or ABTG alleles present), and low genotype frequencies.
The Animal QC handles situations where the genotype might not belong to the listed individual by identifying >1 non-matching genotypes per animal, SNP duplicates, sex and breed prediction mismatches, parentage and progeny validation results, and other issues. The Animal QC pipeline makes use of the ICBF800 SNP set where appropriate to identify errors in a computationally efficient yet highly accurate manner. PMID:29599798
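The Genotype QC statistics named above (call rate, allele frequencies) can be sketched as simple per-sample and per-SNP calculations. The 0/1/2 dosage coding, the -1 no-call code, and any thresholds one would apply are assumptions for illustration, not ICBF's implementation.

```python
# Illustrative per-sample call rate and per-SNP minor allele frequency,
# the two filters most genotype QC pipelines apply first. Coding assumed.

def call_rate(genotypes, missing=-1):
    """Fraction of loci with a called genotype for one sample."""
    return sum(g != missing for g in genotypes) / len(genotypes)

def minor_allele_frequency(dosages, missing=-1):
    """MAF across samples at one SNP from 0/1/2 dosages, ignoring no-calls."""
    called = [g for g in dosages if g != missing]
    if not called:
        return 0.0
    freq = sum(called) / (2 * len(called))  # frequency of the counted allele
    return min(freq, 1 - freq)

sample = [0, 1, 2, -1, 2, 2, 1, 0]   # one animal across 8 loci
snp = [0, 0, 1, 0, 2, 0]             # one SNP across 6 animals
print(call_rate(sample), minor_allele_frequency(snp))
```

A pipeline would then drop samples below a call-rate cutoff and SNPs below a MAF cutoff before any parentage analysis.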

  15. 76 FR 29333 - Pipeline Safety: Meetings of the Technical Pipeline Safety Standards Committee and the Technical...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-20

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... Technical Hazardous Liquid Pipeline Safety Standards Committee AGENCY: Pipeline and Hazardous Materials... for natural gas pipelines and for hazardous liquid pipelines. Both committees were established under...

  16. 77 FR 34123 - Pipeline Safety: Public Meeting on Integrity Management of Gas Distribution Pipelines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-08

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2012-0100] Pipeline Safety: Public Meeting on Integrity Management of Gas Distribution Pipelines AGENCY: Office of Pipeline Safety, Pipeline and Hazardous Materials Safety Administration, DOT. ACTION...

  17. Induced Polarization Surveying for Acid Rock Screening in Highway Design

    NASA Astrophysics Data System (ADS)

    Butler, K. E.; Al, T.; Bishop, T.

    2004-05-01

    Highway and pipeline construction agencies have become increasingly vigilant in their efforts to avoid cutting through sulphide-bearing bedrock that has potential to produce acid rock drainage. Blasting and fragmentation of such rock increases the surface area available for sulphide oxidation and hence increases the risk of acid rock drainage unless the rock contains enough natural buffering capacity to neutralize the pH. In December 2001, the New Brunswick Department of Transportation (NBDOT) sponsored a field trial of geophysical surveying in order to assess its suitability as a screening tool for locating near-surface sulphides along proposed highway alignments. The goal was to develop a protocol that would allow existing programs of drilling and geochemical testing to be targeted more effectively, and provide design engineers with the information needed to reduce rock cuts where necessary and dispose of blasted material in a responsible fashion. Induced polarization (IP) was chosen as the primary geophysical method given its ability to detect low-grade disseminated mineralization. The survey was conducted in dipole-dipole mode using an exploration-style time domain IP system, dipoles 8 to 25 m in length, and six potential dipoles for each current dipole location (i.e. n = 1 - 6). Supplementary information was provided by resistivity and VLF-EM surveys sensitive to lateral changes in electrical conductivity, and by magnetic field surveying chosen for its sensitivity to the magnetic susceptibility of pyrrhotite. Geological and geochemical analyses of samples taken from several IP anomalies located along 4.3 line-km of proposed highway confirmed the effectiveness of the screening technique. IP pseudosections from a region of metamorphosed shales and volcaniclastic rocks identified discrete, well-defined mineralized zones. 
Stronger, overlapping, and more laterally extensive IP anomalies were observed over a section of graphitic and sulphide-bearing metasedimentary rocks. Attempts to use spectral IP characteristics to determine relative abundances of sulphides and graphite were not conclusive. The overall effectiveness of the screening technique, however, encouraged NBDOT to apply it to an additional 50 km of planned rock cuts along the corridor selected for the new Trans-Canada Highway.

  18. Bi-PROF

    PubMed Central

    Gries, Jasmin; Schumacher, Dirk; Arand, Julia; Lutsik, Pavlo; Markelova, Maria Rivera; Fichtner, Iduna; Walter, Jörn; Sers, Christine; Tierling, Sascha

    2013-01-01

The use of next generation sequencing has expanded our view of whole mammalian methylome patterns. In particular, it provides genome-wide insight into local DNA methylation diversity at the single nucleotide level and enables the examination of single chromosome sequence sections at sufficient statistical power. We describe a bisulfite-based sequence profiling pipeline, Bi-PROF, which is based on the 454 GS-FLX Titanium technology and allows up to one million sequence stretches to be obtained at single base pair resolution without laborious subcloning. To illustrate the performance of the experimental workflow connected to a bioinformatics program pipeline (BiQ Analyzer HT), we present a test analysis set of 68 different epigenetic marker regions (amplicons) in five individual patient-derived xenograft tissue samples of colorectal cancer and one healthy colon epithelium sample as a control. After the 454 GS-FLX Titanium run, sequence read processing and sample decoding, the obtained alignments are quality controlled and statistically evaluated. Comprehensive methylation pattern interpretation (profiling) assessed by analyzing 10^2-10^4 sequence reads per amplicon allows an unprecedentedly deep view of pattern formation and methylation marker heterogeneity in tissues affected by complex diseases like cancer. PMID:23803588
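The methylation calls behind such profiling reduce to comparing each CpG cytosine in the reference with the aligned base of a bisulfite-converted read: a retained C indicates a protected (methylated) cytosine, a T indicates a converted (unmethylated) one. A minimal sketch, assuming ungapped reads aligned at the amplicon start and ignoring sequencing errors (not the BiQ Analyzer HT implementation):

```python
def call_cpg_methylation(ref, read):
    """Call methylation at CpG cytosines of one bisulfite-converted read.

    At each CpG cytosine in the reference, an aligned 'C' in the read
    means the cytosine was protected (methylated, 1); a 'T' means it
    was converted (unmethylated, 0). Assumes the read aligns to the
    reference at offset 0 with no indels.
    """
    calls = {}
    for i in range(len(ref) - 1):
        if ref[i:i + 2] == "CG" and i < len(read):
            if read[i] == "C":
                calls[i] = 1
            elif read[i] == "T":
                calls[i] = 0
    return calls

def methylation_level(reads, ref):
    """Average methylation per CpG position across all reads."""
    totals, counts = {}, {}
    for r in reads:
        for pos, m in call_cpg_methylation(ref, r).items():
            totals[pos] = totals.get(pos, 0) + m
            counts[pos] = counts.get(pos, 0) + 1
    return {p: totals[p] / counts[p] for p in totals}

ref = "ACGTTCGA"                      # CpG sites at positions 1 and 5
reads = ["ATGTTCGA", "ACGTTTGA"]      # two toy bisulfite reads
levels = methylation_level(reads, ref)
```

Per-amplicon profiles like `levels` are what pattern-heterogeneity statistics are computed from once thousands of reads are aggregated.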

  19. Pulsed eddy current differential probe to detect the defects in a stainless steel pipe

    NASA Astrophysics Data System (ADS)

    Angani, C. S.; Park, D. G.; Kim, C. G.; Leela, P.; Kishore, M.; Cheong, Y. M.

    2011-04-01

Pulsed eddy current (PEC) is an electromagnetic nondestructive technique widely used to detect and quantify flaws in conducting materials. In the present study a differential Hall-sensor probe for use in the PEC system has been fabricated for the detection of defects in stainless steel pipelines. The differential probe has an exciting coil with two Hall-sensors. A stainless steel test sample with electrical discharge machining (EDM) notches of different depths (1-5 mm) was made, and the sample was laminated with plastic insulation of uniform thickness to simulate the pipelines in nuclear power plants (NPPs). The driving coil in the probe is excited by a rectangular current pulse, and the resultant response, the difference of the two Hall-sensor outputs, is detected as the PEC probe signal. Discriminating time-domain features of the detected pulse, such as peak value and time to zero, are used to interpret the experimental results for the defects in the test sample. A feature extraction technique based on spectral power density has also been devised to interpret the PEC response.
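The two time-domain features named here, peak value and time to zero, can be extracted from the differential signal as sketched below; this is a minimal illustration assuming a uniformly sampled transient, not the authors' acquisition code:

```python
def pec_features(signal, dt):
    """Extract (peak value, time-to-zero) from a PEC difference signal.

    peak: the sample with the largest absolute amplitude.
    time-to-zero: time of the first zero crossing after the peak,
    located by linear interpolation between adjacent samples.
    """
    peak_idx = max(range(len(signal)), key=lambda i: abs(signal[i]))
    peak = signal[peak_idx]
    for i in range(peak_idx, len(signal) - 1):
        a, b = signal[i], signal[i + 1]
        if a == 0:
            return peak, i * dt
        if a * b < 0:  # sign change: crossing lies between samples i, i+1
            frac = a / (a - b)
            return peak, (i + frac) * dt
    return peak, None  # transient never returned to zero

# toy transient sampled every 0.1 time units
sig = [0.0, 2.0, 4.0, 3.0, 1.0, -1.0, -2.0]
peak, t_zero = pec_features(sig, dt=0.1)
```

Deeper notches would typically shift both features, which is why they serve as discriminating inputs for defect sizing.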

  20. Pharmacological screening technologies for venom peptide discovery.

    PubMed

    Prashanth, Jutty Rajan; Hasaballah, Nojod; Vetter, Irina

    2017-12-01

    Venomous animals occupy one of the most successful evolutionary niches and occur on nearly every continent. They deliver venoms via biting and stinging apparatuses with the aim to rapidly incapacitate prey and deter predators. This has led to the evolution of venom components that act at a number of biological targets - including ion channels, G-protein coupled receptors, transporters and enzymes - with exquisite selectivity and potency, making venom-derived components attractive pharmacological tool compounds and drug leads. In recent years, plate-based pharmacological screening approaches have been introduced to accelerate venom-derived drug discovery. A range of assays are amenable to this purpose, including high-throughput electrophysiology, fluorescence-based functional and binding assays. However, despite these technological advances, the traditional activity-guided fractionation approach is time-consuming and resource-intensive. The combination of screening techniques suitable for miniaturization with sequence-based discovery approaches - supported by advanced proteomics, mass spectrometry, chromatography as well as synthesis and expression techniques - promises to further improve venom peptide discovery. Here, we discuss practical aspects of establishing a pipeline for venom peptide drug discovery with a particular emphasis on pharmacology and pharmacological screening approaches. This article is part of the Special Issue entitled 'Venom-derived Peptides as Pharmacological Tools.' Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. The development of high-content screening (HCS) technology and its importance to drug discovery.

    PubMed

    Fraietta, Ivan; Gasparri, Fabio

    2016-01-01

High-content screening (HCS) was introduced about twenty years ago as a promising analytical approach to facilitate some critical aspects of drug discovery. Its application has spread progressively within the pharmaceutical industry and academia to the point that it today represents a fundamental tool in supporting drug discovery and development. Here, the authors review some of the significant progress in the HCS field in terms of biological models and assay readouts. They highlight the importance of high-content screening in drug discovery, as testified by its numerous applications in a variety of therapeutic areas: oncology, infectious diseases, cardiovascular and neurodegenerative diseases. They also dissect the role of HCS technology in different phases of the drug discovery pipeline: target identification, primary compound screening, secondary assays, mechanism of action studies and in vitro toxicology. Recent advances in cellular assay technologies, such as the introduction of three-dimensional (3D) cultures, induced pluripotent stem cells (iPSCs) and genome editing technologies (e.g., CRISPR/Cas9), have tremendously expanded the potential of high-content assays to contribute to the drug discovery process. Increasingly predictive cellular models and readouts, together with the development of more sophisticated and affordable HCS readers, will further consolidate the role of HCS technology in drug discovery.

  2. Taking transgenic rice drought screening to the field.

    PubMed

    Gaudin, Amélie C M; Henry, Amelia; Sparks, Adam H; Slamet-Loedin, Inez H

    2013-01-01

    Numerous transgenes have been reported to increase rice drought resistance, mostly in small-scale experiments under vegetative-stage drought stress, but few studies have included grain yield or field evaluations. Different definitions of drought resistance are currently in use for field-based and laboratory evaluations of transgenics, the former emphasizing plant responses that may not be linked to yield under drought. Although those fundamental studies use efficient protocols to uncover and validate gene functions, screening conditions differ greatly from field drought environments where the onset of drought stress symptoms is slow (2-3 weeks). Simplified screening methods, including severely stressed survival studies, are therefore not likely to identify transgenic events with better yield performance under drought in the target environment. As biosafety regulations are becoming established to allow field trials in some rice-producing countries, there is a need to develop relevant screening procedures that scale from preliminary event selection to greenhouse and field trials. Multilocation testing in a range of drought environments may reveal that different transgenes are necessary for different types of drought-prone field conditions. We describe here a pipeline to improve the selection efficiency and reproducibility of results across drought treatments and test the potential of transgenic rice for the development of drought-resistant material for agricultural purposes.

  3. 76 FR 43743 - Pipeline Safety: Meetings of the Technical Pipeline Safety Standards Committee and the Technical...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-21

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2011-0127] Pipeline Safety: Meetings of the Technical Pipeline Safety Standards Committee and the Technical Hazardous Liquid Pipeline Safety Standards Committee AGENCY: Pipeline and Hazardous Materials...

  4. Use of combinatorial chemistry to speed drug discovery.

    PubMed

    Rádl, S

    1998-10-01

    IBC's International Conference on Integrating Combinatorial Chemistry into the Discovery Pipeline was held September 14-15, 1998. The program started with a pre-conference workshop on High-Throughput Compound Characterization and Purification. The agenda of the main conference was divided into sessions of Synthesis, Automation and Unique Chemistries; Integrating Combinatorial Chemistry, Medicinal Chemistry and Screening; Combinatorial Chemistry Applications for Drug Discovery; and Information and Data Management. This meeting was an excellent opportunity to see how big pharma, biotech and service companies are addressing the current bottlenecks in combinatorial chemistry to speed drug discovery. (c) 1998 Prous Science. All rights reserved.

  5. 77 FR 16471 - Pipeline Safety: Implementation of the National Registry of Pipeline and Liquefied Natural Gas...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-21

    ... Registry of Pipeline and Liquefied Natural Gas Operators AGENCY: Pipeline and Hazardous Materials Safety... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Parts... Register (75 FR 72878) titled: ``Pipeline Safety: Updates to Pipeline and Liquefied Natural Gas Reporting...

  6. The High-Throughput Protein Sample Production Platform of the Northeast Structural Genomics Consortium

    PubMed Central

    Xiao, Rong; Anderson, Stephen; Aramini, James; Belote, Rachel; Buchwald, William A.; Ciccosanti, Colleen; Conover, Ken; Everett, John K.; Hamilton, Keith; Huang, Yuanpeng Janet; Janjua, Haleema; Jiang, Mei; Kornhaber, Gregory J.; Lee, Dong Yup; Locke, Jessica Y.; Ma, Li-Chung; Maglaqui, Melissa; Mao, Lei; Mitra, Saheli; Patel, Dayaban; Rossi, Paolo; Sahdev, Seema; Sharma, Seema; Shastry, Ritu; Swapna, G.V.T.; Tong, Saichu N.; Wang, Dongyan; Wang, Huang; Zhao, Li; Montelione, Gaetano T.; Acton, Thomas B.

    2014-01-01

We describe the core Protein Production Platform of the Northeast Structural Genomics Consortium (NESG) and outline the strategies used for producing high-quality protein samples. The platform is centered on the cloning, expression and purification of 6X-His-tagged proteins using T7-based Escherichia coli systems. The 6X-His tag allows for similar purification procedures for most targets and implementation of high-throughput (HTP) parallel methods. In most cases, the 6X-His-tagged proteins are purified to sufficient homogeneity (> 97%) for structural studies using an HTP two-step purification protocol. Using this platform, the open reading frames of over 16,000 different targeted proteins (or domains) have been cloned as > 26,000 constructs. Over the past nine years, more than 16,000 of these constructs have expressed protein, and more than 4,400 proteins (or domains) have been purified to homogeneity in tens-of-milligram quantities (see Summary Statistics, http://nesg.org/statistics.html). Using these samples, the NESG has deposited more than 900 new protein structures in the Protein Data Bank (PDB). The methods described here are effective in producing eukaryotic and prokaryotic protein samples in E. coli. This paper summarizes some of the updates made to the protein production pipeline in the last five years, corresponding to phase 2 of the NIGMS Protein Structure Initiative (PSI-2) project. The NESG Protein Production Platform is suitable for implementation in a large individual laboratory or by a small group of collaborating investigators. These advanced automated and/or parallel cloning, expression, purification, and biophysical screening technologies are of broad value to the structural biology, functional proteomics, and structural genomics communities. PMID:20688167

  7. Estimates of microbial quality and concentration of copper in distributed drinking water are highly dependent on sampling strategy.

    PubMed

    Lehtola, Markku J; Miettinen, Ilkka T; Hirvonen, Arja; Vartiainen, Terttu; Martikainen, Pertti J

    2007-12-01

The numbers of bacteria generally increase in distributed water. Often household pipelines or water fittings (e.g., taps) represent the most critical location for microbial growth in water distribution systems. According to the European Union drinking water directive, there should not be abnormal changes in the colony counts in water. We used a pilot distribution system to study the effects of water stagnation on drinking water microbial quality, concentration of copper and formation of biofilms with two commonly used household pipeline materials: copper and plastic (polyethylene). Water stagnation for more than 4 h significantly increased both the copper concentration and the number of bacteria in water. Heterotrophic plate counts were six times higher in PE pipes and ten times higher in copper pipes after 16 h of stagnation than after only 40 min of stagnation. The increase in the heterotrophic plate counts was linear with time in both copper and plastic pipelines. In the distribution system, bacteria originated mainly from biofilms, because in laboratory tests with water there was only minor growth of bacteria after 16 h of stagnation. Our study indicates that water stagnation in the distribution system clearly affects microbial numbers and the concentration of copper in water, and should be considered when planning the sampling strategy for drinking water quality control in distribution systems.
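The reported linear growth of plate counts with stagnation time can be estimated with an ordinary least-squares fit. A minimal sketch; the stagnation times and count values below are hypothetical placeholders, not the study's data:

```python
def linear_fit(times, counts):
    """Ordinary least-squares fit: counts ~ slope * time + intercept."""
    n = len(times)
    mean_t = sum(times) / n
    mean_c = sum(counts) / n
    sxx = sum((t - mean_t) ** 2 for t in times)
    sxy = sum((t - mean_t) * (c - mean_c) for t, c in zip(times, counts))
    slope = sxy / sxx
    intercept = mean_c - slope * mean_t
    return slope, intercept

# hypothetical stagnation times (h) and relative plate counts
stagnation_h = [0.67, 4.0, 8.0, 16.0]
rel_counts = [1.0, 3.0, 5.8, 10.1]
slope, intercept = linear_fit(stagnation_h, rel_counts)
```

A near-constant slope across materials is what "linear with time" means here; the slope itself (growth per hour) would differ between copper and PE pipes.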

  8. Using ontology-based semantic similarity to facilitate the article screening process for systematic reviews.

    PubMed

    Ji, Xiaonan; Ritter, Alan; Yen, Po-Yin

    2017-05-01

Systematic Reviews (SRs) are utilized to summarize evidence from high quality studies and are considered the preferred source of evidence-based practice (EBP). However, conducting SRs can be time and labor intensive due to the high cost of article screening. In previous studies, we demonstrated utilizing established (lexical) article relationships to facilitate the identification of relevant articles in an efficient and effective manner. Here we propose to enhance article relationships with background semantic knowledge derived from Unified Medical Language System (UMLS) concepts and ontologies. We developed a pipelined semantic concepts representation process to represent articles from an SR in an optimized and enriched semantic space of UMLS concepts. Throughout the process, we leveraged concepts and concept relations encoded in biomedical ontologies (SNOMED-CT and MeSH) within the UMLS framework to derive concept features for each article. Article relationships (similarities) were established and represented as a semantic article network, which was readily applied to assist with the article screening process. We incorporated the concept of active learning to simulate an interactive article recommendation process, and evaluated the performance on 15 completed SRs. We used work saved over sampling at 95% recall (WSS95) as the performance measure. We compared the WSS95 performance of our ontology-based semantic approach to existing lexical feature approaches and corpus-based semantic approaches, and found that we had better WSS95 in most SRs. We also had the highest average WSS95 of 43.81% and the highest total WSS95 of 657.18%. We demonstrated using ontology-based semantics to facilitate the identification of relevant articles for SRs. Effective concepts and concept relations derived from UMLS ontologies can be utilized to establish article semantic relationships.
Our approach showed promising performance and can be readily applied to any SR topic in the biomedical domain. Copyright © 2017 Elsevier Inc. All rights reserved.
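The WSS95 measure used above is the standard "work saved over sampling" metric: the fraction of articles a reviewer is spared from screening, minus the 5% of recall given up at the 95% level. A minimal sketch with hypothetical screening counts:

```python
def wss(true_negatives, false_negatives, n_total, recall=0.95):
    """Work Saved over Sampling at the given recall level.

    WSS = (TN + FN) / N - (1 - recall): the fraction of articles left
    unscreened once the target recall is reached, relative to reading
    the same fraction under random sampling.
    """
    return (true_negatives + false_negatives) / n_total - (1.0 - recall)

# hypothetical SR of 1000 articles: screening could stop after the
# ranking surfaced 95% of relevant articles, leaving 600 true
# negatives and 5 false negatives unread
saving = wss(true_negatives=600, false_negatives=5, n_total=1000)
```

With these numbers `saving` is 0.555, i.e. 55.5% of the screening effort saved; the paper's per-SR WSS95 values are computed the same way and then averaged or summed across the 15 SRs.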

  9. 75 FR 13644 - TORP Terminal LP, Bienville Offshore Energy Terminal Liquefied Natural Gas Deepwater Port License...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-22

    ... interconnect pipelines to four existing offshore pipelines (Dauphin Natural Gas Pipeline, Williams Natural Gas Pipeline, Destin Natural Gas Pipeline, and Viosca Knoll Gathering System [VKGS] Gas Pipeline) that connect to the onshore natural gas transmission pipeline system. Natural gas would be delivered to customers...

  10. Freight pipelines: Current status and anticipated future use

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-07-01

This report is issued by the Task Committee on Freight Pipelines, Pipeline Division, ASCE. Freight pipelines of various types (including slurry pipeline, pneumatic pipeline, and capsule pipeline) have been used throughout the world for over a century for transporting solids and sometimes even packaged products. Recent advancements in pipeline technology, aided by advanced computer control systems and trenchless technologies, have greatly facilitated the transportation of solids by pipelines. Today, in many situations, freight pipelines are not only the most economical and practical means for transporting solids, they are also the most reliable, safest and most environmentally friendly transportation mode. Increased use of underground pipelines to transport freight is anticipated in the future, especially as the technology continues to improve and surface transportation modes such as highways become more congested. This paper describes the state of the art and expected future uses of various types of freight pipelines. Obstacles hindering the development and use of the most advanced freight pipeline systems, such as the pneumatic capsule pipeline for interstate transport of freight, are discussed.

  11. KRAS mutations in blood circulating cell-free DNA: a pancreatic cancer case-control

    PubMed Central

    Le Calvez-Kelm, Florence; Foll, Matthieu; Wozniak, Magdalena B.; Delhomme, Tiffany M.; Durand, Geoffroy; Chopard, Priscilia; Pertesi, Maroulio; Fabianova, Eleonora; Adamcakova, Zora; Holcatova, Ivana; Foretova, Lenka; Janout, Vladimir; Vallee, Maxime P.; Rinaldi, Sabina; Brennan, Paul; McKay, James D.; Byrnes, Graham B.; Scelo, Ghislaine

    2016-01-01

The utility of KRAS mutations in plasma circulating cell-free DNA (cfDNA) samples as non-invasive biomarkers for the detection of pancreatic cancer has never been evaluated in a large case-control series. We applied a KRAS amplicon-based deep sequencing strategy combined with an analytical pipeline specifically designed for the detection of low-abundance mutations to screen plasma samples of 437 pancreatic cancer cases, 141 chronic pancreatitis subjects, and 394 healthy controls. We detected mutations in 21.1% (N=92) of cases, of whom 82 (89.1%) carried at least one mutation at hotspot codons 12, 13 or 61, with mutant allelic fractions from 0.08% to 79%. Advanced stages were associated with an increased proportion of detection, with KRAS cfDNA mutations detected in 10.3%, 17.5% and 33.3% of cases with local, regional and systemic stages, respectively. We also detected KRAS cfDNA mutations in 3.7% (N=14) of healthy controls and in 4.3% (N=6) of subjects with chronic pancreatitis, but at significantly lower allelic fractions than in cases. Combining cfDNA KRAS mutations and CA19-9 plasma levels on a limited set of case-control samples did not improve the overall performance of the biomarkers as compared to CA19-9 alone. Whether the limited sensitivity and specificity observed in our series of KRAS mutations in plasma cfDNA as biomarkers for pancreatic cancer detection are attributable to methodological limitations or to the biology of cfDNA should be further assessed in large case-control series. PMID:27705932
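The mutant allelic fractions reported above are simply the share of sequencing reads carrying the variant at a position. A minimal sketch; the thresholding rule and background noise rate below are illustrative assumptions, not the authors' pipeline:

```python
def allelic_fraction(mutant_reads, total_reads):
    """Fraction of reads supporting the mutant allele at one position."""
    if total_reads == 0:
        raise ValueError("no coverage at this position")
    return mutant_reads / total_reads

def detect_mutation(mutant_reads, total_reads, noise_rate=0.0008):
    """Illustrative caller: flag the position when its allelic fraction
    exceeds 10x an assumed per-site sequencing error rate. Real
    low-abundance callers model the error distribution statistically."""
    af = allelic_fraction(mutant_reads, total_reads)
    return af, af >= 10 * noise_rate

# e.g. 120 mutant reads out of 1500 at a KRAS hotspot codon
af, flagged = detect_mutation(mutant_reads=120, total_reads=1500)
```

Distinguishing a true 0.08% allelic fraction from sequencing noise is exactly the hard case the abstract's specially designed analytical pipeline addresses.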

  12. A 96-well screen filter plate for high-throughput biological sample preparation and LC-MS/MS analysis.

    PubMed

    Peng, Sean X; Cousineau, Martin; Juzwin, Stephen J; Ritchie, David M

    2006-01-01

A novel 96-well screen filter plate (patent pending) has been invented to eliminate a time-consuming and labor-intensive step in the preparation of in vivo study samples: the removal of blood or plasma clots. These clots plug the pipet tips during a manual or automated sample-transfer step, causing inaccurate pipetting or total pipetting failure. Traditionally, these blood and plasma clots are removed by picking them out manually one by one from each sample tube before any sample transfer can be made. This has significantly slowed the sample preparation process and has become a bottleneck for automated high-throughput sample preparation using robotic liquid handlers. Our novel screen filter plate was developed to solve this problem. The 96-well screen filter plate consists of 96 stainless steel wire-mesh screen tubes connected to the 96 openings of a top plate so that the screen filter plate can be readily inserted into a 96-well sample storage plate. Upon insertion, the blood and plasma clots are excluded from entering the screen tube while clear sample solutions flow freely into it. In this way, sample transfer can be easily completed by either manual or automated pipetting methods. In this report, three structurally diverse compounds were selected to evaluate and validate the use of the screen filter plate. The plasma samples of these compounds were transferred and processed in the presence and absence of the screen filter plate and then analyzed by LC-MS/MS methods. Our results showed a good agreement between the samples prepared with and without the screen filter plate, demonstrating the utility and efficiency of this novel device for preparation of blood and plasma samples. The device is simple, easy to use, and reusable. It can be employed for sample preparation of other biological fluids that contain floating particulates or aggregates.

  13. 77 FR 37661 - Amended Notice of Intent To Prepare the Environmental Impact Statement for a Proposed Federal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-22

    ... natural gas pipelines, the Midwestern Gas Transmission line (3 miles distant) and/or the ANR Pipeline (4.5... Pipeline, Boardwalk/Texas Gas Pipeline, Shell/Capline Oil Pipeline, Panhandle/Trunkline Gas Pipeline, and... Rockport, IN, and CO 2 Pipeline; Conduct Additional Public Scoping Meetings; and Issue a Notice of...

  14. Status of the TESS Science Processing Operations Center

    NASA Astrophysics Data System (ADS)

    Jenkins, Jon Michael; Caldwell, Douglas A.; Davies, Misty; Li, Jie; Morris, Robert L.; Rose, Mark; Smith, Jeffrey C.; Tenenbaum, Peter; Ting, Eric; Twicken, Joseph D.; Wohler, Bill

    2018-06-01

The Transiting Exoplanet Survey Satellite (TESS) was selected by NASA’s Explorer Program to conduct a search for Earth’s closest cousins starting in 2018. TESS will conduct an all-sky transit survey of F, G and K dwarf stars between 4 and 12 magnitudes and M dwarf stars within 200 light years. TESS is expected to discover 1,000 small planets less than twice the size of Earth, and to measure the masses of at least 50 of these small worlds. The TESS science pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center based on the highly successful Kepler science pipeline. Like the Kepler pipeline, the TESS pipeline provides calibrated pixels, simple and systematic error-corrected aperture photometry, and centroid locations for all 200,000+ target stars observed over the 2-year mission, along with associated uncertainties. The pixel and light curve products are modeled on the Kepler archive products and will be archived to the Mikulski Archive for Space Telescopes (MAST). In addition to the nominal science data, the 30-minute Full Frame Images (FFIs) simultaneously collected by TESS will also be calibrated by the SPOC and archived at MAST. The TESS pipeline searches through all light curves for evidence of transits that occur when a planet crosses the disk of its host star. The Data Validation pipeline generates a suite of diagnostic metrics for each transit-like signature, and then extracts planetary parameters by fitting a limb-darkened transit model to each potential planetary signature. The results of the transit search are modeled on the Kepler transit search products (tabulated numerical results, time series products, and pdf reports), all of which will be archived to MAST. Synthetic sample data products are available at https://archive.stsci.edu/tess/ete-6.html. Funding for the TESS Mission has been provided by the NASA Science Mission Directorate.
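The transit search sketched above rests on the idea of phase-folding: at the correct trial period, the in-transit flux samples stack into the same phase bin and produce a dip below the out-of-transit level. A toy illustration of that idea only; the actual SPOC detection algorithm is a wavelet-based matched filter, not this:

```python
def fold_light_curve(times, fluxes, period, n_bins=4):
    """Phase-fold a light curve at a trial period and average the flux
    in each phase bin; a transit shows up as the bin with the lowest
    mean flux."""
    bins = [[] for _ in range(n_bins)]
    for t, f in zip(times, fluxes):
        phase = (t % period) / period                 # phase in [0, 1)
        bins[min(int(phase * n_bins), n_bins - 1)].append(f)
    return [sum(b) / len(b) if b else None for b in bins]

# toy light curve: a 10% flux dip recurring with period 4 time units
times = [0, 1, 2, 3, 4, 5, 6, 7]
fluxes = [1.0, 0.9, 1.0, 1.0, 1.0, 0.9, 1.0, 1.0]
binned = fold_light_curve(times, fluxes, period=4.0)
```

Scanning many trial periods and keeping the one whose folded dip is most significant is the essence of any transit search; the fitted dip depth and duration then feed the planetary-parameter extraction step.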

  15. Capsule injection system for a hydraulic capsule pipelining system

    DOEpatents

    Liu, Henry

    1982-01-01

    An injection system for injecting capsules into a hydraulic capsule pipelining system, the pipelining system comprising a pipeline adapted for flow of a carrier liquid therethrough, and capsules adapted to be transported through the pipeline by the carrier liquid flowing through the pipeline. The injection system comprises a reservoir of carrier liquid, the pipeline extending within the reservoir and extending downstream out of the reservoir, and a magazine in the reservoir for holding capsules in a series, one above another, for injection into the pipeline in the reservoir. The magazine has a lower end in communication with the pipeline in the reservoir for delivery of capsules from the magazine into the pipeline.

  16. A Novel Application of Synthetic Biology and Directed Evolution to Engineer Phage-based Antibiotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Meiye

The emergence of multiple drug resistant bacteria poses threats to human health, agriculture and food safety. Annually over 100,000 deaths and up to $20 billion loss to the U.S. economy are attributed to multiple drug resistant bacteria. With only four new chemical antibiotics in the drug development pipeline, we are in dire need of new solutions to address the emerging threat of multiple drug resistance. We propose a paradigm-changing approach to address the multi-drug resistant bacteria problem by utilizing Synthetic Biology (SynBio) methodologies to create and evolve “designer” bacteriophages or phages – viruses that specifically infect bacteria – to infect and kill newly emerging pathogenic bacterial strains WITHOUT the need for chemical antibiotics. A major advantage of using phage to combat pathogenic bacteria is that phages can co-evolve with their bacterial host, and Sandia can be the first in the world to establish an industrial scale Synthetic Biology pipeline for phage directed evolution as a safe, targeted, customizable solution to bacterial drug resistance. Since there is no existing phage directed evolution effort within or outside of Sandia, this proposal is suitable as a high-risk LDRD effort to create the first pipeline for such an endeavor. The high potential reward nature of this proposal will be the immediate impact in decontamination and restoration of surfaces and infrastructure, with longer term impact in human or animal therapeutics. The synthetic biology and screening approaches will lead to fundamental knowledge of phage/bacteria co-evolution, making Sandia a world leader in directed evolution of bacteriophages.

  17. 76 FR 73570 - Pipeline Safety: Miscellaneous Changes to Pipeline Safety Regulations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-29

    ... pipeline facilities to facilitate the removal of liquids and other materials from the gas stream. These... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Parts... Changes to Pipeline Safety Regulations AGENCY: Pipeline and Hazardous Materials Safety Administration...

  18. Edge Bioinformatics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lo, Chien-Chi

    2015-08-03

Edge Bioinformatics is a developmental bioinformatics and data management platform that seeks to supply laboratories with bioinformatics pipelines for analyzing data associated with common sample use cases. Edge Bioinformatics enables sequencing as a solution in forward-deployed situations where human resources, space, bandwidth, and time are limited. The Edge bioinformatics pipeline was designed around the following use cases and is specific to Illumina sequencing reads. 1. Assay performance adjudication (PCR): analysis of an existing PCR assay in a genomic context, and automated design of a new assay to resolve conflicting results; 2. Clinical presentation with extreme symptoms: characterization of a known pathogen or co-infection with (a) a novel emerging disease outbreak or (b) environmental surveillance.

  19. NMR in the SPINE Structural Proteomics project.

    PubMed

    Ab, E; Atkinson, A R; Banci, L; Bertini, I; Ciofi-Baffoni, S; Brunner, K; Diercks, T; Dötsch, V; Engelke, F; Folkers, G E; Griesinger, C; Gronwald, W; Günther, U; Habeck, M; de Jong, R N; Kalbitzer, H R; Kieffer, B; Leeflang, B R; Loss, S; Luchinat, C; Marquardsen, T; Moskau, D; Neidig, K P; Nilges, M; Piccioli, M; Pierattelli, R; Rieping, W; Schippmann, T; Schwalbe, H; Travé, G; Trenner, J; Wöhnert, J; Zweckstetter, M; Kaptein, R

    2006-10-01

    This paper describes the developments, role and contributions of the NMR spectroscopy groups in the Structural Proteomics In Europe (SPINE) consortium. Focusing on the development of high-throughput (HTP) pipelines for NMR structure determinations of proteins, all aspects from sample preparation, data acquisition, data processing, data analysis to structure determination have been improved with respect to sensitivity, automation, speed, robustness and validation. Specific highlights are protonless (13)C-direct detection methods and inferential structure determinations (ISD). In addition to technological improvements, these methods have been applied to deliver over 60 NMR structures of proteins, among which are five that failed to crystallize. The inclusion of NMR spectroscopy in structural proteomics pipelines improves the success rate for protein structure determinations.

  20. Development and Applications of Pipeline Steel in Long-Distance Gas Pipeline of China

    NASA Astrophysics Data System (ADS)

    Chunyong, Huo; Yang, Li; Lingkang, Ji

In past decades, with the wide use of microalloying and Thermal Mechanical Control Processing (TMCP) technology, a good match of strength, toughness, plasticity and weldability in pipeline steel has been achieved, so that oil and gas pipelines have been greatly developed in China to meet strong domestic energy demand. In this paper, the development history of pipeline steel and gas pipelines in China is briefly reviewed. The microstructure characteristics and mechanical performance of pipeline steels used in some representative Chinese gas pipelines built at different stages are summarized. Through analysis of the evolution of the pipeline service environment, some prospective development trends for the application of pipeline steel in China are also presented.

  1. Live imaging of muscles in Drosophila metamorphosis: Towards high-throughput gene identification and function analysis.

    PubMed

    Puah, Wee Choo; Wasser, Martin

    2016-03-01

    Time-lapse microscopy in developmental biology is an emerging tool for functional genomics. Phenotypic effects of gene perturbations can be studied non-invasively at multiple time points in chronological order. During metamorphosis of Drosophila melanogaster, time-lapse microscopy using fluorescent reporters allows visualization of alternative fates of larval muscles, which are a model for the study of genes related to muscle wasting. While doomed muscles enter hormone-induced programmed cell death, a smaller population of persistent muscles survives to adulthood and undergoes morphological remodeling that involves atrophy in early, and hypertrophy in late pupation. We developed a method that combines in vivo imaging, targeted gene perturbation and image analysis to identify and characterize genes involved in muscle development. Macrozoom microscopy helps to screen for interesting muscle phenotypes, while confocal microscopy in multiple locations over 4-5 days produces time-lapse images that are used to quantify changes in cell morphology. Performing a similar investigation using fixed pupal tissues would be too time-consuming and therefore impractical. We describe three applications of our pipeline. First, we show how quantitative microscopy can track and measure morphological changes of muscle throughout metamorphosis and analyze genes involved in atrophy. Second, our assay can help to identify genes that either promote or prevent histolysis of abdominal muscles. Third, we apply our approach to test new fluorescent proteins as live markers for muscle development. We describe mKO2 tagged Cysteine proteinase 1 (Cp1) and Troponin-I (TnI) as examples of proteins showing developmental changes in subcellular localization. Finally, we discuss strategies to improve throughput of our pipeline to permit genome-wide screens in the future. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Large-Scale Test of Dynamic Correlation Processors: Implications for Correlation-Based Seismic Pipelines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dodge, D. A.; Harris, D. B.

    Correlation detectors are of considerable interest to the seismic monitoring communities because they offer reduced detection thresholds and combine detection, location and identification functions into a single operation. They appear to be ideal for applications requiring screening of frequent repeating events. However, questions remain about how broadly empirical correlation methods are applicable. We describe the effectiveness of banks of correlation detectors in a system that combines traditional power detectors with correlation detectors in terms of efficiency, which we define to be the fraction of events detected by the correlators. This paper elaborates and extends the concept of a dynamic correlation detection framework – a system which autonomously creates correlation detectors from event waveforms detected by power detectors – and reports observed performance on a network of arrays in terms of efficiency. We performed a large-scale test of dynamic correlation processors on an 11 terabyte global dataset using 25 arrays in the single frequency band 1-3 Hz. The system found over 3.2 million unique signals and produced 459,747 screened detections. A very satisfying result is that, on average, efficiency grows with time and, after nearly 16 years of operation, exceeds 47% for events observed over all distance ranges and approaches 70% for near-regional and 90% for local events. This observation suggests that future pipeline architectures should make extensive use of correlation detectors, principally for decluttering observations of local and near-regional events. Our results also suggest that future operations based on correlation detection will require commodity large-scale computing infrastructure, since the numbers of correlators in an autonomous system can grow into the hundreds of thousands.
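
The efficiency figures above rest on banks of normalized cross-correlation detectors matched to previously detected waveforms. As a rough single-channel illustration (not the authors' code; the function name, threshold value, and synthetic data are assumptions for this sketch), a template correlator slides a demeaned, unit-normalized template along a continuous stream and flags windows whose correlation coefficient exceeds a threshold:

```python
import math

def correlation_detect(template, stream, threshold=0.7):
    """Flag windows of `stream` whose normalized cross-correlation with
    `template` meets `threshold`. Returns (start_index, cc) pairs.
    Illustrative sketch; a production detector operates on multichannel
    array data and resolves overlapping triggers."""
    n = len(template)
    tm = sum(template) / n
    t = [x - tm for x in template]
    tnorm = math.sqrt(sum(x * x for x in t))
    if tnorm == 0.0:
        return []  # a constant template carries no information
    t = [x / tnorm for x in t]
    detections = []
    for i in range(len(stream) - n + 1):
        w = stream[i:i + n]
        wm = sum(w) / n
        w = [x - wm for x in w]
        wnorm = math.sqrt(sum(x * x for x in w))
        if wnorm == 0.0:
            continue  # flat window: correlation undefined
        cc = sum(a * b for a, b in zip(t, w)) / wnorm
        if cc >= threshold:
            detections.append((i, cc))
    return detections
```

In a dynamic framework like the one described, each event declared by a power detector would spawn one such template, so the detector bank grows autonomously over time.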

  3. Cheminformatics-aided discovery of small-molecule Protein-Protein Interaction (PPI) dual inhibitors of Tumor Necrosis Factor (TNF) and Receptor Activator of NF-κB Ligand (RANKL).

    PubMed

    Melagraki, Georgia; Ntougkos, Evangelos; Rinotas, Vagelis; Papaneophytou, Christos; Leonis, Georgios; Mavromoustakos, Thomas; Kontopidis, George; Douni, Eleni; Afantitis, Antreas; Kollias, George

    2017-04-01

    We present an in silico drug discovery pipeline developed and applied for the identification and virtual screening of small-molecule Protein-Protein Interaction (PPI) compounds that act as dual inhibitors of TNF and RANKL through the trimerization interface. The cheminformatics part of the pipeline was developed by combining structure-based with ligand-based modeling using the largest available set of known TNF inhibitors in the literature (2481 small molecules). To facilitate virtual screening, the consensus predictive model was made freely available at: http://enalos.insilicotox.com/TNFPubChem/. We thus generated a priority list of nine small molecules as candidates for direct TNF function inhibition. In vitro evaluation of these compounds led to the selection of two small molecules that act as potent direct inhibitors of TNF function, with IC50 values comparable to those of a previously-described direct inhibitor (SPD304), but with significantly reduced toxicity. These molecules were also identified as RANKL inhibitors and validated in vitro with respect to this second functionality. Direct binding of the two compounds was confirmed both for TNF and RANKL, as well as their ability to inhibit the biologically-active trimer forms. Molecular dynamics calculations were also carried out for the two small molecules in each protein to offer additional insight into the interactions that govern TNF and RANKL complex formation. To our knowledge, these compounds, namely T8 and T23, constitute the second and third published examples of dual small-molecule direct function inhibitors of TNF and RANKL, and could serve as lead compounds for the development of novel treatments for inflammatory and autoimmune diseases.

  4. Enriching the biological space of natural products and charting drug metabolites, through real time biotransformation monitoring: The NMR tube bioreactor.

    PubMed

    Chatzikonstantinou, Alexandra V; Chatziathanasiadou, Maria V; Ravera, Enrico; Fragai, Marco; Parigi, Giacomo; Gerothanassis, Ioannis P; Luchinat, Claudio; Stamatis, Haralambos; Tzakos, Andreas G

    2018-01-01

    Natural products offer a wide range of biological activities, but they are not easily integrated in the drug discovery pipeline, because of their inherent scaffold intricacy and the associated complexity in their synthetic chemistry. Enzymes may be used to perform regioselective and stereoselective incorporation of functional groups in the natural product core, avoiding harsh reaction conditions, several protection/deprotection and purification steps. Herein, we developed a three step protocol carried out inside an NMR-tube. 1st-step: STD-NMR was used to predict the: i) capacity of natural products as enzyme substrates and ii) possible regioselectivity of the biotransformations. 2nd-step: The real-time formation of multiple-biotransformation products in the NMR-tube bioreactor was monitored in-situ. 3rd-step: STD-NMR was applied in the mixture of the biotransformed products to screen ligands for protein targets. Herein, we developed a simple and time-effective process, the "NMR-tube bioreactor", that is able to: (i) predict which component of a mixture of natural products can be enzymatically transformed, (ii) monitor in situ the transformation efficacy and regioselectivity in crude extracts and multiple substrate biotransformations without fractionation and (iii) simultaneously screen for interactions of the biotransformation products with pharmaceutical protein targets. We have developed a green, time-, and cost-effective process that provide a simple route from natural products to lead compounds for drug discovery. This process can speed up the most crucial steps in the early drug discovery process, and reduce the chemical manipulations usually involved in the pipeline, improving the environmental compatibility. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Large-Scale Test of Dynamic Correlation Processors: Implications for Correlation-Based Seismic Pipelines

    DOE PAGES

    Dodge, D. A.; Harris, D. B.

    2016-03-15

    Correlation detectors are of considerable interest to the seismic monitoring communities because they offer reduced detection thresholds and combine detection, location and identification functions into a single operation. They appear to be ideal for applications requiring screening of frequent repeating events. However, questions remain about how broadly empirical correlation methods are applicable. We describe the effectiveness of banks of correlation detectors in a system that combines traditional power detectors with correlation detectors in terms of efficiency, which we define to be the fraction of events detected by the correlators. This paper elaborates and extends the concept of a dynamic correlation detection framework – a system which autonomously creates correlation detectors from event waveforms detected by power detectors – and reports observed performance on a network of arrays in terms of efficiency. We performed a large-scale test of dynamic correlation processors on an 11 terabyte global dataset using 25 arrays in the single frequency band 1-3 Hz. The system found over 3.2 million unique signals and produced 459,747 screened detections. A very satisfying result is that, on average, efficiency grows with time and, after nearly 16 years of operation, exceeds 47% for events observed over all distance ranges and approaches 70% for near-regional and 90% for local events. This observation suggests that future pipeline architectures should make extensive use of correlation detectors, principally for decluttering observations of local and near-regional events. Our results also suggest that future operations based on correlation detection will require commodity large-scale computing infrastructure, since the numbers of correlators in an autonomous system can grow into the hundreds of thousands.

  6. Combining Functional and Structural Genomics to Sample the Essential Burkholderia Structome

    PubMed Central

    Baugh, Loren; Gallagher, Larry A.; Patrapuvich, Rapatbhorn; Clifton, Matthew C.; Gardberg, Anna S.; Edwards, Thomas E.; Armour, Brianna; Begley, Darren W.; Dieterich, Shellie H.; Dranow, David M.; Abendroth, Jan; Fairman, James W.; Fox, David; Staker, Bart L.; Phan, Isabelle; Gillespie, Angela; Choi, Ryan; Nakazawa-Hewitt, Steve; Nguyen, Mary Trang; Napuli, Alberto; Barrett, Lynn; Buchko, Garry W.; Stacy, Robin; Myler, Peter J.; Stewart, Lance J.; Manoil, Colin; Van Voorhis, Wesley C.

    2013-01-01

    Background The genus Burkholderia includes pathogenic gram-negative bacteria that cause melioidosis, glanders, and pulmonary infections of patients with cancer and cystic fibrosis. Drug resistance has made development of new antimicrobials critical. Many approaches to discovering new antimicrobials, such as structure-based drug design and whole cell phenotypic screens followed by lead refinement, require high-resolution structures of proteins essential to the pathogen. Methodology/Principal Findings We experimentally identified 406 putative essential genes in B. thailandensis, a low-virulence species phylogenetically similar to B. pseudomallei, the causative agent of melioidosis, using saturation-level transposon mutagenesis and next-generation sequencing (Tn-seq). We selected 315 protein products of these genes based on structure-determination criteria, such as excluding very large and/or integral membrane proteins, and entered them into the Seattle Structural Genomics Center for Infectious Disease (SSGCID) structure determination pipeline. To maximize structural coverage of these targets, we applied an “ortholog rescue” strategy for those producing insoluble or difficult to crystallize proteins, resulting in the addition of 387 orthologs (or paralogs) from seven other Burkholderia species into the SSGCID pipeline. This structural genomics approach yielded structures from 31 putative essential targets from B. thailandensis, and 25 orthologs from other Burkholderia species, yielding an overall structural coverage for 49 of the 406 essential gene families, with a total of 88 depositions into the Protein Data Bank. Of these, 25 proteins have properties of a potential antimicrobial drug target, i.e., no close human homolog, part of an essential metabolic pathway, and a deep binding pocket. We describe the structures of several potential drug targets in detail.
Conclusions/Significance This collection of structures, solubility and experimental essentiality data provides a resource for development of drugs against infections and diseases caused by Burkholderia. All expression clones and proteins created in this study are freely available by request. PMID:23382856

  7. 75 FR 63774 - Pipeline Safety: Safety of On-Shore Hazardous Liquid Pipelines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-18

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... Pipelines AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), Department of... Gas Pipeline Safety Act of 1968, Public Law 90-481, delegated to DOT the authority to develop...

  8. 49 CFR 195.210 - Pipeline location.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Pipeline location. 195.210 Section 195.210 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY... PIPELINE Construction § 195.210 Pipeline location. (a) Pipeline right-of-way must be selected to avoid, as...

  9. 75 FR 5244 - Pipeline Safety: Integrity Management Program for Gas Distribution Pipelines; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-02

    ... Management Program for Gas Distribution Pipelines; Correction AGENCY: Pipeline and Hazardous Materials Safety... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... Regulations to require operators of gas distribution pipelines to develop and implement integrity management...

  10. 77 FR 61825 - Pipeline Safety: Notice of Public Meeting on Pipeline Data

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-11

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... program performance measures for gas distribution, gas transmission, and hazardous liquids pipelines. The... distribution pipelines (49 CFR 192.1007(e)), gas transmission pipelines (49 CFR 192.945) and hazardous liquids...

  11. 78 FR 41991 - Pipeline Safety: Potential for Damage to Pipeline Facilities Caused by Flooding

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-12

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No...: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. ACTION: Notice; Issuance of Advisory... Gas and Hazardous Liquid Pipeline Systems. Subject: Potential for Damage to Pipeline Facilities Caused...

  12. 78 FR 41496 - Pipeline Safety: Meetings of the Gas and Liquid Pipeline Advisory Committees

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-10

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2013-0156] Pipeline Safety: Meetings of the Gas and Liquid Pipeline Advisory Committees AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Notice of advisory committee...

  13. 76 FR 70953 - Pipeline Safety: Safety of Gas Transmission Pipelines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-16

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part 192 [Docket ID PHMSA-2011-0023] RIN 2137-AE72 Pipeline Safety: Safety of Gas Transmission Pipelines AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. ACTION: Advance notice of...

  14. The Use of a Dynamic Screening of Phonological Awareness to Predict Risk for Reading Disabilities in Kindergarten Children

    PubMed Central

    Bridges, Mindy Sittner; Catts, Hugh W.

    2013-01-01

    This study examined the usefulness and predictive validity of a dynamic screening of phonological awareness in two samples of kindergarten children. In one sample (n = 90), the predictive validity of the dynamic assessment was compared to a static version of the same screening measure. In the second sample (n = 96), the dynamic screening measure was compared to a commonly used screening tool, Dynamic Indicators of Basic Early Literacy Skills Initial Sound Fluency. Results showed that the dynamic screening measure uniquely predicted end-of-year reading achievement and outcomes in both samples. These results provide preliminary support for the usefulness of a dynamic screening measure of phonological awareness for kindergarten students. PMID:21571700

  15. A Description of the Clinical Proteomic Tumor Analysis Consortium (CPTAC) Common Data Analysis Pipeline

    PubMed Central

    Rudnick, Paul A.; Markey, Sanford P.; Roth, Jeri; Mirokhin, Yuri; Yan, Xinjian; Tchekhovskoi, Dmitrii V.; Edwards, Nathan J.; Thangudu, Ratna R.; Ketchum, Karen A.; Kinsinger, Christopher R.; Mesri, Mehdi; Rodriguez, Henry; Stein, Stephen E.

    2016-01-01

    The Clinical Proteomic Tumor Analysis Consortium (CPTAC) has produced large proteomics datasets from the mass spectrometric interrogation of tumor samples previously analyzed by The Cancer Genome Atlas (TCGA) program. The availability of the genomic and proteomic data is enabling proteogenomic study for both reference (i.e., contained in major sequence databases) and non-reference markers of cancer. The CPTAC labs have focused on colon, breast, and ovarian tissues in the first round of analyses; spectra from these datasets were produced from 2D LC-MS/MS analyses and represent deep coverage. To reduce the variability introduced by disparate data analysis platforms (e.g., software packages, versions, parameters, sequence databases, etc.), the CPTAC Common Data Analysis Platform (CDAP) was created. The CDAP produces both peptide-spectrum-match (PSM) reports and gene-level reports. The pipeline processes raw mass spectrometry data according to the following: (1) Peak-picking and quantitative data extraction, (2) database searching, (3) gene-based protein parsimony, and (4) false discovery rate (FDR)-based filtering. The pipeline also produces localization scores for the phosphopeptide enrichment studies using the PhosphoRS program. Quantitative information for each of the datasets is specific to the sample processing, with PSM and protein reports containing the spectrum-level or gene-level (“rolled-up”) precursor peak areas and spectral counts for label-free or reporter ion log-ratios for 4plex iTRAQ™. The reports are available in simple tab-delimited formats and, for the PSM-reports, in mzIdentML. The goal of the CDAP is to provide standard, uniform reports for all of the CPTAC data, enabling comparisons between different samples and cancer types as well as across the major ‘omics fields. PMID:26860878
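
Step (4) of the pipeline, false discovery rate (FDR)-based filtering, is commonly implemented with a target-decoy strategy: PSMs are ranked by score and the list is cut where the estimated decoy fraction exceeds the chosen FDR. The sketch below is a generic illustration under that assumption, not the CDAP's actual procedure, and the function name and data layout are invented:

```python
def filter_psms_at_fdr(psms, fdr_cutoff=0.01):
    """Generic target-decoy FDR filter for peptide-spectrum matches.

    `psms` is a list of (score, is_decoy) tuples, higher score = better.
    Returns the target PSMs in the largest top-scoring prefix whose
    estimated FDR (decoys / targets) is within `fdr_cutoff`.
    Illustrative only; the CDAP's filtering logic is more involved."""
    ranked = sorted(psms, key=lambda p: p[0], reverse=True)
    best_cut, targets, decoys = 0, 0, 0
    for k, (score, is_decoy) in enumerate(ranked, start=1):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        # Estimated FDR among the top-k PSMs.
        if targets and decoys / targets <= fdr_cutoff:
            best_cut = k
    return [p for p in ranked[:best_cut] if not p[1]]
```

The same idea applies at the gene level after parsimony, with scores aggregated per gene rather than per spectrum.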

  16. A Description of the Clinical Proteomic Tumor Analysis Consortium (CPTAC) Common Data Analysis Pipeline.

    PubMed

    Rudnick, Paul A; Markey, Sanford P; Roth, Jeri; Mirokhin, Yuri; Yan, Xinjian; Tchekhovskoi, Dmitrii V; Edwards, Nathan J; Thangudu, Ratna R; Ketchum, Karen A; Kinsinger, Christopher R; Mesri, Mehdi; Rodriguez, Henry; Stein, Stephen E

    2016-03-04

    The Clinical Proteomic Tumor Analysis Consortium (CPTAC) has produced large proteomics data sets from the mass spectrometric interrogation of tumor samples previously analyzed by The Cancer Genome Atlas (TCGA) program. The availability of the genomic and proteomic data is enabling proteogenomic study for both reference (i.e., contained in major sequence databases) and nonreference markers of cancer. The CPTAC laboratories have focused on colon, breast, and ovarian tissues in the first round of analyses; spectra from these data sets were produced from 2D liquid chromatography-tandem mass spectrometry analyses and represent deep coverage. To reduce the variability introduced by disparate data analysis platforms (e.g., software packages, versions, parameters, sequence databases, etc.), the CPTAC Common Data Analysis Platform (CDAP) was created. The CDAP produces both peptide-spectrum-match (PSM) reports and gene-level reports. The pipeline processes raw mass spectrometry data according to the following: (1) peak-picking and quantitative data extraction, (2) database searching, (3) gene-based protein parsimony, and (4) false-discovery rate-based filtering. The pipeline also produces localization scores for the phosphopeptide enrichment studies using the PhosphoRS program. Quantitative information for each of the data sets is specific to the sample processing, with PSM and protein reports containing the spectrum-level or gene-level ("rolled-up") precursor peak areas and spectral counts for label-free or reporter ion log-ratios for 4plex iTRAQ. The reports are available in simple tab-delimited formats and, for the PSM-reports, in mzIdentML. The goal of the CDAP is to provide standard, uniform reports for all of the CPTAC data to enable comparisons between different samples and cancer types as well as across the major omics fields.

  17. Electrochemical Investigation of the Corrosion of Different Microstructural Phases of X65 Pipeline Steel under Saturated Carbon Dioxide Conditions

    PubMed Central

    Yang, Yuanfeng; Joshi, Gaurav R.; Akid, Robert

    2015-01-01

    The aim of this research was to investigate the influence of metallurgy on the corrosion behaviour of separate weld zone (WZ) and parent plate (PP) regions of X65 pipeline steel in a solution of deionised water saturated with CO2, at two different temperatures (55 °C and 80 °C) and at an initial pH of ~4.0. In addition, a non-electrochemical immersion experiment was also performed at 80 °C in CO2 on a sample portion of X65 pipeline containing part of a weld section, together with the adjacent heat affected zones (HAZ) and parent material. Electrochemical impedance spectroscopy (EIS) was used to evaluate the corrosion behaviour of the separate weld and parent plate samples. This study seeks to understand the influence of the different microstructures within the different zones of the welded X65 pipe on corrosion performance in CO2 environments, with particular attention given to the formation of surface scales and their composition/significance. The results obtained from grazing incidence X-ray diffraction (GIXRD) measurements suggest that, post immersion, the parent plate substrate is scale free, with only features arising from ferrite (α-Fe) and cementite (Fe3C) apparent. In contrast, at 80 °C, GIXRD from the weld zone substrate, and from the weld zone/heat affected zone of the non-electrochemical sample, indicates the presence of siderite (FeCO3) and chukanovite (Fe2CO3(OH)2) phases. Scanning Electron Microscopy (SEM) on this surface confirmed the presence of characteristic discrete cube-shaped crystallites of siderite together with plate-like clusters of chukanovite.

  18. Metagenomic Assembly Reveals Hosts of Antibiotic Resistance Genes and the Shared Resistome in Pig, Chicken, and Human Feces.

    PubMed

    Ma, Liping; Xia, Yu; Li, Bing; Yang, Ying; Li, Li-Guan; Tiedje, James M; Zhang, Tong

    2016-01-05

    The risk associated with antibiotic resistance disseminating from animal and human feces is an urgent public health issue. In the present study, we sought to establish a pipeline for annotating antibiotic resistance genes (ARGs) based on metagenomic assembly to investigate ARGs and their co-occurrence with associated genetic elements. Genetic elements found on the assembled genomic fragments include mobile genetic elements (MGEs) and metal resistance genes (MRGs). We then explored the hosts of these resistance genes and the shared resistome of pig, chicken and human fecal samples. High levels of tetracycline, multidrug, erythromycin, and aminoglycoside resistance genes were discovered in these fecal samples. In particular, a significantly high level of ARGs (7762 ×/Gb) was detected in adult chicken feces, indicating a higher ARG contamination level than in the other fecal samples. Many ARG arrangements (e.g., macA-macB and tetA-tetR) were discovered to be shared by chicken, pig and human feces. In addition, MGEs such as the aadA5-dfrA17-carrying class 1 integron were identified on an assembled scaffold of chicken feces and are also carried by human pathogens. Differential coverage binning analysis revealed significant ARG enrichment in adult chicken feces. A draft genome, annotated as multidrug-resistant Escherichia coli, was retrieved from chicken feces metagenomes and was determined to carry diverse ARGs (multidrug, acriflavine, and macrolide). The present study demonstrates the determination of ARG hosts and the shared resistome from metagenomic data sets and successfully establishes the relationship between ARGs, hosts, and environments. This ARG annotation pipeline based on metagenomic assembly will help to bridge the knowledge gaps regarding ARG-associated genes and ARG hosts with metagenomic data sets. Moreover, this pipeline will facilitate the evaluation of environmental risks in the genetic context of ARGs.
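
The co-occurrence analysis on assembled fragments can be pictured as scanning each scaffold's annotations for an ARG sitting on the same contig as a mobile genetic element or metal resistance gene. The following is a hypothetical sketch of that step; the data layout and function name are assumptions for illustration, not the paper's code:

```python
def co_occurring_elements(scaffold_annotations):
    """Report scaffolds on which an ARG co-occurs with a mobile genetic
    element (MGE) or metal resistance gene (MRG).

    `scaffold_annotations` maps scaffold id -> set of (kind, name)
    feature labels, e.g. ("ARG", "tetA") or ("MGE", "intI1").
    Returns scaffold id -> (sorted ARG names, sorted MGE/MRG names).
    Hypothetical layout; the study's pipeline annotates assembled
    scaffolds against reference databases first."""
    hits = {}
    for scaffold, feats in scaffold_annotations.items():
        args = {name for kind, name in feats if kind == "ARG"}
        mobile = {name for kind, name in feats if kind in ("MGE", "MRG")}
        if args and mobile:
            hits[scaffold] = (sorted(args), sorted(mobile))
    return hits
```

Scaffolds returned by such a scan are the ones suggesting mobilizable resistance, like the integron-borne aadA5-dfrA17 arrangement mentioned above.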

  19. Vipie: web pipeline for parallel characterization of viral populations from multiple NGS samples.

    PubMed

    Lin, Jake; Kramna, Lenka; Autio, Reija; Hyöty, Heikki; Nykter, Matti; Cinek, Ondrej

    2017-05-15

    Next generation sequencing (NGS) technology allows laboratories to investigate virome composition in clinical and environmental samples in a culture-independent way. There is a need for bioinformatic tools capable of parallel processing of virome sequencing data by exactly identical methods: this is especially important in studies of multifactorial diseases, or in parallel comparison of laboratory protocols. We have developed a web-based application allowing direct upload of sequences from multiple virome samples using custom parameters. The samples are then processed in parallel using an identical protocol, and can be easily reanalyzed. The pipeline performs de novo assembly, taxonomic classification of viruses as well as sample analyses based on user-defined grouping categories. Tables of virus abundance are produced from cross-validation by remapping the sequencing reads to a union of all observed reference viruses. In addition, read sets and reports are created after processing unmapped reads against known human and bacterial ribosome references. Secured interactive results are dynamically plotted with population and diversity charts, clustered heatmaps and a sortable and searchable abundance table. The Vipie web application is a unique tool for multi-sample metagenomic analysis of viral data, producing searchable hit tables, interactive population maps, alpha diversity measures and clustered heatmaps that are grouped in applicable custom sample categories. Known references such as the human genome and bacterial ribosomal genes are optionally removed from unmapped ('dark matter') reads. Secured results are accessible and shareable on modern browsers. Vipie is a freely available web-based tool whose code is open source.
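
The abundance-table step, counting reads per virus per sample after remapping all reads to the union of observed reference viruses, can be sketched as follows. The data layout and function name are illustrative assumptions; Vipie's internal formats differ:

```python
from collections import defaultdict

def abundance_table(read_assignments):
    """Build a per-sample virus abundance table from read-to-reference
    assignments. `read_assignments` maps sample name -> list of assigned
    virus names (one entry per mapped read). Every virus observed in any
    sample appears in every sample's row, with zero counts where absent,
    so samples are directly comparable. Hypothetical sketch."""
    viruses = sorted({v for reads in read_assignments.values() for v in reads})
    table = {}
    for sample, reads in read_assignments.items():
        counts = defaultdict(int)
        for v in reads:
            counts[v] += 1
        table[sample] = {v: counts[v] for v in viruses}
    return table
```

Remapping to the union of references is what makes the cross-sample comparison fair: a virus assembled in only one sample still gets its reads counted in every sample.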

  20. The Cost-Effectiveness of Cervical Self-Sampling to Improve Routine Cervical Cancer Screening: The Importance of Respondent Screening History and Compliance.

    PubMed

    Burger, Emily A; Sy, Stephen; Nygård, Mari; Kim, Jane J

    2017-01-01

    Human papillomavirus (HPV) testing allows women to self-collect cervico-vaginal cells at home (i.e., self-sampling). Using primary data from a randomized pilot study, we evaluated the long-term consequences and cost-effectiveness of using self-sampling to improve participation to routine cervical cancer screening in Norway. We compared a strategy reflecting screening participation (using reminder letters) to strategies that involved mailing self-sampling device kits to women noncompliant to screening within a 5- or 10-year period under two scenarios: (A) self-sampling respondents had moderate under-screening histories, or (B) respondents to self-sampling had moderate and severe under-screening histories. Model outcomes included quality-adjusted life-years (QALY) and lifetime costs. The "most cost-effective" strategy was identified as the strategy just below $100,000 per QALY gained. Mailing self-sampling device kits to all women noncompliant to screening within a 5- or 10-year period can be more effective and less costly than the current reminder letter policy; however, the optimal self-sampling strategy was dependent on the profile of self-sampling respondents. For example, "10-yearly self-sampling" is preferred ($95,500 per QALY gained) if "5-yearly self-sampling" could only attract moderate under-screeners; however, "5-yearly self-sampling" is preferred if this strategy could additionally attract severe under-screeners. Targeted self-sampling of noncompliers likely represents good value-for-money; however, the preferred strategy is contingent on the screening histories and compliance of respondents. The magnitude of the health benefit and optimal self-sampling strategy is dependent on the profile and behavior of respondents. Health authorities should understand these factors prior to selecting and implementing a self-sampling policy. Cancer Epidemiol Biomarkers Prev; 26(1); 95-103. ©2016 AACR. ©2016 American Association for Cancer Research.
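
The decision rule described, identifying the most effective strategy whose incremental cost-effectiveness ratio (ICER) stays under the $100,000-per-QALY willingness-to-pay threshold, can be sketched as below. This is a simplified illustration; the function name and the numbers in the test are invented, not the study's results, and a full analysis would first prune dominated and extended-dominated strategies:

```python
def most_cost_effective(strategies, threshold=100_000):
    """Select the most effective strategy whose ICER, computed against
    the last accepted strategy, is below the willingness-to-pay
    threshold in $/QALY. `strategies` is a list of (name, cost, qalys)
    tuples. Simplified sketch of incremental cost-effectiveness
    analysis."""
    ranked = sorted(strategies, key=lambda s: s[2])  # ascending QALYs
    chosen = ranked[0]
    for cur in ranked[1:]:
        d_cost = cur[1] - chosen[1]
        d_qaly = cur[2] - chosen[2]
        if d_qaly <= 0:
            continue  # no health gain over the current choice
        if d_cost / d_qaly < threshold:
            chosen = cur  # cost per extra QALY is acceptable
    return chosen[0]
```

Under this rule the preferred self-sampling interval shifts with the threshold and with the incremental QALYs each strategy delivers, which is why the respondents' screening histories drive the result.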

  1. 77 FR 2126 - Pipeline Safety: Implementation of the National Registry of Pipeline and Liquefied Natural Gas...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-13

    ... Natural Gas Operators AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No...: ``Pipeline Safety: Updates to Pipeline and Liquefied Natural Gas Reporting Requirements.'' The final rule...

  2. 76 FR 303 - Pipeline Safety: Safety of On-Shore Hazardous Liquid Pipelines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-04

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part 195 [Docket ID PHMSA-2010-0229] RIN 2137-AE66 Pipeline Safety: Safety of On-Shore Hazardous Liquid Pipelines AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Notice of...

  3. 78 FR 13075 - Intent To Request Renewal From OMB of One Current Public Collection of Information: Pipeline...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-26

    ... From OMB of One Current Public Collection of Information: Pipeline Corporate Security Review Program... current security practices in the pipeline industry by way of TSA's Pipeline Corporate Security Review... Collection Requirement The TSA Pipeline Security Branch is responsible for conducting Pipeline Corporate...

  4. 78 FR 42889 - Pipeline Safety: Reminder of Requirements for Utility LP-Gas and LPG Pipeline Systems

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-18

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part 192 [Docket No. PHMSA-2013-0097] Pipeline Safety: Reminder of Requirements for Utility LP-Gas and LPG Pipeline Systems AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION...

  5. 30 CFR 250.1005 - Inspection requirements for DOI pipelines.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 2 2011-07-01 2011-07-01 false Inspection requirements for DOI pipelines. 250.1005 Section 250.1005 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, REGULATION, AND ENFORCEMENT... Pipelines and Pipeline Rights-of-Way § 250.1005 Inspection requirements for DOI pipelines. (a) Pipeline...

  6. PipelineDog: a simple and flexible graphic pipeline construction and maintenance tool.

    PubMed

    Zhou, Anbo; Zhang, Yeting; Sun, Yazhou; Xing, Jinchuan

    2018-05-01

Analysis pipelines are an essential part of bioinformatics research, and ad hoc pipelines are frequently created by researchers for prototyping and proof-of-concept purposes. However, most existing pipeline management systems or workflow engines are too complex for rapid prototyping or for learning the pipeline concept. A lightweight, user-friendly and flexible solution is thus desirable. In this study, we developed a new pipeline construction and maintenance tool, PipelineDog. This is a web-based integrated development environment with a modern web graphical user interface. It offers cross-platform compatibility, project management capabilities, code formatting and error checking functions, and an online repository. It uses an easy-to-read/write script system that encourages code reuse. With the online repository, it also encourages sharing of pipelines, which enhances analysis reproducibility and accountability. For most users, PipelineDog requires no software installation. Overall, this web application provides a way to rapidly create and easily manage pipelines. The PipelineDog web app is freely available at http://web.pipeline.dog. The command line version is available at http://www.npmjs.com/package/pipelinedog and the online repository at http://repo.pipeline.dog. ysun@kean.edu or xing@biology.rutgers.edu or ysun@diagnoa.com. Supplementary data are available at Bioinformatics online.

  7. Open Access High Throughput Drug Discovery in the Public Domain: A Mount Everest in the Making

    PubMed Central

    Roy, Anuradha; McDonald, Peter R.; Sittampalam, Sitta; Chaguturu, Rathnam

    2013-01-01

High throughput screening (HTS) facilitates screening large numbers of compounds against a biochemical target of interest using validated biological or biophysical assays. In recent years, a significant number of drugs in clinical trials originated from HTS campaigns, validating HTS as a bona fide mechanism for hit finding. In the current drug discovery landscape, the pharmaceutical industry is embracing open innovation strategies with academia to maximize their research capabilities and to feed their drug discovery pipeline. The goals of academic research have therefore expanded from target identification and validation to probe discovery, chemical genomics, and compound library screening. This trend is reflected in the emergence of HTS centers in the public domain over the past decade, ranging in size from modestly equipped academic screening centers to well endowed Molecular Libraries Probe Centers Network (MLPCN) centers funded by the NIH Roadmap initiative. These centers facilitate a comprehensive approach to probe discovery in academia and utilize both classical and cutting-edge assay technologies for executing primary and secondary screening campaigns. The various facets of academic HTS centers as well as their implications on technology transfer and drug discovery are discussed, and a roadmap for successful drug discovery in the public domain is presented. New lead discovery against therapeutic targets, especially those involving the rare and neglected diseases, is indeed a Mount Everest-sized task, and requires diligent implementation of the pharmaceutical industry’s best practices for a successful outcome. PMID:20809896

  8. Qualification of serological infectious disease assays for the screening of samples from deceased tissue donors.

    PubMed

    Kitchen, A D; Newham, J A

    2011-05-01

    Whilst some of the assays used for serological screening of post-mortem blood samples from deceased tissue donors in some countries have been specifically validated by the manufacturer for this purpose, a significant number of those currently in use globally have not. Although specificity has previously been considered a problem in the screening of such samples, we believe that ensuring sensitivity is more important. The aim of this study was to validate a broader range of assays for the screening of post-mortem blood samples from deceased tissue donors. Six microplate immunoassays currently in use within National Health Service Blood and Transplant (NHSBT) for the screening of blood, tissue and stem cell donations were included. Representative samples from confirmed positive donors were titrated in screen negative post-mortem samples in parallel with normal pooled negative serum to determine if there was any inhibition with the post-mortem samples. There were no significant differences seen (P < 0.005) between the dilution curves obtained for the positive samples diluted in post-mortem samples and normal pooled sera. Although small numbers of samples were studied, it can be surmised that the post-mortem blood samples from deceased tissue donors, collected according to United Kingdom guidelines, are a suitable substrate for the assays evaluated. No diminution of reactivity was seen when dilution with sera from deceased donors was compared to dilution using pooled serum from live donors. In the absence of genuine low titre positive post-mortem samples, the use of samples spiked with various levels of target material provides a means of qualifying serological screening assays used by NHSBT for the screening of post-mortem blood samples from deceased tissue donors.

  9. SEXCMD: Development and validation of sex marker sequences for whole-exome/genome and RNA sequencing.

    PubMed

    Jeong, Seongmun; Kim, Jiwoong; Park, Won; Jeon, Hongmin; Kim, Namshin

    2017-01-01

    Over the last decade, a large number of nucleotide sequences have been generated by next-generation sequencing technologies and deposited to public databases. However, most of these datasets do not specify the sex of individuals sampled because researchers typically ignore or hide this information. Male and female genomes in many species have distinctive sex chromosomes, XX/XY and ZW/ZZ, and expression levels of many sex-related genes differ between the sexes. Herein, we describe how to develop sex marker sequences from syntenic regions of sex chromosomes and use them to quickly identify the sex of individuals being analyzed. Array-based technologies routinely use either known sex markers or the B-allele frequency of X or Z chromosomes to deduce the sex of an individual. The same strategy has been used with whole-exome/genome sequence data; however, all reads must be aligned onto a reference genome to determine the B-allele frequency of the X or Z chromosomes. SEXCMD is a pipeline that can extract sex marker sequences from reference sex chromosomes and rapidly identify the sex of individuals from whole-exome/genome and RNA sequencing after training with a known dataset through a simple machine learning approach. The pipeline counts total numbers of hits from sex-specific marker sequences and identifies the sex of the individuals sampled based on the fact that XX/ZZ samples do not have Y or W chromosome hits. We have successfully validated our pipeline with mammalian (Homo sapiens; XY) and avian (Gallus gallus; ZW) genomes. Typical calculation time when applying SEXCMD to human whole-exome or RNA sequencing datasets is a few minutes, and analyzing human whole-genome datasets takes about 10 minutes. Another important application of SEXCMD is as a quality control measure to avoid mixing samples before bioinformatics analysis. SEXCMD comprises simple Python and R scripts and is freely available at https://github.com/lovemun/SEXCMD.
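The sex-calling rule the abstract describes (XX/ZZ samples yield essentially no hits against Y- or W-specific markers) can be sketched as a simple threshold classifier. This is an illustrative simplification, not SEXCMD's actual code: the `min_frac` cutoff and the hit counts below are invented, and the real pipeline learns its decision rule from a known training dataset.

```python
# Toy sex caller from marker hit counts. The homogametic sex (XX or ZZ)
# should produce essentially no Y-/W-marker hits; a non-trivial fraction
# of such hits indicates the heterogametic sex (XY male / ZW female).
# min_frac=0.05 is an invented cutoff for illustration.

def call_sex(x_or_z_hits, y_or_w_hits, system="XY", min_frac=0.05):
    """Call sex from read counts against sex-specific marker sequences."""
    total = x_or_z_hits + y_or_w_hits
    if total == 0:
        return "undetermined"
    heterogametic = y_or_w_hits / total >= min_frac
    if system == "XY":
        return "male" if heterogametic else "female"
    # ZW system (e.g. birds): the heterogametic sex is female.
    return "female" if heterogametic else "male"
```

For example, 10,000 X-marker hits with 900 Y-marker hits would be called male in an XY system, while the same counts against W markers would be called female in a ZW system.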

  10. Environmental signatures and effects of an oil and gas wastewater spill in the Williston Basin, North Dakota.

    PubMed

    Cozzarelli, I M; Skalak, K J; Kent, D B; Engle, M A; Benthem, A; Mumford, A C; Haase, K; Farag, A; Harper, D; Nagel, S C; Iwanowicz, L R; Orem, W H; Akob, D M; Jaeschke, J B; Galloway, J; Kohler, M; Stoliker, D L; Jolly, G D

    2017-02-01

Wastewaters from oil and gas development pose largely unknown risks to environmental resources. In January 2015, 11.4 ML (million liters) of wastewater (300 g/L TDS) from oil production in the Williston Basin was reported to have leaked from a pipeline, spilling into Blacktail Creek, North Dakota. Geochemical and biological samples were collected in February and June 2015 to identify geochemical signatures of spilled wastewaters as well as biological responses along a 44-km river reach. February water samples had elevated chloride (1030 mg/L) and bromide (7.8 mg/L) downstream from the spill, compared to upstream levels (11 mg/L and <0.4 mg/L, respectively). Lithium (0.25 mg/L), boron (1.75 mg/L) and strontium (7.1 mg/L) were present downstream at 5-10 times upstream concentrations. Light hydrocarbon measurements indicated a persistent thermogenic source of methane in the stream. Semi-volatile hydrocarbons indicative of oil were not detected in filtered samples but low levels, including tetramethylbenzenes and di-methylnaphthalenes, were detected in unfiltered water samples downstream from the spill. Labile sediment-bound barium and strontium concentrations (June 2015) were higher downstream from the Spill Site. Radium activities in sediment downstream from the Spill Site were up to 15 times the upstream activities and, combined with Sr isotope ratios, suggest contributions from the pipeline fluid and support the conclusion that elevated concentrations in Blacktail Creek water are from the leaking pipeline. Results from June 2015 demonstrate the persistence of wastewater effects in Blacktail Creek several months after remediation efforts started. Aquatic health effects were observed in June 2015; fish bioassays showed only 2.5% survival at 7.1 km downstream from the spill compared to 89% at the upstream reference site. Additional potential biological impacts were indicated by estrogenic inhibition in downstream waters. Our findings demonstrate that environmental signatures from wastewater spills are persistent and create the potential for long-term environmental health effects. Published by Elsevier B.V.

  11. A Pipeline for the Analysis of APOGEE Spectra Based on Equivalent Widths

    NASA Astrophysics Data System (ADS)

    Arfon Williams, Rob; Bosley, Corinne; Jones, Hayden; Schiavon, Ricardo P.; Allende-Prieto, Carlos; Bizyaev, Dmitry; Carrera, Ricardo; Cunha, Katia M. L.; Nguyen, Duy; Feuillet, Diane; Frinchaboy, Peter M.; García Pérez, Ana; Hasselquist, Sten; Hayden, Michael R.; Hearty, Fred R.; Holtzman, Jon A.; Johnson, Jennifer; Majewski, Steven R.; Meszaros, Szabolcs; Nidever, David L.; Shetrone, Matthew D.; Smith, Verne V.; Sobeck, Jennifer; Troup, Nicholas William; Wilson, John C.; Zasowski, Gail

    2015-01-01

The Apache Point Observatory Galactic Evolution Experiment (APOGEE) forms part of the third Sloan Digital Sky Survey and has obtained high resolution, high signal-to-noise infrared spectra for ~1.3 × 10^5 stars across the galactic bulge, disc and halo. From these, stellar parameters are derived together with abundances for various elements using the APOGEE Stellar Parameters and Chemical Abundance Pipeline (ASPCAP). In this poster we report preliminary results from application of an alternative stellar parameters and abundances pipeline, based on measurements of equivalent widths of absorption lines in APOGEE spectra. The method is based on a sequential grid inversion algorithm, originally designed for the derivation of ages and elemental abundances of stellar populations from line indices in their integrated spectra. It allows for the rapid processing of large spectroscopic data sets from both current and future surveys, such as APOGEE and APOGEE 2, and it is easily adaptable for application to other very large data sets that are being/will be generated by other massive surveys of the stellar populations of the Galaxy. It will also allow the cross checking of ASPCAP results using an independent method. In this poster we present preliminary results showing estimates of effective temperature and iron abundance [Fe/H] for a subset of the APOGEE sample, comparing with DR12 numbers produced by the ASPCAP pipeline.

  12. Applications of UT results to confirm defects findings by utilization of relevant metallurgical investigations techniques on gas/condensate pipeline working in wet sour gas environment

    NASA Astrophysics Data System (ADS)

    El-Azhari, O. A.; Gajam, S. Y.

    2015-03-01

The gas/condensate pipeline under investigation is a 12 inch diameter, 48 km ASTM A106 steel pipeline, carrying hydrocarbons containing wet CO2 and H2S. The pipeline had exploded in a region 100 m from its terminal after 24 years of service. Hydrogen induced cracking (HIC) and sour gas corrosion were expected due to the presence of wet H2S in the gas analysis. In other areas of the pipeline, ultrasonic testing was performed to determine whether the pipeline could be re-operated. The results showed the presence of internal planar defects, attributed to the existence of either laminations, type II inclusions, or service defects such as HIC and stepwise cracking (SWC). Metallurgical investigations were conducted on fractured samples as per NACE standard TM-0284-84. The results showed macroscopic cracks in the form of SWC, and the steel microstructure contained MnS inclusions. Crack sensitivity analyses were calculated and microhardness testing was conducted. These results confirmed that the line material was suffering from sour gas deterioration. This paper correlates the field UT inspection findings with the methods investigated in the laboratory. Based on the results obtained, a new HIC-resistant pipeline material needs to be selected.
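The abstract mentions crack sensitivity calculations without stating which index was used. One widely used measure of cracking susceptibility for carbon steels is the IIW carbon equivalent, sketched below with the maximum permitted composition values for ASTM A106 Grade B; these values are illustrative, not the paper's measured chemistry:

```python
# IIW carbon equivalent: CE = C + Mn/6 + (Cr+Mo+V)/5 + (Ni+Cu)/15,
# a standard index of weldability/cracking susceptibility for carbon steels.

def carbon_equivalent_iiw(comp):
    """Return the IIW carbon equivalent (wt.%). `comp` maps element
    symbols to weight percent; missing elements default to 0."""
    g = lambda e: comp.get(e, 0.0)
    return (g("C") + g("Mn") / 6
            + (g("Cr") + g("Mo") + g("V")) / 5
            + (g("Ni") + g("Cu")) / 15)

# Maximum permitted ASTM A106 Grade B composition (illustrative):
a106b = {"C": 0.30, "Mn": 0.95, "Cr": 0.40, "Mo": 0.15,
         "Ni": 0.40, "Cu": 0.40, "V": 0.08}
ce = carbon_equivalent_iiw(a106b)  # ~0.64 wt.%
```

Higher CE values generally indicate greater hardenability and cracking susceptibility; susceptibility to HIC in sour service additionally depends strongly on inclusions (such as the MnS found here) and microstructure, which the index does not capture.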

  13. Self-Sampling for Human Papillomavirus Testing among Non-Attenders Increases Attendance to the Norwegian Cervical Cancer Screening Programme

    PubMed Central

    Enerly, Espen; Bonde, Jesper; Schee, Kristina; Pedersen, Helle; Lönnberg, Stefan; Nygård, Mari

    2016-01-01

    Increasing attendance to screening offers the best potential for improving the effectiveness of well-established cervical cancer screening programs. Self-sampling at home for human papillomavirus (HPV) testing as an alternative to a clinical sampling can be a useful policy to increase attendance. To determine whether self-sampling improves screening attendance for women who do not regularly attend the Norwegian Cervical Cancer Screening Programme (NCCSP), 800 women aged 25–69 years in the Oslo area who were due to receive a 2nd reminder to attend regular screening were randomly selected and invited to be part of the intervention group. Women in this group received one of two self-sampling devices, Evalyn Brush or Delphi Screener. To attend screening, women in the intervention group had the option of using the self-sampling device (self-sampling subgroup) or visiting their physician for a cervical smear. Self-sampled specimens were split and analyzed for the presence of high-risk (hr) HPV by the CLART® HPV2 test and the digene® Hybrid Capture (HC)2 test. The control group consisted of 2593 women who received a 2nd reminder letter according to the current guidelines of the NCCSP. The attendance rates were 33.4% in the intervention group and 23.2% in the control group, with similar attendance rates for both self-sampling devices. Women in the self-sampling subgroup responded favorably to both self-sampling devices and cited not remembering receiving a call for screening as the most dominant reason for previous non-attendance. Thirty-two of 34 (94.1%) hrHPV-positive women in the self-sampling subgroup attended follow-up. In conclusion, self-sampling increased attendance rates and was feasible and well received. This study lends further support to the proposal that self-sampling may be a valuable alternative for increasing cervical cancer screening coverage in Norway. PMID:27073929

  14. Self-Sampling for Human Papillomavirus Testing among Non-Attenders Increases Attendance to the Norwegian Cervical Cancer Screening Programme.

    PubMed

    Enerly, Espen; Bonde, Jesper; Schee, Kristina; Pedersen, Helle; Lönnberg, Stefan; Nygård, Mari

    2016-01-01

    Increasing attendance to screening offers the best potential for improving the effectiveness of well-established cervical cancer screening programs. Self-sampling at home for human papillomavirus (HPV) testing as an alternative to a clinical sampling can be a useful policy to increase attendance. To determine whether self-sampling improves screening attendance for women who do not regularly attend the Norwegian Cervical Cancer Screening Programme (NCCSP), 800 women aged 25-69 years in the Oslo area who were due to receive a 2nd reminder to attend regular screening were randomly selected and invited to be part of the intervention group. Women in this group received one of two self-sampling devices, Evalyn Brush or Delphi Screener. To attend screening, women in the intervention group had the option of using the self-sampling device (self-sampling subgroup) or visiting their physician for a cervical smear. Self-sampled specimens were split and analyzed for the presence of high-risk (hr) HPV by the CLART® HPV2 test and the digene® Hybrid Capture (HC)2 test. The control group consisted of 2593 women who received a 2nd reminder letter according to the current guidelines of the NCCSP. The attendance rates were 33.4% in the intervention group and 23.2% in the control group, with similar attendance rates for both self-sampling devices. Women in the self-sampling subgroup responded favorably to both self-sampling devices and cited not remembering receiving a call for screening as the most dominant reason for previous non-attendance. Thirty-two of 34 (94.1%) hrHPV-positive women in the self-sampling subgroup attended follow-up. In conclusion, self-sampling increased attendance rates and was feasible and well received. This study lends further support to the proposal that self-sampling may be a valuable alternative for increasing cervical cancer screening coverage in Norway.

  15. Method and system for pipeline communication

    DOEpatents

    Richardson,; John, G [Idaho Falls, ID

    2008-01-29

A pipeline communication system and method includes a pipeline having a surface extending along at least a portion of the length of the pipeline. A conductive bus is formed on and extends along a portion of the surface of the pipeline. The conductive bus includes a first conductive trace and a second conductive trace, with the first and second conductive traces being adapted to conformally couple with the pipeline at the surface extending along at least a portion of its length. A transmitter for sending information along the conductive bus on the pipeline is coupled thereto, and a receiver for receiving the information from the conductive bus on the pipeline is also coupled to the conductive bus.

  16. Windshield splatter analysis with the Galaxy metagenomic pipeline

    PubMed Central

    Kosakovsky Pond, Sergei; Wadhawan, Samir; Chiaromonte, Francesca; Ananda, Guruprasad; Chung, Wen-Yu; Taylor, James; Nekrutenko, Anton

    2009-01-01

    How many species inhabit our immediate surroundings? A straightforward collection technique suitable for answering this question is known to anyone who has ever driven a car at highway speeds. The windshield of a moving vehicle is subjected to numerous insect strikes and can be used as a collection device for representative sampling. Unfortunately the analysis of biological material collected in that manner, as with most metagenomic studies, proves to be rather demanding due to the large number of required tools and considerable computational infrastructure. In this study, we use organic matter collected by a moving vehicle to design and test a comprehensive pipeline for phylogenetic profiling of metagenomic samples that includes all steps from processing and quality control of data generated by next-generation sequencing technologies to statistical analyses and data visualization. To the best of our knowledge, this is also the first publication that features a live online supplement providing access to exact analyses and workflows used in the article. PMID:19819906

  17. A novel visual pipework inspection system

    NASA Astrophysics Data System (ADS)

    Summan, Rahul; Jackson, William; Dobie, Gordon; MacLeod, Charles; Mineo, Carmelo; West, Graeme; Offin, Douglas; Bolton, Gary; Marshall, Stephen; Lille, Alexandre

    2018-04-01

The interior visual inspection of pipelines in the nuclear industry is a safety critical activity conducted during outages to ensure the continued safe and reliable operation of plant. Typically, the video output by a manually deployed probe is viewed by an operator looking to identify and localize surface defects such as corrosion, erosion and pitting. However, it is very challenging to estimate the nature and extent of defects by viewing a large structure through a relatively small field of view. This work describes a new visual inspection system employing photogrammetry using a fisheye camera and a structured light system to map the internal geometry of pipelines by generating a photorealistic, geometrically accurate surface model. The error of the system output was evaluated through comparison to a ground truth laser scan (ATOS GOM Triple Scan) of a nuclear grade split pipe sample (stainless steel 304L, 80 mm internal diameter) containing defects representative of the application - the error was found to be submillimeter across the sample.

  18. Zodiacal Exoplanets in Time (ZEIT). V. A Uniform Search for Transiting Planets in Young Clusters Observed by K2

    NASA Astrophysics Data System (ADS)

    Rizzuto, Aaron C.; Mann, Andrew W.; Vanderburg, Andrew; Kraus, Adam L.; Covey, Kevin R.

    2017-12-01

Detection of transiting exoplanets around young stars is more difficult than for older systems owing to increased stellar variability. Nine young open cluster planets have been found in the K2 data, but no single analysis pipeline identified all planets. We have developed a transit search pipeline for young stars that uses a transit-shaped notch and quadratic continuum in a 12 or 24 hr window to fit both the stellar variability and the presence of a transit. In addition, for the most rapid rotators (P_rot < 2 days) we model the variability using a linear combination of observed rotations of each star. To maximally exploit our new pipeline, we update the membership for four stellar populations observed by K2 (Upper Scorpius, Pleiades, Hyades, Praesepe) and conduct a uniform search of the members. We identify all known transiting exoplanets in the clusters, 17 eclipsing binaries, one transiting planet candidate orbiting a potential Pleiades member, and three orbiting unlikely members of the young clusters. Limited injection recovery testing on the known planet hosts indicates that for the older Praesepe systems we are sensitive to additional exoplanets as small as 1-2 R⊕, and for the larger Upper Scorpius planet host (K2-33) our pipeline is sensitive to ~4 R⊕ transiting planets. The lack of detected multiple systems in the young clusters is consistent with the expected frequency from the original Kepler sample, within our detection limits. With a robust pipeline that detects all known planets in the young clusters, occurrence rate testing at young ages is now possible.
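The "transit-shaped notch plus quadratic continuum" fit can be illustrated with a toy linear least-squares version: fit a window of flux with and without a box-shaped notch, and keep the fitted depth and the residual improvement. This is a deliberately simplified sketch (fixed notch location and duration, no rotation modelling), not the paper's pipeline:

```python
import numpy as np

def notch_fit(t, f, t0, dur):
    """Fit a window of flux two ways by linear least squares:
    (a) quadratic continuum only, (b) quadratic plus a box 'notch'
    of duration `dur` centred at `t0`. Returns the fitted notch depth
    and the reduction in summed squared residuals."""
    in_transit = (np.abs(t - t0) < dur / 2).astype(float)
    A_cont = np.vstack([np.ones_like(t), t, t**2]).T
    A_notch = np.column_stack([A_cont, -in_transit])  # notch lowers flux
    res_cont = np.linalg.lstsq(A_cont, f, rcond=None)[1][0]
    coef, res_notch, _, _ = np.linalg.lstsq(A_notch, f, rcond=None)
    return coef[-1], res_cont - res_notch[0]

# Toy 24 hr window (time in days, centred on zero): smooth stellar
# variability plus an injected 0.5% box transit. A real search would
# scan t0 and dur over a grid and threshold the fit improvement.
t = np.linspace(-0.5, 0.5, 200)
f = 1.0 + 0.01 * t - 0.02 * t**2
f[np.abs(t) < 0.05] -= 0.005
depth, improvement = notch_fit(t, f, t0=0.0, dur=0.1)
```

Because both the continuum-only and continuum-plus-notch models are fit to the same window, the variability is absorbed by the quadratic term and the transit shows up as a positive fitted depth with a large residual improvement.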

  19. Extending SEQenv: a taxa-centric approach to environmental annotations of 16S rDNA sequences

    PubMed Central

    Jeffries, Thomas C.; Ijaz, Umer Z.; Hamonts, Kelly

    2017-01-01

Understanding how the environment selects a given taxon and the diversity patterns that emerge as a result of environmental filtering can dramatically improve our ability to analyse any environment in depth, as well as advancing our knowledge of how the responses of different taxa can impact each other and ecosystem functions. Most of the work investigating microbial biogeography has been site-specific, and logical environmental factors, rather than geographical location, may be more influential on microbial diversity. SEQenv, a novel pipeline for environmental annotation of sequences, emerged to provide a consistent description of environmental niches using the ENVO ontology. While the pipeline provides a list of environmental terms on the basis of sample datasets and, therefore, the annotations obtained are at the dataset level, it lacks a taxa-centric approach to environmental annotation. The work here describes an extension developed to enhance the SEQenv pipeline, which provided the means to directly generate environmental annotations for taxa under different contexts. 16S rDNA amplicon datasets belonging to distinct biomes were selected to illustrate the applicability of the extended SEQenv pipeline. A literature survey of the results demonstrates the immense importance of sequence level environmental annotations by illustrating the distribution of both taxa across environments as well as the various environmental sources of a specific taxon. Significantly enhancing the SEQenv pipeline in the process, this information would be valuable to any biologist seeking to understand the various taxa present in the habitat and the environment they originated from, enabling a more thorough analysis of which lineages are abundant in certain habitats and the recovery of patterns in taxon distribution across different habitats and environmental gradients. PMID:29038749

  20. A Search for Lost Planets in the Kepler Multi-Planet Systems and the Discovery of the Long-Period, Neptune-Sized Exoplanet Kepler-150 f

    NASA Technical Reports Server (NTRS)

    Schmitt, Joseph R.; Jenkins, Jon M.; Fischer, Debra A.

    2017-01-01

The vast majority of the 4700 confirmed planets and planet candidates discovered by the Kepler space telescope were first found by the Kepler pipeline. In the pipeline, after a transit signal is found, all data points associated with those transits are removed, creating a Swiss cheese-like light curve full of holes, which is then used for subsequent transit searches. These holes could render an additional planet undetectable (or lost). We examine a sample of 114 stars with 3+ confirmed planets to see the effect that this Swiss cheesing may have. A simulation determined that the probability that a transiting planet is lost due to the transit masking is low, but non-negligible, reaching a plateau at approximately 3.3% lost in the period range of P = 400 - 500 days. We then model the transits in all quarters of each star and subtract out the transit signals, restoring the in-transit data points, and use the Kepler pipeline to search the transit-subtracted (i.e., transit-cleaned) light curves. However, the pipeline did not discover any credible new transit signals. This demonstrates the validity and robustness of the Kepler pipeline's choice to use transit masking over transit subtraction. However, a follow-up visual search through all the transit-subtracted data, which allows for easier visual identification of new transits, revealed the existence of a new, Neptune-sized exoplanet. Kepler-150 f (P = 637.2 days, RP = 3.86 R⊕) is confirmed using a combination of false positive probability analysis, transit duration analysis, and the planet multiplicity argument.

  1. A SEARCH FOR LOST PLANETS IN THE KEPLER MULTI-PLANET SYSTEMS AND THE DISCOVERY OF A LONG PERIOD, NEPTUNE-SIZED EXOPLANET KEPLER-150 F.

    PubMed

    Schmitt, Joseph R; Jenkins, Jon M; Fischer, Debra A

    2017-04-01

The vast majority of the 4700 confirmed planets and planet candidates discovered by the Kepler space telescope were first found by the Kepler pipeline. In the pipeline, after a transit signal is found, all data points associated with those transits are removed, creating a "Swiss cheese"-like light curve full of holes, which is then used for subsequent transit searches. These holes could render an additional planet undetectable (or "lost"). We examine a sample of 114 stars with 3+ confirmed planets to see the effect that this "Swiss cheesing" may have. A simulation determined that the probability that a transiting planet is lost due to the transit masking is low, but non-negligible, reaching a plateau at ~3.3% lost in the period range of P = 400 - 500 days. We then model the transits in all quarters of each star and subtract out the transit signals, restoring the in-transit data points, and use the Kepler pipeline to search the transit-subtracted (i.e., transit-cleaned) light curves. However, the pipeline did not discover any credible new transit signals. This demonstrates the validity and robustness of the Kepler pipeline's choice to use transit masking over transit subtraction. However, a follow-up visual search through all the transit-subtracted data, which allows for easier visual identification of new transits, revealed the existence of a new, Neptune-sized exoplanet. Kepler-150 f ( P = 637.2 days, R P = 3.86 R ⊕ ) is confirmed using a combination of false positive probability analysis, transit duration analysis, and the planet multiplicity argument.
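The "Swiss cheese" loss that the abstract's simulation quantifies can be illustrated with a toy Monte Carlo: draw random epochs for a masked planet and a hypothetical second planet, and count how often every in-transit point of the second planet falls inside the masked windows. The cadence, baseline, and periods below are invented for the example, and this sketch is far cruder than the paper's simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_all_masked(baseline, p1, dur1, p2, dur2, n_trials=100):
    """Fraction of random-epoch trials in which every in-transit point
    of planet 2 (period p2, duration dur2) coincides with a masked
    transit of planet 1 (period p1, duration dur1). Times in days."""
    t = np.arange(0.0, baseline, 0.02)  # ~30-minute cadence
    lost = 0
    for _ in range(n_trials):
        e1 = rng.uniform(0, p1)
        e2 = rng.uniform(0, p2)
        # Phase-folded distance to the nearest transit centre:
        masked = np.abs((t - e1 + p1 / 2) % p1 - p1 / 2) < dur1 / 2
        in2 = np.abs((t - e2 + p2 / 2) % p2 - p2 / 2) < dur2 / 2
        if in2.any() and not (in2 & ~masked).any():
            lost += 1
    return lost / n_trials

# Sanity checks: masking everything loses every trial; masking nothing
# loses none (planet 2 always shows at least one unmasked transit here).
p_hi = prob_all_masked(1460.0, 10.0, 10.0, 450.0, 0.5, n_trials=20)
p_lo = prob_all_masked(1460.0, 10.0, 0.0, 450.0, 0.5, n_trials=20)
```

For realistic Kepler-like duty cycles the masked fraction of the light curve is small, so only long-period planets with few transits in the baseline have an appreciable chance of losing all of them, consistent with the plateau the abstract reports at P = 400 - 500 days.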

  2. The discovery of novel HDAC3 inhibitors via virtual screening and in vitro bioassay

    PubMed Central

    Hu, Huabin; Xue, Wenjie; Wang, Xiang Simon; Wu, Song

    2018-01-01

Histone deacetylase 3 (HDAC3) is a potential target for the treatment of human diseases such as cancers, diabetes, chronic inflammation and neurodegenerative diseases. Previously, we proposed a virtual screening (VS) pipeline named “Hypo1_FRED_SAHA-3” for the discovery of HDAC3 inhibitors (HDAC3Is) and had thoroughly validated it by theoretical calculations. In this study, we attempted to explore its practical utility in a large-scale VS campaign. To this end, we used the VS pipeline to hierarchically screen the Specs chemical library. In order to facilitate compound cherry-picking, we then developed a knowledge-based pose filter (PF) by using our in-house quantitative structure activity relationship- (QSAR-) modelling approach and coupled it with FRED and Autodock Vina. Afterward, we purchased and tested 11 diverse compounds for their HDAC3 inhibitory activity in vitro. The bioassay identified compound 2 (Specs ID: AN-979/41971160) as an HDAC3I (IC50 = 6.1 μM), which proved the efficacy of our workflow. As a medicinal chemistry study, we performed a follow-up substructure search and identified two more hit compounds of the same chemical type, i.e. 2–1 (AQ-390/42122119, IC50 = 1.3 μM) and 2–2 (AN-329/43450111, IC50 = 12.5 μM). Based on the chemical structures and activities, we have demonstrated the essential role of the capping group in maintaining the activity for this class of HDAC3Is. In addition, we tested the hit compounds for their in vitro activities on other HDACs, including HDAC1, HDAC2, HDAC8, HDAC4 and HDAC6. We identified these compounds as HDAC1/2/3-selective inhibitors, of which compound 2 shows the best selectivity profile. Taken together, the present study is an experimental validation and an update to our earlier VS strategy. The identified hits could be used as starting structures for the development of highly potent and selective HDAC3Is. PMID:29464997

  3. Electrophoretic sample insertion. [device for uniformly distributing samples in flow path

    NASA Technical Reports Server (NTRS)

    Mccreight, L. R. (Inventor)

    1974-01-01

    Two conductive screens located in the flow path of an electrophoresis sample separation apparatus are charged electrically. The sample is introduced between the screens, and the charge is sufficient to disperse and hold the samples across the screens. When the charge is terminated, the samples are uniformly distributed in the flow path. Additionally, a first separation by charged properties has been accomplished.

  4. Getting the Most out of PubChem for Virtual Screening

    PubMed Central

    Kim, Sunghwan

    2016-01-01

    Introduction With the emergence of the “big data” era, the biomedical research community has great interest in exploiting publicly available chemical information for drug discovery. PubChem is an example of a public database that provides a large amount of chemical information free of charge. Areas covered This article provides an overview of how PubChem’s data, tools, and services can be used for virtual screening and reviews recent publications that discuss important aspects of exploiting PubChem for drug discovery. Expert opinion PubChem offers comprehensive chemical information useful for drug discovery. It also provides multiple programmatic access routes, which are essential for building automated virtual screening pipelines that exploit PubChem data. In addition, PubChemRDF allows users to download PubChem data and load them into a local computing facility, facilitating data integration between PubChem and other resources. PubChem resources have been used in many studies for developing bioactivity and toxicity prediction models, discovering polypharmacologic (multi-target) ligands, and identifying new macromolecule targets of compounds (for drug repurposing or off-target side effect prediction). These studies demonstrate the usefulness of PubChem as a key resource for computer-aided drug discovery and related areas. PMID:27454129
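    One of PubChem's documented programmatic access routes is the PUG REST interface, whose URLs follow an input/operation/output layout. A minimal sketch that only constructs request URLs (no network call is made; CID 2244 is aspirin):

```python
# Build PubChem PUG REST URLs following the documented
# input / operation / output pattern. URL construction only.

BASE = "https://pubchem.ncbi.nlm.nih.gov/rest/pug"

def property_url(cid, properties, fmt="JSON"):
    """URL that fetches named computed properties for a compound CID."""
    return f"{BASE}/compound/cid/{cid}/property/{','.join(properties)}/{fmt}"

url = property_url(2244, ["MolecularWeight", "CanonicalSMILES"])
```

    An automated screening pipeline would typically generate such URLs in bulk and fetch them with any HTTP client, respecting PubChem's request-rate guidance.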

  5. 76 FR 75894 - Information Collection Activities: Pipelines and Pipeline Rights-of-Way; Submitted for Office of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-05

    ... pipelines `` * * * for the transportation of oil, natural gas, sulphur, or other minerals, or under such...) Submit repair report 3 1008(f) Submit report of pipeline failure analysis...... 30 1008(g) Submit plan of.... BSEE-2011-0002; OMB Control Number 1010-0050] Information Collection Activities: Pipelines and Pipeline...

  6. 76 FR 620 - Notice of Receipt of Application for a Presidential Permit To Operate and Maintain Pipeline...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-05

    ... Corporation (``Dome Pipeline''). These pipelines carry, or are permitted to carry, liquefied hydrocarbons...) additional pipelines to carry liquefied hydrocarbons. According to the application, all four (4) pipelines... hydrocarbons under pressure and that the remaining two pipelines are being held in reserve to be used in the...

  7. 77 FR 16052 - Information Collection Activities: Pipelines and Pipeline Rights-of-Way; Submitted for Office of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-19

    ... submerged lands of the OCS for pipelines ``* * * for the transportation of oil, natural gas, sulphur, or... ensure that the pipeline, as constructed, will provide for safe transportation of oil and gas and other...-0002; OMB Control Number 1014-0016] Information Collection Activities: Pipelines and Pipeline Rights...

  8. 78 FR 53190 - Pipeline Safety: Notice to Operators of Hazardous Liquid and Natural Gas Pipelines of a Recall on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-28

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2013-0185] Pipeline Safety: Notice to Operators of Hazardous Liquid and Natural Gas Pipelines of a Recall on Leak Repair Clamps Due to Defective Seal AGENCY: Pipeline and Hazardous Materials Safety...

  9. An expression vector tailored for large-scale, high-throughput purification of recombinant proteins ☆

    PubMed Central

    Donnelly, Mark I.; Zhou, Min; Millard, Cynthia Sanville; Clancy, Shonda; Stols, Lucy; Eschenfeldt, William H.; Collart, Frank R.; Joachimiak, Andrzej

    2009-01-01

    Production of milligram quantities of numerous proteins for structural and functional studies requires an efficient purification pipeline. We found that the dual tag, his6-tag–maltose-binding protein (MBP), intended to facilitate purification and enhance proteins’ solubility, disrupted such a pipeline, requiring additional screening and purification steps. Not all proteins rendered soluble by fusion to MBP remained soluble after its proteolytic removal, and in those cases where the protein remained soluble, standard purification protocols failed to remove completely the stoichiometric amount of his6-tagged MBP generated by proteolysis. Both liabilities were alleviated by construction of a vector that produces fusion proteins in which MBP, the his6-tag and the target protein are separated by highly specific protease cleavage sites in the configuration MBP-site-his6-site-protein. In vivo cleavage at the first site by co-expressed protease generated untagged MBP and his6-tagged target protein. Proteins not truly rendered soluble by transient association with MBP precipitated, and untagged MBP was easily separated from the his-tagged target protein by conventional protocols. The second protease cleavage site allowed removal of the his6-tag. PMID:16497515

  10. From Sample to Multi-Omics Conclusions in under 48 Hours

    PubMed Central

    Navas-Molina, Jose A.; Hyde, Embriette R.; Vázquez-Baeza, Yoshiki; Humphrey, Greg; Gaffney, James; Minich, Jeremiah J.; Melnik, Alexey V.; Herschend, Jakob; DeReus, Jeff; Durant, Austin; Dutton, Rachel J.; Khosroheidari, Mahdieh; Green, Clifford; da Silva, Ricardo; Dorrestein, Pieter C.; Knight, Rob

    2016-01-01

    ABSTRACT Multi-omics methods have greatly advanced our understanding of the biological organism and its microbial associates. However, they are not routinely used in clinical or industrial applications, due to the length of time required to generate and analyze omics data. Here, we applied a novel integrated omics pipeline for the analysis of human and environmental samples in under 48 h. Human subjects that ferment their own foods provided swab samples from skin, feces, oral cavity, fermented foods, and household surfaces to assess the impact of home food fermentation on their microbial and chemical ecology. These samples were analyzed with 16S rRNA gene sequencing, inferred gene function profiles, and liquid chromatography-tandem mass spectrometry (LC-MS/MS) metabolomics through the Qiita, PICRUSt, and GNPS pipelines, respectively. The human sample microbiomes clustered with the corresponding sample types in the American Gut Project (http://www.americangut.org), and the fermented food samples produced a separate cluster. The microbial communities of the household surfaces were primarily sourced from the fermented foods, and their consumption was associated with increased gut microbial diversity. Untargeted metabolomics revealed that human skin and fermented food samples had separate chemical ecologies and that stool was more similar to fermented foods than to other sample types. Metabolites from the fermented foods, including plant products such as procyanidin and pheophytin, were present in the skin and stool samples of the individuals consuming the foods. Some food metabolites were modified during digestion, and others were detected in stool intact. This study represents a first-of-its-kind analysis of multi-omics data that achieved time intervals matching those of classic microbiological culturing. IMPORTANCE Polymicrobial infections are difficult to diagnose due to the challenge in comprehensively cultivating the microbes present. Omics methods, such as 16S rRNA sequencing, metagenomics, and metabolomics, can provide a more complete picture of a microbial community and its metabolite production, without the biases and selectivity of microbial culture. However, these advanced methods have not been applied to clinical or industrial microbiology or other areas where complex microbial dysbioses require immediate intervention. The reason for this is the length of time required to generate and analyze omics data. Here, we describe the development and application of a pipeline for multi-omics data analysis in time frames matching those of the culture-based approaches often used for these applications. This study applied multi-omics methods effectively in clinically relevant time frames and sets a precedent toward their implementation in clinical medicine and industrial microbiology. PMID:27822524

  11. Data Pre-Processing for Label-Free Multiple Reaction Monitoring (MRM) Experiments

    PubMed Central

    Chung, Lisa M.; Colangelo, Christopher M.; Zhao, Hongyu

    2014-01-01

    Multiple Reaction Monitoring (MRM) conducted on a triple quadrupole mass spectrometer allows researchers to quantify the expression levels of a set of target proteins. Each protein is often characterized by several unique peptides that can be detected by monitoring predetermined fragment ions, called transitions, for each peptide. Concatenating large numbers of MRM transitions into a single assay enables simultaneous quantification of hundreds of peptides and proteins. In recognition of the important role that MRM can play in hypothesis-driven research and its increasing impact on clinical proteomics, targeted proteomics such as MRM was recently selected by Nature Methods as the Method of the Year. However, there are many challenges in MRM applications, especially data pre-processing, where many steps still rely on manual inspection of each observation in practice. In this paper, we discuss an analysis pipeline to automate MRM data pre-processing. This pipeline includes data quality assessment across replicated samples, outlier detection, identification of inaccurate transitions, and data normalization. We demonstrate the utility of our pipeline through its applications to several real MRM data sets. PMID:24905083
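    Two of the pre-processing steps listed, normalization across replicated samples and outlier detection, can be sketched roughly as follows. The intensity values and the fold-change cutoff are hypothetical; the published pipeline is considerably more elaborate.

```python
# Minimal sketch: median normalization of replicate samples, plus a simple
# fold-change outlier flag (hypothetical data and cutoff).
from statistics import median

def median_normalize(samples):
    """Scale each sample so its median intensity matches the global median."""
    global_med = median(v for s in samples for v in s)
    return [[v * global_med / median(s) for v in s] for s in samples]

def flag_outliers(values, fold=3.0):
    """Flag values deviating more than `fold`-fold from the median."""
    med = median(values)
    return [v > fold * med or v < med / fold for v in values]

# Two replicate samples with a 10x systematic intensity difference:
samples = [[10.0, 20.0, 30.0], [100.0, 200.0, 300.0]]
normalized = median_normalize(samples)
```

    After normalization the two replicates fall on a common intensity scale, so transition-level comparisons (and the outlier flags) become meaningful across runs.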

  13. Granatum: a graphical single-cell RNA-Seq analysis pipeline for genomics scientists.

    PubMed

    Zhu, Xun; Wolfgruber, Thomas K; Tasato, Austin; Arisdakessian, Cédric; Garmire, David G; Garmire, Lana X

    2017-12-05

    Single-cell RNA sequencing (scRNA-Seq) is an increasingly popular platform to study heterogeneity at the single-cell level. Computational methods to process scRNA-Seq data are not very accessible to bench scientists as they require a significant amount of bioinformatic skills. We have developed Granatum, a web-based scRNA-Seq analysis pipeline to make analysis more broadly accessible to researchers. Without a single line of programming code, users can click through the pipeline, setting parameters and visualizing results via the interactive graphical interface. Granatum conveniently walks users through various steps of scRNA-Seq analysis. It has a comprehensive list of modules, including plate merging and batch-effect removal, outlier-sample removal, gene-expression normalization, imputation, gene filtering, cell clustering, differential gene expression analysis, pathway/ontology enrichment analysis, protein network interaction visualization, and pseudo-time cell series construction. Granatum enables broad adoption of scRNA-Seq technology by empowering bench scientists with an easy-to-use graphical interface for scRNA-Seq data analysis. The package is freely available for research use at http://garmiregroup.org/granatum/app.

  14. Material property relationships for pipeline steels and the potential for application of NDE

    NASA Astrophysics Data System (ADS)

    Smart, Lucinda; Bond, Leonard J.

    2016-02-01

    The oil and gas industry in the USA has an extensive infrastructure of pipelines, 70% of which were installed prior to 1980, and almost half during the 1950s and 1960s. Ideally, the mechanical properties (i.e. yield strength, tensile strength, transition temperature, and fracture toughness) of a steel pipe must be known in order to respond to detected defects in an appropriate manner. Neither current in-ditch methods nor ILI inspection data have yet determined and mapped the desired mechanical properties with adequate confidence. In the quest to obtain the mechanical properties of a steel pipe using a nondestructive method, it is important to understand that there are many inter-related variables. This paper reports a literature review and an analysis of a sample set of data. There is promise for correlating the results of NDE measurement modalities with the mechanical properties desired for pipelines, so that relationships between measurements and properties can be developed and defects posing a significant threat can receive a proper response.

  15. PepLine: a software pipeline for high-throughput direct mapping of tandem mass spectrometry data on genomic sequences.

    PubMed

    Ferro, Myriam; Tardif, Marianne; Reguer, Erwan; Cahuzac, Romain; Bruley, Christophe; Vermat, Thierry; Nugues, Estelle; Vigouroux, Marielle; Vandenbrouck, Yves; Garin, Jérôme; Viari, Alain

    2008-05-01

    PepLine is fully automated software that maps MS/MS fragmentation spectra of tryptic peptides onto genomic DNA sequences. The approach is based on Peptide Sequence Tags (PSTs) obtained from partial interpretation of QTOF MS/MS spectra (first module). PSTs are then mapped onto the six-frame translations of genomic sequences (second module), giving hits. Hits are then clustered to detect potential coding regions (third module). Our work aimed at optimizing the algorithms of each component to allow the whole pipeline to proceed in a fully automated manner using raw nucleic acid sequences (i.e., genomes that have not been "reduced" to a database of ORFs or putative exon sequences). The whole pipeline was tested on controlled MS/MS spectra sets from standard proteins and from Arabidopsis thaliana chloroplast envelope samples. Our results demonstrate that PepLine is competitive with protein database search software and fast enough to potentially tackle large data sets and/or large genomes. We also illustrate the potential of this approach for detecting the intron/exon structure of genes.
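    The idea behind the second module, searching a PST in all six conceptual translations of a genomic sequence, can be sketched as follows. This is not PepLine's implementation; the codon table is truncated to just the codons used in the toy sequence, and a real mapper would use the full genetic code and indexed search.

```python
# Sketch of six-frame PST mapping: translate three forward and three
# reverse-complement frames and report which frames contain the tag.
# (Codon table deliberately truncated; unknown codons become "X".)
CODON = {"ATG": "M", "AAA": "K", "TTT": "F", "CAT": "H", "GAA": "E"}

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def translate(seq):
    return "".join(CODON.get(seq[i:i + 3], "X") for i in range(0, len(seq) - 2, 3))

def six_frame_hits(dna, pst):
    """Return (strand, frame) pairs whose translation contains the tag."""
    hits = []
    for strand, seq in (("+", dna), ("-", revcomp(dna))):
        for frame in range(3):
            if pst in translate(seq[frame:]):
                hits.append((strand, frame))
    return hits

dna = "ATGAAATTT"                  # forward frame 0 translates to "MKF"
hits = six_frame_hits(dna, "MKF")
```

    Clustering nearby hits from many PSTs (the third module) is what turns isolated frame matches into candidate coding regions.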

  16. Machine-learning-based real-bogus system for the HSC-SSP moving object detection pipeline

    NASA Astrophysics Data System (ADS)

    Lin, Hsing-Wen; Chen, Ying-Tung; Wang, Jen-Hung; Wang, Shiang-Yu; Yoshida, Fumi; Ip, Wing-Huen; Miyazaki, Satoshi; Terai, Tsuyoshi

    2018-01-01

    Machine-learning techniques are widely applied in many modern optical sky surveys, e.g., Pan-STARRS1, PTF/iPTF, and the Subaru/Hyper Suprime-Cam survey, to reduce human intervention in data verification. In this study, we have established a machine-learning-based real-bogus system to reject false detections in the Subaru/Hyper-Suprime-Cam Strategic Survey Program (HSC-SSP) source catalog. Therefore, the HSC-SSP moving object detection pipeline can operate more effectively due to the reduction of false positives. To train the real-bogus system, we use stationary sources as the real training set and "flagged" data as the bogus set. The training set contains 47 features, most of which are photometric measurements and shape moments generated from the HSC image reduction pipeline (hscPipe). Our system can reach a true positive rate (tpr) ~96% with a false positive rate (fpr) ~1%, or tpr ~99% at fpr ~5%. Therefore, we conclude that stationary sources are decent real training samples, and using photometry measurements and shape moments can reject false positives effectively.
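    Reporting tpr at a fixed fpr, as the abstract does, amounts to choosing the score threshold that just satisfies the fpr budget. A sketch with hypothetical classifier scores (any scorer could produce them; this is not hscPipe code):

```python
# Sketch of tpr-at-fixed-fpr evaluation for a real-bogus classifier
# (hypothetical scores; higher score = more likely real).

def tpr_at_fpr(real_scores, bogus_scores, max_fpr):
    """Highest tpr achievable while keeping fpr <= max_fpr."""
    best = 0.0
    for threshold in sorted(real_scores + bogus_scores, reverse=True):
        fpr = sum(s >= threshold for s in bogus_scores) / len(bogus_scores)
        if fpr > max_fpr:
            break
        best = sum(s >= threshold for s in real_scores) / len(real_scores)
    return best

real = [0.9, 0.8, 0.7, 0.6, 0.2]   # e.g. stationary sources (real set)
bogus = [0.5, 0.4, 0.3, 0.1]       # e.g. flagged detections (bogus set)
```

    Sweeping the threshold from strict to lenient traces the ROC curve; the loop simply stops at the point where the fpr budget is exceeded.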

  17. 75 FR 24633 - Order Finding That the ICE Chicago Financial Basis Contract Traded on the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-05

    ... Pipe Line, LLC, serves as a juncture for 13 different pipelines. These pipelines bring in natural gas... ``hub'' refers to a juncture where two or more natural gas pipelines are connected. Hubs also serve as... interstate pipelines. The firms that service the Chicago area are ANR Pipeline Company, Natural Gas Pipeline...

  18. A handheld computer-aided diagnosis system and simulated analysis

    NASA Astrophysics Data System (ADS)

    Su, Mingjian; Zhang, Xuejun; Liu, Brent; Su, Kening; Louie, Ryan

    2016-03-01

    This paper describes a Computer Aided Diagnosis (CAD) system based on a cellphone and a distributed cluster. One of the bottlenecks in building a CAD system for clinical practice is storing and processing large numbers of pathology samples freely among different devices, and ordinary pattern-matching algorithms are very time consuming on large-scale image sets. Distributed computation on a cluster has demonstrated the ability to relieve this bottleneck. We developed a system that enables the user to compare a query image against a dataset with a feature table by sending datasets to the Generic Data Handler Module in Hadoop, where pattern recognition is undertaken for the detection of skin diseases. Single and combination retrieval algorithms for the data pipeline, based on the MapReduce framework, are used in our system to make an optimal choice between recognition accuracy and system cost. The profile of the lesion area is drawn manually by doctors on the screen, and this pattern is then uploaded to the server. In our evaluation experiment, a diagnosis hit rate of 75% was obtained by testing 100 patients with skin illness. Our system has the potential to help build a novel medical image dataset by collecting large amounts of gold-standard annotations during medical diagnosis. Once the project is online, participants are free to join, and an abundant sample dataset will eventually be gathered for learning. These results demonstrate that our technology is very promising and expected to be used in clinical practice.

  19. Using industry ROV videos to assess fish associations with subsea pipelines

    NASA Astrophysics Data System (ADS)

    McLean, D. L.; Partridge, J. C.; Bond, T.; Birt, M. J.; Bornt, K. R.; Langlois, T. J.

    2017-06-01

    Remotely operated vehicles (ROVs) are routinely used to undertake inspection and maintenance activities on underwater pipelines in north-west Australia. In doing so, many terabytes of geo-referenced underwater video are collected at depths, and on a scale, usually unobtainable for ecological research. We assessed fish diversity and abundance from existing ROV videos collected along 2-3 km sections of two pipelines in north-west Australia, one at 60-80 m water depth and the other at 120-130 m. A total of 5962 individual fish from 92 species and 42 families were observed. Both pipelines were characterised by a high abundance of commercially important fishes including snappers (Lutjanidae) and groupers (Epinephelidae). The presence of thousands of unidentifiable larval fish, in addition to juveniles, sub-adults and adults, suggests that the pipelines may be enhancing, rather than simply attracting, fish stocks. The prevalence and high complexity of sponges on the shallower pipeline and of deepwater corals on the deeper pipeline had a strong positive correlation with fish abundance. These habitats likely offer a significant food source and refuge for fish, but also for the invertebrates upon which fish feed. A greater diversity on the shallower pipeline, and a higher abundance of fishes on both pipelines, were associated with unsupported pipeline sections (spans), and many species appeared to be utilising pipeline spans as refuges. This study is a first look at the potential value of subsea pipelines for fishes on the north-west shelf. While the results suggest that these sections of pipeline offer significant habitat that supports diverse and important commercially fished species, further work, including off-pipeline surveys of the natural seafloor, is required to determine conclusively the ecological value of pipelines and thereby inform discussions regarding the ecological implications of pipeline decommissioning.

  20. Effect of Thermomechanical Processing and Crystallographic Orientation on the Corrosion Behavior of API 5L X70 Pipeline Steel

    NASA Astrophysics Data System (ADS)

    Ohaeri, Enyinnaya; Omale, Joseph; Eduok, Ubong; Szpunar, Jerzy

    2018-04-01

    This work presents the electrochemical response of X70 pipeline steel substrates thermomechanically processed at different conditions. The WE sample was hot rolled at a temperature range of 850 °C to 805 °C and cooled at a rate of 42.75 °C/s. Another sample WD was hot rolled from 880 °C to 815 °C and cooled at a faster rate of 51.5 °C/s. Corrosion tests were conducted electrochemically by potentiodynamic polarization in hydrogen-charged and non-hydrogen-charged environments. A lower corrosion rate was measured with hydrogen charging due to the rapid formation of corrosion product film on pipeline substrate, but WE specimen emerged as the most susceptible to corrosion with and without hydrogen charging. Variations in thermomechanical rolling conditions influenced grain orientation, protective film properties, corrosion, and cracking behavior on both specimens. Cracks were seen in both specimens after hydrogen charging, but specimen WE experienced a more intense deterioration of protective corrosion product film and subsequent cracking. A large part of specimen WD retained its protective corrosion product film after the polarization test, and sites where spalling occurred resulted in pitting with less cracking. Despite weak crystallographic texture noticed in both specimens, WD showed a higher intensity of corrosion-resistant 111||ND-oriented grains, while WE showed a more random distribution of 111||ND-, 011||ND-, and 001||ND-oriented grains with a lower intensity.

  2. Understanding Transgender Men's Experiences with and Preferences for Cervical Cancer Screening: A Rapid Assessment Survey.

    PubMed

    Seay, Julia; Ranck, Atticus; Weiss, Roy; Salgado, Christopher; Fein, Lydia; Kobetz, Erin

    2017-08-01

    Transgender men are less likely than cisgender women to receive cervical cancer screening. The purpose of the current study was to understand experiences with and preferences for cervical cancer screening among transgender men. Ninety-one transgender men ages 21-63 completed the survey. The survey evaluated experiences with and preferences for screening, including opinions regarding human papillomavirus (HPV) self-sampling as a primary cervical cancer screening. Half (50.5%) of participants did not have Pap smear screening within the past 3 years. The majority (57.1%) of participants preferred HPV self-sampling over provider-collected Pap smear screening. Participants who reported discrimination were more likely to prefer HPV self-sampling (odds ratio = 3.29, 95% confidence interval 1.38-7.84, P = 0.007). Primary HPV testing via HPV self-sampling may improve cervical cancer screening uptake among transgender men. Future work should pilot this innovative cervical cancer screening method within this population.
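    The odds-ratio arithmetic behind a finding such as "odds ratio = 3.29, 95% confidence interval 1.38-7.84" can be sketched as follows. The 2x2 counts below are hypothetical illustrations, not the study's data.

```python
# Sketch of an odds ratio with a Wald 95% confidence interval from a
# 2x2 table [[a, b], [c, d]] (hypothetical counts).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Return (OR, lower, upper) for exposure rows vs outcome columns."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical: discrimination-reporting vs self-sampling preference
# a=prefer & exposed, b=not & exposed, c=prefer & unexposed, d=not & unexposed
or_, lower, upper = odds_ratio_ci(30, 10, 20, 20)
```

    A confidence interval whose lower bound stays above 1 (as in the study's 1.38-7.84) is what supports calling the association statistically significant.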

  3. Ink Wash Painting Style Rendering With Physically-based Ink Dispersion Model

    NASA Astrophysics Data System (ADS)

    Wang, Yifan; Li, Weiran; Zhu, Qing

    2018-04-01

    This paper presents a real-time rendering method, based on the GPU programmable pipeline, for rendering 3D scenes in ink wash painting style. The method is divided into three main parts: first, the ink properties of the 3D model are derived by calculating its vertex curvature; then, the ink properties are cached in a paper structure and an ink dispersion model, defined with reference to the theory of porous media, simulates the dispersion of ink; finally, the ink properties are converted to pixel color information and rendered to the screen. This method achieves better visual quality than previous methods.

  4. Anisotropic tubular filtering for automatic detection of acid-fast bacilli in Ziehl-Neelsen stained sputum smear samples

    NASA Astrophysics Data System (ADS)

    Raza, Shan-e.-Ahmed; Marjan, M. Q.; Arif, Muhammad; Butt, Farhana; Sultan, Faisal; Rajpoot, Nasir M.

    2015-03-01

    One of the main factors behind the high workload in pulmonary pathology in developing countries is the relatively large proportion of tuberculosis (TB) cases, which could be detected at high throughput using automated approaches. TB is caused by Mycobacterium tuberculosis, which appears as thin, rod-shaped acid-fast bacilli (AFB) in Ziehl-Neelsen (ZN) stained sputum smear samples. In this paper, we present an algorithm for automatic detection of AFB in digitized images of ZN-stained sputum smear samples under a light microscope. A key component of the proposed algorithm is the enhancement of the raw input image using a novel anisotropic tubular filter (ATF), which suppresses background noise while simultaneously enhancing the strongly anisotropic features of AFBs present in the image. The resulting image is then segmented using color features, and candidate AFBs are identified. Finally, a support vector machine classifier using morphological features from candidate AFBs decides whether a given image is AFB positive or not. We demonstrate the effectiveness of the proposed ATF method with two different feature sets by showing that the proposed image analysis pipeline results in higher accuracy and F1-score than the same pipeline with standard median filtering for image enhancement.

  5. Investigating vegetation spectral reflectance for detecting hydrocarbon pipeline leaks from multispectral data

    NASA Astrophysics Data System (ADS)

    Adamu, Bashir; Tansey, Kevin; Bradshaw, Michael J.

    2013-10-01

    The aim of this paper is to analyse Landsat TM spectral reflectance data for vegetation that has been exposed to hydrocarbon contamination from pipeline oil spills. The study was undertaken in an area of mangrove and swamp vegetation where detection of an oil spill is traditionally difficult. We used a database of oil spill records to help identify candidate sites for spectral analysis. Extracted vegetation spectra were compared between polluted and non-polluted sites, and supervised (neural network) classification was carried out to map hydrocarbon- (HC-) contaminated sites in the sample areas. Initial results show that polluted sites are characterised by high reflectance in the visible (VIS, 0.4-0.7 μm) and lower reflectance in the near-infrared (NIR, 0.7-1.1 μm), suggesting that the vegetation is in a stressed state. Samples taken from pixels surrounding polluted sites show spectral reflectance values similar to those of polluted sites, suggesting possible migration of HC into the wider environment. Further work will focus on increasing the sample size and investigating the impact of an oil spill on a wider buffer zone around the spill site.
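    The stress signature described, higher visible and lower near-infrared reflectance, is exactly what band-ratio vegetation indices such as NDVI capture. The abstract does not prescribe NDVI; this is a minimal sketch with hypothetical reflectance values:

```python
# Sketch: NDVI drops when vegetation is stressed, because red reflectance
# rises and NIR reflectance falls (all reflectance values hypothetical).

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red)

healthy = ndvi(nir=0.50, red=0.10)   # vigorous canopy
stressed = ndvi(nir=0.30, red=0.20)  # pattern consistent with HC stress
```

    Applied per-pixel to the TM red and NIR bands, such an index gives a simple stress map that a classifier (or a neural network, as in the paper) can build upon.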

  6. 78 FR 46560 - Pipeline Safety: Class Location Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-01

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... class location requirements for gas transmission pipelines. Section 5 of the Pipeline Safety, Regulatory... and, with respect to gas transmission pipeline facilities, whether applying IMP requirements to...

  7. 77 FR 27279 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-09

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... collections relate to the pipeline integrity management requirements for gas transmission pipeline operators... Management in High Consequence Areas Gas Transmission Pipeline Operators. OMB Control Number: 2137-0610...

  8. 75 FR 53733 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-01

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2010-0246] Pipeline Safety: Information Collection Activities AGENCY: Pipeline and Hazardous... liquefied natural gas, hazardous liquid, and gas transmission pipeline systems operated by a company. The...

  9. 77 FR 15453 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-15

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... information collection titled, ``Gas Pipeline Safety Program Certification and Hazardous Liquid Pipeline... collection request that PHMSA will be submitting to OMB for renewal titled, ``Gas Pipeline Safety Program...

  10. 77 FR 46155 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-02

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... collections relate to the pipeline integrity management requirements for gas transmission pipeline operators... Management in High Consequence Areas Gas Transmission Pipeline Operators. OMB Control Number: 2137-0610...

  11. A powerful nonparametric method for detecting differentially co-expressed genes: distance correlation screening and edge-count test.

    PubMed

    Zhang, Qingyang

    2018-05-16

    Differential co-expression analysis, as a complement to differential expression analysis, offers significant insights into changes in the molecular mechanisms of different phenotypes. A prevailing approach to detecting differentially co-expressed genes is to compare Pearson's correlation coefficients between two phenotypes. However, due to the limitations of Pearson's correlation measure, this approach lacks the power to detect nonlinear changes in gene co-expression, which are common in gene regulatory networks. In this work, a new nonparametric procedure is proposed to search for differentially co-expressed gene pairs across phenotypes in large-scale data. Our computational pipeline consists of two main steps: a screening step and a testing step. The screening step reduces the search space by filtering out all independent gene pairs using the distance correlation measure. In the testing step, we compare the gene co-expression patterns in different phenotypes using a recently developed edge-count test. Both steps are distribution-free and target nonlinear relations. We illustrate the promise of the new approach by analyzing the Cancer Genome Atlas data and the METABRIC data for breast cancer subtypes. Compared with some existing methods, the new method is more powerful in detecting nonlinear types of differential co-expression. The distance correlation screening can greatly improve computational efficiency, facilitating its application to large data sets.
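    The screening step's key ingredient, the sample distance correlation, can be computed directly from double-centered pairwise distance matrices. A self-contained sketch (not the authors' code) showing that it detects a purely nonlinear dependence that Pearson's correlation misses:

    ```python
    import numpy as np

    def distance_correlation(x, y):
        """Sample distance correlation (Szekely et al.): it is zero (asymptotically)
        iff x and y are independent, so it captures nonlinear dependence."""
        x = np.asarray(x, dtype=float).reshape(len(x), -1)
        y = np.asarray(y, dtype=float).reshape(len(y), -1)
        a = np.linalg.norm(x[:, None] - x[None, :], axis=-1)  # pairwise distances
        b = np.linalg.norm(y[:, None] - y[None, :], axis=-1)
        # Double-center each distance matrix
        A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
        B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
        dcov2 = (A * B).mean()
        dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
        denom = np.sqrt(dvar_x * dvar_y)
        return float(np.sqrt(dcov2 / denom)) if denom > 0 else 0.0

    # A nonlinear relation that Pearson misses but distance correlation detects:
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 500)
    y = x ** 2                            # purely nonlinear dependence
    print(abs(np.corrcoef(x, y)[0, 1]))   # near zero
    print(distance_correlation(x, y))     # clearly positive
    ```

    In a screening pipeline, gene pairs whose distance correlation changes little between phenotypes would be filtered out before the (more expensive) edge-count test.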

  12. The cost-effectiveness of cervical self-sampling to improve routine cervical cancer screening: The importance of respondent screening history and compliance

    PubMed Central

    Burger, Emily A; Sy, Stephen; Nygård, Mari; Kim, Jane J

    2016-01-01

    Background Human papillomavirus (HPV) testing allows women to self-collect cervico-vaginal cells at home (i.e., self-sampling). Using primary data from a randomized pilot study, we evaluated the long-term consequences and cost-effectiveness of using self-sampling to improve participation in routine cervical cancer screening in Norway. Methods We compared a strategy reflecting screening participation (using reminder letters) to strategies that involved mailing self-sampling device kits to women non-compliant with screening within a 5-year or 10-year period, under two scenarios: A) self-sampling respondents had moderate under-screening histories, or B) self-sampling respondents had moderate and severe under-screening histories. Model outcomes included quality-adjusted life-years (QALYs) and lifetime costs. The 'most cost-effective' strategy was identified as the one with an incremental cost-effectiveness ratio just below $100,000 per QALY gained. Results Mailing self-sampling device kits to all women non-compliant with screening within a 5-year or 10-year period can be more effective and less costly than the current reminder-letter policy; however, the optimal self-sampling strategy depended on the profile of self-sampling respondents. For example, '10-yearly self-sampling' is preferred ($95,500 per QALY gained) if '5-yearly self-sampling' could only attract moderate under-screeners; however, '5-yearly self-sampling' is preferred if this strategy could additionally attract severe under-screeners. Conclusions Targeted self-sampling of non-compliers likely represents good value for money; however, the preferred strategy is contingent on the screening histories and compliance of respondents. Impact The magnitude of the health benefit and the optimal self-sampling strategy depend on the profile and behavior of respondents. Health authorities should understand these factors before selecting and implementing a self-sampling policy. PMID:27624639
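    The decision rule in this abstract hinges on the incremental cost-effectiveness ratio (ICER): the extra cost divided by the extra QALYs of one strategy over another, judged against a willingness-to-pay threshold. A minimal sketch with hypothetical numbers, not the study's estimates:

    ```python
    def icer(cost_new, qaly_new, cost_ref, qaly_ref):
        """Incremental cost-effectiveness ratio: extra dollars per extra QALY
        of the new strategy relative to the reference strategy."""
        return (cost_new - cost_ref) / (qaly_new - qaly_ref)

    # Hypothetical per-woman lifetime averages, for illustration only:
    reminder_letters = {"cost": 1_000.0, "qaly": 20.000}
    self_sampling = {"cost": 1_400.0, "qaly": 20.005}

    ratio = icer(self_sampling["cost"], self_sampling["qaly"],
                 reminder_letters["cost"], reminder_letters["qaly"])
    # ratio is about $80,000 per QALY gained; below a $100,000/QALY threshold,
    # so self-sampling would be deemed cost-effective in this toy comparison.
    assert ratio < 100_000
    ```

    Note that a strategy can also dominate outright (cost less and gain more QALYs, as the abstract reports for some kit-mailing strategies), in which case no threshold comparison is needed.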

  13. The AMBRE project: Parameterisation of FGK-type stars from the ESO:HARPS archived spectra

    NASA Astrophysics Data System (ADS)

    De Pascale, M.; Worley, C. C.; de Laverny, P.; Recio-Blanco, A.; Hill, V.; Bijaoui, A.

    2014-10-01

    Context. The AMBRE project is a collaboration between the European Southern Observatory (ESO) and the Observatoire de la Côte d'Azur (OCA). It has been established to determine the stellar atmospheric parameters of the archived spectra of four ESO spectrographs. Aims: The analysis of the ESO:HARPS archived spectra for the determination of their atmospheric parameters (effective temperature, surface gravity, global metallicity, and abundance of α-elements over iron) is presented. The sample being analysed (AMBRE:HARPS) covers the period from 2003 to 2010 and comprises 126 688 scientific spectra corresponding to ~17 218 different stars. Methods: For the analysis of the AMBRE:HARPS spectral sample, the automated pipeline developed for the analysis of the AMBRE:FEROS archived spectra was adapted to the characteristics of the HARPS spectra. Within the pipeline, the stellar parameters are determined by the MATISSE algorithm, which was developed at OCA for the analysis of large samples of stellar spectra in the framework of galactic archaeology. In the present application, MATISSE uses the AMBRE grid of synthetic spectra, which covers FGKM-type stars over a range of gravities and metallicities. Results: We first determined the radial velocity and its associated error for the ~15% of the AMBRE:HARPS spectra for which this velocity had not been derived by the ESO:HARPS reduction pipeline. The stellar atmospheric parameters and the associated chemical index [α/Fe], with their associated errors, were then estimated for all the spectra of the AMBRE:HARPS archived sample. Based on key quality criteria, we accepted and delivered to ESO the parameterisation of 93 116 spectra (74% of the total sample). These spectra correspond to ~10 706 stars, each observed between one and several hundred times. 
This automatic parameterisation of the AMBRE:HARPS spectra shows that the large majority of these stars are cool main-sequence dwarfs with metallicities greater than -0.5 dex (as expected, given that HARPS has been extensively used for planet searches around GK-stars).

  14. The HLA-net GENE[RATE] pipeline for effective HLA data analysis and its application to 145 population samples from Europe and neighbouring areas.

    PubMed

    Nunes, J M; Buhler, S; Roessli, D; Sanchez-Mazas, A

    2014-05-01

    In this review, we present for the first time an integrated version of the Gene[rate] computer tools, which have been developed over the last 5 years to analyse human leukocyte antigen (HLA) data in human populations, as well as the results of their application to a large dataset of 145 HLA-typed population samples from Europe and its two neighbouring areas, North Africa and West Asia, now forming part of the Gene[va] database. All these computer tools and genetic data are now publicly available through a newly designed bioinformatics platform, HLA-net, presented here as a main achievement of the HLA-NET scientific programme. The Gene[rate] pipeline offers user-friendly computer tools to estimate allele and haplotype frequencies; to test Hardy-Weinberg equilibrium (HWE), selective neutrality and linkage disequilibrium; to recode HLA data; to convert file formats; to display population frequencies of chosen alleles and haplotypes in selected geographic regions; and to perform genetic comparisons among chosen sets of population samples, including new data provided by the user. Both numerical and graphical outputs are generated, the latter being highly explicit and of publication quality. All these analyses can be performed on the pipeline after scrupulous validation of the population samples' characterisation and HLA typing, reported according to HLA-NET recommendations. The Gene[va] database offers direct access to the HLA-A, -B, -C, -DQA1, -DQB1, -DRB1 and -DPB1 frequencies and summary statistics of 145 population samples that have successfully passed these HLA-NET 'filters', representing three European subregions (South-East, North-East and Central-West Europe) and two neighbouring areas (North Africa, as far as Sudan, and West Asia, as far as South India). The analysis of these data, summarized in this review, shows substantial genetic variation at the regional level in this continental area. 
These results have important implications for population genetics, transplantation and epidemiological studies. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. CASP10-BCL::Fold efficiently samples topologies of large proteins.

    PubMed

    Heinze, Sten; Putnam, Daniel K; Fischer, Axel W; Kohlmann, Tim; Weiner, Brian E; Meiler, Jens

    2015-03-01

    During CASP10 in summer 2012, we tested BCL::Fold for prediction of free modeling (FM) and template-based modeling (TBM) targets. BCL::Fold assembles the tertiary structure of a protein from predicted secondary structure elements (SSEs), omitting the more flexible loop regions early on. This approach enables the sampling of conformational space for larger proteins with more complex topologies. In preparation for CASP11, we analyzed the quality of CASP10 models throughout the prediction pipeline to understand BCL::Fold's ability to sample the native topology, to identify native-like models by scoring and/or clustering approaches, and our ability to add loop regions and side chains to initial SSE-only models. The standout observation is that BCL::Fold sampled topologies de novo with a GDT_TS score > 33% for 12 of 18 test cases and with a topology score > 0.8 for 11 of 18. Despite this sampling success, significant challenges remain in the clustering and loop-generation stages of the pipeline. The clustering approach employed for model selection often failed to identify the most native-like assembly of SSEs for further refinement and submission. It was also observed that for some β-strand proteins, model refinement failed because β-strands were not properly aligned to form hydrogen bonds, removing otherwise accurate models from the pool. Further, BCL::Fold frequently samples non-natural topologies that require loop regions to pass through the center of the protein. © 2015 Wiley Periodicals, Inc.
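    GDT_TS, the sampling metric quoted above, averages the fraction of Cα atoms near their native positions over four distance cutoffs (1, 2, 4, 8 Å). A simplified sketch; the assumption here is that the model is already superposed onto the native structure, whereas the real score maximizes each fraction over superpositions:

    ```python
    import numpy as np

    def gdt_ts(model_ca, native_ca):
        """Simplified GDT_TS: average over 1/2/4/8 A cutoffs of the fraction of
        C-alpha atoms within the cutoff of their native positions. Assumes the
        model is pre-superposed onto the native structure."""
        d = np.linalg.norm(np.asarray(model_ca, dtype=float)
                           - np.asarray(native_ca, dtype=float), axis=1)
        return 100.0 * np.mean([(d <= c).mean() for c in (1.0, 2.0, 4.0, 8.0)])

    # Toy 4-residue chain; displace one C-alpha by 3 Angstroms:
    native = np.array([[0.0, 0, 0], [3.8, 0, 0], [7.6, 0, 0], [11.4, 0, 0]])
    model = native.copy()
    model[3] += [0.0, 3.0, 0.0]
    print(gdt_ts(model, native))  # 87.5: the moved atom passes the 4 and 8 A cutoffs only
    ```

    A score above 33%, as in the abstract, roughly means a third of residues (averaged over cutoffs) sit close to their native positions, which is commonly taken to indicate a sampled native-like topology.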

  16. Accuracy and Cost-Effectiveness of Cervical Cancer Screening by High-Risk HPV DNA Testing of Self-Collected Vaginal Samples

    PubMed Central

    Balasubramanian, Akhila; Kulasingam, Shalini L.; Baer, Atar; Hughes, James P.; Myers, Evan R.; Mao, Constance; Kiviat, Nancy B.; Koutsky, Laura A.

    2010-01-01

    Objective Estimate the accuracy and cost-effectiveness of cervical cancer screening strategies based on high-risk HPV DNA testing of self-collected vaginal samples. Materials and Methods A subset of 1,665 women (18-50 years of age) participating in a cervical cancer screening study were screened by liquid-based cytology and by high-risk HPV DNA testing of both self-collected vaginal swab samples and clinician-collected cervical samples. Women with positive/abnormal screening test results and a subset of women with negative screening test results were triaged to colposcopy. Based on individual and combined test results, five screening strategies were defined. Estimates of sensitivity and specificity for cervical intraepithelial neoplasia grade 2 or worse were calculated, and a Markov model was used to estimate the incremental cost-effectiveness ratios (ICERs) for each strategy. Results Compared to cytology-based screening, high-risk HPV DNA testing of self-collected vaginal samples was more sensitive (85%, 95% CI = 76%-94%, versus 68%, 95% CI = 58%-78% for cytology) but less specific (73%, 95% CI = 67%-79%, versus 89%, 95% CI = 86%-91% for cytology). A strategy of high-risk HPV DNA testing of self-collected vaginal samples followed by cytology triage of HPV-positive women was comparably sensitive (75%, 95% CI = 64%-86%) and specific (88%, 95% CI = 85%-92%) to cytology-based screening. In-home self-collection for high-risk HPV DNA detection followed by in-clinic cytology triage had a slightly lower lifetime cost and a slightly higher quality-adjusted life expectancy than cytology-based screening (the ICER of triennial screening compared to no screening was $9,871/QALY and $12,878/QALY, respectively). Conclusions Triennial screening by high-risk HPV DNA testing of in-home, self-collected vaginal samples followed by in-clinic cytology triage was cost-effective. PMID:20592553
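    Sensitivity and specificity estimates like those above derive from a 2x2 table of screening results against the colposcopy reference standard. A minimal helper with a normal-approximation confidence interval; the counts below are hypothetical, not the study's data:

    ```python
    import math

    def sens_spec(tp, fn, tn, fp, z=1.96):
        """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), each with a
        normal-approximation (Wald) 95% confidence interval."""
        def prop_ci(k, n):
            p = k / n
            half = z * math.sqrt(p * (1 - p) / n)
            return p, max(0.0, p - half), min(1.0, p + half)
        return {"sensitivity": prop_ci(tp, tp + fn),
                "specificity": prop_ci(tn, tn + fp)}

    # Hypothetical counts for illustration only:
    res = sens_spec(tp=85, fn=15, tn=730, fp=270)
    assert abs(res["sensitivity"][0] - 0.85) < 1e-9   # 85 / (85 + 15)
    assert abs(res["specificity"][0] - 0.73) < 1e-9   # 730 / (730 + 270)
    ```

    The Wald interval is the simplest choice; studies often use exact (Clopper-Pearson) or Wilson intervals instead, which behave better near 0% and 100%.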

  17. 75 FR 43612 - Pipeline Safety: Request for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-26

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2010-0042] Pipeline Safety: Request for Special Permit AGENCY: Pipeline and Hazardous Materials..., Inc., a natural gas pipeline operator, seeking relief from compliance with certain requirements in the...

  18. 78 FR 52820 - Pipeline Safety: Request for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-26

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2013-0181] Pipeline Safety: Request for Special Permit AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. ACTION: Notice. SUMMARY: Pursuant to the Federal pipeline safety laws...

  19. 75 FR 13342 - Pipeline Safety: Workshop on Distribution Pipeline Construction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-19

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... natural gas distribution construction. Natural gas distribution pipelines are subject to a unique subset... distribution pipeline construction practices. This workshop will focus solely on natural gas distribution...

  20. 78 FR 52821 - Pipeline Safety: Request for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-26

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2013-0146] Pipeline Safety: Request for Special Permit AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. ACTION: Notice. SUMMARY: Pursuant to the Federal pipeline safety laws...

  1. 77 FR 34458 - Pipeline Safety: Requests for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-11

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2012-0112] Pipeline Safety: Requests for Special Permit AGENCY: Pipeline and Hazardous Materials... BreitBurn Energy Company LP, two natural gas pipeline operators, seeking relief from compliance with...

  2. 75 FR 66425 - Pipeline Safety: Request for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-28

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2010-0124] Pipeline Safety: Request for Special Permit AGENCY: Pipeline and Hazardous Materials... Company, LP, a natural gas pipeline operator, seeking relief from compliance with certain requirements in...

  3. 77 FR 73637 - Alliance Pipeline L.P.; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-11

    ... Pipeline L.P.; Notice of Application Take notice that on November 26, 2012, Alliance Pipeline L.P..., Manager, Regulatory Affairs, Alliance Pipeline Ltd. on behalf of Alliance Pipeline L.P., 800, 605-5 Ave...] BILLING CODE 6717-01-P ...

  4. 49 CFR 192.951 - Where does an operator file a report?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline... Pipeline Safety, Pipeline and Hazardous Materials Safety Administration, U.S. Department of Transportation...

  5. 78 FR 5866 - Pipeline Safety: Annual Reports and Validation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-28

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID PHMSA-2012-0319] Pipeline Safety: Annual Reports and Validation AGENCY: Pipeline and Hazardous Materials... 2012 gas transmission and gathering annual reports, remind pipeline owners and operators to validate...

  6. 77 FR 51848 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-27

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... Program for Gas Distribution Pipelines. DATES: Interested persons are invited to submit comments on or.... These regulations require operators of hazardous liquid pipelines and gas pipelines to develop and...

  7. 77 FR 26822 - Pipeline Safety: Verification of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-07

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2012-0068] Pipeline Safety: Verification of Records AGENCY: Pipeline and Hazardous Materials... issuing an Advisory Bulletin to remind operators of gas and hazardous liquid pipeline facilities to verify...

  8. 77 FR 74275 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-13

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No.... These regulations require operators of hazardous liquid pipelines and gas pipelines to develop and... control room. Affected Public: Operators of both natural gas and hazardous liquid pipeline systems. Annual...

  9. ALMA Pipeline: Current Status

    NASA Astrophysics Data System (ADS)

    Shinnaga, H.; Humphreys, E.; Indebetouw, R.; Villard, E.; Kern, J.; Davis, L.; Miura, R. E.; Nakazato, T.; Sugimoto, K.; Kosugi, G.; Akiyama, E.; Muders, D.; Wyrowski, F.; Williams, S.; Lightfoot, J.; Kent, B.; Momjian, E.; Hunter, T.; ALMA Pipeline Team

    2015-12-01

    The ALMA Pipeline is the automated data reduction tool that runs on ALMA data. The current version of the ALMA Pipeline produces science-quality data products for standard interferometric observing modes, up to and including the calibration process. The ALMA Pipeline comprises (1) heuristics, in the form of Python scripts, that select the best processing parameters, and (2) contexts that are kept for bookkeeping of the data processing. The ALMA Pipeline produces a "weblog" that showcases detailed plots so that users can judge how each calibration step was handled. The ALMA Interferometric Pipeline was conditionally accepted in March 2014 after processing Cycle 0 and Cycle 1 data sets. Since Cycle 2, the ALMA Pipeline has been used for ALMA data reduction and quality assurance for projects whose observing modes it supports. Pipeline tasks are available based on CASA version 4.2.2, and the first public pipeline release, called CASA 4.2.2-pipe, has been available since October 2014. One can reduce ALMA data with both CASA tasks and pipeline tasks using CASA version 4.2.2-pipe.

  10. G-protein-coupled receptors: new approaches to maximise the impact of GPCRS in drug discovery.

    PubMed

    Davey, John

    2004-04-01

    IBC's Drug Discovery Technology Series is a group of conferences highlighting technological advances and applications in niche areas of the drug discovery pipeline. This 2-day meeting focused on G-protein-coupled receptors (GPCRs), probably the most important and certainly the most valuable class of targets for drug discovery. The meeting was chaired by J Beesley (Vice President, European Business Development for LifeSpan Biosciences, Seattle, USA) and included 17 presentations on various aspects of GPCR activity, drug screens and therapeutic analyses. Keynote Addresses covered two of the emerging areas in GPCR regulation; receptor dimerisation (G Milligan, Professor of Molecular Pharmacology and Biochemistry, University of Glasgow, UK) and proteins that interact with GPCRs (J Bockaert, Laboratory of Functional Genomics, CNRS Montpellier, France). A third Keynote Address from W Thomsen (Director of GPCR Drug Screening, Arena Pharmaceuticals, USA) discussed Arena's general approach to drug discovery and illustrated this with reference to the development of an agonist with potential efficacy in Type II diabetes.

  11. Advances in the Study of Heart Development and Disease Using Zebrafish

    PubMed Central

    Brown, Daniel R.; Samsa, Leigh Ann; Qian, Li; Liu, Jiandong

    2016-01-01

    Animal models of cardiovascular disease are key players in the translational medicine pipeline used to define the conserved genetic and molecular basis of disease. Congenital heart diseases (CHDs) are the most common type of human birth defect and feature structural abnormalities that arise during cardiac development and maturation. The zebrafish, Danio rerio, is a valuable vertebrate model organism, offering advantages over traditional mammalian models. These advantages include the rapid, stereotyped and external development of transparent embryos produced in large numbers from inexpensively housed adults, vast capacity for genetic manipulation, and amenability to high-throughput screening. With the help of modern genetics and a sequenced genome, zebrafish have led to insights in cardiovascular diseases ranging from CHDs to arrhythmia and cardiomyopathy. Here, we discuss the utility of zebrafish as a model system and summarize zebrafish cardiac morphogenesis with emphasis on parallels to human heart diseases. Additionally, we discuss the specific tools and experimental platforms utilized in the zebrafish model including forward screens, functional characterization of candidate genes, and high throughput applications. PMID:27335817

  12. High performance in silico virtual drug screening on many-core processors.

    PubMed

    McIntosh-Smith, Simon; Price, James; Sessions, Richard B; Ibarra, Amaurys A

    2015-05-01

    Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel's Xeon Phi and multi-core CPUs with SIMD instruction sets.

  13. High performance in silico virtual drug screening on many-core processors

    PubMed Central

    Price, James; Sessions, Richard B; Ibarra, Amaurys A

    2015-01-01

    Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel’s Xeon Phi and multi-core CPUs with SIMD instruction sets. PMID:25972727

  14. 78 FR 65429 - Pipeline Safety: Request for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-31

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2010-0041] Pipeline Safety: Request for Special Permit AGENCY: Pipeline and Hazardous Materials...-0041 Williams Gas Pipeline 49 CFR 192.150........ To authorize the extension Company, LLC (WGP). of a...

  15. 49 CFR 192.620 - Alternative maximum allowable operating pressure for certain steel pipelines.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY...) At points where gas with potentially deleterious contaminants enters the pipeline, use filter...

  16. 77 FR 58616 - Pipeline Safety: Information Collection Activities, Revision to Gas Transmission and Gathering...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-21

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2012-0024] Pipeline Safety: Information Collection Activities, Revision to Gas Transmission and Gathering Pipeline Systems Annual Report, Gas Transmission and Gathering Pipeline Systems Incident Report...

  17. 78 FR 14877 - Pipeline Safety: Incident and Accident Reports

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-07

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID PHMSA-2013-0028] Pipeline Safety: Incident and Accident Reports AGENCY: Pipeline and Hazardous Materials... PHMSA F 7100.2--Incident Report--Natural and Other Gas Transmission and Gathering Pipeline Systems and...

  18. 75 FR 4136 - Pipeline Safety: Request To Modify Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-26

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2009-0377] Pipeline Safety: Request To Modify Special Permit AGENCY: Pipeline and Hazardous... coating on its gas pipeline. DATES: Submit any comments regarding this special permit modification request...

  19. 75 FR 73160 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-29

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No...-Related Conditions on Gas, Hazardous Liquid, and Carbon Dioxide Pipelines and Liquefied Natural Gas... Pipelines and Liquefied Natural Gas Facilities.'' The Pipeline Safety Laws (49 U.S.C. 60132) require each...

  20. 75 FR 9018 - Pipeline Safety: Random Drug Testing Rate

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-26

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID PHMSA-2010-0034] Pipeline Safety: Random Drug Testing Rate AGENCY: Pipeline and Hazardous Materials... pipelines and operators of liquefied natural gas facilities must select and test a percentage of covered...

  1. 77 FR 2606 - Pipeline Safety: Random Drug Testing Rate

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-18

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID PHMSA-2012-0004] Pipeline Safety: Random Drug Testing Rate AGENCY: Pipeline and Hazardous Materials... pipelines and operators of liquefied natural gas facilities must select and test a percentage of covered...

  2. 75 FR 45591 - Pipeline Safety: Notice of Technical Pipeline Safety Advisory Committee Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-03

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... Committee Meetings AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. ACTION... safety standards, risk assessments, and safety policies for natural gas pipelines and for hazardous...

  3. Self-Sampling for Human Papillomavirus Testing: Increased Cervical Cancer Screening Participation and Incorporation in International Screening Programs

    PubMed Central

    Gupta, Sarah; Palmer, Christina; Bik, Elisabeth M.; Cardenas, Juan P.; Nuñez, Harold; Kraal, Laurens; Bird, Sara W.; Bowers, Jennie; Smith, Alison; Walton, Nathaniel A.; Goddard, Audrey D.; Almonacid, Daniel E.; Zneimer, Susan; Richman, Jessica; Apte, Zachary S.

    2018-01-01

    In most industrialized countries, screening programs for cervical cancer have shifted from cytology (Pap smear or ThinPrep) alone on clinician-obtained samples to the addition of screening for human papillomavirus (HPV), its main causative agent. For HPV testing, self-sampling instead of clinician-sampling has proven to be equally accurate, in particular for assays that use nucleic acid amplification techniques. In addition, HPV testing of self-collected samples in combination with a follow-up Pap smear in case of a positive result is more effective in detecting precancerous lesions than a Pap smear alone. Self-sampling for HPV testing has already been adopted by some countries, while others have started trials to evaluate its incorporation into national cervical cancer screening programs. Self-sampling may result in more individuals willing to participate in cervical cancer screening, because it removes many of the barriers that prevent women, especially those in low socioeconomic and minority populations, from participating in regular screening programs. Several studies have shown that the majority of women who have been underscreened but who tested HPV-positive in a self-obtained sample will visit a clinic for follow-up diagnosis and management. In addition, a self-collected sample can also be used for vaginal microbiome analysis, which can provide additional information about HPV infection persistence as well as vaginal health in general. PMID:29686981

  4. Numerical research on the lateral global buckling characteristics of a high temperature and pressure pipeline with two initial imperfections

    PubMed Central

    Liu, Wenbin; Liu, Aimin

    2018-01-01

    With the exploitation of offshore oil and gas gradually moving to deep water, higher temperature and pressure differences are applied to the pipeline system, making global buckling of the pipeline more serious. For unburied deep-water pipelines, lateral buckling is the major buckling form. Initial imperfections are widespread in pipeline systems due to manufacturing defects or the influence of an uneven seabed, and the distribution and geometric features of initial imperfections are random. They can be divided into two kinds based on shape: single-arch imperfections and double-arch imperfections. This paper analyzed the global buckling process of a pipeline with two initial imperfections using a numerical simulation method, and revealed how the ratio of the imperfections' spacing to the imperfection wavelength and the combination of imperfections affect the buckling process. The results show that a pipeline with two initial imperfections may suffer superposition of global buckling. The growth ratios of buckling displacement, axial force and bending moment in the superposition zone are several times larger than in a pipeline without buckling superposition. The ratio of the imperfections' spacing to the imperfection wavelength determines whether a pipeline suffers buckling superposition. The potential failure point of a pipeline exhibiting buckling superposition is the same as that of a pipeline without it, but the failure risk is much higher. The shape and direction of two nearby imperfections also affect the failure risk of a pipeline exhibiting global buckling superposition: the failure risk of a pipeline with two double-arch imperfections is higher than that of a pipeline with two single-arch imperfections. PMID:29554123

  5. Genome-Scale Screen for DNA Methylation-Based Detection Markers for Ovarian Cancer

    PubMed Central

    Houshdaran, Sahar; Shen, Hui; Widschwendter, Martin; Daxenbichler, Günter; Long, Tiffany; Marth, Christian; Laird-Offringa, Ite A.; Press, Michael F.; Dubeau, Louis; Siegmund, Kimberly D.; Wu, Anna H.; Groshen, Susan; Chandavarkar, Uma; Roman, Lynda D.; Berchuck, Andrew; Pearce, Celeste L.; Laird, Peter W.

    2011-01-01

    Background The identification of sensitive biomarkers for the detection of ovarian cancer is of high clinical relevance for early detection and/or monitoring of disease recurrence. We developed a systematic multi-step biomarker discovery and verification strategy to identify candidate DNA methylation markers for the blood-based detection of ovarian cancer. Methodology/Principal Findings We used the Illumina Infinium platform to analyze the DNA methylation status of 27,578 CpG sites in 41 ovarian tumors. We employed a marker selection strategy that emphasized sensitivity by requiring consistency of methylation across tumors, while achieving specificity by excluding markers with methylation in control leukocyte or serum DNA. Our verification strategy involved testing the ability of identified markers to monitor disease burden in serially collected serum samples from ovarian cancer patients who had undergone surgical tumor resection compared to CA-125 levels. We identified one marker, IFFO1 promoter methylation (IFFO1-M), that is frequently methylated in ovarian tumors and that is rarely detected in the blood of normal controls. When tested in 127 serially collected sera from ovarian cancer patients, IFFO1-M showed post-resection kinetics significantly correlated with serum CA-125 measurements in six out of 16 patients. Conclusions/Significance We implemented an effective marker screening and verification strategy, leading to the identification of IFFO1-M as a blood-based candidate marker for sensitive detection of ovarian cancer. Serum levels of IFFO1-M displayed post-resection kinetics consistent with a reflection of disease burden. We anticipate that IFFO1-M and other candidate markers emerging from this marker development pipeline may provide disease detection capabilities that complement existing biomarkers. PMID:22163280
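    The two-sided filter described above (require consistent methylation across tumors for sensitivity, exclude any methylation in control leukocyte/serum DNA for specificity) can be sketched as a simple beta-value screen. This is an illustrative reconstruction, not the paper's code; the function name, data layout, and thresholds are assumptions.

```python
# Hypothetical sketch of the marker-selection filter: keep CpG sites that
# are methylated in nearly all tumors but essentially unmethylated in all
# controls. Beta values range from 0 (unmethylated) to 1 (methylated).
def select_markers(tumor_beta, control_beta,
                   tumor_min=0.3, tumor_frac=0.9, control_max=0.05):
    """tumor_beta / control_beta: {cpg_id: [beta value per sample]}."""
    selected = []
    for cpg, tumors in tumor_beta.items():
        # sensitivity criterion: methylated in most tumors
        frac_methylated = sum(b >= tumor_min for b in tumors) / len(tumors)
        # specificity criterion: unmethylated in every control sample
        controls = control_beta.get(cpg, [])
        if frac_methylated >= tumor_frac and all(b <= control_max for b in controls):
            selected.append(cpg)
    return selected

tumors = {"cg_A": [0.6, 0.7, 0.5], "cg_B": [0.6, 0.1, 0.1]}
controls = {"cg_A": [0.01, 0.02], "cg_B": [0.0]}
print(select_markers(tumors, controls))  # ['cg_A']
```

In practice the thresholds would be tuned on the Infinium platform's beta-value distribution; the point is only the shape of the consistency-plus-exclusion logic.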

  6. 49 CFR 192.903 - What definitions apply to this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline... pipeline segment means a segment of gas transmission pipeline located in a high consequence area. The terms...

  7. 76 FR 11853 - Pipeline Safety: Request for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-03

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2011-0027] Pipeline Safety: Request for Special Permit AGENCY: Pipeline and Hazardous Materials... a 24-inch mainline natural gas pipeline, 595 feet in length. The first segment of the special permit...

  8. 75 FR 72877 - Pipeline Safety: Updates to Pipeline and Liquefied Natural Gas Reporting Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-26

    ... liquid pipelines, and liquefied natural gas (LNG) facilities. These revisions will enhance PHMSA's... of natural gas pipelines, hazardous liquid pipelines, and LNG facilities. Specifically, PHMSA... commodity transported, and type of commodity transported. 8. Modify hazardous liquid operator telephonic...

  9. 77 FR 7572 - Alliance Pipeline L.P.; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-13

    ...] Alliance Pipeline L.P.; Notice of Application Take notice that on January 25, 2012, Alliance Pipeline L.P... Pipeline Inc., Managing General Partner of Alliance Pipeline L.P., 800, 605--5 Ave. SW., Calgary, Alberta...; 8:45 am] BILLING CODE 6717-01-P ...

  10. 78 FR 65427 - Pipeline Safety: Reminder of Requirements for Liquefied Petroleum Gas and Utility Liquefied...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-31

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2013-0097] Pipeline Safety: Reminder of Requirements for Liquefied Petroleum Gas and Utility Liquefied Petroleum Gas Pipeline Systems AGENCY: Pipeline and Hazardous Materials Safety Administration...

  11. Thinking on Sichuan-Chongqing gas pipeline transportation system reform under market-oriented conditions

    NASA Astrophysics Data System (ADS)

    Duan, Yanzhi

    2017-01-01

    The gas pipeline networks in the Sichuan and Chongqing (Sichuan-Chongqing) region form a fully fledged gas pipeline transportation system in China, which supports and promotes the rapid development of the gas market in the region. As the market-oriented economy develops further, it is necessary to deepen reform of the pipeline system in the areas of the investment/financing system, the operation system and the pricing system, so as to lay a solid foundation for improving future gas production and marketing capability, to adapt to the national gas system reform, and to achieve the objectives of pipeline construction with multiparty participation, improved pipeline transportation efficiency, and fair and rational pipeline transportation prices. This article addresses the main thinking on reform in these three areas and the major deployment steps, and recommends corresponding measures: developing a shared pipeline economy, providing financial support for pipeline construction, setting up an independent regulatory agency to strengthen industrial supervision of gas pipeline transportation, and promoting the construction of a regional gas trade market.

  12. 49 CFR 179.103-3 - Venting, loading and unloading valves, measuring and sampling devices.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) SPECIFICATIONS FOR TANK CARS Specifications for Pressure Tank Car Tanks (Classes DOT-105, 109... not less than one-fourth inch in thickness. (c) When tank car is used to transport liquefied flammable...

  13. 49 CFR 179.103-3 - Venting, loading and unloading valves, measuring and sampling devices.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) SPECIFICATIONS FOR TANK CARS Specifications for Pressure Tank Car Tanks (Classes DOT-105, 109... not less than one-fourth inch in thickness. (c) When tank car is used to transport liquefied flammable...

  14. 49 CFR 179.103-3 - Venting, loading and unloading valves, measuring and sampling devices.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Transportation PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION HAZARDOUS MATERIALS REGULATIONS SPECIFICATIONS FOR TANK CARS Specifications for Pressure Tank Car Tanks (Classes DOT... not less than one-fourth inch in thickness. (c) When tank car is used to transport liquefied flammable...

  15. 49 CFR 179.103-3 - Venting, loading and unloading valves, measuring and sampling devices.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) SPECIFICATIONS FOR TANK CARS Specifications for Pressure Tank Car Tanks (Classes DOT-105, 109... not less than one-fourth inch in thickness. (c) When tank car is used to transport liquefied flammable...

  16. 30 CFR 250.1202 - Liquid hydrocarbon measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... chapters of the API MPMS as incorporated by reference in 30 CFR 250.198, when obtaining net standard volume... pipeline (retrograde) condensate volumes as allocated to the individual leases or units. (b) What are the... displacement (pipe) prover, master meter, or tank prover; (iii) A proportional-to-flow sampling device pulsed...

  17. Thermographic identification of wetted insulation on pipelines in the arctic oilfields

    NASA Astrophysics Data System (ADS)

    Miles, Jonathan J.; Dahlquist, A. L.; Dash, L. C.

    2006-04-01

    Steel pipes used at Alaskan oil-producing facilities to transport production crude, gas, and injection water between well house and drill site manifold building, and along cross-country lines to and from central processing facilities, must be insulated in order to protect against the severely cold temperatures that are common during the arctic winter. A problem inherent in this system is that the sealed joints between adjacent layers of the outer wrap will over time degrade and can allow water to breach the system and migrate into and through the insulation. The moisture can ultimately interact with the steel pipe and trigger external corrosion which, if left unchecked, can lead to pipe failure and spillage. A New Technology Evaluation Guideline prepared for ConocoPhillips Alaska, Inc. in 2001 is intended to guide the consideration of new technologies for pipeline inspection in a manner that is safer, faster, and more cost-effective than existing techniques. Infrared thermography (IRT) was identified as promising for identification of wetted insulation regions given that it offers the means to scan a large area quickly from a safe distance, and measure the temperature field associated with that area. However, it was also recognized that there are limiting factors associated with an IRT-based approach including instrument sensitivity, cost, portability, functionality in hostile (arctic) environments, and training required for proper interpretation of data. A methodology was developed and tested in the field that provides a technique to conduct large-scale screening for wetted regions along insulated pipelines. The results of predictive modeling analysis and testing demonstrate the feasibility, under certain conditions, of identifying wetted insulation areas. The results of the study and recommendations for implementation are described.

  18. Integrated machine learning, molecular docking and 3D-QSAR based approach for identification of potential inhibitors of trypanosomal N-myristoyltransferase.

    PubMed

    Singh, Nidhi; Shah, Priyanka; Dwivedi, Hemlata; Mishra, Shikha; Tripathi, Renu; Sahasrabuddhe, Amogh A; Siddiqi, Mohammad Imran

    2016-11-15

    N-Myristoyltransferase (NMT) catalyzes the transfer of myristate to the amino-terminal glycine of a subset of proteins, a co-translational modification involved in trafficking substrate proteins to membrane locations, stabilization and protein-protein interactions. It is a studied and validated pre-clinical drug target for fungal and parasitic infections. In the present study, a machine learning approach, docking studies and CoMFA analysis have been integrated with the objective of translating this knowledge into a pipelined workflow for the identification of putative hits through the screening of large compound libraries. In the proposed pipeline, the reported parasitic NMT inhibitors have been used to develop predictive machine learning classification models. Simultaneously, a TbNMT complex model was generated to establish the relationship between the binding mode of the inhibitors for LmNMT and TbNMT through molecular dynamics simulation studies. A 3D-QSAR model was developed and used to predict the activity of the proposed hits in the subsequent step. The hits classified as active based on the machine learning model were assessed as potential anti-trypanosomal NMT inhibitors through molecular docking studies, activity prediction using the QSAR model and visual inspection. In the final step, the proposed pipeline was validated through in vitro experiments. A total of seven hits were proposed and tested in vitro for evaluation of dual inhibitory activity against Leishmania donovani and Trypanosoma brucei. Out of these, five compounds showed significant inhibition against both organisms. The common topmost active compound SEW04173 belongs to a pyrazole carboxylate scaffold and is anticipated to enrich the chemical space with enhanced potency through optimization.
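    The first pipeline stage above, classifying library compounds as active or inactive from known inhibitors, can be illustrated with a minimal ligand-based rule: a Tanimoto nearest-neighbor classifier over binary molecular fingerprints. This stands in for the paper's actual machine-learning models; the fingerprints, threshold, and function names below are illustrative assumptions.

```python
# Minimal sketch of a similarity-based active/inactive classifier.
# Fingerprints are represented as sets of 'on' bit positions.
def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprints."""
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter) if (a or b) else 0.0

def classify_active(query, known_actives, threshold=0.5):
    """Flag a query compound as active if it is sufficiently similar
    to any known active (a 1-nearest-neighbor decision rule)."""
    return max(tanimoto(query, fp) for fp in known_actives) >= threshold

known_actives = [{1, 4, 7, 9}, {2, 4, 8}]
print(classify_active({1, 4, 7}, known_actives))  # True  (Tanimoto 0.75)
print(classify_active({3, 5, 6}, known_actives))  # False
```

Real workflows would use hashed circular fingerprints (e.g. via RDKit) and a trained classifier rather than a fixed similarity cutoff, but the decision structure is the same.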

  19. 49 CFR 195.401 - General requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... this part: (1) An interstate pipeline, other than a low-stress pipeline, on which construction was..., other than a low-stress pipeline, on which construction was begun after July 31, 1977, that transports hazardous liquid. (3) An intrastate pipeline, other than a low-stress pipeline, on which construction was...

  20. 49 CFR 192.717 - Transmission lines: Permanent field repair of leaks.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Maintenance... pipe in size. (4) If the leak is on a submerged offshore pipeline or submerged pipeline in inland...

  1. 76 FR 21423 - Pipeline Safety: Request for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-15

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2011-0063] Pipeline Safety: Request for Special Permit AGENCY: Pipeline and Hazardous Materials... application is for two 30-inch segments, segments 3 and 4, of the TPL 330 natural gas pipeline located in St...

  2. 77 FR 19414 - Pipeline Safety: Public Comment on Leak and Valve Studies Mandated by the Pipeline Safety...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-30

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... Safety, Regulatory Certainty, and Job Creation Act of 2011 AGENCY: Pipeline and Hazardous Materials... Transportation (DOT), Pipeline and Hazardous Materials Safety Administration (PHMSA) is providing an important...

  3. 75 FR 35516 - Pipeline Safety: Request for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-22

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2010-0147] Pipeline Safety: Request for Special Permit AGENCY: Pipeline and Hazardous Materials... with the Class 1 location portion of a 7.4 mile natural gas pipeline to be constructed in Alaska. This...

  4. 49 CFR 192.513 - Test requirements for plastic pipelines.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 3 2011-10-01 2011-10-01 false Test requirements for plastic pipelines. 192.513 Section 192.513 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND... Test requirements for plastic pipelines. (a) Each segment of a plastic pipeline must be tested in...

  5. 49 CFR 192.513 - Test requirements for plastic pipelines.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Test requirements for plastic pipelines. 192.513 Section 192.513 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND... Test requirements for plastic pipelines. (a) Each segment of a plastic pipeline must be tested in...

  6. 49 CFR 192.513 - Test requirements for plastic pipelines.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 3 2013-10-01 2013-10-01 false Test requirements for plastic pipelines. 192.513 Section 192.513 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND... Test requirements for plastic pipelines. (a) Each segment of a plastic pipeline must be tested in...

  7. 49 CFR 192.513 - Test requirements for plastic pipelines.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 3 2014-10-01 2014-10-01 false Test requirements for plastic pipelines. 192.513 Section 192.513 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND... Test requirements for plastic pipelines. (a) Each segment of a plastic pipeline must be tested in...

  8. 49 CFR 192.513 - Test requirements for plastic pipelines.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 3 2012-10-01 2012-10-01 false Test requirements for plastic pipelines. 192.513 Section 192.513 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND... Test requirements for plastic pipelines. (a) Each segment of a plastic pipeline must be tested in...

  9. 78 FR 38803 - Pipeline Safety: Information Collection Activities, Revisions to Incident and Annual Reports for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-27

    ... Reports for Gas Pipeline Operators AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... (OMB) Control No. 2137-0522, titled ``Incident and Annual Reports for Gas Pipeline Operators.'' PHMSA...

  10. 75 FR 5640 - Pipeline Safety: Implementation of Revised Incident/Accident Report Forms for Distribution...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-03

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... Distribution Systems, Gas Transmission and Gathering Systems, and Hazardous Liquid Systems AGENCY: Pipeline and.... SUMMARY: This notice advises owners and operators of gas pipeline facilities and hazardous liquid pipeline...

  11. Use of Dried Capillary Blood Sampling for Islet Autoantibody Screening in Relatives: A Feasibility Study.

    PubMed

    Bingley, Polly J; Rafkin, Lisa E; Matheson, Della; Steck, Andrea K; Yu, Liping; Henderson, Courtney; Beam, Craig A; Boulware, David C

    2015-12-01

    Islet autoantibody testing provides the basis for assessment of risk of progression to type 1 diabetes. We set out to determine the feasibility and acceptability of dried capillary blood spot-based screening to identify islet autoantibody-positive relatives potentially eligible for inclusion in prevention trials. Dried blood spot (DBS) and venous samples were collected from 229 relatives participating in the TrialNet Pathway to Prevention Study. Both samples were tested for glutamic acid decarboxylase, islet antigen 2, and zinc transporter 8 autoantibodies, and venous samples were additionally tested for insulin autoantibodies and islet cell antibodies. We defined multiple autoantibody positive as two or more autoantibodies in venous serum and DBS screen positive if one or more autoantibodies were detected. Participant questionnaires compared the sample collection methods. Of 44 relatives who were multiple autoantibody positive in venous samples, 42 (95.5%) were DBS screen positive, and DBS accurately detected 145 of 147 autoantibody-negative relatives (98.6%). Capillary blood sampling was perceived as more painful than venous blood draw, but 60% of participants would prefer initial screening using home fingerstick with clinic visits only required if autoantibodies were found. Capillary blood sampling could facilitate screening for type 1 diabetes prevention studies.
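    The concordance figures reported above (42 of 44 multiple-positive relatives flagged by DBS; 145 of 147 autoantibody-negative relatives correctly negative) are ordinary sensitivity and specificity calculations, sketched below. The function name is illustrative, not from the study.

```python
# Sensitivity/specificity of the DBS screen against the venous reference,
# using the counts reported in the abstract.
def screen_performance(tp, fn, tn, fp):
    """tp/fn: reference-positive relatives detected/missed by the screen;
    tn/fp: reference-negative relatives correctly/incorrectly flagged."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# 42 of 44 multiple-autoantibody-positive relatives were DBS screen positive;
# 145 of 147 autoantibody-negative relatives were DBS negative.
sens, spec = screen_performance(tp=42, fn=2, tn=145, fp=2)
print(round(sens * 100, 1), round(spec * 100, 1))  # 95.5 98.6
```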

  12. 75 FR 35366 - Pipeline Safety: Applying Safety Regulation to All Rural Onshore Hazardous Liquid Low-Stress Lines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-22

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... Onshore Hazardous Liquid Low-Stress Lines AGENCY: Pipeline and Hazardous Materials Safety Administration... pipelines to perform a complete ``could affect'' analysis to determine which rural low-stress pipeline...

  13. 49 CFR 192.917 - How does an operator identify potential threats to pipeline integrity and use the threat...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline Integrity Management § 192...

  14. 78 FR 71033 - Pipeline Safety: Information Collection Activities, Revisions to Incident and Annual Reports for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-27

    ... Reports for Gas Pipeline Operators AGENCY: Pipeline and Hazardous Materials Safety Administration, DOT... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... Pipeline Systems; PHMSA F 7100.2-1 Annual Report for Calendar Year 20xx Natural and Other Gas Transmission...

  15. 49 CFR 192.937 - What is a continual process of evaluation and assessment to maintain a pipeline's integrity?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline Integrity Management § 192.937 What is a...

  16. 76 FR 28326 - Pipeline Safety: National Pipeline Mapping System Data Submissions and Submission Dates for Gas...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-17

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR 191... Reports AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Issuance of... Pipeline and Hazardous Materials Safety Administration (PHMSA) published a final rule on November 26, 2010...

  17. 49 CFR 192.755 - Protecting cast-iron pipelines.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 3 2012-10-01 2012-10-01 false Protecting cast-iron pipelines. 192.755 Section... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Maintenance § 192.755 Protecting cast... pipeline is disturbed: (a) That segment of the pipeline must be protected, as necessary, against damage...

  18. 49 CFR 192.755 - Protecting cast-iron pipelines.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 3 2011-10-01 2011-10-01 false Protecting cast-iron pipelines. 192.755 Section... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Maintenance § 192.755 Protecting cast... pipeline is disturbed: (a) That segment of the pipeline must be protected, as necessary, against damage...

  19. 49 CFR 192.755 - Protecting cast-iron pipelines.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 3 2014-10-01 2014-10-01 false Protecting cast-iron pipelines. 192.755 Section... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Maintenance § 192.755 Protecting cast... pipeline is disturbed: (a) That segment of the pipeline must be protected, as necessary, against damage...

  20. 49 CFR 192.755 - Protecting cast-iron pipelines.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 3 2013-10-01 2013-10-01 false Protecting cast-iron pipelines. 192.755 Section... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Maintenance § 192.755 Protecting cast... pipeline is disturbed: (a) That segment of the pipeline must be protected, as necessary, against damage...

  1. Comparisons of sediment losses from a newly constructed cross-country natural gas pipeline and an existing in-road pipeline

    Treesearch

    Pamela J. Edwards; Bridget M. Harrison; Daniel J. Holz; Karl W.J. Williard; Jon E. Schoonover

    2014-01-01

    Sediment loads were measured for about one year from natural gas pipelines in two studies in north central West Virginia. One study involved a 1-year-old pipeline buried within the bed of a 25-year-old skid road, and the other involved a newly constructed cross-country pipeline. Both pipelines were the same diameter and were installed using similar trenching and...

  2. About U.S. Natural Gas Pipelines

    EIA Publications

    2007-01-01

    This information product provides the interested reader with a broad and non-technical overview of how the U.S. natural gas pipeline network operates, along with some insights into the many individual pipeline systems that make up the network. While the focus of the presentation is the transportation of natural gas over the interstate and intrastate pipeline systems, information on subjects related to pipeline development, such as system design and pipeline expansion, is also included.

  3. Niche and neutral processes both shape community structure in parallelized, aerobic, single carbon-source enrichments

    DOE Data Explorer

    Flynn, Theodore M.; Koval, Jason C.; Greenwald, Stephanie M.; Owens, Sarah M.; Kemner, Kenneth M.; Antonopoulos, Dionysios A.

    2017-01-01

    We present DNA sequence data in FASTA-formatted files from aerobic environmental microcosms inoculated with a sole carbon source. DNA sequences are of 16S rRNA genes present in DNA extracted from each microcosm along with the environmental samples (soil, water) used to inoculate them. These samples were sequenced using the Illumina MiSeq platform at the Environmental Sample Preparation and Sequencing Facility at Argonne National Laboratory. This data is compatible with standard microbiome analysis pipelines (e.g., QIIME, mothur, etc.).

  4. Methodology for Image-Based Reconstruction of Ventricular Geometry for Patient-Specific Modeling of Cardiac Electrophysiology

    PubMed Central

    Prakosa, A.; Malamas, P.; Zhang, S.; Pashakhanloo, F.; Arevalo, H.; Herzka, D. A.; Lardo, A.; Halperin, H.; McVeigh, E.; Trayanova, N.; Vadakkumpadan, F.

    2014-01-01

    Patient-specific modeling of ventricular electrophysiology requires an interpolated reconstruction of the 3-dimensional (3D) geometry of the patient ventricles from the low-resolution (Lo-res) clinical images. The goal of this study was to implement a processing pipeline for obtaining the interpolated reconstruction, and thoroughly evaluate the efficacy of this pipeline in comparison with alternative methods. The pipeline implemented here involves contouring the epi- and endocardial boundaries in Lo-res images, interpolating the contours using the variational implicit functions method, and merging the interpolation results to obtain the ventricular reconstruction. Five alternative interpolation methods, namely linear, cubic spline, spherical harmonics, cylindrical harmonics, and shape-based interpolation were implemented for comparison. In the thorough evaluation of the processing pipeline, Hi-res magnetic resonance (MR), computed tomography (CT), and diffusion tensor (DT) MR images from numerous hearts were used. Reconstructions obtained from the Hi-res images were compared with the reconstructions computed by each of the interpolation methods from a sparse sample of the Hi-res contours, which mimicked Lo-res clinical images. Qualitative and quantitative comparison of these ventricular geometry reconstructions showed that the variational implicit functions approach performed better than others. Additionally, the outcomes of electrophysiological simulations (sinus rhythm activation maps and pseudo-ECGs) conducted using models based on the various reconstructions were compared. These electrophysiological simulations demonstrated that our implementation of the variational implicit functions-based method had the best accuracy. PMID:25148771
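    The evaluation methodology above (sparsely sample the Hi-res contours, reconstruct with each interpolation method, and compare against the Hi-res ground truth) can be shown with a toy 1D version. Plain linear interpolation, one of the comparison methods, stands in here; the paper's best performer was the variational implicit functions method, which is not reproduced. Curve, sampling factor, and error threshold are illustrative.

```python
import math

# "Hi-res" ground truth: one period of a sine, 101 samples.
hires = [math.sin(2 * math.pi * i / 100) for i in range(101)]
sparse = hires[::10]  # keep every 10th sample, mimicking sparse Lo-res slices

def linear_reconstruct(samples, factor):
    """Upsample by linear interpolation between consecutive sparse samples."""
    out = []
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        out += [a + (b - a) * k / factor for k in range(factor)]
    return out + [samples[-1]]

recon = linear_reconstruct(sparse, 10)
err = max(abs(r - h) for r, h in zip(recon, hires))  # worst-case error
print(len(recon) == len(hires), err < 0.06)  # True True
```

The study's quantitative comparison is the same idea at scale: the method whose reconstruction minimizes this kind of discrepancy against the Hi-res geometry wins.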

  5. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  6. zUMIs - A fast and flexible pipeline to process RNA sequencing data with UMIs.

    PubMed

    Parekh, Swati; Ziegenhain, Christoph; Vieth, Beate; Enard, Wolfgang; Hellmann, Ines

    2018-06-01

    Single-cell RNA-sequencing (scRNA-seq) experiments typically analyze hundreds or thousands of cells after amplification of the cDNA. The high throughput is made possible by the early introduction of sample-specific bar codes (BCs), and the amplification bias is alleviated by unique molecular identifiers (UMIs). Thus, the ideal analysis pipeline for scRNA-seq data needs to efficiently tabulate reads according to both BC and UMI. zUMIs is a pipeline that can handle both known and random BCs and also efficiently collapse UMIs, either just for exon mapping reads or for both exon and intron mapping reads. If BC annotation is missing, zUMIs can accurately detect intact cells from the distribution of sequencing reads. Another unique feature of zUMIs is the adaptive downsampling function that facilitates dealing with hugely varying library sizes but also allows the user to evaluate whether the library has been sequenced to saturation. To illustrate the utility of zUMIs, we analyzed a single-nucleus RNA-seq dataset and show that more than 35% of all reads map to introns. We also show that these intronic reads are informative about expression levels, significantly increasing the number of detected genes and improving the cluster resolution. zUMIs' flexibility makes it possible to accommodate data generated with any of the major scRNA-seq protocols that use BCs and UMIs, and it is the most feature-rich, fast, and user-friendly pipeline to process such scRNA-seq data.
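    The core tabulation step described above, counting reads per cell barcode and collapsing duplicate UMIs per gene, reduces to counting distinct (BC, UMI) combinations. The sketch below is a minimal stand-in for that logic, not zUMIs itself; the read-tuple layout is an assumption.

```python
# Minimal UMI-collapsing count table: reads sharing the same barcode, UMI,
# and gene are PCR duplicates of one molecule and count once.
from collections import defaultdict

def count_umis(reads):
    """reads: iterable of (barcode, umi, gene) tuples.
    Returns {barcode: {gene: number of distinct UMIs}}."""
    seen = defaultdict(set)            # (barcode, gene) -> set of UMIs
    for bc, umi, gene in reads:
        seen[(bc, gene)].add(umi)
    counts = defaultdict(dict)
    for (bc, gene), umis in seen.items():
        counts[bc][gene] = len(umis)   # duplicates collapsed
    return counts

reads = [("ACGT", "AA", "GeneX"), ("ACGT", "AA", "GeneX"),  # PCR duplicate
         ("ACGT", "CC", "GeneX"), ("TTTT", "AA", "GeneY")]
print(count_umis(reads)["ACGT"]["GeneX"])  # 2
```

zUMIs additionally handles sequencing errors in BCs/UMIs, intron-aware assignment, and adaptive downsampling, none of which this toy version attempts.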

  7. Comparison of passive diffusion bag samplers and submersible pump sampling methods for monitoring volatile organic compounds in ground water at Area 6, Naval Air Station, Whidbey Island, Washington

    USGS Publications Warehouse

    Huffman, Raegan L.

    2002-01-01

    Ground-water samples were collected in April 1999 at Naval Air Station Whidbey Island, Washington, with passive diffusion samplers and a submersible pump to compare concentrations of volatile organic compounds (VOCs) in water samples collected using the two sampling methods. Single diffusion samplers were installed in wells with 10-foot screened intervals, and multiple diffusion samplers were installed in wells with 20- to 40-foot screened intervals. The diffusion samplers were recovered after 20 days and the wells were then sampled using a submersible pump. VOC concentrations in the 10-foot screened wells in water samples collected with diffusion samplers closely matched concentrations in samples collected with the submersible pump. Analysis of VOC concentrations in samples collected from the 20- to 40-foot screened wells with multiple diffusion samplers indicated vertical concentration variation within the screened interval, whereas the analysis of VOC concentrations in samples collected with the submersible pump indicated mixing during pumping. The results obtained using the two sampling methods indicate that the samples collected with the diffusion samplers were comparable with and can be considerably less expensive than samples collected using a submersible pump.

  8. Computational fragment-based screening using RosettaLigand: the SAMPL3 challenge

    NASA Astrophysics Data System (ADS)

    Kumar, Ashutosh; Zhang, Kam Y. J.

    2012-05-01

    The SAMPL3 fragment-based virtual screening challenge provides a valuable opportunity for researchers to test their programs, methods and screening protocols in a blind testing environment. We participated in the SAMPL3 challenge and evaluated our virtual fragment screening protocol, which involves RosettaLigand as the core component, by screening a 500-fragment Maybridge library against bovine pancreatic trypsin. Our study reaffirmed that the real test for any virtual screening approach is a blind testing environment. The analyses presented in this paper also showed that virtual screening performance can be improved if a set of known active compounds is available and parameters and methods that yield better enrichment are selected. Our study also highlighted that, to achieve accurate orientation and conformation of ligands within a binding site, selecting an appropriate method to calculate partial charges is important. Another finding is that using multiple receptor ensembles in docking does not always yield better enrichment than individual receptors. On the basis of our results and retrospective analyses from the SAMPL3 fragment screening challenge, we anticipate that the chances of success in a fragment screening process could be increased significantly with careful selection of receptor structures, protein flexibility, sufficient conformational sampling within the binding pocket and accurate assignment of ligand and protein partial charges.
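
    Enrichment, the metric used above to compare screening protocols, is computed from a ranked hit list: the active rate in the top fraction of the list relative to the active rate overall. A minimal sketch, using a hypothetical 20-compound screen rather than the Maybridge data:

    ```python
    def enrichment_factor(ranked_ids, actives, fraction=0.1):
        """Enrichment factor: ratio of the active rate among the top
        `fraction` of a ranked screening list to the active rate in the
        whole library. EF = 1 means no better than random ranking."""
        n_top = max(1, int(len(ranked_ids) * fraction))
        top_hits = sum(1 for cid in ranked_ids[:n_top] if cid in actives)
        overall_rate = len(actives) / len(ranked_ids)
        return (top_hits / n_top) / overall_rate

    # Hypothetical screen: 20 compounds, 4 known actives (a1..a4).
    ranked = ["a1", "a2", "x1", "a3", "x2"] + [f"x{i}" for i in range(3, 17)] + ["a4"]
    actives = {"a1", "a2", "a3", "a4"}
    print(round(enrichment_factor(ranked, actives, 0.25), 1))  # → 3.0
    ```

    Here 3 of the 4 actives land in the top quarter of the list, three times what random ordering would give.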

  9. Detection and identification of occult HBV in blood donors in Taiwan using a commercial, multiplex, multi-dye nucleic acid amplification technology screening test.

    PubMed

    Lin, K T; Chang, C L; Tsai, M H; Lin, K S; Saldanha, J; Hung, C M

    2014-02-01

    The ability of a new generation commercial, multiplex, multi-dye test from Roche, the cobas TaqScreen MPX test, version 2.0, to detect and identify occult HBV infections was evaluated using routine donor samples from Kaohsiung Blood Bank, Taiwan. A total of 5973 samples were tested by nucleic acid amplification technology (NAT); 5898 in pools of six, 66 in pools of less than six and nine samples individually. NAT-reactive samples were retested with alternative NAT tests, and follow-up samples from the donors were tested individually by NAT and for all the HBV serological markers. Eight NAT-only-reactive donors were identified, and follow-up samples were obtained from six of the donors. The results indicated that all eight donors had an occult HBV infection with viral loads <12 IU/ml. The cobas® TaqScreen MPX test, version 2.0, has an advantage over the current Roche blood screening test, the cobas TaqScreen MPX test, for screening donations in countries with a high prevalence of occult HBV infections: because version 2.0 can identify the viral target in reactive samples, it removes the uncertainty associated with identifying samples with very low viremia. © 2013 International Society of Blood Transfusion.

  10. Use of probability analysis to establish routine bioassay screening levels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbaugh, E.H.; Sula, M.J.; McFadden, K.M.

    1990-09-01

    Probability analysis was used by the Hanford Internal Dosimetry Program to establish bioassay screening levels for tritium and uranium in urine. Background environmental levels of these two radionuclides are generally detectable by the highly sensitive urine analysis procedures routinely used at Hanford. Establishing screening levels requires balancing the impact of false detection with the consequence of potentially undetectable occupational dose. To establish the screening levels, tritium and uranium analyses were performed on urine samples collected from workers exposed only to environmental sources. All samples were collected at home using a simulated 12-hour protocol for tritium and a simulated 24-hour collection protocol for uranium. Results of the analyses of these samples were ranked according to tritium concentration or total sample uranium. The cumulative percentile was calculated and plotted using log-probability coordinates. Geometric means and screening levels corresponding to various percentiles were estimated by graphical interpolation and standard calculations. The potential annual internal dose associated with a screening level was calculated. Screening levels were selected corresponding to the 99.9th percentile, implying that, on average, 1 out of 1000 samples collected from an unexposed worker population would be expected to exceed the screening level.
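
    The percentile procedure described above amounts to fitting a lognormal distribution (a straight line on log-probability paper) and reading off a high percentile. A sketch of that calculation, with illustrative numbers rather than the Hanford data, and not claiming to reproduce the program's exact graphical method:

    ```python
    import math
    import statistics

    def screening_level(results, percentile=99.9):
        """Fit a lognormal distribution to background bioassay results
        and return the requested percentile as the screening level.
        Illustrative reconstruction of the log-probability procedure."""
        logs = [math.log(x) for x in results]
        mu = statistics.mean(logs)       # log of the geometric mean
        sigma = statistics.stdev(logs)   # spread on the log scale
        z = statistics.NormalDist().inv_cdf(percentile / 100.0)
        return math.exp(mu + z * sigma)

    # Hypothetical background concentrations from unexposed workers:
    background = [1.2, 0.8, 1.5, 0.9, 1.1, 1.3, 0.7, 1.0]
    print(screening_level(background) > max(background))  # → True
    ```

    The 99.9th-percentile level sits well above every observed background value, so roughly 1 in 1000 unexposed samples would be flagged, matching the trade-off described in the abstract.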

  11. Effects of small-scale vertical variations in well-screen inflow rates and concentrations of organic compounds on the collection of representative ground-water-quality samples

    USGS Publications Warehouse

    Gibs, Jacob; Brown, G. Allan; Turner, Kenneth S.; MacLeod, Cecilia L.; Jelinski, James; Koehnlein, Susan A.

    1993-01-01

    Because a water sample collected from a well is an integration of water from different depths along the well screen, measured concentrations can be biased if analyte concentrations are not uniform along the length of the well screen. The resulting concentration in the sample, therefore, is a function of variations in well-screen inflow rate and analyte concentration with depth. A multiport sampler with seven short screened intervals was designed and used to investigate small-scale vertical variations in water chemistry and aquifer hydraulic conductivity in ground water contaminated by leaded gasoline at Galloway Township, Atlantic County, New Jersey. The multiport samplers were used to collect independent samples from seven intervals within the screened zone that were flow-rate weighted and integrated to simulate a 5-foot-long, 2.375-inch- outside-diameter conventional wire-wound screen. The integration of the results of analyses of samples collected from two multiport samplers showed that a conventional 5-foot-long well screen would integrate contaminant concentrations over its length, resulting in an apparent contaminant concentration that was as little as 28 percent of the maximum concentration observed in the multiport sampler.
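
    The mixing effect described, a long screen reporting the flow-weighted average of depth-varying concentrations, is straightforward to sketch (the port numbers below are hypothetical, not the Galloway Township data):

    ```python
    def flow_weighted_concentration(intervals):
        """Simulate what a long well screen reports: the flow-rate-weighted
        average of concentrations measured at short screened intervals.
        `intervals` is a list of (inflow_rate, concentration) pairs."""
        total_flow = sum(q for q, _ in intervals)
        return sum(q * c for q, c in intervals) / total_flow

    # Hypothetical seven-port profile: one high-concentration zone
    # diluted by inflow from cleaner zones along the screen.
    ports = [(1.0, 5.0), (1.0, 10.0), (4.0, 500.0), (1.0, 20.0),
             (1.0, 8.0), (1.0, 4.0), (1.0, 3.0)]
    mixed = flow_weighted_concentration(ports)
    print(mixed, round(mixed / 500.0, 2))  # → 205.0 0.41
    ```

    Even with most of the inflow coming from the contaminated zone, the integrated sample reports well under half the peak concentration, the same dilution mechanism the study observed.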

  12. Middleware Case Study: MeDICi

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wynne, Adam S.

    2011-05-05

    In many application domains in science and engineering, data produced by sensors, instruments and networks is naturally processed by software applications structured as a pipeline. Pipelines comprise a sequence of software components that progressively process discrete units of data to produce a desired outcome. For example, in a Web crawler that is extracting semantics from text on Web sites, the first stage in the pipeline might be to remove all HTML tags to leave only the raw text of the document. The second step may parse the raw text to break it down into its constituent grammatical parts, such as nouns, verbs and so on. Subsequent steps may look for names of people or places, or interesting events or times, so documents can be sequenced on a time line. Each of these steps can be written as a specialized program that works in isolation from the other steps in the pipeline. In many applications, simple linear software pipelines are sufficient. However, more complex applications require topologies that contain forks and joins, creating pipelines comprising branches where parallel execution is desirable. It is also increasingly common for pipelines to process very large files or high-volume data streams which impose end-to-end performance constraints. Additionally, processes in a pipeline may have specific execution requirements and hence need to be distributed as services across a heterogeneous computing and data management infrastructure. From a software engineering perspective, these more complex pipelines become problematic to implement. While simple linear pipelines can be built using minimal infrastructure such as scripting languages, complex topologies and large, high-volume data processing require suitable abstractions, run-time infrastructures and development tools to construct pipelines with the desired qualities of service and flexibility to evolve to handle new requirements. These are the reasons we created the MeDICi Integration Framework (MIF), which is designed for creating high-performance, scalable and modifiable software pipelines. MIF exploits a low-friction, robust, open source middleware platform and extends it with component- and service-based programmatic interfaces that make implementing complex pipelines simple. The MIF run-time automatically handles queues between pipeline elements in order to handle request bursts, and automatically executes multiple instances of pipeline elements to increase pipeline throughput. Distributed pipeline elements are supported using a range of configurable communications protocols, and the MIF interfaces provide efficient mechanisms for moving data directly between two distributed pipeline elements.
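
    The queue-based execution model described here, elements decoupled by queues that absorb request bursts, can be sketched in a few lines (toy threading code, not the MIF API):

    ```python
    import queue
    import re
    import threading

    def stage(fn, inbox, outbox):
        """One pipeline element: pull items from an input queue, process
        them, and push results downstream. A None sentinel shuts the
        stage down and is propagated to the next element."""
        def worker():
            while True:
                item = inbox.get()
                if item is None:
                    outbox.put(None)  # forward shutdown signal
                    break
                outbox.put(fn(item))
        t = threading.Thread(target=worker)
        t.start()
        return t

    # Three-element linear pipeline mirroring the Web-crawler example:
    # strip HTML tags -> split into tokens -> count tokens.
    q1, q2, q3, q4 = (queue.Queue() for _ in range(4))
    stage(lambda s: re.sub(r"<[^>]+>", "", s), q1, q2)
    stage(str.split, q2, q3)
    stage(len, q3, q4)

    q1.put("<p>some raw document text</p>")
    q1.put(None)
    print(q4.get())  # → 4
    ```

    Because each element only touches its own queues, stages can be replicated or moved to other hosts without changing the element code, which is the property MIF builds on.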

  13. 76 FR 43604 - Pipeline Safety: Applying Safety Regulations to All Rural Onshore Hazardous Liquid Low-Stress...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-21

    ... Regulations to All Rural Onshore Hazardous Liquid Low-Stress Lines, Correction AGENCY: Pipeline and Hazardous... the Federal Pipeline Safety Regulations to address rural low-stress hazardous liquid pipelines that... regarding the compliance date for identifying all segments of a Category 3 low-stress pipeline. DATES: This...

  14. 77 FR 48145 - Cameron Interstate Pipeline, LLC, Cameron LNG, LLC; Notice of Intent To Prepare an Environmental...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-13

    ... operation of the Cameron Pipeline Expansion Project and the Cameron Liquefied Natural Gas (LNG) Liquefaction... Pipeline Expansion Project would be constructed and operated to provide natural gas to the planned export...-diameter pipeline extending from an interconnection with the Florida Gas Transmission Pipeline in Calcasieu...

  15. 18 CFR 357.3 - FERC Form No. 73, Oil Pipeline Data for Depreciation Analysis.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Pipeline Data for Depreciation Analysis. 357.3 Section 357.3 Conservation of Power and Water Resources... No. 73, Oil Pipeline Data for Depreciation Analysis. (a) Who must file. Any oil pipeline company.... 73, Oil Pipeline Data for Depreciation Analysis, available for review at the Commission's Public...

  16. 30 CFR 250.1007 - What to include in applications.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... coordinates of key points; and the location of other pipelines that will be connected to or crossed by the proposed pipeline(s). The initial and terminal points of the pipeline and any continuation into State jurisdiction shall be accurately located even if the pipeline is to have an onshore terminal point. A plat(s...

  17. 49 CFR 192.627 - Tapping pipelines under pressure.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Tapping pipelines under pressure. 192.627 Section... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Operations § 192.627 Tapping pipelines under pressure. Each tap made on a pipeline under pressure must be performed by a crew qualified to make...

  18. 77 FR 6857 - Pipeline Safety: Notice of Public Meetings on Improving Pipeline Leak Detection System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-09

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... installed to lessen the volume of natural gas and hazardous liquid released during catastrophic pipeline... p.m. Panel 3: Considerations for Natural Gas Pipeline Leak Detection Systems 3:30 p.m. Break 3:45 p...

  19. 78 FR 13747 - Railroad Safety: Advisory Notice Related to Railroad Accidents in Vicinity of Underground Pipelines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-28

    ... underground natural gas transmission pipeline operated by Nicor Gas. The pipeline well exceeded Federal... had the gas pipeline been installed at the railroad crossing with only the minimum level of ground... resumption of service.'' On July 31, 2012, the Pipeline and Hazardous Materials Safety Administration (PHMSA...

  20. 77 FR 14010 - Millennium Pipeline Company, LLC; Notice of Availability of the Environmental Assessment for the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-08

    ... emergency electrical power generator. Pipeline facilities required for the project include approximately 545 feet of new 36-inch- diameter suction and discharge pipelines which would connect the compressor... pipeline located between the new suction and discharge pipelines. The FERC staff mailed copies of the EA to...

  1. Unipro UGENE NGS pipelines and components for variant calling, RNA-seq and ChIP-seq data analyses.

    PubMed

    Golosova, Olga; Henderson, Ross; Vaskin, Yuriy; Gabrielian, Andrei; Grekhov, German; Nagarajan, Vijayaraj; Oler, Andrew J; Quiñones, Mariam; Hurt, Darrell; Fursov, Mikhail; Huyen, Yentram

    2014-01-01

    The advent of Next Generation Sequencing (NGS) technologies has opened new possibilities for researchers. However, the more biology becomes a data-intensive field, the more biologists have to learn how to process and analyze NGS data with complex computational tools. Even with the availability of common pipeline specifications, it is often a time-consuming and cumbersome task for a bench scientist to install and configure the pipeline tools. We believe that a unified, desktop and biologist-friendly front end to NGS data analysis tools will substantially improve productivity in this field. Here we present NGS pipelines "Variant Calling with SAMtools", "Tuxedo Pipeline for RNA-seq Data Analysis" and "Cistrome Pipeline for ChIP-seq Data Analysis" integrated into the Unipro UGENE desktop toolkit. We describe the available UGENE infrastructure that helps researchers run these pipelines on different datasets, store and investigate the results and re-run the pipelines with the same parameters. These pipeline tools are included in the UGENE NGS package. Individual blocks of these pipelines are also available for expert users to create their own advanced workflows.

  2. The ALMA Science Pipeline: Current Status

    NASA Astrophysics Data System (ADS)

    Humphreys, Elizabeth; Miura, Rie; Brogan, Crystal L.; Hibbard, John; Hunter, Todd R.; Indebetouw, Remy

    2016-09-01

    The ALMA Science Pipeline is being developed for the automated calibration and imaging of ALMA interferometric and single-dish data. The calibration Pipeline for interferometric data was accepted for use by ALMA Science Operations in 2014, and for single-dish data end-to-end processing in 2015. However, work is ongoing to expand the use cases for which the Pipeline can be used, e.g., for higher-frequency and lower signal-to-noise datasets, and for new observing modes. A current focus includes the commissioning of science target imaging for interferometric data. For the Single Dish Pipeline, the line-finding algorithm used in baseline subtraction and the baseline-flagging heuristics have been greatly improved since the prototype used for data from the previous cycle. These algorithms, unique to the Pipeline, produce better results than standard manual processing in many cases. In this poster, we report on the current status of the Pipeline capabilities, present initial results from the Imaging Pipeline, and describe the smart line-finding and flagging algorithms used in the Single Dish Pipeline. The Pipeline is released as part of CASA (the Common Astronomy Software Applications package).
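
    Sigma-clipping is the usual skeleton of such line-finding heuristics: iteratively flag channels that deviate from the current baseline estimate, so that the baseline fit uses only line-free channels. A toy version for illustration (not the ALMA Pipeline's actual algorithm):

    ```python
    import statistics

    def find_line_channels(spectrum, nsigma=3.0, iterations=5):
        """Toy sigma-clipping line finder: repeatedly estimate the
        baseline level from unflagged channels, then flag channels
        deviating by more than nsigma standard deviations. Returns the
        indices of the flagged (line) channels."""
        mask = [False] * len(spectrum)
        for _ in range(iterations):
            baseline = [v for v, m in zip(spectrum, mask) if not m]
            mu = statistics.mean(baseline)
            sd = statistics.stdev(baseline)
            mask = [abs(v - mu) > nsigma * sd for v in spectrum]
        return [i for i, m in enumerate(mask) if m]

    # Flat baseline with one strong and one weaker line channel:
    spectrum = [0.0] * 20
    spectrum[10], spectrum[11] = 5.0, 4.0
    print(find_line_channels(spectrum))  # → [10, 11]
    ```

    Note the weaker channel is only flagged after the strong one is clipped and no longer inflates the noise estimate, which is why the procedure iterates.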

  3. A Review of Fatigue Crack Growth for Pipeline Steels Exposed to Hydrogen

    PubMed Central

    Nanninga, N.; Slifka, A.; Levy, Y.; White, C.

    2010-01-01

    Hydrogen pipeline systems offer an economical means of storing and transporting energy in the form of hydrogen gas. Pipelines can be used to transport hydrogen that has been generated at solar and wind farms to and from salt cavern storage locations. In addition, pipeline transportation systems will be essential before widespread hydrogen fuel cell vehicle technology becomes a reality. Since hydrogen pipeline use is expected to grow, the mechanical integrity of these pipelines will need to be validated under the presence of pressurized hydrogen. This paper focuses on a review of the fatigue crack growth response of pipeline steels when exposed to gaseous hydrogen environments. Because of defect-tolerant design principles in pipeline structures, it is essential that designers consider hydrogen-assisted fatigue crack growth behavior in these applications. PMID:27134796

  4. United States petroleum pipelines: An empirical analysis of pipeline sizing

    NASA Astrophysics Data System (ADS)

    Coburn, L. L.

    1980-12-01

    The undersizing theory hypothesizes that integrated oil companies have a strong economic incentive to size the petroleum pipelines they own and ship over in a way that forces some of the demand to utilize higher-cost alternatives. The DOJ theory posits that excess or monopoly profits are earned due to the natural monopoly characteristics of petroleum pipelines and the existence of market power in some pipelines in either the upstream or downstream market. The theory holds that petroleum pipelines owned by companies not otherwise affiliated with the petroleum industry (independent pipelines) do not have these incentives, and all the efficiencies of pipeline transportation are passed to the ultimate consumer. Integrated oil companies, on the other hand, keep these cost efficiencies for themselves in the form of excess profits.

  5. Mechanical sieve for screening mineral samples

    NASA Technical Reports Server (NTRS)

    Otto, W. P.

    1970-01-01

    Mechanical sieve consists of three horizontal screens mounted in a vertical stack. A combination of rotation and tapping produces an even flow across the screens, dislodges trapped particles, and ensures rapid segregation of the sample.

  6. 77 FR 45417 - Pipeline Safety: Inspection and Protection of Pipeline Facilities After Railway Accidents

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-31

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... Accidents AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. [[Page 45418

  7. U.S. pipeline industry enters new era

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnsen, M.R.

    1999-11-01

    The largest construction project in North America this year and next, the Alliance Pipeline, marks some advances for the US pipeline industry. With the Alliance Pipeline system (Alliance), mechanized welding and ultrasonic testing are making their debuts in the US as primary mainline construction techniques. Particularly in Canada and Europe, mechanized welding technology has been used for both onshore and offshore pipeline construction for at least 15 years. However, it has never before been used to build a cross-country pipeline in the US, although it has been tested on short segments. This time, however, an accelerated construction schedule, among other reasons, necessitated the use of mechanized gas metal arc welding (GMAW). The $3-billion pipeline will deliver natural gas from northwestern British Columbia and northeastern Alberta in Canada to a hub near Chicago, Ill., where it will connect to the North American pipeline grid. Once the pipeline is completed and buried, crews will return the topsoil. Corn and other crops will reclaim the land. While the casual passerby probably won't know the Alliance pipeline is there, it may have a far-reaching effect on the way mainline pipelines are built in the US. For even though mechanized welding and ultrasonic testing are being used for the first time in the United States on this project, some US workers had already gained experience with the technology on projects elsewhere. And work on this pipeline has certainly developed a much larger pool of experienced workers for industry to draw from. The Alliance project could well signal the start of a new era in US pipeline construction.

  8. Physical properties of star clusters in the outer LMC as observed by the DES

    DOE PAGES

    Pieres, A.; Santiago, B.; Balbinot, E.; ...

    2016-05-26

    The Large Magellanic Cloud (LMC) harbors a rich and diverse system of star clusters, whose ages, chemical abundances, and positions provide information about the LMC history of star formation. We use Science Verification imaging data from the Dark Energy Survey to increase the census of known star clusters in the outer LMC and to derive physical parameters for a large sample of such objects using a spatially and photometrically homogeneous data set. Our sample contains 255 visually identified cluster candidates, of which 109 were not listed in any previous catalog. We quantify the crowding effect for the stellar sample produced by the DES Data Management pipeline and conclude that the stellar completeness is < 10% inside typical LMC cluster cores. We therefore develop a pipeline to sample and measure stellar magnitudes and positions around the cluster candidates using DAOPHOT. We also implement a maximum-likelihood method to fit individual density profiles and colour-magnitude diagrams. For 117 (from a total of 255) of the cluster candidates (28 uncatalogued clusters), we obtain reliable ages, metallicities, distance moduli and structural parameters, confirming their nature as physical systems. The distribution of cluster metallicities shows a radial dependence, with no clusters more metal-rich than [Fe/H] ~ -0.7 beyond 8 kpc from the LMC center. Furthermore, the age distribution has two peaks at ≃ 1.2 Gyr and ≃ 2.7 Gyr.

  10. SPACE WARPS- II. New gravitational lens candidates from the CFHTLS discovered through citizen science

    NASA Astrophysics Data System (ADS)

    More, Anupreeta; Verma, Aprajita; Marshall, Philip J.; More, Surhud; Baeten, Elisabeth; Wilcox, Julianne; Macmillan, Christine; Cornen, Claude; Kapadia, Amit; Parrish, Michael; Snyder, Chris; Davis, Christopher P.; Gavazzi, Raphael; Lintott, Chris J.; Simpson, Robert; Miller, David; Smith, Arfon M.; Paget, Edward; Saha, Prasenjit; Küng, Rafael; Collett, Thomas E.

    2016-01-01

    We report the discovery of 29 promising (and 59 total) new lens candidates from the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) based on about 11 million classifications performed by citizen scientists as part of the first SPACE WARPS lens search. The goal of the blind lens search was to identify lens candidates missed by robots (the RINGFINDER on galaxy scales and ARCFINDER on group/cluster scales) which had been previously used to mine the CFHTLS for lenses. We compare some properties of the samples detected by these algorithms to the SPACE WARPS sample and find them to be broadly similar. The image separation distribution calculated from the SPACE WARPS sample shows that previous constraints on the average density profile of lens galaxies are robust. SPACE WARPS recovers about 65 per cent of known lenses, while the new candidates show a richer variety compared to those found by the two robots. This detection rate could be increased to 80 per cent by only using classifications performed by expert volunteers (albeit at the cost of a lower purity), indicating that the training and performance calibration of the citizen scientists is very important for the success of SPACE WARPS. In this work we present the SIMCT pipeline, used for generating in situ a sample of realistic simulated lensed images. This training sample, along with the false positives identified during the search, has a legacy value for testing future lens-finding algorithms. We make the pipeline and the training set publicly available.

  11. Trace and major element pollution originating from coal ash suspension and transport processes.

    PubMed

    Popovic, A; Djordjevic, D; Polic, P

    2001-04-01

    Coal ash obtained by coal combustion in the "Nikola Tesla A" power plant in Obrenovac, near Belgrade, Yugoslavia, is mixed with water of the Sava river and transported to the dump. In order to assess pollution caused by leaching of some minor and major elements during ash transport through the pipeline, two sets of samples (six samples each) were subjected to a modified sequential extraction. The first set consisted of coal ash samples taken immediately after combustion, while the second set was obtained by extraction with river water, imitating the processes that occur in the pipeline. Samples were extracted consecutively with distilled water and a 1 M solution of KCl, pH 7, and the differences in extractability were compared in order to predict potential pollution. Considering concentrations of seven trace elements as well as five major elements in extracts from a total of 12 samples, it can be concluded that lead and cadmium do not present an environmental threat during and immediately after ash transport to the dump. Portions of zinc, nickel and chromium are released during the ash transport, and arsenic and manganese are released continuously. Copper and iron do not present an environmental threat due to element leaching during and immediately after the coal ash suspension and transport. On the contrary, these elements, as well as chromium, become concentrated during coal ash transport. Adsorbed portions of calcium, magnesium and potassium are also leached during coal ash transport.

  12. drPACS: A Simple UNIX Execution Pipeline

    NASA Astrophysics Data System (ADS)

    Teuben, P.

    2011-07-01

    We describe a very simple yet flexible and effective pipeliner for UNIX commands. It creates a Makefile to define a set of serially dependent commands. The commands in the pipeline share a common set of parameters by which they can communicate. Commands must follow a simple convention to retrieve and store parameters. Pipeline parameters can optionally be made persistent across multiple runs of the pipeline. Tools were added to simplify running a large series of pipelines, which can then also be run in parallel.
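
    The parameter-sharing convention drPACS relies on can be illustrated with a minimal pipeliner in which serially dependent steps read and update a persistent parameter file (hypothetical API for illustration; drPACS itself generates a Makefile and wraps UNIX commands):

    ```python
    import json
    import os
    import tempfile

    class Pipeline:
        """Minimal sketch of a drPACS-style pipeliner: serially dependent
        steps communicate through a shared parameter set that persists
        across runs via a JSON file."""
        def __init__(self, paramfile):
            self.paramfile = paramfile
            self.params = {}
            if os.path.exists(paramfile):
                with open(paramfile) as f:
                    self.params = json.load(f)  # restore persistent params

        def run(self, *steps):
            for step in steps:  # strictly serial, as in a Makefile chain
                self.params.update(step(self.params))
                with open(self.paramfile, "w") as f:
                    json.dump(self.params, f)  # persist after each step
            return self.params

    # Two toy steps following the convention: read params, return updates.
    def acquire(params):
        return {"nraw": 100}

    def calibrate(params):
        return {"ncal": params["nraw"] - 5}

    with tempfile.TemporaryDirectory() as d:
        out = Pipeline(os.path.join(d, "params.json")).run(acquire, calibrate)
    print(out["ncal"])  # → 95
    ```

    Because the parameter file survives between invocations, a rerun of the pipeline can pick up where the previous run left off, the persistence feature the abstract mentions.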

  13. Evaluation of a QuECHERS-like extraction approach for the determination of PBDEs in mussels by immuno-assay-based screening methods

    USDA-ARS?s Scientific Manuscript database

    A sample preparation method was evaluated for the determination of polybrominated diphenyl ethers (PBDEs) in mussel samples, by using colorimetric and electrochemical immunoassay-based screening methods. A simple sample preparation in conjunction with a rapid screening method possesses the desired c...

  14. DDBJ read annotation pipeline: a cloud computing-based pipeline for high-throughput analysis of next-generation sequencing data.

    PubMed

    Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu

    2013-08-01

    High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge to molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for a high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing by NIG supercomputers currently free of charge. The proposed pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components in the pipeline, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads of the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline through the input of only an accession number. This proposed pipeline will facilitate research by utilizing unified analytical workflows applied to the NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/.

  15. DDBJ Read Annotation Pipeline: A Cloud Computing-Based Pipeline for High-Throughput Analysis of Next-Generation Sequencing Data

    PubMed Central

    Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu

    2013-01-01

    High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge to molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing on NIG supercomputers, currently free of charge. The pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly, and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads from the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline through the input of only an accession number. The pipeline will facilitate research by providing unified analytical workflows applied to NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/. PMID:23657089

  16. State of art of seismic design and seismic hazard analysis for oil and gas pipeline system

    NASA Astrophysics Data System (ADS)

    Liu, Aiwen; Chen, Kun; Wu, Jian

    2010-06-01

    The purpose of this paper is to adopt the uniform confidence method in both water pipeline design and oil-gas pipeline design. Based on the importance of a pipeline and the consequences of its failure, oil and gas pipelines can be classified into three pipe classes, with exceedance probabilities over 50 years of 2%, 5% and 10%, respectively. Performance-based design requires more information about ground motion, which should be obtained by evaluating seismic safety for the pipeline engineering site. Different from a city's water pipeline network, a long-distance oil and gas pipeline system is a spatially, linearly distributed system. For uniform confidence of seismic safety, a long-distance oil and gas pipeline formed of pump stations and different-class pipe segments should be considered as a whole system when analyzing seismic risk. Considering the uncertainty of earthquake magnitude, design-basis fault displacements corresponding to the different pipeline classes are proposed to improve deterministic seismic hazard analysis (DSHA). A new empirical relationship between the maximum fault displacement and the surface-wave magnitude is obtained with supplemented earthquake data from East Asia. The estimation of fault displacement for a refined-oil pipeline in the Wenchuan MS8.0 earthquake is introduced as an example in this paper.
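
    The mapping from a 50-year exceedance probability to a mean return period follows from the standard Poisson hazard assumption; a minimal sketch (the class labels I-III are hypothetical, only the 2%, 5% and 10% probabilities come from the abstract):

```python
import math

def return_period(p_exceed, years=50):
    """Mean return period (years) of a Poisson hazard whose probability
    of being exceeded at least once in the given window is p_exceed."""
    return -years / math.log(1.0 - p_exceed)

# The three pipe classes from the abstract: 2%, 5% and 10% in 50 years.
classes = {"I": 0.02, "II": 0.05, "III": 0.10}
periods = {c: return_period(p) for c, p in classes.items()}
# Class I corresponds to roughly a 2475-year event, class III to ~475 years.
```

    The same conversion underlies the common "2% in 50 years" wording of building-code hazard maps.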

  17. Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) based bioavailability determination of the major classes of phytochemicals.

    PubMed

    Stylos, Evgenios; Chatziathanasiadou, Maria V; Syriopoulou, Aggeliki; Tzakos, Andreas G

    2017-03-15

    Natural products derived from plants have served as an inexhaustible source for drug discovery and drug development. They have been evolutionarily amplified with drug-like properties and have already illustrated immense therapeutic potential across an array of different diseases. However, their incorporation into the drug discovery pipeline has diminished over the last two decades, probably owing to barriers related to their inherent difficulty of integration into high-throughput screening assays, as well as their largely unexplored bioavailability. Analytical procedures have come into the spotlight as a result of continuous advances in instrumentation for detection and separation. An integral part of this technological evolution is LC-MS instrumentation and its extended use for the determination of various compounds. The extra sensitivity, specificity and good separation it provides in complex samples make LC-MS/MS the ultimate tool for the determination of many types of chemical compounds, such as phytochemicals. Herein, we focus on the achievements of the last five years in the quantitative analysis of the major classes of phytochemicals (flavonoids, alkaloids, terpenes, glycosides and saponins) in plasma through LC-MS/MS, as well as their bioavailability. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. 49 CFR 195.12 - What requirements apply to low-stress pipelines in rural areas?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false What requirements apply to low-stress pipelines in... low-stress pipelines in rural areas? (a) General. This section does not apply to a rural low-stress pipeline regulated under this part as a low-stress pipeline that crosses a waterway currently used for...

  19. Regulatory assessment with regulatory flexibility analysis : draft regulatory evaluation - Notice of Proposed Rulemaking -- Pipeline Safety : safety standards for increasing the maximum allowable operating pressure for natural gas transmission pipelines.

    DOT National Transportation Integrated Search

    2008-02-01

    The Pipeline and Hazardous Materials Safety Administration (PHMSA) is proposing changes to the Federal pipeline safety regulations in 49 CFR Part 192, which cover the transportation of natural gas by pipeline. Specifically, PHMSA proposes allowing na...

  20. Hog Charm II tetracycline test screening results compared with a liquid chromatography tandem mass spectrometry 10-μg/kg method.

    PubMed

    Salter, Robert; Holmes, Steven; Legg, David; Coble, Joel; George, Bruce

    2012-02-01

    Pork tissue samples that tested positive and negative by the Charm II tetracycline test screening method in the slaughter plant laboratory were tested with the modified AOAC International liquid chromatography tandem mass spectrometry (LC-MS-MS) method 995.09 to determine the predictive value of the screening method at detecting total tetracyclines at 10 μg/kg of tissue, in compliance with Russian import regulations. There were 218 presumptive-positive tetracycline samples among 4,195 randomly tested hogs. Of these screen-positive samples, 83% (182) were positive, >10 μg/kg by LC-MS-MS; 12.8% (28) were false violative, greater than the limit of detection (LOD) but <10 μg/kg; and 4.2% (8) were not detected at the LC-MS-MS LOD. The 36 false-violative and not-detected samples represent 1% of the total samples screened. Twenty-seven of 30 randomly selected tetracycline screening-negative samples tested below the LC-MS-MS LOD, and 3 samples tested <3 μg/kg chlortetracycline. The results indicate that the Charm II tetracycline test is effective at predicting hogs containing >10 μg/kg total tetracyclines in compliance with Russian import regulations.
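
    The predictive value quoted above can be re-derived directly from the reported counts; a worked check using only numbers from the abstract:

```python
# Recomputing the screening test's positive predictive value from the
# counts reported in the abstract (a worked check, not new data).
confirmed = 182       # screen-positive and >10 ug/kg by LC-MS-MS
false_violative = 28  # screen-positive, above the LOD but <10 ug/kg
not_detected = 8      # screen-positive, below the LC-MS-MS LOD
screen_positive = confirmed + false_violative + not_detected  # 218

ppv = confirmed / screen_positive                      # ~0.83, the quoted 83%
false_rate = (false_violative + not_detected) / 4195   # 36 of all hogs tested
```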

  1. Development and verification of a model for estimating the screening utility in the detection of PCBs in transformer oil.

    PubMed

    Terakado, Shingo; Glass, Thomas R; Sasaki, Kazuhiro; Ohmura, Naoya

    2014-01-01

    A simple new model for estimating the screening performance (false positive and false negative rates) of a given test for a specific sample population is presented. The model is shown to give good results on a test population, and is used to estimate the performance on a sampled population. Using the model developed in conjunction with regulatory requirements and the relative costs of the confirmatory and screening tests allows evaluation of the screening test's utility in terms of cost savings. Testers can use the methods developed to estimate the utility of a screening program using available screening tests with their own sample populations.
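
    A screening-utility calculation of this general kind can be sketched in a few lines; the prevalence, error rates, and prices below are illustrative assumptions, not values from the paper:

```python
def screening_cost_per_sample(prevalence, sensitivity, specificity,
                              c_screen, c_confirm):
    """Expected testing cost per sample when every sample is screened
    and every screen-positive goes on to the confirmatory test
    (a generic cost model, not the paper's specific one)."""
    p_screen_pos = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    return c_screen + p_screen_pos * c_confirm

# Illustrative numbers only: 5% PCB-positive oils, a $5 immunoassay
# screen, and a $100 confirmatory analysis.
with_screen = screening_cost_per_sample(0.05, 0.95, 0.90, 5.0, 100.0)
no_screen = 100.0  # confirm every sample directly
```

    Comparing `with_screen` to `no_screen` is the cost-savings comparison the abstract describes; the screen pays off when it filters out enough true negatives to offset its own cost.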

  2. Oil and gas pipeline construction cost analysis and developing regression models for cost estimation

    NASA Astrophysics Data System (ADS)

    Thaduri, Ravi Kiran

    In this study, cost data for 180 pipelines and 136 compressor stations have been analyzed. On the basis of the distribution analysis, regression models have been developed. Material, labor, right-of-way (ROW) and miscellaneous costs make up the total cost of pipeline construction. The pipelines are analyzed by length, diameter, location, volume and year of completion. In pipeline construction, labor costs dominate the total cost, with a share of about 40%. Multiple nonlinear regression models are developed to estimate the component costs of pipelines for various cross-sectional areas, lengths and locations. The compressor stations are analyzed by capacity, year of completion and location. Unlike pipelines, material costs dominate the total cost of compressor station construction, with an average share of about 50.6%. Land costs have very little influence on the total cost. Similar regression models are developed to estimate the component costs of compressor stations for various capacities and locations.
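
    Cost-capacity models of this kind are commonly power laws fitted by linear regression on logarithms; a minimal sketch on synthetic data (the coefficient 1000 and exponent 1.5 are illustrative, not the study's estimates):

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of y = a * x**b via linear regression on logs,
    the usual way nonlinear cost-capacity models are estimated."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic illustration: costs that follow 1000 * d**1.5 exactly,
# so the fit should recover a ~ 1000 and b ~ 1.5.
diams = [10, 20, 30, 40]
costs = [1000 * d ** 1.5 for d in diams]
a, b = fit_power_law(diams, costs)
```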

  3. A Bacterial Analysis Platform: An Integrated System for Analysing Bacterial Whole Genome Sequencing Data for Clinical Diagnostics and Surveillance.

    PubMed

    Thomsen, Martin Christen Frølund; Ahrenfeldt, Johanne; Cisneros, Jose Luis Bellod; Jurtz, Vanessa; Larsen, Mette Voldby; Hasman, Henrik; Aarestrup, Frank Møller; Lund, Ole

    2016-01-01

    Recent advances in whole genome sequencing have made the technology available for routine use in microbiological laboratories. However, a major obstacle for using this technology is the availability of simple and automatic bioinformatics tools. Based on previously published and already available web-based tools we developed a single pipeline for batch uploading of whole genome sequencing data from multiple bacterial isolates. The pipeline will automatically identify the bacterial species and, if applicable, assemble the genome, identify the multilocus sequence type, plasmids, virulence genes and antimicrobial resistance genes. A short printable report for each sample will be provided and an Excel spreadsheet containing all the metadata and a summary of the results for all submitted samples can be downloaded. The pipeline was benchmarked using datasets previously used to test the individual services. The reported results enable a rapid overview of the major results, and comparing that to the previously found results showed that the platform is reliable and able to correctly predict the species and find most of the expected genes automatically. In conclusion, a combined bioinformatics platform was developed and made publicly available, providing easy-to-use automated analysis of bacterial whole genome sequencing data. The platform may be of immediate relevance as a guide for investigators using whole genome sequencing for clinical diagnostics and surveillance. The platform is freely available at: https://cge.cbs.dtu.dk/services/CGEpipeline-1.1 and it is the intention that it will continue to be expanded with new features as these become available.

  4. [Informative indices of biocorrosion activity for determining the character of soil aggressiveness].

    PubMed

    Chesnokova, M G; Shalai, V V; Kraus, Y A; Mironov, A Y; Blinova, E G

    2016-01-01

    Underground corrosion is among the most difficult types of corrosion because it is multifactorial, and the contribution of each parameter to the destruction of the metal changes dynamically. To evaluate the informativeness of the biocorrosion activity index, as influenced by various factors, for determining soil aggressiveness in the area of pipeline laying, a complex of microbiological and physical-chemical indices was studied. The abundance of sulfur-cycle bacteria (autotrophic thiobacteria and sulphate-reducing bacteria) and the total concentrations of sulfur and iron were determined in soil samples adjacent to the surface of underground pipelines in the Khanty-Mansi Autonomous District of Yugra, and these indices were related to the specific electrical resistance of the soil. Samples with weak soil aggressiveness predominated (55.17% of cases), with a biocorrosion soil activity criterion of 2.44 ± 0.19. The results show significant differences in thiobacteria content and mobile iron between the studied soil samples. A direct correlation of moderate strength was found between the concentrations of the identified bacteria and the iron content of the soil. The findings demonstrate the need for dynamic monitoring and for methods of protecting metal structures against biocorrosion during both the design and the operation of pipelines.

  5. Assessment of Welders Exposure to Carcinogen Metals from Manual Metal Arc Welding in Gas Transmission Pipelines, Iran

    PubMed Central

    Golbabaei, F; Seyedsomea, M; Ghahri, A; Shirkhanloo, H; Khadem, M; Hassani, H; Sadeghi, N; Dinari, B

    2012-01-01

    Background: Welding can produce dangerous fumes containing various metals, especially carcinogenic ones. Occupational exposure to welding fumes is associated with lung cancer; therefore, welders on gas transmission pipelines are known as a high-risk group. This study was designed to determine the amounts of the metals Cr, Ni, and Cd in the breathing zone and urine of welders and to assess the possibility of introducing urinary metals as a biomarker of occupational exposure. Methods: In this cross-sectional study, 94 individuals from Gas Transmission Pipelines welders in Borujen, Iran, in 2011 were selected and classified into 3 groups: Welders, Back Welders and Assistances. Sampling was performed according to NIOSH 7300 for total chromium, nickel, and cadmium and NIOSH 7600 for Cr+6. For all participants, urine samples were collected during the entire work shift, and metals in urine were determined according to NIOSH 8310. Results: The Back Welders and Assistances groups had the maximum and minimum exposure to total fume and its elements, respectively. In addition, the results showed significant differences (P<0.05) between the Welders and Back Welders groups and the Assistances group in exposure to total fume and its elements, except Ni. Urinary concentrations of Cr, Cd and Ni among all welders were about 4.5-, 12- and 14-fold greater than those detected in controls, respectively. Weak correlations were found between airborne and urinary metal concentrations (R2: Cr=0.45, Cd=0.298, Ni=0.362). Conclusion: Urinary metal concentrations cannot be considered a biomarker for welders' exposure assessment. PMID:23113226

  6. Effect of layerwise structural inhomogeneity on stress-corrosion cracking of steel tubes

    NASA Astrophysics Data System (ADS)

    Perlovich, Yu A.; Krymskaya, O. A.; Isaenkova, M. G.; Morozov, N. S.; Fesenko, V. A.; Ryakhovskikh, I. V.; Esiev, T. S.

    2016-04-01

    Based on X-ray texture and structure analysis of the material of main gas pipelines, it was shown that layerwise inhomogeneity of tubes forms during their manufacturing. The degree of this inhomogeneity affects the tendency of tubes toward stress-corrosion cracking in service. Samples were cut from gas pipeline tubes operated under various conditions, from sections both with and without detected stress-corrosion defects. Distributions of lattice parameters and X-ray line half-widths across the tube wall thickness were constructed, and the crystallographic texture of the external and internal tube layers was analyzed. The data indicate considerable layerwise inhomogeneity in all samples. Despite the different nature of the texture inhomogeneity of gas pipeline tubes, a more inhomogeneous distribution of texture or structure features increases the resistance to stress corrosion. The observed effect can be explained by saturation of the surface layer of the hot-rolled sheet, and of the tube produced from it, with interstitial impurities. This raises the lattice parameters in the external layer of the tube relative to those of the underlying metal, so the internal layers exert a compressive effect on the external layers in the rolling plane, which prevents cracks from opening at the tube surface. Moreover, the high mutual misorientation of grains in the external and internal layers forces a moving crack to change its plane, so crack growth can be inhibited on reaching a layer with a modified texture.

  7. Assessment of welders exposure to carcinogen metals from manual metal arc welding in gas transmission pipelines, Iran.

    PubMed

    Golbabaei, F; Seyedsomea, M; Ghahri, A; Shirkhanloo, H; Khadem, M; Hassani, H; Sadeghi, N; Dinari, B

    2012-01-01

    Welding can produce dangerous fumes containing various metals, especially carcinogenic ones. Occupational exposure to welding fumes is associated with lung cancer; therefore, welders on gas transmission pipelines are known as a high-risk group. This study was designed to determine the amounts of the metals Cr, Ni, and Cd in the breathing zone and urine of welders and to assess the possibility of introducing urinary metals as a biomarker of occupational exposure. In this cross-sectional study, 94 individuals from Gas Transmission Pipelines welders in Borujen, Iran, in 2011 were selected and classified into 3 groups: Welders, Back Welders and Assistances. Sampling was performed according to NIOSH 7300 for total chromium, nickel, and cadmium and NIOSH 7600 for Cr+6. For all participants, urine samples were collected during the entire work shift, and metals in urine were determined according to NIOSH 8310. The Back Welders and Assistances groups had the maximum and minimum exposure to total fume and its elements, respectively. In addition, the results showed significant differences (P<0.05) between the Welders and Back Welders groups and the Assistances group in exposure to total fume and its elements, except Ni. Urinary concentrations of Cr, Cd and Ni among all welders were about 4.5-, 12- and 14-fold greater than those detected in controls, respectively. Weak correlations were found between airborne and urinary metal concentrations (R2: Cr=0.45, Cd=0.298, Ni=0.362). Urinary metal concentrations cannot be considered a biomarker for welders' exposure assessment.

  8. Welding Thermal Simulation and Corrosion Study of X-70 Deep Sea Pipeline Steel

    NASA Astrophysics Data System (ADS)

    Zhang, Weipeng; Li, Zhuoran; Gao, Jixiang; Peng, Zhengwu

    2017-12-01

    A Gleeble thermomechanical processing machine was used to simulate the coarse grain heat affected zone (CGHAZ) of API X-70 thick-wall pipeline steel used in the deep sea. The microstructures and corresponding corrosion behavior of CGHAZs simulated at different cooling rates were investigated and compared to the as-received material by scanning electron microscopy and electrochemical experiments carried out in 3.5 wt.% NaCl solution. The results show that the as-received samples exhibited slightly higher corrosion resistance than the simulated CGHAZs. Among the three sets of simulation experiments, the maximum corrosion tendency occurred at t8/5 = 20 s, which had the most martensite-austenite (M-A) constituent, while the highest corrosion potential was observed at t8/5 = 60 s.

  9. A Study on Optimal Sizing of Pipeline Transporting Equi-sized Particulate Solid-Liquid Mixture

    NASA Astrophysics Data System (ADS)

    Asim, Taimoor; Mishra, Rakesh; Pradhan, Suman; Ubbi, Kuldip

    2012-05-01

    Pipelines transporting solid-liquid mixtures are of practical interest to the oil and pipeline industry throughout the world. Such pipelines are known as slurry pipelines, the transported solid-liquid mixture being commonly known as slurry. The optimal design of such pipelines is of commercial interest for their widespread acceptance. A methodology has been evolved for the optimal sizing of a pipeline transporting a solid-liquid mixture. The least-cost principle has been used in sizing such pipelines, which involves determining the pipe diameter corresponding to the minimum cost for a given solid throughput. A detailed analysis of the transportation of slurry having solids of uniformly graded particle size is included. The proposed methodology can be used to design a pipeline transporting any solid material at different solid throughputs.
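
    The least-cost principle reduces to a one-dimensional trade-off: capital cost rises with pipe diameter while friction (pumping) cost falls steeply with it, so total cost has an interior minimum. A minimal sketch with purely illustrative cost coefficients (not the paper's model):

```python
def total_annual_cost(d, a=200.0, b=2.9e16):
    """Illustrative least-cost model for a slurry pipeline: a capital
    charge growing linearly with diameter d (mm) plus a pumping cost
    falling roughly as d**-5, as Darcy-type friction loss suggests
    for a fixed throughput."""
    return a * d + b / d ** 5

# Search candidate commercial diameters for the minimum total cost.
candidates = range(100, 601, 25)  # candidate diameters, mm
best = min(candidates, key=total_annual_cost)
# With these coefficients the optimum lands at 300 mm.
```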

  10. A Methodology to Teach Advanced A/D Converters, Combining Digital Signal Processing and Microelectronics Perspectives

    ERIC Educational Resources Information Center

    Quintans, C.; Colmenar, A.; Castro, M.; Moure, M. J.; Mandado, E.

    2010-01-01

    ADCs (analog-to-digital converters), especially Pipeline and Sigma-Delta converters, are designed using complex architectures in order to increase their sampling rate and/or resolution. Consequently, the learning of ADC devices also encompasses complex concepts such as multistage synchronization, latency, oversampling, modulation, noise shaping,…

  11. The Translational Genomics Core at Partners Personalized Medicine: Facilitating the Transition of Research towards Personalized Medicine

    PubMed Central

    Blau, Ashley; Brown, Alison; Mahanta, Lisa; Amr, Sami S.

    2016-01-01

    The Translational Genomics Core (TGC) at Partners Personalized Medicine (PPM) serves as a fee-for-service core laboratory for Partners Healthcare researchers, providing access to technology platforms and analysis pipelines for genomic, transcriptomic, and epigenomic research projects. The interaction of the TGC with various components of PPM provides it with a unique infrastructure that allows for greater IT and bioinformatics opportunities, such as sample tracking and data analysis. The following article describes some of the unique opportunities available to an academic research core operating within PPM, such as the ability to develop analysis pipelines with a dedicated bioinformatics team and to maintain a flexible Laboratory Information Management System (LIMS) with the support of an internal IT team, as well as the operational challenges encountered in responding to emerging technologies, diverse investigator needs, and high staff turnover. In addition, the implementation and operational role of the TGC in the Partners Biobank genotyping project of over 25,000 samples is presented as an example of core activities working with other components of PPM. PMID:26927185

  12. The Translational Genomics Core at Partners Personalized Medicine: Facilitating the Transition of Research towards Personalized Medicine.

    PubMed

    Blau, Ashley; Brown, Alison; Mahanta, Lisa; Amr, Sami S

    2016-02-26

    The Translational Genomics Core (TGC) at Partners Personalized Medicine (PPM) serves as a fee-for-service core laboratory for Partners Healthcare researchers, providing access to technology platforms and analysis pipelines for genomic, transcriptomic, and epigenomic research projects. The interaction of the TGC with various components of PPM provides it with a unique infrastructure that allows for greater IT and bioinformatics opportunities, such as sample tracking and data analysis. The following article describes some of the unique opportunities available to an academic research core operating within PPM, such as the ability to develop analysis pipelines with a dedicated bioinformatics team and to maintain a flexible Laboratory Information Management System (LIMS) with the support of an internal IT team, as well as the operational challenges encountered in responding to emerging technologies, diverse investigator needs, and high staff turnover. In addition, the implementation and operational role of the TGC in the Partners Biobank genotyping project of over 25,000 samples is presented as an example of core activities working with other components of PPM.

  13. Use of Dried Capillary Blood Sampling for Islet Autoantibody Screening in Relatives: A Feasibility Study

    PubMed Central

    Rafkin, Lisa E.; Matheson, Della; Steck, Andrea K.; Yu, Liping; Henderson, Courtney; Beam, Craig A.; Boulware, David C.

    2015-01-01

    Background: Islet autoantibody testing provides the basis for assessment of risk of progression to type 1 diabetes. We set out to determine the feasibility and acceptability of dried capillary blood spot–based screening to identify islet autoantibody–positive relatives potentially eligible for inclusion in prevention trials. Materials and Methods: Dried blood spot (DBS) and venous samples were collected from 229 relatives participating in the TrialNet Pathway to Prevention Study. Both samples were tested for glutamic acid decarboxylase, islet antigen 2, and zinc transporter 8 autoantibodies, and venous samples were additionally tested for insulin autoantibodies and islet cell antibodies. We defined multiple autoantibody positive as two or more autoantibodies in venous serum and DBS screen positive if one or more autoantibodies were detected. Participant questionnaires compared the sample collection methods. Results: Of 44 relatives who were multiple autoantibody positive in venous samples, 42 (95.5%) were DBS screen positive, and DBS accurately detected 145 of 147 autoantibody-negative relatives (98.6%). Capillary blood sampling was perceived as more painful than venous blood draw, but 60% of participants would prefer initial screening using home fingerstick with clinic visits only required if autoantibodies were found. Conclusions: Capillary blood sampling could facilitate screening for type 1 diabetes prevention studies. PMID:26375197
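
    The screening performance figures quoted above follow directly from the reported counts; a worked check:

```python
# Re-deriving the DBS screening performance from the abstract's counts.
tp, fn = 42, 2    # DBS screen-positive / -negative among 44 multiple-Ab-positive
tn, fp = 145, 2   # DBS screen-negative / -positive among 147 Ab-negative relatives

sensitivity = tp / (tp + fn)   # 42/44  -> the quoted 95.5%
specificity = tn / (tn + fp)   # 145/147 -> the quoted 98.6%
```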

  14. Comparison of Ion Personal Genome Machine Platforms for the Detection of Variants in BRCA1 and BRCA2.

    PubMed

    Hwang, Sang Mee; Lee, Ki Chan; Lee, Min Seob; Park, Kyoung Un

    2018-01-01

    Transition to next-generation sequencing (NGS) for BRCA1/BRCA2 analysis in clinical laboratories is ongoing, but different platforms and/or data analysis pipelines give different results, complicating implementation. We have evaluated the Ion Personal Genome Machine (PGM) platforms (Ion PGM, Ion PGM Dx; Thermo Fisher Scientific) for the analysis of BRCA1/2. The results of Ion PGM with OTG-snpcaller, a pipeline based on the Torrent mapping alignment program and the Genome Analysis Toolkit, from 75 clinical samples and 14 reference DNA samples were compared with Sanger sequencing for BRCA1/BRCA2. Ten clinical samples and 14 reference DNA samples were additionally sequenced by Ion PGM Dx with Torrent Suite. Fifty types of variants, including 18 pathogenic variants or variants of unknown significance, were identified from the 75 clinical samples, and the known variants of the reference samples were confirmed by Sanger sequencing and/or NGS. One false-negative result occurred with Ion PGM/OTG-snpcaller: an indel variant misidentified as a single nucleotide variant. However, eight discordant results occurred with Ion PGM Dx/Torrent Suite, including both false-positive and false-negative results. A 40-bp deletion, a 4-bp deletion, and a 1-bp deletion were not called, and a false-positive deletion was identified. Four other variants were misidentified as different variants. Ion PGM/OTG-snpcaller showed acceptable performance, with good concordance with Sanger sequencing. However, Ion PGM Dx/Torrent Suite produced many discrepant results and is not suitable for use in a clinical laboratory without further optimization of the variant-calling analysis.

  15. MICCA: a complete and accurate software for taxonomic profiling of metagenomic data.

    PubMed

    Albanese, Davide; Fontana, Paolo; De Filippo, Carlotta; Cavalieri, Duccio; Donati, Claudio

    2015-05-19

    The introduction of high-throughput sequencing technologies has triggered an increase in the number of studies in which the microbiota of environmental and human samples is characterized through the sequencing of selected marker genes. While experimental protocols have undergone a process of standardization that makes them accessible to a large community of scientists, standard and robust data analysis pipelines are still lacking. Here we introduce MICCA, a software pipeline for the processing of amplicon metagenomic datasets that efficiently combines quality filtering, clustering of Operational Taxonomic Units (OTUs), taxonomy assignment and phylogenetic tree inference. MICCA provides accurate results, reaching a good compromise between modularity and usability. Moreover, we introduce a de-novo clustering algorithm specifically designed for the inference of OTUs. Tests on real and synthetic datasets show that, thanks to the optimized read-filtering process and the new clustering algorithm, MICCA provides estimates of the number of OTUs and of other common ecological indices that are more accurate and robust than those of currently available pipelines. Analysis of public metagenomic datasets shows that the higher consistency of results improves our understanding of the structure of environmental and human-associated microbial communities. MICCA is an open source project.
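
    De-novo OTU clustering is commonly done with a greedy centroid scheme: each read joins the first existing centroid it matches above a similarity threshold, otherwise it seeds a new OTU. The sketch below illustrates that general approach (not MICCA's specific algorithm); `difflib`'s ratio stands in for a proper pairwise alignment identity:

```python
from difflib import SequenceMatcher

def greedy_otu_cluster(reads, threshold=0.97):
    """Greedy centroid clustering in the style of de-novo OTU pickers.
    SequenceMatcher.ratio() is a crude stand-in for alignment identity."""
    centroids, otus = [], []
    for read in reads:
        for i, c in enumerate(centroids):
            if SequenceMatcher(None, read, c).ratio() >= threshold:
                otus[i].append(read)  # joins an existing OTU
                break
        else:
            centroids.append(read)   # seeds a new OTU
            otus.append([read])
    return otus

reads = ["ACGT" * 10,              # 40-nt centroid
         "ACGT" * 9 + "ACGA",      # one mismatch -> clusters with it
         "TG" * 20]                # dissimilar  -> new OTU
clusters = greedy_otu_cluster(reads)
```

    The result order dependence of greedy clustering is one reason published pipelines sort reads (e.g. by abundance) before clustering.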

  16. Algorithmic methods to infer the evolutionary trajectories in cancer progression

    PubMed Central

    Graudenzi, Alex; Ramazzotti, Daniele; Sanz-Pamplona, Rebeca; De Sano, Luca; Mauri, Giancarlo; Moreno, Victor; Antoniotti, Marco; Mishra, Bud

    2016-01-01

    The genomic evolution inherent to cancer relates directly to a renewed focus on the voluminous next-generation sequencing data and machine learning for the inference of explanatory models of how the (epi)genomic events are choreographed in cancer initiation and development. However, despite the increasing availability of multiple additional -omics data, this quest has been frustrated by various theoretical and technical hurdles, mostly stemming from the dramatic heterogeneity of the disease. In this paper, we build on our recent work on the “selective advantage” relation among driver mutations in cancer progression and investigate its applicability to the modeling problem at the population level. Here, we introduce PiCnIc (Pipeline for Cancer Inference), a versatile, modular, and customizable pipeline to extract ensemble-level progression models from cross-sectional sequenced cancer genomes. The pipeline has many translational implications because it combines state-of-the-art techniques for sample stratification, driver selection, identification of fitness-equivalent exclusive alterations, and progression model inference. We demonstrate PiCnIc’s ability to reproduce much of the current knowledge on colorectal cancer progression as well as to suggest novel experimentally verifiable hypotheses. PMID:27357673

  17. A pipeline VLSI design of fast singular value decomposition processor for real-time EEG system based on on-line recursive independent component analysis.

    PubMed

    Huang, Kuan-Ju; Shih, Wei-Yeh; Chang, Jui Chung; Feng, Chih Wei; Fang, Wai-Chi

    2013-01-01

    This paper presents a pipeline VLSI design of a fast singular value decomposition (SVD) processor for a real-time electroencephalography (EEG) system based on on-line recursive independent component analysis (ORICA). Since SVD is used frequently in the computations of the real-time EEG system, a low-latency and high-accuracy SVD processor is essential. During the EEG system process, the proposed SVD processor solves the diagonal, inverse and inverse square root matrices of the target matrices in real time. Generally, SVD requires a huge amount of computation in hardware implementation. Therefore, this work proposes a novel design concept for data-flow updating to assist the pipeline VLSI implementation. The SVD processor can greatly improve the feasibility of real-time EEG system applications such as brain-computer interfaces (BCIs). The proposed architecture is implemented using TSMC 90 nm CMOS technology. The sample rate of the raw EEG data is 128 Hz. The core size of the SVD processor is 580×580 μm², and the operating frequency is 20 MHz. It consumes 0.774 mW of power per execution of the 8-channel EEG processing.
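
    The inverse-square-root computation such a processor accelerates (used, e.g., for whitening in ICA) can be illustrated in software for the 2×2 symmetric positive-definite case, using the Cayley-Hamilton closed form sqrt(A) = (A + s·I)/t with s = √(det A) and t = √(tr A + 2s). This is a generic numeric sketch, not the paper's hardware data flow:

```python
import math

def inv_sqrt_2x2(a, b, c):
    """Inverse square root of the symmetric positive-definite matrix
    [[a, b], [b, c]]: first sqrt(A) by the 2x2 closed form, then a
    direct 2x2 inversion. Returns the (a, b, c) entries of A**-1/2."""
    s = math.sqrt(a * c - b * b)        # sqrt(det A)
    t = math.sqrt(a + c + 2.0 * s)      # sqrt(tr A + 2 s)
    ra, rb, rc = (a + s) / t, b / t, (c + s) / t   # entries of sqrt(A)
    det = ra * rc - rb * rb
    return rc / det, -rb / det, ra / det

# diag(4, 9) has square root diag(2, 3), so A**-1/2 = diag(1/2, 1/3).
ia, ib, ic = inv_sqrt_2x2(4.0, 0.0, 9.0)
```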

  18. The hidden genomic landscape of acute myeloid leukemia: subclonal structure revealed by undetected mutations

    PubMed Central

    Bodini, Margherita; Ronchini, Chiara; Giacò, Luciano; Russo, Anna; Melloni, Giorgio E. M.; Luzi, Lucilla; Sardella, Domenico; Volorio, Sara; Hasan, Syed K.; Ottone, Tiziana; Lavorgna, Serena; Lo-Coco, Francesco; Candoni, Anna; Fanin, Renato; Toffoletti, Eleonora; Iacobucci, Ilaria; Martinelli, Giovanni; Cignetti, Alessandro; Tarella, Corrado; Bernard, Loris; Pelicci, Pier Giuseppe

    2015-01-01

The analyses carried out using 2 different bioinformatics pipelines (SomaticSniper and MuTect) on the same set of genomic data from 133 acute myeloid leukemia (AML) patients, sequenced within The Cancer Genome Atlas project, gave discrepant results. We subsequently tested these 2 variant-calling pipelines on 20 leukemia samples from our series (19 primary AMLs and 1 secondary AML). By validating many of the predicted somatic variants (variant allele frequencies ranging from 100% to 5%), we observed significantly different calling efficiencies. In particular, despite relatively high specificity, sensitivity was poor in both pipelines, resulting in a high rate of false negatives. Our findings raise the possibility that the landscapes of AML genomes might be more complex than previously reported and characterized by the presence of hundreds of genes mutated at low variant allele frequency, suggesting that the application of genome sequencing to the clinic requires a careful and critical evaluation. We think that improvements in technology and workflow standardization, through the generation of clear experimental and bioinformatics guidelines, are fundamental to translate the use of next-generation sequencing from research to the clinic and to transform genomic information into better diagnosis and outcomes for the patient. PMID:25499761

  19. VIP: an integrated pipeline for metagenomics of virus identification and discovery

    PubMed Central

    Li, Yang; Wang, Hao; Nie, Kai; Zhang, Chen; Zhang, Yi; Wang, Ji; Niu, Peihua; Ma, Xuejun

    2016-01-01

Identification and discovery of viruses using next-generation sequencing (NGS) technology is a fast-developing area with potential wide application in clinical diagnostics, public health monitoring and novel virus discovery. However, the tremendous volume of sequence data generated by NGS studies poses great challenges, in both accuracy and speed, to their application. Here we describe VIP (“Virus Identification Pipeline”), a one-touch computational pipeline for virus identification and discovery from metagenomic NGS data. VIP performs the following steps to achieve its goal: (i) map and filter out background-related reads, (ii) extensively classify reads on the basis of nucleotide and remote amino acid homology, and (iii) perform multiple k-mer based de novo assembly and phylogenetic analysis to provide evolutionary insight. We validated the feasibility and veracity of this pipeline with sequencing results of various types of clinical samples and public datasets. VIP has also contributed to timely virus diagnosis (~10 min) in acutely ill patients, demonstrating its potential in the performance of unbiased NGS-based clinical studies that demand short turnaround times. VIP is released under GPLv3 and is available for free download at: https://github.com/keylabivdc/VIP. PMID:27026381
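Step (i), subtracting background-related reads before classification, can be illustrated with a toy k-mer filter. VIP itself uses dedicated read mappers for this stage; the k-mer size, threshold, and sequences below are invented purely for illustration:

```python
# Toy sketch of background-read subtraction: drop any read that shares
# too many k-mers with a host/background reference. (Hypothetical
# parameters; not VIP's actual mapping-based implementation.)

def kmers(seq, k=11):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def filter_background(reads, host_ref, k=11, min_shared=3):
    host = kmers(host_ref, k)
    kept = []
    for r in reads:
        shared = len(kmers(r, k) & host)
        if shared < min_shared:        # host-like reads are discarded
            kept.append(r)
    return kept

host = "ACGT" * 50                     # toy repetitive host reference
viral = "TTGACCGTATTGCCAGGATCCGGTAACGTTAGGCAT"
reads = [host[10:40], viral[0:30], host[60:95], viral[3:33]]
kept = filter_background(reads, host)
assert kept == [viral[0:30], viral[3:33]]
```

Only the non-host reads survive to the classification and assembly stages.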

  20. Using complementary approaches to identify trans-domain nuclear gene transfers in the extremophile Galdieria sulphuraria (Rhodophyta).

    PubMed

    Pandey, Ravi S; Saxena, Garima; Bhattacharya, Debashish; Qiu, Huan; Azad, Rajeev K

    2017-02-01

    Identification of horizontal gene transfers (HGTs) has primarily relied on phylogenetic tree based methods, which require a rich sampling of sequenced genomes to ensure a reliable inference. Because the success of phylogenetic approaches depends on the breadth and depth of the database, researchers usually apply stringent filters to detect only the most likely gene transfers in the genomes of interest. One such study focused on a highly conservative estimate of trans-domain gene transfers in the extremophile eukaryote, Galdieria sulphuraria (Galdieri) Merola (Rhodophyta), by applying multiple filters in their phylogenetic pipeline. This led to the identification of 75 inter-domain acquisitions from Bacteria or Archaea. Because of the evolutionary, ecological, and potential biotechnological significance of foreign genes in algae, alternative approaches and pipelines complementing phylogenetics are needed for a more comprehensive assessment of HGT. We present here a novel pipeline that uncovered 17 novel foreign genes of prokaryotic origin in G. sulphuraria, results that are supported by multiple lines of evidence including composition-based, comparative data, and phylogenetics. These genes encode a variety of potentially adaptive functions, from metabolite transport to DNA repair. © 2016 Phycological Society of America.

  1. Double-pulse laser-induced breakdown spectroscopy analysis of scales from petroleum pipelines

    NASA Astrophysics Data System (ADS)

    Cavalcanti, G. H.; Rocha, A. A.; Damasceno, R. N.; Legnaioli, S.; Lorenzetti, G.; Pardini, L.; Palleschi, V.

    2013-09-01

Pipeline scales from the Campos Bay Petroleum Field near Rio de Janeiro, Brazil have been analyzed by both Raman spectroscopy and by laser-induced breakdown spectroscopy (LIBS) using a double-pulse, calibration-free approach. Elements that are characteristic of petroleum (e.g. C, H, N, O, Mg, Na, Fe and V) were detected, in addition to the Ca, Al, and Si that form the matrix of the scale. The LIBS results were compared with the results of micro-Raman spectroscopy, which confirmed the nature of the incrustations inferred by the LIBS analysis. Results of this preliminary study suggest that diffusion of pipe material into the pipeline intake column plays an important role in the growth of scale. Thanks to the simplicity and relatively low cost of the equipment, and to the fact that no special chemical pre-treatment of the samples is needed, LIBS can offer very fast acquisition of data and the possibility of in situ measurements. LIBS could thus represent an alternative or complementary method for the chemical characterization of scales by comparison to conventional analytical techniques, such as X-ray diffraction or X-ray fluorescence.

  2. Reconstructing the calibrated strain signal in the Advanced LIGO detectors

    NASA Astrophysics Data System (ADS)

    Viets, A. D.; Wade, M.; Urban, A. L.; Kandhasamy, S.; Betzwieser, J.; Brown, Duncan A.; Burguet-Castell, J.; Cahillane, C.; Goetz, E.; Izumi, K.; Karki, S.; Kissel, J. S.; Mendell, G.; Savage, R. L.; Siemens, X.; Tuyenbayev, D.; Weinstein, A. J.

    2018-05-01

    Advanced LIGO’s raw detector output needs to be calibrated to compute dimensionless strain h(t) . Calibrated strain data is produced in the time domain using both a low-latency, online procedure and a high-latency, offline procedure. The low-latency h(t) data stream is produced in two stages, the first of which is performed on the same computers that operate the detector’s feedback control system. This stage, referred to as the front-end calibration, uses infinite impulse response (IIR) filtering and performs all operations at a 16 384 Hz digital sampling rate. Due to several limitations, this procedure currently introduces certain systematic errors in the calibrated strain data, motivating the second stage of the low-latency procedure, known as the low-latency gstlal calibration pipeline. The gstlal calibration pipeline uses finite impulse response (FIR) filtering to apply corrections to the output of the front-end calibration. It applies time-dependent correction factors to the sensing and actuation components of the calibrated strain to reduce systematic errors. The gstlal calibration pipeline is also used in high latency to recalibrate the data, which is necessary due mainly to online dropouts in the calibrated data and identified improvements to the calibration models or filters.
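The core of the correction step, FIR filtering of the front-end output combined with slowly varying time-dependent factors, can be sketched in a few lines. The filter taps, the kappa factor, and the stand-in signal below are invented toy values; the real pipeline derives its FIR correction filters from detailed calibration models:

```python
import numpy as np

# Toy sketch of the gstlal-style correction: apply an FIR filter to the
# front-end h(t) stream, then scale by a slowly varying time-dependent
# factor. All numerical values here are illustrative assumptions.

fs = 16384                                   # front-end sampling rate (Hz)
t = np.arange(fs) / fs                       # one second of data
h_frontend = np.sin(2 * np.pi * 35.0 * t)    # stand-in for front-end h(t)

fir = np.array([0.02, 0.96, 0.02])           # toy FIR correction filter
kappa = 1.0 + 0.05 * np.sin(2 * np.pi * 0.1 * t)  # toy time-dependent factor

h_corrected = kappa * np.convolve(h_frontend, fir, mode="same")
assert h_corrected.shape == h_frontend.shape
```

In the actual pipeline the sensing and actuation paths are corrected separately before recombination; this sketch only shows the two elementary operations involved.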

  3. MICCA: a complete and accurate software for taxonomic profiling of metagenomic data

    PubMed Central

    Albanese, Davide; Fontana, Paolo; De Filippo, Carlotta; Cavalieri, Duccio; Donati, Claudio

    2015-01-01

The introduction of high throughput sequencing technologies has triggered an increase in the number of studies in which the microbiota of environmental and human samples is characterized through the sequencing of selected marker genes. While experimental protocols have undergone a process of standardization that makes them accessible to a large community of scientists, standard and robust data analysis pipelines are still lacking. Here we introduce MICCA, a software pipeline for the processing of amplicon metagenomic datasets that efficiently combines quality filtering, clustering of Operational Taxonomic Units (OTUs), taxonomy assignment and phylogenetic tree inference. MICCA provides accurate results, reaching a good compromise between modularity and usability. Moreover, we introduce a de novo clustering algorithm specifically designed for the inference of OTUs. Tests on real and synthetic datasets show that, thanks to the optimized read-filtering process and to the new clustering algorithm, MICCA provides estimates of the number of OTUs and of other common ecological indices that are more accurate and robust than currently available pipelines. Analysis of public metagenomic datasets shows that the higher consistency of results improves our understanding of the structure of environmental and human associated microbial communities. MICCA is an open source project. PMID:25988396
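For readers unfamiliar with de novo OTU inference, the following toy greedy centroid clusterer shows the kind of operation involved: each read joins the first existing centroid within a similarity threshold, or seeds a new OTU. This is not MICCA's algorithm; the 97% threshold and the naive similarity measure are illustrative assumptions:

```python
# Toy greedy de-novo OTU clustering (illustrative only, not MICCA's
# optimized implementation).

def similarity(a, b):
    """Crude positional identity between two sequences of equal length."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def greedy_otus(reads, threshold=0.97):
    centroids, otus = [], []
    for r in reads:
        for i, c in enumerate(centroids):
            if similarity(r, c) >= threshold:
                otus[i].append(r)          # assign to existing OTU
                break
        else:
            centroids.append(r)            # seed a new OTU
            otus.append([r])
    return otus

reads = ["ACGTACGTACGTACGTACGTACGTACGTACGTACGT"] * 3 + \
        ["TTTTGGGGCCCCAAAATTTTGGGGCCCCAAAATTTT"] * 2
otus = greedy_otus(reads)
assert len(otus) == 2 and [len(o) for o in otus] == [3, 2]
```

Real OTU pickers replace the positional identity with alignment-based similarity and process reads in abundance order, but the clustering skeleton is the same.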

  4. 30 CFR 250.1006 - How must I decommission and take out of service a DOI pipeline?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... a DOI pipeline? 250.1006 Section 250.1006 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT... and Pipeline Rights-of-Way § 250.1006 How must I decommission and take out of service a DOI pipeline...) The table in this section lists the requirements if you take a DOI pipeline out of service: If you...

  5. [Self-sampling and reminder letters increase participation in the Finnish cervical cancer screening programme].

    PubMed

    Virtanen, Anni; Nieminen, Pekka; Malila, Nea; Luostarinen, Tapio; Anttila, Ahti

    2013-01-01

The participation rate in the national cervical cancer screening programme is currently less than 70% in Finland. A potential new method of increasing attendance is self-taken samples for hrHPV testing. All cervical cancer screening non-attendees in 22 municipalities first received a reminder letter. Non-attendees after the reminder letter were offered a self-sampling device. Reminder letters increased total participation from 72.6% to 79.9%, and self-sampling tests increased it further to 83.4%. If reminder letters were sent with fixed appointments, participation was significantly higher (30 vs. 14%). If self-sampling is used after the recommended two invitations, overall screening attendance could reach the desired 80% to 85%.

  6. [Utilization of self-sampling kits for HPV testing in cervical cancer screening - pilot study].

    PubMed

    Ondryášová, H; Koudeláková, V; Drábek, J; Vaněk, P; Slavkovský, R; Hajdúch, M

    2015-12-01

To gain initial experience with alternative sampling (self-sampling) for HPV testing as a means of the cervical cancer screening program. Original work. Institute of Molecular and Translational Medicine, Faculty of Medicine and Dentistry, Palacky University in Olomouc. Based on expressions of interest, 215 self-sampling kits were posted to women. Vaginal swabs obtained by self-sampling with the Evalyn® Brush were analyzed for the presence of HPV infection by Cobas 4800 HPV (Roche), followed by genotyping using PapilloCheck® HPV-Screening (Greiner Bio-One). Sixty women randomly chosen from our sample were sent a questionnaire focused on their experience with self-sampling. One hundred seventy-four of 215 (81%) distributed self-sampling devices were returned for analysis. All cervicovaginal swabs were sampled correctly and could be analyzed by the Cobas 4800 HPV test. Similarly, 98% (171/174) of samples were analyzable by PapilloCheck® HPV-Screening. One hundred twenty-five (72%) of 174 tested samples were HPV negative. Low risk HPV infection was detected in only 7 samples (4%), and high risk HPV (hrHPV) infection was present in 42 samples (24%). The most frequently detected hrHPV genotypes were HPV16 (11/42; 26%) and HPV53 (6/42; 14%). hrHPV co-infection was detected in 10 cases; in 5 of these, lrHPV infection was also found. Of the 60 questionnaires, 48 (80%) were returned. From this group, 47 (98%) women rated their experience with the self-sampling device as good to excellent. The user manual of the self-sampling device was considered good to excellent by all women (100%), as was the convenience of using the device. As expected, most of the women (n = 42 [88%]) preferred self-sampling to physician sampling. Cervicovaginal self-sampling leads to valid HPV screening results using two molecular genetic methods and was accepted very well by Czech women. Self-sampling, as an opportunity to participate in cervical cancer screening, could increase attendance in the screening program and would help to reduce the incidence of and mortality from this disease in the Czech population.

  7. Analysis of the strength of sea gas pipelines of positive buoyancy conditioned by glaciation

    NASA Astrophysics Data System (ADS)

    Malkov, Venyamin; Kurbatova, Galina; Ermolaeva, Nadezhda; Malkova, Yulia; Petrukhin, Ruslan

    2018-05-01

A technique for estimating the stress state of a gas pipeline laid along the seabed in northern latitudes in the presence of glaciation is proposed. It is assumed that the pipeline lies on the seabed, but that under certain conditions glaciation forms on some part of the pipeline, and the pipeline section at the place of glaciation can come off the ground due to the positive buoyancy of the ice. Calculation of the additional stresses caused by bending of the pipeline is of practical interest for strength evaluation. The gas pipeline is a two-layer cylindrical shell of circular cross section. The inner layer is made of high-strength steel; the outer layer is made of reinforced ferroconcrete. The proposed methodology for calculating the strength of the gas pipeline is based on the equations of the theory of shells. The procedure takes into account the effect of internal gas pressure, the external pressure of sea water, the weight of the two-layer gas pipeline, and the weight of the ice layer. The lifting force created by the displaced fluid and the positive buoyancy of the ice is also taken into account. It is significant that the listed loads cause only two types of deformation of the gas pipeline: axisymmetric and antisymmetric. The interaction of the pipeline with the ground as an elastic foundation is not considered. The main objective of the research is to establish whether part of the pipeline separates from the ground. A method for calculating the stresses and deformations occurring in a model sea gas pipeline is presented.
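The separation condition at the heart of the analysis, namely that the buoyancy of the displaced water exceeds the total weight once enough ice accretes, can be checked per metre of pipe with elementary geometry. All radii, thicknesses, and densities below are illustrative assumptions, not values from the paper:

```python
import math

# Back-of-envelope separation check for an ice-coated two-layer pipe:
# a section tends to lift off the seabed when buoyancy > total weight.
# (Hypothetical dimensions and densities, in SI units.)

RHO_WATER, RHO_STEEL, RHO_CONCRETE = 1025.0, 7850.0, 2400.0
RHO_ICE, RHO_GAS, G = 900.0, 80.0, 9.81   # gas density at pipeline pressure

def ring_area(r_in, r_out):
    return math.pi * (r_out**2 - r_in**2)

def net_uplift_per_metre(r_bore, t_steel, t_concrete, t_ice):
    r1 = r_bore + t_steel           # outer radius of steel layer
    r2 = r1 + t_concrete            # outer radius of concrete layer
    r3 = r2 + t_ice                 # outer radius including glaciation
    weight = G * (RHO_GAS * math.pi * r_bore**2
                  + RHO_STEEL * ring_area(r_bore, r1)
                  + RHO_CONCRETE * ring_area(r1, r2)
                  + RHO_ICE * ring_area(r2, r3))
    buoyancy = G * RHO_WATER * math.pi * r3**2
    return buoyancy - weight        # > 0: the section tends to float

# Without ice the section stays on the bottom; a thick ice layer
# (density below that of sea water) can lift it off the ground.
assert net_uplift_per_metre(0.5, 0.02, 0.08, 0.0) < 0
assert net_uplift_per_metre(0.5, 0.02, 0.08, 0.6) > 0
```

The paper's shell-theory model then computes the bending stresses induced in the lifted span; the sketch only reproduces the load balance that triggers separation.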

  8. Pipeline transport and simultaneous saccharification of corn stover.

    PubMed

    Kumar, Amit; Cameron, Jay B; Flynn, Peter C

    2005-05-01

Pipeline transport of corn stover delivered by truck from the field is evaluated against a range of truck transport costs. Corn stover transported by pipeline at 20% solids concentration (wet basis) or higher could directly enter an ethanol fermentation plant, and hence the investment in the pipeline inlet end processing facilities displaces comparable investment in the plant. At 20% solids, pipeline transport of corn stover costs less than trucking at capacities in excess of 1.4 M dry tonnes/yr when compared to a mid-range of truck transport costs (excluding any credit for economies of scale achieved in the ethanol fermentation plant from larger scale due to multiple pipelines). Pipelining of corn stover gives the opportunity to conduct simultaneous transport and saccharification (STS). If current enzymes are used, this would require elevated temperature. Heating of the slurry for STS, which in a fermentation plant is achieved from waste heat, is a significant cost element (more than 5 cents/l of ethanol) if done at the pipeline inlet unless waste heat is available, for example from an electric power plant located adjacent to the pipeline inlet. Heat loss in a 1.26 m pipeline carrying 2 M dry tonnes/yr is about 5 degrees C at a distance of 400 km in typical prairie clay soils, and would not likely require insulation; smaller pipelines or different soil conditions might require insulation for STS. Saccharification in the pipeline would reduce the need for investment in the fermentation plant, saving about 0.2 cents/l of ethanol. Transport of corn stover in multiple pipelines offers the opportunity to develop a large ethanol fermentation plant, avoiding some of the diseconomies of scale that arise from smaller plants whose capacities are limited by issues of truck congestion.

  9. A pilot study of community-based self-sampling for HPV testing among non-attenders of cervical cancer screening programs in El Salvador.

    PubMed

    Laskow, Bari; Figueroa, Ruben; Alfaro, Karla M; Scarinci, Isabel C; Conlisk, Elizabeth; Maza, Mauricio; Chang, Judy C; Cremer, Miriam

    2017-08-01

    To establish the feasibility and acceptability of home-based HPV self-sampling among women who did not attend screening appointments in rural El Salvador. In a cross-sectional study, data were collected from May 2015 to January 2016 among 60 women aged 30-59 years who were not pregnant, provided informed consent, had not been screened in 2 years, had no history of pre-cancer treatment, and did not attend a scheduled HPV screening. Participants completed questionnaires and received educational information before being given an opportunity to self-sample with the Hybrid Capture 2 High Risk HPV DNA Test. Self-sampling was accepted by 41 (68%) participants. Almost all women chose to self-sample because the process was easy (40/41, 98%), could be performed at home (40/41, 98%), and saved time (38/41, 93%), and because they felt less embarrassed (33/41, 80%). The most common reason for declining the test was not wanting to be screened (8/19, 42%). The prevalence of high-risk HPV types among women who accepted self-sampling was 17% (7/41). For most women, community-based self-sampling was an acceptable way to participate in a cervical cancer screening program. In low-resource countries, incorporating community-based self-sampling into screening programs might improve coverage of high-risk women. © 2017 International Federation of Gynecology and Obstetrics.

  10. 78 FR 53751 - Dominion NGL Pipelines, LLC; Notice of Petition for Declaratory Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-30

    ... new ethane pipeline (Natrium Ethane Pipeline) extending from a new natural gas processing and... utilize, or pay for, significant capacity on the Natrium Ethane Pipeline (Committed Shipper); and (3) the...

  11. 49 CFR 192.51 - Scope.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE... for the selection and qualification of pipe and components for use in pipelines. ...

  12. A semi-automated luminescence based standard membrane feeding assay identifies novel small molecules that inhibit transmission of malaria parasites by mosquitoes

    PubMed Central

    Vos, Martijn W.; Stone, Will J. R.; Koolen, Karin M.; van Gemert, Geert-Jan; van Schaijk, Ben; Leroy, Didier; Sauerwein, Robert W.; Bousema, Teun; Dechering, Koen J.

    2015-01-01

    Current first-line treatments for uncomplicated falciparum malaria rapidly clear the asexual stages of the parasite, but do not fully prevent parasite transmission by mosquitoes. The standard membrane feeding assay (SMFA) is the biological gold standard assessment of transmission reducing activity (TRA), but its throughput is limited by the need to determine mosquito infection status by dissection and microscopy. Here we present a novel dissection-free luminescence based SMFA format using a transgenic Plasmodium falciparum reporter parasite without resistance to known antimalarials and therefore unrestricted in its utility in compound screening. Analyses of sixty-five compounds from the Medicines for Malaria Venture validation and malaria boxes identified 37 compounds with high levels of TRA (>80%); different assay modes allowed discrimination between gametocytocidal and downstream modes of action. Comparison of SMFA data to published assay formats for predicting parasite infectivity indicated that individual in vitro screens show substantial numbers of false negatives. These results highlight the importance of the SMFA in the screening pipeline for transmission reducing compounds and present a rapid and objective method. In addition we present sixteen diverse chemical scaffolds from the malaria box that may serve as a starting point for further discovery and development of malaria transmission blocking drugs. PMID:26687564

  13. Urinary Tract Infection as a Preventable Cause of Pregnancy Complications: Opportunities, Challenges, and a Global Call to Action

    PubMed Central

    Gilbert, Nicole M.; O'Brien, Valerie P.; Hultgren, Scott; Macones, George; Lewis, Warren G.

    2013-01-01

    The urinary tract is a common site of infection in humans. During pregnancy, urinary tract infection (UTI) is associated with increased risks of maternal and neonatal morbidity and mortality, even when the infection is asymptomatic. By mapping available rates of UTI in pregnancy across different populations, we emphasize this as a problem of global significance. Many countries with high rates of preterm birth and neonatal mortality also have rates of UTI in pregnancy that exceed rates seen in more developed countries. A global analysis of the etiologies of UTI revealed familiar culprits as well as emerging threats. Screening and treatment of UTI have improved birth outcomes in several more developed countries and would likely improve maternal and neonatal health worldwide. However, challenges of implementation in resource-poor settings must be overcome. We review the nature of the barriers occurring at each step of the screening and treatment pipeline and highlight steps necessary to overcome these obstacles. It is our hope that the information compiled here will increase awareness of the global significance of UTI in maternal and neonatal health and embolden governments, nongovernmental organizations, and researchers to do their part to make urine screening and UTI treatment a reality for all pregnant women. PMID:24416696

  14. Urinary tract infection as a preventable cause of pregnancy complications: opportunities, challenges, and a global call to action.

    PubMed

    Gilbert, Nicole M; O'Brien, Valerie P; Hultgren, Scott; Macones, George; Lewis, Warren G; Lewis, Amanda L

    2013-09-01

    The urinary tract is a common site of infection in humans. During pregnancy, urinary tract infection (UTI) is associated with increased risks of maternal and neonatal morbidity and mortality, even when the infection is asymptomatic. By mapping available rates of UTI in pregnancy across different populations, we emphasize this as a problem of global significance. Many countries with high rates of preterm birth and neonatal mortality also have rates of UTI in pregnancy that exceed rates seen in more developed countries. A global analysis of the etiologies of UTI revealed familiar culprits as well as emerging threats. Screening and treatment of UTI have improved birth outcomes in several more developed countries and would likely improve maternal and neonatal health worldwide. However, challenges of implementation in resource-poor settings must be overcome. We review the nature of the barriers occurring at each step of the screening and treatment pipeline and highlight steps necessary to overcome these obstacles. It is our hope that the information compiled here will increase awareness of the global significance of UTI in maternal and neonatal health and embolden governments, nongovernmental organizations, and researchers to do their part to make urine screening and UTI treatment a reality for all pregnant women.

  15. Mesoscale carbon sequestration site screening and CCS infrastructure analysis.

    PubMed

    Keating, Gordon N; Middleton, Richard S; Stauffer, Philip H; Viswanathan, Hari S; Letellier, Bruce C; Pasqualini, Donatella; Pawar, Rajesh J; Wolfsberg, Andrew V

    2011-01-01

We explore carbon capture and sequestration (CCS) at the mesoscale, a level of study between regional carbon accounting and highly detailed reservoir models for individual sites. We develop an approach to CO2 sequestration site screening for industries or energy development policies that involves identification of an appropriate sequestration basin, analysis of geologic formations, definition of surface sites, design of infrastructure, and analysis of CO2 transport and storage costs. Our case study involves carbon management for potential oil shale development in the Piceance-Uinta Basin, CO and UT. This study uses new capabilities of the CO2-PENS model for site screening, including reservoir capacity, injectivity, and cost calculations for simple reservoirs at multiple sites. We couple this with a model of optimized source-sink-network infrastructure (SimCCS) to design pipeline networks and minimize CCS cost for a given industry or region. The CLEAR(uff) dynamical assessment model calculates the CO2 source term for various oil production levels. Nine sites in a 13,300 km² area have the capacity to store 6.5 GtCO2, corresponding to shale-oil production of 1.3 Mbbl/day for 50 years (about 1/4 of U.S. crude oil production). Our results highlight the complex, nonlinear relationship between the spatial deployment of CCS infrastructure and the oil-shale production rate.

  16. Acceptability of self-collected versus provider-collected sampling for HPV DNA testing among women in rural El Salvador.

    PubMed

    Rosenbaum, Alan J; Gage, Julia C; Alfaro, Karla M; Ditzian, Lauren R; Maza, Mauricio; Scarinci, Isabel C; Felix, Juan C; Castle, Philip E; Villalta, Sofia; Miranda, Esmeralda; Cremer, Miriam L

    2014-08-01

To determine the acceptability of self-collected versus provider-collected sampling among women participating in public sector HPV-based cervical cancer screening in El Salvador. Two thousand women aged 30-49 years underwent self-collected and provider-collected sampling with careHPV (Qiagen, Gaithersburg, MD, USA) between October 2012 and March 2013. After sample collection, a random sample of women (n=518) were asked about their experience. Participants were questioned regarding sampling method preference, previous cervical cancer screening, HPV and cervical cancer knowledge, HPV risk factors, and demographic information. All 518 women approached to participate in this questionnaire study agreed and were enrolled; 27.8% (142 of 511 responding) had not received cervical cancer screening within the past 3 years and were considered under-screened. Overall, 38.8% (n=201) preferred self-collection and 31.9% (n=165) preferred provider collection. Self-collection preference was associated with prior tubal ligation, HPV knowledge, future self-sampling preference, and future home-screening preference (P<0.05). Reasons for self-collection preference included privacy/embarrassment, ease, and less pain; reasons cited for provider-collection preference were result accuracy and provider knowledge/experience. Self-sampling was found to be acceptable; screening programs could therefore consider offering this option either in the clinic or at home. Self-sampling at home may increase coverage in low-resource countries and reduce the burden that screening places upon clinical infrastructure. Copyright © 2014 International Federation of Gynecology and Obstetrics. All rights reserved.

  17. Numerical Modeling of Mechanical Behavior for Buried Steel Pipelines Crossing Subsidence Strata

    PubMed Central

    Han, C. J.

    2015-01-01

This paper addresses the mechanical behavior of buried steel pipelines crossing subsidence strata. The investigation is based on numerical simulation of the nonlinear response of the pipeline-soil system through the finite element method, considering large strain and displacement, inelastic material behavior of the buried pipeline and the surrounding soil, as well as contact and friction on the pipeline-soil interface. The effects of key parameters on the mechanical behavior of the buried pipeline were investigated, such as strata subsidence, diameter-thickness ratio, buried depth, internal pressure, friction coefficient and soil properties. The results show that the maximum strain appears on the outer transition subsidence section of the pipeline, and its cross section is concave shaped. As strata subsidence and the diameter-thickness ratio increase, the out-of-roundness, longitudinal strain and equivalent plastic strain increase gradually. As buried depth increases, the deflection, out-of-roundness and strain of the pipeline decrease. Internal pressure and friction coefficient have little effect on the deflection of the buried pipeline. As internal pressure increases, out-of-roundness is reduced and strain increases gradually. The physical properties of the soil have a great influence on the mechanical properties of the buried pipeline. The results from the present study can be used for the development of optimized designs and preventive maintenance for buried steel pipelines. PMID:26103460

  18. An Automated Algorithm to Screen Massive Training Samples for a Global Impervious Surface Classification

    NASA Technical Reports Server (NTRS)

    Tan, Bin; Brown de Colstoun, Eric; Wolfe, Robert E.; Tilton, James C.; Huang, Chengquan; Smith, Sarah E.

    2012-01-01

An algorithm is developed to automatically screen outliers from massive training samples for the Global Land Survey - Imperviousness Mapping Project (GLS-IMP). GLS-IMP will produce a global 30 m spatial resolution impervious cover data set for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. This unprecedented high resolution impervious cover data set is not only significant to urbanization studies but also needed by global carbon, hydrology, and energy balance research. A supervised classification method, the regression tree, is applied in this project, and a set of accurate training samples is the key to supervised classifications. We developed the global-scale training samples from fine resolution (~1 m) satellite data (Quickbird and Worldview2) and then aggregated the fine resolution impervious cover maps to 30 m resolution. To improve the classification accuracy, the training samples should be screened before being used to train the regression tree. It is impossible to manually screen 30 m resolution training samples collected globally. In Europe alone, for example, there are 174 training sites, with site sizes ranging from 4.5 km by 4.5 km to 8.1 km by 3.6 km and more than six million training samples in total. Therefore, we developed this automated, statistics-based algorithm to screen the training samples at two levels: the site level and the scene level. At the site level, all training samples are divided into 10 groups according to the percentage of impervious surface within a sample pixel; samples falling within each 10% interval form one group. For each group, both univariate and multivariate outliers are detected and removed. The screening process then escalates to the scene level, where a similar procedure with a looser threshold is applied to account for possible variance due to site differences. We do not screen across scenes, because scenes may vary due to phenology, solar-view geometry, atmospheric conditions, and other factors rather than actual land cover differences. Finally, we will compare the classification results from screened and unscreened training samples to assess the improvement achieved by cleaning up the training samples.
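The site-level screening described above can be sketched as follows. The z-score and Mahalanobis-distance thresholds, and the toy data, are assumptions for illustration rather than the project's actual statistics:

```python
import numpy as np

# Sketch of site-level screening: bin samples by impervious fraction in
# 10% steps; within each bin, flag univariate (z-score) and multivariate
# (Mahalanobis distance) outliers. Thresholds are illustrative.

def screen_group(X, z_max=3.0, md_max=3.5):
    mu, sd = X.mean(axis=0), X.std(axis=0)
    uni_ok = (np.abs((X - mu) / sd) < z_max).all(axis=1)
    d = X - mu
    md = np.sqrt(np.einsum("ij,jk,ik->i", d,
                           np.linalg.inv(np.cov(X, rowvar=False)), d))
    return uni_ok & (md < md_max)

def screen_training_samples(bands, impervious_frac):
    keep = np.zeros(len(bands), dtype=bool)
    bins = np.clip((impervious_frac * 10).astype(int), 0, 9)
    for b in range(10):
        idx = np.where(bins == b)[0]
        if len(idx) > bands.shape[1] + 1:   # enough samples for a covariance
            keep[idx] = screen_group(bands[idx])
        else:
            keep[idx] = True                # too few samples to screen
    return keep

rng = np.random.default_rng(1)
X = rng.normal(0.3, 0.05, size=(500, 4))    # toy spectral band values
frac = rng.uniform(0, 1, size=500)          # impervious fraction per sample
X[0] = 5.0                                  # inject an obvious outlier
keep = screen_training_samples(X, frac)
assert not keep[0] and keep.sum() > 400
```

The scene-level pass would repeat the same group-wise tests with looser thresholds, as the abstract describes.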

  19. 78 FR 16764 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2012-0302] Pipeline Safety: Information Collection Activities AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Notice and Request for Comments on a Previously...

  20. 78 FR 24309 - Pipeline and Hazardous Materials Safety Administration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-24

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration List of Special Permit Applications Delayed AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA..., Pipeline and Hazardous Materials Safety Administration, U.S. Department of Transportation, East Building...
