Sample records for target identification pipeline

  1. Target-decoy Based False Discovery Rate Estimation for Large-scale Metabolite Identification.

    PubMed

    Wang, Xusheng; Jones, Drew R; Shaw, Timothy I; Cho, Ji-Hoon; Wang, Yuanyuan; Tan, Haiyan; Xie, Boer; Zhou, Suiping; Li, Yuxin; Peng, Junmin

    2018-05-23

    Metabolite identification is a crucial step in mass spectrometry (MS)-based metabolomics. However, it is still challenging to assess the confidence of assigned metabolites. In this study, we report a novel method for estimating the false discovery rate (FDR) of metabolite assignment with a target-decoy strategy, in which decoys are generated by violating the octet rule of chemistry through the addition of small odd numbers of hydrogen atoms. The target-decoy strategy was integrated into JUMPm, an automated metabolite identification pipeline for large-scale MS analysis, and was also evaluated with two other metabolomics tools, mzMatch and mzMine 2. The reliability of the FDR calculation was examined using false datasets simulated by altering MS1 or MS2 spectra. Finally, we used the JUMPm pipeline coupled with the target-decoy strategy to process unlabeled and stable-isotope-labeled metabolomic datasets. The results demonstrate that the target-decoy strategy is a simple and effective method for evaluating the confidence of high-throughput metabolite identification.
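
The decoy-counting estimator at the heart of such a target-decoy strategy can be sketched in a few lines. This illustrates the generic estimator only, not JUMPm's actual scoring code; the function name and inputs are hypothetical.

```python
def target_decoy_fdr(target_scores, decoy_scores, threshold):
    """Estimate FDR at a score threshold as (#decoy hits) / (#target hits).

    Assumes one decoy is searched per target, so accepted decoys
    approximate the number of false target matches.
    """
    n_target = sum(s >= threshold for s in target_scores)
    n_decoy = sum(s >= threshold for s in decoy_scores)
    return n_decoy / n_target if n_target else 0.0
```

For example, 4 target hits and 1 decoy hit above the threshold give an estimated FDR of 0.25.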

  2. Environmental durability diagnostic for printed identification codes of polymer insulation for distribution pipelines

    NASA Astrophysics Data System (ADS)

    Zhuravleva, G. N.; Nagornova, I. V.; Kondratov, A. P.; Bablyuk, E. B.; Varepo, L. G.

    2017-08-01

    Research into and modelling of the weatherability and environmental durability of multilayer polymer insulation of both cables and pipelines with printed barcodes or color identification information were performed. It was shown that interlayer printing of identification codes in the insulation coatings of distribution pipelines provides high marking stability against light and atmospheric condensation, which allows pipeline damage to be monitored remotely. However, microbiological fouling of the upper polymer layer hampers remote identification of pipeline damage. The color-difference values and density changes of printed PE and PVC insulation due to weather and biological factors were determined.

  3. Strategies for target identification of antimicrobial natural products.

    PubMed

    Farha, Maya A; Brown, Eric D

    2016-05-04

    Covering: 2000 to 2015. Despite a pervasive decline in natural product research at many pharmaceutical companies over the last two decades, natural products have undeniably been a prolific and unsurpassed source of new lead antibacterial compounds. Due to their inherent complexity, natural extracts face several hurdles in high-throughput discovery programs, including target identification. Target identification and validation is a crucial process for advancing hits through the discovery pipeline, but it has remained a major bottleneck. In the case of natural products, extremely low yields and limited compound supply further impede the process. Here, we review the wealth of target identification strategies that have been proposed and implemented for the characterization of novel antibacterials. Traditionally, these have included genomic and biochemical approaches, which, in recent years, have been improved with modern technology and better honed for natural product discovery. Further, we discuss more recent innovative approaches for uncovering the targets of new antibacterial natural products, which have resulted from advances in chemical biology tools. Finally, we present unique screening platforms implemented to streamline the process of target identification. The different innovative methods developed in response to the challenge of characterizing the mode of action of antibacterial natural products have cumulatively built useful frameworks that may rekindle interest in natural product drug discovery programs.

  4. A distributed computational search strategy for the identification of diagnostics targets: application to finding aptamer targets for methicillin-resistant staphylococci.

    PubMed

    Flanagan, Keith; Cockell, Simon; Harwood, Colin; Hallinan, Jennifer; Nakjang, Sirintra; Lawry, Beth; Wipat, Anil

    2014-06-30

    The rapid and cost-effective identification of bacterial species is crucial, especially for clinical diagnosis and treatment. Peptide aptamers have been shown to be valuable for use as a component of novel, direct detection methods. These small peptides have a number of advantages over antibodies, including greater specificity and longer shelf life. These properties facilitate their use as the detector components of biosensor devices. However, the identification of suitable aptamer targets for particular groups of organisms is challenging. We present a semi-automated processing pipeline for the identification of candidate aptamer targets from whole bacterial genome sequences. The pipeline can be configured to search for protein sequence fragments that uniquely identify a set of strains of interest. The system is also capable of identifying additional organisms that may be of interest due to their possession of protein fragments in common with the initial set. Through the use of Cloud computing technology and distributed databases, our system is capable of scaling with the rapidly growing genome repositories, and consequently of keeping the resulting data sets up-to-date. The system described is also more generically applicable to the discovery of specific targets for other diagnostic approaches such as DNA probes, PCR primers and antibodies.
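
The core set operation described above, finding protein fragments shared by all target strains but absent from every non-target genome, can be sketched as follows. This is a simplification: the fragment length `k`, exact matching, and all names are illustrative assumptions, and the real pipeline distributes this search over Cloud-hosted databases.

```python
def kmers(seq, k):
    """All length-k fragments of a protein sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def candidate_targets(target_proteomes, nontarget_proteomes, k=8):
    """Fragments present in every target strain and in no non-target
    strain -- candidate aptamer-target peptides (sketch only)."""
    shared = set.intersection(*(kmers(p, k) for p in target_proteomes))
    background = set()
    for p in nontarget_proteomes:
        background |= kmers(p, k)
    return shared - background
```

The same set difference also exposes the "additional organisms of interest" case: any non-target proteome that shares fragments with the target set shrinks the candidate pool.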

  6. Identification of missing variants by combining multiple analytic pipelines.

    PubMed

    Ren, Yingxue; Reddy, Joseph S; Pottier, Cyril; Sarangi, Vivekananda; Tian, Shulan; Sinnwell, Jason P; McDonnell, Shannon K; Biernacka, Joanna M; Carrasquillo, Minerva M; Ross, Owen A; Ertekin-Taner, Nilüfer; Rademakers, Rosa; Hudson, Matthew; Mainzer, Liudmila Sergeevna; Asmann, Yan W

    2018-04-16

    After decades of identifying risk factors using array-based genome-wide association studies (GWAS), genetic research of complex diseases has shifted to sequencing-based rare variant discovery. This requires large sample sizes for statistical power and has raised questions about whether current variant calling practices are adequate for large cohorts. It is well known that there are discrepancies between variants called by different pipelines, and that using a single pipeline always misses true variants exclusively identifiable by other pipelines. Nonetheless, it is common practice today to call variants with one pipeline due to computational cost, and to assume that false negative calls are a small percentage of the total. We analyzed 10,000 exomes from the Alzheimer's Disease Sequencing Project (ADSP) using multiple analytic pipelines consisting of different read aligners and variant calling strategies. We compared variants identified by the two aligners in 50, 100, 200, 500, 1000, and 1952 samples, and compared variants identified by adding single-sample genotyping to the default multi-sample joint genotyping in 50, 100, 500, 2000, 5000, and 10,000 samples. We found that using a single pipeline missed an increasing number of high-quality variants as sample size grew. By combining two read aligners and two variant calling strategies, we rescued 30% of pass-QC variants at a sample size of 2000, and 56% at 10,000 samples. The rescued variants had higher proportions of low-frequency (minor allele frequency [MAF] 1-5%) and rare (MAF < 1%) variants, which are the very type of variants of interest. In 660 Alzheimer's disease cases with earlier onset ages of ≤65, 4 out of 13 (31%) previously published rare pathogenic and protective mutations in the APP, PSEN1, and PSEN2 genes were undetected by the default one-pipeline approach but recovered by the multi-pipeline approach. Identification of the complete variant set from sequencing data is the prerequisite of genetic
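
The multi-pipeline comparison rests on a simple idea: the union of the per-pipeline call sets defines the fuller variant set, and whatever a single pipeline lacks relative to that union is what it missed. A minimal sketch, keying variants by (chrom, pos, ref, alt) tuples; the keying and names are assumptions, not the paper's implementation.

```python
def combine_callsets(callsets):
    """Union the variant sets called by different pipelines and report,
    per pipeline, the variants that pipeline alone would have missed."""
    union = set().union(*callsets)
    missed = [union - cs for cs in callsets]
    return union, missed

def rescued_fraction(single, union):
    """Fraction of the combined call set absent from one pipeline's calls."""
    return len(union - single) / len(union)
```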

  7. VIP: an integrated pipeline for metagenomics of virus identification and discovery

    PubMed Central

    Li, Yang; Wang, Hao; Nie, Kai; Zhang, Chen; Zhang, Yi; Wang, Ji; Niu, Peihua; Ma, Xuejun

    2016-01-01

    Identification and discovery of viruses using next-generation sequencing (NGS) technology is a fast-developing area with potentially wide application in clinical diagnostics, public health monitoring and novel virus discovery. However, the tremendous volume of sequence data generated by NGS poses great challenges to both the accuracy and the speed of its applications. Here we describe VIP (“Virus Identification Pipeline”), a one-touch computational pipeline for virus identification and discovery from metagenomic NGS data. VIP performs the following steps to achieve its goal: (i) map and filter out background-related reads, (ii) extensively classify reads on the basis of nucleotide and remote amino acid homology, and (iii) perform multiple k-mer based de novo assembly and phylogenetic analysis to provide evolutionary insight. We validated the feasibility and veracity of this pipeline with sequencing results of various types of clinical samples and public datasets. VIP has also contributed to timely virus diagnosis (~10 min) in acutely ill patients, demonstrating its potential in the performance of unbiased NGS-based clinical studies that demand short turnaround times. VIP is released under GPLv3 and is available for free download at: https://github.com/keylabivdc/VIP. PMID:27026381
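
Step (i), background subtraction, can be illustrated with a crude k-mer filter: drop any read that shares too many k-mers with a host-genome index. VIP itself maps reads with dedicated aligners, so this is only a sketch of the idea; the function name and parameters are invented.

```python
def filter_background(reads, host_kmers, k=21, max_hits=2):
    """Drop a read once it shares more than `max_hits` k-mers with a
    host k-mer index; keep everything else for viral classification."""
    kept = []
    for read in reads:
        hits = sum(read[i:i + k] in host_kmers
                   for i in range(len(read) - k + 1))
        if hits <= max_hits:
            kept.append(read)
    return kept
```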

  8. A Bioinformatic Pipeline for Monitoring of the Mutational Stability of Viral Drug Targets with Deep-Sequencing Technology.

    PubMed

    Kravatsky, Yuri; Chechetkin, Vladimir; Fedoseeva, Daria; Gorbacheva, Maria; Kravatskaya, Galina; Kretova, Olga; Tchurikov, Nickolai

    2017-11-23

    The efficient development of antiviral drugs, including efficient antiviral small interfering RNAs (siRNAs), requires continuous monitoring of the strict correspondence between a drug and the related highly variable viral DNA/RNA target(s). Deep sequencing is able to provide an assessment of both the general target conservation and the frequency of particular mutations in the different target sites. The aim of this study was to develop a reliable bioinformatic pipeline for the analysis of millions of short, deep sequencing reads corresponding to selected highly variable viral sequences that serve as drug targets. The suggested bioinformatic pipeline combines available programs with ad hoc scripts based on an original algorithm for searching for conserved targets in deep sequencing data. We also present statistical criteria for the threshold of reliable mutation detection and for the assessment of variation between corresponding data sets. These criteria are robust against possible sequencing errors in the reads. As an example, the bioinformatic pipeline is applied to the study of the conservation of RNA interference (RNAi) targets in human immunodeficiency virus 1 (HIV-1) subtype A. The developed pipeline is freely available to download at the website http://virmut.eimb.ru/. Brief comments and comparisons between VirMut and other pipelines are also presented.

  9. An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.

    PubMed

    Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao

    2016-09-01

    The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require a fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed that auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community. © 2016 Society for Laboratory Automation and Screening.
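
One of the simplest generic inter-plate corrections, subtracting each plate's median signal so plates become comparable, illustrates the kind of systematic-error correction such a QC pipeline automates. The paper's auto-QC selects and applies corrections automatically; this sketch shows only the idea, and the names are invented.

```python
from statistics import median

def correct_interplate_shift(plates):
    """Median-center every plate (a list of row lists of well signals)
    so a constant per-plate offset is removed."""
    corrected = []
    for plate in plates:
        m = median(v for row in plate for v in row)
        corrected.append([[v - m for v in row] for row in plate])
    return corrected
```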

  10. HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing

    PubMed Central

    Karimi, Ramin; Hajdu, Andras

    2016-01-01

    Comprehensive efforts toward low-cost sequencing in the past few years have led to the growth of complete genome databases. In parallel with this effort, fast and cost-effective methods and applications are strongly needed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly desirable. As an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and nontarget databases. Hadoop and MapReduce, as parallel and distributed computing tools running on commodity hardware, are used in this pipeline. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of unique and common DNA signatures detected in the target database creates opportunities to improve the identification process, not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis. PMID:26884678
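
The k-mer counting at the core of this approach maps naturally onto MapReduce: mappers emit (k-mer, 1) pairs per genome, and reducers sum counts per k-mer. A single-process sketch of that structure (Hadoop handles the actual distribution; the function names are illustrative):

```python
from collections import defaultdict

def map_phase(genome, k):
    """Map step: emit (k-mer, 1) pairs for one genome."""
    return [(genome[i:i + k], 1) for i in range(len(genome) - k + 1)]

def reduce_phase(pairs):
    """Shuffle + reduce step: sum the counts per k-mer."""
    counts = defaultdict(int)
    for kmer, n in pairs:
        counts[kmer] += n
    return dict(counts)

def unique_signatures(target_genomes, nontarget_genomes, k):
    """k-mers found in the target set but in no non-target genome."""
    count = lambda gs: reduce_phase(p for g in gs for p in map_phase(g, k))
    return set(count(target_genomes)) - set(count(nontarget_genomes))
```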

  12. The Tuberculosis Drug Discovery and Development Pipeline and Emerging Drug Targets

    PubMed Central

    Mdluli, Khisimuzi; Kaneko, Takushi; Upton, Anna

    2015-01-01

    The recent accelerated approval of two first-in-class TB drugs, bedaquiline and delamanid, for use in extensively drug-resistant and multidrug-resistant tuberculosis (MDR-TB) has reinvigorated the TB drug discovery and development field. However, although several promising clinical development programs are ongoing to evaluate new TB drugs and regimens, the number of novel series represented is few. The global early-development pipeline is also woefully thin. To have a chance of achieving the goal of better, shorter, safer TB drug regimens with utility against drug-sensitive and drug-resistant disease, a robust and diverse global TB drug discovery pipeline is key, including innovative approaches that make use of recently acquired knowledge on the biology of TB. Fortunately, drug discovery for TB has resurged in recent years, generating compounds with varying potential for progression into developable leads. In parallel, advances have been made in understanding TB pathogenesis. It is now possible to apply the lessons learned from recent TB hit generation efforts and newly validated TB drug targets to generate the next wave of TB drug leads. Use of currently underexploited sources of chemical matter and lead-optimization strategies may also improve the efficiency of future TB drug discovery. Novel TB drug regimens with shorter treatment durations must target all subpopulations of Mycobacterium tuberculosis existing in an infection, including those responsible for the protracted TB treatment duration. This review summarizes the current TB drug development pipeline and proposes strategies for generating improved hits and leads in the discovery phase that could help achieve this goal. PMID:25635061

  13. OPTICAL correlation identification technology applied in underwater laser imaging target identification

    NASA Astrophysics Data System (ADS)

    Yao, Guang-tao; Zhang, Xiao-hui; Ge, Wei-long

    2012-01-01

    Underwater laser imaging is an effective method for detecting short-range underwater targets and an important complement to sonar detection. With the development of underwater laser imaging and underwater vehicle technology, automatic underwater target identification has received increasing attention and remains a research challenge in underwater optical imaging information processing. Today, automatic underwater target identification based on optical imaging is usually realized in software on digital hardware, whose algorithms and control are very flexible. However, optical imaging yields 2D or even 3D images, so the amount of information to process is large; purely digital electronic hardware therefore needs a long identification time and struggles to meet real-time requirements. Parallel processing on a computer can improve identification speed, but at the cost of greater complexity, size and power consumption. This paper attempts to apply optical correlation identification technology to automatic underwater target identification. Optical correlation exploits the Fourier-transform property of a Fourier lens, which performs the Fourier transform of image information on a nanosecond timescale; optical free-space interconnection offers parallel, high-speed, large-capacity, high-resolution computation, and combining it with the computational and control flexibility of digital circuits yields a hybrid optoelectronic identification mode. We derive the theoretical formulation of correlation identification, analyze the principle of optical correlation identification, and write a MATLAB simulation program. We identify targets in single frames obtained by underwater range-gated laser imaging, and by identifying and locating targets at different positions we can improve
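
The optical correlator described above has a direct digital analogue: a matched filter implemented with FFTs, where the Fourier lens's transform is done numerically and the target is located at the correlation peak. A minimal numpy sketch under that analogy (illustrative only; the array sizes and names are invented):

```python
import numpy as np

def correlation_peak(scene, template):
    """Frequency-domain cross-correlation of a scene with a template;
    returns the (row, col) offset where the correlation peaks, i.e.
    where the template best matches the scene."""
    F = np.fft.fft2(scene)
    H = np.conj(np.fft.fft2(template, s=scene.shape))  # matched filter
    corr = np.fft.ifft2(F * H).real
    return np.unravel_index(np.argmax(corr), corr.shape)
```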

  14. PharmMapper server: a web server for potential drug target identification using pharmacophore mapping approach

    PubMed Central

    Liu, Xiaofeng; Ouyang, Sisheng; Yu, Biao; Liu, Yabo; Huang, Kai; Gong, Jiayu; Zheng, Siyuan; Li, Zhihua; Li, Honglin; Jiang, Hualiang

    2010-01-01

    In silico drug target identification, which includes many distinct algorithms for finding disease genes and proteins, is the first step in the drug discovery pipeline. When the 3D structures of the targets are available, the problem of target identification is usually converted to finding the best interaction mode between the potential target candidates and small molecule probes. A pharmacophore, the spatial arrangement of features essential for a molecule to interact with a specific target receptor, is an alternative to molecular docking for achieving this goal. PharmMapper server is a freely accessed web server designed to identify potential target candidates for given small molecules (drugs, natural products, or other newly discovered compounds with unidentified binding targets) using a pharmacophore mapping approach. PharmMapper hosts a large, in-house pharmacophore database (namely PharmTargetDB) annotated from the target information in TargetBank, BindingDB, DrugBank and the Potential Drug Target Database, including over 7000 receptor-based pharmacophore models (covering information on over 1500 drug targets). PharmMapper automatically finds the best mapping poses of the query molecule against all the pharmacophore models in PharmTargetDB and lists the top N best-fitted hits with appropriate target annotations, together with the respective aligned poses of the query molecule. Benefiting from a highly efficient and robust triangle-hashing mapping method, PharmMapper offers high throughput and takes only about 1 h on average to screen the whole PharmTargetDB. The protocol was successful in finding the proper targets among the top 300 pharmacophore candidates in a retrospective benchmarking test of tamoxifen. PharmMapper is available at http://59.78.96.61/pharmmapper. PMID:20430828

  15. APRICOT: an integrated computational pipeline for the sequence-based identification and characterization of RNA-binding proteins.

    PubMed

    Sharan, Malvika; Förstner, Konrad U; Eulalio, Ana; Vogel, Jörg

    2017-06-20

    RNA-binding proteins (RBPs) have been established as core components of several post-transcriptional gene regulation mechanisms. Experimental techniques such as cross-linking and co-immunoprecipitation have enabled the large-scale identification of RBPs, RNA-binding domains (RBDs) and their regulatory roles in eukaryotic species such as human and yeast. In contrast, our knowledge of the number and potential diversity of RBPs in bacteria is poorer due to the technical challenges associated with existing global screening approaches. We introduce APRICOT, a computational pipeline for the sequence-based identification and characterization of proteins using RBDs known from experimental studies. The pipeline identifies functional motifs in protein sequences using position-specific scoring matrices and Hidden Markov Models of the functional domains, and statistically scores them based on a series of sequence-based features. Subsequently, APRICOT identifies putative RBPs and characterizes them by several biological properties. Here we demonstrate the application and adaptability of the pipeline on large-scale protein sets, including the bacterial proteome of Escherichia coli. APRICOT showed better performance on various datasets compared with other existing tools for the sequence-based prediction of RBPs, achieving an average sensitivity and specificity of 0.90 and 0.91, respectively. The command-line tool and its documentation are available at https://pypi.python.org/pypi/bio-apricot. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. Feature extraction and identification in distributed optical-fiber vibration sensing system for oil pipeline safety monitoring

    NASA Astrophysics Data System (ADS)

    Wu, Huijuan; Qian, Ya; Zhang, Wei; Tang, Chenghao

    2017-12-01

    The high sensitivity of a distributed optical-fiber vibration sensing (DOVS) system based on phase-sensitive optical time-domain reflectometry (Φ-OTDR) technology also brings high nuisance alarm rates (NARs) in real applications. In this paper, the feature extraction methods of wavelet decomposition (WD) and wavelet packet decomposition (WPD) are comparatively studied for three typical field-testing signals, and an artificial neural network (ANN) is built for event identification. The comparison results show that WPD performs slightly better than WD for DOVS signal analysis and identification in oil pipeline safety monitoring. The identification rate can be improved up to 94.4%, and the nuisance alarm rate can be kept as low as 5.6% for the identification network with wavelet packet energy distribution features.
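
The wavelet-packet energy features fed to such an ANN can be illustrated with a small, dependency-free sketch using the Haar basis: decompose the signal into a full packet tree, then use each node's normalized energy as one feature. The paper uses deeper decompositions of Φ-OTDR traces and a trained network; the names and the level default here are assumptions.

```python
import math

def haar_split(x):
    """One Haar analysis step: approximation and detail halves."""
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def wpd_energy_features(x, level=2):
    """Normalized energy of each node of a full wavelet packet tree --
    the feature vector handed to the classifier."""
    nodes = [x]
    for _ in range(level):
        nodes = [half for node in nodes for half in haar_split(node)]
    energies = [sum(v * v for v in node) for node in nodes]
    total = sum(energies) or 1.0
    return [e / total for e in energies]
```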

  17. NGSPanPipe: A Pipeline for Pan-genome Identification in Microbial Strains from Experimental Reads.

    PubMed

    Kulsum, Umay; Kapil, Arti; Singh, Harpreet; Kaur, Punit

    2018-01-01

    Recent advancements in sequencing technologies have decreased both the time and the cost of sequencing whole bacterial genomes. High-throughput Next-Generation Sequencing (NGS) technology has generated enormous amounts of data on microbial populations, publicly available across various repositories. As a consequence, it has become possible to study and compare the genomes of different bacterial strains within a species or genus in terms of evolution, ecology and diversity. Studying the pan-genome provides insights into deciphering microevolution, global composition, and diversity in the virulence and pathogenesis of a species. It can also assist in identifying drug targets and proposing vaccine candidates. The effective analysis of these large genome datasets necessitates the development of robust tools. Current methods for pan-genome construction do not support direct input of raw reads from the sequencer but require the reads to be preprocessed into an assembled protein/gene sequence file or a binary matrix of orthologous genes/proteins. We have designed an easy-to-use integrated pipeline, NGSPanPipe, which can identify the pan-genome directly from short reads. The output from the pipeline is compatible with other pan-genome analysis tools. We evaluated our pipeline against other methods for developing pan-genomes, i.e. reference-based assembly and de novo assembly, using simulated reads of Mycobacterium tuberculosis. The single-script pipeline (pipeline.pl) is applicable to all bacterial strains. It integrates multiple in-house Perl scripts and is freely accessible from https://github.com/Biomedinformatics/NGSPanPipe.
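
Downstream of ortholog identification, a pan-genome analysis typically reduces to classifying gene clusters from a binary presence/absence matrix: core genes occur in every strain, unique genes in exactly one, accessory genes in between. A sketch of that final classification step (names are illustrative; NGSPanPipe's own scripts do considerably more):

```python
def pan_genome_classes(presence):
    """Classify gene clusters from a gene -> per-strain 0/1 list
    mapping into core / accessory / unique."""
    classes = {}
    for gene, row in presence.items():
        n = sum(row)
        if n == len(row):
            classes[gene] = "core"
        elif n == 1:
            classes[gene] = "unique"
        else:
            classes[gene] = "accessory"
    return classes
```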

  18. Identification of pneumonia and influenza deaths using the death certificate pipeline

    PubMed Central

    2012-01-01

    Background Death records are a rich source of data that can be used to assist with public health surveillance and/or decision support. However, to use this type of data for such purposes it has to be transformed into a coded format to make it computable. Because the cause of death in the certificates is reported as free text, encoding the data is currently the single largest barrier to using death certificates for surveillance. The purpose of this study was therefore to demonstrate the feasibility of using a pipeline, composed of a detection rule and a natural language processor, for the real-time encoding of death certificates, using the identification of pneumonia and influenza cases as an example and demonstrating that its accuracy is comparable to existing methods. Results A Death Certificates Pipeline (DCP) was developed to automatically code death certificates and identify pneumonia and influenza cases. The pipeline used MetaMap to code death certificates from the Utah Department of Health for the year 2008. The output of MetaMap was then processed by detection rules which flagged pneumonia and influenza cases based on the Centers for Disease Control and Prevention (CDC) case definition. The output from the DCP was compared with the current method used by the CDC and with a keyword search. Recall, precision, positive predictive value and F-measure with respect to the CDC method were calculated for the two other methods considered here. Compared with the CDC method, the two techniques showed the following recall/precision results: DCP: 0.998/0.98; keyword searching: 0.96/0.96. The F-measures were 0.99 and 0.96, respectively (DCP and keyword searching). Both the keyword search and the DCP can run interactively with modest computer resources, but the DCP showed superior performance. Conclusion The pipeline proposed here for coding death certificates and detecting cases is feasible and can be extended to other conditions. This method provides an
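
The evaluation metrics reported above follow the standard definitions; the F-measure, for instance, is the harmonic mean of precision and recall. A small sketch of the computation from raw counts, where tp, fp and fn are true positives, false positives and false negatives relative to the CDC reference:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision = tp/(tp+fp), recall = tp/(tp+fn), and their
    harmonic mean (F-measure)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```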

  19. A De-Identification Pipeline for Ultrasound Medical Images in DICOM Format.

    PubMed

    Monteiro, Eriksson; Costa, Carlos; Oliveira, José Luís

    2017-05-01

    Clinical data sharing between healthcare institutions and between practitioners is often hindered by privacy protection requirements. This problem is critical in collaborative scenarios where data sharing is fundamental for establishing a workflow among parties. The anonymization of patient information burned into DICOM images requires elaborate processes somewhat more complex than the simple de-identification of textual information. Usually, before sharing, specific areas of the images containing sensitive information must be removed manually. In this paper, we present a pipeline for ultrasound medical image de-identification, provided as a free anonymization REST service for medical image applications, and as a Software-as-a-Service to streamline the automatic de-identification of medical images, which is freely available to end-users. The proposed approach applies image processing functions and machine-learning models to build an automatic system for anonymizing medical images. To perform character recognition, we evaluated several machine-learning models, with Convolutional Neural Networks (CNN) selected as the best approach. To assess the system's quality, 500 processed images were manually inspected, showing an anonymization rate of 89.2%. The tool can be accessed at https://bioinformatics.ua.pt/dicom/anonymizer and it is available with the most recent versions of Google Chrome, Mozilla Firefox and Safari. A Docker image containing the proposed service is also publicly available to the community.

  20. Theory and Application of Magnetic Flux Leakage Pipeline Detection.

    PubMed

    Shi, Yan; Zhang, Chao; Li, Rui; Cai, Maolin; Jia, Guanwei

    2015-12-10

    Magnetic flux leakage (MFL) detection is one of the most popular methods of pipeline inspection. It is a nondestructive testing technique that uses magnetically sensitive sensors to detect the leakage field of defects on both the internal and external surfaces of pipelines. This paper introduces the main principles of MFL detection and the measurement and processing of MFL data. As the key to quantitative analysis in MFL detection, the identification of the magnetic leakage signal is also discussed, and the advantages and disadvantages of different identification methods are analyzed. The paper then briefly introduces the expert systems used. Finally, future developments in pipeline MFL detection are predicted.

  1. Building a pipeline to discover and validate novel therapeutic targets and lead compounds for Alzheimer's disease

    PubMed Central

    Bennett, David A.; Yu, Lei; De Jager, Philip L.

    2014-01-01

    Cognitive decline from Alzheimer's disease (AD) and other causes is a major public health problem worldwide. With changing demographics, the number of persons with dementia will increase rapidly. The treatment and prevention of AD and other dementias is therefore an urgent unmet need. There have been considerable advances in understanding the biology of many age-related disorders that cause dementia. Gains in understanding AD have led to the development of ante-mortem biomarkers of traditional neuropathology and to several phase III interventions targeting the amyloid-β cascade early in the disease process. Many other intervention strategies are in various stages of development. However, efforts to date have met with limited success. A recent National Institute on Aging Research Summit led to a number of requests for applications. One was to establish multi-disciplinary teams of investigators who use systems biology approaches and stem cell technology to identify a new generation of AD targets. We were recently awarded one of three such grants to build a pipeline that integrates epidemiology, systems biology, and stem cell technology to discover and validate novel therapeutic targets and lead compounds for AD treatment and prevention. Here we first describe the two cohorts that provide the data and biospecimens exploited by our pipeline and the unique datasets available from them. Second, we present evidence in support of a chronic disease model of AD that informs our choice of phenotypes as the target outcome. Third, we provide an overview of our approach. Finally, we present the details of our planned drug discovery pipeline. PMID:24508835

  2. Genome-wide identification of microRNA targets in the neglected disease pathogens of the genus Echinococcus.

    PubMed

    Macchiaroli, Natalia; Maldonado, Lucas L; Zarowiecki, Magdalena; Cucher, Marcela; Gismondi, María Inés; Kamenetzky, Laura; Rosenzvit, Mara Cecilia

    2017-06-01

    MicroRNAs (miRNAs), a class of small non-coding RNAs, are key regulators of gene expression at the post-transcriptional level and play essential roles in biological processes such as development. MiRNAs silence target mRNAs by binding to complementary sequences in their 3' untranslated regions (3'UTRs). The parasitic helminths of the genus Echinococcus are the causative agents of echinococcosis, a zoonotic neglected disease. In previous work, we performed a comprehensive identification and characterization of Echinococcus miRNAs. However, current knowledge about their targets is limited. Since target prediction algorithms rely on complementarity between 3'UTRs and miRNA sequences, a major limitation is the lack of accurate 3'UTR sequence information for most species, including parasitic helminths. We performed RNA-seq and developed a pipeline that integrates the transcriptomic data with available genomic data of this parasite in order to identify 3'UTRs of Echinococcus canadensis. The high-confidence set of 3'UTRs obtained allowed the prediction of miRNA targets in Echinococcus through a bioinformatic approach. We performed for the first time a comparative analysis of miRNA targets in Echinococcus and Taenia. We found that many evolutionarily conserved target sites in Echinococcus and Taenia may be functional and under selective pressure. Signaling pathways such as MAPK and Wnt were among the most represented pathways, indicating miRNA roles in parasite growth and development. Genome-wide identification and characterization of miRNA target genes in Echinococcus provide valuable information to guide experimental studies aimed at understanding miRNA functions in the parasites' biology. miRNAs involved in essential functions, especially those absent in the host or showing sequence divergence with respect to host orthologs, might be considered novel therapeutic targets for echinococcosis control. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Real-time Monitoring of Subsea Gas Pipelines, Offshore Platforms, and Ship Inspection Scores Using an Automatic Identification System

    NASA Astrophysics Data System (ADS)

    Artana, K. B.; Pitana, T.; Dinariyana, D. P.; Ariana, M.; Kristianto, D.; Pratiwi, E.

    2018-06-01

    The aim of this research is to develop an algorithm and application that can perform real-time monitoring of the safe operation of offshore platforms and subsea gas pipelines as well as determine the need for ship inspection using data obtained from the automatic identification system (AIS). The research also focuses on integrating the shipping database, AIS data, and other sources to develop a prototype of a real-time monitoring system for offshore platforms and pipelines. A simple concept is used in the development of this prototype: an overlay map that plots the coordinates of the offshore platform and subsea gas pipeline against the ship's coordinates (longitude/latitude) as detected by AIS. Using such information, we can then build an early warning system (EWS) that relays alerts through short message service (SMS), email, or other means when a ship enters the restricted and exclusion zones of platforms and pipelines. The ship inspection system is developed by combining several attributes; decision analysis software is employed to weight four vessel attributes: ship age, ship type, classification, and flag state. Results show that the EWS can increase the safety level of offshore platforms and pipelines, as well as the efficiency of patrol boats in monitoring the safety of the facilities. Meanwhile, ship inspection enables the port to prioritize the ships to be inspected in accordance with their priority inspection scores.
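
    The overlay concept described above amounts to a geofence test: flag an AIS position that comes within an exclusion radius of the pipeline route. A minimal sketch under simplifying assumptions (flat-earth approximation, a pipeline modelled as a polyline, hypothetical coordinates and zone radius):

```python
# Hedged sketch of the EWS overlay check: a ship's AIS (lat, lon) is
# flagged if it lies within an exclusion radius of any pipeline segment.
# Coordinates, route, and the 500 m radius are hypothetical; distances
# use a simple flat-earth approximation, not geodesics.
import math

M_PER_DEG = 111_320  # approximate metres per degree of latitude

def to_xy(lat, lon, lat0):
    """Project degrees to local planar metres around reference latitude."""
    return (lon * M_PER_DEG * math.cos(math.radians(lat0)), lat * M_PER_DEG)

def point_segment_dist(p, a, b):
    """Distance from point p to segment a-b, all in (x, y) metres."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def ship_alert(ship, pipeline, radius_m=500.0):
    """True if the ship is within radius_m of any pipeline segment."""
    lat0 = pipeline[0][0]
    p = to_xy(*ship, lat0)
    pts = [to_xy(lat, lon, lat0) for lat, lon in pipeline]
    return any(point_segment_dist(p, pts[i], pts[i + 1]) < radius_m
               for i in range(len(pts) - 1))

pipeline = [(-5.60, 112.60), (-5.62, 112.70)]   # hypothetical route
print(ship_alert((-5.61, 112.65), pipeline))    # ship near the route
```

    A real deployment would trigger the SMS/email relay from this boolean and use geodesic distances, but the overlay logic is the same.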

  4. Pipeline monitoring with unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Kochetkova, L. I.

    2018-05-01

    Pipeline leakage during the transportation of combustible substances can lead to explosion and fire, causing loss of life and destruction of production and accommodation facilities. Continuous pipeline monitoring allows leaks to be identified promptly so that measures for their elimination can be taken quickly. The paper describes a solution for identifying pipeline leakage using unmanned aerial vehicles. Spectral analysis of the input RGB signal is recommended for identifying pipeline damage. The use of multi-zone digital images makes it possible to detect potential spills of oil hydrocarbons as well as possible soil pollution. The method of multi-temporal digital images within the visible region makes it possible to detect changes in soil morphology for subsequent analysis. The given solution is cost-efficient and reliable, reducing the time and labor required in comparison with other methods of pipeline monitoring.

  5. Theory and Application of Magnetic Flux Leakage Pipeline Detection

    PubMed Central

    Shi, Yan; Zhang, Chao; Li, Rui; Cai, Maolin; Jia, Guanwei

    2015-01-01

    Magnetic flux leakage (MFL) detection is one of the most popular methods of pipeline inspection. It is a nondestructive testing technique that uses magnetically sensitive sensors to detect the leakage field of defects on both the internal and external surfaces of pipelines. This paper introduces the main principles of MFL detection and the measurement and processing of MFL data. As the key to quantitative analysis in MFL detection, the identification of the magnetic leakage signal is also discussed, and the advantages and disadvantages of different identification methods are analyzed. The paper then briefly introduces the expert systems used. Finally, future developments in pipeline MFL detection are predicted. PMID:26690435

  6. Network-assisted target identification for haploinsufficiency and homozygous profiling screens

    PubMed Central

    Wang, Sheng

    2017-01-01

    Chemical genomic screens have recently emerged as a systematic approach to drug discovery on a genome-wide scale. Drug target identification and elucidation of the mechanism of action (MoA) of hits from these noisy high-throughput screens remain difficult. Here, we present GIT (Genetic Interaction Network-Assisted Target Identification), a network analysis method for drug target identification in haploinsufficiency profiling (HIP) and homozygous profiling (HOP) screens. In addition to the drug-induced phenotypic fitness defect of each gene deletion strain, GIT incorporates the fitness defects of the gene's neighbors in the genetic interaction network. On three genome-scale yeast chemical genomic screens, GIT substantially outperforms previous scoring methods for target identification in both HIP and HOP assays. Finally, we show that by combining HIP and HOP assays, GIT further boosts target identification and reveals a drug's potential mechanism of action. PMID:28574983
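
    The core idea, combining a gene's own fitness defect with its network neighbors' defects, can be sketched as follows. The additive weighting and toy data are illustrative stand-ins, not the published GIT scoring function:

```python
# Hedged sketch of network-assisted scoring: each candidate target gene
# is scored by its own drug-induced fitness defect plus a weighted mean
# of its genetic-interaction neighbours' defects. The alpha weight and
# the toy data below are illustrative, not GIT's actual formulation.

def git_score(gene, fitness_defect, neighbors, alpha=0.5):
    """Own defect plus alpha times the mean defect of network neighbours."""
    own = fitness_defect[gene]
    nbrs = neighbors.get(gene, [])
    if not nbrs:
        return own
    neighbor_mean = sum(fitness_defect[n] for n in nbrs) / len(nbrs)
    return own + alpha * neighbor_mean

# Toy deletion-strain fitness defects and genetic interactions.
fitness_defect = {"geneA": 2.0, "geneB": 1.5, "geneC": 0.2, "geneD": 0.1}
neighbors = {"geneA": ["geneB", "geneC"], "geneD": ["geneC"]}

ranked = sorted(fitness_defect,
                key=lambda g: git_score(g, fitness_defect, neighbors),
                reverse=True)
print(ranked[0])  # geneA ranks first: strong own and neighbour defects
```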

  7. An Aperture Photometry Pipeline for K2 Data

    NASA Astrophysics Data System (ADS)

    Buzasi, Derek L.; Carboneau, Lindsey; Lezcano, Andy; Vydra, Ekaterina

    2016-01-01

    As part of an ongoing research program with undergraduate students at Florida Gulf Coast University, we have constructed an aperture photometry pipeline for K2 data. The pipeline performs dynamic automated aperture mask definition for all targets in the K2 fields, followed by aperture photometry and detrending. Our pipeline is currently used to support a number of projects, including studies of stellar rotation and activity, red giant asteroseismology, gyrochronology, and exoplanet searches. In addition, its output is used to support an undergraduate class on exoplanets aimed at a student audience of both majors and non-majors. The pipeline is designed for both batch and single-target use and is easily extensible to data from other missions; pipeline output is available to the community. This paper describes our pipeline and its capabilities and illustrates the quality of the results, drawing on all of the applications for which it is currently used.
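
    The two per-target steps named above, aperture mask definition followed by aperture photometry, can be illustrated in miniature. The threshold rule and pixel values below are simplified stand-ins for the pipeline's dynamic mask definition:

```python
# Hedged sketch: define an aperture mask (pixels above a threshold) and
# sum background-subtracted flux inside it. The thresholding rule and
# the toy postage stamp are illustrative, not the pipeline's algorithm.
from statistics import median

def aperture_photometry(image, threshold):
    """Sum flux in above-threshold pixels, minus a sky background.

    Background per pixel is estimated as the median of below-threshold
    (sky) pixels.
    """
    pixels = [v for row in image for v in row]
    mask = [v for v in pixels if v > threshold]
    sky = [v for v in pixels if v <= threshold]
    background = median(sky) if sky else 0.0
    return sum(mask) - background * len(mask)

# Toy 3x3 postage stamp: one bright source on a sky level of ~10 counts.
image = [[10, 11, 10],
         [10, 90, 10],
         [ 9, 10, 10]]
print(aperture_photometry(image, threshold=20))  # 80.0 counts above sky
```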

  8. A cloud-compatible bioinformatics pipeline for ultrarapid pathogen identification from next-generation sequencing of clinical samples.

    PubMed

    Naccache, Samia N; Federman, Scot; Veeraraghavan, Narayanan; Zaharia, Matei; Lee, Deanna; Samayoa, Erik; Bouquet, Jerome; Greninger, Alexander L; Luk, Ka-Cheung; Enge, Barryett; Wadford, Debra A; Messenger, Sharon L; Genrich, Gillian L; Pellegrino, Kristen; Grard, Gilda; Leroy, Eric; Schneider, Bradley S; Fair, Joseph N; Martínez, Miguel A; Isa, Pavel; Crump, John A; DeRisi, Joseph L; Sittler, Taylor; Hackett, John; Miller, Steve; Chiu, Charles Y

    2014-07-01

    Unbiased next-generation sequencing (NGS) approaches enable comprehensive pathogen detection in the clinical microbiology laboratory and have numerous applications for public health surveillance, outbreak investigation, and the diagnosis of infectious diseases. However, practical deployment of the technology is hindered by the bioinformatics challenge of analyzing results accurately and in a clinically relevant timeframe. Here we describe SURPI ("sequence-based ultrarapid pathogen identification"), a computational pipeline for pathogen identification from complex metagenomic NGS data generated from clinical samples, and demonstrate use of the pipeline in the analysis of 237 clinical samples comprising more than 1.1 billion sequences. Deployable on both cloud-based and standalone servers, SURPI leverages two state-of-the-art aligners for accelerated analyses, SNAP and RAPSearch, which are as accurate as existing bioinformatics tools but orders of magnitude faster in performance. In fast mode, SURPI detects viruses and bacteria by scanning data sets of 7-500 million reads in 11 min to 5 h, while in comprehensive mode, all known microorganisms are identified, followed by de novo assembly and protein homology searches for divergent viruses in 50 min to 16 h. SURPI has also directly contributed to real-time microbial diagnosis in acutely ill patients, underscoring its potential key role in the development of unbiased NGS-based clinical assays in infectious diseases that demand rapid turnaround times. © 2014 Naccache et al.; Published by Cold Spring Harbor Laboratory Press.

  9. The JCSG high-throughput structural biology pipeline.

    PubMed

    Elsliger, Marc André; Deacon, Ashley M; Godzik, Adam; Lesley, Scott A; Wooley, John; Wüthrich, Kurt; Wilson, Ian A

    2010-10-01

    The Joint Center for Structural Genomics (JCSG) high-throughput structural biology pipeline has delivered more than 1000 structures to the community over the past ten years. The JCSG has made a significant contribution to the overall goal of the NIH Protein Structure Initiative (PSI) of expanding structural coverage of the protein universe, as well as making substantial inroads into structural coverage of an entire organism. Targets are processed through an extensive combination of bioinformatics and biophysical analyses to efficiently characterize and optimize each target prior to selection for structure determination. The pipeline uses parallel processing methods at almost every step and can adapt to a wide range of protein targets, from bacterial to human. The construction, expansion and optimization of the JCSG gene-to-structure pipeline over the years have resulted in many technological and methodological advances. The vast number of targets and the enormous amounts of associated data processed through the multiple stages of the experimental pipeline required the development of a variety of valuable resources that, wherever feasible, have been converted to free-access web-based tools and applications.

  10. A Proposed Scalable Design and Simulation of Wireless Sensor Network-Based Long-Distance Water Pipeline Leakage Monitoring System

    PubMed Central

    Almazyad, Abdulaziz S.; Seddiq, Yasser M.; Alotaibi, Ahmed M.; Al-Nasheri, Ahmed Y.; BenSaleh, Mohammed S.; Obeid, Abdulfattah M.; Qasim, Syed Manzoor

    2014-01-01

    Anomalies such as leakage and bursts in water pipelines have severe consequences for the environment and the economy. To ensure the reliability of water pipelines, they must be monitored effectively. Wireless Sensor Networks (WSNs) have emerged as an effective technology for monitoring critical infrastructure such as water, oil and gas pipelines. In this paper, we present a scalable design and simulation of a water pipeline leakage monitoring system using Radio Frequency IDentification (RFID) and WSN technology. The proposed design targets long-distance aboveground water pipelines that have special considerations for maintenance, energy consumption and cost. The design is based on deploying a group of mobile wireless sensor nodes inside the pipeline and allowing them to work cooperatively according to a prescheduled order. Under this mechanism, only one node is active at a time, while the other nodes are sleeping. The node whose turn is next wakes up according to one of three wakeup techniques: location-based, time-based and interrupt-driven. In this paper, mathematical models are derived for each technique to estimate the corresponding energy consumption and memory size requirements. The proposed equations are analyzed and the results are validated using simulation. PMID:24561404

  11. A proposed scalable design and simulation of wireless sensor network-based long-distance water pipeline leakage monitoring system.

    PubMed

    Almazyad, Abdulaziz S; Seddiq, Yasser M; Alotaibi, Ahmed M; Al-Nasheri, Ahmed Y; BenSaleh, Mohammed S; Obeid, Abdulfattah M; Qasim, Syed Manzoor

    2014-02-20

    Anomalies such as leakage and bursts in water pipelines have severe consequences for the environment and the economy. To ensure the reliability of water pipelines, they must be monitored effectively. Wireless Sensor Networks (WSNs) have emerged as an effective technology for monitoring critical infrastructure such as water, oil and gas pipelines. In this paper, we present a scalable design and simulation of a water pipeline leakage monitoring system using Radio Frequency IDentification (RFID) and WSN technology. The proposed design targets long-distance aboveground water pipelines that have special considerations for maintenance, energy consumption and cost. The design is based on deploying a group of mobile wireless sensor nodes inside the pipeline and allowing them to work cooperatively according to a prescheduled order. Under this mechanism, only one node is active at a time, while the other nodes are sleeping. The node whose turn is next wakes up according to one of three wakeup techniques: location-based, time-based and interrupt-driven. In this paper, mathematical models are derived for each technique to estimate the corresponding energy consumption and memory size requirements. The proposed equations are analyzed and the results are validated using simulation.
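
    The scheduling mechanism described in both records above, one active node per turn while the rest sleep, lends itself to a simple per-node energy estimate. The current draws, supply voltage, and cycle length below are hypothetical, not the paper's derived models:

```python
# Hedged sketch of the duty-cycling idea: with only one node active at a
# time, a node's energy per cycle is its share of active time plus sleep
# current for the remaining turns. All electrical figures here are
# hypothetical placeholders, not the paper's mathematical models.

def node_energy_mj(cycle_s, n_nodes, i_active_ma, i_sleep_ma, v=3.0):
    """Energy in mJ one node spends per full cycle of n_nodes turns."""
    active_s = cycle_s / n_nodes      # this node's active turn
    sleep_s = cycle_s - active_s      # asleep during the other turns
    return v * (i_active_ma * active_s + i_sleep_ma * sleep_s)

# 10 nodes sharing a 100 s cycle, versus a single always-on node.
duty_cycled = node_energy_mj(100, 10, i_active_ma=20.0, i_sleep_ma=0.02)
always_on = node_energy_mj(100, 1, i_active_ma=20.0, i_sleep_ma=0.02)
print(round(duty_cycled), round(always_on))  # 605 6000
```

    Under these assumed figures, duty cycling cuts per-node energy by roughly an order of magnitude, which is the motivation for the wakeup techniques the paper compares.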

  12. Open Targets: a platform for therapeutic target identification and validation

    PubMed Central

    Koscielny, Gautier; An, Peter; Carvalho-Silva, Denise; Cham, Jennifer A.; Fumis, Luca; Gasparyan, Rippa; Hasan, Samiul; Karamanis, Nikiforos; Maguire, Michael; Papa, Eliseo; Pierleoni, Andrea; Pignatelli, Miguel; Platt, Theo; Rowland, Francis; Wankar, Priyanka; Bento, A. Patrícia; Burdett, Tony; Fabregat, Antonio; Forbes, Simon; Gaulton, Anna; Gonzalez, Cristina Yenyxe; Hermjakob, Henning; Hersey, Anne; Jupe, Steven; Kafkas, Şenay; Keays, Maria; Leroy, Catherine; Lopez, Francisco-Javier; Magarinos, Maria Paula; Malone, James; McEntyre, Johanna; Munoz-Pomer Fuentes, Alfonso; O'Donovan, Claire; Papatheodorou, Irene; Parkinson, Helen; Palka, Barbara; Paschall, Justin; Petryszak, Robert; Pratanwanich, Naruemon; Sarntivijal, Sirarat; Saunders, Gary; Sidiropoulos, Konstantinos; Smith, Thomas; Sondka, Zbyslaw; Stegle, Oliver; Tang, Y. Amy; Turner, Edward; Vaughan, Brendan; Vrousgou, Olga; Watkins, Xavier; Martin, Maria-Jesus; Sanseau, Philippe; Vamathevan, Jessica; Birney, Ewan; Barrett, Jeffrey; Dunham, Ian

    2017-01-01

    We have designed and developed a data integration and visualization platform that provides evidence about the association of known and potential drug targets with diseases. The platform is designed to support identification and prioritization of biological targets for follow-up. Each drug target is linked to a disease using integrated genome-wide data from a broad range of data sources. The platform provides either a target-centric workflow to identify diseases that may be associated with a specific target, or a disease-centric workflow to identify targets that may be associated with a specific disease. Users can easily transition between these target- and disease-centric workflows. The Open Targets Validation Platform is accessible at https://www.targetvalidation.org. PMID:27899665

  13. SIMPLEX: Cloud-Enabled Pipeline for the Comprehensive Analysis of Exome Sequencing Data

    PubMed Central

    Fischer, Maria; Snajder, Rene; Pabinger, Stephan; Dander, Andreas; Schossig, Anna; Zschocke, Johannes; Trajanoski, Zlatko; Stocker, Gernot

    2012-01-01

    In recent studies, exome sequencing has proven to be a successful screening tool for the identification of candidate genes causing rare genetic diseases. Although underlying targeted sequencing methods are well established, necessary data handling and focused, structured analysis still remain demanding tasks. Here, we present a cloud-enabled autonomous analysis pipeline, which comprises the complete exome analysis workflow. The pipeline combines several in-house developed and published applications to perform the following steps: (a) initial quality control, (b) intelligent data filtering and pre-processing, (c) sequence alignment to a reference genome, (d) SNP and DIP detection, (e) functional annotation of variants using different approaches, and (f) detailed report generation during various stages of the workflow. The pipeline connects the selected analysis steps, exposes all available parameters for customized usage, performs required data handling, and distributes computationally expensive tasks either on a dedicated high-performance computing infrastructure or on the Amazon cloud environment (EC2). The presented application has already been used in several research projects including studies to elucidate the role of rare genetic diseases. The pipeline is continuously tested and is publicly available under the GPL as a VirtualBox or Cloud image at http://simplex.i-med.ac.at; additional supplementary data is provided at http://www.icbi.at/exome. PMID:22870267

  14. The PREP pipeline: standardized preprocessing for large-scale EEG analysis.

    PubMed

    Bigdely-Shamlo, Nima; Mullen, Tim; Kothe, Christian; Su, Kyung-Min; Robbins, Kay A

    2015-01-01

    The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode.

  15. The PREP pipeline: standardized preprocessing for large-scale EEG analysis

    PubMed Central

    Bigdely-Shamlo, Nima; Mullen, Tim; Kothe, Christian; Su, Kyung-Min; Robbins, Kay A.

    2015-01-01

    The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode. PMID:26150785
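
    The noisy channel-reference interaction described in both PREP records above can be illustrated with a toy version of robust referencing: build the average reference only from channels that pass a noise criterion, so one bad channel cannot contaminate every re-referenced channel. The deviation-based rule and threshold here are an illustrative simplification of PREP's multi-stage scheme:

```python
# Hedged sketch of robust referencing: an ordinary average reference
# lets a noisy channel contaminate all channels, so the reference is
# built only from channels whose amplitude spread is not an outlier.
# The z-score rule and toy signals are illustrative, not PREP's method.
from statistics import mean, pstdev

def robust_reference(channels, z_thresh=3.0):
    """Average reference over non-outlier channels.

    channels: dict name -> list of samples. Returns (reference, noisy),
    where noisy is the set of channel names excluded from the reference.
    """
    spreads = {name: pstdev(sig) for name, sig in channels.items()}
    mu = mean(spreads.values())
    sigma = pstdev(list(spreads.values()))
    noisy = {n for n, s in spreads.items()
             if sigma > 0 and (s - mu) / sigma > z_thresh}
    good = [n for n in channels if n not in noisy]
    n_samples = len(next(iter(channels.values())))
    reference = [mean(channels[n][t] for n in good) for t in range(n_samples)]
    return reference, noisy

channels = {
    "C3":  [1.0, -1.0, 1.0, -1.0],
    "C4":  [1.2, -0.8, 0.9, -1.1],
    "Fp1": [80.0, -75.0, 90.0, -85.0],   # grossly noisy channel
}
reference, noisy = robust_reference(channels, z_thresh=1.0)
print(sorted(noisy))  # ['Fp1'] excluded; reference built from C3, C4
```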

  16. Identification of pathogen genomic variants through an integrated pipeline

    PubMed Central

    2014-01-01

    Background Whole-genome sequencing represents a powerful experimental tool for pathogen research. We present methods for the analysis of small eukaryotic genomes, including a streamlined system (called Platypus) for finding single nucleotide and copy number variants as well as recombination events. Results We validated our pipeline using four sets of Plasmodium falciparum drug-resistance data containing 26 clones from 3D7 and Dd2 background strains, identifying an average of 11 single nucleotide variants per clone. We also identified 8 copy number variants that contribute to resistance, and report for the first time that all analyzed amplification events are in tandem. Conclusions The Platypus pipeline provides malaria researchers with a powerful tool to analyze short read sequencing data. It provides an accurate way to detect SNVs using known software packages, and a novel methodology for detection of CNVs, though it does not currently support detection of small indels. We have validated that the pipeline detects known SNVs in a variety of samples while filtering out spurious data. We bundle the methods into a freely available package. PMID:24589256

  17. The ALMA Science Pipeline: Current Status

    NASA Astrophysics Data System (ADS)

    Humphreys, Elizabeth; Miura, Rie; Brogan, Crystal L.; Hibbard, John; Hunter, Todd R.; Indebetouw, Remy

    2016-09-01

    The ALMA Science Pipeline is being developed for the automated calibration and imaging of ALMA interferometric and single-dish data. The calibration pipeline for interferometric data was accepted for use by ALMA Science Operations in 2014, and end-to-end processing for single-dish data was accepted in 2015. However, work is ongoing to expand the use cases for which the Pipeline can be used, e.g., higher-frequency and lower signal-to-noise datasets and new observing modes. A current focus is the commissioning of science target imaging for interferometric data. For the Single Dish Pipeline, the line-finding algorithm used in baseline subtraction and the baseline-flagging heuristics have been greatly improved since the prototype used for data from the previous cycle. These algorithms, unique to the Pipeline, produce better results than standard manual processing in many cases. In this poster, we report on the current status of the Pipeline capabilities, present initial results from the Imaging Pipeline, and describe the smart line-finding and flagging algorithm used in the Single Dish Pipeline. The Pipeline is released as part of CASA (the Common Astronomy Software Applications package).

  18. Camouflage, detection and identification of moving targets

    PubMed Central

    Hall, Joanna R.; Cuthill, Innes C.; Baddeley, Roland; Shohet, Adam J.; Scott-Samuel, Nicholas E.

    2013-01-01

    Nearly all research on camouflage has investigated its effectiveness for concealing stationary objects. However, animals have to move, and patterns that only work when the subject is static will heavily constrain behaviour. We investigated the effects of different camouflages on the three stages of predation—detection, identification and capture—in a computer-based task with humans. An initial experiment tested seven camouflage strategies on static stimuli. In line with previous literature, background-matching and disruptive patterns were found to be most successful. Experiment 2 showed that if stimuli move, an isolated moving object on a stationary background cannot avoid detection or capture regardless of the type of camouflage. Experiment 3 used an identification task and showed that while camouflage is unable to slow detection or capture, camouflaged targets are harder to identify than uncamouflaged targets when similar background objects are present. The specific details of the camouflage patterns have little impact on this effect. If one has to move, camouflage cannot impede detection; but if one is surrounded by similar targets (e.g. other animals in a herd, or moving background distractors), then camouflage can slow identification. Despite previous assumptions, motion does not entirely ‘break’ camouflage. PMID:23486439

  19. Camouflage, detection and identification of moving targets.

    PubMed

    Hall, Joanna R; Cuthill, Innes C; Baddeley, Roland; Shohet, Adam J; Scott-Samuel, Nicholas E

    2013-05-07

    Nearly all research on camouflage has investigated its effectiveness for concealing stationary objects. However, animals have to move, and patterns that only work when the subject is static will heavily constrain behaviour. We investigated the effects of different camouflages on the three stages of predation-detection, identification and capture-in a computer-based task with humans. An initial experiment tested seven camouflage strategies on static stimuli. In line with previous literature, background-matching and disruptive patterns were found to be most successful. Experiment 2 showed that if stimuli move, an isolated moving object on a stationary background cannot avoid detection or capture regardless of the type of camouflage. Experiment 3 used an identification task and showed that while camouflage is unable to slow detection or capture, camouflaged targets are harder to identify than uncamouflaged targets when similar background objects are present. The specific details of the camouflage patterns have little impact on this effect. If one has to move, camouflage cannot impede detection; but if one is surrounded by similar targets (e.g. other animals in a herd, or moving background distractors), then camouflage can slow identification. Despite previous assumptions, motion does not entirely 'break' camouflage.

  20. Categorization and identification of simultaneous targets.

    PubMed

    Theeuwes, J

    1991-02-01

    Early and late selection theories of visual attention disagree about whether identification occurs before or after selection. Studies showing the category effect, i.e., the time to detect a letter is hardly affected by the number of digits present in the display, are taken as evidence for late selection theories since these studies suggest parallel identification of all items in the display. As an extension of previous studies, in the present study two categorically different targets were presented simultaneously among a variable number of nontargets. Subjects were shown brief displays of two target letters among either 2, 4 or 6 nontarget digits. Subjects responded 'same' when the two letters were identical and 'different' otherwise. Since the 'same-different' response reflects the combined outcome of the simultaneous targets, late-selection theory predicts that the time to match the target letters is independent of the number of nontarget digits. Alternatively, early-selection theory predicts a linear increase of reaction time with display size since the presence of more than one target disrupts parallel preattentive processing, leading to a serial search through all items in the display. The results provide evidence for the early-selection view since reaction time increased linearly with the number of categorically different nontargets. A control experiment revealed that none of the alternative explanations could account for the display size effect.

  1. Thermographic identification of wetted insulation on pipelines in the arctic oilfields

    NASA Astrophysics Data System (ADS)

    Miles, Jonathan J.; Dahlquist, A. L.; Dash, L. C.

    2006-04-01

    Steel pipes used at Alaskan oil-producing facilities to transport production crude, gas, and injection water between well house and drill site manifold building, and along cross-country lines to and from central processing facilities, must be insulated in order to protect against the severely cold temperatures that are common during the arctic winter. A problem inherent in this system is that the sealed joints between adjacent layers of the outer wrap will over time degrade and can allow water to breach the system and migrate into and through the insulation. The moisture can ultimately interact with the steel pipe and trigger external corrosion which, if left unchecked, can lead to pipe failure and spillage. A New Technology Evaluation Guideline prepared for ConocoPhillips Alaska, Inc. in 2001 is intended to guide the consideration of new technologies for pipeline inspection in a manner that is safer, faster, and more cost-effective than existing techniques. Infrared thermography (IRT) was identified as promising for identification of wetted insulation regions given that it offers the means to scan a large area quickly from a safe distance, and measure the temperature field associated with that area. However, it was also recognized that there are limiting factors associated with an IRT-based approach including instrument sensitivity, cost, portability, functionality in hostile (arctic) environments, and training required for proper interpretation of data. A methodology was developed and tested in the field that provides a technique to conduct large-scale screening for wetted regions along insulated pipelines. The results of predictive modeling analysis and testing demonstrate the feasibility, under certain conditions, of identifying wetted insulation areas. The results of the study and recommendations for implementation are described.

  2. Bioinformatics Pipelines for Targeted Resequencing and Whole-Exome Sequencing of Human and Mouse Genomes: A Virtual Appliance Approach for Instant Deployment

    PubMed Central

    Saeed, Isaam; Wong, Stephen Q.; Mar, Victoria; Goode, David L.; Caramia, Franco; Doig, Ken; Ryland, Georgina L.; Thompson, Ella R.; Hunter, Sally M.; Halgamuge, Saman K.; Ellul, Jason; Dobrovic, Alexander; Campbell, Ian G.; Papenfuss, Anthony T.; McArthur, Grant A.; Tothill, Richard W.

    2014-01-01

    Targeted resequencing by massively parallel sequencing has become an effective and affordable way to survey small to large portions of the genome for genetic variation. Despite the rapid development in open source software for analysis of such data, the practical implementation of these tools through construction of sequencing analysis pipelines still remains a challenging and laborious activity, and a major hurdle for many small research and clinical laboratories. We developed TREVA (Targeted REsequencing Virtual Appliance), making pre-built pipelines immediately available as a virtual appliance. Based on virtual machine technologies, TREVA is a solution for rapid and efficient deployment of complex bioinformatics pipelines to laboratories of all sizes, enabling reproducible results. The analyses that are supported in TREVA include: somatic and germline single-nucleotide and insertion/deletion variant calling, copy number analysis, and cohort-based analyses such as pathway and significantly mutated genes analyses. TREVA is flexible and easy to use, and can be customised by Linux-based extensions if required. TREVA can also be deployed on the cloud (cloud computing), enabling instant access without investment overheads for additional hardware. TREVA is available at http://bioinformatics.petermac.org/treva/. PMID:24752294

  3. Common Data Analysis Pipeline | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    CPTAC supports analyses of the mass spectrometry raw data (mapping of spectra to peptide sequences and protein identification) for the public using a Common Data Analysis Pipeline (CDAP). The data types available on the public portal are described below. A general overview of this pipeline can be downloaded here. Mass Spectrometry Data Formats RAW (Vendor) Format

  4. An automatic and efficient pipeline for disease gene identification through utilizing family-based sequencing data.

    PubMed

    Song, Dandan; Li, Ning; Liao, Lejian

    2015-01-01

    By generating enormous amounts of data at lower cost and in shorter time, whole-exome sequencing technologies provide dramatic opportunities for identifying disease genes implicated in Mendelian disorders. Since upwards of thousands of genomic variants can be sequenced in each exome, it is challenging to filter for pathogenic variants in protein-coding regions while keeping the number of missed true variants low. Therefore, an automatic and efficient pipeline for finding disease variants in Mendelian disorders was designed, exploiting a combination of variant-filtering steps to analyze family-based exome sequencing data. Recent studies on Freeman-Sheldon disease are revisited, showing that the proposed method outperforms other existing candidate-gene identification methods.
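
The kind of filtering cascade such pipelines apply can be illustrated with a toy recessive-model filter. The gene names, genotype encoding and frequency threshold below are purely illustrative, not the authors' actual criteria:

```python
# Each hypothetical variant record carries genotypes for a trio (child affected,
# parents unaffected) plus a population allele frequency. Under a recessive
# Mendelian model, a candidate variant is homozygous in the affected child,
# heterozygous in both carrier parents, and rare in the population.

def candidate_variants(variants, max_af=0.01):
    hits = []
    for v in variants:
        if (v["child"] == "1/1" and
                v["father"] == "0/1" and v["mother"] == "0/1" and
                v["pop_af"] < max_af):
            hits.append(v["gene"])
    return hits

variants = [
    {"gene": "GENE_A", "child": "1/1", "father": "0/1", "mother": "0/1", "pop_af": 0.0002},
    {"gene": "GENE_B", "child": "0/1", "father": "0/1", "mother": "0/0", "pop_af": 0.0100},
    {"gene": "GENE_C", "child": "1/1", "father": "0/1", "mother": "0/1", "pop_af": 0.2500},
]
print(candidate_variants(variants))
```

Only GENE_A survives all three filters; GENE_B fails the inheritance pattern and GENE_C is too common.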

  5. Detection and identification of human targets in radar data

    NASA Astrophysics Data System (ADS)

    Gürbüz, Sevgi Z.; Melvin, William L.; Williams, Douglas B.

    2007-04-01

    Radar offers unique advantages over other sensors, such as visual or seismic sensors, for human target detection. Many situations, especially military applications, prevent the placement of video cameras or the implantation of seismic sensors in the area being observed because of security or other threats. However, radar can operate far away from potential targets, and functions during daytime as well as nighttime, in virtually all weather conditions. In this paper, we examine the problem of human target detection and identification using single-channel, airborne, synthetic aperture radar (SAR). Human targets are differentiated from other detected slow-moving targets by analyzing the spectrogram of each potential target. Human spectrograms are unique, and can be used not just to identify targets as human, but also to determine features about the human target being observed, such as size, gender, action, and speed. A 12-point human model, together with kinematic equations of motion for each body part, is used to calculate the expected target return and spectrogram. A MATLAB simulation environment is developed including ground clutter, human and non-human targets for the testing of spectrogram-based detection and identification algorithms. Simulations show that spectrograms have some ability to detect and identify human targets in low noise. An example gender discrimination system correctly detected 83.97% of males and 91.11% of females. The problems and limitations of spectrogram-based methods in high clutter environments are discussed. The SNR loss inherent to spectrogram-based methods is quantified. An alternate detection and identification method that will be used as a basis for future work is proposed.
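
A spectrogram of the kind used to classify slow-moving targets can be sketched with a short-time Fourier transform. The windowing parameters and the toy micro-Doppler-like signal below are assumptions for illustration, not the authors' MATLAB simulation:

```python
import numpy as np

def spectrogram(signal, win=128, hop=32):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T  # shape: (freq_bins, time_frames)

# Toy signal: a carrier whose instantaneous frequency oscillates at 2 Hz,
# loosely mimicking the periodic limb motion a walking human imposes on returns.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
sig = np.sin(2 * np.pi * (100 * t + 20 * np.sin(2 * np.pi * 2 * t)))
S = spectrogram(sig)
print(S.shape)
```

The oscillating ridge in `S` is the micro-Doppler signature that distinguishes an articulated human from a rigid mover.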

  6. Presearch data conditioning in the Kepler Science Operations Center pipeline

    NASA Astrophysics Data System (ADS)

    Twicken, Joseph D.; Chandrasekaran, Hema; Jenkins, Jon M.; Gunter, Jay P.; Girouard, Forrest; Klaus, Todd C.

    2010-07-01

    We describe the Presearch Data Conditioning (PDC) software component and its context in the Kepler Science Operations Center (SOC) Science Processing Pipeline. The primary tasks of this component are to correct systematic and other errors, remove excess flux due to aperture crowding, and condition the raw flux light curves for over 160,000 long cadence (~thirty minute) and 512 short cadence (~one minute) stellar targets. Long cadence corrected flux light curves are subjected to a transiting planet search in a subsequent pipeline module. We discuss science algorithms for long and short cadence PDC: identification and correction of unexplained (i.e., unrelated to known anomalies) discontinuities; systematic error correction; and removal of excess flux due to aperture crowding. We discuss the propagation of uncertainties from raw to corrected flux. Finally, we present examples from Kepler flight data to illustrate PDC performance. Corrected flux light curves produced by PDC are exported to the Multi-mission Archive at Space Telescope [Science Institute] (MAST) and are made available to the general public in accordance with the NASA/Kepler data release policy.
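
The discontinuity-identification step can be illustrated with a robust threshold on first differences of the flux time series. This is a simplified stand-in for the idea, not the actual PDC algorithm; the light curve and injected step are simulated:

```python
import numpy as np

def find_discontinuities(flux, threshold=5.0):
    """Flag cadences where the point-to-point flux jump exceeds `threshold`
    robust (MAD-based) standard deviations of all first differences."""
    diffs = np.diff(flux)
    mad = np.median(np.abs(diffs - np.median(diffs)))
    robust_sigma = 1.4826 * mad
    return np.where(np.abs(diffs) > threshold * robust_sigma)[0] + 1

rng = np.random.default_rng(0)
flux = rng.normal(1000.0, 1.0, 200)   # simulated raw flux light curve
flux[120:] -= 50.0                    # injected step discontinuity at cadence 120
print(find_discontinuities(flux))
```

Using the median absolute deviation rather than the sample standard deviation keeps the threshold itself insensitive to the very discontinuity being detected.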

  7. Presearch Data Conditioning in the Kepler Science Operations Center Pipeline

    NASA Technical Reports Server (NTRS)

    Twicken, Joseph D.; Chandrasekaran, Hema; Jenkins, Jon M.; Gunter, Jay P.; Girouard, Forrest; Klaus, Todd C.

    2010-01-01

    We describe the Presearch Data Conditioning (PDC) software component and its context in the Kepler Science Operations Center (SOC) pipeline. The primary tasks of this component are to correct systematic and other errors, remove excess flux due to aperture crowding, and condition the raw flux light curves for over 160,000 long cadence (thirty minute) and 512 short cadence (one minute) targets across the focal plane array. Long cadence corrected flux light curves are subjected to a transiting planet search in a subsequent pipeline module. We discuss the science algorithms for long and short cadence PDC: identification and correction of unexplained (i.e., unrelated to known anomalies) discontinuities; systematic error correction; and excess flux removal. We discuss the propagation of uncertainties from raw to corrected flux. Finally, we present examples of raw and corrected flux time series for flight data to illustrate PDC performance. Corrected flux light curves produced by PDC are exported to the Multi-mission Archive at Space Telescope [Science Institute] (MAST) and will be made available to the general public in accordance with the NASA/Kepler data release policy.

  8. IFPTarget: A Customized Virtual Target Identification Method Based on Protein-Ligand Interaction Fingerprinting Analyses.

    PubMed

    Li, Guo-Bo; Yu, Zhu-Jun; Liu, Sha; Huang, Lu-Yi; Yang, Ling-Ling; Lohans, Christopher T; Yang, Sheng-Yong

    2017-07-24

    Small-molecule target identification is an important and challenging task for chemical biology and drug discovery. Structure-based virtual target identification has been widely used, which infers and prioritizes potential protein targets for the molecule of interest (MOI) principally via a scoring function. However, current "universal" scoring functions may not always accurately identify targets to which the MOI binds from the retrieved target database, in part due to a lack of consideration of the important binding features for an individual target. Here, we present IFPTarget, a customized virtual target identification method, which uses an interaction fingerprinting (IFP) method for target-specific interaction analyses and a comprehensive index (Cvalue) for target ranking. Evaluation results indicate that the IFP method enables substantially improved binding pose prediction, and Cvalue has an excellent performance in target ranking for the test set. When applied to screen against our established target library that contains 11,863 protein structures covering 2842 unique targets, IFPTarget could retrieve known targets within the top-ranked list and identified new potential targets for chemically diverse drugs. IFPTarget prediction led to the identification of the metallo-β-lactamase VIM-2 as a target for quercetin as validated by enzymatic inhibition assays. This study provides a new in silico target identification tool and will aid future efforts to develop new target-customized methods for target identification.
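
The target-ranking idea, comparing a docked pose's interaction fingerprint against target-specific reference fingerprints, can be sketched with set-based Tanimoto similarity. The interaction bit names and the second target below are invented for illustration and are not IFPTarget's actual fingerprint definition or Cvalue index:

```python
# Fingerprints are represented as plain Python sets of interaction "bits".

def tanimoto(fp_a, fp_b):
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

reference_ifps = {
    "VIM-2":   {"metal-coord", "hbond:resA", "hydrophobic:resB"},  # bits invented
    "TargetB": {"hbond:resC", "pi-stack:resD"},
}
pose_ifp = {"metal-coord", "hbond:resA", "pi-stack:resD"}

ranked = sorted(reference_ifps,
                key=lambda t: tanimoto(pose_ifp, reference_ifps[t]),
                reverse=True)
print(ranked)
```

Because each reference fingerprint encodes the binding features important for that individual target, the ranking is target-customized rather than relying on one universal scoring function.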

  9. VISPA2: a scalable pipeline for high-throughput identification and annotation of vector integration sites.

    PubMed

    Spinozzi, Giulio; Calabria, Andrea; Brasca, Stefano; Beretta, Stefano; Merelli, Ivan; Milanesi, Luciano; Montini, Eugenio

    2017-11-25

    Bioinformatics tools designed to identify lentiviral or retroviral vector insertion sites in the genome of host cells are used to address the safety and long-term efficacy of hematopoietic stem cell gene therapy applications and to study the clonal dynamics of hematopoietic reconstitution. The increasing number of gene therapy clinical trials combined with the increasing amount of Next Generation Sequencing data, aimed at identifying integration sites, require both highly accurate and efficient computational software able to correctly process "big data" in a reasonable computational time. Here we present VISPA2 (Vector Integration Site Parallel Analysis, version 2), the latest optimized computational pipeline for integration site identification and analysis with the following features: (1) the sequence analysis for the integration site processing is fully compliant with paired-end reads and includes a sequence quality filter before and after the alignment on the target genome; (2) a heuristic algorithm that reduces false-positive integration sites at the nucleotide level, limiting the impact of Polymerase Chain Reaction or trimming/alignment artifacts; (3) a classification and annotation module for integration sites; (4) a user-friendly web interface as researcher front-end to perform integration site analyses without computational skills; (5) the time speedup of all steps through parallelization (Hadoop free). We tested VISPA2 performance using simulated and real datasets of lentiviral vector integration sites, previously obtained from patients enrolled in a hematopoietic stem cell gene therapy clinical trial, and compared the results with other preexisting tools for integration site analysis. On the computational side, VISPA2 showed a > 6-fold speedup and improved precision and recall metrics (1 and 0.97 respectively) compared to previously developed computational pipelines. These performances indicate that VISPA2 is a fast, reliable and user-friendly tool for integration site analysis.
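
The nucleotide-level false-positive correction (feature 2) amounts to collapsing integration sites mapped within a few base pairs of each other on the same chromosome and strand. A minimal sketch with invented coordinates and a greedy merge, not necessarily VISPA2's exact heuristic:

```python
def merge_sites(sites, window=3):
    """sites: list of (chrom, strand, position, read_count). Sites within
    `window` bp of the previous kept site are merged into the position with
    more read support, pooling their read counts."""
    merged = []
    for chrom, strand, pos, count in sorted(sites):
        last = merged[-1] if merged else None
        if last and last[0] == chrom and last[1] == strand and pos - last[2] <= window:
            best = max([last, (chrom, strand, pos, count)], key=lambda s: s[3])
            merged[-1] = (best[0], best[1], best[2], last[3] + count)
        else:
            merged.append((chrom, strand, pos, count))
    return merged

sites = [("chr1", "+", 1000, 40), ("chr1", "+", 1002, 3), ("chr1", "+", 5000, 7)]
print(merge_sites(sites))
```

The weakly supported site at 1002 is treated as a PCR/trimming artifact of the site at 1000 and absorbed into it; the distant site at 5000 survives as independent.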

  10. A distributed pipeline for DIDSON data processing

    USGS Publications Warehouse

    Li, Liling; Danner, Tyler; Eickholt, Jesse; McCann, Erin L.; Pangle, Kevin; Johnson, Nicholas

    2018-01-01

    Technological advances in the field of ecology allow data on ecological systems to be collected at high resolution, both temporally and spatially. Devices such as Dual-frequency Identification Sonar (DIDSON) can be deployed in aquatic environments for extended periods and easily generate several terabytes of underwater surveillance data which may need to be processed multiple times. Due to the large amount of data generated and need for flexibility in processing, a distributed pipeline was constructed for DIDSON data making use of the Hadoop ecosystem. The pipeline is capable of ingesting raw DIDSON data, transforming the acoustic data to images, filtering the images, detecting and extracting motion, and generating feature data for machine learning and classification. All of the tasks in the pipeline can be run in parallel and the framework allows for custom processing. Applications of the pipeline include monitoring migration times, determining the presence of a particular species, estimating population size and other fishery management tasks.
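
The motion-extraction stage can be illustrated in its simplest form, absolute frame differencing between consecutive sonar images. The threshold and toy frames below are assumptions for illustration, not the pipeline's actual processing:

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=25):
    """Binary motion mask via absolute frame differencing: pixels whose
    intensity changed by more than `threshold` between frames are flagged."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Two toy 8-bit "frames": a bright 3x3 blob moves two columns to the right.
f0 = np.zeros((8, 8), dtype=np.uint8)
f1 = np.zeros((8, 8), dtype=np.uint8)
f0[2:5, 1:4] = 200
f1[2:5, 3:6] = 200
mask = motion_mask(f0, f1)
print(int(mask.sum()))
```

In a distributed setting each pair of consecutive frames is an independent unit of work, which is what makes this step embarrassingly parallel across the cluster.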

  11. Target identification for small bioactive molecules: finding the needle in the haystack.

    PubMed

    Ziegler, Slava; Pries, Verena; Hedberg, Christian; Waldmann, Herbert

    2013-03-04

    Identification and confirmation of bioactive small-molecule targets is a crucial, often decisive step both in academic and pharmaceutical research. Through the development and availability of several new experimental techniques, target identification is, in principle, feasible, and the number of successful examples steadily grows. However, a generic methodology that can successfully be applied in the majority of the cases has not yet been established. Herein we summarize current methods for target identification of small molecules, primarily for a chemistry audience but also the biological community, for example, the chemist or biologist attempting to identify the target of a given bioactive compound. We describe the most frequently employed experimental approaches for target identification and provide several representative examples illustrating the state-of-the-art. Among the techniques currently available, protein affinity isolation using suitable small-molecule probes (pulldown) and subsequent mass spectrometric analysis of the isolated proteins appears to be most powerful and most frequently applied. To provide guidance for rapid entry into the field and based on our own experience we propose a typical workflow for target identification, which centers on the application of chemical proteomics as the key step to generate hypotheses for potential target proteins. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Forecasting and Evaluation of Gas Pipelines Geometric Forms Breach Hazard

    NASA Astrophysics Data System (ADS)

    Voronin, K. S.

    2016-10-01

    Main gas pipelines in operation are under the influence of permanent pressure drops, which lead to their lengthening and, as a result, to instability of their position in space. In dynamic systems with feedback, phenomena preceding emergencies should be observable. The article discusses the forced vibrations of the gas pipeline's cylindrical surface under dynamic loads caused by pressure surges, and the process of deformation of its geometric shape. The frequency of vibrations arising in the pipeline at the stage preceding its bending is determined. Identification of this frequency can be the basis for a method of monitoring the technical condition of the gas pipeline, and forecasting possible emergency situations allows planning and carrying out timely reconstruction works on sections of the gas pipeline that may deviate from the design position.
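
Identifying the vibration frequency that precedes bending amounts to locating the dominant spectral peak in a measured response. A minimal sketch on a simulated forced vibration; the 12 Hz frequency, sampling rate and noise level are invented, not values from the article:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Frequency (Hz) of the largest spectral peak, ignoring the DC bin."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[1:][np.argmax(spectrum[1:])]

# Simulated pressure-surge response: a 12 Hz forced vibration plus sensor noise.
fs = 500.0
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 12.0 * t) + 0.3 * rng.normal(size=t.size)
print(float(dominant_frequency(x, fs)))
```

Tracking this peak over time would be the basis of the monitoring scheme: a drift in the dominant frequency signals a change in the pipeline's mechanical state.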

  13. Employing machine learning for reliable miRNA target identification in plants.

    PubMed

    Jha, Ashwani; Shankar, Ravi

    2011-12-29

    miRNAs are ~21-nucleotide-long small noncoding RNA molecules, formed endogenously in most eukaryotes, which mainly control their target genes post-transcriptionally by binding to and silencing them. While many tools have been developed for animal miRNA target systems, plant miRNA target identification has seen limited development; most existing tools are centered on exact complementarity matching, and very few consider other factors such as multiple target sites and the role of flanking regions. In the present work, a Support Vector Regression (SVR) approach has been implemented for plant miRNA target identification, utilizing position-specific dinucleotide density variation information around the target sites to yield highly reliable results. It has been named p-TAREF (plant-Target Refiner). Performance comparison for p-TAREF was done with other prediction tools for plants with utmost rigor, and p-TAREF was found to perform better in several aspects. Further, p-TAREF was run over the experimentally validated miRNA targets from species like Arabidopsis, Medicago, rice and tomato, and detected them accurately, suggesting the broad usability of p-TAREF across plant species. Using p-TAREF, target identification was done for the complete rice transcriptome, supported by expression and degradome-based data. miR156 was found to be an important component of the rice regulatory system, where control of genes associated with growth and transcription looked predominant. The entire methodology has been implemented in a multi-threaded parallel architecture in Java, to enable fast processing for the web-server version as well as the standalone version; this also allows it to run even on a simple desktop computer in concurrent mode. The web-server version also provides a facility to gather experimental support for predictions made, through on-the-spot expression data analysis. A machine learning multivariate feature tool has been implemented in parallel and
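
The dinucleotide density features that p-TAREF-style approaches feed into a regression model can be sketched as a 16-dimensional frequency vector computed over a window of sequence flanking a putative target site. The window content below is arbitrary, and a real implementation would compute these per position rather than pooled:

```python
from itertools import product

DINUCLEOTIDES = ["".join(p) for p in product("ACGU", repeat=2)]  # 16 features

def dinucleotide_density(seq):
    """Normalized dinucleotide frequencies over an RNA window (fixed order)."""
    total = max(len(seq) - 1, 1)
    counts = {d: 0 for d in DINUCLEOTIDES}
    for i in range(len(seq) - 1):
        pair = seq[i:i + 2]
        if pair in counts:
            counts[pair] += 1
    return [counts[d] / total for d in DINUCLEOTIDES]

features = dinucleotide_density("AUGCUUCCGA")
print(len(features), round(sum(features), 6))
```

Vectors of this form, one per flanking window, are what an SVR would be trained on to score candidate target sites.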

  14. Identification of failure type in corroded pipelines: a Bayesian probabilistic approach.

    PubMed

    Breton, T; Sanchez-Gheno, J C; Alamilla, J L; Alvarez-Ramirez, J

    2010-07-15

    Spillover of hazardous materials from transport pipelines can lead to catastrophic events with serious and dangerous environmental impact, potential fire events and human fatalities. The problem is more serious for large pipelines when the construction material is under environmental corrosion conditions, as in the petroleum and gas industries. In this way, predictive models can provide a suitable framework for risk evaluation, maintenance policies and substitution procedure design that should be oriented to reduce increased hazards. This work proposes a Bayesian probabilistic approach to identify and predict the type of failure (leakage or rupture) for steel pipelines under realistic corroding conditions. In the first step of the modeling process, the mechanical performance of the pipe is considered for establishing conditions under which either leakage or rupture failure can occur. In the second step, experimental burst tests are used to introduce a mean probabilistic boundary defining a region where the type of failure is uncertain. In the boundary vicinity, the failure discrimination is carried out with a probabilistic model where the events are considered as random variables. In turn, the model parameters are estimated with available experimental data and contrasted with a real catastrophic event, showing good discrimination capacity. The results are discussed in terms of policies oriented to inspection and maintenance of large-size pipelines in the oil and gas industry. 2010 Elsevier B.V. All rights reserved.
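
The probabilistic discrimination step can be illustrated with a two-class Bayes rule on a scalar failure indicator, modelling the indicator under each failure mode as Gaussian. All means, spreads and priors below are invented, not the paper's fitted values:

```python
import math

def posterior_rupture(x, prior_rupture=0.5,
                      mu_leak=1.3, sd_leak=0.2, mu_rupture=0.8, sd_rupture=0.2):
    """P(rupture | x) from Gaussian class likelihoods and a prior (toy model)."""
    def gauss(v, mu, sd):
        return math.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    num = prior_rupture * gauss(x, mu_rupture, sd_rupture)
    den = num + (1 - prior_rupture) * gauss(x, mu_leak, sd_leak)
    return num / den

print(round(posterior_rupture(0.80), 3))  # well inside the rupture regime
print(round(posterior_rupture(1.05), 3))  # on the boundary between the modes
```

Near the probabilistic boundary the posterior hovers around 0.5, reproducing the "uncertain region" the model is designed to characterize; far from it, one failure type clearly dominates.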

  15. C-mii: a tool for plant miRNA and target identification.

    PubMed

    Numnark, Somrak; Mhuantong, Wuttichai; Ingsriswang, Supawadee; Wichadakul, Duangdao

    2012-01-01

    MicroRNAs (miRNAs) have been known to play an important role in several biological processes in both animals and plants. Although several tools for miRNA and target identification are available, the number of tools tailored towards plants is limited, and those that are available have specific functionality, lack graphical user interfaces, and restrict the number of input sequences. Large-scale computational identifications of miRNAs and/or targets of several plants have also been reported. Their methods, however, are only described as flow diagrams, which require programming skills and the understanding of input and output of the connected programs to reproduce. To overcome these limitations and programming complexities, we proposed C-mii as a ready-made software package for both plant miRNA and target identification. C-mii was designed and implemented based on established computational steps and criteria derived from previous literature, with the following distinguishing features. First, the software is easy to install with all-in-one programs and packaged databases. Second, it comes with graphical user interfaces (GUIs) for ease of use. Users can identify plant miRNAs and targets via step-by-step execution, explore the detailed results from each step, filter the results according to proposed constraints in plant miRNA and target biogenesis, and export sequences and structures of interest. Third, it supplies bird's eye views of the identification results with infographics and grouping information. Fourth, in terms of functionality, it extends the standard computational steps of miRNA target identification with miRNA-target folding and GO annotation. Fifth, it provides helper functions for the update of pre-installed databases and automatic recovery. Finally, it supports multi-project and multi-thread management. C-mii constitutes the first complete software package with graphical user interfaces enabling computational identification of both plant miRNA genes and miRNA targets.

  16. C-mii: a tool for plant miRNA and target identification

    PubMed Central

    2012-01-01

    Background MicroRNAs (miRNAs) have been known to play an important role in several biological processes in both animals and plants. Although several tools for miRNA and target identification are available, the number of tools tailored towards plants is limited, and those that are available have specific functionality, lack graphical user interfaces, and restrict the number of input sequences. Large-scale computational identifications of miRNAs and/or targets of several plants have also been reported. Their methods, however, are only described as flow diagrams, which require programming skills and the understanding of input and output of the connected programs to reproduce. Results To overcome these limitations and programming complexities, we proposed C-mii as a ready-made software package for both plant miRNA and target identification. C-mii was designed and implemented based on established computational steps and criteria derived from previous literature, with the following distinguishing features. First, the software is easy to install with all-in-one programs and packaged databases. Second, it comes with graphical user interfaces (GUIs) for ease of use. Users can identify plant miRNAs and targets via step-by-step execution, explore the detailed results from each step, filter the results according to proposed constraints in plant miRNA and target biogenesis, and export sequences and structures of interest. Third, it supplies bird's eye views of the identification results with infographics and grouping information. Fourth, in terms of functionality, it extends the standard computational steps of miRNA target identification with miRNA-target folding and GO annotation. Fifth, it provides helper functions for the update of pre-installed databases and automatic recovery. Finally, it supports multi-project and multi-thread management. Conclusions C-mii constitutes the first complete software package with graphical user interfaces enabling computational identification of both plant miRNA genes and miRNA targets.

  17. Limitations of contrast enhancement for infrared target identification

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd W.; Fanning, Jonathan D.

    2009-05-01

    Contrast enhancement and dynamic range compression are currently being used to improve the performance of infrared imagers by increasing the contrast between the target and the scene content. Automatic contrast enhancement techniques do not always achieve this improvement. In some cases, the contrast can increase to a level of target saturation. This paper assesses the range-performance effects of contrast enhancement for target identification as a function of image saturation. Human perception experiments were performed to determine field performance using contrast enhancement on the U.S. Army RDECOM CERDEC NVESD standard military eight target set using an un-cooled LWIR camera. The experiments compare the identification performance of observers viewing contrast enhancement processed images at various levels of saturation. Contrast enhancement is modeled in the U.S. Army thermal target acquisition model (NVThermIP) by changing the scene contrast temperature. The model predicts improved performance based on any improved target contrast, regardless of specific feature saturation or enhancement. The measured results follow the predicted performance based on the target task difficulty metric used in NVThermIP for the non-saturated cases. The saturated images reduce the information contained in the target and performance suffers. The model treats the contrast of the target as uniform over spatial frequency. As the contrast is enhanced, the model assumes that the contrast is enhanced uniformly over the spatial frequencies. After saturation, the spatial cues that differentiate one tank from another are located in a limited band of spatial frequencies. A frequency dependent treatment of target contrast is needed to predict performance of over-processed images.
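
The saturation effect under study can be reproduced with the simplest automatic contrast enhancement, a percentile-based linear stretch: pushing the percentiles inward clips (saturates) more target pixels. A sketch with an invented low-contrast patch, not the actual NVESD processing chain:

```python
import numpy as np

def contrast_stretch(img, lo_pct=2, hi_pct=98):
    """Linear contrast stretch to the 8-bit range. Pixels outside the chosen
    percentiles saturate at 0 or 255; more aggressive percentile choices
    discard exactly the spatial detail that distinguishes similar targets."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    stretched = (img.astype(np.float64) - lo) / max(hi - lo, 1e-9) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

img = np.linspace(90, 110, 256).reshape(16, 16)  # low-contrast thermal-like patch
out = contrast_stretch(img)
print(int(out.min()), int(out.max()))
```

The stretch spreads a 20-count input range across the full display range, and the tails are clipped: target contrast rises, but any identifying features carried by the clipped pixels are lost, which is the trade-off the perception experiments quantify.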

  18. Comparative genomics study for the identification of drug and vaccine targets in Staphylococcus aureus: MurA ligase enzyme as a proposed candidate.

    PubMed

    Ghosh, Soma; Prava, Jyoti; Samal, Himanshu Bhusan; Suar, Mrutyunjay; Mahapatra, Rajani Kanta

    2014-06-01

    The increasing emergence of antibiotic-resistant pathogenic microorganisms is nowadays one of the biggest challenges for disease management. In the present study, comparative genomics, metabolic pathway analysis and additional parameters were applied for the identification of 94 non-homologous essential proteins in the Staphylococcus aureus genome. Further analysis prioritized 19 proteins as vaccine candidates, whereas a druggability study identified 34 proteins suitable as drug targets. Enzymes from peptidoglycan biosynthesis and folate biosynthesis were identified as candidates for drug development. Furthermore, bacterial secretory proteins and a few hypothetical proteins identified in our analysis fulfill the criteria for vaccine candidates. As a case study, we built a homology model of one of the potential drug targets, MurA ligase, using MODELLER (9v12) software. The model was further selected for an in silico docking study with inhibitors from the DrugBank database. Results from this study could facilitate selection of proteins for entry into drug design and vaccine production pipelines. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. SIMBAD: a sequence-independent molecular-replacement pipeline

    DOE PAGES

    Simpkin, Adam J.; Simkovic, Felix; Thomas, Jens M. H.; ...

    2018-06-08

    The conventional approach to finding structurally similar search models for use in molecular replacement (MR) is to use the sequence of the target to search against those of a set of known structures. Sequence similarity often correlates with structure similarity. Given sufficient similarity, a known structure correctly positioned in the target cell by the MR process can provide an approximation to the unknown phases of the target. An alternative approach to identifying homologous structures suitable for MR is to exploit the measured data directly, comparing the lattice parameters or the experimentally derived structure-factor amplitudes with those of known structures. Here, SIMBAD, a new sequence-independent MR pipeline which implements these approaches, is presented. SIMBAD can identify cases of contaminant crystallization and other mishaps such as mistaken identity (swapped crystallization trays), as well as solving unsequenced targets and providing a brute-force approach where sequence-dependent search-model identification may be nontrivial, for example because of conformational diversity among identifiable homologues. The program implements a three-step pipeline to efficiently identify a suitable search model in a database of known structures. The first step performs a lattice-parameter search against the entire Protein Data Bank (PDB), rapidly determining whether or not a homologue exists in the same crystal form. The second step is designed to screen the target data for the presence of a crystallized contaminant, a not uncommon occurrence in macromolecular crystallography. Solving structures with MR in such cases can remain problematic for many years, since the search models, which are assumed to be similar to the structure of interest, are not necessarily related to the structures that have actually crystallized. To cater for this eventuality, SIMBAD rapidly screens the data against a database of known contaminant structures. Where the first two
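
The first step of the pipeline, a lattice-parameter search, can be sketched as a tolerance-gated nearest-neighbour lookup over unit cells (a, b, c in Å; alpha, beta, gamma in degrees). The tolerances, PDB-style codes and cell values below are invented for illustration, not SIMBAD's actual matching criteria:

```python
def closest_cell(target, library, length_tol=1.0, angle_tol=2.0):
    """Return the library entry whose unit cell best matches `target`, or None
    if no cell agrees within the length/angle tolerances."""
    def compatible(cell):
        return (all(abs(t - c) <= length_tol for t, c in zip(target[:3], cell[:3]))
                and all(abs(t - c) <= angle_tol for t, c in zip(target[3:], cell[3:])))
    def distance(cell):
        return sum((t - c) ** 2 for t, c in zip(target, cell))
    hits = [pdb for pdb, cell in library.items() if compatible(cell)]
    return min(hits, key=lambda pdb: distance(library[pdb])) if hits else None

library = {  # invented entries standing in for the PDB-wide cell database
    "1ABC": (78.2, 78.2, 37.1, 90.0, 90.0, 90.0),
    "2XYZ": (51.0, 62.3, 70.8, 90.0, 100.5, 90.0),
}
print(closest_cell((78.5, 78.0, 37.3, 90.0, 90.0, 90.0), library))
```

Because the comparison uses only the measured cell, it requires no target sequence at all, which is what makes the approach effective for contaminants and mislabelled samples.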

  20. SIMBAD : a sequence-independent molecular-replacement pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpkin, Adam J.; Simkovic, Felix; Thomas, Jens M. H.

    The conventional approach to finding structurally similar search models for use in molecular replacement (MR) is to use the sequence of the target to search against those of a set of known structures. Sequence similarity often correlates with structure similarity. Given sufficient similarity, a known structure correctly positioned in the target cell by the MR process can provide an approximation to the unknown phases of the target. An alternative approach to identifying homologous structures suitable for MR is to exploit the measured data directly, comparing the lattice parameters or the experimentally derived structure-factor amplitudes with those of known structures. Here, SIMBAD, a new sequence-independent MR pipeline which implements these approaches, is presented. SIMBAD can identify cases of contaminant crystallization and other mishaps such as mistaken identity (swapped crystallization trays), as well as solving unsequenced targets and providing a brute-force approach where sequence-dependent search-model identification may be nontrivial, for example because of conformational diversity among identifiable homologues. The program implements a three-step pipeline to efficiently identify a suitable search model in a database of known structures. The first step performs a lattice-parameter search against the entire Protein Data Bank (PDB), rapidly determining whether or not a homologue exists in the same crystal form. The second step is designed to screen the target data for the presence of a crystallized contaminant, a not uncommon occurrence in macromolecular crystallography. Solving structures with MR in such cases can remain problematic for many years, since the search models, which are assumed to be similar to the structure of interest, are not necessarily related to the structures that have actually crystallized. To cater for this eventuality, SIMBAD rapidly screens the data against a database of known contaminant structures. Where the first two
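    The lattice-parameter screen described above can be illustrated with a short sketch. This is a hypothetical toy, not SIMBAD's actual database, metric, or defaults: the unit-cell values, PDB codes, and the 5% cutoff are all invented.

```python
# Hypothetical sketch of a lattice-parameter search: screen known structures
# for a unit cell matching the target's. Not SIMBAD's actual implementation.

def cell_distance(cell_a, cell_b):
    """Sum of relative differences over (a, b, c, alpha, beta, gamma)."""
    return sum(abs(x - y) / y for x, y in zip(cell_a, cell_b))

def lattice_search(target_cell, known_cells, tolerance=0.05):
    """Return database entries whose unit cell is within tolerance of the target."""
    hits = [(pdb_id, cell_distance(target_cell, cell))
            for pdb_id, cell in known_cells.items()
            if cell_distance(target_cell, cell) < tolerance]
    return sorted(hits, key=lambda h: h[1])

toy_db = {
    "1ABC": (78.1, 78.1, 37.0, 90.0, 90.0, 90.0),    # same crystal form
    "2XYZ": (120.5, 61.2, 45.3, 90.0, 102.3, 90.0),  # unrelated form
}
hits = lattice_search((78.4, 78.0, 37.1, 90.0, 90.0, 90.0), toy_db)
```

    In practice SIMBAD scores cells against the entire PDB; a relative-difference metric like this one simply flags entries crystallized in (nearly) the same form.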

  1. Employing machine learning for reliable miRNA target identification in plants

    PubMed Central

    2011-01-01

    Background miRNAs are ~21 nucleotide long small noncoding RNA molecules, formed endogenously in most eukaryotes, which mainly control their target genes post-transcriptionally by interacting with and silencing them. While many tools have been developed for animal miRNA target identification, plant miRNA target identification systems have seen limited development. Most have centered on exact complementarity matching, and very few consider other factors such as multiple target sites and the role of flanking regions. Result In the present work, a Support Vector Regression (SVR) approach has been implemented for plant miRNA target identification, utilizing position-specific dinucleotide density variation information around the target sites, to yield highly reliable results. It has been named p-TAREF (plant-Target Refiner). Performance comparison for p-TAREF was done against other plant prediction tools with utmost rigor, and p-TAREF was found to perform better in several respects. Further, p-TAREF was run over experimentally validated miRNA targets from species such as Arabidopsis, Medicago, rice and tomato, and detected them accurately, suggesting the broad applicability of p-TAREF across plant species. Using p-TAREF, target identification was done for the complete rice transcriptome, supported by expression and degradome-based data. miR156 was found to be an important component of the rice regulatory system, where control of genes associated with growth and transcription appeared predominant. The entire methodology has been implemented in a multi-threaded parallel architecture in Java, to enable fast processing for the web-server version as well as the standalone version. This also allows it to run even on a simple desktop computer in concurrent mode. The web-server version also provides a facility to gather experimental support for predictions through on-the-spot expression data analysis. Conclusion A machine learning multivariate feature tool has been
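    The position-specific dinucleotide density features underlying p-TAREF's SVR step can be sketched roughly as follows. The window sequence and the normalisation are illustrative assumptions, not p-TAREF's actual feature encoding.

```python
# Hypothetical sketch of a dinucleotide density feature, as might be computed
# in windows around a putative miRNA target site before feeding an SVR.
from collections import Counter

def dinucleotide_density(seq):
    """Fraction of each overlapping dinucleotide in a sequence window."""
    pairs = [seq[i:i + 2] for i in range(len(seq) - 1)]
    counts = Counter(pairs)
    total = len(pairs)
    return {dn: counts[dn] / total for dn in counts}

# Densities from windows flanking the target site would be concatenated
# into the feature vector for regression.
flank = "AUGGCUGGAU"
dens = dinucleotide_density(flank)
```

    Position specificity comes from computing such densities separately for each window position relative to the site, so variation across positions becomes a feature.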

  2. A cloud-compatible bioinformatics pipeline for ultrarapid pathogen identification from next-generation sequencing of clinical samples

    PubMed Central

    Naccache, Samia N.; Federman, Scot; Veeraraghavan, Narayanan; Zaharia, Matei; Lee, Deanna; Samayoa, Erik; Bouquet, Jerome; Greninger, Alexander L.; Luk, Ka-Cheung; Enge, Barryett; Wadford, Debra A.; Messenger, Sharon L.; Genrich, Gillian L.; Pellegrino, Kristen; Grard, Gilda; Leroy, Eric; Schneider, Bradley S.; Fair, Joseph N.; Martínez, Miguel A.; Isa, Pavel; Crump, John A.; DeRisi, Joseph L.; Sittler, Taylor; Hackett, John; Miller, Steve; Chiu, Charles Y.

    2014-01-01

    Unbiased next-generation sequencing (NGS) approaches enable comprehensive pathogen detection in the clinical microbiology laboratory and have numerous applications for public health surveillance, outbreak investigation, and the diagnosis of infectious diseases. However, practical deployment of the technology is hindered by the bioinformatics challenge of analyzing results accurately and in a clinically relevant timeframe. Here we describe SURPI (“sequence-based ultrarapid pathogen identification”), a computational pipeline for pathogen identification from complex metagenomic NGS data generated from clinical samples, and demonstrate use of the pipeline in the analysis of 237 clinical samples comprising more than 1.1 billion sequences. Deployable on both cloud-based and standalone servers, SURPI leverages two state-of-the-art aligners for accelerated analyses, SNAP and RAPSearch, which are as accurate as existing bioinformatics tools but orders of magnitude faster in performance. In fast mode, SURPI detects viruses and bacteria by scanning data sets of 7–500 million reads in 11 min to 5 h, while in comprehensive mode, all known microorganisms are identified, followed by de novo assembly and protein homology searches for divergent viruses in 50 min to 16 h. SURPI has also directly contributed to real-time microbial diagnosis in acutely ill patients, underscoring its potential key role in the development of unbiased NGS-based clinical assays in infectious diseases that demand rapid turnaround times. PMID:24899342
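    SURPI's fast mode relies on the SNAP and RAPSearch aligners. As a much-simplified, hypothetical stand-in, exact k-mer matching conveys the read-classification idea; the value of k, the reference sequences, and the reads below are all invented for illustration.

```python
# Hypothetical stand-in for alignment-based pathogen detection: assign each
# read to the reference whose k-mer set it shares most. Not SURPI's algorithm.

def kmers(seq, k=8):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def classify_read(read, reference_kmers, min_hits=2):
    """Assign a read to the reference sharing the most k-mers, if any."""
    best, best_hits = None, 0
    for name, ref in reference_kmers.items():
        hits = len(kmers(read) & ref)
        if hits > best_hits:
            best, best_hits = name, hits
    return best if best_hits >= min_hits else None

refs = {
    "virus_A": kmers("ACGTACGTTTGACCAGGTACCAGT"),
    "bacterium_B": kmers("GGGCCCTTAAGGCTAGCTAGGACT"),
}
label = classify_read("ACGTACGTTTGACCAG", refs)
```

    Real metagenomic classifiers subtract host reads first and use compressed indexes; the speed claims in the abstract come from those engineering choices, not from the matching idea itself.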

  3. Identification of Direct Protein Targets of Small Molecules

    PubMed Central

    2010-01-01

    Small-molecule target identification is a vital and daunting task for the chemical biology community as well as for researchers interested in applying the power of chemical genetics to impact biology and medicine. To overcome this “target ID” bottleneck, new technologies are being developed that analyze protein–drug interactions, such as drug affinity responsive target stability (DARTS), which aims to discover the direct binding targets (and off targets) of small molecules on a proteome scale without requiring chemical modification of the compound. Here, we review the DARTS method, discuss why it works, and provide new perspectives for future development in this area. PMID:21077692

  4. Bombing Target Identification from Limited Transect Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, Barry L.; Hathaway, John E.; Pulsipher, Brent A.

    2006-08-07

    Sensor data combined with geostatistical techniques were used to determine likely target areas for a historic military aerial bombing range. Primary data consisted of magnetic anomaly information from limited magnetometer transects across the site. Secondary data included airborne LIDAR, orthophotography, and other general site characterization information. Identification of likely target areas relied primarily upon kriging estimates of magnetic anomaly densities across the site. Secondary information, such as impact crater locations, was used to refine the boundary delineations.
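    The record relies on kriging estimates of anomaly density. Kriging proper requires a fitted variogram model; as a deliberately simplified, hypothetical stand-in, inverse-distance weighting conveys the idea of interpolating densities between sparse transects. The transect points and the power parameter are invented for illustration.

```python
# Simplified spatial interpolation of magnetic-anomaly density from sparse
# transect samples. Inverse-distance weighting here stands in for kriging.

def idw_estimate(x, y, samples, power=2.0):
    """Interpolate anomaly density at (x, y) from (xi, yi, value) samples."""
    num = den = 0.0
    for xi, yi, v in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return v  # exact hit on a sample point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

# Anomaly densities measured along a transect (counts per unit area):
transects = [(0.0, 0.0, 1.0), (100.0, 0.0, 5.0), (200.0, 0.0, 2.0)]
estimate = idw_estimate(50.0, 0.0, transects)
```

    Unlike kriging, this gives no estimation variance, which is what makes kriging preferable for delineating target boundaries with stated confidence.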

  5. ARTIP: Automated Radio Telescope Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh

    2018-02-01

    The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging for radio-interferometric data. ARTIP starts with raw data, i.e., a measurement set, and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging, to generate continuum and spectral-line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard Python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and also multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.
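    The stage architecture described above, where each stage can run in sequence or on its own, can be sketched minimally as follows. The stage names mirror the abstract, but the bookkeeping is invented and is not ARTIP's actual code.

```python
# Hypothetical sketch of a stage-based pipeline where stages run in order
# or individually, logging progress as they go. Not ARTIP's implementation.

class Pipeline:
    def __init__(self):
        self.stages = {}
        self.log = []

    def stage(self, name, func):
        self.stages[name] = func

    def run(self, data, only=None):
        """Run all stages in registration order, or a single named stage."""
        names = [only] if only else list(self.stages)
        for name in names:
            data = self.stages[name](data)
            self.log.append(name)  # continuous feedback to the user
        return data

p = Pipeline()
p.stage("flagging", lambda ms: ms + ["flagged"])
p.stage("bandpass_cal", lambda ms: ms + ["bandpass"])
p.stage("imaging", lambda ms: ms + ["image"])
result = p.run([])
```

    Calling `p.run(data, only="imaging")` would rerun just the imaging stage, the independent-stage behavior the abstract describes.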

  6. A new disaster victim identification management strategy targeting "near identification-threshold" cases: Experiences from the Boxing Day tsunami.

    PubMed

    Wright, Kirsty; Mundorff, Amy; Chaseling, Janet; Forrest, Alexander; Maguire, Christopher; Crane, Denis I

    2015-05-01

    The international disaster victim identification (DVI) response to the Boxing Day tsunami, led by the Royal Thai Police in Phuket, Thailand, was one of the largest and most complex in DVI history. Referred to as the Thai Tsunami Victim Identification operation, the group comprised a multi-national, multi-agency, and multi-disciplinary team. The traditional DVI approach proved successful in identifying a large number of victims quickly. However, the team struggled to identify certain victims due to incomplete or poor quality ante-mortem and post-mortem data. In response to these challenges, a new 'near-threshold' DVI management strategy was implemented to target presumptive identifications and improve operational efficiency. The strategy was implemented by the DNA Team; DNA kinship matches that just failed to reach the 99.9% reporting threshold were therefore prioritized, although the same approach could be taken by targeting, for example, cases with partial fingerprint matches. The presumptive DNA identifications were progressively filtered through the Investigation, Dental and Fingerprint Teams to add additional information necessary to either strengthen or conclusively exclude the identification. Over a five-month period 111 victims from ten countries were identified using this targeted approach. The new identifications comprised 87 adults and 24 children, and included 97 Thai locals. New data from the Fingerprint Team established nearly 60% of the total near-threshold identifications, and the combined DNA/physical method was responsible for over 30%. Implementing the new strategy, targeting near-threshold cases, had positive management implications. The process initiated additional ante-mortem information collections, and established a much-needed, distinct "end-point" for unresolved cases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
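    The triage step at the heart of the strategy, selecting matches just below the reporting threshold for targeted review, can be sketched as follows. The 99.9% threshold is from the abstract; the 99.0% lower bound and the candidate posteriors are invented for illustration.

```python
# Hypothetical sketch of near-threshold case triage: pick kinship matches
# just below the reporting threshold for review by the other DVI teams.

REPORTING_THRESHOLD = 99.9  # per the abstract
NEAR_BAND_LOWER = 99.0      # assumed lower bound for "near threshold"

def near_threshold_cases(cases):
    """Select unresolved cases for targeted multi-team review, strongest first."""
    return sorted(
        (c for c in cases
         if NEAR_BAND_LOWER <= c["posterior"] < REPORTING_THRESHOLD),
        key=lambda c: c["posterior"], reverse=True)

cases = [
    {"id": "V-012", "posterior": 99.95},  # already reportable
    {"id": "V-044", "posterior": 99.72},  # near threshold: review
    {"id": "V-101", "posterior": 87.40},  # too weak
]
queue = near_threshold_cases(cases)
```

    Each queued case would then gather corroborating dental, fingerprint, or investigative data to push it over, or conclusively under, the threshold.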

  7. Collaborative identification method for sea battlefield target based on deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Zheng, Guangdi; Pan, Mingbo; Liu, Wei; Wu, Xuetong

    2018-03-01

    Identifying targets on the sea battlefield is a prerequisite for assessing enemy forces in modern naval combat. In this paper, a collaborative identification method based on convolutional neural networks is proposed to identify typical sea-battlefield targets. Different from the traditional single-input/single-output identification method, the proposed method constructs a multi-input/single-output co-identification architecture based on an optimized convolutional neural network and weighted D-S evidence theory. The simulation results show that
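    The evidence-fusion half of the architecture can be illustrated with Dempster's combination rule. This sketch handles singleton hypotheses only, the classes and mass values are invented, and the paper's sensor weighting scheme is not reproduced.

```python
# Dempster's rule of combination for two basic probability assignments over
# singleton hypotheses: a toy illustration of D-S evidence fusion.

def combine(m1, m2):
    """Combine two mass functions, renormalizing away conflicting mass."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            if a == b:
                combined[a] = combined.get(a, 0.0) + wa * wb
            else:
                conflict += wa * wb
    k = 1.0 - conflict
    return {a: w / k for a, w in combined.items()}

# Two sensors' CNN outputs both lean toward "frigate"; fusion sharpens belief.
sensor1 = {"frigate": 0.7, "cargo": 0.3}
sensor2 = {"frigate": 0.8, "cargo": 0.2}
fused = combine(sensor1, sensor2)
```

    Full D-S theory also assigns mass to compound hypotheses (e.g. "frigate or cargo"), which is what distinguishes it from simple Bayesian averaging.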

  8. 77 FR 70543 - Pipeline Safety: Meeting of the Gas Pipeline Advisory Committee and the Liquid Pipeline Advisory...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-26

    .... PHMSA-2009-0203] Pipeline Safety: Meeting of the Gas Pipeline Advisory Committee and the Liquid Pipeline..., and safety policies for natural gas pipelines and for hazardous liquid pipelines. Both committees were...: Notice of advisory committee meeting. SUMMARY: This notice announces a public meeting of the Gas Pipeline...

  9. The X-shooter pipeline

    NASA Astrophysics Data System (ADS)

    Modigliani, Andrea; Goldoni, Paolo; Royer, Frédéric; Haigron, Regis; Guglielmi, Laurent; François, Patrick; Horrobin, Matthew; Bristow, Paul; Vernet, Joel; Moehler, Sabine; Kerber, Florian; Ballester, Pascal; Mason, Elena; Christensen, Lise

    2010-07-01

    The X-shooter data reduction pipeline, as part of the ESO-VLT Data Flow System, provides recipes for Paranal Science Operations, and for Data Product and Quality Control Operations at Garching headquarters. At Paranal, it is used for quick-look data evaluation. The pipeline recipes can be executed either with EsoRex at the command-line level or through the Gasgano graphical user interface. The recipes are implemented with the ESO Common Pipeline Library (CPL). X-shooter is the first of the second generation of VLT instruments. It makes it possible to collect in one shot the full spectrum of the target from 300 to 2500 nm, subdivided into three arms optimised for the UVB, VIS and NIR ranges, with an efficiency between 15% and 35% including the telescope and the atmosphere, and a spectral resolution varying between 3000 and 17,000. It allows observations in stare and offset modes, using the slit or an IFU, and observing sequences nodding the target along the slit. Data reduction can be performed either with a classical approach, by determining the spectral format via 2D-polynomial transformations, or with the help of a dedicated instrument physical model to gain insight into the instrument, allowing a constrained solution that depends on a few parameters with a physical meaning. In the present paper we describe the steps of data reduction necessary to fully reduce science observations in the different modes, with examples of typical calibrations and observation sequences.

  10. An integrated pipeline of open source software adapted for multi-CPU architectures: use in the large-scale identification of single nucleotide polymorphisms.

    PubMed

    Jayashree, B; Hanspal, Manindra S; Srinivasan, Rajgopal; Vigneshwaran, R; Varshney, Rajeev K; Spurthi, N; Eshwar, K; Ramesh, N; Chandra, S; Hoisington, David A

    2007-01-01

    The large amounts of EST sequence data available from a single species of an organism as well as for several species within a genus provide an easy source of identification of intra- and interspecies single nucleotide polymorphisms (SNPs). In the case of model organisms, the data available are numerous, given the degree of redundancy in the deposited EST data. There are several available bioinformatics tools that can be used to mine this data; however, using them requires a certain level of expertise: the tools have to be used sequentially, with accompanying format conversion, and steps like clustering and assembly of sequences become time-intensive even for moderately sized datasets. We report here a pipeline of open source software extended to run on multiple CPU architectures that can be used to mine large EST datasets for SNPs and identify restriction sites for assaying the SNPs so that cost-effective CAPS assays can be developed for SNP genotyping in genetics and breeding applications. At the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), the pipeline has been implemented to run on a Paracel high-performance system consisting of four dual AMD Opteron processors running Linux with MPICH. The pipeline can be accessed through user-friendly web interfaces at http://hpc.icrisat.cgiar.org/PBSWeb and is available on request for academic use. We have validated the developed pipeline by mining chickpea ESTs for interspecies SNPs, developing CAPS assays for SNP genotyping, and confirming restriction digestion patterns at the sequence level.
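    The final step, checking whether a SNP can be assayed as a CAPS marker, reduces to asking whether exactly one allele carries a restriction site. The EcoRI recognition sequence below is real, but the flanking sequence and the SNP are invented for illustration.

```python
# Hypothetical CAPS-candidate check: a SNP is assayable by restriction
# digestion if exactly one allele creates the enzyme's recognition site.

def has_site(seq, site):
    return site in seq

def caps_candidate(flank5, alleles, flank3, site="GAATTC"):
    """Return (assayable, per-allele cut pattern) for a biallelic SNP."""
    cut = {a: has_site(flank5 + a + flank3, site) for a in alleles}
    return sum(cut.values()) == 1, cut

# An A/G SNP: the A allele completes an EcoRI site (GAATTC), G does not.
assayable, pattern = caps_candidate("TTGA", ("A", "G"), "TTCAA")
```

    In a digestion assay the A allele would then yield two fragments and the G allele one, making the genotype readable on a gel.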

  11. Drug Target Mining and Analysis of the Chinese Tree Shrew for Pharmacological Testing

    PubMed Central

    Liu, Jie; Lee, Wen-hui; Zhang, Yun

    2014-01-01

    The discovery of new drugs requires the development of improved animal models for drug testing. The Chinese tree shrew is considered to be a realistic candidate model. To assess the potential of the Chinese tree shrew for pharmacological testing, we performed drug target prediction and analysis on genomic and transcriptomic scales. Using our pipeline, 3,482 proteins were predicted to be drug targets. Of these predicted targets, 446 and 1,049 proteins with the highest rank and total scores, respectively, included homologs of targets for cancer chemotherapy, depression, age-related decline and cardiovascular disease. Based on comparative analyses, more than half of the drug target proteins identified from the tree shrew genome showed higher similarity to human targets than their mouse counterparts do. Target validation also demonstrated that the constitutive expression of the proteinase-activated receptors of tree shrew platelets is similar to that of human platelets but differs from that of mouse platelets. We developed an effective pipeline and search strategy for drug target prediction and the evaluation of model-based target identification for drug testing. This work provides useful information for future studies of the Chinese tree shrew as a source of novel targets for drug discovery research. PMID:25105297

  12. 78 FR 70623 - Pipeline Safety: Meeting of the Gas Pipeline Advisory Committee and the Liquid Pipeline Advisory...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-26

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2009-0203] Pipeline Safety: Meeting of the Gas Pipeline Advisory Committee and the Liquid Pipeline Advisory Committee AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. [[Page...

  13. A homology-based pipeline for global prediction of post-translational modification sites

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Shi, Shao-Ping; Xu, Hao-Dong; Suo, Sheng-Bao; Qiu, Jian-Ding

    2016-05-01

    The pathways of protein post-translational modifications (PTMs) have been shown to play particularly important roles in almost any biological process. Identification of PTM substrates, along with information on the exact sites, is fundamental for fully understanding or controlling biological processes. Alternative computational strategies would help to annotate PTMs in a high-throughput manner. Traditional algorithms are suited to identifying common organisms and tissues that have a complete PTM atlas or extensive experimental data, while annotation of rare PTMs in most organisms remains a clear challenge. To this end, we have developed a novel homology-based pipeline named PTMProber that allows identification of potential modification sites for most of the proteomes lacking PTM data. Cross-promotion E-value (CPE) has been used in our pipeline as a stringent benchmark to evaluate homology to known modification sites. Independent validation tests show that PTMProber achieves over 58.8% recall with high precision by the CPE benchmark. Comparisons with machine-learning tools show that the PTMProber pipeline performs better on general predictions. We also developed a web-based tool integrating this pipeline at http://bioinfo.ncu.edu.cn/PTMProber/index.aspx. Beyond pre-constructed PTM prediction models, the website provides functionality for users to customize models.
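    The core idea of homology-based site transfer can be sketched very simply: an identical sequence window around a known modification site in a homolog is taken as evidence for the corresponding site in the query. The window size and the sequences below are invented, and PTMProber's CPE scoring is not modelled here.

```python
# Hypothetical sketch of homology-based PTM site transfer via exact sequence
# windows. Real pipelines (like PTMProber) score inexact homology instead.

def transfer_sites(known_seq, known_sites, query_seq, window=3):
    """Map modified residue positions (0-based) from a homolog onto a query."""
    hits = []
    for pos in known_sites:
        lo, hi = pos - window, pos + window + 1
        if lo < 0 or hi > len(known_seq):
            continue  # window runs off the sequence; skip
        motif = known_seq[lo:hi]
        idx = query_seq.find(motif)
        if idx != -1:
            hits.append(idx + window)  # center of the matched window
    return hits

known = "MKKRSPSEEL"   # known phosphosite at index 6 (the second S)
query = "GGRSPSEELTT"  # homologous query sequence
sites = transfer_sites(known, [6], query)
```

    Replacing exact matching with an alignment score and an E-value cutoff is what turns this toy into a usable predictor for divergent homologs.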

  14. Slaying Hydra: A Python-Based Reduction Pipeline for the Hydra Multi-Object Spectrograph

    NASA Astrophysics Data System (ADS)

    Seifert, Richard; Mann, Andrew

    2018-01-01

    We present a Python-based data reduction pipeline for the Hydra Multi-Object Spectrograph on the WIYN 3.5 m telescope, an instrument which enables simultaneous spectroscopy of up to 93 targets. The reduction steps carried out include flat-fielding, dynamic fiber tracing, wavelength calibration, optimal fiber extraction, and sky subtraction. The pipeline also supports the use of sky lines to correct for zero-point offsets between fibers. To account for the moving parts on the instrument and telescope, fiber positions and wavelength solutions are derived in real-time for each dataset. The end result is a one-dimensional spectrum for each target fiber. Quick and fully automated, the pipeline enables on-the-fly reduction while observing, and has been known to outperform the IRAF pipeline by more accurately reproducing known RVs. While Hydra has many configurations in both high- and low-resolution, the pipeline was developed and tested with only one high-resolution mode. In the future we plan to expand the pipeline to work in most commonly used modes.

  15. [Exploring New Drug Targets through the Identification of Target Molecules of Bioactive Natural Products].

    PubMed

    Arai, Masayoshi

    2016-01-01

    With the development of cell biology and microbiology, it has become easy to culture many types of animal cells and microbes, and they are frequently used for phenotypic screening to explore medicinal seeds. On the other hand, it is recognized that cells and pathogenic microbes present in pathologic sites and infected regions of the human body display unique properties different from those under general culture conditions. We isolated several bioactive compounds from marine medicinal resources using constructed bioassay-guided separation focusing on the unique changes in the characteristics of cells and pathogenic microbes (Mycobacterium spp.) in the human body under disease conditions. In addition, we also carried out identification studies of target molecules of the bioactive compounds by methods utilizing the gene expression profile, transformants of cells or microbes, synthetic probe molecules of the isolated compounds, etc., since bioactive compounds isolated from the phenotypic screening system often target new molecules. This review presents our phenotypic screening systems, isolation of bioactive compounds from marine medicinal resources, and target identification of bioactive compounds.

  16. The Influence of Attention and Target Identification on Saccadic Eye Movements Depends on Prior Target Location

    PubMed Central

    Hardwick, David R.; Cutmore, Timothy R. H.; Hine, Trevor J.

    2014-01-01

    Saccadic latency is reduced by a temporal gap between fixation point and target, by identification of a target feature, and by movement in a new direction (inhibition of saccadic return, ISR). A simple additive model was compared with a shared resources model that predicts a three-way interaction. Twenty naïve participants made horizontal saccades to targets left and right of fixation in a randomised block design. There was a significant three-way interaction among the factors on saccade latency. This was revealed in a two-way interaction between feature identification and the gap versus no gap factor which was only apparent when the saccade was in the same direction as the previous saccade. No interaction was apparent when the saccade was in the opposite direction. This result supports an attentional inhibitory effect that is present during ISR to a previous location which is only partly released by the facilitative effect of feature identification and gap. Together, anticipatory error data and saccade latency interactions suggest a source of ISR at a higher level of attention, possibly localised in the dorsolateral prefrontal cortex and involving tonic activation. PMID:24719754

  17. Visually-guided attention enhances target identification in a complex auditory scene.

    PubMed

    Best, Virginia; Ozmeral, Erol J; Shinn-Cunningham, Barbara G

    2007-06-01

    In auditory scenes containing many similar sound sources, sorting of acoustic information into streams becomes difficult, which can lead to disruptions in the identification of behaviorally relevant targets. This study investigated the benefit of providing simple visual cues for when and/or where a target would occur in a complex acoustic mixture. Importantly, the visual cues provided no information about the target content. In separate experiments, human subjects either identified learned birdsongs in the presence of a chorus of unlearned songs or recalled strings of spoken digits in the presence of speech maskers. A visual cue indicating which loudspeaker (from an array of five) would contain the target improved accuracy for both kinds of stimuli. A cue indicating which time segment (out of a possible five) would contain the target also improved accuracy, but much more for birdsong than for speech. These results suggest that in real world situations, information about where a target of interest is located can enhance its identification, while information about when to listen can also be helpful when targets are unfamiliar or extremely similar to their competitors.

  18. Improved Photometry for the DASCH Pipeline

    NASA Astrophysics Data System (ADS)

    Tang, Sumin; Grindlay, Jonathan; Los, Edward; Servillat, Mathieu

    2013-07-01

    The Digital Access to a Sky Century@Harvard (DASCH) project is digitizing the ~500,000 glass plate images obtained (full sky) by the Harvard College Observatory from 1885 to 1992. Astrometry and photometry for each resolved object are derived with photometric rms values of ~0.15 mag for the initial photometry analysis pipeline. Here we describe new developments for DASCH photometry, applied to the Kepler field, that have yielded further improvements, including better identification of image blends and plate defects by measuring image profiles and astrometric deviations. A local calibration procedure using nearby stars in a similar magnitude range as the program star (similar to what has been done for visual photometry from the plates) yields additional improvement for a net photometric rms of ~0.1 mag. We also describe statistical measures of light curves that are now used in the DASCH pipeline processing to identify new variables autonomously. The DASCH photometry methods described here are used in the pipeline processing for the DASCH data releases, as well as for a forthcoming paper on the long-term variables discovered by DASCH in the Kepler field.
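    The local calibration procedure, deriving a zero-point from nearby comparison stars of similar magnitude, can be sketched with a median offset. The star magnitudes below are invented, and DASCH's actual calibration is more elaborate than this.

```python
# Hypothetical sketch of local photometric calibration: shift the target's
# plate magnitude by the median (catalog - plate) offset of nearby stars.
from statistics import median

def local_zero_point(comparisons):
    """Median (catalog - plate) offset over nearby comparison stars."""
    return median(cat - plate for plate, cat in comparisons)

def calibrate(target_plate_mag, comparisons):
    return target_plate_mag + local_zero_point(comparisons)

# (plate magnitude, catalog magnitude) for stars near the program star:
nearby = [(12.30, 12.10), (12.55, 12.38), (11.95, 11.70)]
mag = calibrate(12.40, nearby)
```

    Using the median rather than the mean keeps a single blended or defective comparison star from skewing the zero-point, in the spirit of the blend rejection the abstract describes.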

  19. Effects of EPI distortion correction pipelines on the connectome in Parkinson's Disease

    NASA Astrophysics Data System (ADS)

    Galvis, Justin; Mezher, Adam F.; Ragothaman, Anjanibhargavi; Villalon-Reina, Julio E.; Fletcher, P. Thomas; Thompson, Paul M.; Prasad, Gautam

    2016-03-01

    Echo-planar imaging (EPI) is commonly used for diffusion-weighted imaging (DWI) but is susceptible to nonlinear geometric distortions arising from inhomogeneities in the static magnetic field. These inhomogeneities can be measured and corrected using a fieldmap image acquired during the scanning process. In studies where the fieldmap image is not collected, these distortions can be corrected, to some extent, by nonlinearly registering the diffusion image to a corresponding anatomical image, either a T1- or T2-weighted image. Here we compared two EPI distortion correction pipelines, both based on nonlinear registration, which were optimized for the particular weighting of the structural image registration target. The first pipeline used a 3D nonlinear registration to a T1-weighted target, while the second pipeline used a 1D nonlinear registration to a T2-weighted target. We assessed each pipeline in its ability to characterize high-level measures of brain connectivity in Parkinson's disease (PD) in 189 individuals (58 healthy controls, 131 people with PD) from the Parkinson's Progression Markers Initiative (PPMI) dataset. We computed a structural connectome (connectivity map) for each participant using regions of interest from a cortical parcellation combined with DWI-based whole-brain tractography. We evaluated test-retest reliability of the connectome for each EPI distortion correction pipeline using a second diffusion scan acquired directly after each participant's first. Finally, we used support vector machine (SVM) classification to assess how accurately each pipeline classified PD versus healthy controls using each participant's structural connectome.
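    One simple way to quantify the test-retest reliability mentioned above is the correlation between the flattened edge weights of the two scans' connectivity matrices. The edge values below are invented, and the abstract does not specify which reliability metric the study used.

```python
# Hypothetical test-retest check: Pearson correlation between upper-triangle
# connectome edge weights from a scan and its immediate rescan.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Edge weights (e.g. streamline counts, normalized) from the two scans:
scan1 = [0.9, 0.1, 0.4, 0.7, 0.2, 0.6]
scan2 = [0.8, 0.2, 0.5, 0.7, 0.1, 0.6]
reliability = pearson(scan1, scan2)
```

    A pipeline whose corrected connectomes correlate more strongly across back-to-back scans is introducing less registration-dependent noise into the edge weights.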

  20. Target identification of small molecules based on chemical biology approaches.

    PubMed

    Futamura, Yushi; Muroi, Makoto; Osada, Hiroyuki

    2013-05-01

    Recently, a phenotypic approach (screens that assess the effects of compounds on cells, tissues, or whole organisms) has been reconsidered and reintroduced as a complementary strategy to the target-based approach for drug discovery. Although the finding of novel bioactive compounds from large chemical libraries has become routine, the identification of their molecular targets is still a time-consuming and difficult process, making this step rate-limiting in drug development. In the last decade, we and other researchers have amassed a large amount of phenotypic data through progress in omics research and advances in instrumentation. Accordingly, profiling methodologies that exploit these datasets have emerged to identify and validate specific molecular targets of drug candidates, attaining some progress in current drug discovery (e.g., eribulin). However, for a compound that shows an unprecedented phenotype, likely by inhibiting a first-in-class target, such phenotypic profiling is invalid. Under these circumstances, a photo-crosslinking affinity approach should be beneficial. In this review, we describe and summarize recent progress in both affinity-based (direct) and phenotypic profiling (indirect) approaches for chemical biology target identification.

  1. Snapshot spectral and polarimetric imaging; target identification with multispectral video

    NASA Astrophysics Data System (ADS)

    Bartlett, Brent D.; Rodriguez, Mikel D.

    2013-05-01

    As the number of pixels continues to grow in consumer and scientific imaging devices, it has become feasible to collect the incident light field. In this paper, an imaging device developed around light field imaging is used to collect multispectral and polarimetric imagery in a snapshot fashion. The sensor is described and a video data set is shown highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.

  2. Visually-guided Attention Enhances Target Identification in a Complex Auditory Scene

    PubMed Central

    Ozmeral, Erol J.; Shinn-Cunningham, Barbara G.

    2007-01-01

    In auditory scenes containing many similar sound sources, sorting of acoustic information into streams becomes difficult, which can lead to disruptions in the identification of behaviorally relevant targets. This study investigated the benefit of providing simple visual cues for when and/or where a target would occur in a complex acoustic mixture. Importantly, the visual cues provided no information about the target content. In separate experiments, human subjects either identified learned birdsongs in the presence of a chorus of unlearned songs or recalled strings of spoken digits in the presence of speech maskers. A visual cue indicating which loudspeaker (from an array of five) would contain the target improved accuracy for both kinds of stimuli. A cue indicating which time segment (out of a possible five) would contain the target also improved accuracy, but much more for birdsong than for speech. These results suggest that in real world situations, information about where a target of interest is located can enhance its identification, while information about when to listen can also be helpful when targets are unfamiliar or extremely similar to their competitors. PMID:17453308

  3. 76 FR 70953 - Pipeline Safety: Safety of Gas Transmission Pipelines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-16

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part 192 [Docket ID PHMSA-2011-0023] RIN 2137-AE72 Pipeline Safety: Safety of Gas Transmission Pipelines AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. ACTION: Advance notice of...

  4. A high-throughput pipeline for the production of synthetic antibodies for analysis of ribonucleoprotein complexes

    PubMed Central

    Na, Hong; Laver, John D.; Jeon, Jouhyun; Singh, Fateh; Ancevicius, Kristin; Fan, Yujie; Cao, Wen Xi; Nie, Kun; Yang, Zhenglin; Luo, Hua; Wang, Miranda; Rissland, Olivia; Westwood, J. Timothy; Kim, Philip M.; Smibert, Craig A.; Lipshitz, Howard D.; Sidhu, Sachdev S.

    2016-01-01

    Post-transcriptional regulation of mRNAs plays an essential role in the control of gene expression. mRNAs are regulated in ribonucleoprotein (RNP) complexes by RNA-binding proteins (RBPs) along with associated protein and noncoding RNA (ncRNA) cofactors. A global understanding of post-transcriptional control in any cell type requires identification of the components of all of its RNP complexes. We have previously shown that these complexes can be purified by immunoprecipitation using anti-RBP synthetic antibodies produced by phage display. To develop the large number of synthetic antibodies required for a global analysis of RNP complex composition, we have established a pipeline that combines (i) a computationally aided strategy for design of antigens located outside of annotated domains, (ii) high-throughput antigen expression and purification in Escherichia coli, and (iii) high-throughput antibody selection and screening. Using this pipeline, we have produced 279 antibodies against 61 different protein components of Drosophila melanogaster RNPs. Together with those produced in our low-throughput efforts, we have a panel of 311 antibodies for 67 RNP complex proteins. Tests of a subset of our antibodies demonstrated that 89% immunoprecipitate their endogenous target from embryo lysate. This panel of antibodies will serve as a resource for global studies of RNP complexes in Drosophila. Furthermore, our high-throughput pipeline permits efficient production of synthetic antibodies against any large set of proteins. PMID:26847261

  5. Characterization and Validation of Transiting Planets in the TESS SPOC Pipeline

    NASA Astrophysics Data System (ADS)

    Twicken, Joseph D.; Caldwell, Douglas A.; Davies, Misty; Jenkins, Jon Michael; Li, Jie; Morris, Robert L.; Rose, Mark; Smith, Jeffrey C.; Tenenbaum, Peter; Ting, Eric; Wohler, Bill

    2018-06-01

    Light curves for Transiting Exoplanet Survey Satellite (TESS) target stars will be extracted and searched for transiting planet signatures in the Science Processing Operations Center (SPOC) Science Pipeline at NASA Ames Research Center. Targets for which the transiting planet detection threshold is exceeded will be processed in the Data Validation (DV) component of the Pipeline. The primary functions of DV are to (1) characterize planets identified in the transiting planet search, (2) search for additional transiting planet signatures in light curves after modeled transit signatures have been removed, and (3) perform a comprehensive suite of diagnostic tests to aid in discrimination between true transiting planets and false positive detections. DV data products include extensive reports by target, one-page summaries by planet candidate, and tabulated transit model fit and diagnostic test results. DV products may be employed by humans and automated systems to vet planet candidates identified in the Pipeline. TESS will launch in 2018 and survey the full sky for transiting exoplanets over a period of two years. The SPOC pipeline was ported from the Kepler Science Operations Center (SOC) codebase and extended for TESS after the mission was selected for flight in the NASA Astrophysics Explorer program. We describe the Data Validation component of the SPOC Pipeline. The diagnostic tests exploit the flux (i.e., light curve) and pixel time series associated with each target to support the determination of the origin of each purported transiting planet signature. We also highlight the differences between the DV components for Kepler and TESS. Candidate planet detections and data products will be delivered to the Mikulski Archive for Space Telescopes (MAST); the MAST URL is archive.stsci.edu/tess. Funding for the TESS Mission has been provided by the NASA Science Mission Directorate.

  6. Targeted stock identification using multilocus genotype 'familyprinting'

    USGS Publications Warehouse

    Letcher, B.H.; King, T.L.

    1999-01-01

    We present an approach to stock identification of small, targeted populations that uses multilocus microsatellite genotypes of individual mating adults to uniquely identify first- and second-generation offspring in a mixture. We call the approach 'familyprinting'; unlike DNA fingerprinting where tissue samples of individuals are matched, offspring from various families are assigned to pairs of parents or sets of four grandparents with known genotypes. The basic unit of identification is the family, but families can be nested within a variety of stock units ranging from naturally reproducing groups of fish in a small tributary or pond from which mating adults can be sampled to large or small collections of families produced in hatcheries and stocked in specific locations. We show that, with as few as seven alleles per locus using four loci without error, first-generation offspring can be uniquely assigned to the correct family. For second-generation applications in a hatchery, more alleles per locus (10) and loci (10) are required for correct assignment of all offspring to the correct set of grandparents. Using microsatellite DNA variation from an Atlantic salmon (Salmo salar) restoration river (Connecticut River, USA), we also show that this population contains sufficient genetic diversity in sea-run returns for 100% correct first-generation assignment and 97% correct second-generation assignment using 14 loci. We are currently using first- and second-generation familyprinting in this population with the ultimate goal of identifying the stocking tributary. In addition to within-river familyprinting, there also appears to be sufficient genetic diversity within and between Atlantic salmon populations for identification of 'familyprinted' fish in a mixture of multiple populations. We also suggest that second-generation familyprinting with multiple populations may provide a tool for examining stock structure. Familyprinting with microsatellite DNA markers is a viable
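    The core of first-generation familyprinting is Mendelian exclusion: an offspring is assigned to the parental pair whose genotypes can supply one allele per locus. A minimal sketch with hypothetical two-locus genotypes (the study's panels of 4-14 microsatellite loci work the same way):

```python
def compatible(offspring, pair):
    """Mendelian exclusion: at every locus, one offspring allele must be
    attributable to each parent of the candidate pair."""
    for locus, (a, b) in enumerate(offspring):
        p, q = pair[0][locus], pair[1][locus]
        if not ((a in p and b in q) or (a in q and b in p)):
            return False
    return True

def assign(offspring, parent_pairs):
    """Return indices of the parental pairs that are not excluded."""
    return [i for i, pair in enumerate(parent_pairs) if compatible(offspring, pair)]

# Hypothetical two-locus genotypes: each parent is a tuple of (allele, allele)
# pairs, one per locus.
pairs = [
    ((("A", "B"), ("C", "C")), (("A", "A"), ("C", "D"))),  # family 0
    ((("E", "E"), ("D", "D")), (("F", "F"), ("D", "E"))),  # family 1
]
kid = (("A", "A"), ("C", "D"))  # draws one allele per locus from each family-0 parent
print(assign(kid, pairs))  # a unique assignment lists exactly one family
```

    With enough alleles per locus, each offspring excludes all but one family; the paper's simulations quantify how many alleles and loci that uniqueness requires.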

  7. Emerging therapeutic targets for treatment of leishmaniasis.

    PubMed

    Sundar, Shyam; Singh, Bhawana

    2018-06-01

    Parasitic diseases that pose a threat to human life include leishmaniasis, caused by protozoan parasites of the genus Leishmania. Existing drugs have limitations due to deleterious side effects like teratogenicity, high cost and drug resistance. This calls for deeper insight into the therapeutic aspects of the disease. Areas covered: We have identified different drug targets via molecular, immunological and metabolic approaches as well as by systems biology. We bring these promising drug targets into light so that they can be explored to their maximum. In an effort to bridge the gaps between existing knowledge and prospects of drug discovery, we have compiled interesting studies on drug targets, thereby paving the way for establishment of better therapeutic aspects. Expert opinion: Advancements in technology shed light on many unexplored pathways. Further probing of well established pathways led to the discovery of new drug targets. This review is a comprehensive report on current and emerging drug targets, with emphasis on several metabolic targets, organellar biochemistry, salvage pathways, epigenetics, kinome and more. Identification of new targets can contribute significantly towards strengthening the pipeline for disease elimination.

  8. Fast, accurate and easy-to-pipeline methods for amplicon sequence processing

    NASA Astrophysics Data System (ADS)

    Antonielli, Livio; Sessitsch, Angela

    2016-04-01

    Next generation sequencing (NGS) technologies have been established for years as an essential resource in microbiology. While on the one hand metagenomic studies can benefit from the continuously increasing throughput of the Illumina (Solexa) technology, on the other hand the spread of third generation sequencing technologies (PacBio, Oxford Nanopore) is taking whole genome sequencing beyond the assembly of fragmented draft genomes, making it now possible to finish bacterial genomes even without short read correction. Besides (meta)genomic analysis, next-gen amplicon sequencing is still fundamental for microbial studies. Amplicon sequencing of the 16S rRNA gene and ITS (Internal Transcribed Spacer) remains a well-established, widespread method for a multitude of different purposes concerning the identification and comparison of archaeal/bacterial (16S rRNA gene) and fungal (ITS) communities occurring in diverse environments. Numerous different pipelines have been developed in order to process NGS-derived amplicon sequences, among which Mothur, QIIME and USEARCH are the most well-known and cited ones. The entire process from initial raw sequence data through read error correction, paired-end read assembly, primer stripping, quality filtering, clustering, OTU taxonomic classification and BIOM table rarefaction as well as alternative "normalization" methods will be addressed. An effective and accurate strategy will be presented using state-of-the-art bioinformatic tools, and the example of a straightforward one-script pipeline for 16S rRNA gene or ITS MiSeq amplicon sequencing will be provided. Finally, instructions on how to automatically retrieve nucleotide sequences from NCBI and therefore apply the pipeline to targets other than 16S rRNA gene (Greengenes, SILVA) and ITS (UNITE) will be discussed.

  9. Blueprint for antimicrobial hit discovery targeting metabolic networks.

    PubMed

    Shen, Y; Liu, J; Estiu, G; Isin, B; Ahn, Y-Y; Lee, D-S; Barabási, A-L; Kapatral, V; Wiest, O; Oltvai, Z N

    2010-01-19

    Advances in genome analysis, network biology, and computational chemistry have the potential to revolutionize drug discovery by combining system-level identification of drug targets with the atomistic modeling of small molecules capable of modulating their activity. To demonstrate the effectiveness of such a discovery pipeline, we deduced common antibiotic targets in Escherichia coli and Staphylococcus aureus by identifying shared tissue-specific or uniformly essential metabolic reactions in their metabolic networks. We then predicted through virtual screening dozens of potential inhibitors for several enzymes of these reactions and showed experimentally that a subset of these inhibited both enzyme activities in vitro and bacterial cell viability. This blueprint is applicable for any sequenced organism with a high-quality metabolic reconstruction and suggests a general strategy for strain-specific anti-infective therapy.

  10. 76 FR 73570 - Pipeline Safety: Miscellaneous Changes to Pipeline Safety Regulations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-29

    ... pipeline facilities to facilitate the removal of liquids and other materials from the gas stream. These... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Parts... Changes to Pipeline Safety Regulations AGENCY: Pipeline and Hazardous Materials Safety Administration...

  11. PipelineDog: a simple and flexible graphic pipeline construction and maintenance tool.

    PubMed

    Zhou, Anbo; Zhang, Yeting; Sun, Yazhou; Xing, Jinchuan

    2018-05-01

    Analysis pipelines are an essential part of bioinformatics research, and ad hoc pipelines are frequently created by researchers for prototyping and proof-of-concept purposes. However, most existing pipeline management systems or workflow engines are too complex for rapid prototyping or learning the pipeline concept. A lightweight, user-friendly and flexible solution is thus desirable. In this study, we developed a new pipeline construction and maintenance tool, PipelineDog. This is a web-based integrated development environment with a modern web graphical user interface. It offers cross-platform compatibility, project management capabilities, code formatting and error checking functions, and an online repository. It uses an easy-to-read/write script system that encourages code reuse. With the online repository, it also encourages sharing of pipelines, which enhances analysis reproducibility and accountability. For most users, PipelineDog requires no software installation. Overall, this web application provides a way to rapidly create and easily manage pipelines. The PipelineDog web app is freely available at http://web.pipeline.dog. The command line version is available at http://www.npmjs.com/package/pipelinedog and the online repository at http://repo.pipeline.dog. ysun@kean.edu or xing@biology.rutgers.edu or ysun@diagnoa.com. Supplementary data are available at Bioinformatics online.

  12. MutAid: Sanger and NGS Based Integrated Pipeline for Mutation Identification, Validation and Annotation in Human Molecular Genetics.

    PubMed

    Pandey, Ram Vinay; Pabinger, Stephan; Kriegner, Albert; Weinhäusel, Andreas

    2016-01-01

    Traditional Sanger sequencing as well as Next-Generation Sequencing have been used for the identification of disease causing mutations in human molecular research. The majority of currently available tools are developed for research and explorative purposes and often do not provide a complete, efficient, one-stop solution. As the focus of currently developed tools is mainly on NGS data analysis, no integrative solution for the analysis of Sanger data is provided and consequently a one-stop solution to analyze reads from both sequencing platforms is not available. We have therefore developed a new pipeline called MutAid to analyze and interpret raw sequencing data produced by Sanger or several NGS sequencing platforms. It performs format conversion, base calling, quality trimming, filtering, read mapping, variant calling, variant annotation and analysis of Sanger and NGS data under a single platform. It is capable of analyzing reads from multiple patients in a single run to create a list of potential disease causing base substitutions as well as insertions and deletions. MutAid has been developed for expert and non-expert users and supports four sequencing platforms including Sanger, Illumina, 454 and Ion Torrent. Furthermore, for NGS data analysis, five read mappers including BWA, TMAP, Bowtie, Bowtie2 and GSNAP and four variant callers including GATK-HaplotypeCaller, SAMTOOLS, Freebayes and VarScan2 pipelines are supported. MutAid is freely available at https://sourceforge.net/projects/mutaid.

  13. MutAid: Sanger and NGS Based Integrated Pipeline for Mutation Identification, Validation and Annotation in Human Molecular Genetics

    PubMed Central

    Pandey, Ram Vinay; Pabinger, Stephan; Kriegner, Albert; Weinhäusel, Andreas

    2016-01-01

    Traditional Sanger sequencing as well as Next-Generation Sequencing have been used for the identification of disease causing mutations in human molecular research. The majority of currently available tools are developed for research and explorative purposes and often do not provide a complete, efficient, one-stop solution. As the focus of currently developed tools is mainly on NGS data analysis, no integrative solution for the analysis of Sanger data is provided and consequently a one-stop solution to analyze reads from both sequencing platforms is not available. We have therefore developed a new pipeline called MutAid to analyze and interpret raw sequencing data produced by Sanger or several NGS sequencing platforms. It performs format conversion, base calling, quality trimming, filtering, read mapping, variant calling, variant annotation and analysis of Sanger and NGS data under a single platform. It is capable of analyzing reads from multiple patients in a single run to create a list of potential disease causing base substitutions as well as insertions and deletions. MutAid has been developed for expert and non-expert users and supports four sequencing platforms including Sanger, Illumina, 454 and Ion Torrent. Furthermore, for NGS data analysis, five read mappers including BWA, TMAP, Bowtie, Bowtie2 and GSNAP and four variant callers including GATK-HaplotypeCaller, SAMTOOLS, Freebayes and VarScan2 pipelines are supported. MutAid is freely available at https://sourceforge.net/projects/mutaid. PMID:26840129

  14. Aptamers as tools for target prioritization and lead identification.

    PubMed

    Burgstaller, Petra; Girod, Anne; Blind, Michael

    2002-12-15

    The increasing number of potential drug target candidates has driven the development of novel technologies designed to identify functionally important targets and enhance the subsequent lead discovery process. Highly specific synthetic nucleic acid ligands--also known as aptamers--offer a new exciting route in the drug discovery process by linking target validation directly with HTS. Recently, aptamers have proven to be valuable tools for modulating the function of endogenous cellular proteins in their natural environment. A set of technologies has been developed to use these sophisticated ligands for the validation of potential drug targets in disease models. Moreover, aptamers that are specific antagonists of protein function can act as substitute interaction partners in HTS assays to facilitate the identification of small-molecule lead compounds.

  15. Computational Pipeline for NIRS-EEG Joint Imaging of tDCS-Evoked Cerebral Responses-An Application in Ischemic Stroke.

    PubMed

    Guhathakurta, Debarpan; Dutta, Anirban

    2016-01-01

    Transcranial direct current stimulation (tDCS) modulates cortical neural activity and hemodynamics. Electrophysiological methods (electroencephalography-EEG) measure neural activity while optical methods (near-infrared spectroscopy-NIRS) measure hemodynamics coupled through neurovascular coupling (NVC). Assessment of NVC requires development of NIRS-EEG joint-imaging sensor montages that are sensitive to the tDCS affected brain areas. In this methods paper, we present a software pipeline incorporating freely available software tools that can be used to target vascular territories with tDCS and develop a NIRS-EEG probe for joint imaging of tDCS-evoked responses. We apply this software pipeline to target primarily the outer convexity of the brain territory (superficial divisions) of the middle cerebral artery (MCA). We then present a computational method based on Empirical Mode Decomposition of NIRS and EEG time series into a set of intrinsic mode functions (IMFs), and then perform a cross-correlation analysis on those IMFs from NIRS and EEG signals to model NVC at the lesional and contralesional hemispheres of an ischemic stroke patient. For the contralesional hemisphere, a strong positive correlation between IMFs of regional cerebral hemoglobin oxygen saturation and the log-transformed mean-power time-series of IMFs for EEG with a lag of about -15 s was found after a cumulative 550 s stimulation of anodal tDCS. It is postulated that system identification, for example using a continuous-time autoregressive model, of this coupling relation under tDCS perturbation may provide spatiotemporal discriminatory features for the identification of ischemia. Furthermore, portable NIRS-EEG joint imaging can be incorporated into brain computer interfaces to monitor tDCS-facilitated neurointervention as well as cortical reorganization.
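    The lagged-correlation step of this analysis can be illustrated without the EMD stage: slide one series against the other and keep the lag with the strongest Pearson correlation. A minimal sketch on synthetic data (the real pipeline would first decompose each signal into IMFs and correlate those):

```python
def pearson(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def best_lag(nirs, eeg, max_lag):
    """Slide one series against the other and keep the lag with the strongest
    correlation; a negative lag means the NIRS series trails the EEG series."""
    best = (0, -2.0)
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            r = pearson(nirs[-lag:], eeg[:lag])
        elif lag > 0:
            r = pearson(nirs[:-lag], eeg[lag:])
        else:
            r = pearson(nirs, eeg)
        if r > best[1]:
            best = (lag, r)
    return best

# Synthetic series: the "hemodynamic" trace repeats the "neural" one
# delayed by 3 samples.
eeg = [0, 1, 2, 3, 2, 1, 0, 1, 2, 3, 2, 1, 0, 1, 2, 3, 2, 1, 0, 0]
nirs = eeg[-3:] + eeg[:-3]
lag, r = best_lag(nirs, eeg, 5)
print(lag)
```

    The study's reported lag of about -15 s has the same reading: the hemodynamic response follows the EEG power changes.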

  16. Computational Pipeline for NIRS-EEG Joint Imaging of tDCS-Evoked Cerebral Responses—An Application in Ischemic Stroke

    PubMed Central

    Guhathakurta, Debarpan; Dutta, Anirban

    2016-01-01

    Transcranial direct current stimulation (tDCS) modulates cortical neural activity and hemodynamics. Electrophysiological methods (electroencephalography-EEG) measure neural activity while optical methods (near-infrared spectroscopy-NIRS) measure hemodynamics coupled through neurovascular coupling (NVC). Assessment of NVC requires development of NIRS-EEG joint-imaging sensor montages that are sensitive to the tDCS affected brain areas. In this methods paper, we present a software pipeline incorporating freely available software tools that can be used to target vascular territories with tDCS and develop a NIRS-EEG probe for joint imaging of tDCS-evoked responses. We apply this software pipeline to target primarily the outer convexity of the brain territory (superficial divisions) of the middle cerebral artery (MCA). We then present a computational method based on Empirical Mode Decomposition of NIRS and EEG time series into a set of intrinsic mode functions (IMFs), and then perform a cross-correlation analysis on those IMFs from NIRS and EEG signals to model NVC at the lesional and contralesional hemispheres of an ischemic stroke patient. For the contralesional hemisphere, a strong positive correlation between IMFs of regional cerebral hemoglobin oxygen saturation and the log-transformed mean-power time-series of IMFs for EEG with a lag of about −15 s was found after a cumulative 550 s stimulation of anodal tDCS. It is postulated that system identification, for example using a continuous-time autoregressive model, of this coupling relation under tDCS perturbation may provide spatiotemporal discriminatory features for the identification of ischemia. Furthermore, portable NIRS-EEG joint imaging can be incorporated into brain computer interfaces to monitor tDCS-facilitated neurointervention as well as cortical reorganization. PMID:27378836

  17. United States petroleum pipelines: An empirical analysis of pipeline sizing

    NASA Astrophysics Data System (ADS)

    Coburn, L. L.

    1980-12-01

    The undersizing theory hypothesizes that integrated oil companies have a strong economic incentive to undersize the petroleum pipelines they own and ship over, so that some of the demand must utilize higher cost alternatives. The DOJ theory posits that excess or monopoly profits are earned due to the natural monopoly characteristics of petroleum pipelines and the existence of market power in some pipelines at either the upstream or downstream market. The theory holds that independent petroleum pipelines owned by companies not otherwise affiliated with the petroleum industry (independent pipelines) do not have these incentives, and all the efficiencies of pipeline transportation are passed to the ultimate consumer. Integrated oil companies, on the other hand, keep these cost efficiencies for themselves in the form of excess profits.

  18. ToTem: a tool for variant calling pipeline optimization.

    PubMed

    Tom, Nikola; Tom, Ondrej; Malcikova, Jitka; Pavlova, Sarka; Kubesova, Blanka; Rausch, Tobias; Kolarik, Miroslav; Benes, Vladimir; Bystry, Vojtech; Pospisilova, Sarka

    2018-06-26

    High-throughput bioinformatics analyses of next generation sequencing (NGS) data often require challenging pipeline optimization. The key problem is choosing appropriate tools and selecting the best parameters for optimal precision and recall. Here we introduce ToTem, a tool for automated pipeline optimization. ToTem is a stand-alone web application with a comprehensive graphical user interface (GUI). ToTem is written in Java and PHP with an underlying connection to a MySQL database. Its primary role is to automatically generate, execute and benchmark different variant calling pipeline settings. Our tool allows an analysis to be started from any level of the process and with the possibility of plugging in almost any tool or code. To prevent over-fitting of pipeline parameters, ToTem ensures their reproducibility by using cross-validation techniques that penalize the final precision, recall and F-measure. The results are interpreted as interactive graphs and tables allowing an optimal pipeline to be selected, based on the user's priorities. Using ToTem, we were able to optimize somatic variant calling from ultra-deep targeted gene sequencing (TGS) data and germline variant detection in whole genome sequencing (WGS) data. ToTem is a tool for automated pipeline optimization which is freely available as a web application at https://totem.software.
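    The benchmarking criterion described above can be sketched as follows: score each candidate pipeline setting by its mean F-measure across cross-validation folds, so a setting that excels on only one fold is penalized. The setting names and (tp, fp, fn) counts below are hypothetical, not ToTem's actual configuration:

```python
from statistics import mean

def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def pick_setting(results_by_setting):
    """Score each pipeline setting by its mean F-measure across CV folds;
    averaging over folds penalizes settings that shine on only one fold."""
    scored = {setting: mean(f_measure(*fold) for fold in folds)
              for setting, folds in results_by_setting.items()}
    return max(scored, key=scored.get), scored

# Hypothetical (tp, fp, fn) counts for two variant-calling settings, 3 folds each.
results = {
    "caller_default": [(90, 10, 20), (88, 12, 25), (91, 9, 18)],
    "caller_tuned": [(95, 30, 5), (60, 5, 50), (97, 40, 4)],
}
best, scores = pick_setting(results)
print(best)
```

    Here the "tuned" setting wins on two folds but collapses on the third, so the consistently good default setting is selected.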

  19. Accurate and exact CNV identification from targeted high-throughput sequence data.

    PubMed

    Nord, Alex S; Lee, Ming; King, Mary-Claire; Walsh, Tom

    2011-04-12

    Massively parallel sequencing of barcoded DNA samples significantly increases screening efficiency for clinically important genes. Short read aligners are well suited to single nucleotide and indel detection. However, methods for CNV detection from targeted enrichment are lacking. We present a method combining coverage with map information for the identification of deletions and duplications in targeted sequence data. Sequencing data is first scanned for gains and losses using a comparison of normalized coverage data between samples. CNV calls are confirmed by testing for a signature of sequences that span the CNV breakpoint. With our method, CNVs can be identified regardless of whether breakpoints are within regions targeted for sequencing. For CNVs where at least one breakpoint is within targeted sequence, exact CNV breakpoints can be identified. In a test data set of 96 subjects sequenced across ~1 Mb genomic sequence using multiplexing technology, our method detected mutations as small as 31 bp, predicted quantitative copy count, and had a low false-positive rate. Application of this method allows for identification of gains and losses in targeted sequence data, providing comprehensive mutation screening when combined with a short read aligner.
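    The first-pass coverage scan described above can be sketched as a comparison of each sample's normalized per-window depth against the across-sample median; the breakpoint-spanning-read confirmation step is omitted. Thresholds and coverage values below are illustrative, not the authors' parameters:

```python
import math
from statistics import median

def normalize(cov):
    """Scale per-window coverage by the sample total so samples are comparable."""
    total = sum(cov)
    return [c / total for c in cov]

def call_cnv(samples, test_idx, gain=0.58, loss=-0.7):
    """Flag windows where the test sample's normalized coverage deviates from
    the across-sample median (log2 ratio). Thresholds are illustrative:
    +0.58 ~ one extra copy; -0.7 chosen loosely for a heterozygous loss."""
    norm = [normalize(s) for s in samples]
    calls = []
    for w in range(len(samples[0])):
        ref = median(norm[i][w] for i in range(len(samples)) if i != test_idx)
        ratio = math.log2(norm[test_idx][w] / ref)
        if ratio >= gain:
            calls.append((w, "dup", round(ratio, 2)))
        elif ratio <= loss:
            calls.append((w, "del", round(ratio, 2)))
    return calls

# Hypothetical coverage for 3 samples over 5 windows; sample 0 has roughly
# half the expected depth in window 2 (a candidate heterozygous deletion).
samples = [
    [100, 100, 50, 100, 100],
    [110, 105, 108, 102, 100],
    [95, 100, 101, 99, 100],
]
print(call_cnv(samples, 0))
```

    In the published method, each candidate call is then confirmed by searching for reads whose alignments span the putative breakpoint.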

  20. Data as a Service: A Seismic Web Service Pipeline

    NASA Astrophysics Data System (ADS)

    Martinez, E.

    2016-12-01

    Publishing data as a service pipeline provides an improved, dynamic approach over static data archives. A service pipeline is a collection of micro web services that each perform a specific task and expose the results of that task. Structured request/response formats allow micro web services to be chained together into a service pipeline to provide more complex results. The U.S. Geological Survey adopted service pipelines to publish seismic hazard and design data supporting both specific and generalized audiences. The seismic web service pipeline starts at source data and exposes probabilistic and deterministic hazard curves, response spectra, risk-targeted ground motions, and seismic design provision metadata. This pipeline supports public/private organizations and individual engineers/researchers. Publishing data as a service pipeline provides a variety of benefits. Exposing the component services enables advanced users to inspect or use the data at each processing step. Exposing a composite service enables new users quick access to published data with a very low barrier to entry. Advanced users may re-use micro web services by chaining them in new ways or injecting new micro services into the pipeline. This allows users to test hypotheses and compare their results to published results. Exposing data at each step in the pipeline enables users to review and validate the data and process more quickly and accurately. Making the source code open source, per USGS policy, further enables this transparency. Each micro service may be scaled independently of any other micro service. This ensures data remains available and timely in a cost-effective manner regardless of load. Additionally, if a new or more efficient approach to processing the data is discovered, this new approach may replace the old approach at any time, keeping the pipeline running while not affecting other micro services.
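    The chaining idea can be sketched with plain functions standing in for micro web services: each consumes and produces a structured payload, so every intermediate result stays inspectable. Service names and values here are hypothetical, not the USGS API:

```python
def hazard_curve_service(request):
    # Hypothetical stand-in for a hazard-curve micro service.
    return {**request, "hazard": 0.12}

def design_value_service(request):
    # Hypothetical stand-in that derives a design value from the hazard result.
    return {**request, "design_value": request["hazard"] * 1.5}

def run_pipeline(services, request):
    """Chain micro services: each response becomes the next request, so any
    intermediate payload can also be exposed or inspected on its own."""
    for service in services:
        request = service(request)
    return request

response = run_pipeline(
    [hazard_curve_service, design_value_service],
    {"latitude": 37.4, "longitude": -122.1},  # hypothetical site query
)
print(response["design_value"])
```

    Because each stage only depends on the payload contract, a stage can be replaced or scaled independently without touching the rest of the pipeline, which is the property the abstract highlights.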

  1. RNAi screen for rapid therapeutic target identification in leukemia patients

    PubMed Central

    Tyner, Jeffrey W.; Deininger, Michael W.; Loriaux, Marc M.; Chang, Bill H.; Gotlib, Jason R.; Willis, Stephanie G.; Erickson, Heidi; Kovacsovics, Tibor; O'Hare, Thomas; Heinrich, Michael C.; Druker, Brian J.

    2009-01-01

    Targeted therapy has vastly improved outcomes in certain types of cancer. Extension of this paradigm across a broad spectrum of malignancies will require an efficient method to determine the molecular vulnerabilities of cancerous cells. Improvements in sequencing technology will soon enable high-throughput sequencing of entire genomes of cancer patients; however, determining the relevance of identified sequence variants will require complementary functional analyses. Here, we report an RNAi-assisted protein target identification (RAPID) technology that individually assesses targeting of each member of the tyrosine kinase gene family. We demonstrate that RAPID screening of primary leukemia cells from 30 patients identifies targets that are critical to survival of the malignant cells from 10 of these individuals. We identify known, activating mutations in JAK2 and K-RAS, as well as patient-specific sensitivity to down-regulation of FLT1, CSF1R, PDGFR, ROR1, EPHA4/5, JAK1/3, LMTK3, LYN, FYN, PTK2B, and N-RAS. We also describe a previously undescribed, somatic, activating mutation in the thrombopoietin receptor that is sensitive to downstream pharmacologic inhibition. Hence, the RAPID technique can quickly identify molecular vulnerabilities in malignant cells. Combination of this technique with whole-genome sequencing will represent an ideal tool for oncogenic target identification such that specific therapies can be matched with individual patients. PMID:19433805

  2. The confidence-accuracy relationship in eyewitness identification: effects of lineup instructions, foil similarity, and target-absent base rates.

    PubMed

    Brewer, Neil; Wells, Gary L

    2006-03-01

    Discriminating accurate from mistaken eyewitness identifications is a major issue facing criminal justice systems. This study examined whether eyewitness confidence assists such decisions under a variety of conditions using a confidence-accuracy (CA) calibration approach. Participants (N = 1,200) viewed a simulated crime and attempted 2 separate identifications from 8-person target-present or target-absent lineups. Confidence and accuracy were calibrated for choosers (but not nonchoosers) for both targets under all conditions. Lower overconfidence was associated with higher diagnosticity, lower target-absent base rates, and shorter identification latencies. Although researchers agree that courtroom expressions of confidence are uninformative, our findings indicate that confidence assessments obtained immediately after a positive identification can provide a useful guide for investigators about the likely accuracy of an identification.
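    A confidence-accuracy calibration curve of the kind used in this study can be sketched by binning decisions on stated confidence and comparing mean confidence with the proportion correct in each bin; overconfidence is the gap between overall mean confidence and overall accuracy. The witness data below are hypothetical:

```python
def calibration(confidences, correct, n_bins=5):
    """Bin identification decisions by stated confidence and compare mean
    confidence with proportion correct per bin (a calibration curve)."""
    bins = [[] for _ in range(n_bins)]
    for c, ok in zip(confidences, correct):
        idx = min(int(c * n_bins), n_bins - 1)  # confidence assumed in [0, 1]
        bins[idx].append((c, ok))
    curve = []
    for b in bins:
        if b:
            mean_conf = sum(c for c, _ in b) / len(b)
            accuracy = sum(ok for _, ok in b) / len(b)
            curve.append((round(mean_conf, 2), round(accuracy, 2)))
    # Overconfidence: mean confidence minus overall accuracy (positive
    # values mean witnesses are more confident than they are accurate).
    overconfidence = sum(confidences) / len(confidences) - sum(correct) / len(correct)
    return curve, overconfidence

# Hypothetical choosers: stated confidence and whether the pick was correct.
conf = [0.9, 0.8, 0.95, 0.6, 0.5, 0.3, 0.7, 0.85]
ok = [1, 1, 1, 0, 1, 0, 1, 0]
curve, over = calibration(conf, ok)
print(curve, over)
```

    Perfect calibration would put every point on the diagonal (mean confidence equals accuracy); the study examines how far choosers and nonchoosers deviate from that line under different lineup conditions.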

  3. Advances in identification and validation of protein targets of natural products without chemical modification.

    PubMed

    Chang, J; Kim, Y; Kwon, H J

    2016-05-04

    Covering: up to February 2016. Identification of the target proteins of natural products is pivotal to understanding the mechanisms of action to develop natural products for use as molecular probes and potential therapeutic drugs. Affinity chromatography of immobilized natural products has been conventionally used to identify target proteins, and has yielded good results. However, this method has limitations, in that labeling or tagging for immobilization and affinity purification often result in reduced or altered activity of the natural product. New strategies have recently been developed and applied to identify the target proteins of natural products and synthetic small molecules without chemical modification of the natural product. These direct and indirect methods for target identification of label-free natural products include drug affinity responsive target stability (DARTS), stability of proteins from rates of oxidation (SPROX), cellular thermal shift assay (CETSA), thermal proteome profiling (TPP), and bioinformatics-based analysis of connectivity. This review focuses on and reports case studies of the latest advances in target protein identification methods for label-free natural products. The integration of newly developed technologies will provide new insights and highlight the value of natural products for use as biological probes and new drug candidates.

  4. Identification of targets for rational pharmacological therapy in childhood craniopharyngioma.

    PubMed

    Gump, Jacob M; Donson, Andrew M; Birks, Diane K; Amani, Vladimir M; Rao, Karun K; Griesinger, Andrea M; Kleinschmidt-DeMasters, B K; Johnston, James M; Anderson, Richard C E; Rosenfeld, Amy; Handler, Michael; Gore, Lia; Foreman, Nicholas; Hankinson, Todd C

    2015-05-21

    Pediatric adamantinomatous craniopharyngioma (ACP) is a histologically benign but clinically aggressive brain tumor that arises from the sellar/suprasellar region. Despite a high survival rate with current surgical and radiation therapy (75-95% at 10 years), ACP is associated with debilitating visual, endocrine, neurocognitive and psychological morbidity, resulting in exceptionally poor quality of life for survivors. Identification of an effective pharmacological therapy could drastically decrease morbidity and improve long-term outcomes for children with ACP. Using mRNA microarray gene expression analysis of 15 ACP patient samples, we have found several pharmaceutical targets that are significantly and consistently overexpressed in our panel of ACP relative to other pediatric brain tumors, pituitary tumors, normal pituitary and normal brain tissue. Among the most highly expressed are several targets of the kinase inhibitor dasatinib - LCK, EPHA2 and SRC; EGFR pathway targets - AREG, EGFR and ERBB3; and other potentially actionable cancer targets - SHH, MMP9 and MMP12. We confirm by western blot that a subset of these targets is highly expressed in ACP primary tumor samples. We report here the first published transcriptome for ACP and the identification of targets for rational therapy. Experimental drugs targeting each of these gene products are currently being tested clinically and pre-clinically for the treatment of other tumor types. This study provides a rationale for further pre-clinical and clinical studies of novel pharmacological treatments for ACP. Development of mouse and cell culture models for ACP will further enable the translation of these targets from the lab to the clinic, potentially ushering in a new era in the treatment of ACP.

  5. Rapid, Vehicle-Based Identification of Location and Magnitude of Urban Natural Gas Pipeline Leaks.

    PubMed

    von Fischer, Joseph C; Cooley, Daniel; Chamberlain, Sam; Gaylord, Adam; Griebenow, Claire J; Hamburg, Steven P; Salo, Jessica; Schumacher, Russ; Theobald, David; Ham, Jay

    2017-04-04

    Information about the location and magnitudes of natural gas (NG) leaks from urban distribution pipelines is important for minimizing greenhouse gas emissions and optimizing investment in pipeline management. To enable rapid collection of such data, we developed a relatively simple method using high-precision methane analyzers in Google Street View cars. Our data indicate that this automated leak survey system can document patterns in leak location and magnitude within and among cities, even without wind data. We found that urban areas with prevalent corrosion-prone distribution lines (Boston, MA, Staten Island, NY, and Syracuse, NY), leaked approximately 25-fold more methane than cities with more modern pipeline materials (Burlington, VT, and Indianapolis, IN). Although this mobile monitoring method produces conservative estimates of leak rates and leak counts, it can still help prioritize both leak repairs and replacement of leak-prone sections of distribution lines, thus minimizing methane emissions over short and long terms.

  6. 75 FR 13342 - Pipeline Safety: Workshop on Distribution Pipeline Construction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-19

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... natural gas distribution construction. Natural gas distribution pipelines are subject to a unique subset... distribution pipeline construction practices. This workshop will focus solely on natural gas distribution...

  7. A graph-based approach for designing extensible pipelines

    PubMed Central

    2012-01-01

    The system has been tested on Linux and Windows platforms. Conclusions: Our graph-based approach enables the automatic creation of pipelines by compiling a specialised set of tools on demand, depending on the functionality required. It also allows the implementation of extensible and low-maintenance pipelines and contributes towards consolidating openness and collaboration in bioinformatics systems. It is targeted at pipeline developers and is suited for implementing applications with sequential execution steps and combined functionalities. In the format conversion application, the automatic combination of conversion tools increased both the number of possible conversions available to the user and the extensibility of the system to allow for future updates with new file formats. PMID:22788675
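    The on-demand composition idea above can be sketched as a shortest-path search over a graph whose nodes are file formats and whose edges are conversion tools. The format names and tool names below are invented placeholders, not the system's actual registry.

```python
from collections import deque

# Sketch of the graph-based idea: tools are edges between file formats;
# a conversion pipeline is a shortest path, composed on demand.
# Formats and tool names are illustrative placeholders.
tools = {("fasta", "phylip"): "f2p",
         ("phylip", "nexus"): "p2n",
         ("fasta", "clustal"): "f2c"}

def compile_pipeline(src, dst):
    # Breadth-first search returns the shortest chain of tool names,
    # or None when no conversion route exists.
    graph = {}
    for (a, b), name in tools.items():
        graph.setdefault(a, []).append((b, name))
    queue, seen = deque([(src, [])]), {src}
    while queue:
        fmt, path = queue.popleft()
        if fmt == dst:
            return path
        for nxt, name in graph.get(fmt, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None
```

    Adding a new format then only requires registering one edge; every composite route through it becomes available automatically, which is the extensibility property the abstract describes.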

  8. Identification of control targets in Boolean molecular network models via computational algebra.

    PubMed

    Murrugarra, David; Veliz-Cuba, Alan; Aguilar, Boris; Laubenbacher, Reinhard

    2016-09-23

    Many problems in biomedicine and other areas of the life sciences can be characterized as control problems, with the goal of finding strategies to change a disease or otherwise undesirable state of a biological system into another, more desirable, state through an intervention, such as a drug or other therapeutic treatment. The identification of such strategies is typically based on a mathematical model of the process to be altered through targeted control inputs. This paper focuses on processes at the molecular level that determine the state of an individual cell, involving signaling or gene regulation. The mathematical model type considered is that of Boolean networks. The potential control targets can be represented by a set of nodes and edges that can be manipulated to produce a desired effect on the system. This paper presents a method for the identification of potential intervention targets in Boolean molecular network models using algebraic techniques. The approach exploits an algebraic representation of Boolean networks to encode the control candidates in the network wiring diagram as the solutions of a system of polynomial equations, and then uses computational algebra techniques to find such controllers. The control methods in this paper are validated through the identification of combinatorial interventions in the signaling pathways of previously reported control targets in two well-studied systems, a p53-mdm2 network and a blood T cell lymphocyte granular leukemia survival signaling network. Supplementary data is available online and our code in Macaulay2 and Matlab is available via http://www.ms.uky.edu/~dmu228/ControlAlg . This paper presents a novel method for the identification of intervention targets in Boolean network models. The results in this paper show that the proposed methods are useful and efficient for moderately large networks.
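    The control problem described above can be made concrete on a toy network. The update rules below are arbitrary illustrations (not the paper's p53-mdm2 model), and brute-force search over single-node constant interventions stands in for the paper's computational-algebra solver, which scales far better.

```python
from itertools import product

# Toy Boolean network with synchronous updates. Rules and node names are
# invented for illustration only.
rules = {
    "x": lambda s: s["y"] and not s["z"],
    "y": lambda s: s["x"] or s["z"],
    "z": lambda s: s["x"],
}

def attractor(state, pinned):
    # Iterate the synchronous dynamics, holding pinned nodes constant,
    # until a previously seen state recurs (fixed point or cycle).
    seen = []
    while state not in seen:
        seen.append(state)
        nxt = {v: bool(f(dict(zip(rules, state)))) for v, f in rules.items()}
        nxt.update(pinned)
        state = tuple(nxt[v] for v in rules)
    return state

def find_control(target):
    # Try pinning each node to a constant so that every initial state
    # reaches the desired target fixed point.
    for node, value in product(rules, [False, True]):
        pinned = {node: value}
        if all(attractor(s, pinned) == target
               for s in product([False, True], repeat=len(rules))):
            return node, value
    return None
```

    In this toy example, pinning node "x" to False drives every initial state to the all-False fixed point, so `find_control((False, False, False))` identifies ("x", False) as a control target.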

  9. The Druggable Pocketome of Corynebacterium diphtheriae: A New Approach for in silico Putative Druggable Targets

    PubMed Central

    Hassan, Syed S.; Jamal, Syed B.; Radusky, Leandro G.; Tiwari, Sandeep; Ullah, Asad; Ali, Javed; Behramand; de Carvalho, Paulo V. S. D.; Shams, Rida; Khan, Sabir; Figueiredo, Henrique C. P.; Barh, Debmalya; Ghosh, Preetam; Silva, Artur; Baumbach, Jan; Röttger, Richard; Turjanski, Adrián G.; Azevedo, Vasco A. C.

    2018-01-01

    Diphtheria, an acute and highly infectious disease previously regarded as endemic in nature but now vaccine-preventable, is caused by Corynebacterium diphtheriae (Cd). In this work, we used an in silico approach across the 13 complete genome sequences of C. diphtheriae followed by a computational assessment of structural information of the binding sites to characterize the “pocketome druggability.” To this end, we first computed the “modelome” (3D structures of a complete genome) of a randomly selected reference strain Cd NCTC13129; that had 13,763 open reading frames (ORFs) and resulted in 1,253 (∼9%) structure models. The amino acid sequences of these modeled structures were compared with the remaining 12 genomes and consequently, 438 conserved protein sequences were obtained. The RCSB-PDB database was consulted to check the template structures for these conserved proteins and as a result, 401 adequate 3D models were obtained. We subsequently predicted the protein pockets for the obtained set of models and kept only the conserved pockets that had highly druggable (HD) values (137 across all strains). Later, an off-target host homology analysis was performed considering the human proteome using the NCBI database. Furthermore, a gene essentiality analysis was carried out, yielding a final set of 10 conserved targets possessing highly druggable protein pockets. To check the target identification robustness of the pipeline used in this work, we crosschecked the final target list with another in-house target identification approach for C. diphtheriae, thereby obtaining three common targets; these were: hisE-phosphoribosyl-ATP pyrophosphatase, glpX-fructose 1,6-bisphosphatase II, and rpsH-30S ribosomal protein S8. Our predicted results suggest that the in silico approach used could potentially aid in experimental polypharmacological target determination in C. diphtheriae and other pathogens, thereby, might complement the existing and new drug-discovery pipelines.
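    The successive filters described above (conservation, pocket druggability, off-target host homology, essentiality) amount to a filtering cascade over candidate proteins. The field names and sample records below are invented placeholders, not the study's data.

```python
# Hedged sketch of the filtering cascade: conserved proteins -> highly
# druggable (HD) pockets -> no human homolog -> essential genes.
# All field names and sample values are illustrative assumptions.
def select_targets(proteins):
    hits = [p for p in proteins if p["conserved_in_all_strains"]]
    hits = [p for p in hits if p["pocket_druggability"] == "HD"]
    hits = [p for p in hits if not p["human_homolog"]]
    return [p["name"] for p in hits if p["essential"]]

sample = [
    {"name": "hisE", "conserved_in_all_strains": True,
     "pocket_druggability": "HD", "human_homolog": False, "essential": True},
    {"name": "offTarget", "conserved_in_all_strains": True,
     "pocket_druggability": "HD", "human_homolog": True, "essential": True},
    {"name": "lowDrug", "conserved_in_all_strains": True,
     "pocket_druggability": "LD", "human_homolog": False, "essential": True},
]
```

    Each filter only removes candidates, so the order affects efficiency but not the final set, which is why the cheapest sequence comparisons can run first and the costly structural assessment last.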

  10. Reduced-order model for underwater target identification using proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Ramesh, Sai Sudha; Lim, Kian Meng

    2017-03-01

    Research on underwater acoustics has seen major development over the past decade due to its widespread applications in domains such as underwater communication/navigation (SONAR), seismic exploration and oceanography. In particular, acoustic signatures from partially or fully buried targets can be used in the identification of buried mines for mine countermeasures (MCM). Although there exist several techniques to identify target properties based on SONAR images and acoustic signatures, these methods first employ a feature extraction method to represent the dominant characteristics of a data set, followed by the use of an appropriate classifier based on neural networks or the relevance vector machine. The aim of the present study is to demonstrate the applications of the proper orthogonal decomposition (POD) technique in capturing dominant features of a set of scattered pressure signals, and the subsequent use of the POD modes and coefficients in the identification of partially buried underwater target parameters such as its location, size and material density. Several numerical examples are presented to demonstrate the performance of the system identification method based on POD. Although the present study is based on a 2D acoustic model, the method can be easily extended to 3D models and thereby enables cost-effective representations of large-scale data.
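    The core POD step can be sketched with the snapshot method: stack signals as columns and take an SVD. The synthetic "pressure signals" below are an assumed setup for illustration, not the paper's scattering data.

```python
import numpy as np

# Minimal POD sketch via the snapshot method (assumed synthetic data, not
# the paper's solver): the left singular vectors are the POD modes, and a
# signal's coefficients in that basis form its low-dimensional feature.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
# Synthetic signals built from two latent modes plus small noise.
snapshots = np.column_stack([
    a * np.sin(2 * np.pi * 5 * t) + b * np.cos(2 * np.pi * 9 * t)
    + 0.01 * rng.standard_normal(t.size)
    for a, b in [(1.0, 0.2), (0.8, 0.5), (0.3, 1.0), (0.5, 0.7)]
])

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
modes = U[:, :2]                # dominant POD modes
coeffs = modes.T @ snapshots    # low-dimensional representation
# Fraction of signal energy captured by the first two modes.
energy = (s[:2] ** 2).sum() / (s ** 2).sum()
```

    In a target-identification setting the `coeffs` vectors, rather than the raw signals, would be matched against a library of known target parameters, which is the dimensionality-reduction benefit the abstract describes.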

  11. RGAugury: a pipeline for genome-wide prediction of resistance gene analogs (RGAs) in plants.

    PubMed

    Li, Pingchuan; Quan, Xiande; Jia, Gaofeng; Xiao, Jin; Cloutier, Sylvie; You, Frank M

    2016-11-02

    Resistance gene analogs (RGAs), such as NBS-encoding proteins, receptor-like protein kinases (RLKs) and receptor-like proteins (RLPs), are potential R-genes that contain specific conserved domains and motifs. Thus, RGAs can be predicted based on their conserved structural features using bioinformatics tools. Computer programs have been developed for the identification of individual domains and motifs from the protein sequences of RGAs but none offer a systematic assessment of the different types of RGAs. A user-friendly and efficient pipeline is needed for large-scale genome-wide RGA predictions of the growing number of sequenced plant genomes. An integrative pipeline, named RGAugury, was developed to automate RGA prediction. The pipeline first identifies RGA-related protein domains and motifs, namely nucleotide binding site (NB-ARC), leucine rich repeat (LRR), transmembrane (TM), serine/threonine and tyrosine kinase (STTK), lysin motif (LysM), coiled-coil (CC) and Toll/Interleukin-1 receptor (TIR). RGA candidates are identified and classified into four major families based on the presence of combinations of these RGA domains and motifs: NBS-encoding, TM-CC, and membrane associated RLP and RLK. All time-consuming analyses of the pipeline are parallelized to improve performance. The pipeline was evaluated using the well-annotated Arabidopsis genome. A total of 98.5%, 85.2%, and 100% of the reported NBS-encoding genes, membrane associated RLPs and RLKs were validated, respectively. The pipeline was also successfully applied to predict RGAs for 50 sequenced plant genomes. A user-friendly web interface was implemented to ease command line operations, facilitate visualization and simplify result management for multiple datasets. RGAugury is an efficient, integrative bioinformatics tool for large-scale genome-wide identification of RGAs. It is freely available at Bitbucket: https://bitbucket.org/yaanlpc/rgaugury .
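    Classification by domain combination, as described above, reduces to a rule table over the set of detected domains. The domain labels follow the abstract, but the specific precedence rules below are a simplified illustration, not RGAugury's exact logic.

```python
# Hedged sketch of RGAugury-style family assignment from detected
# domains. The rule ordering is an illustrative assumption.
def classify_rga(domains):
    d = set(domains)
    if "NB-ARC" in d:
        return "NBS-encoding"
    if "STTK" in d and "TM" in d:
        return "RLK"    # membrane-associated receptor-like kinase
    if "LRR" in d and "TM" in d:
        return "RLP"    # membrane-associated receptor-like protein
    if "CC" in d and "TM" in d:
        return "TM-CC"
    return "unclassified"
```

    A real pipeline would first run the domain and motif detectors (the slow, parallelized stage the abstract mentions) and then apply a table like this to each candidate protein.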

  12. Identification of ground targets from airborne platforms

    NASA Astrophysics Data System (ADS)

    Doe, Josh; Boettcher, Evelyn; Miller, Brian

    2009-05-01

    The US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) sensor performance models predict the ability of soldiers to perform a specified military discrimination task using an EO/IR sensor system. Increasingly, EO/IR systems are being used on manned and unmanned aircraft for surveillance and target acquisition tasks. In response to this emerging requirement, the NVESD Modeling and Simulation division has been tasked to compare target identification performance between ground-to-ground and air-to-ground platforms for both IR and visible spectra for a set of wheeled utility vehicles. To measure performance, several forced choice experiments were designed and administered and the results analyzed. This paper describes these experiments and reports the results as well as the NVTherm model calibration factors derived for the infrared imagery.

  13. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    PubMed

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key bio-marker of the diagnosis of muscular dystrophy. In nuclei segmentation, one primary challenge is to correctly separate the clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from background by using a local Otsu threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to identify isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
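    The thresholding step in the pipeline is based on Otsu's criterion: pick the intensity cut that maximizes between-class variance of the histogram. The sketch below implements the global form (the paper applies it locally, per window) and uses invented demo data.

```python
import numpy as np

# Illustrative global Otsu threshold (the pipeline above uses a *local*
# Otsu, but the criterion is the same): choose the threshold that
# maximizes between-class variance of the intensity histogram.
def otsu_threshold(image, nbins=256):
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)              # cumulative class-0 probability
    w1 = 1.0 - w0
    mu = np.cumsum(p * centers)    # cumulative mean intensity
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    # Empty classes yield NaN; exclude the final bin, where w1 == 0.
    return centers[np.nanargmax(between[:-1])]

# Demo on a clearly bimodal intensity distribution (invented values).
demo = np.concatenate([np.full(100, 10.0), np.full(100, 200.0)])
thr = otsu_threshold(demo)   # falls between the two intensity clusters
```

    In the paper's pipeline this threshold feeds the nucleus extraction step, after which the morphological features and the watershed refinement handle clustered nuclei.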

  14. Target identification using Zernike moments and neural networks

    NASA Astrophysics Data System (ADS)

    Azimi-Sadjadi, Mahmood R.; Jamshidi, Arta A.; Nevis, Andrew J.

    2001-10-01

    The development of an underwater target identification algorithm capable of identifying various types of underwater targets, such as mines, under different environmental conditions poses many technical problems. Some of the contributing factors are: targets have diverse sizes, shapes and reflectivity properties. Target emplacement environment is variable; targets may be proud or partially buried. Environmental properties vary significantly from one location to another. Bottom features such as sand, rocks, corals, and vegetation can conceal a target whether it is partially buried or proud. Competing clutter with responses that closely resemble those of the targets may lead to false positives. All the problems mentioned above contribute to overly difficult and challenging conditions that could lead to unreliable algorithm performance with existing methods. In this paper, we developed and tested a shape-dependent feature extraction scheme that provides features invariant to rotation, size scaling and translation; properties that are extremely useful for any target classification problem. The developed schemes were tested on an electro-optical imagery data set collected under different environmental conditions with variable background, range and target types. The electro-optic data set was collected using a Laser Line Scan (LLS) sensor by the Coastal Systems Station (CSS), located in Panama City, Florida. The performance of the developed scheme and its robustness to distortion, rotation, scaling and translation were also studied.
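    The rotation invariance exploited by Zernike-moment features comes from taking the magnitude of each complex moment. The sketch below computes a single moment (n=2, m=2) on a crude Cartesian discretization of the unit disk, which is an assumed setup, not the paper's implementation; a 90-degree grid rotation keeps the magnitude unchanged.

```python
import numpy as np

# Minimal Zernike-moment sketch: Z_22 uses the radial polynomial
# R_22(r) = r^2 and angular factor e^{i2θ}. |Z_22| is invariant to image
# rotation; a 90-degree rotation is exact on a symmetric square grid.
def zernike_22(img):
    n = img.shape[0]
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    mask = r <= 1.0                       # restrict to the unit disk
    V = (r ** 2) * np.exp(2j * theta)     # V_22 basis function
    return (3.0 / np.pi) * np.sum(img[mask] * np.conj(V[mask]))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
z1 = abs(zernike_22(img))
z2 = abs(zernike_22(np.rot90(img)))       # magnitude is unchanged
```

    A full feature vector would stack |Z_nm| over several orders; normalizing coordinates by the shape's centroid and radius supplies the translation and scale invariance the abstract mentions.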

  15. 75 FR 63774 - Pipeline Safety: Safety of On-Shore Hazardous Liquid Pipelines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-18

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... Pipelines AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), Department of... Gas Pipeline Safety Act of 1968, Public Law 90-481, delegated to DOT the authority to develop...

  16. 77 FR 61825 - Pipeline Safety: Notice of Public Meeting on Pipeline Data

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-11

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... program performance measures for gas distribution, gas transmission, and hazardous liquids pipelines. The... distribution pipelines (49 CFR 192.1007(e)), gas transmission pipelines (49 CFR 192.945) and hazardous liquids...

  17. Utilizing random Forest QSAR models with optimized parameters for target identification and its application to target-fishing server.

    PubMed

    Lee, Kyoungyeul; Lee, Minho; Kim, Dongsup

    2017-12-28

    The identification of target molecules is important for understanding the mechanism of "target deconvolution" in phenotypic screening and "polypharmacology" of drugs. Because conventional methods of identifying targets require substantial time and cost, in-silico target identification has been considered an alternative solution. One of the well-known in-silico methods of identifying targets involves structure-activity relationships (SARs). SARs have advantages such as low computational cost and high feasibility; however, the data dependency in the SAR approach causes imbalance of active data and ambiguity of inactive data throughout targets. We developed a ligand-based virtual screening model comprising 1121 target SAR models built using a random forest algorithm. The performance of each target model was tested by employing the ROC curve and the mean score using an internal five-fold cross validation. Moreover, recall rates for top-k targets were calculated to assess the performance of target ranking. A benchmark model using an optimized sampling method and parameters was examined via an external validation set. The result shows recall rates of 67.6% and 73.9% for top-11 (1% of the total targets) and top-33, respectively. We provide a website for users to search the top-k targets for query ligands, available publicly at http://rfqsar.kaist.ac.kr . The target models that we built can be used for both predicting the activity of ligands toward each target and ranking candidate targets for a query ligand using a unified scoring scheme. The scores are additionally fitted to the probability so that users can estimate how likely a ligand-target interaction is active. The user interface of our website is user friendly and intuitive, offering useful information and cross references.
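    The top-k recall metric used above is straightforward to state in code. The data shapes are an assumption (one known true target per query ligand, targets ranked by model score) and the values are invented.

```python
# Sketch of the top-k recall metric: the fraction of query ligands whose
# known true target appears among the k highest-ranked predictions.
# Rankings and truth labels below are invented placeholders.
def recall_at_k(rankings, truths, k):
    hits = sum(1 for ranked, true in zip(rankings, truths)
               if true in ranked[:k])
    return hits / len(truths)

rankings = [["t3", "t1", "t7"], ["t2", "t9", "t4"], ["t5", "t6", "t8"]]
truths = ["t1", "t4", "t0"]
```

    With 1121 target models, the paper's top-11 corresponds to k equal to roughly 1% of the candidate targets, which is how the 67.6% figure should be read.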

  18. 76 FR 303 - Pipeline Safety: Safety of On-Shore Hazardous Liquid Pipelines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-04

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part 195 [Docket ID PHMSA-2010-0229] RIN 2137-AE66 Pipeline Safety: Safety of On-Shore Hazardous Liquid Pipelines AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Notice of...

  19. Development and Applications of Pipeline Steel in Long-Distance Gas Pipeline of China

    NASA Astrophysics Data System (ADS)

    Chunyong, Huo; Yang, Li; Lingkang, Ji

    In past decades, with the wide use of microalloying and Thermal Mechanical Control Processing (TMCP) technology, a good match of strength, toughness, plasticity and weldability in pipeline steel has been achieved, so that oil and gas pipelines have developed rapidly in China to meet strong domestic demand for energy. In this paper, the development history of pipeline steel and gas pipelines in China is briefly reviewed. The microstructure characteristics and mechanical performance of pipeline steels used in some representative gas pipelines of China, built at different stages, are summarized. Through analysis of the evolution of the pipeline service environment, prospective trends in the application of pipeline steel in China are also presented.

  20. Hierarchical relaxation methods for multispectral pixel classification as applied to target identification

    NASA Astrophysics Data System (ADS)

    Cohen, E. A., Jr.

    1985-02-01

    This report provides insights into the approaches toward image modeling as applied to target detection. The approach is that of examining the energy in prescribed wave-bands which emanate from a target and correlating the emissions. Typically, one might be looking at two or three infrared bands, possibly together with several visual bands. The target is segmented, using both first and second order modeling, into a set of interesting components and these components are correlated so as to enhance the classification process. A Markov-type model is used to provide an a priori assessment of the spatial relationships among critical parts of the target, and a stochastic model using the output of an initial probabilistic labeling is invoked. The tradeoff between this stochastic model and the Markov model is then optimized to yield a best labeling for identification purposes. In an identification of friend or foe (IFF) context, this methodology could be of interest, for it provides the ingredients for such a higher level of understanding.

  1. Blueprint for antimicrobial hit discovery targeting metabolic networks

    PubMed Central

    Shen, Y.; Liu, J.; Estiu, G.; Isin, B.; Ahn, Y-Y.; Lee, D-S.; Barabási, A-L.; Kapatral, V.; Wiest, O.; Oltvai, Z. N.

    2010-01-01

    Advances in genome analysis, network biology, and computational chemistry have the potential to revolutionize drug discovery by combining system-level identification of drug targets with the atomistic modeling of small molecules capable of modulating their activity. To demonstrate the effectiveness of such a discovery pipeline, we deduced common antibiotic targets in Escherichia coli and Staphylococcus aureus by identifying shared tissue-specific or uniformly essential metabolic reactions in their metabolic networks. We then predicted through virtual screening dozens of potential inhibitors for several enzymes of these reactions and showed experimentally that a subset of these inhibited both enzyme activities in vitro and bacterial cell viability. This blueprint is applicable for any sequenced organism with high-quality metabolic reconstruction and suggests a general strategy for strain-specific anti-infective therapy. PMID:20080587
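    The first step of the blueprint above, finding reactions essential in both organisms, is a set intersection over each model's essential-reaction list. The reaction names below are invented placeholders, not the study's reconstructions.

```python
# Toy illustration of the pipeline's first step: intersect the
# essential-reaction sets of two metabolic models to obtain candidate
# shared antibiotic targets. Names are illustrative placeholders.
ecoli_essential = {"ACCC", "FabI", "MurA", "GlmU"}
saureus_essential = {"FabI", "MurA", "DnaE", "GlmU"}
shared_targets = sorted(ecoli_essential & saureus_essential)
```

    In practice each set would come from flux balance analysis gene/reaction knockouts on the genome-scale reconstruction; the intersection then feeds the virtual-screening stage.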

  2. The Kepler Science Data Processing Pipeline Source Code Road Map

    NASA Technical Reports Server (NTRS)

    Wohler, Bill; Jenkins, Jon M.; Twicken, Joseph D.; Bryson, Stephen T.; Clarke, Bruce Donald; Middour, Christopher K.; Quintana, Elisa Victoria; Sanderfer, Jesse Thomas; Uddin, Akm Kamal; Sabale, Anima; hide

    2016-01-01

    We give an overview of the operational concepts and architecture of the Kepler Science Processing Pipeline. Designed, developed, operated, and maintained by the Kepler Science Operations Center (SOC) at NASA Ames Research Center, the Science Processing Pipeline is a central element of the Kepler Ground Data System. The SOC consists of an office at Ames Research Center, software development and operations departments, and a data center which hosts the computers required to perform data analysis. The SOC's charter is to analyze stellar photometric data from the Kepler spacecraft and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler Science Processing Pipeline, including the software algorithms. We present the high-performance, parallel computing software modules of the pipeline that perform transit photometry, pixel-level calibration, systematic error correction, attitude determination, stellar target management, and instrument characterization.

  3. Gathering pipeline methane emissions in Fayetteville shale pipelines and scoping guidelines for future pipeline measurement campaigns

    DOE PAGES

    Zimmerle, Daniel J.; Pickering, Cody K.; Bell, Clay S.; ...

    2017-11-24

    Gathering pipelines, which transport gas from well pads to downstream processing, are a sector of the natural gas supply chain for which little measured methane emissions data are available. This study performed leak detection and measurement on 96 km of gathering pipeline and the associated 56 pigging facilities and 39 block valves. The study found one underground leak accounting for 83% (4.0 kg CH4/hr) of total measured emissions. Methane emissions for the 4684 km of gathering pipeline in the study area were estimated at 402 kg CH4/hr [95 to 1065 kg CH4/hr, 95% CI], or 1% [0.2% to 2.6%] of all methane emissions measured during a prior aircraft study of the same area. Emissions estimated by this study fall within the uncertainty range of emissions estimated using emission factors from EPA's 2015 Greenhouse Inventory and study activity estimates. While EPA's current inventory is based upon emission factors from distribution mains measured in the 1990s, this study indicates that using emission factors from more recent distribution studies could significantly underestimate emissions from gathering pipelines. To guide broader studies of pipeline emissions, we also estimate the fraction of the pipeline length within a basin that must be measured to constrain uncertainty of pipeline emissions estimates to within 1% of total basin emissions. The study provides both substantial insight into the mix of emission sources and guidance for future gathering pipeline studies, but since measurements were made in a single basin, the results are not sufficiently representative to provide methane emission factors at the regional or national level.
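    The unit arithmetic behind a per-km scale-up can be sketched from the figures quoted above. Note this naive linear extrapolation is not the study's method: the reported 402 kg CH4/hr comes from a statistical estimate, so the rough number below should only be expected to land inside the reported 95% CI (95 to 1065 kg CH4/hr).

```python
# Naive per-km scale-up using figures from the abstract. The study's own
# 402 kg CH4/hr estimate used a statistical model; this sketch only
# illustrates the unit conversion.
measured_km = 96.0            # surveyed gathering pipeline length
total_measured = 4.0 / 0.83   # kg CH4/hr; the 4.0 kg/hr leak was 83% of it
total_km = 4684.0             # gathering pipeline in the study area

factor = total_measured / measured_km   # kg CH4/hr per km surveyed
naive_estimate = factor * total_km      # roughly 235 kg CH4/hr
```

    The gap between this point estimate and the study's central value is one reason the abstract emphasizes how much pipeline length must be sampled to constrain basin-level uncertainty.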

  5. Uridine monophosphate kinase as potential target for tuberculosis: from target to lead identification.

    PubMed

    Arvind, Akanksha; Jain, Vaibhav; Saravanan, Parameswaran; Mohan, C Gopi

    2013-12-01

    Mycobacterium tuberculosis (Mtb) is the causative agent of tuberculosis (TB), a disease which has affected approximately 2 billion people worldwide. Due to the emergence of resistance towards the existing drugs, discovery of new anti-TB drugs is an important global healthcare challenge. To address this problem, there is an urgent need to identify new drug targets in Mtb. In the present study, the subtractive genomics approach has been employed for the identification of new drug targets against TB. Screening the Mtb proteome using the Database of Essential Genes (DEG) and the human proteome resulted in the identification of 60 key proteins which have no eukaryotic counterparts. Critical analysis of these proteins using the Kyoto Encyclopedia of Genes and Genomes (KEGG) metabolic pathways database revealed the uridine monophosphate kinase (UMPK) enzyme as a potential drug target for developing novel anti-TB drugs. A homology model of Mtb-UMPK was constructed for the first time on the basis of the crystal structure of E. coli-UMPK, in order to understand its structure-function relationships, which would in turn facilitate structure-based inhibitor design. Furthermore, a structural similarity search was carried out using the physiological inhibitor UTP of Mtb-UMPK to virtually screen the ZINC database. Retrieved hits were further screened by implementing several filters like ADME and toxicity, followed by molecular docking. Finally, on the basis of the Glide docking score and the mode of binding, 6 putative leads were identified as inhibitors of this enzyme which can potentially emerge as future drugs for the treatment of TB.

  6. Identification of STAT target genes in adipocytes

    PubMed Central

    Zhao, Peng; Stephens, Jacqueline M.

    2013-01-01

    Adipocytes play important roles in lipid storage, energy homeostasis and whole body insulin sensitivity. Studies in the last two decades have identified the hormones and cytokines that activate specific STATs in adipocytes in vitro and in vivo. Five of the seven STAT family members are expressed in adipocytes (STATs 1, 3, 5A, 5B and 6). Many transcription factors, including STATs, have been shown to play an important role in adipose tissue development and function. This review will summarize the importance of adipocytes, indicate the cytokines and hormones that utilize the JAK-STAT signaling pathway in fat cells and focus on the identification of STAT target genes in mature adipocytes. To date, specific target genes have been identified for STATs 1, 5A and 5B, but not for STATs 3 and 6. PMID:24058802

  7. 77 FR 34123 - Pipeline Safety: Public Meeting on Integrity Management of Gas Distribution Pipelines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-08

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2012-0100] Pipeline Safety: Public Meeting on Integrity Management of Gas Distribution Pipelines AGENCY: Office of Pipeline Safety, Pipeline and Hazardous Materials Safety Administration, DOT. ACTION...

  8. ALMA Pipeline: Current Status

    NASA Astrophysics Data System (ADS)

    Shinnaga, H.; Humphreys, E.; Indebetouw, R.; Villard, E.; Kern, J.; Davis, L.; Miura, R. E.; Nakazato, T.; Sugimoto, K.; Kosugi, G.; Akiyama, E.; Muders, D.; Wyrowski, F.; Williams, S.; Lightfoot, J.; Kent, B.; Momjian, E.; Hunter, T.; ALMA Pipeline Team

    2015-12-01

    The ALMA Pipeline is the automated data reduction tool that runs on ALMA data. The current version of the ALMA Pipeline produces science-quality data products for standard interferometric observing modes up to the calibration stage. The ALMA Pipeline is comprised of (1) heuristics, in the form of Python scripts, that select the best processing parameters, and (2) contexts that are kept for book-keeping of the data processing. The ALMA Pipeline produces a "weblog" that showcases detailed plots for users to judge how each calibration step is handled. The ALMA Interferometric Pipeline was conditionally accepted in March 2014 after processing Cycle 0 and Cycle 1 data sets. From Cycle 2, the ALMA Pipeline has been used for ALMA data reduction and quality assurance for projects whose observing modes it supports. Pipeline tasks are available based on CASA version 4.2.2, and the first public pipeline release, called CASA 4.2.2-pipe, has been available since October 2014. One can reduce ALMA data with both CASA tasks and pipeline tasks by using CASA version 4.2.2-pipe.

  9. Risk analysis of urban gas pipeline network based on improved bow-tie model

    NASA Astrophysics Data System (ADS)

    Hao, M. J.; You, Q. J.; Yue, Z.

    2017-11-01

    Gas pipeline networks are a major hazard source in urban areas; in the event of an accident, there can be grave consequences. In order to understand more clearly the causes and consequences of gas pipeline network accidents, and to develop prevention and mitigation measures, the authors apply an improved bow-tie model to analyze the risks of urban gas pipeline networks. The improved bow-tie model analyzes accident causes from four aspects: human, materials, environment and management; it likewise analyzes the consequences from four aspects: casualty, property loss, environment and society. It then quantifies both the causes and the consequences. Risk identification, risk analysis, risk assessment, risk control, and risk management are shown clearly in the model figures, which can then suggest corresponding prevention and mitigation measures to help reduce the accident rate of gas pipeline networks. The results show that the whole process of an accident can be visually investigated using the bow-tie model, which can also identify the reasons for, and predict the consequences of, an unfortunate event. This is of great significance for analyzing leakage failures of gas pipeline networks.
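
    The quantification step of a bow-tie analysis combines basic-cause probabilities into a top-event probability (fault-tree side) and fans out to weighted consequences (event-tree side). A sketch using standard independent-event gate formulas; all probabilities and severity weights below are hypothetical, not values from the study:

```python
def or_gate(probs):
    """P(top event) when any one independent basic cause suffices."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(probs):
    """P(top event) when all independent basic causes must occur together."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# hypothetical annual probabilities for the four cause groups
human, materials, environment, management = 0.02, 0.01, 0.005, 0.015
p_leak = or_gate([human, materials, environment, management])  # top event

# hypothetical normalized severity weights for the four consequence groups
severities = {"casualty": 0.9, "property loss": 0.6,
              "environment": 0.4, "society": 0.5}
risk = {c: p_leak * s for c, s in severities.items()}
```

    The OR gate models the left (cause) side of the bow-tie; real analyses would also account for safety barriers between the top event and each consequence.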

  10. Comparison of three microarray probe annotation pipelines: differences in strategies and their effect on downstream analysis

    PubMed Central

    Neerincx, Pieter BT; Casel, Pierrot; Prickett, Dennis; Nie, Haisheng; Watson, Michael; Leunissen, Jack AM; Groenen, Martien AM; Klopp, Christophe

    2009-01-01

    Background Reliable annotation linking oligonucleotide probes to target genes is essential for functional biological analysis of microarray experiments. We used the IMAD, OligoRAP and sigReannot pipelines to update the annotation for the ARK-Genomics Chicken 20 K array as part of a joint EADGENE/SABRE workshop. In this manuscript we compare their annotation strategies and results. Furthermore, we analyse the effect of differences in updated annotation on functional analysis for an experiment involving Eimeria-infected chickens and finally we propose guidelines for optimal annotation strategies. Results IMAD, OligoRAP and sigReannot update both annotation and estimated target specificity. The 3 pipelines can assign oligos to target specificity categories although with varying degrees of resolution. Target specificity is judged based on the amount and type of oligo versus target-gene alignments (hits), which are determined by filter thresholds that users can adjust based on their experimental conditions. Linking oligos to annotation on the other hand is based on rigid rules, which differ between pipelines. For 52.7% of the oligos from a subset selected for in-depth comparison all pipelines linked to one or more Ensembl genes with consensus on 44.0%. In 31.0% of the cases none of the pipelines could assign an Ensembl gene to an oligo and for the remaining 16.3% the coverage differed between pipelines. Differences in updated annotation were mainly due to different thresholds for hybridisation potential filtering of oligo versus target-gene alignments and different policies for expanding annotation using indirect links. The differences in updated annotation packages had a significant effect on GO term enrichment analysis with consensus on only 67.2% of the enriched terms. Conclusion In addition to flexible thresholds to determine target specificity, annotation tools should provide metadata describing the relationships between oligos and the annotation assigned to them.

  11. Comparison of three microarray probe annotation pipelines: differences in strategies and their effect on downstream analysis.

    PubMed

    Neerincx, Pieter Bt; Casel, Pierrot; Prickett, Dennis; Nie, Haisheng; Watson, Michael; Leunissen, Jack Am; Groenen, Martien Am; Klopp, Christophe

    2009-07-16

    Reliable annotation linking oligonucleotide probes to target genes is essential for functional biological analysis of microarray experiments. We used the IMAD, OligoRAP and sigReannot pipelines to update the annotation for the ARK-Genomics Chicken 20 K array as part of a joint EADGENE/SABRE workshop. In this manuscript we compare their annotation strategies and results. Furthermore, we analyse the effect of differences in updated annotation on functional analysis for an experiment involving Eimeria-infected chickens and finally we propose guidelines for optimal annotation strategies. IMAD, OligoRAP and sigReannot update both annotation and estimated target specificity. The 3 pipelines can assign oligos to target specificity categories although with varying degrees of resolution. Target specificity is judged based on the amount and type of oligo versus target-gene alignments (hits), which are determined by filter thresholds that users can adjust based on their experimental conditions. Linking oligos to annotation on the other hand is based on rigid rules, which differ between pipelines. For 52.7% of the oligos from a subset selected for in-depth comparison all pipelines linked to one or more Ensembl genes with consensus on 44.0%. In 31.0% of the cases none of the pipelines could assign an Ensembl gene to an oligo and for the remaining 16.3% the coverage differed between pipelines. Differences in updated annotation were mainly due to different thresholds for hybridisation potential filtering of oligo versus target-gene alignments and different policies for expanding annotation using indirect links. The differences in updated annotation packages had a significant effect on GO term enrichment analysis with consensus on only 67.2% of the enriched terms. In addition to flexible thresholds to determine target specificity, annotation tools should provide metadata describing the relationships between oligos and the annotation assigned to them. 
These relationships can then

  12. 75 FR 5244 - Pipeline Safety: Integrity Management Program for Gas Distribution Pipelines; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-02

    ... Management Program for Gas Distribution Pipelines; Correction AGENCY: Pipeline and Hazardous Materials Safety... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... Regulations to require operators of gas distribution pipelines to develop and implement integrity management...

  13. Towards detection of pipeline integrity threats using a smart fiber optic surveillance system: PIT-STOP project blind field test results

    NASA Astrophysics Data System (ADS)

    Tejedor, J.; Macias-Guarasa, J.; Martins, H. F.; Piote, D.; Pastor-Graells, J.; Martin-Lopez, S.; Corredera, P.; De Pauw, G.; De Smet, F.; Postvoll, W.; Ahlen, C. H.; Gonzalez-Herraez, M.

    2017-04-01

    This paper presents the first report of on-line and final blind field test results for a pipeline integrity threat surveillance system. The system integrates a machine+activity identification mode and a threat detection mode. Two different pipeline sections were selected for the blind tests: one close to the sensor position, and the other 35 km away from it. Results of the machine+activity identification mode showed that about 46% of the time the machine, the activity, or both were correctly identified. For the threat detection mode, 8 out of 10 threats were correctly detected, with 1 false alarm.

  14. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple `fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
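
    The statistical bookkeeping such pipelines rely on can be illustrated with the textbook relation between a p-value, an E-value (the expected number of equally good chance hits among the candidates), and the probability of at least one chance hit. This is a generic sketch, not MiCId's actual computation:

```python
import math

def e_value(p_value, n_candidates):
    """Expected number of candidates scoring at least this well by chance."""
    return p_value * n_candidates

def p_at_least_one(e):
    """Probability of one or more such chance hits, under a Poisson assumption."""
    return 1.0 - math.exp(-e)

e = e_value(1e-4, 5000)   # hypothetical: 5000 candidate peptides in the database
p = p_at_least_one(e)     # converts the E-value back to a probability
```

    The point of reporting E-values rather than raw scores is exactly this candidate-count correction: a score that looks impressive against one genome may be unremarkable against thousands.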

  15. Targets for future clinical trials in Huntington's disease: what's in the pipeline?

    PubMed

    Wild, Edward J; Tabrizi, Sarah J

    2014-09-15

    The known genetic cause of Huntington's disease (HD) has fueled considerable progress in understanding its pathobiology and the development of therapeutic approaches aimed at correcting specific changes linked to the causative mutation. Among the most promising is reducing expression of mutant huntingtin protein (mHTT) with RNA interference or antisense oligonucleotides; human trials are now being planned. Zinc-finger transcriptional repression is another innovative method to reduce mHTT expression. Modulation of mHTT phosphorylation, chaperone upregulation, and autophagy enhancement represent attempts to alter cellular homeostasis to favor removal of mHTT. Inhibition of histone deacetylases (HDACs) remains of interest; recent work affirms HDAC4 as a target but questions the assumed centrality of its catalytic activity in HD. Phosphodiesterase inhibition, aimed at restoring synaptic function, has progressed rapidly to human trials. Deranged cellular signaling provides several tractable targets, but specificity and complexity are challenges. Restoring neurotrophic support in HD remains a key potential therapeutic approach, with several approaches being pursued, including brain-derived neurotrophic factor (BDNF) mimesis through tyrosine receptor kinase B (TrkB) agonism and monoclonal antibodies. An increasing understanding of the role of glial cells in HD has led to several new therapeutic avenues, including kynurenine monooxygenase inhibition, immunomodulation by laquinimod, CB2 agonism, and others. The complex metabolic derangements in HD remain under study, but no clear therapeutic strategy has yet emerged. We conclude that many exciting therapeutics are progressing through the development pipeline, and combining a better understanding of HD biology in human patients with concerted medicinal chemistry efforts will be crucial for bringing about an era of effective therapies. © 2014 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of

  16. Detection of underground pipeline based on Golay waveform design

    NASA Astrophysics Data System (ADS)

    Dai, Jingjing; Xu, Dazhuan

    2017-08-01

    The detection of underground pipelines is an important problem in urban development, but research on it is not yet mature. In this paper, based on the principle of waveform design in wireless communication, we design an acoustic signal detection system to locate underground pipelines. Following the principle of acoustic localization, we chose the DSP-F28335 development board as the master control chip and use its DA and AD modules. The DA module uses a complementary Golay sequence as the emission signal. The AD module acquires data synchronously, so that the echo signals containing the position information of the target are recovered through signal processing. The test results show that the method in this paper can not only calculate the sound velocity in the soil but also locate underground pipelines accurately.
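
    A complementary Golay pair of the kind used as the emission signal can be generated with the standard recursive construction. The defining property, which the sketch below checks, is that the aperiodic autocorrelation sidelobes of the two sequences cancel exactly, leaving a matched filter with a single clean peak:

```python
def golay_pair(n):
    """Recursively build a complementary Golay pair of length 2**n."""
    a, b = [1], [1]
    for _ in range(n):
        a, b = a + b, a + [-x for x in b]   # concatenation rule
    return a, b

def acorr(x, k):
    """Aperiodic autocorrelation of sequence x at non-negative lag k."""
    return sum(x[i] * x[i + k] for i in range(len(x) - k))

a, b = golay_pair(3)                       # length-8 pair
peak = acorr(a, 0) + acorr(b, 0)           # main lobe: 2 * 8 = 16
sidelobes = [acorr(a, k) + acorr(b, k) for k in range(1, len(a))]  # all zero
```

    In a ranging system, the two sequences are emitted separately, each echo is correlated with its own sequence, and the two correlations are summed, so the zero-sidelobe property translates into unambiguous time-of-flight estimates.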

  17. The Kepler Science Operations Center Pipeline Framework Extensions

    NASA Technical Reports Server (NTRS)

    Klaus, Todd C.; Cote, Miles T.; McCauliff, Sean; Girouard, Forrest R.; Wohler, Bill; Allen, Christopher; Chandrasekaran, Hema; Bryson, Stephen T.; Middour, Christopher; Caldwell, Douglas A.; hide

    2010-01-01

    The Kepler Science Operations Center (SOC) is responsible for several aspects of the Kepler Mission, including managing targets, generating on-board data compression tables, monitoring photometer health and status, processing the science data, and exporting the pipeline products to the mission archive. We describe how the generic pipeline framework software developed for Kepler is extended to achieve these goals, including pipeline configurations for processing science data and other support roles, and custom unit of work generators that control how the Kepler data are partitioned and distributed across the computing cluster. We describe the interface between the Java software that manages the retrieval and storage of the data for a given unit of work and the MATLAB algorithms that process these data. The data for each unit of work are packaged into a single file that contains everything needed by the science algorithms, allowing these files to be used to debug and evolve the algorithms offline.
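
    A unit-of-work generator of the kind described, which partitions the target list into bundles distributed across the cluster, might look like the following sketch. This is simple contiguous chunking for illustration; the actual Kepler generators are custom and more elaborate:

```python
def units_of_work(targets, n_workers):
    """Partition a target list into contiguous units of work, one per worker."""
    size, rem = divmod(len(targets), n_workers)
    units, start = [], 0
    for i in range(n_workers):
        end = start + size + (1 if i < rem else 0)  # spread the remainder
        units.append(targets[start:end])
        start = end
    return units

units = units_of_work(list(range(10)), 3)  # 10 targets across 3 workers
```

    Each unit would then be serialized into a single self-contained file, matching the described interface where MATLAB algorithms receive everything they need for one unit.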

  18. Identification of GPC2 as an Oncoprotein and Candidate Immunotherapeutic Target in High-Risk Neuroblastoma.

    PubMed

    Bosse, Kristopher R; Raman, Pichai; Zhu, Zhongyu; Lane, Maria; Martinez, Daniel; Heitzeneder, Sabine; Rathi, Komal S; Kendsersky, Nathan M; Randall, Michael; Donovan, Laura; Morrissy, Sorana; Sussman, Robyn T; Zhelev, Doncho V; Feng, Yang; Wang, Yanping; Hwang, Jennifer; Lopez, Gonzalo; Harenza, Jo Lynne; Wei, Jun S; Pawel, Bruce; Bhatti, Tricia; Santi, Mariarita; Ganguly, Arupa; Khan, Javed; Marra, Marco A; Taylor, Michael D; Dimitrov, Dimiter S; Mackall, Crystal L; Maris, John M

    2017-09-11

    We developed an RNA-sequencing-based pipeline to discover differentially expressed cell-surface molecules in neuroblastoma that meet criteria for optimal immunotherapeutic target safety and efficacy. Here, we show that GPC2 is a strong candidate immunotherapeutic target in this childhood cancer. We demonstrate high GPC2 expression in neuroblastoma due to MYCN transcriptional activation and/or somatic gain of the GPC2 locus. We confirm GPC2 to be highly expressed on most neuroblastomas, but not detectable at appreciable levels in normal childhood tissues. In addition, we demonstrate that GPC2 is required for neuroblastoma proliferation. Finally, we develop a GPC2-directed antibody-drug conjugate that is potently cytotoxic to GPC2-expressing neuroblastoma cells. Collectively, these findings validate GPC2 as a non-mutated neuroblastoma oncoprotein and candidate immunotherapeutic target. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Sleep and eyewitness memory: Fewer false identifications after sleep when the target is absent from the lineup.

    PubMed

    Stepan, Michelle E; Dehnke, Taylor M; Fenn, Kimberly M

    2017-01-01

    Inaccurate eyewitness identifications are the leading cause of known false convictions in the United States. Moreover, improving eyewitness memory is difficult and often unsuccessful. Sleep consistently strengthens and protects memory from interference, particularly when a recall test is used. However, the effect of sleep on recognition memory is more equivocal. Eyewitness identification tests are often recognition based, thus leaving open the question of how sleep affects recognition performance in an eyewitness context. In the current study, we investigated the effect of sleep on eyewitness memory. Participants watched a video of a mock-crime and attempted to identify the perpetrator from a simultaneous lineup after a 12-hour retention interval that either spanned a waking day or night of sleep. In Experiment 1, we used a target-present lineup and, in Experiment 2, we used a target-absent lineup in order to investigate correct and false identifications, respectively. Sleep reduced false identifications in the target-absent lineup (Experiment 2) but had no effect on correct identifications in the target-present lineup (Experiment 1). These results are discussed with respect to memory strength and decision making strategies.

  20. Sleep and eyewitness memory: Fewer false identifications after sleep when the target is absent from the lineup

    PubMed Central

    Dehnke, Taylor M.; Fenn, Kimberly M.

    2017-01-01

    Inaccurate eyewitness identifications are the leading cause of known false convictions in the United States. Moreover, improving eyewitness memory is difficult and often unsuccessful. Sleep consistently strengthens and protects memory from interference, particularly when a recall test is used. However, the effect of sleep on recognition memory is more equivocal. Eyewitness identification tests are often recognition based, thus leaving open the question of how sleep affects recognition performance in an eyewitness context. In the current study, we investigated the effect of sleep on eyewitness memory. Participants watched a video of a mock-crime and attempted to identify the perpetrator from a simultaneous lineup after a 12-hour retention interval that either spanned a waking day or night of sleep. In Experiment 1, we used a target-present lineup and, in Experiment 2, we used a target-absent lineup in order to investigate correct and false identifications, respectively. Sleep reduced false identifications in the target-absent lineup (Experiment 2) but had no effect on correct identifications in the target-present lineup (Experiment 1). These results are discussed with respect to memory strength and decision making strategies. PMID:28877169

  1. 2-Aryl-5-carboxytetrazole as a New Photoaffinity Label for Drug Target Identification

    PubMed Central

    2016-01-01

    Photoaffinity labels are powerful tools for dissecting ligand–protein interactions, and they have a broad utility in medicinal chemistry and drug discovery. Traditional photoaffinity labels work through nonspecific C–H/X–H bond insertion reactions with the protein of interest by the highly reactive photogenerated intermediate. Herein, we report a new photoaffinity label, 2-aryl-5-carboxytetrazole (ACT), that interacts with the target protein via a unique mechanism in which the photogenerated carboxynitrile imine reacts with a proximal nucleophile near the target active site. In two distinct case studies, we demonstrate that the attachment of ACT to a ligand does not significantly alter the binding affinity and specificity of the parent drug. Compared with diazirine and benzophenone, two commonly used photoaffinity labels, in two case studies ACT showed higher photo-cross-linking yields toward their protein targets in vitro based on mass spectrometry analysis. In the in situ target identification studies, ACT successfully captured the desired targets with an efficiency comparable to the diazirine. We expect that further development of this class of photoaffinity labels will lead to a broad range of applications across target identification, and validation and elucidation of the binding site in drug discovery. PMID:27740749

  2. 2-Aryl-5-carboxytetrazole as a New Photoaffinity Label for Drug Target Identification.

    PubMed

    Herner, András; Marjanovic, Jasmina; Lewandowski, Tracey M; Marin, Violeta; Patterson, Melanie; Miesbauer, Laura; Ready, Damien; Williams, Jon; Vasudevan, Anil; Lin, Qing

    2016-11-09

    Photoaffinity labels are powerful tools for dissecting ligand-protein interactions, and they have a broad utility in medicinal chemistry and drug discovery. Traditional photoaffinity labels work through nonspecific C-H/X-H bond insertion reactions with the protein of interest by the highly reactive photogenerated intermediate. Herein, we report a new photoaffinity label, 2-aryl-5-carboxytetrazole (ACT), that interacts with the target protein via a unique mechanism in which the photogenerated carboxynitrile imine reacts with a proximal nucleophile near the target active site. In two distinct case studies, we demonstrate that the attachment of ACT to a ligand does not significantly alter the binding affinity and specificity of the parent drug. Compared with diazirine and benzophenone, two commonly used photoaffinity labels, in two case studies ACT showed higher photo-cross-linking yields toward their protein targets in vitro based on mass spectrometry analysis. In the in situ target identification studies, ACT successfully captured the desired targets with an efficiency comparable to the diazirine. We expect that further development of this class of photoaffinity labels will lead to a broad range of applications across target identification, and validation and elucidation of the binding site in drug discovery.

  3. 78 FR 41991 - Pipeline Safety: Potential for Damage to Pipeline Facilities Caused by Flooding

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-12

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No...: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. ACTION: Notice; Issuance of Advisory... Gas and Hazardous Liquid Pipeline Systems. Subject: Potential for Damage to Pipeline Facilities Caused...

  4. 78 FR 41496 - Pipeline Safety: Meetings of the Gas and Liquid Pipeline Advisory Committees

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-10

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2013-0156] Pipeline Safety: Meetings of the Gas and Liquid Pipeline Advisory Committees AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Notice of advisory committee...

  5. Atlas-based identification of targets for functional radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stancanello, Joseph; Romanelli, Pantaleo; Modugno, Nicola

    2006-06-15

    Functional disorders of the brain, such as Parkinson's disease, dystonia, epilepsy, and neuropathic pain, may exhibit poor response to medical therapy. In such cases, surgical intervention may become necessary. Modern surgical approaches to such disorders include radio-frequency lesioning and deep brain stimulation (DBS). The subthalamic nucleus (STN) is one of the most useful stereotactic targets available: STN DBS is known to induce substantial improvement in patients with end-stage Parkinson's disease. Other targets include the Globus Pallidus pars interna (GPi) for dystonia and Parkinson's disease, and the centromedian nucleus of the thalamus (CMN) for neuropathic pain. Radiosurgery is an attractive noninvasive alternative to treat some functional brain disorders. The main technical limitation to radiosurgery is that the target can be selected only on the basis of magnetic resonance anatomy without electrophysiological confirmation. The aim of this work is to provide a method for the correct atlas-based identification of the target to be used in functional neurosurgery treatment planning. The coordinates of STN, CMN, and GPi were identified in the Talairach and Tournoux atlas and transformed to the corresponding regions of the Montreal Neurological Institute (MNI) electronic atlas. Binary masks describing the target nuclei were created. The MNI electronic atlas was deformed onto the patient magnetic resonance imaging-T1 scan by applying an affine transformation followed by a local nonrigid registration. The first transformation was based on normalized cross correlation and the second on optimization of a two-part objective function consisting of similarity criteria and weighted regularization. The obtained deformation field was then applied to the target masks. The minimum distance between the surface of an implanted electrode and the surface of the deformed mask was calculated. The validation of the method consisted of comparing the electrode
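
    The validation quantity described, the minimum distance between an implanted electrode surface and the deformed mask surface, reduces to a minimum distance between two point clouds. A brute-force sketch with hypothetical coordinates (real implementations would use a spatial index for speed):

```python
def min_surface_distance(points_a, points_b):
    """Brute-force minimum Euclidean distance between two 3-D point clouds."""
    best = float("inf")
    for ax, ay, az in points_a:
        for bx, by, bz in points_b:
            d = ((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5
            best = min(best, d)
    return best

# hypothetical surface samples (mm): electrode vs. deformed target mask
electrode = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
mask = [(4.0, 4.0, 0.0), (1.0, 4.0, 0.0)]
d = min_surface_distance(electrode, mask)
```

    A small distance indicates that the atlas-derived target mask, after deformation, agrees with where the electrophysiologically guided electrode was actually placed.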

  6. An Optimized Informatics Pipeline for Mass Spectrometry-Based Peptidomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chaochao; Monroe, Matthew E.; Xu, Zhe

    2015-12-26

    Comprehensive MS analysis of the peptidome, the intracellular and intercellular products of protein degradation, has the potential to provide novel insights on endogenous proteolytic processing and their utility in disease diagnosis and prognosis. Along with the advances in MS instrumentation, a plethora of proteomics data analysis tools have been applied for direct use in peptidomics; however, an evaluation of the currently available informatics pipelines for peptidomics data analysis has yet to be reported. In this study, we began by evaluating the results of several popular MS/MS database search engines, including MS-GF+, SEQUEST and MS-Align+, for peptidomics data analysis, followed by identification and label-free quantification using the well-established accurate mass and time (AMT) tag and newly developed informed quantification (IQ) approaches, both based on direct LC-MS analysis. Our results demonstrated that MS-GF+ outperformed both SEQUEST and MS-Align+ in identifying peptidome peptides. Using a database established from the MS-GF+ peptide identifications, both the AMT tag and IQ approaches provided significantly deeper peptidome coverage and fewer missing values for each individual data set than the MS/MS methods, while achieving robust label-free quantification. Besides having an excellent correlation with the AMT tag quantification results, IQ also provided slightly higher peptidome coverage than AMT. Taken together, we propose an optimal informatics pipeline combining MS-GF+ for initial database searching with IQ (or AMT) for identification and label-free quantification for high-throughput, comprehensive and quantitative peptidomics analysis.
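
    The core of AMT-tag identification is matching each LC-MS feature to a previously built tag database by mass tolerance (ppm) and normalized elution time (NET). A minimal sketch with hypothetical tags and tolerances; the tag IDs and values are illustrative only:

```python
def ppm_diff(observed, reference):
    """Mass difference in parts per million."""
    return abs(observed - reference) / reference * 1e6

def match_amt(feature, tags, mass_tol_ppm=5.0, net_tol=0.02):
    """Match an LC-MS feature (monoisotopic mass, NET) against AMT tags."""
    mass, net = feature
    return [t for t in tags
            if ppm_diff(mass, t["mass"]) <= mass_tol_ppm
            and abs(net - t["net"]) <= net_tol]

# hypothetical tag database built from prior MS/MS identifications
tags = [{"id": "PEPTIDE_A", "mass": 1234.567, "net": 0.42},
        {"id": "PEPTIDE_B", "mass": 1234.570, "net": 0.80}]
hits = match_amt((1234.569, 0.43), tags)  # only PEPTIDE_A passes both tolerances
```

    The second dimension (NET) is what disambiguates near-isobaric peptides such as the two tags above, which differ by only a few ppm in mass.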

  7. Bicycle: a bioinformatics pipeline to analyze bisulfite sequencing data.

    PubMed

    Graña, Osvaldo; López-Fernández, Hugo; Fdez-Riverola, Florentino; González Pisano, David; Glez-Peña, Daniel

    2018-04-15

    High-throughput sequencing of bisulfite-converted DNA is a technique used to measure DNA methylation levels. Although a considerable number of computational pipelines have been developed to analyze such data, none of them tackles all the peculiarities of the analysis together, revealing limitations that can force the user to manually perform additional steps needed for a complete processing of the data. This article presents bicycle, an integrated, flexible analysis pipeline for bisulfite sequencing data. Bicycle analyzes whole-genome bisulfite sequencing data, targeted bisulfite sequencing data and hydroxymethylation data. To show how bicycle improves on other available pipelines, we compared them on a defined set of features, summarized in a table. We also tested bicycle with both simulated and real datasets to show its level of performance, and compared it to different state-of-the-art methylation analysis pipelines. Bicycle is publicly available under the GNU LGPL v3.0 license at http://www.sing-group.org/bicycle. Users can also download a customized Ubuntu LiveCD including bicycle and the other bisulfite sequencing data pipelines compared here. In addition, a docker image with bicycle and its dependencies, which allows a straightforward use of bicycle on any platform (e.g. Linux, OS X or Windows), is also available. ograna@cnio.es or dgpena@uvigo.es. Supplementary data are available at Bioinformatics online.
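
    The per-position quantity every bisulfite pipeline ultimately reports can be sketched in a few lines: after conversion, unmethylated cytosines read as T while methylated cytosines stay C, so the methylation level at a position is C / (C + T) over the covering reads. This is a simplification of what bicycle computes; strand handling, quality filtering and context (CpG/CHG/CHH) are omitted:

```python
def methylation_level(pileup):
    """Methylation fraction at one cytosine position from a read pileup string.

    Bisulfite conversion turns unmethylated C into T; methylated C stays C.
    Bases other than C/T (e.g. sequencing errors) are ignored.
    """
    c, t = pileup.count("C"), pileup.count("T")
    return c / (c + t) if (c + t) else None

level = methylation_level("CCCTT")  # 3 protected, 2 converted reads
```

    Returning None for uncovered positions, rather than 0, is deliberate: zero coverage is missing data, not evidence of an unmethylated site.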

  8. Evaluation of Primers Targeting the Diazotroph Functional Gene and Development of NifMAP – A Bioinformatics Pipeline for Analyzing nifH Amplicon Data

    PubMed Central

    Angel, Roey; Nepel, Maximilian; Panhölzl, Christopher; Schmidt, Hannes; Herbold, Craig W.; Eichorst, Stephanie A.; Woebken, Dagmar

    2018-01-01

    Diazotrophic microorganisms introduce biologically available nitrogen (N) to the global N cycle through the activity of the nitrogenase enzyme. The genetically conserved dinitrogenase reductase (nifH) gene is phylogenetically distributed across four clusters (I–IV) and is widely used as a marker gene for N2 fixation, permitting investigators to study the genetic diversity of diazotrophs in nature and target potential participants in N2 fixation. To date, there have been few standardized pipelines for analyzing the nifH functional gene, in stark contrast to the 16S rRNA gene. Here we present a bioinformatics pipeline for processing nifH amplicon datasets – NifMAP (“NifH MiSeq Illumina Amplicon Analysis Pipeline”) – which, as a novel aspect, uses hidden Markov models to filter out genes homologous to nifH. Using this pipeline, we evaluated the broadly inclusive primer pairs (Ueda19F–R6, IGK3–DVV, and F2–R6) that target the nifH gene. To evaluate any systematic biases, the nifH gene was amplified with the aforementioned primer pairs in a diverse collection of environmental samples (soils, rhizosphere and root samples, biological soil crusts and estuarine samples), in addition to a nifH mock community consisting of six phylogenetically diverse members. We noted that all primer pairs co-amplified nifH homologs to varying degrees; up to 90% of the amplicons were nifH homologs with IGK3–DVV in some samples (rhizosphere and roots from tall oat-grass). With regard to specificity, we observed some degree of bias across the primer pairs. For example, primer pair F2–R6 discriminated against cyanobacteria (amongst others), yet captured many sequences from subclusters IIIE and IIIL-N. These subclusters were largely missed by the primer pair IGK3–DVV, which also tended to discriminate against Alphaproteobacteria, but amplified sequences within cluster IIIC (affiliated with Clostridia) and clusters IVB and IVC. Primer pair Ueda19F

  9. 76 FR 29333 - Pipeline Safety: Meetings of the Technical Pipeline Safety Standards Committee and the Technical...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-20

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... Technical Hazardous Liquid Pipeline Safety Standards Committee AGENCY: Pipeline and Hazardous Materials... for natural gas pipelines and for hazardous liquid pipelines. Both committees were established under...

  10. Leveraging 3D chemical similarity, target and phenotypic data in the identification of drug-protein and drug-adverse effect associations.

    PubMed

    Vilar, Santiago; Hripcsak, George

    2016-01-01

    Drug-target identification is crucial to discover novel applications for existing drugs and to provide more insight into mechanisms of biological actions, such as adverse drug effects (ADEs). Computational methods, along with the integration of current big data sources, provide a useful framework for drug-target and drug-adverse effect discovery. In this article, we propose a method based on the integration of 3D chemical similarity, target and adverse effect data to generate a drug-target-adverse effect predictor, along with a simple leveraging system to improve identification of drug-targets and drug-adverse effects. In the first step, we generated a system for multiple drug-target identification based on the application of 3D drug similarity to a large target dataset extracted from ChEMBL. Next, we developed a target-adverse effect predictor combining targets from ChEMBL with phenotypic information provided by the SIDER data source. Both modules were linked to generate a final predictor that establishes hypotheses about new drug-target-adverse effect candidates. Additionally, we showed that leveraging drug-target candidates with phenotypic data is very useful for improving the identification of drug-targets. The integration of phenotypic data into drug-target candidates yielded up to a twofold precision improvement. In the opposite direction, leveraging drug-phenotype candidates with target data also yielded a significant enhancement in performance. The modeling described in the current study is simple and efficient and has applications at large scale in drug repurposing and drug safety through the identification of the mechanisms of action of biological effects.
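
The leveraging step described in this record — boosting a drug-target candidate when the drug's reported adverse effects corroborate phenotypes predicted for the target — can be sketched in a few lines. All names, scores, and the boost factor below are hypothetical illustrations, not the authors' actual model:

```python
# Hypothetical sketch of phenotypic leveraging: a drug-target score from
# 3D similarity is boosted when some adverse effect reported for the drug
# is also predicted for the target. Data and boost factor are invented.

def leverage(drug_target_scores, target_ades, drug_ades, boost=2.0):
    """Multiply a drug-target score by `boost` when the drug and the
    target share at least one adverse effect (phenotypic corroboration)."""
    out = {}
    for (drug, target), score in drug_target_scores.items():
        shared = target_ades.get(target, set()) & drug_ades.get(drug, set())
        out[(drug, target)] = score * (boost if shared else 1.0)
    return out

dt_scores = {("drugA", "T1"): 0.4, ("drugA", "T2"): 0.4}
target_ades = {"T1": {"nausea"}, "T2": {"rash"}}
drug_ades = {"drugA": {"nausea"}}
res = leverage(dt_scores, target_ades, drug_ades)
print(res[("drugA", "T1")], res[("drugA", "T2")])  # 0.8 0.4
```

In the paper the corroborating signal comes from SIDER phenotypes and ChEMBL targets; here it is reduced to a set intersection purely to show the direction of information flow.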

  11. 76 FR 43743 - Pipeline Safety: Meetings of the Technical Pipeline Safety Standards Committee and the Technical...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-21

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2011-0127] Pipeline Safety: Meetings of the Technical Pipeline Safety Standards Committee and the Technical Hazardous Liquid Pipeline Safety Standards Committee AGENCY: Pipeline and Hazardous Materials...

  12. 77 FR 16471 - Pipeline Safety: Implementation of the National Registry of Pipeline and Liquefied Natural Gas...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-21

    ... Registry of Pipeline and Liquefied Natural Gas Operators AGENCY: Pipeline and Hazardous Materials Safety... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Parts... Register (75 FR 72878) titled: ``Pipeline Safety: Updates to Pipeline and Liquefied Natural Gas Reporting...

  13. Targeting legume loci: A comparison of three methods for target enrichment bait design in Leguminosae phylogenomics.

    PubMed

    Vatanparast, Mohammad; Powell, Adrian; Doyle, Jeff J; Egan, Ashley N

    2018-03-01

    The development of pipelines for locus discovery has spurred the use of target enrichment for plant phylogenomics. However, few studies have compared pipelines from locus discovery and bait design, through validation, to tree inference. We compared three methods within Leguminosae (Fabaceae) and present a workflow for future efforts. Using 30 transcriptomes, we compared Hyb-Seq, MarkerMiner, and the Yang and Smith (Y&S) pipelines for locus discovery, validated 7501 baits targeting 507 loci across 25 genera via Illumina sequencing, and inferred gene and species trees via concatenation- and coalescent-based methods. Hyb-Seq discovered loci with the longest mean length. MarkerMiner discovered the most conserved loci with the least flagged as paralogous. Y&S offered the most parsimony-informative sites and putative orthologs. Target recovery averaged 93% across taxa. We optimized our targeted locus set based on a workflow designed to minimize paralog/ortholog conflation and thus present 423 loci for legume phylogenomics. Methods differed across criteria important for phylogenetic marker development. We recommend Hyb-Seq as a method that may be useful for most phylogenomic projects. Our targeted locus set is a resource for future, community-driven efforts to reconstruct the legume tree of life.

  14. Feasibility of CRISPR-Cas9-Based In Vitro Drug Target Identification for Personalized Prostate Cancer Medicine

    DTIC Science & Technology

    2017-09-01

    AWARD NUMBER: W81XWH-16-1-0502. TITLE: Feasibility of CRISPR-Cas9-Based In Vitro Drug Target Identification for Personalized Prostate Cancer Medicine. Approved for Public Release; Distribution Unlimited. ABSTRACT: This study tests the feasibility of using CRISPR-Cas9 to

  15. Identification of neuronal target genes for CCAAT/Enhancer Binding Proteins

    PubMed Central

    Kfoury, N.; Kapatos, G.

    2009-01-01

    CCAAT/Enhancer Binding Proteins (C/EBPs) play pivotal roles in development and plasticity of the nervous system. Identification of the physiological targets of C/EBPs (C/EBP target genes) should therefore provide insight into the underlying biology of these processes. We used unbiased genome-wide mapping to identify 115 C/EBPβ target genes in PC12 cells that include transcription factors, neurotransmitter receptors, ion channels, protein kinases and synaptic vesicle proteins. C/EBPβ binding sites were located primarily within introns, suggesting novel regulatory functions, and were associated with binding sites for other developmentally important transcription factors. Experiments using dominant negatives showed C/EBPβ to repress transcription of a subset of target genes. Target genes in rat brain were subsequently found to preferentially bind C/EBPα, β and δ. Analysis of the hippocampal transcriptome of C/EBPβ knockout mice revealed dysregulation of a high percentage of transcripts identified as C/EBP target genes. These results support the hypothesis that C/EBPs play non-redundant roles in the brain. PMID:19103292

  16. 75 FR 45591 - Pipeline Safety: Notice of Technical Pipeline Safety Advisory Committee Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-03

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... Committee Meetings AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. ACTION... safety standards, risk assessments, and safety policies for natural gas pipelines and for hazardous...

  17. Identification of NPM and DDX5 as Therapeutic Targets in TSC

    DTIC Science & Technology

    2017-12-01

    Individuals with TSC develop benign tumors in multiple organs, including the retina, skin, lung, kidney and brain. The identification of valid targets in TSC... However, these lesions can

  18. Multinode reconfigurable pipeline computer

    NASA Technical Reports Server (NTRS)

    Nosenchuck, Daniel M. (Inventor); Littman, Michael G. (Inventor)

    1989-01-01

    A multinode parallel-processing computer is made up of a plurality of interconnected, large-capacity nodes, each including a reconfigurable pipeline of functional units such as Integer Arithmetic Logic Processors, Floating Point Arithmetic Processors, Special Purpose Processors, etc. The reconfigurable pipeline of each node is connected to a multiplane memory by a Memory-ALU switch NETwork (MASNET). The reconfigurable pipeline includes three (3) basic substructures formed from functional units which have been found to be sufficient to perform the bulk of all calculations. The MASNET controls the flow of signals from the memory planes to the reconfigurable pipeline and vice versa. The nodes are connectable together by an internode data router (hyperspace router) so as to form a hypercube configuration. The capability of the nodes to conditionally configure the pipeline at each tick of the clock, without requiring a pipeline flush, permits many powerful algorithms to be implemented directly.

  19. Natural gas pipeline leaks across Washington, DC.

    PubMed

    Jackson, Robert B; Down, Adrian; Phillips, Nathan G; Ackley, Robert C; Cook, Charles W; Plata, Desiree L; Zhao, Kaiguang

    2014-01-01

    Pipeline safety in the United States has increased in recent decades, but incidents involving natural gas pipelines still cause an average of 17 fatalities and $133 M in property damage annually. Natural gas leaks are also the largest anthropogenic source of the greenhouse gas methane (CH4) in the U.S. To reduce pipeline leakage and increase consumer safety, we deployed a Picarro G2301 Cavity Ring-Down Spectrometer in a car, mapping 5893 natural gas leaks (2.5 to 88.6 ppm CH4) across 1500 road miles of Washington, DC. The δ(13)C-isotopic signatures of the methane (-38.2‰ ± 3.9‰ s.d.) and ethane (-36.5 ± 1.1 s.d.) and the CH4:C2H6 ratios (25.5 ± 8.9 s.d.) closely matched the pipeline gas (-39.0‰ and -36.2‰ for methane and ethane; 19.0 for CH4/C2H6). Emissions from four street leaks ranged from 9200 to 38,200 L CH4 day(-1) each, comparable to natural gas used by 1.7 to 7.0 homes, respectively. At 19 tested locations, 12 potentially explosive (Grade 1) methane concentrations of 50,000 to 500,000 ppm were detected in manholes. Financial incentives and targeted programs among companies, public utility commissions, and scientists to reduce leaks and replace old cast-iron pipes will improve consumer safety and air quality, save money, and lower greenhouse gas emissions.
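
The leak attribution and grading logic in this record can be sketched as a couple of threshold checks. The δ13C reference (-39.0‰) and the ≥50,000 ppm potentially explosive (Grade 1) manhole readings come from the abstract; the matching tolerance and the simplified two-grade scheme are assumptions for illustration:

```python
# Illustrative checks, not the authors' analysis code. The reference
# value and the Grade 1 threshold are quoted in the abstract; `tol` and
# the two-grade scheme are simplifying assumptions.
PIPELINE_D13C = -39.0  # per mil, delta-13C of pipeline methane

def is_pipeline_gas(d13c, tol=4.0):
    """Does a measured delta-13C signature match pipeline gas within tol?"""
    return abs(d13c - PIPELINE_D13C) <= tol

def leak_grade(ppm):
    """Simplified grading: Grade 1 (potentially explosive) at >= 50,000 ppm."""
    return 1 if ppm >= 50_000 else 2

print(is_pipeline_gas(-38.2))  # True: the survey mean matches pipeline gas
print(leak_grade(500_000), leak_grade(88.6))  # 1 2
```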

  20. Targeted nanodiamonds for identification of subcellular protein assemblies in mammalian cells

    PubMed Central

    Lake, Michael P.; Bouchard, Louis-S.

    2017-01-01

    Transmission electron microscopy (TEM) can be used to successfully determine the structures of proteins. However, such studies are typically done ex situ after extraction of the protein from the cellular environment. Here we describe an application for nanodiamonds as targeted intensity contrast labels in biological TEM, using the nuclear pore complex (NPC) as a model macroassembly. We demonstrate that delivery of antibody-conjugated nanodiamonds to live mammalian cells using maltotriose-conjugated polypropylenimine dendrimers results in efficient localization of nanodiamonds to the intended cellular target. We further identify signatures of nanodiamonds under TEM that allow for unambiguous identification of individual nanodiamonds from a resin-embedded, OsO4-stained environment. This is the first demonstration of nanodiamonds as labels for nanoscale TEM-based identification of subcellular protein assemblies. These results, combined with the unique fluorescence properties and biocompatibility of nanodiamonds, represent an important step toward the use of nanodiamonds as markers for correlated optical/electron bioimaging. PMID:28636640

  1. Detection and Identification of Multiple Stationary Human Targets Via Bio-Radar Based on the Cross-Correlation Method

    PubMed Central

    Zhang, Yang; Chen, Fuming; Xue, Huijun; Li, Zhao; An, Qiang; Wang, Jianqi; Zhang, Yang

    2016-01-01

    Ultra-wideband (UWB) radar has been widely used for detecting human physiological signals (respiration, movement, etc.) in the fields of rescue, security, and medicine owing to its high penetrability and range resolution. In these applications, especially in rescue after disaster (earthquake, collapse, mine accident, etc.), the presence, number, and location of the trapped victims to be detected and rescued are the key issues of concern. Ample research has been done on the first issue, whereas the identification and localization of multi-targets remains a challenge. False positive and negative identification results are two common problems associated with the detection of multiple stationary human targets. This is mainly because the energy of the signal reflected from the target close to the receiving antenna is considerably stronger than those of the targets at further range, often leading to missing or false recognition if the identification method is based on the energy of the respiratory signal. Therefore, a novel method based on cross-correlation is proposed in this paper that is based on the relativity and periodicity of the signals, rather than on the energy. The validity of this method is confirmed through experiments using different scenarios; the results indicate a discernible improvement in the detection precision and identification of the multiple stationary targets. PMID:27801795
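
The core idea — scoring a candidate range bin by the periodicity of its slow-time signal rather than its echo energy — can be illustrated with a toy lagged-correlation check. The signals, breathing rate, and noise level below are synthetic assumptions, not the paper's processing chain:

```python
# Toy illustration: a weak but periodic 'far target' outranks pure noise
# when ranked by lagged self-correlation, even though its energy is low.
import math
import random

def norm_corr(x, y):
    """Normalized (Pearson) correlation of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def periodicity_score(sig, lag):
    """Correlate the signal with a lagged copy of itself; a breathing
    target scores high regardless of its absolute echo energy."""
    return norm_corr(sig[:-lag], sig[lag:])

random.seed(0)
freq = 0.05  # assumed breathing cycles per slow-time sample
far = [0.1 * math.sin(2 * math.pi * freq * t) for t in range(200)]
noise = [random.gauss(0.0, 0.1) for _ in range(200)]
lag = round(1 / freq)  # one breathing period in samples
print(periodicity_score(far, lag) > periodicity_score(noise, lag))  # True
```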

  2. Detection and Identification of Multiple Stationary Human Targets Via Bio-Radar Based on the Cross-Correlation Method.

    PubMed

    Zhang, Yang; Chen, Fuming; Xue, Huijun; Li, Zhao; An, Qiang; Wang, Jianqi; Zhang, Yang

    2016-10-27

    Ultra-wideband (UWB) radar has been widely used for detecting human physiological signals (respiration, movement, etc.) in the fields of rescue, security, and medicine owing to its high penetrability and range resolution. In these applications, especially in rescue after disaster (earthquake, collapse, mine accident, etc.), the presence, number, and location of the trapped victims to be detected and rescued are the key issues of concern. Ample research has been done on the first issue, whereas the identification and localization of multi-targets remains a challenge. False positive and negative identification results are two common problems associated with the detection of multiple stationary human targets. This is mainly because the energy of the signal reflected from the target close to the receiving antenna is considerably stronger than those of the targets at further range, often leading to missing or false recognition if the identification method is based on the energy of the respiratory signal. Therefore, a novel method based on cross-correlation is proposed in this paper that is based on the relativity and periodicity of the signals, rather than on the energy. The validity of this method is confirmed through experiments using different scenarios; the results indicate a discernible improvement in the detection precision and identification of the multiple stationary targets.

  3. Comparison of conventional ultrasonography and ultrasonography-computed tomography fusion imaging for target identification using digital/real hybrid phantoms: a preliminary study.

    PubMed

    Soyama, Takeshi; Sakuhara, Yusuke; Kudo, Kohsuke; Abo, Daisuke; Wang, Jeff; Ito, Yoichi M; Hasegawa, Yu; Shirato, Hiroki

    2016-07-01

    This preliminary study compared ultrasonography-computed tomography (US-CT) fusion imaging and conventional ultrasonography (US) for accuracy and time required for target identification using a combination of real phantoms and sets of digitally modified computed tomography (CT) images (digital/real hybrid phantoms). In this randomized prospective study, 27 spheres visible on B-mode US were placed at depths of 3.5, 8.5, and 13.5 cm (nine spheres each). All 27 spheres were digitally erased from the CT images, and a radiopaque sphere was digitally placed at each of the 27 locations to create 27 different sets of CT images. Twenty clinicians were instructed to identify the sphere target using US alone and fusion imaging. The accuracy of target identification of the two methods was compared using McNemar's test. The mean time required for target identification and error distances were compared using paired t tests. At all three depths, target identification was more accurate and the mean time required for target identification was significantly less with US-CT fusion imaging than with US alone, and the mean error distances were also shorter with US-CT fusion imaging. US-CT fusion imaging was superior to US alone in terms of accurate and rapid identification of target lesions.
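
The paired accuracy comparison the authors run (McNemar's test on targets found by one method but not the other) reduces to the two discordant counts. The counts below are invented for illustration; only the test statistic itself is standard:

```python
# Continuity-corrected McNemar statistic from the discordant pairs:
# b = targets identified by US alone only, c = by fusion imaging only.
# The counts are hypothetical, not the study's data.

def mcnemar_chi2(b, c):
    """McNemar chi-square with continuity correction (1 degree of freedom)."""
    return (abs(b - c) - 1) ** 2 / (b + c) if (b + c) else 0.0

chi2 = mcnemar_chi2(b=3, c=15)
print(round(chi2, 2))  # 6.72; exceeds 3.84, so p < 0.05 at 1 df
```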

  4. ASTROPOP: ASTROnomical Polarimetry and Photometry pipeline

    NASA Astrophysics Data System (ADS)

    Campagnolo, Julio C. N.

    2018-05-01

    AstroPOP reduces almost any CCD photometry and image polarimetry data. For photometry reduction, the code performs source finding, aperture and PSF photometry, astrometry calibration using different automated and non-automated methods, and automated source identification and magnitude calibration based on online and local catalogs. For polarimetry, the code resolves linear and circular Stokes parameters produced by image beam splitter or polarizer polarimeters. In addition to the modular functions, ready-to-use pipelines based on configuration files and header keys are also provided with the code. AstroPOP was initially developed to reduce data from the IAGPOL polarimeter installed at Observatório Pico dos Dias (Brazil).

  5. Identification and validation nucleolin as a target of curcumol in nasopharyngeal carcinoma cells.

    PubMed

    Wang, Juan; Wu, Jiacai; Li, Xumei; Liu, Haowei; Qin, Jianli; Bai, Zhun; Chi, Bixia; Chen, Xu

    2018-06-30

    Identification of the specific protein target(s) of a drug is a critical step in unraveling its mechanism of action (MOA), particularly for natural products. Curcumol, isolated from the well-known Chinese medicinal plant Curcuma zedoaria, has been shown to possess multiple biological activities. It can inhibit nasopharyngeal carcinoma (NPC) proliferation and induce apoptosis, but its target protein(s) in NPC cells remain unclear. In this study, we employed a mass spectrometry-based chemical proteomics approach to reveal the possible protein targets of curcumol in NPC cells. Cellular thermal shift assay (CETSA), molecular docking and cell-based assays were used to validate the binding interactions. Chemical proteomics capture uncovered NCL as a target of curcumol in NPC cells, and molecular docking showed that curcumol binds NCL with a binding free energy of -7.8 kcal/mol. Cell function analysis found that curcumol treatment leads to degradation of NCL in NPC cells, while showing only slight effects on NP69 cells. In conclusion, our results provide evidence that NCL is a target protein of curcumol. We revealed that the anti-cancer effects of curcumol in NPC cells are mediated, at least in part, by NCL inhibition. Many natural products show high bioactivity, yet their mechanisms of action are poorly understood or completely unknown. Understanding the MOA of natural drugs makes it possible to fully exploit their therapeutic potential and minimize their adverse side effects. Identification of the specific protein target(s) of a drug is a critical step in unraveling its MOA. Compound-centric chemical proteomics is a classic chemical proteomics approach that integrates chemical synthesis with cell biology and mass spectrometry (MS) to identify the protein targets of natural products, determine a drug's mechanism of action, describe its toxicity, and identify possible causes of off-target effects. It is an affinity-based chemical proteomics method for identifying small molecule-protein interactions.

  6. Conceptual design of multi-source CCS pipeline transportation network for Polish energy sector

    NASA Astrophysics Data System (ADS)

    Isoli, Niccolo; Chaczykowski, Maciej

    2017-11-01

    The aim of this study was to identify an optimal CCS transport infrastructure for the Polish energy sector with regard to a selected European Commission Energy Roadmap 2050 scenario. The work covers identification of the offshore storage site location, CO2 pipeline network design and sizing for deployment at a national scale, along with a CAPEX analysis. It was conducted for the worst-case scenario, wherein the power plants operate under full-load conditions. The input data for the evaluation of CO2 flow rates (flue gas composition) were taken from a selected cogeneration plant with a maximum electric capacity of 620 MW, and the results were extrapolated from these data given the power outputs of the remaining units. A graph search algorithm was employed to estimate the cost of pipeline infrastructure to transport 95 MT of CO2 annually, which amounts to about 612.6 M€. Additional pipeline infrastructure costs will have to be incurred after 9 years of operation of the system due to limited storage site capacity. The results show that CAPEX estimates for CO2 pipeline infrastructure cannot rely on natural gas infrastructure data, since the two systems differ in pipe wall thickness, which affects material cost.
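
The graph search mentioned in this record is not specified further; one common choice for sizing a multi-source network is a minimum spanning tree over candidate corridors, with cost proportional to route length. The sketch below uses Prim's algorithm on an invented three-node grid (node names and distances are placeholders, not the study's network):

```python
# Minimum spanning tree over candidate pipeline corridors via Prim's
# algorithm; edge weights are route lengths in km. Topology is invented.
import heapq

def mst_length(edges, start):
    """Total km of pipe in a minimum spanning tree rooted at `start`.
    `edges` maps node -> list of (neighbor, km)."""
    seen, total = {start}, 0.0
    heap = [(km, nbr) for nbr, km in edges[start]]
    heapq.heapify(heap)
    while heap:
        km, node = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        total += km
        for nbr, w in edges[node]:
            if nbr not in seen:
                heapq.heappush(heap, (w, nbr))
    return total

grid = {
    "storage": [("plantA", 120.0), ("plantB", 200.0)],
    "plantA": [("storage", 120.0), ("plantB", 90.0)],
    "plantB": [("storage", 200.0), ("plantA", 90.0)],
}
print(mst_length(grid, "storage"))  # 210.0 (storage-plantA-plantB)
```

A length-based MST ignores pressure-drop and capacity constraints, which a real CAPEX model would add on top; it only illustrates the routing side of the problem.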

  7. Rolling Band Artifact Flagging in the Kepler Data Pipeline

    NASA Astrophysics Data System (ADS)

    Clarke, Bruce; Kolodziejczak, Jeffery J; Caldwell, Douglas A.

    2014-06-01

    Instrument-induced artifacts in the raw Kepler pixel data include time-varying crosstalk from the fine guidance sensor (FGS) clock signals, manifestations of drifting moiré pattern as locally correlated nonstationary noise and rolling bands in the images. These systematics find their way into the calibrated pixel time series and ultimately into the target flux time series. The Kepler pipeline module Dynablack models the FGS crosstalk artifacts using a combination of raw science pixel data, full frame images, reverse-clocked pixel data and ancillary temperature data. The calibration module (CAL) uses the fitted Dynablack models to remove FGS crosstalk artifacts in the calibrated pixels by adjusting the black level correction per cadence. Dynablack also detects and flags spatial regions and time intervals of strong time-varying black-level. These rolling band artifact (RBA) flags are produced on a per row per cadence basis by searching for transit signatures in the Dynablack fit residuals. The Photometric Analysis module (PA) generates per target per cadence data quality flags based on the Dynablack RBA flags. Proposed future work includes using the target data quality flags as a basis for de-weighting in the Presearch Data Conditioning (PDC), Transiting Planet Search (TPS) and Data Validation (DV) pipeline modules. We discuss the effectiveness of RBA flagging for downstream users and illustrate with some affected light curves. We also discuss the implementation of Dynablack in the Kepler data pipeline and present results regarding the improvement in calibrated pixels and the expected improvement in cotrending performance as a result of including FGS corrections in the calibration. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.

  8. Pipeline repair development in support of the Oman to India gas pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abadie, W.; Carlson, W.

    1995-12-01

    This paper provides a summary of development which has been conducted to date for the ultra deep, diverless pipeline repair system for the proposed Oman to India Gas Pipeline. The work has addressed critical development areas involving testing and/or prototype development of tools and procedures required to perform a diverless pipeline repair in water depths of up to 3,525 m.

  9. Comparison of carbon footprints of steel versus concrete pipelines for water transmission.

    PubMed

    Chilana, Lalit; Bhatt, Arpita H; Najafi, Mohammad; Sattler, Melanie

    2016-05-01

    The global demand for water transmission and service pipelines is expected to more than double between 2012 and 2022. This study compared the carbon footprint of the two most common materials used for large-diameter water transmission pipelines, steel pipe (SP) and prestressed concrete cylinder pipe (PCCP). A planned water transmission pipeline in Texas was used as a case study. Four life-cycle phases for each material were considered: material production and pipeline fabrication, pipe transportation to the job site, pipe installation in the trench, and operation of the pipeline. In each phase, the energy consumed and the CO2-equivalent emissions were quantified. It was found that pipe manufacturing consumed a large amount of energy, and thus contributed more than 90% of life cycle carbon emissions for both kinds of pipe. Steel pipe had 64% larger CO2-eq emissions from manufacturing compared to PCCP. For the transportation phase, PCCP consumed more fuel due to its heavy weight, and therefore had larger CO2-eq emissions. Fuel consumption by construction equipment for installation of pipe was found to be similar for steel pipe and PCCP. Overall, steel had a 32% larger footprint due to greater energy used during manufacturing. This study compared the carbon footprint of two large-diameter water transmission pipeline materials, steel and prestressed concrete cylinder, considering four life-cycle phases for each. The study provides information that project managers can incorporate into their decision-making process concerning pipeline materials. It also provides information concerning the most important phases of the pipeline life cycle to target for emission reductions.
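
The life-cycle roll-up behind the ">90% from manufacturing" finding is simple arithmetic over the four phases. The per-phase figures below are placeholders chosen to reproduce the qualitative result, not the paper's data:

```python
# Illustrative life-cycle roll-up; phase values are assumed. Only the
# four-phase structure and the 'manufacturing dominates' finding come
# from the abstract.
phases = {  # tonnes CO2-eq per km of pipeline (hypothetical)
    "manufacturing": 940.0,
    "transportation": 25.0,
    "installation": 20.0,
    "operation": 15.0,
}
total = sum(phases.values())
share = phases["manufacturing"] / total
print(f"manufacturing: {share:.0%} of {total:.0f} t CO2-eq/km")
```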

  10. 49 CFR 195.210 - Pipeline location.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Pipeline location. 195.210 Section 195.210 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY... PIPELINE Construction § 195.210 Pipeline location. (a) Pipeline right-of-way must be selected to avoid, as...

  11. 77 FR 2126 - Pipeline Safety: Implementation of the National Registry of Pipeline and Liquefied Natural Gas...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-13

    ... Natural Gas Operators AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No...: ``Pipeline Safety: Updates to Pipeline and Liquefied Natural Gas Reporting Requirements.'' The final rule...

  12. Data Validation in the Kepler Science Operations Center Pipeline

    NASA Technical Reports Server (NTRS)

    Wu, Hayley; Twicken, Joseph D.; Tenenbaum, Peter; Clarke, Bruce D.; Li, Jie; Quintana, Elisa V.; Allen, Christopher; Chandrasekaran, Hema; Jenkins, Jon M.; Caldwell, Douglas A.; hide

    2010-01-01

    We present an overview of the Data Validation (DV) software component and its context within the Kepler Science Operations Center (SOC) pipeline and overall Kepler Science mission. The SOC pipeline performs a transiting planet search on the corrected light curves for over 150,000 targets across the focal plane array. We discuss the DV strategy for automated validation of Threshold Crossing Events (TCEs) generated in the transiting planet search. For each TCE, a transiting planet model is fitted to the target light curve. A multiple planet search is conducted by repeating the transiting planet search on the residual light curve after the model flux has been removed; if an additional detection occurs, a planet model is fitted to the new TCE. A suite of automated tests are performed after all planet candidates have been identified. We describe a centroid motion test to determine the significance of the motion of the target photocenter during transit and to estimate the coordinates of the transit source within the photometric aperture; a series of eclipsing binary discrimination tests on the parameters of the planet model fits to all transits and the sequences of odd and even transits; and a statistical bootstrap to assess the likelihood that the TCE would have been generated purely by chance given the target light curve with all transits removed. Keywords: photometry, data validation, Kepler, Earth-size planets
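
The statistical bootstrap mentioned at the end of this record can be caricatured in a few lines: resample the transit-free residuals and ask how often chance alone produces a detection statistic as large as the observed one. Everything below is synthetic; it is not SOC pipeline code:

```python
# Toy bootstrap false-alarm estimate: draw `n_transit` points from the
# transit-free residual light curve and count how often their summed
# 'depth' matches or beats the observed statistic. All values synthetic.
import random

def bootstrap_false_alarm(residuals, observed_stat, n_transit, trials=2000):
    """Fraction of resampled transit sequences at least as deep as observed."""
    hits = 0
    for _ in range(trials):
        stat = -sum(random.choice(residuals) for _ in range(n_transit))
        if stat >= observed_stat:
            hits += 1
    return hits / trials

random.seed(1)
residuals = [random.gauss(0.0, 1e-4) for _ in range(1000)]  # relative flux
p = bootstrap_false_alarm(residuals, observed_stat=5e-4, n_transit=4)
print(p < 0.05)  # a ~2.5-sigma cumulative dip is rarely pure noise
```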

  13. 75 FR 72877 - Pipeline Safety: Updates to Pipeline and Liquefied Natural Gas Reporting Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-26

    ... liquid pipelines, and liquefied natural gas (LNG) facilities. These revisions will enhance PHMSA's... of natural gas pipelines, hazardous liquid pipelines, and LNG facilities. Specifically, PHMSA... commodity transported, and type of commodity transported. 8. Modify hazardous liquid operator telephonic...

  14. 78 FR 42889 - Pipeline Safety: Reminder of Requirements for Utility LP-Gas and LPG Pipeline Systems

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-18

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part 192 [Docket No. PHMSA-2013-0097] Pipeline Safety: Reminder of Requirements for Utility LP-Gas and LPG Pipeline Systems AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION...

  15. Construction of a directed hammerhead ribozyme library: towards the identification of optimal target sites for antisense-mediated gene inhibition.

    PubMed Central

    Pierce, M L; Ruffner, D E

    1998-01-01

    Antisense-mediated gene inhibition uses short complementary DNA or RNA oligonucleotides to block expression of any mRNA of interest. A key parameter in the success or failure of an antisense therapy is the identification of a suitable target site on the chosen mRNA. Ultimately, the accessibility of the target to the antisense agent determines target suitability. Since accessibility is a function of many complex factors, it is currently beyond our ability to predict. Consequently, identification of the most effective target(s) requires examination of every site. Towards this goal, we describe a method to construct directed ribozyme libraries against any chosen mRNA. The library contains nearly equal amounts of ribozymes targeting every site on the chosen transcript and the library only contains ribozymes capable of binding to that transcript. Expression of the ribozyme library in cultured cells should allow identification of optimal target sites under natural conditions, subject to the complexities of a fully functional cell. Optimal target sites identified in this manner should be the most effective sites for therapeutic intervention. PMID:9801305

  16. Method and system for pipeline communication

    DOEpatents

    Richardson,; John, G [Idaho Falls, ID

    2008-01-29

    A pipeline communication system and method includes a pipeline having a surface extending along at least a portion of the length of the pipeline. A conductive bus is formed on and extends along a portion of the surface of the pipeline. The conductive bus includes a first conductive trace and a second conductive trace, the first and second conductive traces being adapted to conformally couple with the pipeline at the surface extending along at least a portion of its length. A transmitter for sending information along the conductive bus on the pipeline is coupled thereto, and a receiver for receiving the information from the conductive bus on the pipeline is also coupled to the conductive bus.

  17. RISC RNA sequencing for context-specific identification of in vivo microRNA targets.

    PubMed

    Matkovich, Scot J; Van Booven, Derek J; Eschenbacher, William H; Dorn, Gerald W

    2011-01-07

    MicroRNAs (miRs) are expanding our understanding of cardiac disease and have the potential to transform cardiovascular therapeutics. One miR can target hundreds of individual mRNAs, but existing methodologies are not sufficient to accurately and comprehensively identify these mRNA targets in vivo. Our objective was to develop methods permitting identification of in vivo miR targets in an unbiased manner, using massively parallel sequencing of mouse cardiac transcriptomes in combination with sequencing of mRNA associated with mouse cardiac RNA-induced silencing complexes (RISCs). We optimized techniques for expression profiling small amounts of RNA without introducing amplification bias and applied this to anti-Argonaute 2 immunoprecipitated RISCs (RISC-Seq) from mouse hearts. By comparing RNA-sequencing results of cardiac RISC and transcriptome from the same individual hearts, we defined 1645 mRNAs consistently targeted to mouse cardiac RISCs. We used this approach in hearts overexpressing miRs from Myh6 promoter-driven precursors (programmed RISC-Seq) to identify 209 in vivo targets of miR-133a and 81 in vivo targets of miR-499. Consistent with the fact that miR-133a and miR-499 have widely differing "seed" sequences and belong to different miR families, only 6 targets were common to miR-133a- and miR-499-programmed hearts. RISC-sequencing is a highly sensitive method for general RISC profiling and individual miR target identification in biological context and is applicable to any tissue and any disease state.
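
The set logic behind "programmed RISC-Seq" is easy to sketch: a transcript counts as a target of the overexpressed miR when it is RISC-enriched in programmed hearts but not in controls. The gene names below are hypothetical placeholders, not the paper's target lists:

```python
# Schematic of programmed RISC-Seq target calling with invented gene sets.

def mir_targets(risc_programmed, risc_control):
    """mRNAs attributed to the overexpressed miR: RISC-associated in the
    programmed hearts but not in control hearts."""
    return risc_programmed - risc_control

control = {"Actb", "Myh6", "Gapdh"}            # baseline RISC-associated mRNAs
mir133a_hearts = control | {"Ctgf", "Col1a1"}  # hypothetical gains
mir499_hearts = control | {"Sox6"}
print(sorted(mir_targets(mir133a_hearts, control)))  # ['Col1a1', 'Ctgf']
# Distinct seed sequences -> little overlap between the two target sets:
print(mir_targets(mir133a_hearts, control) & mir_targets(mir499_hearts, control))
```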

  18. Characterization and Validation of Transiting Planets in the Kepler and TESS Pipelines

    NASA Astrophysics Data System (ADS)

    Twicken, Joseph; Brownston, Lee; Catanzarite, Joseph; Clarke, Bruce; Cote, Miles; Girouard, Forrest; Li, Jie; McCauliff, Sean; Seader, Shawn; Tenenbaum, Peter; Wohler, Bill; Jenkins, Jon Michael; Batalha, Natalie; Bryson, Steve; Burke, Christopher; Caldwell, Douglas

    2015-08-01

    Light curves for Kepler targets are searched for transiting planet signatures in the Transiting Planet Search (TPS) component of the Science Operations Center (SOC) Processing Pipeline. Targets for which the detection threshold is exceeded are subsequently processed in the Data Validation (DV) Pipeline component. The primary functions of DV are to (1) characterize planets identified in the transiting planet search, (2) search for additional transiting planet signatures in light curves after modeled transit signatures have been removed, and (3) perform a comprehensive suite of diagnostic tests to aid in discrimination between true transiting planets and false positive detections. DV output products include extensive reports by target, one-page report summaries by planet candidate, and tabulated planet model fit and diagnostic test results. The DV products are employed by humans and automated systems to vet planet candidates identified in the pipeline. The final revision of the Kepler SOC codebase (9.3) was released in March 2015. It will be utilized to reprocess the complete Q1-Q17 data set later this year. At the same time, the SOC Pipeline codebase is being ported to support the Transiting Exoplanet Survey Satellite (TESS) Mission. TESS is expected to launch in 2017 and survey the entire sky for transiting exoplanets over a period of two years. We describe the final revision of the Kepler Data Validation component with emphasis on the diagnostic tests and reports. This revision also serves as the DV baseline for TESS. The diagnostic tests exploit the flux (i.e., light curve), centroid and pixel time series associated with each target to facilitate the determination of the true origin of each purported transiting planet signature. Candidate planet detections and DV products for Kepler are delivered to the Exoplanet Archive at the NASA Exoplanet Science Institute (NExScI). The Exoplanet Archive is located at exoplanetarchive.ipac.caltech.edu. 
Funding for the Kepler

  19. Current Pipelines for Neglected Diseases

    PubMed Central

    di Procolo, Paolo; Jommi, Claudio

    2014-01-01

    This paper scrutinises pipelines for Neglected Diseases (NDs) through freely accessible, at-least-weekly updated trials databases. It updates to 2012 the data provided by recent publications, and integrates these analyses with information on the location of trial coordinators and patient recruitment status. Additionally, it provides (i) disease-specific information to better understand the rationale for investments in NDs, and (ii) yearly data to understand investment trends. The search identified 650 clinical studies. Leishmaniasis, Arbovirus infection, and Dengue are the top three diseases by number of clinical studies. Disease diffusion risk seems to be the most important driver of clinical trial target choice, whereas the role played by disease prevalence and unmet need is controversial. The number of trials was stable between 2005 and 2010, with an increase in the last two years. Patient recruitment was completed for most studies (57.6%), and Phases II and III account for 35% and 28% of trials, respectively. The primary purpose of clinical investigations is prevention (49.3%), especially for infectious diseases with mosquitoes and sand flies as the vector, and treatment (43.2%), which is the primary target for parasitic diseases. Research centres and public organisations are the most important clinical study sponsors (58.9%), followed by the pharmaceutical industry (24.1%) and foundations and non-governmental organisations (9.3%). Many coordinator centres are located in less affluent countries (43.7%), whereas OECD countries and BRICS account for 34.7% and 17.5% of trials, respectively. Information was partially missing for some parameters. Notwithstanding, and despite its descriptive nature, this research has enhanced the evidence in the literature on pipelines for NDs. Future contributions may further investigate whether trial metrics are consistent with the characteristics of the countries concerned and the explicative variables of trials location, target

  20. Current pipelines for neglected diseases.

    PubMed

    di Procolo, Paolo; Jommi, Claudio

    2014-09-01

    This paper scrutinises pipelines for Neglected Diseases (NDs) through freely accessible, at-least-weekly updated trials databases. It updates to 2012 the data provided by recent publications, and integrates these analyses with information on the location of trial coordinators and patient recruitment status. Additionally, it provides (i) disease-specific information to better understand the rationale for investments in NDs, and (ii) yearly data to understand investment trends. The search identified 650 clinical studies. Leishmaniasis, Arbovirus infection, and Dengue are the top three diseases by number of clinical studies. Disease diffusion risk seems to be the most important driver of clinical trial target choice, whereas the role played by disease prevalence and unmet need is controversial. The number of trials was stable between 2005 and 2010, with an increase in the last two years. Patient recruitment was completed for most studies (57.6%), and Phases II and III account for 35% and 28% of trials, respectively. The primary purpose of clinical investigations is prevention (49.3%), especially for infectious diseases with mosquitoes and sand flies as the vector, and treatment (43.2%), which is the primary target for parasitic diseases. Research centres and public organisations are the most important clinical study sponsors (58.9%), followed by the pharmaceutical industry (24.1%) and foundations and non-governmental organisations (9.3%). Many coordinator centres are located in less affluent countries (43.7%), whereas OECD countries and BRICS account for 34.7% and 17.5% of trials, respectively. Information was partially missing for some parameters. Notwithstanding, and despite its descriptive nature, this research has enhanced the evidence in the literature on pipelines for NDs. Future contributions may further investigate whether trial metrics are consistent with the characteristics of the countries concerned and the explicative variables of trials location, target

  1. CloudNeo: a cloud pipeline for identifying patient-specific tumor neoantigens.

    PubMed

    Bais, Preeti; Namburi, Sandeep; Gatti, Daniel M; Zhang, Xinyu; Chuang, Jeffrey H

    2017-10-01

    We present CloudNeo, a cloud-based computational workflow for identifying patient-specific tumor neoantigens from next-generation sequencing data. Tumor-specific mutant peptides can be detected by the immune system through their interactions with the human leukocyte antigen complex, and neoantigen presence has recently been shown to correlate with antitumor T-cell immunity and the efficacy of checkpoint inhibitor therapy. However, the computing capabilities needed to identify neoantigens from genomic sequencing data are a limiting factor for understanding their role. This challenge has grown as cancer datasets become increasingly abundant, making them cumbersome to store and analyze on local servers. Our cloud-based pipeline provides scalable computation capabilities for neoantigen identification while eliminating the need to invest in local infrastructure for data transfer, storage or compute. The pipeline is a Common Workflow Language (CWL) implementation of human leukocyte antigen (HLA) typing using Polysolver or HLAminer, combined with custom scripts for mutant peptide identification and NetMHCpan for neoantigen prediction. We have demonstrated the efficacy of these pipelines on Amazon cloud instances through the Seven Bridges Genomics implementation of the NCI Cancer Genomics Cloud, which provides graphical interfaces for running and editing, infrastructure for workflow sharing and version tracking, and access to TCGA data. The CWL implementation is at: https://github.com/TheJacksonLaboratory/CloudNeo. For users who have obtained licenses for all internal software, integrated versions in CWL and on the Seven Bridges Cancer Genomics Cloud platform (https://cgc.sbgenomics.com/, recommended version) can be obtained by contacting the authors. jeff.chuang@jax.org. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
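    The mutant-peptide identification step described above can be pictured as a simple window enumeration around a somatic mutation. The sketch below is a hypothetical illustration of that idea, not CloudNeo's actual script; the function name, 0-based indexing, and the default 8-11mer lengths (typical for HLA class I binding prediction) are assumptions:

    ```python
    def mutant_peptides(protein, pos, alt, lengths=(8, 9, 10, 11)):
        """Enumerate candidate neoantigen peptides spanning a point mutation.

        protein: wild-type protein sequence (one-letter amino acid codes)
        pos:     0-based index of the mutated residue
        alt:     substituted amino acid
        """
        mutant = protein[:pos] + alt + protein[pos + 1:]
        peptides = set()
        for k in lengths:
            # every k-mer window that still contains the mutated residue
            for start in range(max(0, pos - k + 1), min(pos, len(mutant) - k) + 1):
                peptides.add(mutant[start:start + k])
        return sorted(peptides)
    ```

    Each returned peptide would then be scored against the patient's HLA alleles by a binding predictor such as NetMHCpan.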

  2. Advances in the Study of Aptamer-Protein Target Identification Using the Chromatographic Approach.

    PubMed

    Drabik, Anna; Ner-Kluza, Joanna; Mielczarek, Przemyslaw; Civit, Laia; Mayer, Günter; Silberring, Jerzy

    2018-06-01

    Ever since the development of the process known as the systematic evolution of ligands by exponential enrichment (SELEX), aptamers have been widely used in a variety of studies, including the exploration of new diagnostic tools and the discovery of new treatment methods. Aptamers' ability to bind to proteins with high affinity and specificity, often compared to that of antibodies, enables the search for potential cancer biomarkers and helps us understand the mechanisms of carcinogenesis. The blind spot of those investigations is usually the difficulty in the selective extraction of targets attached to the aptamer. There are many studies describing the cell SELEX for the prime choice of aptamers toward living cancer cells or even whole tumors in the animal models. However, a dilemma arises when a large number of proteins are being identified as potential targets, which is often the case. In this article, we present a new analytical approach designed to selectively target proteins bound to aptamers. During studies, we have focused on the unambiguous identification of the molecular targets of aptamers characterized by high specificity to the prostate cancer cells. We have compared four assay approaches using electrophoretic and chromatographic methods for "fishing out" aptamer protein targets followed by mass spectrometry identification. We have established a new methodology, based on the fluorescent-tagged oligonucleotides commonly used for flow-cytometry experiments or as optic aptasensors, that allowed the detection of specific aptamer-protein interactions by mass spectrometry. The use of atto488-labeled aptamers for the tracking of the formation of specific aptamer-target complexes provides the possibility of studying putative protein counterparts without needing to apply enrichment techniques. 
Significantly, changes in the hydrophobic properties of atto488-labeled aptamer-protein complexes facilitate their separation by reverse-phase chromatography combined with

  3. 76 FR 75894 - Information Collection Activities: Pipelines and Pipeline Rights-of-Way; Submitted for Office of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-05

    ... pipelines `` * * * for the transportation of oil, natural gas, sulphur, or other minerals, or under such...) Submit repair report 3 1008(f) Submit report of pipeline failure analysis...... 30 1008(g) Submit plan of.... BSEE-2011-0002; OMB Control Number 1010-0050] Information Collection Activities: Pipelines and Pipeline...

  4. 77 FR 16052 - Information Collection Activities: Pipelines and Pipeline Rights-of-Way; Submitted for Office of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-19

    ... submerged lands of the OCS for pipelines ``* * * for the transportation of oil, natural gas, sulphur, or... ensure that the pipeline, as constructed, will provide for safe transportation of oil and gas and other...-0002; OMB Control Number 1014-0016] Information Collection Activities: Pipelines and Pipeline Rights...

  5. Hydrocarbons pipeline transportation risk assessment

    NASA Astrophysics Data System (ADS)

    Zanin, A. V.; Milke, A. A.; Kvasov, I. N.

    2018-04-01

    The paper addresses the issue of risk assessment in pipeline transportation under Arctic conditions. Pipeline quality characteristics in the given environment have been assessed. To achieve the stated objective, a mathematical model of the pipeline was designed and visualized using the software product SOLIDWORKS. The results obtained with the mathematical model made it possible to define the optimal characteristics of a pipeline designed for the Arctic sea bottom. In the course of the research, the risks of avalanche pipe collapse were examined, internal longitudinal and circular loads acting on the pipeline were analyzed, and the hydrodynamic force of water impact was taken into consideration. The calculation can contribute to the further development of pipeline transport under the harsh climatic conditions of the Arctic shelf territory of the Russian Federation.

  6. Building Effective Pipelines to Increase Diversity in the Geosciences

    NASA Astrophysics Data System (ADS)

    Snow, E.; Robinson, C. R.; Neal-Mujahid, R.

    2017-12-01

    The U.S. Geological Survey (USGS) recognizes and understands the importance of a diverse workforce in advancing our science. Valuing Differences is one of the guiding principles of the USGS, and is the critical basis of the collaboration among the Youth and Education in Science (YES) program in the USGS Office of Science, Quality, and Integrity (OSQI), the Office of Diversity and Equal Opportunity (ODEO), and USGS science centers to build pipeline programs targeting diverse young scientists. Pipeline programs are robust, sustained relationships between two entities that provide a pathway from one to the other, in this case, from minority serving institutions to the USGS. The USGS has benefited from pipeline programs for many years. Our longest running program, with the University of Puerto Rico Mayaguez (UPR), is a targeted outreach and internship program that has been managed by USGS scientists in Florida since the mid-1980s. Originally begun as the Minority Participation in the Earth Sciences (MPES) Program, it has evolved over the years, and in its several forms has brought dozens of interns to the USGS. Based in part on that success, in 2006 USGS scientists in Woods Hole, MA worked with their Florida counterparts to build a pipeline program with City College of New York (CCNY). In this program, USGS scientists visit CCNY monthly, giving a symposium and meeting with students and faculty. The talks are so successful that the college created a course around them. In 2017, the CCNY and UPR programs brought 12 students to the USGS for summer internships. The CCNY model has been so successful that USGS is exploring creating similar pipeline programs. The YES office is coordinating with ODEO and USGS science centers to identify partner universities and build relationships that will lead to robust partnerships in which USGS scientists visit regularly to engage with faculty and students and recruit students for USGS internships. 
The ideal partner universities will have a

  7. Structure based drug discovery for designing leads for the non-toxic metabolic targets in multi drug resistant Mycobacterium tuberculosis.

    PubMed

    Kaur, Divneet; Mathew, Shalu; Nair, Chinchu G S; Begum, Azitha; Jainanarayan, Ashwin K; Sharma, Mukta; Brahmachari, Samir K

    2017-12-21

    The problem of drug resistance and bacterial persistence in tuberculosis is a cause of global alarm. Although the UN's Sustainable Development Goals for 2030 have targeted a TB-free world, the treatment gap persists and only a few new drug candidates are in the pipeline. In spite of the large body of information from medicinal chemistry to 'omics' data, there has been little effort from pharmaceutical companies to generate pipelines for the development of novel drug candidates against multi-drug-resistant Mycobacterium tuberculosis. In the present study, we describe an integrated methodology utilizing systems-level information to optimize ligand selection and lower the failure rates at the pre-clinical and clinical levels. Metabolic targets (Rv2763c, Rv3247c, Rv1094, Rv3607c, Rv3048c, Rv2965c, Rv2361c, Rv0865, Rv0321, Rv0098, Rv0390, Rv3588c, Rv2244, Rv2465c and Rv2607) in M. tuberculosis, identified using our previous systems biology and data-intensive genome-level analysis, have been used to design potential lead molecules, which are likely to be non-toxic. Various in silico drug discovery tools were utilized to generate small-molecule leads for each of the 15 targets with available crystal structures. The study resulted in the identification of 20 novel lead molecules, including 4 FDA-approved drugs (droxidopa, tetroxoprim, domperidone and nemonapride), which can be taken further for drug repurposing. This comprehensive integrated methodology, combining experimental and in silico approaches, has the potential to tackle not only the MDR form of Mtb but also the most important persister population of the bacterium, with the potential to reduce failures in TB drug discovery. 
We propose an integrated approach of systems and structural biology for identifying targets that address the high attrition rate in lead identification and drug development. We expect that this systems-level analysis will be applicable for identification of drug

  8. Pipeline surveillance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1976-03-01

    Each week, pilots of Michigan Wisconsin Pipe Line Co.'s aerial patrol fly over the entire 12,000 mi of the company's pipeline system to discover possible gas leaks and prevent accidental encroachment of pipeline right-of-way. (Leaks are uncommon, but as many as 10 potential trespassing incidents can occur along a route in a week.) Following the pipeline is facilitated by the warmth emitted by the line, which affects the plant life directly above it. Gas leaks may be indicated by a patch of brown vegetation or by sediment brought to water surfaces by escaping gas bubbles. The 1- and 2-engine patrol planes, which receive special permission from federal authorities to fly at low altitudes, undergo 100-h safety checks and frequent overhauls.

  9. Commanding Constellations (Pipeline Architecture)

    NASA Technical Reports Server (NTRS)

    Ray, Tim; Condron, Jeff

    2003-01-01

    Providing ground command software for constellations of spacecraft is a challenging problem. Reliable command delivery requires a feedback loop; for a constellation there will likely be an independent feedback loop for each constellation member. Each command must be sent via the proper Ground Station, which may change from one contact to the next (and may be different for different members). Dynamic configuration of the ground command software is usually required (e.g. directives to configure each member's feedback loop and assign the appropriate Ground Station). For testing purposes, there must be a way to insert command data at any level in the protocol stack. The Pipeline architecture described in this paper can support all these capabilities with a sequence of software modules (the pipeline), and a single self-identifying message format (for all types of command data and configuration directives). The Pipeline architecture is quite simple, yet it can solve some complex problems. The resulting solutions are conceptually simple, and therefore, reliable. They are also modular, and therefore, easy to distribute and extend. We first used the Pipeline architecture to design a CCSDS (Consultative Committee for Space Data Systems) Ground Telecommand system (to command one spacecraft at a time with a fixed Ground Station interface). This pipeline was later extended to include gateways to any of several Ground Stations. The resulting pipeline was then extended to handle a small constellation of spacecraft. The use of the Pipeline architecture allowed us to easily handle the increasing complexity. This paper will describe the Pipeline architecture, show how it was used to solve each of the above commanding situations, and how it can easily be extended to handle larger constellations.

  10. Main Pipelines Corrosion Monitoring Device

    NASA Astrophysics Data System (ADS)

    Anatoliy, Bazhenov; Galina, Bondareva; Natalia, Grivennaya; Sergey, Malygin; Mikhail, Goryainov

    2017-01-01

    The aim of the article is to substantiate a technical solution to the problem of monitoring corrosion changes in oil and gas pipelines using an electromagnetic NDT method. Pipeline wall thinning under operating conditions can lead to perforations and leakage of the transported product outside the pipeline, in most cases endangering human life and the environment. Monitoring corrosion changes in the pipeline's inner wall under operating conditions is complicated because pipelines are mainly made of structural steels whose conductive and magnetic properties hinder test-signal passage through the entire thickness of the object under study. The technical solution lies in monitoring internal corrosion changes in pipes under operating conditions in order to increase pipeline safety through automated prediction of reaching threshold pre-crash values due to corrosion.

  11. The druggable genome and support for target identification and validation in drug development.

    PubMed

    Finan, Chris; Gaulton, Anna; Kruger, Felix A; Lumbers, R Thomas; Shah, Tina; Engmann, Jorgen; Galver, Luana; Kelley, Ryan; Karlsson, Anneli; Santos, Rita; Overington, John P; Hingorani, Aroon D; Casas, Juan P

    2017-03-29

    Target identification (determining the correct drug targets for a disease) and target validation (demonstrating an effect of target perturbation on disease biomarkers and disease end points) are important steps in drug development. Clinically relevant associations of variants in genes encoding drug targets model the effect of modifying the same targets pharmacologically. To delineate drug development (including repurposing) opportunities arising from this paradigm, we connected complex disease- and biomarker-associated loci from genome-wide association studies to an updated set of genes encoding druggable human proteins, to agents with bioactivity against these targets, and, where there were licensed drugs, to clinical indications. We used this set of genes to inform the design of a new genotyping array, which will enable association studies of druggable genes for drug target selection and validation in human disease. Copyright © 2017, American Association for the Advancement of Science.

  12. The X-shooter pipeline

    NASA Astrophysics Data System (ADS)

    Goldoni, P.

    2011-03-01

    The X-shooter data reduction pipeline is an integral part of the X-shooter project; it allows the production of reduced data in physical quantities from the raw data produced by the instrument. The pipeline is based on the data reduction library developed by the X-shooter consortium, with contributions from France, the Netherlands and ESO, and it uses the Common Pipeline Library (CPL) developed at ESO. The pipeline has been developed for two main functions. The first is to monitor the operation of the instrument through the reduction of the acquired data, both at Paranal, for quick-look control, and in Garching, for a more thorough evaluation. The second is to allow optimized data reduction for a scientific user. In the following, I first outline the main steps of data reduction with the pipeline, then briefly show two examples of optimization of the results for science reduction.

  13. Toward better drug repositioning: prioritizing and integrating existing methods into efficient pipelines.

    PubMed

    Jin, Guangxu; Wong, Stephen T C

    2014-05-01

    Recycling old drugs, rescuing shelved drugs and extending patents' lives make drug repositioning an attractive form of drug discovery. Drug repositioning accounts for approximately 30% of the newly US Food and Drug Administration (FDA)-approved drugs and vaccines in recent years. The prevalence of drug-repositioning studies has resulted in a variety of innovative computational methods for the identification of new opportunities for the use of old drugs. Questions often arise when customizing or optimizing these methods into efficient drug-repositioning pipelines for alternative applications. Doing so requires a comprehensive understanding of the available methods, gained by evaluating both biological and pharmaceutical knowledge and the elucidated mechanisms of action of drugs. Here, we provide guidance for prioritizing and integrating drug-repositioning methods into specific drug-repositioning pipelines. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. ProteoCloud: a full-featured open source proteomics cloud computing pipeline.

    PubMed

    Muth, Thilo; Peters, Julian; Blackburn, Jonathan; Rapp, Erdmann; Martens, Lennart

    2013-08-02

    We here present the ProteoCloud pipeline, a freely available, full-featured cloud-based platform to perform computationally intensive, exhaustive searches in a cloud environment using five different peptide identification algorithms. ProteoCloud is entirely open source, and is built around an easy to use and cross-platform software client with a rich graphical user interface. This client allows full control of the number of cloud instances to initiate and of the spectra to assign for identification. It also enables the user to track progress, and to visualize and interpret the results in detail. Source code, binaries and documentation are all available at http://proteocloud.googlecode.com. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. About U.S. Natural Gas Pipelines

    EIA Publications

    2007-01-01

    This information product provides the interested reader with a broad and non-technical overview of how the U.S. natural gas pipeline network operates, along with some insights into the many individual pipeline systems that make up the network. While the focus of the presentation is the transportation of natural gas over the interstate and intrastate pipeline systems, information on subjects related to pipeline development, such as system design and pipeline expansion, is also included.

  16. Design and Implementation of Data Reduction Pipelines for the Keck Observatory Archive

    NASA Astrophysics Data System (ADS)

    Gelino, C. R.; Berriman, G. B.; Kong, M.; Laity, A. C.; Swain, M. A.; Campbell, R.; Goodrich, R. W.; Holt, J.; Lyke, J.; Mader, J. A.; Tran, H. D.; Barlow, T.

    2015-09-01

    The Keck Observatory Archive (KOA), a collaboration between the NASA Exoplanet Science Institute and the W. M. Keck Observatory, serves science and calibration data for all active and inactive instruments from the twin Keck Telescopes located near the summit of Mauna Kea, Hawaii. In addition to the raw data, we produce and provide quick-look reduced data for four instruments (HIRES, LWS, NIRC2, and OSIRIS) so that KOA users can more easily assess the scientific content and the quality of the data, which can often be difficult with raw data. The reduced products derive from both publicly available data reduction packages (when available) and KOA-created reduction scripts. The automation of publicly available data reduction packages has the benefit of providing a good-quality product without the additional time and expense of creating a new reduction package, and is easily applied to bulk processing needs. The downside is that the pipeline is not always able to create an ideal product, particularly for spectra, because the processing options for one type of target (e.g., point sources) may not be appropriate for other types of targets (e.g., extended galaxies and nebulae). In this poster we present the design and implementation of the current pipelines used at KOA and discuss our strategies for handling data for which the nature of the targets and the observers' scientific goals and data taking procedures are unknown. We also discuss our plans for implementing automated pipelines for the remaining six instruments.

  17. Alaska oil pipeline in retrospect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, D.R.

    Caribou have not adjusted as well as moose to the presence of the trans-Alaska pipeline. Research has shown that caribou have altered their movement patterns and range use in relation to the pipeline corridor. Cows with calves show pronounced avoidance of the pipeline, road, and oil field. Traffic and human activity appear more directly responsible for avoidance behavior than does the physical presence of the pipeline, road, and facilities. Animals along the haul road are especially vulnerable to poaching because of the open terrain and the fact that many became tame during the peak of construction activity. Poaching, especially of furbearers, has increased as pipeline-related traffic has decreased.

  18. 75 FR 4134 - Pipeline Safety: Leak Detection on Hazardous Liquid Pipelines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-26

    ... http://dms.dot.gov . General information about the PHMSA Office of Pipeline Safety (OPS) can be... of leak detection by tracking product movement is essential to an understanding of line balance... pipelines, the line balance technique for leak detection can often be performed with manual calculations...
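    The line-balance technique mentioned in the notice reduces to simple volume bookkeeping: a leak is suspected when receipts minus deliveries minus the change in line fill exceeds an operator-chosen tolerance. A minimal sketch of that check (the function name, units, and tolerance value are illustrative assumptions, not regulatory requirements):

    ```python
    def line_balance_alarm(volume_in, volume_out, inventory_change, tolerance):
        """Flag a possible leak over a measurement period (volumes in barrels).

        imbalance = volume received - volume delivered - change in line fill
        A positive imbalance beyond the tolerance suggests product is leaving
        the line somewhere other than the delivery point.
        """
        imbalance = volume_in - volume_out - inventory_change
        return imbalance > tolerance, imbalance

    # example: 10,000 bbl received, 9,900 delivered, line fill grew by 40 bbl
    leak, imb = line_balance_alarm(10_000.0, 9_900.0, 40.0, tolerance=50.0)
    ```

    In practice the tolerance must absorb meter uncertainty and temperature/pressure effects on inventory, which is why, as the notice observes, the calculation can often be performed manually on smaller pipelines.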

  19. Low-complexity nonlinear adaptive filter based on a pipelined bilinear recurrent neural network.

    PubMed

    Zhao, Haiquan; Zeng, Xiangping; He, Zhengyou

    2011-09-01

    To reduce the computational complexity of the bilinear recurrent neural network (BLRNN), a novel low-complexity nonlinear adaptive filter with a pipelined bilinear recurrent neural network (PBLRNN) is presented in this paper. The PBLRNN, inheriting the modular architecture of the pipelined RNN proposed by Haykin and Li, comprises a number of BLRNN modules cascaded in a chained form. Each module is implemented by a small-scale BLRNN with internal dynamics. Since the modules of the PBLRNN can be executed simultaneously in a pipelined parallel fashion, the result is a significant improvement in computational efficiency. Moreover, owing to the nested modules, the performance of the PBLRNN can be further improved. To suit the modular architecture, a modified adaptive-amplitude real-time recurrent learning algorithm is derived using the gradient descent approach. Extensive simulations are carried out to evaluate the performance of the PBLRNN on nonlinear system identification, nonlinear channel equalization, and chaotic time series prediction. Experimental results show that the PBLRNN provides considerably better performance than the single BLRNN and RNN models.
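    The bilinear recurrence at the heart of each module combines linear feedforward and feedback terms with a cross-product term between past outputs and inputs. The scalar sketch below illustrates that generic BLRNN update under assumed notation; it is not the authors' exact formulation or their learning algorithm:

    ```python
    import numpy as np

    def bilinear_recurrent_step(x_buf, y_buf, a, b, W):
        """One step of a scalar bilinear recurrent filter:

            y[n] = sum_i a[i] * y[n-1-i]                 (linear feedback)
                 + sum_j b[j] * x[n-j]                   (linear feedforward)
                 + sum_i sum_j W[i, j] * y[n-1-i] * x[n-j]  (bilinear cross term)

        x_buf: current and past inputs [x[n], x[n-1], ...]
        y_buf: past outputs           [y[n-1], y[n-2], ...]
        """
        return a @ y_buf + b @ x_buf + y_buf @ W @ x_buf
    ```

    In the pipelined arrangement, several such small modules run concurrently, each receiving the delayed output of the preceding module as part of its input, which is what allows the parallel speedup described above.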

  20. Hal: an automated pipeline for phylogenetic analyses of genomic data.

    PubMed

    Robbertse, Barbara; Yoder, Ryan J; Boyd, Alex; Reeves, John; Spatafora, Joseph W

    2011-02-07

    The rapid increase in genomic and genome-scale data is resulting in unprecedented levels of discrete sequence data available for phylogenetic analyses. Major analytical impasses exist, however, prior to analyzing these data with existing phylogenetic software. Obstacles include the management of large data sets without standardized naming conventions, identification and filtering of orthologous clusters of proteins or genes, and the assembly of alignments of orthologous sequence data into individual and concatenated super-alignments. Here we report the production of an automated pipeline, Hal, that produces multiple alignments and trees from genomic data. These alignments can be produced by a choice of four alignment programs and analyzed by a variety of phylogenetic programs. In short, the Hal pipeline connects the programs BLASTP, MCL, user-specified alignment programs, GBlocks, ProtTest and user-specified phylogenetic programs to produce species trees. The script is available at SourceForge (http://sourceforge.net/projects/bio-hal/). The results from an example analysis of Kingdom Fungi are briefly discussed.

  1. Improved orthologous databases to ease protozoan targets inference.

    PubMed

    Kotowski, Nelson; Jardim, Rodrigo; Dávila, Alberto M R

    2015-09-29

    Homology inference helps identify similarities, as well as differences, among organisms, providing better insight into how closely related one might be to another. In addition, comparative genomics pipelines are widely adopted tools built from different bioinformatics applications and algorithms. In this article, we propose a methodology to build improved orthologous databases with the potential to aid protozoan target identification, one of the many tasks that benefit from comparative genomics tools. Our analyses are based on OrthoSearch, a comparative genomics pipeline originally designed to infer orthologs through protein-profile comparison, supported by an HMM-based, reciprocal-best-hits approach. Our methodology allows OrthoSearch to confront two orthologous databases and to generate an improved new one, which can later be used to infer potential protozoan targets through a similarity analysis against the human genome. The protein sequences of the Cryptosporidium hominis, Entamoeba histolytica and Leishmania infantum genomes were comparatively analyzed against three orthologous databases: (i) EggNOG KOG, (ii) ProtozoaDB and (iii) Kegg Orthology (KO). This allowed us to create two new orthologous databases, "KO + EggNOG KOG" and "KO + EggNOG KOG + ProtozoaDB", with 16,938 and 27,701 orthologous groups, respectively. These new orthologous databases were used for a regular OrthoSearch run. By confronting the "KO + EggNOG KOG" and "KO + EggNOG KOG + ProtozoaDB" databases and the protozoan species, we detected the following totals of orthologous groups and coverage (the ratio of inferred orthologous groups to the species' total number of proteins): Cryptosporidium hominis: 1,821 (11 %) and 3,254 (12 %); Entamoeba histolytica: 2,245 (13 %) and 5,305 (19 %); Leishmania infantum: 2,702 (16 %) and 4,760 (17 %). Using our HMM-based methodology and the largest created orthologous database, it was possible to infer 13

  2. RISC RNA sequencing for context-specific identification of in vivo miR targets

    PubMed Central

    Matkovich, Scot J; Van Booven, Derek J; Eschenbacher, William H; Dorn, Gerald W

    2010-01-01

    Rationale: MicroRNAs (miRs) are expanding our understanding of cardiac disease and have the potential to transform cardiovascular therapeutics. One miR can target hundreds of individual mRNAs, but existing methodologies are not sufficient to accurately and comprehensively identify these mRNA targets in vivo. Objective: To develop methods permitting identification of in vivo miR targets in an unbiased manner, using massively parallel sequencing of mouse cardiac transcriptomes in combination with sequencing of mRNA associated with mouse cardiac RNA-induced silencing complexes (RISCs). Methods and Results: We optimized techniques for expression profiling small amounts of RNA without introducing amplification bias, and applied this to anti-Argonaute 2 immunoprecipitated RISCs (RISC-Seq) from mouse hearts. By comparing RNA-sequencing results of cardiac RISC and transcriptome from the same individual hearts, we defined 1,645 mRNAs consistently targeted to mouse cardiac RISCs. We employed this approach in hearts overexpressing miRs from Myh6 promoter-driven precursors (programmed RISC-Seq) to identify 209 in vivo targets of miR-133a and 81 in vivo targets of miR-499. Consistent with the fact that miR-133a and miR-499 have widely differing ‘seed’ sequences and belong to different miR families, only 6 targets were common to miR-133a- and miR-499-programmed hearts. Conclusions: RISC-sequencing is a highly sensitive method for general RISC profiling and individual miR target identification in biological context, and is applicable to any tissue and any disease state. Summary: MicroRNAs (miRs) are key regulators of mRNA translation in health and disease. While bioinformatic predictions suggest that a single miR may target hundreds of mRNAs, the number of experimentally verified targets of miRs is low. To enable comprehensive, unbiased examination of miR targets, we have performed deep RNA sequencing of cardiac transcriptomes in parallel with cardiac RNA-induced silencing complex

  3. Combined effects of expectations and visual uncertainty upon detection and identification of a target in the fog.

    PubMed

    Quétard, Boris; Quinton, Jean-Charles; Colomb, Michèle; Pezzulo, Giovanni; Barca, Laura; Izaute, Marie; Appadoo, Owen Kevin; Mermillod, Martial

    2015-09-01

    Detecting a pedestrian while driving in the fog is one situation where the prior expectation about the target presence is integrated with the noisy visual input. We focus on how these sources of information influence the oculomotor behavior and are integrated within an underlying decision-making process. The participants had to judge whether high-/low-density fog scenes displayed on a computer screen contained a pedestrian or a deer by executing a mouse movement toward the response button (mouse-tracking). A variable road sign was added on the scene to manipulate expectations about target identity. We then analyzed the timing and amplitude of the deviation of mouse trajectories toward the incorrect response and, using an eye tracker, the detection time (before fixating the target) and the identification time (fixations on the target). Results revealed that expectation of the correct target results in earlier decisions with less deviation toward the alternative response, this effect being partially explained by the facilitation of target identification.

  4. Template-based and free modeling of I-TASSER and QUARK pipelines using predicted contact maps in CASP12.

    PubMed

    Zhang, Chengxin; Mortuza, S M; He, Baoji; Wang, Yanting; Zhang, Yang

    2018-03-01

    We develop two complementary pipelines, "Zhang-Server" and "QUARK", based on I-TASSER and QUARK pipelines for template-based modeling (TBM) and free modeling (FM), and test them in the CASP12 experiment. The combination of I-TASSER and QUARK successfully folds three medium-size FM targets that have more than 150 residues, even though the interplay between the two pipelines still awaits further optimization. Newly developed sequence-based contact prediction by NeBcon plays a critical role to enhance the quality of models, particularly for FM targets, by the new pipelines. The inclusion of NeBcon predicted contacts as restraints in the QUARK simulations results in an average TM-score of 0.41 for the best in top five predicted models, which is 37% higher than that by the QUARK simulations without contacts. In particular, there are seven targets that are converted from non-foldable to foldable (TM-score >0.5) due to the use of contact restraints in the simulations. Another additional feature in the current pipelines is the local structure quality prediction by ResQ, which provides a robust residue-level modeling error estimation. Despite the success, significant challenges still remain in ab initio modeling of multi-domain proteins and folding of β-proteins with complicated topologies bound by long-range strand-strand interactions. Improvements on domain boundary and long-range contact prediction, as well as optimal use of the predicted contacts and multiple threading alignments, are critical to address these issues seen in the CASP12 experiment. © 2017 Wiley Periodicals, Inc.

  5. 77 FR 6857 - Pipeline Safety: Notice of Public Meetings on Improving Pipeline Leak Detection System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-09

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... installed to lessen the volume of natural gas and hazardous liquid released during catastrophic pipeline... p.m. Panel 3: Considerations for Natural Gas Pipeline Leak Detection Systems 3:30 p.m. Break 3:45 p...

  6. Systematic Identification of Combinatorial Drivers and Targets in Cancer Cell Lines

    PubMed Central

    Tabchy, Adel; Eltonsy, Nevine; Housman, David E.; Mills, Gordon B.

    2013-01-01

    There is an urgent need to elicit and validate highly efficacious targets for combinatorial intervention from large scale ongoing molecular characterization efforts of tumors. We established an in silico bioinformatic platform in concert with a high throughput screening platform evaluating 37 novel targeted agents in 669 extensively characterized cancer cell lines reflecting the genomic and tissue-type diversity of human cancers, to systematically identify combinatorial biomarkers of response and co-actionable targets in cancer. Genomic biomarkers discovered in a 141 cell line training set were validated in an independent 359 cell line test set. We identified co-occurring and mutually exclusive genomic events that represent potential drivers and combinatorial targets in cancer. We demonstrate multiple cooperating genomic events that predict sensitivity to drug intervention independent of tumor lineage. The coupling of scalable in silico and biologic high throughput cancer cell line platforms for the identification of co-events in cancer delivers rational combinatorial targets for synthetic lethal approaches with a high potential to pre-empt the emergence of resistance. PMID:23577104

  7. Systematic identification of combinatorial drivers and targets in cancer cell lines.

    PubMed

    Tabchy, Adel; Eltonsy, Nevine; Housman, David E; Mills, Gordon B

    2013-01-01

    There is an urgent need to elicit and validate highly efficacious targets for combinatorial intervention from large scale ongoing molecular characterization efforts of tumors. We established an in silico bioinformatic platform in concert with a high throughput screening platform evaluating 37 novel targeted agents in 669 extensively characterized cancer cell lines reflecting the genomic and tissue-type diversity of human cancers, to systematically identify combinatorial biomarkers of response and co-actionable targets in cancer. Genomic biomarkers discovered in a 141 cell line training set were validated in an independent 359 cell line test set. We identified co-occurring and mutually exclusive genomic events that represent potential drivers and combinatorial targets in cancer. We demonstrate multiple cooperating genomic events that predict sensitivity to drug intervention independent of tumor lineage. The coupling of scalable in silico and biologic high throughput cancer cell line platforms for the identification of co-events in cancer delivers rational combinatorial targets for synthetic lethal approaches with a high potential to pre-empt the emergence of resistance.
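
One common way to score co-occurring versus mutually exclusive genomic event pairs of the kind described above is a one-sided Fisher exact test on a 2x2 contingency table. The sketch below (pure Python, invented toy counts; the abstract does not specify the authors' actual statistical procedure) illustrates the enrichment direction:

```python
from math import comb

def fisher_right_tail(a, b, c, d):
    """One-sided Fisher exact test for enrichment (co-occurrence) in a
    2x2 table [[a, b], [c, d]]: probability of observing >= a joint
    events under the hypergeometric null."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        p += comb(row1, k) * comb(n - row1, col1 - k) / denom
    return p

# Toy example: mutation A and mutation B across 100 cell lines.
# 20 lines carry both, 10 carry only A, 10 carry only B, 60 carry neither.
p_cooccur = fisher_right_tail(20, 10, 10, 60)
print(p_cooccur < 0.001)  # strong evidence of co-occurrence
```

Mutual exclusivity would be tested with the opposite tail (observing *fewer* joint events than expected under the null).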

  8. Study on Failure of Third-Party Damage for Urban Gas Pipeline Based on Fuzzy Comprehensive Evaluation.

    PubMed

    Li, Jun; Zhang, Hong; Han, Yinshan; Wang, Baodong

    2016-01-01

    Focusing on the diversity, complexity and uncertainty of third-party damage accidents, the failure probability of third-party damage to urban gas pipelines was evaluated based on analytic hierarchy process theory and fuzzy mathematics. A fault tree of third-party damage containing 56 basic events was built through hazard identification of third-party damage. Fuzzy evaluation of the basic event probabilities was conducted using the expert judgment method and fuzzy-set membership functions. The weight of each expert was determined and the evaluation opinions were modified using the improved analytic hierarchy process, and the failure probability of third-party damage to the urban gas pipeline was calculated. Taking the gas pipelines of a large provincial capital city as an example, the risk assessment results of the method were shown to conform to the actual situation, providing a basis for safety risk prevention.
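
The weighted aggregation step of a fuzzy comprehensive evaluation of this kind can be sketched as follows; the weights and membership degrees below are invented for illustration and do not come from the paper:

```python
def fuzzy_comprehensive_evaluation(weights, membership):
    """Weighted-average fuzzy operator B = W . R: combine factor weights
    with a membership matrix (rows: risk factors, cols: probability
    grades) into one membership vector over the grades."""
    n_grades = len(membership[0])
    b = [sum(w * row[j] for w, row in zip(weights, membership))
         for j in range(n_grades)]
    total = sum(b)
    return [x / total for x in b]   # normalise so grades sum to 1

# Hypothetical example: three third-party-damage factors judged against
# four failure-probability grades (very low, low, medium, high).
W = [0.5, 0.3, 0.2]                 # factor weights (e.g. AHP-derived)
R = [[0.1, 0.3, 0.4, 0.2],          # membership degrees per factor
     [0.2, 0.5, 0.2, 0.1],
     [0.0, 0.2, 0.5, 0.3]]
B = fuzzy_comprehensive_evaluation(W, R)
grade = max(range(len(B)), key=B.__getitem__)   # maximum-membership rule
print(grade)  # → 2, i.e. the "medium" grade dominates
```

In the paper's setting, the weight vector comes from the improved AHP and the membership matrix from aggregated expert judgments over the fault-tree basic events.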

  9. Systematic comparison of variant calling pipelines using gold standard personal exome variants

    PubMed Central

    Hwang, Sohyun; Kim, Eiru; Lee, Insuk; Marcotte, Edward M.

    2015-01-01

    The success of clinical genomics using next generation sequencing (NGS) requires the accurate and consistent identification of personal genome variants. Assorted variant calling methods have been developed, which show low concordance between their calls. Hence, a systematic comparison of the variant callers could give important guidance to NGS-based clinical genomics. Recently, a set of high-confidence variant calls for one individual (NA12878) has been published by the Genome in a Bottle (GIAB) consortium, enabling performance benchmarking of different variant calling pipelines. Based on the gold standard reference variant calls from GIAB, we compared the performance of thirteen variant calling pipelines, testing combinations of three read aligners—BWA-MEM, Bowtie2, and Novoalign—and four variant callers—Genome Analysis Tool Kit HaplotypeCaller (GATK-HC), Samtools mpileup, Freebayes and Ion Proton Variant Caller (TVC)—for twelve data sets for the NA12878 genome sequenced by different platforms including Illumina2000, Illumina2500, and Ion Proton, with various exome capture systems and exome coverage. We observed different biases toward specific types of SNP genotyping errors by the different variant callers. The results of our study provide useful guidelines for reliable variant identification from deep sequencing of personal genomes. PMID:26639839
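
At its core, benchmarking a caller against a gold-standard call set reduces to set comparisons over normalized variant keys. A minimal sketch (hypothetical variant tuples, not the GIAB data or the paper's exact evaluation protocol):

```python
def benchmark_calls(called, gold):
    """Compare a pipeline's variant calls against a gold-standard set.
    Variants are hashable keys, e.g. (chrom, pos, ref, alt) tuples."""
    called, gold = set(called), set(gold)
    tp = len(called & gold)        # true positives: called and in gold
    fp = len(called - gold)        # false positives: called, not in gold
    fn = len(gold - called)        # false negatives: missed gold calls
    precision = tp / (tp + fp) if called else 0.0
    recall = tp / (tp + fn) if gold else 0.0
    return {"tp": tp, "fp": fp, "fn": fn,
            "precision": precision, "recall": recall}

gold = {("chr1", 100, "A", "G"), ("chr1", 250, "C", "T"), ("chr2", 40, "G", "A")}
pipeline_a = {("chr1", 100, "A", "G"), ("chr2", 40, "G", "A"), ("chr3", 7, "T", "C")}
print(benchmark_calls(pipeline_a, gold))
# tp=2, fp=1, fn=1 -> precision and recall both 2/3
```

Real benchmarks additionally restrict comparisons to high-confidence regions and normalize representation (e.g. left-alignment of indels) before intersecting.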

  10. 77 FR 45417 - Pipeline Safety: Inspection and Protection of Pipeline Facilities After Railway Accidents

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-31

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... Accidents AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. [[Page 45418

  11. 76 FR 28326 - Pipeline Safety: National Pipeline Mapping System Data Submissions and Submission Dates for Gas...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-17

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR 191... Reports AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Issuance of... Pipeline and Hazardous Materials Safety Administration (PHMSA) published a final rule on November 26, 2010...

  12. Freight pipelines: Current status and anticipated future use

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-07-01

    This report is issued by the Task Committee on Freight Pipelines, Pipeline Division, ASCE. Freight pipelines of various types (including slurry pipelines, pneumatic pipelines, and capsule pipelines) have been used throughout the world for over a century for transporting solids and sometimes even packaged products. Recent advancements in pipeline technology, aided by advanced computer control systems and trenchless technologies, have greatly facilitated the transportation of solids by pipelines. Today, in many situations, freight pipelines are not only the most economical and practical means for transporting solids, they are also the most reliable, safest and most environmentally friendly transportation mode. Increased use of underground pipelines to transport freight is anticipated in the future, especially as the technology continues to improve and surface transportation modes such as highways become more congested. This paper describes the state of the art and expected future uses of various types of freight pipelines. Obstacles hindering the development and use of the most advanced freight pipeline systems, such as the pneumatic capsule pipeline for interstate transport of freight, are discussed.

  13. In silico re-identification of properties of drug target proteins.

    PubMed

    Kim, Baeksoo; Jo, Jihoon; Han, Jonghyun; Park, Chungoo; Lee, Hyunju

    2017-05-31

    Computational approaches in the identification of drug targets are expected to reduce the time and effort of drug development. Advances in genomics and proteomics provide the opportunity to uncover properties of druggable genomes. Although several studies have been conducted to distinguish drug targets from non-drug targets, they mainly focus on the sequences and functional roles of proteins; many other properties of proteins have not been fully investigated. Using the DrugBank (version 3.0) database, containing 6,816 drug entries including 760 FDA-approved drugs and 1,822 of their targets, and the human UniProt/Swiss-Prot databases, we defined 1,578 non-redundant drug target and 17,575 non-drug target proteins. To select these non-redundant protein datasets, we built four datasets (A, B, C, and D) by considering clustering of paralogous proteins. We first reassessed the widely used properties of drug target proteins. We confirmed and extended that drug target proteins (1) are likely to have more hydrophobic, less polar, fewer PEST sequences, and more signal peptide sequences, and (2) are more involved in enzyme catalysis, oxidation and reduction in cellular respiration, and operational genes. In this study, we proposed new properties (essentiality, expression pattern, PTMs, and solvent accessibility) for effectively identifying drug target proteins. We found that (1) drug targetability and protein essentiality are decoupled, (2) drug target proteins tend to have high expression levels and tissue specificity, and (3) functional post-translational modification residues are enriched in drug target proteins. In addition, to predict the drug targetability of proteins, we exploited two machine learning methods (Support Vector Machine and Random Forest). When we predicted drug targets by combining previously known protein properties and the proposed new properties, an F-score of 0.8307 was obtained. When the newly proposed properties are integrated, the prediction performance
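
The F-score reported above is the balanced harmonic combination of precision and recall; a minimal sketch with invented confusion counts (not the paper's data):

```python
def f_score(precision, recall, beta=1.0):
    """F-beta score; beta=1 gives the balanced F1 quoted in the abstract."""
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical confusion counts for a drug-target classifier.
tp, fp, fn = 830, 170, 170
p = tp / (tp + fp)              # precision = 0.83
r = tp / (tp + fn)              # recall = 0.83
print(round(f_score(p, r), 4))  # → 0.83
```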

  14. Target specific compound identification using a support vector machine.

    PubMed

    Plewczynski, Dariusz; von Grotthuss, Marcin; Spieser, Stephane A H; Rychlewski, Leszek; Wyrwicz, Lucjan S; Ginalski, Krzysztof; Koch, Uwe

    2007-03-01

    In many cases, at the beginning of an HTS campaign, some information about active molecules is already available. Often, known active compounds (such as substrate analogues, natural products, inhibitors of a related protein, or ligands published by a pharmaceutical company) are identified in low-throughput validation studies of the biochemical target. In this study we evaluate the effectiveness of a support vector machine trained on those compounds and used to classify a collection with unknown activity. This approach was aimed at reducing the number of compounds to be tested against the given target. Our method predicts the biological activity of chemical compounds based solely on atom-pair (AP) two-dimensional topological descriptors. The supervised support vector machine (SVM) method herein is trained on compounds from the MDL Drug Data Report (MDDR) known to be active against a specific protein target. For detailed analysis, five different biological targets were selected, including cyclooxygenase-2, dihydrofolate reductase, thrombin, HIV reverse transcriptase and antagonists of the estrogen receptor. The accuracy of compound identification was estimated using recall and precision values. The sensitivities for all protein targets exceeded 80%, and the classification performance reached 100% for selected targets. In another application of the method, we addressed the absence of an initial set of active compounds for a selected protein target at the beginning of an HTS campaign. In such a case, virtual high-throughput screening (vHTS) is usually applied using a flexible docking procedure. However, the vHTS experiment typically contains a large percentage of false positives that should be verified by costly and time-consuming experimental follow-up assays. 
    The subsequent use of our machine learning method was found to improve the speed (since the docking procedure was not required for all compounds from the database) and also the accuracy of the HTS hit lists (the

  15. Identification of human microRNA targets from isolated argonaute protein complexes.

    PubMed

    Beitzinger, Michaela; Peters, Lasse; Zhu, Jia Yun; Kremmer, Elisabeth; Meister, Gunter

    2007-06-01

    MicroRNAs (miRNAs) constitute a class of small non-coding RNAs that regulate gene expression on the level of translation and/or mRNA stability. Mammalian miRNAs associate with members of the Argonaute (Ago) protein family and bind to partially complementary sequences in the 3' untranslated region (UTR) of specific target mRNAs. Computer algorithms based on factors such as free binding energy or sequence conservation have been used to predict miRNA target mRNAs. Based on such predictions, up to one third of all mammalian mRNAs seem to be under miRNA regulation. However, due to the low degree of complementarity between the miRNA and its target, such computer programs are often imprecise and therefore not very reliable. Here we report the first biochemical identification approach of miRNA targets from human cells. Using highly specific monoclonal antibodies against members of the Ago protein family, we co-immunoprecipitate Ago-bound mRNAs and identify them by cloning. Interestingly, most of the identified targets are also predicted by different computer programs. Moreover, we randomly analyzed six different target candidates and were able to experimentally validate five as miRNA targets. Our data clearly indicate that miRNA targets can be experimentally identified from Ago complexes and therefore provide a new tool to directly analyze miRNA function.

  16. Lateral instability of high temperature pipelines, the 20-in. Sleipner Vest pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saevik, S.; Levold, E.; Johnsen, O.K.

    1996-12-01

    The present paper addresses methods to control snaking behavior of high temperature pipelines resting on a flat sea bed. A case study is presented based on the detail engineering of the 12.5 km long 20 inch gas pipeline connecting the Sleipner Vest wellhead platform to the Sleipner T processing platform in the North Sea. The study includes screening and evaluation of alternative expansion control methods, ending up with a recommended method. The methodology and philosophy, used as basis to ensure sufficient structural strength throughout the lifetime of the pipeline, are thereafter presented. The results show that in order to find the optimum technical solution to control snaking behavior, many aspects need to be considered such as process requirements, allowable strain, hydrodynamic stability, vertical profile, pipelay installation and trawlboard loading. It is concluded that by proper consideration of all the above aspects, the high temperature pipeline can be designed to obtain sufficient safety level.

  17. Peering into the pharmaceutical "pipeline": investigational drugs, clinical trials, and industry priorities.

    PubMed

    Fisher, Jill A; Cottingham, Marci D; Kalbaugh, Corey A

    2015-04-01

    In spite of a growing literature on pharmaceuticalization, little is known about the pharmaceutical industry's investments in research and development (R&D). Information about the drugs being developed can provide important context for existing case studies detailing the expanding--and often problematic--role of pharmaceuticals in society. To access the pharmaceutical industry's pipeline, we constructed a database of drugs for which pharmaceutical companies reported initiating clinical trials over a five-year period (July 2006-June 2011), capturing 2477 different drugs in 4182 clinical trials. Comparing drugs in the pipeline that target diseases in high-income and low-income countries, we found that the number of drugs for diseases prevalent in high-income countries was 3.46 times higher than drugs for diseases prevalent in low-income countries. We also found that the plurality of drugs in the pipeline was being developed to treat cancers (26.2%). Interpreting our findings through the lens of pharmaceuticalization, we illustrate how investigating the entire drug development pipeline provides important information about patterns of pharmaceuticalization that are invisible when only marketed drugs are considered. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Designing a reliable leak bio-detection system for natural gas pipelines.

    PubMed

    Batzias, F A; Siontorou, C G; Spanidis, P-M P

    2011-02-15

    Monitoring of natural gas (NG) pipelines is an important task for economical/safety operation, loss prevention and environmental protection. Timely and reliable leak detection of gas pipelines, therefore, plays a key role in the overall integrity management of the pipeline system. Owing to the various limitations of the currently available techniques and the surveillance area that needs to be covered, research on new detector systems is still thriving. Biosensors are considered worldwide as a niche technology in the environmental market, since they afford the desired detector capabilities at low cost, provided they have been properly designed/developed and rationally placed/networked/maintained with the aid of operational research techniques. This paper addresses NG leakage surveillance through a robust cooperative/synergistic scheme between biosensors and conventional detector systems; the network is validated in situ and optimized in order to provide reliable information at the required granularity level. The proposed scheme is substantiated through a knowledge-based approach and relies on Fuzzy Multicriteria Analysis (FMCA) for selecting the best biosensor design that suits both the target analyte and the operational micro-environment. This approach is illustrated in the design of leak surveying over a pipeline network in Greece. Copyright © 2010 Elsevier B.V. All rights reserved.

  19. The Hyper Suprime-Cam software pipeline

    NASA Astrophysics Data System (ADS)

    Bosch, James; Armstrong, Robert; Bickerton, Steven; Furusawa, Hisanori; Ikeda, Hiroyuki; Koike, Michitaro; Lupton, Robert; Mineo, Sogo; Price, Paul; Takata, Tadafumi; Tanaka, Masayuki; Yasuda, Naoki; AlSayyad, Yusra; Becker, Andrew C.; Coulton, William; Coupon, Jean; Garmilla, Jose; Huang, Song; Krughoff, K. Simon; Lang, Dustin; Leauthaud, Alexie; Lim, Kian-Tat; Lust, Nate B.; MacArthur, Lauren A.; Mandelbaum, Rachel; Miyatake, Hironao; Miyazaki, Satoshi; Murata, Ryoma; More, Surhud; Okura, Yuki; Owen, Russell; Swinbank, John D.; Strauss, Michael A.; Yamada, Yoshihiko; Yamanoi, Hitomi

    2018-01-01

    In this paper, we describe the optical imaging data processing pipeline developed for the Subaru Telescope's Hyper Suprime-Cam (HSC) instrument. The HSC Pipeline builds on the prototype pipeline being developed by the Large Synoptic Survey Telescope's Data Management system, adding customizations for HSC, large-scale processing capabilities, and novel algorithms that have since been reincorporated into the LSST codebase. While designed primarily to reduce HSC Subaru Strategic Program (SSP) data, it is also the recommended pipeline for reducing general-observer HSC data. The HSC pipeline includes high-level processing steps that generate coadded images and science-ready catalogs as well as low-level detrending and image characterizations.

  20. Identification of distant drug off-targets by direct superposition of binding pocket surfaces.

    PubMed

    Schumann, Marcel; Armen, Roger S

    2013-01-01

    Correctly predicting off-targets for a given molecular structure, which may bind a large range of ligands, is both particularly difficult and particularly important when the off-target shares no significant sequence or fold similarity with the respective molecular target ("distant off-targets"). A novel approach for identification of off-targets by direct superposition of protein binding pocket surfaces is presented and applied to a set of well-studied and highly relevant drug targets, including representative kinases and nuclear hormone receptors. The entire Protein Data Bank is searched for similar binding pockets, and convincing distant off-target candidates are identified that share no significant sequence or fold similarity with the respective target structure. These putative target off-target pairs are further supported by the existence of compounds that bind strongly to both with high topological similarity and, in some cases, literature examples of individual compounds that bind to both. Also, our results clearly show that binding pockets can exhibit a striking surface similarity even when the off-target shares neither significant sequence nor significant fold similarity with the respective molecular target ("distant off-target").

  1. TargetMiner: microRNA target prediction with systematic identification of tissue-specific negative examples.

    PubMed

    Bandyopadhyay, Sanghamitra; Mitra, Ramkrishna

    2009-10-15

    Prediction of microRNA (miRNA) target mRNAs using machine learning approaches is an important area of research. However, most of the methods suffer from either high false positive or high false negative rates. One reason for this is the marked deficiency of negative examples, or miRNA non-target pairs. Systematic identification of non-target mRNAs is still not addressed properly, and therefore current machine learning approaches are compelled to rely on artificially generated negative examples for training. In this article, we have identified approximately 300 tissue-specific negative examples using a novel approach that involves expression profiling of both miRNAs and mRNAs, miRNA-mRNA structural interactions and seed-site conservation. The newly generated negative examples are validated with the pSILAC dataset, which elucidates that the identified non-targets are indeed non-targets. These high-throughput tissue-specific negative examples and a set of experimentally verified positive examples are then used to build a system called TargetMiner, a support vector machine (SVM)-based classifier. In addition to assessing the prediction accuracy in cross-validation experiments, TargetMiner has been validated with a completely independent experimental test dataset. Our method outperforms 10 existing target prediction algorithms and provides a good balance between sensitivity and specificity that is not reflected in the existing methods. We achieve a significantly higher sensitivity and specificity of 69% and 67.8% using a pool of 90 features, and of 76.5% and 66.1% using a set of 30 selected features, on the completely independent test dataset. In order to establish the effectiveness of the systematically generated negative examples, the SVM is trained using a different set of negative data generated using the method in Yousef et al. 
    A significantly higher false positive rate (70.6%) is observed when tested on the independent set, while all other factors are kept the

  2. Study on Failure of Third-Party Damage for Urban Gas Pipeline Based on Fuzzy Comprehensive Evaluation

    PubMed Central

    Li, Jun; Zhang, Hong; Han, Yinshan; Wang, Baodong

    2016-01-01

    Focusing on the diversity, complexity and uncertainty of third-party damage accidents, the failure probability of third-party damage to urban gas pipelines was evaluated based on analytic hierarchy process theory and fuzzy mathematics. A fault tree of third-party damage containing 56 basic events was built through hazard identification of third-party damage. Fuzzy evaluation of the basic event probabilities was conducted using the expert judgment method and fuzzy-set membership functions. The weight of each expert was determined and the evaluation opinions were modified using the improved analytic hierarchy process, and the failure probability of third-party damage to the urban gas pipeline was calculated. Taking the gas pipelines of a large provincial capital city as an example, the risk assessment results of the method were shown to conform to the actual situation, providing a basis for safety risk prevention. PMID:27875545

  3. 78 FR 53190 - Pipeline Safety: Notice to Operators of Hazardous Liquid and Natural Gas Pipelines of a Recall on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-28

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2013-0185] Pipeline Safety: Notice to Operators of Hazardous Liquid and Natural Gas Pipelines of a Recall on Leak Repair Clamps Due to Defective Seal AGENCY: Pipeline and Hazardous Materials Safety...

  4. DDBJ read annotation pipeline: a cloud computing-based pipeline for high-throughput analysis of next-generation sequencing data.

    PubMed

    Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu

    2013-08-01

    High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge to molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for a high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing by NIG supercomputers currently free of charge. The proposed pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components in the pipeline, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads of the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline through the input of only an accession number. This proposed pipeline will facilitate research by utilizing unified analytical workflows applied to the NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/.

  5. DDBJ Read Annotation Pipeline: A Cloud Computing-Based Pipeline for High-Throughput Analysis of Next-Generation Sequencing Data

    PubMed Central

    Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu

    2013-01-01

    High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge to molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for a high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing by NIG supercomputers currently free of charge. The proposed pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components in the pipeline, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads of the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline through the input of only an accession number. This proposed pipeline will facilitate research by utilizing unified analytical workflows applied to the NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/. PMID:23657089

  6. 77 FR 19414 - Pipeline Safety: Public Comment on Leak and Valve Studies Mandated by the Pipeline Safety...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-30

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... Safety, Regulatory Certainty, and Job Creation Act of 2011 AGENCY: Pipeline and Hazardous Materials... Transportation (DOT), Pipeline and Hazardous Materials Safety Administration (PHMSA) is providing an important...

  7. Ub-ISAP: a streamlined UNIX pipeline for mining unique viral vector integration sites from next generation sequencing data.

    PubMed

    Kamboj, Atul; Hallwirth, Claus V; Alexander, Ian E; McCowage, Geoffrey B; Kramer, Belinda

    2017-06-17

    The analysis of viral vector genomic integration sites is an important component in assessing the safety and efficiency of patient treatment using gene therapy. Alongside this clinical application, integration site identification is a key step in the genetic mapping of viral elements in mutagenesis screens that aim to elucidate gene function. We have developed Ub-ISAP, a UNIX-based vector integration site analysis pipeline that automates integration site identification and annotation for both single- and paired-end sequencing reads. Reads that contain viral sequences of interest are selected and aligned to the host genome, and unique integration sites are then classified as transcription start site-proximal, intragenic or intergenic. Ub-ISAP provides a reliable and efficient pipeline to generate large datasets for assessing the safety and efficiency of integrating vectors in clinical settings, with broader applications in cancer research. Ub-ISAP is available as an open source software package at https://sourceforge.net/projects/ub-isap/.
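
    The three-way site classification described above can be sketched against a single gene model. A minimal illustration (the 1 kb TSS window and the plus-strand assumption are ours; Ub-ISAP's actual rules may differ):

```python
def classify_site(pos, gene_start, gene_end, tss_window=1000):
    """Classify an integration site against one plus-strand gene model:
    TSS-proximal within `tss_window` bp of the gene start, otherwise
    intragenic if inside the gene body, otherwise intergenic.
    The window size is illustrative."""
    if abs(pos - gene_start) <= tss_window:
        return "TSS-proximal"
    if gene_start <= pos <= gene_end:
        return "intragenic"
    return "intergenic"
```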

  8. Identification of chemogenomic features from drug–target interaction networks using interpretable classifiers

    PubMed Central

    Tabei, Yasuo; Pauwels, Edouard; Stoven, Véronique; Takemoto, Kazuhiro; Yamanishi, Yoshihiro

    2012-01-01

    Motivation: Drug effects are mainly caused by the interactions between drug molecules and their target proteins, including primary targets and off-targets. Identification of the molecular mechanisms behind overall drug–target interactions is crucial in the drug design process. Results: We develop a classifier-based approach to identify chemogenomic features (the underlying associations between drug chemical substructures and protein domains) that are involved in drug–target interaction networks. We propose a novel algorithm for extracting informative chemogenomic features by using L1-regularized classifiers over the tensor product space of possible drug–target pairs. It is shown that the proposed method can extract a very limited number of chemogenomic features without losing the performance of predicting drug–target interactions, and that the extracted features are biologically meaningful. The extracted substructure–domain association network enables us to suggest ligand chemical fragments specific for each protein domain and ligand core substructures important for a wide range of protein families. Availability: Software is available at the supplemental website. Contact: yamanishi@bioreg.kyushu-u.ac.jp Supplementary Information: Datasets and all results are available at http://cbio.ensmp.fr/~yyamanishi/l1binary/ . PMID:22962471
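
    The tensor product space mentioned above pairs every drug substructure with every protein domain, giving one feature per (substructure, domain) combination for the L1 classifier to weight. A minimal sketch with binary fingerprints (vector sizes and contents are illustrative):

```python
def tensor_product_features(drug_bits, protein_bits):
    """One feature per (drug substructure, protein domain) pair:
    the product of the two binary indicators (sketch only)."""
    return [d * p for d in drug_bits for p in protein_bits]

# A drug with substructures 1 and 3 present, a protein with two domains.
feats = tensor_product_features([1, 0, 1], [1, 1])  # length 3 * 2 = 6
```

    An L1 penalty then drives most of these pairwise weights to zero, which is what makes the surviving substructure–domain associations interpretable.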

  9. A cross docking pipeline for improving pose prediction and virtual screening performance

    NASA Astrophysics Data System (ADS)

    Kumar, Ashutosh; Zhang, Kam Y. J.

    2018-01-01

    Pose prediction and virtual screening performance of a molecular docking method depend on the choice of protein structures used for docking. Multiple structures for a target protein are often used to take into account the receptor flexibility and problems associated with a single receptor structure. However, the use of multiple receptor structures is computationally expensive when docking a large library of small molecules. Here, we propose a new cross-docking pipeline suitable to dock a large library of molecules while taking advantage of multiple target protein structures. Our method involves the selection of a suitable receptor for each ligand in a screening library utilizing ligand 3D shape similarity with crystallographic ligands. We have prospectively evaluated our method in D3R Grand Challenge 2 and demonstrated that our cross-docking pipeline can achieve similar or better performance than using either single or multiple-receptor structures. Moreover, our method displayed not only decent pose prediction performance but also better virtual screening performance over several other methods.
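
    The receptor-selection step described above reduces to an argmax over shape similarities between the query ligand and each receptor's crystallographic ligand. A minimal sketch (the similarity function, identifiers, and data layout are our assumptions, not the authors' code):

```python
def select_receptor(ligand, receptors, similarity):
    """Pick the receptor whose crystallographic ligand is most
    3D-shape-similar to the query ligand (sketch only)."""
    return max(receptors, key=lambda r: similarity(ligand, r["xtal_ligand"]))

# Illustrative data: two receptor structures with hypothetical
# precomputed shape similarities for query ligand "Q".
receptors = [
    {"id": "1ABC", "xtal_ligand": "L1"},
    {"id": "2DEF", "xtal_ligand": "L2"},
]
sims = {("Q", "L1"): 0.4, ("Q", "L2"): 0.9}
best = select_receptor("Q", receptors, lambda q, l: sims[(q, l)])
```

    Because each library molecule is docked against only its best-matching receptor, the cost stays close to single-receptor docking while still exploiting the full ensemble.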

  10. U.S. pipeline industry enters new era

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnsen, M.R.

    1999-11-01

    The largest construction project in North America this year and next--the Alliance Pipeline--marks some advances for the US pipeline industry. With the Alliance Pipeline system (Alliance), mechanized welding and ultrasonic testing are making their debuts in the US as primary mainline construction techniques. Particularly in Canada and Europe, mechanized welding technology has been used for both onshore and offshore pipeline construction for at least 15 years. However, it has never before been used to build a cross-country pipeline in the US, although it has been tested on short segments. This time, however, an accelerated construction schedule, among other reasons, necessitated the use of mechanized gas metal arc welding (GMAW). The $3-billion pipeline will deliver natural gas from northwestern British Columbia and northeastern Alberta in Canada to a hub near Chicago, Ill., where it will connect to the North American pipeline grid. Once the pipeline is completed and buried, crews will return the topsoil. Corn and other crops will reclaim the land. While the casual passerby probably won't know the Alliance pipeline is there, it may have a far-reaching effect on the way mainline pipelines are built in the US. For even though mechanized welding and ultrasonic testing are being used for the first time in the United States on this project, some US workers had already gained experience with the technology on projects elsewhere. And work on this pipeline has certainly developed a much larger pool of experienced workers for industry to draw from. The Alliance project could well signal the start of a new era in US pipeline construction.

  11. The Hyper Suprime-Cam software pipeline

    DOE PAGES

    Bosch, James; Armstrong, Robert; Bickerton, Steven; ...

    2017-10-12

    In this article, we describe the optical imaging data processing pipeline developed for the Subaru Telescope’s Hyper Suprime-Cam (HSC) instrument. The HSC Pipeline builds on the prototype pipeline being developed by the Large Synoptic Survey Telescope’s Data Management system, adding customizations for HSC, large-scale processing capabilities, and novel algorithms that have since been reincorporated into the LSST codebase. While designed primarily to reduce HSC Subaru Strategic Program (SSP) data, it is also the recommended pipeline for reducing general-observer HSC data. The HSC pipeline includes high-level processing steps that generate coadded images and science-ready catalogs as well as low-level detrending and image characterization.

  12. The Hyper Suprime-Cam software pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosch, James; Armstrong, Robert; Bickerton, Steven

    In this article, we describe the optical imaging data processing pipeline developed for the Subaru Telescope’s Hyper Suprime-Cam (HSC) instrument. The HSC Pipeline builds on the prototype pipeline being developed by the Large Synoptic Survey Telescope’s Data Management system, adding customizations for HSC, large-scale processing capabilities, and novel algorithms that have since been reincorporated into the LSST codebase. While designed primarily to reduce HSC Subaru Strategic Program (SSP) data, it is also the recommended pipeline for reducing general-observer HSC data. The HSC pipeline includes high-level processing steps that generate coadded images and science-ready catalogs as well as low-level detrending and image characterization.

  13. The antifungal pipeline: a reality check

    PubMed Central

    Perfect, John R.

    2017-01-01

    Invasive fungal infections continue to appear in record numbers as the immunocompromised population of the world increases, owing partially to the increased number of individuals who are infected with HIV and partially to the successful treatment of serious underlying diseases. The effectiveness of current antifungal therapies — polyenes, flucytosine, azoles and echinocandins (as monotherapies or in combinations for prophylaxis, or as empiric, pre-emptive or specific therapies) — in the management of these infections has plateaued. Although these drugs are clinically useful, they have several limitations, such as off-target toxicity, and drug-resistant fungi are now emerging. New antifungals are therefore needed. In this Review, I discuss the robust and dynamic antifungal pipeline, including results from preclinical academic efforts through to pharmaceutical industry products, and describe the targets, strategies, compounds and potential outcomes. PMID:28496146

  14. Computational Identification of MicroRNAs and Their Targets from Finger Millet (Eleusine coracana).

    PubMed

    Usha, S; Jyothi, M N; Suchithra, B; Dixit, Rekha; Rai, D V; Nagesh Babu, R

    2017-03-01

    MicroRNAs are endogenous small RNAs regulating normal growth and development of plants. Discovering miRNAs and their targets, and inferring their functions, has become a routine process for understanding the normal biological roles of miRNAs in plant development. In this study, we used homology-based analysis of available expressed sequence tags of finger millet (Eleusine coracana) to predict conserved miRNAs. Three potent miRNAs targeting 88 genes were identified. The newly identified miRNAs were found to be homologous to miR166 and miR1310. The recognized targets were transcription factors and enzymes, and GO analysis showed these miRNAs play varied roles in gene regulation. The identification of miRNAs and their targets is anticipated to hasten the discovery of key epigenetic regulators in plant development.

  15. Identification of Distant Drug Off-Targets by Direct Superposition of Binding Pocket Surfaces

    PubMed Central

    Schumann, Marcel; Armen, Roger S.

    2013-01-01

    Correctly predicting off-targets for a given molecular structure, which would have the ability to bind a large range of ligands, is both particularly difficult and important when those off-targets share no significant sequence or fold similarity with the respective molecular target (“distant off-targets”). A novel approach for identification of off-targets by direct superposition of protein binding pocket surfaces is presented and applied to a set of well-studied and highly relevant drug targets, including representative kinases and nuclear hormone receptors. The entire Protein Data Bank is searched for similar binding pockets, and convincing distant off-target candidates are identified that share no significant sequence or fold similarity with the respective target structure. These putative target–off-target pairs are further supported by the existence of compounds that bind strongly to both with high topological similarity and, in some cases, literature examples of individual compounds that bind to both. Our results also clearly show that it is possible for binding pockets to exhibit a striking surface similarity while the respective off-target shares neither significant sequence nor significant fold similarity with the respective molecular target (“distant off-target”). PMID:24391782

  16. The standard operating procedure of the DOE-JGI Metagenome Annotation Pipeline (MAP v.4)

    DOE PAGES

    Huntemann, Marcel; Ivanova, Natalia N.; Mavromatis, Konstantinos; ...

    2016-02-24

    The DOE-JGI Metagenome Annotation Pipeline (MAP v.4) performs structural and functional annotation for metagenomic sequences that are submitted to the Integrated Microbial Genomes with Microbiomes (IMG/M) system for comparative analysis. The pipeline runs on nucleotide sequences provided via the IMG submission site. Users must first define their analysis projects in GOLD and then submit the associated sequence datasets consisting of scaffolds/contigs with optional coverage information and/or unassembled reads in fasta and fastq file formats. The MAP processing consists of feature prediction including identification of protein-coding genes, non-coding RNAs and regulatory RNAs, as well as CRISPR elements. Structural annotation is followed by functional annotation including assignment of protein product names and connection to various protein family databases.

  17. The standard operating procedure of the DOE-JGI Metagenome Annotation Pipeline (MAP v.4)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huntemann, Marcel; Ivanova, Natalia N.; Mavromatis, Konstantinos

    The DOE-JGI Metagenome Annotation Pipeline (MAP v.4) performs structural and functional annotation for metagenomic sequences that are submitted to the Integrated Microbial Genomes with Microbiomes (IMG/M) system for comparative analysis. The pipeline runs on nucleotide sequences provided via the IMG submission site. Users must first define their analysis projects in GOLD and then submit the associated sequence datasets consisting of scaffolds/contigs with optional coverage information and/or unassembled reads in fasta and fastq file formats. The MAP processing consists of feature prediction including identification of protein-coding genes, non-coding RNAs and regulatory RNAs, as well as CRISPR elements. Structural annotation is followed by functional annotation including assignment of protein product names and connection to various protein family databases.

  18. Pipelining in a changing competitive environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, E.G.; Wishart, D.M.

    1996-12-31

    The changing competitive environment for the pipeline industry presents a broad spectrum of new challenges and opportunities: international cooperation; globalization of opportunities, organizations and competition; an integrated systems approach to system configuration, financing, contracting strategy, materials sourcing, and operations; cutting-edge and emerging technologies; adherence to high standards of environmental protection; an emphasis on safety; innovative approaches to project financing; and advances in technology and programs to maintain the long-term, cost-effective integrity of operating pipeline systems. These challenges and opportunities are partially a result of the increasingly competitive nature of pipeline development and the public's intolerance of incidents of pipeline failure. A creative systems approach to these challenges is often the key to a project moving ahead. This usually encompasses collaboration among users of the pipeline, pipeline owners and operators, international engineering and construction companies, equipment and materials suppliers, in-country engineers and constructors, international lending agencies and financial institutions.

  19. Development of a Photo-Cross-Linkable Diaminoquinazoline Inhibitor for Target Identification in Plasmodium falciparum.

    PubMed

    Lubin, Alexandra S; Rueda-Zubiaurre, Ainoa; Matthews, Holly; Baumann, Hella; Fisher, Fabio R; Morales-Sanfrutos, Julia; Hadavizadeh, Kate S; Nardella, Flore; Tate, Edward W; Baum, Jake; Scherf, Artur; Fuchter, Matthew J

    2018-04-13

    Diaminoquinazolines represent a privileged scaffold for antimalarial discovery, including use as putative Plasmodium histone lysine methyltransferase inhibitors. Despite this, robust evidence for their molecular targets is lacking. Here we report the design and development of a small-molecule photo-cross-linkable probe to investigate the targets of our diaminoquinazoline series. We demonstrate the effectiveness of our designed probe for photoaffinity labeling of Plasmodium lysates and identify similarities between the target profiles of the probe and the representative diaminoquinazoline BIX-01294. Initial pull-down proteomics experiments identified 104 proteins from different classes, many of which are essential, highlighting the suitability of the developed probe as a valuable tool for target identification in Plasmodium falciparum.

  20. GIS least-cost analysis approach for siting gas pipeline ROWs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sydelko, P.J.; Wilkey, P.L.

    1994-09-01

    Geographic-information-system applications for the siting and monitoring of gas pipeline rights-of-way (ROWs) were developed for areas near Rio Vista, California. The data layers developed for this project represent geographic features, such as landcover, elevation, aspect, slope, soils, hydrography, transportation corridors, endangered species habitats, wetlands, and public line surveys. A geographic information system was used to develop and store spatial data from several sources; to manipulate spatial data to evaluate environmental and engineering issues associated with the siting, permitting, construction, maintenance, and monitoring of gas-pipeline ROWs; and to graphically display analysis results. Examples of these applications include (1) determination of environmentally sensitive areas, such as endangered species habitat, wetlands, and areas of highly erosive soils; (2) evaluation of engineering constraints, including shallow depth to bedrock, major hydrographic features, and shallow water table; (3) classification of satellite imagery for land use/land cover that will affect ROWs; and (4) identification of alternative ROW corridors that avoid environmentally sensitive areas or areas with severe engineering constraints.

  1. PharmMapper 2017 update: a web server for potential drug target identification with a comprehensive target pharmacophore database.

    PubMed

    Wang, Xia; Shen, Yihang; Wang, Shiwei; Li, Shiliang; Zhang, Weilin; Liu, Xiaofeng; Lai, Luhua; Pei, Jianfeng; Li, Honglin

    2017-07-03

    The PharmMapper online tool is a web server for potential drug target identification by reversed pharmacophore matching the query compound against an in-house pharmacophore model database. The original version of PharmMapper includes more than 7000 target pharmacophores derived from complex crystal structures with corresponding protein target annotations. In this article, we present a new version of the PharmMapper web server, of which the backend pharmacophore database is six times larger than the earlier one, with a total of 23 236 proteins covering 16 159 druggable pharmacophore models and 51 431 ligandable pharmacophore models. The expanded target data cover 450 indications and 4800 molecular functions compared to 110 indications and 349 molecular functions in our last update. In addition, the new web server is united with the statistically meaningful ranking of the identified drug targets, which is achieved through the use of standard scores. It also features an improved user interface. The proposed web server is freely available at http://lilab.ecust.edu.cn/pharmmapper/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. Recommendations for Improving Identification and Quantification in Non-Targeted, GC-MS-Based Metabolomic Profiling of Human Plasma

    PubMed Central

    Wang, Hanghang; Muehlbauer, Michael J.; O’Neal, Sara K.; Newgard, Christopher B.; Hauser, Elizabeth R.; Shah, Svati H.

    2017-01-01

    The field of metabolomics as applied to human disease and health is rapidly expanding. In recent efforts of metabolomics research, greater emphasis has been placed on quality control and method validation. In this study, we report an experience with quality control and a practical application of method validation. Specifically, we sought to identify and modify steps in gas chromatography-mass spectrometry (GC-MS)-based, non-targeted metabolomic profiling of human plasma that could influence metabolite identification and quantification. Our experimental design included two studies: (1) a limiting-dilution study, which investigated the effects of dilution on analyte identification and quantification; and (2) a concentration-specific study, which compared the optimal plasma extract volume established in the first study with the volume used in the current institutional protocol. We confirmed that contaminants, concentration, repeatability and intermediate precision are major factors influencing metabolite identification and quantification. In addition, we established methods for improved metabolite identification and quantification, which were summarized to provide recommendations for experimental design of GC-MS-based non-targeted profiling of human plasma. PMID:28841195

  3. The Very Large Array Data Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako

    2018-01-01

    We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for both interferometric and single-dish observations with the Karl G. Jansky Very Large Array (VLA) and the Atacama Large Millimeter/submillimeter Array (ALMA). Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline code is developed and written in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface for verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline and offer a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven-year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500-hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and calibration and imaging results produced by the pipeline. Future development will include testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science-ready data products. The pipeline is developed as part of the CASA software package.
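
    The "context" structure described above can be sketched as an object that each task appends its heuristic decisions and results to. A minimal illustration (class, method, and parameter names are ours; the CASA pipeline's actual classes differ):

```python
class Context:
    """Sketch of a pipeline 'context' accumulating each task's
    heuristic decisions and results as processing proceeds."""
    def __init__(self):
        self.results = []

    def record(self, task, decision):
        self.results.append((task, decision))

def run_task(context, name, heuristic):
    """Run one atomic task: let its heuristic pick parameters,
    then record the decision in the shared context."""
    decision = heuristic()
    context.record(name, decision)
    return decision

ctx = Context()
run_task(ctx, "calibrate", lambda: "refant=ea05")  # hypothetical decision
run_task(ctx, "image", lambda: "cell=0.2arcsec")   # hypothetical decision
```

    A weblog-style report can then be rendered directly from the accumulated `ctx.results`, which is essentially how the recorded decisions reach the user.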

  4. Improved coal-slurry pipeline

    NASA Technical Reports Server (NTRS)

    Dowler, W. L.

    1979-01-01

    High strength steel pipeline carries hot mixture of powdered coal and coal derived oil to electric-power-generating station. Slurry is processed along way to remove sulfur, ash, and nitrogen and to recycle part of oil. System eliminates hazards and limitations associated with anticipated coal/water-slurry pipelines.

  5. drPACS: A Simple UNIX Execution Pipeline

    NASA Astrophysics Data System (ADS)

    Teuben, P.

    2011-07-01

    We describe a very simple yet flexible and effective pipeliner for UNIX commands. It creates a Makefile to define a set of serially dependent commands. The commands in the pipeline share a common set of parameters by which they can communicate. Commands must follow a simple convention to retrieve and store parameters. Pipeline parameters can optionally be made persistent across multiple runs of the pipeline. Tools were added to simplify running a large series of pipelines, which can then also be run in parallel.
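
    The parameter-sharing convention described above can be sketched as stages that each load a common parameter file, update it, and write it back, which also makes parameters persistent across runs. A minimal illustration (the file name and JSON format are our choices; drPACS's actual convention differs in detail):

```python
import json
import os
import tempfile

def run_stage(param_file, update):
    """One pipeline stage: load the shared parameter file if present,
    apply this stage's updates, and write it back."""
    params = {}
    if os.path.exists(param_file):
        with open(param_file) as fh:
            params = json.load(fh)
    params.update(update)
    with open(param_file, "w") as fh:
        json.dump(params, fh)
    return params

# Two serially dependent stages communicating through one file.
param_file = os.path.join(tempfile.mkdtemp(), "pipeline.pars")
run_stage(param_file, {"stage1": "done"})
final = run_stage(param_file, {"stage2": "done"})
```

    In drPACS itself the serial dependency between stages is enforced by a generated Makefile rather than by explicit calls as above.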

  6. A new efficient method of generating photoaffinity beads for drug target identification.

    PubMed

    Nishiya, Yoichi; Hamada, Tomoko; Abe, Masayuki; Takashima, Michio; Tsutsumi, Kyoko; Okawa, Katsuya

    2017-02-15

    Affinity purification is one of the most prevalent methods for the target identification of small molecules. Preparation of an appropriate chemical for immobilization, however, is a tedious and time-consuming process. A decade ago, a photoreaction method for generating affinity beads was reported, where compounds are mixed with agarose beads carrying a photoreactive group (aryldiazirine) and then irradiated with ultraviolet light under dry conditions to form covalent attachment. Although the method has proven useful for identifying drug targets, the beads suffer from inefficient ligand incorporation and tend to shrink and aggregate, which can cause nonspecific binding and low reproducibility. We therefore decided to craft affinity beads free from these shortcomings without compromising the ease of preparation. We herein report a modified method; first, a compound of interest is mixed with a crosslinker having an activated ester and a photoreactive moiety on each end. This mixture is then dried in a glass tube and irradiated with ultraviolet light. Finally, the conjugates are dissolved and reacted with agarose beads with a primary amine. This protocol enabled us to immobilize compounds more efficiently (approximately 500-fold per bead compared to the original method) and generated beads without physical deterioration. We herein demonstrated that the new FK506-immobilized beads specifically isolated more FKBP12 than the original beads, thereby proving our method to be applicable to target identification experiments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. 49 CFR 192.620 - Alternative maximum allowable operating pressure for certain steel pipelines.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... of a maximum allowable operating pressure based on higher stress levels in the following areas: Take... pipeline at the increased stress level under this section with conventional operation; and (ii) Describe... targeted audience; and (B) Include information about the integrity management activities performed under...

  8. 49 CFR 192.620 - Alternative maximum allowable operating pressure for certain steel pipelines.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... of a maximum allowable operating pressure based on higher stress levels in the following areas: Take... pipeline at the increased stress level under this section with conventional operation; and (ii) Describe... targeted audience; and (B) Include information about the integrity management activities performed under...

  9. 49 CFR 192.620 - Alternative maximum allowable operating pressure for certain steel pipelines.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... of a maximum allowable operating pressure based on higher stress levels in the following areas: Take... pipeline at the increased stress level under this section with conventional operation; and (ii) Describe... targeted audience; and (B) Include information about the integrity management activities performed under...

  10. A novel pipeline based FPGA implementation of a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Thirer, Nonel

    2014-05-01

    To solve problems for which an analytical solution is not available, more and more bio-inspired computation techniques have been applied in recent years. One efficient algorithm is the Genetic Algorithm (GA), which imitates the biological evolution process, finding the solution by the mechanism of "natural selection", where the fittest have the highest chances to survive. A genetic algorithm is an iterative procedure which operates on a population of individuals called "chromosomes" or "possible solutions" (usually represented by a binary code). The GA performs several processes on the population individuals to produce a new population, as in biological evolution. To provide a high-speed solution, pipelined FPGA hardware implementations are used, with an n-stage pipeline for an n-phase genetic algorithm. FPGA pipeline implementations are constrained by the different execution times of each stage and by the FPGA chip resources. To minimize these difficulties, we propose a bio-inspired technique that modifies the crossover step by using non-identical twins: two of the chosen chromosomes (parents) build up two new chromosomes (children), not only one as in the classical GA. We analyze the contribution of this method to reducing the execution time in asynchronous and synchronous pipelines, and also the possibility of a cheaper FPGA implementation by using smaller populations. The full hardware architecture of an FPGA implementation for our target ALTERA development card is presented and analyzed.
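    The "non-identical twins" crossover described above can be sketched in a few lines (a generic software illustration, not the authors' FPGA design; all names are hypothetical):

    ```python
    import random

    def crossover_twins(parent_a, parent_b, rng):
        """Single-point crossover producing two 'non-identical twin' children:
        each child inherits complementary segments of the two parents, so one
        crossover yields two new chromosomes instead of only one."""
        assert len(parent_a) == len(parent_b)
        point = rng.randrange(1, len(parent_a))   # cut point inside the chromosome
        child_1 = parent_a[:point] + parent_b[point:]
        child_2 = parent_b[:point] + parent_a[point:]
        return child_1, child_2

    # Example with 8-bit binary chromosomes
    rng = random.Random(42)
    a, b = [0] * 8, [1] * 8
    c1, c2 = crossover_twins(a, b, rng)
    ```

    In an n-stage hardware pipeline this doubles the offspring produced per crossover stage, which is what permits the smaller populations mentioned above.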

  11. Applications of CRISPR genome editing technology in drug target identification and validation.

    PubMed

    Lu, Quinn; Livi, George P; Modha, Sundip; Yusa, Kosuke; Macarrón, Ricardo; Dow, David J

    2017-06-01

    The analysis of pharmaceutical industry data indicates that the major reason for drug candidates failing in late-stage clinical development is lack of efficacy, with a high proportion of these failures due to erroneous hypotheses about target-to-disease linkage. More than ever, there is a requirement to better understand potential new drug targets and their role in disease biology in order to reduce attrition in drug development. Genome editing technology enables precise modification of individual protein-coding genes, as well as noncoding regulatory sequences, enabling the elucidation of functional effects in human disease-relevant cellular systems. Areas covered: This article outlines applications of CRISPR genome editing technology in target identification and target validation studies. Expert opinion: Applications of CRISPR technology in target validation studies are in evidence and gaining momentum. Whilst technical challenges remain, we are on the cusp of CRISPR being applied in complex cell systems such as iPS-derived differentiated cells and stem cell-derived organoids. In the meantime, our experience to date suggests that precise genome editing of putative targets in primary cell systems is possible, offering more human disease-relevant systems than conventional cell lines.

  12. Acoustic system for communication in pipelines

    DOEpatents

    Martin, II, Louis Peter; Cooper, John F [Oakland, CA

    2008-09-09

    A system for communication in a pipe, or pipeline, or network of pipes containing a fluid. The system includes an encoding and transmitting sub-system connected to the pipe, or pipeline, or network of pipes that transmits a signal in the frequency range of 3-100 kHz into the pipe, or pipeline, or network of pipes containing a fluid, and a receiver and processor sub-system connected to the pipe, or pipeline, or network of pipes containing a fluid that receives said signal and uses said signal for a desired application.

  13. Magnetic pipeline for coal and oil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knolle, E.

    1998-07-01

    A 1994 analysis of the recorded costs of the Alaska oil pipeline, in a paper entitled Maglev Crude Oil Pipeline (NASA CP-3247, pp. 671--684), concluded that, had the Knolle Magnetrans pipeline technology been available and used, some $10 million per day in transportation costs could have been saved over the 20 years of the Alaska oil pipeline's existence. This more than 800-mile-long pipeline requires about 500 horsepower per mile in pumping power, which together with the cost of the pipeline's capital investment consumes about one-third of the energy value of the pumped oil. This does not include the cost of getting the oil out of the ground. The reason maglev technology performs better than conventional pipelines is that, by magnetically levitating the oil into contact-free suspension, there is no drag-causing adhesion. In addition, by using permanent magnets in repulsion, suspension is achieved without using energy. The pumped oil's adhesion to the inside of pipes also limits its speed. In the case of the Alaska pipeline the speed is limited to about 7 miles per hour, which, with its 48-inch pipe diameter and 1200 psi pressure, pumps about 2 million barrels per day. The maglev system, as developed by Knolle Magnetrans, would transport oil in magnetically suspended sealed containers and thus, free of adhesion, at speeds 10 to 20 times faster. Furthermore, the diameter of the levitated containers can be made smaller with the same capacity, which makes the construction of the maglev system light and inexpensive. There are similar advantages when using maglev technology to transport coal, and a maglev system has advantages over railroads in the mountainous regions where coal is primarily mined: a maglev pipeline can travel, all year and in all weather, in a straight line to the end user and can climb over steep hills without much difficulty, whereas railroads must follow difficult circuitous routes.

  14. Comprehensive investigation into historical pipeline construction costs and engineering economic analysis of Alaska in-state gas pipeline

    NASA Astrophysics Data System (ADS)

    Rui, Zhenhua

    This study analyzes historical cost data of 412 pipelines and 220 compressor stations. On the basis of this analysis, the study also evaluates the feasibility of an Alaska in-state gas pipeline using Monte Carlo simulation techniques. Analysis of pipeline construction costs shows that component costs, shares of cost components, and learning rates for material and labor costs vary by diameter, length, volume, year, and location. Overall average learning rates for pipeline material and labor costs are 6.1% and 12.4%, respectively. Overall average cost shares for pipeline material, labor, miscellaneous, and right of way (ROW) are 31%, 40%, 23%, and 7%, respectively. Regression models are developed to estimate pipeline component costs for different lengths, cross-sectional areas, and locations. An analysis of inaccuracy in pipeline cost estimation demonstrates that the cost estimation of pipeline cost components is biased except in the case of total costs. Overall overrun rates for pipeline material, labor, miscellaneous, ROW, and total costs are 4.9%, 22.4%, -0.9%, 9.1%, and 6.5%, respectively, and project size, capacity, diameter, location, and year of completion have differing degrees of impact on cost overruns of pipeline cost components. Analysis of compressor station costs shows that component costs, shares of cost components, and learning rates for material and labor costs vary in terms of capacity, year, and location. Average learning rates for compressor station material and labor costs are 12.1% and 7.48%, respectively. Overall average cost shares of material, labor, miscellaneous, and ROW are 50.6%, 27.2%, 21.5%, and 0.8%, respectively. Regression models are developed to estimate compressor station component costs for different capacities and locations. An investigation into inaccuracies in compressor station cost estimation demonstrates that the cost estimation for compressor stations is biased except in the case of material costs.
Overall average
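    The abstract reports learning rates but not the underlying cost model; assuming the common Wright log-linear learning curve (an assumption, not stated in the source), the reported 12.4% labor learning rate would behave as follows:

    ```python
    import math

    def learning_curve_cost(first_unit_cost, n, learning_rate):
        """Wright's log-linear learning curve: each doubling of cumulative
        output cuts unit cost by `learning_rate` (e.g. 0.124 for 12.4%)."""
        b = math.log2(1.0 - learning_rate)    # learning-curve exponent
        return first_unit_cost * n ** b

    # With a 12.4% rate, the 2nd comparable project costs 87.6% of the 1st,
    # and the 4th costs 87.6% of that again (per normalized unit of pipeline):
    second = learning_curve_cost(100.0, 2, 0.124)
    fourth = learning_curve_cost(100.0, 4, 0.124)
    ```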

  15. Neuronal Target Identification Requires AHA-1-Mediated Fine-Tuning of Wnt Signaling in C. elegans

    PubMed Central

    Zhang, Jingyan; Li, Xia; Jevince, Angela R.; Guan, Liying; Wang, Jiaming; Hall, David H.; Huang, Xun; Ding, Mei

    2013-01-01

    Electrical synaptic transmission through gap junctions is a vital mode of intercellular communication in the nervous system. The mechanism by which reciprocal target cells find each other during the formation of gap junctions, however, is poorly understood. Here we show that gap junctions are formed between BDU interneurons and PLM mechanoreceptors in C. elegans and the connectivity of BDU with PLM is influenced by Wnt signaling. We further identified two PAS-bHLH family transcription factors, AHA-1 and AHR-1, which function cell-autonomously within BDU and PLM to facilitate the target identification process. aha-1 and ahr-1 act genetically upstream of cam-1. CAM-1, a membrane-bound receptor tyrosine kinase, is present on both BDU and PLM cells and likely serves as a Wnt antagonist. By binding to a cis-regulatory element in the cam-1 promoter, AHA-1 enhances cam-1 transcription. Our study reveals a Wnt-dependent fine-tuning mechanism that is crucial for mutual target cell identification during the formation of gap junction connections. PMID:23825972

  16. A pipeline for the systematic identification of non-redundant full-ORF cDNAs for polymorphic and evolutionary divergent genomes: Application to the ascidian Ciona intestinalis

    DOE PAGES

    Gilchrist, Michael J.; Sobral, Daniel; Khoueiry, Pierre; ...

    2015-05-27

    Genome-wide resources, such as collections of cDNA clones encoding for complete proteins (full-ORF clones), are crucial tools for studying the evolution of gene function and genetic interactions. Non-model organisms, in particular marine organisms, provide a rich source of functional diversity. Marine organism genomes are, however, frequently highly polymorphic and encode proteins that diverge significantly from those of well-annotated model genomes. The construction of full-ORF clone collections from non-model organisms is hindered by the difficulty of predicting accurately the N-terminal ends of proteins, and distinguishing recent paralogs from highly polymorphic alleles. We also report a computational strategy that overcomes these difficulties, and allows for accurate gene level clustering of transcript data followed by the automated identification of full-ORFs with correct 5'- and 3'-ends. It is robust to polymorphism, includes paralog calling and does not require evolutionary proximity to well annotated model organisms. Here, we developed this pipeline for the ascidian Ciona intestinalis, a highly polymorphic member of the divergent sister group of the vertebrates, emerging as a powerful model organism to study chordate gene function, Gene Regulatory Networks and molecular mechanisms underlying human pathologies. Furthermore, using this pipeline we have generated the first full-ORF collection for a highly polymorphic marine invertebrate. It contains 19,163 full-ORF cDNA clones covering 60% of Ciona coding genes, and full-ORF orthologs for approximately half of curated human disease-associated genes.

  17. Comparisons of sediment losses from a newly constructed cross-country natural gas pipeline and an existing in-road pipeline

    Treesearch

    Pamela J. Edwards; Bridget M. Harrison; Daniel J. Holz; Karl W.J. Williard; Jon E. Schoonover

    2014-01-01

    Sediment loads were measured for about one year from natural gas pipelines in two studies in north central West Virginia. One study involved a 1-year-old pipeline buried within the bed of a 25-year-old skid road, and the other involved a newly constructed cross-country pipeline. Both pipelines were the same diameter and were installed using similar trenching and...

  18. 77 FR 27279 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-09

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... collections relate to the pipeline integrity management requirements for gas transmission pipeline operators... Management in High Consequence Areas Gas Transmission Pipeline Operators. OMB Control Number: 2137-0610...

  19. 75 FR 53733 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-01

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2010-0246] Pipeline Safety: Information Collection Activities AGENCY: Pipeline and Hazardous... liquefied natural gas, hazardous liquid, and gas transmission pipeline systems operated by a company. The...

  20. 77 FR 46155 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-02

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... collections relate to the pipeline integrity management requirements for gas transmission pipeline operators... Management in High Consequence Areas Gas Transmission Pipeline Operators. OMB Control Number: 2137-0610...

  1. 78 FR 46560 - Pipeline Safety: Class Location Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-01

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... class location requirements for gas transmission pipelines. Section 5 of the Pipeline Safety, Regulatory... and, with respect to gas transmission pipeline facilities, whether applying IMP requirements to...

  2. 77 FR 15453 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-15

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... information collection titled, ``Gas Pipeline Safety Program Certification and Hazardous Liquid Pipeline... collection request that PHMSA will be submitting to OMB for renewal titled, ``Gas Pipeline Safety Program...

  3. Pharmacogenetics and target identification in diabetes.

    PubMed

    Pearson, Ewan R

    2018-02-24

    In diabetes, pharmacogenetics can be used both to identify patient subgroups who will have the most benefit and/or least harm from a particular treatment, and to gain insights into the molecular mechanisms of drug action and disease aetiology. There is increasing evidence that genetic variation alters response to diabetes treatments, both in terms of glycaemic response and side effects. This can be seen, with dramatic impact on clinical care, in patients with genetic forms of diabetes such as Maturity Onset Diabetes of the Young caused by HNF1A mutations, and neonatal diabetes due to activating mutations in ABCC8 or KCNJ11. Beyond monogenic diabetes, pharmacogenetic variants have yet to impact on clinical practice, yet the effect sizes (e.g. for metformin intolerance and OCT1 variants, or for metformin action and SLC2A2 variants) are potentially of clinical utility, especially if the genotype is already known at the point of prescribing. Over the next few years, increasing cohort sizes and linkage at scale to electronic medical records will provide considerable potential for stratification and novel target identification in diabetes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Capsule injection system for a hydraulic capsule pipelining system

    DOEpatents

    Liu, Henry

    1982-01-01

    An injection system for injecting capsules into a hydraulic capsule pipelining system, the pipelining system comprising a pipeline adapted for flow of a carrier liquid therethrough, and capsules adapted to be transported through the pipeline by the carrier liquid flowing through the pipeline. The injection system comprises a reservoir of carrier liquid, the pipeline extending within the reservoir and extending downstream out of the reservoir, and a magazine in the reservoir for holding capsules in a series, one above another, for injection into the pipeline in the reservoir. The magazine has a lower end in communication with the pipeline in the reservoir for delivery of capsules from the magazine into the pipeline.

  5. Building a genome analysis pipeline to predict disease risk and prevent disease.

    PubMed

    Bromberg, Y

    2013-11-01

    Reduced costs and increased speed and accuracy of sequencing can bring the genome-based evaluation of individual disease risk to the bedside. While past efforts have identified a number of actionable mutations, the bulk of genetic risk remains hidden in sequence data. The biggest challenge facing genomic medicine today is the development of new techniques to predict the specifics of a given human phenome (set of all expressed phenotypes) encoded by each individual variome (full set of genome variants) in the context of the given environment. Numerous tools exist for the computational identification of the functional effects of a single variant. However, the pipelines taking advantage of full genomic, exomic, transcriptomic (and other) sequences have only recently become a reality. This review looks at the building of methodologies for predicting "variome"-defined disease risk. It also discusses some of the challenges for incorporating such a pipeline into everyday medical practice. © 2013. Published by Elsevier Ltd. All rights reserved.

  6. Building a common pipeline for rule-based document classification.

    PubMed

    Patterson, Olga V; Ginter, Thomas; DuVall, Scott L

    2013-01-01

    Instance-based classification of clinical text is a widely used natural language processing task employed as a step in patient classification, document retrieval, or information extraction. Rule-based approaches rely on concept identification and context analysis in order to determine the appropriate class. We propose a five-step process that enables even small research teams to develop simple but powerful rule-based NLP systems by taking advantage of a common UIMA AS-based pipeline for classification. Our proposed methodology, coupled with the general-purpose solution, provides researchers with access to the data locked in clinical text in cases of limited human resources and compact timelines.
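    The concept-plus-context pattern described above can be illustrated with a toy rule-based classifier (hypothetical rules and vocabulary, not the authors' UIMA AS pipeline):

    ```python
    import re

    # Toy rule-based clinical document classifier: identify target concepts,
    # then inspect the preceding context window for negation cues before
    # assigning a class. Rules and vocabulary are hypothetical.
    CONCEPTS = re.compile(r"\b(pneumonia|infiltrate)\b", re.IGNORECASE)
    NEGATION = re.compile(r"\b(no|denies|without|negative for)\b", re.IGNORECASE)

    def classify(document):
        for match in CONCEPTS.finditer(document):
            window = document[max(0, match.start() - 30):match.start()]
            if not NEGATION.search(window):
                return "POSITIVE"        # concept asserted at least once
        return "NEGATIVE"                # concept absent or always negated

    asserted = classify("Chest X-ray shows a right lower lobe infiltrate.")
    negated = classify("Patient denies cough; no infiltrate on imaging.")
    ```

    Real systems replace the regexes with dictionary lookups and richer context algorithms, but the classification logic follows this shape.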

  7. 77 FR 51848 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-27

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... Program for Gas Distribution Pipelines. DATES: Interested persons are invited to submit comments on or.... These regulations require operators of hazardous liquid pipelines and gas pipelines to develop and...

  8. 77 FR 26822 - Pipeline Safety: Verification of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-07

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2012-0068] Pipeline Safety: Verification of Records AGENCY: Pipeline and Hazardous Materials... issuing an Advisory Bulletin to remind operators of gas and hazardous liquid pipeline facilities to verify...

  9. 77 FR 74275 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-13

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No.... These regulations require operators of hazardous liquid pipelines and gas pipelines to develop and... control room. Affected Public: Operators of both natural gas and hazardous liquid pipeline systems. Annual...

  10. INTERNAL REPAIR OF PIPELINES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robin Gordon; Bill Bruce; Nancy Porter

    2003-05-01

    The two broad categories of deposited weld metal repair and fiber-reinforced composite repair technologies were reviewed for potential application for internal repair of gas transmission pipelines. Both are used to some extent for other applications and could be further developed for internal, local, structural repair of gas transmission pipelines. Preliminary test programs were developed for both deposited weld metal repairs and for fiber-reinforced composite repair. To date, all of the experimental work pertaining to the evaluation of potential repair methods has focused on fiber-reinforced composite repairs. Hydrostatic testing was also conducted on four pipeline sections with simulated corrosion damage: two with composite liners and two without.

  11. Improved FTA methodology and application to subsea pipeline reliability design.

    PubMed

    Lin, Jing; Yuan, Yongbo; Zhang, Mingyuan

    2014-01-01

    An innovative logic tree, the Failure Expansion Tree (FET), is proposed in this paper, which improves on traditional Fault Tree Analysis (FTA). It describes a different way of thinking about risk factor identification and reliability risk assessment. By providing a more comprehensive and objective methodology, the rather subjective nature of FTA node discovery is significantly reduced and the resulting mathematical calculations for quantitative analysis are greatly simplified. Applied to the useful-life phase of a subsea pipeline engineering project, the approach provides a more structured analysis by constructing a tree following the laws of physics and geometry. The resulting improvements are summarized in a comparison table.
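    For reference, the quantitative side of a classical fault-tree calculation combines independent basic-event probabilities through AND/OR gates; the events and numbers below are purely illustrative, not taken from the paper:

    ```python
    # Quantitative fault-tree evaluation for independent basic events:
    # an AND gate multiplies failure probabilities, an OR gate combines
    # survival probabilities. Events and numbers are illustrative only.

    def and_gate(*probs):
        """All inputs must fail for the gate output to fail."""
        p = 1.0
        for x in probs:
            p *= x
        return p

    def or_gate(*probs):
        """Any failing input triggers the gate output."""
        survive = 1.0
        for x in probs:
            survive *= (1.0 - x)
        return 1.0 - survive

    # Top event: leak = (dent AND coating damage) OR corrosion breach
    p_top = or_gate(and_gate(0.02, 0.10), 0.005)
    ```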

  12. An insight into the exploration of druggable genome of Streptococcus gordonii for the identification of novel therapeutic candidates.

    PubMed

    Azam, Syed Sikander; Shamim, Amen

    2014-09-01

    The discovery of novel drug targets in a genome that can bind with high affinity to drug-like compounds is a significant challenge in drug development. Streptococcus gordonii initiates dental plaque formation and endocarditis by entering the blood stream, usually after oral trauma. The prolonged use of antibiotics is raising the problem of multi-drug resistance, and the lack of an optimal therapeutic regime makes drug discovery of vital importance in curing various infections. To overcome this dilemma, the in silico approach paves the way for the identification and qualitative characterization of promising drug targets for S. gordonii, encompassing three phases of analysis. The present study deciphers the druggable genome of S. gordonii, in which 93 proteins were identified as potential drug targets and 16 proteins were found to be involved in unique metabolic pathways. The highlighted information will facilitate the selection of S. gordonii proteins for successful entry into drug design pipelines. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. 75 FR 73160 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-29

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No...-Related Conditions on Gas, Hazardous Liquid, and Carbon Dioxide Pipelines and Liquefied Natural Gas... Pipelines and Liquefied Natural Gas Facilities.'' The Pipeline Safety Laws (49 U.S.C. 60132) require each...

  14. Gene silencing in Tribolium castaneum as a tool for the targeted identification of candidate RNAi targets in crop pests.

    PubMed

    Knorr, Eileen; Fishilevich, Elane; Tenbusch, Linda; Frey, Meghan L F; Rangasamy, Murugesan; Billion, Andre; Worden, Sarah E; Gandra, Premchand; Arora, Kanika; Lo, Wendy; Schulenberg, Greg; Valverde-Garcia, Pablo; Vilcinskas, Andreas; Narva, Kenneth E

    2018-02-01

    RNAi shows potential as an agricultural technology for insect control, yet a relatively low number of robust lethal RNAi targets have been demonstrated to control insects of agricultural interest. In the current study, a selection of lethal RNAi target genes from the iBeetle (Tribolium castaneum) screen were used to demonstrate efficacy of orthologous targets in the economically important coleopteran pests Diabrotica virgifera virgifera and Meligethes aeneus. Transcript orthologs of 50 selected genes were analyzed in D. v. virgifera diet-based RNAi bioassays; 21 of these RNAi targets showed mortality and 36 showed growth inhibition. Low-dose injection- and diet-based dsRNA assays in T. castaneum and D. v. virgifera, respectively, enabled the identification of four highly potent RNAi target genes: Rop, dre4, ncm, and RpII140. Maize was genetically engineered to express dsRNA directed against these prioritized candidate target genes. T0 plants expressing Rop, dre4, or RpII140 RNA hairpins showed protection from D. v. virgifera larval feeding damage. dsRNA targeting Rop, dre4, ncm, and RpII140 in M. aeneus also caused high levels of mortality both by injection and by feeding. In summary, high-throughput systems for model organisms can be successfully used to identify potent RNAi targets for difficult-to-work-with agricultural insect pests.

  15. Targeted Next-generation Sequencing and Bioinformatics Pipeline to Evaluate Genetic Determinants of Constitutional Disease.

    PubMed

    Dilliott, Allison A; Farhan, Sali M K; Ghani, Mahdi; Sato, Christine; Liang, Eric; Zhang, Ming; McIntyre, Adam D; Cao, Henian; Racacho, Lemuel; Robinson, John F; Strong, Michael J; Masellis, Mario; Bulman, Dennis E; Rogaeva, Ekaterina; Lang, Anthony; Tartaglia, Carmela; Finger, Elizabeth; Zinman, Lorne; Turnbull, John; Freedman, Morris; Swartz, Rick; Black, Sandra E; Hegele, Robert A

    2018-04-04

    Next-generation sequencing (NGS) is quickly revolutionizing how research into the genetic determinants of constitutional disease is performed. The technique is highly efficient with millions of sequencing reads being produced in a short time span and at relatively low cost. Specifically, targeted NGS is able to focus investigations to genomic regions of particular interest based on the disease of study. Not only does this further reduce costs and increase the speed of the process, but it lessens the computational burden that often accompanies NGS. Although targeted NGS is restricted to certain regions of the genome, preventing identification of potential novel loci of interest, it can be an excellent technique when faced with a phenotypically and genetically heterogeneous disease, for which there are previously known genetic associations. Because of the complex nature of the sequencing technique, it is important to closely adhere to protocols and methodologies in order to achieve sequencing reads of high coverage and quality. Further, once sequencing reads are obtained, a sophisticated bioinformatics workflow is utilized to accurately map reads to a reference genome, to call variants, and to ensure the variants pass quality metrics. Variants must also be annotated and curated based on their clinical significance, which can be standardized by applying the American College of Medical Genetics and Genomics Pathogenicity Guidelines. The methods presented herein will display the steps involved in generating and analyzing NGS data from a targeted sequencing panel, using the ONDRISeq neurodegenerative disease panel as a model, to identify variants that may be of clinical significance.
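    The "variants pass quality metrics" step described above can be sketched as a simple filter; the thresholds and fields below are illustrative defaults, not the ONDRISeq panel's actual criteria:

    ```python
    # Sketch of post-calling variant QC: keep called variants that pass
    # thresholds on call quality, read depth, and alternate-allele support.
    # Thresholds and the record layout are hypothetical.

    def passes_qc(variant, min_qual=30.0, min_depth=20, min_alt_fraction=0.25):
        """Filter a called variant on quality score, coverage depth, and
        the fraction of reads supporting the alternate allele."""
        if variant["qual"] < min_qual:
            return False
        if variant["depth"] < min_depth:
            return False
        return variant["alt_reads"] / variant["depth"] >= min_alt_fraction

    calls = [
        {"id": "chr17:44350_C>T", "qual": 812.0, "depth": 143, "alt_reads": 70},
        {"id": "chr9:1021_G>A",   "qual": 12.5,  "depth": 9,   "alt_reads": 2},
    ]
    kept = [v["id"] for v in calls if passes_qc(v)]
    ```

    Variants surviving such a filter would then be annotated and curated under the American College of Medical Genetics and Genomics guidelines, as the abstract describes.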

  16. Molgenis-impute: imputation pipeline in a box.

    PubMed

    Kanterakis, Alexandros; Deelen, Patrick; van Dijk, Freerk; Byelas, Heorhiy; Dijkstra, Martijn; Swertz, Morris A

    2015-08-19

    Genotype imputation is an important procedure in current genomic analyses such as genome-wide association studies, meta-analyses and fine mapping. Although high quality tools are available that perform the steps of this process, considerable effort and expertise are required to set up and run a best practice imputation pipeline, particularly for larger genotype datasets, where imputation has to scale out in parallel on computer clusters. Here we present MOLGENIS-impute, an 'imputation in a box' solution that seamlessly and transparently automates the set up and running of all the steps of the imputation process. These steps include genome build liftover (liftovering), genotype phasing with SHAPEIT2, quality control, sample and chromosomal chunking/merging, and imputation with IMPUTE2. MOLGENIS-impute builds on MOLGENIS-compute, a simple pipeline management platform for submission and monitoring of bioinformatics tasks in High Performance Computing (HPC) environments like local/cloud servers, clusters and grids. All the required tools, data and scripts are downloaded and installed in a single step. Researchers with diverse backgrounds and expertise have tested MOLGENIS-impute at different locations and have imputed over 30,000 samples so far using the 1,000 Genomes Project and new Genome of the Netherlands data as the imputation reference. The tests have been performed on PBS/SGE clusters, cloud VMs and in a grid HPC environment. MOLGENIS-impute gives priority to the ease of setting up, configuring and running an imputation. It has minimal dependencies and wraps the pipeline in a simple command line interface, without sacrificing flexibility to adapt or limiting the options of the underlying imputation tools. It does not require knowledge of a workflow system or programming, and is targeted at researchers who just want to apply best practices in imputation via simple commands. It is built on the MOLGENIS compute workflow framework to enable customization with additional
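    Of the steps listed, chromosomal chunking is what enables the parallel scale-out; a generic sketch of that step (not MOLGENIS-impute's actual code) looks like:

    ```python
    # Split a chromosome into fixed-size position windows so each imputation
    # job can run on one chunk in parallel and results can be merged later.

    def chunk_positions(chrom_length, chunk_size):
        """Return (start, end) windows covering positions 1..chrom_length."""
        chunks = []
        start = 1
        while start <= chrom_length:
            end = min(start + chunk_size - 1, chrom_length)
            chunks.append((start, end))
            start = end + 1
        return chunks

    # e.g. a 12 Mb chromosome split into 5 Mb chunks:
    windows = chunk_positions(12_000_000, 5_000_000)
    ```

    Each window would then be phased and imputed independently before the per-chunk results are concatenated back into one file per chromosome.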

  17. ampliMethProfiler: a pipeline for the analysis of CpG methylation profiles of targeted deep bisulfite sequenced amplicons.

    PubMed

    Scala, Giovanni; Affinito, Ornella; Palumbo, Domenico; Florio, Ermanno; Monticelli, Antonella; Miele, Gennaro; Chiariotti, Lorenzo; Cocozza, Sergio

    2016-11-25

    CpG sites in an individual molecule may exist in a binary state (methylated or unmethylated) and each individual DNA molecule, containing a certain number of CpGs, is a combination of these states defining an epihaplotype. Classic quantification-based approaches to study DNA methylation are intrinsically unable to fully represent the complexity of the underlying methylation substrate. Epihaplotype-based approaches, on the other hand, allow methylation profiles of cell populations to be studied at the single molecule level. For such investigations, next-generation sequencing techniques can be used, both for quantitative and for epihaplotype analysis. Currently available tools for methylation analysis lack output formats that explicitly report CpG methylation profiles at the single molecule level and suitable statistical tools for their interpretation. Here we present ampliMethProfiler, a python-based pipeline for the extraction and statistical epihaplotype analysis of amplicons from targeted deep bisulfite sequencing of multiple DNA regions. The ampliMethProfiler tool provides an easy and user-friendly way to extract and analyze the epihaplotype composition of reads from targeted bisulfite sequencing experiments. ampliMethProfiler is written in the python language and requires a local installation of BLAST and (optionally) QIIME tools. It can be run on Linux and OS X platforms. The software is open source and freely available at http://amplimethprofiler.sourceforge.net .
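    The epihaplotype idea reduces each sequenced molecule to a binary methylation string that can then be tallied across reads; a minimal sketch (illustrative data, not ampliMethProfiler's actual I/O format):

    ```python
    from collections import Counter

    # Each bisulfite read is reduced to a binary string over its CpG sites
    # (1 = methylated, 0 = unmethylated); identical profiles are tallied so
    # the population can be described at the single-molecule level.

    def epihaplotype_profile(cpg_states):
        return "".join("1" if methylated else "0" for methylated in cpg_states)

    reads = [
        [True, True, False],    # methylated at CpG 1 and 2 only
        [True, True, False],
        [False, False, False],  # fully unmethylated molecule
    ]
    profiles = Counter(epihaplotype_profile(r) for r in reads)
    ```

    Note that a purely quantitative analysis of the same three reads would report per-site methylation fractions and lose the fact that two distinct molecular species are present.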

  18. Landslide and Land Subsidence Hazards to Pipelines

    USGS Publications Warehouse

    Baum, Rex L.; Galloway, Devin L.; Harp, Edwin L.

    2008-01-01

    Landslides and land subsidence pose serious hazards to pipelines throughout the world. Many existing pipeline corridors and more and more new pipelines cross terrain that is affected by either landslides, land subsidence, or both. Consequently the pipeline industry recognizes a need for increased awareness of methods for identifying and evaluating landslide and subsidence hazard for pipeline corridors. This report was prepared in cooperation with the U.S. Department of Transportation Pipeline and Hazardous Materials Safety Administration, and Pipeline Research Council International through a cooperative research and development agreement (CRADA) with DGH Consulting, Inc., to address the need for up-to-date information about current methods to identify and assess these hazards. Chapters in this report (1) describe methods for evaluating landslide hazard on a regional basis, (2) describe the various types of land subsidence hazard in the United States and available methods for identifying and quantifying subsidence, and (3) summarize current methods for investigating individual landslides. In addition to the descriptions, this report provides information about the relative costs, limitations and reliability of various methods.

  19. Plant microRNA-Target Interaction Identification Model Based on the Integration of Prediction Tools and Support Vector Machine

    PubMed Central

    Meng, Jun; Shi, Lin; Luan, Yushi

    2014-01-01

    Background Confident identification of microRNA-target interactions is significant for studying the function of microRNA (miRNA). Although some computational miRNA target prediction methods have been proposed for plants, results of the various methods tend to be inconsistent and usually lead to many false positives. To address these issues, we developed an integrated model for identifying plant miRNA-target interactions. Results Three online miRNA target prediction toolkits and machine learning algorithms were integrated to identify and analyze Arabidopsis thaliana miRNA-target interactions. Principal component analysis (PCA) feature extraction and self-training technology were introduced to improve the performance. Results showed that the proposed model outperformed the previously existing methods. The results were validated using degradome sequencing supported Arabidopsis thaliana miRNA-target interactions. The proposed model, constructed on Arabidopsis thaliana, was applied to Oryza sativa and Vitis vinifera to demonstrate that it is effective for other plant species. Conclusions The integrated model of online predictors and a local PCA-SVM classifier yielded credible, high-quality miRNA-target interactions. The supervised learning algorithm of the PCA-SVM classifier was employed in plant miRNA target identification for the first time. Its performance can be substantially improved if more experimentally proven training samples are provided. PMID:25051153
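The PCA feature-extraction stage of a PCA-SVM classifier can be sketched with NumPy alone (the SVM stage, and the authors' actual features and data, are omitted; the shapes below are illustrative only):

```python
import numpy as np

def pca_features(X, k):
    """Project samples onto the top-k principal components.

    X: (n_samples, n_features) matrix; returns (n_samples, k) scores,
    ordered by decreasing explained variance.
    """
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # scores in top-k subspace

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))                    # 50 mock samples, 10 features
Z = pca_features(X, 3)                           # reduced to 3 features
print(Z.shape)  # (50, 3)
```

The reduced matrix Z would then be fed to the SVM in place of the raw features, which is the usual motivation: fewer, decorrelated inputs for the classifier.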

  20. Kepler Planet Detection Metrics: Per-Target Flux-Level Transit Injection Tests of TPS for Data Release 25

    NASA Technical Reports Server (NTRS)

    Burke, Christopher J.; Catanzarite, Joseph

    2017-01-01

    Quantifying the ability of a transiting planet survey to recover transit signals has commonly been accomplished through Monte-Carlo injection of transit signals into the observed data and subsequent running of the signal search algorithm (Gilliland et al., 2000; Weldrake et al., 2005; Burke et al., 2006). In order to characterize the performance of the Kepler pipeline (Twicken et al., 2016; Jenkins et al., 2017) on a sample of over 200,000 stars, two complementary injection and recovery tests are utilized:
    1. Injection of a single transit signal per target into the image or pixel-level data, hereafter referred to as pixel-level transit injection (PLTI), with subsequent processing through the Photometric Analysis (PA), Presearch Data Conditioning (PDC), Transiting Planet Search (TPS), and Data Validation (DV) modules of the Kepler pipeline. The PLTI quantification of the Kepler pipeline's completeness has been described previously by Christiansen et al. (2015, 2016); the completeness of the final SOC 9.3 Kepler pipeline acting on the Data Release 25 (DR25) light curves is described by Christiansen (2017).
    2. Injection of multiple transit signals per target into the normalized flux time series data with a subsequent transit search using a streamlined version of the Transiting Planet Search (TPS) module. This test, hereafter referred to as flux-level transit injection (FLTI), is the subject of this document. By running a heavily modified version of TPS, FLTI is able to perform many injections on selected targets and determine in some detail which injected signals are recoverable. Significant numerical efficiency gains are enabled by precomputing the data conditioning steps at the onset of TPS and limiting the search parameter space (i.e., orbital period, transit duration, and ephemeris zero-point) to a small region around each injected transit signal.
    The PLTI test has the advantage that it follows transit signals through all processing steps of the Kepler pipeline, and
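A toy flux-level injection-and-recovery test in the spirit of FLTI can be written in a few lines (none of the Kepler pipeline's actual detrending or TPS machinery is reproduced here; this is a plain box search over a few trial periods, with invented numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
flux = 1.0 + rng.normal(0.0, 1e-4, n)      # normalized flux, white noise only

# Inject a box-shaped transit: depth 1e-3, duration 10 cadences, period 500.
period, duration, depth, t0 = 500, 10, 1e-3, 100
in_transit = ((np.arange(n) - t0) % period) < duration
flux[in_transit] -= depth

def box_depth(flux, trial_period, duration):
    """Deepest box-average dip found after folding at a trial period."""
    phase = np.arange(len(flux)) % trial_period
    return max(flux.mean() - flux[(phase >= s) & (phase < s + duration)].mean()
               for s in range(trial_period - duration))

# The injected period should yield the deepest folded box signal.
trial_periods = [333, 400, 500, 625]
best = max(trial_periods, key=lambda p: box_depth(flux, p, duration))
print(best)  # 500
```

Repeating this over many injected (period, duration, depth) combinations and tallying which are recovered is, in miniature, what an FLTI-style completeness measurement does.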

  1. 49 CFR 192.755 - Protecting cast-iron pipelines.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 3 2012-10-01 2012-10-01 false Protecting cast-iron pipelines. 192.755 Section... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Maintenance § 192.755 Protecting cast... pipeline is disturbed: (a) That segment of the pipeline must be protected, as necessary, against damage...

  2. 49 CFR 192.755 - Protecting cast-iron pipelines.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 3 2011-10-01 2011-10-01 false Protecting cast-iron pipelines. 192.755 Section... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Maintenance § 192.755 Protecting cast... pipeline is disturbed: (a) That segment of the pipeline must be protected, as necessary, against damage...

  3. 49 CFR 192.755 - Protecting cast-iron pipelines.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 3 2014-10-01 2014-10-01 false Protecting cast-iron pipelines. 192.755 Section... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Maintenance § 192.755 Protecting cast... pipeline is disturbed: (a) That segment of the pipeline must be protected, as necessary, against damage...

  4. 49 CFR 192.755 - Protecting cast-iron pipelines.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 3 2013-10-01 2013-10-01 false Protecting cast-iron pipelines. 192.755 Section... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Maintenance § 192.755 Protecting cast... pipeline is disturbed: (a) That segment of the pipeline must be protected, as necessary, against damage...

  5. 30 CFR 250.1005 - Inspection requirements for DOI pipelines.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 2 2011-07-01 2011-07-01 false Inspection requirements for DOI pipelines. 250.1005 Section 250.1005 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, REGULATION, AND ENFORCEMENT... Pipelines and Pipeline Rights-of-Way § 250.1005 Inspection requirements for DOI pipelines. (a) Pipeline...

  6. Pipeline perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kern, J.J.

    1978-01-01

    The recently completed 800-mile trans-Alaska pipeline is reviewed from the perspective of its first six months of successful operation. Because of the many environmental and political constraints, the $7.7 billion project is viewed as a triumph of both engineering and capitalism. Design problems were imposed by the harsh climate and terrain and by the constant public and bureaucratic monitoring. Specifications are reviewed for the pipes, valves, river crossings, pump stations, control stations, and the terminal at Valdez, where special ballast treatment and a vapor-recovery system were required to protect the harbor's water and air quality. The article outlines operating procedures and contingency planning for the pipeline and terminal. (DCK)

  7. Seqping: gene prediction pipeline for plant genomes using self-training gene models and transcriptomic data.

    PubMed

    Chan, Kuang-Lim; Rosli, Rozana; Tatarinova, Tatiana V; Hogan, Michael; Firdaus-Raih, Mohd; Low, Eng-Ti Leslie

    2017-01-27

    Gene prediction is one of the most important steps in the genome annotation process. A large number of software tools and pipelines developed with various computing techniques are available for gene prediction. However, these systems have yet to accurately predict all or even most of the protein-coding regions. Furthermore, none of the currently available gene-finders has a universal Hidden Markov Model (HMM) that can perform gene prediction for all organisms equally well in an automatic fashion. We present an automated gene prediction pipeline, Seqping, that uses self-training HMM models and transcriptomic data. The pipeline processes the genome and transcriptome sequences of the target species using the GlimmerHMM, SNAP, and AUGUSTUS pipelines, followed by the MAKER2 program to combine predictions from the three tools in association with the transcriptomic evidence. Seqping generates species-specific HMMs that are able to offer unbiased gene predictions. The pipeline was evaluated using the Oryza sativa and Arabidopsis thaliana genomes. Benchmarking Universal Single-Copy Orthologs (BUSCO) analysis showed that the pipeline was able to identify at least 95% of BUSCO's plantae dataset. Our evaluation shows that Seqping was able to generate better gene predictions than three HMM-based programs (MAKER2, GlimmerHMM and AUGUSTUS) using their respective available HMMs. Seqping had the highest accuracy in rice (0.5648 for CDS, 0.4468 for exon, and 0.6695 for nucleotide structure) and A. thaliana (0.5808 for CDS, 0.5955 for exon, and 0.8839 for nucleotide structure). Seqping provides researchers with a seamless pipeline to train species-specific HMMs and predict genes in newly sequenced or less-studied genomes. We conclude that Seqping's predictions are more accurate than gene predictions using the other three approaches with the default or available HMMs.

  8. PRIMO: An Interactive Homology Modeling Pipeline.

    PubMed

    Hatherley, Rowan; Brown, David K; Glenister, Michael; Tastan Bishop, Özlem

    2016-01-01

    The development of automated servers to predict the three-dimensional structure of proteins has seen much progress over the years. These servers make calculations simpler, but largely exclude users from the process. In this study, we present the PRotein Interactive MOdeling (PRIMO) pipeline for homology modeling of protein monomers. The pipeline eases the multi-step modeling process, and reduces the workload required by the user, while still allowing engagement from the user during every step. Default parameters are given for each step, which can either be modified or supplemented with additional external input. PRIMO has been designed for users of varying levels of experience with homology modeling. The pipeline incorporates a user-friendly interface that makes it easy to alter parameters used during modeling. During each stage of the modeling process, the site provides suggestions for novice users to improve the quality of their models. PRIMO provides functionality that allows users to also model ligands and ions in complex with their protein targets. Herein, we assess the accuracy of the fully automated capabilities of the server, including a comparative analysis of the available alignment programs, as well as of the refinement levels used during modeling. The tests presented here demonstrate the reliability of the PRIMO server when producing a large number of protein models. While PRIMO does focus on user involvement in the homology modeling process, the results indicate that in the presence of suitable templates, good quality models can be produced even without user intervention. This gives an idea of the base level accuracy of PRIMO, which users can improve upon by adjusting parameters in their modeling runs. The accuracy of PRIMO's automated scripts is being continuously evaluated by the CAMEO (Continuous Automated Model EvaluatiOn) project. The PRIMO site is free for non-commercial use and can be accessed at https://primo.rubi.ru.ac.za/.

  9. PRIMO: An Interactive Homology Modeling Pipeline

    PubMed Central

    Glenister, Michael

    2016-01-01

    The development of automated servers to predict the three-dimensional structure of proteins has seen much progress over the years. These servers make calculations simpler, but largely exclude users from the process. In this study, we present the PRotein Interactive MOdeling (PRIMO) pipeline for homology modeling of protein monomers. The pipeline eases the multi-step modeling process, and reduces the workload required by the user, while still allowing engagement from the user during every step. Default parameters are given for each step, which can either be modified or supplemented with additional external input. PRIMO has been designed for users of varying levels of experience with homology modeling. The pipeline incorporates a user-friendly interface that makes it easy to alter parameters used during modeling. During each stage of the modeling process, the site provides suggestions for novice users to improve the quality of their models. PRIMO provides functionality that allows users to also model ligands and ions in complex with their protein targets. Herein, we assess the accuracy of the fully automated capabilities of the server, including a comparative analysis of the available alignment programs, as well as of the refinement levels used during modeling. The tests presented here demonstrate the reliability of the PRIMO server when producing a large number of protein models. While PRIMO does focus on user involvement in the homology modeling process, the results indicate that in the presence of suitable templates, good quality models can be produced even without user intervention. This gives an idea of the base level accuracy of PRIMO, which users can improve upon by adjusting parameters in their modeling runs. The accuracy of PRIMO’s automated scripts is being continuously evaluated by the CAMEO (Continuous Automated Model EvaluatiOn) project. The PRIMO site is free for non-commercial use and can be accessed at https://primo.rubi.ru.ac.za/. PMID:27855192

  10. Kepler Planet Detection Metrics: Per-Target Detection Contours for Data Release 25

    NASA Technical Reports Server (NTRS)

    Burke, Christopher J.; Catanzarite, Joseph

    2017-01-01

    A necessary input to planet occurrence calculations is an accurate model for the pipeline completeness (Burke et al., 2015). This document describes the use of the Kepler planet occurrence rate products in order to calculate a per-target detection contour for the measured Data Release 25 (DR25) pipeline performance. A per-target detection contour measures, for a given combination of orbital period, Porb, and planet radius, Rp, what fraction of transit signals are recoverable by the Kepler pipeline (Twicken et al., 2016; Jenkins et al., 2017). The steps for calculating a detection contour follow the procedure outlined in Burke et al. (2015), but have been updated to provide improved accuracy enabled by the substantially larger database of transit injection and recovery tests that were performed on the final version (i.e., SOC 9.3) of the Kepler pipeline (Christiansen, 2017; Burke & Catanzarite, 2017a). In the following sections, we describe the main inputs to the per-target detection contour and provide a worked example of the python software released with this document (Kepler Planet Occurrence Rate Tools, KeplerPORTs) that illustrates the generation of a detection contour in practice. As background material for this document and its nomenclature, we recommend the reader be familiar with the previous method of calculating a detection contour (Section 2 of Burke et al., 2015), input parameters relevant for describing the data quantity and quality of Kepler targets (Burke & Catanzarite, 2017b), and the extensive new transit injection and recovery tests of the Kepler pipeline (Christiansen et al., 2016; Burke & Catanzarite, 2017a; Christiansen, 2017).
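In outline, a per-target detection contour is just the recovered fraction of injected signals binned over (Porb, Rp). A minimal sketch on mock injection results follows (the recovery model and all numbers are invented; KeplerPORTs itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
# Mock injection results: recovery is easier for big planets at short periods.
porb = rng.uniform(10, 700, 5000)          # injected orbital period [days]
rp = rng.uniform(0.5, 4.0, 5000)           # injected planet radius [R_Earth]
recovered = rng.random(5000) < np.clip(rp / 4.0 - porb / 4000.0, 0.0, 1.0)

# Bin injections on a (period, radius) grid; the detection contour is the
# recovered fraction in each cell.
p_edges = np.linspace(10, 700, 8)
r_edges = np.linspace(0.5, 4.0, 8)
hits, _, _ = np.histogram2d(porb[recovered], rp[recovered],
                            bins=(p_edges, r_edges))
alls, _, _ = np.histogram2d(porb, rp, bins=(p_edges, r_edges))
contour = hits / np.maximum(alls, 1.0)     # completeness per grid cell
print(contour.shape)  # (7, 7)
```

Occurrence-rate work then divides the observed planet counts in each cell by this completeness, which is why an accurate contour is a necessary input.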

  11. 49 CFR 192.627 - Tapping pipelines under pressure.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Tapping pipelines under pressure. 192.627 Section... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Operations § 192.627 Tapping pipelines under pressure. Each tap made on a pipeline under pressure must be performed by a crew qualified to make...

  12. Stability of subsea pipelines during large storms

    PubMed Central

    Draper, Scott; An, Hongwei; Cheng, Liang; White, David J.; Griffiths, Terry

    2015-01-01

    On-bottom stability design of subsea pipelines transporting hydrocarbons is important to ensure safety and reliability but is challenging to achieve in the onerous metocean (meteorological and oceanographic) conditions typical of large storms (such as tropical cyclones, hurricanes or typhoons). This challenge is increased by the fact that industry design guidelines presently give no guidance on how to incorporate the potential benefits of seabed mobility, which can lead to lowering and self-burial of the pipeline on a sandy seabed. In this paper, we demonstrate recent advances in experimental modelling of pipeline scour and present results investigating how pipeline stability can change in a large storm. An emphasis is placed on the initial development of the storm, where scour is inevitable on an erodible bed as the storm velocities build up to peak conditions. During this initial development, we compare the rate at which peak near-bed velocities increase in a large storm (typically less than 10⁻³ m s⁻²) to the rate at which a pipeline scours and subsequently lowers (which is dependent not only on the storm velocities, but also on the mechanism of lowering and the pipeline properties). We show that the relative magnitude of these rates influences pipeline embedment during a storm and the stability of the pipeline. PMID:25512592

  13. MEASURING TRANSIT SIGNAL RECOVERY IN THE KEPLER PIPELINE. I. INDIVIDUAL EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christiansen, Jessie L.; Clarke, Bruce D.; Burke, Christopher J.

    The Kepler mission was designed to measure the frequency of Earth-size planets in the habitable zone of Sun-like stars. A crucial component for recovering the underlying planet population from a sample of detected planets is understanding the completeness of that sample: the fraction of the planets that could have been discovered in a given data set that actually were detected. Here, we outline the information required to determine the sample completeness, and describe an experiment to address a specific aspect of that question, i.e., the issue of transit signal recovery. We investigate the extent to which the Kepler pipeline preserves individual transit signals by injecting simulated transits into the pixel-level data, processing the modified pixels through the pipeline, and comparing the measured transit signal-to-noise ratio (S/N) to that expected without perturbation by the pipeline. We inject simulated transit signals across the full focal plane for a set of observations for a duration of 89 days. On average, we find that the S/N of the injected signal is recovered at MS = 0.9973(±0.0012) × BS − 0.0151(±0.0049), where MS is the measured S/N and BS is the baseline, or expected, S/N. The 1σ width of the distribution around this correlation is ±2.64%. This indicates an extremely high fidelity in reproducing the expected detection statistics for single transit events, and provides teams performing their own periodic transit searches the confidence that there is no systematic reduction in transit signal strength introduced by the pipeline. We discuss the pipeline processes that cause the measured S/N to deviate significantly from the baseline S/N for a small fraction of targets; these are primarily the handling of data adjacent to spacecraft re-pointings and the removal of harmonics prior to the measurement of the S/N. Finally, we outline the further work required to characterize the completeness of the Kepler pipeline.
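The fitted relation reported above, MS ≈ 0.9973·BS − 0.0151 (coefficient uncertainties dropped for simplicity), makes a quick worked example possible:

```python
def measured_snr(baseline_snr):
    """Expected measured S/N after pipeline processing, using the fitted
    linear relation MS = 0.9973 * BS - 0.0151 from the abstract above
    (the quoted uncertainties on both coefficients are omitted)."""
    return 0.9973 * baseline_snr - 0.0151

# A transit injected at an expected (baseline) S/N of 10 is recovered,
# on average, at essentially the same strength:
print(round(measured_snr(10.0), 3))  # 9.958
```

The slope being within 0.3% of unity and the intercept near zero is the quantitative content of the "extremely high fidelity" claim in the abstract.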

  14. Identification of the Downstream Promoter Targets of Smad Tumor Suppressors in Human Breast Cancer Cells

    DTIC Science & Technology

    2004-10-01

    signaling mediator Smad2, Smad3 and Smad4 which form oligomeric complexes and migrate into nucleus to function as transcription factors to modulate... Smad3 and Smad4. 2. Identification of the downstream promoter targets of Smad3 or Smad4 in breast cancer cells. 3. Identify Smad4 regulated downstream...Development of a novel chromatin immunoprecipitation assay (CHIPS) using a TAP-TAG system to isolate in vivo binding targets of Smad3 and Smad4

  15. Commissioning of a new helium pipeline

    NASA Technical Reports Server (NTRS)

    2000-01-01

    At the commissioning of a new high-pressure helium pipeline at Kennedy Space Center, Ramon Lugo, acting executive director, JPMO , presents a plaque to Center Director Roy Bridges. The pipeline will service launch needs at the new Delta IV Complex 37 at Cape Canaveral Air Force Station. Others at the ceremony were Jerry Jorgensen, pipeline project manager, Space Gateway Support (SGS); Col. Samuel Dick, representative of the 45th Space Wing; David Herst, director, Delta IV Launch Sites; Pierre Dufour, president and CEO, Air Liquide America Corporation; and Michael Butchko, president, SGS. The nine-mile-long buried pipeline will also serve as a backup helium resource for Shuttle launches. Nearly one launch's worth of helium will be available in the pipeline to support a Shuttle pad in an emergency. The line originates at the Helium Facility on KSC and terminates in a meter station at the perimeter of the Delta IV launch pad.

  16. Commissioning of a new helium pipeline

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Jerry Jorgensen, pipeline project manager, Space Gateway Support (SGS) presents an award of appreciation to H.T. Everett, KSC Propellants manager, at the commissioning of a new high-pressure helium pipeline at Kennedy Space Center. The pipeline will service launch needs at the new Delta IV Complex 37 at Cape Canaveral Air Force Station. The nine-mile-long buried pipeline will also serve as a backup helium resource for Shuttle launches. Nearly one launch's worth of helium will be available in the pipeline to support a Shuttle pad in an emergency. The line originates at the Helium Facility on KSC and terminates in a meter station at the perimeter of the Delta IV launch pad. Others at the ceremony were Center Director Roy Bridges; Col. Samuel Dick, representative of the 45th Space Wing; Ramon Lugo, acting executive director, JPMO; David Herst, director, Delta IV Launch Sites; Pierre Dufour, president and CEO, Air Liquide America Corporation; and Michael Butchko, president, SGS.

  17. Commissioning of a new helium pipeline

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Jerry Jorgensen welcomes the audience to the commissioning of a new high-pressure helium pipeline at Kennedy Space Center. Jorgensen, with Space Gateway Support (SGS), is the pipeline project manager. To the right is Ramon Lugo, acting executive director, JPMO. Others at the ceremony were Center Director Roy Bridges; Col. Samuel Dick, representative of the 45th Space Wing; David Herst, director, Delta IV Launch Sites; Pierre Dufour, president and CEO, Air Liquide America Corporation; and Michael Butchko, president, SGS. The pipeline will service launch needs at the new Delta IV Complex 37 at Cape Canaveral Air Force Station. The nine-mile-long buried pipeline will also serve as a backup helium resource for Shuttle launches. Nearly one launch's worth of helium will be available in the pipeline to support a Shuttle pad in an emergency. The line originates at the Helium Facility on KSC and terminates in a meter station at the perimeter of the Delta IV launch pad.

  18. Integrated surface management for pipeline construction: The Mid-America Pipeline Company Four Corners Project

    Treesearch

    Maria L. Sonett

    1999-01-01

    Integrated surface management techniques for pipeline construction through arid and semi-arid rangeland ecosystems are presented in a case history of a 412-mile pipeline construction project in New Mexico. Planning, implementation and monitoring for restoration of surface hydrology, soil stabilization, soil cover, and plant species succession are discussed. Planning...

  19. Pipeline transport and simultaneous saccharification of corn stover.

    PubMed

    Kumar, Amit; Cameron, Jay B; Flynn, Peter C

    2005-05-01

    Pipeline transport of corn stover delivered by truck from the field is evaluated against a range of truck transport costs. Corn stover transported by pipeline at 20% solids concentration (wet basis) or higher could directly enter an ethanol fermentation plant, and hence the investment in the pipeline inlet end processing facilities displaces comparable investment in the plant. At 20% solids, pipeline transport of corn stover costs less than trucking at capacities in excess of 1.4 M dry tonnes/yr when compared to a mid range of truck transport cost (excluding any credit for economies of scale achieved in the ethanol fermentation plant from larger scale due to multiple pipelines). Pipelining of corn stover gives the opportunity to conduct simultaneous transport and saccharification (STS). If current enzymes are used, this would require elevated temperature. Heating of the slurry for STS, which in a fermentation plant is achieved from waste heat, is a significant cost element (more than 5 cents/l of ethanol) if done at the pipeline inlet unless waste heat is available, for example from an electric power plant located adjacent to the pipeline inlet. Heat loss in a 1.26 m pipeline carrying 2 M dry tonnes/yr is about 5 degrees C at a distance of 400 km in typical prairie clay soils, and would not likely require insulation; smaller pipelines or different soil conditions might require insulation for STS. Saccharification in the pipeline would reduce the need for investment in the fermentation plant, saving about 0.2 cents/l of ethanol. Transport of corn stover in multiple pipelines offers the opportunity to develop a large ethanol fermentation plant, avoiding some of the diseconomies of scale that arise from smaller plants whose capacities are limited by issues of truck congestion.

  20. California crude-pipeline plans detailed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronco, M.J.

    1986-06-09

    California and the U.S. West have recently become a center for crude-oil pipeline activity. That activity includes existing and proposed lines, offshore and onshore terminals, and some unusual permitting and construction requirements. Operation of existing pipelines is influenced by the varying gravities of crudes in the area. California has three distinct producing areas from which pipelines deliver crude to refineries or marine terminals: 1. The inland Los Angeles basin and coast from Orange County to Ventura County. 2. The San Joaquin Valley in central California, which is between the coastal mountains and the Sierras. 3. That portion of the Outer Continental Shelf (OCS) located primarily in federal waters off Santa Barbara and San Luis Obispo counties on the central coast. The Los Angeles coastal and inland basin crude-oil pipeline system consists of gathering lines to move crude from the many wells throughout Ventura, Orange, and Los Angeles counties to operating refineries in the greater Los Angeles area. Major refineries include ARCO at Carson, Chevron at El Segundo, Mobil at Torrance, and Shell, Texaco, and Unocal at Wilmington. The many different crude-oil pipelines serving these refineries from Ventura County and Orange County and from the many sites around Los Angeles County are too numerous to list.

  1. Identification of apoptosis-related PLZF target genes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardo, Maria Victoria; Yelo, Estefania; Gimeno, Lourdes

    2007-07-27

    The PLZF gene encodes a BTB/POZ-zinc finger-type transcription factor, involved in physiological development, proliferation, differentiation, and apoptosis. In this paper, we investigate proliferation, survival, and gene expression regulation in stable clones from the human haematopoietic K562, DG75, and Jurkat cell lines with inducible expression of PLZF. In Jurkat cells, but not in K562 and DG75 cells, PLZF induced growth suppression and apoptosis in a cell density-dependent manner. Deletion of the BTB/POZ domain of PLZF abrogated growth suppression and apoptosis. PLZF was expressed with a nuclear speckled pattern distinctively in the full-length PLZF-expressing Jurkat clones, suggesting that the nuclear speckled localization is required for PLZF-induced apoptosis. By microarray analysis, we identified that the apoptosis-inducer TP53INP1, ID1, and ID3 genes were upregulated, and the apoptosis-inhibitor TERT gene was downregulated. The identification of apoptosis-related PLZF target genes may have biological and clinical relevance in cancer typified by altered PLZF expression.

  2. Redefining the Data Pipeline Using GPUs

    NASA Astrophysics Data System (ADS)

    Warner, C.; Eikenberry, S. S.; Gonzalez, A. H.; Packham, C.

    2013-10-01

    There are two major challenges facing the next generation of data processing pipelines: 1) handling an ever-increasing volume of data as array sizes continue to increase and 2) the desire to process data in near real-time to maximize observing efficiency by providing rapid feedback on data quality. Combining the power of modern graphics processing units (GPUs), relational database management systems (RDBMSs), and extensible markup language (XML) to re-imagine traditional data pipelines will allow us to meet these challenges. Modern GPUs contain hundreds of processing cores, each of which can process hundreds of threads concurrently. Technologies such as Nvidia's Compute Unified Device Architecture (CUDA) platform and the PyCUDA (http://mathema.tician.de/software/pycuda) module for Python allow us to write parallel algorithms and easily link GPU-optimized code into existing data pipeline frameworks. This approach has produced speed gains of over a factor of 100 compared to CPU implementations for individual algorithms and overall pipeline speed gains of a factor of 10-25 compared to traditionally built data pipelines for both imaging and spectroscopy (Warner et al., 2011). However, there are still many bottlenecks inherent in the design of traditional data pipelines. For instance, file input/output of intermediate steps is now a significant portion of the overall processing time. In addition, most traditional pipelines are not designed to be able to process data on-the-fly in real time. We present a model for a next-generation data pipeline that has the flexibility to process data in near real-time at the observatory as well as to automatically process huge archives of past data by using a simple XML configuration file. XML is ideal for describing both the dataset and the processes that will be applied to the data. Meta-data for the datasets would be stored using an RDBMS (such as MySQL or PostgreSQL) which
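The XML-described pipeline idea is easy to picture with Python's standard library. The element and attribute names below are invented for illustration, since the abstract does not publish a schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical configuration describing a dataset and the ordered list of
# processing steps to apply to it (names are illustrative only).
config = """
<pipeline>
  <dataset path="/data/night1" type="imaging"/>
  <process name="dark_subtract"/>
  <process name="flat_field"/>
  <process name="stack" method="median"/>
</pipeline>
"""

root = ET.fromstring(config)
steps = [p.get("name") for p in root.findall("process")]
print(steps)  # ['dark_subtract', 'flat_field', 'stack']
```

A driver would dispatch each named step to a CPU or GPU implementation, which is what makes a declarative configuration like this attractive: the same file can drive near real-time processing at the telescope and bulk reprocessing of archives.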

  3. Natural gas pipeline leak detector based on NIR diode laser absorption spectroscopy.

    PubMed

    Gao, Xiaoming; Fan, Hong; Huang, Teng; Wang, Xia; Bao, Jian; Li, Xiaoyun; Huang, Wei; Zhang, Weijun

    2006-09-01

    The paper reports on the development of an integrated natural gas pipeline leak detector based on diode laser absorption spectroscopy. The detector transmits a 10 mW, 1.653 μm DFB diode laser beam and detects a fraction of the backscatter reflected from topographic targets. To eliminate the effect of the topographic scatter targets, a ratio detection technique was used. Wavelength modulation and harmonic detection were used to improve the detection sensitivity. The experimental detection limit is 50 ppm·m, and remote detection at distances of up to 20 m from a topographic scatter target is demonstrated. Using a simulated pipe leak, the minimum detectable leak flux is less than 10 ml/min.
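Wavelength modulation with second-harmonic (2f) detection can be sketched numerically: near a line center the absorption responds approximately quadratically (evenly) to the wavelength dither, so the detected intensity acquires a component at twice the modulation frequency that a lock-in can isolate from broadband noise. All numbers below are illustrative, not the instrument's:

```python
import numpy as np

fs, f_mod = 50_000, 1_000            # sample rate [Hz], modulation freq [Hz]
t = np.arange(5000) / fs             # 0.1 s of data

# Toy detector signal: an even (quadratic) absorption response to the dither.
# sin^2 = (1 - cos(2wt))/2, so a 0.05 quadratic dip yields a 0.025 2f term.
dither = np.sin(2 * np.pi * f_mod * t)
signal = 1.0 - 0.05 * dither**2
signal += np.random.default_rng(3).normal(0.0, 0.01, t.size)  # detector noise

# Lock-in style 2f demodulation: mix with a reference at 2*f_mod and average.
ref_2f = np.cos(2 * np.pi * 2 * f_mod * t)
x2f = 2.0 * np.mean(signal * ref_2f)
print(x2f)  # close to the analytic 2f amplitude, 0.05 / 2 = 0.025
```

The 2f amplitude scales with absorption (and hence path-integrated concentration, the ppm·m quantity above), while slow intensity drifts and 1/f noise average away in the demodulation, which is why harmonic detection improves sensitivity.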

  4. Network Understanding of Herb Medicine via Rapid Identification of Ingredient-Target Interactions

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Ping; Pan, Jian-Bo; Zhang, Chi; Ji, Nan; Wang, Hao; Ji, Zhi-Liang

    2014-01-01

    Today, herb medicines have become a major source for the discovery of novel agents for countering diseases. However, many of them are largely under-explored in pharmacology due to the limitations of current experimental approaches. Therefore, we proposed a computational framework in this study for network understanding of herb pharmacology via rapid identification of putative ingredient-target interactions at the human structural proteome level. A marketed anti-cancer herb medicine in China, Yadanzi (Brucea javanica), was chosen for mechanistic study. In total, 7,119 ingredient-target interactions were identified for thirteen Yadanzi active ingredients. Among them, about 29.5% were estimated to have better binding affinity than their corresponding marketed drug-target interactions. Further bioinformatics analyses suggest that simultaneous manipulation of multiple proteins in the MAPK signaling pathway and the phosphorylation process of anti-apoptosis may largely account for Yadanzi's activity against non-small cell lung cancers. In summary, our strategy provides an efficient yet economical solution for systematic understanding of the power of herbs.

  5. Network understanding of herb medicine via rapid identification of ingredient-target interactions.

    PubMed

    Zhang, Hai-Ping; Pan, Jian-Bo; Zhang, Chi; Ji, Nan; Wang, Hao; Ji, Zhi-Liang

    2014-01-16

    Today, herb medicines have become a major source for the discovery of novel agents against disease. However, many of them remain largely under-explored pharmacologically due to the limitations of current experimental approaches. We therefore propose a computational framework for network understanding of herb pharmacology via rapid identification of putative ingredient-target interactions at the human structural proteome level. A marketed anti-cancer herb medicine in China, Yadanzi (Brucea javanica), was chosen for mechanistic study. In total, 7,119 ingredient-target interactions were identified for thirteen Yadanzi active ingredients. Of these, about 29.5% were estimated to have better binding affinity than the corresponding marketed drug-target interactions. Further bioinformatics analyses suggest that simultaneous manipulation of multiple proteins in the MAPK signaling pathway and of the anti-apoptotic phosphorylation process may largely account for Yadanzi's efficacy against non-small cell lung cancers. In summary, our strategy provides an efficient yet economical solution for the systematic understanding of herbs' power.

  6. Improving throughput for temporal target nomination using existing infrastructure

    NASA Astrophysics Data System (ADS)

    Raeth, Peter G.

    2007-04-01

    Earlier, we reported on predictive anomaly detection (PAD) for nominating targets within data streams generated by persistent sensing and surveillance. The technique is purely temporal and does not directly depend on the physics of the sensed environment. Since PAD adapts to evolving data streams, it makes no determinacy assumptions. We showed PAD to be general across sensor types, demonstrating it on synthetic chaotic data and in audio, visual, and infrared applications. Defense-oriented demonstrations included explosions, muzzle flashes, and missile and aircraft detection. Experiments were ground-based and air-to-air. As new sensors come on line, PAD offers immediate data filtering and target nomination. Its results can be taken individually, pixel by pixel, for spectral analysis and material detection/identification. They can also be grouped for shape analysis, target identification, and track development. PAD analyses reduce data volume by around 95%, depending on target number and size, while still retaining all target indicators. While PAD's code is simple compared to physics codes, PAD tends to build a huge model: a PAD model for 512 x 640 frames may contain 19,660,800 Gaussian basis functions. (PAD models grow linearly with the number of pixels and with the frequency content, in the FFT sense, of the sensed scenario's background data.) PAD's computational and data intensity is an example of what one sees in new algorithms now in the R&D pipeline, especially as DoD seeks capability that runs fully automatically, with little to no human interaction. Work is needed to improve algorithms' throughput while employing existing infrastructure, yet allowing for growth in the types of hardware employed. In this present paper, we discuss a generic cluster interface for legacy codes that can be partitioned at the data level. The discussion's foundation is the growth of PAD models to accommodate a particular scenario and the need to
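    The model size quoted in this abstract can be checked with back-of-envelope arithmetic. Note the per-pixel basis count of 60 is inferred from the abstract's two numbers, not stated in it; the abstract ties the per-pixel count to the frequency content of the background data.

```python
# Back-of-envelope check of the quoted PAD model size: 512 x 640 pixels
# with an assumed fixed number of Gaussian basis functions per pixel
# reproduces the 19,660,800 figure, and shows the linear growth in
# pixel count that the abstract describes.
pixels = 512 * 640                  # 327,680 pixels per frame
basis_per_pixel = 60                # inferred: 19,660,800 / 327,680
model_size = pixels * basis_per_pixel
assert model_size == 19_660_800

# Linear scaling with pixel count: doubling the frame width doubles
# the model, which is why throughput work is needed for larger sensors.
assert (1024 * 640) * basis_per_pixel == 2 * model_size
```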

  7. The Oman-India gas pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, P.M.

    1995-12-31

    In March 1993, the Governments of the Sultanate of Oman and India executed a Memorandum of Understanding for a long-term Gas Supply Contract to transport natural gas from Oman to India by pipeline. A feasibility study was undertaken to determine whether such a pipeline was technically achievable and economically attractive. Work was initiated with a consortium of internationally recognized major design and construction firms, as well as with consultants knowledgeable in gas supply and demand in the region. Alternative gas supply volumes as well as two distinct pipeline routes were analyzed in significant detail. As a result of this work, it was concluded that a pipeline crossing, taking a direct route from Oman to India, is economically and technically feasible. In September 1994, the Agreement on Principal Terms for the supply of gas to India from Oman was agreed by the respective governmental authorities. The project and its status are described.

  8. Nearshore Pipeline Installation Methods.

    DTIC Science & Technology

    1981-08-01

    inches b) Pipe, materials of construction: fully rigid, semi-rigid, flexible c) Pipeline length, maximum 2 miles d) Pipeline design life, minimum 15...common to their operations. Permanent facilities are specified in the Statement of Work. Therefore, a minimum design life of 15 years is chosen, which...makes the pipe leakproof and resists corrosion and abrasion. 5) Interlocked Z-shaped steel or stainless steel carcass - resists internal and external

  9. The Professional Pipeline for Educational Leadership. A White Paper Developed to Inform the Work of the National Policy Board for Educational Administration

    ERIC Educational Resources Information Center

    Hitt, Dallas Hambrick; Tucker, Pamela D.; Young, Michelle D.

    2012-01-01

    The professional pipeline represents a developmental perspective for fostering leadership capacity in schools and districts, from identification of potential talent during the recruitment phase to ensuring career-long learning through professional development. An intentional and mindful approach to supporting the development of educational leaders…

  10. PANGEA: pipeline for analysis of next generation amplicons

    PubMed Central

    Giongo, Adriana; Crabb, David B; Davis-Richardson, Austin G; Chauliac, Diane; Mobberley, Jennifer M; Gano, Kelsey A; Mukherjee, Nabanita; Casella, George; Roesch, Luiz FW; Walts, Brandon; Riva, Alberto; King, Gary; Triplett, Eric W

    2010-01-01

    High-throughput DNA sequencing can identify organisms and describe population structures in many environmental and clinical samples. Current technologies generate millions of reads in a single run, requiring extensive computational strategies to organize, analyze and interpret those sequences. A series of bioinformatics tools for high-throughput sequencing analysis, including preprocessing, clustering, database matching and classification, have been compiled into a pipeline called PANGEA. The PANGEA pipeline was written in Perl and can be run on Mac OS X, Windows or Linux. With PANGEA, sequences obtained directly from the sequencer can be processed quickly to provide the files needed for sequence identification by BLAST and for comparison of microbial communities. Two different sets of bacterial 16S rRNA sequences were used to show the efficiency of this workflow. The first set of 16S rRNA sequences is derived from various soils from Hawaii Volcanoes National Park. The second set is derived from stool samples collected from diabetes-resistant and diabetes-prone rats. The workflow described here allows the investigator to quickly assess libraries of sequences on personal computers with customized databases. PANGEA is provided for users as individual scripts for each step in the process or as a single script where all processes, except the χ2 step, are joined into one program called the ‘backbone’. PMID:20182525
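    The "backbone" organization described in this record — one script per step, plus a single driver that chains the steps — can be sketched as follows. PANGEA itself is written in Perl; this Python sketch uses placeholder step bodies and hypothetical function names purely to show the composition pattern.

```python
# Minimal sketch of a per-step pipeline joined into one 'backbone'
# driver, after the organization PANGEA describes. The step names
# follow the abstract (preprocessing, clustering, database matching,
# classification); the bodies are toy placeholders, not PANGEA's code.

def preprocess(reads):
    """Normalize raw reads (here: strip whitespace, uppercase)."""
    return [r.strip().upper() for r in reads]

def cluster(reads):
    """Collapse identical reads into representative sequences."""
    return sorted(set(reads))

def match_database(reads):
    """Stand-in for a database search step (e.g. handing off to BLAST)."""
    return {r: "db_hit" for r in reads}

def classify(hits):
    """Stand-in for taxonomic classification of matched sequences."""
    return dict(hits)

def backbone(reads):
    """Run every step in order, as PANGEA's single-script mode does."""
    return classify(match_database(cluster(preprocess(reads))))

result = backbone(["acgt\n", "ACGT", "ttaa"])   # -> two clustered sequences
```

    Keeping each step callable on its own (as PANGEA's individual scripts are) lets a user rerun one stage with different parameters without repeating the whole workflow.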

  11. PANGEA: pipeline for analysis of next generation amplicons.

    PubMed

    Giongo, Adriana; Crabb, David B; Davis-Richardson, Austin G; Chauliac, Diane; Mobberley, Jennifer M; Gano, Kelsey A; Mukherjee, Nabanita; Casella, George; Roesch, Luiz F W; Walts, Brandon; Riva, Alberto; King, Gary; Triplett, Eric W

    2010-07-01

    High-throughput DNA sequencing can identify organisms and describe population structures in many environmental and clinical samples. Current technologies generate millions of reads in a single run, requiring extensive computational strategies to organize, analyze and interpret those sequences. A series of bioinformatics tools for high-throughput sequencing analysis, including pre-processing, clustering, database matching and classification, have been compiled into a pipeline called PANGEA. The PANGEA pipeline was written in Perl and can be run on Mac OS X, Windows or Linux. With PANGEA, sequences obtained directly from the sequencer can be processed quickly to provide the files needed for sequence identification by BLAST and for comparison of microbial communities. Two different sets of bacterial 16S rRNA sequences were used to show the efficiency of this workflow. The first set of 16S rRNA sequences is derived from various soils from Hawaii Volcanoes National Park. The second set is derived from stool samples collected from diabetes-resistant and diabetes-prone rats. The workflow described here allows the investigator to quickly assess libraries of sequences on personal computers with customized databases. PANGEA is provided for users as individual scripts for each step in the process or as a single script where all processes, except the chi(2) step, are joined into one program called the 'backbone'.

  12. Improved FTA Methodology and Application to Subsea Pipeline Reliability Design

    PubMed Central

    Lin, Jing; Yuan, Yongbo; Zhang, Mingyuan

    2014-01-01

    An innovative logic tree, Failure Expansion Tree (FET), is proposed in this paper, which improves on traditional Fault Tree Analysis (FTA). It describes a different thinking approach for risk factor identification and reliability risk assessment. By providing a more comprehensive and objective methodology, the rather subjective nature of FTA node discovery is significantly reduced and the resulting mathematical calculations for quantitative analysis are greatly simplified. Applied to the Useful Life phase of a subsea pipeline engineering project, the approach provides a more structured analysis by constructing a tree following the laws of physics and geometry. Resulting improvements are summarized in comparison table form. PMID:24667681

  13. 78 FR 52820 - Pipeline Safety: Request for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-26

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2013-0181] Pipeline Safety: Request for Special Permit AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. ACTION: Notice. SUMMARY: Pursuant to the Federal pipeline safety laws...

  14. 78 FR 52821 - Pipeline Safety: Request for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-26

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2013-0146] Pipeline Safety: Request for Special Permit AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. ACTION: Notice. SUMMARY: Pursuant to the Federal pipeline safety laws...

  15. 78 FR 5866 - Pipeline Safety: Annual Reports and Validation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-28

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID PHMSA-2012-0319] Pipeline Safety: Annual Reports and Validation AGENCY: Pipeline and Hazardous Materials... 2012 gas transmission and gathering annual reports, remind pipeline owners and operators to validate...

  16. The influence of camouflage, obstruction, familiarity and spatial ability on target identification from an unmanned ground vehicle.

    PubMed

    Fincannon, Thomas; Keebler, Joseph R; Jentsch, Florian; Curtis, Michael

    2013-01-01

    The purpose of this study was to examine the effects of environmental and cognitive factors on the identification of targets from an unmanned ground vehicle (UGV). This was accomplished by manipulating the obstruction, camouflage and familiarity of objects in the environment, while also measuring spatial ability. The effects of these variables on target identification were studied by measuring the performance of participants who observed pre-recorded video from a 1:35 scale military operations in urban terrain facility. Analyses indicated that a combination of camouflage and obstruction caused the most detrimental effects on performance, and that there were differences in the recognition of familiar and unfamiliar targets. Further analysis indicated that these detrimental effects could only be overcome with a combination of target familiarity and spatial ability. The findings highlight the degree to which environmental factors hinder performance and the need for a multidimensional approach to improving performance under these conditions. Areas in need of future research are also discussed. Cognitive theory is applied to the problem of perception from UGVs. Results from an experimental study indicate that a combination of camouflage and obstruction caused the most detrimental effects on performance, with differences in the recognition of both familiar and unfamiliar targets. Familiarity and spatial ability interacted to predict performance.

  17. 75 FR 43612 - Pipeline Safety: Request for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-26

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2010-0042] Pipeline Safety: Request for Special Permit AGENCY: Pipeline and Hazardous Materials..., Inc., a natural gas pipeline operator, seeking relief from compliance with certain requirements in the...

  18. 77 FR 34458 - Pipeline Safety: Requests for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-11

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2012-0112] Pipeline Safety: Requests for Special Permit AGENCY: Pipeline and Hazardous Materials... BreitBurn Energy Company LP, two natural gas pipeline operators, seeking relief from compliance with...

  19. 75 FR 66425 - Pipeline Safety: Request for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-28

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2010-0124] Pipeline Safety: Request for Special Permit AGENCY: Pipeline and Hazardous Materials... Company, LP, a natural gas pipeline operator, seeking relief from compliance with certain requirements in...

  20. 78 FR 14877 - Pipeline Safety: Incident and Accident Reports

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-07

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID PHMSA-2013-0028] Pipeline Safety: Incident and Accident Reports AGENCY: Pipeline and Hazardous Materials... PHMSA F 7100.2--Incident Report--Natural and Other Gas Transmission and Gathering Pipeline Systems and...

  1. 75 FR 9018 - Pipeline Safety: Random Drug Testing Rate

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-26

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID PHMSA-2010-0034] Pipeline Safety: Random Drug Testing Rate AGENCY: Pipeline and Hazardous Materials... pipelines and operators of liquefied natural gas facilities must select and test a percentage of covered...

  2. 77 FR 2606 - Pipeline Safety: Random Drug Testing Rate

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-18

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID PHMSA-2012-0004] Pipeline Safety: Random Drug Testing Rate AGENCY: Pipeline and Hazardous Materials... pipelines and operators of liquefied natural gas facilities must select and test a percentage of covered...

  3. Regulatory reform for natural gas pipelines: The effect on pipeline and distribution company share prices

    NASA Astrophysics Data System (ADS)

    Jurman, Elisabeth Antonie

    1997-08-01

    The natural gas shortages of the 1970s focused considerable attention on the federal government's role in altering energy consumption. For the natural gas industry, these shortages eventually led to the passage of the Natural Gas Policy Act (NGPA) in 1978 as part of the National Energy Plan. A series of events in the 1980s brought about the restructuring of interstate natural gas pipelines, which were transformed by regulators and the courts from monopolies into competitive entities. This transformation also changed their relationship with their downstream customers, the LDCs, who no longer had to deal with pipelines as the only merchants of gas. Regulatory reform made it possible for LDCs to buy directly from producers, using the pipelines only for delivery of their purchases. This study tests for the existence of monopoly rents by analyzing the daily returns of natural gas pipeline and utility industry stock price data from 1982 to 1990, a period of regulatory reform for the natural gas industry. The study's main objective is to investigate the degree of empirical support for claims that regulatory reforms increase profits in the affected industry, as the normative theory of regulation expects, or decrease profits, as advocates of the positive theory of regulation believe. I also test Norton's theory of risk, which predicts that systematic risk will increase for firms undergoing deregulation. Based on a sample of twelve natural gas pipelines and 25 utilities, an event study approach was employed to measure the impact of regulatory event announcements on daily natural gas pipeline and utility industry stock price data using a market model regression equation.
The results of this study provide some evidence that regulatory reforms did not increase the profits of pipeline firms, confirming the expectations of those who claim that excess profits result from regulation and will disappear once that protection is removed and the firms are operating in
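    The market-model event-study approach this record describes can be sketched in a few lines: fit alpha and beta by ordinary least squares on pre-event daily returns against the market, then measure the event-day abnormal return as the actual return minus the model's prediction. The return figures below are illustrative only, not data from the study.

```python
# Sketch of a market-model event study. OLS on pre-event daily returns
# gives alpha/beta; the abnormal return on the event day is the actual
# return minus (alpha + beta * market return). All numbers are made up.

def ols(x, y):
    """Simple-regression OLS: return (alpha, beta) for y ~ alpha + beta*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    return my - beta * mx, beta

# Illustrative pre-event daily returns (market index vs. one pipeline stock):
market = [0.010, -0.020, 0.005, 0.015]
stock = [0.012, -0.018, 0.006, 0.017]
alpha, beta = ols(market, stock)

# Event day: market up 2%, stock up 3% -> positive abnormal return.
abnormal = 0.03 - (alpha + beta * 0.02)
```

    Summing abnormal returns over an announcement window (a cumulative abnormal return) is the usual way such studies test whether regulatory news moved the affected firms' prices.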

  4. Identification of novel microRNAs in Hevea brasiliensis and computational prediction of their targets

    PubMed Central

    2012-01-01

    Background: Plants respond to external stimuli through fine regulation of gene expression, partially ensured by small RNAs. Of these, microRNAs (miRNAs) play a crucial role. They negatively regulate gene expression by targeting the cleavage or translational inhibition of target messenger RNAs (mRNAs). In Hevea brasiliensis, environmental and harvesting stresses are known to affect natural rubber production. This study set out to identify abiotic stress-related miRNAs in Hevea using next-generation sequencing and bioinformatic analysis. Results: Deep sequencing of small RNAs was carried out on plantlets subjected to severe abiotic stress using the Solexa technique. By combining the LeARN pipeline, data from the Plant microRNA database (PMRD) and Hevea EST sequences, we identified 48 conserved miRNA families already characterized in other plant species, and 10 putatively novel miRNA families. The most abundant miRNA size was 24 nucleotides, except for seven families. Several MIR genes produced both 20-22 nucleotide and 23-27 nucleotide miRNAs. Both size classes were detected for conserved as well as putatively novel miRNA families, suggesting their functional duality. The EST databases were scanned with conserved and novel miRNA sequences, and miRNA targets were computationally predicted and analysed. The predicted targets involved in “responses to stimuli” and in “antioxidant” and “transcription activities” are presented. Conclusions: Deep sequencing of small RNAs combined with transcriptomic data is a powerful tool for identifying conserved and novel miRNAs when the complete genome is not yet available. Our study provides additional information for evolutionary studies and reveals potentially specific regulation of the control of redox status in Hevea. PMID:22330773

  5. 78 FR 65429 - Pipeline Safety: Request for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-31

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2010-0041] Pipeline Safety: Request for Special Permit AGENCY: Pipeline and Hazardous Materials...-0041 Williams Gas Pipeline 49 CFR 192.150........ To authorize the extension Company, LLC (WGP). of a...

  6. 76 FR 11853 - Pipeline Safety: Request for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-03

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2011-0027] Pipeline Safety: Request for Special Permit AGENCY: Pipeline and Hazardous Materials... a 24-inch mainline natural gas pipeline, 595 feet in length. The first segment of the special permit...

  7. Lessons Learned from Developing and Operating the Kepler Science Pipeline and Building the TESS Science Pipeline

    NASA Technical Reports Server (NTRS)

    Jenkins, Jon M.

    2017-01-01

    The experience acquired through the development, implementation, and operation of the Kepler/K2 science pipelines can provide lessons learned for the development of science pipelines for other missions, such as NASA's Transiting Exoplanet Survey Satellite (TESS) and ESA's PLATO mission.

  8. SEALS: an Innovative Pipeline Program Targeting Obstacles to Diversity in the Physician Workforce.

    PubMed

    Fritz, Cassandra D L; Press, Valerie G; Nabers, Darrell; Levinson, Dana; Humphrey, Holly; Vela, Monica B

    2016-06-01

    Medical schools may find implementing pipeline programs for minority pre-medical students prohibitive due to a number of factors, including the lack of well-described programs in the literature, the limited evidence for program development, and institutional financial barriers. Our goals were to (1) design a pipeline program based on educational theory; (2) deliver the program in a low-cost, sustainable manner; and (3) evaluate intermediate outcomes of the program. SEALS is a 6-week program based on an asset bundles model designed to promote, among minority pre-medical students: (1) socialization and professionalism, (2) education in science learning tools, (3) acquisition of finance literacy, (4) the leveraging of mentorship and networks, and (5) social expectations and resilience. This is a prospective mixed-methods study. Students completed survey instruments pre-program, post-program, and 6 months post-program, establishing intermediate outcome measures. Thirteen students matriculated to SEALS. The SEALS cohort rated themselves as improved or significantly improved when asked to rate their familiarity with MCAT components (p < 0.01), ability to ask for a letter of recommendation (p = 0.04), and importance of interview skills (p = 0.04) compared with before the program. Over 90% of students referenced the health disparities lecture series as an inspiration to advocate for minority health. Six-month surveys suggested that SEALS students acquired and applied four of the five assets at their college campuses. This low-cost, high-quality program can be undertaken by medical schools interested in promoting a diverse workforce that may ultimately begin to address and reduce health care disparities.

  9. HybPiper: Extracting coding sequence and introns for phylogenetics from high-throughput sequencing reads using target enrichment

    PubMed Central

    Johnson, Matthew G.; Gardner, Elliot M.; Liu, Yang; Medina, Rafael; Goffinet, Bernard; Shaw, A. Jonathan; Zerega, Nyree J. C.; Wickett, Norman J.

    2016-01-01

    Premise of the study: Using sequence data generated via target enrichment for phylogenetics requires reassembly of high-throughput sequence reads into loci, presenting a number of bioinformatics challenges. We developed HybPiper as a user-friendly platform for assembly of gene regions, extraction of exon and intron sequences, and identification of paralogous gene copies. We test HybPiper using baits designed to target 333 phylogenetic markers and 125 genes of functional significance in Artocarpus (Moraceae). Methods and Results: HybPiper implements parallel execution of sequence assembly in three phases: read mapping, contig assembly, and target sequence extraction. The pipeline was able to recover nearly complete gene sequences for all genes in 22 species of Artocarpus. HybPiper also recovered more than 500 bp of nontargeted intron sequence in over half of the phylogenetic markers and identified paralogous gene copies in Artocarpus. Conclusions: HybPiper was designed for Linux and Mac OS X and is freely available at https://github.com/mossmatters/HybPiper. PMID:27437175
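    HybPiper's three-phase, per-gene execution described in this record can be sketched as follows. The phase functions are placeholders, not HybPiper's actual code, and the worker-pool layout is an assumption about how the per-gene independence can be exploited; HybPiper itself wraps external tools for each phase.

```python
# Sketch of per-gene parallelism across the three phases the abstract
# names: read mapping -> contig assembly -> target sequence extraction.
# Each target gene is independent, so genes can run in a worker pool.
# The phase bodies below are toy stand-ins for the real tool calls.
from multiprocessing import Pool

def map_reads(gene):
    return f"{gene}: reads mapped"

def assemble_contigs(state):
    return state + ", contigs assembled"

def extract_target(state):
    return state + ", target extracted"

def process_gene(gene):
    """Run all three phases for one target gene."""
    return extract_target(assemble_contigs(map_reads(gene)))

if __name__ == "__main__":
    genes = ["geneA", "geneB", "geneC"]     # hypothetical bait targets
    with Pool(3) as pool:                   # one worker per gene here
        results = pool.map(process_gene, genes)
```

    Because every gene's pipeline is self-contained, scaling to hundreds of targets (e.g. the 333 phylogenetic markers mentioned above) is mostly a matter of pool size.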

  10. 75 FR 40863 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-14

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2010-0193] Pipeline Safety: Information Collection Activities AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Notice and request for comments. SUMMARY: In...

  11. 77 FR 74276 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-13

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2012-0302] Pipeline Safety: Information Collection Activities AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Notice and request for comments. SUMMARY: In...

  12. 76 FR 50539 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-15

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2011-0136] Pipeline Safety: Information Collection Activities AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Notice and request for comments. SUMMARY: In...

  13. 75 FR 35516 - Pipeline Safety: Request for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-22

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2010-0147] Pipeline Safety: Request for Special Permit AGENCY: Pipeline and Hazardous Materials... with the Class 1 location portion of a 7.4 mile natural gas pipeline to be constructed in Alaska. This...

  14. 49 CFR 192.513 - Test requirements for plastic pipelines.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 3 2011-10-01 2011-10-01 false Test requirements for plastic pipelines. 192.513 Section 192.513 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND... Test requirements for plastic pipelines. (a) Each segment of a plastic pipeline must be tested in...

  15. 49 CFR 192.513 - Test requirements for plastic pipelines.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Test requirements for plastic pipelines. 192.513 Section 192.513 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND... Test requirements for plastic pipelines. (a) Each segment of a plastic pipeline must be tested in...

  16. 49 CFR 192.513 - Test requirements for plastic pipelines.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 3 2013-10-01 2013-10-01 false Test requirements for plastic pipelines. 192.513 Section 192.513 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND... Test requirements for plastic pipelines. (a) Each segment of a plastic pipeline must be tested in...

  17. 49 CFR 192.513 - Test requirements for plastic pipelines.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 3 2014-10-01 2014-10-01 false Test requirements for plastic pipelines. 192.513 Section 192.513 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND... Test requirements for plastic pipelines. (a) Each segment of a plastic pipeline must be tested in...

  18. 49 CFR 192.513 - Test requirements for plastic pipelines.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 3 2012-10-01 2012-10-01 false Test requirements for plastic pipelines. 192.513 Section 192.513 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND... Test requirements for plastic pipelines. (a) Each segment of a plastic pipeline must be tested in...

  19. From benchmarking HITS-CLIP peak detection programs to a new method for identification of miRNA-binding sites from Ago2-CLIP data

    PubMed Central

    Bottini, Silvia; Hamouda-Tekaya, Nedra; Tanasa, Bogdan; Zaragosi, Laure-Emmanuelle; Grandjean, Valerie; Repetto, Emanuela

    2017-01-01

    Experimental evidence indicates that about 60% of miRNA-binding activity does not follow the canonical rule of seed matching between the miRNA and target mRNAs, but instead reflects non-canonical miRNA targeting activity outside the seed or with seed-like motifs. Here, we propose a new unbiased method to identify canonical and non-canonical miRNA-binding sites from peaks identified by Ago2 cross-linked immunoprecipitation coupled with high-throughput sequencing (CLIP-seq). Since the quality of peaks is of pivotal importance for the final output of the proposed method, we provide a comprehensive benchmarking of four peak detection programs, namely CIMS, PIPE-CLIP, Piranha and Pyicoclip, on four publicly available Ago2-HITS-CLIP datasets and one unpublished in-house Ago2 dataset in stem cells. We measured sensitivity, specificity and positional accuracy in miRNA-binding site identification, as well as agreement with TargetScan. Second, we developed a new pipeline, called miRBShunter, to identify canonical and non-canonical miRNA-binding sites based on de novo motif identification from Ago2 peaks and prediction of miRNA::RNA heteroduplexes. miRBShunter was tested and experimentally validated on the in-house Ago2 dataset and on an Ago2-PAR-CLIP dataset in human stem cells. Overall, we provide guidelines for choosing a suitable peak detection program and a new method for miRNA-target identification. PMID:28108660

  20. 78 FR 16764 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2012-0302] Pipeline Safety: Information Collection Activities AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Notice and Request for Comments on a Previously...

  1. 76 FR 70217 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-10

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Administration [Docket No. PHMSA... OMB approval of new Information Collection. AGENCY: Pipeline and Hazardous Materials Safety... Pipeline and Hazardous Materials Safety Administration (PHMSA) published a notice in the Federal Register...

  2. 76 FR 21423 - Pipeline Safety: Request for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-15

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2011-0063] Pipeline Safety: Request for Special Permit AGENCY: Pipeline and Hazardous Materials... application is for two 30-inch segments, segments 3 and 4, of the TPL 330 natural gas pipeline located in St...

  3. Pipeline safety : the office of pipeline safety is changing how it oversees the pipeline industry

    DOT National Transportation Integrated Search

    2000-05-01

    Pipelines are inherently safer to the public than other modes of freight transportation for natural gas and hazardous liquids (such as oil products) because they are, for the most part, located underground. Nevertheless, the volatile nature of these ...

  4. Pipeline Safety: The Office of Pipeline Safety Is Changing How It Oversees the Pipeline Industry

    DOT National Transportation Integrated Search

    2000-05-01

    Pipelines are inherently safer to the public than other modes of freight transportation for natural gas and hazardous liquids (such as oil products) because they are, for the most part, located underground. Nevertheless, the volatile nature of these ...

  5. 75 FR 4610 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-28

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2009-0375] Pipeline Safety: Information Collection Activities AGENCY: Pipeline and Hazardous Materials Safety Administration. ACTION: Notice and request for comments. SUMMARY: On November 24, 2009, as...

  6. 75 FR 67807 - Pipeline Safety: Emergency Preparedness Communications

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-03

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... is issuing an Advisory Bulletin to remind operators of gas and hazardous liquid pipeline facilities... Gas Pipeline Systems. Subject: Emergency Preparedness Communications. Advisory: To further enhance the...

  7. 76 FR 65778 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-24

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No...: 12,120. Frequency of Collection: On occasion. 2. Title: Recordkeeping for Natural Gas Pipeline... investigating incidents. Affected Public: Operators of natural gas pipeline systems. Annual Reporting and...

  8. Seismic hazard exposure for the Trans-Alaska Pipeline

    USGS Publications Warehouse

    Cluff, L.S.; Page, R.A.; Slemmons, D.B.; Crouse, C.B.

    2003-01-01

    The discovery of oil on Alaska's North Slope and the construction of a pipeline to transport that oil across Alaska coincided with the National Environmental Policy Act of 1969 and a destructive Southern California earthquake in 1971 to cause stringent stipulations, state-of-the-art investigations, and innovative design for the pipeline. The magnitude 7.9 earthquake on the Denali fault in November 2002 was remarkably consistent with the design earthquake and fault displacement postulated for the Denali crossing of the Trans-Alaska Pipeline route. The pipeline maintained its integrity, and disaster was averted. Recent probabilistic studies to update previous hazard exposure conclusions suggest continuing pipeline integrity.

  9. The School-to-Prison Pipeline

    ERIC Educational Resources Information Center

    Elias, Marilyn

    2013-01-01

    Policies that encourage police presence at schools, harsh tactics including physical restraint, and automatic punishments that result in suspensions and out-of-class time are huge contributors to the school-to-prison pipeline, but the problem is more complex than that. The school-to-prison pipeline starts (or is best avoided) in the classroom.…

  10. 75 FR 13807 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-23

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... of Transportation, Pipeline and Hazardous Materials Safety Administration, 1200 New Jersey Avenue, SE...: Updates to Pipeline and Liquefied Natural Gas Reporting Requirements (One Rule). The Notice of Proposed...

  11. 76 FR 45904 - Pipeline Safety: Information Collection Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-01

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... at U.S. Department of Transportation, Pipeline and Hazardous Materials Safety Administration, 1200...: On Occasion. Title: Record Keeping for Natural Gas Pipeline Operators. OMB Control Number: 2137-0049...

  12. Pipeline enhances Norman Wells potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Approval of an oil pipeline from halfway down Canada's MacKenzie River Valley at Norman Wells to N. Alberta has raised the potential for development of large reserves along with controversy over native claims. The project involves 2 closely related proposals. One, by Esso Resources, the exploration and production unit of Imperial Oil, will increase oil production from the Norman Wells field from 3000 bpd currently to 25,000 bpd. The other proposal, by Interprovincial Pipeline (N.W) Ltd., calls for construction of an underground pipeline to transport the additional production from Norman Wells to Alberta. The 560-mile, 12-in. pipeline will extend from Norman Wells, which is 90 miles south of the Arctic Circle on the north shore of the Mackenzie River, south to the end of an existing line at Zama in N. Alberta. There will be 3 pumping stations en route. This work also discusses recovery, potential, drilling limitations, the processing plant, positive impact, and further development of the Norman Wells project.

  13. Project Report: Active Pipeline Encroachment Detector (Phase I)

    DOT National Transportation Integrated Search

    2008-06-10

    Of the many accident causes affecting oil and gas pipelines, approximately 40% are caused by third-party excavating activities into the buried pipeline right of way (ROW). According to DOT statistics, excavation damage is the second l...

  14. Airborne LIDAR Pipeline Inspection System (ALPIS) Mapping Tests

    DOT National Transportation Integrated Search

    2003-06-06

    Natural gas and hazardous liquid pipeline operators have a need to identify where leaks are occurring along their pipelines in order to lower the risks the pipelines pose to people and the environment. Current methods of locating natural gas and haza...

  15. 49 CFR 195.210 - Pipeline location.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF HAZARDOUS LIQUIDS BY... far as practicable, areas containing private dwellings, industrial buildings, and places of public...

  16. 49 CFR 195.210 - Pipeline location.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF HAZARDOUS LIQUIDS BY... far as practicable, areas containing private dwellings, industrial buildings, and places of public...

  17. 49 CFR 195.210 - Pipeline location.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF HAZARDOUS LIQUIDS BY... far as practicable, areas containing private dwellings, industrial buildings, and places of public...

  18. 49 CFR 195.210 - Pipeline location.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF HAZARDOUS LIQUIDS BY... far as practicable, areas containing private dwellings, industrial buildings, and places of public...

  19. An Analysis Pipeline with Statistical and Visualization-Guided Knowledge Discovery for Michigan-Style Learning Classifier Systems

    PubMed Central

    Urbanowicz, Ryan J.; Granizo-Mackenzie, Ambrose; Moore, Jason H.

    2014-01-01

    Michigan-style learning classifier systems (M-LCSs) represent an adaptive and powerful class of evolutionary algorithms which distribute the learned solution over a sizable population of rules. However, their application to complex real world data mining problems, such as genetic association studies, has been limited. Traditional knowledge discovery strategies for M-LCS rule populations involve sorting and manual rule inspection. While this approach may be sufficient for simpler problems, the confounding influence of noise and the need to discriminate between predictive and non-predictive attributes calls for additional strategies. Additionally, tests of significance must be adapted to M-LCS analyses in order to make them a viable option within fields that require such analyses to assess confidence. In this work we introduce an M-LCS analysis pipeline that combines uniquely applied visualizations with objective statistical evaluation for the identification of predictive attributes and reliable rule generalizations in noisy single-step data mining problems. This work considers an alternative paradigm for knowledge discovery in M-LCSs, shifting the focus from individual rules to a global, population-wide perspective. We demonstrate the efficacy of this pipeline applied to the identification of epistasis (i.e., attribute interaction) and heterogeneity in noisy simulated genetic association data. PMID:25431544

  20. Academic Pipeline and Futures Lab

    DTIC Science & Technology

    2016-02-01

    AFRL-RY-WP-TR-2015-0186, Academic Pipeline and Futures Lab. Brian D. Rigling, Wright State University, February 2016. Final report covering 12 June 2009 – 30 September 2015; describes the WSU academic pipeline and Layered Sensing Futures Lab.

  1. Oman-India pipeline route survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mullee, J.E.

    1995-12-01

    Paper describes the geological setting in the Arabian Sea for a proposed 28-inch gas pipeline from Oman to India reaching 3,500-m water depths. Covers planning, execution, quality control and results of geophysical, geotechnical and oceanographic surveys. Outlines theory and application of pipeline stress analysis on board survey vessel for feasibility assessment, and specifies equipment used.

  2. 76 FR 68828 - Pipeline Safety: Emergency Responder Forum

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-07

    ... PHMSA-2011-0295] Pipeline Safety: Emergency Responder Forum AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Notice of Forum. SUMMARY: PHMSA is co-sponsoring a one-day Emergency Responder Forum with the National Association of Pipeline Safety Representatives and the United...

  3. JGI Plant Genomics Gene Annotation Pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shu, Shengqiang; Rokhsar, Dan; Goodstein, David

    2014-07-14

    Plant genomes vary in size and are highly complex, with large amounts of repeats, genome duplication and tandem duplication. Genes encode a wealth of information useful in studying organisms, and it is critical to have high-quality, stable gene annotation. Thanks to advances in sequencing technology, many plant species' genomes have been sequenced and transcriptomes are also sequenced. To use these vast amounts of sequence data to make gene annotations or re-annotations in a timely fashion, an automatic pipeline is needed. The JGI plant genomics gene annotation pipeline, called integrated gene call (IGC), is our effort toward this aim, with the aid of an RNA-seq transcriptome assembly pipeline. It utilizes several gene predictors based on homolog peptides and transcript ORFs. See Methods for details. Here we present genome annotations of JGI flagship green plants produced by this pipeline, plus Arabidopsis and rice, except for chlamy, which was annotated by a third party. The genome annotations of these species and others are used in our gene family build pipeline and are accessible via the JGI Phytozome portal.

  4. ORAC-DR: A generic data reduction pipeline infrastructure

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Economou, Frossie

    2015-03-01

    ORAC-DR is a general purpose data reduction pipeline system designed to be instrument and observatory agnostic. The pipeline works with instruments as varied as infrared integral field units, imaging arrays and spectrographs, and sub-millimeter heterodyne arrays and continuum cameras. This paper describes the architecture of the pipeline system and the implementation of the core infrastructure. We finish by discussing the lessons learned since the initial deployment of the pipeline system in the late 1990s.

  5. Rights, Bunche, Rose and the "pipeline".

    PubMed Central

    Marks, Steven R.; Wilkinson-Lee, Ada M.

    2006-01-01

    We address education "pipelines" and their social ecology, drawing on the 1930s writing of Ralph J. Bunche, a Nobel peacemaker whose war against systematic second-class education for the poor, minority and nonminority alike, is nearly forgotten; and of the epidemiologist Geoffrey Rose, whose 1985 paper spotlighted the difficulty of shifting health status and risks in a "sick society." From the perspective of human rights and human development, we offer suggestions toward the paired "ends" of the pipeline: equality of opportunity for individuals, and equality of health for populations. We offer a national "to do" list to improve pipeline flow and then reconsider the merits of the "pipeline" metaphor, which neither matches the reality of lived education pathways nor supports notions of human rights, freedoms and capabilities, but rather reflects a commoditizing stance toward free persons. PMID:17019927

  6. Evaluation of fishing gear induced pipeline damage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellinas, C.P.; King, B.; Davies, R.

    1995-12-31

    Impact and damage to pipelines due to fishing activities is one of the hazards faced by North Sea pipelines during their operating lives. Available data indicate that about one in ten reported incidents is due to fishing activities. This paper is concerned with one such occurrence, the assessment of the resulting damage, the methods used to confirm pipeline integrity and the approaches developed for its repair.

  7. Gas pipeline leakage detection based on PZT sensors

    NASA Astrophysics Data System (ADS)

    Zhu, Junxiao; Ren, Liang; Ho, Siu-Chun; Jia, Ziguang; Song, Gangbing

    2017-02-01

    In this paper, an innovative method for rapid detection and location determination of pipeline leakage utilizing lead zirconate titanate (PZT) sensors is proposed. The negative pressure wave (NPW) is a stress wave generated by leakage in the pipeline, and propagates along the pipeline from the leakage point to both ends. Thus the NPW is associated with hoop strain variation along the pipe wall. PZT sensors mounted on the pipeline were used to measure the strain variation and allowed accurate (within 2% error) and repeatable location (within 4% variance) of five manually controlled leakage points. Experimental results have verified the effectiveness and the location accuracy for leakage in a 55 meter long model pipeline.
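The localization principle in this abstract follows from arrival-time arithmetic: the negative pressure wave reaches the two ends of a monitored segment at different times, and that offset pins down the leak. A sketch, with the wave speed and timings as illustrative assumptions:

```python
def locate_leak(length_m, wave_speed_mps, t_upstream_s, t_downstream_s):
    """Locate a leak from negative-pressure-wave arrival times.

    For a leak at distance x from the upstream sensor:
        t_up = x / v,   t_down = (L - x) / v
    so  x = (L + v * (t_up - t_down)) / 2.
    """
    dt = t_upstream_s - t_downstream_s
    return (length_m + wave_speed_mps * dt) / 2

# leak 20 m from the upstream end of a 55 m pipe, v assumed 1000 m/s
x = locate_leak(55.0, 1000.0, 0.020, 0.035)  # -> 20.0 m
```

In practice the PZT hoop-strain signals must first be time-stamped by a threshold or correlation detector; the formula above is only the final step.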

  8. Target Abundance-Based Fitness Screening (TAFiS) Facilitates Rapid Identification of Target-Specific and Physiologically Active Chemical Probes

    PubMed Central

    Butts, Arielle; DeJarnette, Christian; Peters, Tracy L.; Parker, Josie E.; Kerns, Morgan E.; Eberle, Karen E.; Kelly, Steve L.

    2017-01-01

    -generation target-based whole-cell screening approach that incorporates the principles of both chemical genetics and competitive fitness, which enables the identification of target-specific and physiologically active compounds from a single screen. We have chosen to validate this approach using the important human fungal pathogen Candida albicans with the intention of pursuing novel antifungal targets. However, this approach is broadly applicable and is expected to dramatically reduce the time and resources required to progress from screening hit to lead compound. PMID:28989971

  9. Efficient Identification of Murine M2 Macrophage Peptide Targeting Ligands by Phage Display and Next-Generation Sequencing.

    PubMed

    Liu, Gary W; Livesay, Brynn R; Kacherovsky, Nataly A; Cieslewicz, Maryelise; Lutz, Emi; Waalkes, Adam; Jensen, Michael C; Salipante, Stephen J; Pun, Suzie H

    2015-08-19

    Peptide ligands are used to increase the specificity of drug carriers to their target cells and to facilitate intracellular delivery. One method to identify such peptide ligands, phage display, enables high-throughput screening of peptide libraries for ligands binding to therapeutic targets of interest. However, conventional methods for identifying target binders in a library by Sanger sequencing are low-throughput, labor-intensive, and provide a limited perspective (<0.01%) of the complete sequence space. Moreover, the small sample space can be dominated by nonspecific, preferentially amplifying "parasitic sequences" and plastic-binding sequences, which may lead to the identification of false positives or exclude the identification of target-binding sequences. To overcome these challenges, we employed next-generation Illumina sequencing to couple high-throughput screening and high-throughput sequencing, enabling more comprehensive access to the phage display library sequence space. In this work, we define the hallmarks of binding sequences in next-generation sequencing data, and develop a method that identifies several target-binding phage clones for murine, alternatively activated M2 macrophages with a high (100%) success rate: sequences and binding motifs were reproducibly present across biological replicates; binding motifs were identified across multiple unique sequences; and an unselected, amplified library accurately filtered out parasitic sequences. In addition, we validate the Multiple Em for Motif Elicitation tool as an efficient and principled means of discovering binding sequences.
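The key filtering idea above, using an unselected but amplified library to discount preferentially amplifying "parasitic sequences", can be sketched as a per-sequence enrichment ratio. The peptide names, counts and pseudocount below are illustrative assumptions, not data from the study:

```python
def enrichment(selected_counts, control_counts, pseudo=1.0):
    """Rank phage clones by frequency enrichment in the selected pool
    over an unselected, amplified control; parasitic amplifiers that
    grow equally well without selection score near (or below) 1."""
    sel_total = sum(selected_counts.values())
    ctl_total = sum(control_counts.values())
    scores = {}
    for seq, n in selected_counts.items():
        f_sel = (n + pseudo) / (sel_total + pseudo)
        f_ctl = (control_counts.get(seq, 0) + pseudo) / (ctl_total + pseudo)
        scores[seq] = f_sel / f_ctl
    return scores

# hypothetical peptide clones: one true binder, one parasite, one neutral
selected = {"SVSVGMK": 900, "HAIYPRH": 80, "AKPLQNS": 20}
control  = {"SVSVGMK": 50,  "HAIYPRH": 700, "AKPLQNS": 20}
scores = enrichment(selected, control)
```

A real analysis would also require the motif-level reproducibility checks across biological replicates that the abstract describes.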

  10. Urban Underground Pipelines Mapping Using Ground Penetrating Radar

    NASA Astrophysics Data System (ADS)

    Jaw, S. W.; Hashim, M.

    2014-02-01

    Underground spaces are now being exploited for transportation, utilities, and public usage, and the underground has become a spider's web of utility networks. Mapping underground utility pipelines has become a challenging and difficult task; as such, it is often a "hit-and-miss" affair that results in many catastrophic damages, particularly in urban areas. Therefore, this study was conducted to extract locational information on urban underground utility pipelines using a trenchless measuring tool, namely ground penetrating radar (GPR). The focus of this study was underground utility pipeline mapping for retrieval of the geometric properties of the pipelines using GPR. To do this, a series of tests was first conducted at a test site and in a real-life experiment, followed by field-based modeling using the Finite-Difference Time-Domain (FDTD) method. Results provide the locational information of underground utility pipelines together with its mapping accuracy. This locational information is beneficial to civil infrastructure management and maintenance, which in the long term is time-saving and critically important for the development of metropolitan areas.
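The depth retrieval underlying GPR pipeline mapping reduces to two-way travel time and the soil's relative permittivity. A minimal sketch; the permittivity and travel-time values are illustrative assumptions:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gpr_depth(two_way_time_s, rel_permittivity):
    """Depth of a reflector (e.g. a pipe crown) from two-way GPR travel
    time: wave speed in soil v = c / sqrt(eps_r), depth = v * t / 2."""
    v = C / rel_permittivity ** 0.5
    return v * two_way_time_s / 2.0

# 20 ns two-way time in moist soil (eps_r assumed 16, so v ~ 7.5e7 m/s)
d = gpr_depth(20e-9, 16.0)  # -> ~0.75 m
```

Real surveys calibrate eps_r from known targets or hyperbola fitting; a wrong permittivity scales every depth estimate proportionally.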

  11. Structure identification by Mass Spectrometry Non-Targeted Analysis using the US EPA’s CompTox Chemistry Dashboard

    EPA Science Inventory

    Identification of unknowns in mass spectrometry based non-targeted analyses (NTA) requires the integration of complementary pieces of data to arrive at a confident, consensus structure. Researchers use chemical reference databases, spectral matching, fragment prediction tools, r...

  12. Computer models of complex multiloop branched pipeline systems

    NASA Astrophysics Data System (ADS)

    Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.

    2013-11-01

    This paper describes the principal theoretical concepts of the method used for constructing computer models of complex multiloop branched pipeline networks; this method is based on graph theory and Kirchhoff's two laws for electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of pipeline networks, when the latter are treated as single hydraulic systems. On the basis of multivariant calculations the reasons for existing problems can be identified, the least costly methods of their elimination can be proposed, and recommendations for planning the modernization of pipeline systems and construction of their new sections can be made. The results obtained can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the model has been verified on an example of designing a unified computer model of the heat network for centralized heat supply of the city of Samara.
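The graph-plus-Kirchhoff formulation amounts to nodal analysis: at every free node, inflow equals outflow, which yields a linear system in nodal pressures when pipe flow is taken proportional to the pressure difference (a laminar-flow simplification of the paper's method; the conductances and demands below are illustrative):

```python
# Toy network: source node 0 at fixed pressure 100 (arbitrary units);
# free nodes 1 and 2 with withdrawals q1 = 3 and q2 = 2 (flow units).
# Pipe "conductances" g (flow per unit pressure drop, circuit analogy):
#   g01 = 2 (pipe 0-1), g12 = 1 (pipe 1-2), g02 = 1 (pipe 0-2).
# Kirchhoff's first law at each free node:
#   g01*(p0 - p1) + g12*(p2 - p1) = q1
#   g12*(p1 - p2) + g02*(p0 - p2) = q2
p0, q1, q2 = 100.0, 3.0, 2.0
g01, g12, g02 = 2.0, 1.0, 1.0

# Rearranged as a 2x2 linear system in the unknown pressures p1, p2:
a11, a12, b1 = g01 + g12, -g12, g01 * p0 - q1   # node 1 balance
a21, a22, b2 = -g12, g12 + g02, g02 * p0 - q2   # node 2 balance

# Solve by Cramer's rule (a real network model would use a sparse solver)
det = a11 * a22 - a12 * a21
p1 = (b1 * a22 - a12 * b2) / det
p2 = (a11 * b2 - b1 * a21) / det
```

With the pressures known, each pipe's flow is just g * (pressure drop), and mass balance can be verified at every node.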

  13. Bad Actors Criticality Assessment for Pipeline system

    NASA Astrophysics Data System (ADS)

    Nasir, Meseret; Chong, Kit wee; Osman, Sabtuni; Siaw Khur, Wee

    2015-04-01

    Failure of a pipeline system can bring huge economic loss. In order to mitigate such catastrophic loss, it is necessary to evaluate and rank the impact of each bad actor in the pipeline system. In this study, bad actors are defined as the root causes or any potential factors leading to system downtime. Fault Tree Analysis (FTA) is used to analyze the probability of occurrence of each bad actor. Birnbaum's importance and criticality measure (BICM) is also employed to rank the impact of each bad actor on pipeline system failure. The results demonstrate that internal corrosion, external corrosion and construction damage are critical and contribute most to pipeline system failure, at 48.0%, 12.4% and 6.0% respectively. Thus, a minor improvement in internal corrosion, external corrosion and construction damage would bring significant changes in pipeline system performance and reliability. These results could also be useful for developing an efficient maintenance strategy by identifying the critical bad actors.
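For an OR-gate fault tree (any bad actor fails the system), the top-event probability and the Birnbaum/criticality importance used for this kind of ranking have closed forms. A sketch with made-up basic-event probabilities (the study's actual data are not reproduced here):

```python
def rank_bad_actors(probs):
    """OR-gate fault tree: P(top) = 1 - prod(1 - p_i).

    Birnbaum importance B_i = dP(top)/dp_i = prod over j != i of (1 - p_j);
    criticality importance = B_i * p_i / P(top), i.e. each bad actor's
    fractional contribution to system failure.
    """
    p_top = 1.0
    for p in probs.values():
        p_top *= 1.0 - p
    p_top = 1.0 - p_top
    criticality = {}
    for name, p in probs.items():
        b = 1.0
        for other, q in probs.items():
            if other != name:
                b *= 1.0 - q
        criticality[name] = b * p / p_top
    return p_top, criticality

p_top, crit = rank_bad_actors({
    "internal corrosion": 0.10,    # illustrative probabilities only
    "external corrosion": 0.03,
    "construction damage": 0.015,
})
```

Ranking by criticality importance then directly identifies where a small probability reduction buys the most system reliability.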

  14. Failure modes for pipelines in landslide areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruschi, R.; Spinazze, M.; Tomassini, D.

    1995-12-31

    In recent years a number of incidents involving pipelines affected by slow soil movements have been reported in the relevant literature. Related issues such as soil-pipe interaction have been studied both theoretically and through experimental surveys, along with the environmental conditions which are responsible for hazards to pipeline integrity. Suitable design criteria under these circumstances have been discussed by several authors, in particular in relation to a limit-state approach and hence strain-based criteria. The scope of this paper is to describe the failure mechanisms which may affect a pipeline in the presence of slow soil movements impacting on the pipeline, both in the longitudinal and transverse directions. Particular attention is paid to the environmental, geometric and structural parameters which steer the process toward one or another failure mechanism. Criteria for deciding upon the remedial measures required to guarantee the structural integrity of the pipeline, both in the short and in the long term, are discussed.

  15. Methods for protecting subsea pipelines and installations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rochelle, W.R.; Simpson, D.M.

    1981-01-01

    The hazards for subsea pipelines and installations are described. Methods currently being used to protect subsea pipelines and installations are discussed with the emphasis on various trenching methods and equipment. Technical data on progress rates for trenching and feasible depths of trench are given. Possible methods for protection against icebergs are discussed. A case for more comprehensive data on icebergs is presented. Should a pipeline become damaged, repair methods are noted.

  16. Commissioning of a new helium pipeline

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Center Director Roy Bridges addresses the audience at the commissioning of a new high-pressure helium pipeline at Kennedy Space Center that will service launch needs at the new Delta IV Complex 37 at Cape Canaveral Air Force Station. The nine-mile- long buried pipeline will also serve as a backup helium resource for Shuttle launches. Nearly one launch's worth of helium will be available in the pipeline to support a Shuttle pad in an emergency. The line originates at the Helium Facility on KSC and terminates in a meter station at the perimeter of the Delta IV launch pad. Others at the ceremony were Jerry Jorgensen, pipeline project manager, Space Gateway Support (SGS); Col. Samuel Dick, representative of the 45th Space Wing; Ramon Lugo, acting executive director, JPMO; David Herst, director, Delta IV Launch Sites; Pierre Dufour, president and CEO, Air Liquide America Corporation; and Michael Butchko, president, SGS.

  17. Nuclear Security: Target Analysis-rev

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Surinder Paul; Gibbs, Philip W.; Bultz, Garl A.

    2014-03-01

    The objectives of this presentation are to understand target identification, including roll-up and protracted theft; evaluate target identification in the SNRI; recognize the target characteristics and consequence levels; and understand graded safeguards.

  18. Garden Banks 388 deepwater pipeline span avoidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, S.W.; Sawyer, M.A.; Kenney, T.D.

    1995-12-31

    This paper will describe the span avoidance measures taken for the installation of the Garden Banks 388 deepwater oil and gas gathering pipelines. The two 12 inch pipelines connect a shallow water facility in EI-315 to a deep water subsea template in GB-388. These pipelines run across the irregular continental slope typically found in moderate to deep water in the Gulf of Mexico. To minimize pipeline spans, steps were taken during design, survey, and installation phases of the project. During each phase, as additional information became available, analyses and resulting recommended approaches were refined. This continuity, seldom easily obtained, proved beneficial in translating design work into field results.

  19. Sensor and transmitter system for communication in pipelines

    DOEpatents

    Cooper, John F.; Burnham, Alan K.

    2013-01-29

    A system for sensing and communicating in a pipeline that contains a fluid. An acoustic signal containing information about a property of the fluid is produced in the pipeline and transmitted through the pipeline. The signal is received and its information is used by a control system.

  20. Experimental design and data analysis of Ago-RIP-Seq experiments for the identification of microRNA targets.

    PubMed

    Tichy, Diana; Pickl, Julia Maria Anna; Benner, Axel; Sültmann, Holger

    2017-03-31

    The identification of microRNA (miRNA) target genes is crucial for understanding miRNA function. Many methods for genome-wide miRNA target identification have been developed in recent years; however, they have several limitations, including the dependence on low-confidence prediction programs and artificial miRNA manipulations. Ago-RNA immunoprecipitation combined with high-throughput sequencing (Ago-RIP-Seq) is a promising alternative. However, appropriate statistical data analysis algorithms taking into account the experimental design and the inherent noise of such experiments are largely lacking. Here, we investigate the experimental design for Ago-RIP-Seq and examine biostatistical methods to identify de novo miRNA target genes. The statistical approaches considered are either based on a negative binomial model fit to the read count data or applied to transformed data using a normal distribution-based generalized linear model. We compare them in a real-data simulation study using plasmode data sets and evaluate the suitability of the approaches to detect true miRNA targets by sensitivity and false discovery rates. Our results suggest that simple approaches, such as linear regression models on (appropriately) transformed read count data, are preferable.
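The preferred approach in this abstract, a linear model on transformed read counts, reduces for a two-group IP-versus-input comparison to a t-type statistic on log counts (the group coefficient of a model y ~ group). A self-contained sketch with simulated replicate counts; all numbers are illustrative:

```python
import math

def log2p1(xs):
    """log2(count + 1): a simple variance-stabilizing transform."""
    return [math.log2(x + 1) for x in xs]

def two_group_stat(ip_counts, input_counts):
    """Welch-type t statistic on log2(count + 1) replicate values,
    equivalent to testing the group coefficient of y ~ group."""
    a, b = log2p1(ip_counts), log2p1(input_counts)
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (ma - mb) / se if se else float("inf")

# a gene strongly enriched in the Ago2 IP versus input (hypothetical)
t = two_group_stat([850, 910, 780], [95, 120, 110])
```

A production analysis would additionally moderate the per-gene variances across the whole count matrix and control the false discovery rate across genes.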

  1. TRIC: an automated alignment strategy for reproducible protein quantification in targeted proteomics.

    PubMed

    Röst, Hannes L; Liu, Yansheng; D'Agostino, Giuseppe; Zanella, Matteo; Navarro, Pedro; Rosenberger, George; Collins, Ben C; Gillet, Ludovic; Testa, Giuseppe; Malmström, Lars; Aebersold, Ruedi

    2016-09-01

    Next-generation mass spectrometric (MS) techniques such as SWATH-MS have substantially increased the throughput and reproducibility of proteomic analysis, but ensuring consistent quantification of thousands of peptide analytes across multiple liquid chromatography-tandem MS (LC-MS/MS) runs remains a challenging and laborious manual process. To produce highly consistent and quantitatively accurate proteomics data matrices in an automated fashion, we developed TRIC (http://proteomics.ethz.ch/tric/), a software tool that utilizes fragment-ion data to perform cross-run alignment, consistent peak-picking and quantification for high-throughput targeted proteomics. TRIC reduced the identification error compared to a state-of-the-art SWATH-MS analysis without alignment by more than threefold at constant recall while correcting for highly nonlinear chromatographic effects. On a pulsed-SILAC experiment performed on human induced pluripotent stem cells, TRIC was able to automatically align and quantify thousands of light and heavy isotopic peak groups. Thus, TRIC fills a gap in the pipeline for automated analysis of massively parallel targeted proteomics data sets.
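Cross-run alignment of the kind TRIC automates can be illustrated at its simplest as fitting a retention-time mapping between two runs from shared, confidently identified anchor analytes. TRIC itself uses fragment-ion evidence and handles nonlinear chromatographic shifts; this least-squares linear fit is only a conceptual sketch with hypothetical values:

```python
def fit_rt_map(anchors):
    """Least-squares line rt_b = slope * rt_a + offset fitted to
    (rt_a, rt_b) pairs of analytes identified in both runs."""
    n = len(anchors)
    sx = sum(a for a, _ in anchors)
    sy = sum(b for _, b in anchors)
    sxx = sum(a * a for a, _ in anchors)
    sxy = sum(a * b for a, b in anchors)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - slope * sx) / n
    return slope, offset

# hypothetical anchor peptides: run B drifts 2% plus a 0.5 min offset
slope, offset = fit_rt_map([(10.0, 10.7), (20.0, 20.9), (40.0, 41.3)])

# map a peak-group apex observed in run A into run B's time scale
rt_in_b = slope * 30.0 + offset
```

The mapped time then defines where to look for the corresponding peak group in the other run, which is what allows consistent peak picking across a whole study.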

  2. ORAC-DR -- SCUBA Pipeline Data Reduction

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Economou, Frossie

    ORAC-DR is a flexible data reduction pipeline designed to reduce data from many different instruments. This document describes how to use the ORAC-DR pipeline to reduce data taken with the Submillimetre Common-User Bolometer Array (SCUBA) obtained from the James Clerk Maxwell Telescope.

  3. 77 FR 73637 - Alliance Pipeline L.P.; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-11

    ... Pipeline L.P.; Notice of Application Take notice that on November 26, 2012, Alliance Pipeline L.P..., Manager, Regulatory Affairs, Alliance Pipeline Ltd. on behalf of Alliance Pipeline L.P., 800, 605-5 Ave...] BILLING CODE 6717-01-P ...

  4. 49 CFR 192.755 - Protecting cast-iron pipelines.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Protecting cast-iron pipelines. 192.755 Section... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Maintenance § 192.755 Protecting cast-iron pipelines. When an operator has knowledge that the support for a segment of a buried cast-iron...

  5. A computational genomics pipeline for prokaryotic sequencing projects.

    PubMed

    Kislyuk, Andrey O; Katz, Lee S; Agrawal, Sonia; Hagen, Matthew S; Conley, Andrew B; Jayaraman, Pushkala; Nelakuditi, Viswateja; Humphrey, Jay C; Sammons, Scott A; Govil, Dhwani; Mair, Raydel D; Tatti, Kathleen M; Tondella, Maria L; Harcourt, Brian H; Mayer, Leonard W; Jordan, I King

    2010-08-01

    New sequencing technologies have accelerated research on prokaryotic genomes and have made genome sequencing operations outside major genome sequencing centers routine. However, no off-the-shelf solution exists for the combined assembly, gene prediction, genome annotation and data presentation necessary to interpret sequencing data. The resulting requirement to invest significant resources into custom informatics support for genome sequencing projects remains a major impediment to the accessibility of high-throughput sequence data. We present a self-contained, automated high-throughput open source genome sequencing and computational genomics pipeline suitable for prokaryotic sequencing projects. The pipeline has been used at the Georgia Institute of Technology and the Centers for Disease Control and Prevention for the analysis of Neisseria meningitidis and Bordetella bronchiseptica genomes. The pipeline is capable of enhanced or manually assisted reference-based assembly using multiple assemblers and modes; gene predictor combining; and functional annotation of genes and gene products. Because every component of the pipeline is executed on a local machine with no need to access resources over the Internet, the pipeline is suitable for projects of a sensitive nature. Annotation of virulence-related features makes the pipeline particularly useful for projects working with pathogenic prokaryotes. The pipeline is licensed under the open-source GNU General Public License and available at the Georgia Tech Neisseria Base (http://nbase.biology.gatech.edu/). The pipeline is implemented with a combination of Perl, Bourne Shell and MySQL and is compatible with Linux and other Unix systems.
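The "gene predictor combining" step mentioned above can be illustrated as a simple majority vote over gene calls from several predictors. The predictor names and coordinates below are invented for illustration, and the pipeline's real combiner is more sophisticated than an exact-coordinate vote:

```python
from collections import Counter

def combine_calls(*predictor_outputs, min_votes=2):
    """Accept a gene call (start, end, strand) when at least min_votes
    predictors report exactly the same call."""
    votes = Counter(call for output in predictor_outputs for call in output)
    return sorted(call for call, n in votes.items() if n >= min_votes)

# Hypothetical outputs from three gene predictors on the same contig
glimmer  = [(100, 400, "+"), (900, 1200, "-")]
prodigal = [(100, 400, "+"), (2000, 2600, "+")]
genemark = [(100, 400, "+"), (900, 1200, "-")]

print(combine_calls(glimmer, prodigal, genemark))
# [(100, 400, '+'), (900, 1200, '-')]
```

The call supported by only one predictor is dropped; raising `min_votes` to 3 would keep only the unanimous call.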

  6. Acoustic power delivery to pipeline monitoring wireless sensors.

    PubMed

    Kiziroglou, M E; Boyle, D E; Wright, S W; Yeatman, E M

    2017-05-01

The use of energy harvesting for powering wireless sensors is made more challenging in most applications because the harvester must be customized to each specific application environment: the precise location, direction and frequency of the available motion, as well as the temporal variation and unpredictability of the energy source. Wireless power transfer from dedicated sources can overcome these difficulties, and in this work, the use of targeted ultrasonic power transfer as a possible method for remote powering of sensor nodes is investigated. A powering system for pipeline monitoring sensors is described and studied experimentally, with a pair of identical, non-inertial piezoelectric transducers used at the transmitter and receiver. Power transmission of 18 mW (root-mean-square) through 1 m of a 118 mm diameter cast iron pipe with 8 mm wall thickness is demonstrated. By analysis of the delay between transmission and reception, including reflections from the pipeline edges, a transmission speed of 1000 m/s is observed, corresponding to the phase velocity of the L(0,1) axial and F(1,1) radial modes of the pipe structure. A reduction of power delivery with water filling is observed, yet over 4 mW of delivered power through a fully filled pipe is demonstrated. The transmitted power and voltage levels exceed the requirements for efficient power management, including rectification at cold-starting conditions, and for the operation of low-power sensor nodes. The proposed powering technique may allow the implementation of energy-autonomous wireless sensor systems for monitoring industrial and network pipeline infrastructure.
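The speed estimate follows directly from distance over delay. A trivial sketch with values consistent with the reported numbers (the ~1 ms direct delay is inferred here from the 1 m span and 1000 m/s speed, not quoted in the abstract):

```python
def wave_speed(distance_m, delay_s):
    """Guided-wave speed estimated from the direct transmission delay."""
    return distance_m / delay_s

# 1 m of pipe traversed in ~1 ms gives the reported 1000 m/s phase velocity
v = wave_speed(1.0, 1.0e-3)
print(v)  # 1000.0
```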

  7. Bioinformatic pipelines in Python with Leaf

    PubMed Central

    2013-01-01

    Background An incremental, loosely planned development approach is often used in bioinformatic studies when dealing with custom data analysis in a rapidly changing environment. Unfortunately, the lack of a rigorous software structuring can undermine the maintainability, communicability and replicability of the process. To ameliorate this problem we propose the Leaf system, the aim of which is to seamlessly introduce the pipeline formality on top of a dynamical development process with minimum overhead for the programmer, thus providing a simple layer of software structuring. Results Leaf includes a formal language for the definition of pipelines with code that can be transparently inserted into the user’s Python code. Its syntax is designed to visually highlight dependencies in the pipeline structure it defines. While encouraging the developer to think in terms of bioinformatic pipelines, Leaf supports a number of automated features including data and session persistence, consistency checks between steps of the analysis, processing optimization and publication of the analytic protocol in the form of a hypertext. Conclusions Leaf offers a powerful balance between plan-driven and change-driven development environments in the design, management and communication of bioinformatic pipelines. Its unique features make it a valuable alternative to other related tools. PMID:23786315
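The kind of pipeline formality Leaf layers onto Python code can be approximated with a dependency-aware step registry that caches results, loosely mimicking Leaf's data persistence. This toy class is not Leaf's actual syntax or feature set (Leaf defines pipelines in its own graph language embedded in Python source):

```python
class Pipeline:
    """Toy dependency-aware pipeline with in-memory result caching."""
    def __init__(self):
        self.steps, self.cache = {}, {}

    def step(self, *deps):
        """Decorator registering a function as a pipeline step with dependencies."""
        def register(fn):
            self.steps[fn.__name__] = (fn, deps)
            return fn
        return register

    def run(self, name):
        """Run a step, recursively running and caching its dependencies first."""
        if name in self.cache:
            return self.cache[name]
        fn, deps = self.steps[name]
        out = fn(*(self.run(d) for d in deps))
        self.cache[name] = out
        return out

p = Pipeline()

@p.step()
def load():
    return [3, 1, 2]

@p.step("load")
def sort_data(xs):
    return sorted(xs)

print(p.run("sort_data"))  # [1, 2, 3]
```

Making the dependency structure explicit like this is what enables the consistency checks and processing optimizations the abstract describes.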

  8. 75 FR 4136 - Pipeline Safety: Request To Modify Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-26

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2009-0377] Pipeline Safety: Request To Modify Special Permit AGENCY: Pipeline and Hazardous... coating on its gas pipeline. DATES: Submit any comments regarding this special permit modification request...

  9. Development of an Automated Imaging Pipeline for the Analysis of the Zebrafish Larval Kidney

    PubMed Central

    Westhoff, Jens H.; Giselbrecht, Stefan; Schmidts, Miriam; Schindler, Sebastian; Beales, Philip L.; Tönshoff, Burkhard; Liebel, Urban; Gehrig, Jochen

    2013-01-01

    The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom designed tool, cavities were generated in agarose coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line in which the developing pronephros is highlighted by GFP expression. The consistent image data that was acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration dependent effects of several compounds on nephrogenesis. In addition, applicability of the imaging pipeline was further confirmed in a morpholino based model for cilia-associated human genetic disorders associated with different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects in zebrafish kidney research, and can be readily adapted for the analysis of other organ systems. PMID:24324758

  10. Oil and Gas Wells and Pipelines on U.S. Wildlife Refuges: Challenges for Managers

    PubMed Central

    2015-01-01

The increased demand for oil and gas places a burden on lands set aside for natural resource conservation. Oil and gas development alters the environment locally and on a much broader spatial scale depending on the intensity and extent of mineral resource extraction. The current increase in oil and gas exploration and production in the United States prompted an update of the number of pipelines and wells associated with oil and gas production on National Wildlife Refuge System (NWRS) lands. We obtained geospatial data on the location of oil and gas wells and pipelines within and close to the boundaries of NWRS lands (units) acquired as fee simple (i.e. absolute title to the surface land) by the U.S. Fish and Wildlife Service. We found that 5,002 wells are located in 107 NWRS units and 595 pipelines transect 149 of the 599 NWRS units. Almost half of the wells (2,196) were inactive, one-third (1,665) were active, and the remainder were either plugged and abandoned or of unknown status. Pipelines crossed a total of 2,155 kilometers (1,339 miles) of NWRS fee simple lands. The high level of oil and gas activity warrants follow-up assessments for wells lacking information on production type or well status, with emphasis on verifying the well status and identifying abandoned and unplugged wells. NWRS fee simple lands should also be assessed for impacts from brine, oil and other hydrocarbon spills, as well as habitat alteration associated with oil and gas development, including the identification of abandoned oil and gas facilities requiring equipment removal and site restoration. PMID:25915417
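The reported well-status fractions can be checked with back-of-the-envelope arithmetic using only the numbers in the abstract:

```python
total_wells = 5002
inactive, active = 2196, 1665
other = total_wells - inactive - active     # plugged and abandoned, or unknown

print(round(inactive / total_wells, 3))  # 0.439 -> "almost half"
print(round(active / total_wells, 3))    # 0.333 -> "one-third"
print(other)                             # 1141
```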

  11. Development of an automated imaging pipeline for the analysis of the zebrafish larval kidney.

    PubMed

    Westhoff, Jens H; Giselbrecht, Stefan; Schmidts, Miriam; Schindler, Sebastian; Beales, Philip L; Tönshoff, Burkhard; Liebel, Urban; Gehrig, Jochen

    2013-01-01

    The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom designed tool, cavities were generated in agarose coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line in which the developing pronephros is highlighted by GFP expression. The consistent image data that was acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration dependent effects of several compounds on nephrogenesis. In addition, applicability of the imaging pipeline was further confirmed in a morpholino based model for cilia-associated human genetic disorders associated with different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects in zebrafish kidney research, and can be readily adapted for the analysis of other organ systems.

  12. Targeting the tumor microenvironment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenny, P.A.; Lee, G.Y.; Bissell, M.J.

    2006-11-07

Despite some notable successes, cancer remains, for the most part, a seemingly intractable problem. There is, however, a growing appreciation that targeting the tumor epithelium in isolation is not sufficient, as there is an intricate, mutually sustaining synergy between the tumor epithelial cells and their surrounding stroma. As the details of this dialogue emerge, new therapeutic targets have been proposed. The FDA has already approved drugs targeting microenvironmental components such as VEGF and aromatase, and many more agents are in the pipeline. In this article, we describe some of the 'druggable' targets and processes within the tumor microenvironment and review the approaches being taken to disrupt these interactions.

  13. 33 CFR 88.15 - Lights on dredge pipelines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Lights on dredge pipelines. 88.15... NAVIGATION RULES ANNEX V: PILOT RULES § 88.15 Lights on dredge pipelines. Dredge pipelines that are floating or supported on trestles shall display the following lights at night and in periods of restricted...

  14. 33 CFR 88.15 - Lights on dredge pipelines.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Lights on dredge pipelines. 88.15... NAVIGATION RULES ANNEX V: PILOT RULES § 88.15 Lights on dredge pipelines. Dredge pipelines that are floating or supported on trestles shall display the following lights at night and in periods of restricted...

  15. 33 CFR 88.15 - Lights on dredge pipelines.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Lights on dredge pipelines. 88.15... NAVIGATION RULES ANNEX V: PILOT RULES § 88.15 Lights on dredge pipelines. Dredge pipelines that are floating or supported on trestles shall display the following lights at night and in periods of restricted...

  16. 33 CFR 88.15 - Lights on dredge pipelines.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Lights on dredge pipelines. 88.15... NAVIGATION RULES ANNEX V: PILOT RULES § 88.15 Lights on dredge pipelines. Dredge pipelines that are floating or supported on trestles shall display the following lights at night and in periods of restricted...

  17. 33 CFR 88.15 - Lights on dredge pipelines.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Lights on dredge pipelines. 88.15... NAVIGATION RULES ANNEX V: PILOT RULES § 88.15 Lights on dredge pipelines. Dredge pipelines that are floating or supported on trestles shall display the following lights at night and in periods of restricted...

  18. Use of geographic information systems for applications on gas pipeline rights-of-way

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sydelko, P.J.; Wilkey, P.L.

    1992-12-01

Geographic information system (GIS) applications for the siting and monitoring of gas pipeline rights-of-way (ROWs) were developed for areas near Rio Vista, California. The data layers developed for this project represent geographic features such as landcover, elevation, aspect, slope, soils, hydrography, transportation, endangered species, wetlands, and public land surveys. A GIS was used to develop and store spatial data from several sources; to manipulate spatial data to evaluate environmental and engineering issues associated with the siting, permitting, construction, maintenance, and monitoring of gas pipeline ROWs; and to graphically display analysis results. Examples of these applications include (1) determination of environmentally sensitive areas, such as endangered species habitat, wetlands, and areas of highly erosive soils; (2) evaluation of engineering constraints, including shallow depth to bedrock, major hydrographic features, and shallow water table; (3) classification of satellite imagery for land use/landcover that will affect ROWs; and (4) identification of alternative ROW corridors that avoid environmentally sensitive areas or areas with severe engineering constraints.
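The overlay logic behind applications (1) and (4), excluding any cell flagged by at least one constraint layer, can be sketched on a toy raster. The layer contents below are invented, and a real GIS works with georeferenced rasters or vector polygons rather than bare lists:

```python
# Toy raster overlay: a cell is excluded from candidate ROW corridors if any
# constraint layer (wetland, endangered-species habitat, shallow bedrock)
# flags it with a 1.
wetland = [[0, 1], [0, 0]]
habitat = [[0, 0], [1, 0]]
bedrock = [[0, 0], [0, 1]]

def overlay_exclude(*layers):
    """Cell-wise logical OR of binary constraint rasters."""
    rows, cols = len(layers[0]), len(layers[0][0])
    return [[int(any(layer[r][c] for layer in layers)) for c in range(cols)]
            for r in range(rows)]

print(overlay_exclude(wetland, habitat, bedrock))  # [[0, 1], [1, 1]]
```

Only the unflagged cell remains available for corridor routing; inverting the mask gives the candidate area.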

  19. Use of geographic information systems for applications on gas pipeline rights-of-way

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, P.J.

    1991-12-01

Geographic information system (GIS) applications for the siting and monitoring of gas pipeline rights-of-way (ROWs) were developed for areas near Rio Vista, California. The data layers developed for this project represent geographic features such as landcover, elevation, aspect, slope, soils, hydrography, transportation, endangered species, wetlands, and public land surveys. A GIS was used to develop and store spatial data from several sources; to manipulate spatial data to evaluate environmental and engineering issues associated with the siting, permitting, construction, maintenance, and monitoring of gas pipeline ROWs; and to graphically display analysis results. Examples of these applications include (1) determination of environmentally sensitive areas, such as endangered species habitat, wetlands, and areas of highly erosive soils; (2) evaluation of engineering constraints, including shallow depth to bedrock, major hydrographic features, and shallow water table; (3) classification of satellite imagery for land use/landcover that will affect ROWs; and (4) identification of alternative ROW corridors that avoid environmentally sensitive areas or areas with severe engineering constraints.

  20. Use of geographic information systems for applications on gas pipeline rights-of-way

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sydelko, P.J.

    1993-10-01

Geographic information system (GIS) applications for the siting and monitoring of gas pipeline rights-of-way (ROWs) were developed for areas near Rio Vista, California. The data layers developed for this project represent geographic features such as landcover, elevation, aspect, slope, soils, hydrography, transportation, endangered species, wetlands, and public land surveys. A GIS was used to develop and store spatial data from several sources; to manipulate spatial data to evaluate environmental and engineering issues associated with the siting, permitting, construction, maintenance, and monitoring of gas pipeline ROWs; and to graphically display analysis results. Examples of these applications include (1) determination of environmentally sensitive areas, such as endangered species habitat, wetlands, and areas of highly erosive soils; (2) evaluation of engineering constraints, including shallow depth to bedrock, major hydrographic features, and shallow water table; (3) classification of satellite imagery for land use/landcover that will affect ROWs; and (4) identification of alternative ROW corridors that avoid environmentally sensitive areas or areas with severe engineering constraints.

  1. Use of geographic information systems for applications on gas pipeline rights-of-way

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sydelko, P.J.; Wilkey, P.L.

    1992-01-01

Geographic information system (GIS) applications for the siting and monitoring of gas pipeline rights-of-way (ROWs) were developed for areas near Rio Vista, California. The data layers developed for this project represent geographic features such as landcover, elevation, aspect, slope, soils, hydrography, transportation, endangered species, wetlands, and public land surveys. A GIS was used to develop and store spatial data from several sources; to manipulate spatial data to evaluate environmental and engineering issues associated with the siting, permitting, construction, maintenance, and monitoring of gas pipeline ROWs; and to graphically display analysis results. Examples of these applications include (1) determination of environmentally sensitive areas, such as endangered species habitat, wetlands, and areas of highly erosive soils; (2) evaluation of engineering constraints, including shallow depth to bedrock, major hydrographic features, and shallow water table; (3) classification of satellite imagery for land use/landcover that will affect ROWs; and (4) identification of alternative ROW corridors that avoid environmentally sensitive areas or areas with severe engineering constraints.

  2. 77 FR 7572 - Alliance Pipeline L.P.; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-13

    ...] Alliance Pipeline L.P.; Notice of Application Take notice that on January 25, 2012, Alliance Pipeline L.P... Pipeline Inc., Managing General Partner of Alliance Pipeline L.P., 800, 605--5 Ave. SW., Calgary, Alberta...; 8:45 am] BILLING CODE 6717-01-P ...

  3. 76 FR 54531 - Pipeline Safety: Potential for Damage to Pipeline Facilities Caused by the Passage of Hurricanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-01

    ... production and processing is prone to disruption by hurricanes. In 2005, Hurricanes Katrina and Rita caused... Hurricanes AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Notice... the passage of Hurricanes. ADDRESSES: This document can be viewed on the Office of Pipeline Safety...

  4. In Silico Identification of Candidate Genes for Fertility Restoration in Cytoplasmic Male Sterile Perennial Ryegrass (Lolium perenne L.)

    PubMed Central

    Sykes, Timothy; Yates, Steven; Nagy, Istvan; Asp, Torben; Small, Ian

    2017-01-01

    Perennial ryegrass (Lolium perenne L.) is widely used for forage production in both permanent and temporary grassland systems. To increase yields in perennial ryegrass, recent breeding efforts have been focused on strategies to more efficiently exploit heterosis by hybrid breeding. Cytoplasmic male sterility (CMS) is a widely applied mechanism to control pollination for commercial hybrid seed production and although CMS systems have been identified in perennial ryegrass, they are yet to be fully characterized. Here, we present a bioinformatics pipeline for efficient identification of candidate restorer of fertility (Rf) genes for CMS. From a high-quality draft of the perennial ryegrass genome, 373 pentatricopeptide repeat (PPR) genes were identified and classified, further identifying 25 restorer of fertility-like PPR (RFL) genes through a combination of DNA sequence clustering and comparison to known Rf genes. This extensive gene family was targeted as the majority of Rf genes in higher plants are RFL genes. These RFL genes were further investigated by phylogenetic analyses, identifying three groups of perennial ryegrass RFLs. These three groups likely represent genomic regions of active RFL generation and identify the probable location of perennial ryegrass PPR-Rf genes. This pipeline allows for the identification of candidate PPR-Rf genes from genomic sequence data and can be used in any plant species. Functional markers for PPR-Rf genes will facilitate map-based cloning of Rf genes and enable the use of CMS as an efficient tool to control pollination for hybrid crop production. PMID:26951780

  5. CFD analysis of onshore oil pipelines in permafrost

    NASA Astrophysics Data System (ADS)

    Nardecchia, Fabio; Gugliermetti, Luca; Gugliermetti, Franco

    2017-07-01

Underground pipelines are built all over the world, and knowledge of their thermal interaction with the soil is crucial for their design. This paper studies the "thermal influenced zone" produced by a buried pipeline and the parameters that can influence its extension, using 2D steady-state CFD simulations, with the aim of improving the design of new pipelines in permafrost. To represent a real case, the study refers to the Eastern Siberia-Pacific Ocean Oil Pipeline at the three stations of Mo'he, Jiagedaqi and Qiqi'har. Different burial depths and diameters of the pipe are analyzed; the simulation results show that the effect of the oil pipeline diameter on the thermal field increases with distance from the starting station.
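The "thermal influenced zone" concept can be illustrated with a minimal steady-state conduction sketch: a warm pipe cell held at a fixed temperature inside cold soil, relaxed to equilibrium, with the influenced zone taken as the region where the temperature rises above a threshold. Grid size, temperatures and threshold are illustrative assumptions, not values from the paper, and a real CFD model includes soil properties, phase change and convection:

```python
# Jacobi relaxation of the 2D Laplace equation on a toy soil cross-section.
N, T_PIPE, T_SOIL = 21, 40.0, -2.0
grid = [[T_SOIL] * N for _ in range(N)]   # boundary cells stay at soil temperature
pipe = (N // 2, N // 2)
grid[pipe[0]][pipe[1]] = T_PIPE

for _ in range(2000):                      # iterate to (near) steady state
    new = [row[:] for row in grid]
    for r in range(1, N - 1):
        for c in range(1, N - 1):
            new[r][c] = (grid[r - 1][c] + grid[r + 1][c] +
                         grid[r][c - 1] + grid[r][c + 1]) / 4.0
    new[pipe[0]][pipe[1]] = T_PIPE         # pipe wall held at oil temperature
    grid = new

# Cells warmed above 0 degC approximate the thermal influenced zone
influenced = sum(1 for row in grid for t in row if t > 0.0)
print(influenced > 1)  # True: warming spreads well beyond the pipe cell
```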

  6. TipMT: Identification of PCR-based taxon-specific markers.

    PubMed

    Rodrigues-Luiz, Gabriela F; Cardoso, Mariana S; Valdivia, Hugo O; Ayala, Edward V; Gontijo, Célia M F; Rodrigues, Thiago de S; Fujiwara, Ricardo T; Lopes, Robson S; Bartholomeu, Daniella C

    2017-02-11

Molecular genetic markers are among the most informative and widely used genome features in clinical and environmental diagnostic studies. A polymerase chain reaction (PCR)-based molecular marker is very attractive because it is suitable for high-throughput automation and confers high specificity. However, the design of taxon-specific primers may be difficult and time-consuming due to the need to identify appropriate genomic regions for annealing primers and to evaluate primer specificity. Here, we report the development of the Tool for Identification of Primers for Multiple Taxa (TipMT), a web application to search for and design primers for genotyping based on genomic data. The tool identifies and targets simple sequence repeats (SSRs) or orthologous/taxon-specific genes for genotyping using multiplex PCR. This pipeline was applied to the genomes of four species of Leishmania (L. amazonensis, L. braziliensis, L. infantum and L. major) and validated by PCR using artificial genomic DNA mixtures of the Leishmania species as templates. This experimental validation demonstrates the reliability of TipMT, as the amplification profiles discriminated genomic DNA samples from the Leishmania species. The TipMT web tool allows large-scale identification and design of taxon-specific primers and is freely available to the scientific community at http://200.131.37.155/tipMT/.
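The taxon-specificity requirement can be caricatured as an exact-substring test: a candidate primer must occur in the target genome and in none of the off-target genomes. TipMT's real screen also evaluates melting temperature, GC content and near-matches that still allow annealing; the sequences below are invented:

```python
def is_taxon_specific(primer, target_genome, off_target_genomes):
    """Exact-match screen only: present in the target, absent elsewhere."""
    return primer in target_genome and all(
        primer not in g for g in off_target_genomes)

# Hypothetical genome fragments for one target taxon and two off-targets
target = "ATGCCGTACGTTAGC"
others = ["ATGCCGTTTTTTAGC", "GGGCCGTACGAAAAA"]

print(is_taxon_specific("TACGTT", target, others))  # True
print(is_taxon_specific("ATGCCG", target, others))  # False: hits an off-target
```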

  7. The AIRS Applications Pipeline, from Identification to Visualization to Distribution

    NASA Astrophysics Data System (ADS)

    Ray, S. E.; Pagano, T. S.; Fetzer, E. J.; Lambrigtsen, B.; Teixeira, J.

    2014-12-01

The Atmospheric Infrared Sounder (AIRS) on NASA's Aqua spacecraft has been returning daily global observations of Earth's atmospheric constituents and properties since 2002. AIRS provides observations of temperature and water vapor along the atmospheric column and is sensitive to many atmospheric constituents in the mid-troposphere, including carbon monoxide, carbon dioxide and ozone. With a 12-year data record and daily, global observations in near real-time, we are finding that AIRS data can play a role in applications that fall under most of the NASA Applied Sciences focus areas. Currently in development are temperature inversion maps that can potentially correlate with respiratory health problems, dengue fever and West Nile virus outbreak prediction maps, maps that can be used to assess air quality, and maps of volcanic ash burden. This poster will communicate the Project's approach to its applications pipeline and its efforts to date, which include identifying applications, utilizing science expertise, hiring outside experts to assist with development and dissemination, visualization along application themes, and leveraging existing NASA data frameworks and organizations to facilitate archiving and distribution. In addition, a new web-based browse tool being developed by the AIRS Project for easy access to application product imagery will also be described.

  8. Insights into Integrated Lead Generation and Target Identification in Malaria and Tuberculosis Drug Discovery

    PubMed Central

    2017-01-01

Conspectus New, safe and effective drugs are urgently needed to treat and control malaria and tuberculosis, which affect millions of people annually. However, financial return on investment in the poor settings where these diseases are mostly prevalent is too minimal to support market-driven drug discovery and development. Moreover, the imminent loss of therapeutic lifespan of existing therapies due to evolution and spread of drug resistance further compounds the urgency to identify novel effective drugs. Nevertheless, the advent of new public–private partnerships focused on tropical diseases and the recent release of large data sets by pharmaceutical companies on antimalarial and antituberculosis compounds derived from phenotypic whole-cell high-throughput screening have spurred renewed interest and opened new frontiers in malaria and tuberculosis drug discovery. This Account recaps the existing challenges facing antimalarial and antituberculosis drug discovery, including limitations associated with experimental animal models as well as biological complexities intrinsic to the causative pathogens. We present various highlights from a body of work within our research group aimed at identifying and characterizing new chemical leads, and navigating these challenges to contribute toward the global drug discovery and development pipeline in malaria and tuberculosis. We describe a catalogue of in-house efforts toward deriving safe and efficacious preclinical drug development candidates via cell-based medicinal chemistry optimization of phenotypic whole-cell medium and high throughput screening hits sourced from various small molecule chemical libraries. We also provide an appraisal of target-based screening, as invoked in our laboratory for mechanistic evaluation of the hits generated, with particular focus on the enzymes within the de novo pyrimidine biosynthetic and hemoglobin degradation pathways, the latter constituting a heme detoxification process and an associated

  9. Time-Distance Helioseismology Data-Analysis Pipeline for Helioseismic and Magnetic Imager Onboard Solar Dynamics Observatory (SDO-HMI) and Its Initial Results

    NASA Technical Reports Server (NTRS)

    Zhao, J.; Couvidat, S.; Bogart, R. S.; Parchevsky, K. V.; Birch, A. C.; Duvall, Thomas L., Jr.; Beck, J. G.; Kosovichev, A. G.; Scherrer, P. H.

    2011-01-01

    The Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory (SDO/HMI) provides continuous full-disk observations of solar oscillations. We develop a data-analysis pipeline based on the time-distance helioseismology method to measure acoustic travel times using HMI Doppler-shift observations, and infer solar interior properties by inverting these measurements. The pipeline is used for routine production of near-real-time full-disk maps of subsurface wave-speed perturbations and horizontal flow velocities for depths ranging from 0 to 20 Mm, every eight hours. In addition, Carrington synoptic maps for the subsurface properties are made from these full-disk maps. The pipeline can also be used for selected target areas and time periods. We explain details of the pipeline organization and procedures, including processing of the HMI Doppler observations, measurements of the travel times, inversions, and constructions of the full-disk and synoptic maps. Some initial results from the pipeline, including full-disk flow maps, sunspot subsurface flow fields, and the interior rotation and meridional flow speeds, are presented.
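Acoustic travel-time measurement can be sketched as finding the lag that maximizes the cross-correlation between oscillation signals observed at two surface points. This brute-force lag scan is a stand-in for the pipeline's actual fitting procedures, and the signals below are synthetic:

```python
import math

def travel_time(sig_a, sig_b, dt):
    """Lag (in time units) at which sig_b best matches a delayed sig_a,
    found by scanning the un-normalized cross-correlation."""
    n = len(sig_a)
    best_lag, best_score = 0, float("-inf")
    for lag in range(n):
        score = sum(sig_a[i] * sig_b[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag * dt

dt = 0.1
wave = [math.sin(2 * math.pi * t * dt) for t in range(100)]
delayed = [0.0] * 5 + wave[:-5]            # same wave arriving 5 samples later
print(travel_time(wave, delayed, dt))      # 0.5
```

Inverting many such travel times between surface points is what yields the subsurface wave-speed and flow maps the pipeline produces.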

  10. Pipeline Expansions

    EIA Publications

    1999-01-01

    This appendix examines the nature and type of proposed pipeline projects announced or approved for construction during the next several years in the United States. It also includes those projects in Canada and Mexico that tie-in with the U.S. markets or projects.

  11. VizieR Online Data Catalog: Kepler pipeline transit signal recovery. III. (Christiansen+, 2016)

    NASA Astrophysics Data System (ADS)

    Christiansen, J. L.; Clarke, B. D.; Burke, C. J.; Jenkins, J. M.; Bryson, S. T.; Coughlin, J. L.; Mullally, F.; Thompson, S. E.; Twicken, J. D.; Batalha, N. M.; Haas, M. R.; Catanzarite, J.; Campbell, J. R.; Uddin, A. K.; Zamudio, K.; Smith, J. C.; Henze, C. E.

    2018-03-01

    Here we describe the third transit injection experiment, which tests the entire Kepler observing baseline (Q1-Q17) for the first time across all 84 CCD channels. It was performed to measure the sensitivity of the Kepler pipeline used to generate the Q1-Q17 Data Release 24 (DR24) catalog of Kepler Objects of Interest (Coughlin et al. 2016, J/ApJS/224/12) available at the NASA Exoplanet Archive (Akeson et al. 2013PASP..125..989A). The average detection efficiency describes the likelihood that the Kepler pipeline would successfully recover a given transit signal. To measure this property we perform a Monte Carlo experiment where we inject the signatures of simulated transiting planets around 198154 target stars, one per star, across the focal plane starting with the Q1-Q17 DR24 calibrated pixels. The simulated transits are generated using the Mandel & Agol (2002ApJ...580L.171M) model. Of the injections, 159013 resulted in three or more injected transits (the minimum required for detection by the pipeline) and were used for the subsequent analysis. (1 data file).
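
    The injection-and-recovery bookkeeping behind a detection-efficiency measurement like the one above can be sketched as follows: inject synthetic signals of known strength, record whether the pipeline recovered each one, and report the recovered fraction per signal-strength bin. The bin edges, SNR values, and recovery flags below are illustrative placeholders, not Kepler data.

    ```python
    from collections import defaultdict

    def detection_efficiency(injections, bin_edges):
        """injections: list of (expected_snr, recovered_bool).
        Returns {(lo, hi): recovered_fraction} over the given SNR bins."""
        counts = defaultdict(lambda: [0, 0])  # bin -> [recovered, total]
        for snr, recovered in injections:
            for lo, hi in zip(bin_edges, bin_edges[1:]):
                if lo <= snr < hi:
                    counts[(lo, hi)][1] += 1
                    counts[(lo, hi)][0] += recovered
                    break
        return {b: rec / tot for b, (rec, tot) in counts.items()}

    # Toy experiment: weak signals are rarely recovered, strong ones nearly always.
    injected = [(5.0, False), (6.0, False), (7.5, True), (8.0, True),
                (12.0, True), (15.0, True), (14.0, True), (6.5, True)]
    eff = detection_efficiency(injected, [0, 7.1, 10, 20])
    print(eff)
    ```

    Averaging such per-bin fractions over the stellar sample is what yields the detection-efficiency curves used to correct planet occurrence rates.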

  12. 78 FR 24309 - Pipeline and Hazardous Materials Safety Administration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-24

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration List of Special Permit Applications Delayed AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA..., Pipeline and Hazardous Materials Safety Administration, U.S. Department of Transportation, East Building...

  13. Short Communication: Analysis of Minor Populations of Human Immunodeficiency Virus by Primer Identification and Insertion-Deletion and Carry Forward Correction Pipelines.

    PubMed

    Hughes, Paul; Deng, Wenjie; Olson, Scott C; Coombs, Robert W; Chung, Michael H; Frenkel, Lisa M

    2016-03-01

    Accurate analysis of minor populations of drug-resistant HIV requires analysis of a sufficient number of viral templates. We assessed the effect of experimental conditions on the analysis of HIV pol 454 pyrosequences generated from plasma using (1) the "Insertion-deletion (indel) and Carry Forward Correction" (ICC) pipeline, which clusters sequence reads using a nonsubstitution approach and can correct for indels and carry forward errors, and (2) the "Primer Identification (ID)" method, which facilitates construction of a consensus sequence to correct for sequencing errors and allelic skewing. The Primer ID and ICC methods produced similar estimates of viral diversity, but differed in the number of sequence variants generated. Sequence preparation for ICC was comparably simple, but was limited by an inability to assess the number of templates analyzed and allelic skewing. The more costly Primer ID method corrected for allelic skewing and provided the number of viral templates analyzed, which revealed that amplifiable HIV templates varied across specimens and did not correlate with clinical viral load. This latter observation highlights the value of the Primer ID method, which by determining the number of templates amplified, enables more accurate assessment of minority species in the virus population, which may be relevant to prescribing effective antiretroviral therapy.
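
    The Primer ID idea described above can be sketched in a few lines: reads carrying the same random primer identifier derive from one template, so a per-ID majority-vote consensus corrects sequencing errors, and the number of distinct IDs estimates how many templates were actually sampled. The tags and reads below are invented for illustration.

    ```python
    from collections import Counter, defaultdict

    def primer_id_consensus(tagged_reads):
        """tagged_reads: iterable of (primer_id, read), equal-length reads per ID.
        Returns {primer_id: consensus_sequence}; its len() = templates sampled."""
        groups = defaultdict(list)
        for pid, read in tagged_reads:
            groups[pid].append(read)
        consensus = {}
        for pid, reads in groups.items():
            cols = zip(*reads)  # transpose into per-position columns
            consensus[pid] = "".join(Counter(col).most_common(1)[0][0]
                                     for col in cols)
        return consensus

    reads = [("AAGT", "ACGTAC"), ("AAGT", "ACGTAC"), ("AAGT", "ACGTAT"),  # 1 error
             ("CCTA", "TTGACA"), ("CCTA", "TTGACA")]
    cons = primer_id_consensus(reads)
    print(cons["AAGT"], len(cons))  # error-corrected consensus; 2 templates
    ```

    Counting templates this way is what lets the method flag specimens where few amplifiable templates were present despite a high clinical viral load.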

  14. Numerical Simulation of Pipeline Deformation Caused by Rockfall Impact

    PubMed Central

    Liang, Zheng; Han, Chuanjun

    2014-01-01

    Rockfall impact is one of the fatal hazards in pipeline transportation of oil and gas. In this paper, the deformation of oil and gas pipelines caused by rockfall impact was investigated using the finite element method. Pipeline deformations under radial impact, longitudinal inclined impact, transverse inclined impact, and lateral eccentric impact of spherical and cubic rockfalls were discussed. The effects of impact angle and eccentricity on the plastic strain of the pipeline were analyzed. The results show that the crater on the pipeline caused by spherical rockfall impact is deeper than that caused by cubic rockfall impact of the same volume. Under inclined impact, the maximum plastic strain of the crater caused by spherical rockfall impact appears when the incidence angle α is 45°. The pipeline is prone to rupture under cubic rockfall impact when α is small. The plastic strain distribution of the impact crater becomes more uneven as the impact angle increases. Under eccentric impact, the plastic strain zone of the pipeline shrinks as the eccentricity k increases. PMID:24959599

  15. Changes in the Pipeline Transportation Market

    EIA Publications

    1999-01-01

    This analysis assesses the amount of capacity that may be turned back to pipeline companies, based on shippers' actions over the past several years and the profile of contracts in place as of July 1, 1998. It also examines changes in the characteristics of contracts between shippers and pipeline companies.

  16. 77 FR 61826 - Pipeline Safety: Communication During Emergency Situations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-11

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... liquefied natural gas pipeline facilities that operators should immediately and directly notify the Public.... Background Federal regulations for gas, liquefied natural gas (LNG), and hazardous liquid pipeline facilities...

  17. From benchmarking HITS-CLIP peak detection programs to a new method for identification of miRNA-binding sites from Ago2-CLIP data.

    PubMed

    Bottini, Silvia; Hamouda-Tekaya, Nedra; Tanasa, Bogdan; Zaragosi, Laure-Emmanuelle; Grandjean, Valerie; Repetto, Emanuela; Trabucchi, Michele

    2017-05-19

    Experimental evidence indicates that about 60% of miRNA-binding activity does not follow the canonical rule of seed matching between the miRNA and its target mRNAs, but instead reflects non-canonical targeting activity outside the seed or through seed-like motifs. Here, we propose a new unbiased method to identify canonical and non-canonical miRNA-binding sites from peaks identified by Ago2 Cross-Linked ImmunoPrecipitation associated with high-throughput sequencing (CLIP-seq). Since the quality of peaks is of pivotal importance for the final output of the proposed method, we provide a comprehensive benchmarking of four peak detection programs, namely CIMS, PIPE-CLIP, Piranha and Pyicoclip, on four publicly available Ago2-HITS-CLIP datasets and one unpublished in-house Ago2 dataset in stem cells. We measured the sensitivity, the specificity and the positional accuracy of miRNA-binding-site identification, and the agreement with TargetScan. Second, we developed a new pipeline, called miRBShunter, to identify canonical and non-canonical miRNA-binding sites based on de novo motif identification from Ago2 peaks and prediction of miRNA::RNA heteroduplexes. miRBShunter was tested and experimentally validated on the in-house Ago2 dataset and on an Ago2-PAR-CLIP dataset in human stem cells. Overall, we provide guidelines for choosing a suitable peak detection program and a new method for miRNA-target identification. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
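
    The kind of benchmark applied to the peak callers above can be sketched by scoring predicted peaks against known binding-site intervals, counting a prediction as a true positive if it overlaps any truth interval. This is a hedged toy in plain Python with invented coordinates, not the authors' evaluation code.

    ```python
    def overlaps(a, b):
        """Half-open genomic intervals (start, end) overlap test."""
        return a[0] < b[1] and b[0] < a[1]

    def benchmark(predicted, truth):
        """Precision over predictions and sensitivity over truth intervals."""
        tp = sum(any(overlaps(p, t) for t in truth) for p in predicted)
        recovered = sum(any(overlaps(t, p) for p in predicted) for t in truth)
        precision = tp / len(predicted) if predicted else 0.0
        sensitivity = recovered / len(truth) if truth else 0.0
        return precision, sensitivity

    truth = [(100, 120), (300, 330), (500, 540)]
    predicted = [(105, 118), (290, 310), (700, 720)]  # last is a false positive
    print(benchmark(predicted, truth))  # precision 2/3, sensitivity 2/3
    ```

    Positional accuracy, the third metric in the benchmark, would additionally compare each matched peak's summit to the center of its truth interval.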

  18. Next Generation Sequence Analysis and Computational Genomics Using Graphical Pipeline Workflows

    PubMed Central

    Torri, Federica; Dinov, Ivo D.; Zamanyan, Alen; Hobel, Sam; Genco, Alex; Petrosyan, Petros; Clark, Andrew P.; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Knowles, James A.; Ames, Joseph; Kesselman, Carl; Toga, Arthur W.; Potkin, Steven G.; Vawter, Marquis P.; Macciardi, Fabio

    2012-01-01

    Whole-genome and exome sequencing have already proven to be essential and powerful methods to identify genes responsible for simple Mendelian inherited disorders. These methods can be applied to complex disorders as well, and have been adopted as one of the current mainstream approaches in population genetics. These achievements have been made possible by next generation sequencing (NGS) technologies, which require substantial bioinformatics resources to analyze the dense and complex sequence data. The huge analytical burden of data from genome sequencing might be seen as a bottleneck slowing the publication of NGS papers at this time, especially in psychiatric genetics. We review the existing methods for processing NGS data, to place into context the rationale for the design of a computational resource. We describe our method, the Graphical Pipeline for Computational Genomics (GPCG), to perform the computational steps required to analyze NGS data. The GPCG implements flexible workflows for basic sequence alignment, sequence data quality control, single nucleotide polymorphism analysis, copy number variant identification, annotation, and visualization of results. These workflows cover all the analytical steps required for NGS data, from processing the raw reads to variant calling and annotation. The current version of the pipeline is freely available at http://pipeline.loni.ucla.edu. These applications of NGS analysis may gain clinical utility in the near future (e.g., identifying miRNA signatures in diseases) when the bioinformatics approach is made feasible. Taken together, the annotation tools and strategies that have been developed to retrieve information and test hypotheses about the functional role of variants present in the human genome will help to pinpoint the genetic risk factors for psychiatric disorders. PMID:23139896

  19. Drug target identification using network analysis: Taking active components in Sini decoction as an example

    NASA Astrophysics Data System (ADS)

    Chen, Si; Jiang, Hailong; Cao, Yan; Wang, Yun; Hu, Ziheng; Zhu, Zhenyu; Chai, Yifeng

    2016-04-01

    Simultaneously identifying the molecular targets responsible for the beneficial effects of active small-molecule compounds is an important and currently unmet challenge. In this study, we propose a network analysis approach that integrates data from network pharmacology and metabolomics to simultaneously identify the targets of the active components of Sini decoction (SND) against heart failure. First, 48 potential active components of SND against heart failure were predicted by serum pharmacochemistry, text mining and similarity matching. We then employed network pharmacology, including text mining and molecular docking, to identify the potential targets of these components. The key enriched processes, pathways and diseases related to these target proteins were analyzed with the STRING database. Finally, network analysis was conducted to identify the most probable targets of the components in SND. Among the 25 targets predicted by network analysis, tumor necrosis factor α (TNF-α) was the first to be experimentally validated at the molecular and cellular levels. Results indicated that hypaconitine, mesaconitine, higenamine and quercetin in SND can directly bind to TNF-α, reduce TNF-α-mediated cytotoxicity in L929 cells and exert anti-apoptotic effects on myocardial cells. We envisage that network analysis will also be useful for target identification of other bioactive compounds.

  20. Drug target identification using network analysis: Taking active components in Sini decoction as an example

    PubMed Central

    Chen, Si; Jiang, Hailong; Cao, Yan; Wang, Yun; Hu, Ziheng; Zhu, Zhenyu; Chai, Yifeng

    2016-01-01

    Simultaneously identifying the molecular targets responsible for the beneficial effects of active small-molecule compounds is an important and currently unmet challenge. In this study, we propose a network analysis approach that integrates data from network pharmacology and metabolomics to simultaneously identify the targets of the active components of Sini decoction (SND) against heart failure. First, 48 potential active components of SND against heart failure were predicted by serum pharmacochemistry, text mining and similarity matching. We then employed network pharmacology, including text mining and molecular docking, to identify the potential targets of these components. The key enriched processes, pathways and diseases related to these target proteins were analyzed with the STRING database. Finally, network analysis was conducted to identify the most probable targets of the components in SND. Among the 25 targets predicted by network analysis, tumor necrosis factor α (TNF-α) was the first to be experimentally validated at the molecular and cellular levels. Results indicated that hypaconitine, mesaconitine, higenamine and quercetin in SND can directly bind to TNF-α, reduce TNF-α-mediated cytotoxicity in L929 cells and exert anti-apoptotic effects on myocardial cells. We envisage that network analysis will also be useful for target identification of other bioactive compounds. PMID:27095146
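
    The evidence-aggregation step in a network-analysis target ranking can be illustrated with a toy scorer: normalize each evidence type (here a docking score, where lower is better, and a text-mining co-occurrence count, where higher is better) and combine them into one ranking. This is an illustrative sketch under assumed evidence types and weights, not the authors' algorithm; the gene names and numbers are placeholders.

    ```python
    def rank_targets(evidence):
        """evidence: {target: {'docking': kcal/mol (lower = better),
                               'textmining': count (higher = better)}}"""
        def normalize(values, invert=False):
            lo, hi = min(values.values()), max(values.values())
            span = (hi - lo) or 1.0
            return {k: ((hi - v) if invert else (v - lo)) / span
                    for k, v in values.items()}

        dock = normalize({t: e['docking'] for t, e in evidence.items()},
                         invert=True)
        text = normalize({t: e['textmining'] for t, e in evidence.items()})
        score = {t: 0.5 * dock[t] + 0.5 * text[t] for t in evidence}
        return sorted(score, key=score.get, reverse=True)

    ev = {'TNF-alpha': {'docking': -9.1, 'textmining': 42},
          'COX2':      {'docking': -6.0, 'textmining': 10},
          'ACE':       {'docking': -7.5, 'textmining': 3}}
    print(rank_targets(ev))  # TNF-alpha ranks first on both evidence types
    ```

    The equal 0.5/0.5 weighting is an arbitrary assumption; any real pipeline would tune or learn the weights.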

  1. Sludge pipeline design.

    PubMed

    Slatter, P T

    2001-01-01

    The need for the design engineer to have a sound basis for designing sludge pumping and pipelining plant is becoming more critical. This paper examines both a traditional textbook approach and one of the latest approaches from the literature, and compares them with experimental data. The pipelining problem can be divided into the following main areas: rheological characterisation and laminar, transitional and turbulent flow; each is addressed in turn. Experimental data for a digested sludge tested in large pipes are analysed and compared with the two theoretical approaches. Discussion centres on the differences between the two methods and their degree of agreement with the data. It is concluded that the new approach has merit and can be used for practical design.
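
    The laminar/turbulent split at the heart of such design calculations can be illustrated with a textbook Newtonian case using the Darcy-Weisbach equation; this is a hedged illustration only, since real sludge is non-Newtonian (which is precisely the paper's point), and the fluid properties below are water-like placeholders.

    ```python
    def pressure_drop(rho, mu, velocity, diameter, length):
        """Pressure drop (Pa) for Newtonian pipe flow via Darcy-Weisbach.
        Laminar: f = 64/Re; otherwise a smooth-pipe Blasius-style estimate."""
        re = rho * velocity * diameter / mu   # Reynolds number
        if re < 2100:                         # laminar regime
            f = 64.0 / re
        else:                                 # turbulent estimate (smooth pipe)
            f = 0.316 / re ** 0.25
        return f * (length / diameter) * rho * velocity ** 2 / 2.0

    # Water-like fluid, 0.2 m pipe, 100 m long, 1.5 m/s: turbulent flow.
    dp = pressure_drop(rho=1000.0, mu=1e-3, velocity=1.5,
                       diameter=0.2, length=100.0)
    print(round(dp, 1))  # pressure drop in Pa
    ```

    A sludge design method replaces both the Reynolds number and the friction-factor correlations with forms derived from the measured rheology (e.g. yield-stress models), which is where the two approaches compared in the paper diverge.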

  2. Anterior choroidal artery patency and clinical follow-up after coverage with the pipeline embolization device.

    PubMed

    Raz, E; Shapiro, M; Becske, T; Zumofen, D W; Tanweer, O; Potts, M B; Riina, H A; Nelson, P K

    2015-05-01

    Endoluminal reconstruction with the Pipeline Embolization Device is an effective treatment option for select intracranial aneurysms. However, concerns for the patency of eloquent branch arteries covered by the Pipeline Embolization Device have been raised. We aimed to examine the patency of the anterior choroidal artery and clinical sequelae after ICA aneurysm treatment. We prospectively analyzed all patients among our first 157 patients with ICA aneurysms treated by the Pipeline Embolization Device who required placement of at least 1 device across the ostium of the anterior choroidal artery. The primary outcome measure was angiographic patency of the anterior choroidal artery at last follow-up. Age, sex, type of aneurysm, neurologic examination data, number of Pipeline Embolization Devices used, relationship of the anterior choroidal artery to the aneurysm, and completeness of aneurysm occlusion on follow-up angiograms were also analyzed. Twenty-nine aneurysms requiring placement of at least 1 Pipeline Embolization Device (median = 1, range = 1-3) across the anterior choroidal artery ostium were identified. At angiographic follow-up (mean = 15.1 months; range = 12-39 months), the anterior choroidal artery remained patent, with antegrade flow in 28/29 aneurysms (96.5%), while 24/29 (82.7%) of the target aneurysms were angiographically occluded by 1-year follow-up angiography. Anterior choroidal artery occlusion, with retrograde reconstitution of the vessel, was noted in a single case. A significant correlation between the origin of the anterior choroidal artery from the aneurysm dome and failure of the aneurysms to occlude following treatment was found. After placement of 36 Pipeline Embolization Devices across 29 anterior choroidal arteries (median = 1 device, range = 1-3 devices), 1 of 29 anterior choroidal arteries was found occluded on angiographic follow-up. The vessel occlusion did not result in persistent clinical sequelae. Coverage of the anterior

  3. 49 CFR 191.27 - Filing offshore pipeline condition reports.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE; ANNUAL REPORTS, INCIDENT REPORTS, AND SAFETY-RELATED..., job title, and business telephone number of person submitting the report. (4) Total length of pipeline...

  4. 77 FR 34457 - Pipeline Safety: Mechanical Fitting Failure Reports

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-11

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... notice provides clarification to owners and operators of gas distribution pipeline facilities when... of a gas distribution pipeline facility to file a written report for any mechanical fitting failure...

  5. 25 CFR 169.25 - Oil and gas pipelines.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Oil and gas pipelines. 169.25 Section 169.25 Indians....25 Oil and gas pipelines. (a) The Act of March 11, 1904 (33 Stat. 65), as amended by the Act of March 2, 1917 (39 Stat. 973; 25 U.S.C. 321), authorizes right-of-way grants for oil and gas pipelines...

  6. 25 CFR 169.25 - Oil and gas pipelines.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false Oil and gas pipelines. 169.25 Section 169.25 Indians....25 Oil and gas pipelines. (a) The Act of March 11, 1904 (33 Stat. 65), as amended by the Act of March 2, 1917 (39 Stat. 973; 25 U.S.C. 321), authorizes right-of-way grants for oil and gas pipelines...

  7. 25 CFR 169.25 - Oil and gas pipelines.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 1 2014-04-01 2014-04-01 false Oil and gas pipelines. 169.25 Section 169.25 Indians....25 Oil and gas pipelines. (a) The Act of March 11, 1904 (33 Stat. 65), as amended by the Act of March 2, 1917 (39 Stat. 973; 25 U.S.C. 321), authorizes right-of-way grants for oil and gas pipelines...

  8. 25 CFR 169.25 - Oil and gas pipelines.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 1 2013-04-01 2013-04-01 false Oil and gas pipelines. 169.25 Section 169.25 Indians....25 Oil and gas pipelines. (a) The Act of March 11, 1904 (33 Stat. 65), as amended by the Act of March 2, 1917 (39 Stat. 973; 25 U.S.C. 321), authorizes right-of-way grants for oil and gas pipelines...

  9. 25 CFR 169.25 - Oil and gas pipelines.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 1 2012-04-01 2011-04-01 true Oil and gas pipelines. 169.25 Section 169.25 Indians....25 Oil and gas pipelines. (a) The Act of March 11, 1904 (33 Stat. 65), as amended by the Act of March 2, 1917 (39 Stat. 973; 25 U.S.C. 321), authorizes right-of-way grants for oil and gas pipelines...

  10. Disrupting the School-to-Prison Pipeline

    ERIC Educational Resources Information Center

    Bahena, Sofía, Ed.; Cooc, North, Ed.; Currie-Rubin, Rachel, Ed.; Kuttner, Paul, Ed.; Ng, Monica, Ed.

    2012-01-01

    A trenchant and wide-ranging look at this alarming national trend, "Disrupting the School-to-Prison Pipeline" is unsparing in its account of the problem while pointing in the direction of meaningful and much-needed reforms. The "school-to-prison pipeline" has received much attention in the education world over the past few…

  11. Using industry ROV videos to assess fish associations with subsea pipelines

    NASA Astrophysics Data System (ADS)

    McLean, D. L.; Partridge, J. C.; Bond, T.; Birt, M. J.; Bornt, K. R.; Langlois, T. J.

    2017-06-01

    Remote Operated Vehicles are routinely used to undertake inspection and maintenance activities of underwater pipelines in north-west Australia. In doing so, many terabytes of geo-referenced underwater video are collected at depths, and on a scale usually unobtainable for ecological research. We assessed fish diversity and abundance from existing ROV videos collected along 2-3 km sections of two pipelines in north-west Australia, one at 60-80 m water depth and the other at 120-130 m. A total of 5962 individual fish from 92 species and 42 families were observed. Both pipelines were characterised by a high abundance of commercially important fishes including: snappers (Lutjanidae) and groupers (Epinephelidae). The presence of thousands of unidentifiable larval fish, in addition to juveniles, sub-adults and adults suggests that the pipelines may be enhancing, rather than simply attracting, fish stocks. The prevalence and high complexity of sponges on the shallower pipeline and of deepwater corals on the deeper pipeline had a strong positive correlation with the fish abundance. These habitats likely offer a significant food source and refuge for fish, but also for invertebrates upon which fish feed. A greater diversity on the shallower pipeline, and a higher abundance of fishes on both pipelines, were associated with unsupported pipeline sections (spans) and many species appeared to be utilising pipeline spans as refuges. This study is a first look at the potential value of subsea pipelines for fishes on the north-west shelf. While the results suggest that these sections of pipeline appear to offer significant habitat that supports diverse and important commercially fished species, further work, including off-pipeline surveys on the natural seafloor, are required to determine conclusively the ecological value of pipelines and thereby inform discussions regarding the ecological implications of pipeline decommissioning.

  12. Designing Image Analysis Pipelines in Light Microscopy: A Rational Approach.

    PubMed

    Arganda-Carreras, Ignacio; Andrey, Philippe

    2017-01-01

    With the progress of microscopy techniques and the rapidly growing amounts of acquired imaging data, there is an increased need for automated image processing and analysis solutions in biological studies. Each new application requires the design of a specific image analysis pipeline, by assembling a series of image processing operations. Many commercial or free bioimage analysis software are now available and several textbooks and reviews have presented the mathematical and computational fundamentals of image processing and analysis. Tens, if not hundreds, of algorithms and methods have been developed and integrated into image analysis software, resulting in a combinatorial explosion of possible image processing sequences. This paper presents a general guideline methodology to rationally address the design of image processing and analysis pipelines. The originality of the proposed approach is to follow an iterative, backwards procedure from the target objectives of analysis. The proposed goal-oriented strategy should help biologists to better apprehend image analysis in the context of their research and should allow them to efficiently interact with image processing specialists.
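
    The goal-oriented idea above — specify the target measurement first, then assemble the processing chain backwards from it — can be sketched as function composition. The "images" here are trivial 2-D lists and the steps are stand-ins, not real image operators; the names are invented for illustration.

    ```python
    def compose(*steps):
        """Chain image-processing steps left to right into one pipeline."""
        def pipeline(image):
            for step in steps:
                image = step(image)
            return image
        return pipeline

    def threshold(level):
        """Binarize: pixel -> 1 if above level, else 0."""
        return lambda img: [[1 if px > level else 0 for px in row]
                            for row in img]

    def count_foreground(img):
        """The final, goal-defining measurement: number of foreground pixels."""
        return sum(sum(row) for row in img)

    # Objective: count bright objects' pixels. Working backwards: the count
    # needs a binary mask, the mask needs a threshold step. Assemble and run:
    measure = compose(threshold(128), count_foreground)
    print(measure([[200, 30], [150, 90]]))  # → 2
    ```

    Reasoning from the objective back to the operations, as the paper advocates, keeps each inserted step justified by what the downstream step requires.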

  13. The HTS barcode checker pipeline, a tool for automated detection of illegally traded species from high-throughput sequencing data.

    PubMed

    Lammers, Youri; Peelen, Tamara; Vos, Rutger A; Gravendeel, Barbara

    2014-02-06

    Mixtures of internationally traded organic substances can contain parts of species protected by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). These mixtures often raise the suspicion of border control and customs offices, which can lead to confiscation, for example in the case of Traditional Chinese medicines (TCMs). High-throughput sequencing of DNA barcoding markers obtained from such samples provides insight into species constituents of mixtures, but manual cross-referencing of results against the CITES appendices is labor intensive. Matching DNA barcodes against NCBI GenBank using BLAST may yield misleading results both as false positives, due to incorrectly annotated sequences, and false negatives, due to spurious taxonomic re-assignment. Incongruence between the taxonomies of CITES and NCBI GenBank can result in erroneous estimates of illegal trade. The HTS barcode checker pipeline is an application for automated processing of sets of 'next generation' barcode sequences to determine whether these contain DNA barcodes obtained from species listed on the CITES appendices. This analytical pipeline builds upon and extends existing open-source applications for BLAST matching against the NCBI GenBank reference database and for taxonomic name reconciliation. In a single operation, reads are converted into taxonomic identifications matched with names on the CITES appendices. By inclusion of a blacklist and additional names databases, the HTS barcode checker pipeline prevents false positives and resolves taxonomic heterogeneity. The HTS barcode checker pipeline can detect and correctly identify DNA barcodes of CITES-protected species from reads obtained from TCM samples in just a few minutes. The pipeline facilitates and improves molecular monitoring of trade in endangered species, and can aid in safeguarding these species from extinction in the wild. 
The HTS barcode checker pipeline is available at https://github.com/naturalis/HTS-barcode-checker.
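
    The pipeline's final cross-referencing step can be sketched in a few lines: taxonomic identifications from BLAST are checked against a CITES name set, after filtering out a blacklist of known mis-annotated reference accessions. All names and accessions below are made up for illustration, and this toy omits the taxonomic name reconciliation the real pipeline performs.

    ```python
    def flag_cites_hits(identifications, cites_names, blacklist):
        """identifications: list of (accession, taxon_name) per read.
        Returns the set of CITES-listed taxa detected in the sample."""
        return {taxon for acc, taxon in identifications
                if acc not in blacklist and taxon in cites_names}

    cites_names = {"Panthera tigris", "Saussurea costus"}
    blacklist = {"ACC999"}  # accession known to be mis-annotated
    hits = [("ACC001", "Panthera tigris"),
            ("ACC999", "Saussurea costus"),   # blacklisted -> ignored
            ("ACC123", "Zea mays")]           # not CITES-listed
    print(flag_cites_hits(hits, cites_names, blacklist))  # → {'Panthera tigris'}
    ```

    The blacklist is what prevents the false positives from incorrectly annotated GenBank sequences that the abstract warns about.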

  14. The HTS barcode checker pipeline, a tool for automated detection of illegally traded species from high-throughput sequencing data

    PubMed Central

    2014-01-01

    Background Mixtures of internationally traded organic substances can contain parts of species protected by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). These mixtures often raise the suspicion of border control and customs offices, which can lead to confiscation, for example in the case of Traditional Chinese medicines (TCMs). High-throughput sequencing of DNA barcoding markers obtained from such samples provides insight into species constituents of mixtures, but manual cross-referencing of results against the CITES appendices is labor intensive. Matching DNA barcodes against NCBI GenBank using BLAST may yield misleading results both as false positives, due to incorrectly annotated sequences, and false negatives, due to spurious taxonomic re-assignment. Incongruence between the taxonomies of CITES and NCBI GenBank can result in erroneous estimates of illegal trade. Results The HTS barcode checker pipeline is an application for automated processing of sets of 'next generation' barcode sequences to determine whether these contain DNA barcodes obtained from species listed on the CITES appendices. This analytical pipeline builds upon and extends existing open-source applications for BLAST matching against the NCBI GenBank reference database and for taxonomic name reconciliation. In a single operation, reads are converted into taxonomic identifications matched with names on the CITES appendices. By inclusion of a blacklist and additional names databases, the HTS barcode checker pipeline prevents false positives and resolves taxonomic heterogeneity. Conclusions The HTS barcode checker pipeline can detect and correctly identify DNA barcodes of CITES-protected species from reads obtained from TCM samples in just a few minutes. The pipeline facilitates and improves molecular monitoring of trade in endangered species, and can aid in safeguarding these species from extinction in the wild. The HTS barcode checker pipeline is

  15. A pipeline of programs for collecting and analyzing group II intron retroelement sequences from GenBank

    PubMed Central

    2013-01-01

    Background Accurate and complete identification of mobile elements is a challenging task in the current era of sequencing, given their large numbers and frequent truncations. Group II intron retroelements, which consist of a ribozyme and an intron-encoded protein (IEP), are usually identified in bacterial genomes through their IEP; however, the RNA component that defines the intron boundaries is often difficult to identify because of a lack of strong sequence conservation corresponding to the RNA structure. Compounding the problem of boundary definition is the fact that a majority of group II intron copies in bacteria are truncated. Results Here we present a pipeline of 11 programs that collect and analyze group II intron sequences from GenBank. The pipeline begins with a BLAST search of GenBank using a set of representative group II IEPs as queries. Subsequent steps download the corresponding genomic sequences and flanks, filter out non-group II introns, assign introns to phylogenetic subclasses, filter out incomplete and/or non-functional introns, and assign IEP sequences and RNA boundaries to the full-length introns. In the final step, the redundancy in the data set is reduced by grouping introns into sets of ≥95% identity, with one example sequence chosen to be the representative. Conclusions These programs should be useful for comprehensive identification of group II introns in sequence databases as data continue to rapidly accumulate. PMID:24359548
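
    The final redundancy-reduction step described above — grouping introns into sets of ≥95% identity with one representative each — can be sketched as greedy clustering. The identity function here is a naive position-wise comparison of equal-length toy strings; the real pipeline works on aligned sequences.

    ```python
    def identity(a, b):
        """Naive fractional identity for roughly aligned sequences."""
        return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

    def cluster_representatives(seqs, threshold=0.95):
        """Greedy clustering: a sequence joins the first cluster whose
        representative it matches at >= threshold, else starts a new one."""
        reps = []
        for s in seqs:
            if not any(identity(s, r) >= threshold for r in reps):
                reps.append(s)  # s starts a new cluster and represents it
        return reps

    seqs = ["ACGTACGTACGTACGTACGT",
            "ACGTACGTACGTACGTACGA",   # 19/20 = 95% identical -> same cluster
            "TTTTGGGGCCCCAAAATTTT"]   # distinct -> its own representative
    print(cluster_representatives(seqs))  # two representatives
    ```

    Greedy clustering is order-dependent, which is acceptable here because only one example per ≥95%-identity group needs to survive.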

  16. Cellular and Biophysical Pipeline for the Screening of Peroxisome Proliferator-Activated Receptor Beta/Delta Agonists: Avoiding False Positives

    PubMed Central

    Batista, Fernanda Aparecida Heleno

    2018-01-01

    Peroxisome proliferator-activated receptor beta/delta (PPARβ/δ) is considered a therapeutic target for metabolic disorders, cancer, and cardiovascular diseases. Here, we developed a pipeline for the screening of PPARβ/δ agonists that reduces cost, time, and false-positive hits. The first step is an optimized 3-day cellular transactivation assay based on reporter-gene technology, supported by automated liquid handlers. This primary screening is followed by a confirmatory transactivation assay and by two biophysical validation methods (thermal shift assay (TSA) and ANS fluorescence quenching), which allow calculation of the affinity constant, giving more information about the selected hits. All of the assays were validated using well-known commercial agonists, providing trustworthy data. Furthermore, to validate and test this pipeline, we screened a natural-extract library (560 extracts) and found one plant extract that might be interesting for PPARβ/δ modulation. In conclusion, our results suggest that we developed a cheaper and more robust pipeline that goes beyond single-activation screening, as it also evaluates PPARβ/δ tertiary-structure stabilization and the ligand affinity constant, selecting only molecules that directly bind to the receptor. Moreover, this approach might improve the effectiveness of screening for agonists that target PPARβ/δ for drug development.

  17. IMPACT: a whole-exome sequencing analysis pipeline for integrating molecular profiles with actionable therapeutics in clinical samples

    PubMed Central

    Hintzsche, Jennifer; Kim, Jihye; Yadav, Vinod; Amato, Carol; Robinson, Steven E; Seelenfreund, Eric; Shellman, Yiqun; Wisell, Joshua; Applegate, Allison; McCarter, Martin; Box, Neil; Tentler, John; De, Subhajyoti

    2016-01-01

    Objective Currently, there is a disconnect between finding a patient’s relevant molecular profile and predicting actionable therapeutics. Here we develop and implement the Integrating Molecular Profiles with Actionable Therapeutics (IMPACT) analysis pipeline, linking variants detected from whole-exome sequencing (WES) to actionable therapeutics. Methods and materials The IMPACT pipeline contains 4 analytical modules: detecting somatic variants, calling copy number alterations, predicting drugs against deleterious variants, and analyzing tumor heterogeneity. We tested the IMPACT pipeline on whole-exome sequencing data in The Cancer Genome Atlas (TCGA) lung adenocarcinoma samples with known EGFR mutations. We also used IMPACT to analyze melanoma patient tumor samples before treatment, after BRAF-inhibitor treatment, and after combined BRAF- and MEK-inhibitor treatment. Results IMPACT correctly identified the known EGFR mutations in the TCGA lung adenocarcinoma samples and linked them to the appropriate Food and Drug Administration (FDA)-approved EGFR inhibitors. For the melanoma patient samples, we identified NRAS p.Q61K as an acquired resistance mutation to BRAF-inhibitor treatment. We also identified CDKN2A deletion as a novel acquired resistance mutation to combined BRAF/MEK inhibition. The IMPACT analysis pipeline links these somatic variants to actionable therapeutics. We observed the clonal dynamics in the tumor samples after the various treatments. We showed that IMPACT not only helped in the successful prioritization of clinically relevant variants but also linked these variants to possible targeted therapies. Conclusion IMPACT provides a new bioinformatics strategy to delineate candidate somatic variants and actionable therapies. This approach can be applied to other patient tumor samples to discover effective drug targets for personalized medicine. IMPACT is publicly available at http://tanlab.ucdenver.edu/IMPACT. PMID:27026619
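
    The variant-to-therapeutic linking module of such a pipeline can be sketched as a lookup of deleterious variants in a curated gene-to-approved-drug table. The table below is a two-entry illustrative placeholder, not a clinical resource, and the variant calls are invented.

    ```python
    ACTIONABLE = {  # hypothetical curated mapping: gene -> approved inhibitors
        "EGFR": ["erlotinib", "gefitinib"],
        "BRAF": ["vemurafenib", "dabrafenib"],
    }

    def link_variants_to_drugs(variants):
        """variants: list of (gene, protein_change, deleterious_bool).
        Returns {variant_label: candidate_drugs} for actionable hits."""
        return {f"{gene} {change}": ACTIONABLE[gene]
                for gene, change, deleterious in variants
                if deleterious and gene in ACTIONABLE}

    calls = [("EGFR", "p.L858R", True),
             ("NRAS", "p.Q61K", True),     # no approved match in this toy table
             ("TP53", "p.R175H", False)]   # filtered: not flagged deleterious
    print(link_variants_to_drugs(calls))
    ```

    In IMPACT the equivalent table is curated from drug-approval and clinical-annotation sources, and the deleteriousness flag comes from the upstream variant-effect module.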

  18. Hazard identification and risk assessment for biologics targeting the immune system.

    PubMed

    Weir, Andrea B

    2008-01-01

    Biologic pharmaceuticals include a variety of products, such as monoclonal antibodies, fusion proteins and cytokines. Products in those classes include immunomodulatory biologics, which are intended to enhance or diminish the activity of the immune system. Immunomodulatory biologics have been approved by the U.S. FDA for a variety of indications, including cancer and inflammatory conditions. Prior to gaining approval for marketing, sponsoring companies for all types of products must demonstrate a product's safety in toxicology studies conducted in animals and show safety and efficacy in clinical trials conducted in patients. The overall goal of toxicology studies, which applies to immunomodulatory and other product types, is to identify the hazards that products pose to humans. Because biologics are generally highly selective for specific targets (receptors/epitopes), conducting toxicology studies in animal models with the target is essential. Such animals are referred to as pharmacologically relevant. Endpoints routinely included in toxicology studies, such as hematology, organ weight and histopathology, can be used to assess the effect of a product on the structure of the immune system. Additionally, specialized endpoints, such as immunophenotyping and immune function tests, can be used to define effects of immunomodulatory products on the immune system. Following hazard identification, risks posed to patients are assessed and managed. Risks can be managed through clinical trial design and risk communication, a practice that applies to immunomodulatory and other product types. Examples of risk management in clinical trial design include establishing a safe starting dose, defining the appropriate patient population and establishing appropriate patient monitoring. Risk communication starts during clinical trials and continues after product approval. A combination of hazard identification, risk assessment and risk management allows for drug development to proceed

  19. 77 FR 19799 - Pipeline Safety: Pipeline Damage Prevention Programs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-02

    ...,602 to $3,445,975. Evaluating just the lower range of benefits over ten years results in a total... consequences resulting from excavation damage to pipelines. A comprehensive damage prevention program requires..., including that resulting from excavation, digging, and other impacts, is also precipitated by operators...

  20. Deliverability on the interstate natural gas pipeline system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-05-01

    Deliverability on the Interstate Natural Gas Pipeline System examines the capability of the national pipeline grid to transport natural gas to various US markets. The report quantifies the capacity levels and utilization rates of major interstate pipeline companies in 1996 and the changes since 1990, as well as changes in markets and end-use consumption patterns. It also discusses the effects of proposed capacity expansions on capacity levels. The report consists of five chapters, several appendices, and a glossary. Chapter 1 discusses some of the operational and regulatory features of the US interstate pipeline system and how they affect overall system design, system utilization, and capacity expansions. Chapter 2 looks at how the exploration, development, and production of natural gas within North America is linked to the national pipeline grid. Chapter 3 examines the capability of the interstate natural gas pipeline network to link production areas to market areas, on the basis of capacity and usage levels along 10 corridors. The chapter also examines capacity expansions that have occurred since 1990 along each corridor and the potential impact of proposed new capacity. Chapter 4 discusses the last step in the transportation chain, that is, deliverability to the ultimate end user. Flow patterns into and out of each market region are discussed, as well as the movement of natural gas between States in each region. Chapter 5 examines how shippers reserve interstate pipeline capacity in the current transportation marketplace and how pipeline companies are handling the secondary market for short-term unused capacity. Four appendices provide supporting data and additional detail on the methodology used to estimate capacity. 32 figs., 15 tabs.

  1. A computational genomics pipeline for prokaryotic sequencing projects

    PubMed Central

    Kislyuk, Andrey O.; Katz, Lee S.; Agrawal, Sonia; Hagen, Matthew S.; Conley, Andrew B.; Jayaraman, Pushkala; Nelakuditi, Viswateja; Humphrey, Jay C.; Sammons, Scott A.; Govil, Dhwani; Mair, Raydel D.; Tatti, Kathleen M.; Tondella, Maria L.; Harcourt, Brian H.; Mayer, Leonard W.; Jordan, I. King

    2010-01-01

    Motivation: New sequencing technologies have accelerated research on prokaryotic genomes and have made genome sequencing operations outside major genome sequencing centers routine. However, no off-the-shelf solution exists for the combined assembly, gene prediction, genome annotation and data presentation necessary to interpret sequencing data. The resulting requirement to invest significant resources into custom informatics support for genome sequencing projects remains a major impediment to the accessibility of high-throughput sequence data. Results: We present a self-contained, automated high-throughput open source genome sequencing and computational genomics pipeline suitable for prokaryotic sequencing projects. The pipeline has been used at the Georgia Institute of Technology and the Centers for Disease Control and Prevention for the analysis of Neisseria meningitidis and Bordetella bronchiseptica genomes. The pipeline is capable of enhanced or manually assisted reference-based assembly using multiple assemblers and modes; gene predictor combining; and functional annotation of genes and gene products. Because every component of the pipeline is executed on a local machine with no need to access resources over the Internet, the pipeline is suitable for projects of a sensitive nature. Annotation of virulence-related features makes the pipeline particularly useful for projects working with pathogenic prokaryotes. Availability and implementation: The pipeline is licensed under the open-source GNU General Public License and available at the Georgia Tech Neisseria Base (http://nbase.biology.gatech.edu/). The pipeline is implemented with a combination of Perl, Bourne Shell and MySQL and is compatible with Linux and other Unix systems. Contact: king.jordan@biology.gatech.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20519285

  2. 75 FR 72778 - Pipeline Safety: Technical Pipeline Safety Advisory Committee Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-26

    ... pipeline safety regulations to the remaining unregulated rural onshore hazardous liquid low-stress... Low-Stress Lines'' on June 22, 2010 (75 FR 35366). The meeting agenda will include the committee's...

  3. Virtual Instrumentation Corrosion Controller for Natural Gas Pipelines

    NASA Astrophysics Data System (ADS)

    Gopalakrishnan, J.; Agnihotri, G.; Deshpande, D. M.

    2012-12-01

    Corrosion is an electrochemical process. Corrosion in natural gas (methane) pipelines leads to leakages. Corrosion occurs when an anode and a cathode are connected through an electrolyte. The rate of corrosion in a metallic pipeline can be controlled by impressing current on it, thereby making it act as the cathode of the corrosion cell. A technologically advanced and energy-efficient corrosion controller is required to protect natural gas pipelines. The proposed virtual instrumentation (VI) based corrosion controller precisely controls external corrosion in underground metallic pipelines, enhances their life and ensures safety. The design and development of a proportional-integral-derivative (PID) corrosion controller using VI (LabVIEW) is carried out. When the designed controller is deployed in the field, it maintains the pipe-to-soil potential (PSP) within the safe operating limit, without entering the over- or under-protection zones. This technique can be deployed horizontally to protect any metallic structure needing corrosion protection, such as oil pipelines.
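The control loop described above can be sketched as a discrete PID controller. The setpoint, gains, and first-order plant response below are hypothetical illustrations rather than values from the paper (a pipe-to-soil potential of about -0.85 V versus a Cu/CuSO4 reference electrode is a commonly cited protection criterion).

```python
class PIDController:
    """Discrete PID loop; the output is the impressed-current command."""

    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured_psp):
        # Positive error: PSP is less negative than the setpoint
        # (under-protected), so more impressed current is needed.
        error = measured_psp - self.setpoint
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Hypothetical first-order plant: impressed current pulls the PSP toward
# more negative (protective) values, while the soil relaxes it back
# toward a free-corrosion potential of -0.50 V.
pid = PIDController(kp=2.0, ki=0.5, kd=0.1, setpoint=-0.85, dt=1.0)
psp = -0.50  # volts vs. Cu/CuSO4, starting under-protected
for _ in range(200):
    current = pid.update(psp)
    psp += (-0.05 * current + 0.1 * (-0.50 - psp)) * pid.dt
# psp settles near the -0.85 V setpoint
```

The integral term is what removes the steady-state offset here: a pure proportional controller would settle short of the setpoint because a nonzero current must be held permanently.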

  4. SeMPI: a genome-based secondary metabolite prediction and identification web server.

    PubMed

    Zierep, Paul F; Padilla, Natàlia; Yonchev, Dimitar G; Telukunta, Kiran K; Klementz, Dennis; Günther, Stefan

    2017-07-03

    The secondary metabolism of bacteria, fungi and plants yields a vast number of bioactive substances. The constantly increasing amount of published genomic data provides the opportunity for an efficient identification of gene clusters by genome mining. Conversely, for many natural products with resolved structures, the encoding gene clusters have not been identified yet. Even though genome mining tools have become significantly more efficient in the identification of biosynthetic gene clusters, structural elucidation of the actual secondary metabolite is still challenging, especially due to as yet unpredictable post-modifications. Here, we introduce SeMPI, a web server providing a prediction and identification pipeline for natural products synthesized by type I modular polyketide synthases (PKS). In order to limit the possible structures of PKS products and to include putative tailoring reactions, a structural comparison with annotated natural products was introduced. Furthermore, a benchmark was designed based on 40 gene clusters with annotated PKS products. The web server of the pipeline (SeMPI) is freely available at: http://www.pharmaceutical-bioinformatics.de/sempi. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. Natural Gas Pipeline and System Expansions

    EIA Publications

    1997-01-01

    This special report examines recent expansions to the North American natural gas pipeline network and the nature and type of proposed pipeline projects announced or approved for construction during the next several years in the United States. It includes those projects in Canada and Mexico that tie in with U.S. markets or projects.

  6. Evaluation of Campus Pipeline, Spring 2002.

    ERIC Educational Resources Information Center

    Serban, Andreea M.; Fleming, Steve

    The Campus Pipeline at Santa Barbara City College (SBCC), California, is a portal whose purpose is to provide single web entry to relevant academic and institutional information for students and faculty. This spring 2002 evaluation of the Campus Pipeline aims to: (1) explore the degree of satisfaction of students and faculty; (2) determine which…

  7. Testing the School-to-Prison Pipeline

    ERIC Educational Resources Information Center

    Owens, Emily G.

    2017-01-01

    The School-to-Prison Pipeline is a social phenomenon where students become formally involved with the criminal justice system as a result of school policies that use law enforcement, rather than discipline, to address behavioral problems. A potentially important part of the School-to-Prison Pipeline is the use of sworn School Resource Officers…

  8. Problematizing the STEM Pipeline Metaphor: Is the STEM Pipeline Metaphor Serving Our Students and the STEM Workforce?

    ERIC Educational Resources Information Center

    Cannady, Matthew A.; Greenwald, Eric; Harris, Kimberly N.

    2014-01-01

    Researchers and policy makers often use the metaphor of an ever-narrowing pipeline to describe the trajectory to a science, technology, engineering or mathematics (STEM) degree or career. This study interrogates the appropriateness of the STEM pipeline as the dominant frame for understanding and making policies related to STEM career trajectories.…

  9. Commissioning of a new helium pipeline

    NASA Technical Reports Server (NTRS)

    2000-01-01

    At the commissioning of a new high-pressure helium pipeline at Kennedy Space Center, participants cut the lines to helium-filled balloons. From left, they are Center Director Roy Bridges; Michael Butchko, president, SGS; Pierre Dufour, president and CEO, Air Liquide America Corporation; David Herst, director, Delta IV Launch Sites; Pamela Gillespie, executive administrator, office of Congressman Dave Weldon; and Col. Samuel Dick, representative of the 45th Space Wing. The nine-mile-long buried pipeline will service launch needs at the new Delta IV Complex 37 at Cape Canaveral Air Force Station. It will also serve as a backup helium resource for Shuttle launches. Nearly one launch's worth of helium will be available in the pipeline to support a Shuttle pad in an emergency. The line originates at the Helium Facility on KSC and terminates in a meter station at the perimeter of the Delta IV launch pad. Others at the ceremony were Jerry Jorgensen, pipeline project manager, Space Gateway Support (SGS), and Ramon Lugo, acting executive director, JPMO.

  10. Commissioning of a new helium pipeline

    NASA Technical Reports Server (NTRS)

    2000-01-01

    At the commissioning of a new high-pressure helium pipeline at Kennedy Space Center, participants watch as helium-filled balloons take to the sky after their lines were cut. From left, they are Center Director Roy Bridges; Michael Butchko, president, SGS; Pierre Dufour, president and CEO, Air Liquide America Corporation; David Herst, director, Delta IV Launch Sites; Pamela Gillespie, executive administrator, office of Congressman Dave Weldon; and Col. Samuel Dick, representative of the 45th Space Wing. The nine-mile-long buried pipeline will service launch needs at the new Delta IV Complex 37 at Cape Canaveral Air Force Station. It will also serve as a backup helium resource for Shuttle launches. Nearly one launch's worth of helium will be available in the pipeline to support a Shuttle pad in an emergency. The line originates at the Helium Facility on KSC and terminates in a meter station at the perimeter of the Delta IV launch pad. Others at the ceremony were Jerry Jorgensen, pipeline project manager, Space Gateway Support (SGS), and Ramon Lugo, acting executive director, JPMO.

  11. Color correction pipeline optimization for digital cameras

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo

    2013-04-01

    The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed to adapt the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since the illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talks between the modules of the pipeline can lead to a higher color-rendition accuracy.
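The two modules can be sketched as a diagonal (von Kries) illuminant correction followed by a 3x3 color matrix. The gains, matrix entries, and pixel values below are hypothetical, chosen only so that a grey patch under a warm illuminant maps back to near-neutral; they are not the paper's tuned parameters.

```python
def color_correct(raw_rgb, gains, matrix):
    """Apply per-channel illuminant gains, then a 3x3 sensor-to-sRGB matrix."""
    balanced = [c * g for c, g in zip(raw_rgb, gains)]          # module 1
    mixed = [sum(m * b for m, b in zip(row, balanced))          # module 2
             for row in matrix]
    return [min(1.0, max(0.0, v)) for v in mixed]               # clip to gamut

gains = [2.0, 1.0, 1.5]              # hypothetical white-balance gains
matrix = [[ 1.6, -0.4, -0.2],        # each row sums to 1.0, so neutral
          [-0.3,  1.5, -0.2],        # (white-balanced grey) colors are
          [ 0.0, -0.5,  1.5]]        # preserved by the matrix
pixel = [0.25, 0.50, 0.33]           # RAW grey patch under a warm illuminant
out = color_correct(pixel, gains, matrix)
```

The row-sum-to-one constraint illustrates the coupling the paper exploits: an error in the first module's gains propagates through the matrix, which is why the authors tune the matrix with the illuminant-estimation behavior in mind.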

  12. Pharmacologic Management of Duchenne Muscular Dystrophy: Target Identification and Preclinical Trials

    PubMed Central

    Kornegay, Joe N.; Spurney, Christopher F.; Nghiem, Peter P.; Brinkmeyer-Langford, Candice L.; Hoffman, Eric P.; Nagaraju, Kanneboyina

    2014-01-01

    Duchenne muscular dystrophy (DMD) is an X-linked human disorder in which absence of the protein dystrophin causes degeneration of skeletal and cardiac muscle. For the sake of treatment development, over and above definitive genetic and cell-based therapies, there is considerable interest in drugs that target downstream disease mechanisms. Drug candidates have typically been chosen based on the nature of pathologic lesions and presumed underlying mechanisms and then tested in animal models. Mammalian dystrophinopathies have been characterized in mice (mdx mouse) and dogs (golden retriever muscular dystrophy [GRMD]). Despite promising results in the mdx mouse, some therapies have not shown efficacy in DMD. Although the GRMD model offers a higher hurdle for translation, dogs have primarily been used to test genetic and cellular therapies where there is greater risk. Failed translation of animal studies to DMD raises questions about the propriety of methods and models used to identify drug targets and test efficacy of pharmacologic intervention. The mdx mouse and GRMD dog are genetically homologous to DMD but not necessarily analogous. Subcellular species differences are undoubtedly magnified at the whole-body level in clinical trials. This problem is compounded by disparate cultures in clinical trials and preclinical studies, pointing to a need for greater rigor and transparency in animal experiments. Molecular assays such as mRNA arrays and genome-wide association studies allow identification of genetic drug targets more closely tied to disease pathogenesis. Genes in which polymorphisms have been directly linked to DMD disease progression, as with osteopontin, are particularly attractive targets. PMID:24936034

  13. BigDataScript: a scripting language for data pipelines.

    PubMed

    Cingolani, Pablo; Sladek, Rob; Blanchette, Mathieu

    2015-01-01

    The analysis of large biological datasets often requires complex processing pipelines that run for a long time on large computational infrastructures. We designed and implemented a simple script-like programming language with a clean and minimalist syntax to develop and manage pipeline execution and provide robustness to various types of software and hardware failures as well as portability. We introduce the BigDataScript (BDS) programming language for data processing pipelines, which improves abstraction from hardware resources and assists with robustness. Hardware abstraction allows BDS pipelines to run without modification on a wide range of computer architectures, from a small laptop to multi-core servers, server farms, clusters and clouds. BDS achieves robustness by incorporating the concepts of absolute serialization and lazy processing, thus allowing pipelines to recover from errors. By abstracting pipeline concepts at programming language level, BDS simplifies implementation, execution and management of complex bioinformatics pipelines, resulting in reduced development and debugging cycles as well as cleaner code. BigDataScript is available under open-source license at http://pcingola.github.io/BigDataScript. © The Author 2014. Published by Oxford University Press.

  14. BigDataScript: a scripting language for data pipelines

    PubMed Central

    Cingolani, Pablo; Sladek, Rob; Blanchette, Mathieu

    2015-01-01

    Motivation: The analysis of large biological datasets often requires complex processing pipelines that run for a long time on large computational infrastructures. We designed and implemented a simple script-like programming language with a clean and minimalist syntax to develop and manage pipeline execution and provide robustness to various types of software and hardware failures as well as portability. Results: We introduce the BigDataScript (BDS) programming language for data processing pipelines, which improves abstraction from hardware resources and assists with robustness. Hardware abstraction allows BDS pipelines to run without modification on a wide range of computer architectures, from a small laptop to multi-core servers, server farms, clusters and clouds. BDS achieves robustness by incorporating the concepts of absolute serialization and lazy processing, thus allowing pipelines to recover from errors. By abstracting pipeline concepts at programming language level, BDS simplifies implementation, execution and management of complex bioinformatics pipelines, resulting in reduced development and debugging cycles as well as cleaner code. Availability and implementation: BigDataScript is available under open-source license at http://pcingola.github.io/BigDataScript. Contact: pablo.e.cingolani@gmail.com PMID:25189778
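The lazy-processing idea (rerun a step only when its output is missing or stale, so a restarted pipeline resumes after a failure instead of recomputing everything) can be illustrated in generic Python. This is a make-style sketch, not BDS syntax:

```python
import os

def needs_run(inputs, output):
    """A step is stale if its output is missing or older than any input."""
    if not os.path.exists(output):
        return True
    out_mtime = os.path.getmtime(output)
    return any(os.path.getmtime(path) > out_mtime for path in inputs)

def run_step(inputs, output, action):
    """Execute `action` only when needed, so reruns skip finished steps."""
    if needs_run(inputs, output):
        action(inputs, output)  # e.g. align reads, then call variants
    return output
```

BDS additionally serializes the interpreter state ("absolute serialization"), so even a step interrupted mid-execution can be resumed; the file-timestamp check above captures only the coarser per-step recovery.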

  15. Identification of novel plant peroxisomal targeting signals by a combination of machine learning methods and in vivo subcellular targeting analyses.

    PubMed

    Lingner, Thomas; Kataya, Amr R; Antonicelli, Gerardo E; Benichou, Aline; Nilssen, Kjersti; Chen, Xiong-Yan; Siemsen, Tanja; Morgenstern, Burkhard; Meinicke, Peter; Reumann, Sigrun

    2011-04-01

    In the postgenomic era, accurate prediction tools are essential for identification of the proteomes of cell organelles. Prediction methods have been developed for peroxisome-targeted proteins in animals and fungi but are missing specifically for plants. For development of a predictor for plant proteins carrying peroxisome targeting signals type 1 (PTS1), we assembled more than 2500 homologous plant sequences, mainly from EST databases. We applied a discriminative machine learning approach to derive two different prediction methods, both of which showed high prediction accuracy and recognized specific targeting-enhancing patterns in the regions upstream of the PTS1 tripeptides. Upon application of these methods to the Arabidopsis thaliana genome, 392 gene models were predicted to be peroxisome targeted. These predictions were extensively tested in vivo, resulting in a high experimental verification rate of Arabidopsis proteins previously not known to be peroxisomal. The prediction methods were able to correctly infer novel PTS1 tripeptides, which even included novel residues. Twenty-three newly predicted PTS1 tripeptides were experimentally confirmed, and a high variability of the plant PTS1 motif was discovered. These prediction methods will be instrumental in identifying low-abundance and stress-inducible peroxisomal proteins and defining the entire peroxisomal proteome of Arabidopsis and agronomically important crop plants.
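As a much simpler stand-in for the discriminative machine learning approach, a position-specific scoring of the C-terminal tripeptide illustrates the underlying idea. The weight table below is illustrative, seeded from canonical PTS1 variants such as SKL; it is not the paper's trained model, which also scores the upstream targeting-enhancing region.

```python
# Position-specific weights for residues -3, -2, -1 of the C-terminus.
# Illustrative values only (canonical PTS1 tripeptides resemble [SA][KR][LM]).
WEIGHTS = [
    {"S": 1.0, "A": 0.8, "C": 0.5, "P": 0.3},   # position -3
    {"K": 1.0, "R": 0.9, "H": 0.4, "N": 0.3},   # position -2
    {"L": 1.0, "M": 0.7, "I": 0.4},             # position -1
]

def pts1_score(protein_seq):
    """Sum the per-position weights over the C-terminal tripeptide."""
    tripeptide = protein_seq[-3:].upper()
    return sum(WEIGHTS[i].get(aa, 0.0) for i, aa in enumerate(tripeptide))

def is_candidate(protein_seq, threshold=1.5):
    return pts1_score(protein_seq) >= threshold

# Hypothetical sequences: one ending in the canonical SKL, one with no PTS1.
hits = [s for s in ("MAVTGSSKL", "MAVTGSGGG") if is_candidate(s)]
```

A trained predictor effectively learns such weights (and inter-position dependencies) from the ~2500 homologous plant sequences instead of fixing them by hand.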

  16. 30 CFR 250.1005 - Inspection requirements for DOI pipelines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false Inspection requirements for DOI pipelines. 250.1005 Section 250.1005 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR...-Way § 250.1005 Inspection requirements for DOI pipelines. (a) Pipeline routes shall be inspected at...

  17. Prospects for coal slurry pipelines in California

    NASA Technical Reports Server (NTRS)

    Lynch, J. F.

    1978-01-01

    The coal slurry pipeline segment of the transport industry is emerging in the United States. If accepted it will play a vital role in meeting America's urgent energy requirements without public subsidy, tax relief, or federal grants. It is proven technology, ideally suited for transport of an abundant energy resource over thousands of miles to energy short industrial centers and at more than competitive costs. Briefly discussed are the following: (1) history of pipelines; (2) California market potential; (3) slurry technology; (4) environmental benefits; (5) market competition; and (6) a proposed pipeline.

  18. Data-driven risk models could help target pipeline safety inspections

    DOT National Transportation Integrated Search

    2008-07-01

    Federal safety agencies share a common problem: the need to target resources effectively to reduce risk. One way this targeting is commonly done is with a risk model that uses safety data along with expert judgment to identify and weight ris...

  19. Influence of Anchoring on Burial Depth of Submarine Pipelines

    PubMed Central

    Zhuang, Yuan; Li, Yang; Su, Wei

    2016-01-01

    Since the beginning of the twenty-first century, there has been widespread construction of submarine oil-gas transmission pipelines due to an increase in offshore oil exploration. As shipping traffic has also increased, vessel anchoring operations are causing more damage to submarine pipelines. Therefore, it is essential that the influence of anchoring on the required burial depth of submarine pipelines is determined. In this paper, mathematical models for ordinary anchoring and emergency anchoring have been established to derive an anchor impact energy equation for each condition. The required effective burial depth for submarine pipelines has then been calculated via an energy absorption equation for the protection layer covering the pipelines. Finally, the results of the model calculation have been verified by accident case analysis, and the impact of the anchoring height, anchoring water depth and anchor weight on the required burial depth of submarine pipelines has been further analyzed. PMID:27166952
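The energy-balance reasoning can be sketched as follows. All numbers, and the assumption of a constant energy absorption per metre of cover, are hypothetical illustrations rather than the paper's model: a dropped anchor reaches terminal velocity in water, and the protection layer over the pipeline must absorb its kinetic energy.

```python
import math

RHO_SEAWATER = 1025.0  # kg/m^3
G = 9.81               # m/s^2

def terminal_velocity(mass, volume, drag_area, drag_coeff=1.0):
    """Balance the anchor's submerged weight against quadratic drag."""
    submerged_weight = (mass - RHO_SEAWATER * volume) * G
    return math.sqrt(2.0 * submerged_weight
                     / (RHO_SEAWATER * drag_coeff * drag_area))

def required_cover_depth(mass, volume, drag_area, absorb_per_metre):
    """Depth at which the protection layer absorbs the impact energy,
    assuming (simplistically) a constant absorption rate per metre."""
    v = terminal_velocity(mass, volume, drag_area)
    impact_energy = 0.5 * mass * v ** 2
    return impact_energy / absorb_per_metre

# Hypothetical 2 t anchor (0.26 m^3 displacement, 0.5 m^2 frontal area)
# over a cover layer absorbing 50 kJ per metre of burial depth.
depth = required_cover_depth(2000.0, 0.26, 0.5, 50e3)
```

For deep water the terminal-velocity assumption makes the impact energy independent of anchoring height; for shallow drops the anchor may strike before reaching terminal velocity, which is one reason the paper treats ordinary and emergency anchoring with separate models.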

  20. Integrated Proteomic Pipeline Using Multiple Search Engines for a Proteogenomic Study with a Controlled Protein False Discovery Rate.

    PubMed

    Park, Gun Wook; Hwang, Heeyoun; Kim, Kwang Hoe; Lee, Ju Yeon; Lee, Hyun Kyoung; Park, Ji Yeong; Ji, Eun Sun; Park, Sung-Kyu Robin; Yates, John R; Kwon, Kyung-Hoon; Park, Young Mok; Lee, Hyoung-Joo; Paik, Young-Ki; Kim, Jin Young; Yoo, Jong Shin

    2016-11-04

    In the Chromosome-Centric Human Proteome Project (C-HPP), false-positive identification by peptide spectrum matches (PSMs) after database searches is a major issue for proteogenomic studies using liquid-chromatography and mass-spectrometry-based large proteomic profiling. Here we developed a simple strategy for protein identification, with a controlled false discovery rate (FDR) at the protein level, using an integrated proteomic pipeline (IPP) that consists of four consecutive steps as follows. First, using three different search engines, SEQUEST, MASCOT, and MS-GF+, individual proteomic searches were performed against the neXtProt database. Second, the search results from the PSMs were combined using statistical evaluation tools including DTASelect and Percolator. Third, the peptide search scores were converted into E-scores normalized using an in-house program. Last, ProteinInferencer was used to filter the proteins containing two or more peptides with a controlled FDR of 1.0% at the protein level. Finally, we compared the performance of the IPP to a conventional proteomic pipeline (CPP) for protein identification using a controlled FDR of <1% at the protein level. Using the IPP, a total of 5756 proteins (vs 4453 using the CPP), including 477 alternative splicing variants (vs 182 using the CPP), were identified from human hippocampal tissue. In addition, a total of 10 missing proteins (vs 7 using the CPP) were identified with two or more unique peptides, and their tryptic peptides were validated using MS/MS spectral patterns from a repository database or their corresponding synthetic peptides. This study shows that the IPP effectively improved the identification of proteins, including alternative splicing variants and missing proteins, in human hippocampal tissue for the C-HPP. All RAW files used in this study were deposited in ProteomeXchange (PXD000395).
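The FDR control at the heart of such pipelines can be illustrated with a generic target-decoy threshold search. This is a simplified sketch of the standard decoys-over-targets estimate, not the IPP's E-score normalization or ProteinInferencer logic:

```python
def fdr_threshold(matches, max_fdr=0.01):
    """matches: (score, is_decoy) pairs, higher score = better match.
    Returns the lowest score cutoff keeping decoys/targets <= max_fdr,
    or None if no cutoff satisfies the bound."""
    best_cutoff = None
    targets = decoys = 0
    for score, is_decoy in sorted(matches, reverse=True):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        fdr = decoys / targets if targets else 1.0
        if fdr <= max_fdr:
            best_cutoff = score   # accept everything scoring >= score
    return best_cutoff

# Synthetic scores: 100 targets outscore 100 decoys, and the top decoy
# sits exactly at the boundary (1 decoy / 100 targets = 1% FDR).
sim = [(s, False) for s in range(101, 201)] + [(s, True) for s in range(1, 101)]
cutoff = fdr_threshold(sim)
```

The same decoy-counting logic underpins the target-decoy metabolite FDR described at the top of this listing; what differs between tools is how decoys are generated and at which level (PSM, peptide, or protein) the ratio is controlled.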

  1. BioMaS: a modular pipeline for Bioinformatic analysis of Metagenomic AmpliconS.

    PubMed

    Fosso, Bruno; Santamaria, Monica; Marzano, Marinella; Alonso-Alemany, Daniel; Valiente, Gabriel; Donvito, Giacinto; Monaco, Alfonso; Notarangelo, Pasquale; Pesole, Graziano

    2015-07-01

    Substantial advances in microbiology, molecular evolution and biodiversity have been made in recent years thanks to Metagenomics, which makes it possible to unveil the composition and functions of mixed microbial communities in any environmental niche. If the investigation is aimed only at the microbiome taxonomic structure, a target-based metagenomic approach, here also referred to as Meta-barcoding, is generally applied. This approach commonly involves the selective amplification of a species-specific genetic marker (DNA meta-barcode) across the whole taxonomic range of interest and the exploration of its taxon-related variants through High-Throughput Sequencing (HTS) technologies. Access to proper computational systems for the large-scale bioinformatic analysis of HTS data currently represents one of the major challenges in advanced Meta-barcoding projects. BioMaS (Bioinformatic analysis of Metagenomic AmpliconS) is a new bioinformatic pipeline designed to support biomolecular researchers involved in taxonomic studies of environmental microbial communities through a completely automated workflow, comprising all the fundamental steps, from raw sequence data upload and cleaning to final taxonomic identification, that are required in an appropriately designed Meta-barcoding HTS-based experiment. In its current version, BioMaS allows the analysis of both bacterial and fungal environments starting directly from the raw sequencing data from either Roche 454 or Illumina HTS platforms, following two alternative paths, respectively. BioMaS is implemented into a public web service available at https://recasgateway.ba.infn.it/ and is also available in Galaxy at http://galaxy.cloud.ba.infn.it:8080 (only for Illumina data). BioMaS is a friendly pipeline for Meta-barcoding HTS data analysis specifically designed for users without particular computing skills. 
A comparative benchmark, carried out by using a simulated dataset suitably designed to broadly represent

  2. Natural disasters and the gas pipeline system.

    DOT National Transportation Integrated Search

    1996-11-01

    Episodic descriptions are provided of the effects of the Loma Prieta earthquake (1989) on the gas pipeline systems of Pacific Gas & Electric Company and the City of Palo Alto and of the Northridge earthquake (1994) on Southern California Gas' pipeline...

  3. Development of a Free-Swimming Acoustic Tool for Liquid Pipeline Leak Detection Including Evaluation for Natural Gas Pipeline Applications

    DOT National Transportation Integrated Search

    2010-08-01

    Significant financial and environmental consequences often result from line leakage of oil product pipelines. Product can escape into the surrounding soil, and even the smallest leak can lead to rupture of the pipeline. From a health perspective, water...

  4. Stress and Strain State Analysis of Defective Pipeline Portion

    NASA Astrophysics Data System (ADS)

    Burkov, P. V.; Burkova, S. P.; Knaub, S. A.

    2015-09-01

    The paper presents computer simulation results for a pipeline having defects in a welded joint. Autodesk Inventor software is used to simulate the stress and strain state of the pipeline. Possible failure locations and stress concentrators are predicted for the defective portion of the pipeline.

  5. Targeted sequencing of clade-specific markers from skin microbiomes for forensic human identification.

    PubMed

    Schmedes, Sarah E; Woerner, August E; Novroski, Nicole M M; Wendt, Frank R; King, Jonathan L; Stephens, Kathryn M; Budowle, Bruce

    2018-01-01

    The human skin microbiome is comprised of diverse communities of bacterial, eukaryotic, and viral taxa and contributes millions of additional genes to the repertoire of human genes, affecting human metabolism and immune response. Numerous genetic and environmental factors influence the microbiome composition and as such contribute to individual-specific microbial signatures which may be exploited for forensic applications. Previous studies have demonstrated the potential to associate skin microbial profiles collected from touched items with their individual owner, mainly using unsupervised methods on samples collected over short time intervals. Those studies utilize either targeted 16S rRNA or shotgun metagenomic sequencing to characterize skin microbiomes; however, these approaches have limited species and strain resolution and susceptibility to stochastic effects, respectively. Supervised learning on clade-specific markers from the skin microbiome can predict individual identity from donors' skin microbiomes with high accuracy. In this study the hidSkinPlex is presented, a novel targeted sequencing method using skin microbiome markers developed for human identification. The hidSkinPlex (comprised of 286 bacterial (and phage) family-, genus-, species-, and subspecies-level markers) was initially evaluated on three bacterial control samples represented in the panel (i.e., Propionibacterium acnes, Propionibacterium granulosum, and Rothia dentocariosa) to assess the performance of the multiplex. The hidSkinPlex was further evaluated for prediction purposes. The hidSkinPlex markers were used to attribute skin microbiomes collected from eight individuals from three body sites (i.e., foot (Fb), hand (Hp) and manubrium (Mb)) to their host donor. Supervised learning, specifically regularized multinomial logistic regression and 1-nearest-neighbor classification, was used to classify skin microbiomes to their hosts with up to 92% (Fb), 96% (Mb
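The 1-nearest-neighbour step can be sketched generically: assign a query profile to the donor whose reference profile it most closely resembles. The marker-abundance vectors and donor labels below are hypothetical, not hidSkinPlex data:

```python
import math

def one_nn(query, references):
    """Return the donor whose reference profile is closest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(references, key=lambda donor: dist(references[donor], query))

# Hypothetical per-marker abundance profiles for three enrolled donors.
references = {
    "donor_A": [0.9, 0.1, 0.0, 0.3],
    "donor_B": [0.1, 0.8, 0.4, 0.0],
    "donor_C": [0.2, 0.2, 0.9, 0.5],
}
query = [0.8, 0.2, 0.1, 0.25]       # a new hand-swab profile
predicted = one_nn(query, references)
```

With hundreds of markers instead of four, the same nearest-neighbour rule applies unchanged; the study's regularized multinomial logistic regression instead learns per-marker weights, which helps when markers differ in stability across body sites and time.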

  6. UGbS-Flex, a novel bioinformatics pipeline for imputation-free SNP discovery in polyploids without a reference genome: finger millet as a case study.

    PubMed

    Qi, Peng; Gimode, Davis; Saha, Dipnarayan; Schröder, Stephan; Chakraborty, Debkanta; Wang, Xuewen; Dida, Mathews M; Malmberg, Russell L; Devos, Katrien M

    2018-06-15

    Research on orphan crops is often hindered by a lack of genomic resources. With the advent of affordable sequencing technologies, genotyping an entire genome or, for large-genome species, a representative fraction of the genome has become feasible for any crop. Nevertheless, most genotyping-by-sequencing (GBS) methods are geared towards obtaining large numbers of markers at low sequence depth, which excludes their application in heterozygous individuals. Furthermore, bioinformatics pipelines often lack the flexibility to deal with paired-end reads or to be applied in polyploid species. UGbS-Flex combines publicly available software with in-house Python and Perl scripts to efficiently call SNPs from genotyping-by-sequencing reads irrespective of the species' ploidy level, breeding system and availability of a reference genome. Noteworthy features of the UGbS-Flex pipeline are an ability to use paired-end reads as input, an effective approach to cluster reads across samples with enhanced outputs, and maximization of SNP calling. We demonstrate use of the pipeline for the identification of several thousand high-confidence SNPs with high representation across samples in an F3-derived F2 population in the allotetraploid finger millet. Robust high-density genetic maps were constructed using the time-tested mapping program MAPMAKER, which we upgraded to run efficiently and in a semi-automated manner in a Windows Command Prompt environment. We exploited comparative GBS with one of the diploid ancestors of finger millet to assign linkage groups to subgenomes and demonstrate the presence of chromosomal rearrangements. The paper combines GBS protocol modifications, a novel flexible GBS analysis pipeline, UGbS-Flex, recommendations to maximize SNP identification, updated genetic mapping software, and the first high-density maps of finger millet. The modules used in the UGbS-Flex pipeline and for genetic mapping were applied to finger millet, an allotetraploid selfing species.

  7. Sub-soil contamination due to oil spills in zones surrounding oil pipeline-pump stations and oil pipeline right-of-ways in Southwest-Mexico.

    PubMed

    Iturbe, Rosario; Flores, Carlos; Castro, Alejandrina; Torres, Luis G

    2007-10-01

    Oil spills from oil pipelines are a very frequent problem in Mexico. Petroleos Mexicanos (PEMEX), very concerned with the environmental agenda, has been developing inspection and correction plans for zones around oil pipeline pumping stations and pipeline right-of-ways. These stations are located at regular intervals of kilometres along the pipelines. In this study, two sections of an oil pipeline and two pipeline pumping station zones are characterized in terms of the presence of Total Petroleum Hydrocarbons (TPHs) and Polycyclic Aromatic Hydrocarbons (PAHs). The study comprises sampling of the areas, delimitation of the vertical and horizontal extent of contamination, analysis of the sampled soils for TPH content and, in some cases, the 16 PAHs considered as priority by the USEPA, calculation of the contaminated areas and volumes (according to Mexican legislation, specifically NOM-EM-138-ECOL-2002) and, finally, a proposal for the remediation techniques best suited to the contamination levels and the localization of contaminants.

  8. In Situ Identification of Cyanobacteria with Horseradish Peroxidase-Labeled, rRNA-Targeted Oligonucleotide Probes

    PubMed Central

    Schönhuber, Wilhelm; Zarda, Boris; Eix, Stella; Rippka, Rosmarie; Herdman, Michael; Ludwig, Wolfgang; Amann, Rudolf

    1999-01-01

    Individual cyanobacterial cells are normally identified in environmental samples only on the basis of their pigmentation and morphology. However, these criteria are often insufficient for the differentiation of species. Here, a whole-cell hybridization technique is presented that uses horseradish peroxidase (HRP)-labeled, rRNA-targeted oligonucleotides for in situ identification of cyanobacteria. This indirect method, in which the probe-conferred enzyme has to be visualized in an additional step, was necessary since fluorescently monolabeled oligonucleotides were insufficient to overstain the autofluorescence of the target cells. Initially, a nonfluorescent detection assay was developed and successfully applied to cyanobacterial mats. Later, it was demonstrated that tyramide signal amplification (TSA) resulted in fluorescent signals far above the level of autofluorescence. Furthermore, TSA-based detection of HRP was more sensitive than that based on nonfluorescent substrates. Critical points of the assay, such as cell fixation and permeabilization, specificity, and sensitivity, were systematically investigated by using four oligonucleotides newly designed to target groups of cyanobacteria. PMID:10049892

  9. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    PubMed

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. To overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules such as FSL.FEAT and NPAIRS.CVA were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the ranking of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.

  10. Risk Analysis using Corrosion Rate Parameter on Gas Transmission Pipeline

    NASA Astrophysics Data System (ADS)

    Sasikirono, B.; Kim, S. J.; Haryadi, G. D.; Huda, A.

    2017-05-01

    In the oil and gas industry, the pipeline is a major component in the transmission and distribution of oil and gas. Distribution pipelines sometimes cross various types of environmental conditions, so in the transmission and distribution process a pipeline should operate safely and not harm the surrounding environment. Corrosion is still a major cause of failure in some equipment components of a production facility. In pipeline systems, corrosion can cause failures in the wall and damage to the pipeline, so the pipeline system requires care and periodic inspections or checks. Every production facility in an industry has a level of risk for damage, which is a result of the likelihood and consequences of the damage caused. The purpose of this research is to analyze the risk level of a 20-inch natural gas transmission pipeline using semi-quantitative risk-based inspection per API 581, associated with the likelihood of failure and the consequences of failure of an equipment component. The result is then used to determine the next inspection plan. Nine pipeline components were observed, such as straight pipe inlets, connection tees, and straight pipe outlets. The risk assessment levels of the nine pipeline components are presented in a risk matrix; the components were found to be at medium risk levels. The failure mechanism used in this research is thinning. Based on the corrosion rate calculation, the remaining age of the pipeline components can be obtained, so the remaining lifetime of the pipeline components is known; the calculated remaining lifetime varies for each component. The next step is planning the inspection of the pipeline components by external NDT methods.
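The remaining-life arithmetic behind the thinning mechanism above can be sketched as follows; the wall thicknesses and corrosion rate are illustrative placeholders, not values from the study:

```python
# Back-of-envelope remaining-life estimate for the thinning mechanism,
# in the spirit of semi-quantitative RBI per API 581 (illustrative inputs).
def remaining_life_years(t_actual_mm, t_minimum_mm, corrosion_rate_mm_per_year):
    """Years until the wall thins to its minimum allowable thickness."""
    if corrosion_rate_mm_per_year <= 0:
        raise ValueError("corrosion rate must be positive")
    return (t_actual_mm - t_minimum_mm) / corrosion_rate_mm_per_year

# A 12.7 mm wall, 6.0 mm minimum allowable, thinning at 0.25 mm/year:
print(remaining_life_years(12.7, 6.0, 0.25))  # → 26.8
```

The measured corrosion rate thus converts directly into a remaining lifetime per component, which is what drives the next inspection interval.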

  11. ORAC-DR: Astronomy data reduction pipeline

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Economou, Frossie; Cavanagh, Brad; Currie, Malcolm J.; Gibb, Andy

    2013-10-01

    ORAC-DR is a generic data reduction pipeline infrastructure; it includes specific data processing recipes for a number of instruments. It is used at the James Clerk Maxwell Telescope, United Kingdom Infrared Telescope, AAT, and LCOGT. This pipeline runs at the JCMT Science Archive hosted by CADC to generate near-publication quality data products; the code has been in use since 1998.

  12. Simplified Technique for Predicting Offshore Pipeline Expansion

    NASA Astrophysics Data System (ADS)

    Seo, J. H.; Kim, D. K.; Choi, H. S.; Yu, S. Y.; Park, K. S.

    2018-06-01

    In this study, we propose a method for estimating the amount of expansion that occurs in subsea pipelines, which could be applied in the design of robust structures that transport oil and gas from offshore wells. We begin with a literature review and general discussion of existing estimation methods and terminologies with respect to subsea pipelines. Because produced fluids arrive at high pressure and high temperature, transporting them typically causes physical deformation of subsea structures, e.g., expansion and contraction. In severe cases, vertical and lateral buckling occurs, which has a significant negative impact on structural safety and is related to on-bottom stability, free spans, structural collapse, and many other factors. These factors may also affect the production rate with respect to flow assurance, wax, and hydrate formation, to name a few. In this study, we developed a simple and efficient method for generating a reliable pipe expansion design in the early stage, which can lead to savings in both cost and computation time. Specifically, we propose an applicable diagram, which we call the standard dimensionless ratio (SDR) versus virtual anchor length (L_A) diagram, that provides an efficient procedure for estimating subsea pipeline expansion based on reliable applied scenarios. With this user guideline, offshore pipeline structural designers can reliably determine the amount of subsea pipeline expansion, and the results will also be useful for the installation, design, and maintenance of subsea pipelines.
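As a hedged sketch of the kind of estimate an SDR-versus-L_A diagram condenses, the classical "virtual anchor" model takes the effective axial force P to decay linearly under soil friction f, reaching zero at the virtual anchor length L_A = P/f; the end expansion is the integrated axial strain over that length. All inputs below are illustrative assumptions, not the paper's values or its actual procedure:

```python
# Textbook virtual-anchor expansion sketch (illustrative inputs only).
import math

def sdr(outer_diameter_m, wall_thickness_m):
    """Standard dimensionless ratio: outer diameter over wall thickness."""
    return outer_diameter_m / wall_thickness_m

def end_expansion_m(P_newton, f_newton_per_m, E_pa, area_m2):
    """End expansion = integral of P(x)/(E*A) over [0, L_A], with P decaying
    linearly to zero at the virtual anchor length L_A = P/f."""
    L_A = P_newton / f_newton_per_m              # virtual anchor length [m]
    return P_newton * L_A / (2.0 * E_pa * area_m2), L_A

D, t = 0.508, 0.0254                             # 20-inch pipe, 1-inch wall
A = math.pi * (D - t) * t                        # thin-wall steel area [m^2]
exp_m, L_A = end_expansion_m(5.0e5, 1.0e3, 207e9, A)
print(f"SDR = {sdr(D, t):.0f}, L_A = {L_A:.0f} m, expansion = {exp_m*1000:.1f} mm")
```

A chart of expansion against SDR for a family of anchor lengths is then a quick lookup replacing this calculation in early design.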

  13. Consensus between Pipelines in Structural Brain Networks

    PubMed Central

    Parker, Christopher S.; Deligianni, Fani; Cardoso, M. Jorge; Daga, Pankaj; Modat, Marc; Dayan, Michael; Clark, Chris A.

    2014-01-01

    Structural brain networks may be reconstructed from diffusion MRI tractography data and have great potential to further our understanding of the topological organisation of brain structure in health and disease. Network reconstruction is complex and involves a series of processing methods including anatomical parcellation, registration, fiber orientation estimation and whole-brain fiber tractography. Methodological choices at each stage can affect the anatomical accuracy and graph theoretical properties of the reconstructed networks, meaning applying different combinations in a network reconstruction pipeline may produce substantially different networks. Furthermore, the choice of which connections are considered important is unclear. In this study, we assessed the similarity between structural networks obtained using two independent state-of-the-art reconstruction pipelines. We aimed to quantify network similarity and identify the core connections emerging most robustly in both pipelines. Similarity of network connections was compared between pipelines employing different atlases by merging parcels to a common and equivalent node scale. We found a high agreement between the networks across a range of fiber density thresholds. In addition, we identified a robust core of highly connected regions coinciding with a peak in similarity across network density thresholds, and replicated these results with atlases at different node scales. The binary network properties of these core connections were similar between pipelines but showed some differences in atlases across node scales. This study demonstrates the utility of applying multiple structural network reconstruction pipelines to diffusion data in order to identify the most important connections for further study. PMID:25356977
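The cross-pipeline agreement idea can be illustrated with a toy edge-set Jaccard index computed on two weighted matrices thresholded to a common density; the study's own similarity measure and real tractography-derived connectomes differ, so this is only a sketch:

```python
# Toy sketch: threshold two weighted "connectomes" to the same edge
# density, then measure edge-set agreement with a Jaccard index.
import numpy as np

def binarize_top_density(W, density):
    """Keep the strongest `density` fraction of upper-triangular edges."""
    n = W.shape[0]
    triu = np.triu_indices(n, k=1)
    k = max(1, int(round(density * len(triu[0]))))
    thresh = np.sort(W[triu])[-k]                # k-th largest edge weight
    return np.triu((W >= thresh).astype(int), k=1)

def jaccard(A, B):
    """Edge-set Jaccard index between two binary upper-triangular networks."""
    inter = np.logical_and(A, B).sum()
    union = np.logical_or(A, B).sum()
    return inter / union

rng = np.random.default_rng(0)
W1 = rng.random((6, 6))
W2 = W1 + 0.1 * rng.random((6, 6))               # a correlated "second pipeline"
A = binarize_top_density(W1, 0.3)
B = binarize_top_density(W2, 0.3)
print(f"Jaccard similarity at 30% density: {jaccard(A, B):.2f}")
```

Sweeping the density parameter reproduces the study's strategy of comparing networks across a range of fiber density thresholds.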

  14. The standard operating procedure of the DOE-JGI Microbial Genome Annotation Pipeline (MGAP v.4).

    PubMed

    Huntemann, Marcel; Ivanova, Natalia N; Mavromatis, Konstantinos; Tripp, H James; Paez-Espino, David; Palaniappan, Krishnaveni; Szeto, Ernest; Pillay, Manoj; Chen, I-Min A; Pati, Amrita; Nielsen, Torben; Markowitz, Victor M; Kyrpides, Nikos C

    2015-01-01

    The DOE-JGI Microbial Genome Annotation Pipeline performs structural and functional annotation of microbial genomes that are further included into the Integrated Microbial Genome comparative analysis system. MGAP is applied to assembled nucleotide sequence datasets that are provided via the IMG submission site. Dataset submission for annotation first requires project and associated metadata description in GOLD. The MGAP sequence data processing consists of feature prediction including identification of protein-coding genes, non-coding RNAs and regulatory RNA features, as well as CRISPR elements. Structural annotation is followed by assignment of protein product names and functions.

  15. Expansion of the U.S. Natural Gas Pipeline Network

    EIA Publications

    2009-01-01

    Additions in 2008 and Projects through 2011. This report examines new natural gas pipeline capacity added to the U.S. natural gas pipeline system during 2008. In addition, it discusses and analyzes proposed natural gas pipeline projects that may be developed between 2009 and 2011, and the market factors supporting these initiatives.

  16. 75 FR 32836 - Pipeline Safety: Workshop on Public Awareness Programs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-09

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... American Public Gas Association Association of Oil Pipelines American Petroleum Institute Interstate... the pipeline industry). Hazardous Liquid Gas Transmission/Gathering Natural Gas Distribution (10...

  17. The Expanding Diversity of Mycobacterium tuberculosis Drug Targets.

    PubMed

    Wellington, Samantha; Hung, Deborah T

    2018-05-11

    After decades of relative inactivity, a large increase in efforts to discover antitubercular therapeutics has brought insights into the biology of Mycobacterium tuberculosis (Mtb) and promising new drugs such as bedaquiline, which inhibits ATP synthase, and the nitroimidazoles delamanid and pretomanid, which inhibit both mycolic acid synthesis and energy production. Despite these advances, the drug discovery pipeline remains underpopulated. The field desperately needs compounds with novel mechanisms of action capable of inhibiting multi- and extensively drug-resistant Mtb (M/XDR-TB) and, potentially, nonreplicating Mtb with the hope of shortening the duration of required therapy. New knowledge about Mtb, along with new methods and technologies, has driven exploration into novel target areas, such as energy production and central metabolism, that diverge from the classical targets in macromolecular synthesis. Here, we review new small molecule drug candidates that act on these novel targets to highlight the methods and perspectives advancing the field. These new targets bring with them the aspiration of shortening treatment duration as well as a pipeline of effective regimens against XDR-TB, positioning Mtb drug discovery to become a model for anti-infective discovery.

  18. Identification of tissue-specific targeting peptide

    NASA Astrophysics Data System (ADS)

    Jung, Eunkyoung; Lee, Nam Kyung; Kang, Sang-Kee; Choi, Seung-Hoon; Kim, Daejin; Park, Kisoo; Choi, Kihang; Choi, Yun-Jaie; Jung, Dong Hyun

    2012-11-01

    Using the phage display technique, we identified tissue-targeting peptide sets that recognize specific tissues (bone-marrow dendritic cells, kidney, liver, lung, spleen and visceral adipose tissue). To rapidly evaluate tissue-specific targeting peptides, we performed machine learning studies to predict the tissue-specific targeting activity of peptides from peptide sequence information using four machine learning models, and isolated groups of peptides capable of mediating selective targeting to specific tissues. As a representative liver-specific targeting sequence, the peptide "DKNLQLH" was selected by sequence similarity analysis. This peptide has a high degree of homology with protein ligands that can interact with corresponding membrane counterparts. We anticipate that our models will be applicable to the prediction of tissue-specific targeting peptides which can recognize the endothelial markers of target tissues.

  19. Bio-oil transport by pipeline: a techno-economic assessment.

    PubMed

    Pootakham, Thanyakarn; Kumar, Amit

    2010-09-01

    Bio-oil, produced by fast pyrolysis of biomass, has high energy density compared to 'as received' biomass. The study assesses and compares the cost of transportation ($/liter of bio-oil) of bio-oil by pipeline and truck. The fixed and variable cost components of transportation of bio-oil at a pipeline capacity of 560 m³/day and over a distance of 100 km are $0.0423/m³ and $0.1201/m³/km, respectively. Pipeline transportation of bio-oil costs less than transportation by liquid tank truck (load capacity 30 m³) and super B-train trailer (load capacity 60 m³) above pipeline capacities of 1000 and 1700 m³/day, respectively. When the transportation distance is greater than 100 km, bio-oil must be heated at booster stations. When transporting bio-oil by pipeline over a distance of 400 km, minimum pipeline capacities of 1150 and 2000 m³/day are required to compete economically with liquid tank trucks and super B-train tank trailers, respectively. Copyright 2010 Elsevier Ltd. All rights reserved.
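The comparison logic in the abstract reduces to a fixed component plus a distance-proportional component per cubic meter, with the pipeline winning once capacity spreads its fixed cost thinly enough. The coefficients below are illustrative placeholders, not the study's fitted values:

```python
# Sketch of the pipeline-vs-truck unit-cost comparison
# (illustrative coefficients, not the study's values).
def unit_cost(fixed_per_m3, variable_per_m3_km, distance_km):
    """Transport cost in $ per m^3 over a given distance."""
    return fixed_per_m3 + variable_per_m3_km * distance_km

pipeline = unit_cost(fixed_per_m3=4.0, variable_per_m3_km=0.002, distance_km=100)
truck    = unit_cost(fixed_per_m3=1.0, variable_per_m3_km=0.060, distance_km=100)
print(f"pipeline: ${pipeline:.2f}/m^3, truck: ${truck:.2f}/m^3")
```

With a high fixed cost but a low per-kilometer cost, the pipeline's advantage grows with both distance and throughput, which is why the study reports minimum competitive capacities rising with distance.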

  20. Dynamic Black-Level Correction and Artifact Flagging in the Kepler Data Pipeline

    NASA Technical Reports Server (NTRS)

    Clarke, B. D.; Kolodziejczak, J. J.; Caldwell, D. A.

    2013-01-01

    Instrument-induced artifacts in the raw Kepler pixel data include time-varying crosstalk from the fine guidance sensor (FGS) clock signals, manifestations of drifting moiré pattern as locally correlated nonstationary noise, and rolling bands in the images, which find their way into the calibrated pixel time series and ultimately into the calibrated target flux time series. Using a combination of raw science pixel data, full frame images, reverse-clocked pixel data and ancillary temperature data, the Kepler pipeline models and removes the FGS crosstalk artifacts by dynamically adjusting the black-level correction. By examining the residuals to the model fits, the pipeline detects and flags spatial regions and time intervals of strong time-varying black level (rolling bands) on a per-row, per-cadence basis. These flags are made available to downstream users of the data, since the uncorrected rolling-band artifacts could complicate processing or lead to misinterpretation of instrument behavior as stellar. This model fitting and artifact flagging is performed within a new stand-alone pipeline module called Dynablack. We discuss the implementation of Dynablack in the Kepler data pipeline and present results regarding the improvement in calibrated pixels and the expected improvement in cotrending performance as a result of including FGS corrections in the calibration. We also discuss the effectiveness of the rolling-band flagging for downstream users and illustrate with some affected light curves.

  1. Flame: A Flexible Data Reduction Pipeline for Near-Infrared and Optical Spectroscopy

    NASA Astrophysics Data System (ADS)

    Belli, Sirio; Contursi, Alessandra; Davies, Richard I.

    2018-05-01

    We present flame, a pipeline for reducing spectroscopic observations obtained with multi-slit near-infrared and optical instruments. Because of its flexible design, flame can be easily applied to data obtained with a wide variety of spectrographs. The flexibility is due to a modular architecture, which allows changes and customizations to the pipeline and relegates the instrument-specific parts to a single module. At the core of the data reduction is the transformation from observed pixel coordinates (x, y) to rectified coordinates (λ, γ). This transformation consists of the polynomial functions λ(x, y) and γ(x, y), which are derived from arc or sky emission lines and from slit-edge tracing, respectively. The use of 2D transformations allows one to wavelength-calibrate and rectify the data using just one interpolation step. Furthermore, the γ(x, y) transformation also includes the spatial misalignment between frames, which can be measured from a reference star observed simultaneously with the science targets. The misalignment can then be fully corrected during the rectification, without having to further resample the data. Sky subtraction can be performed via nodding and/or modeling of the sky spectrum; the combination of the two methods typically yields the best results. We illustrate the pipeline by showing examples of data reduction for a near-infrared instrument (LUCI at the Large Binocular Telescope) and an optical one (LRIS at the Keck telescope).
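The core rectification step, fitting λ(x, y) as a low-order 2D polynomial to arc-line detections and then evaluating it anywhere on the detector, might be sketched like this. The detections are toy values and NumPy's polynomial utilities stand in for flame's own fitting code:

```python
# Sketch of fitting a 2D polynomial wavelength solution lambda(x, y)
# to toy arc-line detections (pixel positions with known wavelengths).
import numpy as np
from numpy.polynomial import polynomial as P

x = np.array([10.0, 200.0, 400.0, 10.0, 200.0, 400.0])
y = np.array([5.0, 5.0, 5.0, 60.0, 60.0, 60.0])
lam = np.array([500.1, 550.0, 600.2, 500.3, 550.1, 600.4])  # nm

# Fit lambda(x, y) = sum_ij c[i, j] x^i y^j with a low-order 2D polynomial.
deg_x, deg_y = 2, 1
A = np.stack([x**i * y**j for i in range(deg_x + 1)
                          for j in range(deg_y + 1)], axis=1)
coeffs, *_ = np.linalg.lstsq(A, lam, rcond=None)
c = coeffs.reshape(deg_x + 1, deg_y + 1)

# Evaluate the transformation anywhere on the detector in one step.
print(round(float(P.polyval2d(200.0, 30.0, c)), 1))  # ≈ 550.0
```

With both λ(x, y) and the analogous γ(x, y) in hand, the frame can be resampled onto the rectified (λ, γ) grid in a single interpolation, which is the design point the abstract emphasizes.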

  2. UXO detection and identification based on intrinsic target polarizabilities: A case history

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gasperikova, E.; Smith, J.T.; Morrison, H.F.

    2008-07-15

    Electromagnetic induction data parameterized in time-dependent object intrinsic polarizabilities allow discrimination of unexploded ordnance (UXO) from false targets (scrap metal). Data from a cart-mounted system designed for discrimination of UXO with 20 mm to 155 mm diameters are used. Discrimination of UXO from irregular scrap metal is based on the principal dipole polarizabilities of a target. A near-intact UXO displays a single major polarizability coincident with the long axis of the object and two equal smaller transverse polarizabilities, whereas metal scraps have distinct polarizability signatures that rarely mimic those of elongated symmetric bodies. Based on a training data set of known targets, object identification was made by estimating the probability that an object is a single UXO. Our test survey took place on a military base where both 4.2-inch mortar shells and scrap metal were present. The results show that we detected and discriminated correctly all 4.2-inch mortars, and in that process we added 7% and 17%, respectively, of dry holes (digging scrap) to the total number of excavations in two different survey modes. We also demonstrated a mode of operation that might be more cost effective than the current practice.

  3. Trans-Proteomic Pipeline, a standardized data processing pipeline for large-scale reproducible proteomics informatics

    PubMed Central

    Deutsch, Eric W.; Mendoza, Luis; Shteynberg, David; Slagel, Joseph; Sun, Zhi; Moritz, Robert L.

    2015-01-01

    Democratization of genomics technologies has enabled the rapid determination of genotypes. More recently the democratization of comprehensive proteomics technologies is enabling the determination of the cellular phenotype and the molecular events that define its dynamic state. Core proteomic technologies include mass spectrometry to define protein sequence, protein:protein interactions, and protein post-translational modifications. Key enabling technologies for proteomics are bioinformatic pipelines to identify, quantitate, and summarize these events. The Trans-Proteomics Pipeline (TPP) is a robust open-source standardized data processing pipeline for large-scale reproducible quantitative mass spectrometry proteomics. It supports all major operating systems and instrument vendors via open data formats. Here we provide a review of the overall proteomics workflow supported by the TPP, its major tools, and how it can be used in its various modes from desktop to cloud computing. We describe new features for the TPP, including data visualization functionality. We conclude by describing some common perils that affect the analysis of tandem mass spectrometry datasets, as well as some major upcoming features. PMID:25631240

  4. Trans-Proteomic Pipeline, a standardized data processing pipeline for large-scale reproducible proteomics informatics.

    PubMed

    Deutsch, Eric W; Mendoza, Luis; Shteynberg, David; Slagel, Joseph; Sun, Zhi; Moritz, Robert L

    2015-08-01

    Democratization of genomics technologies has enabled the rapid determination of genotypes. More recently the democratization of comprehensive proteomics technologies is enabling the determination of the cellular phenotype and the molecular events that define its dynamic state. Core proteomic technologies include MS to define protein sequence, protein:protein interactions, and protein PTMs. Key enabling technologies for proteomics are bioinformatic pipelines to identify, quantitate, and summarize these events. The Trans-Proteomics Pipeline (TPP) is a robust open-source standardized data processing pipeline for large-scale reproducible quantitative MS proteomics. It supports all major operating systems and instrument vendors via open data formats. Here, we provide a review of the overall proteomics workflow supported by the TPP, its major tools, and how it can be used in its various modes from desktop to cloud computing. We describe new features for the TPP, including data visualization functionality. We conclude by describing some common perils that affect the analysis of MS/MS datasets, as well as some major upcoming features. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Vitamin D Pathway Status and the Identification of Target Genes in the Mouse Mammary Gland

    DTIC Science & Technology

    2013-01-01

    [Report-form extract] Award Number: W81XWH-11-1-0152. Title: Vitamin D pathway status and the identification of target genes in the mouse mammary gland. Report date: December 2012. Cited reference: Palmer HG et al., "The vitamin D receptor is a Wnt effector that controls hair follicle differentiation and specifies tumor type in adult epidermis."

  6. Data processing pipeline for Herschel HIFI

    NASA Astrophysics Data System (ADS)

    Shipman, R. F.; Beaulieu, S. F.; Teyssier, D.; Morris, P.; Rengel, M.; McCoey, C.; Edwards, K.; Kester, D.; Lorenzani, A.; Coeur-Joly, O.; Melchior, M.; Xie, J.; Sanchez, E.; Zaal, P.; Avruch, I.; Borys, C.; Braine, J.; Comito, C.; Delforge, B.; Herpin, F.; Hoac, A.; Kwon, W.; Lord, S. D.; Marston, A.; Mueller, M.; Olberg, M.; Ossenkopf, V.; Puga, E.; Akyilmaz-Yabaci, M.

    2017-12-01

    Context. The HIFI instrument on the Herschel Space Observatory performed over 9100 astronomical observations, almost 900 of which were calibration observations in the course of the nearly four-year Herschel mission. The data from each observation had to be converted from raw telemetry into calibrated products and were included in the Herschel Science Archive. Aims: The HIFI pipeline was designed to provide robust conversion from raw telemetry into calibrated data throughout all phases of the HIFI missions. Pre-launch laboratory testing was supported as were routine mission operations. Methods: A modular software design allowed components to be easily added, removed, amended and/or extended as the understanding of the HIFI data developed during and after mission operations. Results: The HIFI pipeline processed data from all HIFI observing modes within the Herschel automated processing environment as well as within an interactive environment. The same software can be used by the general astronomical community to reprocess any standard HIFI observation. The pipeline also recorded the consistency of processing results and provided automated quality reports. Many pipeline modules were in use since the HIFI pre-launch instrument level testing. Conclusions: Processing in steps facilitated data analysis to discover and address instrument artefacts and uncertainties. The availability of the same pipeline components from pre-launch throughout the mission made for well-understood, tested, and stable processing. A smooth transition from one phase to the next significantly enhanced processing reliability and robustness. Herschel was an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

  7. Kepler Science Operations Center Pipeline Framework

    NASA Technical Reports Server (NTRS)

    Klaus, Todd C.; McCauliff, Sean; Cote, Miles T.; Girouard, Forrest R.; Wohler, Bill; Allen, Christopher; Middour, Christopher; Caldwell, Douglas A.; Jenkins, Jon M.

    2010-01-01

    The Kepler mission is designed to continuously monitor up to 170,000 stars at a 30 minute cadence for 3.5 years searching for Earth-size planets. The data are processed at the Science Operations Center (SOC) at NASA Ames Research Center. Because of the large volume of data and the memory and CPU-intensive nature of the analysis, significant computing hardware is required. We have developed generic pipeline framework software that is used to distribute and synchronize the processing across a cluster of CPUs and to manage the resulting products. The framework is written in Java and is therefore platform-independent, and scales from a single, standalone workstation (for development and research on small data sets) to a full cluster of homogeneous or heterogeneous hardware with minimal configuration changes. A plug-in architecture provides customized control of the unit of work without the need to modify the framework itself. Distributed transaction services provide for atomic storage of pipeline products for a unit of work across a relational database and the custom Kepler DB. Generic parameter management and data accountability services are provided to record the parameter values, software versions, and other meta-data used for each pipeline execution. A graphical console allows for the configuration, execution, and monitoring of pipelines. An alert and metrics subsystem is used to monitor the health and performance of the pipeline. The framework was developed for the Kepler project based on Kepler requirements, but the framework itself is generic and could be used for a variety of applications where these features are needed.

  8. Experimental investigation for an isolation technique on conducting the electromechanical impedance method in high-temperature pipeline facilities

    NASA Astrophysics Data System (ADS)

    Na, Wongi S.; Lee, Hyeonseok

    2016-11-01

    In general, the pipelines within a nuclear power plant facility may experience high temperatures of up to several hundred degrees. It is therefore vital to monitor these pipes to prevent leakage of radioactive substances, which could have catastrophic consequences for the surrounding environment. Over the years, one of the structural health monitoring techniques, known as the electromechanical impedance (EMI) technique, has attracted great interest in various fields including civil infrastructure and mechanical and aerospace structures. Although it has the notable advantage that a single piezoelectric transducer can act simultaneously as both sensor and actuator, the transducer's low Curie temperature makes it difficult to apply the EMI technique in high-temperature environments. To overcome this problem, this study presents a method that avoids attaching the piezoelectric transducer directly onto the target structure, instead coupling it through a metal wire, for damage detection at high temperature. By shifting the frequency to compensate for the signature changes caused by variations in temperature, the experimental results indicate that damage identification remains successful above 200 °C, making the metal-wire method suitable for the EMI technique in high-temperature environments.

  9. IMPACT: a whole-exome sequencing analysis pipeline for integrating molecular profiles with actionable therapeutics in clinical samples.

    PubMed

    Hintzsche, Jennifer; Kim, Jihye; Yadav, Vinod; Amato, Carol; Robinson, Steven E; Seelenfreund, Eric; Shellman, Yiqun; Wisell, Joshua; Applegate, Allison; McCarter, Martin; Box, Neil; Tentler, John; De, Subhajyoti; Robinson, William A; Tan, Aik Choon

    2016-07-01

    Currently, there is a disconnect between finding a patient's relevant molecular profile and predicting actionable therapeutics. Here we develop and implement the Integrating Molecular Profiles with Actionable Therapeutics (IMPACT) analysis pipeline, linking variants detected from whole-exome sequencing (WES) to actionable therapeutics. The IMPACT pipeline contains four analytical modules: detecting somatic variants, calling copy number alterations, predicting drugs against deleterious variants, and analyzing tumor heterogeneity. We tested the IMPACT pipeline on whole-exome sequencing data from The Cancer Genome Atlas (TCGA) lung adenocarcinoma samples with known EGFR mutations. We also used IMPACT to analyze melanoma patient tumor samples before treatment, after BRAF-inhibitor treatment, and after BRAF- and MEK-inhibitor treatment. IMPACT correctly identified the known EGFR mutations in the TCGA lung adenocarcinoma samples and linked them to the appropriate Food and Drug Administration (FDA)-approved EGFR inhibitors. For the melanoma patient samples, we identified NRAS p.Q61K as an acquired resistance mutation to BRAF-inhibitor treatment, and CDKN2A deletion as a novel acquired resistance mutation to BRAF/MEK inhibition. The IMPACT analysis pipeline links these somatic variants to actionable therapeutics. We observed the clonal dynamics in the tumor samples after the various treatments. We showed that IMPACT not only helped in the successful prioritization of clinically relevant variants but also linked these variants to possible targeted therapies. IMPACT provides a new bioinformatics strategy to delineate candidate somatic variants and actionable therapies. This approach can be applied to other patient tumor samples to discover effective drug targets for personalized medicine. IMPACT is publicly available at http://tanlab.ucdenver.edu/IMPACT.

  10. 76 FR 35130 - Pipeline Safety: Control Room Management/Human Factors

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-16

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Parts...: Control Room Management/Human Factors AGENCY: Pipeline and Hazardous Materials Safety Administration... safety standards, risk assessments, and safety policies for natural gas pipelines and for hazardous...

  11. 76 FR 5494 - Pipeline Safety: Mechanical Fitting Failure Reporting Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-01

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... Safety: Mechanical Fitting Failure Reporting Requirements AGENCY: Pipeline and Hazardous Materials Safety... tightening. A widely accepted industry guidance document, Gas Pipeline Technical Committee (GPTC) Guide, does...

  12. A Conceptual Model of the Air Force Logistics Pipeline

    DTIC Science & Technology

    1989-09-01

    Excerpts from the table of contents include: The Contracting Process; Industrial Capacity; The Disposal Pipeline Subsystem; Collective Pipeline Models; Explosion of "Industry," Acquisition and Production Process; and First Level Explosion of "Attrition," the Disposal Process. The report draws its definition of "pipeline stock" from the dictionary of terminology and phrases published by The American Production and Inventory Control Society (APICS).

  13. A systematic approach to prioritize drug targets using machine learning, a molecular descriptor-based classification model, and high-throughput screening of plant derived molecules: a case study in oral cancer.

    PubMed

    Randhawa, Vinay; Kumar Singh, Anil; Acharya, Vishal

    2015-12-01

    Systems-biology-inspired identification of drug targets and machine learning-based screening of small molecules that modulate their activity have the potential to revolutionize modern drug discovery by complementing conventional methods. To exploit the effectiveness of such pipelines, we first analyzed the dysregulated gene pairs between control and tumor samples and then implemented an ensemble-based feature selection approach to prioritize targets in oral squamous cell carcinoma (OSCC) for therapeutic exploration. Based on the structural information of known inhibitors of CXCR4, one of the best targets identified in this study, feature selection was implemented to identify optimal structural features (molecular descriptors), from which a classification model was generated. The CXCR4-centered, descriptor-based classification model was then used to screen a repository of plant-derived small molecules for potential inhibitors. The application of our methodology may assist in the effective selection of the best targets, including some that may previously have been overlooked, which in turn could lead to the development of new oral cancer medications. The small molecules identified in this study are ideal candidates for trials as potential novel anti-oral-cancer agents. Importantly, the distinct steps of this study may serve as a reference for the analysis of other complex human diseases.
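    A descriptor-based screen of the kind described above can be illustrated with a toy nearest-centroid classifier over hypothetical two-dimensional descriptor vectors. All data here are invented for illustration; the study's actual model and descriptors differ:

```python
import math

# Toy descriptor-based screen (all data invented): molecules are represented
# by descriptor vectors; a nearest-centroid model trained on known
# actives/inactives flags candidate inhibitors.
def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

actives   = [[0.9, 0.1], [0.8, 0.2]]   # descriptors of known inhibitors
inactives = [[0.1, 0.9], [0.2, 0.7]]
centroids = {1: centroid(actives), 0: centroid(inactives)}

def classify(mol):
    # label of the nearest class centroid (1 = active, 0 = inactive)
    return min(centroids, key=lambda label: math.dist(mol, centroids[label]))

library = [[0.85, 0.15], [0.15, 0.8]]  # candidate small-molecule descriptors
hits = [i for i, mol in enumerate(library) if classify(mol) == 1]
print(hits)  # [0]
```

    Real pipelines replace the centroid rule with a trained classifier and use many more descriptors, but the screening loop has the same shape: featurize each library compound, classify, and keep the predicted actives.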

  14. Optimal Energy Consumption Analysis of Natural Gas Pipeline

    PubMed Central

    Liu, Enbin; Li, Changjun; Yang, Yi

    2014-01-01

    There are many compressor stations along long-distance natural gas pipelines. Natural gas can be transported using different boot programs and import pressures, combined with temperature control parameters, and different transport schemes have correspondingly different energy consumptions. At present, the operating parameters of many pipelines are determined empirically by dispatchers, resulting in high energy consumption that is inconsistent with energy-reduction policies. Therefore, based on a full understanding of the actual needs of pipeline companies, we introduce production-unit consumption indicators to establish an objective function aimed at lowering energy consumption. A dynamic programming method is used to solve the model, and calculation software was prepared so that the solution process is quick and efficient. Using the established optimization methods, we analyzed the energy savings for the XQ gas pipeline. By optimizing the boot program, the import station pressure, and the temperature parameters, we achieved the optimal energy consumption. Comparison with the measured energy consumption shows that the pipeline has the potential to reduce energy consumption by 11 to 16 percent. PMID:24955410
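    The dynamic-programming idea, choosing each station's outlet pressure to minimize cumulative compression energy, can be sketched with a toy model. The pressure levels and the energy and friction-loss formulas below are invented for illustration and are not the paper's model:

```python
import math

# Hypothetical discrete outlet-pressure levels at each of 3 compressor stations.
pressure_levels = [4.0, 5.0, 6.0]   # MPa
n_stations = 3

def energy(p_in, p_out):
    """Toy compression-energy model: grows with the pressure ratio."""
    if p_out <= p_in:
        return 0.0
    return 100.0 * math.log(p_out / p_in)

def pressure_drop(p):
    """Toy friction loss along the segment after a station."""
    return 0.8 * p ** 0.5  # purely illustrative

def optimize(p_start):
    # DP state: best[p] = minimal total energy to reach a station inlet at pressure p
    best = {round(p_start, 3): 0.0}
    for _ in range(n_stations):
        nxt = {}
        for p_in, cost in best.items():
            for p_out in pressure_levels:
                if p_out < p_in:
                    continue  # infeasible: station cannot lower pressure here
                arrive = round(p_out - pressure_drop(p_out), 3)
                c = cost + energy(p_in, p_out)
                if arrive not in nxt or c < nxt[arrive]:
                    nxt[arrive] = c
        best = nxt
    return min(best.values())

print(optimize(3.0))
```

    The real problem adds flow rates, temperature, and equipment constraints, but the stage-by-stage structure, where each station's decision depends only on the arriving state, is what makes dynamic programming applicable.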

  15. Amateur Image Pipeline Processing using Python plus PyRAF

    NASA Astrophysics Data System (ADS)

    Green, Wayne

    2012-05-01

    A template pipeline spanning observation planning through publishing is offered as a basis for establishing a long-term observing program. The data-reduction pipeline encapsulates all policies and procedures, providing an accountable framework for data analysis and a teaching framework for IRAF. This paper introduces the technical details of a complete pipeline-processing environment using Python, PyRAF, and a few other languages. The pipeline records all processing decisions within an auditable framework and quickly handles the heavy lifting of image processing. It also serves as an excellent teaching environment for astronomical data management and IRAF reduction decisions.

  16. STARR: shortwave-targeted agile Raman robot for the detection and identification of emplaced explosives

    NASA Astrophysics Data System (ADS)

    Gomer, Nathaniel R.; Gardner, Charles W.

    2014-05-01

    In order to combat the threat of emplaced explosives (land mines, etc.), ChemImage Sensor Systems (CISS) has developed a multi-sensor, robot-mounted system capable of identification and confirmation of potential threats. The system, known as STARR (Shortwave-infrared Targeted Agile Raman Robot), uses shortwave-infrared spectroscopy for the identification of potential threats, combined with a visible short-range standoff Raman hyperspectral imaging (HSI) system for material confirmation. The entire system is mounted on a Talon UGV (Unmanned Ground Vehicle), giving the sensor an increased area search rate and reducing the risk of injury to the operator. The Raman HSI system uses a fiber array spectral translator (FAST) to acquire high-quality Raman chemical images, allowing for increased sensitivity and improved specificity. An overview of the design and operation of the system is presented, along with initial detection results from the fusion sensor.

  17. Improving Kepler Pipeline Sensitivity with Pixel Response Function Photometry.

    NASA Astrophysics Data System (ADS)

    Morris, Robert L.; Bryson, Steve; Jenkins, Jon Michael; Smith, Jeffrey C

    2014-06-01

    We present the results of our investigation into the feasibility and expected benefits of implementing PRF-fitting photometry in the Kepler Science Processing Pipeline. The Kepler Pixel Response Function (PRF) describes the expected system response to a point source at infinity and includes the effects of the optical point spread function, the CCD detector responsivity function, and spacecraft pointing jitter. Planet detection in the Kepler pipeline is currently based on simple aperture photometry (SAP), which is most effective when applied to uncrowded bright stars. Its effectiveness diminishes rapidly as target brightness decreases relative to the effects of noise sources such as detector electronics, background stars, and image motion. In contrast, PRF photometry is based on fitting an explicit model of image formation to the data and naturally accounts for image motion and contributions of background stars. The key to obtaining high-quality photometry from PRF fitting is a high-quality model of the system's PRF, while the key to efficiently processing the large number of Kepler targets is an accurate catalog and accurate mapping of celestial coordinates onto the focal plane. If the CCD coordinates of stellar centroids are known a priori then the problem of PRF fitting becomes linear. A model of the Kepler PRF was constructed at the time of spacecraft commissioning by fitting piecewise polynomial surfaces to data from dithered full frame images. While this model accurately captured the initial state of the system, the PRF has evolved dynamically since then and has been seen to deviate significantly from the initial (static) model. We construct a dynamic PRF model which is then used to recover photometry for all targets of interest. Both simulation tests and results from Kepler flight data demonstrate the effectiveness of our approach. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA’s Science
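    The remark above that PRF fitting becomes linear once stellar centroids are known can be illustrated with a one-dimensional toy: each star contributes a known profile, so the fluxes solve an ordinary least-squares problem. A Gaussian stands in for the real Kepler PRF, and all numbers are invented:

```python
import numpy as np

def gaussian_prf(x, center, sigma=1.0):
    """Stand-in point response function (the real Kepler PRF is tabulated)."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

x = np.arange(20, dtype=float)      # pixel coordinates
centers = [5.0, 12.0]               # assumed-known stellar centroids
true_flux = np.array([100.0, 40.0])

# Design matrix: one column per star, each a shifted copy of the PRF.
A = np.column_stack([gaussian_prf(x, c) for c in centers])
pixels = A @ true_flux              # noiseless synthetic image

# With centroids fixed, fluxes are the linear least-squares solution.
flux_fit, *_ = np.linalg.lstsq(A, pixels, rcond=None)
print(np.round(flux_fit, 6))        # recovers [100., 40.]
```

    With noise and crowding the system stays linear but becomes ill-conditioned when PRF columns overlap heavily, which is why an accurate catalog and astrometry matter so much in practice.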

  18. 76 FR 53086 - Pipeline Safety: Safety of Gas Transmission Pipelines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-25

    ... and external corrosion (subpart I of 49 CFR part 192). Pressure tests of new pipelines (subpart J of..., integrate and validate data (e.g., review of mill inspection reports, hydrostatic tests reports, pipe leaks... chemical properties, mill inspection reports, hydrostatic tests reports, coating type and condition, pipe...

  19. A novel algorithm for finding optimal driver nodes to target control complex networks and its applications for drug targets identification.

    PubMed

    Guo, Wei-Feng; Zhang, Shao-Wu; Shi, Qian-Qian; Zhang, Cheng-Ming; Zeng, Tao; Chen, Luonan

    2018-01-19

    Advances in the target control of complex networks not only offer new insights into the general control dynamics of complex systems, but are also useful for practical applications in systems biology, such as discovering new therapeutic targets for disease intervention. In many cases, e.g. drug-target identification in biological networks, we require target control of a subset of nodes (i.e., disease-associated genes) with minimum cost, and we further expect as many driver nodes as possible to coincide with a set of well-selected network nodes (i.e., prior-known drug-target genes). Motivated by this, we pose and address a new and practical problem, the target control problem with objectives-guided optimization (TCO): how to control the variables (targets) of interest in a system by minimizing the total number of driver nodes while maximizing the number of constrained nodes among those drivers. We design an efficient algorithm (TCOA) to find the optimal driver nodes for controlling targets in complex networks. We apply TCOA to several real-world networks, and the results show that it identifies more precise driver nodes than existing control-focus approaches. Furthermore, we have applied TCOA to two biomolecular expert-curated networks. Source code for TCOA is freely available from http://sysbio.sibcb.ac.cn/cb/chenlab/software.htm or https://github.com/WilfongGuo/guoweifeng . Previous theoretical research on full control observed that driver nodes tend to be low-degree nodes. Interestingly, for target control of biological networks we find that driver nodes tend to be high-degree nodes, which is more consistent with biological experimental observations. Furthermore, our results supply novel insights into how to efficiently target-control a complex system, and especially much evidence on the
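    A minimal sketch of the matching-based view of driver-node selection that underlies this line of work, following the classic full-control result that nodes left unmatched by a maximum matching of the directed network must be driven. The graph and code are illustrative only, not the TCOA algorithm itself:

```python
# Toy directed network: node -> successors (hypothetical graph).
edges = {
    1: [2, 3],
    2: [4],
    3: [4],
    4: [],
}
nodes = list(edges)

def max_matching():
    """Kuhn's augmenting-path algorithm on the bipartite representation."""
    match = {}                      # matched target node -> its source node
    def augment(u, seen):
        for v in edges[u]:
            if v in seen:
                continue
            seen.add(v)
            # take v if free, or re-route v's current source elsewhere
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False
    for u in nodes:
        augment(u, set())
    return match

matched_targets = set(max_matching())
drivers = [n for n in nodes if n not in matched_targets]
print(drivers)  # [1, 3]: the unmatched nodes, which need external control
```

    Here the matching 1→2, 2→4 leaves nodes 1 and 3 unmatched, so two driver nodes suffice; target control restricts attention to a subset of target nodes and, as in TCO, adds preferences over which drivers are chosen.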

  20. Statistical method to compare massive parallel sequencing pipelines.

    PubMed

    Elsensohn, M H; Leblay, N; Dimassi, S; Campan-Fournier, A; Labalme, A; Roucher-Boulez, F; Sanlaville, D; Lesca, G; Bardel, C; Roy, P

    2017-03-01

    Today, sequencing is frequently carried out by massive parallel sequencing (MPS), which drastically cuts sequencing time and expense. Nevertheless, Sanger sequencing remains the main validation method to confirm the presence of variants. The analysis of MPS data relies on several bioinformatic tools, academic or commercial. We present here a statistical method to compare MPS pipelines and test it in a comparison between an academic pipeline (BWA-GATK) and a commercial one (TMAP-NextGENe®), with and without reference to a gold standard (here, Sanger sequencing), on a panel of 41 genes in 43 epileptic patients. The method uses the number of variants to fit log-linear models for pairwise agreement between pipelines. To assess the heterogeneity of the margins and the odds ratios of agreement, four log-linear models were used: a full model, a homogeneous-margin model, a model with a single odds ratio for all patients, and a model with a single intercept. A log-linear mixed model was then fitted, treating the biological variability as a random effect. Among the 390,339 base pairs sequenced, TMAP-NextGENe® and BWA-GATK found, on average, 2253.49 and 1857.14 variants (single-nucleotide variants and indels), respectively. Against the gold standard, the pipelines had similar sensitivities (63.47% vs. 63.42%) and close but significantly different specificities (99.57% vs. 99.65%; p < 0.001). Same-trend results were obtained when only single-nucleotide variants were considered (99.98% specificity and 76.81% sensitivity for both pipelines). The method thus allows pipeline comparison and selection, and is generalizable to all types of MPS data and all pipelines.
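    The gold-standard comparison described above reduces, per sequenced position, to a 2×2 confusion count from which sensitivity and specificity follow. A minimal sketch with toy call sets (not the study's data):

```python
# Positions sequenced, true variants, and each pipeline's calls (all invented).
positions = set(range(100))
gold = {3, 17, 42, 56}              # e.g. Sanger-confirmed variants
pipeline_a = {3, 17, 42, 88}
pipeline_b = {3, 17, 56, 91, 95}

def sens_spec(calls, truth, all_pos):
    """Sensitivity and specificity of a call set against a gold standard."""
    tp = len(calls & truth)         # true positives
    fp = len(calls - truth)         # false positives
    fn = len(truth - calls)         # false negatives
    tn = len(all_pos) - tp - fp - fn  # true negatives
    return tp / (tp + fn), tn / (tn + fp)

for name, calls in [("A", pipeline_a), ("B", pipeline_b)]:
    sens, spec = sens_spec(calls, gold, positions)
    print(f"pipeline {name}: sensitivity={sens:.2f} specificity={spec:.2f}")
```

    The paper's log-linear models go further by modeling the pairwise agreement counts between pipelines across patients, but these per-pipeline confusion counts are the raw material.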