Sample records for methods analysis pipeline

  1. Frequency Spectrum Method-Based Stress Analysis for Oil Pipelines in Earthquake Disaster Areas

    PubMed Central

    Wu, Xiaonan; Lu, Hongfang; Huang, Kun; Wu, Shijuan; Qiao, Weibiao

    2015-01-01

    When a long-distance oil pipeline crosses an earthquake disaster area, inertial force and strong ground motion can cause the pipeline stress to exceed the failure limit, resulting in bending and deformation failure. To date, researchers have performed limited safety analyses of oil pipelines in earthquake disaster areas that include stress analysis. Therefore, using the spectrum method and the theory of one-dimensional beam units, CAESAR II is used to perform a dynamic earthquake analysis for an oil pipeline in the XX earthquake disaster area. This software is used to determine whether the displacement and stress of the pipeline meet the standards when subjected to a strong earthquake. After performing the numerical analysis, the primary directions of seismic action (axial, longitudinal, and horizontal displacement) and the critical section of the pipeline can be located. Feasible project enhancement suggestions based on the analysis results are proposed. The designer is able to utilize this stress analysis method to perform an ultimate design for an oil pipeline in earthquake disaster areas, thereby improving the safe operation of the pipeline. PMID:25692790

  2. Frequency spectrum method-based stress analysis for oil pipelines in earthquake disaster areas.

    PubMed

    Wu, Xiaonan; Lu, Hongfang; Huang, Kun; Wu, Shijuan; Qiao, Weibiao

    2015-01-01

    When a long-distance oil pipeline crosses an earthquake disaster area, inertial force and strong ground motion can cause the pipeline stress to exceed the failure limit, resulting in bending and deformation failure. To date, researchers have performed limited safety analyses of oil pipelines in earthquake disaster areas that include stress analysis. Therefore, using the spectrum method and the theory of one-dimensional beam units, CAESAR II is used to perform a dynamic earthquake analysis for an oil pipeline in the XX earthquake disaster area. This software is used to determine whether the displacement and stress of the pipeline meet the standards when subjected to a strong earthquake. After performing the numerical analysis, the primary directions of seismic action (axial, longitudinal, and horizontal displacement) and the critical section of the pipeline can be located. Feasible project enhancement suggestions based on the analysis results are proposed. The designer is able to utilize this stress analysis method to perform an ultimate design for an oil pipeline in earthquake disaster areas, thereby improving the safe operation of the pipeline.
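
    As a rough illustration of the spectrum-method idea behind records 1-2, the sketch below combines per-mode spectral responses with the square-root-of-sum-of-squares (SRSS) rule. All spectrum values and modal properties are hypothetical; the real analysis is performed by CAESAR II on a full beam-element pipe model.

    ```python
    import numpy as np

    # Hypothetical design spectrum: spectral acceleration (in g) vs. period (s).
    periods = np.array([0.1, 0.3, 0.5, 1.0, 2.0])
    spectral_acc = np.array([0.25, 0.40, 0.35, 0.20, 0.10])

    def sa(T):
        """Interpolated spectral acceleration (g) for a modal period T (s)."""
        return np.interp(T, periods, spectral_acc)

    # Hypothetical modal properties of a pipeline span.
    modal_periods = [0.8, 0.45, 0.22]        # s
    participation = [0.70, 0.20, 0.10]       # modal participation factors

    # Spectral displacement per mode: Sd = Sa * (T / (2*pi))^2, with Sa in m/s^2.
    modal_disp = [p * sa(T) * 9.81 * (T / (2.0 * np.pi)) ** 2
                  for T, p in zip(modal_periods, participation)]

    # SRSS combination of modal maxima, a standard spectrum-method rule.
    total_disp = float(np.sqrt(np.sum(np.square(modal_disp))))
    print(f"SRSS displacement estimate: {total_disp * 1e3:.1f} mm")
    ```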

  3. A comprehensive probabilistic analysis model of oil pipelines network based on Bayesian network

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Qin, T. X.; Jiang, B.; Huang, C.

    2018-02-01

    Oil pipeline networks are among the most important energy transportation facilities, but network accidents may result in serious disasters. Analysis models for these accidents have been established mainly with three methods: event trees, accident simulation, and Bayesian networks. Among these methods, the Bayesian network is suitable for probabilistic analysis, but not all the important influencing factors have been considered, and no deployment rule for the factors has been established. This paper proposes a probabilistic analysis model of oil pipeline networks based on a Bayesian network. Most of the important influencing factors, including key environmental conditions and emergency response, are considered in this model. Moreover, the paper also introduces a deployment rule for these factors. The model can be used in probabilistic analysis and sensitivity analysis of oil pipeline network accidents.
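
    A minimal sketch of the Bayesian-network computation the abstract describes: marginalizing over conditional probability tables to obtain an accident probability. The network structure, factor names and probabilities below are invented for illustration and are far simpler than the paper's model.

    ```python
    # Hypothetical root factors.
    p_corrosion = 0.05          # P(severe corrosion)
    p_third_party = 0.02        # P(third-party damage)
    p_poor_response = 0.10      # P(poor emergency response)

    # Conditional probability table: P(leak | corrosion, third_party).
    p_leak = {(True, True): 0.90, (True, False): 0.30,
              (False, True): 0.40, (False, False): 0.001}

    # P(disaster | leak, poor emergency response); emergency response is one
    # of the influencing factors the abstract highlights.
    p_disaster = {(True, True): 0.60, (True, False): 0.05,
                  (False, True): 0.0, (False, False): 0.0}

    # Exact inference by full enumeration over all parent states.
    total = 0.0
    for corr, p_c in ((True, p_corrosion), (False, 1 - p_corrosion)):
        for tp, p_t in ((True, p_third_party), (False, 1 - p_third_party)):
            for leak in (True, False):
                p_l = p_leak[(corr, tp)] if leak else 1 - p_leak[(corr, tp)]
                for poor, p_r in ((True, p_poor_response),
                                  (False, 1 - p_poor_response)):
                    total += p_c * p_t * p_l * p_r * p_disaster[(leak, poor)]

    print(f"P(disaster) = {total:.6f}")
    ```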

  4. Inverse Transient Analysis for Classification of Wall Thickness Variations in Pipelines

    PubMed Central

    Tuck, Jeffrey; Lee, Pedro

    2013-01-01

    Analysis of transient fluid pressure signals has been investigated as an alternative method of fault detection in pipeline systems and has shown promise in both laboratory and field trials. The advantage of the method is that it can potentially provide a fast and cost-effective means of locating faults such as leaks, blockages and pipeline wall degradation while the system remains fully operational. The only requirement is that high-speed pressure sensors are placed in contact with the fluid. Further development of the method requires detailed numerical models and an enhanced understanding of transient flow within a pipeline where variations in pipeline condition and geometry occur. One such variation commonly encountered is the degradation or thinning of pipe walls, which can increase the susceptibility of a pipeline to leak development. This paper aims to improve transient-based fault detection methods by investigating, through the analysis of laboratory experiments, how changes in pipe wall thickness affect the transient behaviour of a system. The laboratory experiments are carried out on a stainless steel pipeline of constant outside diameter, into which a pipe section of variable wall thickness is inserted. In order to detect the location and severity of these changes in wall condition within the laboratory system, an inverse transient analysis procedure is employed which considers independent variations in wavespeed and diameter. Inverse transient analyses are carried out using a genetic algorithm optimisation routine to match the response from a one-dimensional method-of-characteristics transient model to the experimental time-domain pressure responses. The accuracy of the detection technique is evaluated, and the benefits associated with various simplifying assumptions and simulation run times are investigated. It is found that, for the case investigated, changes in the wavespeed and nominal diameter of the pipeline are both important to the accuracy of the inverse analysis procedure and can be used to differentiate the observed transient behaviour caused by changes in wall thickness from that caused by other known faults such as leaks. Further application of the method to real pipelines is discussed.
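
    The inverse transient analysis described above pairs a forward transient model with a genetic algorithm that searches for the wavespeed and diameter best reproducing a measured pressure trace. The sketch below shows that loop with a toy forward model standing in for the method-of-characteristics solver; the model, parameter ranges and GA settings are all assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 200)

    # Toy forward model standing in for the one-dimensional method-of-
    # characteristics solver: a damped oscillation whose frequency and
    # amplitude depend on wavespeed a (m/s) and diameter d (m). Hypothetical.
    def forward(a, d):
        return np.exp(-2.0 * t) * np.cos(2.0 * np.pi * (a / 400.0) * t) * (0.05 / d)

    # Synthetic "measured" trace for true values a = 1200 m/s, d = 22 mm.
    measured = forward(1200.0, 0.022) + rng.normal(0.0, 0.01, t.size)

    def fitness(params):
        a, d = params
        return -np.mean((forward(a, d) - measured) ** 2)

    # Plain genetic algorithm: tournament selection, blend crossover, mutation.
    pop = np.column_stack([rng.uniform(900, 1500, 60), rng.uniform(0.01, 0.04, 60)])
    for _ in range(80):
        scores = np.array([fitness(p) for p in pop])
        children = []
        for _ in range(len(pop)):
            i, j = rng.integers(0, len(pop), 2)
            p1 = pop[i] if scores[i] > scores[j] else pop[j]
            i, j = rng.integers(0, len(pop), 2)
            p2 = pop[i] if scores[i] > scores[j] else pop[j]
            w = rng.uniform()
            child = w * p1 + (1.0 - w) * p2 + rng.normal(0.0, [5.0, 0.0005])
            children.append(np.clip(child, [900.0, 0.005], [1500.0, 0.05]))
        pop = np.array(children)

    best = pop[np.argmax([fitness(p) for p in pop])]
    print(f"estimated wavespeed = {best[0]:.0f} m/s, diameter = {best[1] * 1e3:.1f} mm")
    ```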

  5. Pipeline monitoring with unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Kochetkova, L. I.

    2018-05-01

    Pipeline leakage during transportation of combustible substances can lead to explosion and fire, causing loss of life and destruction of production and residential facilities. Continuous pipeline monitoring allows leaks to be identified in due time and measures for their elimination to be taken quickly. The paper describes a solution for identifying pipeline leakage using unmanned aerial vehicles. It is recommended to apply spectral analysis of the input RGB signal to identify pipeline damage. The application of multi-zone digital images allows potential spills of oil hydrocarbons, as well as possible soil pollution, to be detected. The method of multi-temporal digital images within the visible region makes it possible to detect changes in soil morphology for subsequent analysis. The given solution is cost efficient and reliable, reducing time and labor in comparison with other methods of pipeline monitoring.

  6. Leakage detection in galvanized iron pipelines using ensemble empirical mode decomposition analysis

    NASA Astrophysics Data System (ADS)

    Amin, Makeen; Ghazali, M. Fairusham

    2015-05-01

    There are many possible approaches to detecting leaks. Some leaks are noticeable simply because liquid appears on the surface; however, many leaks never reach the surface, and their existence has to be checked by analysing the fluid flow in the pipeline. The first step is to determine the approximate position of the leak. This can be done by isolating sections of the mains in turn and noting which section causes a drop in the flow. The next approach is to locate leaks using sensors, such as strain-gauge pressure transducers and piezoelectric sensors. The occurrence of a leak and its exact location in the pipeline can be determined using specific methods, namely the acoustic leak detection method and the transient method. The objective is to utilize signal processing techniques to analyse leaking in the pipeline. For this purpose, the ensemble empirical mode decomposition (EEMD) method is applied to collect and analyse the data.
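
    A minimal sketch of the EEMD step, assuming the PyEMD package (published on PyPI as EMD-signal); the synthetic signal stands in for a pipeline pressure trace, and the trial count and noise width are illustrative.

    ```python
    import numpy as np
    from PyEMD import EEMD   # pip install EMD-signal

    # Synthetic stand-in for a pressure signal from a galvanized iron pipeline:
    # a slow transient plus a higher-frequency leak-like component and noise.
    t = np.linspace(0.0, 1.0, 1000)
    signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)
    signal += 0.1 * np.random.default_rng(1).normal(size=t.size)

    # EEMD adds white noise over many trials and averages the resulting IMFs,
    # which suppresses the mode-mixing problem of plain EMD.
    eemd = EEMD(trials=50, noise_width=0.05)
    imfs = eemd.eemd(signal, t)

    print(f"number of IMFs: {imfs.shape[0]}")
    # Leak signatures would then be sought in the relevant IMF(s), e.g. by
    # inspecting their energy or instantaneous frequency.
    ```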

  7. Push Force Analysis of Anchor Block of the Oil and Gas Pipeline in a Single-Slope Tunnel Based on the Energy Balance Method

    PubMed Central

    Yan, Yifei; Zhang, Lisong; Yan, Xiangzhen

    2016-01-01

    In this paper, a single-slope tunnel pipeline was analysed considering the effects of vertical earth pressure, horizontal soil pressure, inner pressure, thermal expansion force and pipeline-soil friction. The concept of a stagnation point for the pipeline was proposed. Considering the deformation compatibility condition of the pipeline elbow, the push force on the anchor blocks of a single-slope tunnel pipeline was derived based on an energy method, and a theoretical formula for this force was generated. Using the analytical equation, the push force of the anchor block of an X80 large-diameter pipeline from the West-East Gas Transmission Project was determined. Meanwhile, to verify the results of the analytical method, four finite element codes were used to calculate the push force: CAESAR II, ANSYS, AutoPIPE and ALGOR. The results show that the analytical results agree well with the numerical results, with a maximum relative error of only 4.1%. Therefore, the results obtained with the analytical method can satisfy engineering requirements. PMID:26963097
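
    For orientation, the sketch below evaluates two of the axial load terms the abstract lists, the thermal expansion force and the Poisson term from inner pressure, using standard textbook formulas rather than the paper's energy-balance derivation; every value is a hypothetical, loosely X80-like placeholder.

    ```python
    import numpy as np

    # Hypothetical pipe and load parameters.
    E = 207e9            # steel Young's modulus, Pa
    alpha = 1.2e-5       # thermal expansion coefficient, 1/K
    dT = 40.0            # installation-to-operation temperature rise, K
    nu = 0.3             # Poisson's ratio
    P = 10e6             # internal pressure, Pa
    D = 1.219            # outer diameter, m
    t = 0.022            # wall thickness, m

    A = np.pi * (D - t) * t                  # thin-wall steel area, m^2
    F_thermal = E * A * alpha * dT           # fully restrained thermal force, N
    sigma_hoop = P * D / (2 * t)             # hoop stress, Pa
    F_poisson = nu * sigma_hoop * A          # axial force from Poisson effect, N

    # One common combination of restrained axial force for buried/tunnel
    # pipelines (sign conventions vary; this is illustrative only).
    F_axial = F_thermal - F_poisson
    print(f"thermal force   = {F_thermal / 1e6:.1f} MN")
    print(f"Poisson term    = {F_poisson / 1e6:.1f} MN")
    print(f"net axial force = {F_axial / 1e6:.1f} MN")
    ```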

  8. TRAPR: R Package for Statistical Analysis and Visualization of RNA-Seq Data.

    PubMed

    Lim, Jae Hyun; Lee, Soo Youn; Kim, Ju Han

    2017-03-01

    High-throughput transcriptome sequencing, also known as RNA sequencing (RNA-Seq), is a standard technology for measuring gene expression with unprecedented accuracy. Numerous Bioconductor packages have been developed for the statistical analysis of RNA-Seq data. However, these tools focus on specific aspects of the data analysis pipeline and are difficult to integrate with one another because of their disparate data structures and processing methods. They also lack visualization methods to confirm the integrity of the data and the process. In this paper, we propose an R-based RNA-Seq analysis pipeline called TRAPR, an integrated tool that facilitates the statistical analysis and visualization of RNA-Seq expression data. TRAPR provides various functions for data management, the filtering of low-quality data, normalization, transformation, statistical analysis, data visualization, and result visualization that allow researchers to build customized analysis pipelines.

  9. SNPhylo: a pipeline to construct a phylogenetic tree from huge SNP data.

    PubMed

    Lee, Tae-Ho; Guo, Hui; Wang, Xiyin; Kim, Changsoo; Paterson, Andrew H

    2014-02-26

    Phylogenetic trees are widely used for genetic and evolutionary studies in various organisms. Advanced sequencing technology has dramatically enriched the data available for constructing phylogenetic trees based on single nucleotide polymorphisms (SNPs). However, massive SNP data are difficult to analyze reliably, and there has been no ready-to-use pipeline to generate phylogenetic trees from these data. We developed a new pipeline, SNPhylo, to construct phylogenetic trees based on large SNP datasets. The pipeline enables users to construct a phylogenetic tree from three representative SNP data file formats. In addition, to increase the reliability of a tree, the pipeline has steps such as removing low-quality data and accounting for linkage disequilibrium. A maximum likelihood method for the inference of phylogeny is also adopted in the generation of a tree in our pipeline. Using SNPhylo, users can easily produce a reliable phylogenetic tree from a large SNP data file. Thus, this pipeline can help a researcher focus more on interpretation of the results of analysis of voluminous data sets, rather than on the manipulations necessary to accomplish the analysis.
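
    For a flavour of tree building from a SNP matrix, the sketch below uses simple distance-based (UPGMA-style) clustering with SciPy. This is a stand-in illustration only; SNPhylo itself filters by quality and linkage disequilibrium and infers the tree by maximum likelihood.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, dendrogram

    # Toy SNP matrix: samples x sites, genotypes coded 0/1/2. Random data.
    rng = np.random.default_rng(2)
    snps = rng.integers(0, 3, size=(6, 500))
    labels = [f"sample{i}" for i in range(6)]

    # Pairwise genetic distance: fraction of sites with differing genotypes.
    dist = pdist(snps, metric="hamming")

    # UPGMA-style tree from the distance matrix.
    tree = linkage(dist, method="average")
    dn = dendrogram(tree, labels=labels, no_plot=True)
    print("leaf order:", dn["ivl"])
    ```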

  10. Comparison study on qualitative and quantitative risk assessment methods for urban natural gas pipeline network.

    PubMed

    Han, Z Y; Weng, W G

    2011-05-15

    In this paper, qualitative and quantitative risk assessment methods for urban natural gas pipeline networks are proposed. The qualitative method comprises an index system, which includes a causation index, an inherent risk index, a consequence index and their corresponding weights. The quantitative method consists of a probability assessment, a consequence analysis and a risk evaluation. The outcome of the qualitative method is a qualitative risk value, and for the quantitative method the outcomes are individual risk and societal risk. In comparison with previous research, the qualitative method proposed in this paper is particularly suitable for urban natural gas pipeline networks, and the quantitative method takes different accident consequences into consideration, such as toxic gas diffusion, jet flame, fireball combustion and UVCE. Two sample urban natural gas pipeline networks are used to demonstrate the two methods. It is indicated that both methods can be applied in practice, and the choice between them depends on the available base data for the gas pipelines and the precision requirements of the risk assessment. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
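
    The qualitative index described above reduces to a weighted sum of sub-indices. A minimal sketch, with invented weights and scores:

    ```python
    # Weighted qualitative risk index (weights and scores are hypothetical,
    # not the paper's calibrated values).
    weights = {"causation": 0.4, "inherent": 0.3, "consequence": 0.3}

    segments = {
        "segment A": {"causation": 6.0, "inherent": 4.5, "consequence": 7.0},
        "segment B": {"causation": 3.0, "inherent": 5.0, "consequence": 4.0},
    }

    for name, scores in segments.items():
        risk = sum(weights[k] * scores[k] for k in weights)
        print(f"{name}: qualitative risk value = {risk:.2f}")
    ```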

  11. Literature Review: Theory and Application of In-Line Inspection Technologies for Oil and Gas Pipeline Girth Weld Detection

    PubMed Central

    Feng, Qingshan; Li, Rui; Nie, Baohua; Liu, Shucong; Zhao, Lianyu; Zhang, Hong

    2016-01-01

    Girth weld cracking is one of the main failure modes in oil and gas pipelines; girth weld cracking inspection has great economic and social significance for the intrinsic safety of pipelines. This paper introduces the typical girth weld defects of oil and gas pipelines and the common nondestructive testing methods, and systematically summarizes the progress of studies on the technical principles, signal analysis, defect sizing methods and inspection reliability of magnetic flux leakage (MFL) inspection, liquid ultrasonic inspection, electromagnetic acoustic transducer (EMAT) inspection and remote field eddy current (RFEC) inspection for oil and gas pipeline girth weld defects. Additionally, it introduces the new technologies of composite ultrasonic, laser ultrasonic and magnetostriction inspection, and provides a reference for the development and application of in-line inspection technology for oil and gas pipeline girth weld defects. PMID:28036016

  12. State of art of seismic design and seismic hazard analysis for oil and gas pipeline system

    NASA Astrophysics Data System (ADS)

    Liu, Aiwen; Chen, Kun; Wu, Jian

    2010-06-01

    The purpose of this paper is to adopt a uniform confidence method in both water pipeline design and oil-gas pipeline design. Based on the importance of a pipeline and the consequences of its failure, oil and gas pipelines can be classified into three pipe classes, with exceedance probabilities over 50 years of 2%, 5% and 10%, respectively. Performance-based design requires more information about ground motion, which should be obtained by evaluating seismic safety for the pipeline engineering site. Different from a city's water pipeline network, a long-distance oil and gas pipeline system is a spatially, linearly distributed system. For uniform confidence in seismic safety, a long-distance oil and gas pipeline formed of pump stations and different-class pipe segments should be considered as a whole system when analyzing seismic risk. Considering the uncertainty of earthquake magnitude, design-basis fault displacements corresponding to the different pipeline classes are proposed to improve deterministic seismic hazard analysis (DSHA). A new empirical relationship between the maximum fault displacement and the surface-wave magnitude is obtained with supplemented earthquake data from East Asia. The estimation of fault displacement for a refined oil pipeline in the Wenchuan MS 8.0 earthquake is introduced as an example in this paper.
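
    The deterministic hazard improvement hinges on an empirical magnitude-displacement regression. The sketch below uses the widely cited Wells and Coppersmith (1994) all-slip-type fit for maximum surface displacement (expressed in moment magnitude) as an illustrative stand-in; it is not the paper's new East Asia relationship, which is fitted in surface-wave magnitude.

    ```python
    # Wells & Coppersmith (1994), all slip types, maximum displacement:
    #     log10(MD) = -5.46 + 0.82 * M
    def max_fault_displacement(magnitude):
        """Estimate of maximum fault displacement in metres."""
        return 10 ** (-5.46 + 0.82 * magnitude)

    for m in (6.5, 7.0, 7.5, 8.0):
        print(f"M {m:.1f}: ~{max_fault_displacement(m):.1f} m")
    ```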

  13. A Small Leak Detection Method Based on VMD Adaptive De-Noising and Ambiguity Correlation Classification Intended for Natural Gas Pipelines.

    PubMed

    Xiao, Qiyang; Li, Jian; Bai, Zhiliang; Sun, Jiedi; Zhou, Nan; Zeng, Zhoumo

    2016-12-13

    In this study, a small leak detection method based on variational mode decomposition (VMD) and ambiguity correlation classification (ACC) is proposed. The signals acquired from sensors were decomposed using VMD, and numerous components were obtained. Based on the probability density function (PDF), an adaptive de-noising algorithm using VMD is proposed for processing noise components and reconstructing the de-noised components. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect small leaks in a pipeline. The analysis of pipeline leakage signals, using 1 mm and 2 mm leaks, has shown that the proposed detection method can detect a small leak accurately and effectively. Moreover, the experimental results have shown that the proposed method achieves better performance than support vector machine (SVM) and back-propagation neural network (BP) methods.

  14. A Small Leak Detection Method Based on VMD Adaptive De-Noising and Ambiguity Correlation Classification Intended for Natural Gas Pipelines

    PubMed Central

    Xiao, Qiyang; Li, Jian; Bai, Zhiliang; Sun, Jiedi; Zhou, Nan; Zeng, Zhoumo

    2016-01-01

    In this study, a small leak detection method based on variational mode decomposition (VMD) and ambiguity correlation classification (ACC) is proposed. The signals acquired from sensors were decomposed using VMD, and numerous components were obtained. Based on the probability density function (PDF), an adaptive de-noising algorithm using VMD is proposed for processing noise components and reconstructing the de-noised components. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect small leaks in a pipeline. The analysis of pipeline leakage signals, using 1 mm and 2 mm leaks, has shown that the proposed detection method can detect a small leak accurately and effectively. Moreover, the experimental results have shown that the proposed method achieves better performance than support vector machine (SVM) and back-propagation neural network (BP) methods. PMID:27983577
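
    A minimal sketch of the VMD decomposition and mode-selection idea in records 13-14, assuming the vmdpy package; a simple correlation threshold stands in for the paper's PDF-based de-noising rule, and all parameters are illustrative.

    ```python
    import numpy as np
    from vmdpy import VMD   # pip install vmdpy

    # Synthetic stand-in for an acoustic leak signal: two tones plus noise.
    fs = 2000
    t = np.arange(0, 1, 1 / fs)
    f = np.sin(2 * np.pi * 30 * t) + 0.4 * np.sin(2 * np.pi * 300 * t)
    f += 0.2 * np.random.default_rng(3).normal(size=t.size)

    # VMD parameters (typical illustrative choices, not the paper's settings).
    alpha, tau, K, DC, init, tol = 2000, 0.0, 4, 0, 1, 1e-7
    u, u_hat, omega = VMD(f, alpha, tau, K, DC, init, tol)

    # De-noising idea: keep modes judged to carry signal, then rebuild.
    keep = [k for k in range(K) if abs(np.corrcoef(u[k], f)[0, 1]) > 0.1]
    reconstructed = u[keep].sum(axis=0)
    print(f"kept modes: {keep}")
    ```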

  15. Ultrasonic wave based pressure measurement in small diameter pipeline.

    PubMed

    Wang, Dan; Song, Zhengxiang; Wu, Yuan; Jiang, Yuan

    2015-12-01

    An effective non-intrusive, ultrasound-based method that allows monitoring of liquid pressure in small-diameter pipelines (less than 10 mm) is presented in this paper. Ultrasonic waves can penetrate a medium, and properties of the medium are reflected in representative information acquired from the echoes. This pressure measurement is difficult because echo information is not easy to obtain in a small-diameter pipeline. The proposed method, studied on a pipeline with a Kneser liquid, is based on the principle that the transmission speed of an ultrasonic wave in the pipeline liquid correlates with the liquid pressure, and that this speed is reflected in the ultrasonic propagation time provided that the acoustic distance is fixed. Therefore, variation of the ultrasonic propagation time reflects variation of the pressure in the pipeline. The propagation time is obtained by an electronic processing approach and is measured to nanosecond accuracy with a high-resolution time measurement module. We use the ultrasonic propagation time difference to reflect the actual pressure, reducing environmental influences. The corresponding pressure values are finally obtained from the relationship between the variation of the propagation time difference and the pressure, established with a neural network analysis method; the results show that this method is accurate and can be used in practice. Copyright © 2015 Elsevier B.V. All rights reserved.
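
    The measurement principle reduces to a calibrated mapping from propagation-time difference to pressure. The sketch below fits that mapping with a simple polynomial on fabricated calibration points; the paper itself learns the relationship with a neural network.

    ```python
    import numpy as np

    # Fabricated calibration data: time difference (ns) at known pressures (MPa).
    pressures = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    dt_ns = np.array([0.0, 4.1, 8.0, 12.2, 16.1])

    # Low-order polynomial as a stand-in regression model.
    coeffs = np.polyfit(dt_ns, pressures, deg=1)

    def pressure_from_dt(dt):
        """Map a measured propagation-time difference (ns) to pressure (MPa)."""
        return np.polyval(coeffs, dt)

    print(f"10 ns shift -> {pressure_from_dt(10.0):.2f} MPa")
    ```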

  16. Theory and Application of Magnetic Flux Leakage Pipeline Detection.

    PubMed

    Shi, Yan; Zhang, Chao; Li, Rui; Cai, Maolin; Jia, Guanwei

    2015-12-10

    Magnetic flux leakage (MFL) detection is one of the most popular methods of pipeline inspection. It is a nondestructive testing technique which uses magnetic sensitive sensors to detect the magnetic leakage field of defects on both the internal and external surfaces of pipelines. This paper introduces the main principles, measurement and processing of MFL data. As the key point of a quantitative analysis of MFL detection, the identification of the leakage magnetic signal is also discussed. In addition, the advantages and disadvantages of different identification methods are analyzed. Then the paper briefly introduces the expert systems used. At the end of this paper, future developments in pipeline MFL detection are predicted.

  17. Theory and Application of Magnetic Flux Leakage Pipeline Detection

    PubMed Central

    Shi, Yan; Zhang, Chao; Li, Rui; Cai, Maolin; Jia, Guanwei

    2015-01-01

    Magnetic flux leakage (MFL) detection is one of the most popular methods of pipeline inspection. It is a nondestructive testing technique which uses magnetic sensitive sensors to detect the magnetic leakage field of defects on both the internal and external surfaces of pipelines. This paper introduces the main principles, measurement and processing of MFL data. As the key point of a quantitative analysis of MFL detection, the identification of the leakage magnetic signal is also discussed. In addition, the advantages and disadvantages of different identification methods are analyzed. Then the paper briefly introduces the expert systems used. At the end of this paper, future developments in pipeline MFL detection are predicted. PMID:26690435
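
    As a toy illustration of the signal-identification step discussed in records 16-17, the sketch below flags defect-like anomalies in a synthetic axial MFL trace with simple peak detection; real MFL identification uses far richer features and, as the review notes, expert systems.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    # Synthetic axial MFL signal: noise baseline plus two defect anomalies.
    x = np.linspace(0.0, 10.0, 2000)                   # axial position, m
    signal = 0.02 * np.random.default_rng(4).normal(size=x.size)
    for defect_pos in (2.5, 7.1):                      # hypothetical defects
        signal += 0.5 * np.exp(-((x - defect_pos) / 0.05) ** 2)

    # Peak detection as a simple stand-in for the identification methods.
    peaks, props = find_peaks(signal, height=0.2, prominence=0.2)
    for p in peaks:
        print(f"candidate defect at {x[p]:.2f} m, amplitude {signal[p]:.2f}")
    ```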

  18. Optimal Energy Consumption Analysis of Natural Gas Pipeline

    PubMed Central

    Liu, Enbin; Li, Changjun; Yang, Yi

    2014-01-01

    There are many compressor stations along long-distance natural gas pipelines. Natural gas can be transported using different boot programs and import pressures, combined with temperature control parameters; different transport schemes have correspondingly different energy consumptions. At present, the operating parameters of many pipelines are determined empirically by dispatchers, resulting in high energy consumption, a practice inconsistent with energy-reduction policies. Therefore, based on a full understanding of the actual needs of pipeline companies, we introduce production unit consumption indicators to establish an objective function for achieving the goal of lowering energy consumption. By using a dynamic programming method to solve the model and preparing calculation software, we ensure that the solution process is quick and efficient. Using the established optimization methods, we analyzed the energy savings for the XQ gas pipeline. By optimizing the boot program, the import station pressure, and the temperature parameters, we achieved the optimal energy consumption. By comparison with the measured energy consumption, the pipeline has the potential to reduce energy consumption by 11 to 16 percent. PMID:24955410
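
    A minimal sketch of the dynamic-programming idea: treat each compressor station as a stage, discretize the admissible discharge pressures, and propagate the minimal cumulative energy stage by stage. The cost model and numbers are invented placeholders, not the paper's production-unit consumption indicators.

    ```python
    # Hypothetical pipeline: 4 stations, fixed inter-station pressure drop.
    pressures = [6.0, 6.5, 7.0, 7.5, 8.0]   # admissible discharge pressures, MPa
    n_stations = 4
    drop = 1.2                               # pressure drop to next station, MPa

    def energy(p_in, p_out):
        """Hypothetical compression energy, rising with the pressure ratio."""
        return max(0.0, (p_out / p_in) ** 0.9 - 1.0) * 100.0

    # cost[p] = minimal cumulative energy to leave the current station at p.
    cost = {p: energy(5.0, p) for p in pressures}   # first station: 5 MPa inlet
    for _ in range(n_stations - 1):
        cost = {p_out: min(cost[p_prev] + energy(p_prev - drop, p_out)
                           for p_prev in pressures if p_prev - drop > 0)
                for p_out in pressures}

    print(f"minimal total energy (arbitrary units): {min(cost.values()):.1f}")
    ```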

  19. NGSANE: a lightweight production informatics framework for high-throughput data analysis.

    PubMed

    Buske, Fabian A; French, Hugh J; Smith, Martin A; Clark, Susan J; Bauer, Denis C

    2014-05-15

    The initial steps in the analysis of next-generation sequencing data can be automated by way of software 'pipelines'. However, individual components become outdated rapidly because of evolving technology and analysis methods, often rendering entire versions of production informatics pipelines obsolete. Constructing pipelines from Linux bash commands enables the use of hot-swappable modular components, as opposed to the more rigid program-call wrapping by higher-level languages implemented in comparable published pipelining systems. Here we present Next Generation Sequencing ANalysis for Enterprises (NGSANE), a Linux-based, high-performance-computing-enabled framework that minimizes overhead for setup and processing of new projects, yet maintains full flexibility of custom scripting when processing raw sequence data. NGSANE is implemented in bash and publicly available under the BSD (3-Clause) licence via GitHub at https://github.com/BauerLab/ngsane. Denis.Bauer@csiro.au Supplementary data are available at Bioinformatics online.

  20. Statistical method to compare massive parallel sequencing pipelines.

    PubMed

    Elsensohn, M H; Leblay, N; Dimassi, S; Campan-Fournier, A; Labalme, A; Roucher-Boulez, F; Sanlaville, D; Lesca, G; Bardel, C; Roy, P

    2017-03-01

    Today, sequencing is frequently carried out by massive parallel sequencing (MPS), which drastically cuts sequencing time and expense. Nevertheless, Sanger sequencing remains the main validation method to confirm the presence of variants. The analysis of MPS data involves several bioinformatic tools, academic or commercial. We present here a statistical method to compare MPS pipelines and test it in a comparison between an academic (BWA-GATK) and a commercial pipeline (TMAP-NextGENe®), with and without reference to a gold standard (here, Sanger sequencing), on a panel of 41 genes in 43 epileptic patients. This method used the number of variants to fit log-linear models for pairwise agreements between pipelines. To assess the heterogeneity of the margins and the odds ratios of agreement, four log-linear models were used: a full model, a homogeneous-margin model, a model with a single odds ratio for all patients, and a model with a single intercept. Then a log-linear mixed model was fitted, treating the biological variability as a random effect. Among the 390,339 base pairs sequenced, TMAP-NextGENe® and BWA-GATK found, on average, 2253.49 and 1857.14 variants (single nucleotide variants and indels), respectively. Against the gold standard, the pipelines had similar sensitivities (63.47% vs. 63.42%) and close but significantly different specificities (99.57% vs. 99.65%; p < 0.001). Same-trend results were obtained when only single nucleotide variants were considered (99.98% specificity and 76.81% sensitivity for both pipelines). The method thus allows pipeline comparison and selection. It is generalizable to all types of MPS data and all pipelines.
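
    A sketch of the core log-linear step: for one patient, the counts of positions called by both pipelines, by only one, or by neither form a 2x2 table, and a Poisson GLM with an interaction term yields the log odds ratio of agreement. The counts below are fabricated, and the paper's full analysis also fits homogeneous-margin and mixed-effects variants.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Fabricated 2x2 agreement table for one patient: variant calls by
    # pipeline A, pipeline B, both, or neither over the sequenced positions.
    table = pd.DataFrame({
        "A": [1, 1, 0, 0],               # called by pipeline A
        "B": [1, 0, 1, 0],               # called by pipeline B
        "count": [1700, 550, 150, 387600],
    })

    # Log-linear model: log(mu) = b0 + b1*A + b2*B + b3*A*B.
    X = sm.add_constant(table[["A", "B"]].assign(AB=table.A * table.B))
    model = sm.GLM(table["count"], X, family=sm.families.Poisson()).fit()

    # exp(b3) is the odds ratio of agreement between the two pipelines.
    print(f"log OR = {model.params['AB']:.2f}, "
          f"OR = {np.exp(model.params['AB']):.1f}")
    ```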

  1. Feasibility study for wax deposition imaging in oil pipelines by PGNAA technique.

    PubMed

    Cheng, Can; Jia, Wenbao; Hei, Daqian; Wei, Zhiyong; Wang, Hongtao

    2017-10-01

    Wax deposition in pipelines is a crucial problem in the oil industry. A method based on the prompt gamma-ray neutron activation analysis technique was applied to reconstruct the image of wax deposition in oil pipelines. The 2.223 MeV hydrogen capture gamma rays were used to reconstruct the wax deposition image. To validate the method, both MCNP simulation and experiments were performed for wax deposited with a maximum thickness of 20 cm. The performance of the method was simulated using the MCNP code. The experiment was conducted with a 252Cf neutron source and a LaBr3:Ce detector. A good correspondence between the simulations and the experiments was observed. The results obtained indicate that the present approach is efficient for wax deposition imaging in oil pipelines. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Quantitative Risk Mapping of Urban Gas Pipeline Networks Using GIS

    NASA Astrophysics Data System (ADS)

    Azari, P.; Karimi, M.

    2017-09-01

    Natural gas is considered an important source of energy in the world. With the increasing growth of urbanization, urban gas pipelines, which transmit natural gas from transmission pipelines to consumers, are becoming a dense network. The increasing density of urban pipelines raises the probability of severe accidents in urban areas. These accidents have a catastrophic effect on people and their property. Within the next few years, risk mapping will become an important component in urban planning and management of large cities in order to decrease the probability of accidents and to control them. It is therefore important to assess risk values and determine their locations on an urban map using an appropriate method. In the history of risk analysis of urban natural gas pipeline networks, the pipelines have always been considered one by one, and their density in the urban area has not been taken into account. The aim of this study is to determine the effect of several pipelines on the risk value at a specific grid point. This paper outlines a quantitative risk assessment method for analysing the risk of urban natural gas pipeline networks. It consists of two main parts: failure rate calculation, where the EGIG historical data are used, and fatal length calculation, which involves calculation of gas release and the fatality rate of the consequences. We consider jet fire, fireball and explosion when investigating the consequences of gas pipeline failure. The outcome of this method is an individual risk, which is shown as a risk map.
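
    The quantitative backbone is individual risk = failure rate x fatal length, accumulated over all pipelines near each grid point. A heavily simplified sketch, with a placeholder failure rate and a fixed lethal radius in place of the paper's jet fire, fireball and explosion consequence models:

    ```python
    import numpy as np

    failure_rate = 1.5e-4          # failures per km-year (placeholder)
    hazard_radius = 0.05           # km within which a failure is assumed lethal

    # One pipeline polyline (vertices in km) and a lattice of grid points.
    pipeline = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.0]])
    xs, ys = np.meshgrid(np.linspace(0, 2, 81), np.linspace(-0.5, 0.5, 41))

    def fatal_length(px, py):
        """Approximate pipeline length within hazard_radius of (px, py), km."""
        length = 0.0
        for a, b in zip(pipeline[:-1], pipeline[1:]):
            seg = np.linspace(a, b, 200)            # sample the segment
            d = np.hypot(seg[:, 0] - px, seg[:, 1] - py)
            step = np.linalg.norm(b - a) / 199
            length += np.count_nonzero(d < hazard_radius) * step
        return length

    # Individual risk surface; several pipelines would simply be summed here.
    risk = np.vectorize(fatal_length)(xs, ys) * failure_rate
    print(f"peak individual risk: {risk.max():.2e} per year")
    ```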

  3. A De-Novo Genome Analysis Pipeline (DeNoGAP) for large-scale comparative prokaryotic genomics studies.

    PubMed

    Thakur, Shalabh; Guttman, David S

    2016-06-30

    Comparative analysis of whole genome sequence data from closely related prokaryotic species or strains is becoming an increasingly important and accessible approach for addressing both fundamental and applied biological questions. While there are a number of excellent tools developed for performing this task, most scale poorly when faced with hundreds of genome sequences, and many require extensive manual curation. We have developed a de-novo genome analysis pipeline (DeNoGAP) for the automated, iterative and high-throughput analysis of data from comparative genomics projects involving hundreds of whole genome sequences. The pipeline is designed to perform reference-assisted and de novo gene prediction, homolog protein family assignment, ortholog prediction, functional annotation, and pan-genome analysis using a range of proven tools and databases. While most existing methods scale quadratically with the number of genomes, since they rely on pairwise comparisons among predicted protein sequences, DeNoGAP scales linearly, since the homology assignment is based on iteratively refined hidden Markov models. This iterative clustering strategy enables DeNoGAP to handle a very large number of genomes using minimal computational resources. Moreover, the modular structure of the pipeline permits easy updates as new analysis programs become available. DeNoGAP integrates bioinformatics tools and databases for comparative analysis of a large number of genomes. The pipeline offers tools and algorithms for annotation and analysis of completed and draft genome sequences. The pipeline is developed using Perl, BioPerl and SQLite on Ubuntu Linux version 12.04 LTS. Currently, the software package includes a script for automated installation of the necessary external programs on Ubuntu Linux; however, the pipeline should also be compatible with other Linux and Unix systems after the necessary external programs are installed. DeNoGAP is freely available at https://sourceforge.net/projects/denogap/ .

  4. Enhanced cortical thickness measurements for rodent brains via Lagrangian-based RK4 streamline computation

    NASA Astrophysics Data System (ADS)

    Lee, Joohwi; Kim, Sun Hyung; Oguz, Ipek; Styner, Martin

    2016-03-01

    The cortical thickness of the mammalian brain is an important morphological characteristic that can be used to investigate and observe the brain's developmental changes that might be caused by biologically toxic substances such as ethanol or cocaine. Although various cortical thickness analysis methods have been proposed that are applicable for human brain and have developed into well-validated open-source software packages, cortical thickness analysis methods for rodent brains have not yet become as robust and accurate as those designed for human brains. Based on a previously proposed cortical thickness measurement pipeline for rodent brain analysis [1], we present an enhanced cortical thickness pipeline in terms of accuracy and anatomical consistency. First, we propose a Lagrangian-based computational approach in the thickness measurement step in order to minimize local truncation error using the fourth-order Runge-Kutta method. Second, by constructing a line object for each streamline of the thickness measurement, we can visualize the way the thickness is measured and achieve sub-voxel accuracy by performing geometric post-processing. Last, with emphasis on the importance of an anatomically consistent partial differential equation (PDE) boundary map, we propose an automatic PDE boundary map generation algorithm that is specific to rodent brain anatomy, which does not require manual labeling. The results show that the proposed cortical thickness pipeline can produce statistically significant regions that are not observed in the previous cortical thickness analysis pipeline.
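
    A toy version of the Lagrangian RK4 streamline step, on a 2-D synthetic vector field instead of the 3-D Laplace field between the cortical boundaries; the streamline length serves as the thickness proxy.

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Toy 2-D vector field sampled on a grid (stands in for the PDE gradient).
    x = y = np.linspace(0.0, 1.0, 64)
    X, Y = np.meshgrid(x, y, indexing="ij")
    vx, vy = np.cos(np.pi * Y), np.sin(np.pi * X)

    fx = RegularGridInterpolator((x, y), vx, bounds_error=False, fill_value=None)
    fy = RegularGridInterpolator((x, y), vy, bounds_error=False, fill_value=None)

    def v(p):
        """Normalized field direction at point p, for unit-speed streamlines."""
        vec = np.array([fx(p)[0], fy(p)[0]])
        return vec / (np.linalg.norm(vec) + 1e-12)

    def rk4_streamline(p0, h=0.01, n_steps=50):
        """Integrate a streamline with classic fourth-order Runge-Kutta."""
        pts = [np.asarray(p0, float)]
        for _ in range(n_steps):
            p = pts[-1]
            k1 = v(p)
            k2 = v(p + 0.5 * h * k1)
            k3 = v(p + 0.5 * h * k2)
            k4 = v(p + h * k3)
            pts.append(p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
        return np.array(pts)

    line = rk4_streamline([0.2, 0.2])
    thickness = np.sum(np.linalg.norm(np.diff(line, axis=0), axis=1))
    print(f"streamline length (a thickness proxy): {thickness:.3f}")
    ```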

  5. Assessing the hodgepodge of non-mapped reads in bacterial transcriptomes: real or artifactual RNA chimeras?

    PubMed

    Lloréns-Rico, Verónica; Serrano, Luis; Lluch-Senar, Maria

    2014-07-29

    RNA sequencing methods have already altered our view of the extent and complexity of bacterial and eukaryotic transcriptomes, revealing rare transcript isoforms (circular RNAs, RNA chimeras) that could play an important role in their biology. We performed an analysis of chimera formation by four different computational approaches, including a custom designed pipeline, to study the transcriptomes of M. pneumoniae and P. aeruginosa, as well as mixtures of both. We found that rare transcript isoforms detected by conventional pipelines of analysis could be artifacts of the experimental procedure used in the library preparation, and that they are protocol-dependent. By using a customized pipeline we show that optimal library preparation protocol and the pipeline to analyze the results are crucial to identify real chimeric RNAs.

  6. Time-Distance Helioseismology Data-Analysis Pipeline for Helioseismic and Magnetic Imager Onboard Solar Dynamics Observatory (SDO-HMI) and Its Initial Results

    NASA Technical Reports Server (NTRS)

    Zhao, J.; Couvidat, S.; Bogart, R. S.; Parchevsky, K. V.; Birch, A. C.; Duvall, Thomas L., Jr.; Beck, J. G.; Kosovichev, A. G.; Scherrer, P. H.

    2011-01-01

    The Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory (SDO/HMI) provides continuous full-disk observations of solar oscillations. We develop a data-analysis pipeline based on the time-distance helioseismology method to measure acoustic travel times using HMI Doppler-shift observations, and infer solar interior properties by inverting these measurements. The pipeline is used for routine production of near-real-time full-disk maps of subsurface wave-speed perturbations and horizontal flow velocities for depths ranging from 0 to 20 Mm, every eight hours. In addition, Carrington synoptic maps for the subsurface properties are made from these full-disk maps. The pipeline can also be used for selected target areas and time periods. We explain details of the pipeline organization and procedures, including processing of the HMI Doppler observations, measurements of the travel times, inversions, and constructions of the full-disk and synoptic maps. Some initial results from the pipeline, including full-disk flow maps, sunspot subsurface flow fields, and the interior rotation and meridional flow speeds, are presented.

  7. An open RNA-Seq data analysis pipeline tutorial with an example of reprocessing data from a recent Zika virus study.

    PubMed

    Wang, Zichen; Ma'ayan, Avi

    2016-01-01

    RNA-seq analysis is becoming a standard method for global gene expression profiling. However, open and standard pipelines to perform RNA-seq analysis by non-experts remain challenging due to the large size of the raw data files and the hardware requirements for running the alignment step. Here we introduce a reproducible open source RNA-seq pipeline delivered as an IPython notebook and a Docker image. The pipeline uses state-of-the-art tools and can run on various platforms with minimal configuration overhead. The pipeline enables the extraction of knowledge from typical RNA-seq studies by generating interactive principal component analysis (PCA) and hierarchical clustering (HC) plots, performing enrichment analyses against over 90 gene set libraries, and obtaining lists of small molecules that are predicted to either mimic or reverse the observed changes in mRNA expression. We apply the pipeline to a recently published RNA-seq dataset collected from human neuronal progenitors infected with the Zika virus (ZIKV). In addition to confirming the presence of cell cycle genes among the genes that are downregulated by ZIKV, our analysis uncovers significant overlap with upregulated genes that when knocked out in mice induce defects in brain morphology. This result potentially points to the molecular processes associated with the microcephaly phenotype observed in newborns from pregnant mothers infected with the virus. In addition, our analysis predicts small molecules that can either mimic or reverse the expression changes induced by ZIKV. The IPython notebook and Docker image are freely available at:  http://nbviewer.jupyter.org/github/maayanlab/Zika-RNAseq-Pipeline/blob/master/Zika.ipynb and  https://hub.docker.com/r/maayanlab/zika/.
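
    A compact sketch of the PCA and hierarchical-clustering views this pipeline produces, on a random stand-in expression matrix (samples x genes):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.cluster.hierarchy import linkage, fcluster

    # Random stand-in for a normalized expression matrix; a real one would
    # come from the alignment and counting steps of the pipeline.
    rng = np.random.default_rng(5)
    expr = rng.normal(size=(8, 2000))
    expr[:4] += 1.5                     # fake "infected" vs. "mock" structure

    pcs = PCA(n_components=2).fit_transform(expr)
    print("PC1/PC2 per sample:\n", np.round(pcs, 2))

    clusters = fcluster(linkage(expr, method="ward"), t=2, criterion="maxclust")
    print("cluster assignment:", clusters)
    ```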

  8. Data reduction and calibration for LAMOST survey

    NASA Astrophysics Data System (ADS)

    Luo, Ali; Zhang, Jiannan; Chen, Jianjun; Song, Yihan; Wu, Yue; Bai, Zhongrui; Wang, Fengfei; Du, Bing; Zhang, Haotong

    2014-01-01

    There are three data pipelines for the LAMOST survey. The raw data are reduced to one-dimensional spectra by the data reduction pipeline (the 2D pipeline); the extracted spectra are classified and measured by the spectral analysis pipeline (the 1D pipeline); and stellar parameters are measured by the LASP pipeline. (a) The data reduction pipeline: its main tasks include bias calibration, flat fielding, spectra extraction, sky subtraction, wavelength calibration, exposure merging and wavelength band connection. (b) The spectral analysis pipeline: this pipeline is designed to classify and identify objects from the extracted spectra and to measure their redshift (or radial velocity). The PCAZ (Glazebrook et al. 1998) method is applied for the classification and redshift measurement. (c) The stellar parameters pipeline (LASP): this pipeline estimates stellar atmospheric parameters, e.g. effective temperature Teff, surface gravity log g, and metallicity [Fe/H], for F, G and K type stars. To determine these fundamental stellar measurements effectively, three steps with different methods are employed. The first step utilizes line indices to approximately define the effective temperature range of the analyzed star. Secondly, a set of initial approximate values of the three parameters is given based on a template fitting method. Finally, we exploit ULySS (Koleva et al. 2009) to give the final parameter values by minimizing the χ2 value between the observed spectrum and a multidimensional grid of model spectra generated by interpolating the ELODIE library. There are two additional classification procedures, for A type and M type stars. For A type stars, the standard MK system is employed (Gray et al. 2009) to give each object a temperature class and luminosity type. For M type stars, objects are classified into subclasses by an improved Hammer method, and the metallicity of each object is also given. During the pilot survey, the algorithms were improved and the pipelines were tested. The products of the LAMOST survey will include extracted and calibrated spectra in FITS format, a catalog of FGK stars with stellar parameters, a catalog of M dwarfs with subclass and metallicity, and a catalog of A type stars with MK classification. A part of the pilot survey data, including about 319,000 high quality spectra with SNR > 10, a catalog of stellar parameters of FGK stars and a catalog of subclasses of M type stars, was released to the public in August 2012 (Luo et al. 2012). The general survey started in October 2012 and has completed its first year. The formal data release one (DR1), which will include both the pilot survey and the first year of the general survey, is being prepared and is planned to be released under the LAMOST data policy.

  9. SIMPLEX: Cloud-Enabled Pipeline for the Comprehensive Analysis of Exome Sequencing Data

    PubMed Central

    Fischer, Maria; Snajder, Rene; Pabinger, Stephan; Dander, Andreas; Schossig, Anna; Zschocke, Johannes; Trajanoski, Zlatko; Stocker, Gernot

    2012-01-01

    In recent studies, exome sequencing has proven to be a successful screening tool for the identification of candidate genes causing rare genetic diseases. Although underlying targeted sequencing methods are well established, necessary data handling and focused, structured analysis still remain demanding tasks. Here, we present a cloud-enabled autonomous analysis pipeline, which comprises the complete exome analysis workflow. The pipeline combines several in-house developed and published applications to perform the following steps: (a) initial quality control, (b) intelligent data filtering and pre-processing, (c) sequence alignment to a reference genome, (d) SNP and DIP detection, (e) functional annotation of variants using different approaches, and (f) detailed report generation during various stages of the workflow. The pipeline connects the selected analysis steps, exposes all available parameters for customized usage, performs required data handling, and distributes computationally expensive tasks either on a dedicated high-performance computing infrastructure or on the Amazon cloud environment (EC2). The presented application has already been used in several research projects including studies to elucidate the role of rare genetic diseases. The pipeline is continuously tested and is publicly available under the GPL as a VirtualBox or Cloud image at http://simplex.i-med.ac.at; additional supplementary data is provided at http://www.icbi.at/exome. PMID:22870267

  10. MSP-HTPrimer: a high-throughput primer design tool to improve assay design for DNA methylation analysis in epigenetics.

    PubMed

    Pandey, Ram Vinay; Pulverer, Walter; Kallmeyer, Rainer; Beikircher, Gabriel; Pabinger, Stephan; Kriegner, Albert; Weinhäusel, Andreas

    2016-01-01

    Bisulfite (BS) conversion-based and methylation-sensitive restriction enzyme (MSRE)-based PCR methods have been the most commonly used techniques for locus-specific DNA methylation analysis. However, both methods have advantages and limitations. Thus, an integrated approach would be extremely useful to quantify DNA methylation status with great sensitivity and specificity. Designing specific and optimized primers for target regions is the most critical and challenging step in obtaining adequate DNA methylation results using PCR-based methods. Currently, no integrated, optimized, and high-throughput methylation-specific primer design software is available for both BS- and MSRE-based methods. Therefore, an integrated, powerful, and easy-to-use methylation-specific primer design pipeline with great accuracy and a high success rate would be very useful. We have developed a new web-based pipeline, called MSP-HTPrimer, to design primer pairs for MSP, BSP, pyrosequencing, COBRA, and MSRE assays on both genomic strands. First, our pipeline converts all target sequences into bisulfite-treated templates for both the forward and reverse strands and designs all possible primer pairs, followed by filtering for single nucleotide polymorphisms (SNPs) and known repeat regions. Next, each primer pair is annotated with the upstream and downstream RefSeq genes, CpG islands, and cut sites (for COBRA and MSRE). Finally, MSP-HTPrimer selects specific primers from both strands based on custom and user-defined hierarchical selection criteria. MSP-HTPrimer produces a primer pair summary output table in TXT and HTML format for display, and UCSC custom tracks for the resulting primer pairs in GTF format. MSP-HTPrimer is an integrated, web-based, high-throughput pipeline that has no limitation on the number and size of target sequences and designs MSP, BSP, pyrosequencing, COBRA, and MSRE assays. It is the only pipeline that automatically designs primers on both genomic strands to increase the success rate. It is a standalone web-based pipeline, fully configured within a virtual machine, and thus can be readily used without any configuration. We have experimentally validated primer pairs designed by our pipeline and shown a very high success rate: out of 66 BSP primer pairs, 63 were successfully validated without any further optimization and using the same qPCR conditions. The MSP-HTPrimer pipeline is freely available from http://sourceforge.net/p/msp-htprimer.

  11. Untargeted UPLC-MS Profiling Pipeline to Expand Tissue Metabolome Coverage: Application to Cardiovascular Disease

    PubMed Central

    2015-01-01

    Metabolic profiling studies aim to achieve broad metabolome coverage in specific biological samples. However, wide metabolome coverage has proven difficult to achieve, mostly because of the diverse physicochemical properties of small molecules, obligating analysts to seek multiplatform and multimethod approaches. Challenges are even greater when it comes to applications to tissue samples, where tissue lysis and metabolite extraction can induce significant systematic variation in composition. We have developed a pipeline for obtaining the aqueous and organic compounds from diseased arterial tissue using two consecutive extractions, followed by a different untargeted UPLC-MS analysis method for each extract. Methods were rationally chosen and optimized to address the different physicochemical properties of each extract: hydrophilic interaction liquid chromatography (HILIC) for the aqueous extract and reversed-phase chromatography for the organic. This pipeline can be generic for tissue analysis as demonstrated by applications to different tissue types. The experimental setup and fast turnaround time of the two methods contributed toward obtaining highly reproducible features with exceptional chromatographic performance (CV % < 0.5%), making this pipeline suitable for metabolic profiling applications. We structurally assigned 226 metabolites from a range of chemical classes (e.g., carnitines, α-amino acids, purines, pyrimidines, phospholipids, sphingolipids, free fatty acids, and glycerolipids) which were mapped to their corresponding pathways, biological functions and known disease mechanisms. The combination of the two untargeted UPLC-MS methods showed high metabolite complementarity. We demonstrate the application of this pipeline to cardiovascular disease, where we show that the analyzed diseased groups (n = 120) of arterial tissue could be distinguished based on their metabolic profiles. PMID:25664760

  12. 18 CFR 357.3 - FERC Form No. 73, Oil Pipeline Data for Depreciation Analysis.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Pipeline Data for Depreciation Analysis. 357.3 Section 357.3 Conservation of Power and Water Resources... No. 73, Oil Pipeline Data for Depreciation Analysis. (a) Who must file. Any oil pipeline company.... 73, Oil Pipeline Data for Depreciation Analysis, available for review at the Commission's Public...

  13. Statistical analysis on the signals monitoring multiphase flow patterns in pipeline-riser system

    NASA Astrophysics Data System (ADS)

    Ye, Jing; Guo, Liejin

    2013-07-01

    Signals monitoring petroleum transmission pipelines in the offshore oil industry usually contain abundant information about the multiphase flow relevant to flow assurance, which includes the avoidance of the most undesirable flow patterns. Therefore, extracting reliable features from these signals is an alternative way to examine potential risks to an oil platform. This paper focuses on characterizing multiphase flow patterns in the pipeline-riser system often encountered in the offshore oil industry and on finding an objective criterion to describe the transition of flow patterns. Statistical analysis of the pressure signal at the riser top is proposed, instead of the usual prediction methods based on inlet and outlet flow conditions, which cannot be easily determined in most situations. In addition, a machine learning method (least squares support vector machine) is applied to classify the different flow patterns automatically. Experimental results from a small-scale loop show that the proposed method is effective for analyzing multiphase flow patterns.

  14. Comparison of software packages for detecting differential expression in RNA-seq studies

    PubMed Central

    Seyednasrollah, Fatemeh; Laiho, Asta

    2015-01-01

    RNA-sequencing (RNA-seq) has rapidly become a popular tool to characterize transcriptomes. A fundamental research problem in many RNA-seq studies is the identification of reliable molecular markers that show differential expression between distinct sample groups. Together with the growing popularity of RNA-seq, a number of data analysis methods and pipelines have already been developed for this task. Currently, however, there is no clear consensus about the best practices yet, which makes the choice of an appropriate method a daunting task especially for a basic user without a strong statistical or computational background. To assist the choice, we perform here a systematic comparison of eight widely used software packages and pipelines for detecting differential expression between sample groups in a practical research setting and provide general guidelines for choosing a robust pipeline. In general, our results demonstrate how the data analysis tool utilized can markedly affect the outcome of the data analysis, highlighting the importance of this choice. PMID:24300110

  15. Comparison of software packages for detecting differential expression in RNA-seq studies.

    PubMed

    Seyednasrollah, Fatemeh; Laiho, Asta; Elo, Laura L

    2015-01-01

    RNA-sequencing (RNA-seq) has rapidly become a popular tool to characterize transcriptomes. A fundamental research problem in many RNA-seq studies is the identification of reliable molecular markers that show differential expression between distinct sample groups. Together with the growing popularity of RNA-seq, a number of data analysis methods and pipelines have already been developed for this task. Currently, however, there is no clear consensus about the best practices yet, which makes the choice of an appropriate method a daunting task especially for a basic user without a strong statistical or computational background. To assist the choice, we perform here a systematic comparison of eight widely used software packages and pipelines for detecting differential expression between sample groups in a practical research setting and provide general guidelines for choosing a robust pipeline. In general, our results demonstrate how the data analysis tool utilized can markedly affect the outcome of the data analysis, highlighting the importance of this choice. © The Author 2013. Published by Oxford University Press.

  16. NGSPanPipe: A Pipeline for Pan-genome Identification in Microbial Strains from Experimental Reads.

    PubMed

    Kulsum, Umay; Kapil, Arti; Singh, Harpreet; Kaur, Punit

    2018-01-01

    Recent advancements in sequencing technologies have decreased both the time and the cost of sequencing whole bacterial genomes. High-throughput next-generation sequencing (NGS) technology has led to the generation of enormous amounts of data on microbial populations, publicly available across various repositories. As a consequence, it has become possible to study and compare the genomes of different bacterial strains within a species or genus in terms of evolution, ecology and diversity. Studying the pan-genome provides insights into deciphering the microevolution, global composition and diversity in virulence and pathogenesis of a species. It can also assist in identifying drug targets and proposing vaccine candidates. The effective analysis of these large genome datasets necessitates the development of robust tools. Current methods for developing a pan-genome do not support direct input of raw reads from the sequencer but require preprocessing of the reads into an assembled protein/gene sequence file or a binary matrix of orthologous genes/proteins. We have designed an easy-to-use integrated pipeline, NGSPanPipe, which can identify the pan-genome directly from short reads. The output from the pipeline is compatible with other pan-genome analysis tools. We evaluated our pipeline against other methods for developing pan-genomes, i.e. reference-based assembly and de novo assembly, using simulated reads of Mycobacterium tuberculosis. The single-script pipeline (pipeline.pl) is applicable to all bacterial strains. It integrates multiple in-house Perl scripts and is freely accessible from https://github.com/Biomedinformatics/NGSPanPipe .

  17. Employing Machine-Learning Methods to Study Young Stellar Objects

    NASA Astrophysics Data System (ADS)

    Moore, Nicholas

    2018-01-01

    Vast amounts of data exist in the astronomical data archives, and yet a large number of sources remain unclassified. We developed a multi-wavelength pipeline to classify infrared sources. The pipeline uses supervised machine learning methods to classify objects into the appropriate categories. The program is fed data that is already classified to train it, and is then applied to unknown catalogues. The primary use for such a pipeline is the rapid classification and cataloging of data that would take a much longer time to classify otherwise. While our primary goal is to study young stellar objects (YSOs), the applications extend beyond the scope of this project. We present preliminary results from our analysis and discuss future applications.
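
    A small sketch of the supervised approach described above, using a random forest (one plausible choice; the abstract does not name the classifier) on fabricated infrared colors:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Fabricated training set: infrared colors/magnitudes with known classes.
    rng = np.random.default_rng(6)
    n = 600
    colors = rng.normal(size=(n, 4))                   # e.g. [3.6]-[4.5], ...
    labels = (colors[:, 0] + 0.5 * colors[:, 1] > 0).astype(int)  # 1 = "YSO"

    X_train, X_test, y_train, y_test = train_test_split(
        colors, labels, test_size=0.25, random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

    # The trained model would then be applied to unclassified sources.
    unknown = rng.normal(size=(5, 4))
    print("predicted classes:", clf.predict(unknown))
    ```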

  18. THE MURCHISON WIDEFIELD ARRAY 21 cm POWER SPECTRUM ANALYSIS METHODOLOGY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs, Daniel C.; Beardsley, A. P.; Bowman, Judd D.

    2016-07-10

    We present the 21 cm power spectrum analysis approach of the Murchison Widefield Array Epoch of Reionization project. In this paper, we compare the outputs of multiple pipelines for the purpose of validating statistical limits on cosmological hydrogen at redshifts between 6 and 12. Multiple independent data calibration and reduction pipelines are used to make power spectrum limits on a fiducial night of data. Comparing the outputs of the imaging and power spectrum stages highlights differences in calibration, foreground subtraction, and power spectrum calculation. The power spectra found using these different methods span a space defined by the various tradeoffs between speed, accuracy, and systematic control. Lessons learned from comparing the pipelines range from the algorithmic to the prosaically mundane; all demonstrate the many pitfalls of neglecting reproducibility. We briefly discuss the way these different methods attempt to handle the question of evaluating a significant detection in the presence of foregrounds.

  19. Designing Image Analysis Pipelines in Light Microscopy: A Rational Approach.

    PubMed

    Arganda-Carreras, Ignacio; Andrey, Philippe

    2017-01-01

    With the progress of microscopy techniques and the rapidly growing amounts of acquired imaging data, there is an increased need for automated image processing and analysis solutions in biological studies. Each new application requires the design of a specific image analysis pipeline, assembled from a series of image processing operations. Many commercial and free bioimage analysis software packages are now available, and several textbooks and reviews have presented the mathematical and computational fundamentals of image processing and analysis. Tens, if not hundreds, of algorithms and methods have been developed and integrated into image analysis software, resulting in a combinatorial explosion of possible image processing sequences. This paper presents a general guideline methodology for rationally addressing the design of image processing and analysis pipelines. The originality of the proposed approach is to follow an iterative, backwards procedure from the target objectives of analysis. The proposed goal-oriented strategy should help biologists to better apprehend image analysis in the context of their research and should allow them to efficiently interact with image processing specialists.
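
    As a concrete example of an assembled pipeline of the kind the paper discusses, the sketch below chains denoising, thresholding, cleanup, labeling and measurement with scikit-image; the sample image and parameter choices are illustrative, whereas the paper's point is to design such chains backwards from the target measurements.

    ```python
    import numpy as np
    from skimage import data, filters, measure, morphology

    image = data.coins()                               # bundled sample image

    smoothed = filters.gaussian(image, sigma=2)        # denoising step
    binary = smoothed > filters.threshold_otsu(smoothed)   # segmentation
    cleaned = morphology.remove_small_objects(binary, min_size=100)

    labels = measure.label(cleaned)                    # connected components
    areas = [p.area for p in measure.regionprops(labels)]
    print(f"{labels.max()} objects, median area {np.median(areas):.0f} px")
    ```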

  20. Comparative assessment of water use and environmental implications of coal slurry pipelines

    USGS Publications Warehouse

    Palmer, Richard N.; James II, I. C.; Hirsch, R.M.

    1977-01-01

    With other studies conducted by the U.S. Geological Survey of water use in the conversion and transportation of the West's coal, an analysis of water use and environmental implications of coal-slurry pipeline transport is presented. Simulations of a hypothetical slurry pipeline of 1000-mile length transporting 12.5 million tons per year indicate that pipeline costs and energy requirements are quite sensitive to the coal-to-water ratio. For realistic water prices, the optimal ratio will not vary far from the 50/50 ratio by weight. In comparison to other methods of energy conversion and transport, coal-slurry pipelines utilize about 1/3 the amount of water required for coal gasification, and about 1/5 the amount required for on-site electrical generation. An analysis of net energy output from operating alternative energy transportation systems for the assumed conditions indicates that both slurry pipeline and rail shipment require approximately 4.5 percent of the potential electrical energy output of the coal transported, while high-voltage direct-current transmission requires approximately 6.5 percent. The environmental impacts of the different transport options are so substantially different that a common basis for comparison does not exist. (Woodard-USGS)

  1. Chemical laser exhaust pipe design research

    NASA Astrophysics Data System (ADS)

    Sun, Yunqiang; Huang, Zhilong; Chen, Zhiqiang; Ren, Zebin; Guo, Longde

    2016-10-01

    To reduce the influence of chemical laser exhaust gas on optical transmission, a vent pipe is recommended to discharge the gas outside the optical transmission area. For a variety of exhaust pipe designs, the flow field characteristics of the pipe were analyzed in detail by numerical simulation. The results show that for a uniformly deflating exhaust pipe, although the pipeline structure is periodic and convenient for engineering implementation, the simulations reveal air backflow at the pipeline entrance slit, so this type of structure does not guarantee a seal. For designs that place the pipeline contraction at the end of the exhaust pipe, or that contract a local region or the tail, the simulations show that the backflow at the entrance slit persists. Preliminary analysis indicates that the contraction produces higher static pressure near the wall in the low-speed flow field, creating an adverse pressure gradient at the entrance slit. To eliminate this backflow, a pipeline whose radial size increases gradually along the flow was analyzed in detail by the same numerical methods. The simulations indicate no backflow at the entrance slit of this dilated duct; instead, cold air is drawn in through the slit, which keeps the channel wall temperature below the center temperature. This kind of pipeline structure can therefore not only prevent gas leakage but also reduce the wall temperature. In addition, like the straight-pipe connection, the dilated pipe retains a periodic structure, which facilitates system integration and installation.

  2. System for corrosion monitoring in pipeline applying fuzzy logic mathematics

    NASA Astrophysics Data System (ADS)

    Kuzyakov, O. N.; Kolosova, A. L.; Andreeva, M. A.

    2018-05-01

    A list of factors influencing the corrosion rate on the external side of an underground pipeline is determined. Principles of constructing a corrosion monitoring system are described; the system performance algorithm and program are elaborated. A comparative analysis of methods for calculating corrosion rate is undertaken. Fuzzy logic mathematics is applied to reduce the amount of calculation while considering a wider range of corrosion factors.
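
    The abstract does not publish the system's factor list, membership functions, or rule base, so the Python sketch below only illustrates the general fuzzy-logic mechanics (triangular memberships, one Mamdani-style rule, centroid defuzzification) with invented values.

    ```python
    # Triangular memberships, one Mamdani-style rule, centroid defuzzification.
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership: rises from a, peaks at b, falls to c."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    soil_resistivity = 15.0   # ohm*m; low resistivity -> aggressive soil
    moisture = 0.7            # volumetric fraction

    aggressive = tri(soil_resistivity, 0.0, 10.0, 30.0)
    wet = tri(moisture, 0.3, 0.8, 1.0)

    # Rule: IF soil is aggressive AND soil is wet THEN corrosion rate is high.
    strength = min(aggressive, wet)

    # Defuzzify against a "high rate" set defined on [0, 1] mm/year.
    rate = np.linspace(0.0, 1.0, 101)
    high = tri(rate, 0.4, 0.8, 1.0)
    clipped = np.minimum(high, strength)
    estimate = (clipped * rate).sum() / clipped.sum()   # centroid of clipped set
    print(f"estimated corrosion rate: {estimate:.2f} mm/year")
    ```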

  3. Polar bear encephalitis: establishment of a comprehensive next-generation pathogen analysis pipeline for captive and free-living wildlife.

    PubMed

    Szentiks, C A; Tsangaras, K; Abendroth, B; Scheuch, M; Stenglein, M D; Wohlsein, P; Heeger, F; Höveler, R; Chen, W; Sun, W; Damiani, A; Nikolin, V; Gruber, A D; Grobbel, M; Kalthoff, D; Höper, D; Czirják, G Á; Derisi, J; Mazzoni, C J; Schüle, A; Aue, A; East, M L; Hofer, H; Beer, M; Osterrieder, N; Greenwood, A D

    2014-05-01

    This report describes three possibly related cases of encephalitis, two of them lethal, in captive polar bears (Ursus maritimus). Standard diagnostic methods failed to identify pathogens in any of these cases. A comprehensive, three-stage diagnostic 'pipeline' employing both standard serological methods and new DNA microarray and next-generation sequencing-based diagnostics was developed, in part as a consequence of this initial failure. This pipeline approach illustrates the strengths, weaknesses and limitations of these tools in determining pathogen-caused deaths in non-model organisms such as wildlife species, and why the use of a limited number of diagnostic tools may fail to uncover important wildlife pathogens.

  4. It's DE-licious: A Recipe for Differential Expression Analyses of RNA-seq Experiments Using Quasi-Likelihood Methods in edgeR.

    PubMed

    Lun, Aaron T L; Chen, Yunshun; Smyth, Gordon K

    2016-01-01

    RNA sequencing (RNA-seq) is widely used to profile transcriptional activity in biological systems. Here we present an analysis pipeline for differential expression analysis of RNA-seq experiments using the Rsubread and edgeR software packages. The basic pipeline includes read alignment and counting, filtering and normalization, modelling of biological variability and hypothesis testing. For hypothesis testing, we describe particularly the quasi-likelihood features of edgeR. Some more advanced downstream analysis steps are also covered, including complex comparisons, gene ontology enrichment analyses and gene set testing. The code required to run each step is described, along with an outline of the underlying theory. The chapter includes a case study in which the pipeline is used to study the expression profiles of mammary gland cells in virgin, pregnant and lactating mice.

  5. Measuring the CMB Polarization at 94 GHz with the QUIET Pseudo-Cl Pipeline

    NASA Astrophysics Data System (ADS)

    Buder, Immanuel; QUIET Collaboration

    2012-01-01

    The Q/U Imaging ExperimenT (QUIET) aims to limit or detect cosmic microwave background (CMB) B-mode polarization from inflation. This talk is part of a 3-talk series on QUIET. The previous talk describes the QUIET science and instrument. QUIET has two parallel analysis pipelines which are part of an effort to validate the analysis and confirm the result. In this talk, I will describe the analysis methods of one of these: the pseudo-Cl pipeline. Calibration, noise modeling, filtering, and data-selection choices are made following a blind-analysis strategy. Central to this strategy is a suite of 30 null tests, each motivated by a possible instrumental problem or systematic effect. The systematic errors are also evaluated through full-season simulations in the blind stage of the analysis before the result is known. The CMB power spectra are calculated using a pseudo-Cl cross-correlation technique which suppresses contamination and makes the result insensitive to noise bias. QUIET will detect the first three peaks of the even-parity (E-mode) spectrum at high significance. I will show forecasts of the systematic errors for these results and for the upper limit on B-mode polarization. The very low systematic errors in these forecasts show that the technology is ready to be applied in a more sensitive next-generation experiment. The next and final talk in this series covers the other parallel analysis pipeline, based on maximum likelihood methods. This work was supported by NSF and the Department of Education.
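
    The noise-bias insensitivity of a cross-correlation power spectrum can be demonstrated numerically; the toy Python sketch below (not QUIET code, a 1-D stand-in for the map-domain calculation) shows that the same "sky" signal observed twice with independent noise keeps its cross-spectrum, while each auto-spectrum is biased high by the noise power.

    ```python
    # Two "maps" of the same sky signal with independent noise: the auto-
    # spectrum is biased high by the noise power, the cross-spectrum is not.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 4096
    signal = rng.normal(size=n)                       # common sky modes, power 1
    map1 = signal + rng.normal(scale=2.0, size=n)     # independent noise, power 4
    map2 = signal + rng.normal(scale=2.0, size=n)

    f1, f2 = np.fft.rfft(map1), np.fft.rfft(map2)
    auto = np.mean(np.abs(f1)**2) / n                 # ~5: signal + noise power
    cross = np.mean((f1 * np.conj(f2)).real) / n      # ~1: signal power only
    print(f"auto-spectrum power:  {auto:.2f}")
    print(f"cross-spectrum power: {cross:.2f}")
    ```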

  6. A comprehensive assessment of somatic mutation detection in cancer using whole-genome sequencing

    PubMed Central

    Alioto, Tyler S.; Buchhalter, Ivo; Derdak, Sophia; Hutter, Barbara; Eldridge, Matthew D.; Hovig, Eivind; Heisler, Lawrence E.; Beck, Timothy A.; Simpson, Jared T.; Tonon, Laurie; Sertier, Anne-Sophie; Patch, Ann-Marie; Jäger, Natalie; Ginsbach, Philip; Drews, Ruben; Paramasivam, Nagarajan; Kabbe, Rolf; Chotewutmontri, Sasithorn; Diessl, Nicolle; Previti, Christopher; Schmidt, Sabine; Brors, Benedikt; Feuerbach, Lars; Heinold, Michael; Gröbner, Susanne; Korshunov, Andrey; Tarpey, Patrick S.; Butler, Adam P.; Hinton, Jonathan; Jones, David; Menzies, Andrew; Raine, Keiran; Shepherd, Rebecca; Stebbings, Lucy; Teague, Jon W.; Ribeca, Paolo; Giner, Francesc Castro; Beltran, Sergi; Raineri, Emanuele; Dabad, Marc; Heath, Simon C.; Gut, Marta; Denroche, Robert E.; Harding, Nicholas J.; Yamaguchi, Takafumi N.; Fujimoto, Akihiro; Nakagawa, Hidewaki; Quesada, Víctor; Valdés-Mas, Rafael; Nakken, Sigve; Vodák, Daniel; Bower, Lawrence; Lynch, Andrew G.; Anderson, Charlotte L.; Waddell, Nicola; Pearson, John V.; Grimmond, Sean M.; Peto, Myron; Spellman, Paul; He, Minghui; Kandoth, Cyriac; Lee, Semin; Zhang, John; Létourneau, Louis; Ma, Singer; Seth, Sahil; Torrents, David; Xi, Liu; Wheeler, David A.; López-Otín, Carlos; Campo, Elías; Campbell, Peter J.; Boutros, Paul C.; Puente, Xose S.; Gerhard, Daniela S.; Pfister, Stefan M.; McPherson, John D.; Hudson, Thomas J.; Schlesner, Matthias; Lichter, Peter; Eils, Roland; Jones, David T. W.; Gut, Ivo G.

    2015-01-01

    As whole-genome sequencing for cancer genome analysis becomes a clinical tool, a full understanding of the variables affecting sequencing analysis output is required. Here using tumour-normal sample pairs from two different types of cancer, chronic lymphocytic leukaemia and medulloblastoma, we conduct a benchmarking exercise within the context of the International Cancer Genome Consortium. We compare sequencing methods, analysis pipelines and validation methods. We show that using PCR-free methods and increasing sequencing depth to ∼100 × shows benefits, as long as the tumour:control coverage ratio remains balanced. We observe widely varying mutation call rates and low concordance among analysis pipelines, reflecting the artefact-prone nature of the raw data and lack of standards for dealing with the artefacts. However, we show that, using the benchmark mutation set we have created, many issues are in fact easy to remedy and have an immediate positive impact on mutation detection accuracy. PMID:26647970

  7. Landslide and Land Subsidence Hazards to Pipelines

    USGS Publications Warehouse

    Baum, Rex L.; Galloway, Devin L.; Harp, Edwin L.

    2008-01-01

    Landslides and land subsidence pose serious hazards to pipelines throughout the world. Many existing pipeline corridors and more and more new pipelines cross terrain that is affected by either landslides, land subsidence, or both. Consequently the pipeline industry recognizes a need for increased awareness of methods for identifying and evaluating landslide and subsidence hazard for pipeline corridors. This report was prepared in cooperation with the U.S. Department of Transportation Pipeline and Hazardous Materials Safety Administration, and Pipeline Research Council International through a cooperative research and development agreement (CRADA) with DGH Consulting, Inc., to address the need for up-to-date information about current methods to identify and assess these hazards. Chapters in this report (1) describe methods for evaluating landslide hazard on a regional basis, (2) describe the various types of land subsidence hazard in the United States and available methods for identifying and quantifying subsidence, and (3) summarize current methods for investigating individual landslides. In addition to the descriptions, this report provides information about the relative costs, limitations and reliability of various methods.

  8. Research on numerical simulation and protection of transient process in long-distance slurry transportation pipelines

    NASA Astrophysics Data System (ADS)

    Lan, G.; Jiang, J.; Li, D. D.; Yi, W. S.; Zhao, Z.; Nie, L. N.

    2013-12-01

    The calculation of single-phase liquid water-hammer pressure in a pipeline of uniform characteristics is already well developed, but less research has addressed the calculation of water-hammer pressure in complex pipelines with slurry flows carrying solid particles. In this paper, based on developments in slurry pipelines at home and abroad, the fundamental principles and methods of numerical simulation of transient processes are presented, and several boundary conditions are given. A model for calculating the water hammer of the solid and fluid phases is established for a practical long-distance slurry transportation pipeline system. After numerical simulation of the transient process and analysis and comparison of the results, effective protection measures and operating suggestions are presented, which have guiding significance for the design and operating management of practical long-distance slurry pipeline transportation systems.
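
    For readers unfamiliar with transient simulation, the sketch below shows the classical single-phase method of characteristics that such slurry models extend; the geometry, wave speed, and instant-closure boundary condition are made-up values, and solid-phase (slurry) effects are deliberately omitted.

    ```python
    # Single-phase water hammer by the method of characteristics (MOC):
    # reservoir upstream, instantly closed valve downstream.
    import numpy as np

    g, a = 9.81, 1000.0          # gravity (m/s^2), pressure-wave speed (m/s)
    L, D, f = 2000.0, 0.5, 0.02  # pipe length (m), diameter (m), friction factor
    H_res, Q0 = 100.0, 0.4       # reservoir head (m), initial flow (m^3/s)
    N = 40                       # number of reaches
    A = np.pi * D**2 / 4
    dx = L / N
    dt = dx / a                  # MOC grid condition: dt = dx / a
    B = a / (g * A)
    R = f * dx / (2 * g * D * A**2)

    H = np.full(N + 1, H_res)    # crude initial state (friction head loss ignored)
    Q = np.full(N + 1, Q0)

    for step in range(int(2.0 / dt)):        # simulate 2 s after the valve slams
        CP = H[:-1] + B * Q[:-1] - R * Q[:-1] * np.abs(Q[:-1])  # C+ from node i-1
        CM = H[1:] - B * Q[1:] + R * Q[1:] * np.abs(Q[1:])      # C- from node i+1
        Hn, Qn = H.copy(), Q.copy()
        Qn[1:-1] = (CP[:-1] - CM[1:]) / (2 * B)  # interior nodes
        Hn[1:-1] = (CP[:-1] + CM[1:]) / 2
        Hn[0], Qn[0] = H_res, (H_res - CM[0]) / B   # upstream reservoir
        Qn[-1], Hn[-1] = 0.0, CP[-1]                # instantly closed valve
        H, Q = Hn, Qn

    print(f"peak head at valve: {H[-1]:.1f} m "
          f"(Joukowsky estimate: {H_res + a * Q0 / (g * A):.1f} m)")
    ```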

  9. A detailed comparison of analysis processes for MCC-IMS data in disease classification—Automated methods can replace manual peak annotations

    PubMed Central

    Horsch, Salome; Kopczynski, Dominik; Kuthe, Elias; Baumbach, Jörg Ingo; Rahmann, Sven

    2017-01-01

    Motivation Disease classification from molecular measurements typically requires an analysis pipeline from raw noisy measurements to final classification results. Multi capillary column—ion mobility spectrometry (MCC-IMS) is a promising technology for the detection of volatile organic compounds in the air of exhaled breath. From raw measurements, the peak regions representing the compounds have to be identified, quantified, and clustered across different experiments. Currently, several steps of this analysis process require manual intervention of human experts. Our goal is to identify a fully automatic pipeline that yields competitive disease classification results compared to an established but subjective and tedious semi-manual process. Method We combine a large number of modern methods for peak detection, peak clustering, and multivariate classification into analysis pipelines for raw MCC-IMS data. We evaluate all combinations on three different real datasets in an unbiased cross-validation setting. We determine which specific algorithmic combinations lead to high AUC values in disease classifications across the different medical application scenarios. Results The best fully automated analysis process achieves even better classification results than the established manual process. The best algorithms for the three analysis steps are (i) SGLTR (Savitzky-Golay Laplace-operator filter thresholding regions) and LM (Local Maxima) for automated peak identification, (ii) EM clustering (Expectation Maximization) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) for the clustering step and (iii) RF (Random Forest) for multivariate classification. Thus, automated methods can replace the manual steps in the analysis process to enable an unbiased high throughput use of the technology. PMID:28910313
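
    Two of the winning steps can be sketched with scikit-learn as follows; the peak coordinates, features, and parameters are invented for illustration and are not the tuned values from the study.

    ```python
    # DBSCAN groups peak detections of the same compound across measurements;
    # a random forest then classifies per-measurement feature vectors.
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Peaks as (retention time, inverse mobility) pairs from many runs.
    centres = [(0.2, 0.5), (0.4, 0.7), (0.6, 0.6)]
    peaks = np.vstack([rng.normal(loc=c, scale=0.01, size=(20, 2)) for c in centres])
    cluster_ids = DBSCAN(eps=0.05, min_samples=5).fit_predict(peaks)
    print("peak clusters found:", len(set(cluster_ids) - {-1}))   # 3

    # Toy per-measurement intensities for the 3 clusters -> diagnosis label.
    X = rng.normal(size=(60, 3))
    y = rng.integers(0, 2, size=60)   # 0 = control, 1 = disease (random toy labels)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print("training accuracy (overfit on random toy labels):", clf.score(X, y))
    ```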

  10. VIPER: Visualization Pipeline for RNA-seq, a Snakemake workflow for efficient and complete RNA-seq analysis.

    PubMed

    Cornwell, MacIntosh; Vangala, Mahesh; Taing, Len; Herbert, Zachary; Köster, Johannes; Li, Bo; Sun, Hanfei; Li, Taiwen; Zhang, Jian; Qiu, Xintao; Pun, Matthew; Jeselsohn, Rinath; Brown, Myles; Liu, X Shirley; Long, Henry W

    2018-04-12

    RNA sequencing has become a ubiquitous technology used throughout life sciences as an effective method of measuring RNA abundance quantitatively in tissues and cells. The increase in use of RNA-seq technology has led to the continuous development of new tools for every step of analysis from alignment to downstream pathway analysis. However, effectively using these analysis tools in a scalable and reproducible way can be challenging, especially for non-experts. Using the workflow management system Snakemake we have developed a user friendly, fast, efficient, and comprehensive pipeline for RNA-seq analysis. VIPER (Visualization Pipeline for RNA-seq analysis) is an analysis workflow that combines some of the most popular tools to take RNA-seq analysis from raw sequencing data, through alignment and quality control, into downstream differential expression and pathway analysis. VIPER has been created in a modular fashion to allow for the rapid incorporation of new tools to expand the capabilities. This capacity has already been exploited to include very recently developed tools that explore immune infiltrate and T-cell CDR (Complementarity-Determining Regions) reconstruction abilities. The pipeline has been conveniently packaged such that minimal computational skills are required to download and install the dozens of software packages that VIPER uses. VIPER is a comprehensive solution that performs most standard RNA-seq analyses quickly and effectively with a built-in capacity for customization and expansion.
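
    Although VIPER's actual rule set is much larger, the flavour of a Snakemake workflow can be conveyed with a minimal sketch; the file layout, sample names, and tool invocations below are hypothetical and abbreviated, not VIPER's real rules.

    ```python
    # Hypothetical Snakefile fragment: two samples, align then count.
    SAMPLES = ["sampleA", "sampleB"]

    rule all:
        input:
            expand("counts/{sample}.txt", sample=SAMPLES)

    rule align:
        input:
            "fastq/{sample}.fastq.gz"
        output:
            "bam/{sample}.bam"
        shell:
            "STAR --genomeDir star_index --readFilesCommand zcat "
            "--readFilesIn {input} --outSAMtype BAM Unsorted "
            "--outFileNamePrefix bam/{wildcards.sample}. && "
            "mv bam/{wildcards.sample}.Aligned.out.bam {output}"

    rule count:
        input:
            "bam/{sample}.bam"
        output:
            "counts/{sample}.txt"
        shell:
            "featureCounts -a genes.gtf -o {output} {input}"
    ```

    Snakemake infers the dependency graph from the file names, so adding a sample to SAMPLES is enough to trigger both steps for it; this is the property that makes such workflows scalable and reproducible.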

  11. SAND: an automated VLBI imaging and analysing pipeline - I. Stripping component trajectories

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Collioud, A.; Charlot, P.

    2018-02-01

    We present our implementation of an automated very long baseline interferometry (VLBI) data-reduction pipeline that is dedicated to interferometric data imaging and analysis. The pipeline can handle massive VLBI data efficiently, which makes it an appropriate tool to investigate multi-epoch multiband VLBI data. Compared to traditional manual data reduction, our pipeline provides more objective results as less human interference is involved. The source extraction is carried out in the image plane, while deconvolution and model fitting are performed in both the image plane and the uv plane for parallel comparison. The output from the pipeline includes catalogues of CLEANed images and reconstructed models, polarization maps, proper motion estimates, core light curves and multiband spectra. We have developed a regression STRIP algorithm to automatically detect linear or non-linear patterns in the jet component trajectories. This algorithm offers an objective method to match jet components at different epochs and to determine their proper motions.
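
    The published STRIP algorithm is not reproduced here, but the underlying idea of testing a component trajectory for linearity can be sketched in a few lines of Python by comparing fits of different polynomial degree (synthetic positions, made-up motion parameters).

    ```python
    # Fit linear and quadratic models to a component's positions over epochs;
    # a clearly lower residual for the quadratic flags a non-linear trajectory.
    import numpy as np

    epochs = np.linspace(0.0, 10.0, 11)            # years since first epoch
    position = 0.10 * epochs + 0.005 * epochs**2   # core separation in mas

    for degree in (1, 2):
        coeffs = np.polyfit(epochs, position, degree)
        resid = position - np.polyval(coeffs, epochs)
        print(f"degree {degree}: rms residual = {np.sqrt(np.mean(resid**2)):.4f} mas")
    # For the linear model, coeffs[0] is the proper-motion estimate (mas/yr).
    ```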

  12. Next Generation Sequence Analysis and Computational Genomics Using Graphical Pipeline Workflows

    PubMed Central

    Torri, Federica; Dinov, Ivo D.; Zamanyan, Alen; Hobel, Sam; Genco, Alex; Petrosyan, Petros; Clark, Andrew P.; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Knowles, James A.; Ames, Joseph; Kesselman, Carl; Toga, Arthur W.; Potkin, Steven G.; Vawter, Marquis P.; Macciardi, Fabio

    2012-01-01

    Whole-genome and exome sequencing have already proven to be essential and powerful methods to identify genes responsible for simple Mendelian inherited disorders. These methods can be applied to complex disorders as well, and have been adopted as one of the current mainstream approaches in population genetics. These achievements have been made possible by next generation sequencing (NGS) technologies, which require substantial bioinformatics resources to analyze the dense and complex sequence data. The huge analytical burden of data from genome sequencing might be seen as a bottleneck slowing the publication of NGS papers at this time, especially in psychiatric genetics. We review the existing methods for processing NGS data, to place into context the rationale for the design of a computational resource. We describe our method, the Graphical Pipeline for Computational Genomics (GPCG), to perform the computational steps required to analyze NGS data. The GPCG implements flexible workflows for basic sequence alignment, sequence data quality control, single nucleotide polymorphism analysis, copy number variant identification, annotation, and visualization of results. These workflows cover all the analytical steps required for NGS data, from processing the raw reads to variant calling and annotation. The current version of the pipeline is freely available at http://pipeline.loni.ucla.edu. These applications of NGS analysis may gain clinical utility in the near future (e.g., identifying miRNA signatures in diseases) when the bioinformatics approach is made feasible. Taken together, the annotation tools and strategies that have been developed to retrieve information and test hypotheses about the functional role of variants present in the human genome will help to pinpoint the genetic risk factors for psychiatric disorders. PMID:23139896

  13. DDBJ read annotation pipeline: a cloud computing-based pipeline for high-throughput analysis of next-generation sequencing data.

    PubMed

    Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu

    2013-08-01

    High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge to molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for a high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing by NIG supercomputers currently free of charge. The proposed pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components in the pipeline, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads of the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline through the input of only an accession number. This proposed pipeline will facilitate research by utilizing unified analytical workflows applied to the NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/.

  14. DDBJ Read Annotation Pipeline: A Cloud Computing-Based Pipeline for High-Throughput Analysis of Next-Generation Sequencing Data

    PubMed Central

    Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu

    2013-01-01

    High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge to molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for a high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing by NIG supercomputers currently free of charge. The proposed pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components in the pipeline, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads of the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline through the input of only an accession number. This proposed pipeline will facilitate research by utilizing unified analytical workflows applied to the NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/. PMID:23657089

  15. Methods for protecting subsea pipelines and installations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rochelle, W.R.; Simpson, D.M.

    1981-01-01

    The hazards for subsea pipelines and installations are described. Methods currently being used to protect subsea pipelines and installations are discussed with the emphasis on various trenching methods and equipment. Technical data on progress rates for trenching and feasible depths of trench are given. Possible methods for protection against icebergs are discussed. A case for more comprehensive data on icebergs is presented. Should a pipeline become damaged, repair methods are noted.

  16. Research on Buckling State of Prestressed Fiber-Strengthened Steel Pipes

    NASA Astrophysics Data System (ADS)

    Wang, Ruheng; Lan, Kunchang

    2018-01-01

    The main restorative methods for damaged oil and gas pipelines include welding reinforcement, fixture reinforcement and fiber material reinforcement. Owing to the severe corrosion problems of pipes in practical use, research on renovation and consolidation techniques for damaged pipes has gained extensive attention from experts and scholars both at home and abroad. The analysis of the mechanical behavior of reinforced pressure pipelines and further studies focusing on the critical buckling and intensity of pressure pipeline failure are conducted in this paper, providing a theoretical basis for prestressed fiber-strengthened steel pipes. Deformation coordination equations and buckling control equations of steel pipes under the effect of prestress are derived using the Rayleigh-Ritz method, an approximation method based on the potential energy stationary value theorem and the minimum potential energy principle. According to the deformation of prestressed steel pipes, the deflection differential equation of prestressed steel pipes is established, and the critical buckling value under prestress is obtained.

  17. The Application of Simulation Method in Isothermal Elastic Natural Gas Pipeline

    NASA Astrophysics Data System (ADS)

    Xing, Chunlei; Guan, Shiming; Zhao, Yue; Cao, Jinggang; Chu, Yanji

    2018-02-01

    The elastic pipeline mathematical model is of crucial importance in natural gas pipeline simulation because of its compliance with practical industrial cases. However, the elasticity of the pipeline introduces nonlinear complexity into the discretized equations, so the Newton-Raphson method cannot achieve fast convergence in this kind of problem. Therefore, a new Newton-based method with the Powell-Wolfe condition is presented to simulate isothermal elastic pipeline flow. Results obtained by the new method are given for the defined boundary conditions. It is shown that the method converges in all cases and significantly reduces computational cost.
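
    As a hedged sketch of the general idea, the Python snippet below applies a damped Newton iteration with a backtracking sufficient-decrease test to a toy nonlinear system standing in for the discretized pipeline equations; the paper's actual equations and its Powell-Wolfe implementation are not reproduced.

    ```python
    # Damped Newton with a backtracking sufficient-decrease test (a simplified
    # line-search acceptance rule, illustrating the globalization idea).
    import numpy as np

    def F(x):   # toy residual system (stand-in for pipeline flow equations)
        return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**3 - 5.0])

    def J(x):   # Jacobian of F
        return np.array([[2.0 * x[0], 1.0], [1.0, 3.0 * x[1]**2]])

    x = np.array([3.0, 3.0])
    for it in range(50):
        r = F(x)
        if np.linalg.norm(r) < 1e-12:
            break
        dx = np.linalg.solve(J(x), -r)      # Newton direction
        m0, t = 0.5 * r @ r, 1.0            # merit function 0.5*||F||^2
        while t > 1e-8:
            r_trial = F(x + t * dx)
            if 0.5 * r_trial @ r_trial <= (1.0 - 1e-4 * t) * m0:
                break                       # sufficient decrease achieved
            t *= 0.5                        # otherwise damp the step
        x = x + t * dx
    print(f"solution {x} after {it} iterations, |F| = {np.linalg.norm(F(x)):.1e}")
    ```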

  18. Development of Time-Distance Helioseismology Data Analysis Pipeline for SDO/HMI

    NASA Technical Reports Server (NTRS)

    DuVall, T. L., Jr.; Zhao, J.; Couvidat, S.; Parchevsky, K. V.; Beck, J.; Kosovichev, A. G.; Scherrer, P. H.

    2008-01-01

    The Helioseismic and Magnetic Imager of SDO will provide uninterrupted 4k x 4k-pixel Doppler-shift images of the Sun with approximately 40 sec cadence. These data will have a unique potential for advancing local helioseismic diagnostics of the Sun's interior structure and dynamics. They will help to understand the basic mechanisms of solar activity and develop predictive capabilities for NASA's Living with a Star program. Because of the tremendous amount of data, the HMI team is developing a data analysis pipeline, which will provide maps of subsurface flows and sound-speed distributions inferred from the Doppler data by the time-distance technique. We discuss the development plan, methods, and algorithms, and present the status of the pipeline, testing results and examples of the data products.

  19. Optimizing Preprocessing and Analysis Pipelines for Single-Subject FMRI. I. Standard Temporal Motion and Physiological Noise Correction Methods

    PubMed Central

    Churchill, Nathan W.; Oder, Anita; Abdi, Hervé; Tam, Fred; Lee, Wayne; Thomas, Christopher; Ween, Jon E.; Graham, Simon J.; Strother, Stephen C.

    2016-01-01

    Subject-specific artifacts caused by head motion and physiological noise are major confounds in BOLD fMRI analyses. However, there is little consensus on the optimal choice of data preprocessing steps to minimize these effects. To evaluate the effects of various preprocessing strategies, we present a framework which comprises a combination of (1) nonparametric testing including reproducibility and prediction metrics of the data-driven NPAIRS framework (Strother et al. [2002]: NeuroImage 15:747–771), and (2) intersubject comparison of SPM effects, using DISTATIS, a three-way version of metric multidimensional scaling (Abdi et al. [2009]: NeuroImage 45:89–95). It is shown that the quality of brain activation maps may be significantly limited by sub-optimal choices of data preprocessing steps (or "pipeline") in a clinical task-design, an fMRI adaptation of the widely used Trail-Making Test. The relative importance of motion correction, physiological noise correction, motion parameter regression, and temporal detrending were examined for fMRI data acquired in young, healthy adults. Analysis performance and the quality of activation maps were evaluated based on Penalized Discriminant Analysis (PDA). The relative importance of different preprocessing steps was assessed by (1) a nonparametric Friedman rank test for fixed sets of preprocessing steps, applied to all subjects; and (2) evaluating pipelines chosen specifically for each subject. Results demonstrate that preprocessing choices have significant, but subject-dependent effects, and that individually-optimized pipelines may significantly improve the reproducibility of fMRI results over fixed pipelines. This was demonstrated by the detection of a significant interaction with motion parameter regression and physiological noise correction, even though the range of subject head motion was small across the group (≪ 1 voxel). Optimizing pipelines on an individual-subject basis also revealed brain activation patterns either weak or absent under fixed pipelines, which has implications for the overall interpretation of fMRI data, and the relative importance of preprocessing methods. PMID:21455942

  20. Method and system for pipeline communication

    DOEpatents

    Richardson,; John, G [Idaho Falls, ID

    2008-01-29

    A pipeline communication system and method includes a pipeline having a surface extending along at least a portion of the length of the pipeline. A conductive bus is formed on and extends along a portion of the surface of the pipeline. The conductive bus includes a first conductive trace and a second conductive trace, the first and second conductive traces being adapted to conformally couple with the pipeline at the surface extending along at least a portion of its length. A transmitter for sending information along the conductive bus on the pipeline is coupled thereto, and a receiver for receiving the information from the conductive bus on the pipeline is also coupled to the conductive bus.

  1. Granatum: a graphical single-cell RNA-Seq analysis pipeline for genomics scientists.

    PubMed

    Zhu, Xun; Wolfgruber, Thomas K; Tasato, Austin; Arisdakessian, Cédric; Garmire, David G; Garmire, Lana X

    2017-12-05

    Single-cell RNA sequencing (scRNA-Seq) is an increasingly popular platform to study heterogeneity at the single-cell level. Computational methods to process scRNA-Seq data are not very accessible to bench scientists as they require significant bioinformatics skills. We have developed Granatum, a web-based scRNA-Seq analysis pipeline to make analysis more broadly accessible to researchers. Without a single line of programming code, users can click through the pipeline, setting parameters and visualizing results via the interactive graphical interface. Granatum conveniently walks users through various steps of scRNA-Seq analysis. It has a comprehensive list of modules, including plate merging and batch-effect removal, outlier-sample removal, gene-expression normalization, imputation, gene filtering, cell clustering, differential gene expression analysis, pathway/ontology enrichment analysis, protein network interaction visualization, and pseudo-time cell series construction. Granatum enables broad adoption of scRNA-Seq technology by empowering bench scientists with an easy-to-use graphical interface for scRNA-Seq data analysis. The package is freely available for research use at http://garmiregroup.org/granatum/app.

  2. Microseismic response characteristics modeling and locating of underground water supply pipe leak

    NASA Astrophysics Data System (ADS)

    Wang, J.; Liu, J.

    2015-12-01

    In traditional methods of pipeline leak location, geophones must be placed on the pipe wall; if the exact location of the pipeline is unknown, leaks cannot be identified accurately. To solve this problem, taking into account the characteristics of pipeline leaks, we propose a continuous random seismic source model and construct geological models to investigate the proposed method for locating underground pipeline leaks. Based on two-dimensional (2D) viscoacoustic equations and the staggered-grid finite-difference (FD) algorithm, the microseismic wave field generated by a leaking pipe is modeled. Cross-correlation analysis and the simulated annealing (SA) algorithm were utilized to obtain the time difference and the leak location. We also analyze and discuss the effect of the number of recorded traces, the survey layout, and the offset and interval of the traces on the accuracy of the estimated location. The preliminary results of the simulation and field experiment indicate that (1) a continuous random source can realistically represent the leak microseismic wave field in a simulation using 2D viscoacoustic equations and a staggered-grid FD algorithm. (2) The cross-correlation method is effective for calculating the time difference of the direct wave relative to the reference trace; however, outside the refraction blind zone, the accuracy of the time difference is reduced by the effects of the refracted wave. (3) The acquisition of the time difference based on microseismic theory and the SA algorithm has great potential for locating leaks in underground pipelines from an array located on the ground surface. Keywords: viscoacoustic finite-difference simulation; continuous random source; simulated annealing algorithm; pipeline leak location
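
    The cross-correlation step can be illustrated with synthetic traces: the Python sketch below estimates the arrival-time difference of a continuous random "leak" signal between two geophone traces (the FD-modelled wave field and the SA location search are beyond this example).

    ```python
    # Estimate the time difference of a leak signal between two traces.
    import numpy as np

    fs = 1000.0                          # sampling rate (Hz)
    n = 1000                             # 1 s of data
    rng = np.random.default_rng(0)
    source = rng.normal(size=n)          # leak as a continuous random source

    true_lag = 25                        # samples, i.e. 25 ms
    trace_ref = source + 0.1 * rng.normal(size=n)            # reference trace
    trace_2 = np.roll(source, true_lag) + 0.1 * rng.normal(size=n)

    xcorr = np.correlate(trace_2, trace_ref, mode="full")
    lag = np.argmax(xcorr) - (n - 1)     # lag of the correlation peak
    print(f"estimated delay: {lag / fs * 1e3:.1f} ms (true: {true_lag} ms)")
    ```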

  3. Strain-Based Design Methodology of Large Diameter Grade X80 Linepipe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lower, Mark D.

    2014-04-01

    Continuous growth in energy demand is driving oil and natural gas production to areas that are often located far from major markets where the terrain is prone to earthquakes, landslides, and other types of ground motion. Transmission pipelines that cross this type of terrain can experience large longitudinal strains and plastic circumferential elongation as the pipeline experiences alignment changes resulting from differential ground movement. Such displacements can potentially impact pipeline safety by adversely affecting structural capacity and leak-tight integrity of the linepipe steel. Planning for new long-distance transmission pipelines usually involves consideration of higher strength linepipe steels because their use allows pipeline operators to reduce the overall cost of pipeline construction and increase pipeline throughput by increasing the operating pressure. The design trend for new pipelines in areas prone to ground movement has evolved over the last 10 years from a stress-based design approach to a strain-based design (SBD) approach to further realize the cost benefits from using higher strength linepipe steels. This report presents an overview of SBD for pipelines subjected to large longitudinal strain and high internal pressure with emphasis on the tensile strain capacity of high-strength microalloyed linepipe steel. The technical basis for this report involved engineering analysis and examination of the mechanical behavior of Grade X80 linepipe steel in both the longitudinal and circumferential directions. Testing was conducted to assess effects of material processing, including as-rolled, expanded, and heat-treatment processing intended to simulate coating application. Elastic-plastic and low-cycle fatigue analyses were also performed with varying internal pressures. Proposed SBD models discussed in this report are based on classical plasticity theory and account for material anisotropy, triaxial strain, and microstructural damage effects developed from test data. The results are intended to enhance SBD and analysis methods for producing safe and cost-effective pipelines capable of accommodating large plastic strains in seismically active arctic areas.

  4. Analysis of the strength of sea gas pipelines of positive buoyancy conditioned by glaciation

    NASA Astrophysics Data System (ADS)

    Malkov, Venyamin; Kurbatova, Galina; Ermolaeva, Nadezhda; Malkova, Yulia; Petrukhin, Ruslan

    2018-05-01

    A technique for estimating the stress state of a gas pipeline laid along the seabed in northern latitudes in the presence of glaciation is proposed. It is assumed that the pipeline lies on the seabed, but under certain conditions glaciation forms on some part of the pipeline, and the glaciated section can lift off the ground due to the positive buoyancy of the ice. Calculation of the additional stresses caused by bending of the pipeline is of practical interest for strength evaluation. The gas pipeline is a two-layer cylindrical shell of circular cross section: the inner layer is made of high-strength steel, the outer layer of reinforced ferroconcrete. The proposed methodology for assessing the strength of the gas pipeline is based on the equations of shell theory. The procedure takes into account the internal gas pressure, the external pressure of sea water, the weight of the two-layer pipeline and the weight of the ice layer. The lifting force created by the displaced fluid and the positive buoyancy of the ice is also taken into account. It is significant that the listed loads cause only two types of deformation of the gas pipeline: axisymmetric and antisymmetric. The interaction of the pipeline with the ground as an elastic foundation is not considered. The main objective of the research is to establish whether part of the pipeline separates from the ground. A method for calculating the stresses and deformations occurring in a model sea gas pipeline is presented.

  5. [Character accentuations as a criterion for psychological risks in the professional activity of the builders of main gas pipelines in the conditions of arctic].

    PubMed

    Korneeva, Ia A; Simonova, N N

    2015-01-01

    The article is devoted to the study of character accentuations as a criterion for psychological risks in the professional activity of builders of main gas pipelines in Arctic conditions. The aim was to study the severity of character accentuations in rotation-employed builders of main gas pipelines, conditioned by their professional activities, as well as the personal resources available to overcome these destructive effects. The study involved 70 rotation-employed builders of trunk pipelines working in the Tyumen Region (duration of the shift: 52 days), aged 23 to 59 (mean age 34.9 ± 8.1) years, with work experience from 0.5 to 14 years (average 4.42 ± 3.1). Methods of the study: questionnaires, psychological testing, participant observation; one-sample Student's t-test, multiple regression analysis and stepwise analysis. Differences in the expression of character accentuations were revealed between builders of trunk pipelines with less and with more than five years of rotation work experience. It was determined that builders of main gas pipelines working on rotation in the Arctic with more pronounced character accentuations mainly use the psychological defenses of compensation, substitution and denial, and have an average level of flexibility as a regulatory process.

  6. archiDART v3.0: A new data analysis pipeline allowing the topological analysis of plant root systems.

    PubMed

    Delory, Benjamin M; Li, Mao; Topp, Christopher N; Lobet, Guillaume

    2018-01-01

    Quantifying plant morphology is a very challenging task that requires methods able to capture the geometry and topology of plant organs at various spatial scales. Recently, the use of persistent homology as a mathematical framework to quantify plant morphology has been successfully demonstrated for leaves, shoots, and root systems. In this paper, we present a new data analysis pipeline implemented in the R package archiDART to analyse root system architectures using persistent homology. In addition, we also show that both geometric and topological descriptors are necessary to accurately compare root systems and assess their natural complexity.

  7. archiDART v3.0: A new data analysis pipeline allowing the topological analysis of plant root systems

    PubMed Central

    Delory, Benjamin M.; Li, Mao; Topp, Christopher N.; Lobet, Guillaume

    2018-01-01

    Quantifying plant morphology is a very challenging task that requires methods able to capture the geometry and topology of plant organs at various spatial scales. Recently, the use of persistent homology as a mathematical framework to quantify plant morphology has been successfully demonstrated for leaves, shoots, and root systems. In this paper, we present a new data analysis pipeline implemented in the R package archiDART to analyse root system architectures using persistent homology. In addition, we also show that both geometric and topological descriptors are necessary to accurately compare root systems and assess their natural complexity. PMID:29636899

  8. Phylogenetic analysis of a biofilm bacterial population in a water pipeline in the Gulf of Mexico.

    PubMed

    López, Miguel A; Zavala-Díaz de la Serna, F Javier; Jan-Roblero, Janet; Romero, Juan M; Hernández-Rodríguez, César

    2006-10-01

    The aim of this study was to assess the bacterial diversity associated with a corrosive biofilm in a steel pipeline from the Gulf of Mexico used to inject marine water into the oil reservoir. Several aerobic and heterotrophic bacteria were isolated and identified by 16S rRNA gene sequence analysis. Metagenomic DNA was also extracted to perform a denaturing gradient gel electrophoresis analysis of ribosomal genes and to construct a 16S rRNA gene metagenomic library. Denaturing gradient gel electrophoresis profiles and ribosomal libraries exhibited a limited bacterial diversity. Most of the species detected in the ribosomal library or isolated from the pipeline were assigned to Proteobacteria (Halomonas spp., Idiomarina spp., Marinobacter aquaeolei, Thalassospira sp., Silicibacter sp. and Chromohalobacter sp.) and Bacilli (Bacillus spp. and Exiguobacterium spp.). This is the first report that associates some of these bacteria with a corrosive biofilm. It is relevant that no sulfate-reducing bacteria were isolated or detected by a PCR-based method. The diversity and relative abundance of bacteria from water pipeline biofilms may contribute to an understanding of the complexity and mechanisms of metal corrosion during marine water injection in oil secondary recovery.

  9. 49 CFR 192.945 - What methods must an operator use to measure program effectiveness?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline Integrity Management § 192.945 What methods must an operator use to measure program...

  10. PipeCraft: Flexible open-source toolkit for bioinformatics analysis of custom high-throughput amplicon sequencing data.

    PubMed

    Anslan, Sten; Bahram, Mohammad; Hiiesalu, Indrek; Tedersoo, Leho

    2017-11-01

    High-throughput sequencing methods have become a routine analysis tool in environmental sciences as well as in the public and private sectors. These methods provide vast amounts of data, which need to be analysed in several steps. Although the bioinformatics may be performed using several public tools, many analytical pipelines offer too few options for the optimal analysis of more complicated or customized designs. Here, we introduce PipeCraft, a flexible and handy bioinformatics pipeline with a user-friendly graphical interface that links several public tools for analysing amplicon sequencing data. Users are able to customize the pipeline by selecting the most suitable tools and options to process raw sequences from Illumina, Pacific Biosciences, Ion Torrent and Roche 454 sequencing platforms. We describe the design and options of PipeCraft and evaluate its performance by analysing data sets from three different sequencing platforms. We demonstrate that PipeCraft is able to process large data sets within 24 hr. The graphical user interface and the automated links between various bioinformatics tools enable easy customization of the workflow. All analytical steps and options are recorded in log files and are easily traceable.

  11. Comprehensive machine learning analysis of Hydra behavior reveals a stable basal behavioral repertoire

    PubMed Central

    Taralova, Ekaterina; Dupre, Christophe; Yuste, Rafael

    2018-01-01

    Animal behavior has been studied for centuries, but few efficient methods are available to automatically identify and classify it. Quantitative behavioral studies have been hindered by the subjective and imprecise nature of human observation, and the slow speed of annotating behavioral data. Here, we developed an automatic behavior analysis pipeline for the cnidarian Hydra vulgaris using machine learning. We imaged freely behaving Hydra, extracted motion and shape features from the videos, and constructed a dictionary of visual features to classify pre-defined behaviors. We also identified unannotated behaviors with unsupervised methods. Using this analysis pipeline, we quantified 6 basic behaviors and found surprisingly similar behavior statistics across animals within the same species, regardless of experimental conditions. Our analysis indicates that the fundamental behavioral repertoire of Hydra is stable. This robustness could reflect a homeostatic neural control of "housekeeping" behaviors which could have been already present in the earliest nervous systems. PMID:29589829

  12. Risk Analysis using Corrosion Rate Parameter on Gas Transmission Pipeline

    NASA Astrophysics Data System (ADS)

    Sasikirono, B.; Kim, S. J.; Haryadi, G. D.; Huda, A.

    2017-05-01

    In the oil and gas industry, the pipeline is a major component of the oil and gas transmission and distribution process, and the distribution routes sometimes cross various types of environmental conditions. Therefore, a pipeline should operate safely so that it does not harm the surrounding environment. Corrosion is still a major cause of failure in equipment components of production facilities. In pipeline systems, corrosion can cause failures in the wall and damage to the pipeline, so the pipeline system requires care and periodic inspection. Every production facility in an industry has a level of risk for damage, resulting from the probability and consequences of the damage caused. The purpose of this research is to analyze the risk level of a 20-inch natural gas transmission pipeline using semi-quantitative risk-based inspection according to API 581, associating the likelihood of failure with the consequences of failure of an equipment component, and to use the result to determine the next inspection plan. Nine pipeline components were observed: straight pipe inlets, connection tees, and straight pipe outlets. The risk assessment of the nine pipeline components is presented in a risk matrix; the components are found to be at medium risk levels. The failure mechanism considered in this research is thinning. Based on the corrosion rate calculation, the remaining age of the pipeline components can be obtained, so the remaining lifetime of each component is known; the calculated remaining lifetime varies for each component. The next step is planning the inspection of pipeline components by external NDT methods.
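
    The basic thinning arithmetic behind such an assessment is simple; the following Python example uses made-up wall-thickness numbers (API 581 defines the full procedure, including likelihood and consequence factors, which are not reproduced here).

    ```python
    # Corrosion rate and remaining life from wall-thickness measurements.
    nominal_thickness = 12.7      # mm, as-built wall thickness
    measured_thickness = 11.2     # mm, from the latest inspection
    minimum_thickness = 8.9       # mm, required for the design pressure
    years_in_service = 10.0

    corrosion_rate = (nominal_thickness - measured_thickness) / years_in_service
    remaining_life = (measured_thickness - minimum_thickness) / corrosion_rate
    print(f"corrosion rate: {corrosion_rate:.2f} mm/year")   # 0.15 mm/year
    print(f"remaining life: {remaining_life:.1f} years")     # ~15.3 years
    ```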

  13. Unipro UGENE NGS pipelines and components for variant calling, RNA-seq and ChIP-seq data analyses.

    PubMed

    Golosova, Olga; Henderson, Ross; Vaskin, Yuriy; Gabrielian, Andrei; Grekhov, German; Nagarajan, Vijayaraj; Oler, Andrew J; Quiñones, Mariam; Hurt, Darrell; Fursov, Mikhail; Huyen, Yentram

    2014-01-01

    The advent of Next Generation Sequencing (NGS) technologies has opened new possibilities for researchers. However, the more biology becomes a data-intensive field, the more biologists have to learn how to process and analyze NGS data with complex computational tools. Even with the availability of common pipeline specifications, it is often a time-consuming and cumbersome task for a bench scientist to install and configure the pipeline tools. We believe that a unified, desktop and biologist-friendly front end to NGS data analysis tools will substantially improve productivity in this field. Here we present NGS pipelines "Variant Calling with SAMtools", "Tuxedo Pipeline for RNA-seq Data Analysis" and "Cistrome Pipeline for ChIP-seq Data Analysis" integrated into the Unipro UGENE desktop toolkit. We describe the available UGENE infrastructure that helps researchers run these pipelines on different datasets, store and investigate the results and re-run the pipelines with the same parameters. These pipeline tools are included in the UGENE NGS package. Individual blocks of these pipelines are also available for expert users to create their own advanced workflows.

  14. 75 FR 80300 - Five-Year Review of Oil Pipeline Pricing Index

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-22

    .... On September 24, 2010, the U.S. Department of Transportation, Pipeline and Hazardous Materials Safety... pipeline cost changes for the 2004-2009 period: \\12\\ AOPL states that Dr. Shehadeh began his analysis using... typical pipeline operator. Valero states that Mr. O'Loughlin's analysis applied an objective filter which...

  15. Identification of pathogen genomic variants through an integrated pipeline

    PubMed Central

    2014-01-01

    Background Whole-genome sequencing represents a powerful experimental tool for pathogen research. We present methods for the analysis of small eukaryotic genomes, including a streamlined system (called Platypus) for finding single nucleotide and copy number variants as well as recombination events. Results We have validated our pipeline using four sets of Plasmodium falciparum drug resistant data containing 26 clones from 3D7 and Dd2 background strains, identifying an average of 11 single nucleotide variants per clone. We also identify 8 copy number variants with contributions to resistance, and report for the first time that all analyzed amplification events are in tandem. Conclusions The Platypus pipeline provides malaria researchers with a powerful tool to analyze short read sequencing data. It provides an accurate way to detect SNVs using known software packages, and a novel methodology for detection of CNVs, though it does not currently support detection of small indels. We have validated that the pipeline detects known SNVs in a variety of samples while filtering out spurious data. We bundle the methods into a freely available package. PMID:24589256

  16. Computerized image analysis for quantitative neuronal phenotyping in zebrafish.

    PubMed

    Liu, Tianming; Lu, Jianfeng; Wang, Ye; Campbell, William A; Huang, Ling; Zhu, Jinmin; Xia, Weiming; Wong, Stephen T C

    2006-06-15

    An integrated microscope image analysis pipeline is developed for automatic analysis and quantification of phenotypes in zebrafish with altered expression of Alzheimer's disease (AD)-linked genes. We hypothesize that a slight impairment of neuronal integrity in a large number of zebrafish carrying the mutant genotype can be detected through the computerized image analysis method. Key functionalities of our zebrafish image processing pipeline include quantification of neuron loss in zebrafish embryos due to knockdown of AD-linked genes, automatic detection of defective somites, and quantitative measurement of gene expression levels in zebrafish with altered expression of AD-linked genes or treatment with a chemical compound. These quantitative measurements enable the archival of analyzed results and relevant meta-data. The structured database is organized for statistical analysis and data modeling to better understand neuronal integrity and phenotypic changes of zebrafish under different perturbations. Our results show that the computerized analysis is comparable to manual counting with equivalent accuracy and improved efficacy and consistency. Development of such an automated data analysis pipeline represents a significant step forward to achieve accurate and reproducible quantification of neuronal phenotypes in large scale or high-throughput zebrafish imaging studies.

  17. An acceleration system for Laplacian image fusion based on SoC

    NASA Astrophysics Data System (ADS)

    Gao, Liwen; Zhao, Hongtu; Qu, Xiujie; Wei, Tianbo; Du, Peng

    2018-04-01

    Based on an analysis of the Laplacian image fusion algorithm, this paper proposes a partial-pipelining and modular processing architecture, and an SoC-based acceleration system is implemented accordingly. A full pipelining method is used for the design of each module, and modules in series form the partial pipeline with a unified data format, which is easy to manage and reuse. Integrated with an ARM processor, DMA and an embedded bare-metal program, this system achieves a 4-level Laplacian pyramid on the Zynq-7000 board. Experiments show that, with small resource consumption, a pair of 256×256 images can be fused within 1 ms while maintaining a fine fusion effect.
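
    The paper implements the algorithm in hardware; as a software reference point, the hedged Python/OpenCV sketch below performs a 4-level Laplacian-pyramid fusion of two 256×256 images with a max-absolute-coefficient rule (a common choice, not necessarily the paper's).

    ```python
    # Laplacian-pyramid fusion: build pyramids, fuse per level, collapse.
    import cv2
    import numpy as np

    def laplacian_pyramid(img, levels=4):
        pyr, cur = [], img.astype(np.float32)
        for _ in range(levels):
            down = cv2.pyrDown(cur)
            pyr.append(cur - cv2.pyrUp(down, dstsize=cur.shape[1::-1]))
            cur = down
        pyr.append(cur)                       # coarsest Gaussian level
        return pyr

    def fuse(img_a, img_b, levels=4):
        pa = laplacian_pyramid(img_a, levels)
        pb = laplacian_pyramid(img_b, levels)
        # Keep the coefficient with the larger magnitude at every level.
        fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa, pb)]
        out = fused[-1]
        for lap in reversed(fused[:-1]):      # collapse the pyramid
            out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
        return out

    a = np.random.rand(256, 256).astype(np.float32)
    b = np.random.rand(256, 256).astype(np.float32)
    print(fuse(a, b).shape)                   # (256, 256)
    ```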

  18. Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis

    PubMed Central

    Ollenschläger, Malte; Roth, Nils; Klucken, Jochen

    2017-01-01

    Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis. PMID:28832511
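
    One family of double-integration schemes compared in such benchmarks can be sketched as follows (Python, synthetic one-stride data): gravity-free acceleration is integrated twice, with a linear velocity-drift correction pinned to the zero-velocity stance instants; real pipelines additionally estimate sensor orientation, which is assumed already done here.

    ```python
    # One stride of synthetic, gravity-free forward acceleration (assumed
    # already rotated into a global frame by an orientation-estimation step).
    import numpy as np

    fs = 200.0                       # IMU sampling rate (Hz)
    n = 200                          # samples between two stance phases
    acc = np.zeros(n)
    acc[40:60], acc[120:140] = 15.0, -15.0   # toy accelerate/decelerate pulses

    vel = np.cumsum(acc) / fs                # first integration
    vel -= np.linspace(0.0, vel[-1], n)      # ZUPT: zero velocity at both stances
    pos = np.cumsum(vel) / fs                # second integration -> trajectory
    print(f"estimated forward displacement over the stride: {pos[-1]:.2f} m")
    ```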

  19. Radioisotope measurements of the liquid-gas flow in the horizontal pipeline using phase method

    NASA Astrophysics Data System (ADS)

    Hanus, Robert; Zych, Marcin; Jaszczur, Marek; Petryka, Leszek; Świsulski, Dariusz

    2018-06-01

    The paper presents an application of the gamma-absorption method to the investigation of two-phase liquid-gas flow in a horizontal pipeline. The water-air mixture was examined by a set of two Am-241 radioactive sources and two NaI(Tl) scintillation probes. The cross-spectral density function (CSDF) was applied to the analysis of the electrical signals obtained from the detectors. Results of the gas-phase average velocity measurements obtained with the CSDF were compared with results obtained with the classical cross-correlation function (CCF). It was found that the combined uncertainties of the gas-phase velocity in the presented experiments did not exceed 1.6% for the CSDF method and 5.5% for the CCF.
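
    Both estimators reduce to measuring the transit time of flow noise between the two probes. A toy numpy illustration (probe spacing and all signal parameters invented, not the authors' data) compares the CCF lag-peak estimate with a CSDF phase-slope estimate:

      import numpy as np

      rng = np.random.default_rng(0)
      fs, L = 1000.0, 0.1                     # sampling rate [Hz], probe spacing [m] (assumed)
      x = rng.normal(size=5000)               # upstream detector signal
      delay = 25                              # true transit time: 25 samples = 25 ms
      y = np.roll(x, delay) + 0.3 * rng.normal(size=x.size)  # downstream, noisy

      # CCF estimate: lag at which the cross-correlation peaks
      lags = np.arange(-100, 101)
      ccf = [np.dot(x, np.roll(y, -k)) for k in lags]
      tau_ccf = lags[int(np.argmax(ccf))] / fs

      # CSDF estimate: transit time from the slope of the cross-spectrum phase
      X, Y = np.fft.rfft(x), np.fft.rfft(y)
      f = np.fft.rfftfreq(x.size, 1 / fs)
      phase = np.unwrap(np.angle(np.conj(X) * Y))
      band = (f > 1) & (f < 100)              # fit where the signal dominates
      tau_csdf = -np.polyfit(f[band], phase[band], 1)[0] / (2 * np.pi)

      print(L / tau_ccf, L / tau_csdf)        # gas-phase velocity estimates [m/s]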

  20. Assessment of stem cell differentiation based on genome-wide expression profiles.

    PubMed

    Godoy, Patricio; Schmidt-Heck, Wolfgang; Hellwig, Birte; Nell, Patrick; Feuerborn, David; Rahnenführer, Jörg; Kattler, Kathrin; Walter, Jörn; Blüthgen, Nils; Hengstler, Jan G

    2018-07-05

    In recent years, protocols have been established to differentiate stem and precursor cells into more mature cell types. However, progress in this field has been hampered by difficulties in assessing the differentiation status of stem cell-derived cells in an unbiased manner. Here, we present an analysis pipeline based on published data and methods to quantify the degree of differentiation and to identify transcriptional control factors explaining differences from the intended target cells or tissues. The pipeline requires RNA-Seq or gene array data of the stem cell starting population, the derived 'mature' cells and the primary target cells or tissue. It consists of: a principal component analysis to represent global expression changes and to identify possible problems of the dataset that require special attention, such as batch effects; clustering techniques to identify gene groups with similar features; over-representation analysis to characterize biological motifs and transcriptional control factors of the identified gene clusters; and metagenes as well as gene regulatory networks for quantitative cell-type assessment and identification of influential transcription factors. Possibilities and limitations of the analysis pipeline are illustrated using the example of human embryonic stem cells and human induced pluripotent cells used to generate 'hepatocyte-like cells'. The pipeline quantifies the degree of incomplete differentiation as well as remaining stemness and identifies unwanted features, such as colon- and fibroblast-associated gene clusters that are absent in real hepatocytes but typically induced by currently available differentiation protocols. Finally, transcription factors responsible for incomplete and unwanted differentiation are identified. The proposed method is widely applicable and allows an unbiased and quantitative assessment of stem cell-derived cells. This article is part of the theme issue 'Designer human tissue: coming to a lab near you'. © 2018 The Author(s).
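
    The first step of the pipeline is generic enough to sketch: a PCA (here via SVD on a gene-centered matrix) that places stem cell, derived-cell and target-tissue samples in a common low-dimensional space. The data and labels below are synthetic placeholders, not the study's datasets.

      import numpy as np

      rng = np.random.default_rng(1)
      genes = 2000
      labels = ["iPSC"] * 3 + ["HLC"] * 3 + ["liver"] * 3
      expr = rng.normal(size=(9, genes))              # samples x genes (log scale)
      for rows in (slice(0, 3), slice(3, 6), slice(6, 9)):
          expr[rows] += rng.normal(size=genes)        # group-specific signature

      centered = expr - expr.mean(axis=0)             # center each gene
      u, s, vt = np.linalg.svd(centered, full_matrices=False)
      scores = u[:, :2] * s[:2]                       # sample coordinates on PC1/PC2
      for lab, (p1, p2) in zip(labels, scores):
          print(f"{lab:6s} PC1={p1:7.2f} PC2={p2:7.2f}")
      print("variance explained:", (s**2 / (s**2).sum())[:2].round(2))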

  1. Regulatory assessment with regulatory flexibility analysis and paperwork reduction act analysis : draft regulatory evaluation : Notice of Proposed Rulemaking -- Pipeline Safety : Polyamide-11 (PA-11) plastic pipe design pressures

    DOT National Transportation Integrated Search

    2007-06-01

    The Pipeline and Hazardous Materials Safety Administration (PHMSA) is proposing changes to the Federal pipeline safety regulations in 49 CFR Part 192, which cover the transportation of natural gas by pipeline. Specifically, PHMSA is proposing to chan...

  2. 76 FR 75894 - Information Collection Activities: Pipelines and Pipeline Rights-of-Way; Submitted for Office of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-05

    ... pipelines `` * * * for the transportation of oil, natural gas, sulphur, or other minerals, or under such...) Submit repair report 3 1008(f) Submit report of pipeline failure analysis...... 30 1008(g) Submit plan of.... BSEE-2011-0002; OMB Control Number 1010-0050] Information Collection Activities: Pipelines and Pipeline...

  3. Reliability-based management of buried pipelines considering external corrosion defects

    NASA Astrophysics Data System (ADS)

    Miran, Seyedeh Azadeh

    Corrosion is one of the main deterioration mechanisms that degrade energy pipeline integrity, as pipelines transport corrosive fluids or gases and interact with a corrosive environment. Corrosion defects are usually detected by periodic inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed approach is applied to a cathodically protected buried steel pipeline located in Mexico. First, a time-dependent power-law formulation is adopted to probabilistically characterize the growth of the maximum depth and length of external corrosion defects. Dependency between defect depth and length is considered in the model development, and the generation of corrosion defects over time is characterized by a homogeneous Poisson process. The growth models' unknown parameters are estimated from the ILI data through Bayesian updating with the Markov chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and are able to account for defects newly generated since the last inspection. Results of this part of the study show that both the depth and length growth models predict damage quantities reasonably well, and a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using the developed corrosion growth models considering the prevailing uncertainties, where three failure modes, namely small leak, large leak and rupture, are considered. Performance of the pipeline is evaluated through the failure probability per km (a sub-system), where each sub-system is treated as a series system of the detected and newly generated defects within that sub-system. Sensitivity analysis is also performed to determine which parameters of the growth models most strongly affect the reliability of the studied pipeline. The reliability analysis results suggest that newly generated defects should be considered in calculating the failure probability, especially for prediction of the long-term performance of the pipeline, and that the impact of statistical uncertainty in the model parameters is significant and should be considered in the reliability analysis. Finally, with the evaluated time-dependent failure probabilities, a life-cycle cost analysis is conducted to determine the optimal inspection interval of the studied pipeline. The expected total life-cycle cost consists of the construction cost and the expected costs of inspections, repair, and failure. A repair is conducted when the failure probability for any described failure mode exceeds a pre-defined probability threshold after an inspection. Moreover, this study also investigates the impact of repair threshold values and unit costs of inspection and failure on the expected total life-cycle cost and optimal inspection interval through a parametric study. The analysis suggests that a smaller inspection interval leads to higher inspection costs but can lower the failure cost, and that the repair cost is less significant compared to the inspection and failure costs.
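
    The Bayesian updating step can be illustrated with a minimal random-walk Metropolis sampler for a power-law depth-growth model d(t) = a(t - t0)^b fitted to noisy ILI depths. The priors, proposal widths and synthetic data below are assumptions of this sketch, not values from the study.

      import numpy as np

      rng = np.random.default_rng(2)
      t0, sigma = 2000.0, 0.3                         # install year, depth noise (assumed)
      t = np.array([2005.0, 2009.0, 2013.0])          # ILI inspection years
      d_obs = 0.8 * (t - t0) ** 0.6 + rng.normal(0, sigma, t.size)  # synthetic depths [mm]

      def log_post(a, b):
          """Flat priors on a > 0 and 0 < b < 2; Gaussian measurement error."""
          if a <= 0 or not 0 < b < 2:
              return -np.inf
          resid = d_obs - a * (t - t0) ** b
          return -0.5 * np.sum((resid / sigma) ** 2)

      cur = np.array([1.0, 0.5])
      lp = log_post(*cur)
      chain = []
      for _ in range(20000):
          prop = cur + rng.normal(0, [0.05, 0.03])    # random-walk proposal
          lp_prop = log_post(*prop)
          if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept/reject
              cur, lp = prop, lp_prop
          chain.append(cur.copy())
      a_s, b_s = np.array(chain[5000:]).T             # discard burn-in
      print(f"posterior means: a = {a_s.mean():.2f}, b = {b_s.mean():.2f}")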

  4. A new segmentation strategy for processing magnetic anomaly detection data of shallow depth ferromagnetic pipeline

    NASA Astrophysics Data System (ADS)

    Feng, Shuo; Liu, Dejun; Cheng, Xing; Fang, Huafeng; Li, Caifang

    2017-04-01

    Magnetic anomalies produced by underground ferromagnetic pipelines through the polarization of the Earth's magnetic field are used to obtain information on the location, burial depth and other parameters of pipelines. In order to achieve fast inversion and interpretation of measured data, it is necessary to develop a fast and stable forward method. Magnetic dipole reconstruction (MDR), an integration-based numerical method, is well suited for simulating a thin pipeline anomaly. In MDR the pipeline model must be cut into small magnetic dipoles through different segmentation methods. The segmentation method has an impact on the stability and speed of the forward calculation. Rapid and accurate simulation of deep-buried pipelines has been achieved by the existing segmentation method. However, in practical measurement, the depth of an underground pipe is uncertain, and for shallow-buried pipelines the existing segmentation may generate significant errors. This paper addresses this problem in three stages. First, the cause of the inaccuracy is analyzed by simulation experiments. Second, a new variable-interval section segmentation is proposed based on the existing segmentation; it enables the MDR method to obtain simulation results quickly while ensuring the accuracy of models at different depths. Finally, measured data are inverted based on the new segmentation method. The result proves that inversion based on the new segmentation can achieve fast and accurate inversion of the depth parameters of underground pipes without being limited by pipeline depth.
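
    The MDR forward step itself is easy to sketch: slice the pipe into segments, treat each as a point dipole, and superpose the dipole fields at the survey points. The uniform segment spacing and per-segment moment below are placeholders; the paper's contribution is precisely a smarter (variable-interval) choice of that spacing.

      import numpy as np

      MU0 = 4e-7 * np.pi

      def dipole_field(r_obs, r_dip, m):
          """Field (T) of a point dipole m [A*m^2] at r_dip, at points r_obs (N, 3)."""
          r = r_obs - r_dip
          rn = np.linalg.norm(r, axis=1, keepdims=True)
          return MU0 / (4 * np.pi) * (3 * r * (r @ m)[:, None] / rn**5 - m / rn**3)

      # Pipe along x at 2 m depth, cut into 200 equal segments (uniform segmentation)
      xs = np.linspace(-50, 50, 200)
      dipoles = np.stack([xs, np.zeros_like(xs), np.full_like(xs, -2.0)], axis=1)
      m_seg = np.array([0.0, 0.0, 5.0])       # per-segment moment, placeholder value

      obs = np.stack([np.linspace(-50, 50, 101), np.zeros(101), np.zeros(101)], axis=1)
      B = sum(dipole_field(obs, d, m_seg) for d in dipoles)
      print(B[:, 2].max())                    # peak vertical anomaly along the profile [T]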

  5. Reusable, extensible, and modifiable R scripts and Kepler workflows for comprehensive single set ChIP-seq analysis.

    PubMed

    Cormier, Nathan; Kolisnik, Tyler; Bieda, Mark

    2016-07-05

    There has been an enormous expansion of use of chromatin immunoprecipitation followed by sequencing (ChIP-seq) technologies. Analysis of large-scale ChIP-seq datasets involves a complex series of steps and production of several specialized graphical outputs. A number of systems have emphasized custom development of ChIP-seq pipelines. These systems are primarily based on custom programming of a single, complex pipeline or supply libraries of modules and do not produce the full range of outputs commonly produced for ChIP-seq datasets. It is desirable to have more comprehensive pipelines, in particular ones addressing common metadata tasks, such as pathway analysis, and pipelines producing standard complex graphical outputs. It is advantageous if these are highly modular systems, available as both turnkey pipelines and individual modules, that are easily comprehensible, modifiable and extensible to allow rapid alteration in response to new analysis developments in this growing area. Furthermore, it is advantageous if these pipelines allow data provenance tracking. We present a set of 20 ChIP-seq analysis software modules implemented in the Kepler workflow system; most (18/20) were also implemented as standalone, fully functional R scripts. The set consists of four full turnkey pipelines and 16 component modules. The turnkey pipelines in Kepler allow data provenance tracking. Implementation emphasized use of common R packages and widely-used external tools (e.g., MACS for peak finding), along with custom programming. This software presents comprehensive solutions and easily repurposed code blocks for ChIP-seq analysis and pipeline creation. Tasks include mapping raw reads, peak finding via MACS, summary statistics, peak location statistics, summary plots centered on the transcription start site (TSS), gene ontology, pathway analysis, and de novo motif finding, among others. These pipelines range from those performing a single task to those performing full analyses of ChIP-seq data. The pipelines are supplied as both Kepler workflows, which allow data provenance tracking, and, in the majority of cases, as standalone R scripts. These pipelines are designed for ease of modification and repurposing.

  6. TH-AB-207A-05: A Fully-Automated Pipeline for Generating CT Images Across a Range of Doses and Reconstruction Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, S; Lo, P; Hoffman, J

    Purpose: To evaluate the robustness of CAD or Quantitative Imaging methods, they should be tested on a variety of cases and under a variety of image acquisition and reconstruction conditions that represent the heterogeneity encountered in clinical practice. The purpose of this work was to develop a fully-automated pipeline for generating CT images that represent a wide range of dose and reconstruction conditions. Methods: The pipeline consists of three main modules: reduced-dose simulation, image reconstruction, and quantitative analysis. The first two modules of the pipeline can be operated in a completely automated fashion, using configuration files and running the modules in a batch queue. The input to the pipeline is raw projection CT data; this data is used to simulate different levels of dose reduction using a previously-published algorithm. Filtered-backprojection reconstructions are then performed using FreeCT-wFBP, a freely-available reconstruction software for helical CT. We also added support for an in-house, model-based iterative reconstruction algorithm using iterative coordinate-descent optimization, which may be run in tandem with the more conventional reconstruction methods. The reduced-dose simulations and image reconstructions are controlled automatically by a single script, and they can be run in parallel on our research cluster. The pipeline was tested on phantom and lung screening datasets from a clinical scanner (Definition AS, Siemens Healthcare). Results: The images generated from our test datasets appeared to represent a realistic range of acquisition and reconstruction conditions that we would expect to find clinically. The time to generate images was approximately 30 minutes per dose/reconstruction combination on a hybrid CPU/GPU architecture. Conclusion: The automated research pipeline promises to be a useful tool for either training or evaluating performance of quantitative imaging software such as classifiers and CAD algorithms across the range of acquisition and reconstruction parameters present in the clinical environment. Funding support: NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.

  7. Spectral analysis of pipe-to-soil potentials with variations of the Earth's magnetic field in the Australian region

    NASA Astrophysics Data System (ADS)

    Marshall, R. A.; Waters, C. L.; Sciffer, M. D.

    2010-05-01

    Long, steel pipelines used to transport essential resources such as gas and oil are potentially vulnerable to space weather. In order to inhibit corrosion, the pipelines are usually coated in an insulating material and maintained at a negative electric potential with respect to Earth using cathodic protection units. During periods of enhanced geomagnetic activity, potential differences between the pipeline and surrounding soil (referred to as pipe-to-soil potentials (PSPs)) may exhibit large voltage swings which place the pipeline outside the recommended "safe range" and at an increased risk of corrosion. The PSP variations result from the "geoelectric" field at the Earth's surface and associated geomagnetic field variations. Previous research investigating the relationship between the surface geoelectric field and geomagnetic source fields has focused on the high-latitude regions where line currents in the ionosphere E region are often the assumed source of the geomagnetic field variations. For the Australian region Sq currents also contribute to the geomagnetic field variations and provide the major contribution during geomagnetic quiet times. This paper presents the results of a spectral analysis of PSP measurements from four pipeline networks from the Australian region with geomagnetic field variations from nearby magnetometers. The pipeline networks extend from Queensland in the north of Australia to Tasmania in the south and provide PSP variations during both active and quiet geomagnetic conditions. The spectral analyses show both consistent phase and amplitude relationships across all pipelines, even for large separations between magnetometer and PSP sites and for small-amplitude signals. Comparison between the observational relationships and model predictions suggests a method for deriving a geoelectric field proxy suitable for indicating PSP-related space weather conditions.

  8. Design of oil pipeline leak detection and communication system based on optical fiber technology

    NASA Astrophysics Data System (ADS)

    Tu, Yaqing; Chen, Huabo

    1999-08-01

    The integrity of oil pipelines is always a major concern of operators. A pipeline leak not only leads to loss of oil but also pollutes the environment. A new pipeline leak detection and communication system based on optical fiber technology is presented to ensure pipeline reliability. By combining a direct leak detection method with an indirect one, the system greatly reduces the false alarm rate. According to the practical features of oil pipelines, the pipeline communication system is designed using state-of-the-art optical fiber communication technology. The system has such features as high leak-location accuracy and good real-time performance, which effectively overcome the disadvantages of traditional leak detection methods and communication systems.

  9. Characterizing differential gene expression in polyploid grasses lacking a reference transcriptome

    USDA-ARS?s Scientific Manuscript database

    Basal transcriptome characterization and differential gene expression in response to varying conditions are often addressed through next generation sequencing (NGS) and data analysis techniques. While these strategies are commonly used, there are countless tools, pipelines, data analysis methods an...

  10. Health, safety and environmental risk of a gas pipeline in an oil exploring area of Gachsaran.

    PubMed

    Kalatpoor, Omid; Goshtasp, Kambiz; Khavaji, Solieman

    2011-01-01

    The purpose of this study was to assess the health, safety and environmental risk of a gas transfer pipeline in an oil-producing area of Gachsaran. We used Kent's pipeline risk assessment method, with some changes introduced to make the method more practical to apply. A pipeline 16 kilometers in length was selected and, considering the nature of its surroundings, divided into two sections. As in Kent's method, the parameters included third-party damage, corrosion, design factor, incorrect operation index and consequence scoring. The difference here was that for consequence scoring we used the ALOHA 5.6 software instead of Kent's pattern. Results showed that the health, safety and environmental risks of section 2 (the 13 kilometers of pipeline downstream of the first 3 kilometers from the gas station) were greater. The main cause of the higher risk score appears to be the greater third-party activity around section 2, since the scores of all other indexes were almost equal.

  11. Putting the environment into the NPV calculation -- Quantifying pipeline environmental costs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dott, D.R.; Wirasinghe, S.C.; Chakma, A.

    1996-12-31

    Pipeline projects impact the environment through soil and habitat disturbance, noise during construction and compressor operation, river crossing disturbance and the risk of rupture. Assigning monetary value to these negative project consequences enables the environment to be represented in the project cost-benefit analysis. This paper presents the mechanics and implications of two environmental valuation techniques: (1) the contingent valuation method and (2) the stated preference method. The use of environmental value at the project economic-evaluation stage is explained. A summary of research done on relevant environmental attribute valuation is presented and discussed. Recommendations for further research in the field are made.

  12. PipelineDog: a simple and flexible graphic pipeline construction and maintenance tool.

    PubMed

    Zhou, Anbo; Zhang, Yeting; Sun, Yazhou; Xing, Jinchuan

    2018-05-01

    Analysis pipelines are an essential part of bioinformatics research, and ad hoc pipelines are frequently created by researchers for prototyping and proof-of-concept purposes. However, most existing pipeline management systems or workflow engines are too complex for rapid prototyping or for learning the pipeline concept. A lightweight, user-friendly and flexible solution is thus desirable. In this study, we developed a new pipeline construction and maintenance tool, PipelineDog. This is a web-based integrated development environment with a modern web graphical user interface. It offers cross-platform compatibility, project management capabilities, code formatting and error checking functions and an online repository. It uses an easy-to-read/write script system that encourages code reuse. With the online repository, it also encourages sharing of pipelines, which enhances analysis reproducibility and accountability. For most users, PipelineDog requires no software installation. Overall, this web application provides a way to rapidly create and easily manage pipelines. The PipelineDog web app is freely available at http://web.pipeline.dog. The command line version is available at http://www.npmjs.com/package/pipelinedog and the online repository at http://repo.pipeline.dog. ysun@kean.edu or xing@biology.rutgers.edu or ysun@diagnoa.com. Supplementary data are available at Bioinformatics online.

  13. Nurse manager succession planning: a concept analysis.

    PubMed

    Titzer, Jennifer L; Shirey, Maria R

    2013-01-01

    The current nursing leadership pipeline is inadequate and demands strategic succession planning methods. This article provides concept clarification regarding nurse manager succession planning. Attributes common to succession planning include organizational commitment and resource allocation, proactive and visionary leadership approach, and a mentoring and coaching environment. Strategic planning, current and future leadership analysis, high-potential identification, and leadership development are succession planning antecedents. Consequences of succession planning are improved leadership and organizational culture continuity, and increased leadership bench strength. Health care has failed to strategically plan for future leadership. Developing a strong nursing leadership pipeline requires deliberate and strategic succession planning. © 2013 Wiley Periodicals, Inc.

  14. Using simulated fluorescence cell micrographs for the evaluation of cell image segmentation algorithms.

    PubMed

    Wiesmann, Veit; Bergler, Matthias; Palmisano, Ralf; Prinzen, Martin; Franz, Daniela; Wittenberg, Thomas

    2017-03-18

    Manual assessment and evaluation of fluorescent micrograph cell experiments is time-consuming and tedious. Automated segmentation pipelines can ensure efficient and reproducible evaluation and analysis with constant high quality for all images of an experiment. Such cell segmentation approaches are usually validated and rated in comparison to manually annotated micrographs. Nevertheless, manual annotations are prone to errors and display inter- and intra-observer variability, which influences the validation results of automated cell segmentation pipelines. We present a new approach to simulate fluorescent cell micrographs that provides an objective ground truth for the validation of cell segmentation methods. The cell simulation was evaluated in two ways: (1) an expert observer study shows that the proposed approach generates realistic fluorescent cell micrograph simulations; (2) an automated segmentation pipeline on the simulated fluorescent cell micrographs reproduces the segmentation performance of that pipeline on real fluorescent cell micrographs. The proposed simulation approach produces realistic fluorescent cell micrographs with corresponding ground truth. The simulated data are suited to evaluating image segmentation pipelines more efficiently and reproducibly than is possible on manually annotated real micrographs.
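
    The essence of the simulation idea can be sketched in a few lines: render Gaussian 'cells' at known positions, keep the exact label mask as ground truth, and add noise. All shapes and parameters below are invented for illustration, not taken from the paper.

      import numpy as np

      def simulate_micrograph(shape=(128, 128), n_cells=10, radius=6.0, seed=5):
          """Synthetic fluorescence image plus exact ground-truth label mask."""
          rng = np.random.default_rng(seed)
          yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
          img = np.zeros(shape)
          truth = np.zeros(shape, dtype=int)
          centers = rng.uniform(radius, np.array(shape) - radius, (n_cells, 2))
          for k, (cy, cx) in enumerate(centers, start=1):
              d2 = (yy - cy) ** 2 + (xx - cx) ** 2
              img += np.exp(-d2 / (2 * radius**2))       # Gaussian 'cell' intensity
              truth[d2 < radius**2] = k                  # known label mask
          img += rng.normal(0.0, 0.05, shape)            # camera noise
          return img, truth

      img, truth = simulate_micrograph()
      print(img.shape, truth.max())                      # (128, 128) 10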

  15. Analysing concurrent transcranial magnetic stimulation and electroencephalographic data: A review and introduction to the open-source TESA software.

    PubMed

    Rogasch, Nigel C; Sullivan, Caley; Thomson, Richard H; Rose, Nathan S; Bailey, Neil W; Fitzgerald, Paul B; Farzan, Faranak; Hernandez-Pavon, Julio C

    2017-02-15

    The concurrent use of transcranial magnetic stimulation with electroencephalography (TMS-EEG) is growing in popularity as a method for assessing various cortical properties such as excitability, oscillations and connectivity. However, this combination of methods is technically challenging, resulting in artifacts both during recording and following typical EEG analysis methods, which can distort the underlying neural signal. In this article, we review the causes of artifacts in EEG recordings resulting from TMS, as well as artifacts introduced during analysis (e.g. as the result of filtering over high-frequency, large amplitude artifacts). We then discuss methods for removing artifacts, and ways of designing pipelines to minimise analysis-related artifacts. Finally, we introduce the TMS-EEG signal analyser (TESA), an open-source extension for EEGLAB, which includes functions that are specific for TMS-EEG analysis, such as removing and interpolating the TMS pulse artifact, removing and minimising TMS-evoked muscle activity, and analysing TMS-evoked potentials. The aims of TESA are to provide users with easy access to current TMS-EEG analysis methods and to encourage direct comparisons of these methods and pipelines. It is hoped that providing open-source functions will aid in both improving and standardising analysis across the field of TMS-EEG research. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  16. The connectome mapper: an open-source processing pipeline to map connectomes with MRI.

    PubMed

    Daducci, Alessandro; Gerhard, Stephan; Griffa, Alessandra; Lemkaddem, Alia; Cammoun, Leila; Gigandet, Xavier; Meuli, Reto; Hagmann, Patric; Thiran, Jean-Philippe

    2012-01-01

    Researchers working in the field of global connectivity analysis using diffusion magnetic resonance imaging (MRI) can count on a wide selection of software packages for processing their data, with methods ranging from the reconstruction of the local intra-voxel axonal structure to the estimation of the trajectories of the underlying fibre tracts. However, each package is generally task-specific and uses its own conventions and file formats. In this article we present the Connectome Mapper, a software pipeline aimed at helping researchers through the tedious process of organising, processing and analysing diffusion MRI data to perform global brain connectivity analyses. Our pipeline is written in Python and is freely available as open-source at www.cmtk.org.

  17. 75 FR 35366 - Pipeline Safety: Applying Safety Regulation to All Rural Onshore Hazardous Liquid Low-Stress Lines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-22

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... Onshore Hazardous Liquid Low-Stress Lines AGENCY: Pipeline and Hazardous Materials Safety Administration... pipelines to perform a complete ``could affect'' analysis to determine which rural low-stress pipeline...

  18. Computer models of complex multiloop branched pipeline systems

    NASA Astrophysics Data System (ADS)

    Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.

    2013-11-01

    This paper describes the principal theoretical concepts of a method for constructing computer models of complex multiloop branched pipeline networks, based on the theory of graphs and the two Kirchhoff laws applied to electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of a pipeline network when the latter is considered as a single hydraulic system. On the basis of calculations for multiple design variants, the reasons for existing problems can be identified, the least costly methods for their elimination can be proposed, and recommendations for planning the modernization of pipeline systems and the construction of new sections can be made. The results obtained can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the model has been verified by constructing a unified computer model of the heat network for centralized heat supply of the city of Samara.
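
    For linearized pipe conductances, the electrical analogy reduces to solving a graph-Laplacian system for the node pressures; a three-node toy network (all values invented, not from the paper) looks like this:

      import numpy as np

      # (node_i, node_j, linearized conductance g): three pipes forming a loop
      edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 0.5)]
      n = 3
      L = np.zeros((n, n))                    # graph Laplacian of the network
      for i, j, g in edges:
          L[i, i] += g; L[j, j] += g
          L[i, j] -= g; L[j, i] -= g

      q = np.array([1.0, 0.0, -1.0])          # injected flow: in at node 0, out at node 2
      p = np.concatenate([[0.0], np.linalg.solve(L[1:, 1:], q[1:])])  # node 0 as reference
      flows = [(i, j, g * (p[i] - p[j])) for i, j, g in edges]  # branch flows (Ohm analogue)
      print(p)
      print(flows)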

  19. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis

    PubMed Central

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

    Introduction: Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. Methods: In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. Results: This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. Conclusion: The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can exhibit several local performance maxima. Hence, optimization strategies that are not able to escape local performance maxima, such as the hill climbing algorithm, often terminate in a local maximum. PMID:23766941

  20. Comprehensive machine learning analysis of Hydra behavior reveals a stable basal behavioral repertoire.

    PubMed

    Han, Shuting; Taralova, Ekaterina; Dupre, Christophe; Yuste, Rafael

    2018-03-28

    Animal behavior has been studied for centuries, but few efficient methods are available to automatically identify and classify it. Quantitative behavioral studies have been hindered by the subjective and imprecise nature of human observation, and the slow speed of annotating behavioral data. Here, we developed an automatic behavior analysis pipeline for the cnidarian Hydra vulgaris using machine learning. We imaged freely behaving Hydra, extracted motion and shape features from the videos, and constructed a dictionary of visual features to classify pre-defined behaviors. We also identified unannotated behaviors with unsupervised methods. Using this analysis pipeline, we quantified 6 basic behaviors and found surprisingly similar behavior statistics across animals within the same species, regardless of experimental conditions. Our analysis indicates that the fundamental behavioral repertoire of Hydra is stable. This robustness could reflect a homeostatic neural control of "housekeeping" behaviors which could have been already present in the earliest nervous systems. © 2018, Han et al.

  1. HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing

    PubMed Central

    Karimi, Ramin; Hajdu, Andras

    2016-01-01

    Comprehensive efforts toward low-cost sequencing in the past few years have led to the growth of complete genome databases. In parallel with this effort, fast and cost-effective methods and applications have been developed to accelerate sequence analysis, for which there is a strong need. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly desirable. As an alignment-free approach, DNA signatures provide new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and nontarget databases. Hadoop and MapReduce, as parallel and distributed computing tools with commodity hardware, are used in this pipeline. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of detected unique and common DNA signatures of the target database brings opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis. PMID:26884678
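
    The core primitive is ordinary k-mer counting and set algebra, which HTSFinder distributes with Hadoop/MapReduce. A single-machine toy version of the signature criterion, with invented two-sequence 'databases', might look like:

      def kmers(seq, k):
          """Yield all overlapping k-mers of seq."""
          return (seq[i:i + k] for i in range(len(seq) - k + 1))

      target = ["ACGTACGTGG", "ACGTGGTTAC"]          # toy target 'database'
      nontarget = ["TTACGGACGT"]                     # toy nontarget 'database'

      per_seq = [set(kmers(s, 4)) for s in target]
      common = set.intersection(*per_seq)            # k-mers shared by all targets
      elsewhere = {km for s in nontarget for km in kmers(s, 4)}
      signatures = sorted(common - elsewhere)        # common AND absent elsewhere
      print(signatures)                              # -> ['CGTG', 'GTGG']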

  2. HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing.

    PubMed

    Karimi, Ramin; Hajdu, Andras

    2016-01-01

    Comprehensive efforts toward low-cost sequencing in the past few years have led to the growth of complete genome databases. In parallel with this effort, fast and cost-effective methods and applications have been developed to accelerate sequence analysis, for which there is a strong need. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly desirable. As an alignment-free approach, DNA signatures provide new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and nontarget databases. Hadoop and MapReduce, as parallel and distributed computing tools with commodity hardware, are used in this pipeline. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of detected unique and common DNA signatures of the target database brings opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis.

  3. PARPs database: A LIMS system for protein-protein interaction data mining or laboratory information management system

    PubMed Central

    Droit, Arnaud; Hunter, Joanna M; Rouleau, Michèle; Ethier, Chantal; Picard-Cloutier, Aude; Bourgais, David; Poirier, Guy G

    2007-01-01

    Background In the "post-genome" era, mass spectrometry (MS) has become an important method for the analysis of proteins and the rapid advancement of this technique, in combination with other proteomics methods, results in an increasing amount of proteome data. This data must be archived and analysed using specialized bioinformatics tools. Description We herein describe "PARPs database," a data analysis and management pipeline for liquid chromatography tandem mass spectrometry (LC-MS/MS) proteomics. PARPs database is a web-based tool whose features include experiment annotation, protein database searching, protein sequence management, as well as data-mining of the peptides and proteins identified. Conclusion Using this pipeline, we have successfully identified several interactions of biological significance between PARP-1 and other proteins, namely RFC-1, 2, 3, 4 and 5. PMID:18093328

  4. Oil and gas pipeline construction cost analysis and developing regression models for cost estimation

    NASA Astrophysics Data System (ADS)

    Thaduri, Ravi Kiran

    In this study, cost data for 180 pipelines and 136 compressor stations have been analyzed. On the basis of the distribution analysis, regression models have been developed. Material, labor, right-of-way (ROW) and miscellaneous costs make up the total cost of a pipeline construction project. The pipelines are analyzed by pipeline length, diameter, location, pipeline volume and year of completion. In pipeline construction, labor costs dominate the total costs with a share of about 40%. Multiple non-linear regression models are developed to estimate the component costs of pipelines for various cross-sectional areas, lengths and locations. The compressor stations are analyzed by capacity, year of completion and location. Unlike the pipeline costs, material costs dominate the total costs in the construction of compressor stations, with an average share of about 50.6%. Land costs have very little influence on the total costs. Similar regression models are developed to estimate the component costs of compressor stations for various capacities and locations.
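
    One plausible form for such a model is multiplicative, cost = c · diameter^b1 · length^b2, which becomes linear after taking logs. The sketch below fits it by ordinary least squares on synthetic data; the study's actual functional forms and dataset are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 60
      diam = rng.uniform(8, 42, n)            # nominal diameter [in], synthetic
      length = rng.uniform(5, 200, n)         # length [miles], synthetic
      cost = 1e5 * diam**0.9 * length**1.05 * rng.lognormal(0.0, 0.15, n)

      # log-log OLS: log(cost) = b0 + b1*log(diam) + b2*log(length)
      X = np.column_stack([np.ones(n), np.log(diam), np.log(length)])
      beta, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
      print(f"cost elasticities: diameter {beta[1]:.2f}, length {beta[2]:.2f}")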

  5. Algorithms for parallel flow solvers on message passing architectures

    NASA Technical Reports Server (NTRS)

    Vanderwijngaart, Rob F.

    1995-01-01

    The purpose of this project has been to identify and test suitable technologies for implementation of fluid flow solvers -- possibly coupled with structures and heat equation solvers -- on MIMD parallel computers. In the course of this investigation much attention has been paid to efficient domain decomposition strategies for ADI-type algorithms. Multi-partitioning derives its efficiency from the assignment of several blocks of grid points to each processor in the parallel computer. A coarse-grain parallelism is obtained, and a near-perfect load balance results. In uni-partitioning every processor receives responsibility for exactly one block of grid points instead of several. This necessitates fine-grain pipelined program execution in order to obtain a reasonable load balance. Although fine-grain parallelism is less desirable on many systems, especially high-latency networks of workstations, uni-partition methods are still in wide use in production codes for flow problems. Consequently, it remains important to achieve good efficiency with this technique that has essentially been superseded by multi-partitioning for parallel ADI-type algorithms. Another reason for the concentration on improving the performance of pipeline methods is their applicability in other types of flow solver kernels with stronger implied data dependence. Analytical expressions can be derived for the size of the dynamic load imbalance incurred in traditional pipelines. From these it can be determined what is the optimal first-processor retardation that leads to the shortest total completion time for the pipeline process. Theoretical predictions of pipeline performance with and without optimization match experimental observations on the iPSC/860 very well. Analysis of pipeline performance also highlights the effect of uncareful grid partitioning in flow solvers that employ pipeline algorithms. If grid blocks at boundaries are not at least as large in the wall-normal direction as those immediately adjacent to them, then the first processor in the pipeline will receive a computational load that is less than that of subsequent processors, magnifying the pipeline slowdown effect. Extra compensation is needed for grid boundary effects, even if all grid blocks are equally sized.

  6. Seeking unique and common biological themes in multiple gene lists or datasets: pathway pattern extraction pipeline for pathway-level comparative analysis.

    PubMed

    Yi, Ming; Mudunuri, Uma; Che, Anney; Stephens, Robert M

    2009-06-29

    One of the challenges in the analysis of microarray data is to integrate and compare the selected (e.g., differential) gene lists from multiple experiments for common or unique underlying biological themes. A common way to approach this problem is to extract common genes from these gene lists and then subject these genes to enrichment analysis to reveal the underlying biology. However, the capacity of this approach is largely restricted by the limited number of common genes shared by datasets from multiple experiments, which could be caused by the complexity of the biological system itself. We now introduce a new Pathway Pattern Extraction Pipeline (PPEP), which extends the existing WPS application by providing a new pathway-level comparative analysis scheme. To facilitate comparing and correlating results from different studies and sources, PPEP contains new interfaces that allow evaluation of the pathway-level enrichment patterns across multiple gene lists. As an exploratory tool, this analysis pipeline may help reveal the underlying biological themes at both the pathway and gene levels. The analysis scheme provided by PPEP begins with multiple gene lists, which may be derived from different studies in terms of the biological contexts, applied technologies, or methodologies. These lists are then subjected to pathway-level comparative analysis for extraction of pathway-level patterns. This analysis pipeline helps to explore the commonality or uniqueness of these lists at the level of pathways or biological processes from different but relevant biological systems using a combination of statistical enrichment measurements, pathway-level pattern extraction, and graphical display of the relationships of genes and their associated pathways as Gene-Term Association Networks (GTANs) within the WPS platform. As a proof of concept, we have used the new method to analyze many datasets from our collaborators as well as some public microarray datasets. This tool provides a new pathway-level analysis scheme for integrative and comparative analysis of data derived from different but relevant systems. The tool is freely available as a Pathway Pattern Extraction Pipeline implemented in our existing software package WPS, which can be obtained at http://www.abcc.ncifcrf.gov/wps/wps_index.php.
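
    The statistical enrichment measurement at the heart of such pathway-level comparisons is typically a hypergeometric/Fisher's exact test per pathway and per gene list; a minimal sketch with invented counts:

      from scipy.stats import fisher_exact

      universe = 20000   # genes measured overall (invented)
      hits = 300         # genes in the selected list
      in_path = 150      # genes annotated to one pathway
      overlap = 12       # selected genes that fall in that pathway

      table = [[overlap, hits - overlap],
               [in_path - overlap, universe - hits - in_path + overlap]]
      odds, p = fisher_exact(table, alternative="greater")
      print(f"odds ratio {odds:.2f}, p = {p:.2e}")   # small p -> pathway enriched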

  7. Airborne LIDAR Pipeline Inspection System (ALPIS) Mapping Tests

    DOT National Transportation Integrated Search

    2003-06-06

    Natural gas and hazardous liquid pipeline operators have a need to identify where leaks are occurring along their pipelines in order to lower the risks the pipelines pose to people and the environment. Current methods of locating natural gas and haza...

  8. Human Factors Analysis of Pipeline Monitoring and Control Operations: Final Technical Report

    DOT National Transportation Integrated Search

    2008-11-26

    The purpose of the Human Factors Analysis of Pipeline Monitoring and Control Operations project was to develop procedures that could be used by liquid pipeline operators to assess and manage the human factors risks in their control rooms that may adv...

  9. G-CNV: A GPU-Based Tool for Preparing Data to Detect CNVs with Read-Depth Methods.

    PubMed

    Manconi, Andrea; Manca, Emanuele; Moscatelli, Marco; Gnocchi, Matteo; Orro, Alessandro; Armano, Giuliano; Milanesi, Luciano

    2015-01-01

    Copy number variations (CNVs) are the most prevalent types of structural variations (SVs) in the human genome and are involved in a wide range of common human diseases. Different computational methods have been devised to detect this type of SVs and to study how they are implicated in human diseases. Recently, computational methods based on high-throughput sequencing (HTS) are increasingly used. The majority of these methods focus on mapping short-read sequences generated from a donor against a reference genome to detect signatures distinctive of CNVs. In particular, read-depth based methods detect CNVs by analyzing genomic regions whose read-depth differs significantly from that of other regions. The analysis pipeline of these methods consists of four main stages: (i) data preparation, (ii) data normalization, (iii) CNV regions identification, and (iv) copy number estimation. However, available tools do not support most of the operations required at the first two stages of this pipeline. Typically, they start the analysis by building the read-depth signal from pre-processed alignments. Therefore, third-party tools must be used to perform most of the preliminary operations required to build the read-depth signal. These data-intensive operations can be efficiently parallelized on graphics processing units (GPUs). In this article, we present G-CNV, a GPU-based tool devised to perform the common operations required at the first two stages of the analysis pipeline. G-CNV is able to filter low-quality read sequences, to mask low-quality nucleotides, to remove adapter sequences, to remove duplicated read sequences, to map the short-reads, to resolve multiple mapping ambiguities, to build the read-depth signal, and to normalize it. G-CNV can be efficiently used as a third-party tool able to prepare data for the subsequent read-depth signal generation and analysis. Moreover, it can also be integrated in CNV detection tools to generate read-depth signals.

  10. FMAP: Functional Mapping and Analysis Pipeline for metagenomics and metatranscriptomics studies.

    PubMed

    Kim, Jiwoong; Kim, Min Soo; Koh, Andrew Y; Xie, Yang; Zhan, Xiaowei

    2016-10-10

    Given the lack of a complete and comprehensive library of microbial reference genomes, determining the functional profile of diverse microbial communities is challenging. The available functional analysis pipelines lack several key features: (i) an integrated alignment tool, (ii) operon-level analysis, and (iii) the ability to process large datasets. Here we introduce our open-sourced, stand-alone functional analysis pipeline for analyzing whole metagenomic and metatranscriptomic sequencing data, FMAP (Functional Mapping and Analysis Pipeline). FMAP performs alignment, gene family abundance calculations, and statistical analysis (three levels of analyses are provided: differentially-abundant genes, operons and pathways). The resulting output can be easily visualized with heatmaps and functional pathway diagrams. FMAP functional predictions are consistent with currently available functional analysis pipelines. FMAP is a comprehensive tool for providing functional analysis of metagenomic/metatranscriptomic sequencing data. With the added features of integrated alignment, operon-level analysis, and the ability to process large datasets, FMAP will be a valuable addition to the currently available functional analysis toolbox. We believe that this software will be of great value to the wider biology and bioinformatics communities.

  11. An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.

    PubMed

    Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao

    2016-09-01

    The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require a fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed that auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community. © 2016 Society for Laboratory Automation and Screening.
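
    A generic example of the kind of intraplate correction such a pipeline can apply is two-way median polish, which strips row/column trends so that true actives stand out. This sketch is illustrative only and is not the paper's algorithm.

      import numpy as np

      def median_polish(plate, n_iter=10):
          """Residuals after removing row and column effects (generic sketch)."""
          r = plate.astype(float).copy()
          for _ in range(n_iter):
              r -= np.median(r, axis=1, keepdims=True)   # row effects
              r -= np.median(r, axis=0, keepdims=True)   # column effects
          return r

      rng = np.random.default_rng(4)
      plate = rng.normal(0.0, 1.0, (16, 24))             # one 384-well plate of signals
      plate += np.linspace(0.0, 2.0, 24)                 # column drift artifact
      plate[5, 7] += 8.0                                 # a genuine hit
      resid = median_polish(plate)
      print(np.unravel_index(np.argmax(resid), resid.shape))  # -> (5, 7)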

  12. Applicability of interferometric SAR technology to ground movement and pipeline monitoring

    NASA Astrophysics Data System (ADS)

    Grivas, Dimitri A.; Bhagvati, Chakravarthy; Schultz, B. C.; Trigg, Alan; Rizkalla, Moness

    1998-03-01

    This paper summarizes the findings of a cooperative effort between NOVA Gas Transmission Ltd. (NGTL), the Italian Natural Gas Transmission Company (SNAM), and Arista International, Inc., to determine whether current remote sensing technologies can be utilized to monitor small-scale ground movements over vast geographical areas. This topic is of interest due to the potential for small ground movements to cause strain accumulation in buried pipeline facilities. Ground movements are difficult to monitor continuously, but their cumulative effect over time can have a significant impact on the safety of buried pipelines. Interferometric synthetic aperture radar (InSAR or SARI) is identified as the most promising technique of those considered. InSAR analysis involves combining multiple images from consecutive passes of a radar imaging platform. The resulting composite image can detect changes as small as 2.5 to 5.0 centimeters (based on current analysis methods and radar satellite data of 5 centimeter wavelength). Research currently in progress shows potential for measuring ground movements as small as a few millimeters. Data needed for InSAR analysis is currently commercially available from four satellites, and additional satellites are planned for launch in the near future. A major conclusion of the present study is that InSAR technology is potentially useful for pipeline integrity monitoring. A pilot project is planned to test operational issues.

  13. Regulatory assessment with regulatory flexibility analysis : draft regulatory evaluation - Notice of Proposed Rulemaking -- Pipeline Safety : safety standards for increasing the maximum allowable operating pressure for natural gas transmission pipelines.

    DOT National Transportation Integrated Search

    2008-02-01

    The Pipeline and Hazardous Materials Safety Administration (PHMSA) is proposing changes to the Federal pipeline safety regulations in 49 CFR Part 192, which cover the transportation of natural gas by pipeline. Specifically, PHMSA proposes allowing na...

  14. Methylation Integration (Mint) | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    A comprehensive software pipeline and set of Galaxy tools/workflows for integrative analysis of genome-wide DNA methylation and hydroxymethylation data. Input data can come from bisulfite sequencing, pull-down methods, or both.

  15. A hydrogen energy carrier. Volume 2: Systems analysis

    NASA Technical Reports Server (NTRS)

    Savage, R. L. (Editor); Blank, L. (Editor); Cady, T. (Editor); Cox, K. (Editor); Murray, R. (Editor); Williams, R. D. (Editor)

    1973-01-01

    A systems analysis of hydrogen as an energy carrier in the United States indicated that it is feasible to use hydrogen in all energy use areas, except some types of transportation. These use areas are industrial, residential and commercial, and electric power generation. Saturation concept and conservation concept forecasts of future total energy demands were made. Projected costs of producing hydrogen from coal or from nuclear heat combined with thermochemical decomposition of water are in the range $1.00 to $1.50 per million Btu of hydrogen produced. Other methods are estimated to be more costly. The use of hydrogen as a fuel will require the development of large-scale transmission and storage systems. A pipeline system similar to the existing natural gas pipeline system appears practical, if design factors are included to avoid hydrogen environment embrittlement of pipeline metals. Conclusions from the examination of the safety, legal, environmental, economic, political and societal aspects of hydrogen fuel are that a hydrogen energy carrier system would be compatible with American values and the existing energy system.

  16. Mortise terrorism on the main pipelines

    NASA Astrophysics Data System (ADS)

    Komarov, V. A.; Nigrey, N. N.; Bronnikov, D. A.; Nigrey, A. A.

    2018-01-01

    The aim of this work is to analyze the effectiveness of the methods proposed in the article for the physical protection of main pipelines against "mortise terrorism" (unauthorized tapping into the pipe). A mathematical model has been developed that makes it possible to predict the dynamics of mortise terrorism in the short term. An analysis of the effectiveness of the proposed physical protection methods for preventing unauthorized impacts on the objects under investigation is given. A variant of a video analytics system has been developed that can detect intruders, recognizing the types of work they perform, at a distance of 150 meters against complex natural backgrounds and in precipitation. The probability of detection is 0.959.

  17. NMR in the SPINE Structural Proteomics project.

    PubMed

    Ab, E; Atkinson, A R; Banci, L; Bertini, I; Ciofi-Baffoni, S; Brunner, K; Diercks, T; Dötsch, V; Engelke, F; Folkers, G E; Griesinger, C; Gronwald, W; Günther, U; Habeck, M; de Jong, R N; Kalbitzer, H R; Kieffer, B; Leeflang, B R; Loss, S; Luchinat, C; Marquardsen, T; Moskau, D; Neidig, K P; Nilges, M; Piccioli, M; Pierattelli, R; Rieping, W; Schippmann, T; Schwalbe, H; Travé, G; Trenner, J; Wöhnert, J; Zweckstetter, M; Kaptein, R

    2006-10-01

    This paper describes the developments, role and contributions of the NMR spectroscopy groups in the Structural Proteomics In Europe (SPINE) consortium. Focusing on the development of high-throughput (HTP) pipelines for NMR structure determinations of proteins, all aspects from sample preparation, data acquisition, data processing, data analysis to structure determination have been improved with respect to sensitivity, automation, speed, robustness and validation. Specific highlights are protonless (13)C-direct detection methods and inferential structure determinations (ISD). In addition to technological improvements, these methods have been applied to deliver over 60 NMR structures of proteins, among which are five that failed to crystallize. The inclusion of NMR spectroscopy in structural proteomics pipelines improves the success rate for protein structure determinations.

  18. 76 FR 22944 - Pipeline Safety: Notice of Public Webinars on Implementation of Distribution Integrity Management...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-25

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... Management Programs AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION... Nation's gas distribution pipeline systems through development of inspection methods and guidance for the...

  19. 49 CFR Appendix C to Part 195 - Guidance for Implementation of an Integrity Management Program

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED... understanding and analysis of the failure mechanisms or threats to integrity of each pipeline segment. (2) An... pipeline, information and data used for the information analysis; (13) results of the information analyses...

  20. A Comparative Analysis of the Lyve-SET Phylogenomics Pipeline for Genomic Epidemiology of Foodborne Pathogens

    PubMed Central

    Katz, Lee S.; Griswold, Taylor; Williams-Newkirk, Amanda J.; Wagner, Darlene; Petkau, Aaron; Sieffert, Cameron; Van Domselaar, Gary; Deng, Xiangyu; Carleton, Heather A.

    2017-01-01

    Modern epidemiology of foodborne bacterial pathogens in industrialized countries relies increasingly on whole genome sequencing (WGS) techniques. As opposed to profiling techniques such as pulsed-field gel electrophoresis, WGS requires a variety of computational methods. Since 2013, United States agencies responsible for food safety, including the CDC, FDA, and USDA, have been performing WGS on all Listeria monocytogenes found in clinical, food, and environmental samples. Each year, more genomes of other foodborne pathogens such as Escherichia coli, Campylobacter jejuni, and Salmonella enterica are being sequenced. Comparing thousands of genomes across an entire species requires a fast method with coarse resolution; however, capturing the fine details of highly related isolates requires a computationally heavy and sophisticated algorithm. Most L. monocytogenes investigations employing WGS depend on being able to identify an outbreak clade whose inter-genomic distances are less than an empirically determined threshold. When a difference of a few single nucleotide polymorphisms (SNPs) can help distinguish between genomes that are likely outbreak-associated and those that are less likely to be associated, a fine-resolution method is required. To achieve this level of resolution, we have developed Lyve-SET, a high-quality SNP pipeline. We evaluated Lyve-SET by retrospectively investigating 12 outbreak data sets along with four other SNP pipelines that have been used in outbreak investigation or similar scenarios. To compare these pipelines, several distance- and phylogeny-based comparison methods were applied, which collectively showed that multiple pipelines were able to identify most outbreak clusters and strains. Currently in the US PulseNet system, whole genome multi-locus sequence typing (wgMLST) is the preferred primary method for foodborne WGS cluster detection and outbreak investigation due to its ability to name standardized genomic profiles, its central database, and its ability to be run in a graphical user interface. However, creating a functional wgMLST scheme requires extended up-front development and subject-matter expertise. When a scheme does not exist or when the highest resolution is needed, SNP analysis is used. Using three Listeria outbreak data sets, we demonstrated the concordance between Lyve-SET SNP typing and wgMLST. Availability: Lyve-SET can be found at https://github.com/lskatz/Lyve-SET. PMID:28348549
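
    As a rough illustration of the core quantity such a SNP pipeline reports, the sketch below computes pairwise SNP distances between aligned toy genomes and flags pairs under an assumed outbreak threshold. The sequences and the cutoff value are illustrative only, not Lyve-SET's algorithm.

```python
# Hypothetical sketch: pairwise SNP distances between aligned genomes.
from itertools import combinations

def snp_distance(a: str, b: str) -> int:
    """Count positions where two equal-length aligned sequences differ,
    ignoring sites with gaps or ambiguous bases."""
    return sum(1 for x, y in zip(a, b)
               if x != y and x in "ACGT" and y in "ACGT")

aln = {  # toy alignment; real input would be a multi-FASTA alignment
    "isolate1": "ACGTACGTAC",
    "isolate2": "ACGTACGAAC",
    "isolate3": "ACCTACGAAC",
}

THRESHOLD = 3  # empirically determined outbreak cutoff (illustrative)
for (n1, s1), (n2, s2) in combinations(aln.items(), 2):
    d = snp_distance(s1, s2)
    flag = "candidate outbreak pair" if d <= THRESHOLD else ""
    print(f"{n1} vs {n2}: {d} SNPs {flag}")
```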

  1. Fast interactive exploration of 4D MRI flow data

    NASA Astrophysics Data System (ADS)

    Hennemuth, A.; Friman, O.; Schumann, C.; Bock, J.; Drexl, J.; Huellebrand, M.; Markl, M.; Peitgen, H.-O.

    2011-03-01

    1- or 2-directional MRI blood flow mapping sequences are an integral part of standard MR protocols for diagnosis and therapy control in heart diseases. Recent progress in rapid MRI has made it possible to acquire volumetric, 3-directional cine images in reasonable scan time. In addition to flow and velocity measurements relative to arbitrarily oriented image planes, the analysis of 3-dimensional trajectories enables the visualization of flow patterns, local features of flow trajectories, or possible paths into specific regions. The anatomical and functional information allows for advanced hemodynamic analysis in different application areas such as stroke risk assessment, congenital and acquired heart disease, aneurysms, abdominal collaterals, and cranial blood flow. The complexity of 4D MRI flow datasets and of the flow-related image analysis tasks makes the development of fast, comprehensive data exploration software for advanced flow analysis challenging. Most existing tools address only individual aspects of the analysis pipeline such as pre-processing, quantification or visualization, or are difficult for clinicians to use. The goal of the presented work is to provide a software solution that supports the whole image analysis pipeline and enables data exploration with fast, intuitive interaction and visualization methods. The implemented methods facilitate the segmentation and inspection of different vascular systems. Arbitrary 2- or 3-dimensional regions for quantitative analysis and particle tracing can be defined interactively. Synchronized views of animated 3D path lines, 2D velocity or flow overlays, and flow curves offer detailed insight into local hemodynamics. The application of the analysis pipeline is shown for 6 cases from clinical practice, illustrating its usefulness for different clinical questions. Initial user tests show that the software is intuitive to learn and that even inexperienced users achieve good results within reasonable processing times.
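
    A minimal sketch of the particle-tracing step described above: integrating path lines through a time-varying velocity field with fourth-order Runge-Kutta. The synthetic field and step sizes are assumptions; a real implementation would interpolate the measured 4D MRI velocities instead.

```python
import numpy as np

def velocity(p, t):
    """Toy 3-directional velocity field v(x, t); a real implementation
    would interpolate the measured 4D MRI flow data here."""
    x, y, z = p
    return np.array([-y, x, 0.1 * np.sin(t)])

def trace_pathline(p0, t0, t1, dt):
    """Integrate dp/dt = v(p, t) with classic fourth-order Runge-Kutta."""
    p, t, path = np.asarray(p0, float), t0, [np.asarray(p0, float)]
    while t < t1:
        k1 = velocity(p, t)
        k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(p + dt * k3, t + dt)
        p = p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        path.append(p)
    return np.array(path)

path = trace_pathline([1.0, 0.0, 0.0], t0=0.0, t1=2.0, dt=0.05)
print(path.shape)  # (n_steps + 1, 3) positions along the path line
```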

  2. Historical analysis of US pipeline accidents triggered by natural hazards

    NASA Astrophysics Data System (ADS)

    Girgin, Serkan; Krausmann, Elisabeth

    2015-04-01

    Natural hazards, such as earthquakes, floods, landslides, or lightning, can initiate accidents in oil and gas pipelines with potentially major consequences on the population or the environment due to toxic releases, fires and explosions. Accidents of this type are also referred to as Natech events. Many major accidents highlight the risk associated with natural-hazard impact on pipelines transporting dangerous substances. For instance, in the USA in 1994, flooding of the San Jacinto River caused the rupture of 8 and the undermining of 29 pipelines by the floodwaters. About 5.5 million litres of petroleum and related products were spilled into the river and ignited. As a result, 547 people were injured and significant environmental damage occurred. Post-incident analysis is a valuable tool for better understanding the causes, dynamics and impacts of pipeline Natech accidents in support of future accident prevention and mitigation. Therefore, data on onshore hazardous-liquid pipeline accidents collected by the US Pipeline and Hazardous Materials Safety Administration (PHMSA) was analysed. For this purpose, a database-driven incident data analysis system was developed to aid the rapid review and categorization of PHMSA incident reports. Using an automated data-mining process followed by a peer review of the incident records and supported by natural hazard databases and external information sources, the pipeline Natechs were identified. As a by-product of the data-collection process, the database now includes over 800,000 incidents from all causes in industrial and transportation activities, which are automatically classified in the same way as the PHMSA records. This presentation describes the data collection and reviewing steps conducted during the study, provides information on the developed database and data analysis tools, and reports the findings of a statistical analysis of the identified hazardous liquid pipeline incidents in terms of accident dynamics and consequences.
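
    To make the automated data-mining step concrete, here is an illustrative keyword-based pre-classification of incident narratives of the kind that could precede the peer review described above. The keyword lists and the sample record are assumptions, not PHMSA's actual rules.

```python
# Illustrative keyword screen for natural-hazard (Natech) triggers.
NATURAL_HAZARD_KEYWORDS = {
    "flood": ["flood", "floodwater", "heavy rain", "river"],
    "earthquake": ["earthquake", "seismic", "tremor"],
    "landslide": ["landslide", "slope failure", "subsidence"],
    "lightning": ["lightning"],
}

def classify_incident(narrative: str):
    """Return the natural-hazard categories whose keywords appear."""
    text = narrative.lower()
    return [hazard for hazard, words in NATURAL_HAZARD_KEYWORDS.items()
            if any(w in text for w in words)]

record = "Pipeline ruptured after floodwaters undermined the river crossing."
print(classify_incident(record))  # ['flood']
```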

  3. 49 CFR 192.921 - How is the baseline assessment to be conducted?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas... the covered pipeline segments for the baseline assessment according to a risk analysis that considers...

  4. Putting the Oxylipidome to Work: A Novel Lipidomics Pipeline Reveals Candidate Biomarkers for Photooxidative Stress in Phytoplankton

    NASA Astrophysics Data System (ADS)

    Collins, J.; Edwards, B. R.; Fredricks, H. F.; Van Mooy, B. A.

    2016-02-01

    The lipids of marine plankton encompass a diversity of biochemical functions and chemotaxonomic specificities that make them ideal molecular biomarkers in living biomass. While core, nonpolar lipids such as free fatty acids (FFA) have formed the basis for many biomarker studies in fresh biomass, methods that enable the simultaneous profiling of core lipids and intact polar lipids (IPL) have opened new avenues for characterization of environmental stressors. We demonstrate the application of a novel, rules-based lipidomics data analysis pipeline to putatively identify a broad range of intact polar lipids, intact oxidized lipids (ox-lipids) and oxylipins in accurate-mass HPLC-ESI-MS data. Using mass spectra from a lipid peroxidation experiment conducted under the natural, ultraviolet-enriched light field in West Antarctica, we use the pipeline to identify ox-lipid and oxylipin biomarkers that might serve as indicators of photooxidative stress in phytoplankton. The lipidomics pipeline derives much of its functionality from two boutique lipid-oxylipin databases, which together contain entries for more than 60,000 candidate lipid biomarkers. These databases and all scripts required by the pipeline will be publicly available online to other users.

  5. Design and analysis of quantitative differential proteomics investigations using LC-MS technology.

    PubMed

    Bukhman, Yury V; Dharsee, Moyez; Ewing, Rob; Chu, Peter; Topaloglou, Thodoros; Le Bihan, Thierry; Goh, Theo; Duewel, Henry; Stewart, Ian I; Wisniewski, Jacek R; Ng, Nancy F

    2008-02-01

    Liquid chromatography-mass spectrometry (LC-MS)-based proteomics is becoming an increasingly important tool in characterizing the abundance of proteins in biological samples of various types and across conditions. Effects of disease or drug treatments on protein abundance are of particular interest for the characterization of biological processes and the identification of biomarkers. Although state-of-the-art instrumentation is available to make high-quality measurements and commercially available software is available to process the data, the complexity of the technology and data presents challenges for bioinformaticians and statisticians. Here, we describe a pipeline for the analysis of quantitative LC-MS data. Key components of this pipeline include experimental design (sample pooling, blocking, and randomization) as well as deconvolution and alignment of mass chromatograms to generate a matrix of molecular abundance profiles. An important challenge in LC-MS-based quantitation is to be able to accurately identify and assign abundance measurements to members of protein families. To address this issue, we implement a novel statistical method for inferring the relative abundance of related members of protein families from tryptic peptide intensities. This pipeline has been used to analyze quantitative LC-MS data from multiple biomarker discovery projects. We illustrate our pipeline here with examples from two of these studies, and show that the pipeline constitutes a complete workable framework for LC-MS-based differential quantitation. Supplementary material is available at http://iec01.mie.utoronto.ca/~thodoros/Bukhman/.
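
    The paper's statistical model is not reproduced here, but the generic idea of inferring protein-family member abundances from shared tryptic-peptide intensities can be sketched as a non-negative least-squares problem: each peptide intensity is modeled as the sum of the abundances of the proteins that can generate it. The peptide-to-protein mapping and intensities below are toy assumptions.

```python
import numpy as np
from scipy.optimize import nnls

# Rows = peptides, columns = protein family members; 1 if the peptide
# sequence occurs in that protein (an assumed toy mapping).
mapping = np.array([
    [1, 0],   # peptide unique to protein A
    [1, 1],   # peptide shared by A and B
    [0, 1],   # peptide unique to protein B
], dtype=float)

intensities = np.array([10.0, 25.0, 14.0])  # observed peptide intensities
abundance, residual = nnls(mapping, intensities)
print(dict(zip(["proteinA", "proteinB"], abundance.round(2))))
```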

  6. Bio-Docklets: virtualization containers for single-step execution of NGS pipelines.

    PubMed

    Kim, Baekdoo; Ali, Thahmina; Lijeron, Carlos; Afgan, Enis; Krampis, Konstantinos

    2017-08-01

    Processing of next-generation sequencing (NGS) data requires significant technical skills, involving installation, configuration, and execution of bioinformatics data pipelines, in addition to specialized postanalysis visualization and data mining software. In order to address some of these challenges, developers have leveraged virtualization containers toward seamless deployment of preconfigured bioinformatics software and pipelines on any computational platform. We present an approach for abstracting the complex data operations of multistep bioinformatics pipelines for NGS data analysis. As examples, we have deployed 2 pipelines for RNA sequencing and chromatin immunoprecipitation sequencing, preconfigured within Docker virtualization containers we call Bio-Docklets. Each Bio-Docklet exposes a single data input and output endpoint and, from a user perspective, running a pipeline is as simple as running a single bioinformatics tool. This is achieved using a "meta-script" that automatically starts the Bio-Docklets and controls the pipeline execution through the BioBlend software library and the Galaxy Application Programming Interface. The pipeline output is postprocessed by integration with the Visual Omics Explorer framework, providing interactive data visualizations that users can access through a web browser. Our goal is to enable easy access to NGS data analysis pipelines for nonbioinformatics experts on any computing environment, whether a laboratory workstation, university computer cluster, or a cloud service provider. Beyond end users, the Bio-Docklets also enable developers to programmatically deploy and run a large number of pipeline instances for concurrent analysis of multiple datasets. © The Authors 2017. Published by Oxford University Press.
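
    A minimal sketch of the control mechanism named above, driving a Galaxy server through the BioBlend library. The URL, API key, file name, and workflow identifier are placeholders, and the input-mapping details of the actual meta-script are assumptions.

```python
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="http://localhost:8080", key="YOUR_API_KEY")

# List workflows available inside the embedded Galaxy server.
for wf in gi.workflows.get_workflows():
    print(wf["id"], wf["name"])

# Create a history, upload an input file, and invoke a workflow on it.
history = gi.histories.create_history(name="ngs-run")
upload = gi.tools.upload_file("reads.fastq", history["id"])
dataset_id = upload["outputs"][0]["id"]
inputs = {"0": {"id": dataset_id, "src": "hda"}}  # map to workflow input 0
gi.workflows.invoke_workflow("WORKFLOW_ID", inputs=inputs,
                             history_id=history["id"])
```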

  7. Bio-Docklets: virtualization containers for single-step execution of NGS pipelines

    PubMed Central

    Kim, Baekdoo; Ali, Thahmina; Lijeron, Carlos; Afgan, Enis

    2017-01-01

    Processing of next-generation sequencing (NGS) data requires significant technical skills, involving installation, configuration, and execution of bioinformatics data pipelines, in addition to specialized postanalysis visualization and data mining software. In order to address some of these challenges, developers have leveraged virtualization containers toward seamless deployment of preconfigured bioinformatics software and pipelines on any computational platform. We present an approach for abstracting the complex data operations of multistep bioinformatics pipelines for NGS data analysis. As examples, we have deployed 2 pipelines for RNA sequencing and chromatin immunoprecipitation sequencing, preconfigured within Docker virtualization containers we call Bio-Docklets. Each Bio-Docklet exposes a single data input and output endpoint and, from a user perspective, running a pipeline is as simple as running a single bioinformatics tool. This is achieved using a “meta-script” that automatically starts the Bio-Docklets and controls the pipeline execution through the BioBlend software library and the Galaxy Application Programming Interface. The pipeline output is postprocessed by integration with the Visual Omics Explorer framework, providing interactive data visualizations that users can access through a web browser. Our goal is to enable easy access to NGS data analysis pipelines for nonbioinformatics experts on any computing environment, whether a laboratory workstation, university computer cluster, or a cloud service provider. Beyond end users, the Bio-Docklets also enable developers to programmatically deploy and run a large number of pipeline instances for concurrent analysis of multiple datasets. PMID:28854616

  8. Targeting Neuronal-like Metabolism of Metastatic Tumor Cells as a Novel Therapy for Breast Cancer Brain Metastasis

    DTIC Science & Technology

    2017-03-01

    Contribution to Project: Ian primarily focuses on developing the tissue imaging pipeline and performing imaging data analysis. Funding Support: Partially...3D ReconsTruction), a multi-faceted image analysis pipeline, permitting quantitative interrogation of functional implications of heterogeneous... analysis pipeline, to observe and quantify phenotypic metastatic landscape heterogeneity in situ with spatial and molecular resolution. Our implementation

  9. High-throughput Screening of Recalcitrance Variations in Lignocellulosic Biomass: Total Lignin, Lignin Monomers, and Enzymatic Sugar Release

    PubMed Central

    Decker, Stephen R.; Sykes, Robert W.; Turner, Geoffrey B.; Lupoi, Jason S.; Doepkke, Crissa; Tucker, Melvin P.; Schuster, Logan A.; Mazza, Kimberly; Himmel, Michael E.; Davis, Mark F.; Gjersing, Erica

    2015-01-01

    The conversion of lignocellulosic biomass to fuels, chemicals, and other commodities has been explored as one possible pathway toward reductions in the use of non-renewable energy sources. In order to identify which plants, out of a diverse pool, have the desired chemical traits for downstream applications, attributes, such as cellulose and lignin content, or monomeric sugar release following an enzymatic saccharification, must be compared. The experimental and data analysis protocols of the standard methods of analysis can be time-consuming, thereby limiting the number of samples that can be measured. High-throughput (HTP) methods alleviate the shortcomings of the standard methods, and permit the rapid screening of available samples to isolate those possessing the desired traits. This study illustrates the HTP sugar release and pyrolysis-molecular beam mass spectrometry pipelines employed at the National Renewable Energy Lab. These pipelines have enabled the efficient assessment of thousands of plants while decreasing experimental time and costs through reductions in labor and consumables. PMID:26437006

  10. High-Throughput Screening of Recalcitrance Variations in Lignocellulosic Biomass: Total Lignin, Lignin Monomers, and Enzymatic Sugar Release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Decker, Stephen R.; Sykes, Robert W.; Turner, Geoffrey B.

    The conversion of lignocellulosic biomass to fuels, chemicals, and other commodities has been explored as one possible pathway toward reductions in the use of non-renewable energy sources. In order to identify which plants, out of a diverse pool, have the desired chemical traits for downstream applications, attributes, such as cellulose and lignin content, or monomeric sugar release following an enzymatic saccharification, must be compared. The experimental and data analysis protocols of the standard methods of analysis can be time-consuming, thereby limiting the number of samples that can be measured. High-throughput (HTP) methods alleviate the shortcomings of the standard methods, and permit the rapid screening of available samples to isolate those possessing the desired traits. This study illustrates the HTP sugar release and pyrolysis-molecular beam mass spectrometry pipelines employed at the National Renewable Energy Lab. These pipelines have enabled the efficient assessment of thousands of plants while decreasing experimental time and costs through reductions in labor and consumables.

  11. Development and Applications of Pipeline Steel in Long-Distance Gas Pipeline of China

    NASA Astrophysics Data System (ADS)

    Chunyong, Huo; Yang, Li; Lingkang, Ji

    In recent decades, the widespread use of microalloying and Thermal Mechanical Control Processing (TMCP) technology has achieved a good balance of strength, toughness, plasticity, and weldability in pipeline steel, enabling the rapid development of oil and gas pipelines in China to meet strong domestic energy demand. This paper briefly reviews the development history of pipeline steel and gas pipelines in China. The microstructural characteristics and mechanical performance of the pipeline steels used in representative Chinese gas pipelines built at different stages are summarized. Based on an analysis of the evolution of the pipeline service environment, prospective trends in the application of pipeline steel in China are also presented.

  12. A New Method for Automated Identification and Morphometry of Myelinated Fibers Through Light Microscopy Image Analysis.

    PubMed

    Novas, Romulo Bourget; Fazan, Valeria Paula Sassoli; Felipe, Joaquim Cezar

    2016-02-01

    Nerve morphometry is known to produce relevant information for the evaluation of several phenomena, such as nerve repair, regeneration, implant, transplant, aging, and different human neuropathies. Manual morphometry is laborious, tedious, time consuming, and subject to many sources of error. Therefore, in this paper, we propose a new method for the automated morphometry of myelinated fibers in cross-section light microscopy images. Images from the recurrent laryngeal nerve of adult rats and the vestibulocochlear nerve of adult guinea pigs were used herein. The proposed pipeline for fiber segmentation is based on the techniques of competitive clustering and concavity analysis. The proposed segmentation method was evaluated by comparing the automatic segmentation with manual segmentation. To further evaluate the method on morphometric features extracted from the segmented images, the distributions of these features were tested for statistically significant differences. The method achieved high overall sensitivity and very low false-positive rates per image. We detected no statistically significant difference between the distributions of the features extracted from the manual and the pipeline segmentations. The method presented good overall performance, showing widespread potential in experimental and clinical settings, allowing large-scale image analysis and, thus, leading to more reliable results.

  13. IMPACT: a whole-exome sequencing analysis pipeline for integrating molecular profiles with actionable therapeutics in clinical samples

    PubMed Central

    Hintzsche, Jennifer; Kim, Jihye; Yadav, Vinod; Amato, Carol; Robinson, Steven E; Seelenfreund, Eric; Shellman, Yiqun; Wisell, Joshua; Applegate, Allison; McCarter, Martin; Box, Neil; Tentler, John; De, Subhajyoti

    2016-01-01

    Objective Currently, there is a disconnect between finding a patient’s relevant molecular profile and predicting actionable therapeutics. Here we develop and implement the Integrating Molecular Profiles with Actionable Therapeutics (IMPACT) analysis pipeline, linking variants detected from whole-exome sequencing (WES) to actionable therapeutics. Methods and materials The IMPACT pipeline contains 4 analytical modules: detecting somatic variants, calling copy number alterations, predicting drugs against deleterious variants, and analyzing tumor heterogeneity. We tested the IMPACT pipeline on whole-exome sequencing data in The Cancer Genome Atlas (TCGA) lung adenocarcinoma samples with known EGFR mutations. We also used IMPACT to analyze melanoma patient tumor samples before treatment, after BRAF-inhibitor treatment, and after BRAF- and MEK-inhibitor treatment. Results IMPACT correctly identified known EGFR mutations in the TCGA lung adenocarcinoma samples and linked them to the appropriate Food and Drug Administration (FDA)-approved EGFR inhibitors. For the melanoma patient samples, we identified NRAS p.Q61K as an acquired resistance mutation to BRAF-inhibitor treatment. We also identified CDKN2A deletion as a novel acquired resistance alteration to combined BRAF- and MEK-inhibitor treatment. The IMPACT analysis pipeline links these somatic variants to actionable therapeutics. We observed the clonal dynamics in the tumor samples after the various treatments. We showed that IMPACT not only helped in the successful prioritization of clinically relevant variants but also linked these variants to possible targeted therapies. Conclusion IMPACT provides a new bioinformatics strategy to delineate candidate somatic variants and actionable therapies. This approach can be applied to other patient tumor samples to discover effective drug targets for personalized medicine. IMPACT is publicly available at http://tanlab.ucdenver.edu/IMPACT. PMID:27026619

  14. Comprehensive investigation into historical pipeline construction costs and engineering economic analysis of Alaska in-state gas pipeline

    NASA Astrophysics Data System (ADS)

    Rui, Zhenhua

    This study analyzes historical cost data of 412 pipelines and 220 compressor stations. On the basis of this analysis, the study also evaluates the feasibility of an Alaska in-state gas pipeline using Monte Carlo simulation techniques. Analysis of pipeline construction costs shows that component costs, shares of cost components, and learning rates for material and labor costs vary by diameter, length, volume, year, and location. Overall average learning rates for pipeline material and labor costs are 6.1% and 12.4%, respectively. Overall average cost shares for pipeline material, labor, miscellaneous, and right of way (ROW) are 31%, 40%, 23%, and 7%, respectively. Regression models are developed to estimate pipeline component costs for different lengths, cross-sectional areas, and locations. An analysis of inaccuracy in pipeline cost estimation demonstrates that the cost estimation of pipeline cost components is biased except in the case of total costs. Overall overrun rates for pipeline material, labor, miscellaneous, ROW, and total costs are 4.9%, 22.4%, -0.9%, 9.1%, and 6.5%, respectively, and project size, capacity, diameter, location, and year of completion have differing degrees of impact on cost overruns of pipeline cost components. Analysis of compressor station costs shows that component costs, shares of cost components, and learning rates for material and labor costs vary in terms of capacity, year, and location. Average learning rates for compressor station material and labor costs are 12.1% and 7.48%, respectively. Overall average cost shares of material, labor, miscellaneous, and ROW are 50.6%, 27.2%, 21.5%, and 0.8%, respectively. Regression models are developed to estimate compressor station component costs in different capacities and locations. An investigation into inaccuracies in compressor station cost estimation demonstrates that the cost estimation for compressor stations is biased except in the case of material costs. Overall average overrun rates for compressor station material, labor, miscellaneous, land, and total costs are 3%, 60%, 2%, -14%, and 11%, respectively, and cost overruns for cost components are influenced by location and year of completion to different degrees. Monte Carlo models are developed and simulated to evaluate the feasibility of an Alaska in-state gas pipeline by assigning triangular distributions to the values of economic parameters. Simulated results show that the construction of an Alaska in-state natural gas pipeline is feasible in three scenarios: 500 million cubic feet per day (mmcfd), 750 mmcfd, and 1000 mmcfd.
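
    A hedged sketch of the Monte Carlo feasibility step described above: economic parameters drawn from triangular distributions and a net present value computed per trial. All numeric values (costs, tariff, discount rate, lifetime) are illustrative assumptions, not figures from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo trials

# Triangular distributions: (minimum, most likely, maximum) per parameter
capex = rng.triangular(4e9, 5e9, 7e9, N)           # construction cost, $
tariff = rng.triangular(1.5, 2.0, 2.6, N)          # $/Mcf transport tariff
throughput = 500e3 * 365                           # Mcf/yr at 500 mmcfd
opex = rng.triangular(0.05e9, 0.08e9, 0.12e9, N)   # $/yr operating cost
r, years = 0.08, 30                                # discount rate, lifetime

annuity = (1 - (1 + r) ** -years) / r              # present-value factor
npv = (tariff * throughput - opex) * annuity - capex
print(f"P(NPV > 0) = {np.mean(npv > 0):.2f}, "
      f"median NPV = {np.median(npv) / 1e9:.2f} B$")
```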

  15. Lateral instability of high temperature pipelines, the 20-in. Sleipner Vest pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saevik, S.; Levold, E.; Johnsen, O.K.

    1996-12-01

    The present paper addresses methods to control snaking behavior of high temperature pipelines resting on a flat sea bed. A case study is presented based on the detail engineering of the 12.5 km long 20 inch gas pipeline connecting the Sleipner Vest wellhead platform to the Sleipner T processing platform in the North Sea. The study includes screening and evaluation of alternative expansion control methods, ending up with a recommended method. The methodology and philosophy, used as basis to ensure sufficient structural strength throughout the lifetime of the pipeline, are thereafter presented. The results show that in order to find the optimum technical solution to control snaking behavior, many aspects need to be considered, such as process requirements, allowable strain, hydrodynamic stability, vertical profile, pipelay installation and trawlboard loading. It is concluded that by proper consideration of all the above aspects, the high temperature pipeline can be designed to obtain a sufficient safety level.

  16. Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

    PubMed Central

    Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline are compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low-dimensional manifold reveals qualitative, but clear, QA-study associations and suggests that automated outlier/anomaly detection would be feasible. PMID:23637895
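
    As an illustration of the closing point, the sketch below projects per-scan QA metrics onto their first two principal components and flags scans far from the bulk. The metric matrix, component count, and 3-sigma rule are assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
# rows = scans, cols = QA metrics (noise level, motion, fit residual, ...)
X = rng.normal(size=(608, 6))
X[::100] += 4.0                      # a few synthetic corrupted scans

Xc = X - X.mean(axis=0)              # center, then PCA via SVD
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T               # first two principal components

d = np.linalg.norm(scores - scores.mean(axis=0), axis=1)
flagged = np.where(d > d.mean() + 3 * d.std())[0]
print("flagged scans:", flagged)
```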

  17. Simultaneous analysis and quality assurance for diffusion tensor imaging.

    PubMed

    Lauzon, Carolyn B; Asman, Andrew J; Esparza, Michael L; Burns, Scott S; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W; Davis, Nicole; Cutting, Laurie E; Landman, Bennett A

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline are compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low-dimensional manifold reveals qualitative, but clear, QA-study associations and suggests that automated outlier/anomaly detection would be feasible.

  18. GRAPE: a graphical pipeline environment for image analysis in adaptive magnetic resonance imaging.

    PubMed

    Gabr, Refaat E; Tefera, Getaneh B; Allen, William J; Pednekar, Amol S; Narayana, Ponnada A

    2017-03-01

    We present a platform, GRAphical Pipeline Environment (GRAPE), to facilitate the development of patient-adaptive magnetic resonance imaging (MRI) protocols. GRAPE is an open-source project implemented in the Qt C++ framework to enable graphical creation, execution, and debugging of real-time image analysis algorithms integrated with the MRI scanner. The platform provides the tools and infrastructure to design new algorithms, build and execute arrays of image analysis routines, and include existing analysis libraries, all within a graphical environment. The application of GRAPE is demonstrated in multiple MRI applications, and the software is described in detail for both the user and the developer. GRAPE was successfully used to implement and execute three applications in MRI of the brain, performed on a 3.0-T MRI scanner: (i) a multi-parametric pipeline for segmenting the brain tissue and detecting lesions in multiple sclerosis (MS), (ii) patient-specific optimization of the 3D fluid-attenuated inversion recovery MRI scan parameters to enhance the contrast of brain lesions in MS, and (iii) an algebraic image method for combining two MR images for improved lesion contrast. GRAPE allows graphical development and execution of image analysis algorithms for inline, real-time, and adaptive MRI applications.

  19. Head-to-Head Comparison of Two Popular Cortical Thickness Extraction Algorithms: A Cross-Sectional and Longitudinal Study

    PubMed Central

    Redolfi, Alberto; Manset, David; Barkhof, Frederik; Wahlund, Lars-Olof; Glatard, Tristan; Mangin, Jean-François; Frisoni, Giovanni B.

    2015-01-01

    Background and Purpose The measurement of cortical shrinkage is a candidate marker of disease progression in Alzheimer’s disease. This study evaluated the performance of two pipelines: Civet-CLASP (v1.1.9) and Freesurfer (v5.3.0). Methods Images from 185 ADNI1 cases (69 elderly controls (CTR), 37 stable MCI (sMCI), 27 progressive MCI (pMCI), and 52 Alzheimer’s disease (AD) patients) scanned at baseline, month 12, and month 24 were processed using the two pipelines and two interconnected e-infrastructures: neuGRID (https://neugrid4you.eu) and VIP (http://vip.creatis.insa-lyon.fr). The vertex-by-vertex cross-algorithm comparison was made possible by applying the 3D gradient vector flow (GVF) and closest point search (CPS) techniques. Results The cortical thickness measured with Freesurfer was systematically lower by one-third compared to Civet’s. Cross-sectionally, Freesurfer’s effect size was significantly different in the posterior division of the temporal fusiform cortex. Both pipelines were weakly or mildly correlated with the Mini Mental State Examination score (MMSE) and the hippocampal volumetry. Civet differed significantly from Freesurfer in large frontal, parietal, temporal and occipital regions (p<0.05). In a discriminant analysis with cortical ROIs having effect size larger than 0.8, both pipelines gave no significant differences in area under the curve (AUC). Longitudinally, effect sizes were not significantly different in any of the 28 ROIs tested. Both pipelines weakly correlated with MMSE decay, showing no significant differences. Freesurfer mildly correlated with hippocampal thinning rate and differed in the supramarginal gyrus, temporal gyrus, and in the lateral occipital cortex compared to Civet (p<0.05). In a discriminant analysis with ROIs having effect size larger than 0.6, both pipelines yielded no significant differences in the AUC. Conclusions Civet appears slightly more sensitive to the typical AD atrophic pattern at the MCI stage, but both pipelines can accurately characterize the topography of cortical thinning at the dementia stage. PMID:25781983

  20. Identification and validation of loss of function variants in clinical contexts.

    PubMed

    Lescai, Francesco; Marasco, Elena; Bacchelli, Chiara; Stanier, Philip; Mantovani, Vilma; Beales, Philip

    2014-01-01

    The choice of an appropriate variant calling pipeline for exome sequencing data is becoming increasingly more important in translational medicine projects and clinical contexts. Within GOSgene, which facilitates genetic analysis as part of a joint effort of the University College London and the Great Ormond Street Hospital, we aimed to optimize a variant calling pipeline suitable for our clinical context. We implemented the GATK/Queue framework and evaluated the performance of its two callers: the classical UnifiedGenotyper and the new variant discovery tool HaplotypeCaller. We performed an experimental validation of the loss-of-function (LoF) variants called by the two methods using Sequenom technology. UnifiedGenotyper showed a total validation rate of 97.6% for LoF single-nucleotide polymorphisms (SNPs) and 92.0% for insertions or deletions (INDELs), whereas HaplotypeCaller was 91.7% for SNPs and 55.9% for INDELs. We confirm that GATK/Queue is a reliable pipeline in translational medicine and clinical contexts. We conclude that in our working environment, UnifiedGenotyper is the caller of choice, being an accurate method with a high validation rate for error-prone calls like LoF variants. We finally highlight the importance of experimental validation, especially for INDELs, as part of a standard pipeline in clinical environments.

  1. Applications of the pipeline environment for visual informatics and genomics computations

    PubMed Central

    2011-01-01

    Background Contemporary informatics and genomics research require efficient, flexible and robust management of large heterogeneous data, advanced computational tools, powerful visualization, reliable hardware infrastructure, interoperability of computational resources, and detailed data and analysis-protocol provenance. The Pipeline is a client-server distributed computational environment that facilitates the visual graphical construction, execution, monitoring, validation and dissemination of advanced data analysis protocols. Results This paper reports on the applications of the LONI Pipeline environment to address two informatics challenges - graphical management of diverse genomics tools, and the interoperability of informatics software. Specifically, this manuscript presents the concrete details of deploying general informatics suites and individual software tools to new hardware infrastructures, the design, validation and execution of new visual analysis protocols via the Pipeline graphical interface, and integration of diverse informatics tools via the Pipeline eXtensible Markup Language syntax. We demonstrate each of these processes using several established informatics packages (e.g., miBLAST, EMBOSS, mrFAST, GWASS, MAQ, SAMtools, Bowtie) for basic local sequence alignment and search, molecular biology data analysis, and genome-wide association studies. These examples demonstrate the power of the Pipeline graphical workflow environment to enable integration of bioinformatics resources which provide a well-defined syntax for dynamic specification of the input/output parameters and the run-time execution controls. Conclusions The LONI Pipeline environment http://pipeline.loni.ucla.edu provides a flexible graphical infrastructure for efficient biomedical computing and distributed informatics research. The interactive Pipeline resource manager enables the utilization and interoperability of diverse types of informatics resources. The Pipeline client-server model provides computational power to a broad spectrum of informatics investigators - experienced developers and novice users, users with or without access to advanced computational resources (e.g., Grid, data), as well as basic and translational scientists. The open development, validation and dissemination of computational networks (pipeline workflows) facilitates the sharing of knowledge, tools, protocols and best practices, and enables the unbiased validation and replication of scientific findings by the entire community. PMID:21791102

  2. Diagnostic Inspection of Pipelines for Estimating the State of Stress in Them

    NASA Astrophysics Data System (ADS)

    Subbotin, V. A.; Kolotilov, Yu. V.; Smirnova, V. Yu.; Ivashko, S. K.

    2017-12-01

    The diagnostic inspection used to estimate the technical state of a pipeline is described. The problems of inspection works are listed, and a functional-structural scheme is developed to estimate the state of stress in a pipeline. Final conclusions regarding the actual loading of a pipeline section are drawn from a cross-analysis of all the information obtained during pipeline inspection.

  3. Short Communication: Analysis of Minor Populations of Human Immunodeficiency Virus by Primer Identification and Insertion-Deletion and Carry Forward Correction Pipelines.

    PubMed

    Hughes, Paul; Deng, Wenjie; Olson, Scott C; Coombs, Robert W; Chung, Michael H; Frenkel, Lisa M

    2016-03-01

    Accurate analysis of minor populations of drug-resistant HIV requires analysis of a sufficient number of viral templates. We assessed the effect of experimental conditions on the analysis of HIV pol 454 pyrosequences generated from plasma using (1) the "Insertion-deletion (indel) and Carry Forward Correction" (ICC) pipeline, which clusters sequence reads using a nonsubstitution approach and can correct for indels and carry forward errors, and (2) the "Primer Identification (ID)" method, which facilitates construction of a consensus sequence to correct for sequencing errors and allelic skewing. The Primer ID and ICC methods produced similar estimates of viral diversity, but differed in the number of sequence variants generated. Sequence preparation for ICC was comparably simple, but was limited by an inability to assess the number of templates analyzed and allelic skewing. The more costly Primer ID method corrected for allelic skewing and provided the number of viral templates analyzed, which revealed that amplifiable HIV templates varied across specimens and did not correlate with clinical viral load. This latter observation highlights the value of the Primer ID method, which by determining the number of templates amplified, enables more accurate assessment of minority species in the virus population, which may be relevant to prescribing effective antiretroviral therapy.
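
    An illustrative sketch of the Primer ID idea described above: reads are grouped by their embedded random primer tag, a within-group consensus corrects sequencing errors and allelic skewing, and the number of distinct tags estimates how many templates were actually sampled. Tag length and the toy reads are assumptions.

```python
from collections import Counter, defaultdict

def consensus(reads):
    """Majority base at each position across same-length reads."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*reads))

# (primer_id, read) pairs as they might come out of demultiplexing
tagged_reads = [
    ("AAGT", "ACGTACGT"), ("AAGT", "ACGTACGT"), ("AAGT", "ACGAACGT"),
    ("CCTA", "ACGTTCGT"), ("CCTA", "ACGTTCGT"),
]

groups = defaultdict(list)
for tag, read in tagged_reads:
    groups[tag].append(read)

templates = {tag: consensus(reads) for tag, reads in groups.items()}
print(len(templates), "templates sampled:", templates)
```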

  4. Stress and Strain State Analysis of Defective Pipeline Portion

    NASA Astrophysics Data System (ADS)

    Burkov, P. V.; Burkova, S. P.; Knaub, S. A.

    2015-09-01

    The paper presents computer simulation results of the pipeline having defects in a welded joint. Autodesk Inventor software is used for simulation of the stress and strain state of the pipeline. Places of the possible failure and stress concentrators are predicted on the defective portion of the pipeline.

  5. 76 FR 25576 - Pipeline Safety: Applying Safety Regulations to All Rural Onshore Hazardous Liquid Low-Stress Lines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-05

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... to All Rural Onshore Hazardous Liquid Low-Stress Lines AGENCY: Pipeline and Hazardous Materials... burdensome to require operators of these pipelines to perform a complete ``could affect'' analysis to...

  6. Method for oil pipeline leak detection based on distributed fiber optic technology

    NASA Astrophysics Data System (ADS)

    Chen, Huabo; Tu, Yaqing; Luo, Ting

    1998-08-01

    Pipeline leak detection has remained a difficult problem. Traditional leak detection methods suffer from high rates of false alarms or missed detections and poor location-estimation capability. To address these problems, a method for oil pipeline leak detection based on a distributed optical fiber sensor with a special coating is presented. The fiber's coating interacts with hydrocarbon molecules in oil, which alters the refractive index of the coating and thereby modifies the light-guiding properties of the fiber. The pipeline leak location can then be determined by optical time-domain reflectometry (OTDR). An oil pipeline leak detection system is designed on this principle. The system features real-time operation, simultaneous multi-point detection, and high location accuracy. Finally, factors that may influence detection are analyzed and preliminary improvements are proposed.
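
    A minimal sketch of the OTDR distance calculation implied above: the position of the coating anomaly (the leak) follows from the round-trip travel time of the backscattered light. The refractive index and timing values are illustrative.

```python
C_VACUUM = 2.998e8      # speed of light in vacuum, m/s
N_CORE = 1.468          # assumed refractive index of the fiber core

def leak_distance(round_trip_time_s: float) -> float:
    """Distance along the fiber to the reflecting/attenuating event:
    d = c * t / (2 * n), halving for the round trip."""
    return C_VACUUM * round_trip_time_s / (2.0 * N_CORE)

print(f"{leak_distance(49.0e-6):.0f} m")  # a 49 microsecond echo ~ 5 km
```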

  7. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    PubMed

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demand for evaluation of fMRI processing pipelines and validation of fMRI analysis results is increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules such as FSL.FEAT and NPAIRS.CVA were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the ranking of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.

  8. A method for simulating the release of natural gas from the rupture of high-pressure pipelines in any terrain.

    PubMed

    Deng, Yajun; Hu, Hongbing; Yu, Bo; Sun, Dongliang; Hou, Lei; Liang, Yongtu

    2018-01-15

    The rupture of a high-pressure natural gas pipeline can pose a serious threat to human life and environment. In this research, a method has been proposed to simulate the release of natural gas from the rupture of high-pressure pipelines in any terrain. The process of gas releases from the rupture of a high-pressure pipeline is divided into three stages, namely the discharge, jet, and dispersion stages. Firstly, a discharge model is established to calculate the release rate of the orifice. Secondly, an improved jet model is proposed to obtain the parameters of the pseudo source. Thirdly, a fast-modeling method applicable to any terrain is introduced. Finally, based upon these three steps, a dispersion model, which can take any terrain into account, is established. Then, the dispersion scenarios of released gas in four different terrains are studied. Moreover, the effects of pipeline pressure, pipeline diameter, wind speed and concentration of hydrogen sulfide on the dispersion scenario in real terrain are systematically analyzed. The results provide significant guidance for risk assessment and contingency planning of a ruptured natural gas pipeline. Copyright © 2017. Published by Elsevier B.V.
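
    To ground the first (discharge) stage, here is a hedged sketch using the textbook isentropic choked-flow rate through the rupture orifice. This is the standard formula, not necessarily the paper's exact discharge model, and the gas properties and pipeline state below are illustrative.

```python
import math

def choked_mass_flow(p_pa, t_k, area_m2, gamma=1.31, M=0.0164, cd=0.85):
    """Choked mass flow rate (kg/s) for a methane-like gas.
    p_pa: stagnation pressure; t_k: temperature; area_m2: orifice area;
    gamma: heat-capacity ratio; M: molar mass (kg/mol); cd: discharge coeff."""
    R = 8.314  # universal gas constant, J/(mol K)
    term = (gamma * M / (R * t_k)
            * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (gamma - 1.0)))
    return cd * area_m2 * p_pa * math.sqrt(term)

area = math.pi * (0.1 / 2) ** 2          # assumed 100 mm rupture opening
print(f"{choked_mass_flow(8e6, 288.0, area):.1f} kg/s")  # 8 MPa line
```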

  9. Dual-tree complex wavelet transform and SVD based acoustic noise reduction and its application in leak detection for natural gas pipeline

    NASA Astrophysics Data System (ADS)

    Yu, Xuchao; Liang, Wei; Zhang, Laibin; Jin, Hao; Qiu, Jingwei

    2016-05-01

    During the last decades, leak detection for natural gas pipelines has become one of the paramount concerns of pipeline operators and researchers across the globe. The acoustic wave method has proved to be an effective way to identify and localize leakage in gas pipelines. Considering that noise inevitably exists in the collected acoustic signals, noise reduction should be applied to the signals before subsequent data mining and analysis. Thus, an integrated acoustic noise reduction method based on DTCWT and SVD is proposed in this study. The method is built on the idea that the noise reduction strategy should match the characteristics of the noisy signal. According to previous studies, the energy of acoustic signals collected under leaking conditions is mainly concentrated in the low-frequency portion (0-100 Hz), and the ultralow-frequency component (0-5 Hz), taken as the characteristic frequency band in this study, can propagate a relatively long distance and be captured by sensors. Therefore, in order to filter the noise while preserving the characteristic frequency band, DTCWT is used to conduct multilevel decomposition and refinement of the acoustic signals, and SVD is employed to eliminate noise in the non-characteristic bands. Both simulation and field experiments show that DTCWT-SVD is an excellent method for acoustic noise reduction. Finally, application to leakage localization shows that estimating the location of the leak hole becomes much easier and somewhat more accurate after noise reduction by DTCWT-SVD.
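
    A hedged sketch of the SVD stage only: singular-spectrum-style denoising, in which the noisy signal is embedded in a Hankel trajectory matrix, the leading singular components are kept, and the low-rank matrix is averaged back into a one-dimensional signal. The DTCWT stage is omitted, and the window length and rank are assumptions.

```python
import numpy as np

def svd_denoise(x, window=50, rank=2):
    n = len(x)
    k = n - window + 1
    # Hankel trajectory matrix: column j is x[j : j + window]
    H = np.column_stack([x[j:j + window] for j in range(k)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank approximation
    # Anti-diagonal averaging back to a 1-D series
    out, counts = np.zeros(n), np.zeros(n)
    for j in range(k):
        out[j:j + window] += H_low[:, j]
        counts[j:j + window] += 1
    return out / counts

t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 3 * t)                  # low-frequency component
noisy = clean + 0.4 * np.random.default_rng(0).standard_normal(t.size)
den = svd_denoise(noisy)
print(f"noise std before: {np.std(noisy - clean):.3f}, "
      f"after: {np.std(den - clean):.3f}")
```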

  10. Automatic Generalizability Method of Urban Drainage Pipe Network Considering Multi-Features

    NASA Astrophysics Data System (ADS)

    Zhu, S.; Yang, Q.; Shao, J.

    2018-05-01

    Urban drainage systems are an indispensable dataset for storm-flooding simulation. Given data availability and current computing power, the structure and complexity of urban drainage systems need to be simplified. To date, however, the simplification procedure has mainly depended on manual operation, which leads to mistakes and low efficiency. This work references the classification methodology of road systems and proposes the concept of the pipeline stroke. Length of pipeline, angle between two pipelines, the road level to which a pipeline belongs, and pipeline diameter were chosen as the similarity criteria to generate pipeline strokes. Finally, an automatic method was designed to generalize drainage systems with multiple features taken into account, as sketched below. This technique can improve the efficiency and accuracy of the generalization of drainage systems, and it benefits the study of urban storm floods.
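
    An illustrative sketch of the stroke-building idea: consecutive pipe segments are merged into one "pipeline stroke" while the deflection angle between them stays under a threshold. The paper also weighs length, road level, and diameter; only the angle criterion is shown, and the data and 30-degree threshold are assumptions.

```python
import math

def deflection_deg(p, q, r):
    """Angle (degrees) by which direction changes at q along p -> q -> r."""
    a1 = math.atan2(q[1] - p[1], q[0] - p[0])
    a2 = math.atan2(r[1] - q[1], r[0] - q[0])
    return abs(math.degrees((a2 - a1 + math.pi) % (2 * math.pi) - math.pi))

def build_strokes(polyline, max_deflection=30.0):
    """Split a chain of pipe nodes into strokes at sharp turns."""
    strokes, current = [], [polyline[0], polyline[1]]
    for p, q, r in zip(polyline, polyline[1:], polyline[2:]):
        if deflection_deg(p, q, r) <= max_deflection:
            current.append(r)          # continues the same stroke
        else:
            strokes.append(current)    # sharp turn: start a new stroke
            current = [q, r]
    strokes.append(current)
    return strokes

nodes = [(0, 0), (1, 0), (2, 0.1), (2.5, 1.5), (3, 3)]
print([len(s) for s in build_strokes(nodes)])  # [3, 3]: split at the bend
```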

  11. Common Data Analysis Pipeline | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    CPTAC supports analyses of the mass spectrometry raw data (mapping of spectra to peptide sequences and protein identification) for the public using a Common Data Analysis Pipeline (CDAP). The data types available on the public portal are described below. A general overview of this pipeline can be downloaded here. Mass spectrometry data formats include the RAW (vendor) format.

  12. Compact Graphical Representation of Phylogenetic Data and Metadata with GraPhlAn

    DTIC Science & Technology

    2016-09-12

    pipelines. This allows for a higher degree of analysis reproducibility, but the software must correspondingly be available for local installation and callable...these operations are available in the GraPhlAn software repository). Reproducible integration with existing analysis tools and pipelines Graphical...from different analysis pipelines, generating the necessary input files for GraPhlAn. Export2graphlan directly supports MetaPhlAn2, LEfSe, and HUMAnN

  13. MitoFish and MiFish Pipeline: A Mitochondrial Genome Database of Fish with an Analysis Pipeline for Environmental DNA Metabarcoding.

    PubMed

    Sato, Yukuto; Miya, Masaki; Fukunaga, Tsukasa; Sado, Tetsuya; Iwasaki, Wataru

    2018-06-01

    Fish mitochondrial genome (mitogenome) data form a fundamental basis for revealing vertebrate evolution and hydrosphere ecology. Here, we report recent functional updates of MitoFish, which is a database of fish mitogenomes with a precise annotation pipeline MitoAnnotator. Most importantly, we describe implementation of MiFish pipeline for metabarcoding analysis of fish mitochondrial environmental DNA, which is a fast-emerging and powerful technology in fish studies. MitoFish, MitoAnnotator, and MiFish pipeline constitute a key platform for studies of fish evolution, ecology, and conservation, and are freely available at http://mitofish.aori.u-tokyo.ac.jp/ (last accessed April 7th, 2018).

  14. 77 FR 66568 - Revisions to Procedural Regulations Governing Transportation by Intrastate Pipelines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-06

    ... filings by those natural gas pipelines that fall under the Commission's jurisdiction pursuant to the Natural Gas Policy Act of 1978 or the Natural Gas Act. An intrastate pipeline may elect to use these... Pipelines C. Withdrawal Procedures 20 III. Information Collection Statement 21 IV. Environmental Analysis 28...

  15. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  16. Leak detection in gas pipeline by acoustic and signal processing - A review

    NASA Astrophysics Data System (ADS)

    Adnan, N. F.; Ghazali, M. F.; Amin, M. M.; Hamat, A. M. A.

    2015-12-01

    The pipeline system is the most important part of media transport for delivering fluid from one station to another. Weak maintenance and poor safety contribute to financial losses in terms of wasted fluid and environmental impacts. Detection techniques are commonly classified to make their specific methods and applications easier to present. This paper discusses gas leak detection in pipeline systems using the acoustic method. Wave propagation in the pipeline is a key parameter of the acoustic method: when a leak occurs, the pressure balance of the pipe is disturbed, and the resulting wave is shaped by friction at the pipe wall. Signal processing is used to decompose the raw signal and present it in the time-frequency domain. Findings based on the acoustic method can serve as the basis for future comparative studies. Acoustic signals combined with the Hilbert-Huang transform (HHT) appear to be the most effective way to detect leaks in gas pipelines. More experiments and simulations need to be carried out to obtain fast leak detection and location estimation.
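
    A hedged sketch of one common acoustic localization step: cross-correlating the records of two sensors bracketing the leak to estimate the time difference of arrival, from which the leak position follows. The wave speed, sensor spacing, and synthetic signal are assumptions.

```python
import numpy as np

fs = 10_000.0        # sampling rate, Hz
c = 400.0            # assumed acoustic wave speed in the gas column, m/s
L = 1000.0           # distance between the two sensors, m

rng = np.random.default_rng(1)
pulse = rng.standard_normal(200)                 # leak "signature"
delay_samples = 250                              # true extra travel, 25 ms
s1 = np.concatenate([pulse, np.zeros(1000)])
s2 = np.concatenate([np.zeros(delay_samples), pulse,
                     np.zeros(1000 - delay_samples)])

xcorr = np.correlate(s2, s1, mode="full")
lag = np.argmax(xcorr) - (len(s1) - 1)           # samples by which s2 lags s1
dt = lag / fs
x_leak = (L - c * dt) / 2.0                      # distance from sensor 1
print(f"estimated delay {dt*1e3:.1f} ms, leak at {x_leak:.0f} m from sensor 1")
```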

  17. A De Novo-Assembly Based Data Analysis Pipeline for Plant Obligate Parasite Metatranscriptomic Studies.

    PubMed

    Guo, Li; Allen, Kelly S; Deiulio, Greg; Zhang, Yong; Madeiras, Angela M; Wick, Robert L; Ma, Li-Jun

    2016-01-01

    Current and emerging plant diseases caused by obligate parasitic microbes such as rusts, downy mildews, and powdery mildews threaten worldwide crop production and food safety. These obligate parasites are typically unculturable in the laboratory, posing technical challenges to characterize them at the genetic and genomic level. Here we have developed a data analysis pipeline integrating several bioinformatic software programs. This pipeline facilitates rapid gene discovery and expression analysis of a plant host and its obligate parasite simultaneously by next generation sequencing of mixed host and pathogen RNA (i.e., metatranscriptomics). We applied this pipeline to metatranscriptomic sequencing data of sweet basil (Ocimum basilicum) and its obligate downy mildew parasite Peronospora belbahrii, both lacking a sequenced genome. Even with a single data point, we were able to identify both candidate host defense genes and pathogen virulence genes that are highly expressed during infection. This demonstrates the power of this pipeline for identifying genes important in host-pathogen interactions without prior genomic information for either the plant host or the obligate biotrophic pathogen. The simplicity of this pipeline makes it accessible to researchers with limited computational skills and applicable to metatranscriptomic data analysis in a wide range of plant-obligate-parasite systems.

  18. Modelling of non-equilibrium flow in the branched pipeline systems

    NASA Astrophysics Data System (ADS)

    Sumskoi, S. I.; Sverchkov, A. M.; Lisanov, M. V.; Egorov, A. F.

    2016-09-01

    This article presents a mathematical model and a numerical method for solving the problem of water hammer in a branched pipeline system. The problem is considered in a one-dimensional, non-stationary formulation that takes into account realities such as changes in the diameter of the pipeline and its branches. Comparison with an existing analytic solution shows that the proposed method possesses good accuracy. With the help of the developed model and numerical method, the problem of the transmission of a complex of compression waves in a branching pipeline system under the operation of several shut-off valves has been solved. The offered model and method may easily be applied to a number of other problems, for example, describing the flow of blood in vessels.
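
    The article's own discretization is not reproduced here, but the classical method of characteristics (MOC) for a single uniform pipe gives the flavor of such a water-hammer solver; a branched system adds junction boundary conditions at each branch point. The sketch below assumes an upstream reservoir and an instantaneously closing downstream valve, with invented pipe parameters.

        import numpy as np

        g, a = 9.81, 1200.0          # gravity (m/s^2), wave speed (m/s), assumed
        L, D, f = 1000.0, 0.4, 0.02  # pipe length (m), diameter (m), friction factor
        A = np.pi * D**2 / 4
        N = 100                      # computational reaches
        dx = L / N
        dt = dx / a                  # Courant number a*dt/dx = 1
        B = a / (g * A)
        R = f * dx / (2 * g * D * A**2)

        H = np.full(N + 1, 50.0)     # initial head (m)
        Q = np.full(N + 1, 0.1)      # initial steady flow (m^3/s)

        for _ in range(400):
            Hn, Qn = H.copy(), Q.copy()
            # interior nodes: intersect the C+ and C- characteristics
            CP = H[:-2] + B * Q[:-2] - R * Q[:-2] * np.abs(Q[:-2])
            CM = H[2:] - B * Q[2:] + R * Q[2:] * np.abs(Q[2:])
            Hn[1:-1] = 0.5 * (CP + CM)
            Qn[1:-1] = (CP - CM) / (2 * B)
            # upstream reservoir: fixed head, C- characteristic gives the flow
            Hn[0] = 50.0
            Qn[0] = (Hn[0] - (H[1] - B * Q[1] + R * Q[1] * np.abs(Q[1]))) / B
            # downstream valve closed instantly at t = 0: Q = 0, C+ gives the head
            Qn[-1] = 0.0
            Hn[-1] = H[-2] + B * Q[-2] - R * Q[-2] * np.abs(Q[-2])
            H, Q = Hn, Qn

        print("head at the valve after 400 steps: %.1f m" % H[-1])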

  19. Pipeline for illumination correction of images for high-throughput microscopy.

    PubMed

    Singh, S; Bray, M-A; Jones, T R; Carpenter, A E

    2014-12-01

    The presence of systematic noise in images in high-throughput microscopy experiments can significantly impact the accuracy of downstream results. Among the most common sources of systematic noise is non-homogeneous illumination across the image field. This often adds an unacceptable level of noise, obscures true quantitative differences and precludes biological experiments that rely on accurate fluorescence intensity measurements. In this paper, we seek to quantify the improvement in the quality of high-content screen readouts due to software-based illumination correction. We present a straightforward illumination correction pipeline that has been used by our group across many experiments. We test the pipeline on real-world high-throughput image sets and evaluate the performance of the pipeline at two levels: (a) Z'-factor to evaluate the effect of the image correction on a univariate readout, representative of a typical high-content screen, and (b) classification accuracy on phenotypic signatures derived from the images, representative of an experiment involving more complex data mining. We find that applying the proposed post-hoc correction method improves performance in both experiments, even when illumination correction has already been applied using software associated with the instrument. To facilitate the ready application and future development of illumination correction methods, we have made our complete test data sets as well as open-source image analysis pipelines publicly available. This software-based solution has the potential to improve outcomes for a wide variety of image-based HTS experiments. © 2014 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
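
    The published pipeline accompanies the paper; the fragment below is only a minimal sketch of the retrospective idea it describes: estimate the illumination function as a heavily smoothed per-pixel median over many images from one batch, then divide each image by it. Array shapes, the smoothing scale, and the synthetic vignette are assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def illumination_correct(stack, sigma=50.0):
            """stack: array of shape (n_images, h, w) from one acquisition batch."""
            median_img = np.median(stack, axis=0)          # robust per-pixel average
            illum = gaussian_filter(median_img, sigma)     # smooth away cell-scale detail
            illum /= illum.mean()                          # normalize to mean 1
            return stack / illum                           # corrected images

        rng = np.random.default_rng(0)
        images = rng.poisson(100, size=(50, 256, 256)).astype(float)
        yy, xx = np.mgrid[0:256, 0:256]
        vignette = 1.0 - 0.4 * ((xx - 128)**2 + (yy - 128)**2) / 128**2 / 2
        corrected = illumination_correct(images * vignette)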

  20. Improved Photometry for the DASCH Pipeline

    NASA Astrophysics Data System (ADS)

    Tang, Sumin; Grindlay, Jonathan; Los, Edward; Servillat, Mathieu

    2013-07-01

    The Digital Access to a Sky Century@Harvard (DASCH) project is digitizing the ˜500,000 glass plate images obtained (full sky) by the Harvard College Observatory from 1885 to 1992. Astrometry and photometry for each resolved object are derived with photometric rms values of ˜0.15 mag for the initial photometry analysis pipeline. Here we describe new developments for DASCH photometry, applied to the Kepler field, that have yielded further improvements, including better identification of image blends and plate defects by measuring image profiles and astrometric deviations. A local calibration procedure using nearby stars in a similar magnitude range as the program star (similar to what has been done for visual photometry from the plates) yields additional improvement for a net photometric rms of ˜0.1 mag. We also describe statistical measures of light curves that are now used in the DASCH pipeline processing to identify new variables autonomously. The DASCH photometry methods described here are used in the pipeline processing for the data releases of DASCH data, as well as for a forthcoming paper on the long-term variables discovered by DASCH in the Kepler field.
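
    A minimal sketch of the local-calibration idea follows, with invented field names and thresholds (the actual DASCH selection criteria are not given in this record): the program star's instrumental magnitude is corrected by the median catalog-minus-instrumental offset of nearby stars of similar brightness.

        import numpy as np

        def local_calibration(prog_xy, prog_mag_inst, stars_xy, mags_inst, mags_cat,
                              radius=0.5, dmag=1.0):
            """stars_xy in degrees; mags_inst = plate magnitudes; mags_cat = catalog."""
            d = np.hypot(stars_xy[:, 0] - prog_xy[0], stars_xy[:, 1] - prog_xy[1])
            near = (d < radius) & (np.abs(mags_inst - prog_mag_inst) < dmag)
            if near.sum() < 5:              # too few comparison stars: no correction
                return prog_mag_inst
            offset = np.median(mags_cat[near] - mags_inst[near])
            return prog_mag_inst + offset

        rng = np.random.default_rng(0)
        stars = rng.uniform(0, 2, size=(200, 2))
        m_inst = rng.uniform(10, 14, 200)
        m_cat = m_inst + 0.25               # pretend a constant plate offset
        print(local_calibration((1.0, 1.0), 12.0, stars, m_inst, m_cat))  # ~12.25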

  1. Material property relationships for pipeline steels and the potential for application of NDE

    NASA Astrophysics Data System (ADS)

    Smart, Lucinda; Bond, Leonard J.

    2016-02-01

    The oil and gas industry in the USA has an extensive infrastructure of pipelines, 70% of which were installed prior to 1980, and almost half of which were installed during the 1950s and 1960s. Ideally, the mechanical properties (i.e., yield strength, tensile strength, transition temperature, and fracture toughness) of a steel pipe must be known in order to respond to detected defects in an appropriate manner. Neither current in-ditch methods nor ILI inspection data have yet determined and mapped the desired mechanical properties with adequate confidence. In the quest to obtain the mechanical properties of a steel pipe using a nondestructive method, it is important to understand that there are many inter-related variables. This paper reports a literature review and an analysis of a sample set of data. The results show promise for correlating NDE measurement modalities with the mechanical properties desired for pipelines, so that defects that pose a significant threat can be responded to properly.

  2. Microarray Data Processing Techniques for Genome-Scale Network Inference from Large Public Repositories.

    PubMed

    Chockalingam, Sriram; Aluru, Maneesha; Aluru, Srinivas

    2016-09-19

    Pre-processing of microarray data is a well-studied problem. Furthermore, all popular platforms come with their own recommended best practices for differential analysis of genes. However, for genome-scale network inference using microarray data collected from large public repositories, these methods filter out a considerable number of genes. This is primarily due to the effects of aggregating a diverse array of experiments with different technical and biological scenarios. Here we introduce a pre-processing pipeline suitable for inferring genome-scale gene networks from large microarray datasets. We show that partitioning of the available microarray datasets according to biological relevance into tissue- and process-specific categories significantly extends the limits of downstream network construction. We demonstrate the effectiveness of our pre-processing pipeline by inferring genome-scale networks for the model plant Arabidopsis thaliana using two different construction methods and a collection of 11,760 Affymetrix ATH1 microarray chips. Our pre-processing pipeline and the datasets used in this paper are made available at http://alurulab.cc.gatech.edu/microarray-pp.
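
    As an illustration of the partitioning idea, the sketch below groups chips by a hypothetical tissue annotation and applies a textbook quantile normalization within each partition; it is not the authors' released pipeline (which is available at the URL above).

        import numpy as np

        def quantile_normalize(X):
            """X: genes x samples expression matrix."""
            ranks = np.argsort(np.argsort(X, axis=0), axis=0)
            mean_quantiles = np.sort(X, axis=0).mean(axis=1)
            return mean_quantiles[ranks]

        X = np.random.lognormal(size=(1000, 12))          # toy expression matrix
        categories = np.array(["leaf"] * 5 + ["root"] * 4 + ["seed"] * 3)
        partitions = {c: quantile_normalize(X[:, categories == c])
                      for c in np.unique(categories)}     # one matrix per partition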

  3. Leak detection in medium density polyethylene (MDPE) pipe using pressure transient method

    NASA Astrophysics Data System (ADS)

    Amin, M. M.; Ghazali, M. F.; PiRemli, M. A.; Hamat, A. M. A.; Adnan, N. F.

    2015-12-01

    Water is an essential commodity in daily life, from residential and commercial consumers to industrial users. This study emphasizes the detection of leaks in medium density polyethylene (MDPE) pipe using the pressure transient method. The position of the leak along the pipeline is determined using the ensemble empirical mode decomposition (EEMD) method with signal masking. A solenoid valve is used to create a water hammer, which induces an impulse throughout the pipeline and turns the system into a surge of pressure waves. The data from the pressure sensor are collected using DASYLab software. The pressure signal is then decomposed into a series of wave components using the EEMD signal-masking method in the matrix laboratory (MATLAB) software. From the decomposition, the components that reflect intrinsic mode functions (IMFs) are carefully selected, and the IMFs are displayed using the Hilbert transform (HT) spectrum. The IMF signals are analysed to capture the differences, and the analysed data are compared with the actual position of the leak in terms of percentage error. The recorded error is below 1%, showing that this method is highly reliable and accurate for leak detection.
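
    A sketch of the analysis chain is given below, using the PyEMD package for EEMD (a choice of convenience, not necessarily the authors' implementation) and cross-correlating a selected IMF between two end sensors to locate the leak. The signals, sensor spacing, wave speed, and IMF index are all invented for the example.

        import numpy as np
        from PyEMD import EEMD                    # pip install EMD-signal (assumed)
        from scipy.signal import correlate

        fs, L, v = 2000.0, 100.0, 1200.0          # sample rate (Hz), pipe length (m), wave speed (m/s)
        t = np.arange(0, 1.0, 1.0 / fs)
        s1 = np.random.randn(t.size)              # stand-in pressure record, sensor 1
        s2 = np.roll(s1, 25) + 0.1 * np.random.randn(t.size)   # delayed copy, sensor 2

        eemd = EEMD(trials=20)
        imf1 = eemd.eemd(s1)[2]                   # pick a leak-band IMF (index assumed)
        imf2 = eemd.eemd(s2)[2]

        lag = np.argmax(correlate(imf2, imf1, mode="full")) - (t.size - 1)
        dt = lag / fs                             # arrival-time difference between sensors
        x_leak = (L - v * dt) / 2.0               # distance of leak from sensor 1
        print("estimated leak position: %.1f m" % x_leak)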

  4. Description of the TCERT Vetting Reports for Data Release 25

    NASA Technical Reports Server (NTRS)

    Van Cleve, Jeffrey E.; Caldwell, Douglas A.

    2016-01-01

    This document, the Kepler Instrument Handbook (KIH), is for Kepler and K2 observers, which includes the Kepler Science Team, Guest Observers (GOs), and astronomers doing archival research on Kepler and K2 data in NASA's Astrophysics Data Analysis Program (ADAP). The KIH provides information about the design, performance, and operational constraints of the Kepler flight hardware and software, and an overview of the pixel data sets available. The KIH is meant to be read with these companion documents: 1. Kepler Data Processing Handbook (KSCI-19081) or KDPH (Jenkins et al., 2016). The KDPH describes how pixels downlinked from the spacecraft are converted by the Kepler Data Processing Pipeline (henceforth just the pipeline) into the data products delivered to the MAST archive. 2. Kepler Archive Manual (KDMC-10008) or KAM (Thompson et al., 2016). The KAM describes the format and content of the data products, and how to search for them. 3. Kepler Data Characteristics Handbook (KSCI-19040) or KDCH (Christiansen et al., 2016). The KDCH describes recurring non-astrophysical features of the Kepler data due to instrument signatures, spacecraft events, or solar activity, and explains how these characteristics are handled by the pipeline. 4. Kepler Data Release Notes 25 (KSCI-19065) or DRN 25 (Thompson et al., 2015). DRN 25 describes signatures and events peculiar to individual quarters, and the pipeline software changes between a data release and the one preceding it. Together, these documents supply the information necessary for obtaining and understanding Kepler results, given the real properties of the hardware and the data analysis methods used, and for an independent evaluation of the methods used if so desired.

  5. Disentangling methodological and biological sources of gene tree discordance on Oryza (Poaceae) chromosome 3.

    PubMed

    Zwickl, Derrick J; Stein, Joshua C; Wing, Rod A; Ware, Doreen; Sanderson, Michael J

    2014-09-01

    We describe new methods for characterizing gene tree discordance in phylogenomic data sets, which screen for deviations from neutral expectations, summarize variation in statistical support among gene trees, and allow comparison of the patterns of discordance induced by various analysis choices. Using an exceptionally complete set of genome sequences for the short arm of chromosome 3 in Oryza (rice) species, we applied these methods to identify the causes and consequences of differing patterns of discordance in the sets of gene trees inferred using a panel of 20 distinct analysis pipelines. We found that discordance patterns were strongly affected by aspects of data selection, alignment, and alignment masking. Unusual patterns of discordance evident when using certain pipelines were reduced or eliminated by using alternative pipelines, suggesting that they were the product of methodological biases rather than evolutionary processes. In some cases, once such biases were eliminated, evolutionary processes such as introgression could be implicated. Additionally, patterns of gene tree discordance had significant downstream impacts on species tree inference. For example, inference from supermatrices was positively misleading when pipelines that led to biased gene trees were used. Several results may generalize to other data sets: we found that gene tree and species tree inference gave more reasonable results when intron sequence was included during sequence alignment and tree inference, the alignment software PRANK was used, and detectable "block-shift" alignment artifacts were removed. We discuss our findings in the context of well-established relationships in Oryza and continuing controversies regarding the domestication history of O. sativa. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. UOE Pipe Numerical Model: Manufacturing Process And Von Mises Residual Stresses Resulted After Each Technological Step

    NASA Astrophysics Data System (ADS)

    Delistoian, Dmitri; Chirchor, Mihael

    2017-12-01

    Fluids are transported from production areas to the final customer by pipelines. For the oil and gas industry, pipeline safety and reliability are a priority. For this reason, guaranteed pipe quality directly influences the designed life of a pipeline and, above all, protects the environment. A significant number of longitudinally welded pipes for onshore/offshore pipelines are manufactured by the UOE method, which is based on cold forming. In the present study, the UOE pipe manufacturing process is modeled using the finite element method, and the von Mises stresses are obtained after each technological step. The numerical simulation is performed for an L415 MB (X60) steel plate with 7.9 mm thickness, 30 mm length, and 1250 mm width; the result is a DN 400 pipe.

  7. A Pipeline for the Analysis of APOGEE Spectra Based on Equivalent Widths

    NASA Astrophysics Data System (ADS)

    Arfon Williams, Rob; Bosley, Corinne; Jones, Hayden; Schiavon, Ricardo P.; Allende-Prieto, Carlos; Bizyaev, Dmitry; Carrera, Ricardo; Cunha, Katia M. L.; Nguyen, Duy; Feuillet, Diane; Frinchaboy, Peter M.; García Pérez, Ana; Hasselquist, Sten; Hayden, Michael R.; Hearty, Fred R.; Holtzman, Jon A.; Johnson, Jennifer; Majewski, Steven R.; Meszaros, Szabolcs; Nidever, David L.; Shetrone, Matthew D.; Smith, Verne V.; Sobeck, Jennifer; Troup, Nicholas William; Wilson, John C.; Zasowski, Gail

    2015-01-01

    The Apache Point Galactic Evolution Experiment (APOGEE) forms part of the third Sloan Digital Sky Survey and has obtained high resolution, high signal-to-noise infrared spectra for ~1.3 x 10^5 stars across the galactic bulge, disc and halo. From these, stellar parameters are derived together with abundances for various elements using the APOGEE Stellar Parameters and Chemical Abundance Pipeline (ASPCAP). In this poster we report preliminary results from application of an alternative stellar parameters and abundances pipeline, based on measurements of equivalent widths of absorption lines in APOGEE spectra. The method is based on a sequential grid inversion algorithm, originally designed for the derivation of ages and elemental abundances of stellar populations from line indices in their integrated spectra. It allows for the rapid processing of large spectroscopic data sets from both current and future surveys, such as APOGEE and APOGEE 2, and it is easily adaptable for application to other very large data sets that are being/will be generated by other massive surveys of the stellar populations of the Galaxy. It will also allow the cross checking of ASPCAP results using an independent method. In this poster we present preliminary results showing estimates of effective temperature and iron abundance [Fe/H] for a subset of the APOGEE sample, comparing with DR12 numbers produced by the ASPCAP pipeline.
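
    The basic measurement underlying such a pipeline is the equivalent width, EW = ∫ (1 - F/Fc) dλ. The sketch below integrates this numerically for a synthetic Gaussian absorption line; the wavelength grid and line parameters are invented for the example.

        import numpy as np

        wave = np.linspace(15200.0, 15210.0, 500)          # wavelength grid (Å)
        continuum = np.ones_like(wave)
        depth, center, width = 0.4, 15205.0, 0.3
        flux = continuum - depth * np.exp(-0.5 * ((wave - center) / width)**2)

        window = np.abs(wave - center) < 2.0               # integration window (Å)
        ew = np.trapz(1.0 - flux[window] / continuum[window], wave[window])
        print("EW = %.3f Å" % ew)   # analytic value: depth * width * sqrt(2*pi) ≈ 0.301 Å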

  8. Gas pipeline relief valves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bright, G.F.

    1974-01-01

    A discussion of the increasing activity of natural gas pipeline companies in the analysis of overpressure protection methods for complying with the provisions of Part 192, Title 49, Code of Federal Regulations ''Transportation of Natural and Other Gas by Pipelines; Minimum Federal Safety Standards'' and with the USAS B31.8 Code covers: the basic requirements for protection against accidental overpressure, which are essentially the same in both documents, i.e., that the maximum allowable operating pressure in a gas system can be exceeded either at a compressor station or downstream of a pressure control valve; the mandatory use of overpressure protection devices in these situations, except for those cases which exempt some service regulators because the distribution system pressure is less than 60 psig and six other requirements of design, performance, and size are met; and the basic design requirements of a pressure relief or limiting station and the components used.

  9. Deep Convolutional Neural Networks Enable Discrimination of Heterogeneous Digital Pathology Images.

    PubMed

    Khosravi, Pegah; Kazemi, Ehsan; Imielinski, Marcin; Elemento, Olivier; Hajirasouliha, Iman

    2018-01-01

    Pathological evaluation of tumor tissue is pivotal for diagnosis in cancer patients and automated image analysis approaches have great potential to increase precision of diagnosis and help reduce human error. In this study, we utilize several computational methods based on convolutional neural networks (CNN) and build a stand-alone pipeline to effectively classify different histopathology images across different types of cancer. In particular, we demonstrate the utility of our pipeline to discriminate between two subtypes of lung cancer, four biomarkers of bladder cancer, and five biomarkers of breast cancer. In addition, we apply our pipeline to discriminate among four immunohistochemistry (IHC) staining scores of bladder and breast cancers. Our classification pipeline includes a basic CNN architecture, Google's Inceptions with three training strategies, and an ensemble of two state-of-the-art algorithms, Inception and ResNet. Training strategies include training the last layer of Google's Inceptions, training the network from scratch, and fine-tuning the parameters for our data using two pre-trained versions of Google's Inception architectures, Inception-V1 and Inception-V3. We demonstrate the power of deep learning approaches for identifying cancer subtypes, and the robustness of Google's Inceptions even in the presence of extensive tumor heterogeneity. On average, our pipeline achieved accuracies of 100%, 92%, 95%, and 69% for discrimination of various cancer tissues, subtypes, biomarkers, and scores, respectively. Our pipeline and related documentation are freely available at https://github.com/ih-_lab/CNN_Smoothie. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
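
    The authors' code is released at the repository above; purely as an illustration of the "train the last layer" strategy they describe, the following tf.keras sketch freezes a pre-trained Inception-V3 and trains only a new classification head. The class count and data source are placeholders, not the paper's configuration.

        import tensorflow as tf

        base = tf.keras.applications.InceptionV3(
            weights="imagenet", include_top=False, pooling="avg",
            input_shape=(299, 299, 3))
        base.trainable = False                      # freeze convolutional layers

        n_classes = 4                               # e.g., four IHC staining scores
        model = tf.keras.Sequential([
            base,
            tf.keras.layers.Dense(n_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # model.fit(train_images, train_labels, epochs=5)   # data supplied by the user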

  10. Modeling genome-wide dynamic regulatory network in mouse lungs with influenza infection using high-dimensional ordinary differential equations.

    PubMed

    Wu, Shuang; Liu, Zhi-Ping; Qiu, Xing; Wu, Hulin

    2014-01-01

    The immune response to viral infection is regulated by an intricate network of many genes and their products. The reverse engineering of gene regulatory networks (GRNs) using mathematical models from time course gene expression data collected after influenza infection is key to our understanding of the mechanisms involved in controlling influenza infection within a host. A five-step pipeline: detection of temporally differentially expressed genes, clustering genes into co-expressed modules, identification of network structure, parameter estimate refinement, and functional enrichment analysis, is developed for reconstructing high-dimensional dynamic GRNs from genome-wide time course gene expression data. Applying the pipeline to the time course gene expression data from influenza-infected mouse lungs, we have identified 20 distinct temporal expression patterns in the differentially expressed genes and constructed a module-based dynamic network using a linear ODE model. Both intra-module and inter-module annotations and regulatory relationships of our inferred network show some interesting findings and are highly consistent with existing knowledge about the immune response in mice after influenza infection. The proposed method is a computationally efficient, data-driven pipeline bridging experimental data, mathematical modeling, and statistical analysis. The application to the influenza infection data elucidates the potentials of our pipeline in providing valuable insights into systematic modeling of complicated biological processes.

  11. The pipeline system for Octave and Matlab (PSOM): a lightweight scripting framework and execution engine for scientific workflows.

    PubMed

    Bellec, Pierre; Lavoie-Courchesne, Sébastien; Dickinson, Phil; Lerch, Jason P; Zijdenbos, Alex P; Evans, Alan C

    2012-01-01

    The analysis of neuroimaging databases typically involves a large number of inter-connected steps called a pipeline. The pipeline system for Octave and Matlab (PSOM) is a flexible framework for the implementation of pipelines in the form of Octave or Matlab scripts. PSOM does not introduce new language constructs to specify the steps and structure of the workflow. All steps of analysis are instead described by a regular Matlab data structure, documenting their associated command and options, as well as their input, output, and cleaned-up files. The PSOM execution engine provides a number of automated services: (1) it executes jobs in parallel on a local computing facility as long as the dependencies between jobs allow for it and sufficient resources are available; (2) it generates a comprehensive record of the pipeline stages and the history of execution, which is detailed enough to fully reproduce the analysis; (3) if an analysis is started multiple times, it executes only the parts of the pipeline that need to be reprocessed. PSOM is distributed under an open-source MIT license and can be used without restriction for academic or commercial projects. The package has no external dependencies besides Matlab or Octave, is straightforward to install and supports a variety of operating systems (Linux, Windows, Mac). We ran several benchmark experiments on a public database including 200 subjects, using a pipeline for the preprocessing of functional magnetic resonance images (fMRI). The benchmark results showed that PSOM is a powerful solution for the analysis of large databases using local or distributed computing resources.
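
    PSOM itself is written for Octave/Matlab; the Python fragment below is only a schematic of two of the services described above, namely running jobs in dependency order and skipping jobs whose outputs are newer than their inputs. The three-job pipeline and its commands are hypothetical.

        import os

        jobs = {  # hypothetical pipeline: name -> (command, input files, output files)
            "motion": ("echo motion-correct", [], ["motion.nii"]),
            "smooth": ("echo smooth", ["motion.nii"], ["smooth.nii"]),
            "stats":  ("echo stats", ["smooth.nii"], ["stats.csv"]),
        }
        produced_by = {out: name for name, (_, _, outs) in jobs.items() for out in outs}

        def deps(name):
            return {produced_by[i] for i in jobs[name][1] if i in produced_by}

        def stale(ins, outs):
            if not all(os.path.exists(o) for o in outs):
                return True
            newest_in = max((os.path.getmtime(i) for i in ins), default=0.0)
            return any(os.path.getmtime(o) < newest_in for o in outs)

        done = set()
        while len(done) < len(jobs):
            for name, (cmd, ins, outs) in jobs.items():
                if name in done or not deps(name) <= done:
                    continue
                if stale(ins, outs):             # rerun only out-of-date jobs
                    os.system(cmd)               # PSOM would dispatch jobs in parallel
                    for o in outs:               # create placeholder outputs for the demo
                        open(o, "a").close()
                done.add(name)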

  12. The pipeline system for Octave and Matlab (PSOM): a lightweight scripting framework and execution engine for scientific workflows

    PubMed Central

    Bellec, Pierre; Lavoie-Courchesne, Sébastien; Dickinson, Phil; Lerch, Jason P.; Zijdenbos, Alex P.; Evans, Alan C.

    2012-01-01

    The analysis of neuroimaging databases typically involves a large number of inter-connected steps called a pipeline. The pipeline system for Octave and Matlab (PSOM) is a flexible framework for the implementation of pipelines in the form of Octave or Matlab scripts. PSOM does not introduce new language constructs to specify the steps and structure of the workflow. All steps of analysis are instead described by a regular Matlab data structure, documenting their associated command and options, as well as their input, output, and cleaned-up files. The PSOM execution engine provides a number of automated services: (1) it executes jobs in parallel on a local computing facility as long as the dependencies between jobs allow for it and sufficient resources are available; (2) it generates a comprehensive record of the pipeline stages and the history of execution, which is detailed enough to fully reproduce the analysis; (3) if an analysis is started multiple times, it executes only the parts of the pipeline that need to be reprocessed. PSOM is distributed under an open-source MIT license and can be used without restriction for academic or commercial projects. The package has no external dependencies besides Matlab or Octave, is straightforward to install and supports a variety of operating systems (Linux, Windows, Mac). We ran several benchmark experiments on a public database including 200 subjects, using a pipeline for the preprocessing of functional magnetic resonance images (fMRI). The benchmark results showed that PSOM is a powerful solution for the analysis of large databases using local or distributed computing resources. PMID:22493575

  13. Closha: bioinformatics workflow system for the analysis of massive sequencing data.

    PubMed

    Ko, GunHwan; Kim, Pan-Gyu; Yoon, Jongcheol; Han, Gukhee; Park, Seong-Jin; Song, Wangho; Lee, Byungwook

    2018-02-19

    While next-generation sequencing (NGS) costs have fallen in recent years, the cost and complexity of computation remain substantial obstacles to the use of NGS in bio-medical care and genomic research. The rapidly increasing amounts of data available from the new high-throughput methods have made data processing infeasible without automated pipelines. The integration of data and analytic resources into workflow systems provides a solution to the problem by simplifying the task of data analysis. To address this challenge, we developed a cloud-based workflow management system, Closha, to provide fast and cost-effective analysis of massive genomic data. We implemented complex workflows making optimal use of high-performance computing clusters. Closha allows users to create multi-step analyses using drag and drop functionality and to modify the parameters of pipeline tools. Users can also import the Galaxy pipelines into Closha. Closha is a hybrid system that enables users to use both analysis programs providing traditional tools and MapReduce-based big data analysis programs simultaneously in a single pipeline. Thus, the execution of analytics algorithms can be parallelized, speeding up the whole process. We also developed a high-speed data transmission solution, KoDS, to transmit a large amount of data at a fast rate. KoDS has a file transfer speed of up to 10 times that of normal FTP and HTTP. The computer hardware for Closha is 660 CPU cores and 800 TB of disk storage, enabling 500 jobs to run at the same time. Closha is a scalable, cost-effective, and publicly available web service for large-scale genomic data analysis. Closha supports the reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Closha provides a user-friendly interface to all genomic scientists to try to derive accurate results from NGS platform data. The Closha cloud server is freely available for use from http://closha.kobic.re.kr/ .

  14. A bioinformatic pipeline for identifying informative SNP panels for parentage assignment from RADseq data.

    PubMed

    Andrews, Kimberly R; Adams, Jennifer R; Cassirer, E Frances; Plowright, Raina K; Gardner, Colby; Dwire, Maggie; Hohenlohe, Paul A; Waits, Lisette P

    2018-06-05

    The development of high-throughput sequencing technologies is dramatically increasing the use of single nucleotide polymorphisms (SNPs) across the field of genetics, but most parentage studies of wild populations still rely on microsatellites. We developed a bioinformatic pipeline for identifying SNP panels that are informative for parentage analysis from restriction site-associated DNA sequencing (RADseq) data. This pipeline includes options for analysis with or without a reference genome, and provides methods to maximize genotyping accuracy and select sets of unlinked loci that have high statistical power. We test this pipeline on small populations of Mexican gray wolf and bighorn sheep, for which parentage analyses are expected to be challenging due to low genetic diversity and the presence of many closely related individuals. We compare the results of parentage analysis across SNP panels generated with or without the use of a reference genome, and between SNPs and microsatellites. For Mexican gray wolf, we conducted parentage analyses for 30 pups from a single cohort where samples were available from 64% of possible mothers and 53% of possible fathers, and the accuracy of parentage assignments could be estimated because true identities of parents were known a priori based on field data. For bighorn sheep, we conducted maternity analyses for 39 lambs from five cohorts where 77% of possible mothers were sampled, but true identities of parents were unknown. Analyses with and without a reference genome produced SNP panels with >95% parentage assignment accuracy for Mexican gray wolf, outperforming microsatellites at 78% accuracy. Maternity assignments were completely consistent across all SNP panels for the bighorn sheep, and were 74.4% consistent with assignments from microsatellites. Accuracy and consistency of parentage analysis were not reduced when using as few as 284 SNPs for Mexican gray wolf and 142 SNPs for bighorn sheep, indicating our pipeline can be used to develop SNP genotyping assays for parentage analysis with relatively small numbers of loci. This article is protected by copyright. All rights reserved.
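
    A minimal sketch of the panel-selection step follows, with assumed thresholds (the published pipeline's actual filters and linkage test are not given in this record): keep well-genotyped, informative SNPs and greedily enforce a physical spacing between chosen loci as a crude proxy for selecting unlinked markers.

        import numpy as np

        def select_panel(chrom, pos, maf, call_rate, n_loci=300, min_gap=100_000):
            keep = (maf > 0.3) & (call_rate > 0.95)       # informative, well-genotyped
            order = np.argsort(-maf)                      # most informative first
            chosen = []
            for i in order:
                if not keep[i]:
                    continue
                if all(chrom[i] != chrom[j] or abs(pos[i] - pos[j]) > min_gap
                       for j in chosen):                  # unlinked by distance proxy
                    chosen.append(i)
                if len(chosen) == n_loci:
                    break
            return np.array(chosen)

        rng = np.random.default_rng(2)
        n = 5000
        panel = select_panel(chrom=rng.integers(1, 27, n),
                             pos=rng.integers(0, 50_000_000, n),
                             maf=rng.uniform(0, 0.5, n),
                             call_rate=rng.uniform(0.8, 1.0, n))
        print(len(panel), "loci selected")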

  15. Mercury: Next-gen Data Analysis and Annotation Pipeline (Seventh Annual Sequencing, Finishing, Analysis in the Future (SFAF) Meeting 2012)

    ScienceCinema

    Sexton, David

    2018-01-22

    David Sexton (Baylor) gives a talk titled "Mercury: Next-gen Data Analysis and Annotation Pipeline" at the 7th Annual Sequencing, Finishing, Analysis in the Future (SFAF) Meeting held in June, 2012 in Santa Fe, NM.

  16. Mercury: Next-gen Data Analysis and Annotation Pipeline (Seventh Annual Sequencing, Finishing, Analysis in the Future (SFAF) Meeting 2012)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sexton, David

    2012-06-01

    David Sexton (Baylor) gives a talk titled "Mercury: Next-gen Data Analysis and Annotation Pipeline" at the 7th Annual Sequencing, Finishing, Analysis in the Future (SFAF) Meeting held in June, 2012 in Santa Fe, NM.

  17. Snow as building material for construction of technological along-the-route roads of main pipelines

    NASA Astrophysics Data System (ADS)

    Merdanov, S. M.; Egorov, A. L.; Kostyrchenko, V. A.; Madyarov, T. M.

    2018-05-01

    The article deals with the process of compacting snow in a closed volume with the use of vacuum processing for the construction of technological along-the-route roads of main pipelines. The relevance of the chosen study is substantiated, and methods and designs for snow compaction are considered. The publication activity and the defenses of doctoral and candidate dissertations on the research subject are analyzed. A patent analysis of existing methods and equipment for snow compaction, indicating their disadvantages, is carried out. A design calculation was carried out using computer programs, including a strength calculation to identify the most highly stressed places in the construction of a vibrating hydraulic tyre-type roller. A 3D model of the experimental setup was developed.

  18. Evaluation of fishing gear induced pipeline damage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellinas, C.P.; King, B.; Davies, R.

    1995-12-31

    Impact and damage due to fishing activities is one of the hazards faced by North Sea pipelines during their operating lives. Available data indicate that about one in ten reported incidents is due to fishing activities. This paper is concerned with one such occurrence: the assessment of the resulting damage, the methods used to confirm pipeline integrity, and the approaches developed for its repair.

  19. United States petroleum pipelines: An empirical analysis of pipeline sizing

    NASA Astrophysics Data System (ADS)

    Coburn, L. L.

    1980-12-01

    The undersizing theory hypothesizes that integrated oil companies have a strong economic incentive to size the petroleum pipelines they own and ship over such that some of the demand must utilize higher-cost alternatives. The DOJ theory posits that excess or monopoly profits are earned due to the natural monopoly characteristics of petroleum pipelines and the existence of market power for some pipelines in either the upstream or downstream market. The theory holds that independent petroleum pipelines owned by companies not otherwise affiliated with the petroleum industry (independent pipelines) do not have these incentives, and all the efficiencies of pipeline transportation are passed to the ultimate consumer. Integrated oil companies, on the other hand, keep these cost efficiencies for themselves in the form of excess profits.

  20. Understanding Magnetic Flux Leakage (MFL) Signals from Mechanical Damage in Pipelines - Phase I

    DOT National Transportation Integrated Search

    2007-09-18

    Pipeline inspection tools based on Magnetic Flux Leakage (MFL) principles represent the most cost-effective method for in-line detection and monitoring of pipeline corrosion defects. Mechanical damage also produces MFL signals, but as yet these signa...

  1. A De Novo-Assembly Based Data Analysis Pipeline for Plant Obligate Parasite Metatranscriptomic Studies

    PubMed Central

    Guo, Li; Allen, Kelly S.; Deiulio, Greg; Zhang, Yong; Madeiras, Angela M.; Wick, Robert L.; Ma, Li-Jun

    2016-01-01

    Current and emerging plant diseases caused by obligate parasitic microbes such as rusts, downy mildews, and powdery mildews threaten worldwide crop production and food safety. These obligate parasites are typically unculturable in the laboratory, posing technical challenges to characterize them at the genetic and genomic level. Here we have developed a data analysis pipeline integrating several bioinformatic software programs. This pipeline facilitates rapid gene discovery and expression analysis of a plant host and its obligate parasite simultaneously by next generation sequencing of mixed host and pathogen RNA (i.e., metatranscriptomics). We applied this pipeline to metatranscriptomic sequencing data of sweet basil (Ocimum basilicum) and its obligate downy mildew parasite Peronospora belbahrii, both lacking a sequenced genome. Even with a single data point, we were able to identify both candidate host defense genes and pathogen virulence genes that are highly expressed during infection. This demonstrates the power of this pipeline for identifying genes important in host–pathogen interactions without prior genomic information for either the plant host or the obligate biotrophic pathogen. The simplicity of this pipeline makes it accessible to researchers with limited computational skills and applicable to metatranscriptomic data analysis in a wide range of plant-obligate-parasite systems. PMID:27462318

  2. Towards photometry pipeline of the Indonesian space surveillance system

    NASA Astrophysics Data System (ADS)

    Priyatikanto, Rhorom; Religia, Bahar; Rachman, Abdul; Dani, Tiar

    2015-09-01

    Optical observation through a sub-meter telescope equipped with a CCD camera has become an alternative method for increasing orbital debris detection and surveillance. This observational mode is expected to monitor medium-sized objects in higher orbits (e.g. MEO, GTO, GSO & GEO), beyond the reach of the usual radar systems. However, such observation of fast-moving objects demands special treatment and analysis techniques. In this study, we performed photometric analysis of satellite track images photographed using the rehabilitated Schmidt Bima Sakti telescope at Bosscha Observatory. The Hough transformation was implemented to automatically detect linear streaks in the images. From this analysis and a comparison with the USSPACECOM catalog, two objects were identified and associated with the inactive Thuraya-3 satellite and Satcom-3 debris, both located in geostationary orbit. Further aperture photometry revealed the periodicity of the tumbling Satcom-3 debris. In the near future, a similar scheme could be applied to establish an analysis pipeline for an optical space surveillance system hosted in Indonesia.
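
    The streak-detection step can be illustrated with scikit-image's probabilistic Hough transform on a toy image, as below; the edge-detection and Hough thresholds are placeholders rather than the values used for the Bima Sakti frames.

        import numpy as np
        from skimage.feature import canny
        from skimage.transform import probabilistic_hough_line

        image = np.zeros((512, 512))
        rr = np.arange(100, 400)
        image[rr, rr + 30] = 1.0                     # synthetic satellite trail
        image += 0.05 * np.random.rand(512, 512)     # sky background noise

        edges = canny(image, sigma=2.0)
        lines = probabilistic_hough_line(edges, threshold=10,
                                         line_length=100, line_gap=5)
        for (x0, y0), (x1, y1) in lines:
            print("streak from", (x0, y0), "to", (x1, y1))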

  3. A pipeline leakage locating method based on the gradient descent algorithm

    NASA Astrophysics Data System (ADS)

    Li, Yulong; Yang, Fan; Ni, Na

    2018-04-01

    A pipeline leakage locating method based on the gradient descent algorithm is proposed in this paper. The method has low computational complexity, which makes it suitable for practical application. We built an experimental environment in a real underground pipeline network and gathered a large amount of field data over the past three months. Every leak point was verified by excavation. Results show that the positioning error is within 0.4 m, that the false-alarm and missed-detection rates are both under 20%, and that the computation time is below 5 seconds.
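
    The record does not give the authors' objective function, so the following is a minimal sketch of gradient-descent leak locating under a standard timing model: a leak at position x along a pipe of length L monitored by sensors at both ends produces an arrival-time difference dt(x) = (L - 2x)/v, and x is fitted to a measured dt by descending on the squared residual. All numbers are invented.

        v, L = 1200.0, 500.0        # wave speed (m/s) and pipe length (m), assumed
        dt_meas = -0.05             # measured arrival-time difference (s)

        def loss_grad(x):
            r = (L - 2 * x) / v - dt_meas      # residual of the timing model
            return r * (-2 / v)                # d(0.5 * r^2)/dx

        x = L / 2                              # start from the midpoint
        for _ in range(200):
            x -= 1e5 * loss_grad(x)            # step size tuned for this toy problem

        print("estimated leak position: %.1f m" % x)   # analytic answer: 280 m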

  4. Analysis of Bacterial and Archaeal Communities along a High-Molecular-Weight Polyacrylamide Transportation Pipeline System in an Oil Field

    PubMed Central

    Li, Cai-Yun; Li, Jing-Yan; Mbadinga, Serge Maurice; Liu, Jin-Feng; Gu, Ji-Dong; Mu, Bo-Zhong

    2015-01-01

    Viscosity loss of high-molecular-weight partially hydrolyzed polyacrylamide (HPAM) solution was observed in a water injection pipeline before being injected into subterranean oil wells. In order to investigate the possible involvement of microorganisms in HPAM viscosity loss, both bacterial and archaeal community compositions of four samples collected from different points of the transportation pipeline were analyzed using PCR-amplification of the 16S rRNA gene and clone library construction method together with the analysis of physicochemical properties of HPAM solution and environmental factors. Further, the relationship between environmental factors and HPAM properties with microorganisms were delineated by canonical correspondence analysis (CCA). Diverse bacterial and archaeal groups were detected in the four samples. The microbial community of initial solution S1 gathered from the make-up tank is similar to solution S2 gathered from the first filter, and that of solution S3 obtained between the first and the second filter is similar to that of solution S4 obtained between the second filter and the injection well. Members of the genus Acinetobacter sp. were detected with high abundance in S3 and S4 in which HPAM viscosity was considerably reduced, suggesting that they likely played a considerable role in HPAM viscosity loss. This study presents information on microbial community diversity in the HPAM transportation pipeline and the possible involvement of microorganisms in HPAM viscosity loss and biodegradation. The results will help to understand the microbial community contribution made to viscosity change and are beneficial for providing information for microbial control in oil fields. PMID:25849654

  5. Analysis of bacterial and archaeal communities along a high-molecular-weight polyacrylamide transportation pipeline system in an oil field.

    PubMed

    Li, Cai-Yun; Li, Jing-Yan; Mbadinga, Serge Maurice; Liu, Jin-Feng; Gu, Ji-Dong; Mu, Bo-Zhong

    2015-04-02

    Viscosity loss of high-molecular-weight partially hydrolyzed polyacrylamide (HPAM) solution was observed in a water injection pipeline before being injected into subterranean oil wells. In order to investigate the possible involvement of microorganisms in HPAM viscosity loss, both bacterial and archaeal community compositions of four samples collected from different points of the transportation pipeline were analyzed using PCR-amplification of the 16S rRNA gene and clone library construction method together with the analysis of physicochemical properties of HPAM solution and environmental factors. Further, the relationship between environmental factors and HPAM properties with microorganisms were delineated by canonical correspondence analysis (CCA). Diverse bacterial and archaeal groups were detected in the four samples. The microbial community of initial solution S1 gathered from the make-up tank is similar to solution S2 gathered from the first filter, and that of solution S3 obtained between the first and the second filter is similar to that of solution S4 obtained between the second filter and the injection well. Members of the genus Acinetobacter sp. were detected with high abundance in S3 and S4 in which HPAM viscosity was considerably reduced, suggesting that they likely played a considerable role in HPAM viscosity loss. This study presents information on microbial community diversity in the HPAM transportation pipeline and the possible involvement of microorganisms in HPAM viscosity loss and biodegradation. The results will help to understand the microbial community contribution made to viscosity change and are beneficial for providing information for microbial control in oil fields.

  6. 49 CFR 192.941 - What is a low stress reassessment?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline Integrity... gas analysis for corrosive agents at least once each calendar year; (2) Conduct periodic testing of...

  7. MICRA: an automatic pipeline for fast characterization of microbial genomes from high-throughput sequencing data.

    PubMed

    Caboche, Ségolène; Even, Gaël; Loywick, Alexandre; Audebert, Christophe; Hot, David

    2017-12-19

    The increase in available sequence data has advanced the field of microbiology; however, making sense of these data without bioinformatics skills is still problematic. We describe MICRA, an automatic pipeline, available as a web interface, for microbial identification and characterization through reads analysis. MICRA uses iterative mapping against reference genomes to identify genes and variations. Additional modules allow prediction of antibiotic susceptibility and resistance and comparing the results of several samples. MICRA is fast, producing few false-positive annotations and variant calls compared to current methods, making it a tool of great interest for fully exploiting sequencing data.

  8. An Optimized Informatics Pipeline for Mass Spectrometry-Based Peptidomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chaochao; Monroe, Matthew E.; Xu, Zhe

    2015-12-26

    Comprehensive MS analysis of peptidome, the intracellular and intercellular products of protein degradation, has the potential to provide novel insights on endogenous proteolytic processing and their utility in disease diagnosis and prognosis. Along with the advances in MS instrumentation, a plethora of proteomics data analysis tools have been applied for direct use in peptidomics; however, an evaluation of the currently available informatics pipelines for peptidomics data analysis has yet to be reported. In this study, we set off by evaluating the results of several popular MS/MS database search engines including MS-GF+, SEQUEST and MS-Align+ for peptidomics data analysis, followed by identification and label-free quantification using the well-established accurate mass and time (AMT) tag and newly developed informed quantification (IQ) approaches, both based on direct LC-MS analysis. Our results demonstrated that MS-GF+ outperformed both SEQUEST and MS-Align+ in identifying peptidome peptides. Using a database established from the MS-GF+ peptide identifications, both the AMT tag and IQ approaches provided significantly deeper peptidome coverage and fewer missing values for each individual data set than the MS/MS methods, while achieving robust label-free quantification. Besides having an excellent correlation with the AMT tag quantification results, IQ also provided slightly higher peptidome coverage than AMT. Taken together, we propose an optimal informatics pipeline combining MS-GF+ for initial database searching with IQ (or AMT) for identification and label-free quantification for high-throughput, comprehensive and quantitative peptidomics analysis.

  9. Central Stars of Planetary Nebulae in the LMC

    NASA Technical Reports Server (NTRS)

    Bianchi, Luciana

    2004-01-01

    In FUSE cycle 2's program B001 we studied Central Stars of Planetary Nebulae (CSPN) in the Large Magellanic Cloud. All FUSE observations have been successfully completed and have been reduced, analyzed and published. The analysis and the results are summarized below. The FUSE data were reduced using the latest available version of the FUSE calibration pipeline (CALFUSE v2.2.2). The flux of these LMC post-AGB objects is at the threshold of FUSE's sensitivity, and thus special care in the background subtraction was needed during the reduction. Because of their faintness, the targets required many orbit-long exposures, each of which typically had low (target) count-rates. Each calibrated extracted sequence was checked for unacceptable count-rate variations (a sign of detector drift), misplaced extraction windows, and other anomalies. All the good calibrated exposures were combined using FUSE pipeline routines. The default FUSE pipeline attempts to model the background measured off-target and subtracts it from the target spectrum. We found that, for these faint objects, the background appeared to be over-estimated by this method, particularly at shorter wavelengths (i.e., < 1000 A). We therefore tried two other reductions. In the first method, subtraction of the measured background is turned off and the background is taken to be the model scattered-light scaled by the exposure time. In the second one, the first few steps of the pipeline were run on the individual exposures (correcting for effects unique to each exposure such as Doppler shift, grating motions, etc.). Then the photon lists from the individual exposures were combined, and the remaining steps of the pipeline run on the combined file. Thus, more total counts for both the target and background allowed for a better extraction.

  10. Evaluation of next generation sequencing for the analysis of Eimeria communities in wildlife.

    PubMed

    Vermeulen, Elke T; Lott, Matthew J; Eldridge, Mark D B; Power, Michelle L

    2016-05-01

    Next-generation sequencing (NGS) techniques are well-established for studying bacterial communities but not yet for microbial eukaryotes. Parasite communities remain poorly studied, due in part to the lack of reliable and accessible molecular methods to analyse eukaryotic communities. We aimed to develop and evaluate a methodology to analyse communities of the protozoan parasite Eimeria from populations of the Australian marsupial Petrogale penicillata (brush-tailed rock-wallaby) using NGS. An oocyst purification method for small sample sizes and polymerase chain reaction (PCR) protocol for the 18S rRNA locus targeting Eimeria was developed and optimised prior to sequencing on the Illumina MiSeq platform. A data analysis approach was developed by modifying methods from bacterial metagenomics and utilising existing Eimeria sequences in GenBank. Operational taxonomic unit (OTU) assignment at a high similarity threshold (97%) was more accurate at assigning Eimeria contigs into Eimeria OTUs but at a lower threshold (95%) there was greater resolution between OTU consensus sequences. The assessment of two amplification PCR methods prior to Illumina MiSeq, single and nested PCR, determined that single PCR was more sensitive to Eimeria as more Eimeria OTUs were detected in single amplicons. We have developed a simple and cost-effective approach to a data analysis pipeline for community analysis of eukaryotic organisms using Eimeria communities as a model. The pipeline provides a basis for evaluation using other eukaryotic organisms and potential for diverse community analysis studies. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. The HEASARC Swift Gamma-Ray Burst Archive: The Pipeline and the Catalog

    NASA Technical Reports Server (NTRS)

    Donato, Davide; Angelini, Lorella; Padgett, C.A.; Reichard, T.; Gehrels, Neil; Marshall, Francis E.; Sakamoto, Takanori

    2012-01-01

    Since its launch in late 2004, the Swift satellite triggered or observed an average of one gamma-ray burst (GRB) every 3 days, for a total of 771 GRBs by 2012 January. Here, we report the development of a pipeline that semi-automatically performs the data-reduction and data-analysis processes for the three instruments on board Swift (BAT, XRT, UVOT). The pipeline is written in Perl, and it uses only HEAsoft tools and can be used to perform the analysis of a majority of the point-like objects (e.g., GRBs, active galactic nuclei, pulsars) observed by Swift. We run the pipeline on the GRBs, and we present a database containing the screened data, the output products, and the results of our ongoing analysis. Furthermore, we created a catalog summarizing some GRB information, collected either by running the pipeline or from the literature. The Perl script, the database, and the catalog are available for downloading and querying at the HEASARC Web site.

  12. The HEASARC Swift Gamma-Ray Burst Archive: The Pipeline and the Catalog

    NASA Astrophysics Data System (ADS)

    Donato, D.; Angelini, L.; Padgett, C. A.; Reichard, T.; Gehrels, N.; Marshall, F. E.; Sakamoto, T.

    2012-11-01

    Since its launch in late 2004, the Swift satellite triggered or observed an average of one gamma-ray burst (GRB) every 3 days, for a total of 771 GRBs by 2012 January. Here, we report the development of a pipeline that semi-automatically performs the data-reduction and data-analysis processes for the three instruments on board Swift (BAT, XRT, UVOT). The pipeline is written in Perl, and it uses only HEAsoft tools and can be used to perform the analysis of a majority of the point-like objects (e.g., GRBs, active galactic nuclei, pulsars) observed by Swift. We run the pipeline on the GRBs, and we present a database containing the screened data, the output products, and the results of our ongoing analysis. Furthermore, we created a catalog summarizing some GRB information, collected either by running the pipeline or from the literature. The Perl script, the database, and the catalog are available for downloading and querying at the HEASARC Web site.

  13. The Brackets Design and Stress Analysis of a Refinery's Hot Water Pipeline

    NASA Astrophysics Data System (ADS)

    Zhou, San-Ping; He, Yan-Lin

    2016-05-01

    The reconstruction project, which reroutes the hot water pipeline from a power station to a heat exchange station, requires that the new hot water pipeline be combined with the existing pipe racks. Taking into account the allowable span calculated based on GB50316 and the design philosophy of pipeline supports, the types and locations of the brackets are determined. The stresses of the pipeline are then analyzed in AutoPIPE, the supports at dangerous segments are adjusted, and the analysis is repeated until the types, locations, and numbers of supports are determined reasonably. The overall pipeline system then satisfies the requirements of ASME B31.3.

  14. Transcriptator: An Automated Computational Pipeline to Annotate Assembled Reads and Identify Non Coding RNA.

    PubMed

    Tripathi, Kumar Parijat; Evangelista, Daniela; Zuccaro, Antonio; Guarracino, Mario Rosario

    2015-01-01

    RNA-seq is a new tool to measure RNA transcript counts, using high-throughput sequencing at an extraordinary accuracy. It provides quantitative means to explore the transcriptome of an organism of interest. However, interpreting this extremely large data into biological knowledge is a problem, and biologist-friendly tools are lacking. In our lab, we developed Transcriptator, a web application based on a computational Python pipeline with a user-friendly Java interface. This pipeline uses the web services available for BLAST (Basic Local Alignment Search Tool), QuickGO and DAVID (Database for Annotation, Visualization and Integrated Discovery) tools. It offers a report on statistical analysis of functional and Gene Ontology (GO) annotation enrichment. It helps users to identify enriched biological themes, particularly GO terms, pathways, domains, gene/protein features and protein-protein interaction related information. It clusters the transcripts based on functional annotations and generates a tabular report for functional and gene ontology annotations for each submitted transcript to the web server. The implementation of QuickGO web-services in our pipeline enables users to carry out GO-Slim analysis, whereas the integration of PORTRAIT (Prediction of transcriptomic non-coding RNA (ncRNA) by ab initio methods) helps to identify non-coding RNAs and their regulatory role in the transcriptome. In summary, Transcriptator is a useful software for both NGS and array data. It helps users to characterize the de-novo assembled reads obtained from NGS experiments for non-referenced organisms, while it also performs the functional enrichment analysis of differentially expressed transcripts/genes for both RNA-seq and micro-array experiments. It generates easy-to-read tables and interactive charts for better understanding of the data. The pipeline is modular in nature, and provides an opportunity to add new plugins in the future. The web application is freely available at: http://www-labgtp.na.icar.cnr.it/Transcriptator.

  15. On the construction of a new stellar classification template library for the LAMOST spectral analysis pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Peng; Luo, Ali; Li, Yinbi

    2014-05-01

    The LAMOST spectral analysis pipeline, called the 1D pipeline, aims to classify and measure the spectra observed in the LAMOST survey. Through this pipeline, the observed stellar spectra are classified into different subclasses by matching with template spectra. Consequently, the performance of the stellar classification greatly depends on the quality of the template spectra. In this paper, we construct a new LAMOST stellar spectral classification template library, which is supposed to improve the precision and credibility of the present LAMOST stellar classification. About one million spectra are selected from LAMOST Data Release One to construct the new stellar templates, and they are gathered in 233 groups by two criteria: (1) pseudo g – r colors obtained by convolving the LAMOST spectra with the Sloan Digital Sky Survey ugriz filter response curve, and (2) the stellar subclass given by the LAMOST pipeline. In each group, the template spectra are constructed using three steps. (1) Outliers are excluded using the Local Outlier Probabilities algorithm, and then the principal component analysis method is applied to the remaining spectra of each group. About 5% of the one million spectra are ruled out as outliers. (2) All remaining spectra are reconstructed using the first principal components of each group. (3) The weighted average spectrum is used as the template spectrum in each group. Using the previous 3 steps, we initially obtain 216 stellar template spectra. We visually inspect all template spectra, and 29 spectra are abandoned due to low spectral quality. Furthermore, the MK classification for the remaining 187 template spectra is manually determined by comparing with 3 template libraries. Meanwhile, 10 template spectra whose subclass is difficult to determine are abandoned. Finally, we obtain a new template library containing 183 LAMOST template spectra with 61 different MK classes by combining it with the current library.
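
    The three construction steps map naturally onto scikit-learn primitives, as sketched below with synthetic spectra. Note that LocalOutlierFactor is used here as a stand-in for the Local Outlier Probabilities algorithm of the paper, and the component count and weights are assumptions.

        import numpy as np
        from sklearn.neighbors import LocalOutlierFactor
        from sklearn.decomposition import PCA

        spectra = np.random.rand(500, 3000)            # one group: spectra x pixels
        snr = np.random.uniform(10, 100, size=500)     # per-spectrum weights, assumed

        inlier = LocalOutlierFactor(n_neighbors=20).fit_predict(spectra) == 1
        spectra, snr = spectra[inlier], snr[inlier]    # step 1: drop outliers

        pca = PCA(n_components=10).fit(spectra)        # step 2: reconstruct from
        recon = pca.inverse_transform(pca.transform(spectra))  # leading components

        template = np.average(recon, axis=0, weights=snr)      # step 3: weighted mean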

  16. TESS Data Processing and Quick-look Pipeline

    NASA Astrophysics Data System (ADS)

    Fausnaugh, Michael; Huang, Xu; Glidden, Ana; Guerrero, Natalia; TESS Science Office

    2018-01-01

    We describe the data analysis procedures and pipelines for the Transiting Exoplanet Survey Satellite (TESS). We briefly review the processing pipeline developed and implemented by the Science Processing Operations Center (SPOC) at NASA Ames, including pixel/full-frame image calibration, photometric analysis, pre-search data conditioning, transiting planet search, and data validation. We also describe data-quality diagnostic analyses and photometric performance assessment tests. Finally, we detail a "quick-look pipeline" (QLP) that has been developed by the MIT branch of the TESS Science Office (TSO) to provide a fast and adaptable routine to search for planet candidates in the 30 minute full-frame images.

  17. Structural reliability assessment of the Oman India Pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Sharif, A.M.; Preston, R.

    1996-12-31

    Reliability techniques are increasingly finding application in design. The special design conditions for the deep water sections of the Oman India Pipeline dictate their use since the experience basis for application of standard deterministic techniques is inadequate. The paper discusses the reliability analysis as applied to the Oman India Pipeline, including selection of a collapse model, characterization of the variability in the parameters that affect pipe resistance to collapse, and implementation of first and second order reliability analyses to assess the probability of pipe failure. The reliability analysis results are used as the basis for establishing the pipe wall thickness requirements for the pipeline.
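
    The reliability idea can be sketched with a Monte Carlo stand-in for the first and second order analyses: estimate the probability that collapse resistance falls below the external pressure load. The distributions below are illustrative assumptions, not the paper's collapse model.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 1_000_000

        # Assumed lognormal collapse pressure (MPa), reflecting scatter in
        # wall thickness, ovality and yield strength.
        p_collapse = rng.lognormal(mean=np.log(40.0), sigma=0.08, size=n)
        # Assumed normal external pressure (MPa) at deep-water depth.
        p_external = rng.normal(loc=30.0, scale=1.0, size=n)

        pf = np.mean(p_collapse < p_external)   # probability of failure
        print(f"estimated Pf = {pf:.2e}")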

  18. Arecibo Pulsar Survey Using ALFA. IV. Mock Spectrometer Data Analysis, Survey Sensitivity, and the Discovery of 40 Pulsars

    NASA Astrophysics Data System (ADS)

    Lazarus, P.; Brazier, A.; Hessels, J. W. T.; Karako-Argaman, C.; Kaspi, V. M.; Lynch, R.; Madsen, E.; Patel, C.; Ransom, S. M.; Scholz, P.; Swiggum, J.; Zhu, W. W.; Allen, B.; Bogdanov, S.; Camilo, F.; Cardoso, F.; Chatterjee, S.; Cordes, J. M.; Crawford, F.; Deneva, J. S.; Ferdman, R.; Freire, P. C. C.; Jenet, F. A.; Knispel, B.; Lee, K. J.; van Leeuwen, J.; Lorimer, D. R.; Lyne, A. G.; McLaughlin, M. A.; Siemens, X.; Spitler, L. G.; Stairs, I. H.; Stovall, K.; Venkataraman, A.

    2015-10-01

    The on-going Arecibo Pulsar-ALFA (PALFA) survey began in 2004 and is searching for radio pulsars in the Galactic plane at 1.4 GHz. Here we present a comprehensive description of one of its main data reduction pipelines that is based on the PRESTO software and includes new interference-excision algorithms and candidate selection heuristics. This pipeline has been used to discover 40 pulsars, bringing the survey's discovery total to 144 pulsars. Of the new discoveries, eight are millisecond pulsars (MSPs; P < 10 ms) and one is a Fast Radio Burst (FRB). This pipeline has also re-detected 188 previously known pulsars, 60 of them previously discovered by the other PALFA pipelines. We present a novel method for determining the survey sensitivity that accurately takes into account the effects of interference and red noise: we inject synthetic pulsar signals with various parameters into real survey observations and then attempt to recover them with our pipeline. We find that the PALFA survey achieves the sensitivity to MSPs predicted by theoretical models but suffers a degradation for P ≳ 100 ms that gradually becomes up to ~10 times worse for P > 4 s at DM < 150 pc cm^-3. We estimate 33 ± 3% of the slower pulsars are missed, largely due to red noise. A population synthesis analysis using the sensitivity limits we measured suggests the PALFA survey should have found 224 ± 16 un-recycled pulsars in the data set analyzed, in agreement with the 241 actually detected. The reduced sensitivity could have implications for estimates of the number of long-period pulsars in the Galaxy.
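
    The injection-recovery technique can be demonstrated in miniature: add synthetic periodic pulses to noise, fold at the known period, and count how often the folded signal-to-noise exceeds a threshold. This toy sketch (all numbers invented) mirrors the logic only, not the PRESTO-based pipeline.

        import numpy as np

        rng = np.random.default_rng(1)
        n_samp, period, width, thresh = 100_000, 500, 5, 5.0

        def folded_snr(series, period):
            n_full = (len(series) // period) * period
            prof = series[:n_full].reshape(-1, period).mean(axis=0)
            return (prof.max() - prof.mean()) / prof.std()

        def recovered_fraction(amp, trials=50):
            hits = 0
            for _ in range(trials):
                ts = rng.normal(size=n_samp)
                for k in range(width):     # inject a boxcar pulse each period
                    ts[k::period] += amp
                hits += folded_snr(ts, period) > thresh
            return hits / trials

        for amp in (0.1, 0.3, 1.0):
            print(f"amplitude {amp}: recovered {recovered_fraction(amp):.0%}")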

  19. A Study on Optimal Sizing of Pipeline Transporting Equi-sized Particulate Solid-Liquid Mixture

    NASA Astrophysics Data System (ADS)

    Asim, Taimoor; Mishra, Rakesh; Pradhan, Suman; Ubbi, Kuldip

    2012-05-01

    Pipelines transporting solid-liquid mixtures are of practical interest to the oil and pipe industry throughout the world. Such pipelines are known as slurry pipelines, and the solid-liquid mixture they carry is commonly known as slurry. The optimal design of such pipelines is of commercial interest for their widespread acceptance. A methodology has been developed for the optimal sizing of a pipeline transporting a solid-liquid mixture. The least-cost principle has been used in sizing such pipelines, which involves determining the pipe diameter corresponding to the minimum cost for a given solid throughput. A detailed analysis of the transportation of slurry containing solids of uniformly graded particle size has been included. The proposed methodology can be used for designing a pipeline transporting any solid material at different solid throughputs.
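
    The least-cost principle reduces to a one-dimensional optimisation: capital cost grows with diameter while pumping cost falls with it. The sketch below minimises an assumed annual cost model with a constant friction factor; every coefficient is illustrative, not the paper's cost data.

        import numpy as np
        from scipy.optimize import minimize_scalar

        Q, L = 0.5, 10_000.0               # throughput m^3/s; length m (assumed)
        rho, f, eta = 1400.0, 0.02, 0.75   # slurry density; Darcy f; pump eff.
        c_pipe, c_energy = 900.0, 0.07     # $/(m of D per m); $/kWh (assumed)
        hours, annualise = 8000.0, 0.1     # operating h/yr; capital factor

        def annual_cost(D):
            v = Q / (np.pi * D**2 / 4)           # mean slurry velocity
            dP = f * (L / D) * rho * v**2 / 2    # Darcy-Weisbach pressure drop, Pa
            power_kw = dP * Q / eta / 1000.0     # pumping power
            return c_pipe * D * L * annualise + power_kw * hours * c_energy

        res = minimize_scalar(annual_cost, bounds=(0.1, 1.5), method="bounded")
        print(f"least-cost diameter ~ {res.x:.2f} m")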

  20. RAP: RNA-Seq Analysis Pipeline, a new cloud-based NGS web application

    PubMed Central

    2015-01-01

    Background The study of RNA has been dramatically improved by the introduction of Next Generation Sequencing platforms, allowing massive and cheap sequencing of selected RNA fractions and also providing information on strand orientation (RNA-Seq). The complexity of transcriptomes and of their regulative pathways makes RNA-Seq one of the most complex fields of NGS applications, addressing several aspects of the expression process (e.g. identification and quantification of expressed genes and transcripts, alternative splicing and polyadenylation, fusion genes and trans-splicing, post-transcriptional events, etc.). Moreover, the huge volume of data generated by NGS platforms introduces unprecedented computational and technological challenges to efficiently analyze and store sequence data and results. Methods In order to provide researchers with an effective and friendly resource for analyzing RNA-Seq data, we present here RAP (RNA-Seq Analysis Pipeline), a cloud computing web application implementing a complete but modular analysis workflow. This pipeline integrates both state-of-the-art bioinformatics tools for RNA-Seq analysis and in-house developed scripts to offer the user a comprehensive strategy for data analysis. RAP is able to perform quality checks (adopting FastQC and NGS QC Toolkit), identify and quantify expressed genes and transcripts (with TopHat, Cufflinks and HTSeq), detect alternative splicing events (using SpliceTrap) and chimeric transcripts (with ChimeraScan). The pipeline is also able to identify splicing junctions and constitutive or alternative polyadenylation sites (implementing custom analysis modules) and to detect statistically significant differences in gene and transcript expression, splicing pattern and polyadenylation site usage (using Cuffdiff2 and DESeq). Results Through a user-friendly web interface, the RAP workflow can be suitably customized by the user, and it is automatically executed on our cloud computing environment. This strategy allows access to bioinformatics tools and computational resources without requiring specific bioinformatics and IT skills. RAP provides a set of tabular and graphical results that can be helpful to browse, filter and export analyzed data according to the user's needs. PMID:26046471

  1. An Analysis of the Impact of Valve Closure Time on the Course of Water Hammer

    NASA Astrophysics Data System (ADS)

    Kodura, Apoloniusz

    2016-06-01

    The knowledge of transient flow in pressure pipelines is very important for the design and description of pressure networks. The water hammer is the most common example of transient flow in pressure pipelines. During this phenomenon, the transformation of kinetic energy into pressure energy causes significant changes in pressure, which can lead to serious problems in the management of pressure networks. The phenomenon is very complex, and a large number of different factors influence its course. In the case of a water hammer caused by valve closing, the characteristic of gate closure is one of the most important factors. However, this factor is rarely investigated. In this paper, the results of physical experiments on water hammer in steel and PE pipelines are described and analyzed. For each water hammer, the characteristics of pressure change and valve closing were recorded. The measurements were compared with the results of calculations performed by common methods used by engineers - Michaud's equation and Wood and Jones's method. The comparison revealed very significant differences between the results of calculations and the results of experiments. In addition, it was shown that the characteristic of butterfly valve closure has a significant influence on water hammer, which should be taken into account in analyzing this phenomenon. Comparison of the results of experiments with the results of calculations may lead to new, improved calculation methods and to new methods of describing transient flow.
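
    The two engineering formulas mentioned above are easy to compare numerically. For rapid closure (T < 2L/c) the Joukowsky relation gives the surge ceiling, while Michaud's equation applies to slow closure; the pipe data below are assumed for illustration, not taken from the experiments.

        rho, c = 1000.0, 1200.0   # water density kg/m^3; wave speed m/s
        L, v0 = 800.0, 1.5        # pipe length m; initial velocity m/s
        T = 4.0                   # valve closing time, s

        t_reflect = 2 * L / c                 # wave return time, ~1.33 s here
        dp_joukowsky = rho * c * v0           # Pa, rapid-closure limit
        dp_michaud = 2 * rho * L * v0 / T     # Pa, valid for T > 2L/c

        print(f"2L/c = {t_reflect:.2f} s")
        print(f"Joukowsky surge: {dp_joukowsky / 1e5:.1f} bar")
        print(f"Michaud surge:   {dp_michaud / 1e5:.1f} bar")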

  2. 49 CFR 192.713 - Transmission lines: Permanent field repair of imperfections and damages.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS...; or (2) Repaired by a method that reliable engineering tests and analyses show can permanently restore...

  3. 49 CFR 192.713 - Transmission lines: Permanent field repair of imperfections and damages.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS...; or (2) Repaired by a method that reliable engineering tests and analyses show can permanently restore...

  4. 49 CFR 192.713 - Transmission lines: Permanent field repair of imperfections and damages.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS...; or (2) Repaired by a method that reliable engineering tests and analyses show can permanently restore...

  5. 49 CFR 192.713 - Transmission lines: Permanent field repair of imperfections and damages.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS...; or (2) Repaired by a method that reliable engineering tests and analyses show can permanently restore...

  6. 49 CFR 192.713 - Transmission lines: Permanent field repair of imperfections and damages.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS...; or (2) Repaired by a method that reliable engineering tests and analyses show can permanently restore...

  7. Failure Analysis of PRDS Pipe in a Thermal Power Plant Boiler

    NASA Astrophysics Data System (ADS)

    Ghosh, Debashis; Ray, Subrata; Mandal, Jiten; Mandal, Nilrudra; Shukla, Awdhesh Kumar

    2018-04-01

    The pressure reducer desuperheater (PRDS) pipeline is used for reducing the pressure and desuperheating of the steam in different auxiliary pipelines. When the PRDS pipeline fails, the reliability of the boiler is affected. This paper investigates the probable cause or causes of failure of the PRDS tapping line. In that context, visual inspection, outside diameter and wall thickness measurement, chemical analysis, metallographic examination and hardness measurement were conducted as part of the investigative studies. Apart from these tests, mechanical testing and fractographic analysis were also conducted as supplements. It is concluded that the PRDS pipeline failed mainly due to graphitization caused by prolonged exposure of the pipe to elevated temperature; the use of improper material is primarily responsible for the premature failure of the pipe.

  8. Cancer Imaging Phenomics Toolkit (CaPTk) | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    CaPTk is a software toolkit to facilitate translation of quantitative image analysis methods that help us obtain rich imaging phenotypic signatures of oncologic images and relate them to precision diagnostics and prediction of clinical outcomes, as well as to underlying molecular characteristics of cancer. The stand-alone graphical user interface of CaPTk brings analysis methods from the realm of medical imaging research to the clinic, and will be extended to use web-based services for computationally-demanding pipelines.

  9. SARTools: A DESeq2- and EdgeR-Based R Pipeline for Comprehensive Differential Analysis of RNA-Seq Data.

    PubMed

    Varet, Hugo; Brillet-Guéguen, Loraine; Coppée, Jean-Yves; Dillies, Marie-Agnès

    2016-01-01

    Several R packages exist for the detection of differentially expressed genes from RNA-Seq data. The analysis process includes three main steps, namely normalization, dispersion estimation and the test for differential expression. Quality control steps along this process are recommended but not mandatory, and failing to check the characteristics of the dataset may lead to spurious results. In addition, normalization methods and statistical models are not exchangeable across the packages without adequate transformations that users are often not aware of. Thus, dedicated analysis pipelines are needed to include systematic quality control steps and prevent errors from misuse of the proposed methods. SARTools is an R pipeline for differential analysis of RNA-Seq count data. It can handle designs involving two or more conditions of a single biological factor, with or without a blocking factor (such as a batch effect or a sample pairing). It is based on DESeq2 and edgeR and is composed of an R package and two R script templates (for DESeq2 and edgeR, respectively). By tuning a small number of parameters and executing one of the R scripts, users have access to the full results of the analysis, including lists of differentially expressed genes and an HTML report that (i) displays diagnostic plots for quality control and model hypothesis checking and (ii) keeps track of the whole analysis process, parameter values and versions of the R packages used. SARTools provides systematic quality controls of the dataset as well as diagnostic plots that help to tune the model parameters. It gives access to the main parameters of DESeq2 and edgeR and prevents untrained users from misusing some functionalities of both packages. By keeping track of all the parameters of the analysis process, it fits the requirements of reproducible research.

  10. The microwave assisted synthesis of 1-alkyl-3-methylimidazolium bromide as potential corrosion inhibitor toward carbon steel in 1 M HCl solution saturated with carbon dioxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasasa, Norman Vincent A., E-mail: npasasa@gmail.com; Bundjali, Bunbun; Wahyuningrum, Deana

    Injection of corrosion inhibitor into the fluid stream of oil and gas pipelines is an effective way to mitigate the corrosion rate on the inner surfaces of pipelines, especially carbon steel pipelines. In this research, two alkylimidazolium ionic liquids, 1-decyl-3-methylimidazolium bromide (IL1) and 1-dodecyl-3-methylimidazolium bromide (IL2), have been synthesized and studied as potential corrosion inhibitors towards carbon steel in 1 M HCl solution saturated with carbon dioxide. IL1 and IL2 were synthesized using the microwave assisted organic synthesis (MAOS) method. Mass spectrometry analysis of IL1 and IL2 showed molecular mass [M-H+] peaks at 223.2166 and 251.2484, respectively. The FTIR, ¹H-NMR and ¹³C-NMR spectra confirmed that IL1 and IL2 were successfully synthesized. The corrosion inhibition activity of IL1 and IL2 was determined using the weight loss method. The results showed that IL1 and IL2 have potential as good corrosion inhibitors, with corrosion inhibition efficiencies of 96.00% at 100 ppm (343 K) for IL1 and 95.60% at 50 ppm (343 K) for IL2. Increasing the concentration of IL1 and IL2 tends to improve their corrosion inhibition activities. Analysis of the data obtained from the weight loss method shows that the adsorption of IL1 and IL2 on carbon steel is classified as chemisorption obeying Langmuir's adsorption isotherm.
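
    The weight-loss calculation and the Langmuir check are compact enough to show directly. This sketch uses invented weight-loss data, not the paper's measurements; in the linearised Langmuir form, fitting C/theta against C should give a slope close to one.

        import numpy as np

        def inhibition_efficiency(w_blank, w_inhibited):
            return 100.0 * (w_blank - w_inhibited) / w_blank

        conc = np.array([10.0, 25.0, 50.0, 100.0])       # ppm (assumed)
        w0 = 0.250                                       # g lost, no inhibitor
        w_inh = np.array([0.060, 0.032, 0.015, 0.010])   # g lost, inhibited

        eff = inhibition_efficiency(w0, w_inh)           # efficiency, %
        theta = eff / 100.0                              # surface coverage

        # Langmuir isotherm, linear form: C/theta = 1/K_ads + C
        slope, intercept = np.polyfit(conc, conc / theta, 1)
        print(eff)
        print(f"slope = {slope:.2f} (Langmuir if ~1), K_ads = {1/intercept:.3f} ppm^-1")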

  11. Developing eThread pipeline using SAGA-pilot abstraction for large-scale structural bioinformatics.

    PubMed

    Ragothaman, Anjani; Boddu, Sairam Chowdary; Kim, Nayong; Feinstein, Wei; Brylinski, Michal; Jha, Shantenu; Kim, Joohyun

    2014-01-01

    While most computational annotation approaches are sequence-based, threading methods are becoming increasingly attractive because the predicted structural information could uncover the underlying function. However, threading tools are generally compute-intensive, and the number of protein sequences from even small genomes, such as those of prokaryotes, typically runs to many thousands, prohibiting their application as a genome-wide structural systems biology tool. To make such analyses practical, we have developed a pipeline for eThread, a meta-threading protein structure modeling tool, that can use computational resources efficiently and effectively. We employ a pilot-based approach that supports seamless data and task-level parallelism and manages large variation in workload and computational requirements. Our scalable pipeline is deployed on Amazon EC2 and can efficiently select resources based upon task requirements. We present a runtime analysis to characterize the computational complexity of eThread and the EC2 infrastructure. Based on the results, we suggest a pathway to an optimized solution with respect to metrics such as time-to-solution or cost-to-solution. Our eThread pipeline can scale to support a large number of sequences and is expected to be a viable solution for genome-scale structural bioinformatics and structure-based annotation, particularly amenable to small genomes such as prokaryotes. The developed pipeline is easily extensible to other types of distributed cyberinfrastructure.
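
    The pilot abstraction itself comes from SAGA/RADICAL-Pilot; as a self-contained stand-in, the sketch below shows the same task-level parallelism pattern with Python's standard library, farming out one hypothetical threading job per sequence.

        from concurrent.futures import ProcessPoolExecutor, as_completed

        def run_threading_job(sequence_id):
            # Placeholder for invoking eThread on one sequence; in the real
            # pipeline this would be dispatched to a pilot-managed resource.
            return sequence_id, f"model-for-{sequence_id}"

        sequences = [f"seq{i:04d}" for i in range(100)]

        if __name__ == "__main__":
            with ProcessPoolExecutor(max_workers=8) as pool:
                futures = [pool.submit(run_threading_job, s) for s in sequences]
                for fut in as_completed(futures):
                    seq_id, model = fut.result()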

  12. Developing eThread Pipeline Using SAGA-Pilot Abstraction for Large-Scale Structural Bioinformatics

    PubMed Central

    Ragothaman, Anjani; Feinstein, Wei; Jha, Shantenu; Kim, Joohyun

    2014-01-01

    While most computational annotation approaches are sequence-based, threading methods are becoming increasingly attractive because the predicted structural information could uncover the underlying function. However, threading tools are generally compute-intensive, and the number of protein sequences from even small genomes, such as those of prokaryotes, typically runs to many thousands, prohibiting their application as a genome-wide structural systems biology tool. To make such analyses practical, we have developed a pipeline for eThread, a meta-threading protein structure modeling tool, that can use computational resources efficiently and effectively. We employ a pilot-based approach that supports seamless data and task-level parallelism and manages large variation in workload and computational requirements. Our scalable pipeline is deployed on Amazon EC2 and can efficiently select resources based upon task requirements. We present a runtime analysis to characterize the computational complexity of eThread and the EC2 infrastructure. Based on the results, we suggest a pathway to an optimized solution with respect to metrics such as time-to-solution or cost-to-solution. Our eThread pipeline can scale to support a large number of sequences and is expected to be a viable solution for genome-scale structural bioinformatics and structure-based annotation, particularly amenable to small genomes such as prokaryotes. The developed pipeline is easily extensible to other types of distributed cyberinfrastructure. PMID:24995285

  13. 78 FR 71036 - Pipeline Safety: Random Drug Testing Rate; Contractor Management Information System Reporting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-27

    ... PHMSA-2013-0248] Pipeline Safety: Random Drug Testing Rate; Contractor Management Information System Reporting; and Obtaining Drug and Alcohol Management Information System Sign-In Information AGENCY: Pipeline... Management Information System (MIS) Data; and New Method for Operators to Obtain User Name and Password for...

  14. 76 FR 1504 - Pipeline Safety: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-10

    ... profile that is dependent upon the pipelines attributes, its geographical location, design, operating... type of threats posed by the pipeline segment, including consideration of the age, design, pipe... calculation. There are several methods available for establishing MAOP or MOP. A hydrostatic pressure test...

  15. Thermal interaction of underground pipeline with freezing heaving soil

    NASA Astrophysics Data System (ADS)

    Podorozhnikov, S. Y.; Mikhailov, P.; Puldas, L.; Shabarov, A.

    2018-05-01

    A mathematical model and a method for calculating the stress-strain state of a pipeline are offered, describing the thermal and mechanical interaction in the "underground pipeline - soil" system under conditions of negative soil temperatures. Some results of a computational-parametric study are presented.

  16. Power law of distribution of emergency situations on main gas pipeline

    NASA Astrophysics Data System (ADS)

    Voronin, K. S.; Akulov, K. A.

    2018-05-01

    The article presents the results of the analysis of emergency situations on a main gas pipeline. A power law of distribution of emergency situations is revealed. The possibility of conducting further scientific research to ensure the predictability of emergency situations on pipelines is justified.
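
    The abstract does not detail the fitting procedure; a common way to estimate a power-law exponent from incident data is the maximum-likelihood estimator alpha = 1 + n / sum(ln(x_i / x_min)). The sketch below applies it to invented incident sizes, purely for illustration.

        import numpy as np

        sizes = np.array([1.2, 1.5, 2.0, 2.2, 3.1, 4.0, 5.5, 8.0, 12.0, 30.0])
        x_min = sizes.min()
        alpha = 1.0 + len(sizes) / np.sum(np.log(sizes / x_min))
        print(f"estimated power-law exponent: {alpha:.2f}")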

  17. Gas pipeline leakage detection based on PZT sensors

    NASA Astrophysics Data System (ADS)

    Zhu, Junxiao; Ren, Liang; Ho, Siu-Chun; Jia, Ziguang; Song, Gangbing

    2017-02-01

    In this paper, an innovative method for rapid detection and location of pipeline leakage utilizing lead zirconate titanate (PZT) sensors is proposed. The negative pressure wave (NPW) is a stress wave generated by leakage in the pipeline, which propagates along the pipeline from the leakage point to both ends; the NPW is thus associated with hoop strain variation along the pipe wall. PZT sensors mounted on the pipeline were used to measure the strain variation and allowed accurate (within 2% error) and repeatable (within 4% variance) location of five manually controlled leakage points. Experimental results have verified the effectiveness and the location accuracy of the method for leakage in a 55 meter long model pipeline.
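
    With sensors at both ends of the monitored section, the standard NPW localisation result places the leak at x = (L + v*dt)/2 from sensor A, where dt is the arrival-time difference t_A - t_B and v the wave speed. The numbers below are illustrative, not from the 55 m test rig.

        def leak_position(L, v, dt):
            """Distance of the leak from sensor A, metres."""
            return (L + v * dt) / 2.0

        L = 55.0      # m, sensor spacing
        v = 1000.0    # m/s, assumed NPW propagation speed in the pipe
        dt = -0.021   # s, wave reached sensor A 21 ms before sensor B
        print(f"leak at {leak_position(L, v, dt):.1f} m from sensor A")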

  18. Simulation of pipeline in the area of the underwater crossing

    NASA Astrophysics Data System (ADS)

    Burkov, P.; Chernyavskiy, D.; Burkova, S.; Konan, E. C.

    2014-08-01

    The article studies the stress-strain behavior of the Alexandrovskoye-Anzhero-Sudzhensk main oil pipeline section using the Ansys software system. This approach to examining and assessing the technical condition of pipeline transport facilities studies both the objects and the processes that affect their condition, including research based on computer simulation. Such an approach supports the development of theory, calculation methods and design practice for pipeline transport facilities and machine components, regardless of industry or purpose, with a view to improving existing structures and creating new, competitive designs of high performance, durability, reliability and maintainability at low material consumption and cost.

  19. Data processing pipeline for Herschel HIFI

    NASA Astrophysics Data System (ADS)

    Shipman, R. F.; Beaulieu, S. F.; Teyssier, D.; Morris, P.; Rengel, M.; McCoey, C.; Edwards, K.; Kester, D.; Lorenzani, A.; Coeur-Joly, O.; Melchior, M.; Xie, J.; Sanchez, E.; Zaal, P.; Avruch, I.; Borys, C.; Braine, J.; Comito, C.; Delforge, B.; Herpin, F.; Hoac, A.; Kwon, W.; Lord, S. D.; Marston, A.; Mueller, M.; Olberg, M.; Ossenkopf, V.; Puga, E.; Akyilmaz-Yabaci, M.

    2017-12-01

    Context. The HIFI instrument on the Herschel Space Observatory performed over 9100 astronomical observations, almost 900 of which were calibration observations in the course of the nearly four-year Herschel mission. The data from each observation had to be converted from raw telemetry into calibrated products and were included in the Herschel Science Archive. Aims: The HIFI pipeline was designed to provide robust conversion from raw telemetry into calibrated data throughout all phases of the HIFI missions. Pre-launch laboratory testing was supported as were routine mission operations. Methods: A modular software design allowed components to be easily added, removed, amended and/or extended as the understanding of the HIFI data developed during and after mission operations. Results: The HIFI pipeline processed data from all HIFI observing modes within the Herschel automated processing environment as well as within an interactive environment. The same software can be used by the general astronomical community to reprocess any standard HIFI observation. The pipeline also recorded the consistency of processing results and provided automated quality reports. Many pipeline modules were in use since the HIFI pre-launch instrument level testing. Conclusions: Processing in steps facilitated data analysis to discover and address instrument artefacts and uncertainties. The availability of the same pipeline components from pre-launch throughout the mission made for well-understood, tested, and stable processing. A smooth transition from one phase to the next significantly enhanced processing reliability and robustness. Herschel was an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
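
    The modular design described in the Methods can be caricatured in a few lines: pipeline modules are interchangeable callables applied in sequence, each recording itself for traceability. This is a generic Python sketch, not the actual HIFI software, which ran inside Herschel's interactive processing environment.

        from typing import Callable, Dict, List

        Step = Callable[[Dict], Dict]

        def run_pipeline(data: Dict, steps: List[Step]) -> Dict:
            """Apply each module in turn and log it for the quality report."""
            for step in steps:
                data = step(data)
                data.setdefault("history", []).append(step.__name__)
            return data

        def subtract_baseline(d: Dict) -> Dict:
            d["level1"] = "baseline-subtracted"
            return d

        def apply_bandpass_calibration(d: Dict) -> Dict:
            d["level2"] = "calibrated"
            return d

        result = run_pipeline({"raw": "telemetry"},
                              [subtract_baseline, apply_bandpass_calibration])
        print(result["history"])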

  20. Analytical solution and numerical study on water hammer in a pipeline closed with an elastically attached valve

    NASA Astrophysics Data System (ADS)

    Henclik, Sławomir

    2018-03-01

    The influence of dynamic fluid-structure interaction (FSI) on the course of water hammer (WH) can be significant in non-rigid pipeline systems. The essence of this effect is the dynamic transfer of liquid energy to the pipeline structure and back, which is important for elastic structures and can be negligible for rigid ones. In the paper a special model of such behavior is analyzed. A straight pipeline with a steady flow, fixed to the floor with several rigid supports, is assumed. The transient is generated by a quickly closed valve installed at the end of the pipeline. FSI effects are assumed to be present mainly at the valve, which is fixed with a spring dash-pot attachment. Analysis of WH runs, especially transient pressure changes, for various stiffness and damping parameters of the spring dash-pot valve attachment is presented in the paper. The solutions are found analytically and numerically. Numerical results have been computed with an in-house computer program developed on the basis of the four-equation model of WH-FSI and the specific boundary conditions formulated at the valve. Analytical solutions have been found with the separation-of-variables method under slightly simplified assumptions. Damping at the dash-pot is taken into account within the numerical study. The influence of the valve attachment parameters on the WH course was characterized, and it was found that the transient amplitudes can be reduced. Such a system, an elastically attached shut-off valve in a pipeline or an equivalent design, can be a real solution applicable in practice.

  1. Italian river crossing; Horizontal drilling meets pipeline project criteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1988-06-01

    The River Piave flows out of the Italian Alps, crossing the Veneto farmlands on its way to the Adriatic Sea. It is an important commerce-carrying waterway. SNAM, the Italian state gas pipeline company, wanted to install a 22-in. pipeline across the Piave just north of Venice, and the method chosen for crossing the river had to meet several important criteria. InArc had used its river crossing method on seven previous SNAM projects and recommended that the Piave crossing be drilled. This paper describes the use of this horizontal drilling method for this application.

  2. Fiber glass reinforcement wrap gets DOT nod for gas-line use

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1993-12-13

    Panhandle Eastern Corp.'s Texas Eastern Transmission Corp. has become the first US natural-gas pipeline company to install, under federal waiver, a fiber glass reinforcement on an in-service gas pipeline. The Clock Spring repair system was installed in August on six segments of Texas Eastern's 20-in. gas pipeline in Fayette County, Ohio, after the company had received a US Department of Transportation (DOT) waiver to use the system in place of conventional DOT-mandated repair methods. The paper describes the conventional methods, as well as comparing costs of both methods.

  3. A versatile pipeline for the multi-scale digital reconstruction and quantitative analysis of 3D tissue architecture

    PubMed Central

    Morales-Navarrete, Hernán; Segovia-Miranda, Fabián; Klukowski, Piotr; Meyer, Kirstin; Nonaka, Hidenori; Marsico, Giovanni; Chernykh, Mikhail; Kalaidzidis, Alexander; Zerial, Marino; Kalaidzidis, Yannis

    2015-01-01

    A prerequisite for the systems biology analysis of tissues is an accurate digital three-dimensional reconstruction of tissue structure based on images of markers covering multiple scales. Here, we designed a flexible pipeline for the multi-scale reconstruction and quantitative morphological analysis of tissue architecture from microscopy images. Our pipeline includes newly developed algorithms that address specific challenges of thick dense tissue reconstruction. Our implementation allows for a flexible workflow, scalable to high-throughput analysis and applicable to various mammalian tissues. We applied it to the analysis of liver tissue and extracted quantitative parameters of sinusoids, bile canaliculi and cell shapes, recognizing different liver cell types with high accuracy. Using our platform, we uncovered an unexpected zonation pattern of hepatocytes with different size, nuclei and DNA content, thus revealing new features of liver tissue organization. The pipeline also proved effective in analysing lung and kidney tissue, demonstrating its generality and robustness. DOI: http://dx.doi.org/10.7554/eLife.11214.001 PMID:26673893

  4. Disrupting the Pipeline: Critical Analyses of Student Pathways through Postsecondary STEM Education

    ERIC Educational Resources Information Center

    Metcalf, Heather E.

    2014-01-01

    Critical mixed methods approaches allow us to reflect upon the ways in which we collect, measure, interpret, and analyze data, providing novel alternatives for quantitative analysis. For institutional researchers, whose work influences institutional policies, programs, and practices, the approach has the transformative ability to expose and create…

  5. Development of a Pipeline for Exploratory Metabolic Profiling of Infant Urine

    PubMed Central

    Jackson, Frances; Georgakopoulou, Nancy; Kaluarachchi, Manuja; Kyriakides, Michael; Andreas, Nicholas; Przysiezna, Natalia; Hyde, Matthew J.; Modi, Neena; Nicholson, Jeremy K.; Wijeyesekera, Anisha; Holmes, Elaine

    2017-01-01

    Numerous metabolic profiling pipelines have been developed to characterize the composition of human biofluids and tissues, the vast majority of these being for studies in adults. To accommodate limited sample volume and to take into account the compositional differences between adult and infant biofluids, we developed and optimized sample handling and analytical procedures for studying urine from newborns. A robust pipeline for metabolic profiling using NMR spectroscopy was established, encompassing sample collection, preparation, spectroscopic measurement, and computational analysis. Longitudinal samples were collected from five infants from birth until 14 months of age. Methods of extraction and effects of freezing and sample dilution were assessed, and urinary contaminants from breakdown of polymers in a range of diapers and cotton wool balls were identified and compared, including propylene glycol, acrylic acid, and tert-butanol. Finally, assessment of urinary profiles obtained over the first few weeks of life revealed a dramatic change in composition, with concentrations of phenols, amino acids, and betaine altering systematically over the first few months of life. Therefore, neonatal samples require more stringent standardization of experimental design, sample handling, and analysis compared to that of adult samples to accommodate the variability and limited sample volume. PMID:27476583

  6. Double-pulse laser-induced breakdown spectroscopy analysis of scales from petroleum pipelines

    NASA Astrophysics Data System (ADS)

    Cavalcanti, G. H.; Rocha, A. A.; Damasceno, R. N.; Legnaioli, S.; Lorenzetti, G.; Pardini, L.; Palleschi, V.

    2013-09-01

    Pipeline scales from the Campos Bay Petroleum Field near Rio de Janeiro, Brazil have been analyzed by both Raman spectroscopy and by laser-induced breakdown spectroscopy (LIBS) using a double-pulse, calibration-free approach. Elements that are characteristic of petroleum (e.g. C, H, N, O, Mg, Na, Fe and V) were detected, in addition to the Ca, Al, and Si which form the matrix of the scale. The LIBS results were compared with the results of micro-Raman spectroscopy, which confirmed the nature of the incrustations inferred by the LIBS analysis. Results of this preliminary study suggest that diffusion of pipe material into the pipeline intake column plays an important role in the growth of scale. Thanks to the simplicity and relative low cost of equipment and to the fact that no special chemical pre-treatment of the samples is needed, LIBS can offer very fast acquisition of data and the possibility of in situ measurements. LIBS could thus represent an alternative or complementary method for the chemical characterization of the scales by comparison to conventional analytical techniques, such as X-ray diffraction or X-ray fluorescence.

  7. Changes in the Pipeline Transportation Market

    EIA Publications

    1999-01-01

    This analysis assesses the amount of capacity that may be turned back to pipeline companies, based on shippers' actions over the past several years and the profile of contracts in place as of July 1, 1998. It also examines changes in the characteristics of contracts between shippers and pipeline companies.

  8. Increased Sensitivity of Diagnostic Mutation Detection by Re-analysis Incorporating Local Reassembly of Sequence Reads.

    PubMed

    Watson, Christopher M; Camm, Nick; Crinnion, Laura A; Clokie, Samuel; Robinson, Rachel L; Adlard, Julian; Charlton, Ruth; Markham, Alexander F; Carr, Ian M; Bonthron, David T

    2017-12-01

    Diagnostic genetic testing programmes based on next-generation DNA sequencing have resulted in the accrual of large datasets of targeted raw sequence data. Most diagnostic laboratories process these data through an automated variant-calling pipeline. Validation of the chosen analytical methods typically depends on confirming the detection of known sequence variants. Despite improvements in short-read alignment methods, current pipelines are known to be comparatively poor at detecting large insertion/deletion mutations. We performed clinical validation of a local reassembly tool, ABRA (assembly-based realigner), through retrospective reanalysis of a cohort of more than 2000 hereditary cancer cases. ABRA enabled detection of a 96-bp deletion, 4-bp insertion mutation in PMS2 that had initially been identified using a comparative read-depth approach. We applied an updated pipeline incorporating ABRA to the entire cohort of 2000 cases and identified one previously undetected pathogenic variant, a 23-bp duplication in PTEN. We demonstrate the effect of read length on the ability to detect insertion/deletion variants by comparing HiSeq2500 (2 × 101-bp) and NextSeq500 (2 × 151-bp) sequence data for a range of variants, and thereby show that the limitations of shorter read lengths can be mitigated using appropriate informatics tools. This work highlights the need for ongoing development of diagnostic pipelines to maximize test sensitivity. We also draw attention to the large differences in computational infrastructure required to perform day-to-day versus large-scale reprocessing tasks.
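
    ABRA itself performs local reassembly of aligned reads; the comparative read-depth idea that first flagged the PMS2 deletion can, however, be sketched compactly: normalise per-exon depths against a reference cohort and flag exons whose depth ratio drops below a cutoff. The data and threshold below are invented.

        import numpy as np

        def flag_deletions(sample_depth, cohort_depths, ratio_cutoff=0.7):
            """sample_depth: (n_exons,); cohort_depths: (n_samples, n_exons)."""
            sample_norm = sample_depth / np.median(sample_depth)
            cohort_norm = cohort_depths / np.median(cohort_depths, axis=1,
                                                    keepdims=True)
            ratio = sample_norm / np.median(cohort_norm, axis=0)
            return np.where(ratio < ratio_cutoff)[0], ratio

        rng = np.random.default_rng(3)
        cohort = rng.normal(100, 8, size=(40, 12))   # 40 samples, 12 exons
        sample = rng.normal(100, 8, size=12)
        sample[5] *= 0.5           # simulate a heterozygous exon deletion
        exons, ratio = flag_deletions(sample, cohort)
        print(exons, np.round(ratio, 2))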

  9. New method for enhanced efficiency in detection of gravitational waves from supernovae using coherent network of detectors

    NASA Astrophysics Data System (ADS)

    Mukherjee, S.; Salazar, L.; Mittelstaedt, J.; Valdez, O.

    2017-11-01

    Supernovae in our universe are potential sources of gravitational waves (GW) that could be detected in a network of GW detectors like LIGO and Virgo. Core-collapse supernovae are rare, but the associated gravitational radiation is likely to carry profuse information about the underlying processes driving the supernovae. Calculations based on analytic models predict GW energies within the detection range of the Advanced LIGO detectors, out to tens of Mpc for certain types of signals, e.g., coalescing binary neutron stars. For supernovae, however, the corresponding distances are much smaller. Thus, methods that can improve the sensitivity of searches for GW signals from supernovae are desirable, especially in the advanced detector era. Several methods have been proposed based on various likelihood-based regulators that work on data from a network of detectors to detect burst-like signals (as is the case for signals from supernovae) from potential GW sources. To address this problem, we have developed an analysis pipeline based on a noise-reduction method known as the harmonic regeneration noise reduction (HRNR) algorithm. To demonstrate the method, sixteen supernova waveforms from the Murphy et al. 2009 catalog have been used in the presence of LIGO science data. A comparative analysis is presented to show detection statistics for a standard network analysis as commonly used in GW pipelines and for the same analysis implementing the new method in conjunction with the network. The results show significant improvement in detection statistics.

  10. Simplified Technique for Predicting Offshore Pipeline Expansion

    NASA Astrophysics Data System (ADS)

    Seo, J. H.; Kim, D. K.; Choi, H. S.; Yu, S. Y.; Park, K. S.

    2018-06-01

    In this study, we propose a method for estimating the amount of expansion that occurs in subsea pipelines, which could be applied in the design of robust structures that transport oil and gas from offshore wells. We begin with a literature review and general discussion of existing estimation methods and terminologies with respect to subsea pipelines. Due to the effects of high pressure and high temperature, the production of fluid from offshore wells typically causes physical deformation of subsea structures, e.g., expansion and contraction, during the transportation process. In severe cases, vertical and lateral buckling occurs, which has a significant negative impact on structural safety related to on-bottom stability, free spans, structural collapse, and many other factors. In addition, these factors may affect the production rate with respect to flow assurance, wax, and hydrate formation, to name a few. In this study, we developed a simple and efficient method for generating a reliable pipe expansion design in the early stage, which can lead to savings in both cost and computation time. We propose an applicable diagram, which we call the standard dimensionless ratio (SDR) versus virtual anchor length (L_A) diagram, that provides an efficient procedure for estimating subsea pipeline expansion under reliable applied scenarios. With this user guideline, offshore pipeline structural designers can reliably determine the amount of subsea pipeline expansion, and the results will also be useful for the installation, design, and maintenance of subsea pipelines.
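
    A hedged numerical version of the simplified estimate: thermal plus pressure end-cap strain drives the pipe end, soil friction resists it, and strain decays linearly to zero at the virtual anchor point, so the end expansion is roughly strain times L_A / 2. Every input below is assumed, and this is a generic textbook formulation rather than the authors' SDR diagram.

        import math

        D, t = 0.3239, 0.0159     # pipe OD and wall thickness, m (assumed)
        E, nu = 207e9, 0.3        # steel modulus (Pa), Poisson ratio
        alpha = 11.7e-6           # thermal expansion coefficient, 1/K
        dT, p = 60.0, 10e6        # temperature rise (K), pressure (Pa)
        f_soil = 2000.0           # axial soil friction, N/m (assumed)

        A = math.pi * (D - t) * t                      # thin-wall steel area
        eps0 = alpha * dT + p * D * (1 - 2 * nu) / (4 * t * E)
        F_eff = eps0 * E * A                           # fully restrained force
        L_anchor = F_eff / f_soil                      # virtual anchor length
        expansion = eps0 * L_anchor / 2.0

        print(f"eps0 = {eps0:.2e}, L_A = {L_anchor:.0f} m, "
              f"end expansion = {expansion * 1000:.0f} mm")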

  11. Understanding gene functions and disease mechanisms: Phenotyping pipelines in the German Mouse Clinic.

    PubMed

    Fuchs, Helmut; Aguilar-Pimentel, Juan Antonio; Amarie, Oana V; Becker, Lore; Calzada-Wack, Julia; Cho, Yi-Li; Garrett, Lillian; Hölter, Sabine M; Irmler, Martin; Kistler, Martin; Kraiger, Markus; Mayer-Kuckuk, Philipp; Moreth, Kristin; Rathkolb, Birgit; Rozman, Jan; da Silva Buttkus, Patricia; Treise, Irina; Zimprich, Annemarie; Gampe, Kristine; Hutterer, Christine; Stöger, Claudia; Leuchtenberger, Stefanie; Maier, Holger; Miller, Manuel; Scheideler, Angelika; Wu, Moya; Beckers, Johannes; Bekeredjian, Raffi; Brielmeier, Markus; Busch, Dirk H; Klingenspor, Martin; Klopstock, Thomas; Ollert, Markus; Schmidt-Weber, Carsten; Stöger, Tobias; Wolf, Eckhard; Wurst, Wolfgang; Yildirim, Ali Önder; Zimmer, Andreas; Gailus-Durner, Valérie; Hrabě de Angelis, Martin

    2017-09-29

    For decades, model organisms have provided an important approach for understanding the mechanistic basis of human diseases. The German Mouse Clinic (GMC) was the first phenotyping facility to establish a collaboration-based platform for phenotype characterization of mouse lines. In order to address individual projects with a tailor-made phenotyping strategy, the GMC has developed a series of pipelines with tests for the analysis of specific disease areas. For a general broad analysis, there is a screening pipeline that covers the key parameters for the most relevant disease areas. For hypothesis-driven phenotypic analyses, there are thirteen additional pipelines with a focus on neurological and behavioral disorders, metabolic dysfunction, respiratory system malfunctions, immune-system disorders and imaging techniques. In this article, we give an overview of the pipelines and describe the scientific rationale behind the different test combinations. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. The Kepler Science Data Processing Pipeline Source Code Road Map

    NASA Technical Reports Server (NTRS)

    Wohler, Bill; Jenkins, Jon M.; Twicken, Joseph D.; Bryson, Stephen T.; Clarke, Bruce Donald; Middour, Christopher K.; Quintana, Elisa Victoria; Sanderfer, Jesse Thomas; Uddin, Akm Kamal; Sabale, Anima

    2016-01-01

    We give an overview of the operational concepts and architecture of the Kepler Science Processing Pipeline. Designed, developed, operated, and maintained by the Kepler Science Operations Center (SOC) at NASA Ames Research Center, the Science Processing Pipeline is a central element of the Kepler Ground Data System. The SOC consists of an office at Ames Research Center, software development and operations departments, and a data center which hosts the computers required to perform data analysis. The SOC's charter is to analyze stellar photometric data from the Kepler spacecraft and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler Science Processing Pipeline, including the software algorithms. We present the high-performance, parallel computing software modules of the pipeline that perform transit photometry, pixel-level calibration, systematic error correction, attitude determination, stellar target management, and instrument characterization.

  13. Main Pipelines Corrosion Monitoring Device

    NASA Astrophysics Data System (ADS)

    Anatoliy, Bazhenov; Galina, Bondareva; Natalia, Grivennaya; Sergey, Malygin; Mikhail, Goryainov

    2017-01-01

    The aim of the article is to substantiate a technical solution to the problem of monitoring corrosion changes in oil and gas pipelines using an electromagnetic NDT method. Pipeline wall thinning under operating conditions can lead to perforations and leakage of the transported product outside the pipeline, in most cases posing a danger to human life and the environment. Monitoring of corrosion changes in the pipeline's inner wall under operating conditions is complicated because pipelines are mainly made of structural steels whose conductive and magnetic properties complicate the passage of the test signal through the entire thickness of the object under study. The proposed solution monitors internal corrosion changes in pipes under operating conditions in order to increase pipeline safety through automated prediction of when corrosion reaches threshold pre-failure values.

  14. Amateur Image Pipeline Processing using Python plus PyRAF

    NASA Astrophysics Data System (ADS)

    Green, Wayne

    2012-05-01

    A template pipeline spanning observation planning to publishing is offered as a basis for establishing a long-term observing program. The data reduction pipeline encapsulates all policy and procedures, providing an accountable framework for data analysis and a teaching framework for IRAF. This paper introduces the technical details of a complete pipeline processing environment using Python, PyRAF and a few other languages. The pipeline encapsulates all processing decisions within an auditable framework and quickly handles the heavy lifting of image processing. It also serves as an excellent teaching environment for astronomical data management and IRAF reduction decisions.

  15. Demonstrating the Effects of Shop Flow Process Variability on the Air Force Depot Level Reparable Item Pipeline

    DTIC Science & Technology

    1992-09-01

    Crawford found that pipeline contents are extremely variable about their mean (10:24), and Kettner and Wheatley said that "a statistical analysis of data..." The simulation writes the results from each replication to ANOVA files for later analysis; the first output set gives points for overall pipeline contents.

  16. Fast, accurate and easy-to-pipeline methods for amplicon sequence processing

    NASA Astrophysics Data System (ADS)

    Antonielli, Livio; Sessitsch, Angela

    2016-04-01

    Next generation sequencing (NGS) technologies have been established for years as an essential resource in microbiology. While metagenomic studies benefit from the continuously increasing throughput of the Illumina (Solexa) technology, the spread of third generation sequencing technologies (PacBio, Oxford Nanopore) is taking whole genome sequencing beyond the assembly of fragmented draft genomes, making it now possible to finish bacterial genomes even without short read correction. Besides (meta)genomic analysis, next-gen amplicon sequencing is still fundamental for microbial studies. Amplicon sequencing of the 16S rRNA gene and ITS (Internal Transcribed Spacer) remains a well-established, widespread method for a multitude of purposes concerning the identification and comparison of archaeal/bacterial (16S rRNA gene) and fungal (ITS) communities occurring in diverse environments. Numerous pipelines have been developed to process NGS-derived amplicon sequences, among which Mothur, QIIME and USEARCH are the most well-known and cited. The entire process, from initial raw sequence data through read error correction, paired-end read assembly, primer stripping, quality filtering, clustering, OTU taxonomic classification and BIOM table rarefaction as well as alternative "normalization" methods, will be addressed. An effective and accurate strategy will be presented using state-of-the-art bioinformatic tools, and the example of a straightforward one-script pipeline for 16S rRNA gene or ITS MiSeq amplicon sequencing will be provided. Finally, instructions on how to automatically retrieve nucleotide sequences from NCBI and thereby apply the pipeline to targets other than the 16S rRNA gene (Greengenes, SILVA) and ITS (UNITE) will be discussed.

  17. Bicycle: a bioinformatics pipeline to analyze bisulfite sequencing data.

    PubMed

    Graña, Osvaldo; López-Fernández, Hugo; Fdez-Riverola, Florentino; González Pisano, David; Glez-Peña, Daniel

    2018-04-15

    High-throughput sequencing of bisulfite-converted DNA is a technique used to measure DNA methylation levels. Although a considerable number of computational pipelines have been developed to analyze such data, none of them tackles all the peculiarities of the analysis together, revealing limitations that can force the user to manually perform additional steps needed for complete processing of the data. This article presents bicycle, an integrated, flexible analysis pipeline for bisulfite sequencing data. Bicycle analyzes whole genome bisulfite sequencing data, targeted bisulfite sequencing data and hydroxymethylation data. To show how bicycle surpasses other available pipelines, we compared them on a defined set of features, summarized in a table. We also tested bicycle with both simulated and real datasets to show its level of performance, and compared it to different state-of-the-art methylation analysis pipelines. Bicycle is publicly available under the GNU LGPL v3.0 license at http://www.sing-group.org/bicycle. Users can also download a customized Ubuntu LiveCD including bicycle and the other bisulfite sequencing data pipelines compared here. In addition, a docker image with bicycle and its dependencies, which allows straightforward use of bicycle on any platform (e.g. Linux, OS X or Windows), is also available. ograna@cnio.es or dgpena@uvigo.es. Supplementary data are available at Bioinformatics online.
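
    Whatever the aligner, the per-cytosine quantity a bisulfite pipeline ultimately reports is the ratio of methylated calls to total calls. A minimal sketch with toy counts (positions invented):

        counts = {   # position -> (methylated reads, unmethylated reads)
            "chr1:10468": (27, 3),
            "chr1:10470": (5, 25),
        }
        for pos, (meth, unmeth) in counts.items():
            level = meth / (meth + unmeth)
            print(f"{pos}\tcoverage={meth + unmeth}\tmethylation={level:.2f}")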

  18. Maser: one-stop platform for NGS big data from analysis to visualization

    PubMed Central

    Kinjo, Sonoko; Monma, Norikazu; Misu, Sadahiko; Kitamura, Norikazu; Imoto, Junichi; Yoshitake, Kazutoshi; Gojobori, Takashi; Ikeo, Kazuho

    2018-01-01

    A major challenge in analyzing data from high-throughput next-generation sequencing (NGS) is how to handle the huge amounts of data and the variety of NGS tools, and how to visualize the resultant outputs. To address these issues, we developed a cloud-based data analysis platform, Maser (Management and Analysis System for Enormous Reads), and an original genome browser, Genome Explorer (GE). Maser enables users to manage up to 2 terabytes of data and to conduct analyses with easy graphical user interface operations, offering analysis pipelines in which several individual tools are combined as a single pipeline for very common and standard analyses. GE automatically visualizes genome assembly and mapping results output from Maser pipelines, without requiring additional data upload. With this function, the Maser pipelines can graphically display the results output from all the embedded tools and mapping results in a web browser. Maser therefore provides a more user-friendly analysis platform, especially for beginners, by improving graphical display and providing selected standard pipelines that work with the built-in genome browser. In addition, all analyses executed on Maser are recorded in the analysis history, helping users to trace and repeat analyses. The entire process of analysis and its history can be shared with collaborators or opened to the public. In conclusion, our system is useful for managing, analyzing, and visualizing NGS data and achieves traceability, reproducibility, and transparency of NGS analysis. Database URL: http://cell-innovation.nig.ac.jp/maser/ PMID:29688385

  19. Image analysis tools and emerging algorithms for expression proteomics

    PubMed Central

    English, Jane A.; Lisacek, Frederique; Morris, Jeffrey S.; Yang, Guang-Zhong; Dunn, Michael J.

    2012-01-01

    Since their origins in academic endeavours in the 1970s, computational analysis tools have matured into a number of established commercial packages that underpin research in expression proteomics. In this paper we describe the image analysis pipeline for the established 2-D Gel Electrophoresis (2-DE) technique of protein separation, and by first covering signal analysis for Mass Spectrometry (MS), we also explain the current image analysis workflow for the emerging high-throughput ‘shotgun’ proteomics platform of Liquid Chromatography coupled to MS (LC/MS). The bioinformatics challenges for both methods are illustrated and compared, whilst existing commercial and academic packages and their workflows are described from both a user’s and a technical perspective. Attention is given to the importance of sound statistical treatment of the resultant quantifications in the search for differential expression. Despite wide availability of proteomics software, a number of challenges have yet to be overcome regarding algorithm accuracy, objectivity and automation, generally due to deterministic spot-centric approaches that discard information early in the pipeline, propagating errors. We review recent advances in signal and image analysis algorithms in 2-DE, MS, LC/MS and Imaging MS. Particular attention is given to wavelet techniques, automated image-based alignment and differential analysis in 2-DE, Bayesian peak mixture models and functional mixed modelling in MS, and group-wise consensus alignment methods for LC/MS. PMID:21046614

  20. Seismic fragility formulations for segmented buried pipeline systems including the impact of differential ground subsidence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pineda Porras, Omar Andrey; Ordaz, Mario

    2009-01-01

    Though Differential Ground Subsidence (DGS) impacts the seismic response of segmented buried pipelines, augmenting their vulnerability, fragility formulations to estimate repair rates under such conditions are not available in the literature. Physical models to estimate pipeline seismic damage considering other cases of permanent ground subsidence (e.g. faulting, tectonic uplift, liquefaction, and landslides) have been extensively reported; this is not the case for DGS. The refinement of the study of two important phenomena in Mexico City - the 1985 Michoacan earthquake scenario and the sinking of the city due to ground subsidence - has contributed to the analysis of the interrelation of pipeline damage, ground motion intensity, and DGS; from the analysis of the 48-inch pipeline network of Mexico City's Water System, fragility formulations for segmented buried pipeline systems for two DGS levels are proposed. The novel parameter PGV²/PGA, where PGV is peak ground velocity and PGA is peak ground acceleration, has been used as the seismic parameter in these formulations, since it has shown better correlation with pipeline damage than PGV alone according to previous studies. By comparing the proposed fragilities, it is concluded that a change in the DGS level (from Low-Medium to High) could increase pipeline repair rates (number of repairs per kilometer) by factors ranging from 1.3 to 2.0, with higher seismic intensities corresponding to lower factors.
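
    A fragility relation of the form RR = a * (PGV^2/PGA)^b can be fitted by linear regression in log-log space. The points below are invented for illustration; they are not the Mexico City 48-inch network data.

        import numpy as np

        param = np.array([5.0, 10.0, 20.0, 40.0, 80.0])         # PGV^2/PGA
        repair_rate = np.array([0.02, 0.05, 0.11, 0.27, 0.60])  # repairs/km

        b, log_a = np.polyfit(np.log(param), np.log(repair_rate), 1)
        a = float(np.exp(log_a))
        print(f"RR ~= {a:.4f} * (PGV^2/PGA)^{b:.2f}")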

  1. A comparison of sequencing platforms and bioinformatics pipelines for compositional analysis of the gut microbiome.

    PubMed

    Allali, Imane; Arnold, Jason W; Roach, Jeffrey; Cadenas, Maria Belen; Butz, Natasha; Hassan, Hosni M; Koci, Matthew; Ballou, Anne; Mendoza, Mary; Ali, Rizwana; Azcarate-Peril, M Andrea

    2017-09-13

    Advancements in Next Generation Sequencing (NGS) technologies regarding throughput, read length and accuracy had a major impact on microbiome research by significantly improving 16S rRNA amplicon sequencing. As rapid improvements in sequencing platforms and new data analysis pipelines are introduced, it is essential to evaluate their capabilities in specific applications. The aim of this study was to assess whether the same project-specific biological conclusions regarding microbiome composition could be reached using different sequencing platforms and bioinformatics pipelines. Chicken cecum microbiome was analyzed by 16S rRNA amplicon sequencing using Illumina MiSeq, Ion Torrent PGM, and Roche 454 GS FLX Titanium platforms, with standard and modified protocols for library preparation. We labeled the bioinformatics pipelines included in our analysis QIIME1 and QIIME2 (de novo OTU picking [not to be confused with QIIME version 2 commonly referred to as QIIME2]), QIIME3 and QIIME4 (open reference OTU picking), UPARSE1 and UPARSE2 (each pair differs only in the use of chimera depletion methods), and DADA2 (for Illumina data only). GS FLX+ yielded the longest reads and highest quality scores, while MiSeq generated the largest number of reads after quality filtering. Declines in quality scores were observed starting at bases 150-199 for GS FLX+ and bases 90-99 for MiSeq. Scores were stable for PGM-generated data. Overall microbiome compositional profiles were comparable between platforms; however, average relative abundance of specific taxa varied depending on sequencing platform, library preparation method, and bioinformatics analysis. Specifically, QIIME with de novo OTU picking yielded the highest number of unique species and alpha diversity was reduced with UPARSE and DADA2 compared to QIIME. The three platforms compared in this study were capable of discriminating samples by treatment, despite differences in diversity and abundance, leading to similar biological conclusions. Our results demonstrate that while there were differences in depth of coverage and phylogenetic diversity, all workflows revealed comparable treatment effects on microbial diversity. To increase reproducibility and reliability and to retain consistency between similar studies, it is important to consider the impact on data quality and relative abundance of taxa when selecting NGS platforms and analysis tools for microbiome studies.
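
    Several of the comparisons above hinge on alpha diversity. As a minimal reference point, the sketch below computes the Shannon index from a vector of OTU or ASV counts with NumPy; the two toy count tables stand in for the same sample as summarized by two different pipelines and are invented for illustration.

      import numpy as np

      def shannon_index(counts):
          """Shannon alpha diversity H = -sum(p_i * ln p_i) over nonzero counts."""
          counts = np.asarray(counts, dtype=float)
          p = counts[counts > 0] / counts.sum()
          return float(-(p * np.log(p)).sum())

      # Toy count tables for one sample as reported by two hypothetical pipelines:
      print(shannon_index([120, 80, 40, 10, 5]))   # more units, e.g. de novo OTU picking
      print(shannon_index([150, 90, 15]))          # fewer units, e.g. after denoising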

  2. Social cost impact assessment of pipeline infrastructure projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, John C., E-mail: matthewsj@battelle.org; Allouche, Erez N., E-mail: allouche@latech.edu; Sterling, Raymond L., E-mail: sterling@latech.edu

    A key advantage of trenchless construction methods compared with traditional open-cut methods is their ability to install or rehabilitate underground utility systems with limited disruption to the surrounding built and natural environments. The equivalent monetary values of these disruptions are commonly called social costs. Social costs are often ignored by engineers or project managers during project planning and design phases, partially because they cannot be calculated using standard estimating methods. In recent years some approaches for estimating social costs have been presented; nevertheless, the cost data needed for validation of these estimating methods are lacking. Development of such social cost databases can be accomplished by compiling relevant information reported in various case histories. This paper identifies the eight most important social cost categories, presents mathematical methods for calculating them, and summarizes the social cost impacts for two pipeline construction projects. The case histories are analyzed in order to identify trends for the various social cost categories. The effectiveness of the methods used to estimate these values is also discussed. These findings are valuable for pipeline infrastructure engineers making renewal technology selection decisions by providing a more accurate process for the assessment of social costs and impacts. - Highlights: • Identified the eight most important social cost factors for pipeline construction • Presented mathematical methods for calculating those social cost factors • Summarized social cost impacts for two pipeline construction projects • Analyzed those projects to identify trends for the social cost factors.

  3. Hydrocarbonaceous material upgrading method

    DOEpatents

    Brecher, Lee E.; Mones, Charles G.; Guffey, Frank D.

    2015-06-02

    A hydrocarbonaceous material upgrading method may involve a novel combination of heating, vaporizing and chemically reacting hydrocarbonaceous feedstock that is substantially unpumpable at pipeline conditions, and condensation of vapors yielded thereby, in order to upgrade that feedstock to a hydrocarbonaceous material condensate that meets crude oil pipeline specification.

  4. Oman-India pipeline route survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mullee, J.E.

    1995-12-01

    Paper describes the geological setting in the Arabian Sea for a proposed 28-inch gas pipeline from Oman to India reaching 3,500-m water depths. Covers planning, execution, quality control and results of geophysical, geotechnical and oceanographic surveys. Outlines theory and application of pipeline stress analysis on board survey vessel for feasibility assessment, and specifies equipment used.

  5. NCBI prokaryotic genome annotation pipeline.

    PubMed

    Tatusova, Tatiana; DiCuccio, Michael; Badretdin, Azat; Chetvernin, Vyacheslav; Nawrocki, Eric P; Zaslavsky, Leonid; Lomsadze, Alexandre; Pruitt, Kim D; Borodovsky, Mark; Ostell, James

    2016-08-19

    Recent technological advances have opened unprecedented opportunities for large-scale sequencing and analysis of populations of pathogenic species in disease outbreaks, as well as for large-scale diversity studies aimed at expanding our knowledge across the whole domain of prokaryotes. To meet the challenge of timely interpretation of structure, function and meaning of this vast genetic information, a comprehensive approach to automatic genome annotation is critically needed. In collaboration with Georgia Tech, NCBI has developed a new approach to genome annotation that combines alignment based methods with methods of predicting protein-coding and RNA genes and other functional elements directly from sequence. A new gene finding tool, GeneMarkS+, uses the combined evidence of protein and RNA placement by homology as an initial map of annotation to generate and modify ab initio gene predictions across the whole genome. Thus, the new NCBI's Prokaryotic Genome Annotation Pipeline (PGAP) relies more on sequence similarity when confident comparative data are available, while it relies more on statistical predictions in the absence of external evidence. The pipeline provides a framework for generation and analysis of annotation on the full breadth of prokaryotic taxonomy. For additional information on PGAP see https://www.ncbi.nlm.nih.gov/genome/annotation_prok/ and the NCBI Handbook, https://www.ncbi.nlm.nih.gov/books/NBK174280/. Published by Oxford University Press on behalf of Nucleic Acids Research 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  6. Risk analysis of urban gas pipeline network based on improved bow-tie model

    NASA Astrophysics Data System (ADS)

    Hao, M. J.; You, Q. J.; Yue, Z.

    2017-11-01

    The gas pipeline network is a major hazard source in urban areas, and an accident can have grave consequences. In order to understand more clearly the causes and consequences of gas pipeline network accidents, and to develop prevention and mitigation measures, the author puts forward the application of an improved bow-tie model to analyze the risks of urban gas pipeline networks. The improved bow-tie model analyzes accident causes from four aspects (human, materials, environment and management) and consequences from four aspects (casualty, property loss, environment and society), and then quantifies both. Risk identification, risk analysis, risk assessment, risk control, and risk management are clearly shown in the model figures, from which prevention and mitigation measures can be suggested to help reduce the accident rate of the gas pipeline network. The results show that the whole process of an accident can be visually investigated using the bow-tie model, which can also provide reasons for, and predict the consequences of, an unfortunate event. This is of great significance for analyzing leakage failures of gas pipeline networks.

  7. The PREP pipeline: standardized preprocessing for large-scale EEG analysis.

    PubMed

    Bigdely-Shamlo, Nima; Mullen, Tim; Kothe, Christian; Su, Kyung-Min; Robbins, Kay A

    2015-01-01

    The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode.
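
    To make the noisy channel-reference interaction concrete, here is a minimal NumPy sketch of iterative robust referencing: re-estimate the average reference from channels not currently flagged as noisy, then re-check noisiness against that reference. It is a simplified stand-in for the idea behind PREP, not the PREP algorithm itself; the amplitude-deviation criterion and threshold are assumptions.

      import numpy as np

      def robust_reference(eeg, z_thresh=5.0, n_iter=4):
          """Iteratively re-reference an EEG array of shape (channels, samples).
          Simplified sketch of PREP-style robust referencing (not PREP itself)."""
          good = np.ones(eeg.shape[0], dtype=bool)
          for _ in range(n_iter):
              referenced = eeg - eeg[good].mean(axis=0)   # reference from good channels only
              amp = referenced.std(axis=1)                # per-channel amplitude
              med = np.median(amp)
              mad = np.median(np.abs(amp - med)) + 1e-12  # robust spread estimate
              good = np.abs(amp - med) / mad < z_thresh   # re-flag noisy channels
          return eeg - eeg[good].mean(axis=0), good

      rng = np.random.default_rng(0)
      data = rng.standard_normal((32, 5000))
      data[7] *= 40.0                                     # one very noisy channel
      cleaned, good = robust_reference(data)
      print("flagged noisy channels:", np.flatnonzero(~good))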

  8. Experimental study and empirical prediction of fuel flow parameters under air evolution conditions

    NASA Astrophysics Data System (ADS)

    Kitanina, E. E.; Kitanin, E. L.; Bondarenko, D. A.; Kravtsov, P. A.; Peganova, M. M.; Stepanov, S. G.; Zherebzov, V. L.

    2017-11-01

    Air evolution in kerosene flowing under gravity through pipelines with various hydraulic resistances was studied experimentally. The study was conducted at pressures ranging from 0.2 to 1.0 bar and temperatures varying between -20°C and +20°C. Through these experiments, the oversaturation limit beyond which dissolved air starts evolving intensively from the fuel was established, and correlations for calculating pressure losses and air evolution at local loss elements were obtained. A method of calculating two-phase flow behaviour in a tilted pipeline segment with very low mass flow quality and fairly high volume flow quality was developed. The complete set of empirical correlations obtained by experimental analysis was implemented in an engineering code. The software simulation results were repeatedly verified against our experimental findings and Airbus test data, showing that the two-phase flow simulation agrees quite well with the experimental results obtained in complex branched pipelines.

  9. WFIRST: Simulating the Wide-Field Sky

    NASA Astrophysics Data System (ADS)

    Peeples, Molly; WFIRST Wide Field Imager Simulations Working Group

    2018-01-01

    Simulated data will play a vital role in the planning for and analysis of data from WFIRST’s WFI (Wide Field Imager) instrument, astronomy’s first high-resolution wide-field multi-mode instrument. Part of the key to WFIRST’s scientific success lies in our ability to push the systematics limit, but in order to do so, the WFI pipeline will need to be able to measure and remove said systematics. The efficacy of this pipeline can only be verified with large suites of synthetic data; these data must include both the range of astrophysical sky scenes (from crowded starfields to high-latitude grism observations) and the systematics from the detector and telescope optics that the WFI pipeline aims to mitigate. We summarize here (1) the status of current and planned astrophysical simulations in support of the WFI, (2) the status of current WFI instrument simulators and requirements on future generations thereof, and (3) plans, methods, and requirements on interfacing astrophysical simulations and WFI instrument simulators.

  10. SimVascular: An Open Source Pipeline for Cardiovascular Simulation.

    PubMed

    Updegrove, Adam; Wilson, Nathan M; Merkow, Jameson; Lan, Hongzhi; Marsden, Alison L; Shadden, Shawn C

    2017-03-01

    Patient-specific cardiovascular simulation has become a paradigm in cardiovascular research and is emerging as a powerful tool in basic, translational and clinical research. In this paper we discuss the recent development of a fully open-source SimVascular software package, which provides a complete pipeline from medical image data segmentation to patient-specific blood flow simulation and analysis. This package serves as a research tool for cardiovascular modeling and simulation, and has contributed to numerous advances in personalized medicine, surgical planning and medical device design. The SimVascular software has recently been refactored and expanded to enhance functionality, usability, efficiency and accuracy of image-based patient-specific modeling tools. Moreover, SimVascular previously required several licensed components that hindered new user adoption and code management and our recent developments have replaced these commercial components to create a fully open source pipeline. These developments foster advances in cardiovascular modeling research, increased collaboration, standardization of methods, and a growing developer community.

  11. Accident Prevention and Diagnostics of Underground Pipeline Systems

    NASA Astrophysics Data System (ADS)

    Trokhimchuk, M.; Bakhracheva, Y.

    2017-11-01

    Up to forty thousand accidents occur annually on underground pipelines due to corrosion. A comparison of methods for assessing the quality of anti-corrosion coatings is provided. It is proposed to use a device for tie-in to an existing pipeline that offers higher functionality than other types of device owing to the possibility of tie-in to pipelines of different diameters. Existing technologies and applied materials allow industrial production of the proposed device to be organized.

  12. MRI-compatible pipeline for three-dimensional MALDI imaging mass spectrometry using PAXgene fixation.

    PubMed

    Oetjen, Janina; Aichler, Michaela; Trede, Dennis; Strehlow, Jan; Berger, Judith; Heldmann, Stefan; Becker, Michael; Gottschalk, Michael; Kobarg, Jan Hendrik; Wirtz, Stefan; Schiffler, Stefan; Thiele, Herbert; Walch, Axel; Maass, Peter; Alexandrov, Theodore

    2013-09-02

    MALDI imaging mass spectrometry (MALDI-imaging) has emerged as a spatially-resolved label-free bioanalytical technique for direct analysis of biological samples and was recently introduced for analysis of 3D tissue specimens. We present a new experimental and computational pipeline for molecular analysis of tissue specimens which integrates 3D MALDI-imaging, magnetic resonance imaging (MRI), and histological staining and microscopy, and evaluate the pipeline by applying it to analysis of a mouse kidney. To ensure sample integrity and reproducible sectioning, we utilized PAXgene fixation and paraffin embedding and proved its compatibility with MRI. Altogether, 122 serial sections of the kidney were analyzed using MALDI-imaging, resulting in a 3D dataset of 200 GB comprising 2 million spectra. We show that elastic image registration better compensates for local distortions of tissue sections. The computational analysis of 3D MALDI-imaging data was performed using our spatial segmentation pipeline which determines regions of distinct molecular composition and finds m/z-values co-localized with these regions. To facilitate interpretation of the 3D distribution of ions, we evaluated isosurfaces, which provide a simplified visualization. We present the data in a multimodal fashion combining 3D MALDI-imaging with the MRI volume rendering and with light microscopic images of histologically stained sections. Our novel experimental and computational pipeline for 3D MALDI-imaging can be applied to address clinical questions such as proteomic analysis of tumor morphologic heterogeneity. Examining the protein distribution as well as the drug distribution throughout an entire tumor using our pipeline will facilitate understanding of the molecular mechanisms of carcinogenesis. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. High-throughput Analysis of Large Microscopy Image Datasets on CPU-GPU Cluster Platforms

    PubMed Central

    Teodoro, George; Pan, Tony; Kurc, Tahsin M.; Kong, Jun; Cooper, Lee A. D.; Podhorszki, Norbert; Klasky, Scott; Saltz, Joel H.

    2014-01-01

    Analysis of large pathology image datasets offers significant opportunities for the investigation of disease morphology, but the resource requirements of analysis pipelines limit the scale of such studies. Motivated by a brain cancer study, we propose and evaluate a parallel image analysis application pipeline for high throughput computation of large datasets of high resolution pathology tissue images on distributed CPU-GPU platforms. To achieve efficient execution on these hybrid systems, we have built runtime support that allows us to express the cancer image analysis application as a hierarchical data processing pipeline. The application is implemented as a coarse-grain pipeline of stages, where each stage may be further partitioned into another pipeline of fine-grain operations. The fine-grain operations are efficiently managed and scheduled for computation on CPUs and GPUs using performance aware scheduling techniques along with several optimizations, including architecture aware process placement, data locality conscious task assignment, data prefetching, and asynchronous data copy. These optimizations are employed to maximize the utilization of the aggregate computing power of CPUs and GPUs and minimize data copy overheads. Our experimental evaluation shows that the cooperative use of CPUs and GPUs achieves significant improvements on top of GPU-only versions (up to 1.6×) and that the execution of the application as a set of fine-grain operations provides more opportunities for runtime optimizations and attains better performance than coarser-grain, monolithic implementations used in other works. An implementation of the cancer image analysis pipeline using the runtime support was able to process an image dataset consisting of 36,848 4Kx4K-pixel image tiles (about 1.8TB uncompressed) in less than 4 minutes (150 tiles/second) on 100 nodes of a state-of-the-art hybrid cluster system. PMID:25419546

  14. Coincidental match of numerical simulation and physics

    NASA Astrophysics Data System (ADS)

    Pierre, B.; Gudmundsson, J. S.

    2010-08-01

    Consequences of rapid pressure transients in pipelines range from increased fatigue to leakages and complete rupture of the pipeline. Accurate prediction of rapid pressure transients in pipelines by numerical simulation is therefore critical. State-of-the-art modelling of pressure transients in general, and of water hammer in particular, includes unsteady friction in addition to the steady frictional pressure drop, and numerical simulations rely on the method of characteristics. Comparison of rapid pressure transient calculations by the method of characteristics and a selected high-resolution finite volume method highlights issues related to the modelling of pressure waves and illustrates that matches between numerical simulations and physics are purely coincidental.
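
    Since the record contrasts solvers without showing one, here is a minimal method-of-characteristics water-hammer sketch for a reservoir-pipe-valve system with steady friction only (no unsteady friction term). The geometry, wave speed and friction factor are invented for the demonstration.

      import numpy as np

      # Reservoir -> frictional pipe -> valve that closes instantly at t = 0,
      # solved by the method of characteristics (MOC) with steady friction only.
      a, g = 1200.0, 9.81                 # wave speed (m/s), gravity (m/s^2)
      L, D, f, H0 = 1000.0, 0.5, 0.02, 50.0
      N = 50                              # number of reaches
      A = np.pi * D**2 / 4
      dx = L / N
      dt = dx / a                         # MOC (Courant) time step
      B = a / (g * A)
      R = f * dx / (2 * g * D * A**2)     # friction coefficient per reach

      Q = np.full(N + 1, A * 1.0)         # steady initial flow at 1 m/s
      H = H0 - np.cumsum(np.r_[0.0, np.full(N, R * Q[0] * abs(Q[0]))])

      for _ in range(400):                # about 6.7 s of simulated time
          CP = H[:-1] + B * Q[:-1] - R * Q[:-1] * np.abs(Q[:-1])  # C+ characteristics
          CM = H[1:] - B * Q[1:] + R * Q[1:] * np.abs(Q[1:])      # C- characteristics
          Hn, Qn = H.copy(), Q.copy()
          Hn[1:-1] = 0.5 * (CP[:-1] + CM[1:])
          Qn[1:-1] = (CP[:-1] - CM[1:]) / (2 * B)
          Qn[0], Hn[0] = (H0 - CM[0]) / B, H0     # upstream reservoir boundary
          Qn[-1], Hn[-1] = 0.0, CP[-1]            # closed-valve boundary: Q = 0
          Q, H = Qn, Hn

      print(f"head at the valve after {400 * dt:.2f} s: {H[-1]:.1f} m")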

  15. Numerical Simulation of Pipeline Deformation Caused by Rockfall Impact

    PubMed Central

    Liang, Zheng; Han, Chuanjun

    2014-01-01

    Rockfall impact is one of the fatal hazards in pipeline transportation of oil and gas. The deformation of oil and gas pipelines caused by rockfall impact was investigated using the finite element method in this paper. Pipeline deformations under radial impact, longitudinal inclined impact, transverse inclined impact, and lateral eccentric impact of spherical and cube rockfalls were discussed, respectively. The effects of impact angle and eccentricity on the plastic strain of the pipeline were analyzed. The results show that the crater on the pipeline caused by spherical rockfall impact is deeper than that caused by cube rockfall impact of the same volume. Under inclined impact, the maximum plastic strain of the crater caused by spherical rockfall impact appears when the incidence angle α is 45°. The pipeline is prone to rupture under cube rockfall impact when α is small. The plastic strain distribution of the impact crater becomes more uneven with increasing impact angle. Under eccentric impact, the plastic strain zone of the pipeline decreases with increasing eccentricity k. PMID:24959599

  16. Hydrocarbonaceous material processing methods and apparatus

    DOEpatents

    Brecher, Lee E [Laramie, WY

    2011-07-12

    Methods and apparatus are disclosed for possibly producing pipeline-ready heavy oil from substantially non-pumpable oil feeds. The methods and apparatus may be designed to produce such pipeline-ready heavy oils in the production field. Such methods and apparatus may involve thermal soaking of liquid hydrocarbonaceous inputs in thermal environments (2) to generate, through chemical reaction, an increased distillate amount as compared with conventional boiling technologies.

  17. From Description to Explanation: An Empirical Exploration of the African-American Pipeline Problem in STEM

    ERIC Educational Resources Information Center

    Brown, Bryan A.; Henderson, J. Bryan; Gray, Salina; Donovan, Brian; Sullivan, Shayna; Patterson, Alexis; Waggstaff, William

    2016-01-01

    We conducted a mixed-methods study of matriculation issues for African-Americans in the STEM pipeline. The project compares the experiences of students currently majoring in science (N = 304) with the experiences of those who have succeeded in earning science degrees (N = 307). Participants were surveyed about their pipeline experiences based on…

  18. Programmable calculator uses equation to figure steady-state gas-pipeline flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holmberg, E.

    Because it is accurate and consistent over a wide range of variables, the Colebrook-White (C-W) formula serves as the basis for many methods of calculating turbulent flow in gas pipelines. Oilconsult reveals a simple way to adapt the C-W formula to calculate steady-state pipeline flow using the TI-59 programmable calculator.
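
    The TI-59 program itself is not reproduced in the record, but the underlying calculation ports directly to any modern language. A hedged Python equivalent: solve the Colebrook-White equation for the Darcy friction factor by fixed-point iteration on 1/sqrt(f); the Reynolds number and relative roughness below are illustrative.

      import math

      def colebrook_friction(reynolds, rel_roughness):
          """Darcy friction factor f from the Colebrook-White equation
          1/sqrt(f) = -2 log10(eps/(3.7 D) + 2.51/(Re sqrt(f))),
          solved by fixed-point iteration on x = 1/sqrt(f)."""
          x = 8.0                                   # initial guess (f ~ 0.016)
          for _ in range(50):
              x_new = -2.0 * math.log10(rel_roughness / 3.7 + 2.51 * x / reynolds)
              if abs(x_new - x) < 1e-12:            # converged
                  break
              x = x_new
          return 1.0 / x**2

      # Typical gas-pipeline regime: Re = 5e6, relative roughness 1e-4
      print(f"f = {colebrook_friction(5e6, 1e-4):.5f}")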

  19. Quantitative analysis of factors that affect oil pipeline network accident based on Bayesian networks: A case study in China

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Qin, Ting Xin; Huang, Shuai; Wu, Jian Song; Meng, Xin Yan

    2018-06-01

    The consequences of an oil pipeline accident are affected by many factors, and their effects should be analyzed to improve emergency preparedness and emergency response. Although some qualitative models of risk factors' effects exist, quantitative models still need to be developed. In this study, we introduce a Bayesian network (BN) model for analyzing risk factors' effects in an oil pipeline accident case that happened in China. The incident evolution diagram is built to identify the risk factors, and the BN model is built based on the deployment rule for factor nodes in the BN and on expert knowledge via Dempster-Shafer evidence theory. The probabilities of incident consequences and risk factors' effects can then be calculated. The most likely consequences given by this model are consistent with the case. Meanwhile, the quantitative estimates of risk factors' effects may provide a theoretical basis for taking optimal risk treatment measures in oil pipeline management, which can be used in emergency preparation and emergency response.
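
    The paper's network is built from an incident evolution diagram and expert elicitation; the toy sketch below only illustrates the mechanics of computing a consequence probability by enumerating a two-parent Bayesian network. Node names and all probabilities are invented.

      # Toy two-factor Bayesian network: LeakSize -> Consequence <- EmergencyResponse.
      # All CPT entries are invented for illustration; the paper's network is far larger.
      p_leak_large = 0.2
      p_resp_good = 0.7
      # P(severe consequence | leak size, response quality)
      p_severe = {('large', 'good'): 0.30, ('large', 'poor'): 0.80,
                  ('small', 'good'): 0.02, ('small', 'poor'): 0.15}

      total = 0.0
      for leak, p_l in (('large', p_leak_large), ('small', 1 - p_leak_large)):
          for resp, p_r in (('good', p_resp_good), ('poor', 1 - p_resp_good)):
              total += p_l * p_r * p_severe[(leak, resp)]   # marginalize both parents
      print(f"P(severe consequence) = {total:.3f}")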

  20. voom: precision weights unlock linear model analysis tools for RNA-seq read counts

    PubMed Central

    2014-01-01

    New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods. PMID:24485249

  1. voom: Precision weights unlock linear model analysis tools for RNA-seq read counts.

    PubMed

    Law, Charity W; Chen, Yunshun; Shi, Wei; Smyth, Gordon K

    2014-02-03

    New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods.

  2. Digital algorithms for parallel pipelined single-detector homodyne fringe counting in laser interferometry

    NASA Astrophysics Data System (ADS)

    Rerucha, Simon; Sarbort, Martin; Hola, Miroslava; Cizek, Martin; Hucl, Vaclav; Cip, Ondrej; Lazar, Josef

    2016-12-01

    Homodyne detection with only a single detector represents a promising approach in interferometric applications, enabling a significant reduction of optical system complexity while preserving the fundamental resolution and dynamic range of single-frequency laser interferometers. We present the design, implementation and analysis of algorithmic methods for computational processing of the single-detector interference signal based on parallel pipelined processing suitable for real-time implementation on a programmable hardware platform (e.g. an FPGA - Field Programmable Gate Array - or an SoC - System on Chip). The algorithmic methods incorporate (a) the single-detector signal (sine) scaling, filtering, demodulation and mixing necessary for reconstruction of the second (cosine) quadrature signal, followed by a conic section projection in the Cartesian plane, as well as (b) the phase unwrapping together with the goniometric and linear transformations needed for scale linearization and periodic error correction. The digital computing scheme was designed for bandwidths up to tens of megahertz, which would allow displacements to be measured at velocities around half a metre per second. The algorithmic methods were tested in real-time operation with a PC-based reference implementation that exploited the advantages of pipelined processing by balancing the computational load among multiple processor cores. The results indicate that the algorithmic methods are suitable for a wide range of applications [3] and that they bring fringe-counting interferometry closer to industrial application thanks to their optical setup simplicity and robustness, computational stability, scalability and cost-effectiveness.
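
    As a software-only illustration of the single-detector idea, the NumPy/SciPy sketch below reconstructs the missing cosine quadrature with a Hilbert transform, unwraps the phase, and converts it to displacement for a simulated double-pass Michelson signal. The FPGA pipeline in the paper reconstructs the quadrature differently (scaling, filtering, demodulation and mixing); the wavelength, sample rate and motion below are invented.

      import numpy as np
      from scipy.signal import hilbert

      lam = 633e-9                          # laser wavelength (m), illustrative
      fs = 50e6                             # sample rate (Hz)
      t = np.arange(100_000) / fs
      disp = 0.5 * t                        # target moving at 0.5 m/s
      phase = 4 * np.pi * disp / lam        # double-pass interferometer phase
      fringe = np.sin(phase)                # the single measured detector signal

      analytic = hilbert(fringe)            # fringe + j * (quadrature estimate)
      est_phase = np.unwrap(np.angle(analytic))
      est_disp = (est_phase - est_phase[0]) * lam / (4 * np.pi)
      print(f"recovered velocity ~ {est_disp[-1] / t[-1]:.3f} m/s")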

  3. PhenStat | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    PhenStat is a freely available R package that provides a variety of statistical methods for the identification of phenotypic associations in model organisms, developed for the International Mouse Phenotyping Consortium (IMPC, www.mousephenotype.org). The methods have been developed for high-throughput phenotyping pipelines implemented across various experimental designs, with an emphasis on managing temporal variation, and are being adapted for analysis with PDX mouse strains.

  4. Sub-soil contamination due to oil spills in zones surrounding oil pipeline-pump stations and oil pipeline right-of-ways in Southwest-Mexico.

    PubMed

    Iturbe, Rosario; Flores, Carlos; Castro, Alejandrina; Torres, Luis G

    2007-10-01

    Oil spills from pipelines are a very frequent problem in Mexico. Petroleos Mexicanos (PEMEX), very concerned with the environmental agenda, has been developing inspection and correction plans for zones around oil pipeline pumping stations and pipeline rights-of-way. These stations are located at regular intervals of kilometres along the pipelines. In this study, two sections of an oil pipeline and two pipeline pumping station zones are characterized in terms of the presence of Total Petroleum Hydrocarbons (TPHs) and Polycyclic Aromatic Hydrocarbons (PAHs). The study comprises sampling of the areas, delimitation of the vertical and horizontal extent of contamination, analysis of the sampled soils for TPH content and, in some cases, for the 16 PAHs considered priority pollutants by the USEPA, calculation of contaminated areas and volumes (according to Mexican legislation, specifically NOM-EM-138-ECOL-2002) and, finally, a proposal for the remediation techniques best suited to the contamination levels and the localization of contaminants.

  5. Formation and representation: Critical analyses of identity, supply, and demand in science, technology, engineering, and mathematics

    NASA Astrophysics Data System (ADS)

    Mandayam Doddamane, Prabha

    2011-12-01

    Considerable research, policy, and programmatic efforts have been dedicated to addressing the participation of particular populations in STEM for decades. Each of these efforts claims equity-related goals; yet, they heavily frame the problem, through pervasive STEM pipeline model discourse, in terms of national needs, workforce supply, and competitiveness. This particular framing of the problem may, indeed, be counter to equity goals, especially when paired with policy that largely relies on statistical significance and broad aggregation of data over exploring the identities and experiences of the populations targeted for equitable outcomes in that policy. In this study, I used the mixed-methods approach of critical discourse and critical quantitative analyses to understand how the pipeline model ideology has become embedded within academic discourse, research, and data surrounding STEM education and work and to provide alternatives for quantitative analysis. Using critical theory as a lens, I first conducted a critical discourse analysis of contemporary STEM workforce studies with a particular eye to pipeline ideology. Next, I used that analysis to inform logistic regression analyses of the 2006 SESTAT data. This quantitative analysis compared and contrasted different ways of thinking about identity and retention. Overall, the findings of this study show that many subjective choices are made in the construction of the large-scale datasets used to inform much national science and engineering policy and that these choices greatly influence likelihood of retention outcomes.

  6. Mathematical simulation for compensation capacities area of pipeline routes in ship systems

    NASA Astrophysics Data System (ADS)

    Ngo, G. V.; Sakhno, K. N.

    2018-05-01

    In this paper, the authors consider the problem of enhancing the manufacturability of ship system pipelines at the design stage. An analysis of arrangements and possibilities for compensating deviations of pipeline routes has been carried out. The task was set to produce the “fit pipe” together with the rest of the pipes in the route. It was proposed to compensate for deviations by movement of the pipeline route during pipe installation and to calculate the maximum values of these displacements in the analyzed path. Theoretical foundations for compensating deviations of pipeline routes by rotating pairs of parallel pipe sections are assembled. Mathematical and graphical simulations of the compensation capacity areas of pipeline routes with various configurations are completed. Prerequisites have been created for an automated program that will allow one to determine the values of the compensatory capacity area for pipeline routes and to assign the quantities of necessary allowances.

  7. Influence of Anchoring on Burial Depth of Submarine Pipelines

    PubMed Central

    Zhuang, Yuan; Li, Yang; Su, Wei

    2016-01-01

    Since the beginning of the twenty-first century, there has been widespread construction of submarine oil-gas transmission pipelines due to an increase in offshore oil exploration. Vessel anchoring operations are causing more damage to submarine pipelines as shipping traffic also increases. Therefore, it is essential to determine the influence of anchoring on the required burial depth of submarine pipelines. In this paper, mathematical models for ordinary anchoring and emergency anchoring have been established to derive an anchor impact energy equation for each condition. The required effective burial depth for submarine pipelines has then been calculated via an energy absorption equation for the protection layer covering the submarine pipelines. Finally, the results of the model calculation have been verified by accident case analysis, and the impact of the anchoring height, anchoring water depth and anchor weight on the required burial depth of submarine pipelines has been further analyzed. PMID:27166952
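
    The structure of the energy balance can be shown in a few lines: the anchor's kinetic energy on reaching the seabed must be absorbed by the protective cover above the pipe. The sketch below assumes a constant absorption capacity per metre of cover depth; the anchor mass, impact velocity and absorption value are invented, and the paper's models include effects (anchoring height, water depth, anchor geometry) omitted here.

      # Back-of-envelope burial-depth estimate from an impact-energy balance.
      m_anchor = 5000.0                    # anchor mass (kg), illustrative
      v_impact = 4.0                       # velocity on reaching the seabed (m/s), illustrative
      E_impact = 0.5 * m_anchor * v_impact**2      # kinetic energy at impact (J)

      E_per_m = 60e3                       # assumed cover absorption per metre of depth (J/m)
      depth = E_impact / E_per_m
      print(f"required effective burial depth ~ {depth:.2f} m")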

  8. A pipeline for the de novo assembly of the Themira biloba (Sepsidae: Diptera) transcriptome using a multiple k-mer length approach.

    PubMed

    Melicher, Dacotah; Torson, Alex S; Dworkin, Ian; Bowsher, Julia H

    2014-03-12

    The Sepsidae family of flies is a model for investigating how sexual selection shapes courtship and sexual dimorphism in a comparative framework. However, like many non-model systems, there are few molecular resources available. Large-scale sequencing and assembly have not been performed in any sepsid, and the lack of a closely related genome makes investigation of gene expression challenging. Our goal was to develop an automated pipeline for de novo transcriptome assembly, and to use that pipeline to assemble and analyze the transcriptome of the sepsid Themira biloba. Our bioinformatics pipeline uses cloud computing services to assemble and analyze the transcriptome with off-site data management, processing, and backup. It uses a multiple k-mer length approach combined with a second meta-assembly to extend transcripts and recover more bases of transcript sequences than standard single k-mer assembly. We used 454 sequencing to generate 1.48 million reads from cDNA generated from embryo, larva, and pupae of T. biloba and assembled a transcriptome consisting of 24,495 contigs. Annotation identified 16,705 transcripts, including those involved in embryogenesis and limb patterning. We assembled transcriptomes from an additional three non-model organisms to demonstrate that our pipeline assembled a higher-quality transcriptome than single k-mer approaches across multiple species. The pipeline we have developed for assembly and analysis increases contig length, recovers unique transcripts, and assembles more base pairs than other methods through the use of a meta-assembly. The T. biloba transcriptome is a critical resource for performing large-scale RNA-Seq investigations of gene expression patterns, and is the first transcriptome sequenced in this Dipteran family.

  9. Benchmark datasets for phylogenomic pipeline validation, applications for foodborne pathogen surveillance

    PubMed Central

    Rand, Hugh; Shumway, Martin; Trees, Eija K.; Simmons, Mustafa; Agarwala, Richa; Davis, Steven; Tillman, Glenn E.; Defibaugh-Chavez, Stephanie; Carleton, Heather A.; Klimke, William A.; Katz, Lee S.

    2017-01-01

    Background As next generation sequence technology has advanced, there have been parallel advances in genome-scale analysis programs for determining evolutionary relationships as proxies for epidemiological relationship in public health. Most new programs skip traditional steps of ortholog determination and multi-gene alignment, instead identifying variants across a set of genomes, then summarizing results in a matrix of single-nucleotide polymorphisms or alleles for standard phylogenetic analysis. However, public health authorities need to document the performance of these methods with appropriate and comprehensive datasets so they can be validated for specific purposes, e.g., outbreak surveillance. Here we propose a set of benchmark datasets to be used for comparison and validation of phylogenomic pipelines. Methods We identified four well-documented foodborne pathogen events in which the epidemiology was concordant with routine phylogenomic analyses (reference-based SNP and wgMLST approaches). These are ideal benchmark datasets, as the trees, WGS data, and epidemiological data for each are all in agreement. We have placed these sequence data, sample metadata, and “known” phylogenetic trees in publicly-accessible databases and developed a standard descriptive spreadsheet format describing each dataset. To facilitate easy downloading of these benchmarks, we developed an automated script that uses the standard descriptive spreadsheet format. Results Our “outbreak” benchmark datasets represent the four major foodborne bacterial pathogens (Listeria monocytogenes, Salmonella enterica, Escherichia coli, and Campylobacter jejuni) and one simulated dataset where the “known tree” can be accurately called the “true tree”. The downloading script and associated table files are available on GitHub: https://github.com/WGS-standards-and-analysis/datasets. Discussion These five benchmark datasets will help standardize comparison of current and future phylogenomic pipelines, and facilitate important cross-institutional collaborations. Our work is part of a global effort to provide collaborative infrastructure for sequence data and analytic tools—we welcome additional benchmark datasets in our recommended format, and, if relevant, we will add these on our GitHub site. Together, these datasets, dataset format, and the underlying GitHub infrastructure present a recommended path for worldwide standardization of phylogenomic pipelines. PMID:29372115

  10. 77 FR 66454 - Gulf LNG Liquefaction Company, LLC; Application for Long-Term Authorization To Export Liquefied...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-05

    ... integrated U.S. natural gas pipeline system. GLLC notes that due to the Gulf LNG Terminal's direct access to multiple major interstate pipelines and indirect access to the national gas pipeline grid, the Project's... possible impacts that the Export Project might have on natural gas supply and pricing. Navigant's analysis...

  11. Non-biological synthetic spike-in controls and the AMPtk software pipeline improve mycobiome data

    Treesearch

    Jonathan M. Palmer; Michelle A. Jusino; Mark T. Banik; Daniel L. Lindner

    2018-01-01

    High-throughput amplicon sequencing (HTAS) of conserved DNA regions is a powerful technique to characterize microbial communities. Recently, spike-in mock communities have been used to measure accuracy of sequencing platforms and data analysis pipelines. To assess the ability of sequencing platforms and data processing pipelines using fungal internal transcribed spacer...

  12. 77 FR 26760 - Kinder Morgan, Inc.; Analysis of Proposed Agreement Containing Consent Orders To Aid Public Comment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-07

    ... to as natural gas liquids or NGLs. Interstate pipelines have a limit on how much NGLs natural gas can... gas processing plant to remove those liquids before it can be transported on interstate pipelines... Gas Transmission, and Trailblazer pipelines, as well as associated processing and storage capacity. On...

  13. Gender Equality in the Academy: The Pipeline Problem

    ERIC Educational Resources Information Center

    Monroe, Kristen Renwick; Chiu, William F.

    2010-01-01

    As part of the ongoing work by the Committee on the Status of Women in the Profession (CSWP), we offer an empirical analysis of the pipeline problem in academia. The image of a pipeline is a commonly advanced explanation for persistent discrimination that suggests that gender inequality will decline once there are sufficient numbers of qualified…

  14. Analysis of mammalian gene function through broad based phenotypic screens across a consortium of mouse clinics

    PubMed Central

    Adams, David J; Adams, Niels C; Adler, Thure; Aguilar-Pimentel, Antonio; Ali-Hadji, Dalila; Amann, Gregory; André, Philippe; Atkins, Sarah; Auburtin, Aurelie; Ayadi, Abdel; Becker, Julien; Becker, Lore; Bedu, Elodie; Bekeredjian, Raffi; Birling, Marie-Christine; Blake, Andrew; Bottomley, Joanna; Bowl, Mike; Brault, Véronique; Busch, Dirk H; Bussell, James N; Calzada-Wack, Julia; Cater, Heather; Champy, Marie-France; Charles, Philippe; Chevalier, Claire; Chiani, Francesco; Codner, Gemma F; Combe, Roy; Cox, Roger; Dalloneau, Emilie; Dierich, André; Di Fenza, Armida; Doe, Brendan; Duchon, Arnaud; Eickelberg, Oliver; Esapa, Chris T; El Fertak, Lahcen; Feigel, Tanja; Emelyanova, Irina; Estabel, Jeanne; Favor, Jack; Flenniken, Ann; Gambadoro, Alessia; Garrett, Lilian; Gates, Hilary; Gerdin, Anna-Karin; Gkoutos, George; Greenaway, Simon; Glasl, Lisa; Goetz, Patrice; Da Cruz, Isabelle Goncalves; Götz, Alexander; Graw, Jochen; Guimond, Alain; Hans, Wolfgang; Hicks, Geoff; Hölter, Sabine M; Höfler, Heinz; Hancock, John M; Hoehndorf, Robert; Hough, Tertius; Houghton, Richard; Hurt, Anja; Ivandic, Boris; Jacobs, Hughes; Jacquot, Sylvie; Jones, Nora; Karp, Natasha A; Katus, Hugo A; Kitchen, Sharon; Klein-Rodewald, Tanja; Klingenspor, Martin; Klopstock, Thomas; Lalanne, Valerie; Leblanc, Sophie; Lengger, Christoph; le Marchand, Elise; Ludwig, Tonia; Lux, Aline; McKerlie, Colin; Maier, Holger; Mandel, Jean-Louis; Marschall, Susan; Mark, Manuel; Melvin, David G; Meziane, Hamid; Micklich, Kateryna; Mittelhauser, Christophe; Monassier, Laurent; Moulaert, David; Muller, Stéphanie; Naton, Beatrix; Neff, Frauke; Nolan, Patrick M; Nutter, Lauryl MJ; Ollert, Markus; Pavlovic, Guillaume; Pellegata, Natalia S; Peter, Emilie; Petit-Demoulière, Benoit; Pickard, Amanda; Podrini, Christine; Potter, Paul; Pouilly, Laurent; Puk, Oliver; Richardson, David; Rousseau, Stephane; Quintanilla-Fend, Leticia; Quwailid, Mohamed M; Racz, Ildiko; Rathkolb, Birgit; Riet, Fabrice; Rossant, Janet; Roux, Michel; Rozman, Jan; Ryder, Ed; Salisbury, Jennifer; Santos, Luis; Schäble, Karl-Heinz; Schiller, Evelyn; Schrewe, Anja; Schulz, Holger; Steinkamp, Ralf; Simon, Michelle; Stewart, Michelle; Stöger, Claudia; Stöger, Tobias; Sun, Minxuan; Sunter, David; Teboul, Lydia; Tilly, Isabelle; Tocchini-Valentini, Glauco P; Tost, Monica; Treise, Irina; Vasseur, Laurent; Velot, Emilie; Vogt-Weisenhorn, Daniela; Wagner, Christelle; Walling, Alison; Weber, Bruno; Wendling, Olivia; Westerberg, Henrik; Willershäuser, Monja; Wolf, Eckhard; Wolter, Anne; Wood, Joe; Wurst, Wolfgang; Yildirim, Ali Önder; Zeh, Ramona; Zimmer, Andreas; Zimprich, Annemarie

    2015-01-01

    The function of the majority of genes in the mouse and human genomes remains unknown. The mouse ES cell knockout resource provides a basis for characterisation of relationships between gene and phenotype. The EUMODIC consortium developed and validated robust methodologies for broad-based phenotyping of knockouts through a pipeline comprising 20 disease-orientated platforms. We developed novel statistical methods for pipeline design and data analysis aimed at detecting reproducible phenotypes with high power. We acquired phenotype data from 449 mutant alleles, representing 320 unique genes, of which half had no prior functional annotation. We captured data from over 27,000 mice finding that 83% of the mutant lines are phenodeviant, with 65% demonstrating pleiotropy. Surprisingly, we found significant differences in phenotype annotation according to zygosity. Novel phenotypes were uncovered for many genes with unknown function providing a powerful basis for hypothesis generation and further investigation in diverse systems. PMID:26214591

  15. Finite-Element Analysis of Crack Arrest Properties of Fiber Reinforced Composites Application in Semi-Elliptical Cracked Pipelines

    NASA Astrophysics Data System (ADS)

    Wang, Linyuan; Song, Shulei; Deng, Hongbo; Zhong, Kai

    2018-04-01

    Repair using fiber-reinforced composites is nowadays the mainstream pipe repair technology, and it can provide security for long-distance X100 high-grade steel energy pipelines in engineering. In this paper, an analysis of cracked X100 high-grade steel pipe was conducted: the structures of pipes and crack arresters (CAs) were simulated with the ANSYS Workbench finite element software to obtain the J-integral value, and crack arrest effects were evaluated through the elastic-plastic fracture mechanics parameter J-integral and the crack arrest coefficient K, in order to summarize how composite CAs and the dimensions of pipes and cracks affect the repair. The results indicate that the K value is correlated with the laying angle λ, laying length L2/D1 and laying thickness T1/T2 of the CAs, as well as the crack depth c/T1 and crack length a/c; recommended parameters for repair with fiber-reinforced composite CAs are calculated for two different crack forms.

  16. [Comparison of gut microbiotal compositional analysis of patients with irritable bowel syndrome through different bioinformatics pipelines].

    PubMed

    Zhu, S W; Liu, Z J; Li, M; Zhu, H Q; Duan, L P

    2018-04-18

    To assess whether the same biological conclusions and diagnostic or curative effects regarding the microbial composition of irritable bowel syndrome (IBS) patients could be reached through different bioinformatics pipelines, we used two common pipelines (Uparse V2.0 and Mothur V1.39.5) to analyze the same fecal microbial 16S rRNA high-throughput sequencing data. The two pipelines were used to analyze the diversity and richness of fecal microbial 16S rRNA high-throughput sequencing data from 27 samples, including 9 healthy controls (HC group) and 9 diarrhea IBS patients sampled before (IBS group) and after Rifaximin treatment (IBS-treatment, IBSt group). Analyses such as microbial diversity, principal co-ordinates analysis (PCoA), nonmetric multidimensional scaling (NMDS) and linear discriminant analysis effect size (LEfSe) were used to find the microbial differences between the HC and IBS groups and between the IBS and IBSt groups. (1) Comparison of the microbial composition of the 27 samples in the two pipelines showed significant variations at the family and genus levels but not at the phylum level. (2) There was no significant difference in the comparison of HC vs. IBS or IBS vs. IBSt (Uparse: HC vs. IBS, F=0.98, P=0.445; IBS vs. IBSt, F=0.47, P=0.926; Mothur: HC vs. IBS, F=0.82, P=0.646; IBS vs. IBSt, F=0.37, P=0.961); the Shannon index was significantly decreased in IBSt. (3) Both pipelines distinguished the significantly enriched genera between the HC and IBS groups. For example, Nitrosomonas and Paraprevotella increased while Pseudoalteromonadaceae and Anaerotruncus decreased in the HC group through the Uparse pipeline, whereas Roseburia 62 increased while Butyricicoccus and Moraxellaceae decreased in the HC group through the Mothur pipeline. Only the Uparse pipeline could pick out significant genera between IBS and IBSt, such as Pseudobutyricibrio, Clostridiaceae 1 and Clostridiumsensustricto 1. There were taxonomic and phylogenetic diversity differences between the two pipelines; Mothur recovers more taxonomic detail because the count number at each taxonomic level is higher. Both pipelines could distinguish the significantly enriched genera between the HC and IBS groups, but Uparse was more capable of identifying differences between the IBS and IBSt groups. To increase reproducibility and reliability and to retain consistency among similar studies, it is very important to consider the impact of the choice of pipeline.

  17. Ballasting pipeline moving in horizontal well as method of control sticking phenomenon

    NASA Astrophysics Data System (ADS)

    Toropov, V. S.; Toropov, E. S.

    2018-05-01

    The mechanism of sticking of a pipeline moving in a well while being pulled by a horizontal directional drilling rig is investigated, and a quantitative evaluation of the sticking force is given. The working hypothesis is that the combined effect of adhesion and friction interactions causes this phenomenon. As a measure to control the occurrence of sticking and to reduce the resistance to movement of the pipeline in the well, several methods of ballasting the working pipeline are proposed, depending on the profile of the well and the ratio of the lengths of the curved inlet and outlet sections to the straight horizontal sections of the profile. It is shown that for crossings whose profile contains an extended horizontal section, the pipeline can be partially filled with water to achieve zero buoyancy, whereas for crossings with curvature along the entire profile, the ballasting efficiency will be minimal.
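
    The zero-buoyancy idea for the horizontal section reduces to a one-line force balance: the displaced borehole fluid must equal the pipe weight plus the weight of the ballast water. The sketch below solves for the required fill fraction; the pipe dimensions, weights and fluid density are invented.

      import math

      # Partial water fill for neutral buoyancy of a pipe pulled through a
      # fluid-filled borehole. All numbers are illustrative assumptions.
      rho_mud = 1100.0                    # borehole fluid density (kg/m^3)
      D_out, wall = 0.325, 0.009          # pipe OD and wall thickness (m)
      w_pipe = 70.0                       # pipe weight per metre (kg/m, steel + coating)

      A_out = math.pi * D_out**2 / 4
      D_in = D_out - 2 * wall
      A_in = math.pi * D_in**2 / 4

      uplift = rho_mud * A_out            # displaced fluid mass per metre (kg/m)
      fill = (uplift - w_pipe) / (1000.0 * A_in)   # bore fraction of water for zero net force
      print(f"fill fraction for neutral buoyancy: {fill:.2f}")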

  18. Application of Morphological Segmentation to Leaking Defect Detection in Sewer Pipelines

    PubMed Central

    Su, Tung-Ching; Yang, Ming-Der

    2014-01-01

    As one of the major underground pipeline systems, sewerage is an important infrastructure in any modern city. The most common problem occurring in sewerage is leaking, whose position and failure level are typically identified through closed circuit television (CCTV) inspection in order to facilitate the rehabilitation process. This paper proposes a novel computer vision method, morphological segmentation based on edge detection (MSED), to assist inspectors in detecting pipeline defects in CCTV inspection images. In addition to MSED, other image segmentation methods based on mathematical morphology, including the opening top-hat operation (OTHO) and the closing bottom-hat operation (CBHO), were also applied to defect detection in vitrified clay sewer pipelines. CCTV inspection images of the sewer system in the 9th district, Taichung City, Taiwan were selected as the experimental materials. The segmentation results demonstrate that MSED and OTHO are useful for the detection of cracks and open joints, respectively, which are the typical leakage defects found in sewer pipelines. PMID:24841247
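
    OTHO and CBHO map directly onto standard morphological residues, so a brief OpenCV sketch can convey the flavour of the approach (MSED, the paper's edge-based segmentation, is not reproduced here). The file name, kernel size and thresholding choices are assumptions for the demonstration.

      import cv2

      # Top-hat residue highlights bright, locally small features; bottom-hat
      # (black-hat) highlights dark ones. 'frame.png' is a placeholder for one
      # grayscale CCTV inspection frame.
      img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
      kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))

      top_hat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)      # bright residue
      bottom_hat = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel) # dark residue

      # Binarise each residue with Otsu's threshold to obtain candidate defect masks
      _, mask_bright = cv2.threshold(top_hat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
      _, mask_dark = cv2.threshold(bottom_hat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
      cv2.imwrite("mask_bright.png", mask_bright)
      cv2.imwrite("mask_dark.png", mask_dark)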

  19. Diagnostic layer integration in FPGA-based pipeline measurement systems for HEP experiments

    NASA Astrophysics Data System (ADS)

    Pozniak, Krzysztof T.

    2007-08-01

    Integrated triggering and data acquisition systems for high energy physics experiments may be considered as fast, multichannel, synchronous, distributed, pipeline measurement systems. A considerable extension of the functional, technological and monitoring demands recently imposed on them has forced the common use of large field-programmable gate array (FPGA) matrices enhanced with digital signal processing, together with fast optical transmission, for their realization. This paper discusses the modelling, design, realization and testing of pipeline measurement systems. The distribution of synchronous data stream flows in the network is considered, and a general functional structure of a single network node is presented. The suggested novel block structure of the node model facilitates full implementation in the FPGA chip, circuit standardization and parametrization, as well as integration of the functional and diagnostic layers. A general method for pipeline system design is derived, based on a unified model of the synchronous data network node. A few examples of practically realized, FPGA-based pipeline measurement systems are presented; the described systems were applied in the ZEUS and CMS experiments.

  20. CFD analysis of onshore oil pipelines in permafrost

    NASA Astrophysics Data System (ADS)

    Nardecchia, Fabio; Gugliermetti, Luca; Gugliermetti, Franco

    2017-07-01

    Underground pipelines are built all over the world, and knowledge of their thermal interaction with the soil is crucial for their design. This paper studies the "thermal influenced zone" produced by a buried pipeline and the parameters that can influence its extension, using 2D steady-state CFD simulations, with the aim of improving the design of new pipelines in permafrost. In order to represent a real case, the study refers to the Eastern Siberia-Pacific Ocean Oil Pipeline at the three stations of Mo'he, Jiagedaqi and Qiqi'har. Different burial depths and diameters of the pipe are analyzed; the simulation results show that the effect of the oil pipeline diameter on the thermal field increases with distance from the starting station.

  1. Oman India Pipeline: An operational repair strategy based on a rational assessment of risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    German, P.

    1996-12-31

    This paper describes the development of a repair strategy for the operational phase of the Oman India Pipeline based upon the probability and consequences of a pipeline failure. The risk analyses and cost-benefit analyses performed provide guidance on the level of deepwater repair development effort appropriate for the Oman India Pipeline project and identify critical areas toward which more intense development effort should be directed. The risk analysis results indicate that the likelihood of a failure of the Oman India Pipeline during its 40-year life is low. Furthermore, the probability of operational failure of the pipeline in deepwater regions is extremely low, the major proportion of operational failure risk being associated with the shallow water regions.

  2. 49 CFR Appendix E to Part 192 - Guidance on Determining High Consequence Areas and on Carrying out Requirements in the Integrity...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... addressing time dependent and independent threats for a transmission pipeline operating below 30% SMYS not in... pipeline system are covered for purposes of the integrity management program requirements, an operator must... system, or an operator may apply one method to individual portions of the pipeline system. (Refer to...

  3. Numerical research on the lateral global buckling characteristics of a high temperature and pressure pipeline with two initial imperfections

    PubMed Central

    Liu, Wenbin; Liu, Aimin

    2018-01-01

    With the exploitation of offshore oil and gas gradually moving to deep water, higher temperature differences and pressure differences are applied to the pipeline system, making global buckling of the pipeline more serious. For unburied deep-water pipelines, lateral buckling is the major buckling form. Initial imperfections exist widely in pipeline systems due to manufacturing defects or the influence of an uneven seabed, and the distribution and geometry of initial imperfections are random. They can be divided into two kinds based on shape: single-arch imperfections and double-arch imperfections. This paper analyzes the global buckling process of a pipeline with two initial imperfections using a numerical simulation method and reveals how the ratio of the imperfections' spacing to the imperfection wavelength, and the combination of imperfections, affect the buckling process. The results show that a pipeline with two initial imperfections may suffer superposition of global buckling. The growth ratios of buckling displacement, axial force and bending moment in the superposition zone are several times larger than in a pipeline without buckling superposition. The ratio of the imperfections' spacing to the imperfection wavelength decides whether a pipeline suffers buckling superposition. The potential failure point of a pipeline exhibiting buckling superposition is the same as that of a pipeline without superposition, but the failure risk is much higher. The shape and direction of two nearby imperfections also affect the failure risk of a pipeline exhibiting global buckling superposition: the failure risk of a pipeline with two double-arch imperfections is higher than that of a pipeline with two single-arch imperfections. PMID:29554123

  4. Analysis of pipeline transportation systems for carbon dioxide sequestration

    NASA Astrophysics Data System (ADS)

    Witkowski, Andrzej; Majkut, Mirosław; Rulik, Sebastian

    2014-03-01

A commercially available ASPEN PLUS simulation using a pipe model was employed to determine the maximum safe pipeline distances to subsequent booster stations as a function of carbon dioxide (CO2) inlet pressure, ambient temperature and ground-level heat flux parameters under three conditions: isothermal, adiabatic and with heat transfer taken into account. The CO2 working region was assumed to be either the liquid or the supercritical state, and results for these two states were compared. The following power station data were used: a 900 MW pulverized coal-fired power plant with 90% of CO2 recovered (156.43 kg/s) and the monoethanolamine absorption method for separating CO2 from flue gases. The results show that subcooled liquid transport maximizes energy efficiency and minimizes the cost of CO2 transport over long distances under isothermal, adiabatic and heat transfer conditions. After CO2 is compressed and boosted to above 9 MPa, its temperature is usually higher than ambient temperature. A thermal insulation layer slows the cooling of the CO2, increasing the pressure drop in the pipeline. Therefore in Poland, considering the atmospheric conditions, the thermal insulation layer should not be laid on the external surface of the pipeline.
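
    To illustrate the kind of hydraulic calculation that underlies booster-station spacing, the sketch below estimates the pressure drop of dense-phase CO2 over a pipeline segment with the Darcy-Weisbach equation and the Swamee-Jain friction-factor approximation. This is a minimal stand-in, not the ASPEN PLUS model used in the paper; the density, viscosity, diameter and roughness values are illustrative assumptions (only the 156.43 kg/s flow rate comes from the abstract).

      # Minimal sketch: pressure drop of dense-phase CO2 along a pipeline
      # segment via Darcy-Weisbach. Property values are assumptions, not
      # output of the ASPEN PLUS simulation described above.
      import math

      def pressure_drop_pa(m_dot, rho, mu, d, length, roughness=4.5e-5):
          """Darcy-Weisbach pressure drop for single-phase pipe flow."""
          area = math.pi * d**2 / 4.0
          v = m_dot / (rho * area)            # mean velocity, m/s
          re = rho * v * d / mu               # Reynolds number
          # Swamee-Jain explicit approximation of the Colebrook factor
          f = 0.25 / math.log10(roughness / (3.7 * d) + 5.74 / re**0.9) ** 2
          return f * (length / d) * rho * v**2 / 2.0

      # Illustrative case using the paper's 156.43 kg/s CO2 flow rate:
      dp = pressure_drop_pa(m_dot=156.43, rho=850.0, mu=7e-5, d=0.5, length=100e3)
      print(f"pressure drop over 100 km: {dp/1e6:.2f} MPa")

    Repeating such a calculation along the route until the pressure falls to the minimum allowed for single-phase transport gives a rough booster-station spacing.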

  5. Technique of estimation of actual strength of a gas pipeline section at its deformation in landslide action zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tcherni, V.P.

    1996-12-31

A technique is given which permits determination of the stress and strain state (SSS) and estimation of the actual strength of a section of a buried main gas pipeline (GP) in the case of its deformation in a landslide action zone. The technique is based on the use of three-dimensional coordinates of axial points of the deformed GP section, obtained by a full-scale survey. The deformed axis of the surveyed GP section is described by a polynomial. The unknown coefficients of the polynomial can be determined from the boundary conditions at points of connection with contiguous undeformed sections, as well as by use of minimization methods in mathematical processing of the full-scale survey results. The resulting form of the GP section's axis allows one to determine curvatures and, accordingly, bending moments along the entire length of the considered section. The influence of soil resistance to longitudinal displacements of the pipeline is used to determine longitudinal forces. The resulting values of bending moments and axial forces, together with the known value of internal pressure, are used to compute all necessary components of the actual SSS of the pipeline section and to estimate its strength by elastic analysis.
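
    The polynomial-fit step lends itself to a compact illustration. The sketch below fits a polynomial to surveyed axis points, differentiates it twice to obtain curvature, and converts curvature to bending moment via M = E·I·κ. It is a planar (2-D) simplification of the three-dimensional survey described above, and the station coordinates, offsets, modulus and section moment of inertia are all illustrative values.

      # Sketch of the curvature step: fit a polynomial to surveyed axis
      # points of the deformed section, then M(x) = E*I*kappa(x).
      # 2-D simplification; all numbers are illustrative.
      import numpy as np

      x = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])   # stations, m
      y = np.array([0.0, 0.08, 0.25, 0.40, 0.28, 0.10, 0.0])   # offsets, m

      coef = np.polyfit(x, y, deg=5)        # deformed-axis polynomial
      d1 = np.polyder(coef, 1)
      d2 = np.polyder(coef, 2)

      def bending_moment(xq, E=2.06e11, I=2.0e-4):
          """kappa = y'' / (1 + y'^2)^1.5; returns M = E*I*kappa in N*m."""
          yp = np.polyval(d1, xq)
          ypp = np.polyval(d2, xq)
          kappa = ypp / (1.0 + yp**2) ** 1.5
          return E * I * kappa

      xq = np.linspace(0, 30, 7)
      print(np.round(bending_moment(xq) / 1e3, 1))   # kN*m along the section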

  6. The bachelor's to Ph.D. STEM pipeline no longer leaks more women than men: a 30-year analysis.

    PubMed

    Miller, David I; Wai, Jonathan

    2015-01-01

    For decades, research and public discourse about gender and science have often assumed that women are more likely than men to "leak" from the science pipeline at multiple points after entering college. We used retrospective longitudinal methods to investigate how accurately this "leaky pipeline" metaphor has described the bachelor's to Ph.D. transition in science, technology, engineering, and mathematics (STEM) fields in the U.S. since the 1970s. Among STEM bachelor's degree earners in the 1970s and 1980s, women were less likely than men to later earn a STEM Ph.D. However, this gender difference closed in the 1990s. Qualitatively similar trends were found across STEM disciplines. The leaky pipeline metaphor therefore partially explains historical gender differences in the U.S., but no longer describes current gender differences in the bachelor's to Ph.D. transition in STEM. The results help constrain theories about women's underrepresentation in STEM. Overall, these results point to the need to understand gender differences at the bachelor's level and below to understand women's representation in STEM at the Ph.D. level and above. Consistent with trends at the bachelor's level, women's representation at the Ph.D. level has been recently declining for the first time in over 40 years.

  7. Field Performance of Recycled Plastic Foundation for Pipeline

    PubMed Central

    Kim, Seongkyum; Lee, Kwanho

    2015-01-01

    The incidence of failure of embedded pipelines has increased in Korea due to the increasing applied load and the improper compaction of bedding and backfill materials. To overcome these problems, a prefabricated lightweight plastic foundation using recycled plastic was developed for sewer pipelines. A small scale laboratory chamber test and two field tests were conducted to verify its construction workability and performance. From the small scale laboratory chamber test, the applied loads at 2.5% and 5.0% of deformation were 3.45 kgf/cm2 and 5.85 kgf/cm2 for Case S1, and 4.42 kgf/cm2 and 6.43 kgf/cm2 for Case S2, respectively. From the first field test, the vertical deformation of the recycled plastic foundation (Case A2) was very small. According to the analysis based on the PE pipe deformation at the connection (CN) and at the center (CT), the pipe deformation at each part for Case A1 was larger than that for Case A2, which adopted the recycled lightweight plastic foundation. From the second field test, the measured maximum settlements of Case B1 and Case B2 were 1.05 cm and 0.54 cm, respectively. The use of a plastic foundation can reduce the settlement of an embedded pipeline and be an alternative construction method.

  8. A bipolar population counter using wave pipelining to achieve 2.5 x normal clock frequency

    NASA Technical Reports Server (NTRS)

    Wong, Derek C.; De Micheli, Giovanni; Flynn, Michael J.; Huston, Robert E.

    1992-01-01

    Wave pipelining is a technique for pipelining digital systems that can increase clock frequency in practical circuits without increasing the number of storage elements. In wave pipelining, multiple coherent waves of data are sent through a block of combinational logic by applying new inputs faster than the delay through the logic. The throughput of a 63-b CML population counter was increased from 97 to 250 MHz using wave pipelining. The internal circuit is flowthrough combinational logic. Novel CAD methods have balanced all input-to-output paths to about the same delay. This allows multiple data waves to propagate in sequence when the circuit is clocked faster than its propagation delay.
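
    A back-of-envelope way to see the speed-up: in a wave pipeline the clock period must cover the spread between the longest and shortest logic paths (plus register and skew overhead), not the full propagation delay. The sketch below computes the attainable frequency under this bound; the delay values are assumptions chosen to echo the reported 97-to-250 MHz improvement, not figures from the 63-b counter.

      # Wave-pipelining timing bound: T_clk >= (D_max - D_min) + overhead,
      # rather than T_clk >= D_max + overhead. Delay values are illustrative.
      def wave_pipeline_mhz(d_max_ns, d_min_ns, overhead_ns):
          t_clk = (d_max_ns - d_min_ns) + overhead_ns   # lower bound on period
          return 1e3 / t_clk                            # frequency in MHz

      d_max, d_min, ovh = 10.3, 7.0, 0.7
      print(f"conventional limit: {1e3 / (d_max + ovh):.0f} MHz")
      print(f"wave-pipelined:     {wave_pipeline_mhz(d_max, d_min, ovh):.0f} MHz")
      print(f"waves in flight:    {d_max / ((d_max - d_min) + ovh):.1f}")

    This is why the path-balancing CAD step matters: shrinking D_max - D_min directly raises the attainable clock frequency and the number of data waves simultaneously in flight.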

  9. Study on Resources Assessment of Coal Seams covered by Long-Distance Oil & Gas Pipelines

    NASA Astrophysics Data System (ADS)

    Han, Bing; Fu, Qiang; Pan, Wei; Hou, Hanfang

    2018-01-01

The assessment of mineral resources covered by construction projects plays an important role in reducing the overlaying of important mineral resources and ensuring the smooth implementation of construction projects. Taking a planned long-distance gas pipeline as an example, the assessment method and principles for coal resources covered by linear projects are introduced. The areas covered by multiple coal seams are determined according to the linear projection method, and the resources covered by pipelines directly and indirectly are estimated using an area segmentation method on the basis of the original blocks. The research results can provide references for project route optimization and for mining rights compensation.

  10. Implementation of Cloud based next generation sequencing data analysis in a clinical laboratory.

    PubMed

    Onsongo, Getiria; Erdmann, Jesse; Spears, Michael D; Chilton, John; Beckman, Kenneth B; Hauge, Adam; Yohe, Sophia; Schomaker, Matthew; Bower, Matthew; Silverstein, Kevin A T; Thyagarajan, Bharat

    2014-05-23

The introduction of next generation sequencing (NGS) has revolutionized molecular diagnostics, though several challenges remain that limit the widespread adoption of NGS testing in clinical practice. One such difficulty is the development of a robust bioinformatics pipeline that can handle the volume of data generated by high-throughput sequencing in a cost-effective manner. Analysis of sequencing data typically requires a substantial level of computing power that is often cost-prohibitive for most clinical diagnostics laboratories. To address this challenge, our institution has developed a Galaxy-based data analysis pipeline which relies on a web-based, cloud-computing infrastructure to process NGS data and identify genetic variants. It provides the additional flexibility needed to control storage costs, resulting in a pipeline that is cost-effective on a per-sample basis, and it does not require the use of EBS disks to run a sample. We demonstrate the validation and feasibility of implementing this bioinformatics pipeline in a molecular diagnostics laboratory. Four samples were analyzed in duplicate pairs and showed 100% concordance in mutations identified. This pipeline is currently being used in the clinic, and all identified pathogenic variants were confirmed using Sanger sequencing, further validating the software.

  11. Using steady-state equations for transient flow calculation in natural gas pipelines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maddox, R.N.; Zhou, P.

    1984-04-02

    Maddox and Zhou have extended their technique for calculating the unsteady-state behavior of straight gas pipelines to complex pipeline systems and networks. After developing the steady-state flow rate and pressure profile for each pipe in the network, analysts can perform the transient-state analysis in the real-time step-wise manner described for this technique.
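
    The stepwise quasi-steady idea can be illustrated compactly: at each time step, recompute the flow from a steady-state relation at the current pressures, then update the stored gas (line pack) from the in/out imbalance. The sketch below does this for a single isothermal pipe with a lumped flow conductance; it is a schematic of the general approach under stated simplifications, not Maddox and Zhou's actual formulation, and all constants are assumed.

      # Quasi-steady sketch: re-solve a steady-state flow relation each step
      # and update line pack by mass balance. Isothermal ideal gas; the
      # conductance C and all other constants are illustrative assumptions.
      import math

      L, D = 50e3, 0.6                       # pipe length (m), diameter (m)
      A = math.pi * D**2 / 4.0               # cross-sectional area, m2
      RT = 287.0 * 288.0                     # R*T for the gas, J/kg
      C = 1.2e-5                             # lumped flow conductance (assumed)

      p_in, p_out = 60e5, 50e5               # boundary pressures, Pa
      mass = (p_in + p_out) / (2.0 * RT) * A * L   # initial line pack, kg

      dt = 60.0                              # time step, s
      for step in range(10):
          q_in = C * math.sqrt(max(p_in**2 - p_out**2, 0.0))  # steady inflow
          q_out = 0.9 * q_in if step < 5 else 1.1 * q_in      # demand change
          mass += (q_in - q_out) * dt                         # line-pack balance
          p_out = 2.0 * RT * mass / (A * L) - p_in            # updated pressure
          print(f"t={step*dt:4.0f}s  q_in={q_in:5.1f} kg/s  "
                f"p_out={p_out/1e5:5.2f} bar")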

  12. A Critique of the STEM Pipeline: Young People's Identities in Sweden and Science Education Policy

    ERIC Educational Resources Information Center

    Mendick, Heather; Berge, Maria; Danielsson, Anna

    2017-01-01

    In this article, we develop critiques of the pipeline model which dominates Western science education policy, using discourse analysis of interviews with two Swedish young women focused on "identity work". We argue that it is important to unpack the ways that the pipeline model fails to engage with intersections of gender, ethnicity,…

  13. A Mitigation Process for Impacts of the All American Pipeline on Oak Woodlands in Santa Barbara County

    Treesearch

    Germaine Reyes-French; Timothy J. Cohen

    1991-01-01

    This paper outlines a mitigation program for pipeline construction impacts to oak tree habitat by describing the requirements for the Offsite Oak Mitigation Program for the All American Pipeline (AAPL) in Santa Barbara County, California. After describing the initial environmental analysis, the County regulatory structure is described under which the plan was required...

  14. Landscape scale ecological monitoring as part of an EIA of major construction activities: experience at the Turkish section of the BTC crude oil pipeline project.

    PubMed

    Sahin, Sükran; Kurum, Ekrem

    2009-09-01

    Ecological monitoring is a complementary component of the overall environmental management and monitoring program of any Environmental Impact Assessment (EIA) report. The monitoring method should be developed for each project phase and allow for periodic reporting and assessment of compliance with the environmental conditions and requirements of the EIA. Also, this method should incorporate a variance request program since site-specific conditions can affect construction on a daily basis and require time-critical application of alternative construction scenarios or environmental management methods integrated with alternative mitigation measures. Finally, taking full advantage of the latest information and communication technologies can enhance the quality of, and public involvement in, the environmental management program. In this paper, a landscape-scale ecological monitoring method for major construction projects is described using, as a basis, 20 months of experience on the Baku-Tbilisi-Ceyhan (BTC) Crude Oil Pipeline Project, covering Turkish Sections Lot B and Lot C. This analysis presents suggestions for improving ecological monitoring for major construction activities.

  15. Computational analysis of PET by AIBL (CapAIBL): a cloud-based processing pipeline for the quantification of PET images

    NASA Astrophysics Data System (ADS)

    Bourgeat, Pierrick; Dore, Vincent; Fripp, Jurgen; Villemagne, Victor L.; Rowe, Chris C.; Salvado, Olivier

    2015-03-01

With the advances of PET tracers for β-Amyloid (Aβ) detection in neurodegenerative diseases, automated quantification methods are desirable. For clinical use, there is a great need for a PET-only quantification method, as MR images are not always available. In this paper, we validate a previously developed PET-only quantification method against MR-based quantification using 6 tracers: 18F-Florbetaben (N=148), 18F-Florbetapir (N=171), 18F-NAV4694 (N=47), 18F-Flutemetamol (N=180), 11C-PiB (N=381) and 18F-FDG (N=34). The results show an overall mean absolute percentage error of less than 5% for each tracer. The method has been implemented as a remote service called CapAIBL (http://milxcloud.csiro.au/capaibl). PET images are uploaded to a cloud platform where they are spatially normalised to a standard template and quantified. A report containing global as well as local quantification, along with a surface projection of the β-Amyloid deposition, is automatically generated at the end of the pipeline and emailed to the user.

  16. Bioinformatic pipelines in Python with Leaf

    PubMed Central

    2013-01-01

Background An incremental, loosely planned development approach is often used in bioinformatic studies when dealing with custom data analysis in a rapidly changing environment. Unfortunately, the lack of rigorous software structuring can undermine the maintainability, communicability and replicability of the process. To ameliorate this problem we propose the Leaf system, the aim of which is to seamlessly introduce pipeline formality on top of a dynamic development process with minimum overhead for the programmer, thus providing a simple layer of software structuring. Results Leaf includes a formal language for the definition of pipelines with code that can be transparently inserted into the user’s Python code. Its syntax is designed to visually highlight dependencies in the pipeline structure it defines. While encouraging the developer to think in terms of bioinformatic pipelines, Leaf supports a number of automated features including data and session persistence, consistency checks between steps of the analysis, processing optimization and publication of the analytic protocol in the form of a hypertext. Conclusions Leaf offers a powerful balance between plan-driven and change-driven development environments in the design, management and communication of bioinformatic pipelines. Its unique features make it a valuable alternative to other related tools. PMID:23786315

  17. The PREP pipeline: standardized preprocessing for large-scale EEG analysis

    PubMed Central

    Bigdely-Shamlo, Nima; Mullen, Tim; Kothe, Christian; Su, Kyung-Min; Robbins, Kay A.

    2015-01-01

    The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode. PMID:26150785
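
    The noisy-channel/reference interaction can be shown in a few lines of numpy: a plain average reference lets one bad channel leak into every channel, while a reference estimated from good channels only does not. The sketch below uses a crude robust amplitude z-score as the noisy-channel flag; it is a minimal stand-in for PREP's actual multi-stage criteria, and the data are synthetic.

      # Minimal numpy sketch of robust vs. naive average referencing.
      # The simple MAD-based z-score flag stands in for PREP's criteria.
      import numpy as np

      rng = np.random.default_rng(0)
      eeg = rng.standard_normal((32, 5000))        # channels x samples, a.u.
      eeg[7] += 40 * rng.standard_normal(5000)     # one very noisy channel

      naive_ref = eeg.mean(axis=0)                 # contaminated by channel 7

      amp = np.median(np.abs(eeg), axis=1)         # robust per-channel amplitude
      mad = np.median(np.abs(amp - np.median(amp)))
      z = (amp - np.median(amp)) / (1.4826 * mad)
      good = z < 5.0                               # crude noisy-channel flag

      robust_ref = eeg[good].mean(axis=0)          # reference from good channels
      rereferenced = eeg - robust_ref              # applied to all channels

      print("flagged noisy channels:", np.where(~good)[0])
      print("naive ref std: %.2f   robust ref std: %.2f"
            % (naive_ref.std(), robust_ref.std()))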

  18. Numerical Modeling of Mechanical Behavior for Buried Steel Pipelines Crossing Subsidence Strata

    PubMed Central

    Han, C. J.

    2015-01-01

This paper addresses the mechanical behavior of buried steel pipelines crossing subsidence strata. The investigation is based on numerical simulation of the nonlinear response of the pipeline-soil system through the finite element method, considering large strain and displacement, inelastic material behavior of the buried pipeline and the surrounding soil, as well as contact and friction on the pipeline-soil interface. Effects of key parameters on the mechanical behavior of the buried pipeline were investigated, such as strata subsidence, diameter-thickness ratio, burial depth, internal pressure, friction coefficient and soil properties. The results show that the maximum strain appears on the outer transition subsidence section of the pipeline, and its cross section is concave shaped. As strata subsidence and the diameter-thickness ratio increase, the out-of-roundness, longitudinal strain and equivalent plastic strain increase gradually. As the burial depth increases, the deflection, out-of-roundness and strain of the pipeline decrease. Internal pressure and friction coefficient have little effect on the deflection of the buried pipeline; with increasing internal pressure, out-of-roundness is reduced while strain gradually increases. The physical properties of the soil have a great influence on the mechanical behavior of the buried pipeline. The results of the present study can be used for the development of optimization design and preventive maintenance for buried steel pipelines. PMID:26103460

  19. Forecasting and Evaluation of Gas Pipelines Geometric Forms Breach Hazard

    NASA Astrophysics Data System (ADS)

    Voronin, K. S.

    2016-10-01

Main gas pipelines in operation are under the influence of permanent pressure drops, which leads to their lengthening and, as a result, to instability of their position in space. In dynamic systems that have feedback, phenomena preceding emergencies should be observable. The article discusses the forced vibrations of the gas pipeline's cylindrical surface under dynamic loads caused by pressure surges, and the process of deformation of its geometric shape. The frequency of vibrations arising in the pipeline at the stage preceding its bending is determined. Identification of this frequency can be the basis for a method of monitoring the technical condition of the gas pipeline, and forecasting possible emergency situations allows reconstruction works on sections of the gas pipeline with possible deviation from the design position to be planned and carried out in due time.

  20. CloVR-ITS: Automated internal transcribed spacer amplicon sequence analysis pipeline for the characterization of fungal microbiota

    PubMed Central

    2013-01-01

    Background Besides the development of comprehensive tools for high-throughput 16S ribosomal RNA amplicon sequence analysis, there exists a growing need for protocols emphasizing alternative phylogenetic markers such as those representing eukaryotic organisms. Results Here we introduce CloVR-ITS, an automated pipeline for comparative analysis of internal transcribed spacer (ITS) pyrosequences amplified from metagenomic DNA isolates and representing fungal species. This pipeline performs a variety of steps similar to those commonly used for 16S rRNA amplicon sequence analysis, including preprocessing for quality, chimera detection, clustering of sequences into operational taxonomic units (OTUs), taxonomic assignment (at class, order, family, genus, and species levels) and statistical analysis of sample groups of interest based on user-provided information. Using ITS amplicon pyrosequencing data from a previous human gastric fluid study, we demonstrate the utility of CloVR-ITS for fungal microbiota analysis and provide runtime and cost examples, including analysis of extremely large datasets on the cloud. We show that the largest fractions of reads from the stomach fluid samples were assigned to Dothideomycetes, Saccharomycetes, Agaricomycetes and Sordariomycetes but that all samples were dominated by sequences that could not be taxonomically classified. Representatives of the Candida genus were identified in all samples, most notably C. quercitrusa, while sequence reads assigned to the Aspergillus genus were only identified in a subset of samples. CloVR-ITS is made available as a pre-installed, automated, and portable software pipeline for cloud-friendly execution as part of the CloVR virtual machine package (http://clovr.org). Conclusion The CloVR-ITS pipeline provides fungal microbiota analysis that can be complementary to bacterial 16S rRNA and total metagenome sequence analysis allowing for more comprehensive studies of environmental and host-associated microbial communities. PMID:24451270
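
    One stage of such a pipeline, clustering reads into operational taxonomic units, can be sketched in a few lines. The toy example below does greedy centroid clustering at a similarity radius, with Python's standard-library difflib standing in for a real aligner; CloVR-ITS itself wraps dedicated tools, and the sequences and the 0.9 threshold are illustrative only.

      # Toy sketch of a greedy OTU-clustering stage. difflib stands in for
      # a real aligner; sequences and threshold are illustrative.
      from difflib import SequenceMatcher

      def cluster_otus(reads, threshold=0.97):
          # Seed new OTUs greedily, longest (abundance surrogate) first
          centroids, otus = [], []
          for read in sorted(reads, key=len, reverse=True):
              for i, seed in enumerate(centroids):
                  if SequenceMatcher(None, read, seed).ratio() >= threshold:
                      otus[i].append(read)
                      break
              else:
                  centroids.append(read)   # no match: read seeds a new OTU
                  otus.append([read])
          return centroids, otus

      reads = ["ACGTACGTGGA", "ACGTACGTGGT", "TTTTGGGGCCA", "ACGTACGTGGA"]
      centroids, otus = cluster_otus(reads, threshold=0.9)
      for c, members in zip(centroids, otus):
          print(c, len(members))          # two OTUs, sizes 3 and 1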

  1. Toward better drug repositioning: prioritizing and integrating existing methods into efficient pipelines.

    PubMed

    Jin, Guangxu; Wong, Stephen T C

    2014-05-01

    Recycling old drugs, rescuing shelved drugs and extending patents' lives make drug repositioning an attractive form of drug discovery. Drug repositioning accounts for approximately 30% of the newly US Food and Drug Administration (FDA)-approved drugs and vaccines in recent years. The prevalence of drug-repositioning studies has resulted in a variety of innovative computational methods for the identification of new opportunities for the use of old drugs. Questions often arise from customizing or optimizing these methods into efficient drug-repositioning pipelines for alternative applications. It requires a comprehensive understanding of the available methods gained by evaluating both biological and pharmaceutical knowledge and the elucidated mechanism-of-action of drugs. Here, we provide guidance for prioritizing and integrating drug-repositioning methods for specific drug-repositioning pipelines. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. External corrosion and leakage detection of oil and gas pipeline using FBG fiber optics and a trigger

    NASA Astrophysics Data System (ADS)

    Ge, Yaomou

Oil and gas pipelines play a critical role in delivering energy resources from producing fields to power communities around the world. However, there are many threats to pipeline integrity, which may lead to significant incidents causing safety, environmental and economic problems. Corrosion has long been a major threat to oil and gas pipelines, accounting for approximately 18% of significant pipeline incidents, and external corrosion alone accounts for a significant portion (more than 25%) of pipeline failures. External corrosion detection is the research area of this thesis. A review of existing corrosion detection and monitoring methods is presented, and optical fiber sensors show great promise for corrosion detection in oil and gas pipelines. Several scenarios for optical fiber corrosion sensors are discussed, and two of them are selected for future research. A new corrosion and leakage detection sensor, consisting of a custom-designed trigger and an FBG optical fiber, is presented. This new device has been experimentally tested and shows great promise.

  3. Reasons for decision in the matter of Maritimes and Northeast Pipeline Management Ltd. application dated 24 February 1998 for approval of the plan, profile and book of reference respecting the detailed pipeline route from Goldboro, N.S. to St. Stephen, N.B.: MH-3-98

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-12-31

In December 1997, Maritimes and Northeast Pipeline Management Ltd. received approval to construct and operate a natural gas pipeline consisting of about 558 kilometers of 762-millimeter pipe to be located within a one-kilometer-wide corridor extending from Goldboro, Nova Scotia to the international border near St. Stephen, New Brunswick. This report covers the second stage of the pipeline approval process where the detailed route is determined. It presents the views of the pipeline company, various landowners and mineral rights holders objecting to the proposed detailed route, and the National Energy Board with regard to two issues: the best possible detailed route for the pipeline, and the most appropriate methods and timing of constructing the pipeline. Specific land/mineral rights owner cases including the nature of the objection, possible alternate routes, and the Board decision in each case are described.

  4. Quantification technology study on flaws in steam-filled pipelines based on image processing

    NASA Astrophysics Data System (ADS)

    Sun, Lina; Yuan, Peixin

    2009-07-01

Starting from the development of an applied detection system for gas transmission pipelines, a set of X-ray image processing methods and quantitative pipeline-flaw evaluation methods is proposed. Defective and non-defective rows and columns are extracted from the gray image and their waveforms obtained; defects can then be distinguished by dividing the two gray images. From the gray values of defects of different thicknesses, a gray-level versus depth curve is established. Exponential and polynomial fitting yield a mathematical model of the beam attenuation through the pipeline wall, from which the flaw depth is obtained. Tests were performed on PPR pipes with simulated hole and crack flaws, using an X-ray source at 135 kV. The results show that the X-ray image processing method meets the needs of efficient flaw detection, provides quality safeguards for heavy-oil recovery, and can be used successfully to detect corrosion in insulated pipes.

  5. Quantification technology study on flaws in steam-filled pipelines based on image processing

    NASA Astrophysics Data System (ADS)

    Yuan, Pei-xin; Cong, Jia-hui; Chen, Bo

    2008-03-01

Starting from the development of an applied detection system for gas transmission pipelines, a set of X-ray image processing methods and quantitative pipeline-flaw evaluation methods is proposed. Defective and non-defective rows and columns are extracted from the gray image and their waveforms obtained; defects can then be distinguished by dividing the two gray images. From the gray values of defects of different thicknesses, a gray-level versus depth curve is established. Exponential and polynomial fitting yield a mathematical model of the beam attenuation through the pipeline wall, from which the flaw depth is obtained. Tests were performed on PPR pipes with simulated hole and crack flaws; the X-ray source tube voltage was 130 kV and the tube current 1.5 mA. Test results show that the X-ray image processing methods meet the needs of efficient flaw detection, provide quality safeguards for heavy-oil recovery, and can be used successfully to detect corrosion in insulated pipes.

  6. Induced electric currents in the Alaska oil pipeline measured by gradient, fluxgate, and SQUID magnetometers

    NASA Technical Reports Server (NTRS)

    Campbell, W. H.; Zimmerman, J. E.

    1979-01-01

The field gradient method for observing the electric currents in the Alaska pipeline provided consistent values for both the fluxgate and SQUID methods of observation. These currents were linearly related to the regularly measured electric and magnetic field changes. Determinations of pipeline current were consistent with values obtained by a direct-connection, current-shunt technique at a pipeline site about 9.6 km away. The gradient method has the distinct advantages of portability and buried-pipe capability. Field gradients due to pipe magnetization, geological features, or ionospheric source currents do not seem to contribute a measurable error to such pipe current determination. The SQUID gradiometer is inherently sensitive enough to detect very small currents in a linear conductor at 10 meters, or conversely, to detect currents of one ampere or more at relatively great distances. It is fairly straightforward to achieve an imbalance of less than one part in ten thousand, and with extreme care, one part in one million or better.
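
    The gradient idea follows from the field of a long straight conductor, B = μ0·I/(2πr): two field readings taken a known baseline apart determine both the current and the (unknown) distance to a buried pipe. The worked sketch below solves that pair of equations; the current, depth and sensor baseline are illustrative numbers, not values from the Alaska measurements.

      # Worked sketch of the gradient method: B = mu0*I/(2*pi*r), so two
      # readings a known baseline apart give both current and pipe depth.
      # Numbers are illustrative, not from the Alaska measurements.
      import math

      MU0 = 4e-7 * math.pi

      def current_from_gradient(b_near, b_far, baseline):
          """Solve B1 = k/r and B2 = k/(r + baseline) for r and I."""
          r = baseline * b_far / (b_near - b_far)   # distance to conductor
          current = 2 * math.pi * r * b_near / MU0
          return current, r

      # Sensors 1 m apart above a pipe carrying ~50 A at 10 m depth:
      I_true, r_true = 50.0, 10.0
      b1 = MU0 * I_true / (2 * math.pi * r_true)
      b2 = MU0 * I_true / (2 * math.pi * (r_true + 1.0))
      print(current_from_gradient(b1, b2, 1.0))     # -> (~50.0 A, ~10.0 m)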

  7. Infrared thermography for inspecting of pipeline specimen

    NASA Astrophysics Data System (ADS)

    Chen, Dapeng; Li, Xiaoli; Sun, Zuoming; Zhang, Xiaolong

    2018-02-01

Infrared thermography is a fast and effective non-destructive testing method with growing applications in aeronautics, astronautics, architecture, medicine and other fields. Most reports on this technology focus on planar specimens: pulsed light is often used as the heat stimulation, and a plane heat source is generated on the specimen surface by means of a lampshade. This method, however, is not suitable for non-planar specimens such as pipelines. In this paper, addressing the NDT problem of steel and composite pipeline specimens, ultrasonic excitation and hot water are applied as heat sources, an IR camera is used to record the temperature variation of the specimen surface, and defects are revealed by processing the thermal image sequence. Results of light-pulse thermography are also shown for comparison. The results indicate that choosing the right stimulation method yields more effective NDT results for pipeline specimens.

  8. Comparative Network-Based Recovery Analysis and Proteomic Profiling of Neurological Changes in Valproic Acid-Treated Mice

    PubMed Central

    2013-01-01

    Despite its prominence for characterization of complex mixtures, LC–MS/MS frequently fails to identify many proteins. Network-based analysis methods, based on protein–protein interaction networks (PPINs), biological pathways, and protein complexes, are useful for recovering non-detected proteins, thereby enhancing analytical resolution. However, network-based analysis methods do come in varied flavors for which the respective efficacies are largely unknown. We compare the recovery performance and functional insights from three distinct instances of PPIN-based approaches, viz., Proteomics Expansion Pipeline (PEP), Functional Class Scoring (FCS), and Maxlink, in a test scenario of valproic acid (VPA)-treated mice. We find that the most comprehensive functional insights, as well as best non-detected protein recovery performance, are derived from FCS utilizing real biological complexes. This outstrips other network-based methods such as Maxlink or Proteomics Expansion Pipeline (PEP). From FCS, we identified known biological complexes involved in epigenetic modifications, neuronal system development, and cytoskeletal rearrangements. This is congruent with the observed phenotype where adult mice showed an increase in dendritic branching to allow the rewiring of visual cortical circuitry and an improvement in their visual acuity when tested behaviorally. In addition, PEP also identified a novel complex, comprising YWHAB, NR1, NR2B, ACTB, and TJP1, which is functionally related to the observed phenotype. Although our results suggest different network analysis methods can produce different results, on the whole, the findings are mutually supportive. More critically, the non-overlapping information each provides can provide greater holistic understanding of complex phenotypes. PMID:23557376

  9. viGEN: An Open Source Pipeline for the Detection and Quantification of Viral RNA in Human Tumors.

    PubMed

    Bhuvaneshwar, Krithika; Song, Lei; Madhavan, Subha; Gusev, Yuriy

    2018-01-01

An estimated 17% of cancers worldwide are associated with infectious causes. The extent and biological significance of viral presence/infection in actual tumor samples is generally unknown but could be measured using human transcriptome (RNA-seq) data from tumor samples. We present an open source bioinformatics pipeline, viGEN, which allows for not only the detection and quantification of viral RNA, but also of variants in the viral transcripts. The pipeline includes 4 major modules: the first module aligns and filters out human RNA sequences; the second module maps and counts the remaining (unaligned) reads against reference genomes of all known and sequenced human viruses; the third module quantifies read counts at the individual viral-gene level, allowing for downstream differential expression analysis of viral genes between case and control groups; the fourth module calls variants in these viruses. To the best of our knowledge, there are no publicly available pipelines or packages that provide this type of complete analysis in one open source package. In this paper, we applied the viGEN pipeline to two case studies. We first demonstrate the working of our pipeline on a large public dataset, the TCGA cervical cancer cohort. In the second case study, we performed an in-depth analysis on a small focused study of TCGA liver cancer patients. In the latter cohort, we performed viral-gene quantification, viral-variant extraction and survival analysis. This allowed us to find differentially expressed viral transcripts and viral variants between the groups of patients, and connect them to clinical outcome. From our analyses, we show that we were able to successfully detect the human papilloma virus among the TCGA cervical cancer patients. We compared the viGEN pipeline with two metagenomics tools and demonstrate similar sensitivity/specificity. We were also able to quantify viral transcripts and extract viral variants using the liver cancer dataset. The results presented corresponded with published literature in terms of rate of detection and the impact of several known variants of the HBV genome. This pipeline is generalizable, and can be used to provide novel biological insights into microbial infections in complex diseases and tumorigenesis. Our viral pipeline could be used in conjunction with additional types of immuno-oncology analysis based on RNA-seq data of host RNA for cancer immunology applications. The source code, with example data and tutorial, is available at: https://github.com/ICBI/viGEN/.
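
    The gene-level counting idea behind the third module can be sketched with pysam: tally aligned reads overlapping each annotated viral gene. This is a simplified stand-in (viGEN itself wraps established tools); the BAM file name and the toy HBV gene table below are hypothetical.

      # Simplified sketch of viral gene-level read counting (module 3 idea).
      # File name and gene coordinates are hypothetical placeholders.
      import pysam

      viral_genes = {                  # contig: [(gene, start, end)], 0-based
          "HBV": [("S", 155, 835), ("X", 1374, 1838), ("C", 1901, 2458)],
      }

      def count_viral_genes(bam_path, genes):
          counts = {}
          with pysam.AlignmentFile(bam_path, "rb") as bam:
              for contig, regions in genes.items():
                  for name, start, end in regions:
                      # reads overlapping the gene interval
                      counts[(contig, name)] = bam.count(contig, start, end)
          return counts

      # counts = count_viral_genes("sample_viral.bam", viral_genes)
      # print(counts)   # input to downstream differential-expression analysis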

  10. Magnetic Flux Leakage and Principal Component Analysis for metal loss approximation in a pipeline

    NASA Astrophysics Data System (ADS)

    Ruiz, M.; Mujica, L. E.; Quintero, M.; Florez, J.; Quintero, S.

    2015-07-01

Safety and reliability of hydrocarbon transportation pipelines represent a critical aspect for the Oil and Gas industry. Pipeline failures caused by corrosion, external agents and other factors can develop into leaks or even rupture, which can negatively impact the population, natural environment, infrastructure and economy. It is imperative to have accurate inspection tools traveling through the pipeline to diagnose its integrity. Over the last few years, different techniques under the concept of structural health monitoring (SHM) have been in continuous development. This work is based on a hybrid methodology that combines the Magnetic Flux Leakage (MFL) and Principal Component Analysis (PCA) approaches. The MFL technique induces a magnetic field in the pipeline's walls; the data are recorded by sensors measuring the leakage magnetic field in segments with metal loss caused by cracking, corrosion and similar damage. The data provide information on a pipeline with approximately 15 years of operation, which transports gas, has a diameter of 20 inches and a total length of 110 km (with several changes in the topography). PCA, in turn, is a well-known technique that compresses the information and extracts the most relevant features, facilitating the detection of damage in several kinds of structures. The goal of this work is to detect and localize critical metal loss in a pipeline that is currently in operation.
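
    The MFL+PCA combination can be illustrated with a standard anomaly-detection recipe: build a principal-component model of MFL signal windows from healthy pipe, then flag windows whose reconstruction residual (Q statistic) exceeds an empirical control limit. The sketch below uses synthetic data and a generic formulation, not the authors' specific processing.

      # PCA residual (Q statistic) anomaly detection on MFL-like windows.
      # Synthetic data; a generic sketch, not the authors' exact method.
      import numpy as np

      rng = np.random.default_rng(1)
      healthy = rng.standard_normal((200, 64))        # windows x samples
      test = rng.standard_normal((20, 64))
      test[5] += np.hanning(64) * 6.0                 # simulated leakage anomaly

      mu = healthy.mean(axis=0)
      _, _, vt = np.linalg.svd(healthy - mu, full_matrices=False)
      pcs = vt[:10]                                   # retain 10 components

      def q_statistic(x):
          r = (x - mu) - ((x - mu) @ pcs.T) @ pcs     # residual off the PC plane
          return np.sum(r**2, axis=1)

      limit = np.percentile(q_statistic(healthy), 99) # empirical control limit
      print("flagged windows:", np.where(q_statistic(test) > limit)[0])  # [5]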

  11. Nearshore Pipeline Installation Methods.

    DTIC Science & Technology

    1981-08-01

inches b) Pipe, materials of construction: fully rigid, semi-rigid, flexible c) Pipeline length, maximum 2 miles d) Pipeline design life, minimum 15...common to their operations. Permanent facilities are specified in the Statement of Work. Therefore, a minimum design life of 15 years is chosen, which...makes the pipe leakproof and resists corrosion and abrasion. 5) Interlocked Z-shaped steel or stainless steel carcass - resists internal and external

  12. Uplifting behavior of shallow buried pipe in liquefiable soil by dynamic centrifuge test.

    PubMed

    Huang, Bo; Liu, Jingwen; Lin, Peng; Ling, Daosheng

    2014-01-01

Underground pipelines are widely applied in so-called lifeline engineering. Seismic surveys show that damage to underground pipelines from soil liquefaction is among the most serious, with failures mainly taking the form of pipeline uplifting. In the present study, dynamic centrifuge model tests were conducted to study the uplifting behavior of a shallow-buried pipeline subjected to seismic vibration in liquefied sites. The uplifting mechanism was discussed through the responses of the pore water pressure and earth pressure around the pipeline. Additionally, an analysis of the forces acting on the pipeline before and during vibration was introduced and proved to be reasonable by comparison of the measured and calculated results. The uplifting behavior of the pipe results from the combined effects of multiple forces and is highly dependent on the excess pore pressure.

  13. Pipeline for effective denoising of digital mammography and digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Borges, Lucas R.; Bakic, Predrag R.; Foi, Alessandro; Maidment, Andrew D. A.; Vieira, Marcelo A. C.

    2017-03-01

    Denoising can be used as a tool to enhance image quality and enforce low radiation doses in X-ray medical imaging. The effectiveness of denoising techniques relies on the validity of the underlying noise model. In full-field digital mammography (FFDM) and digital breast tomosynthesis (DBT), calibration steps like the detector offset and flat-fielding can affect some assumptions made by most denoising techniques. Furthermore, quantum noise found in X-ray images is signal-dependent and can only be treated by specific filters. In this work we propose a pipeline for FFDM and DBT image denoising that considers the calibration steps and simplifies the modeling of the noise statistics through variance-stabilizing transformations (VST). The performance of a state-of-the-art denoising method was tested with and without the proposed pipeline. To evaluate the method, objective metrics such as the normalized root mean square error (N-RMSE), noise power spectrum, modulation transfer function (MTF) and the frequency signal-to-noise ratio (SNR) were analyzed. Preliminary tests show that the pipeline improves denoising. When the pipeline is not used, bright pixels of the denoised image are under-filtered and dark pixels are over-smoothed due to the assumption of a signal-independent Gaussian model. The pipeline improved denoising up to 20% in terms of spatial N-RMSE and up to 15% in terms of frequency SNR. Besides improving the denoising, the pipeline does not increase signal smoothing significantly, as shown by the MTF. Thus, the proposed pipeline can be used with state-of-the-art denoising techniques to improve the quality of DBT and FFDM images.
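
    The VST step can be illustrated with the classical Anscombe transform: it converts signal-dependent (Poisson-like) quantum noise to approximately unit-variance Gaussian noise, so a generic Gaussian denoiser can be applied before transforming back. The sketch below uses synthetic Poisson data and a simple algebraic inverse; the actual pipeline additionally models detector offset and flat-fielding from calibration, which is omitted here.

      # Anscombe VST denoising sketch: stabilize, denoise, invert.
      # Synthetic Poisson image; simple algebraic (not exact unbiased) inverse.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def anscombe(x):
          return 2.0 * np.sqrt(x + 3.0 / 8.0)

      def inverse_anscombe(y):
          return (y / 2.0) ** 2 - 3.0 / 8.0

      rng = np.random.default_rng(2)
      rows = np.indices((64, 64))[0]
      truth = 50 + 200 * np.exp(-((rows - 32) ** 2) / 200.0)   # smooth phantom
      noisy = rng.poisson(truth).astype(float)   # signal-dependent noise

      den = inverse_anscombe(gaussian_filter(anscombe(noisy), sigma=1.5))
      print("RMSE noisy: %.2f  denoised: %.2f"
            % (np.sqrt(np.mean((noisy - truth) ** 2)),
               np.sqrt(np.mean((den - truth) ** 2))))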

  14. BigDataScript: a scripting language for data pipelines.

    PubMed

    Cingolani, Pablo; Sladek, Rob; Blanchette, Mathieu

    2015-01-01

    The analysis of large biological datasets often requires complex processing pipelines that run for a long time on large computational infrastructures. We designed and implemented a simple script-like programming language with a clean and minimalist syntax to develop and manage pipeline execution and provide robustness to various types of software and hardware failures as well as portability. We introduce the BigDataScript (BDS) programming language for data processing pipelines, which improves abstraction from hardware resources and assists with robustness. Hardware abstraction allows BDS pipelines to run without modification on a wide range of computer architectures, from a small laptop to multi-core servers, server farms, clusters and clouds. BDS achieves robustness by incorporating the concepts of absolute serialization and lazy processing, thus allowing pipelines to recover from errors. By abstracting pipeline concepts at programming language level, BDS simplifies implementation, execution and management of complex bioinformatics pipelines, resulting in reduced development and debugging cycles as well as cleaner code. BigDataScript is available under open-source license at http://pcingola.github.io/BigDataScript. © The Author 2014. Published by Oxford University Press.

  15. BigDataScript: a scripting language for data pipelines

    PubMed Central

    Cingolani, Pablo; Sladek, Rob; Blanchette, Mathieu

    2015-01-01

    Motivation: The analysis of large biological datasets often requires complex processing pipelines that run for a long time on large computational infrastructures. We designed and implemented a simple script-like programming language with a clean and minimalist syntax to develop and manage pipeline execution and provide robustness to various types of software and hardware failures as well as portability. Results: We introduce the BigDataScript (BDS) programming language for data processing pipelines, which improves abstraction from hardware resources and assists with robustness. Hardware abstraction allows BDS pipelines to run without modification on a wide range of computer architectures, from a small laptop to multi-core servers, server farms, clusters and clouds. BDS achieves robustness by incorporating the concepts of absolute serialization and lazy processing, thus allowing pipelines to recover from errors. By abstracting pipeline concepts at programming language level, BDS simplifies implementation, execution and management of complex bioinformatics pipelines, resulting in reduced development and debugging cycles as well as cleaner code. Availability and implementation: BigDataScript is available under open-source license at http://pcingola.github.io/BigDataScript. Contact: pablo.e.cingolani@gmail.com PMID:25189778

  16. Learning normalized inputs for iterative estimation in medical image segmentation.

    PubMed

    Drozdzal, Michal; Chartrand, Gabriel; Vorontsov, Eugene; Shakeri, Mahsa; Di Jorio, Lisa; Tang, An; Romero, Adriana; Bengio, Yoshua; Pal, Chris; Kadoury, Samuel

    2018-02-01

    In this paper, we introduce a simple, yet powerful pipeline for medical image segmentation that combines Fully Convolutional Networks (FCNs) with Fully Convolutional Residual Networks (FC-ResNets). We propose and examine a design that takes particular advantage of recent advances in the understanding of both Convolutional Neural Networks as well as ResNets. Our approach focuses upon the importance of a trainable pre-processing when using FC-ResNets and we show that a low-capacity FCN model can serve as a pre-processor to normalize medical input data. In our image segmentation pipeline, we use FCNs to obtain normalized images, which are then iteratively refined by means of a FC-ResNet to generate a segmentation prediction. As in other fully convolutional approaches, our pipeline can be used off-the-shelf on different image modalities. We show that using this pipeline, we exhibit state-of-the-art performance on the challenging Electron Microscopy benchmark, when compared to other 2D methods. We improve segmentation results on CT images of liver lesions, when contrasting with standard FCN methods. Moreover, when applying our 2D pipeline on a challenging 3D MRI prostate segmentation challenge we reach results that are competitive even when compared to 3D methods. The obtained results illustrate the strong potential and versatility of the pipeline by achieving accurate segmentations on a variety of image modalities and different anatomical regions. Copyright © 2017 Elsevier B.V. All rights reserved.
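
    The two-stage structure can be sketched in PyTorch: a low-capacity FCN normalizes the input, and a fully convolutional residual network then refines the segmentation over several iterations by re-consuming its own prediction. The toy architectures below are stand-ins to show the wiring, not the models from the paper.

      # Toy PyTorch sketch of the FCN-preprocessor + iterative FC-ResNet idea.
      # Architectures are illustrative stand-ins, not the paper's networks.
      import torch
      import torch.nn as nn

      class TinyFCN(nn.Module):                 # low-capacity normalizer
          def __init__(self, ch=16):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(ch, 1, 3, padding=1))
          def forward(self, x):
              return self.net(x)

      class ResBlock(nn.Module):
          def __init__(self, ch):
              super().__init__()
              self.body = nn.Sequential(
                  nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(ch, ch, 3, padding=1))
          def forward(self, x):
              return torch.relu(x + self.body(x))

      class Refiner(nn.Module):                 # FC-ResNet refinement stage
          def __init__(self, ch=32, steps=3):
              super().__init__()
              self.inc = nn.Conv2d(2, ch, 3, padding=1)  # image + current mask
              self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(4)])
              self.out = nn.Conv2d(ch, 1, 1)
              self.steps = steps
          def forward(self, x):
              seg = torch.zeros_like(x)
              for _ in range(self.steps):       # iterative estimation
                  seg = torch.sigmoid(self.out(self.blocks(
                      self.inc(torch.cat([x, seg], dim=1)))))
              return seg

      x = torch.randn(1, 1, 64, 64)             # toy input image
      pred = Refiner()(TinyFCN()(x))            # normalized -> refined mask
      print(pred.shape)                         # torch.Size([1, 1, 64, 64])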

  17. The FieldTrip-SimBio pipeline for EEG forward solutions.

    PubMed

    Vorwerk, Johannes; Oostenveld, Robert; Piastra, Maria Carla; Magyari, Lilla; Wolters, Carsten H

    2018-03-27

Accurately solving the electroencephalography (EEG) forward problem is crucial for precise EEG source analysis. Previous studies have shown that the use of multicompartment head models in combination with the finite element method (FEM) can yield high accuracies both numerically and with regard to the geometrical approximation of the human head. However, the workload for the generation of multicompartment head models has often been too high and the use of publicly available FEM implementations too complicated for a wider application of FEM in research studies. In this paper, we present a MATLAB-based pipeline that aims to resolve this lack of easy-to-use integrated software solutions. The presented pipeline allows for the easy application of five-compartment head models with the FEM within the FieldTrip toolbox for EEG source analysis. The FEM from the SimBio toolbox, more specifically the St. Venant approach, was integrated into the FieldTrip toolbox. We give a short sketch of the implementation and its application, and we perform a source localization of somatosensory evoked potentials (SEPs) using this pipeline. We then evaluate the accuracy that can be achieved using the automatically generated five-compartment hexahedral head model [skin, skull, cerebrospinal fluid (CSF), gray matter, white matter] in comparison to a highly accurate tetrahedral head model that was generated on the basis of a semiautomatic segmentation with very careful and time-consuming manual corrections. The source analysis of the SEP data correctly localizes the P20 component and achieves a high goodness of fit. The subsequent comparison to the highly detailed tetrahedral head model shows that the automatically generated five-compartment head model performs about as well as a highly detailed four-compartment head model (skin, skull, CSF, brain). This is a significant improvement in comparison to a three-compartment head model, which is frequently used in practice, since the importance of modeling the CSF compartment has been shown in a variety of studies. The presented pipeline facilitates the use of five-compartment head models with the FEM for EEG source analysis. The accuracy with which the EEG forward problem can thereby be solved is increased compared to the commonly used three-compartment head models, and more reliable EEG source reconstruction results can be obtained.

  18. Analysis pipelines and packages for Infinium HumanMethylation450 BeadChip (450k) data

    PubMed Central

    Morris, Tiffany J.; Beck, Stephan

    2015-01-01

    The Illumina HumanMethylation450 BeadChip has become a popular platform for interrogating DNA methylation in epigenome-wide association studies (EWAS) and related projects as well as resource efforts such as the International Cancer Genome Consortium (ICGC) and the International Human Epigenome Consortium (IHEC). This has resulted in an exponential increase of 450k data in recent years and triggered the development of numerous integrated analysis pipelines and stand-alone packages. This review will introduce and discuss the currently most popular pipelines and packages and is particularly aimed at new 450k users. PMID:25233806

  19. 75 FR 23710 - Order Finding That the ICE PG&E Citygate Financial Basis Contract Traded on the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-04

    ... Pipe Line, LLC, serves as a juncture for 13 different pipelines. These pipelines bring in natural gas... ``hub'' refers to a juncture where two or more natural gas pipelines are connected. Hubs also serve as... analysis of PG&E Citygate natural gas prices showed that 55 percent of the observations were more than 2.5...

  20. Metatranscriptomic analysis of diverse microbial communities reveals core metabolic pathways and microbiome-specific functionality.

    PubMed

    Jiang, Yue; Xiong, Xuejian; Danska, Jayne; Parkinson, John

    2016-01-12

Metatranscriptomics is emerging as a powerful technology for the functional characterization of complex microbial communities (microbiomes). Use of unbiased RNA-sequencing can reveal both the taxonomic composition and active biochemical functions of a complex microbial community. However, the lack of established reference genomes, computational tools and pipelines make analysis and interpretation of these datasets challenging. Systematic studies that compare data across microbiomes are needed to demonstrate the ability of such pipelines to deliver biologically meaningful insights on microbiome function. Here, we apply a standardized analytical pipeline to perform a comparative analysis of metatranscriptomic data from diverse microbial communities derived from mouse large intestine, cow rumen, kimchi culture, deep-sea thermal vent and permafrost. Sequence similarity searches allowed annotation of 19 to 76% of putative messenger RNA (mRNA) reads, with the highest frequency in the kimchi dataset due to its relatively low complexity and availability of closely related reference genomes. Metatranscriptomic datasets exhibited distinct taxonomic and functional signatures. From a metabolic perspective, we identified a common core of enzymes involved in amino acid, energy and nucleotide metabolism and also identified microbiome-specific pathways such as phosphonate metabolism (deep sea) and glycan degradation pathways (cow rumen). Integrating taxonomic and functional annotations within a novel visualization framework revealed the contribution of different taxa to metabolic pathways, allowing the identification of taxa that contribute unique functions. The application of a single, standard pipeline confirms that the rich taxonomic and functional diversity observed across microbiomes is not simply an artefact of different analysis pipelines but instead reflects distinct environmental influences. At the same time, our findings show how microbiome complexity and availability of reference genomes can impact comprehensive annotation of metatranscriptomes. Consequently, beyond the application of standardized pipelines, additional caution must be taken when interpreting their output and performing downstream, microbiome-specific, analyses. The pipeline used in these analyses along with a tutorial has been made freely available for download from our project website: http://www.compsysbio.org/microbiome.

  1. Experimental and Numerical Investigation of Local Scour Around Submarine Piggyback Pipeline Under Steady Current

    NASA Astrophysics Data System (ADS)

    Zhao, Enjin; Shi, Bing; Qu, Ke; Dong, Wenbin; Zhang, Jing

    2018-04-01

As a new type of submarine pipeline, the piggyback pipeline has been gradually adopted in engineering practice to enhance the performance and safety of submarine pipelines. However, limited simulation work and few experimental studies have been published on the scour around the piggyback pipeline under steady current. This study numerically and experimentally investigates the local scour around the piggyback pipeline under steady current. The influence of prominent factors such as pipe diameter, inflow Reynolds number, and the gap between the main and small pipes on the maximum scour depth has been examined and discussed in detail. Furthermore, a formula to predict the maximum scour depth under the piggyback pipeline has been derived based on theoretical analysis of scour equilibrium. The feasibility of the proposed formula has been calibrated by both experimental data and numerical results. The findings drawn from this study are instructive for the future design and application of the piggyback pipeline.

  2. Robust, high-throughput solution structural analyses by small angle X-ray scattering (SAXS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hura, Greg L.; Menon, Angeli L.; Hammel, Michal

    2009-07-20

We present an efficient pipeline enabling high-throughput analysis of protein structure in solution with small angle X-ray scattering (SAXS). Our SAXS pipeline combines automated sample handling of microliter volumes, temperature and anaerobic control, rapid data collection and data analysis, and couples structural analysis with automated archiving. We subjected 50 representative proteins, mostly from Pyrococcus furiosus, to this pipeline and found that 30 were multimeric structures in solution. SAXS analysis allowed us to distinguish aggregated and unfolded proteins, define global structural parameters and oligomeric states for most samples, identify shapes and similar structures for 25 unknown structures, and determine envelopes for 41 proteins. We believe that high-throughput SAXS is an enabling technology that may change the way that structural genomics research is done.

  3. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    PubMed

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key bio-marker in the diagnosis of muscular dystrophy. One primary challenge in nuclei segmentation is to correctly separate clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background by using a local Otsu threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to distinguish isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
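
    The clustered-nuclei splitting step can be sketched with a standard Otsu + distance-transform + watershed recipe in scikit-image. This is a generic baseline, not the paper's pipeline (which adds local thresholding, a Bayesian-network classifier and a two-step refinement); the input image and the separation distance are assumptions.

      # Standard Otsu + distance transform + watershed recipe for splitting
      # touching nuclei. A generic baseline, not the paper's full pipeline.
      import numpy as np
      from scipy import ndimage as ndi
      from skimage.filters import threshold_otsu
      from skimage.feature import peak_local_max
      from skimage.segmentation import watershed

      def segment_nuclei(img, min_sep=7):
          mask = img > threshold_otsu(img)             # foreground nuclei
          distance = ndi.distance_transform_edt(mask)  # interior distance map
          coords = peak_local_max(distance, min_distance=min_sep, labels=mask)
          markers = np.zeros(img.shape, dtype=int)     # one seed per peak
          markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
          labels = watershed(-distance, markers, mask=mask)  # split clusters
          return labels                                # 1..N, one per nucleus

      # labels = segment_nuclei(img); labels.max() gives the nucleus count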

  4. The statistics of identifying differentially expressed genes in Expresso and TM4: a comparison

    PubMed Central

    Sioson, Allan A; Mane, Shrinivasrao P; Li, Pinghua; Sha, Wei; Heath, Lenwood S; Bohnert, Hans J; Grene, Ruth

    2006-01-01

Background Analysis of DNA microarray data takes as input spot intensity measurements from scanner software and returns differential expression of genes between two conditions, together with a statistical significance assessment. This process typically consists of two steps: data normalization and identification of differentially expressed genes through statistical analysis. The Expresso microarray experiment management system implements these steps with a two-stage, log-linear ANOVA mixed model technique, tailored to individual experimental designs. The complement of tools in TM4, on the other hand, is based on a number of preset design choices that limit its flexibility. In the TM4 microarray analysis suite, normalization, filtering, and analysis methods form an analysis pipeline. TM4 computes integrated intensity values (IIV) from the average intensities and spot pixel counts returned by the scanner software as input to its normalization steps. By contrast, Expresso can use either IIV data or median intensity values (MIV). Here, we compare Expresso and TM4 analysis of two experiments and assess the results against qRT-PCR data. Results The Expresso analysis using MIV data consistently identifies more genes as differentially expressed, when compared to Expresso analysis with IIV data. The typical TM4 normalization and filtering pipeline corrects systematic intensity-specific bias on a per microarray basis. Subsequent statistical analysis with Expresso or a TM4 t-test can effectively identify differentially expressed genes. The best agreement with qRT-PCR data is obtained through the use of Expresso analysis and MIV data. Conclusion The results of this research are of practical value to biologists who analyze microarray data sets. The TM4 normalization and filtering pipeline corrects microarray-specific systematic bias and complements the normalization stage in Expresso analysis. The results of Expresso using MIV data have the best agreement with qRT-PCR results. In one experiment, MIV is a better choice than IIV as input to data normalization and statistical analysis methods, as it yields a greater number of statistically significant differentially expressed genes; TM4 does not support the choice of MIV input data. Overall, the more flexible and extensive statistical models of Expresso achieve more accurate analytical results, when judged by the yardstick of qRT-PCR data, in the context of an experimental design of modest complexity. PMID:16626497

  5. Education biographies from the science pipeline: An analysis of Latino/a student perspectives on ethnic and gender identity in higher education

    NASA Astrophysics Data System (ADS)

    Lujan, Vanessa Beth

    This study is a qualitative narrative analysis of the importance and relevance of the ethnic and gender identities of 17 Latino/a (Hispanic) college students in the biological sciences. This research study asks how one's higher education experience within the science pipeline shapes an individual's direction of study, attitudes toward science, and cultural/ethnic and gender identity development. By understanding the ideologies of these students, we are able to better comprehend the world-makings that these students bring with them to the learning process in the sciences. Informed by life history narrative analysis, this study examines Latino/as and their persisting involvement within the science pipeline in higher education and is based on qualitative observations and interviews of student perspectives on the influence of the college science experience on their ethnic and gender identities. The findings in this study show the multiple interrelationships from both Latino male and Latina female narratives, separate and intersecting, that reveal the complexities of the Latino/a group experience in college science. By understanding from a student perspective how the science pipeline affects one's cultural, ethnic, or gender identity, we can create a thought-provoking discussion on why and how underrepresented student populations persist in the science pipeline in higher education. The conditions created in the science pipeline and how they affect Latino/a undergraduate pathways may further be used to understand and improve the quality of the undergraduate learning experience.

  6. The measurement of substance use among adolescents: when is the 'bogus pipeline' method needed?

    PubMed

    Murray, D M; Perry, C L

    1987-01-01

    The use of objective measures to assess cigarette smoking among adolescents has become commonplace in research studies in recent years. This trend is based on evidence that this so-called pipeline methodology can increase the disclosure of socially proscribed behaviors in a setting where adolescents might otherwise feel pressure to deny that they smoke. This paper examines the effects of the pipeline methodology alone and in combination with procedures designed to ensure anonymity on the disclosure of tobacco, alcohol, and marijuana use by young adolescents. The data indicate that the pipeline procedures significantly increase disclosure of tobacco and marijuana use when students are promised confidentiality but not anonymity. However, when anonymity was assured, disclosure of cigarette use was just as high without the pipeline; for marijuana use, disclosure was higher without the pipeline. No effects were observed for alcohol disclosure. These data are interpreted for their implications for prospective and cross-sectional studies.

  7. AIR-MRF: Accelerated iterative reconstruction for magnetic resonance fingerprinting.

    PubMed

    Cline, Christopher C; Chen, Xiao; Mailhe, Boris; Wang, Qiu; Pfeuffer, Josef; Nittka, Mathias; Griswold, Mark A; Speier, Peter; Nadar, Mariappan S

    2017-09-01

    Existing approaches for reconstruction of multiparametric maps with magnetic resonance fingerprinting (MRF) are currently limited by their estimation accuracy and reconstruction time. We aimed to address these issues with a novel combination of iterative reconstruction, fingerprint compression, additional regularization, and accelerated dictionary search methods. The pipeline described here, accelerated iterative reconstruction for magnetic resonance fingerprinting (AIR-MRF), was evaluated with simulations as well as phantom and in vivo scans. We found that the AIR-MRF pipeline provided reduced parameter estimation errors compared to non-iterative and other iterative methods, particularly at shorter sequence lengths. Accelerated dictionary search methods incorporated into the iterative pipeline reduced the reconstruction time at little cost in quality. Copyright © 2017 Elsevier Inc. All rights reserved.
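
    The dictionary-search step at the core of any MRF reconstruction can be sketched in a few lines of NumPy: each voxel fingerprint is matched to the dictionary atom with the largest normalized inner product, and that atom's parameters are returned. The random dictionary and parameter grid below are stand-ins; AIR-MRF's iterative reconstruction and accelerated search are not reproduced:

      import numpy as np

      rng = np.random.default_rng(0)
      n_atoms, seq_len = 5000, 400
      dictionary = rng.standard_normal((n_atoms, seq_len))   # simulated signal evolutions
      params = rng.uniform([200.0, 20.0], [2000.0, 300.0], (n_atoms, 2))  # (T1, T2) in ms

      # Pre-normalize atoms so matching reduces to one matrix-vector product.
      dict_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)

      def match_fingerprint(signal):
          corr = dict_norm @ (signal / np.linalg.norm(signal))
          return params[np.argmax(np.abs(corr))]   # (T1, T2) of best-matching atom

      t1, t2 = match_fingerprint(rng.standard_normal(seq_len))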

  8. A Comparison between the Decimated Padé Approximant and Decimated Signal Diagonalization Methods for Leak Detection in Pipelines Equipped with Pressure Sensors.

    PubMed

    Lay-Ekuakille, Aimé; Fabbiano, Laura; Vacca, Gaetano; Kitoko, Joël Kidiamboko; Kulapa, Patrice Bibala; Telesca, Vito

    2018-06-04

    Pipelines conveying fluids are considered strategic infrastructures to be protected and maintained. They generally serve for transportation of important fluids such as drinkable water, waste water, oil, gas, chemicals, etc. Monitoring and continuous testing, especially on-line, are necessary to assess the condition of pipelines. The paper presents findings related to a comparison between two spectral response algorithms based on the decimated signal diagonalization (DSD) and decimated Padé approximant (DPA) techniques that allow one to process signals delivered by pressure sensors mounted on an experimental pipeline.

  9. Towards a Fuzzy Bayesian Network Based Approach for Safety Risk Analysis of Tunnel-Induced Pipeline Damage.

    PubMed

    Zhang, Limao; Wu, Xianguo; Qin, Yawei; Skibniewski, Miroslaw J; Liu, Wenli

    2016-02-01

    Tunneling excavation is bound to produce significant disturbances to surrounding environments, and the tunnel-induced damage to adjacent underground buried pipelines is of considerable importance for geotechnical practice. A fuzzy Bayesian network (FBN)-based approach for safety risk analysis is developed in this article with detailed step-by-step procedures, consisting of risk mechanism analysis, FBN model establishment, fuzzification, FBN-based inference, defuzzification, and decision making. In accordance with the failure mechanism analysis, a tunnel-induced pipeline damage model is proposed to reveal the cause-effect relationships between the pipeline damage and its influential variables. In terms of the fuzzification process, an expert confidence indicator is proposed to reveal the reliability of the data when determining the fuzzy probability of occurrence of basic events, with both the judgment ability level and the subjectivity reliability level taken into account. By means of fuzzy Bayesian inference, the approach proposed in this article is capable of calculating the probability distribution of potential safety risks and identifying the most likely potential causes of accidents under both prior knowledge and given evidence circumstances. A case concerning the safety analysis of underground buried pipelines adjacent to the construction of the Wuhan Yangtze River Tunnel is presented. The results demonstrate the feasibility of the proposed FBN approach and its application potential. The proposed approach can be used as a decision tool to provide support for safety assurance and management in tunnel construction, and thus increase the likelihood of a successful project in a complex project environment. © 2015 Society for Risk Analysis.

  10. Method for exploratory cluster analysis and visualisation of single-trial ERP ensembles.

    PubMed

    Williams, N J; Nasuto, S J; Saddy, J D

    2015-07-30

    The validity of ensemble averaging on event-related potential (ERP) data has been questioned, due to its assumption that the ERP is identical across trials. Thus, there is a need for preliminary testing for cluster structure in the data. We propose a complete pipeline for the cluster analysis of ERP data. To increase the signal-to-noise ratio (SNR) of the raw single-trials, we used a denoising method based on Empirical Mode Decomposition (EMD). Next, we used a bootstrap-based method to determine the number of clusters, through a measure called the Stability Index (SI). We then used a clustering algorithm based on a Genetic Algorithm (GA) to define initial cluster centroids for subsequent k-means clustering. Finally, we visualised the clustering results through a scheme based on Principal Component Analysis (PCA). After validating the pipeline on simulated data, we tested it on data from two experiments - a P300 speller paradigm on a single subject and a language processing study on 25 subjects. Results revealed evidence for the existence of 6 clusters in one experimental condition from the language processing study. Further, a two-way chi-square test revealed an influence of subject on cluster membership. Our analysis operates on denoised single-trials; the number of clusters is determined in a principled manner and the results are presented through an intuitive visualisation. Given the cluster structure in some experimental conditions, we suggest application of cluster analysis as a preliminary step before ensemble averaging. Copyright © 2015 Elsevier B.V. All rights reserved.
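
    A condensed sketch of the later pipeline stages, assuming trials are already denoised: pick the number of clusters with a bootstrap stability heuristic, cluster with k-means, and project to two dimensions with PCA for visualization. The adjusted Rand index stands in for the paper's Stability Index, and the EMD denoising and GA-based centroid seeding are not reproduced:

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.decomposition import PCA
      from sklearn.metrics import adjusted_rand_score

      def stability(trials, k, n_boot=20, seed=0):
          # Agreement between a reference clustering and clusterings fit on
          # bootstrap resamples; higher suggests a more stable cluster count.
          rng = np.random.default_rng(seed)
          ref = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(trials)
          scores = []
          for _ in range(n_boot):
              idx = rng.choice(len(trials), size=len(trials), replace=True)
              boot = KMeans(n_clusters=k, n_init=10).fit(trials[idx])
              scores.append(adjusted_rand_score(ref, boot.predict(trials)))
          return float(np.mean(scores))

      trials = np.random.default_rng(1).standard_normal((120, 256))  # trials x samples
      k_best = max(range(2, 7), key=lambda k: stability(trials, k))
      labels = KMeans(n_clusters=k_best, n_init=10).fit_predict(trials)
      coords = PCA(n_components=2).fit_transform(trials)   # 2-D view for plotting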

  11. Analysis of simultaneous MEG and intracranial LFP recordings during Deep Brain Stimulation: a protocol and experimental validation

    PubMed Central

    Oswal, Ashwini; Jha, Ashwani; Neal, Spencer; Reid, Alphonso; Bradbury, David; Aston, Peter; Limousin, Patricia; Foltynie, Tom; Zrinzo, Ludvic; Brown, Peter; Litvak, Vladimir

    2016-01-01

    Background: Deep Brain Stimulation (DBS) is an effective treatment for several neurological and psychiatric disorders. In order to gain insights into the therapeutic mechanisms of DBS and to advance future therapies a better understanding of the effects of DBS on large-scale brain networks is required. New method: In this paper, we describe an experimental protocol and analysis pipeline for simultaneously performing DBS and intracranial local field potential (LFP) recordings at a target brain region during concurrent magnetoencephalography (MEG) measurement. Firstly we describe a phantom setup that allowed us to precisely characterise the MEG artefacts that occurred during DBS at clinical settings. Results: Using the phantom recordings we demonstrate that with MEG beamforming it is possible to recover oscillatory activity synchronised to a reference channel, despite the presence of high amplitude artefacts evoked by DBS. Finally, we highlight the applicability of these methods by illustrating in a single patient with Parkinson's disease (PD), that changes in cortical-subthalamic nucleus coupling can be induced by DBS. Comparison with existing approaches: To our knowledge this paper provides the first technical description of a recording and analysis pipeline for combining simultaneous cortical recordings using MEG, with intracranial LFP recordings of a target brain nucleus during DBS. PMID:26698227

  12. Hybrid Semantic Analysis for Mapping Adverse Drug Reaction Mentions in Tweets to Medical Terminology.

    PubMed

    Emadzadeh, Ehsan; Sarker, Abeed; Nikfarjam, Azadeh; Gonzalez, Graciela

    2017-01-01

    Social networks, such as Twitter, have become important sources for active monitoring of user-reported adverse drug reactions (ADRs). Automatic extraction of ADR information can be crucial for healthcare providers, drug manufacturers, and consumers. However, because of the non-standard nature of social media language, automatically extracted ADR mentions need to be mapped to standard forms before they can be used by operational pharmacovigilance systems. We propose a modular natural language processing pipeline for mapping (normalizing) colloquial mentions of ADRs to their corresponding standardized identifiers. We seek to accomplish this task in a way that enables customization of the pipeline, so that distinct unlabeled free-text resources can be incorporated and the system can be used for other normalization tasks. Our approach, which we call Hybrid Semantic Analysis (HSA), sequentially employs rule-based and semantic matching algorithms for mapping user-generated mentions to concept IDs in the Unified Medical Language System vocabulary. The semantic matching component of HSA is adaptive in nature and uses a regression model to combine various measures of semantic relatedness and resources to optimize normalization performance on the selected data source. On a publicly available corpus, our normalization method achieves 0.502 recall and 0.823 precision (F-measure: 0.624). Our proposed method outperforms a baseline based on latent semantic analysis and another that uses MetaMap.
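
    The semantic-matching half of such a normalizer can be approximated with character n-gram TF-IDF vectors and cosine similarity, as in the sketch below. The concept list is a toy UMLS-style lookup; HSA's regression-combined relatedness measures and rule-based first pass are not reproduced:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      concepts = {                      # hypothetical UMLS-style entries
          "C0018681": "headache",
          "C0027497": "nausea",
          "C0917801": "insomnia",
      }
      vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
      term_matrix = vec.fit_transform(list(concepts.values()))

      def normalize_mention(mention):
          # Return the concept ID whose standard term is closest to the raw mention.
          sims = cosine_similarity(vec.transform([mention]), term_matrix)[0]
          return list(concepts)[sims.argmax()]

      print(normalize_mention("worst head ache ever"))   # maps to C0018681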

  13. Data Pre-Processing for Label-Free Multiple Reaction Monitoring (MRM) Experiments

    PubMed Central

    Chung, Lisa M.; Colangelo, Christopher M.; Zhao, Hongyu

    2014-01-01

    Multiple Reaction Monitoring (MRM) conducted on a triple quadrupole mass spectrometer allows researchers to quantify the expression levels of a set of target proteins. Each protein is often characterized by several unique peptides that can be detected by monitoring predetermined fragment ions, called transitions, for each peptide. Concatenating large numbers of MRM transitions into a single assay enables simultaneous quantification of hundreds of peptides and proteins. In recognition of the important role that MRM can play in hypothesis-driven research and its increasing impact on clinical proteomics, targeted proteomics such as MRM was recently selected as the Nature Method of the Year. However, there are many challenges in MRM applications, especially data pre-processing where many steps still rely on manual inspection of each observation in practice. In this paper, we discuss an analysis pipeline to automate MRM data pre-processing. This pipeline includes data quality assessment across replicated samples, outlier detection, identification of inaccurate transitions, and data normalization. We demonstrate the utility of our pipeline through its applications to several real MRM data sets. PMID:24905083
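
    Two of the pre-processing steps described here, per-sample median normalization and robust outlier flagging, can be sketched as follows; the samples-by-transitions layout and the 3.5 cutoff are illustrative assumptions, not the paper's procedure:

      import numpy as np

      def preprocess(log_intensities, mad_cut=3.5):
          # Rows are replicate samples, columns are transitions.
          # 1) Equalize sample medians so runs are comparable.
          centered = log_intensities - np.median(log_intensities, axis=1, keepdims=True)
          # 2) Flag transitions whose spread across replicates is anomalous,
          #    using a median-absolute-deviation (MAD) robust z-score.
          spread = centered.std(axis=0)
          med = np.median(spread)
          mad = np.median(np.abs(spread - med))
          robust_z = 0.6745 * (spread - med) / mad
          return centered, robust_z > mad_cut   # normalized data, outlier mask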

  14. Data Pre-Processing for Label-Free Multiple Reaction Monitoring (MRM) Experiments.

    PubMed

    Chung, Lisa M; Colangelo, Christopher M; Zhao, Hongyu

    2014-06-05

    Multiple Reaction Monitoring (MRM) conducted on a triple quadrupole mass spectrometer allows researchers to quantify the expression levels of a set of target proteins. Each protein is often characterized by several unique peptides that can be detected by monitoring predetermined fragment ions, called transitions, for each peptide. Concatenating large numbers of MRM transitions into a single assay enables simultaneous quantification of hundreds of peptides and proteins. In recognition of the important role that MRM can play in hypothesis-driven research and its increasing impact on clinical proteomics, targeted proteomics such as MRM was recently selected as the Nature Method of the Year. However, there are many challenges in MRM applications, especially data pre-processing where many steps still rely on manual inspection of each observation in practice. In this paper, we discuss an analysis pipeline to automate MRM data pre-processing. This pipeline includes data quality assessment across replicated samples, outlier detection, identification of inaccurate transitions, and data normalization. We demonstrate the utility of our pipeline through its applications to several real MRM data sets.

  15. Uplifting Behavior of Shallow Buried Pipe in Liquefiable Soil by Dynamic Centrifuge Test

    PubMed Central

    Liu, Jingwen; Ling, Daosheng

    2014-01-01

    Underground pipelines are widely used in so-called lifeline engineering. Seismic surveys show that damage to underground pipelines from soil liquefaction has been among the most serious, with failures mainly in the form of pipeline uplifting. In the present study, dynamic centrifuge model tests were conducted to study the uplifting behavior of shallow-buried pipelines subjected to seismic vibration in liquefiable sites. The uplifting mechanism was discussed through the responses of the pore water pressure and earth pressure around the pipeline. Additionally, an analysis of the forces acting on the pipeline before and during vibration was introduced and shown to be reasonable by comparison of measured and calculated results. The uplifting behavior of the pipe results from the combined effect of multiple forces and is highly dependent on the excess pore pressure. PMID:25121140

  16. Modeling flows of heterogeneous media in pipelines when substantiating operating conditions of hydrocarbon field transportation systems

    NASA Astrophysics Data System (ADS)

    Dudin, S. M.; Novitskiy, D. V.

    2018-05-01

    Researchers at VNIIgaz, Giprovostokneft, Kuibyshev NIINP, the Grozny Petroleum Institute, and elsewhere have modeled heterogeneous medium flows in pipelines under laboratory conditions. Viewed objectively, the empirical relationships and calculation procedures they obtained for pipelines transporting multiphase products constitute a bank of experimental data on the problem of pipeline transportation of multiphase systems. Based on an analysis of the published work, the main design requirements for experimental installations intended to study gas-liquid flow regimes in pipelines were formulated and taken into account by the authors when creating the experimental stand. The article describes the results of experimental studies of the flow regimes of a gas-liquid mixture in a pipeline, gives a methodological description of the experimental installation, and describes the software of the experimental scientific and educational stand developed with the participation of the authors.

  17. Mathematical modeling of non-stationary gas flow in gas pipeline

    NASA Astrophysics Data System (ADS)

    Fetisov, V. G.; Nikolaev, A. K.; Lykov, Y. V.; Duchnevich, L. N.

    2018-03-01

    An analysis of gas transportation system operation shows that, for a considerable part of the time, pipelines operate under non-stationary gas flow. Pressure and flow rate vary along the length of the pipeline and over time as a result of uneven consumption and offtake, switching compressor units on and off, closing stop valves, and the emergence of emergency leaks. Operational management of such regimes is complicated by the difficulty of reconciling the operating modes of individual pipeline sections with one another and with compressor stations. Identifying the causes of changes in the operating mode of the pipeline system and revealing the patterns of these changes determine the choice of its parameters. Therefore, knowledge of the laws governing the main technological parameters of gas pumping through pipelines under non-stationary flow is of great practical importance.
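
    The abstract does not reproduce the model itself; for reference, 1-D transient isothermal pipeline flow is conventionally described by the continuity and momentum equations with a friction term (the notation below is the textbook form, not necessarily the authors'):

      \frac{\partial \rho}{\partial t} + \frac{\partial (\rho v)}{\partial x} = 0,
      \qquad
      \frac{\partial (\rho v)}{\partial t} + \frac{\partial (\rho v^{2} + p)}{\partial x}
        + \frac{\lambda \rho v \lvert v \rvert}{2D} + \rho g \sin\alpha = 0,
      \qquad
      p = c^{2} \rho,

    where rho is the gas density, v the velocity, p the pressure, lambda the friction factor, D the pipe diameter, alpha the inclination angle, and c the isothermal speed of sound.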

  18. Reduction of movement resistance force of pipeline in horizontal curved well at stage of designing underground passage

    NASA Astrophysics Data System (ADS)

    Toropov, V. S.; Toropov, S. Yu

    2018-05-01

    A method has been developed to reduce the resistance to movement of a pipeline in a horizontal curved well in the construction of underground passages using trenchless technologies. The method can be applied at the design stage. The idea of the proposed method is to bring the trajectory of the designed trenchless passage close to the equilibrium profile. It has been proved that, in order to reduce the resistance to movement of the pipeline arising from contact with the borehole wall, the profile of its initial and final sections must correspond, depending on the initial conditions, to a parabola or a hyperbolic cosine (catenary) curve, as written out below. Analytical dependences are obtained that extend the methods for calculating traction effort in trenchless construction to the case where the well profile is given by an arbitrary function.
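
    For reference, the two profile shapes the abstract names can be written, under the usual assumptions of a horizontal tension H and a weight w per unit length (symbols assumed here, not taken from the paper), as

      y(x) = \frac{w}{2H}\,x^{2} \quad \text{(parabolic approximation)},
      \qquad
      y(x) = \frac{H}{w}\left(\cosh\frac{w x}{H} - 1\right) \quad \text{(hyperbolic cosine, i.e. catenary)},

    the parabola being the small-sag limit of the catenary.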

  19. 76 FR 26793 - Pipeline Safety: Request for Special Permit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-09

    ... certain pipeline safety regulations. The request includes a technical analysis provided by the operator... at 202-366-0113, or e-mail at [email protected] . Technical: Steve Nanney by telephone at 713-628...

  20. Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle

    2009-10-19

    Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and, based on the information derived, performing analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.

  1. Study on Failure of Third-Party Damage for Urban Gas Pipeline Based on Fuzzy Comprehensive Evaluation.

    PubMed

    Li, Jun; Zhang, Hong; Han, Yinshan; Wang, Baodong

    2016-01-01

    Addressing the diversity, complexity, and uncertainty of third-party damage accidents, the failure probability of third-party damage to urban gas pipelines was evaluated using analytic hierarchy process theory and fuzzy mathematics. A fault tree of third-party damage containing 56 basic events was built through hazard identification. Fuzzy evaluation of basic event probabilities was conducted by expert judgment using fuzzy set membership functions. The weight of each expert was determined and the evaluation opinions were modified using an improved analytic hierarchy process, and the failure probability of third-party damage to the urban gas pipeline was calculated. Taking the gas pipelines of a large provincial capital city as an example, the risk assessment results of the method were shown to conform to the actual situation, providing a basis for safety risk prevention.
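
    The fuzzy expert-judgment step can be illustrated schematically: each expert states a triangular fuzzy probability (a, m, b) for a basic event, the opinions are combined with expert weights, and the aggregate is defuzzified by the centroid of the triangle. The weights and numbers below are invented, and the paper's improved-AHP weighting is not reproduced:

      import numpy as np

      opinions = np.array([      # one (a, m, b) triangular fuzzy probability per expert
          [0.01, 0.03, 0.06],
          [0.02, 0.04, 0.08],
          [0.01, 0.02, 0.05],
      ])
      weights = np.array([0.5, 0.3, 0.2])    # assumed expert weights, summing to 1

      a, m, b = weights @ opinions           # weighted fuzzy aggregation
      crisp_probability = (a + m + b) / 3.0  # centroid defuzzification of a triangle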

  2. An Investigation of the Cryogenic Freezing of Water in Non-Metallic Pipelines

    NASA Astrophysics Data System (ADS)

    Martin, C. I.; Richardson, R. N.; Bowen, R. J.

    2004-06-01

    Pipe freezing is increasingly used in a range of industries to solve otherwise intractable pipeline maintenance and servicing problems. This paper presents the interim results from an experimental study on deliberate freezing of polymeric pipelines. Previous and contemporary works are reviewed. The object of the current research is to confirm the feasibility of ice plug formation within a polymeric pipe as a method of isolation. Tests have been conducted on a range of polymeric pipes of various sizes. The results reported here all relate to freezing of horizontal pipelines. In each case the process of plug formation was photographed, the frozen plug was pressure tested, and the pipe was inspected for signs of damage resulting from the freeze procedure. The time to freeze was recorded and various temperatures logged. These tests have demonstrated that despite the poor thermal and mechanical properties of the polymers, freezing offers a viable alternative method of isolation in polymeric pipelines.

  3. Implementing an X-ray validation pipeline for the Protein Data Bank

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gore, Swanand; Velankar, Sameer; Kleywegt, Gerard J., E-mail: gerard@ebi.ac.uk

    2012-04-01

    The implementation of a validation pipeline, based on community recommendations, for future depositions of X-ray crystal structures in the Protein Data Bank is described. There is an increasing realisation that the quality of the biomacromolecular structures deposited in the Protein Data Bank (PDB) archive needs to be assessed critically using established and powerful validation methods. The Worldwide Protein Data Bank (wwPDB) organization has convened several Validation Task Forces (VTFs) to advise on the methods and standards that should be used to validate all of the entries already in the PDB as well as all structures that will be deposited in the future. The recommendations of the X-ray VTF are currently being implemented in a software pipeline. Here, ongoing work on this pipeline is briefly described as well as ways in which validation-related information could be presented to users of structural data.

  4. Rapid, Vehicle-Based Identification of Location and Magnitude of Urban Natural Gas Pipeline Leaks.

    PubMed

    von Fischer, Joseph C; Cooley, Daniel; Chamberlain, Sam; Gaylord, Adam; Griebenow, Claire J; Hamburg, Steven P; Salo, Jessica; Schumacher, Russ; Theobald, David; Ham, Jay

    2017-04-04

    Information about the location and magnitudes of natural gas (NG) leaks from urban distribution pipelines is important for minimizing greenhouse gas emissions and optimizing investment in pipeline management. To enable rapid collection of such data, we developed a relatively simple method using high-precision methane analyzers in Google Street View cars. Our data indicate that this automated leak survey system can document patterns in leak location and magnitude within and among cities, even without wind data. We found that urban areas with prevalent corrosion-prone distribution lines (Boston, MA; Staten Island, NY; and Syracuse, NY) leaked approximately 25-fold more methane than cities with more modern pipeline materials (Burlington, VT, and Indianapolis, IN). Although this mobile monitoring method produces conservative estimates of leak rates and leak counts, it can still help prioritize both leak repairs and replacement of leak-prone sections of distribution lines, thus minimizing methane emissions over short and long terms.
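
    The core of turning a mobile analyzer trace into leak indications can be sketched as a running-background excess test with grouping of consecutive elevated samples; the window length and the 0.2 ppm excess threshold are illustrative assumptions only:

      import numpy as np

      def find_leaks(ppm, window=301, excess=0.2):
          # ppm: 1-D array of methane readings along the drive track.
          # Running-median background estimated over 'window' samples.
          pad = window // 2
          padded = np.pad(ppm, pad, mode="edge")
          background = np.array([np.median(padded[i:i + window])
                                 for i in range(len(ppm))])
          idx = np.flatnonzero(ppm - background > excess)
          if idx.size == 0:
              return []
          # Group consecutive elevated samples into discrete leak events.
          groups = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
          return [(int(g[0]), int(g[-1]), float((ppm - background)[g].max()))
                  for g in groups]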

  5. Novel approaches for bioinformatic analysis of salivary RNA sequencing data for development.

    PubMed

    Kaczor-Urbanowicz, Karolina Elzbieta; Kim, Yong; Li, Feng; Galeev, Timur; Kitchen, Rob R; Gerstein, Mark; Koyano, Kikuye; Jeong, Sung-Hee; Wang, Xiaoyan; Elashoff, David; Kang, So Young; Kim, Su Mi; Kim, Kyoung; Kim, Sung; Chia, David; Xiao, Xinshu; Rozowsky, Joel; Wong, David T W

    2018-01-01

    Analysis of RNA sequencing (RNA-Seq) data in human saliva is challenging. The lack of standardization and unification of the bioinformatic procedures undermines saliva's diagnostic potential, which motivated us to perform this study. We applied the principal pipelines for bioinformatic analysis of small RNA-Seq data from the saliva of 98 healthy Korean volunteers, including either direct or indirect mapping of the reads to the human genome using Bowtie1. Analysis of alignments to exogenous genomes by another pipeline revealed that almost all of the reads map to bacterial genomes. Thus, salivary exRNA has fundamental properties that warrant the design of unique additional steps when performing the bioinformatic analysis. Our pipelines can serve as potential guidelines for processing of RNA-Seq data of human saliva. Processing and analysis results of the experimental data generated by the exceRpt (v4.6.3) small RNA-seq pipeline (github.gersteinlab.org/exceRpt) are available from the exRNA atlas (exrna-atlas.org). Alignment to exogenous genomes and their quantification results were used in this paper for the analyses of small RNAs of exogenous origin. dtww@ucla.edu. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  6. Kepler Science Operations Center Architecture

    NASA Technical Reports Server (NTRS)

    Middour, Christopher; Klaus, Todd; Jenkins, Jon; Pletcher, David; Cote, Miles; Chandrasekaran, Hema; Wohler, Bill; Girouard, Forrest; Gunter, Jay P.; Uddin, Kamal; hide

    2010-01-01

    We give an overview of the operational concepts and architecture of the Kepler Science Data Pipeline. Designed, developed, operated, and maintained by the Science Operations Center (SOC) at NASA Ames Research Center, the Kepler Science Data Pipeline is a central element of the Kepler Ground Data System. The SOC charter is to analyze stellar photometric data from the Kepler spacecraft and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler Science Data Pipeline, including the hardware infrastructure, scientific algorithms, and operational procedures. The SOC consists of an office at Ames Research Center, software development and operations departments, and a data center that hosts the computers required to perform data analysis. We discuss the high-performance, parallel computing software modules of the Kepler Science Data Pipeline that perform transit photometry, pixel-level calibration, systematic error-correction, attitude determination, stellar target management, and instrument characterization. We explain how data processing environments are divided to support operational processing and test needs. We explain the operational timelines for data processing and the data constructs that flow into the Kepler Science Data Pipeline.

  7. Query-Driven Visualization and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruebel, Oliver; Bethel, E. Wes; Prabhat, Mr.

    2012-11-01

    This report focuses on an approach to high performance visualization and analysis, termed query-driven visualization and analysis (QDV). QDV aims to reduce the amount of data that needs to be processed by the visualization, analysis, and rendering pipelines. The goal of the data reduction process is to separate out data that is "scientifically interesting" and to focus visualization, analysis, and rendering on that interesting subset. The premise is that for any given visualization or analysis task, the data subset of interest is much smaller than the larger, complete data set. This strategy, extracting smaller data subsets of interest and focusing the visualization processing on these subsets, is complementary to the approach of increasing the capacity of the visualization, analysis, and rendering pipelines through parallelism. This report discusses the fundamental concepts in QDV, their relationship to different stages in the visualization and analysis pipelines, and presents QDV's application to problems in diverse areas, ranging from forensic cybersecurity to high energy physics.
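
    The essence of QDV fits in a few lines: evaluate a compound boolean predicate over large arrays and hand only the matching subset to the downstream visualization, analysis, and rendering stages. The field names and thresholds below are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000_000
      energy = rng.exponential(1.0, n)     # stand-in particle energies
      px = rng.normal(0.0, 1.0, n)         # stand-in momentum component

      mask = (energy > 5.0) & (np.abs(px) < 0.5)   # the "query"
      subset = np.flatnonzero(mask)                # indices of interesting records
      print(f"{subset.size} of {n} records selected for further processing")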

  8. Spatiotemporal alignment of in utero BOLD-MRI series.

    PubMed

    Turk, Esra Abaci; Luo, Jie; Gagoski, Borjan; Pascau, Javier; Bibbo, Carolina; Robinson, Julian N; Grant, P Ellen; Adalsteinsson, Elfar; Golland, Polina; Malpica, Norberto

    2017-08-01

    To present a method for spatiotemporal alignment of in utero magnetic resonance imaging (MRI) time series acquired during maternal hyperoxia, enabling improved quantitative tracking of blood oxygen level-dependent (BOLD) signal changes that characterize oxygen transport through the placenta to fetal organs. The proposed pipeline for spatiotemporal alignment of images acquired with a single-shot gradient echo echo-planar imaging includes 1) signal nonuniformity correction, 2) intravolume motion correction based on nonrigid registration, 3) correction of motion and nonrigid deformations across volumes, and 4) detection of the outlier volumes to be discarded from subsequent analysis. BOLD MRI time series collected from 10 pregnant women during 3T scans were analyzed using this pipeline. To assess pipeline performance, signal fluctuations between consecutive timepoints were examined. In addition, volume overlap and distance between manual region of interest (ROI) delineations in a subset of frames and the delineations obtained through propagation of the ROIs from the reference frame were used to quantify alignment accuracy. A previously demonstrated rigid registration approach was used for comparison. The proposed pipeline improved anatomical alignment of placenta and fetal organs over the state-of-the-art rigid motion correction methods. In particular, unexpected temporal signal fluctuations during the first normoxia period were significantly decreased (P < 0.01) and volume overlap and distance between region boundaries measures were significantly improved (P < 0.01). The proposed approach to align MRI time series enables more accurate quantitative studies of placental function by improving spatiotemporal alignment across placenta and fetal organs. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017;46:403-412. © 2017 International Society for Magnetic Resonance in Medicine.

  9. Validation of Coevolving Residue Algorithms via Pipeline Sensitivity Analysis: ELSC and OMES and ZNMI, Oh My!

    PubMed Central

    Brown, Christopher A.; Brown, Kevin S.

    2010-01-01

    Correlated amino acid substitution algorithms attempt to discover groups of residues that co-fluctuate due to either structural or functional constraints. Although these algorithms could inform both ab initio protein folding calculations and evolutionary studies, their utility for these purposes has been hindered by a lack of confidence in their predictions due to hard-to-control sources of error. To complicate matters further, naive users are confronted with a multitude of methods to choose from, in addition to the mechanics of assembling and pruning a dataset. We first introduce a new pair scoring method, called ZNMI (Z-scored-product Normalized Mutual Information), which drastically improves the performance of mutual information for co-fluctuating residue prediction. Second, and more importantly, we recast the process of finding coevolving residues in proteins as a data-processing pipeline inspired by the medical imaging literature. We construct an ensemble of alignment partitions that can be used in a cross-validation scheme to assess the effects of choices made during the procedure on the resulting predictions. This pipeline sensitivity study gives a measure of reproducibility (how similar are the predictions given perturbations to the pipeline?) and accuracy (are residue pairs with large couplings on average close in tertiary structure?). We choose a handful of published methods, along with ZNMI, and compare their reproducibility and accuracy on three diverse protein families. We find that (i) of the algorithms tested, while none appear to be both highly reproducible and accurate, ZNMI is one of the most accurate by far and (ii) while users should be wary of predictions drawn from a single alignment, considering an ensemble of sub-alignments can help to determine both highly accurate and reproducible couplings. Our cross-validation approach should be of interest both to developers and end users of algorithms that try to detect correlated amino acid substitutions. PMID:20531955
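
    The raw ingredients behind such scores can be sketched directly: mutual information between pairs of alignment columns, followed by a column-wise z-scoring of the MI matrix whose row/column product approximates a "z-scored product" score. The true ZNMI normalization differs in detail and is not reproduced here; the toy alignment is invented:

      import numpy as np
      from collections import Counter

      def column_mi(col_i, col_j):
          # Mutual information between two alignment columns (in bits).
          n = len(col_i)
          pi, pj = Counter(col_i), Counter(col_j)
          pij = Counter(zip(col_i, col_j))
          return sum((c / n) * np.log2((c / n) / ((pi[a] / n) * (pj[b] / n)))
                     for (a, b), c in pij.items())

      def zscored_product_mi(alignment):   # alignment: equal-length sequences
          cols = list(zip(*alignment))
          L = len(cols)
          mi = np.array([[column_mi(cols[i], cols[j]) for j in range(L)]
                         for i in range(L)])
          z = (mi - mi.mean(axis=0)) / mi.std(axis=0)   # z-score within each column
          return z * z.T                                # symmetric product score

      scores = zscored_product_mi(["ACDA", "ACDA", "GHDA", "GHEG"])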

  10. Computer-aided classification of lung nodules on computed tomography images via deep learning technique

    PubMed Central

    Hua, Kai-Lung; Hsu, Che-Hao; Hidayati, Shintami Chusnul; Cheng, Wen-Huang; Chen, Yu-Jen

    2015-01-01

    Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scan is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning of classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of automatic feature exploitation and seamless performance tuning. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain. PMID:26346558

  11. Experiment Research on Hot-Rolling Processing of Nonsmooth Pit Surface.

    PubMed

    Gu, Yun-Qing; Fan, Tian-Xing; Mou, Jie-Gang; Yu, Wei-Bo; Zhao, Gang; Wang, Evan

    2016-01-01

    In order to produce a nonsmooth drag-reducing surface structure on the inner polymer coating of oil and gas pipelines and improve the efficiency of pipeline transport, a structural model of the machining robot for the pipe inner coating is established. Based on the machining robot, an experimental technique is applied to study the embossing and coating behavior of the rolling-head, and the molding process rules under different rolling temperatures, speeds, and depths are analyzed. An orthogonal experiment analysis method is also employed to analyze the effects of the hot-rolling apparatus on the embossed pit morphology and rolling quality. The results reveal that raising the rolling temperature or decreasing the rolling speed improves the pit structure replication rate of the polymer coating surface, while the rolling feed has little effect on the replication rate. After the rolling-head separates from the polymer coating, rebounding and refluxing of the coating occur, which is the main source of inaccuracy in the process. A continuous hot-rolling method is used in the robot, and a dynamic analysis of the hot-rolling process of the processing apparatus is performed.

  12. An experimental and computational investigation of flow in a radial inlet of an industrial pipeline centrifugal compressor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flathers, M.B.; Bache, G.E.; Rainsberger, R.

    1996-04-01

    The flow field of a complex three-dimensional radial inlet for an industrial pipeline centrifugal compressor has been experimentally determined on a half-scale model. Based on the experimental results, inlet guide vanes have been designed to correct pressure and swirl angle distribution deficiencies. The unvaned and vaned inlets are analyzed with a commercially available fully three-dimensional viscous Navier-Stokes code. Since experimental results were available prior to the numerical study, the unvaned analysis is considered a postdiction while the vaned analysis is considered a prediction. The computational results of the unvaned inlet have been compared to the previously obtained experimental results. The experimental method utilized for the unvaned inlet is repeated for the vaned inlet and the data have been used to verify the computational results. The paper will discuss experimental, design, and computational procedures, grid generation, boundary conditions, and experimental versus computational methods. Agreement between experimental and computational results is very good, both in prediction and postdiction modes. The results of this investigation indicate that CFD offers a measurable advantage in design, schedule, and cost and can be applied to complex, three-dimensional radial inlets.

  13. Experiment Research on Hot-Rolling Processing of Nonsmooth Pit Surface

    PubMed Central

    Gu, Yun-qing; Fan, Tian-xing; Mou, Jie-gang; Yu, Wei-bo; Zhao, Gang; Wang, Evan

    2016-01-01

    In order to produce a nonsmooth drag-reducing surface structure on the inner polymer coating of oil and gas pipelines and improve the efficiency of pipeline transport, a structural model of the machining robot for the pipe inner coating is established. Based on the machining robot, an experimental technique is applied to study the embossing and coating behavior of the rolling-head, and the molding process rules under different rolling temperatures, speeds, and depths are analyzed. An orthogonal experiment analysis method is also employed to analyze the effects of the hot-rolling apparatus on the embossed pit morphology and rolling quality. The results reveal that raising the rolling temperature or decreasing the rolling speed improves the pit structure replication rate of the polymer coating surface, while the rolling feed has little effect on the replication rate. After the rolling-head separates from the polymer coating, rebounding and refluxing of the coating occur, which is the main source of inaccuracy in the process. A continuous hot-rolling method is used in the robot, and a dynamic analysis of the hot-rolling process of the processing apparatus is performed. PMID:27022235

  14. GI-POP: a combinational annotation and genomic island prediction pipeline for ongoing microbial genome projects.

    PubMed

    Lee, Chi-Ching; Chen, Yi-Ping Phoebe; Yao, Tzu-Jung; Ma, Cheng-Yu; Lo, Wei-Cheng; Lyu, Ping-Chiang; Tang, Chuan Yi

    2013-04-10

    Sequencing of microbial genomes is important because microbial genomes carry genes for antibiotic and pathogenetic activities. However, even with the help of new assembling software, finishing a whole genome is a time-consuming task. In most bacteria, pathogenetic or antibiotic genes are carried in genomic islands. Therefore, a quick genomic island (GI) prediction method is useful for genomes whose sequencing is ongoing. In this work, we built a Web server called GI-POP (http://gipop.life.nthu.edu.tw) which integrates a sequence assembling tool, a functional annotation pipeline, and a high-performance GI prediction module based on a support vector machine (SVM) method called genomic island genomic profile scanning (GI-GPS). Draft genomes from ongoing genome projects, in contigs or scaffolds, can be submitted to our Web server, which returns functional annotation and high-confidence GI predictions. GI-POP is a comprehensive annotation Web server designed for ongoing genome project analysis. Researchers can perform annotation and obtain pre-analytic information, including possible GIs, coding/non-coding sequences, and functional analysis, from their draft genomes. This pre-analytic system can provide useful information for finishing a genome sequencing project. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Computer-aided classification of lung nodules on computed tomography images via deep learning technique.

    PubMed

    Hua, Kai-Lung; Hsu, Che-Hao; Hidayati, Shintami Chusnul; Cheng, Wen-Huang; Chen, Yu-Jen

    2015-01-01

    Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scan is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning of classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of automatic feature exploitation and seamless performance tuning. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain.

  16. Applications of UT results to confirm defects findings by utilization of relevant metallurgical investigations techniques on gas/condensate pipeline working in wet sour gas environment

    NASA Astrophysics Data System (ADS)

    El-Azhari, O. A.; Gajam, S. Y.

    2015-03-01

    The gas/condensate pipeline under investigation is a 12 inch diameter, 48 km ASTM A106 steel pipeline carrying hydrocarbons containing wet CO2 and H2S. The pipeline exploded in a region 100 m from its terminal after 24 years of service. Hydrogen induced cracking (HIC) and sour gas corrosion were expected due to the presence of wet H2S in the gas analysis. In other areas of the pipeline, ultrasonic testing was performed to determine whether the pipeline could be re-operated. The results showed the presence of internal planar defects, attributed to the existence of either laminations, type II inclusions, or service defects such as HIC and stepwise cracking (SWC). Metallurgical investigations were conducted on fractured samples as per the NACE standard (TM-0284-84). The results showed macroscopic cracks in the form of SWC, and the steel microstructure contained MnS inclusions. Crack sensitivity analyses were calculated and microhardness testing was conducted. These results confirmed that the line material was suffering from sour gas deterioration. This paper correlates the field UT inspection findings with the methods investigated in the laboratory. Based on the results obtained, a new HIC-resistant pipeline material needs to be selected.

  17. Developing a Comprehensive Risk Assessment Framework for Geological Storage CO 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duncan, Ian

    2014-08-31

    The operational risks for CCS projects include: risks of capturing, compressing, transporting and injecting CO2; risks of well blowouts; the risk that CO2 will leak into shallow aquifers and contaminate potable water; and the risk that sequestered CO2 will leak into the atmosphere. This report examines these risks by using information on the risks associated with analogue activities such as CO2-based enhanced oil recovery (CO2-EOR), natural gas storage, and acid gas disposal. We have developed a new analysis of pipeline risk based on Bayesian statistical analysis. Bayesian probabilities may describe states of partial knowledge, perhaps even those related to non-repeatable events. The Bayesian approach makes it possible to utilize existing data while absorbing new information, thus lowering uncertainty in our understanding of complex systems. Incident rates for both natural gas and CO2 pipelines have been widely used in papers and reports on the risk of CO2 pipelines as proxies for the individual risk created by such pipelines. Published risk studies of CO2 pipelines suggest that the individual risk associated with CO2 pipelines is between 10^-3 and 10^-4, which reflects risk levels approaching those of mountain climbing, which many would find unacceptably high. Based on a careful analysis of natural gas pipeline failures, this report concludes that the individual risk of CO2 pipelines is likely in the range of 10^-6 to 10^-7, a range considered acceptable to negligible in most countries. If, as is commonly thought, pipelines represent the highest risk component of CCS outside of the capture plant, then this conclusion suggests that most (if not all) previous quantitative risk assessments of components of CCS may be orders of magnitude too high. The potential lethality of unexpected CO2 releases from pipelines or wells is arguably the highest risk aspect of CO2 enhanced oil recovery (CO2-EOR) and carbon capture and storage (CCS). Assertions in the CCS literature that CO2 levels of 10% for ten minutes, or 20 to 30% for a few minutes, are lethal to humans are not supported by the available evidence. The results of published experiments with animals exposed to CO2, from mice to monkeys, at both normal and depleted oxygen levels, suggest that lethal levels of CO2 toxicity are in the range of 50 to 60%. These experiments demonstrate that CO2 does not kill by asphyxia, but rather is toxic at high concentrations. It is concluded that quantitative risk assessments of CCS have overestimated the risk of fatalities by using lethality values a factor of two to six lower than the values estimated in this paper. In many dispersion models of CO2 releases from pipelines, no fatalities would be predicted if appropriate levels of CO2 lethality had been used in the analysis.
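
    The Bayesian treatment of incident rates the report describes can be sketched with the standard Gamma-Poisson conjugate pair: with a Gamma prior on the rate and Poisson-distributed incident counts, the posterior is again Gamma, and new data tighten it. The prior parameters, counts, and exposure below are invented numbers, not the report's data:

      from scipy import stats

      alpha, beta = 1.0, 1000.0       # assumed Gamma prior: shape, rate (per mile-year)
      k, exposure = 12, 250_000.0     # hypothetical incident count and mile-years observed

      # Poisson likelihood + Gamma prior -> Gamma(alpha + k, rate = beta + exposure).
      posterior = stats.gamma(a=alpha + k, scale=1.0 / (beta + exposure))
      lo, hi = posterior.ppf([0.05, 0.95])
      print(f"90% credible interval for the incident rate: {lo:.2e} to {hi:.2e} per mile-year")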

  18. Practical Approach for Hyperspectral Image Processing in Python

    NASA Astrophysics Data System (ADS)

    Annala, L.; Eskelinen, M. A.; Hämäläinen, J.; Riihinen, A.; Pölönen, I.

    2018-04-01

    Python is a very popular programming language among data scientists around the world, and it can also be used in hyperspectral data analysis. There are some toolboxes designed for spectral imaging, such as Spectral Python and HyperSpy, but there is a need for an analysis pipeline that is easy to use and agile enough for different solutions. We propose a Python pipeline built on the packages xarray, Holoviews, and scikit-learn. We have also developed some tools of our own, MaskAccessor, VisualisorAccessor, and a spectral index library, which likewise serve our goal of easy and agile data processing. In this paper we present our processing pipeline and demonstrate it in practice.
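
    A tiny end-to-end version of the kind of pipeline proposed here: hold a hyperspectral cube in xarray, compute a normalized-difference spectral index, and reduce the spectra with scikit-learn PCA. The dimension names, band choices, and random cube are assumptions for illustration; the authors' MaskAccessor and VisualisorAccessor tools are not reproduced:

      import numpy as np
      import xarray as xr
      from sklearn.decomposition import PCA

      cube = xr.DataArray(
          np.random.default_rng(0).random((100, 100, 60)),
          dims=("y", "x", "band"),
          coords={"band": np.linspace(400, 1000, 60)},   # wavelengths in nm
      )

      # NDVI-like spectral index from two bands.
      nir = cube.sel(band=850, method="nearest")
      red = cube.sel(band=660, method="nearest")
      index = (nir - red) / (nir + red)

      # Flatten pixels to (n_samples, n_bands) and reduce dimensionality.
      flat = cube.stack(pixel=("y", "x")).transpose("pixel", "band").values
      scores = PCA(n_components=5).fit_transform(flat)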

  19. Solvepol: A Reduction Pipeline for Imaging Polarimetry Data

    NASA Astrophysics Data System (ADS)

    Ramírez, Edgar A.; Magalhães, Antônio M.; Davidson, James W., Jr.; Pereyra, Antonio; Rubinho, Marcelo

    2017-05-01

    We present a new, fully automated data pipeline, Solvepol, designed to reduce and analyze polarimetric data. It has been optimized for imaging data from the calcite Savart prism plate-based IAGPOL polarimeter of the Instituto de Astronomía, Geofísica e Ciências Atmosféricas (IAG) of the University of São Paulo (USP). Solvepol is also the basis of a reduction pipeline for the wide-field optical polarimeter that will execute SOUTH POL, a survey of the polarized southern sky. Solvepol was written in the Interactive Data Language (IDL) and is based on the Image Reduction and Analysis Facility (IRAF) task PCCDPACK, developed by our polarimetry group. We present and discuss reduced data from standard stars and other fields and compare these results with those obtained in the IRAF environment. Our analysis shows that Solvepol, in addition to being a fully automated pipeline, produces results consistent with those reduced by PCCDPACK and reported in the literature.
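
    Whatever the instrument, an imaging-polarimetry pipeline like this ultimately reduces to the degree and angle of linear polarization from the normalized Stokes parameters q = Q/I and u = U/I. The sketch below shows those final formulas with a standard first-order debiasing step, using invented input values; Solvepol's fit over the polarimeter's modulated measurements is not reproduced:

      import numpy as np

      def linear_polarization(q, u, sigma):
          p = np.hypot(q, u)                                      # fractional polarization
          p_debiased = np.sqrt(np.maximum(p**2 - sigma**2, 0.0))  # debias for noise
          theta = 0.5 * np.degrees(np.arctan2(u, q))              # position angle, degrees
          return p_debiased, theta

      p, theta = linear_polarization(q=0.012, u=-0.007, sigma=0.001)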

  20. Evaluation and integration of functional annotation pipelines for newly sequenced organisms: the potato genome as a test case.

    PubMed

    Amar, David; Frades, Itziar; Danek, Agnieszka; Goldberg, Tatyana; Sharma, Sanjeev K; Hedley, Pete E; Proux-Wera, Estelle; Andreasson, Erik; Shamir, Ron; Tzfadia, Oren; Alexandersson, Erik

    2014-12-05

    For most organisms, even if their genome sequence is available, little functional information about individual genes or proteins exists. Several annotation pipelines have been developed for functional analysis based on sequence, 'omics', and literature data. However, researchers encounter little guidance on how well they perform. Here, we used the recently sequenced potato genome as a case study. The potato genome was selected because it is newly sequenced and potato is a non-model plant, even though relatively ample information on individual potato genes exists and multiple gene expression profiles are available. We show that the automatic gene annotations of potato have low accuracy when compared to a "gold standard" based on experimentally validated potato genes. Furthermore, we evaluate six state-of-the-art annotation pipelines and show that their predictions are markedly dissimilar (Jaccard similarity coefficient of 0.27 between pipelines on average). To overcome this discrepancy, we introduce a simple GO structure-based algorithm that reconciles the predictions of the different pipelines. We show that the integrated annotation covers more genes, increases by over 50% the number of highly co-expressed GO processes, and obtains much higher agreement with the gold standard. We find that different annotation pipelines produce different results, and show how to integrate them into a unified annotation that is of higher quality than each single pipeline. We offer an improved functional annotation of both PGSC and ITAG potato gene models, as well as tools that can be applied to additional pipelines and improve annotation in other organisms. This will greatly aid future functional analysis of '-omics' datasets from potato and other organisms with newly sequenced genomes. The new potato annotations are available with this paper.
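
    Two of the operations this study relies on are easy to state concretely: the Jaccard similarity used to compare pipeline outputs, and a simple majority-vote reconciliation of per-gene GO assignments. The sketch below uses invented term sets, and the paper's GO-structure-aware algorithm is more involved than a plain vote:

      from collections import Counter

      def jaccard(a, b):
          a, b = set(a), set(b)
          return len(a & b) / len(a | b) if (a | b) else 1.0

      # Hypothetical GO assignments for one gene from three pipelines.
      assignments = [
          {"GO:0006355", "GO:0003700"},
          {"GO:0006355"},
          {"GO:0006355", "GO:0005634"},
      ]

      def consensus(term_sets, min_votes=2):
          # Keep terms predicted by at least 'min_votes' pipelines.
          votes = Counter(t for terms in term_sets for t in terms)
          return {t for t, v in votes.items() if v >= min_votes}

      print(jaccard(assignments[0], assignments[1]))   # 0.5
      print(consensus(assignments))                    # {'GO:0006355'}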

  1. Enhancement of Hydrodynamic Processes in Oil Pipelines Considering Rheologically Complex High-Viscosity Oils

    NASA Astrophysics Data System (ADS)

    Konakhina, I. A.; Khusnutdinova, E. M.; Khamidullina, G. R.; Khamidullina, A. F.

    2016-06-01

    This paper describes a mathematical model of flow-related hydrodynamic processes for rheologically complex high-viscosity bitumen oil and oil-water suspensions and presents methods to improve the design and performance of oil pipelines.

  2. Comparison of three microarray probe annotation pipelines: differences in strategies and their effect on downstream analysis

    PubMed Central

    Neerincx, Pieter BT; Casel, Pierrot; Prickett, Dennis; Nie, Haisheng; Watson, Michael; Leunissen, Jack AM; Groenen, Martien AM; Klopp, Christophe

    2009-01-01

    Background: Reliable annotation linking oligonucleotide probes to target genes is essential for functional biological analysis of microarray experiments. We used the IMAD, OligoRAP and sigReannot pipelines to update the annotation for the ARK-Genomics Chicken 20 K array as part of a joint EADGENE/SABRE workshop. In this manuscript we compare their annotation strategies and results. Furthermore, we analyse the effect of differences in updated annotation on functional analysis for an experiment involving Eimeria infected chickens and finally we propose guidelines for optimal annotation strategies. Results: IMAD, OligoRAP and sigReannot update both annotation and estimated target specificity. The 3 pipelines can assign oligos to target specificity categories although with varying degrees of resolution. Target specificity is judged based on the amount and type of oligo versus target-gene alignments (hits), which are determined by filter thresholds that users can adjust based on their experimental conditions. Linking oligos to annotation on the other hand is based on rigid rules, which differ between pipelines. For 52.7% of the oligos from a subset selected for in-depth comparison all pipelines linked to one or more Ensembl genes with consensus on 44.0%. In 31.0% of the cases none of the pipelines could assign an Ensembl gene to an oligo and for the remaining 16.3% the coverage differed between pipelines. Differences in updated annotation were mainly due to different thresholds for hybridisation potential filtering of oligo versus target-gene alignments and different policies for expanding annotation using indirect links. The differences in updated annotation packages had a significant effect on GO term enrichment analysis with consensus on only 67.2% of the enriched terms. Conclusion: In addition to flexible thresholds to determine target specificity, annotation tools should provide metadata describing the relationships between oligos and the annotation assigned to them. These relationships can then be used to judge the varying degrees of reliability allowing users to fine-tune the balance between reliability and coverage. This is important as it can have a significant effect on functional microarray analysis as exemplified by the lack of consensus on almost one third of the terms found with GO term enrichment analysis based on updated IMAD, OligoRAP or sigReannot annotation. PMID:19615109

  3. Comparison of three microarray probe annotation pipelines: differences in strategies and their effect on downstream analysis.

    PubMed

    Neerincx, Pieter Bt; Casel, Pierrot; Prickett, Dennis; Nie, Haisheng; Watson, Michael; Leunissen, Jack Am; Groenen, Martien Am; Klopp, Christophe

    2009-07-16

    Reliable annotation linking oligonucleotide probes to target genes is essential for functional biological analysis of microarray experiments. We used the IMAD, OligoRAP and sigReannot pipelines to update the annotation for the ARK-Genomics Chicken 20 K array as part of a joint EADGENE/SABRE workshop. In this manuscript we compare their annotation strategies and results. Furthermore, we analyse the effect of differences in updated annotation on functional analysis for an experiment involving Eimeria-infected chickens, and finally we propose guidelines for optimal annotation strategies. IMAD, OligoRAP and sigReannot update both annotation and estimated target specificity. The three pipelines can assign oligos to target-specificity categories, although with varying degrees of resolution. Target specificity is judged based on the amount and type of oligo versus target-gene alignments (hits), which are determined by filter thresholds that users can adjust to their experimental conditions. Linking oligos to annotation, on the other hand, is based on rigid rules, which differ between pipelines. For 52.7% of the oligos from a subset selected for in-depth comparison, all pipelines linked to one or more Ensembl genes, with consensus on 44.0%. In 31.0% of the cases none of the pipelines could assign an Ensembl gene to an oligo, and for the remaining 16.3% the coverage differed between pipelines. Differences in updated annotation were mainly due to different thresholds for hybridisation-potential filtering of oligo versus target-gene alignments and different policies for expanding annotation using indirect links. The differences in updated annotation packages had a significant effect on GO term enrichment analysis, with consensus on only 67.2% of the enriched terms. In addition to flexible thresholds to determine target specificity, annotation tools should provide metadata describing the relationships between oligos and the annotation assigned to them. These relationships can then be used to judge the varying degrees of reliability, allowing users to fine-tune the balance between reliability and coverage. This is important, as it can have a significant effect on functional microarray analysis, as exemplified by the lack of consensus on almost one third of the terms found with GO term enrichment analysis based on updated IMAD, OligoRAP or sigReannot annotation.
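
    As a rough illustration of the agreement statistics above (consensus percentages, shared gene assignments), the sketch below computes the average pairwise Jaccard similarity between the gene sets that hypothetical pipelines assign to each oligo. The pipeline names echo the abstract; the oligo IDs and gene assignments are invented.

    ```python
    # Minimal sketch of pairwise agreement statistics between annotation
    # pipelines: for each oligo, compare the sets of Ensembl gene IDs
    # assigned by different pipelines. All data are made up for illustration.
    from itertools import combinations

    annotations = {   # oligo -> {pipeline: set of assigned gene IDs}
        "oligo1": {"IMAD": {"ENSGALG1"}, "OligoRAP": {"ENSGALG1"}, "sigReannot": {"ENSGALG1"}},
        "oligo2": {"IMAD": {"ENSGALG2"}, "OligoRAP": {"ENSGALG3"}, "sigReannot": set()},
    }

    def jaccard(a, b):
        # Two empty assignments count as perfect agreement.
        return len(a & b) / len(a | b) if (a | b) else 1.0

    for oligo, per_pipeline in annotations.items():
        scores = [jaccard(x, y) for x, y in combinations(per_pipeline.values(), 2)]
        print(oligo, sum(scores) / len(scores))   # mean pairwise agreement
    ```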

  4. Hierarchical Leak Detection and Localization Method in Natural Gas Pipeline Monitoring Sensor Networks

    PubMed Central

    Wan, Jiangwen; Yu, Yang; Wu, Yinfeng; Feng, Renjian; Yu, Ning

    2012-01-01

    In light of the low recognition efficiency, high false-alarm rates and poor localization accuracy of traditional pipeline security detection technology, this paper proposes a hierarchical leak detection and localization method for use in natural gas pipeline monitoring sensor networks. In the signal preprocessing phase, raw monitoring signals are processed with wavelet transforms to extract single-mode signals and characteristic parameters. In the initial recognition phase, a multi-classifier model based on SVM is constructed, and the characteristic parameters are fed as input vectors to the multi-classifier for initial recognition. In the final decision phase, an improved evidence combination rule is designed to integrate the initial recognition results into final decisions. Furthermore, a weighted-average localization algorithm based on time difference of arrival is introduced for determining the leak point’s position. Experimental results illustrate that this hierarchical pipeline leak detection and localization method can effectively improve the accuracy of leak-point localization and reduce the missed-detection rate as well as the false-alarm rate. PMID:22368464

  5. Hierarchical leak detection and localization method in natural gas pipeline monitoring sensor networks.

    PubMed

    Wan, Jiangwen; Yu, Yang; Wu, Yinfeng; Feng, Renjian; Yu, Ning

    2012-01-01

    In light of the low recognition efficiency, high false-alarm rates and poor localization accuracy of traditional pipeline security detection technology, this paper proposes a hierarchical leak detection and localization method for use in natural gas pipeline monitoring sensor networks. In the signal preprocessing phase, raw monitoring signals are processed with wavelet transforms to extract single-mode signals and characteristic parameters. In the initial recognition phase, a multi-classifier model based on SVM is constructed, and the characteristic parameters are fed as input vectors to the multi-classifier for initial recognition. In the final decision phase, an improved evidence combination rule is designed to integrate the initial recognition results into final decisions. Furthermore, a weighted-average localization algorithm based on time difference of arrival is introduced for determining the leak point's position. Experimental results illustrate that this hierarchical pipeline leak detection and localization method can effectively improve the accuracy of leak-point localization and reduce the missed-detection rate as well as the false-alarm rate.
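
    The localization step lends itself to a worked example. Assuming two sensors bracketing a segment of length L and a leak-generated acoustic wave travelling at speed v, the arrival-time difference dt = tA - tB gives the leak position x = (L + v*dt)/2 from sensor A; a weighted average then combines estimates from several sensor pairs. The numbers and weights below are illustrative, not from the paper.

    ```python
    # Back-of-envelope sketch of time-difference-of-arrival (TDOA) leak
    # localization between two sensors bracketing a pipeline segment.
    # A leak at distance x from sensor A satisfies
    #   dt = tA - tB = (2x - L) / v   =>   x = (L + v * dt) / 2.

    def leak_position(L, v, dt):
        """Distance of the leak from sensor A, given segment length L (m),
        acoustic wave speed v (m/s) and arrival-time difference dt (s)."""
        return (L + v * dt) / 2.0

    # Several sensor pairs give slightly different estimates; combine them
    # with confidence weights (assumed, e.g. based on signal-to-noise ratio).
    estimates = [(leak_position(1000.0, 340.0, -0.5), 0.6),
                 (leak_position(1000.0, 340.0, -0.4), 0.4)]
    x_hat = sum(x * w for x, w in estimates) / sum(w for _, w in estimates)
    print(f"estimated leak position: {x_hat:.1f} m from sensor A")
    ```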

  6. Improving Spleen Volume Estimation via Computer Assisted Segmentation on Clinically Acquired CT Scans

    PubMed Central

    Xu, Zhoubing; Gertz, Adam L.; Burke, Ryan P.; Bansal, Neil; Kang, Hakmook; Landman, Bennett A.; Abramson, Richard G.

    2016-01-01

    OBJECTIVES Multi-atlas fusion is a promising approach for computer-assisted segmentation of anatomical structures. The purpose of this study was to evaluate the accuracy and time efficiency of multi-atlas segmentation for estimating spleen volumes on clinically-acquired CT scans. MATERIALS AND METHODS Under IRB approval, we obtained 294 deidentified (HIPAA-compliant) abdominal CT scans on 78 subjects from a recent clinical trial. We compared five pipelines for obtaining splenic volumes: Pipeline 1–manual segmentation of all scans, Pipeline 2–automated segmentation of all scans, Pipeline 3–automated segmentation of all scans with manual segmentation for outliers flagged by a rudimentary visual quality check, and Pipelines 4 and 5–volumes derived from a unidimensional measurement of craniocaudal spleen length and from three-dimensional splenic index measurements, respectively. Using Pipeline 1 results as ground truth, the accuracy of Pipelines 2–5 (Dice similarity coefficient [DSC], Pearson correlation, R-squared, and percent and absolute deviation of volume from ground truth) was compared for point estimates of splenic volume and for change in splenic volume over time. Time cost was also compared for Pipelines 1–5. RESULTS Pipeline 3 was dominant in terms of both accuracy and time cost. With a Pearson correlation coefficient of 0.99, an average absolute volume deviation of 23.7 cm3, and 1 minute per scan, Pipeline 3 yielded the best results. The second-best approach was Pipeline 5, with a Pearson correlation coefficient of 0.98, an absolute deviation of 46.92 cm3, and 1 minute 30 seconds per scan. Manual segmentation (Pipeline 1) required 11 minutes per scan. CONCLUSION A computer-automated segmentation approach with manual correction of outliers generated accurate splenic volumes with reasonable time efficiency. PMID:27519156
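
    For reference, the Dice similarity coefficient used above is straightforward to compute from two co-registered binary masks; the toy volumes below are stand-ins for real manual and automated spleen segmentations.

    ```python
    # Sketch of the Dice similarity coefficient (DSC) between a manual
    # ground-truth mask and an automated mask. Masks are assumed to be
    # co-registered boolean volumes of equal shape.
    import numpy as np

    def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    manual = np.zeros((4, 4, 4), bool); manual[1:3, 1:3, 1:3] = True
    auto = np.zeros((4, 4, 4), bool);   auto[1:3, 1:3, 0:3] = True
    print(f"DSC = {dice(manual, auto):.3f}")   # 0.800 for this toy overlap
    ```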

  7. Design Optimization of Innovative High-Level Waste Pipeline Unplugging Technologies - 13341

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pribanic, T.; Awwad, A.; Varona, J.

    2013-07-01

    Florida International University (FIU) is currently working on the development and optimization of two innovative pipeline unplugging methods: the asynchronous pulsing system (APS) and the peristaltic crawler system (PCS). Experiments were conducted on the APS to determine how air in the pipeline influences the system's performance, as well as the effectiveness of air mitigation techniques in a pipeline. The results obtained during the experimental phase of the project, including data from pipeline pressure pulse tests and air bubble compression tests, are presented. Single-cycle pulse amplification caused by a fast-acting cylinder piston pump in 21.8, 30.5, and 43.6 m pipelines was evaluated. Experiments were conducted on fully flooded pipelines as well as pipelines that contained various amounts of air to evaluate the system's performance when air is present in the pipeline. Also presented are details of the improvements implemented in the third-generation crawler system (PCS). The improvements include the redesign of the rims of the unit to accommodate a camera system that provides visual feedback of the conditions inside the pipeline. Visual feedback allows the crawler to be used as a pipeline unplugging and inspection tool. Tests conducted previously demonstrated a significant reduction of the crawler speed with increasing length of tether. Current improvements include the positioning of a pneumatic valve manifold system in close proximity to the crawler, rendering the crawler speed independent of tether length. Additional improvements to increase the crawler's speed were also investigated and are presented. Descriptions of the test beds, which were designed to emulate possible scenarios present on Department of Energy (DOE) pipelines, are presented. Finally, conclusions and recommendations for the systems are provided. (authors)

  8. ALEA: a toolbox for allele-specific epigenomics analysis.

    PubMed

    Younesy, Hamid; Möller, Torsten; Heravi-Moussavi, Alireza; Cheng, Jeffrey B; Costello, Joseph F; Lorincz, Matthew C; Karimi, Mohammad M; Jones, Steven J M

    2014-04-15

    The assessment of expression and epigenomic status using sequencing based methods provides an unprecedented opportunity to identify and correlate allelic differences with epigenomic status. We present ALEA, a computational toolbox for allele-specific epigenomics analysis, which incorporates allelic variation data within existing resources, allowing for the identification of significant associations between epigenetic modifications and specific allelic variants in human and mouse cells. ALEA provides a customizable pipeline of command line tools for allele-specific analysis of next-generation sequencing data (ChIP-seq, RNA-seq, etc.) that takes the raw sequencing data and produces separate allelic tracks ready to be viewed on genome browsers. The pipeline has been validated using human and hybrid mouse ChIP-seq and RNA-seq data. The package, test data and usage instructions are available online at http://www.bcgsc.ca/platform/bioinfo/software/alea. Contact: mkarimi1@interchange.ubc.ca or sjones@bcgsc.ca. Supplementary data are available at Bioinformatics online. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Methods of increasing efficiency and maintainability of pipeline systems

    NASA Astrophysics Data System (ADS)

    Ivanov, V. A.; Sokolov, S. M.; Ogudova, E. V.

    2018-05-01

    This study addresses the maintenance of pipeline transportation systems. The article identifies two classes of technical-and-economic indices that are used to select an optimal pipeline transportation system structure. It then describes various system maintenance strategies and the criteria for selecting among them. These strategies, however, prove insufficiently effective when maintenance intervals take non-optimal values. This problem can be addressed by an adaptive maintenance system, which includes a pipeline transportation system reliability improvement algorithm and, in particular, a computer model of equipment degradation. In conclusion, three model-building approaches for determining the optimal duration of verification inspections of technical systems are considered.

  10. INTERNAL REPAIR OF PIPELINES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robin Gordon; Bill Bruce; Nancy Porter

    2003-05-01

    The two broad categories of deposited weld metal repair and fiber-reinforced composite repair technologies were reviewed for potential application for internal repair of gas transmission pipelines. Both are used to some extent for other applications and could be further developed for internal, local, structural repair of gas transmission pipelines. Preliminary test programs were developed for both deposited weld metal repairs and for fiber-reinforced composite repair. To date, all of the experimental work pertaining to the evaluation of potential repair methods has focused on fiber-reinforced composite repairs. Hydrostatic testing was also conducted on four pipeline sections with simulated corrosion damage: two with composite liners and two without.

  11. Characterization and Expression of Drug Resistance Genes in MDROs Originating from Combat Wound Infections

    DTIC Science & Technology

    2016-09-01

    assigned a classification. MLST analysis MLST was determined using an in-house automated pipeline that first searches for homologs of each gene of...and virulence mechanism contributing to their success as pathogens in the wound environment. A novel bioinformatics pipeline was used to incorporate...monitored in two ways: read-based genome QC and assembly based metrics. The JCVI Genome QC pipeline samples sequence reads and performs BLAST

  12. A Spatial Risk Analysis of Oil Refineries within the United States

    DTIC Science & Technology

    2012-03-01

    regulator and consumer. This is especially true within the energy sector which is composed of electrical power, oil, and gas infrastructure [10...Naphtali, "Analysis of Electrical Power and Oil and Gas Pipeline Failures," in International Federation for Information Processing, E. Goetz and S...61-67, September 1999. [5] J. Simonoff, C. Restrepo, R. Zimmerman, and Z. Naphtali, "Analysis of Electrical Power and Oil and Gas Pipeline Failures

  13. Calculating the Optimum Angle of Filament-Wound Pipes in Natural Gas Transmission Pipelines Using Approximation Methods.

    PubMed

    Reza Khoshravan Azar, Mohammad; Emami Satellou, Ali Akbar; Shishesaz, Mohammad; Salavati, Bahram

    2013-04-01

    Given the increasing use of composite materials in various industries, the oil and gas industry also requires that more attention be paid to these materials. Because many candidate materials exist, they must be analysed for mechanical strength, resistance in critical situations such as fire, cost, and other priorities, so that the most suitable option for a given goal can be identified. In this study, we introduce an appropriate choice for composite natural gas transmission pipelines. In the following, a four-layered filament-wound (FW) composite pipe under internal pressure is analysed. Results are calculated for winding-angle combinations of 15 deg, 30 deg, 45 deg, 55 deg, 60 deg, 75 deg, and 80 deg. Finally, we compare the calculated values, and the optimal angle is obtained using approximation methods. The layup is assumed to be symmetrical.
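
    The clustering of good winding angles around 55 deg has a classical explanation from netting analysis: in a closed pipe under internal pressure the hoop stress is twice the axial stress, and fibres at angle a from the pipe axis balance that 2:1 ratio exactly when tan^2(a) = 2, i.e. a ≈ 54.7 deg. The sketch below simply compares the candidate angles against that textbook optimum; it is not the approximation method used in the paper.

    ```python
    # Netting-analysis optimum winding angle for a pressure pipe:
    # hoop/axial stress ratio of 2:1 is carried by fibres alone when
    # tan(a)**2 = 2, giving a = atan(sqrt(2)) ~ 54.7 deg from the axis.
    import math

    candidates = [15, 30, 45, 55, 60, 75, 80]   # angles from the abstract
    ideal = math.degrees(math.atan(math.sqrt(2.0)))
    best = min(candidates, key=lambda a: abs(a - ideal))
    print(f"netting-analysis optimum ~ {ideal:.1f} deg; closest candidate: {best} deg")
    ```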

  14. Target-decoy Based False Discovery Rate Estimation for Large-scale Metabolite Identification.

    PubMed

    Wang, Xusheng; Jones, Drew R; Shaw, Timothy I; Cho, Ji-Hoon; Wang, Yuanyuan; Tan, Haiyan; Xie, Boer; Zhou, Suiping; Li, Yuxin; Peng, Junmin

    2018-05-23

    Metabolite identification is a crucial step in mass spectrometry (MS)-based metabolomics. However, it is still challenging to assess the confidence of assigned metabolites. In this study, we report a novel method for estimating the false discovery rate (FDR) of metabolite assignment with a target-decoy strategy, in which the decoys are generated by violating the octet rule of chemistry through the addition of small odd numbers of hydrogen atoms. The target-decoy strategy was integrated into JUMPm, an automated metabolite identification pipeline for large-scale MS analysis, and was also evaluated with two other metabolomics tools, mzMatch and mzMine 2. The reliability of the FDR calculation was examined with false datasets simulated by altering MS1 or MS2 spectra. Finally, we used the JUMPm pipeline coupled with the target-decoy strategy to process unlabeled and stable-isotope-labeled metabolomic datasets. The results demonstrate that the target-decoy strategy is a simple and effective method for evaluating the confidence of high-throughput metabolite identification.
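
    A minimal numerical sketch of target-decoy FDR estimation: score both target and decoy assignments with the same pipeline, then estimate the FDR at a score cutoff as the ratio of decoy to target hits above it. The scores below are invented.

    ```python
    # Target-decoy FDR estimation sketch: FDR(cutoff) = #decoys / #targets
    # among hits scoring at or above the cutoff. Scores are illustrative.
    def fdr_at(cutoff, target_scores, decoy_scores):
        t = sum(s >= cutoff for s in target_scores)
        d = sum(s >= cutoff for s in decoy_scores)
        return d / t if t else 0.0

    targets = [9.1, 8.7, 7.9, 7.5, 6.8, 6.1, 5.9, 5.2, 4.8, 4.1]
    decoys  = [6.0, 5.5, 4.9, 4.5, 3.9, 3.2]

    # Lowest cutoff that keeps the estimated FDR under 5%:
    cutoff = min(c for c in targets if fdr_at(c, targets, decoys) <= 0.05)
    print(cutoff, fdr_at(cutoff, targets, decoys))
    ```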

  15. Heterogeneous Optimization Framework: Reproducible Preprocessing of Multi-Spectral Clinical MRI for Neuro-Oncology Imaging Research.

    PubMed

    Milchenko, Mikhail; Snyder, Abraham Z; LaMontagne, Pamela; Shimony, Joshua S; Benzinger, Tammie L; Fouke, Sarah Jost; Marcus, Daniel S

    2016-07-01

    Neuroimaging research often relies on clinically acquired magnetic resonance imaging (MRI) datasets that can originate from multiple institutions. Such datasets are characterized by high heterogeneity of modalities and variability of sequence parameters. This heterogeneity complicates the automation of image processing tasks such as spatial co-registration and physiological or functional image analysis. Given this heterogeneity, conventional processing workflows developed for research purposes are not optimal for clinical data. In this work, we describe an approach called Heterogeneous Optimization Framework (HOF) for developing image analysis pipelines that can handle the high degree of clinical data non-uniformity. HOF provides a set of guidelines for configuration, algorithm development, deployment, interpretation of results and quality control for such pipelines. At each step, we illustrate the HOF approach using the implementation of an automated pipeline for Multimodal Glioma Analysis (MGA) as an example. The MGA pipeline computes tissue diffusion characteristics of diffusion tensor imaging (DTI) acquisitions, hemodynamic characteristics using a perfusion model of susceptibility contrast (DSC) MRI, and spatial cross-modal co-registration of available anatomical, physiological and derived patient images. Developing MGA within HOF enabled the processing of neuro-oncology MR imaging studies to be fully automated. MGA has been successfully used to analyze over 160 clinical tumor studies to date within several research projects. Introduction of the MGA pipeline improved image processing throughput and, most importantly, effectively produced co-registered datasets that were suitable for advanced analysis despite high heterogeneity in acquisition protocols.

  16. An analytical pipeline to compare and characterise the anthocyanin antioxidant activities of purple sweet potato cultivars.

    PubMed

    Hu, Yijie; Deng, Liqing; Chen, Jinwu; Zhou, Siyu; Liu, Shuang; Fu, Yufan; Yang, Chunxian; Liao, Zhihua; Chen, Min

    2016-03-01

    Purple sweet potato (Ipomoea batatas L.) is rich in anthocyanin pigments, which are valuable constituents of the human diet. Techniques to identify and quantify anthocyanins and their antioxidant potential are desirable for cultivar selection and breeding. In this study, we performed a quantitative and qualitative chemical analysis of 30 purple sweet potato (PSP) cultivars, using various assays to measure reducing power, radical-scavenging activities, and linoleic acid autoxidation inhibition activity. Grey relational analysis (GRA) was applied to establish relationships between the antioxidant activities and the chemical fingerprints, in order to identify key bioactive compounds. The results indicated that four peonidin-based anthocyanins and three cyanidin-based anthocyanins make significant contributions to antioxidant activity. We conclude that the analytical pipeline described here represents an effective method to evaluate the antioxidant potential of, and the contributing compounds present in, PSP cultivars. This approach may be used to guide future breeding strategies. Copyright © 2015. Published by Elsevier Ltd.
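
    Grey relational analysis reduces to a short computation: normalise each series, take absolute deviations from the reference (here, an antioxidant assay), convert them to grey relational coefficients with the customary resolving coefficient rho = 0.5, and average them into a grade per chromatographic peak. The sketch below uses invented peak areas and a per-comparison maximum rather than the global maximum, so it illustrates the idea only.

    ```python
    # Grey relational analysis (GRA) sketch: grade each candidate series
    # against a reference series. Higher grades suggest stronger relation.
    import numpy as np

    def gra_grade(reference, series, rho=0.5):
        norm = lambda x: (x - x.min()) / (x.max() - x.min())
        delta = np.abs(norm(reference) - norm(series))
        coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
        return coeff.mean()

    antioxidant = np.array([0.82, 0.64, 0.91, 0.55, 0.73])   # assay per cultivar
    peak_a = np.array([12.1, 9.0, 13.5, 7.2, 10.4])          # anthocyanin peak areas
    peak_b = np.array([3.3, 6.1, 2.8, 6.5, 4.0])
    print(f"peak A grade: {gra_grade(antioxidant, peak_a):.3f}")
    print(f"peak B grade: {gra_grade(antioxidant, peak_b):.3f}")
    ```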

  17. Analysis of mammalian gene function through broad-based phenotypic screens across a consortium of mouse clinics.

    PubMed

    de Angelis, Martin Hrabě; Nicholson, George; Selloum, Mohammed; White, Jacqui; Morgan, Hugh; Ramirez-Solis, Ramiro; Sorg, Tania; Wells, Sara; Fuchs, Helmut; Fray, Martin; Adams, David J; Adams, Niels C; Adler, Thure; Aguilar-Pimentel, Antonio; Ali-Hadji, Dalila; Amann, Gregory; André, Philippe; Atkins, Sarah; Auburtin, Aurelie; Ayadi, Abdel; Becker, Julien; Becker, Lore; Bedu, Elodie; Bekeredjian, Raffi; Birling, Marie-Christine; Blake, Andrew; Bottomley, Joanna; Bowl, Mike; Brault, Véronique; Busch, Dirk H; Bussell, James N; Calzada-Wack, Julia; Cater, Heather; Champy, Marie-France; Charles, Philippe; Chevalier, Claire; Chiani, Francesco; Codner, Gemma F; Combe, Roy; Cox, Roger; Dalloneau, Emilie; Dierich, André; Di Fenza, Armida; Doe, Brendan; Duchon, Arnaud; Eickelberg, Oliver; Esapa, Chris T; El Fertak, Lahcen; Feigel, Tanja; Emelyanova, Irina; Estabel, Jeanne; Favor, Jack; Flenniken, Ann; Gambadoro, Alessia; Garrett, Lilian; Gates, Hilary; Gerdin, Anna-Karin; Gkoutos, George; Greenaway, Simon; Glasl, Lisa; Goetz, Patrice; Da Cruz, Isabelle Goncalves; Götz, Alexander; Graw, Jochen; Guimond, Alain; Hans, Wolfgang; Hicks, Geoff; Hölter, Sabine M; Höfler, Heinz; Hancock, John M; Hoehndorf, Robert; Hough, Tertius; Houghton, Richard; Hurt, Anja; Ivandic, Boris; Jacobs, Hughes; Jacquot, Sylvie; Jones, Nora; Karp, Natasha A; Katus, Hugo A; Kitchen, Sharon; Klein-Rodewald, Tanja; Klingenspor, Martin; Klopstock, Thomas; Lalanne, Valerie; Leblanc, Sophie; Lengger, Christoph; le Marchand, Elise; Ludwig, Tonia; Lux, Aline; McKerlie, Colin; Maier, Holger; Mandel, Jean-Louis; Marschall, Susan; Mark, Manuel; Melvin, David G; Meziane, Hamid; Micklich, Kateryna; Mittelhauser, Christophe; Monassier, Laurent; Moulaert, David; Muller, Stéphanie; Naton, Beatrix; Neff, Frauke; Nolan, Patrick M; Nutter, Lauryl Mj; Ollert, Markus; Pavlovic, Guillaume; Pellegata, Natalia S; Peter, Emilie; Petit-Demoulière, Benoit; Pickard, Amanda; Podrini, Christine; Potter, Paul; Pouilly, Laurent; Puk, Oliver; Richardson, David; Rousseau, Stephane; Quintanilla-Fend, Leticia; Quwailid, Mohamed M; Racz, Ildiko; Rathkolb, Birgit; Riet, Fabrice; Rossant, Janet; Roux, Michel; Rozman, Jan; Ryder, Ed; Salisbury, Jennifer; Santos, Luis; Schäble, Karl-Heinz; Schiller, Evelyn; Schrewe, Anja; Schulz, Holger; Steinkamp, Ralf; Simon, Michelle; Stewart, Michelle; Stöger, Claudia; Stöger, Tobias; Sun, Minxuan; Sunter, David; Teboul, Lydia; Tilly, Isabelle; Tocchini-Valentini, Glauco P; Tost, Monica; Treise, Irina; Vasseur, Laurent; Velot, Emilie; Vogt-Weisenhorn, Daniela; Wagner, Christelle; Walling, Alison; Weber, Bruno; Wendling, Olivia; Westerberg, Henrik; Willershäuser, Monja; Wolf, Eckhard; Wolter, Anne; Wood, Joe; Wurst, Wolfgang; Yildirim, Ali Önder; Zeh, Ramona; Zimmer, Andreas; Zimprich, Annemarie; Holmes, Chris; Steel, Karen P; Herault, Yann; Gailus-Durner, Valérie; Mallon, Ann-Marie; Brown, Steve Dm

    2015-09-01

    The function of the majority of genes in the mouse and human genomes remains unknown. The mouse embryonic stem cell knockout resource provides a basis for the characterization of relationships between genes and phenotypes. The EUMODIC consortium developed and validated robust methodologies for the broad-based phenotyping of knockouts through a pipeline comprising 20 disease-oriented platforms. We developed new statistical methods for pipeline design and data analysis aimed at detecting reproducible phenotypes with high power. We acquired phenotype data from 449 mutant alleles, representing 320 unique genes, of which half had no previous functional annotation. We captured data from over 27,000 mice, finding that 83% of the mutant lines are phenodeviant, with 65% demonstrating pleiotropy. Surprisingly, we found significant differences in phenotype annotation according to zygosity. New phenotypes were uncovered for many genes with previously unknown function, providing a powerful basis for hypothesis generation and further investigation in diverse systems.

  18. MetaMeta: integrating metagenome analysis tools to improve taxonomic profiling.

    PubMed

    Piro, Vitor C; Matschkowski, Marcel; Renard, Bernhard Y

    2017-08-14

    Many metagenome analysis tools are presently available to classify sequences and profile environmental samples. In particular, taxonomic profiling and binning methods are commonly used for such tasks. Tools in these two categories make use of several techniques, e.g., read mapping, k-mer alignment, and composition analysis. Variations on the construction of the corresponding reference sequence databases are also common. In addition, different tools give good results on different datasets and configurations. All this variation makes it complicated for researchers to decide which methods to use. Installation, configuration and execution can also be difficult, especially when dealing with multiple datasets and tools. We propose MetaMeta: a pipeline to execute and integrate results from metagenome analysis tools. MetaMeta provides an easy workflow to run multiple tools with multiple samples, producing a single enhanced output profile for each sample. MetaMeta includes database generation, pre-processing, execution, and integration steps, allowing easy execution and parallelization. The integration relies on the co-occurrence of organisms across different methods as the main feature to improve community profiling while accounting for differences in their databases. In a controlled case with simulated and real data, we show that the integrated profiles of MetaMeta outperform the best single profile. Using the same input data, it provides more sensitive and reliable results, with the presence of each organism being supported by several methods. MetaMeta uses Snakemake and has six pre-configured tools, all available from the BioConda channel for easy installation (conda install -c bioconda metameta). The MetaMeta pipeline is open source and can be downloaded at: https://gitlab.com/rki_bioinformatics .
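
    The co-occurrence idea can be sketched in a few lines: a taxon reported by several tools gains support, and its merged abundance is averaged over the tools that reported it before renormalising. This is an illustration of the principle, not MetaMeta's actual scoring; the tool names, taxa and threshold are made up.

    ```python
    # Co-occurrence based merging of taxonomic profiles (illustrative only).
    from collections import defaultdict

    profiles = {  # tool -> {taxon: relative abundance}
        "toolA": {"E. coli": 0.5, "B. subtilis": 0.3, "S. aureus": 0.2},
        "toolB": {"E. coli": 0.6, "B. subtilis": 0.4},
        "toolC": {"E. coli": 0.4, "S. enterica": 0.6},
    }

    support = defaultdict(list)
    for taxon_abundances in profiles.values():
        for taxon, ab in taxon_abundances.items():
            support[taxon].append(ab)

    min_tools = 2  # keep taxa co-occurring in at least two tools (assumed threshold)
    merged = {t: sum(v) / len(v) for t, v in support.items() if len(v) >= min_tools}
    total = sum(merged.values())
    print({t: round(v / total, 3) for t, v in merged.items()})  # renormalised profile
    ```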

  19. Supply Support of Air Force 463L Equipment: An Analysis of the 463L equipment Spare Parts Pipeline

    DTIC Science & Technology

    1989-09-01

    service; and 4) the order processing system created inherent delays in the pipeline because of outdated and indirect information systems and technology. Keywords: Materials handling equipment, Theses. (AW)

  20. A supertree pipeline for summarizing phylogenetic and taxonomic information for millions of species

    PubMed Central

    Redelings, Benjamin D.

    2017-01-01

    We present a new supertree method that enables rapid estimation of a summary tree on the scale of millions of leaves. This supertree method summarizes a collection of input phylogenies and an input taxonomy. We introduce formal goals and criteria for such a supertree to satisfy in order to transparently and justifiably represent the input trees. In addition to producing a supertree, our method computes annotations that describe which groupings in the input trees support and conflict with each group in the supertree. We compare our supertree construction method to a previously published one by assessing their performance on the input trees used to construct the Open Tree of Life version 4, and find that our method increases the number of displayed input splits from 35,518 to 39,639 and decreases the number of conflicting input splits from 2,760 to 1,357. The new supertree method also improves on the previous method in that it produces no unsupported branches and avoids unnecessary polytomies. This pipeline is currently used by the Open Tree of Life project to produce all versions of the project's "synthetic tree" starting at version 5. This software pipeline is called "propinquity". It relies heavily on "otcetera"—a set of C++ tools that perform most of the steps of the pipeline. All of the components are free software and are available on GitHub. PMID:28265520

  1. MONA – Interactive manipulation of molecule collections

    PubMed Central

    2013-01-01

    Working with small-molecule datasets is a routine task for cheminformaticians and chemists. The analysis and comparison of vendor catalogues and the compilation of promising candidates as starting points for screening campaigns are but a few very common applications. The workflows applied for this purpose usually consist of multiple basic cheminformatics tasks such as checking for duplicates or filtering by physico-chemical properties. Pipelining tools allow users to create and change such workflows without much effort, but usually do not support interventions once the pipeline has been started. In many contexts, however, the best-suited workflow is not known in advance, making it necessary to take the results of the previous steps into consideration before proceeding. To support intuition-driven processing of compound collections, we developed MONA, an interactive tool that has been designed to prepare and visualize large small-molecule datasets. Using an SQL database, common cheminformatics tasks such as analysis and filtering can be performed interactively with various methods for visual support. Great care was taken in creating a simple, intuitive user interface which can be used instantly without any setup steps. MONA combines the interactivity of molecule database systems with the simplicity of pipelining tools, thus enabling the case-to-case application of chemistry expert knowledge. The current version is available free of charge for academic use and can be downloaded at http://www.zbh.uni-hamburg.de/mona. PMID:23985157

  2. Early-type galaxies: Automated reduction and analysis of ROSAT PSPC data

    NASA Technical Reports Server (NTRS)

    Mackie, G.; Fabbiano, G.; Harnden, F. R., Jr.; Kim, D.-W.; Maggio, A.; Micela, G.; Sciortino, S.; Ciliegi, P.

    1996-01-01

    Preliminary results for early-type galaxies that will be part of a galaxy catalog derived from the complete ROSAT database are presented. The stored data were reduced and analyzed by an automatic pipeline. This pipeline is based on a command-language script. The important features of the pipeline include new data time screening to maximize the signal-to-noise ratio of faint point-like sources, source detection via a wavelet algorithm, and the identification of sources with objects from existing catalogs. The pipeline outputs include reduced images, contour maps, surface brightness profiles, spectra, and color and hardness ratios.

  3. Underground pipeline laying using the pipe-in-pipe system

    NASA Astrophysics Data System (ADS)

    Antropova, N.; Krets, V.; Pavlov, M.

    2016-09-01

    The problems of resource saving and environmental safety during the installation and operation of underwater crossings are always relevant. The paper describes the existing methods of trenchless pipeline technology, the structure of multi-channel pipelines, and the types of supporting and guiding systems. A rational design is suggested for the pipe-in-pipe system. A finite element model is presented for the most dangerous sections of the inner pipes, and the optimum distance between the roller supports is determined.

  4. PMAnalyzer: a new web interface for bacterial growth curve analysis.

    PubMed

    Cuevas, Daniel A; Edwards, Robert A

    2017-06-15

    Bacterial growth curves are essential representations for characterizing bacterial metabolism within a variety of media compositions. Using high-throughput spectrophotometers capable of processing tens of 96-well plates, quantitative phenotypic information can be easily integrated into the current data structures that describe a bacterial organism. The PMAnalyzer pipeline performs a growth curve analysis to parameterize the unique features occurring within microtiter wells containing specific growth media sources. We have expanded the pipeline capabilities and provide a user-friendly, online implementation of this automated pipeline. PMAnalyzer version 2.0 provides fast automatic growth curve parameter analysis, growth identification, and high-resolution figures of sample-replicate growth curves, as well as several statistical analyses. PMAnalyzer v2.0 can be found at https://edwards.sdsu.edu/pmanalyzer/ . Source code for the pipeline can be found on GitHub at https://github.com/dacuevas/PMAnalyzer . Source code for the online implementation can be found on GitHub at https://github.com/dacuevas/PMAnalyzerWeb . dcuevas08@gmail.com. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
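
    The core parameterisation step can be illustrated by fitting a logistic model to optical-density readings; PMAnalyzer's actual model may differ, and the data below are synthetic.

    ```python
    # Sketch of growth-curve parameterisation: fit a logistic model to OD
    # readings and report carrying capacity K, max growth rate r, and the
    # time of the inflection point. Synthetic data stand in for plate reads.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, t_mid):
        return K / (1.0 + np.exp(-r * (t - t_mid)))

    t = np.linspace(0, 24, 49)                                   # hours
    od = logistic(t, K=1.2, r=0.45, t_mid=8.0)
    od += np.random.default_rng(0).normal(0, 0.02, t.size)       # measurement noise

    (K, r, t_mid), _ = curve_fit(logistic, t, od, p0=[1.0, 0.3, 10.0])
    print(f"K={K:.2f} OD, r={r:.2f} 1/h, midpoint={t_mid:.1f} h")
    ```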

  5. Text-based Analytics for Biosurveillance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles, Lauren E.; Smith, William P.; Rounds, Jeremiah

    The ability to prevent, mitigate, or control a biological threat depends on how quickly the threat is identified and characterized. Ensuring the timely delivery of data and analytics is an essential aspect of providing adequate situational awareness in the face of a disease outbreak. This chapter outlines an analytic pipeline for supporting an advanced early warning system that can integrate multiple data sources and provide situational awareness of potential and occurring disease situations. The pipeline includes real-time automated data analysis founded on natural language processing (NLP), semantic concept matching, and machine learning techniques, to enrich content with metadata related to biosurveillance. Online news articles are presented as an example use case for the pipeline, but the processes can be generalized to any textual data. In this chapter, the mechanics of a streaming pipeline are briefly discussed as well as the major steps required to provide targeted situational awareness. The text-based analytic pipeline includes various processing steps as well as identifying article relevance to biosurveillance (e.g., relevance algorithm) and article feature extraction (who, what, where, why, how, and when).

  6. Source-Modeling Auditory Processes of EEG Data Using EEGLAB and Brainstorm.

    PubMed

    Stropahl, Maren; Bauer, Anna-Katharina R; Debener, Stefan; Bleichner, Martin G

    2018-01-01

    Electroencephalography (EEG) source localization approaches are often used to disentangle the spatial patterns mixed up in scalp EEG recordings. However, approaches differ substantially between experiments, may be strongly parameter-dependent, and results are not necessarily meaningful. In this paper we provide a pipeline for EEG source estimation, from raw EEG data pre-processing using EEGLAB functions up to source-level analysis as implemented in Brainstorm. The pipeline is tested using a data set of 10 individuals performing an auditory attention task. The analysis approach estimates sources of 64-channel EEG data without the prerequisite of individual anatomies or individually digitized sensor positions. First, we show advanced EEG pre-processing using EEGLAB, which includes artifact attenuation using independent component analysis (ICA). ICA is a linear decomposition technique that aims to reveal the underlying statistical sources of mixed signals and is also a powerful tool to attenuate stereotypical artifacts (e.g., eye movements or heartbeat). Data submitted to ICA are pre-processed to facilitate good-quality decompositions. Aiming at an objective approach to component identification, the semi-automatic CORRMAP algorithm is applied for the identification of components representing prominent and stereotypic artifacts. Second, we present a step-wise approach to estimate active sources of auditory cortex event-related processing, at the single-subject level. The presented approach assumes that no individual anatomy is available and therefore the default anatomy ICBM152, as implemented in Brainstorm, is used for all individuals. Individual noise modeling in this dataset is based on the pre-stimulus baseline period. For EEG source modeling we use the OpenMEEG algorithm as the underlying forward model based on the symmetric Boundary Element Method (BEM). We then apply the method of dynamical statistical parametric mapping (dSPM) to obtain physiologically plausible EEG source estimates. Finally, we show how to perform group-level analysis in the time domain on anatomically defined regions of interest (auditory scout). The proposed pipeline needs to be tailored to the specific datasets and paradigms. However, the straightforward combination of EEGLAB and Brainstorm analysis tools may be of interest to others performing EEG source localization.

  7. Improving the results of forecasting using reservoir and surface network simulation

    NASA Astrophysics Data System (ADS)

    Hendri, R. S.; Winarta, J.

    2018-01-01

    This study aimed to obtain more representative production forecasts using integrated simulation of the pipeline gathering system of field X. Five main scenarios cover production forecasts for the existing condition, workovers, and infill drilling, from which the best development scenario is determined. The method couples a reservoir simulator with a pipeline network simulator, a so-called Integrated Reservoir and Surface Network Simulation. Well data from the reservoir simulator were integrated with the pipeline network simulator to construct a new schedule, which served as input for the whole simulation procedure. Well designs were produced with a well modeling simulator and then exported into the pipeline simulator. The stand-alone reservoir prediction depends on a minimum tubing head pressure (THP) for each well, in which case the pressure drop in the gathering network is not calculated. The same scenarios were also run as single-reservoir simulations. The integrated simulation produces results closer to actual reservoir conditions, as confirmed by the THP profiles, which differ between the two methods. The difference between the integrated and the single-model simulation is 6-9%. The aim of solving the back-pressure problem in the pipeline gathering system of field X is achieved.

  8. Dynamic safety assessment of natural gas stations using Bayesian network.

    PubMed

    Zarei, Esmaeil; Azadeh, Ali; Khakzad, Nima; Aliabadi, Mostafa Mirzaei; Mohammadfam, Iraj

    2017-01-05

    Pipelines are one of the most popular and effective ways of transporting hazardous materials, especially natural gas. However, the rapid development of gas pipelines and stations in urban areas has introduced a serious threat to public safety and assets. Although different methods have been developed for risk analysis of gas transportation systems, a comprehensive methodology for risk analysis is still lacking, especially for natural gas stations. The present work is aimed at developing a dynamic and comprehensive quantitative risk analysis (DCQRA) approach for accident scenario and risk modeling of natural gas stations. In this approach, an FMEA is used for hazard analysis, while a bow-tie diagram and a Bayesian network are employed to model the worst-case accident scenario and to assess the risks. The results indicate that failure of the regulator system was the worst-case accident scenario, with human error as the largest contributing factor. Thus, the risk management plan of a natural gas station should give priority to the most probable root events and the main contributing factors identified in the present study, in order to reduce the occurrence probability of the accident scenarios and thus alleviate the risks. Copyright © 2016 Elsevier B.V. All rights reserved.
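
    The kind of update a Bayesian network enables can be shown with a toy two-node example: given an assumed prior for regulator failure and assumed conditional accident probabilities, Bayes' rule yields the posterior probability of regulator failure once an accident is observed. All numbers are illustrative, not from the paper.

    ```python
    # Toy two-node Bayesian update: regulator failure -> station accident.
    p_reg_fail = 0.02                 # prior P(regulator failure), assumed
    p_acc_given_fail = 0.30           # P(accident | regulator failure), assumed
    p_acc_given_ok = 0.001            # P(accident | regulator OK), assumed

    p_acc = p_acc_given_fail * p_reg_fail + p_acc_given_ok * (1 - p_reg_fail)
    posterior = p_acc_given_fail * p_reg_fail / p_acc   # Bayes' rule
    print(f"P(regulator failure | accident) = {posterior:.2f}")
    ```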

  9. esATAC: An Easy-to-use Systematic pipeline for ATAC-seq data analysis.

    PubMed

    Wei, Zheng; Zhang, Wei; Fang, Huan; Li, Yanda; Wang, Xiaowo

    2018-03-07

    ATAC-seq is rapidly emerging as one of the major experimental approaches to probe chromatin accessibility genome-wide. Here, we present "esATAC", a highly integrated, easy-to-use R/Bioconductor package for systematic ATAC-seq data analysis. It covers the essential steps of the full analysis procedure, including raw data processing, quality control and downstream statistical analysis such as peak calling, enrichment analysis and transcription factor footprinting. esATAC supports single-command execution of preset pipelines and provides flexible interfaces for building customized pipelines. The esATAC package is open source under the GPL-3.0 license. It is implemented in R and C++. Source code and binaries for Linux, Mac OS X and Windows are available through Bioconductor (https://www.bioconductor.org/packages/release/bioc/html/esATAC.html). xwwang@tsinghua.edu.cn. Supplementary data are available at Bioinformatics online.

  10. Processing Shotgun Proteomics Data on the Amazon Cloud with the Trans-Proteomic Pipeline*

    PubMed Central

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W.; Moritz, Robert L.

    2015-01-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of the cloud-enabled Trans-Proteomic Pipeline by processing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. PMID:25418363

  11. Processing shotgun proteomics data on the Amazon cloud with the trans-proteomic pipeline.

    PubMed

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W; Moritz, Robert L

    2015-02-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of the cloud-enabled Trans-Proteomic Pipeline by processing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  12. Generation of ethylene tracer by noncatalytic pyrolysis of natural gas at elevated pressure

    USGS Publications Warehouse

    Lu, Y.; Chen, S.; Rostam-Abadi, M.; Ruch, R.; Coleman, D.; Benson, L.J.

    2005-01-01

    There is a critical need within the pipeline gas industry for an inexpensive and reliable technology to generate an identification tag or tracer that can be added to pipeline gas to identify gas that may escape and improve the deliverability and management of gas in underground storage fields. Ethylene is an ideal tracer, because it does not exist naturally in the pipeline gas, and because its physical properties are similar to the pipeline gas components. A pyrolysis process, known as the Tragen process, has been developed to continuously convert the ~2%-4% ethane component present in pipeline gas into ethylene at common pipeline pressures of 800 psi. In our studies of the Tragen process, pyrolysis without steam addition achieved a maximum ethylene yield of 28%-35% at a temperature range of 700-775 °C, corresponding to an ethylene concentration of 4600-5800 ppm in the product gas. Coke deposition was determined to occur at a significant rate in the pyrolysis reactor without steam addition. The δ13C isotopic analysis of gas components showed a δ13C value of ethylene similar to ethane in the pipeline gas, indicating that most of the ethylene was generated from decomposition of the ethane in the raw gas. However, δ13C isotopic analysis of the deposited coke showed that coke was primarily produced from methane, rather than from ethane or other heavier hydrocarbons. No coke deposition was observed with the addition of steam at concentrations of > 20% by volume. The dilution with steam also improved the ethylene yield. © 2005 American Chemical Society.

  13. Bad Actors Criticality Assessment for Pipeline system

    NASA Astrophysics Data System (ADS)

    Nasir, Meseret; Chong, Kit wee; Osman, Sabtuni; Siaw Khur, Wee

    2015-04-01

    Failure of a pipeline system can bring huge economic loss. In order to mitigate such catastrophic loss, it is necessary to evaluate and rank the impact of each bad actor in the pipeline system. In this study, bad actors are defined as the root causes or any potential factors leading to system downtime. Fault tree analysis (FTA) is used to analyze the probability of occurrence of each bad actor. Birnbaum's importance and criticality measure (BICM) is also employed to rank the impact of each bad actor on pipeline system failure. The results demonstrate that internal corrosion, external corrosion and construction damage are critical and contribute heavily to pipeline system failure, with 48.0%, 12.4% and 6.0%, respectively. Thus, a minor improvement in internal corrosion, external corrosion or construction damage would bring significant changes in pipeline system performance and reliability. These results could also be used to develop an efficient maintenance strategy by identifying the critical bad actors.
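
    Birnbaum's importance has a compact definition: for component i, IB(i) = P(system fails | i failed) - P(system fails | i working). The sketch below evaluates it for a system whose failure is the OR of independent bad actors, reusing the quoted contribution figures as stand-in probabilities purely for illustration.

    ```python
    # Birnbaum importance for a series (OR-of-failures) system sketch.
    def p_system_fail(p, force=None):
        """P(at least one actor fails); 'force' pins selected actors to 0 or 1."""
        q = 1.0
        for name, pi in p.items():
            if force and name in force:
                pi = force[name]
            q *= (1.0 - pi)
        return 1.0 - q

    # Stand-in per-actor failure probabilities (illustrative only).
    p = {"internal corrosion": 0.048, "external corrosion": 0.0124,
         "construction damage": 0.006}

    for actor in p:
        ib = p_system_fail(p, {actor: 1.0}) - p_system_fail(p, {actor: 0.0})
        print(f"{actor}: Birnbaum importance = {ib:.4f}")
    ```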

  14. Design and analysis of FBG based sensor for detection of damage in oil and gas pipelines for safety of marine life

    NASA Astrophysics Data System (ADS)

    Bedi, Amna; Kothari, Vaishali; Kumar, Santosh

    2018-02-01

    Gas and oil pipelines laid on the seafloor are prone to disturbances such as seismic movements of the seabed, ocean currents, and tsunamis. These factors can damage pipelines connecting locations around the world that depend on them for day-to-day supplies of oil and natural gas. Resulting oil spills cause grave harm to marine life along with serious economic losses. Manual monitoring of undersea pipelines is not feasible because of the great depths involved. For timely detection of such damage, this work presents a new technique using optical fiber Bragg grating (FBG) sensors, together with its installation. An FBG sensor for detecting structural damage in pipelines based on acoustic emission is worked out. Numerical calculations are based on fundamental strain measurement, and the output is simulated in MATLAB.
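
    The strain-to-wavelength relation that such a sensor exploits is compact enough to show directly: the Bragg wavelength is lambda_B = 2*n_eff*Lambda, and an axial strain eps shifts it by delta_lambda = lambda_B*(1 - p_e)*eps, roughly 1.2 pm per microstrain for silica fibre near 1550 nm. The constants below are typical values, not the paper's.

    ```python
    # FBG strain sensitivity sketch: Bragg wavelength and its strain-induced shift.
    n_eff = 1.447             # effective refractive index of the fibre (assumed)
    grating_period = 535e-9   # m, chosen to put lambda_B near 1550 nm
    p_e = 0.22                # effective photo-elastic coefficient of silica

    lambda_b = 2 * n_eff * grating_period
    for eps in (0.0, 100e-6, 500e-6):   # axial strain from pipeline deformation
        shift = lambda_b * (1 - p_e) * eps
        print(f"strain {eps*1e6:6.0f} ue -> shift {shift*1e12:7.2f} pm")
    ```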

  15. Reproducibility of neuroimaging analyses across operating systems

    PubMed Central

    Glatard, Tristan; Lewis, Lindsay B.; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C.

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed. PMID:25964757

  16. Reproducibility of neuroimaging analyses across operating systems.

    PubMed

    Glatard, Tristan; Lewis, Lindsay B; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed.

  17. Automated processing pipeline for neonatal diffusion MRI in the developing Human Connectome Project.

    PubMed

    Bastiani, Matteo; Andersson, Jesper L R; Cordero-Grande, Lucilio; Murgasova, Maria; Hutter, Jana; Price, Anthony N; Makropoulos, Antonios; Fitzgibbon, Sean P; Hughes, Emer; Rueckert, Daniel; Victor, Suresh; Rutherford, Mary; Edwards, A David; Smith, Stephen M; Tournier, Jacques-Donald; Hajnal, Joseph V; Jbabdi, Saad; Sotiropoulos, Stamatios N

    2018-05-28

    The developing Human Connectome Project is set to create and make available to the scientific community a 4-dimensional map of functional and structural cerebral connectivity from 20 to 44 weeks post-menstrual age, to allow exploration of the genetic and environmental influences on brain development, and the relation between connectivity and neurocognitive function. A large set of multi-modal MRI data from fetuses and newborn infants is currently being acquired, along with genetic, clinical and developmental information. In this overview, we describe the neonatal diffusion MRI (dMRI) image processing pipeline and the structural connectivity aspect of the project. Neonatal dMRI data poses specific challenges, and standard analysis techniques used for adult data are not directly applicable. We have developed a processing pipeline that deals directly with neonatal-specific issues, such as severe motion and motion-related artefacts, small brain sizes, high brain water content and reduced anisotropy. This pipeline allows automated analysis of in-vivo dMRI data, probes tissue microstructure, reconstructs a number of major white matter tracts, and includes an automated quality control framework that identifies processing issues or inconsistencies. We here describe the pipeline and present an exemplar analysis of data from 140 infants imaged at 38-44 weeks post-menstrual age. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Computational Pipeline for NIRS-EEG Joint Imaging of tDCS-Evoked Cerebral Responses-An Application in Ischemic Stroke.

    PubMed

    Guhathakurta, Debarpan; Dutta, Anirban

    2016-01-01

    Transcranial direct current stimulation (tDCS) modulates cortical neural activity and hemodynamics. Electrophysiological methods (electroencephalography-EEG) measure neural activity while optical methods (near-infrared spectroscopy-NIRS) measure hemodynamics coupled through neurovascular coupling (NVC). Assessment of NVC requires development of NIRS-EEG joint-imaging sensor montages that are sensitive to the tDCS affected brain areas. In this methods paper, we present a software pipeline incorporating freely available software tools that can be used to target vascular territories with tDCS and develop a NIRS-EEG probe for joint imaging of tDCS-evoked responses. We apply this software pipeline to target primarily the outer convexity of the brain territory (superficial divisions) of the middle cerebral artery (MCA). We then present a computational method based on Empirical Mode Decomposition of NIRS and EEG time series into a set of intrinsic mode functions (IMFs), and then perform a cross-correlation analysis on those IMFs from NIRS and EEG signals to model NVC at the lesional and contralesional hemispheres of an ischemic stroke patient. For the contralesional hemisphere, a strong positive correlation between IMFs of regional cerebral hemoglobin oxygen saturation and the log-transformed mean-power time-series of IMFs for EEG with a lag of about -15 s was found after a cumulative 550 s stimulation of anodal tDCS. It is postulated that system identification, for example using a continuous-time autoregressive model, of this coupling relation under tDCS perturbation may provide spatiotemporal discriminatory features for the identification of ischemia. Furthermore, portable NIRS-EEG joint imaging can be incorporated into brain computer interfaces to monitor tDCS-facilitated neurointervention as well as cortical reorganization.

  19. Computational Pipeline for NIRS-EEG Joint Imaging of tDCS-Evoked Cerebral Responses—An Application in Ischemic Stroke

    PubMed Central

    Guhathakurta, Debarpan; Dutta, Anirban

    2016-01-01

    Transcranial direct current stimulation (tDCS) modulates cortical neural activity and hemodynamics. Electrophysiological methods (electroencephalography-EEG) measure neural activity while optical methods (near-infrared spectroscopy-NIRS) measure hemodynamics coupled through neurovascular coupling (NVC). Assessment of NVC requires development of NIRS-EEG joint-imaging sensor montages that are sensitive to the tDCS affected brain areas. In this methods paper, we present a software pipeline incorporating freely available software tools that can be used to target vascular territories with tDCS and develop a NIRS-EEG probe for joint imaging of tDCS-evoked responses. We apply this software pipeline to target primarily the outer convexity of the brain territory (superficial divisions) of the middle cerebral artery (MCA). We then present a computational method based on Empirical Mode Decomposition of NIRS and EEG time series into a set of intrinsic mode functions (IMFs), and then perform a cross-correlation analysis on those IMFs from NIRS and EEG signals to model NVC at the lesional and contralesional hemispheres of an ischemic stroke patient. For the contralesional hemisphere, a strong positive correlation between IMFs of regional cerebral hemoglobin oxygen saturation and the log-transformed mean-power time-series of IMFs for EEG with a lag of about −15 s was found after a cumulative 550 s stimulation of anodal tDCS. It is postulated that system identification, for example using a continuous-time autoregressive model, of this coupling relation under tDCS perturbation may provide spatiotemporal discriminatory features for the identification of ischemia. Furthermore, portable NIRS-EEG joint imaging can be incorporated into brain computer interfaces to monitor tDCS-facilitated neurointervention as well as cortical reorganization. PMID:27378836

  20. Location of coating defects and assessment of level of cathodic protection on underground pipelines using AC impedance, deterministic and non-deterministic models

    NASA Astrophysics Data System (ADS)

    Castaneda-Lopez, Homero

    A methodology is presented for detecting and locating defects or discontinuities in the outer coating of coated metal underground pipelines subjected to cathodic protection. A physical laboratory setup of an underground, cathodically protected, coated pipeline was built; wide-range AC impedance signals at various frequencies were applied to the steel-coated pipeline system, and its corresponding transfer function was measured under several laboratory simulation scenarios. This model included different variables and elements that exist under real conditions, such as soil resistivity, soil chemical composition, defect (holiday) location in the pipeline coating, defect area and geometry, and level of cathodic protection. The AC impedance data obtained under different working conditions were used to fit an electrical transmission line model. This model was then used as a tool to fit the impedance signal for different experimental conditions and to establish trends in the impedance behavior without the need for further experimental work. However, due to the chaotic nature of the system's transfer function response under several conditions, non-deterministic models based on pattern-recognition algorithms are believed to be better suited to field-condition analysis. A non-deterministic approach was applied to the experimental data in the form of an artificial neural network (ANN) classification algorithm capable of characterizing the pipeline system and discriminating the variables that change impedance conditions: level of cathodic protection, location of discontinuities (holidays), and severity of corrosion. This work demonstrated a proof of concept: a well-known measurement technique combined with a novel algorithm that classifies experimental impedance data to predict the location of active holidays and defects on buried pipelines. Laboratory findings from this procedure are promising, and efforts to develop it for field conditions should continue.
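
    For readers unfamiliar with the transmission-line representation used here, the sketch below evaluates the input impedance of a coated pipeline modeled as an RC transmission line; every parameter value is an assumption chosen for illustration, not a fitted result from the dissertation:

        import numpy as np

        # Illustrative per-unit-length parameters for a coated, buried pipeline
        # (assumptions for the sketch, not fitted values).
        R = 2.0e-5       # pipe steel longitudinal resistance, ohm/m
        r_coat = 1.0e5   # coating resistance for unit length, ohm*m
        c_coat = 1.0e-7  # coating/interface capacitance per unit length, F/m
        L_pipe = 1000.0  # pipeline length, m

        freqs = np.logspace(-3, 3, 61)        # Hz
        w = 2 * np.pi * freqs
        Y = 1.0 / r_coat + 1j * w * c_coat    # shunt admittance per metre
        Z = R                                 # series impedance per metre
        gamma = np.sqrt(Z * Y)                # propagation constant
        Zc = np.sqrt(Z / Y)                   # characteristic impedance

        # Input impedance of a line of length L with an open (insulated) far end.
        Zin = Zc / np.tanh(gamma * L_pipe)

        for f, z in zip(freqs[::12], Zin[::12]):
            print(f"{f:9.3f} Hz  |Z| = {abs(z):12.2f} ohm"
                  f"  phase = {np.angle(z, deg=True):6.1f} deg")

    A coating holiday can be mimicked by locally lowering the coating resistance, which shifts the low-frequency plateau of |Z|; this is the kind of trend the fitted model is used to explore.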

  1. Automated image alignment for 2D gel electrophoresis in a high-throughput proteomics pipeline.

    PubMed

    Dowsey, Andrew W; Dunn, Michael J; Yang, Guang-Zhong

    2008-04-01

    The quest for high-throughput proteomics has revealed a number of challenges in recent years. Whilst substantial improvements in automated protein separation with liquid chromatography and mass spectrometry (LC/MS), aka 'shotgun' proteomics, have been achieved, large-scale open initiatives such as the Human Proteome Organization (HUPO) Brain Proteome Project have shown that maximal proteome coverage is only possible when LC/MS is complemented by 2D gel electrophoresis (2-DE) studies. Moreover, both separation methods require automated alignment and differential analysis to relieve the bioinformatics bottleneck and so make high-throughput protein biomarker discovery a reality. The purpose of this article is to describe a fully automatic image alignment framework for the integration of 2-DE into a high-throughput differential expression proteomics pipeline. The proposed method is based on robust automated image normalization (RAIN) to circumvent the drawbacks of traditional approaches. These use symbolic representation at the very early stages of the analysis, which introduces persistent errors due to inaccuracies in modelling and alignment. In RAIN, a third-order volume-invariant B-spline model is incorporated into a multi-resolution schema to correct for geometric and expression inhomogeneity at multiple scales. The normalized images can then be compared directly in the image domain for quantitative differential analysis. Through evaluation against an existing state-of-the-art method on real and synthetically warped 2D gels, the proposed analysis framework demonstrates substantial improvements in matching accuracy and differential sensitivity. High-throughput analysis is established through an accelerated GPGPU (general purpose computation on graphics cards) implementation. Supplementary material, software and images used in the validation are available at http://www.proteomegrid.org/rain/.
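
    RAIN itself is not reproduced here, but a generic multi-resolution B-spline image registration, the family of transforms RAIN builds on, can be sketched with SimpleITK; the file names, mesh size and optimizer settings below are assumptions for illustration only:

        import SimpleITK as sitk

        # Hypothetical input gels; any 2-D grayscale format SimpleITK reads.
        fixed = sitk.ReadImage("gel_reference.png", sitk.sitkFloat32)
        moving = sitk.ReadImage("gel_sample.png", sitk.sitkFloat32)

        # Coarse B-spline control-point grid. RAIN uses a volume-invariant
        # third-order B-spline within a multi-resolution schema; this sketch
        # is a plain B-spline registration.
        tx = sitk.BSplineTransformInitializer(fixed, transformDomainMeshSize=[8, 8])

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMeanSquares()
        reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                                 numberOfIterations=100)
        reg.SetInitialTransform(tx, inPlace=True)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetShrinkFactorsPerLevel([4, 2, 1])      # multi-resolution schedule
        reg.SetSmoothingSigmasPerLevel([2.0, 1.0, 0.0])

        out_tx = reg.Execute(fixed, moving)
        aligned = sitk.Resample(moving, fixed, out_tx, sitk.sitkLinear, 0.0)
        sitk.WriteImage(sitk.Cast(aligned, sitk.sitkUInt8), "gel_sample_aligned.png")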

  2. Tracking B-Cell Repertoires and Clonal Histories in Normal and Malignant Lymphocytes.

    PubMed

    Weston-Bell, Nicola J; Cowan, Graeme; Sahota, Surinder S

    2017-01-01

    Methods for tracking B-cell repertoires and clonal history in normal and malignant B-cells based on immunoglobulin variable region (IGV) gene analysis have developed rapidly with the advent of massive parallel next-generation sequencing (mpNGS) protocols. mpNGS permits a depth of analysis of IGV genes not hitherto feasible, and presents challenges of bioinformatics analysis, which can be readily met by current pipelines. This strategy offers a potential resolution of B-cell usage at a depth that may capture fully the natural state, in a given biological setting. Conventional methods based on RT-PCR amplification and Sanger sequencing are also available where mpNGS is not accessible. Each method offers distinct advantages. Conventional methods for IGV gene sequencing are readily adaptable to most laboratories and provide an ease of analysis to capture salient features of B-cell use. This chapter describes two methods in detail for analysis of IGV genes, mpNGS and conventional RT-PCR with Sanger sequencing.

  3. Analysis pipelines and packages for Infinium HumanMethylation450 BeadChip (450k) data.

    PubMed

    Morris, Tiffany J; Beck, Stephan

    2015-01-15

    The Illumina HumanMethylation450 BeadChip has become a popular platform for interrogating DNA methylation in epigenome-wide association studies (EWAS) and related projects as well as resource efforts such as the International Cancer Genome Consortium (ICGC) and the International Human Epigenome Consortium (IHEC). This has resulted in an exponential increase of 450k data in recent years and triggered the development of numerous integrated analysis pipelines and stand-alone packages. This review will introduce and discuss the currently most popular pipelines and packages and is particularly aimed at new 450k users. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Cathodic Protection Measurement Through Inline Inspection Technology Uses and Observations

    NASA Astrophysics Data System (ADS)

    Ferguson, Briana Ley

    This research supports the evaluation of an impressed current cathodic protection (CP) system of a buried coated steel pipeline through alternative technology and methods, via an inline inspection device (ILI, CP ILI tool, or tool), in order to prevent and mitigate external corrosion. This thesis investigates the ability to measure the current density of a pipeline's CP system from inside the pipeline rather than manually from outside, and then to convert that CP ILI tool reading into a pipe-to-soil potential as required by regulations and standards. This was demonstrated through a mathematical model that applies Ohm's law, circuit concepts, and attenuation principles, matching the results of the ILI sample data by varying the model parameters (i.e., values for overpotential and coating resistivity). No previous research has determined whether the protected potential range can be achieved with respect to the current density predicted by the CP ILI device. Kirchhoff's method was explored, but certain principles could not be used in the model because they require manual measurements. The model is based on circuit concepts that indirectly reflect the electrochemical processes. Through Ohm's law, the results show that a constant current density is possible in the protected potential range, which indicates polarization of the pipeline and, electrochemically, leads to the development of calcareous deposits. Calcareous deposits are desirable in industry since they increase the resistance of the pipeline coating and lower the required current, thus slowing the oxygen diffusion process. This research shows that an alternative method for CP evaluation from inside the pipeline is possible, in which the pipe-to-soil potential can be estimated (as required by regulations) from the ILI tool's current density measurement.
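
    A minimal sketch of the kind of Ohm's-law and attenuation reasoning described above is given below; it uses the classical potential-attenuation relation for a coated line together with an assumed lumped polarization resistance, and all parameter values are hypothetical rather than taken from the thesis:

        import numpy as np

        # Illustrative parameters (assumptions for the sketch, not thesis values).
        r_long = 2.0e-5   # longitudinal pipe resistance, ohm/m
        rp     = 10.0     # polarization + coating resistance, ohm*m^2
        d      = 0.3      # pipe outer diameter, m
        L      = 5000.0   # protected length from the drain point, m
        E0     = -0.45    # polarization shift at the drain point, V

        g_coat = np.pi * d / rp            # coating conductance per metre, S/m
        alpha  = np.sqrt(r_long * g_coat)  # attenuation constant, 1/m

        # Classical attenuation of the potential shift along a line with an
        # insulated far end; Ohm's law then links local shift and current density.
        x = np.linspace(0.0, L, 6)
        dE = E0 * np.cosh(alpha * (L - x)) / np.cosh(alpha * L)
        i = dE / rp                        # local current density, A/m^2

        for xi, Ei, ii in zip(x, dE, i):
            print(f"x = {xi:6.0f} m   dE = {Ei:7.4f} V   i = {ii*1e3:7.3f} mA/m^2")

        # Inverting the same relation maps an ILI current-density reading i_m
        # back to an estimated pipe-to-soil potential shift: dE_est = i_m * rp.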

  5. A cross docking pipeline for improving pose prediction and virtual screening performance

    NASA Astrophysics Data System (ADS)

    Kumar, Ashutosh; Zhang, Kam Y. J.

    2018-01-01

    Pose prediction and virtual screening performance of a molecular docking method depend on the choice of protein structures used for docking. Multiple structures of a target protein are often used to account for receptor flexibility and the problems associated with relying on a single receptor structure. However, using multiple receptor structures is computationally expensive when docking a large library of small molecules. Here, we propose a new cross-docking pipeline suitable for docking a large library of molecules while taking advantage of multiple target protein structures. Our method selects a suitable receptor for each ligand in a screening library based on the ligand's 3D shape similarity with crystallographic ligands. We prospectively evaluated our method in D3R Grand Challenge 2 and demonstrated that our cross-docking pipeline can achieve similar or better performance than using either single or multiple receptor structures. Moreover, our method displayed not only decent pose prediction performance but also better virtual screening performance than several other methods.
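
    A hedged sketch of the receptor-selection step, choosing for each library ligand the receptor whose crystallographic ligand is most shape-similar, might look as follows with RDKit; the SD file names and query SMILES are hypothetical, and the actual pipeline's shape-scoring protocol may differ:

        from rdkit import Chem
        from rdkit.Chem import AllChem, rdMolAlign, rdShapeHelpers

        def shape_similarity(probe_smiles, ref_mol):
            # 3D shape Tanimoto similarity between a library ligand (built
            # from SMILES) and a crystallographic ligand with a 3-D conformer.
            probe = Chem.AddHs(Chem.MolFromSmiles(probe_smiles))
            AllChem.EmbedMolecule(probe, randomSeed=0xF00D)
            AllChem.MMFFOptimizeMolecule(probe)
            rdMolAlign.GetO3A(probe, ref_mol).Align()   # overlay the pair
            return 1.0 - rdShapeHelpers.ShapeTanimotoDist(probe, ref_mol)

        # Hypothetical inputs: crystallographic ligands extracted from each
        # receptor structure, and one screening-library molecule.
        receptors = {"receptor_A": "ligA.sdf", "receptor_B": "ligB.sdf"}
        query = "CC(=O)Nc1ccc(O)cc1"

        scores = {}
        for name, sdf in receptors.items():
            ref = next(Chem.SDMolSupplier(sdf, removeHs=False))
            scores[name] = shape_similarity(query, ref)

        best = max(scores, key=scores.get)
        print(scores, "-> dock into", best)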

  6. An Automated, Adaptive Framework for Optimizing Preprocessing Pipelines in Task-Based Functional MRI

    PubMed Central

    Churchill, Nathan W.; Spring, Robyn; Afshin-Pour, Babak; Dong, Fan; Strother, Stephen C.

    2015-01-01

    BOLD fMRI is sensitive to blood-oxygenation changes correlated with brain function; however, it is limited by relatively weak signal and significant noise confounds. Many preprocessing algorithms have been developed to control noise and improve signal detection in fMRI. Although the chosen set of preprocessing and analysis steps (the “pipeline”) significantly affects signal detection, pipelines are rarely quantitatively validated in the neuroimaging literature, due to complex preprocessing interactions. This paper outlines and validates an adaptive resampling framework for evaluating and optimizing preprocessing choices against data-driven metrics of task prediction and spatial reproducibility. Compared to standard “fixed” preprocessing pipelines, this optimization approach significantly improves independent validation measures of within-subject test-retest reliability, between-subject activation overlap, and behavioural prediction accuracy. We demonstrate that preprocessing choices function as implicit model regularizers, and that improvements due to pipeline optimization generalize across a range of simple to complex experimental tasks and analysis models. Results are shown for brief scanning sessions (<3 minutes each), demonstrating that with pipeline optimization, it is possible to obtain reliable results and brain-behaviour correlations in relatively small datasets. PMID:26161667
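
    The selection rule itself can be sketched compactly: enumerate candidate pipelines, score each by task prediction (P) and spatial reproducibility (R), and keep the pipeline closest to the ideal (P, R) = (1, 1). In the sketch below the metric computation is a random placeholder and the step options are invented, so only the optimization skeleton is meaningful:

        import itertools
        import numpy as np

        rng = np.random.default_rng(7)

        def evaluate_pipeline(steps):
            # Placeholder: run preprocessing 'steps', then return
            # (prediction accuracy, split-half spatial reproducibility),
            # both in [0, 1]. Real metrics come from resampled analyses.
            return rng.uniform(0.5, 1.0), rng.uniform(0.5, 1.0)

        # Candidate step options (illustrative names, not an exhaustive set).
        options = {
            "motion_correction": [True, False],
            "slice_timing":      [True, False],
            "smoothing_mm":      [0, 4, 8],
            "detrend_order":     [1, 3],
        }

        best, best_d = None, np.inf
        for combo in itertools.product(*options.values()):
            steps = dict(zip(options.keys(), combo))
            p, r = evaluate_pipeline(steps)
            d = np.hypot(1.0 - p, 1.0 - r)   # distance from ideal (1, 1)
            if d < best_d:
                best, best_d = steps, d

        print("selected pipeline:", best, f"(distance {best_d:.3f})")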

  7. Integration of modern remote sensing technologies for faster utility mapping and data extraction

    NASA Astrophysics Data System (ADS)

    Ristic, Aleksandar; Govedarica, Miro; Vrtunski, Milan; Petrovacki, Dusan

    2015-04-01

    Analysis of the application of modern remote sensing technologies in current research shows a significant increase in interest in fast and efficient detection of underground installations. The main motivations are the prevention of damage during excavation works and the creation of a cadastre of underground utilities suitable for operating and maintaining such resources. Given its wide applicability to the detection of underground installations, ground penetrating radar (GPR) is used here as the prevalent method for acquiring radargrams of pipelines, and its results are compared with an acquisition by an Unmanned Aerial Vehicle (UAV), an Aibot X6 drone equipped with an Optris PI Lightweight Kit (a miniaturized lightweight PC and a weight-optimized Optris PI450 LW infrared camera). The aim of the research presented in this paper is to analyze the benefits of integrating a mobile system capable of very fast, reliable and relatively inexpensive detection of heating pipelines, using thermal imaging aerial inspection together with GPR for control sampling of radargrams at specific locations along the routes, in order to achieve the following: simple identification of heating pipeline characteristics, prevention and registration of damage, and automated data extraction. The results of the integrated application of these remote sensing technologies show that, within a 10 min planned flight, it is possible to detect and georeference heating pipeline routes over an area of 50,000 m2 by thermal imaging inspection, which assigns a temperature value to each pixel in an image. The experiment showed that registration is also possible for both pre-insulated and conventionally insulated heating pipes, with a temperature difference of up to 4 degrees between the routes and their surroundings. Note that imaging must be performed during the working period, when hot water circulates in the heating pipelines. Analysis of the heating pipeline routes delineated by thermal imaging inspection identifies points of temperature anomaly where control measurements using GPR are necessary. The control radargrams are then interpreted by software implementing automatic identification strategies. Since heating pipes are installed in a distinctive way (two pipes, within or without concrete channels), they form a characteristic reflection in the radargram from which the dimensions of the heating pipes can be identified, either from the estimated standard dimensions of a concrete heating-pipe channel or from the hyperbolic reflections of the two pipes. The research results show that the integrated application of these technologies enables efficient, high-quality inspection of a heating pipeline system with estimation of the most relevant parameters. This abstract is a contribution to the 2015 EGU GA Session GI3.1 "Civil Engineering Applications of Ground Penetrating Radar," organised by the COST Action TU1208.

  8. Numerical Leak Detection in a Pipeline Network of Complex Structure with Unsteady Flow

    NASA Astrophysics Data System (ADS)

    Aida-zade, K. R.; Ashrafova, E. R.

    2017-12-01

    An inverse problem for a pipeline network of complex loopback structure is solved numerically. The problem is to determine the locations and amounts of leaks from unsteady flow characteristics measured at some pipeline points. The features of the problem include impulse functions involved in a system of hyperbolic differential equations, the absence of classical initial conditions, and boundary conditions specified as nonseparated relations between the states at the endpoints of adjacent pipeline segments. The problem is reduced to a parametric optimal control problem without initial conditions, but with nonseparated boundary conditions. The latter problem is solved by applying first-order optimization methods. Results of numerical experiments are presented.

  9. Bridging ImmunoGenomic Data Analysis Workflow Gaps (BIGDAWG): An integrated case-control analysis pipeline.

    PubMed

    Pappas, Derek J; Marin, Wesley; Hollenbach, Jill A; Mack, Steven J

    2016-03-01

    Bridging ImmunoGenomic Data-Analysis Workflow Gaps (BIGDAWG) is an integrated data-analysis pipeline designed for the standardized analysis of highly-polymorphic genetic data, specifically for the HLA and KIR genetic systems. Most modern genetic analysis programs are designed for the analysis of single nucleotide polymorphisms, but the highly polymorphic nature of HLA and KIR data require specialized methods of data analysis. BIGDAWG performs case-control data analyses of highly polymorphic genotype data characteristic of the HLA and KIR loci. BIGDAWG performs tests for Hardy-Weinberg equilibrium, calculates allele frequencies and bins low-frequency alleles for k×2 and 2×2 chi-squared tests, and calculates odds ratios, confidence intervals and p-values for each allele. When multi-locus genotype data are available, BIGDAWG estimates user-specified haplotypes and performs the same binning and statistical calculations for each haplotype. For the HLA loci, BIGDAWG performs the same analyses at the individual amino-acid level. Finally, BIGDAWG generates figures and tables for each of these comparisons. BIGDAWG obviates the error-prone reformatting needed to traffic data between multiple programs, and streamlines and standardizes the data-analysis process for case-control studies of highly polymorphic data. BIGDAWG has been implemented as the bigdawg R package and as a free web application at bigdawg.immunogenomics.org. Copyright © 2015 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.
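
    The core statistics BIGDAWG reports for one locus, low-frequency binning, a k×2 chi-squared test, and per-allele odds ratios with Woolf confidence intervals, can be reproduced in outline as follows (the allele counts and binning threshold are made-up illustrations, and this sketch is not the bigdawg R package itself):

        import numpy as np
        from scipy.stats import chi2_contingency

        # Hypothetical allele counts at one locus: allele -> [cases, controls].
        counts = {"A*01": [40, 25], "A*02": [55, 60],
                  "A*03": [30, 42], "A*24": [3, 2]}

        # Bin low-frequency alleles (total count < 10), mirroring the
        # binning step described in the abstract.
        table, rare = {}, [0, 0]
        for allele, (ca, co) in counts.items():
            if ca + co < 10:
                rare[0] += ca
                rare[1] += co
            else:
                table[allele] = [ca, co]
        if sum(rare):
            table["binned"] = rare

        # k x 2 chi-squared test over all alleles.
        chi2, p, dof, _ = chi2_contingency(np.array(list(table.values())))
        print(f"k x 2 test: chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")

        # Per-allele 2 x 2 odds ratio with a 95% Woolf confidence interval.
        tot = np.array(list(table.values())).sum(axis=0)
        for allele, (ca, co) in table.items():
            a, b = ca, tot[0] - ca      # allele vs all others, cases
            c, d = co, tot[1] - co      # allele vs all others, controls
            orr = (a * d) / (b * c)
            se = np.sqrt(1/a + 1/b + 1/c + 1/d)
            lo = np.exp(np.log(orr) - 1.96 * se)
            hi = np.exp(np.log(orr) + 1.96 * se)
            print(f"{allele}: OR = {orr:.2f} (95% CI {lo:.2f}-{hi:.2f})")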

  10. Method for Qualification of Composite Repairs for Pipelines: Patch Repairs and Considerations for Cathodic Protection

    DOT National Transportation Integrated Search

    2009-12-03

    While the mechanical properties of composite repairs for pipelines have been investigated extensively, the performance of the entire metal-composite system has not been addressed with regard to corrosion of the substrate, water intrusion at the compo...

  11. 49 CFR 192.453 - General.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Requirements for Corrosion Control § 192.453 General. The corrosion control procedures required by § 192.605(b)(2), including those for the design... direction of, a person qualified in pipeline corrosion control methods. [Amdt. 192-71, 59 FR 6584, Feb. 11...

  12. 49 CFR 192.453 - General.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Requirements for Corrosion Control § 192.453 General. The corrosion control procedures required by § 192.605(b)(2), including those for the design... direction of, a person qualified in pipeline corrosion control methods. [Amdt. 192-71, 59 FR 6584, Feb. 11...

  13. 49 CFR 192.453 - General.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Requirements for Corrosion Control § 192.453 General. The corrosion control procedures required by § 192.605(b)(2), including those for the design... direction of, a person qualified in pipeline corrosion control methods. [Amdt. 192-71, 59 FR 6584, Feb. 11...

  14. 49 CFR 192.453 - General.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Requirements for Corrosion Control § 192.453 General. The corrosion control procedures required by § 192.605(b)(2), including those for the design... direction of, a person qualified in pipeline corrosion control methods. [Amdt. 192-71, 59 FR 6584, Feb. 11...

  15. 49 CFR 192.453 - General.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Requirements for Corrosion Control § 192.453 General. The corrosion control procedures required by § 192.605(b)(2), including those for the design... direction of, a person qualified in pipeline corrosion control methods. [Amdt. 192-71, 59 FR 6584, Feb. 11...

  16. Development of an Automated Imaging Pipeline for the Analysis of the Zebrafish Larval Kidney

    PubMed Central

    Westhoff, Jens H.; Giselbrecht, Stefan; Schmidts, Miriam; Schindler, Sebastian; Beales, Philip L.; Tönshoff, Burkhard; Liebel, Urban; Gehrig, Jochen

    2013-01-01

    The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom designed tool, cavities were generated in agarose coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line in which the developing pronephros is highlighted by GFP expression. The consistent image data that was acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration dependent effects of several compounds on nephrogenesis. In addition, applicability of the imaging pipeline was further confirmed in a morpholino based model for cilia-associated human genetic disorders associated with different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects in zebrafish kidney research, and can be readily adapted for the analysis of other organ systems. PMID:24324758

  17. Development of an automated imaging pipeline for the analysis of the zebrafish larval kidney.

    PubMed

    Westhoff, Jens H; Giselbrecht, Stefan; Schmidts, Miriam; Schindler, Sebastian; Beales, Philip L; Tönshoff, Burkhard; Liebel, Urban; Gehrig, Jochen

    2013-01-01

    The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom designed tool, cavities were generated in agarose coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line in which the developing pronephros is highlighted by GFP expression. The consistent image data that was acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration dependent effects of several compounds on nephrogenesis. In addition, applicability of the imaging pipeline was further confirmed in a morpholino based model for cilia-associated human genetic disorders associated with different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects in zebrafish kidney research, and can be readily adapted for the analysis of other organ systems.

  18. Study on Failure of Third-Party Damage for Urban Gas Pipeline Based on Fuzzy Comprehensive Evaluation

    PubMed Central

    Li, Jun; Zhang, Hong; Han, Yinshan; Wang, Baodong

    2016-01-01

    Focusing on the diversity, complexity and uncertainty of third-party damage accidents, the failure probability of third-party damage to urban gas pipelines was evaluated using analytic hierarchy process theory and fuzzy mathematics. A fault tree of third-party damage containing 56 basic events was built through hazard identification of third-party damage. Fuzzy evaluation of the basic event probabilities was conducted with the expert judgment method, using fuzzy set membership functions. The weight of each expert was determined and the evaluation opinions were modified using an improved analytic hierarchy process, and the failure possibility of third-party damage to the urban gas pipeline was calculated. Taking the gas pipelines of a large provincial capital city as an example, the risk assessment structure of the method was shown to conform to the actual situation, providing a basis for safety risk prevention. PMID:27875545
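
    A minimal sketch of the expert-judgment step might look as follows, assuming triangular fuzzy numbers, weighted aggregation, centroid defuzzification and Onisawa's possibility-to-probability transformation (a common choice in fuzzy fault-tree studies); the judgments, weights and even the choice of transformation are illustrative assumptions, not values from the paper:

        import numpy as np

        # Hypothetical expert judgments for one basic event, as triangular
        # fuzzy numbers (low, mid, high) on a [0, 1] possibility scale,
        # with AHP-derived expert weights (both illustrative).
        judgments = np.array([[0.2, 0.3, 0.4],
                              [0.3, 0.4, 0.5],
                              [0.1, 0.2, 0.3]])
        weights = np.array([0.5, 0.3, 0.2])   # sum to 1

        agg = weights @ judgments             # weighted fuzzy aggregation
        fps = agg.mean()                      # centroid defuzzification

        # Onisawa's transformation from fuzzy possibility score to a
        # failure probability.
        k = 2.301 * ((1.0 - fps) / fps) ** (1.0 / 3.0)
        prob = 10.0 ** (-k) if fps > 0 else 0.0
        print(f"FPS = {fps:.3f} -> failure probability ~ {prob:.2e}")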

  19. Determination of disease phenotypes and pathogenic variants from exome sequence data in the CAGI 4 gene panel challenge.

    PubMed

    Kundu, Kunal; Pal, Lipika R; Yin, Yizhou; Moult, John

    2017-09-01

    The use of gene panel sequence for diagnostic and prognostic testing is now widespread, but there are so far few objective tests of methods to interpret these data. We describe the design and implementation of a gene panel sequencing data analysis pipeline (VarP) and its assessment in a CAGI4 community experiment. The method was applied to clinical gene panel sequencing data of 106 patients, with the goal of determining which of 14 disease classes each patient has and the corresponding causative variant(s). The disease class was correctly identified for 36 cases, including 10 where the original clinical pipeline did not find causative variants. For a further seven cases, we found strong evidence of an alternative disease to that tested. Many of the potentially causative variants are missense, with no previous association with disease, and these proved the hardest to correctly assign pathogenicity or otherwise. Post analysis showed that three-dimensional structure data could have helped for up to half of these cases. Over-reliance on HGMD annotation led to a number of incorrect disease assignments. We used a largely ad hoc method to assign probabilities of pathogenicity for each variant, and there is much work still to be done in this area. © 2017 The Authors. Human Mutation published by Wiley Periodicals, Inc.

  20. Architecting the Finite Element Method Pipeline for the GPU.

    PubMed

    Fu, Zhisong; Lewis, T James; Kirby, Robert M; Whitaker, Ross T

    2014-02-01

    The finite element method (FEM) is a widely employed numerical technique for approximating the solution of partial differential equations (PDEs) in various science and engineering applications. Many of these applications benefit from fast execution of the FEM pipeline. One way to accelerate the FEM pipeline is by exploiting advances in modern computational hardware, such as the many-core streaming processors like the graphical processing unit (GPU). In this paper, we present the algorithms and data-structures necessary to move the entire FEM pipeline to the GPU. First we propose an efficient GPU-based algorithm to generate local element information and to assemble the global linear system associated with the FEM discretization of an elliptic PDE. To solve the corresponding linear system efficiently on the GPU, we implement a conjugate gradient method preconditioned with a geometry-informed algebraic multi-grid (AMG) method preconditioner. We propose a new fine-grained parallelism strategy, a corresponding multigrid cycling stage and efficient data mapping to the many-core architecture of GPU. Comparison of our on-GPU assembly versus a traditional serial implementation on the CPU achieves up to an 87 × speedup. Focusing on the linear system solver alone, we achieve a speedup of up to 51 × versus use of a comparable state-of-the-art serial CPU linear system solver. Furthermore, the method compares favorably with other GPU-based, sparse, linear solvers.
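
    The solve stage of such a pipeline, a preconditioned conjugate gradient iteration on an assembled FEM system, can be sketched on the CPU with SciPy; a simple Jacobi preconditioner stands in for the paper's geometry-informed AMG, and a 1-D Poisson problem is chosen only so the sketch stays self-contained:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import cg, LinearOperator

        # 1-D Poisson model problem -u'' = 1 on (0,1), u(0) = u(1) = 0,
        # discretized with linear finite elements on a uniform mesh.
        n = 1000                               # interior nodes
        h = 1.0 / (n + 1)
        main = (2.0 / h) * np.ones(n)
        off = (-1.0 / h) * np.ones(n - 1)
        A = sp.diags([off, main, off], [-1, 0, 1], format="csr")  # stiffness
        b = h * np.ones(n)                                        # load

        # Jacobi (diagonal) preconditioner as a stand-in for AMG.
        dinv = 1.0 / A.diagonal()
        M = LinearOperator(A.shape, matvec=lambda v: dinv * v)

        u, info = cg(A, b, M=M)

        # For this problem the FEM nodal values match the exact solution
        # u(x) = x(1-x)/2, so the error reflects solver accuracy only.
        x = np.linspace(h, 1 - h, n)
        print("cg info:", info,
              "| max nodal error:", np.abs(u - 0.5 * x * (1 - x)).max())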

  1. Feature extraction and identification in distributed optical-fiber vibration sensing system for oil pipeline safety monitoring

    NASA Astrophysics Data System (ADS)

    Wu, Huijuan; Qian, Ya; Zhang, Wei; Tang, Chenghao

    2017-12-01

    High sensitivity of a distributed optical-fiber vibration sensing (DOVS) system based on the phase-sensitivity optical time domain reflectometry (Φ-OTDR) technology also brings in high nuisance alarm rates (NARs) in real applications. In this paper, feature extraction methods of wavelet decomposition (WD) and wavelet packet decomposition (WPD) are comparatively studied for three typical field testing signals, and an artificial neural network (ANN) is built for the event identification. The comparison results prove that the WPD performs a little better than the WD for the DOVS signal analysis and identification in oil pipeline safety monitoring. The identification rate can be improved up to 94.4%, and the nuisance alarm rate can be effectively controlled as low as 5.6% for the identification network with the wavelet packet energy distribution features.
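
    The wavelet packet energy-distribution features described above can be sketched with PyWavelets and a small neural network; the synthetic three-class signals, decomposition settings and classifier below are illustrative assumptions, not the field data or the network of the paper:

        import numpy as np
        import pywt
        from sklearn.neural_network import MLPClassifier

        def wpd_energy_features(signal, wavelet="db4", level=3):
            # Relative energy of each terminal wavelet-packet node: the
            # kind of energy-distribution feature vector the abstract uses.
            wp = pywt.WaveletPacket(signal, wavelet, maxlevel=level)
            energies = np.array([np.sum(node.data ** 2)
                                 for node in wp.get_level(level, order="freq")])
            return energies / energies.sum()

        # Synthetic stand-ins for three event classes (e.g. digging,
        # walking, ambient noise): tones at different frequencies in noise.
        rng = np.random.default_rng(0)
        def make(freq):
            t = np.linspace(0, 1, 1024)
            return np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(t.size)

        X = np.array([wpd_energy_features(make(f))
                      for f in [20, 80, 200] for _ in range(30)])
        y = np.repeat([0, 1, 2], 30)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                            random_state=0).fit(X, y)
        print("training accuracy:", clf.score(X, y))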

  2. Identifying miRNA-mediated signaling subpathways by integrating paired miRNA/mRNA expression data with pathway topology.

    PubMed

    Vrahatis, Aristidis G; Dimitrakopoulos, Georgios N; Tsakalidis, Athanasios K; Bezerianos, Anastasios

    2015-01-01

    On the road to network medicine, newly emerged systems-level subpathway-based analysis methods offer new disease genes, drug targets and network-based biomarkers. In parallel, paired miRNA/mRNA expression data enable simultaneous monitoring of the micronome's effect on signaling pathways. In this direction, we present a methodological pipeline for the identification of differentially expressed subpathways along with their miRNA regulators, using KEGG signaling pathway maps, miRNA-target interactions and expression profiles from paired miRNA/mRNA experiments. Our pipeline offered new biological insights in a real application of paired miRNA/mRNA expression profiles to the dynamic changes from colostrum to mature milk whey; several literature-supported genes and miRNAs were recontextualized through miRNA-mediated differentially expressed subpathways.

  3. Gap-free segmentation of vascular networks with automatic image processing pipeline.

    PubMed

    Hsu, Chih-Yang; Ghaffari, Mahsa; Alaraj, Ali; Flannery, Michael; Zhou, Xiaohong Joe; Linninger, Andreas

    2017-03-01

    Current image processing techniques capture large vessels reliably but often fail to preserve connectivity in bifurcations and small vessels. Imaging artifacts and noise can create gaps and discontinuities of intensity that hinder segmentation of vascular trees. However, topological analysis of vascular trees requires proper connectivity, without gaps, loops or dangling segments. Proper tree connectivity is also important for high-quality rendering of surface meshes for scientific visualization or 3D printing. We present a fully automated vessel enhancement pipeline with automated parameter settings for vessel enhancement of tree-like structures from customary imaging sources, including 3D rotational angiography, magnetic resonance angiography, magnetic resonance venography, and computed tomography angiography. The output of the filter pipeline is a vessel-enhanced image which is ideal for generating anatomically consistent network representations of the cerebral angioarchitecture for further topological or statistical analysis. The filter pipeline combined with computational modeling can potentially improve computer-aided diagnosis of cerebrovascular diseases by delivering biometrics and anatomy of the vasculature. It may serve as the first step in fully automatic epidemiological analysis of large clinical datasets, as the automatic analysis would enable rigorous statistical comparison of biometrics in subject-specific vascular trees. The robust and accurate image segmentation of a validated filter pipeline would also eliminate the operator dependency observed in manual segmentation. Moreover, manual segmentation is time-prohibitive, given that vascular trees have thousands of segments and bifurcations, so interactive segmentation consumes excessive human resources. Subject-specific trees are a first step toward patient-specific hemodynamic simulations for assessing treatment outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
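
    A generic (and much simplified) stand-in for such a vessel-enhancement pipeline can be put together from standard filters: a vesselness response, hysteresis thresholding to bridge weak but connected responses, and small-object removal. The sketch below uses scikit-image with invented parameters and a synthetic image, and is not the authors' validated pipeline:

        import numpy as np
        from skimage import filters, morphology

        def enhance_vessels(img, low=0.05, high=0.15, min_size=50):
            # Frangi vesselness (bright tubes), hysteresis thresholding
            # (keeps weak responses connected to strong ones, reducing
            # gaps), then removal of small spurious components.
            v = filters.frangi(img, black_ridges=False)
            v = (v - v.min()) / (v.max() - v.min() + 1e-12)
            mask = filters.apply_hysteresis_threshold(v, low, high)
            return morphology.remove_small_objects(mask, min_size=min_size)

        # Tiny synthetic angiogram: a bright curved "vessel" on noise.
        rng = np.random.default_rng(1)
        img = 0.1 * rng.random((128, 128))
        rr = np.arange(20, 108)
        img[rr, (64 + 20 * np.sin(rr / 15)).astype(int)] += 1.0

        mask = enhance_vessels(img)
        print("vessel pixels:", int(mask.sum()))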

  4. Relationship between Pipeline Wall Thickness (Gr. X60) and Water Depth towards Avoiding Failure during Installation

    NASA Astrophysics Data System (ADS)

    Razak, K. Abdul; Othman, M. I. H.; Mat Yusuf, S.; Fuad, M. F. I. Ahmad; yahaya, Effah

    2018-05-01

    Oil and gas fields today are being developed at water depths characterized as shallow, deep and ultra-deep. Among the major components involved in offshore installation are pipelines. A pipeline is a means of transporting material through pipe; in the oil and gas industry, a pipeline is made up of many line pipes welded together into one long line, and pipelines can be divided into gas pipelines and oil pipelines. Pipeline installation requires a pipe-laying barge or pipe-laying vessel, of which there are two types: S-lay vessels and J-lay vessels. A pipe-lay vessel does not only perform pipeline installation; it also installs umbilicals and electrical cables. In simple terms, a pipe-lay vessel performs the installation of all connecting subsea infrastructure. The installation process requires special attention for it to succeed; for instance, heavy pipelines may exceed the lay vessel's tension capacities at certain water depths. Pipelines are characterized and differentiated by parameters such as material grade, material type, diameter, wall thickness and strength. Wall thickness parameter studies indicate that using a higher steel grade contributes significantly to reducing the required pipeline wall thickness. During pipe laying, water depth is the most critical factor to monitor, because the water depth cannot be controlled; instead, the pipe characteristics, such as wall thickness, must be chosen to suit the water depth in order to avoid failure during installation. This research analyses whether the pipeline parameters meet the requirement limits and the minimum yield stress. It simulates pipe of grade API 5L X60 with wall thicknesses from 8 to 20 mm at water depths of 50 to 300 m. The results show that the pipeline installation will fail from a wall thickness of 18 mm onwards, since the critical yield percentage is exceeded.
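
    As a toy illustration of this kind of screening, the sketch below tabulates only the hydrostatic hoop-stress utilization of an X60 pipe over the stated thickness and depth ranges; the diameter, the allowable fraction of SMYS, and the neglect of installation bending and tension are all assumptions, so it deliberately does not reproduce the paper's 18 mm finding:

        import numpy as np

        # Screening check of pressure-induced hoop stress only; a real
        # installation analysis must also include bending, axial tension
        # and combined-load criteria.
        SMYS = 415e6          # API 5L X60 specified minimum yield stress, Pa
        D = 0.508             # assumed outer diameter (20 in), m
        rho_g = 1025 * 9.81   # seawater weight density, N/m^3
        limit = 0.72          # assumed allowable fraction of SMYS

        depths = range(50, 301, 50)
        print("t\\depth(m): " + "  ".join(f"{d:5d}" for d in depths))
        for t_mm in range(8, 21, 2):
            t = t_mm / 1000.0
            row = []
            for depth in depths:
                p = rho_g * depth              # external hydrostatic pressure
                hoop = p * D / (2.0 * t)       # thin-wall hoop stress
                row.append(f"{hoop / SMYS:5.1%}")
            print(f"t = {t_mm:2d} mm: " + "  ".join(row))
        # Any cell above 'limit' would flag that thickness/depth pair.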

  5. Simulation and Experiment Research on Fatigue Life of High Pressure Air Pipeline Joint

    NASA Astrophysics Data System (ADS)

    Shang, Jin; Xie, Jianghui; Yu, Jian; Zhang, Deman

    2017-12-01

    The high pressure air pipeline joint is an important part of a high pressure air system, and its reliability bears on the safety and stability of the system. This thesis developed a new type of high pressure air pipeline joint, carried out dynamics research on the CB316-1995 joint and the new joint using the finite element method, and analysed in depth the joint forms of different design schemes and the effect of materials on the stress, tightening torque and fatigue life of the joint. The research team set up a vibration/pulse test bench and carried out comparative joint fatigue life tests. The results show that the maximum stress of the joint is located on the inner side of the outer sleeve nut, which is consistent with the failure mode of cracks on the outer sleeve nut observed in practice. In both simulation and experiment, the fatigue life and tightening torque of the new type of high pressure air pipeline joint are better than those of the CB316-1995 joint.

  6. A comparative study of RNA-Seq and microarray data analysis on the two examples of rectal-cancer patients and Burkitt Lymphoma cells.

    PubMed

    Wolff, Alexander; Bayerlová, Michaela; Gaedcke, Jochen; Kube, Dieter; Beißbarth, Tim

    2018-01-01

    Pipeline comparisons for gene expression data are highly valuable for applied real data analyses, as they enable the selection of suitable analysis strategies for the dataset at hand. Such pipelines should include mapping of reads, counting and differential gene expression analysis for RNA-Seq data, or preprocessing, normalization and differential gene expression in the case of microarray analysis, in order to give a global insight into pipeline performances. Four commonly used RNA-Seq pipelines (STAR/HTSeq-Count/edgeR, STAR/RSEM/edgeR, Sailfish/edgeR, TopHat2/Cufflinks/CuffDiff) were investigated on multiple levels (alignment and counting) and cross-compared with the microarray counterpart on the level of gene expression and gene ontology enrichment. For these comparisons we generated two matched microarray and RNA-Seq datasets: Burkitt Lymphoma cell line data and rectal cancer patient data. The overall mapping rate of STAR was 98.98% for the cell line dataset and 98.49% for the patient dataset. TopHat2's overall mapping rate was 97.02% and 96.73%, respectively, while Sailfish had an overall mapping rate of only 84.81% and 54.44%. The correlation of gene expression in microarray and RNA-Seq data was moderately worse for the patient dataset (ρ = 0.67-0.69) than for the cell line dataset (ρ = 0.87-0.88). An exception was the correlation results of Cufflinks, which were substantially lower (ρ = 0.21-0.29 and 0.34-0.53). For both datasets we identified very low numbers of differentially expressed genes using the microarray platform. For RNA-Seq we checked the agreement of differentially expressed genes identified by the different pipelines and of GO-term enrichment results. In conclusion, the combination of the STAR aligner with HTSeq-Count, followed by the STAR aligner with RSEM and then Sailfish, generated differentially expressed genes best suited for the dataset at hand and in agreement with most of the other transcriptomics pipelines.

  7. MetAMOS: a modular and open source metagenomic assembly and analysis pipeline

    PubMed Central

    2013-01-01

    We describe MetAMOS, an open source and modular metagenomic assembly and analysis pipeline. MetAMOS represents an important step towards fully automated metagenomic analysis, starting with next-generation sequencing reads and producing genomic scaffolds, open-reading frames and taxonomic or functional annotations. MetAMOS can aid in reducing assembly errors, commonly encountered when assembling metagenomic samples, and improves taxonomic assignment accuracy while also reducing computational cost. MetAMOS can be downloaded from: https://github.com/treangen/MetAMOS. PMID:23320958

  8. Systematic comparison of variant calling pipelines using gold standard personal exome variants

    PubMed Central

    Hwang, Sohyun; Kim, Eiru; Lee, Insuk; Marcotte, Edward M.

    2015-01-01

    The success of clinical genomics using next generation sequencing (NGS) requires the accurate and consistent identification of personal genome variants. Assorted variant calling methods have been developed, which show low concordance between their calls. Hence, a systematic comparison of the variant callers could give important guidance to NGS-based clinical genomics. Recently, a set of high-confidence variant calls for one individual (NA12878) has been published by the Genome in a Bottle (GIAB) consortium, enabling performance benchmarking of different variant calling pipelines. Based on the gold standard reference variant calls from GIAB, we compared the performance of thirteen variant calling pipelines, testing combinations of three read aligners—BWA-MEM, Bowtie2, and Novoalign—and four variant callers—Genome Analysis Tool Kit HaplotypeCaller (GATK-HC), Samtools mpileup, Freebayes and Ion Proton Variant Caller (TVC), for twelve data sets for the NA12878 genome sequenced by different platforms including Illumina2000, Illumina2500, and Ion Proton, with various exome capture systems and exome coverage. We observed different biases toward specific types of SNP genotyping errors by the different variant callers. The results of our study provide useful guidelines for reliable variant identification from deep sequencing of personal genomes. PMID:26639839

  9. 30 CFR 250.1016 - Granting pipeline rights-of-way.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Regional Supervisor shall consider the potential effect of the associated pipeline on the human, marine... area during construction and operational phases. The Regional Supervisor shall prepare an environmental analysis in accordance with applicable policies and guidelines. To aid in the evaluation and determinations...

  10. AMES Stereo Pipeline Derived DEM Accuracy Experiment Using LROC-NAC Stereopairs and Weighted Spatial Dependence Simulation for Lunar Site Selection

    NASA Astrophysics Data System (ADS)

    Laura, J. R.; Miller, D.; Paul, M. V.

    2012-03-01

    An accuracy assessment of AMES Stereo Pipeline derived DEMs for lunar site selection using weighted spatial dependence simulation and a call for outside AMES derived DEMs to facilitate a statistical precision analysis.

  11. 75 FR 3221 - Ruby Pipeline, L.L.C.; Notice of Availability of the Final Environmental Impact Statement for the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-20

    ... effects of the construction and operation of the following project facilities: About 675.2 miles of 42... any of the following methods: Web site: http://www.blm.gov/nv/st/en/info/nepa/ruby_pipeline_project...

  12. Basic overview towards the assessment of landslide and subsidence risks along a geothermal pipeline network

    NASA Astrophysics Data System (ADS)

    Astisiasari; Van Westen, Cees; Jetten, Victor; van der Meer, Freek; Rahmawati Hizbaron, Dyah

    2017-12-01

    An operating geothermal power plant consists of installation units that work systematically in a network. The pipeline network connects various engineering structures, e.g. well pads, separator, scrubber, and power station, in the process of transferring geothermal fluids to generate electricity. In addition, the pipeline infrastructure returns the brine to the earth through the injection well pads. Despite its important functions, a geothermal pipeline may pose a threat to its vicinity through pipeline failure. The pipeline can be impacted by perilous events such as landslides, earthquakes, and subsidence, while pipeline failure itself may also stem from physical deterioration over time, e.g. corrosion and fatigue. Geothermal reservoirs are usually located in mountainous areas associated with steep slopes, complex geology, and weathered soil, and geothermal areas record a noteworthy number of disasters, especially landslides and subsidence. Therefore, a proper multi-risk assessment along the geothermal pipeline is required, particularly for these two types of hazard. Impacts in terms of human fatalities and injuries are not discussed here. This paper aims to give a basic overview of existing approaches to multi-risk assessment along geothermal pipelines. It presents basic principles for the analysis of risks and their contributing variables in order to model the loss consequences. By considering the loss consequences, as well as the alternatives for mitigation measures, environmental safety in the geothermal working area can be enforced.

  13. A computational genomics pipeline for prokaryotic sequencing projects.

    PubMed

    Kislyuk, Andrey O; Katz, Lee S; Agrawal, Sonia; Hagen, Matthew S; Conley, Andrew B; Jayaraman, Pushkala; Nelakuditi, Viswateja; Humphrey, Jay C; Sammons, Scott A; Govil, Dhwani; Mair, Raydel D; Tatti, Kathleen M; Tondella, Maria L; Harcourt, Brian H; Mayer, Leonard W; Jordan, I King

    2010-08-01

    New sequencing technologies have accelerated research on prokaryotic genomes and have made genome sequencing operations outside major genome sequencing centers routine. However, no off-the-shelf solution exists for the combined assembly, gene prediction, genome annotation and data presentation necessary to interpret sequencing data. The resulting requirement to invest significant resources into custom informatics support for genome sequencing projects remains a major impediment to the accessibility of high-throughput sequence data. We present a self-contained, automated high-throughput open source genome sequencing and computational genomics pipeline suitable for prokaryotic sequencing projects. The pipeline has been used at the Georgia Institute of Technology and the Centers for Disease Control and Prevention for the analysis of Neisseria meningitidis and Bordetella bronchiseptica genomes. The pipeline is capable of enhanced or manually assisted reference-based assembly using multiple assemblers and modes; gene predictor combining; and functional annotation of genes and gene products. Because every component of the pipeline is executed on a local machine with no need to access resources over the Internet, the pipeline is suitable for projects of a sensitive nature. Annotation of virulence-related features makes the pipeline particularly useful for projects working with pathogenic prokaryotes. The pipeline is licensed under the open-source GNU General Public License and available at the Georgia Tech Neisseria Base (http://nbase.biology.gatech.edu/). The pipeline is implemented with a combination of Perl, Bourne Shell and MySQL and is compatible with Linux and other Unix systems.

  14. Numerical Investigation of the Thermal Regime of Underground Channel Heat Pipelines Under Flooding Conditions with the Use of a Conductive-Convective Heat Transfer Model

    NASA Astrophysics Data System (ADS)

    Polovnikov, V. Yu.

    2018-05-01

    This paper presents the results of numerical analysis of thermal regimes and heat losses of underground channel heating systems under flooding conditions with the use of a convective-conductive heat transfer model with the example of the configuration of the heat pipeline widely used in the Russian Federation — a nonpassage ferroconcrete channel (crawlway) and pipelines insulated with mineral wool and a protective covering layer. It has been shown that convective motion of water in the channel cavity of the heat pipeline under flooding conditions has no marked effect on the intensification of heat losses. It has been established that for the case under consideration, heat losses of the heat pipeline under flooding conditions increase from 0.75 to 52.39% due to the sharp increase in the effective thermal characteristics of the covering layer and the heat insulator caused by their moistening.

  15. Numerical Investigation of the Thermal Regime of Underground Channel Heat Pipelines Under Flooding Conditions with the Use of a Conductive-Convective Heat Transfer Model

    NASA Astrophysics Data System (ADS)

    Polovnikov, V. Yu.

    2018-03-01

    This paper presents the results of numerical analysis of thermal regimes and heat losses of underground channel heating systems under flooding conditions with the use of a convective-conductive heat transfer model with the example of the configuration of the heat pipeline widely used in the Russian Federation — a nonpassage ferroconcrete channel (crawlway) and pipelines insulated with mineral wool and a protective covering layer. It has been shown that convective motion of water in the channel cavity of the heat pipeline under flooding conditions has no marked effect on the intensification of heat losses. It has been established that for the case under consideration, heat losses of the heat pipeline under flooding conditions increase from 0.75 to 52.39% due to the sharp increase in the effective thermal characteristics of the covering layer and the heat insulator caused by their moistening.

  16. Finite Element Analysis and Experimental Study on Elbow Vibration Transmission Characteristics

    NASA Astrophysics Data System (ADS)

    Qing-shan, Dai; Zhen-hai, Zhang; Shi-jian, Zhu

    2017-11-01

    Pipeline system vibration is one of the significant factors leading to vibration and noise in vessels. Elbows are widely used in pipeline systems; however, research on elbow vibration is scarce and there has been no systematic study. In this research, we first analysed the relationship between elbow vibration transmission characteristics and bending radius by ABAQUS finite element simulation. We then conducted further vibration tests to observe the vibration transmission characteristics of elbows with the same diameter and different bending radii under different flow velocities. The results of both simulation and experiment showed that the vibration acceleration levels of the pipeline system decreased with increasing elbow bending radius, which is beneficial for reducing vibration transmission in the pipeline system. The results can serve as a reference for further studies and for low-noise installation designs of pipeline systems.

  17. Analysis of oil-pipeline distribution of multiple products subject to delivery time-windows

    NASA Astrophysics Data System (ADS)

    Jittamai, Phongchai

    This dissertation defines the operational problems of, and develops solution methodologies for, the distribution of multiple products in an oil pipeline subject to delivery time-window constraints. A multiple-product oil pipeline is a pipeline system composed of pipes, pumps, valves and storage facilities used to transport different types of liquids. Typically, the products delivered by such pipelines are petroleum products of different grades moving either from production facilities to refineries or from refineries to distributors. Time-windows, which are widely used in logistics and scheduling, are incorporated in this study. The distribution of multiple products in an oil pipeline subject to delivery time-windows is modeled as a multicommodity network flow structure and formulated mathematically. The main focus of this dissertation is the investigation of the operating issues and problem complexity of single-source pipeline problems, along with a solution methodology to compute an input schedule that yields the minimum total violation of the due delivery time-windows. The problem is proved to be NP-complete. A heuristic approach, a reversed-flow algorithm, is developed based on pipeline flow reversibility to compute input schedules; it runs in O(T·E) time. The dissertation also extends the study to the operating attributes and problem complexity of multiple-source pipelines. The multiple-source pipeline problem is also NP-complete, and a heuristic algorithm modified from the single-source case, also running in O(T·E) time, is introduced. Computational results are presented for both methodologies on randomly generated problem sets. The computational experience indicates that the reversed-flow algorithms provide good solutions in comparison with the optimal solutions: only 25% of the problems tested were more than 30% above the optimal values, and approximately 40% of the tested problems were solved optimally by the algorithms.

  18. Mathematical modeling of ignition of woodlands resulted from accident on the pipeline

    NASA Astrophysics Data System (ADS)

    Perminov, V. A.; Loboda, E. L.; Reyno, V. V.

    2014-11-01

    Accidents occurring at pipeline sites are accompanied by environmental damage, economic loss, and sometimes loss of life. In this paper we calculated the sizes of the possible ignition zones arising in emergency situations on pipelines located close to forest and accompanied by the appearance of fireballs. Using mathematical modeling, the maximum sizes of the vegetation ignition zones are calculated as a result of accidental releases of flammable substances. Within the context of a general mathematical model of forest fires, a new mathematical formulation and a method for the numerical solution of the forest fire modeling problem are given. The boundary-value problem is solved numerically using splitting by physical processes. The dependences of the ignition zone size on the amount of leaked flammable substance and on the moisture content of the vegetation are obtained.

  19. JGI Plant Genomics Gene Annotation Pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shu, Shengqiang; Rokhsar, Dan; Goodstein, David

    2014-07-14

    Plant genomes vary in size and are highly complex, with large numbers of repeats, genome duplications and tandem duplications. Genes encode a wealth of information useful in studying organisms, and it is critical to have high quality and stable gene annotation. Thanks to advances in sequencing technology, the genomes and transcriptomes of many plant species have been sequenced. To turn these vast amounts of sequence data into gene annotations or re-annotations in a timely fashion, an automatic pipeline is needed. The JGI plant genomics gene annotation pipeline, called integrated gene call (IGC), is our effort toward this aim, with the aid of an RNA-seq transcriptome assembly pipeline. It utilizes several gene predictors based on homolog peptides and transcript ORFs. See Methods for detail. Here we present the genome annotations of JGI flagship green plants produced by this pipeline, plus Arabidopsis and rice, except for chlamy, which was annotated by a third party. The genome annotations of these species and others are used in our gene family build pipeline and are accessible via the JGI Phytozome portal.

  20. Deep learning on temporal-spectral data for anomaly detection

    NASA Astrophysics Data System (ADS)

    Ma, King; Leung, Henry; Jalilian, Ehsan; Huang, Daniel

    2017-05-01

    Detecting anomalies is important for the continuous monitoring of sensor systems. One significant challenge is to use sensor data to autonomously detect changes that cause different conditions to occur. Using deep learning methods, we are able to monitor and detect changes resulting from a disturbance in the system. We utilize deep neural networks for sequence analysis of time series and use a multi-step method for anomaly detection. We train the network to learn spectral and temporal features from the acoustic time series, and we test our method using fiber-optic acoustic data from a pipeline.
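
    A hedged sketch of the general recipe: temporal-spectral features extracted from a time series, a small network trained on frames assumed to be normal, and anomalies flagged by reconstruction error. The tiny PyTorch autoencoder and the synthetic data are assumptions, not the authors' model.

    ```python
    import numpy as np
    import torch
    from scipy.signal import spectrogram

    # Synthetic "acoustic" trace with an injected disturbance at t = 30 s.
    fs = 1000.0
    t = np.arange(0, 60, 1 / fs)
    trace = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
    trace[30_000:31_000] += 2.0 * np.random.randn(1_000)

    _, _, S = spectrogram(trace, fs=fs, nperseg=256)
    frames = torch.tensor(np.log1p(S.T), dtype=torch.float32)  # (time, freq)

    # Train a small autoencoder on early frames assumed to be normal.
    model = torch.nn.Sequential(
        torch.nn.Linear(frames.shape[1], 16), torch.nn.ReLU(),
        torch.nn.Linear(16, frames.shape[1]),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    train = frames[:100]
    for _ in range(200):
        opt.zero_grad()
        loss = torch.mean((model(train) - train) ** 2)
        loss.backward()
        opt.step()

    # Frames the model reconstructs poorly are flagged as anomalous.
    with torch.no_grad():
        err = torch.mean((model(frames) - frames) ** 2, dim=1)
    print("most anomalous frame index:", int(torch.argmax(err)))
    ```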

  1. Bioinformatics Pipelines for Targeted Resequencing and Whole-Exome Sequencing of Human and Mouse Genomes: A Virtual Appliance Approach for Instant Deployment

    PubMed Central

    Saeed, Isaam; Wong, Stephen Q.; Mar, Victoria; Goode, David L.; Caramia, Franco; Doig, Ken; Ryland, Georgina L.; Thompson, Ella R.; Hunter, Sally M.; Halgamuge, Saman K.; Ellul, Jason; Dobrovic, Alexander; Campbell, Ian G.; Papenfuss, Anthony T.; McArthur, Grant A.; Tothill, Richard W.

    2014-01-01

    Targeted resequencing by massively parallel sequencing has become an effective and affordable way to survey small to large portions of the genome for genetic variation. Despite the rapid development in open source software for analysis of such data, the practical implementation of these tools through construction of sequencing analysis pipelines still remains a challenging and laborious activity, and a major hurdle for many small research and clinical laboratories. We developed TREVA (Targeted REsequencing Virtual Appliance), making pre-built pipelines immediately available as a virtual appliance. Based on virtual machine technologies, TREVA is a solution for rapid and efficient deployment of complex bioinformatics pipelines to laboratories of all sizes, enabling reproducible results. The analyses that are supported in TREVA include: somatic and germline single-nucleotide and insertion/deletion variant calling, copy number analysis, and cohort-based analyses such as pathway and significantly mutated genes analyses. TREVA is flexible and easy to use, and can be customised by Linux-based extensions if required. TREVA can also be deployed on the cloud (cloud computing), enabling instant access without investment overheads for additional hardware. TREVA is available at http://bioinformatics.petermac.org/treva/. PMID:24752294

  2. MetaStorm: A Public Resource for Customizable Metagenomics Annotation

    PubMed Central

    Arango-Argoty, Gustavo; Singh, Gargi; Heath, Lenwood S.; Pruden, Amy; Xiao, Weidong; Zhang, Liqing

    2016-01-01

    Metagenomics is a trending research area that calls for the analysis of large quantities of data generated from next generation DNA sequencing technologies. The need to store, retrieve, analyze, share, and visualize such data challenges current online computational systems. Interpretation and annotation of specific information is especially a challenge for metagenomic data sets derived from environmental samples, because current annotation systems only offer broad classification of microbial diversity and function. Moreover, existing resources are not configured to readily address common questions relevant to environmental systems. Here we developed a new online user-friendly metagenomic analysis server called MetaStorm (http://bench.cs.vt.edu/MetaStorm/), which facilitates customization of computational analysis for metagenomic data sets. Users can upload their own reference databases to tailor the metagenomics annotation to focus on various taxonomic and functional gene markers of interest. MetaStorm offers two major analysis pipelines: an assembly-based annotation pipeline and the standard read annotation pipeline used by existing web servers. These pipelines can be selected individually or together. Overall, MetaStorm provides enhanced interactive visualization to allow researchers to explore and manipulate taxonomy and functional annotation at various levels of resolution. PMID:27632579

  3. MetaStorm: A Public Resource for Customizable Metagenomics Annotation.

    PubMed

    Arango-Argoty, Gustavo; Singh, Gargi; Heath, Lenwood S; Pruden, Amy; Xiao, Weidong; Zhang, Liqing

    2016-01-01

    Metagenomics is a trending research area that calls for the analysis of large quantities of data generated from next generation DNA sequencing technologies. The need to store, retrieve, analyze, share, and visualize such data challenges current online computational systems. Interpretation and annotation of specific information is especially a challenge for metagenomic data sets derived from environmental samples, because current annotation systems only offer broad classification of microbial diversity and function. Moreover, existing resources are not configured to readily address common questions relevant to environmental systems. Here we developed a new online user-friendly metagenomic analysis server called MetaStorm (http://bench.cs.vt.edu/MetaStorm/), which facilitates customization of computational analysis for metagenomic data sets. Users can upload their own reference databases to tailor the metagenomics annotation to focus on various taxonomic and functional gene markers of interest. MetaStorm offers two major analysis pipelines: an assembly-based annotation pipeline and the standard read annotation pipeline used by existing web servers. These pipelines can be selected individually or together. Overall, MetaStorm provides enhanced interactive visualization to allow researchers to explore and manipulate taxonomy and functional annotation at various levels of resolution.

  4. 78 FR 27169 - Regulatory Flexibility Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-09

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Chapter... parts 174, 177, 191, and 192... 2013 2014 Transportation of Natural and Other Gas by Pipeline; Annual... review of some of 49 CFR parts 106, 107, 171. The full analysis document for the hazardous materials...

  5. Quarry blasts assessment and their environmental impacts on the nearby oil pipelines, southeast of Helwan City, Egypt

    NASA Astrophysics Data System (ADS)

    Mohamed, Adel M. E.; Mohamed, Abuo El-Ela A.

    2013-06-01

    Ground vibrations induced by blasting in cement quarries are one of the fundamental problems in the quarrying industry and may cause severe damage to nearby utilities and pipelines. Therefore, a vibration control study plays an important role in minimizing the environmental effects of blasting in quarries. The current paper presents the influence of the quarry blasts at the National Cement Company (NCC) on the two oil pipelines of the SUMED Company southeast of Helwan City, by measuring the ground vibrations in terms of Peak Particle Velocity (PPV). The compressional wave velocities deduced from the shallow seismic refraction survey and the shear wave velocities obtained from the Multichannel Analysis of Surface Waves (MASW) technique are used to evaluate the site of the two pipelines closest to the quarry blasts. The results demonstrate that the closest site of the two pipelines is of class B, according to the National Earthquake Hazard Reduction Program (NEHRP) classification, and that the safe distance to avoid any environmental effects is 650 m, following the deduced Peak Particle Velocity and scaled distance (SD) relationship (PPV = 700.08 × SD^(-1.225), in mm/s) and the air overpressure (air blast) formula (air blast = 170.23 × SD^(-0.071), in dB). In the light of the prediction analysis, the maximum allowable charge weight per delay was found to be 591 kg, with a damage criterion of 12.5 mm/s at the closest site of the SUMED pipelines.
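
    The reported numbers can be cross-checked from the attenuation law. Assuming square-root charge scaling for the scaled distance (SD = D/√Q, with D in m and Q in kg; the abstract does not state the definition, but this choice reproduces its figures), a short Python check recovers the stated 591 kg maximum charge per delay:

    ```python
    # PPV = 700.08 * SD**-1.225 (mm/s), from the deduced site law.
    K, beta = 700.08, 1.225
    ppv_limit = 12.5        # damage criterion, mm/s
    distance = 650.0        # closest pipeline site, m

    sd_limit = (K / ppv_limit) ** (1.0 / beta)   # SD at which PPV hits the limit
    max_charge = (distance / sd_limit) ** 2      # kg per delay, assuming SD = D/sqrt(Q)

    print(f"limiting scaled distance: {sd_limit:.1f}")
    print(f"maximum charge per delay: {max_charge:.0f} kg")   # ~591 kg, as reported
    ```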

  6. Informatics for RNA Sequencing: A Web Resource for Analysis on the Cloud

    PubMed Central

    Griffith, Malachi; Walker, Jason R.; Spies, Nicholas C.; Ainscough, Benjamin J.; Griffith, Obi L.

    2015-01-01

    Massively parallel RNA sequencing (RNA-seq) has rapidly become the assay of choice for interrogating RNA transcript abundance and diversity. This article provides a detailed introduction to fundamental RNA-seq molecular biology and informatics concepts. We make available open-access RNA-seq tutorials that cover cloud computing, tool installation, relevant file formats, reference genomes, transcriptome annotations, quality-control strategies, expression, differential expression, and alternative splicing analysis methods. These tutorials and additional training resources are accompanied by complete analysis pipelines and test datasets made available without encumbrance at www.rnaseq.wiki. PMID:26248053

  7. SUPRA: open-source software-defined ultrasound processing for real-time applications : A 2D and 3D pipeline from beamforming to B-mode.

    PubMed

    Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph

    2018-06-01

    Research in ultrasound imaging is limited in reproducibility by two factors: first, many existing ultrasound pipelines are protected by intellectual property, rendering the exchange of code difficult; second, most pipelines are implemented in special hardware, resulting in limited flexibility of the processing steps implemented on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to the output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and with regard to its run time. The pipeline shows image quality comparable to that of a clinical system and, backed by point spread function measurements, comparable resolution. Covering all processing stages of a typical ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
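
    For orientation, the last stages of such a pipeline (envelope detection and log compression of beamformed RF data into a B-mode image) fit in a few lines. This is a generic textbook sketch on synthetic data, not SUPRA's implementation:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def rf_to_bmode(rf, dynamic_range_db=60.0):
        """Convert beamformed RF lines (samples x scanlines) into a
        log-compressed B-mode image."""
        envelope = np.abs(hilbert(rf, axis=0))        # envelope detection
        envelope /= envelope.max()                    # normalize to [0, 1]
        bmode_db = 20.0 * np.log10(envelope + 1e-12)  # log compression
        return np.clip(bmode_db, -dynamic_range_db, 0.0)

    # Toy RF data: noise plus a bright reflector band halfway down the image.
    rf = 0.01 * np.random.randn(2048, 128)
    rf[1000:1040, :] += np.sin(np.linspace(0, 40 * np.pi, 40))[:, None]
    image = rf_to_bmode(rf)
    print(image.shape, float(image.min()), float(image.max()))
    ```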

  8. An integrated SNP mining and utilization (ISMU) pipeline for next generation sequencing data.

    PubMed

    Azam, Sarwar; Rathore, Abhishek; Shah, Trushar M; Telluri, Mohan; Amindala, BhanuPrakash; Ruperao, Pradeep; Katta, Mohan A V S K; Varshney, Rajeev K

    2014-01-01

    Open source single nucleotide polymorphism (SNP) discovery pipelines for next generation sequencing data commonly require working knowledge of a command line interface, massive computational resources and expertise, which is a daunting prospect for biologists. Further, the SNP information generated may not be readily usable for downstream processes such as genotyping. Hence, a comprehensive pipeline has been developed by integrating several open source next generation sequencing (NGS) tools along with a graphical user interface, called Integrated SNP Mining and Utilization (ISMU), for SNP discovery and utilization through the development of genotyping assays. The pipeline features functionalities such as pre-processing of raw data, integration of open source alignment tools (Bowtie2, BWA, Maq, NovoAlign and SOAP2), SNP prediction methods (SAMtools/SOAPsnp/CNS2snp and CbCC) and interfaces for developing genotyping assays. The pipeline outputs a list of high quality SNPs between all pairwise combinations of the genotypes analyzed, in addition to the reference genome/sequence. Visualization tools (Tablet and Flapjack) integrated into the pipeline enable inspection of the alignment and of errors, if any. The pipeline also provides a confidence score or polymorphism information content value with flanking sequences for identified SNPs, in the standard format required for developing marker genotyping (KASP and Golden Gate) assays. The pipeline enables users to process a range of NGS datasets, such as whole genome re-sequencing, restriction site associated DNA sequencing and transcriptome sequencing data, at a fast speed. The pipeline is very useful for the plant genetics and breeding community with no computational expertise, enabling them to discover SNPs and utilize them in genomics, genetics and breeding studies. It has been parallelized to process huge next generation sequencing datasets, has been developed in Java and is available at http://hpc.icrisat.cgiar.org/ISMU as standalone free software.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Washenfelder, D. J.; Girardot, C. L.; Wilson, E. R.

    The twenty-eight double-shell underground radioactive waste storage tanks at the U. S. Department of Energy’s Hanford Site near Richland, WA are interconnected by the Waste Transfer System network of buried steel encased pipelines and pipe jumpers in below-grade pits. The pipeline material is stainless steel or carbon steel in 51 mm to 152 mm (2 in. to 6 in.) sizes. The pipelines carry slurries ranging up to 20 volume percent solids and supernatants with less than one volume percent solids at velocities necessary to prevent settling. The pipelines, installed between 1976 and 2011, were originally intended to last until the 2028 completion of the double-shell tank storage mission. The mission has been subsequently extended. In 2010 the Tank Operating Contractor began a systematic evaluation of the Waste Transfer System pipeline conditions applying guidelines from API 579-1/ASME FFS-1 (2007), Fitness-For-Service. Between 2010 and 2014 Fitness-for-Service examinations of the Waste Transfer System pipeline materials, sizes, and components were completed. In parallel, waste throughput histories were prepared allowing side-by-side pipeline wall thinning rate comparisons between carbon and stainless steel, slurries and supernatants and throughput volumes. The work showed that for transfer volumes up to 6.1E+05 m³ (161 million gallons), the highest throughput of any pipeline segment examined, there has been no detectable wall thinning in either stainless or carbon steel pipeline material regardless of waste fluid characteristics or throughput. The paper describes the field and laboratory evaluation methods used for the Fitness-for-Service examinations, the results of the examinations, and the data reduction methodologies used to support Hanford Waste Transfer System pipeline wall thinning conclusions.

  10. An adaptive sparse deconvolution method for distinguishing the overlapping echoes of ultrasonic guided waves for pipeline crack inspection

    NASA Astrophysics Data System (ADS)

    Chang, Yong; Zi, Yanyang; Zhao, Jiyuan; Yang, Zhe; He, Wangpeng; Sun, Hailiang

    2017-03-01

    In guided wave pipeline inspection, echoes reflected from closely spaced reflectors generally overlap, meaning useful information is lost. To solve the overlapping problem, sparse deconvolution methods have been developed over the past decade. However, conventional sparse deconvolution methods have limitations in handling guided wave signals, because the input signal is directly used as the prototype of the convolution matrix, without considering the waveform change caused by the dispersion properties of the guided wave. In this paper, an adaptive sparse deconvolution (ASD) method is proposed to overcome these limitations. First, the Gaussian echo model is employed to adaptively estimate the column prototype of the convolution matrix instead of directly using the input signal as the prototype. Then, the convolution matrix is constructed from the estimated results. Finally, the split augmented Lagrangian shrinkage (SALSA) algorithm is introduced to solve the deconvolution problem with high computational efficiency. To verify the effectiveness of the proposed method, guided wave signals obtained from pipeline inspection are investigated numerically and experimentally. Compared to conventional sparse deconvolution methods, e.g. the l1-norm deconvolution method, the proposed method shows better performance in handling the echo overlap problem in guided wave signals.
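
    A hedged sketch of the conventional baseline the paper compares against: l1-norm sparse deconvolution with a convolution matrix whose columns are shifted copies of a Gaussian echo prototype. ISTA is substituted for SALSA purely for brevity, and all signal parameters are invented.

    ```python
    import numpy as np

    def gaussian_echo(t, center, width):
        """Gaussian echo prototype, a common model for ultrasonic wave packets."""
        return np.exp(-((t - center) ** 2) / (2.0 * width ** 2))

    # Convolution matrix built from shifted copies of the prototype
    # (np.roll wraps at the edges; acceptable for this toy example).
    n = 200
    t = np.arange(n, dtype=float)
    proto = gaussian_echo(t, n // 2, 4.0)
    H = np.column_stack([np.roll(proto, k - n // 2) for k in range(n)])

    # Two closely spaced reflectors produce overlapping echoes.
    x_true = np.zeros(n); x_true[90], x_true[105] = 1.0, 0.8
    y = H @ x_true + 0.01 * np.random.randn(n)

    # l1-regularized deconvolution via ISTA (iterative soft thresholding).
    lam = 0.05
    L = np.linalg.norm(H, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(500):
        z = x - (H.T @ (H @ x - y)) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

    print("recovered spike locations:", np.nonzero(x > 0.1)[0])
    ```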

  11. Domain selection combined with improved cloning strategy for high throughput expression of higher eukaryotic proteins

    PubMed Central

    Chen, Yunjia; Qiu, Shihong; Luan, Chi-Hao; Luo, Ming

    2007-01-01

    Background Expression of higher eukaryotic genes as soluble, stable recombinant proteins is still a bottleneck step in biochemical and structural studies of novel proteins today. Correct identification of stable domains/fragments within the open reading frame (ORF), combined with proper cloning strategies, can greatly enhance the success rate when higher eukaryotic proteins are expressed as these domains/fragments. Furthermore, a HTP cloning pipeline incorporated with bioinformatics domain/fragment selection methods will be beneficial to studies of structure and function genomics/proteomics. Results With bioinformatics tools, we developed a domain/domain boundary prediction (DDBP) method, which was trained by available experimental data. Combined with an improved cloning strategy, DDBP had been applied to 57 proteins from C. elegans. Expression and purification results showed there was a 10-fold increase in terms of obtaining purified proteins. Based on the DDBP method, the improved GATEWAY cloning strategy and a robotic platform, we constructed a high throughput (HTP) cloning pipeline, including PCR primer design, PCR, BP reaction, transformation, plating, colony picking and entry clones extraction, which have been successfully applied to 90 C. elegans genes, 88 Brucella genes, and 188 human genes. More than 97% of the targeted genes were obtained as entry clones. This pipeline has a modular design and can adopt different operations for a variety of cloning/expression strategies. Conclusion The DDBP method and improved cloning strategy were satisfactory. The cloning pipeline, combined with our recombinant protein HTP expression pipeline and the crystal screening robots, constitutes a complete platform for structure genomics/proteomics. This platform will increase the success rate of purification and crystallization dramatically and promote the further advancement of structure genomics/proteomics. PMID:17663785

  12. Corral framework: Trustworthy and fully functional data intensive parallel astronomical pipelines

    NASA Astrophysics Data System (ADS)

    Cabral, J. B.; Sánchez, B.; Beroiz, M.; Domínguez, M.; Lares, M.; Gurovich, S.; Granitto, P.

    2017-07-01

    Data processing pipelines represent an important slice of the astronomical software library, comprising chains of processes that transform raw data into valuable information via data reduction and analysis. In this work we present Corral, a Python framework for astronomical pipeline generation. Corral features a Model-View-Controller design pattern on top of an SQL relational database, capable of handling custom data models, processing stages and communication alerts, and it also provides automatic quality and structural metrics based on unit testing. The Model-View-Controller pattern provides separation of concerns between the user logic and the data models, delivering at the same time multi-processing and distributed computing capabilities. Corral represents an improvement over commonly found data processing pipelines in astronomy, since the design pattern frees the programmer from dealing with processing flow and parallelization issues, allowing them to focus on the specific algorithms needed for the successive data transformations, while at the same time providing a broad measure of quality for the created pipeline. Corral and working examples of pipelines that use it are available to the community at https://github.com/toros-astro.

  13. Non-destructive, high-content analysis of wheat grain traits using X-ray micro computed tomography.

    PubMed

    Hughes, Nathan; Askew, Karen; Scotson, Callum P; Williams, Kevin; Sauze, Colin; Corke, Fiona; Doonan, John H; Nibau, Candida

    2017-01-01

    Wheat is one of the most widely grown crops in temperate climates, for food and animal feed. In order to meet the demands of the predicted population increase in an ever-changing climate, wheat production needs to increase dramatically. Spike and grain traits are critical determinants of final yield, and grain uniformity is a commercially desired trait, but their analysis is laborious and often requires destructive harvest. One of the current challenges is to develop an accurate, non-destructive method for spike and grain trait analysis capable of handling large populations. In this study we describe the development of a robust method for the accurate extraction and measurement of spike and grain morphometric parameters from images acquired by X-ray micro-computed tomography (μCT). The image analysis pipeline developed automatically identifies the plant material of interest in μCT images, performs image analysis, and extracts morphometric data. As a proof of principle, this integrated methodology was used to analyse the spikes from a population of wheat plants subjected to high temperatures under two different water regimes. Temperature has a negative effect on spike height and grain number, with the middle of the spike being the most affected region. The data also confirmed that increased grain volume was correlated with the decrease in grain number under mild stress. Being able to quickly measure plant phenotypes in a non-destructive manner is crucial to advancing our understanding of gene function and the effects of the environment. We report on the development of an image analysis pipeline capable of accurately and reliably extracting spike and grain traits from crops without the loss of positional information. This methodology, applied here to the analysis of wheat spikes, can be readily applied to other economically important crop species.

  14. Fracture control for the Oman India Pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruno, T.V.

    1996-12-31

    This paper describes the evaluation of the resistance to fracture initiation and propagation for the high-strength, heavy-wall pipe required for the Oman India Pipeline (OIP). It discusses the unique aspects of this pipeline and their influence on fracture control, reviews conventional fracture control design methods, their limitations with regard to the pipe in question, and the extent to which they can be utilized for this project, as well as other approaches being explored. Test pipe of the size and grade required for the OIP shows fracture toughness well in excess of the minimum requirements.

  15. Workflows in bioinformatics: meta-analysis and prototype implementation of a workflow generator.

    PubMed

    Garcia Castro, Alexander; Thoraval, Samuel; Garcia, Leyla J; Ragan, Mark A

    2005-04-07

    Computational methods for problem solving need to interleave information access and algorithm execution in a problem-specific workflow. The structures of these workflows are defined by a scaffold of syntactic, semantic and algebraic objects capable of representing them. Despite the proliferation of GUIs (Graphic User Interfaces) in bioinformatics, only some of them provide workflow capabilities; surprisingly, no meta-analysis of workflow operators and components in bioinformatics has been reported. We present a set of syntactic components and algebraic operators capable of representing analytical workflows in bioinformatics. Iteration, recursion, the use of conditional statements, and management of suspend/resume tasks have traditionally been implemented on an ad hoc basis and hard-coded; by having these operators properly defined it is possible to use and parameterize them as generic re-usable components. To illustrate how these operations can be orchestrated, we present GPIPE, a prototype graphic pipeline generator for PISE that allows the definition of a pipeline, parameterization of its component methods, and storage of metadata in XML formats. This implementation goes beyond the macro capacities currently in PISE. As the entire analysis protocol is defined in XML, a complete bioinformatic experiment (linked sets of methods, parameters and results) can be reproduced or shared among users. http://if-web1.imb.uq.edu.au/Pise/5.a/gpipe.html (interactive), ftp://ftp.pasteur.fr/pub/GenSoft/unix/misc/Pise/ (download). From our meta-analysis we have identified syntactic structures and algebraic operators common to many workflows in bioinformatics. The workflow components and algebraic operators can be assimilated into re-usable software components. GPIPE, a prototype implementation of this framework, provides a GUI builder to facilitate the generation of workflows and integration of heterogeneous analytical tools.

  16. Development of a design methodology for pipelines in ice scoured seabeds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, J.I.; Paulin, M.J.; Lach, P.R.

    1994-12-31

    Large areas of the continental shelf of northern oceans are frequently scoured or gouged by moving bodies of ice such as icebergs and sea ice keels associated with pressure ridges. This phenomenon presents a formidable challenge when the route of a submarine pipeline is intersected by the scouring ice. It is generally acknowledged that if a pipeline, laid on the seabed, were hit by an iceberg or a pressure ridge keel, the forces imposed on the pipeline would be much greater than it could practically withstand. The pipeline must therefore be buried to avoid direct contact with ice, but it is very important to determine with some assurance the minimum depth required for safety, for both economic and environmental reasons. The safe burial depth of a pipeline, however, cannot be determined directly from the relatively straightforward measurement of maximum scour depth. The major design consideration is the determination of the potential sub-scour deformation of the ice scoured soil. Forces transmitted through the soil and soil displacement around the pipeline could load the pipeline to failure if not taken into account in the design. If the designer can predict the forces transmitted through the soil, the pipeline can be designed to withstand these external forces using conventional design practice. In this paper, the authors outline a design methodology that is based on phenomenological studies of ice scoured terrain, both modern and relict, laboratory tests, centrifuge modeling, and numerical analysis. The implications of these studies, which could assist in the safe and economical design of pipelines in ice scoured terrain, are also discussed.

  17. Launching genomics into the cloud: deployment of Mercury, a next generation sequence analysis pipeline.

    PubMed

    Reid, Jeffrey G; Carroll, Andrew; Veeraraghavan, Narayanan; Dahdouli, Mahmoud; Sundquist, Andreas; English, Adam; Bainbridge, Matthew; White, Simon; Salerno, William; Buhay, Christian; Yu, Fuli; Muzny, Donna; Daly, Richard; Duyk, Geoff; Gibbs, Richard A; Boerwinkle, Eric

    2014-01-29

    Massively parallel DNA sequencing generates staggering amounts of data. Decreasing cost, increasing throughput, and improved annotation have expanded the diversity of genomics applications in research and clinical practice. This expanding scale creates analytical challenges: accommodating peak compute demand, coordinating secure access for multiple analysts, and sharing validated tools and results. To address these challenges, we have developed the Mercury analysis pipeline and deployed it in local hardware and the Amazon Web Services cloud via the DNAnexus platform. Mercury is an automated, flexible, and extensible analysis workflow that provides accurate and reproducible genomic results at scales ranging from individuals to large cohorts. By taking advantage of cloud computing and with Mercury implemented on the DNAnexus platform, we have demonstrated a powerful combination of a robust and fully validated software pipeline and a scalable computational resource that, to date, we have applied to more than 10,000 whole genome and whole exome samples.

  18. Significantly reducing the processing times of high-speed photometry data sets using a distributed computing model

    NASA Astrophysics Data System (ADS)

    Doyle, Paul; Mtenzi, Fred; Smith, Niall; Collins, Adrian; O'Shea, Brendan

    2012-09-01

    The scientific community is in the midst of a data analysis crisis. The increasing capacity of scientific CCD instrumentation and its falling cost are contributing to an explosive generation of raw photometric data. This data must go through a process of cleaning and reduction before it can be used for high precision photometric analysis. Many existing data processing pipelines either assume a relatively small dataset or are batch processed by a High Performance Computing centre. A radical overhaul of these processing pipelines is required to allow reduction and cleaning rates to process terabyte-sized datasets at near capture rates using an elastic processing architecture. The ability to access computing resources and to allow them to grow and shrink as demand fluctuates is essential, as is exploiting the parallel nature of the datasets. A distributed data processing pipeline is required. It should incorporate lossless data compression, allow for data segmentation and support processing of data segments in parallel. Academic institutes can collaborate to provide an elastic computing model without the requirement for large centralized high performance computing data centers. This paper demonstrates how an order of magnitude (factor of ten) improvement in overall processing time has been achieved using the "ACN pipeline", a distributed pipeline spanning multiple academic institutes.
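
    A minimal sketch of the segment-and-parallelize pattern such a pipeline relies on, using Python's multiprocessing; the worker function is a hypothetical stand-in for real CCD cleaning and reduction.

    ```python
    import multiprocessing as mp

    def reduce_segment(segment):
        """Hypothetical stand-in for cleaning/reducing one data segment."""
        return sum(x * x for x in segment)

    def run_pipeline(data, n_segments=8):
        # Split the dataset into independent segments and reduce them in
        # parallel, mirroring the pipeline's data segmentation.
        step = max(1, len(data) // n_segments)
        segments = [data[i:i + step] for i in range(0, len(data), step)]
        with mp.Pool() as pool:
            return pool.map(reduce_segment, segments)

    if __name__ == "__main__":
        results = run_pipeline(list(range(1_000_000)))
        print(len(results), "segments processed")
    ```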

  19. High-Throughput Live-Cell Microscopy Analysis of Association Between Chromosome Domains and the Nucleolus in S. cerevisiae.

    PubMed

    Wang, Renjie; Normand, Christophe; Gadal, Olivier

    2016-01-01

    Spatial organization of the genome has important impacts on all aspects of chromosome biology, including transcription, replication, and DNA repair. Frequent interactions of some chromosome domains with specific nuclear compartments, such as the nucleolus, are now well documented using genome-scale methods. However, direct measurement of the distance and interaction frequency between loci requires microscopic observation of specific genomic domains and the nucleolus, followed by image analysis to allow quantification. The fluorescent repressor operator system (FROS) is an invaluable method to fluorescently tag DNA sequences and investigate chromosome position and dynamics in living cells. This chapter describes a combination of methods to define the motion and region of confinement of a locus relative to the nucleolus in the cell's nucleus, from fluorescence acquisition to automated image analysis using two dedicated pipelines.

  20. Remote control spill reduction technology : a survey and analysis of applications for liquid pipeline systems

    DOT National Transportation Integrated Search

    1995-01-01

    Given the 1988 directive, the OPS conducted a study on the potential for EFRDs : to minimize the volume of pipeline spills. They concluded that Remote Controlled Valves : (RCVs) and check valves are the only EFRDs that are effective on hazardous liqu...

  1. Tissue-aware RNA-Seq processing and normalization for heterogeneous and sparse data.

    PubMed

    Paulson, Joseph N; Chen, Cho-Yi; Lopes-Ramos, Camila M; Kuijjer, Marieke L; Platig, John; Sonawane, Abhijeet R; Fagny, Maud; Glass, Kimberly; Quackenbush, John

    2017-10-03

    Although ultrahigh-throughput RNA-Sequencing has become the dominant technology for genome-wide transcriptional profiling, the vast majority of RNA-Seq studies typically profile only tens of samples, and most analytical pipelines are optimized for these smaller studies. However, projects are generating ever-larger data sets comprising RNA-Seq data from hundreds or thousands of samples, often collected at multiple centers and from diverse tissues. These complex data sets present significant analytical challenges due to batch and tissue effects, but provide the opportunity to revisit the assumptions and methods that we use to preprocess, normalize, and filter RNA-Seq data - critical first steps for any subsequent analysis. We find that analysis of large RNA-Seq data sets requires both careful quality control and the need to account for sparsity due to the heterogeneity intrinsic in multi-group studies. We developed Yet Another RNA Normalization software pipeline (YARN), which includes quality control and preprocessing, gene filtering, and normalization steps designed to facilitate downstream analysis of large, heterogeneous RNA-Seq data sets, and we demonstrate its use with data from the Genotype-Tissue Expression (GTEx) project. An R package instantiating YARN is available at http://bioconductor.org/packages/yarn.
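
    As a toy illustration of two of the preprocessing steps named above (gene filtering followed by normalization), the sketch below drops rarely expressed genes and quantile-normalizes the remaining samples; the thresholds and data are invented and are not YARN's defaults.

    ```python
    import numpy as np

    def filter_and_quantile_normalize(counts, min_count=10, min_samples=5):
        """Drop genes expressed below min_count in fewer than min_samples
        samples, then quantile-normalize the samples (genes x samples)."""
        keep = (counts >= min_count).sum(axis=1) >= min_samples
        x = counts[keep].astype(float)
        ranks = np.argsort(np.argsort(x, axis=0), axis=0)   # per-sample ranks
        reference = np.sort(x, axis=0).mean(axis=1)         # mean quantile profile
        return reference[ranks]

    counts = np.random.negative_binomial(5, 0.3, size=(1000, 20))
    normalized = filter_and_quantile_normalize(counts)
    print(normalized.shape)
    ```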

  2. Antigen Receptor Galaxy: A User-Friendly, Web-Based Tool for Analysis and Visualization of T and B Cell Receptor Repertoire Data

    PubMed Central

    IJspeert, Hanna; van Schouwenburg, Pauline A.; van Zessen, David; Pico-Knijnenburg, Ingrid

    2017-01-01

    Antigen Receptor Galaxy (ARGalaxy) is a Web-based tool for analyses and visualization of TCR and BCR sequencing data of 13 species. ARGalaxy consists of four parts: the demultiplex tool, the international ImMunoGeneTics information system (IMGT) concatenate tool, the immune repertoire pipeline, and the somatic hypermutation (SHM) and class switch recombination (CSR) pipeline. Together they allow the analysis of all different aspects of the immune repertoire. All pipelines can be run independently or combined, depending on the available data and the question of interest. The demultiplex tool allows data trimming and demultiplexing, whereas with the concatenate tool multiple IMGT/HighV-QUEST output files can be merged into a single file. The immune repertoire pipeline is an extended version of our previously published ImmunoGlobulin Galaxy (IGGalaxy) virtual machine that was developed to visualize V(D)J gene usage. It allows analysis of both BCR and TCR rearrangements, visualizes CDR3 characteristics (length and amino acid usage) and junction characteristics, and calculates the diversity of the immune repertoire. Finally, ARGalaxy includes the newly developed SHM and CSR pipeline to analyze SHM and/or CSR in BCR rearrangements. It analyzes the frequency and patterns of SHM, Ag selection (including BASELINe), clonality (Change-O), and CSR. The functionality of the ARGalaxy tool is illustrated in several clinical examples of patients with primary immunodeficiencies. In conclusion, ARGalaxy is a novel tool for the analysis of the complete immune repertoire, which is applicable to many patient groups with disturbances in the immune repertoire such as autoimmune diseases, allergy, and leukemia, but it can also be used to address basic research questions in repertoire formation and selection. PMID:28416602

  3. Methodology for Image-Based Reconstruction of Ventricular Geometry for Patient-Specific Modeling of Cardiac Electrophysiology

    PubMed Central

    Prakosa, A.; Malamas, P.; Zhang, S.; Pashakhanloo, F.; Arevalo, H.; Herzka, D. A.; Lardo, A.; Halperin, H.; McVeigh, E.; Trayanova, N.; Vadakkumpadan, F.

    2014-01-01

    Patient-specific modeling of ventricular electrophysiology requires an interpolated reconstruction of the 3-dimensional (3D) geometry of the patient ventricles from the low-resolution (Lo-res) clinical images. The goal of this study was to implement a processing pipeline for obtaining the interpolated reconstruction, and thoroughly evaluate the efficacy of this pipeline in comparison with alternative methods. The pipeline implemented here involves contouring the epi- and endocardial boundaries in Lo-res images, interpolating the contours using the variational implicit functions method, and merging the interpolation results to obtain the ventricular reconstruction. Five alternative interpolation methods, namely linear, cubic spline, spherical harmonics, cylindrical harmonics, and shape-based interpolation were implemented for comparison. In the thorough evaluation of the processing pipeline, Hi-res magnetic resonance (MR), computed tomography (CT), and diffusion tensor (DT) MR images from numerous hearts were used. Reconstructions obtained from the Hi-res images were compared with the reconstructions computed by each of the interpolation methods from a sparse sample of the Hi-res contours, which mimicked Lo-res clinical images. Qualitative and quantitative comparison of these ventricular geometry reconstructions showed that the variational implicit functions approach performed better than others. Additionally, the outcomes of electrophysiological simulations (sinus rhythm activation maps and pseudo-ECGs) conducted using models based on the various reconstructions were compared. These electrophysiological simulations demonstrated that our implementation of the variational implicit functions-based method had the best accuracy. PMID:25148771

  4. A computational genomics pipeline for prokaryotic sequencing projects

    PubMed Central

    Kislyuk, Andrey O.; Katz, Lee S.; Agrawal, Sonia; Hagen, Matthew S.; Conley, Andrew B.; Jayaraman, Pushkala; Nelakuditi, Viswateja; Humphrey, Jay C.; Sammons, Scott A.; Govil, Dhwani; Mair, Raydel D.; Tatti, Kathleen M.; Tondella, Maria L.; Harcourt, Brian H.; Mayer, Leonard W.; Jordan, I. King

    2010-01-01

    Motivation: New sequencing technologies have accelerated research on prokaryotic genomes and have made genome sequencing operations outside major genome sequencing centers routine. However, no off-the-shelf solution exists for the combined assembly, gene prediction, genome annotation and data presentation necessary to interpret sequencing data. The resulting requirement to invest significant resources into custom informatics support for genome sequencing projects remains a major impediment to the accessibility of high-throughput sequence data. Results: We present a self-contained, automated high-throughput open source genome sequencing and computational genomics pipeline suitable for prokaryotic sequencing projects. The pipeline has been used at the Georgia Institute of Technology and the Centers for Disease Control and Prevention for the analysis of Neisseria meningitidis and Bordetella bronchiseptica genomes. The pipeline is capable of enhanced or manually assisted reference-based assembly using multiple assemblers and modes; gene predictor combining; and functional annotation of genes and gene products. Because every component of the pipeline is executed on a local machine with no need to access resources over the Internet, the pipeline is suitable for projects of a sensitive nature. Annotation of virulence-related features makes the pipeline particularly useful for projects working with pathogenic prokaryotes. Availability and implementation: The pipeline is licensed under the open-source GNU General Public License and available at the Georgia Tech Neisseria Base (http://nbase.biology.gatech.edu/). The pipeline is implemented with a combination of Perl, Bourne Shell and MySQL and is compatible with Linux and other Unix systems. Contact: king.jordan@biology.gatech.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20519285

  5. Comparative coal transportation costs: an economic and engineering analysis of truck, belt, rail, barge and coal slurry and pneumatic pipelines. Volume 3. Coal slurry pipelines. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rieber, M.; Soo, S.L.

    1977-08-01

    A coal slurry pipeline system requires that the coal go through a number of processing stages before it is used by the power plant. Once mined, the coal is delivered to a preparation plant where it is pulverized to sizes between 18 and 325 mesh and then suspended in about an equal weight of water. This 50-50 slurry mixture has a consistency approximating toothpaste. It is pushed through the pipeline via electric pumping stations 70 to 100 miles apart. Flow velocity through the line must be maintained within a narrow range. For example, if a 3.5 mph design is used at 5 mph, the system must be able to withstand double the horsepower, peak pressure, and wear. Minimum flowrate must be maintained to avoid particle settling and plugging. However, in general, once a pipeline system has been designed, because of economic considerations on the one hand and design limits on the other, flowrate is rather inflexible. Pipelines that have a slowly moving throughput and a water carrier may be subject to freezing in northern areas during periods of severe cold. One of the problems associated with slurry pipeline analyses is the lack of operating experience.

  6. ToTem: a tool for variant calling pipeline optimization.

    PubMed

    Tom, Nikola; Tom, Ondrej; Malcikova, Jitka; Pavlova, Sarka; Kubesova, Blanka; Rausch, Tobias; Kolarik, Miroslav; Benes, Vladimir; Bystry, Vojtech; Pospisilova, Sarka

    2018-06-26

    High-throughput bioinformatics analyses of next generation sequencing (NGS) data often require challenging pipeline optimization. The key problem is choosing appropriate tools and selecting the best parameters for optimal precision and recall. Here we introduce ToTem, a tool for automated pipeline optimization. ToTem is a stand-alone web application with a comprehensive graphical user interface (GUI). ToTem is written in Java and PHP with an underlying connection to a MySQL database. Its primary role is to automatically generate, execute and benchmark different variant calling pipeline settings. Our tool allows an analysis to be started from any level of the process, with the possibility of plugging in almost any tool or code. To prevent over-fitting of pipeline parameters, ToTem ensures reproducibility by using cross-validation techniques that penalize the final precision, recall and F-measure. The results are interpreted as interactive graphs and tables, allowing an optimal pipeline to be selected based on the user's priorities. Using ToTem, we were able to optimize somatic variant calling from ultra-deep targeted gene sequencing (TGS) data and germline variant detection in whole genome sequencing (WGS) data. ToTem is freely available as a web application at https://totem.software.

  7. A computational pipeline for the development of multi-marker bio-signature panels and ensemble classifiers

    PubMed Central

    2012-01-01

    Background Biomarker panels derived separately from genomic and proteomic data and with a variety of computational methods have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of individual classifiers. Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: Can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble? Results The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. Five mRNA genomic and five proteomic classifiers were developed independently using single time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. Performance of ensembles is characterized by area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. The Vote Threshold aggregation method achieved improved sensitivity for all 5 ensembles, but typically at the cost of decreased specificity. Conclusion Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway. PMID:23216969
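
    The two aggregation rules named above are simple to state in code. A numpy sketch follows; the probability cutoff and required vote count are assumptions, since the abstract does not fix them.

    ```python
    import numpy as np

    def average_probability(probs):
        """Ensemble score: mean of the member classifiers' probabilities."""
        return np.mean(probs, axis=0)

    def vote_threshold(probs, prob_cutoff=0.5, votes_needed=3):
        """Call acute rejection when at least votes_needed members each
        exceed prob_cutoff (both cutoffs are illustrative assumptions)."""
        votes = (np.asarray(probs) >= prob_cutoff).sum(axis=0)
        return votes >= votes_needed

    # Five hypothetical member classifiers scoring three patients.
    probs = np.array([[0.9, 0.2, 0.6],
                      [0.8, 0.3, 0.4],
                      [0.7, 0.1, 0.7],
                      [0.6, 0.4, 0.5],
                      [0.9, 0.2, 0.3]])
    print("average probability:", average_probability(probs))
    print("vote-threshold calls:", vote_threshold(probs))
    ```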

  8. A computational pipeline for the development of multi-marker bio-signature panels and ensemble classifiers.

    PubMed

    Günther, Oliver P; Chen, Virginia; Freue, Gabriela Cohen; Balshaw, Robert F; Tebbutt, Scott J; Hollander, Zsuzsanna; Takhar, Mandeep; McMaster, W Robert; McManus, Bruce M; Keown, Paul A; Ng, Raymond T

    2012-12-08

    Biomarker panels derived separately from genomic and proteomic data and with a variety of computational methods have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of individual classifiers. Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: Can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble? The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. Five mRNA genomic and five proteomic classifiers were developed independently using single time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. Performance of ensembles is characterized by area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. The Vote Threshold aggregation method achieved improved sensitivity for all 5 ensembles, but typically at the cost of decreased specificity. Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway.

  9. Conversion events in gene clusters

    PubMed Central

    2011-01-01

    Background Gene clusters containing multiple similar genomic regions in close proximity are of great interest for biomedical studies because of their associations with inherited diseases. However, such regions are difficult to analyze due to their structural complexity and their complicated evolutionary histories, reflecting a variety of large-scale mutational events. In particular, conversion events can mislead inferences about the relationships among these regions, as traced by traditional methods such as construction of phylogenetic trees or multi-species alignments. Results To correct the distorted information generated by such methods, we have developed an automated pipeline called CHAP (Cluster History Analysis Package) for detecting conversion events. We used this pipeline to analyze the conversion events that affected two well-studied gene clusters (α-globin and β-globin) and three gene clusters for which comparative sequence data were generated from seven primate species: CCL (chemokine ligand), IFN (interferon), and CYP2abf (part of cytochrome P450 family 2). CHAP is freely available at http://www.bx.psu.edu/miller_lab. Conclusions These studies reveal the value of characterizing conversion events in the context of studying gene clusters in complex genomes. PMID:21798034

  10. OTG-snpcaller: An Optimized Pipeline Based on TMAP and GATK for SNP Calling from Ion Torrent Data

    PubMed Central

    Huang, Wenpan; Xi, Feng; Lin, Lin; Zhi, Qihuan; Zhang, Wenwei; Tang, Y. Tom; Geng, Chunyu; Lu, Zhiyuan; Xu, Xun

    2014-01-01

    Because the new Proton platform from Life Technologies produced markedly different data from those of the Illumina platform, the conventional Illumina data analysis pipeline could not be used directly. We developed an optimized SNP calling method using TMAP and GATK (OTG-snpcaller). This method combined our own optimized processes, Remove Duplicates According to AS Tag (RDAST) and Alignment Optimize Structure (AOS), together with TMAP and GATK, to call SNPs from Proton data. We sequenced four sets of exomes captured by Agilent SureSelect and NimbleGen SeqCap EZ Kit, using Life Technology’s Ion Proton sequencer. Then we applied OTG-snpcaller and compared our results with the results from Torrent Variants Caller. The results indicated that OTG-snpcaller can reduce both false positive and false negative rates. Moreover, we compared our results with Illumina results generated by GATK best practices, and we found that the results of these two platforms were comparable. The good performance in variant calling using GATK best practices can be primarily attributed to the high quality of the Illumina sequences. PMID:24824529

  11. OTG-snpcaller: an optimized pipeline based on TMAP and GATK for SNP calling from ion torrent data.

    PubMed

    Zhu, Pengyuan; He, Lingyu; Li, Yaqiao; Huang, Wenpan; Xi, Feng; Lin, Lin; Zhi, Qihuan; Zhang, Wenwei; Tang, Y Tom; Geng, Chunyu; Lu, Zhiyuan; Xu, Xun

    2014-01-01

    Because the new Proton platform from Life Technologies produced markedly different data from those of the Illumina platform, the conventional Illumina data analysis pipeline could not be used directly. We developed an optimized SNP calling method using TMAP and GATK (OTG-snpcaller). This method combined our own optimized processes, Remove Duplicates According to AS Tag (RDAST) and Alignment Optimize Structure (AOS), together with TMAP and GATK, to call SNPs from Proton data. We sequenced four sets of exomes captured by Agilent SureSelect and NimbleGen SeqCap EZ Kit, using Life Technology's Ion Proton sequencer. Then we applied OTG-snpcaller and compared our results with the results from Torrent Variants Caller. The results indicated that OTG-snpcaller can reduce both false positive and false negative rates. Moreover, we compared our results with Illumina results generated by GATK best practices, and we found that the results of these two platforms were comparable. The good performance in variant calling using GATK best practices can be primarily attributed to the high quality of the Illumina sequences.

  12. An Optimization-Driven Analysis Pipeline to Uncover Biomarkers and Signaling Paths: Cervix Cancer.

    PubMed

    Lorenzo, Enery; Camacho-Caceres, Katia; Ropelewski, Alexander J; Rosas, Juan; Ortiz-Mojer, Michael; Perez-Marty, Lynn; Irizarry, Juan; Gonzalez, Valerie; Rodríguez, Jesús A; Cabrera-Rios, Mauricio; Isaza, Clara

    2015-06-01

    Establishing how a series of potentially important genes might relate to each other is relevant to understand the origin and evolution of illnesses, such as cancer. High-throughput biological experiments have played a critical role in providing information in this regard. A special challenge, however, is that of trying to conciliate information from separate microarray experiments to build a potential genetic signaling path. This work proposes a two-step analysis pipeline, based on optimization, to approach meta-analysis aiming to build a proxy for a genetic signaling path.

  13. Aging Research Using Mouse Models

    PubMed Central

    Ackert-Bicknell, Cheryl L.; Anderson, Laura; Sheehan, Susan; Hill, Warren G.; Chang, Bo; Churchill, Gary A.; Chesler, Elissa J.; Korstanje, Ron; Peters, Luanne L.

    2015-01-01

    Despite the dramatic increase in human lifespan over the past century, there remains pronounced variability in “health-span”, or the period of time in which one is generally healthy and free of disease. Much of the variability in health-span and lifespan is thought to be genetic in origin. Understanding the genetic mechanisms of aging and identifying ways to boost longevity is a primary goal in aging research. Here, we describe a pipeline of phenotypic assays for assessing mouse models of aging. This pipeline includes behavior/cognition testing, body composition analysis, and tests of kidney function, hematopoiesis, immune function and physical parameters. We also describe study design methods for assessing lifespan and health-span, and other important considerations when conducting aging research in the laboratory mouse. The tools and assays provided can assist researchers with understanding the correlative relationships between age-associated phenotypes and, ultimately, the role of specific genes in the aging process. PMID:26069080

  14. STEREO TRansiting Exoplanet and Stellar Survey (STRESS) - I. Introduction and data pipeline

    NASA Astrophysics Data System (ADS)

    Sangaralingam, Vinothini; Stevens, Ian R.

    2011-12-01

    The Solar TErrestrial RElations Observatory (STEREO) is a system of two identical spacecraft in heliocentric Earth orbit. We use the two heliospheric imagers (HI), which are wide-angle imagers with multibaffle systems, to perform high-precision stellar photometry in order to search for exoplanetary transits and understand stellar variables. The large cadence (40 min for HI-1 and 2 h for HI-2), high precision, wide magnitude range (R mag: 4-12) and broad sky coverage (nearly 20 per cent for HI-1A alone and 60 per cent of the sky in the zodiacal region for all instruments combined) of this instrument place it in a region left largely devoid by other current projects. In this paper, we describe the semi-automated pipeline devised for reduction of the data, some of the interesting characteristics of the data obtained and data-analysis methods used, along with some early results.

  15. Anaconda: AN automated pipeline for somatic COpy Number variation Detection and Annotation from tumor exome sequencing data.

    PubMed

    Gao, Jianing; Wan, Changlin; Zhang, Huan; Li, Ao; Zang, Qiguang; Ban, Rongjun; Ali, Asim; Yu, Zhenghua; Shi, Qinghua; Jiang, Xiaohua; Zhang, Yuanwei

    2017-10-03

    Copy number variations (CNVs) are the main genetic structural variations in the cancer genome. Detecting CNVs in exome regions is efficient and cost-effective for identifying cancer associated genes. Many tools have been developed accordingly, and yet these tools lack reliability because of a high false-negative rate, which is intrinsically caused by the exonic bias of genome coverage. To provide an alternative option, we report here Anaconda, a comprehensive pipeline that allows flexible integration of multiple CNV-calling methods and systematic annotation of CNVs in analyzing WES data. With just one command, Anaconda can generate CNV detection results from up to four CNV detection tools. Coupled with comprehensive annotation analysis of genes involved in shared CNV regions, Anaconda is able to deliver a more reliable and useful report in support of CNV-associated cancer research. The Anaconda package and manual can be freely accessed at http://mcg.ustc.edu.cn/bsc/ANACONDA/.

  16. A graph-based approach for designing extensible pipelines

    PubMed Central

    2012-01-01

    Background In bioinformatics, it is important to build extensible and low-maintenance systems that are able to deal with the new tools and data formats that are constantly being developed. The traditional and simplest implementation of pipelines involves hardcoding the execution steps into programs or scripts. This approach can lead to problems as a pipeline expands, because the incorporation of new tools is often error-prone and time-consuming. Current approaches to pipeline development such as workflow management systems focus on analysis tasks that are systematically repeated without significant changes in their course of execution, such as genome annotation. However, more dynamism in pipeline composition is necessary when each execution requires a different combination of steps. Results We propose a graph-based approach to implement extensible and low-maintenance pipelines that is suitable for pipeline applications with multiple functionalities that require different combinations of steps in each execution. Here pipelines are composed automatically by compiling a specialised set of tools on demand, depending on the functionality required, instead of specifying every sequence of tools in advance. We represent the connectivity of pipeline components with a directed graph in which components are the graph edges, their inputs and outputs are the graph nodes, and the paths through the graph are pipelines. To that end, we developed special data structures and a pipeline system algorithm. We demonstrate the applicability of our approach by implementing a format conversion pipeline for the fields of population genetics and genetic epidemiology, but our approach is also helpful in other fields where multiple software tools must be combined to perform comprehensive analyses, such as gene expression and proteomics analyses. The project code, documentation and the Java executables are available under an open source license at http://code.google.com/p/dynamic-pipeline. The system has been tested on Linux and Windows platforms. Conclusions Our graph-based approach enables the automatic creation of pipelines by compiling a specialised set of tools on demand, depending on the functionality required. It also allows the implementation of extensible and low-maintenance pipelines and contributes towards consolidating openness and collaboration in bioinformatics systems. It is targeted at pipeline developers and is suited for implementing applications with sequential execution steps and combined functionalities. In the format conversion application, the automatic combination of conversion tools increased both the number of possible conversions available to the user and the extensibility of the system to allow for future updates with new file formats. PMID:22788675
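
    The graph representation described above is straightforward to prototype. The sketch below (a toy under assumed tool and format names, not the project's Java implementation) models formats as nodes and tools as directed edges, and composes the shortest pipeline between two formats with a breadth-first search:

    # Formats are nodes, tools are directed edges, a pipeline is a path.
    from collections import deque
    from typing import List, Optional

    # Hypothetical tool registry: (input_format, output_format, tool_name)
    TOOLS = [
        ("vcf", "ped", "vcf2ped"),
        ("ped", "bed", "ped2bed"),
        ("vcf", "fasta", "vcf2fasta"),
        ("bed", "fasta", "bed2fasta"),
    ]

    def compose_pipeline(src: str, dst: str) -> Optional[List[str]]:
        """Return the shortest chain of tools converting src -> dst."""
        graph = {}
        for fin, fout, name in TOOLS:
            graph.setdefault(fin, []).append((fout, name))
        queue, seen = deque([(src, [])]), {src}
        while queue:
            fmt, path = queue.popleft()
            if fmt == dst:
                return path                     # sequence of tool names
            for nxt, tool in graph.get(fmt, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [tool]))
        return None                             # no conversion path exists

    print(compose_pipeline("vcf", "fasta"))     # ['vcf2fasta']
    print(compose_pipeline("ped", "fasta"))     # ['ped2bed', 'bed2fasta']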

  17. Image-Based Single Cell Profiling: High-Throughput Processing of Mother Machine Experiments

    PubMed Central

    Sachs, Christian Carsten; Grünberger, Alexander; Helfrich, Stefan; Probst, Christopher; Wiechert, Wolfgang; Kohlheyer, Dietrich; Nöh, Katharina

    2016-01-01

    Background Microfluidic lab-on-chip technology combined with live-cell imaging has enabled the observation of single cells in their spatio-temporal context. The mother machine (MM) cultivation system is particularly attractive for the long-term investigation of rod-shaped bacteria, since it facilitates continuous cultivation and observation of individual cells over many generations in a highly parallelized manner. To date, the lack of fully automated image analysis software limits the practical applicability of the MM as a phenotypic screening tool. Results We present an image analysis pipeline for the automated processing of MM time-lapse image stacks. The pipeline supports all analysis steps, i.e., image registration, orientation correction, channel/cell detection, cell tracking, and result visualization. Tailored algorithms account for the specialized MM layout to enable a robust automated analysis. Image data generated in a two-day growth study (≈ 90 GB) is analyzed in ≈ 30 min, with negligible differences in growth rate between automated and manual evaluation. The proposed methods are implemented in the software molyso (MOther machine anaLYsis SOftware), which provides a new profiling tool for the unbiased analysis of hitherto inaccessible large-scale MM image stacks. Conclusion Presented is molyso, a ready-to-use open-source software (BSD-licensed) for the unsupervised analysis of MM time-lapse image stacks. The molyso source code and user manual are available at https://github.com/modsim/molyso. PMID:27661996
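
    The first pipeline step, image registration, can be illustrated compactly. The sketch below is not molyso's own code; it assumes a (T, Y, X) NumPy stack and uses scikit-image's phase cross-correlation to estimate each frame's drift against the first frame before shifting it back:

    # Illustrative registration step for a time-lapse stack.
    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from skimage.registration import phase_cross_correlation

    def register_stack(stack: np.ndarray) -> np.ndarray:
        """Align a (T, Y, X) time-lapse stack to its first frame."""
        reference = stack[0]
        aligned = [reference]
        for frame in stack[1:]:
            # Sub-pixel shift that registers `frame` onto `reference`.
            drift, _error, _phase = phase_cross_correlation(
                reference, frame, upsample_factor=10)
            aligned.append(nd_shift(frame, drift, order=1, mode="nearest"))
        return np.stack(aligned)

    # Synthetic smoke test: the estimated drift undoes a known roll.
    rng = np.random.default_rng(0)
    base = rng.random((64, 64))
    moved = np.roll(base, (3, -2), axis=(0, 1))
    registered = register_stack(np.stack([base, moved]))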

  18. Large-scale retrieval for medical image analytics: A comprehensive review.

    PubMed

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

    Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, with huge amounts of medical images being produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling such huge amounts of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, covering a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
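
    The indexing and searching stages of such a retrieval pipeline fit in a few lines. The sketch below runs on made-up random descriptors (production systems would extract learned features and use approximate indexes for billions of images), building a k-nearest-neighbor index and querying it:

    # Toy retrieval core: index feature vectors, search by similarity.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(42)
    database_features = rng.random((10_000, 128)).astype(np.float32)

    # Indexing: build the k-NN structure once, offline.
    index = NearestNeighbors(n_neighbors=5, metric="euclidean")
    index.fit(database_features)

    # Searching: retrieve the 5 most similar database images for a query.
    query = rng.random((1, 128)).astype(np.float32)
    distances, image_ids = index.kneighbors(query)
    print(image_ids[0])  # indices of the top-5 matches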

  19. The Snapshot A Star SurveY (SASSY)

    NASA Astrophysics Data System (ADS)

    Garani, Jasmine I.; Nielsen, Eric; Marchis, Franck; Liu, Michael C.; Macintosh, Bruce; Rajan, Abhijith; De Rosa, Robert J.; Wang, Jason Jinfei; Esposito, Thomas M.; Best, William M. J.; Bowler, Brendan; Dupuy, Trent; Ruffio, Jean-Baptiste

    2018-01-01

    The Snapshot A Star Survey (SASSY) is an adaptive optics survey conducted using NIRC2 on the Keck II telescope to search for young, self-luminous planets and brown dwarfs (M > 5MJup) around high-mass stars (M > 1.5 M⊙). We present the results of a custom data reduction pipeline developed for the coronagraphic observations of our 200 target stars. Our data analysis method includes basic near-infrared data processing (flat-field correction, bad pixel removal, distortion correction) as well as PSF subtraction through a Reference Differential Imaging algorithm, based on a library of PSFs derived from the observations, using the pyKLIP routine. We present pipeline results for a few stars from the survey, with analysis of candidate companions. SASSY is sensitive to companions 600,000 times fainter than the host star within the inner few arcseconds, allowing us to detect companions with masses ~8MJup at an age of 110 Myr. This work was supported by the Leadership Alliance's Summer Research Early Identification Program at Stanford University, the NSF REU program at the SETI Institute and NASA grant NNX14AJ80G.
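
    The essence of reference differential imaging can be shown in miniature. The sketch below is a simplified least-squares variant (the survey itself performs PSF subtraction with pyKLIP's KLIP algorithm): the stellar PSF is modeled as a linear combination of reference frames and subtracted, so a faint, spatially offset companion survives in the residual:

    # Simplified RDI: fit and subtract a PSF model built from references.
    import numpy as np

    def rdi_subtract(science: np.ndarray, references: np.ndarray) -> np.ndarray:
        """science: (Y, X) frame; references: (N, Y, X) PSF library."""
        R = references.reshape(references.shape[0], -1).T  # pixels x N
        s = science.ravel()
        coeffs, *_ = np.linalg.lstsq(R, s, rcond=None)
        return (s - R @ coeffs).reshape(science.shape)

    # Synthetic example: star plus faint companion, pure-star references.
    yy, xx = np.mgrid[0:64, 0:64]
    star = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 18.0)
    companion = 1e-3 * np.exp(-((yy - 45) ** 2 + (xx - 20) ** 2) / 4.0)
    rng = np.random.default_rng(1)
    refs = np.stack([star * (1 + 0.01 * rng.standard_normal())
                     for _ in range(10)])
    residual = rdi_subtract(star + companion, refs)  # companion remains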

  20. The Physician Pipeline to Rural and Underserved Areas in Pennsylvania

    ERIC Educational Resources Information Center

    Schwartz, Myron R.

    2008-01-01

    Context: An implicit objective of a state's investments in medical education is to promote in-state practice of state educated physicians. Purpose: To present a tool for evaluating this objective by analyzing the "pipeline" from medical education to patient care, primary care, rural areas, and underserved areas in Pennsylvania. Methods:…

  1. Supersonic air jets preserve tree roots in underground pipeline installation

    Treesearch

    Rob Gross; Michelle Julene

    2002-01-01

    Tree roots are often damaged during construction projects, particularly during trenching operations for pipeline installation. Although mechanical soil excavation using heavy equipment such as an excavator or backhoe is considered the fastest and most economical method, it damages and destroys tree roots and can lead to unintentional tree loss, poor public relations,...

  2. Component-based control of oil-gas-water mixture composition in pipelines

    NASA Astrophysics Data System (ADS)

    Voytyuk, I. N.

    2018-03-01

    The article provides a theoretical justification of a method for measuring changes in the oil, gas and water content of the mixture transported in pipelines, and discusses the design of a measurement system implementing it. Random and systematic errors of the future system are assessed, and recommendations for its optimization are presented.

  3. Innovative Sensors for Pipeline Crawlers: Rotating Permanent Magnet Inspection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Bruce Nestleroth; Richard J. Davis; Stephanie Flamberg

    2006-09-30

    Internal inspection of pipelines is an important tool for ensuring safe and reliable delivery of fossil energy products. Current inspection systems that are propelled through the pipeline by the product flow cannot be used to inspect all pipelines because of the various physical barriers they may encounter. To facilitate inspection of these "unpiggable" pipelines, recent inspection development efforts have focused on a new generation of powered inspection platforms that are able to crawl slowly inside a pipeline and can maneuver past the physical barriers that limit internal inspection applicability, such as bore restrictions, low product flow rate, and low pressure. The first step in this research was to review existing inspection technologies for applicability and compatibility with crawler systems. Most existing inspection technologies, including magnetic flux leakage and ultrasonic methods, had significant implementation limitations, including mass, physical size, inspection energy coupling requirements and technology maturity. The remote field technique was the most promising, but its power consumption was high and anomaly signals were low, requiring sensitive detectors and electronics. After reviewing each inspection technology, it was decided to investigate the potential for a new inspection method. The new inspection method takes advantage of advances in permanent magnet strength, along with their wide availability and low cost. Called rotating permanent magnet inspection (RPMI), this patent-pending technology employs pairs of permanent magnets rotating around the central axis of a cylinder to induce high current densities in the material under inspection. Anomalies and wall thickness variations are detected with an array of sensors that measure local changes in the magnetic field produced by the induced current flowing in the material. This inspection method is an alternative to the common concentric-coil remote field technique that induces low-frequency eddy currents in ferromagnetic pipes and tubes. Since this is a new inspection method, both theory and experiment were used to determine fundamental capabilities and limitations. Finite element modeling and experimental investigations performed during this development led to the derivation of a first-order analytical equation for designing rotating magnetizers to induce current and for positioning sensors to record signals from anomalies. Experimental results confirm the analytical equation, and the finite element calculations provide a firm basis for the design of RPMI systems. Experiments have shown that metal loss anomalies and wall thickness variations can be detected with an array of sensors that measure local changes in the magnetic field produced by the induced current flowing in the material. The design exploits the phenomenon that circumferential currents are easily detectable at distances well away from the magnets. Current changes at anomalies were detectable with commercial low-cost Hall-effect sensors. Commercial analog-to-digital converters can be used to measure the sensor output, and data analysis can be performed in real time using PC computer systems. The technology was successfully demonstrated during two blind benchmark tests in which numerous metal loss defects were detected. For this inspection technology, the detection threshold is a function of wall thickness and corrosion depth. For thinner materials, the detection threshold was experimentally shown to be comparable to magnetic flux leakage. For wall thicknesses greater than three tenths of an inch, the detection threshold increases with wall thickness. The potential for metal loss anomaly sizing was demonstrated in the second benchmarking study, again with accuracy comparable to existing magnetic flux leakage technologies. The rotating permanent magnet system has the potential for inspecting unpiggable pipelines, since the magnetizer configurations can be sufficiently small with respect to the bore of the pipe to pass obstructions that limit the application of many inspection technologies. Also, since the largest dimension of the Hall-effect sensor is two tenths of an inch, the sensor packages can be small, flexible and light. The power consumption, on the order of ten watts, is low compared to some inspection systems; this would enable autonomous systems to inspect longer distances between charges. This project showed there are no technical barriers to building a field-ready unit that can pass through narrow obstructions, such as plug valves. The next step in project implementation is to build a field-ready unit that can begin to establish optimal performance capabilities, including detection thresholds, sizing capability, and wall thickness limitations.

  4. Influence of a source line position on results of EM observations applied to the diagnostics of underground heating system pipelines in urban area

    NASA Astrophysics Data System (ADS)

    Vetrov, A.

    2009-05-01

    The condition of underground structures, communication and supply systems in cities has to be monitored periodically in order to prevent breakage, which can result in serious accidents, especially in urban areas. Underground structures made of steel, such as the pipelines widely used for water, gas and heat supply, are at the greatest risk of damage. To ensure pipeline survivability it is necessary to carry out prompt and inexpensive monitoring of pipeline condition, and induced electromagnetic methods of geophysics can be applied to provide such diagnostics. The highly developed surface in urban areas is one of the factors hampering the application of electromagnetic diagnostic methods. The main problem lies in finding an appropriate place for the source line and electrodes on a limited surface area, and in choosing their optimal position relative to the observation path so as to minimize their influence on the observed data. The author carried out a number of diagnostic experiments on an underground heating-system pipeline using different positions of the source line and electrodes. The experiments were performed on a 200-meter section above a pipeline buried 2 meters deep, and the admissible length of the source line and the angle between the source line and the observation path were determined: for the given experimental conditions and accuracy, the minimum source line length was 30 meters and the maximum admissible angular departure from the perpendicular position was 30 degrees. The work was undertaken in cooperation with the diagnostics company DIsSO, Saint-Petersburg, Russia.

  5. Benchmark datasets for phylogenomic pipeline validation, applications for foodborne pathogen surveillance.

    PubMed

    Timme, Ruth E; Rand, Hugh; Shumway, Martin; Trees, Eija K; Simmons, Mustafa; Agarwala, Richa; Davis, Steven; Tillman, Glenn E; Defibaugh-Chavez, Stephanie; Carleton, Heather A; Klimke, William A; Katz, Lee S

    2017-01-01

    As next-generation sequencing technology has advanced, there have been parallel advances in genome-scale analysis programs for determining evolutionary relationships as proxies for epidemiological relationships in public health. Most new programs skip traditional steps of ortholog determination and multi-gene alignment, instead identifying variants across a set of genomes, then summarizing results in a matrix of single-nucleotide polymorphisms or alleles for standard phylogenetic analysis. However, public health authorities need to document the performance of these methods with appropriate and comprehensive datasets so they can be validated for specific purposes, e.g., outbreak surveillance. Here we propose a set of benchmark datasets to be used for comparison and validation of phylogenomic pipelines. We identified four well-documented foodborne pathogen events in which the epidemiology was concordant with routine phylogenomic analyses (reference-based SNP and wgMLST approaches). These are ideal benchmark datasets, as the trees, WGS data, and epidemiological data for each are all in agreement. We have placed these sequence data, sample metadata, and "known" phylogenetic trees in publicly-accessible databases and developed a standard descriptive spreadsheet format describing each dataset. To facilitate easy downloading of these benchmarks, we developed an automated script that uses the standard descriptive spreadsheet format. Our "outbreak" benchmark datasets represent the four major foodborne bacterial pathogens (Listeria monocytogenes, Salmonella enterica, Escherichia coli, and Campylobacter jejuni) and one simulated dataset where the "known tree" can be accurately called the "true tree". The downloading script and associated table files are available on GitHub: https://github.com/WGS-standards-and-analysis/datasets. These five benchmark datasets will help standardize comparison of current and future phylogenomic pipelines, and facilitate important cross-institutional collaborations. Our work is part of a global effort to provide collaborative infrastructure for sequence data and analytic tools; we welcome additional benchmark datasets in our recommended format, and, if relevant, we will add these on our GitHub site. Together, these datasets, dataset format, and the underlying GitHub infrastructure present a recommended path for worldwide standardization of phylogenomic pipelines.
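
    The automated download step could look roughly like the sketch below. This is hedged heavily: the real script and spreadsheet format are the ones on the GitHub site above, and the file name and the `biosample_acc`/`sra_acc` column names here are hypothetical. Each SRA run accession is handed to sra-tools' prefetch:

    # Hypothetical downloader driven by a tab-delimited dataset table.
    import csv
    import subprocess

    def download_dataset(table_path: str) -> None:
        with open(table_path, newline="") as handle:
            for row in csv.DictReader(handle, delimiter="\t"):
                run = row["sra_acc"]           # assumed column name
                print(f"fetching {row['biosample_acc']} ({run})")
                subprocess.run(["prefetch", run], check=True)

    # download_dataset("outbreak_listeria.tsv")  # hypothetical file name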

  6. Magnetic pipeline for coal and oil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knolle, E.

    1998-07-01

    A 1994 analysis of the recorded costs of the Alaska oil pipeline, in a paper entitled Maglev Crude Oil Pipeline (NASA CP-3247, pp. 671-684), concluded that, had the Knolle Magnetrans pipeline technology been available and used, some $10 million per day in transportation costs could have been saved over the 20 years of the Alaska oil pipeline's existence. This over-800-mile-long pipeline requires about 500 horsepower per mile in pumping power, which, together with the cost of the pipeline's capital investment, consumes about one-third of the energy value of the pumped oil. This does not include the cost of getting the oil out of the ground. The reason maglev technology performs better than conventional pipelines is that, by magnetically levitating the oil into contact-free suspension, there is no drag-causing adhesion. In addition, by using permanent magnets in repulsion, suspension is achieved without using energy. The pumped oil's adhesion to the inside of pipes also limits its speed. In the case of the Alaska pipeline the speed is limited to about 7 miles per hour, which, with its 48-inch pipe diameter and 1200 psi pressure, pumps about 2 million barrels per day. The maglev system, as developed by Knolle Magnetrans, would transport oil in magnetically suspended sealed containers and, thus free of adhesion, at speeds 10 to 20 times faster. Furthermore, the diameter of the levitated containers can be made smaller with the same capacity, which makes the construction of the maglev system light and inexpensive. There are similar advantages when using maglev technology to transport coal. A maglev system also has advantages over railroads in the mountainous regions where coal is primarily mined: a maglev pipeline can travel, all year and in all weather, in a straight line to the end-user and can climb over steep hills without much difficulty, whereas railroads must follow difficult circuitous routes.

  7. Development of a Dmt Monitor for Statistical Tracking of Gravitational-Wave Burst Triggers Generated from the Omega Pipeline

    NASA Astrophysics Data System (ADS)

    Li, Jun-Wei; Cao, Jun-Wei

    2010-04-01

    One challenge in large-scale scientific data analysis is to monitor data in real time in a distributed environment. For the LIGO (Laser Interferometer Gravitational-wave Observatory) project, a dedicated suite of data monitoring tools (DMT) has been developed, yielding good extensibility to new data types and high flexibility in a distributed environment. Several services are provided, including visualization of data information in various forms and file output of monitoring results. In this work, a DMT monitor, OmegaMon, is developed for tracking statistics of gravitational-wave (GW) burst triggers that are generated from a specific GW burst data analysis pipeline, the Omega Pipeline. Such results can provide diagnostic information as a reference for trigger post-processing and interferometer maintenance.
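
    A representative statistic for such a monitor is the trigger rate over a sliding time window. The sketch below is illustrative only (it is not the DMT or OmegaMon code) and maintains that rate as triggers arrive:

    # Running burst-trigger rate over the most recent time window.
    from collections import deque

    class TriggerRateMonitor:
        """Trigger rate over the last `window` seconds."""

        def __init__(self, window: float = 60.0):
            self.window = window
            self.times = deque()       # GPS times of recent triggers

        def add(self, gps_time: float) -> float:
            self.times.append(gps_time)
            # Drop triggers that have fallen out of the window.
            while self.times and gps_time - self.times[0] > self.window:
                self.times.popleft()
            return len(self.times) / self.window  # triggers per second

    monitor = TriggerRateMonitor(window=60.0)
    for t in [0.0, 1.5, 2.0, 30.0, 100.0]:        # toy trigger times
        print(f"t={t:6.1f} s  rate={monitor.add(t):.3f} Hz")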

  8. The standard operating procedure of the DOE-JGI Metagenome Annotation Pipeline (MAP v.4)

    DOE PAGES

    Huntemann, Marcel; Ivanova, Natalia N.; Mavromatis, Konstantinos; ...

    2016-02-24

    The DOE-JGI Metagenome Annotation Pipeline (MAP v.4) performs structural and functional annotation for metagenomic sequences that are submitted to the Integrated Microbial Genomes with Microbiomes (IMG/M) system for comparative analysis. The pipeline runs on nucleotide sequences provided via the IMG submission site. Users must first define their analysis projects in GOLD and then submit the associated sequence datasets consisting of scaffolds/contigs with optional coverage information and/or unassembled reads in fasta and fastq file formats. The MAP processing consists of feature prediction including identification of protein-coding genes, non-coding RNAs and regulatory RNAs, as well as CRISPR elements. Structural annotation is followed by functional annotation including assignment of protein product names and connection to various protein family databases.

  9. The standard operating procedure of the DOE-JGI Metagenome Annotation Pipeline (MAP v.4)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huntemann, Marcel; Ivanova, Natalia N.; Mavromatis, Konstantinos

    The DOE-JGI Metagenome Annotation Pipeline (MAP v.4) performs structural and functional annotation for metagenomic sequences that are submitted to the Integrated Microbial Genomes with Microbiomes (IMG/M) system for comparative analysis. The pipeline runs on nucleotide sequences provided via the IMG submission site. Users must first define their analysis projects in GOLD and then submit the associated sequence datasets consisting of scaffolds/contigs with optional coverage information and/or unassembled reads in fasta and fastq file formats. The MAP processing consists of feature prediction including identification of protein-coding genes, non-coding RNAs and regulatory RNAs, as well as CRISPR elements. Structural annotation is followed by functional annotation including assignment of protein product names and connection to various protein family databases.

  10. Influence of ultrasound speckle tracking strategies for motion and strain estimation.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Aja-Fernández, Santiago

    2016-08-01

    Speckle Tracking is one of the most prominent techniques used to estimate the regional movement of the heart based on ultrasound acquisitions. Many different approaches have been proposed, proving their suitability to obtain quantitative and qualitative information regarding myocardial deformation, motion and function assessment. New proposals to improve the basic algorithm usually focus on one of these three steps: (1) the similarity measure between images and the speckle model; (2) the transformation model, i.e. the type of motion considered between images; (3) the optimization strategies, such as the use of different optimization techniques in the transformation step or the inclusion of structural information. While many contributions have shown good performance independently, it is not always clear how they perform when integrated in a whole pipeline. Every step has a degree of influence over the following ones and hence over the final result. Thus, a Speckle Tracking pipeline must be analyzed as a whole when developing novel methods, since improvements in a particular step might be undermined by the choices taken in subsequent steps. This work presents two main contributions: (1) we provide a complete analysis of the influence of the different steps in a Speckle Tracking pipeline on the accuracy of motion and strain estimation; (2) we propose a methodology for the analysis of Speckle Tracking systems specifically designed to provide an easy and systematic way to include other strategies. We close the analysis with some conclusions and recommendations that can serve as an orientation to the degree of influence of the speckle models, transformation models, interpolation schemes and optimization strategies on the estimation of motion features. They can further be used to evaluate and design new strategies for a Speckle Tracking system. Copyright © 2016 Elsevier B.V. All rights reserved.
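
    Step (1), the similarity measure, can be illustrated with a common baseline choice: block matching under normalized cross-correlation (NCC). The following toy sketch (not the study's code) recovers a known displacement between two frames:

    # Block matching with NCC as the similarity measure.
    import numpy as np

    def ncc(a: np.ndarray, b: np.ndarray) -> float:
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom else 0.0

    def match_block(frame0, frame1, y, x, block=8, search=4):
        """Displacement of the block at (y, x) in frame0 within frame1."""
        ref = frame0[y:y + block, x:x + block]
        best, best_dy, best_dx = -2.0, 0, 0
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = frame1[y + dy:y + dy + block, x + dx:x + dx + block]
                if cand.shape != ref.shape:
                    continue                   # candidate fell off the image
                score = ncc(ref, cand)
                if score > best:
                    best, best_dy, best_dx = score, dy, dx
        return best_dy, best_dx

    # Toy test: frame1 is frame0 shifted down by 2 and right by 1 pixel.
    rng = np.random.default_rng(3)
    frame0 = rng.random((64, 64))
    frame1 = np.roll(frame0, (2, 1), axis=(0, 1))
    print(match_block(frame0, frame1, y=20, x=20))  # expected: (2, 1)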

  11. UGbS-Flex, a novel bioinformatics pipeline for imputation-free SNP discovery in polyploids without a reference genome: finger millet as a case study.

    PubMed

    Qi, Peng; Gimode, Davis; Saha, Dipnarayan; Schröder, Stephan; Chakraborty, Debkanta; Wang, Xuewen; Dida, Mathews M; Malmberg, Russell L; Devos, Katrien M

    2018-06-15

    Research on orphan crops is often hindered by a lack of genomic resources. With the advent of affordable sequencing technologies, genotyping an entire genome or, for large-genome species, a representative fraction of the genome has become feasible for any crop. Nevertheless, most genotyping-by-sequencing (GBS) methods are geared towards obtaining large numbers of markers at low sequence depth, which excludes their application in heterozygous individuals. Furthermore, bioinformatics pipelines often lack the flexibility to deal with paired-end reads or to be applied in polyploid species. UGbS-Flex combines publicly available software with in-house python and perl scripts to efficiently call SNPs from genotyping-by-sequencing reads irrespective of the species' ploidy level, breeding system and availability of a reference genome. Noteworthy features of the UGbS-Flex pipeline are the ability to use paired-end reads as input, an effective approach to cluster reads across samples with enhanced outputs, and maximization of SNP calling. We demonstrate use of the pipeline for the identification of several thousand high-confidence SNPs with high representation across samples in an F3-derived F2 population in the allotetraploid finger millet. Robust high-density genetic maps were constructed using the time-tested mapping program MAPMAKER, which we upgraded to run efficiently and in a semi-automated manner in a Windows Command Prompt environment. We exploited comparative GBS with one of the diploid ancestors of finger millet to assign linkage groups to subgenomes and demonstrate the presence of chromosomal rearrangements. The paper combines GBS protocol modifications, a novel flexible GBS analysis pipeline (UGbS-Flex), recommendations to maximize SNP identification, updated genetic mapping software, and the first high-density maps of finger millet. The modules used in the UGbS-Flex pipeline and for genetic mapping were applied to finger millet, an allotetraploid selfing species without a reference genome, as a case study. The UGbS-Flex modules, which can be run independently, are easily transferable to species with other breeding systems or ploidy levels.

  12. Complex solution of problem of all-season construction of roads and pipelines on universal composite pontoon units

    NASA Astrophysics Data System (ADS)

    Ryabkov, A. V.; Stafeeva, N. A.; Ivanov, V. A.; Zakuraev, A. F.

    2018-05-01

    A complex solution has been designed: a universal floating pontoon road on whose body pipelines can be laid automatically, all year round and in any weather, for Siberia and the Far North. A new method is proposed for the construction of pipelines on pontoon modules made of composite materials. Composite pontoons for bedding pipelines are designed with track-forming guides for automated wheeled transport and the pipelayer. The proposed system eliminates the need to build a road along the route and ensures the buoyancy and smooth movement of the self-propelled automated stacker in the form of a "centipede", which offers a number of significant advantages in the construction and operation of the entire complex in swampy and waterlogged areas without overburden works.

  13. Multiscale image analysis reveals structural heterogeneity of the cell microenvironment in homotypic spheroids.

    PubMed

    Schmitz, Alexander; Fischer, Sabine C; Mattheyer, Christian; Pampaloni, Francesco; Stelzer, Ernst H K

    2017-03-03

    Three-dimensional multicellular aggregates such as spheroids provide reliable in vitro substitutes for tissues. Quantitative characterization of spheroids at the cellular level is fundamental. We present the first pipeline that provides three-dimensional, high-quality images of intact spheroids at cellular resolution, together with a comprehensive image analysis that complements traditional image segmentation with algorithms from other fields. The pipeline combines light sheet-based fluorescence microscopy of optically cleared spheroids with automated nuclei segmentation (F score: 0.88) and concepts from graph analysis and computational topology. Incorporating cell graphs and alpha shapes provided more than 30 features of individual nuclei, the cellular neighborhood and the spheroid morphology. The application of our pipeline to a set of breast carcinoma spheroids revealed two concentric layers of different cell density for more than 30,000 cells. The thickness of the outer cell layer depends on a spheroid's size and varies between 50% and 75% of its radius. In differently sized spheroids, we detected patches of different cell densities ranging from 5 × 10^5 to 1 × 10^6 cells/mm^3. Since cell density affects cell behavior in tissues, structural heterogeneities need to be incorporated into existing models. Our image analysis pipeline provides a multiscale approach to obtain the relevant data for a system-level understanding of tissue architecture.
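
    The cell-graph features mentioned above can be prototyped from nuclei centroids alone. The sketch below (illustrative, not the authors' pipeline) builds a Delaunay-based neighborhood graph over hypothetical centroids and reads off per-nucleus neighbor counts, a simple local-density feature:

    # Delaunay-based cell neighborhood graph from nuclei centroids.
    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(7)
    centroids = rng.random((500, 3)) * 100.0   # hypothetical nuclei (um)

    tri = Delaunay(centroids)
    neighbors = {i: set() for i in range(len(centroids))}
    for simplex in tri.simplices:              # tetrahedra in 3-D
        for i in simplex:
            neighbors[i].update(j for j in simplex if j != i)

    degrees = np.array([len(neighbors[i]) for i in range(len(centroids))])
    print(f"mean neighbors per nucleus: {degrees.mean():.1f}")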

  14. VIV analysis of pipelines under complex span conditions

    NASA Astrophysics Data System (ADS)

    Wang, James; Wang, F. Steven; Duan, Gang; Jukes, Paul

    2009-06-01

    Spans occur when a pipeline is laid on a rough undulating seabed or when upheaval buckling occurs due to constrained thermal expansion. This not only results in static and dynamic loads on the flowline at span sections, but also generates vortex induced vibration (VIV), which can lead to fatigue issues. The phenomenon, if not predicted and controlled properly, will negatively affect pipeline integrity, leading to expensive remediation and intervention work. Span analysis can be complicated by: long span lengths, a large number of spans caused by a rough seabed, and multi-span interactions. In addition, the complexity can be more onerous and challenging when soil uncertainty, concrete degradation and unknown residual lay tension are considered in the analysis. This paper describes the latest developments and a ‘state-of-the-art’ finite element analysis program that has been developed to simulate the span response of a flowline under complex boundary and loading conditions. Both VIV and direct wave loading are captured in the analysis and the results are sequentially used for the ultimate limit state (ULS) check and fatigue life calculation.
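
    A back-of-the-envelope version of the screening behind such VIV analyses compares the vortex-shedding frequency, f_s = St*U/D, with the span's first natural frequency; lock-in becomes a concern when the two approach each other. All numbers in the sketch below are illustrative assumptions, and real assessments follow recognized codes such as DNV-RP-F105:

    # Illustrative free-span VIV screening check (assumed values).
    import math

    D = 0.324    # pipe outer diameter (m)
    U = 0.8      # near-bed current speed (m/s)
    St = 0.2     # Strouhal number for a circular cylinder
    E = 207e9    # steel Young's modulus (Pa)
    I = 1.1e-4   # second moment of area (m^4)
    m = 150.0    # effective mass incl. contents and added mass (kg/m)
    L = 40.0     # span length (m)

    f_shedding = St * U / D                    # ~0.49 Hz for these values
    # First natural frequency of a pinned-pinned beam span.
    f_natural = (math.pi / 2.0) * math.sqrt(E * I / (m * L ** 4))

    print(f"shedding {f_shedding:.2f} Hz vs natural {f_natural:.2f} Hz")
    if 0.7 * f_natural < f_shedding < 1.3 * f_natural:
        print("within lock-in band: detailed VIV fatigue assessment required")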

  15. ReSeqTools: an integrated toolkit for large-scale next-generation sequencing based resequencing analysis.

    PubMed

    He, W; Zhao, S; Liu, X; Dong, S; Lv, J; Liu, D; Wang, J; Meng, Z

    2013-12-04

    Large-scale next-generation sequencing (NGS)-based resequencing detects sequence variations, constructs evolutionary histories, and identifies phenotype-related genotypes. However, NGS-based resequencing studies generate extraordinarily large amounts of data, making computations difficult. Effective use and analysis of these data for NGS-based resequencing studies remains a difficult task for individual researchers. Here, we introduce ReSeqTools, a full-featured toolkit for NGS (Illumina sequencing)-based resequencing analysis, which processes raw data, interprets mapping results, and identifies and annotates sequence variations. ReSeqTools provides abundant scalable functions for routine resequencing analysis in different modules to facilitate customization of the analysis pipeline. ReSeqTools is designed to use compressed data files as input or output to save storage space and facilitates faster and more computationally efficient large-scale resequencing studies in a user-friendly manner. It offers abundant practical functions and generates useful statistics during the analysis pipeline, which significantly simplifies resequencing analysis. Its integrated algorithms and abundant sub-functions provide a solid foundation for special demands in resequencing projects. Users can combine these functions to construct their own pipelines for other purposes.

  16. 4C-ker: A Method to Reproducibly Identify Genome-Wide Interactions Captured by 4C-Seq Experiments.

    PubMed

    Raviram, Ramya; Rocha, Pedro P; Müller, Christian L; Miraldi, Emily R; Badri, Sana; Fu, Yi; Swanzey, Emily; Proudhon, Charlotte; Snetkova, Valentina; Bonneau, Richard; Skok, Jane A

    2016-03-01

    4C-Seq has proven to be a powerful technique to identify genome-wide interactions with a single locus of interest (or "bait") that can be important for gene regulation. However, analysis of 4C-Seq data is complicated by the many biases inherent to the technique. An important consideration when dealing with 4C-Seq data is the differences in resolution of signal across the genome that result from differences in 3D distance separation from the bait. This leads to the highest signal in the region immediately surrounding the bait and increasingly lower signals in far-cis and trans. Another important aspect of 4C-Seq experiments is the resolution, which is greatly influenced by the choice of restriction enzyme and the frequency at which it can cut the genome. Thus, it is important that a 4C-Seq analysis method is flexible enough to analyze data generated using different enzymes and to identify interactions across the entire genome. Current methods for 4C-Seq analysis only identify interactions in regions near the bait or in regions located in far-cis and trans, but no method comprehensively analyzes 4C signals of different length scales. In addition, some methods also fail in experiments where chromatin fragments are generated using frequent-cutter restriction enzymes. Here, we describe 4C-ker, a Hidden Markov Model-based pipeline that identifies regions throughout the genome that interact with the 4C bait locus. In addition, we incorporate methods for the identification of differential interactions in multiple 4C-Seq datasets collected from different genotypes or experimental conditions. Adaptive window sizes are used to correct for differences in signal coverage in near-bait regions, far-cis and trans chromosomes. Using several datasets, we demonstrate that 4C-ker outperforms all existing 4C-Seq pipelines in its ability to reproducibly identify interaction domains at all genomic ranges with different resolution enzymes.

  17. 4C-ker: A Method to Reproducibly Identify Genome-Wide Interactions Captured by 4C-Seq Experiments

    PubMed Central

    Raviram, Ramya; Rocha, Pedro P.; Müller, Christian L.; Miraldi, Emily R.; Badri, Sana; Fu, Yi; Swanzey, Emily; Proudhon, Charlotte; Snetkova, Valentina

    2016-01-01

    4C-Seq has proven to be a powerful technique to identify genome-wide interactions with a single locus of interest (or “bait”) that can be important for gene regulation. However, analysis of 4C-Seq data is complicated by the many biases inherent to the technique. An important consideration when dealing with 4C-Seq data is the differences in resolution of signal across the genome that result from differences in 3D distance separation from the bait. This leads to the highest signal in the region immediately surrounding the bait and increasingly lower signals in far-cis and trans. Another important aspect of 4C-Seq experiments is the resolution, which is greatly influenced by the choice of restriction enzyme and the frequency at which it can cut the genome. Thus, it is important that a 4C-Seq analysis method is flexible enough to analyze data generated using different enzymes and to identify interactions across the entire genome. Current methods for 4C-Seq analysis only identify interactions in regions near the bait or in regions located in far-cis and trans, but no method comprehensively analyzes 4C signals of different length scales. In addition, some methods also fail in experiments where chromatin fragments are generated using frequent-cutter restriction enzymes. Here, we describe 4C-ker, a Hidden Markov Model-based pipeline that identifies regions throughout the genome that interact with the 4C bait locus. In addition, we incorporate methods for the identification of differential interactions in multiple 4C-Seq datasets collected from different genotypes or experimental conditions. Adaptive window sizes are used to correct for differences in signal coverage in near-bait regions, far-cis and trans chromosomes. Using several datasets, we demonstrate that 4C-ker outperforms all existing 4C-Seq pipelines in its ability to reproducibly identify interaction domains at all genomic ranges with different resolution enzymes. PMID:26938081

  18. Pressurizing the STEM Pipeline: An Expectancy-Value Theory Analysis of Youths' STEM Attitudes

    ERIC Educational Resources Information Center

    Ball, Christopher; Huang, Kuo-Ting; Cotten, Shelia R.; Rikard, R. V.

    2017-01-01

    Over the past decade, there has been a strong national push to increase minority students' positive attitudes towards STEM-related careers. However, despite this focus, minority students have remained underrepresented in these fields. Some researchers have directed their attention towards improving the STEM pipeline which carries students through…

  19. Landslide hazard analysis for pipelines: The case of the Simonette river crossing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grivas, D.A.; Schultz, B.C.; O'Neil, G.

    1995-12-31

    The overall objective of this study is to develop a probabilistic methodology to analyze landslide hazards and their effects on the safety of buried pipelines. The methodology incorporates a range of models that can accommodate differences in the ground movement modes and the amount and type of information available at various site locations. Two movement modes are considered, namely (a) instantaneous (catastrophic) slides, and (b) gradual ground movement which may result in cumulative displacements over the pipeline design life (30-40 years) that are in excess of allowable values. Probabilistic analysis is applied in each case to address the uncertainties associated with important factors that control slope stability. Availability of information ranges from relatively well studied, instrumented installations to cases where data is limited to what can be derived from topographic and geologic maps. The methodology distinguishes between procedures applied where there is little information and those that can be used when relatively extensive data is available. Important aspects of the methodology are illustrated in a case study involving a pipeline located in Northern Alberta, Canada, in the Simonette river valley.
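
    The flavor of such a probabilistic treatment can be conveyed by a Monte Carlo estimate of failure probability for a simple infinite-slope model (an illustration in the spirit of the methodology, not the authors' models): sample uncertain strength parameters, compute the factor of safety, and count realizations with FS < 1:

    # Monte Carlo probability of slope failure, infinite-slope model.
    import numpy as np

    rng = np.random.default_rng(11)
    n = 100_000

    beta = np.radians(25.0)    # slope angle
    gamma = 19.0               # soil unit weight (kN/m^3)
    z = 4.0                    # depth of failure plane (m)

    # Uncertain strength parameters (illustrative distributions).
    cohesion = rng.normal(8.0, 2.0, n)           # kPa
    phi = np.radians(rng.normal(28.0, 3.0, n))   # friction angle

    driving = gamma * z * np.sin(beta) * np.cos(beta)
    resisting = cohesion + gamma * z * np.cos(beta) ** 2 * np.tan(phi)
    fs = resisting / driving

    print(f"mean FS = {fs.mean():.2f}, P(failure) = {(fs < 1).mean():.4f}")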

  20. An efficient and scalable analysis framework for variant extraction and refinement from population-scale DNA sequence data.

    PubMed

    Jun, Goo; Wing, Mary Kate; Abecasis, Gonçalo R; Kang, Hyun Min

    2015-06-01

    The analysis of next-generation sequencing data is computationally and statistically challenging because of the massive volume of data and imperfect data quality. We present GotCloud, a pipeline for efficiently detecting and genotyping high-quality variants from large-scale sequencing data. GotCloud automates sequence alignment, sample-level quality control, variant calling, filtering of likely artifacts using machine-learning techniques, and genotype refinement using haplotype information. The pipeline can process thousands of samples in parallel and requires less computational resources than current alternatives. Experiments with whole-genome and exome-targeted sequence data generated by the 1000 Genomes Project show that the pipeline provides effective filtering against false positive variants and high power to detect true variants. Our pipeline has already contributed to variant detection and genotyping in several large-scale sequencing projects, including the 1000 Genomes Project and the NHLBI Exome Sequencing Project. We hope it will now prove useful to many medical sequencing studies. © 2015 Jun et al.; Published by Cold Spring Harbor Laboratory Press.
