Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-03
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... Distribution Systems, Gas Transmission and Gathering Systems, and Hazardous Liquid Systems AGENCY: Pipeline and.... SUMMARY: This notice advises owners and operators of gas pipeline facilities and hazardous liquid pipeline...
A graph-based approach for designing extensible pipelines
2012-01-01
Background In bioinformatics, it is important to build extensible and low-maintenance systems that are able to deal with the new tools and data formats that are constantly being developed. The traditional and simplest implementation of pipelines involves hardcoding the execution steps into programs or scripts. This approach can lead to problems when a pipeline is expanding because the incorporation of new tools is often error-prone and time-consuming. Current approaches to pipeline development such as workflow management systems focus on analysis tasks that are systematically repeated without significant changes in their course of execution, such as genome annotation. However, more dynamism in pipeline composition is necessary when each execution requires a different combination of steps. Results We propose a graph-based approach to implement extensible and low-maintenance pipelines that is suitable for pipeline applications with multiple functionalities that require different combinations of steps in each execution. Here pipelines are composed automatically by compiling a specialised set of tools on demand, depending on the functionality required, instead of specifying every sequence of tools in advance. We represent the connectivity of pipeline components with a directed graph in which components are the graph edges, their inputs and outputs are the graph nodes, and the paths through the graph are pipelines. To that end, we developed special data structures and a pipeline system algorithm. We demonstrate the applicability of our approach by implementing a format conversion pipeline for the fields of population genetics and genetic epidemiology, but our approach is also helpful in other fields where the use of multiple software tools is necessary to perform comprehensive analyses, such as gene expression and proteomics analyses. The project code, documentation and the Java executables are available under an open source license at http://code.google.com/p/dynamic-pipeline. The system has been tested on Linux and Windows platforms. Conclusions Our graph-based approach enables the automatic creation of pipelines by compiling a specialised set of tools on demand, depending on the functionality required. It also allows the implementation of extensible and low-maintenance pipelines and contributes towards consolidating openness and collaboration in bioinformatics systems. It is targeted at pipeline developers and is suited for implementing applications with sequential execution steps and combined functionalities. In the format conversion application, the automatic combination of conversion tools increased both the number of possible conversions available to the user and the extensibility of the system to allow for future updates with new file formats. PMID:22788675
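A minimal sketch of the graph model this abstract describes, assuming hypothetical format and tool names: file formats are nodes, conversion tools are directed edges, and composing a pipeline is a shortest-path search.

```python
# Sketch of the graph-based pipeline model: formats are nodes, tools are
# directed edges, and a pipeline is any path between two formats. All format
# and tool names here are invented for illustration.
from collections import deque

# edges[src] = list of (dst, tool_name)
edges = {
    "ped": [("vcf", "ped2vcf")],
    "vcf": [("bed", "vcf2bed"), ("hapmap", "vcf2hapmap")],
    "bed": [("hapmap", "bed2hapmap")],
}

def compose_pipeline(source, target):
    """Breadth-first search returning the shortest tool chain, if any."""
    queue = deque([(source, [])])
    seen = {source}
    while queue:
        fmt, tools = queue.popleft()
        if fmt == target:
            return tools
        for nxt, tool in edges.get(fmt, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, tools + [tool]))
    return None  # no conversion path exists

print(compose_pipeline("ped", "hapmap"))  # ['ped2vcf', 'vcf2hapmap']
```

Extending the system with a new tool is then a single edge insertion, which is what makes the approach low-maintenance.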
ORAC-DR: A generic data reduction pipeline infrastructure
NASA Astrophysics Data System (ADS)
Jenness, Tim; Economou, Frossie
2015-03-01
ORAC-DR is a general purpose data reduction pipeline system designed to be instrument and observatory agnostic. The pipeline works with instruments as varied as infrared integral field units, imaging arrays and spectrographs, and sub-millimeter heterodyne arrays and continuum cameras. This paper describes the architecture of the pipeline system and the implementation of the core infrastructure. We finish by discussing the lessons learned since the initial deployment of the pipeline system in the late 1990s.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-25
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... Management Programs AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION... Nation's gas distribution pipeline systems through development of inspection methods and guidance for the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-29
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... Liquid Systems AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Notice; Issuance of Advisory Bulletin. SUMMARY: This notice advises owners and operators of gas pipeline...
Cormier, Nathan; Kolisnik, Tyler; Bieda, Mark
2016-07-05
There has been an enormous expansion of use of chromatin immunoprecipitation followed by sequencing (ChIP-seq) technologies. Analysis of large-scale ChIP-seq datasets involves a complex series of steps and production of several specialized graphical outputs. A number of systems have emphasized custom development of ChIP-seq pipelines. These systems are primarily based on custom programming of a single, complex pipeline or supply libraries of modules and do not produce the full range of outputs commonly produced for ChIP-seq datasets. It is desirable to have more comprehensive pipelines, in particular ones addressing common metadata tasks, such as pathway analysis, and pipelines producing standard complex graphical outputs. It is advantageous if these are highly modular systems, available as both turnkey pipelines and individual modules, that are easily comprehensible, modifiable and extensible to allow rapid alteration in response to new analysis developments in this growing area. Furthermore, it is advantageous if these pipelines allow data provenance tracking. We present a set of 20 ChIP-seq analysis software modules implemented in the Kepler workflow system; most (18/20) were also implemented as standalone, fully functional R scripts. The set consists of four full turnkey pipelines and 16 component modules. The turnkey pipelines in Kepler allow data provenance tracking. Implementation emphasized use of common R packages and widely-used external tools (e.g., MACS for peak finding), along with custom programming. This software presents comprehensive solutions and easily repurposed code blocks for ChIP-seq analysis and pipeline creation. Tasks include mapping raw reads, peak finding via MACS, summary statistics, peak location statistics, summary plots centered on the transcription start site (TSS), gene ontology, pathway analysis, and de novo motif finding, among others. These pipelines range from those performing a single task to those performing full analyses of ChIP-seq data. The pipelines are supplied as both Kepler workflows, which allow data provenance tracking, and, in the majority of cases, as standalone R scripts. These pipelines are designed for ease of modification and repurposing.
A real-time coherent dedispersion pipeline for the giant metrewave radio telescope
NASA Astrophysics Data System (ADS)
De, Kishalay; Gupta, Yashwant
2016-02-01
A fully real-time coherent dedispersion system has been developed for the pulsar back-end at the Giant Metrewave Radio Telescope (GMRT). The dedispersion pipeline uses the single phased array voltage beam produced by the existing GMRT software back-end (GSB) to produce coherently dedispersed intensity output in real time, for the currently operational bandwidths of 16 MHz and 32 MHz. Provision has also been made to coherently dedisperse voltage beam data from observations recorded on disk. We discuss the design and implementation of the real-time coherent dedispersion system, describing the steps carried out to optimise the performance of the pipeline. Presently functioning on an Intel Xeon X5550 CPU equipped with a NVIDIA Tesla C2075 GPU, the pipeline allows dispersion free, high time resolution data to be obtained in real-time. We illustrate the significant improvements over the existing incoherent dedispersion system at the GMRT, and present some preliminary results obtained from studies of pulsars using this system, demonstrating its potential as a useful tool for low frequency pulsar observations. We describe the salient features of our implementation, comparing it with other recently developed real-time coherent dedispersion systems. This implementation of a real-time coherent dedispersion pipeline for a large, low frequency array instrument like the GMRT, will enable long-term observing programs using coherent dedispersion to be carried out routinely at the observatory. We also outline the possible improvements for such a pipeline, including prospects for the upgraded GMRT which will have bandwidths about ten times larger than at present.
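The core operation in such a pipeline is the standard coherent dedispersion algorithm; below is a hedged numpy sketch of that algorithm, not the GSB implementation, with illustrative parameters.

```python
# Hedged sketch of coherent dedispersion: deconvolve interstellar dispersion
# by multiplying the voltage spectrum with the inverse of the cold-plasma
# transfer function (the "chirp"). Not the GSB/GPU implementation.
import numpy as np

D_CONST = 4.148808e15   # Hz^2 s (pc cm^-3)^-1, standard dispersion constant

def coherent_dedisperse(voltage, dm, f0_hz, bw_hz):
    """voltage: complex baseband samples of one channel; dm in pc cm^-3."""
    spec = np.fft.fft(voltage)
    f = np.fft.fftfreq(voltage.size, d=1.0 / bw_hz)  # offset from f0, Hz
    # Quadratic dispersion phase accumulated across the band; the sign
    # convention depends on the baseband convention - flip it if the pulse
    # broadens instead of sharpening.
    phase = 2.0 * np.pi * D_CONST * dm * f**2 / (f0_hz**2 * (f0_hz + f))
    return np.fft.ifft(spec * np.exp(1j * phase))

# Example: dedisperse noise-like data at 325 MHz with 32 MHz bandwidth.
rng = np.random.default_rng(0)
v = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
out = coherent_dedisperse(v, dm=50.0, f0_hz=325e6, bw_hz=32e6)
```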
Drive Control System for Pipeline Crawl Robot Based on CAN Bus
NASA Astrophysics Data System (ADS)
Chen, H. J.; Gao, B. T.; Zhang, X. H.; Deng, Z. Q.
2006-10-01
The drive control system plays an important role in a pipeline robot. In order to inspect flaws and corrosion in seabed crude oil pipelines, an original mobile pipeline robot with a crawler drive unit, power and monitor unit, central control unit, and ultrasonic wave inspection device is developed. The CAN bus connects these different function units and provides a reliable information channel. Considering the limited space, a compact hardware system is designed based on an ARM processor with two CAN controllers. With a made-to-order CAN protocol for the crawl robot, an intelligent drive control system is developed. The implementation of the crawl robot demonstrates that the presented drive control scheme can meet the motion control requirements of the underwater pipeline crawl robot.
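The unit-to-unit messaging can be sketched in software with the python-can package; the arbitration IDs, payload layout, and channel name below are invented, since the robot's made-to-order protocol is not given.

```python
# Hedged sketch of CAN messaging between function units, using python-can.
# IDs and payload layout are invented for illustration.
import can

DRIVE_CMD_ID = 0x100   # hypothetical ID for crawler drive commands
STATUS_ID = 0x200      # hypothetical ID for power/monitor unit status

def send_speed(bus, left_pct, right_pct):
    # One signed byte per track, -100..100 percent of full speed.
    data = [left_pct & 0xFF, right_pct & 0xFF]
    bus.send(can.Message(arbitration_id=DRIVE_CMD_ID, data=data,
                         is_extended_id=False))

with can.interface.Bus(channel="can0", bustype="socketcan") as bus:
    send_speed(bus, 50, 50)          # crawl forward at half speed
    msg = bus.recv(timeout=1.0)      # wait for a status frame
    if msg is not None and msg.arbitration_id == STATUS_ID:
        print("status payload:", msg.data.hex())
```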
Component-based control of oil-gas-water mixture composition in pipelines
NASA Astrophysics Data System (ADS)
Voytyuk, I. N.
2018-03-01
The article provides a theoretical justification of a method for measuring changes in the content of oil, gas and water in pipelines, and discusses the design of a measurement system implementing it. Random and systematic errors of the future system are assessed, and recommendations for its optimization are given.
Comeau, Donald C.; Liu, Haibin; Islamaj Doğan, Rezarta; Wilbur, W. John
2014-01-01
BioC is a new format and associated code libraries for sharing text and annotations. We have implemented BioC natural language preprocessing pipelines in two popular programming languages: C++ and Java. The current implementations interface with the well-known MedPost and Stanford natural language processing tool sets. The pipeline functionality includes sentence segmentation, tokenization, part-of-speech tagging, lemmatization and sentence parsing. These pipelines can be easily integrated along with other BioC programs into any BioC-compliant text mining system. As an application, we converted the NCBI disease corpus to BioC format, and the pipelines have successfully run on this corpus to demonstrate their functionality. Code and data can be downloaded from http://bioc.sourceforge.net. Database URL: http://bioc.sourceforge.net PMID:24935050
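The chaining pattern this abstract describes can be sketched in a few lines; this is not the BioC API, just stand-in stages wired the same way.

```python
# Illustrative sketch of a staged NLP pipeline: segmentation -> tokenization
# -> tagging. The tagger is a stand-in; a real pipeline would call MedPost
# or the Stanford tools as the abstract describes.
import re

def sentence_segment(text):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def tokenize(sentence):
    return re.findall(r"\w+|[^\w\s]", sentence)

def pos_tag(tokens):
    # Toy tagger for illustration only.
    return [(t, "NN" if t.isalpha() else "PUNCT") for t in tokens]

def pipeline(text):
    return [pos_tag(tokenize(s)) for s in sentence_segment(text)]

print(pipeline("BioC shares annotations. Pipelines chain stages."))
```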
Diagnostic layer integration in FPGA-based pipeline measurement systems for HEP experiments
NASA Astrophysics Data System (ADS)
Pozniak, Krzysztof T.
2007-08-01
Integrated triggering and data acquisition systems for high energy physics experiments may be considered as fast, multichannel, synchronous, distributed, pipeline measurement systems. A considerable extension of functional, technological and monitoring demands, which has recently been imposed on them, has forced the common use of large field-programmable gate arrays (FPGAs), digital-signal-processing-enhanced matrices and fast optical transmission for their realization. This paper discusses the modelling, design, realization and testing of pipeline measurement systems. A distribution of synchronous data stream flows is considered in the network. A general functional structure of a single network node is presented. The suggested, novel block structure of the node model facilitates full implementation in the FPGA chip, circuit standardization and parametrization, as well as integration of functional and diagnostic layers. A general method for pipeline system design is derived, based on a unified model of the synchronous data network node. A few examples of practically realized, FPGA-based pipeline measurement systems are presented. The described systems have been applied in the ZEUS and CMS experiments.
Design Optimization of Innovative High-Level Waste Pipeline Unplugging Technologies - 13341
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pribanic, T.; Awwad, A.; Varona, J.
2013-07-01
Florida International University (FIU) is currently working on the development and optimization of two innovative pipeline unplugging methods: the asynchronous pulsing system (APS) and the peristaltic crawler system (PCS). Experiments were conducted on the APS to determine how air in the pipeline influences the system's performance as well as to determine the effectiveness of air mitigation techniques in a pipeline. The results obtained during the experimental phase of the project, including data from pipeline pressure pulse tests along with air bubble compression tests, are presented. Single-cycle pulse amplification caused by a fast-acting cylinder piston pump in 21.8, 30.5, and 43.6 m pipelines was evaluated. Experiments were conducted on fully flooded pipelines as well as pipelines that contained various amounts of air to evaluate the system's performance when air is present in the pipeline. Also presented are details of the improvements implemented to the third generation crawler system (PCS). The improvements include the redesign of the rims of the unit to accommodate a camera system that provides visual feedback of the conditions inside the pipeline. Visual feedback allows the crawler to be used as a pipeline unplugging and inspection tool. Tests conducted previously demonstrated a significant reduction of the crawler speed with increasing length of tether. Current improvements include the positioning of a pneumatic valve manifold system in close proximity to the crawler, rendering the crawler speed independent of tether length. Additional improvements to increase the crawler's speed were also investigated and are presented. Descriptions of the test beds, which were designed to emulate possible scenarios present on Department of Energy (DOE) pipelines, are presented. Finally, conclusions and recommendations for the systems are provided. (authors)
Comeau, Donald C; Liu, Haibin; Islamaj Doğan, Rezarta; Wilbur, W John
2014-01-01
BioC is a new format and associated code libraries for sharing text and annotations. We have implemented BioC natural language preprocessing pipelines in two popular programming languages: C++ and Java. The current implementations interface with the well-known MedPost and Stanford natural language processing tool sets. The pipeline functionality includes sentence segmentation, tokenization, part-of-speech tagging, lemmatization and sentence parsing. These pipelines can be easily integrated along with other BioC programs into any BioC compliant text mining systems. As an application, we converted the NCBI disease corpus to BioC format, and the pipelines have successfully run on this corpus to demonstrate their functionality. Code and data can be downloaded from http://bioc.sourceforge.net. Database URL: http://bioc.sourceforge.net. © The Author(s) 2014. Published by Oxford University Press.
Huang, Kuan-Ju; Shih, Wei-Yeh; Chang, Jui Chung; Feng, Chih Wei; Fang, Wai-Chi
2013-01-01
This paper presents a pipeline VLSI design of a fast singular value decomposition (SVD) processor for a real-time electroencephalography (EEG) system based on on-line recursive independent component analysis (ORICA). Since SVD is used frequently in computations of the real-time EEG system, a low-latency and high-accuracy SVD processor is essential. During the EEG system process, the proposed SVD processor aims to solve the diagonal, inverse and inverse square root matrices of the target matrices in real time. Generally, SVD requires a huge amount of computation in hardware implementation. Therefore, this work proposes a novel design concept for data flow updating to assist the pipeline VLSI implementation. The SVD processor can greatly improve the feasibility of real-time EEG system applications such as brain computer interfaces (BCIs). The proposed architecture is implemented using TSMC 90 nm CMOS technology. The sample rate of the EEG raw data is 128 Hz. The core size of the SVD processor is 580×580 µm², and the operating frequency is 20 MHz. It consumes 0.774 mW of power per execution of the 8-channel EEG system.
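The three products the abstract names follow directly from the decomposition; below is a numpy sketch for a symmetric positive-definite target matrix, assuming A = U S Vᵀ.

```python
# With A = U S V^T, the inverse is V S^-1 U^T and, for symmetric positive-
# definite A, the inverse square root is V S^-1/2 U^T. A numpy sketch of the
# quantities the hardware computes (not the VLSI data flow itself).
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8))
A = B @ B.T + 8 * np.eye(8)          # symmetric positive definite

U, s, Vt = np.linalg.svd(A)
A_inv = Vt.T @ np.diag(1.0 / s) @ U.T
A_inv_sqrt = Vt.T @ np.diag(s**-0.5) @ U.T

print(np.allclose(A @ A_inv, np.eye(8)))                     # True
print(np.allclose(A_inv_sqrt @ A @ A_inv_sqrt, np.eye(8)))   # True
```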
Implementation and design of a teleoperation system based on a VMEBUS/68020 pipelined architecture
NASA Technical Reports Server (NTRS)
Lee, Thomas S.
1989-01-01
A pipelined control design and architecture for a force-feedback teleoperation system that is being implemented at the Jet Propulsion Laboratory and which will be integrated with the autonomous portion of the testbed to achieve shared control is described. At the local site, the operator sees real-time force/torque displays and moves two 6-degree-of-freedom (dof) force-reflecting hand-controllers as his hands feel the contact forces/torques generated at the remote site where the robots interact with the environment. He also uses a graphical user menu to monitor robot states and specify system options. The teleoperation software is written in the C language and runs on MC68020-based processor boards in the VME chassis, which utilizes a real-time operating system; the hardware is configured to realize a four-stage pipeline configuration. The environment is very flexible, such that the system can easily be configured as a stand-alone facility for performing independent research in human factors, force control, and time-delayed systems.
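The four-stage pipeline organisation can be sketched with threads and queues standing in for the MC68020/VME boards; the stage functions below are illustrative placeholders, not taken from the JPL system.

```python
# Sketch of a four-stage pipeline: each stage pulls from an input queue,
# processes, and pushes downstream. A None sentinel shuts the pipeline down.
import queue
import threading

def make_stage(fn, q_in, q_out):
    def run():
        while (item := q_in.get()) is not None:
            q_out.put(fn(item))
        q_out.put(None)                  # forward the shutdown sentinel
    return threading.Thread(target=run)

q = [queue.Queue() for _ in range(5)]
stages = [
    make_stage(lambda x: x + 1, q[0], q[1]),            # read hand controller
    make_stage(lambda x: x * 2, q[1], q[2]),            # transform to remote frame
    make_stage(lambda x: x - 3, q[2], q[3]),            # servo command
    make_stage(lambda x: f"display: {x}", q[3], q[4]),  # operator display
]
for s in stages:
    s.start()
for sample in (1, 2, 3):
    q[0].put(sample)
q[0].put(None)
for s in stages:
    s.join()
while (out := q[4].get()) is not None:
    print(out)
```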
Transforming microbial genotyping: a robotic pipeline for genotyping bacterial strains.
O'Farrell, Brian; Haase, Jana K; Velayudhan, Vimalkumar; Murphy, Ronan A; Achtman, Mark
2012-01-01
Microbial genotyping increasingly deals with large numbers of samples, and data are commonly evaluated by unstructured approaches, such as spreadsheets. The efficiency, reliability and throughput of genotyping would benefit from the automation of manual manipulations within the context of sophisticated data storage. We developed a medium-throughput genotyping pipeline for MultiLocus Sequence Typing (MLST) of bacterial pathogens. This pipeline was implemented through a combination of four automated liquid handling systems, a Laboratory Information Management System (LIMS) consisting of a variety of dedicated commercial operating systems and programs, including a Sample Management System, plus numerous Python scripts. All tubes and microwell racks were bar-coded and their locations and status were recorded in the LIMS. We also created a hierarchical set of items that could be used to represent bacterial species, their products and experiments. The LIMS allowed reliable, semi-automated, traceable bacterial genotyping from initial single colony isolation and sub-cultivation through DNA extraction and normalization to PCRs, sequencing and MLST sequence trace evaluation. We also describe robotic sequencing to facilitate cherry-picking of sequence dropouts. This pipeline is user-friendly, with a throughput of 96 strains within 10 working days at a total cost of < €25 per strain. Since developing this pipeline, >200,000 items were processed by two to three people. Our sophisticated automated pipeline can be implemented by a small microbiology group without extensive external support, and provides a general framework for semi-automated bacterial genotyping of large numbers of samples at low cost.
75 FR 67450 - Pipeline Safety: Control Room Management Implementation Workshop
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-02
... PHMSA-2010-0294] Pipeline Safety: Control Room Management Implementation Workshop AGENCY: Pipeline and...) on the implementation of pipeline control room management. The workshop is intended to foster an understanding of the Control Room Management Rule issued by PHMSA on December 3, 2009, and is open to the public...
High-throughput bioinformatics with the Cyrille2 pipeline system
Fiers, Mark WEJ; van der Burgt, Ate; Datema, Erwin; de Groot, Joost CW; van Ham, Roeland CHJ
2008-01-01
Background Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: 1) a web-based, graphical user interface (GUI) that enables a pipeline operator to manage the system; 2) the Scheduler, which forms the functional core of the system and which tracks what data enters the system and determines what jobs must be scheduled for execution; and 3) the Executor, which searches for scheduled jobs and executes these on a compute cluster. Conclusion The Cyrille2 system is an extensible, modular system, implementing the stated requirements. Cyrille2 enables easy creation and execution of high-throughput, flexible bioinformatics pipelines. PMID:18269742
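A toy sketch of the Scheduler/Executor split described above; the names and command template are illustrative, not Cyrille2's API.

```python
# Sketch of the Scheduler/Executor pattern: the scheduler turns newly
# arrived data into concrete jobs; the executor polls for scheduled jobs
# and runs them. Data names and the command template are invented.
import subprocess

pending = [("sample1.fastq", "fastqc {}")]   # (new data, job template)
scheduled = []

def scheduler():
    """Track incoming data and decide which jobs must be scheduled."""
    while pending:
        data, template = pending.pop()
        scheduled.append(template.format(data))

def executor(dry_run=True):
    """Search for scheduled jobs and execute them (on a cluster, in reality)."""
    while scheduled:
        cmd = scheduled.pop()
        print("running:", cmd)
        if not dry_run:
            subprocess.run(cmd, shell=True, check=True)

scheduler()
executor()
```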
Low-level processing for real-time image analysis
NASA Technical Reports Server (NTRS)
Eskenazi, R.; Wilf, J. M.
1979-01-01
A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real-time image analysis that uses this system is given.
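The chain-code representation mentioned above is the classic Freeman 8-direction code; here is a minimal sketch, assuming an already-ordered contour of 8-adjacent pixels.

```python
# Freeman 8-direction chain coding: each step between consecutive contour
# pixels is encoded as one of 8 direction codes (0 = +x, counting
# counter-clockwise). Input must be an ordered, 8-adjacent contour.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(contour):
    return [DIRS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(contour, contour[1:])]

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(chain_code(square))   # [0, 2, 4, 6]
```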
Creation and Implementation of a Workforce Development Pipeline Program at MSFC
NASA Technical Reports Server (NTRS)
Hix, Billy
2003-01-01
Within the context of NASA's Education Programs, this Workforce Development Pipeline guide describes the goals and objectives of MSFC's Workforce Development Pipeline Program as well as the principles and strategies for guiding implementation. It is designed to support the initiatives described in the NASA Implementation Plan for Education, 1999-2003 (EP-1998-12-383-HQ) and represents the vision of the members of the Education Programs office at MSFC. This document: 1) Outlines NASA's Contribution to National Priorities; 2) Sets the context for the Workforce Development Pipeline Program; 3) Describes Workforce Development Pipeline Program Strategies; 4) Articulates the Workforce Development Pipeline Program Goals and Aims; 5) Lists the actions to build a unified approach; 6) Outlines the Workforce Development Pipeline Program's guiding Principles; and 7) Summarizes the results of implementation.
High Dynamic Range Spectral Imaging Pipeline For Multispectral Filter Array Cameras.
Lapray, Pierre-Jean; Thomas, Jean-Baptiste; Gouton, Pierre
2017-06-03
Spectral filter array imaging exhibits a strong similarity with color filter array imaging. This permits us to embed this technology in practical vision systems with little adaptation of the existing solutions. In this communication, we define an imaging pipeline that permits high dynamic range (HDR) spectral imaging, which is extended from color filter arrays. We propose an implementation of this pipeline on a prototype sensor and evaluate the quality of our implementation results on real data with objective metrics and visual examples. We demonstrate that we reduce noise, and, in particular, we solve the problem of noise generated by the lack of energy balance. Data are provided to the community in an image database for further research.
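The HDR step can be sketched as exposure-normalised, weighted averaging per band; the weighting below is a common choice, not necessarily the paper's.

```python
# Hedged sketch of HDR merging: several exposures of the same scene are
# normalised by exposure time and merged with weights that discount pixels
# near the noise floor or saturation. Thresholds are illustrative.
import numpy as np

def merge_hdr(frames, exposure_times, sat=0.95, floor=0.05):
    acc = np.zeros_like(frames[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, exposure_times):
        w = ((img > floor) & (img < sat)).astype(np.float64)
        acc += w * img / t           # radiance estimate from this exposure
        wsum += w
    return acc / np.maximum(wsum, 1e-9)

rng = np.random.default_rng(1)
scene = rng.uniform(0, 4, (4, 4))                        # "true" radiance
frames = [np.clip(scene * t, 0, 1) for t in (0.1, 0.4)]  # two exposures
print(merge_hdr(frames, (0.1, 0.4)))
```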
NGSANE: a lightweight production informatics framework for high-throughput data analysis.
Buske, Fabian A; French, Hugh J; Smith, Martin A; Clark, Susan J; Bauer, Denis C
2014-05-15
The initial steps in the analysis of next-generation sequencing data can be automated by way of software 'pipelines'. However, individual components depreciate rapidly because of the evolving technology and analysis methods, often rendering entire versions of production informatics pipelines obsolete. Constructing pipelines from Linux bash commands enables the use of hot-swappable modular components, as opposed to the more rigid program call wrapping by higher-level languages, as implemented in comparable published pipelining systems. Here we present Next Generation Sequencing ANalysis for Enterprises (NGSANE), a Linux-based, high-performance-computing-enabled framework that minimizes overhead for set up and processing of new projects, yet maintains full flexibility of custom scripting when processing raw sequence data. NGSANE is implemented in bash and publicly available under the BSD (3-Clause) licence via GitHub at https://github.com/BauerLab/ngsane. Denis.Bauer@csiro.au Supplementary data are available at Bioinformatics online.
GPU-Powered Coherent Beamforming
NASA Astrophysics Data System (ADS)
Magro, A.; Adami, K. Zarb; Hickish, J.
2015-03-01
Graphics processing unit (GPU)-based beamforming is a relatively unexplored area in radio astronomy, possibly due to the assumption that any such system will be severely limited by the PCIe bandwidth required to transfer data to the GPU. We have developed a CUDA-based GPU implementation of a coherent beamformer, specifically designed and optimized for deployment at the BEST-2 array, which can generate an arbitrary number of synthesized beams for a wide range of parameters. It achieves ~1.3 TFLOPs on an NVIDIA Tesla K20, approximately 10x faster than an optimized, multithreaded CPU implementation. This kernel has been integrated into two real-time, GPU-based time-domain software pipelines deployed at the BEST-2 array in Medicina: a standalone beamforming pipeline and a transient detection pipeline. We present performance benchmarks for the beamforming kernel and for the transient detection pipeline with beamforming capabilities, as well as results of a test observation.
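Coherent beamforming itself reduces to a per-antenna phase weight and a sum; below is a hedged numpy sketch with an illustrative linear array (the BEST-2 geometry and the CUDA kernel are not reproduced here).

```python
# Sketch of coherent (phased-array) beamforming as a weighted sum: apply a
# per-antenna complex phase weight that cancels the geometric delay toward
# the pointing direction, then sum. Array geometry is illustrative.
import numpy as np

def beamform(voltages, positions_m, theta, freq_hz, c=3e8):
    """voltages: (n_ant, n_samp) complex; positions_m: (n_ant,) array."""
    delays = positions_m * np.sin(theta) / c            # geometric delays
    weights = np.exp(-2j * np.pi * freq_hz * delays)    # phase them out
    return weights @ voltages                           # (n_samp,) beam

rng = np.random.default_rng(2)
pos = np.arange(8) * 0.5                 # 8 antennas, 0.5 m spacing
sig = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
ant = np.array([sig * np.exp(2j * np.pi * 408e6 * p * np.sin(0.1) / 3e8)
                for p in pos])           # plane wave from theta = 0.1 rad
beam = beamform(ant, pos, 0.1, 408e6)
print(abs(beam).mean() / abs(sig).mean())   # ~8: coherent gain of n_ant
```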
High-throughput Analysis of Large Microscopy Image Datasets on CPU-GPU Cluster Platforms
Teodoro, George; Pan, Tony; Kurc, Tahsin M.; Kong, Jun; Cooper, Lee A. D.; Podhorszki, Norbert; Klasky, Scott; Saltz, Joel H.
2014-01-01
Analysis of large pathology image datasets offers significant opportunities for the investigation of disease morphology, but the resource requirements of analysis pipelines limit the scale of such studies. Motivated by a brain cancer study, we propose and evaluate a parallel image analysis application pipeline for high throughput computation of large datasets of high resolution pathology tissue images on distributed CPU-GPU platforms. To achieve efficient execution on these hybrid systems, we have built runtime support that allows us to express the cancer image analysis application as a hierarchical data processing pipeline. The application is implemented as a coarse-grain pipeline of stages, where each stage may be further partitioned into another pipeline of fine-grain operations. The fine-grain operations are efficiently managed and scheduled for computation on CPUs and GPUs using performance aware scheduling techniques along with several optimizations, including architecture aware process placement, data locality conscious task assignment, data prefetching, and asynchronous data copy. These optimizations are employed to maximize the utilization of the aggregate computing power of CPUs and GPUs and minimize data copy overheads. Our experimental evaluation shows that the cooperative use of CPUs and GPUs achieves significant improvements on top of GPU-only versions (up to 1.6×) and that the execution of the application as a set of fine-grain operations provides more opportunities for runtime optimizations and attains better performance than coarser-grain, monolithic implementations used in other works. An implementation of the cancer image analysis pipeline using the runtime support was able to process an image dataset consisting of 36,848 4K×4K-pixel image tiles (about 1.8 TB uncompressed) in less than 4 minutes (150 tiles/second) on 100 nodes of a state-of-the-art hybrid cluster system. PMID:25419546
49 CFR 192.907 - What must an operator do to implement this subpart?
Code of Federal Regulations, 2010 CFR
2010-10-01
...) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline Integrity Management § 192.907 What must an operator do to implement this subpart? (a...
An acceleration system for Laplacian image fusion based on SoC
NASA Astrophysics Data System (ADS)
Gao, Liwen; Zhao, Hongtu; Qu, Xiujie; Wei, Tianbo; Du, Peng
2018-04-01
Based on an analysis of the Laplacian image fusion algorithm, this paper proposes a partial pipelining and modular processing architecture, and an SoC-based acceleration system is implemented accordingly. Full pipelining is used in the design of each module, and modules in series form the partial pipeline with a unified data format, which is easy to manage and reuse. Integrated with an ARM processor, DMA and an embedded bare-metal program, this system achieves 4 layers of Laplacian pyramid on the Zynq-7000 board. Experiments show that, with small resource consumption, a pair of 256×256 images can be fused within 1 ms, while maintaining a fine fusion effect.
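A host-side sketch of the algorithm being accelerated, using OpenCV pyramids and a max-magnitude fusion rule for the detail levels (the paper's exact fusion rule is not specified here).

```python
# Sketch of Laplacian pyramid fusion: build a Laplacian pyramid of each
# image, fuse detail levels by max magnitude, average the low-pass residual,
# then collapse the fused pyramid. The paper's point is moving this to an SoC.
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)       # band-pass detail level
        cur = down
    pyr.append(cur)                # low-pass residual
    return pyr

def fuse(a, b, levels=4):
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)
             for la, lb in zip(pa[:-1], pb[:-1])]
    out = 0.5 * (pa[-1] + pb[-1])  # average the low-pass residual
    for lap in reversed(fused):
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return out
```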
Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph
2018-06-01
Research in ultrasound imaging is limited in reproducibility by two factors: First, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult. Second, most pipelines are implemented in special hardware, resulting in limited flexibility of implemented processing steps on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and regarding its run time. The pipeline shows image quality comparable to a clinical system and, backed by point spread function measurements, a comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
The Calibration Reference Data System
NASA Astrophysics Data System (ADS)
Greenfield, P.; Miller, T.
2016-07-01
We describe a software architecture and implementation for using rules to determine which calibration files are appropriate for calibrating a given observation. This new system, the Calibration Reference Data System (CRDS), replaces what had been previously used for the Hubble Space Telescope (HST) calibration pipelines, the Calibration Database System (CDBS). CRDS will be used for the James Webb Space Telescope (JWST) calibration pipelines, and is currently being used for HST calibration pipelines. CRDS can be easily generalized for use in similar applications that need a rules-based system for selecting the appropriate item for a given dataset; we give some examples of such generalizations that will likely be used for JWST. The core functionality of the Calibration Reference Data System is available under an Open Source license. CRDS is briefly contrasted with a sampling of other similar systems used at other observatories.
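A toy sketch of rules-based reference selection in the spirit of CRDS; the metadata keys, rule format, and filenames below are all invented.

```python
# Sketch of rules-based reference file selection: each reference declares
# the observation metadata it applies to; the most specific match wins.
# Keys, values, and filenames are invented for illustration.
RULES = [
    {"instrument": "NIRCAM", "detector": "A1", "file": "dark_a1.fits"},
    {"instrument": "NIRCAM", "detector": "*",  "file": "dark_generic.fits"},
]

def select_reference(header):
    def matches(rule):
        return all(rule[k] in ("*", header.get(k))
                   for k in rule if k != "file")
    candidates = [r for r in RULES if matches(r)]
    # Prefer the most specific rule, i.e. the one with the fewest wildcards.
    best = min(candidates, key=lambda r: list(r.values()).count("*"))
    return best["file"]

print(select_reference({"instrument": "NIRCAM", "detector": "A1"}))
# -> dark_a1.fits
```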
Architecting the Finite Element Method Pipeline for the GPU.
Fu, Zhisong; Lewis, T James; Kirby, Robert M; Whitaker, Ross T
2014-02-01
The finite element method (FEM) is a widely employed numerical technique for approximating the solution of partial differential equations (PDEs) in various science and engineering applications. Many of these applications benefit from fast execution of the FEM pipeline. One way to accelerate the FEM pipeline is by exploiting advances in modern computational hardware, such as the many-core streaming processors like the graphical processing unit (GPU). In this paper, we present the algorithms and data-structures necessary to move the entire FEM pipeline to the GPU. First we propose an efficient GPU-based algorithm to generate local element information and to assemble the global linear system associated with the FEM discretization of an elliptic PDE. To solve the corresponding linear system efficiently on the GPU, we implement a conjugate gradient method preconditioned with a geometry-informed algebraic multi-grid (AMG) method preconditioner. We propose a new fine-grained parallelism strategy, a corresponding multigrid cycling stage and efficient data mapping to the many-core architecture of GPU. Comparison of our on-GPU assembly versus a traditional serial implementation on the CPU achieves up to an 87× speedup. Focusing on the linear system solver alone, we achieve a speedup of up to 51× versus use of a comparable state-of-the-art serial CPU linear system solver. Furthermore, the method compares favorably with other GPU-based, sparse, linear solvers.
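The solver stage is a standard preconditioned conjugate gradient loop; here is a plain-numpy sketch with a Jacobi preconditioner standing in for the paper's AMG.

```python
# Preconditioned conjugate gradient for a symmetric positive-definite system
# Ax = b. A Jacobi (diagonal) preconditioner stands in for the AMG method.
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    M_inv = 1.0 / np.diag(A)            # Jacobi preconditioner
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
print(pcg(A, np.array([1.0, 2.0])))     # ~ [0.0909, 0.6364]
```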
A computational genomics pipeline for prokaryotic sequencing projects
Kislyuk, Andrey O.; Katz, Lee S.; Agrawal, Sonia; Hagen, Matthew S.; Conley, Andrew B.; Jayaraman, Pushkala; Nelakuditi, Viswateja; Humphrey, Jay C.; Sammons, Scott A.; Govil, Dhwani; Mair, Raydel D.; Tatti, Kathleen M.; Tondella, Maria L.; Harcourt, Brian H.; Mayer, Leonard W.; Jordan, I. King
2010-01-01
Motivation: New sequencing technologies have accelerated research on prokaryotic genomes and have made genome sequencing operations outside major genome sequencing centers routine. However, no off-the-shelf solution exists for the combined assembly, gene prediction, genome annotation and data presentation necessary to interpret sequencing data. The resulting requirement to invest significant resources into custom informatics support for genome sequencing projects remains a major impediment to the accessibility of high-throughput sequence data. Results: We present a self-contained, automated high-throughput open source genome sequencing and computational genomics pipeline suitable for prokaryotic sequencing projects. The pipeline has been used at the Georgia Institute of Technology and the Centers for Disease Control and Prevention for the analysis of Neisseria meningitidis and Bordetella bronchiseptica genomes. The pipeline is capable of enhanced or manually assisted reference-based assembly using multiple assemblers and modes; gene predictor combining; and functional annotation of genes and gene products. Because every component of the pipeline is executed on a local machine with no need to access resources over the Internet, the pipeline is suitable for projects of a sensitive nature. Annotation of virulence-related features makes the pipeline particularly useful for projects working with pathogenic prokaryotes. Availability and implementation: The pipeline is licensed under the open-source GNU General Public License and available at the Georgia Tech Neisseria Base (http://nbase.biology.gatech.edu/). The pipeline is implemented with a combination of Perl, Bourne Shell and MySQL and is compatible with Linux and other Unix systems. Contact: king.jordan@biology.gatech.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20519285
OS friendly microprocessor architecture: Hardware level computer security
NASA Astrophysics Data System (ADS)
Jungwirth, Patrick; La Fratta, Patrick
2016-05-01
We present an introduction to the patented OS Friendly Microprocessor Architecture (OSFA) and hardware level computer security. Conventional microprocessors have not tried to balance hardware performance and OS performance at the same time. Conventional microprocessors have depended on the Operating System for computer security and information assurance. The goal of the OS Friendly Architecture is to provide a high performance and secure microprocessor and OS system. We invite cyber security, information technology (IT), and SCADA control professionals to review the hardware-level security features. The OS Friendly Architecture is a switched set of cache memory banks in a pipeline configuration. For light-weight threads, the memory pipeline configuration provides near instantaneous context switching times. The pipelining and parallelism provided by the cache memory pipeline allow background cache read and write operations while the microprocessor's execution pipeline is running instructions. The cache bank selection controllers provide arbitration to prevent the memory pipeline and microprocessor's execution pipeline from accessing the same cache bank at the same time. This separation allows the cache memory pages to transfer to and from level 1 (L1) caching while the microprocessor pipeline is executing instructions. Computer security operations are implemented in hardware. By extending Unix file permissions bits to each cache memory bank and memory address, the OSFA provides hardware level computer security.
Aerial image databases for pipeline rights-of-way management
NASA Astrophysics Data System (ADS)
Jadkowski, Mark A.
1996-03-01
Pipeline companies that own and manage extensive rights-of-way corridors are faced with ever-increasing regulatory pressures, operating issues, and the need to remain competitive in today's marketplace. Automation has long been an answer to the problem of having to do more work with less people, and Automated Mapping/Facilities Management/Geographic Information Systems (AM/FM/GIS) solutions have been implemented at several pipeline companies. Until recently, the ability to cost-effectively acquire and incorporate up-to-date aerial imagery into these computerized systems has been out of the reach of most users. NASA's Earth Observations Commercial Applications Program (EOCAP) is providing a means by which pipeline companies can bridge this gap. The EOCAP project described in this paper includes a unique partnership with NASA and James W. Sewall Company to develop an aircraft-mounted digital camera system and a ground-based computer system to geometrically correct and efficiently store and handle the digital aerial images in an AM/FM/GIS environment. This paper provides a synopsis of the project, including details on (1) the need for aerial imagery, (2) NASA's interest and role in the project, (3) the design of a Digital Aerial Rights-of-Way Monitoring System, (4) image georeferencing strategies for pipeline applications, and (5) commercialization of the EOCAP technology through a prototype project at Algonquin Gas Transmission Company which operates major gas pipelines in New England, New York, and New Jersey.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-21
... Registry of Pipeline and Liquefied Natural Gas Operators AGENCY: Pipeline and Hazardous Materials Safety... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Parts... Register (75 FR 72878) titled: ``Pipeline Safety: Updates to Pipeline and Liquefied Natural Gas Reporting...
Shaikh, Faiq; Franc, Benjamin; Allen, Erastus; Sala, Evis; Awan, Omer; Hendrata, Kenneth; Halabi, Safwan; Mohiuddin, Sohaib; Malik, Sana; Hadley, Dexter; Shrestha, Rasu
2018-03-01
Enterprise imaging has channeled various technological innovations to the field of clinical radiology, ranging from advanced imaging equipment and postacquisition iterative reconstruction tools to image analysis and computer-aided detection tools. More recently, the advancement in the field of quantitative image analysis coupled with machine learning-based data analytics, classification, and integration has ushered in the era of radiomics, a paradigm shift that holds tremendous potential in clinical decision support as well as drug discovery. However, there are important issues to consider to incorporate radiomics into a clinically applicable system and a commercially viable solution. In this two-part series, we offer insights into the development of the translational pipeline for radiomics from methodology to clinical implementation (Part 1) and from that point to enterprise development (Part 2). In Part 2 of this two-part series, we study the components of the strategy pipeline, from clinical implementation to building enterprise solutions.
An implementation of a data-transmission pipelining algorithm on Imote2 platforms
NASA Astrophysics Data System (ADS)
Li, Xu; Dorvash, Siavash; Cheng, Liang; Pakzad, Shamim
2011-04-01
Over the past several years, wireless network systems and sensing technologies have been developed significantly. This has resulted in the broad application of wireless sensor networks (WSNs) in many engineering fields and in particular structural health monitoring (SHM). The movement of traditional SHM toward the new generation of SHM, which utilizes WSNs, relies on the advantages of this new approach, such as relatively low costs, ease of implementation and the capability of onboard data processing and management. In the particular case of long span bridge monitoring, a WSN should be capable of transmitting commands and measurement data over long network geometry in a reliable manner. While using single-hop data transmission in such geometry requires a long radio range and consequently a high level of power supply, multi-hop communication may offer an effective and reliable way for data transmission across the network. Using a multi-hop communication protocol, the network relays data from a remote node to the base station via intermediary nodes. We have proposed a data-transmission pipelining algorithm to enable effective use of the available bandwidth and to minimize the energy consumption and delay of the multi-hop communication protocol. This paper focuses on the implementation aspect of the pipelining algorithm on Imote2 platforms for SHM applications, describes its interaction with underlying routing protocols, and presents solutions to various implementation issues of the proposed pipelining algorithm. Finally, the performance of the algorithm is evaluated based on the results of an experimental implementation.
Analysis of oil-pipeline distribution of multiple products subject to delivery time-windows
NASA Astrophysics Data System (ADS)
Jittamai, Phongchai
This dissertation defines the operational problems of, and develops solution methodologies for, the distribution of multiple products in an oil pipeline subject to delivery time-window constraints. A multiple-product oil pipeline is a pipeline system composed of pipes, pumps, valves and storage facilities used to transport different types of liquids. Typically, products delivered by pipelines are petroleum of different grades moving either from production facilities to refineries or from refineries to distributors. Time-windows, which are generally used in logistics and scheduling areas, are incorporated in this study. The distribution of multiple products in an oil pipeline subject to delivery time-windows is modeled as a multicommodity network flow structure and formulated mathematically. The main focus of this dissertation is the investigation of operating issues and problem complexity of single-source pipeline problems, and the development of a solution methodology to compute an input schedule that yields the minimum total violation of due delivery time-windows. The problem is proved to be NP-complete. A heuristic approach, a reversed-flow algorithm, is developed based on pipeline flow reversibility to compute input schedules for the pipeline problem. This algorithm runs in no more than O(T·E) time. This dissertation also extends the study to examine some operating attributes and the problem complexity of multiple-source pipelines. The multiple-source pipeline problem is also NP-complete. A heuristic algorithm modified from the one used in single-source pipeline problems is introduced; it also runs in no more than O(T·E) time. Computational results are presented for both methodologies on randomly generated problem sets. The computational experience indicates that the reversed-flow algorithms provide good solutions in comparison with the optimal solutions: only 25% of the tested problems exceeded the optimal values by more than 30%, and approximately 40% of the tested problems were solved optimally by the algorithms.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-02
... Management Program for Gas Distribution Pipelines; Correction AGENCY: Pipeline and Hazardous Materials Safety... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... Regulations to require operators of gas distribution pipelines to develop and implement integrity management...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-13
... Natural Gas Operators AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No...: ``Pipeline Safety: Updates to Pipeline and Liquefied Natural Gas Reporting Requirements.'' The final rule...
BigDataScript: a scripting language for data pipelines
Cingolani, Pablo; Sladek, Rob; Blanchette, Mathieu
2015-01-01
Motivation: The analysis of large biological datasets often requires complex processing pipelines that run for a long time on large computational infrastructures. We designed and implemented a simple script-like programming language with a clean and minimalist syntax to develop and manage pipeline execution and provide robustness to various types of software and hardware failures as well as portability. Results: We introduce the BigDataScript (BDS) programming language for data processing pipelines, which improves abstraction from hardware resources and assists with robustness. Hardware abstraction allows BDS pipelines to run without modification on a wide range of computer architectures, from a small laptop to multi-core servers, server farms, clusters and clouds. BDS achieves robustness by incorporating the concepts of absolute serialization and lazy processing, thus allowing pipelines to recover from errors. By abstracting pipeline concepts at programming language level, BDS simplifies implementation, execution and management of complex bioinformatics pipelines, resulting in reduced development and debugging cycles as well as cleaner code. Availability and implementation: BigDataScript is available under open-source license at http://pcingola.github.io/BigDataScript. Contact: pablo.e.cingolani@gmail.com PMID:25189778
Gazda, Nicholas P; Griffin, Emily; Hamrick, Kasey; Baskett, Jordan; Mellon, Meghan M; Eckel, Stephen F; Granko, Robert P
2018-04-01
Purpose: The purpose of this article is to share experiences after the development of a health-system pharmacy administration residency with an MS degree and express the need for additional programs in nonacademic medical center health-system settings. Summary: Experiences with the development and implementation of a health-system pharmacy administration residency at a large community teaching hospital are described. Resident candidates benefit from collaborations with other health-systems through master's degree programs and visibility to leaders at your health-system. Programs benefit from building a pipeline of future pharmacy administrators and by leveraging the skills of residents to contribute to projects and department-wide initiatives. Tools to assist in the implementation of a new pharmacy administration program are also described and include rotation and preceptor development, marketing and recruiting, financial evaluation, and steps to prepare for accreditation. Conclusion: Health-system pharmacy administration residents provide the opportunity to build a pipeline of high-quality leaders, provide high-level project involvement, and produce a positive return on investment (ROI) for health-systems. These programs should be explored in academic and nonacademic health-systems.
Automatic latency equalization in VHDL-implemented complex pipelined systems
NASA Astrophysics Data System (ADS)
Zabołotny, Wojciech M.
2016-09-01
In pipelined data processing systems it is very important to ensure that parallel paths delay data by the same number of clock cycles. If that condition is not met, the processing blocks receive data that are not properly aligned in time and produce incorrect results. Manual equalization of latencies is tedious and error-prone work. This paper presents an automatic method of latency equalization in systems described in VHDL. The proposed method uses simulation to measure latencies and verify the introduced correction. The solution is portable between different simulation and synthesis tools. The method does not increase the complexity of the synthesized design compared to a solution based on manual latency adjustment. An example implementation of the proposed methodology, together with a simple design demonstrating its use, is available as an open source project under the BSD license.
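The balancing rule the method automates is simple to state: every branch entering a merge point is padded to the latency of the slowest branch. A small sketch of that computation, with invented branch names:

```python
# Each parallel branch into a merge point must carry the same latency, so a
# branch with latency L gets (max_latency - L) extra register stages.
# Branch names and latencies are invented for illustration.
branch_latencies = {"fir_filter": 7, "threshold": 2, "bypass": 0}

target = max(branch_latencies.values())
padding = {name: target - lat for name, lat in branch_latencies.items()}
print(padding)   # {'fir_filter': 0, 'threshold': 5, 'bypass': 7}
```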
Reproducibility of neuroimaging analyses across operating systems
Glatard, Tristan; Lewis, Lindsay B.; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C.
2015-01-01
Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed. PMID:25964757
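The comparison metric used above is the Dice coefficient, 2|A∩B|/(|A|+|B|) for two binary segmentations; a minimal sketch with toy labels:

```python
# Dice coefficient between two binary segmentations, as used to compare
# subcortical classifications across operating systems. Labels are toy data.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

seg_linux = np.array([1, 1, 1, 0, 0, 1])   # per-voxel labels on platform A
seg_other = np.array([1, 1, 0, 0, 1, 1])   # same voxels on platform B
print(dice(seg_linux, seg_other))          # 0.75
```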
archiDART v3.0: A new data analysis pipeline allowing the topological analysis of plant root systems
Delory, Benjamin M.; Li, Mao; Topp, Christopher N.; Lobet, Guillaume
2018-01-01
Quantifying plant morphology is a very challenging task that requires methods able to capture the geometry and topology of plant organs at various spatial scales. Recently, the use of persistent homology as a mathematical framework to quantify plant morphology has been successfully demonstrated for leaves, shoots, and root systems. In this paper, we present a new data analysis pipeline implemented in the R package archiDART to analyse root system architectures using persistent homology. In addition, we also show that both geometric and topological descriptors are necessary to accurately compare root systems and assess their natural complexity. PMID:29636899
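archiDART itself is an R package; as a language-neutral illustration of the 0-dimensional persistent homology idea it applies to root systems, here is a small Python sketch computing sublevel-set persistence of a function sampled along a path (for instance, a geodesic distance sampled along a root), using a union-find forest and the elder rule. The sample values are invented.

```python
import numpy as np

def persistence_0d(values):
    """(birth, death) pairs of connected components of sublevel sets of a
    function on a path graph; the globally oldest component never dies."""
    order = np.argsort(values)            # process vertices by increasing value
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in order:
        parent[i], birth[i] = i, values[i]
        for j in (i - 1, i + 1):           # neighbours on the path
            if j in parent:                # neighbour already alive: merge
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # elder rule: the younger component dies at the current value
                young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                pairs.append((birth[young], values[i]))
                parent[young] = old
    return [(b, d) for b, d in pairs if d > b]   # drop zero-persistence pairs

heights = np.array([0.2, 1.0, 0.4, 1.4, 0.1, 1.2, 0.3])
print(persistence_0d(heights))  # [(0.4, 1.0), (0.3, 1.2), (0.2, 1.4)]
```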
Rapid simulation of X-ray transmission imaging for baggage inspection via GPU-based ray-tracing
NASA Astrophysics Data System (ADS)
Gong, Qian; Stoian, Razvan-Ionut; Coccarelli, David S.; Greenberg, Joel A.; Vera, Esteban; Gehm, Michael E.
2018-01-01
We present a pipeline that rapidly simulates X-ray transmission imaging for arbitrary system architectures using GPU-based ray-tracing techniques. The purpose of the pipeline is to enable statistical analysis of threat detection in the context of airline baggage inspection. As a faster alternative to Monte Carlo methods, we adopt a deterministic approach for simulating photoelectric absorption-based imaging. The highly-optimized NVIDIA OptiX API is used to implement ray-tracing, greatly speeding code execution. In addition, we implement the first hierarchical representation structure to determine the interaction path length of rays traversing heterogeneous media described by layered polygons. The accuracy of the pipeline has been validated by comparing simulated data with experimental data collected using a heterogeneous phantom and a laboratory X-ray imaging system. On a single computer, our approach allows us to generate over 400 2D transmission projections (125 × 125 pixels per frame) per hour for a bag packed with hundreds of everyday objects. By implementing our approach on cloud-based GPU computing platforms, we find that the same 2D projections of approximately 3.9 million bags can be obtained in a single day using 400 GPU instances, at a cost of only $0.001 per bag.
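The per-ray physics behind such a deterministic simulation reduces to Beer-Lambert attenuation over the material segments a ray crosses. A minimal sketch follows; the attenuation coefficients are illustrative, and the segment lengths would come from the ray tracer.

```python
import numpy as np

def transmission(path_lengths_cm: np.ndarray, mu_per_cm: np.ndarray) -> float:
    """I/I0 for a ray crossing materials with the given in-material path
    lengths (cm) and linear attenuation coefficients (1/cm)."""
    tau = np.dot(mu_per_cm, path_lengths_cm)   # total optical depth
    return float(np.exp(-tau))

# Example: 2 cm of a water-like material plus 0.5 cm of a denser material,
# with made-up coefficients at some fixed photon energy.
print(transmission(np.array([2.0, 0.5]), np.array([0.2, 0.6])))
```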
NASA Astrophysics Data System (ADS)
Christensen, C.; Liu, S.; Scorzelli, G.; Lee, J. W.; Bremer, P. T.; Summa, B.; Pascucci, V.
2017-12-01
The creation, distribution, analysis, and visualization of large spatiotemporal datasets is a growing challenge for the study of climate and weather phenomena, in which increasingly massive domains are utilized to resolve finer features, resulting in datasets that are simply too large to be effectively shared. Existing workflows typically consist of pipelines of independent processes that preclude many possible optimizations. As data sizes increase, these pipelines are difficult or impossible to execute interactively and instead simply run as large offline batch processes. Rather than limiting our conceptualization of such systems to pipelines (or dataflows), we propose a new model for interactive data analysis and visualization systems in which we comprehensively consider the processes involved, from data inception through analysis and visualization, in order to describe systems composed of these processes in a manner that facilitates interactive implementations of the entire system rather than of only a particular component. We demonstrate the application of this new model with the implementation of an interactive system that supports progressive execution of arbitrary user scripts for the analysis and visualization of massive, disparately located climate data ensembles. It is currently in operation as part of the Earth System Grid Federation server running at Lawrence Livermore National Lab, and is accessible through both web-based and desktop clients. Our system facilitates interactive analysis and visualization of massive remote datasets up to petabytes in size, such as the 3.5 PB 7km NASA GEOS-5 Nature Run simulation, previously only possible offline or at reduced resolution. To support the community, we have enabled general distribution of our application using public frameworks including Docker and Anaconda.
Bellec, Pierre; Lavoie-Courchesne, Sébastien; Dickinson, Phil; Lerch, Jason P.; Zijdenbos, Alex P.; Evans, Alan C.
2012-01-01
The analysis of neuroimaging databases typically involves a large number of inter-connected steps called a pipeline. The pipeline system for Octave and Matlab (PSOM) is a flexible framework for the implementation of pipelines in the form of Octave or Matlab scripts. PSOM does not introduce new language constructs to specify the steps and structure of the workflow. All steps of analysis are instead described by a regular Matlab data structure, documenting their associated command and options, as well as their input, output, and cleaned-up files. The PSOM execution engine provides a number of automated services: (1) it executes jobs in parallel on a local computing facility as long as the dependencies between jobs allow for it and sufficient resources are available; (2) it generates a comprehensive record of the pipeline stages and the history of execution, which is detailed enough to fully reproduce the analysis; (3) if an analysis is started multiple times, it executes only the parts of the pipeline that need to be reprocessed. PSOM is distributed under an open-source MIT license and can be used without restriction for academic or commercial projects. The package has no external dependencies besides Matlab or Octave, is straightforward to install and supports a variety of operating systems (Linux, Windows, Mac). We ran several benchmark experiments on a public database including 200 subjects, using a pipeline for the preprocessing of functional magnetic resonance images (fMRI). The benchmark results showed that PSOM is a powerful solution for the analysis of large databases using local or distributed computing resources. PMID:22493575
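PSOM itself is Octave/Matlab; the following is a hedged Python analogue of two of the services listed above, dependency-driven parallel execution and smart restarts that skip up-to-date jobs, not PSOM's actual API. The job dictionary layout is invented for the sketch, and an acyclic dependency graph is assumed.

```python
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def up_to_date(job) -> bool:
    """Skip a job whose outputs all exist and are newer than its inputs."""
    if not all(os.path.exists(o) for o in job["outputs"]):
        return False
    newest_in = max((os.path.getmtime(i) for i in job["inputs"]), default=0.0)
    return all(os.path.getmtime(o) >= newest_in for o in job["outputs"])

def run_pipeline(jobs, workers=4):
    """jobs: name -> {"command", "inputs", "outputs", "deps"} (acyclic)."""
    done, running = set(), {}
    with ThreadPoolExecutor(workers) as pool:
        while len(done) < len(jobs):
            for name, job in jobs.items():
                if name in done or name in running.values():
                    continue
                if set(job["deps"]) <= done:        # dependencies satisfied
                    if up_to_date(job):
                        done.add(name)              # smart restart: skip it
                    else:
                        fut = pool.submit(subprocess.run, job["command"],
                                          shell=True, check=True)
                        running[fut] = name
            if running:
                ready, _ = wait(list(running), return_when=FIRST_COMPLETED)
                for fut in ready:
                    fut.result()                    # re-raise any failure
                    done.add(running.pop(fut))
```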
Implementation Of The Configurable Fault Tolerant System Experiment On NPSAT 1
2016-03-01
Master's thesis. The design comprises an open-source microprocessor without interlocked pipeline stages (MIPS) based processor softcore, a cached memory structure capable of accessing double data rate type three and secure digital card memories, an interface to the main satellite bus, and XILINX's soft error mitigation softcore.
Kim, HyunJin; Choi, Kang-Il
2016-01-01
This paper proposes a pipelined non-deterministic finite automaton (NFA)-based string matching scheme using field programmable gate array (FPGA) implementation. The characteristics of the NFA such as shared common prefixes and no failure transitions are considered in the proposed scheme. In the implementation of the automaton-based string matching using an FPGA, each state transition is implemented with a look-up table (LUT) for the combinational logic circuit between registers. In addition, multiple state transitions between stages can be performed in a pipelined fashion. In this paper, it is proposed that multiple one-to-one state transitions, called merged state transitions, can be performed with an LUT. By cutting down the number of used LUTs for implementing state transitions, the hardware overhead of combinational logic circuits is greatly reduced in the proposed pipelined NFA-based string matching scheme. PMID:27695114
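The paper's scheme is a multi-pattern hardware NFA; a compact software analogue of the same idea of simulating NFA state transitions in parallel is the bit-parallel Shift-And algorithm for a single pattern, sketched below. Each bit of `state` stands for one NFA state, so one shift-and-mask step plays the role of one pipeline stage's transition logic.

```python
def shift_and_match(text: str, pattern: str):
    """Bit-parallel NFA simulation: bit i of `state` is set iff the NFA
    state reached after matching pattern[:i+1] is active."""
    masks = {}
    for i, c in enumerate(pattern):
        masks[c] = masks.get(c, 0) | (1 << i)
    accept = 1 << (len(pattern) - 1)
    state = 0
    for pos, c in enumerate(text):
        state = ((state << 1) | 1) & masks.get(c, 0)
        if state & accept:
            yield pos - len(pattern) + 1   # start index of a match

print(list(shift_and_match("abracadabra", "abra")))  # [0, 7]
```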
NASA Technical Reports Server (NTRS)
Jenkins, Jon M.
2017-01-01
The experience acquired through development, implementation and operation of the Kepler/K2 science pipelines can provide lessons learned for the development of science pipelines for other missions such as NASA's Transiting Exoplanet Survey Satellite and ESA's PLATO mission.
Rapid Processing of Radio Interferometer Data for Transient Surveys
NASA Astrophysics Data System (ADS)
Bourke, S.; Mooley, K.; Hallinan, G.
2014-05-01
We report on a software infrastructure and pipeline developed to process large radio interferometer datasets. The pipeline is implemented using a radical redesign of the AIPS processing model. An infrastructure we have named AIPSlite is used to spawn, at runtime, minimal AIPS environments across a cluster. The pipeline then distributes and processes its data in parallel. The system is entirely free of the traditional AIPS distribution and is self configuring at runtime. This software has so far been used to process an EVLA Stripe 82 transient survey and the data for the JVLA-COSMOS project, and has been used to process most of the EVLA L-Band data archive, imaging each integration to search for short duration transients.
Geolocation Support for Water Supply and Sewerage Projects in Azerbaijan
NASA Astrophysics Data System (ADS)
Qocamanov, M. H.; Gurbanov, Ch. Z.
2016-10-01
Drinking water supply and sewerage system design and reconstruction projects are being extensively conducted in the Republic of Azerbaijan. During implementation of such projects, collecting large amounts of information about the area and carrying out detailed investigations are crucial. Joint use of aerospace monitoring and GIS plays an essential role in studying the impact of environmental factors and in developing analytical information systems, while ensuring the reliable performance of existing and designed major water supply pipelines as well as the construction and exploitation of technical installations. With our participation, a GIS has been created in "Azersu" OJSC that includes a systematic database of the drinking water supply, sewerage and rain water networks for carrying out the necessary geoinformation analyses. The GIS was created based on the "Microstation" platform and aerospace data. It should be mentioned that in the country, specifically in large cities (e.g., Baku, Ganja, Sumqait), drinking water supply pipelines cross regions with different physico-geographical conditions, geomorphological compositions and seismotectonics. Many accidents occur in main water supply lines during operation, which also creates problems for drinking water consumers, and in some cases large-scale accidents cause considerable damage. Long-term experience shows that eliminating the consequences of accidents is a major cost. Avoiding such events therefore requires improved rules for the exploitation and geodetic monitoring of pipelines: constant control of the plan-height position, detailed geodetic examination of the dynamics, and repetition of geodetic measurements at certain time intervals, in other words regular monitoring, is very important. The use of GIS during geodetic monitoring is of special significance. Given that, collecting geodetic monitoring measurements of the main pipelines in the same coordinate system and processing these data in a single GIS allows an overall assessment of the plan-height state of major water supply pipeline network facilities, as well as the study of the impact of the water supply network on the environment and, conversely, of natural processes on the major pipelines.
Reclamation of the Wahsatch gathering system pipeline in southwestern Wyoming and northeastern Utah
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strickland, D.; Dern, G.; Johnson, G.
1996-12-31
The Union Pacific Resources Company (UPRC) constructed a 40.4 mile pipeline in 1993 in Summit and Rich Counties, Utah, and Uinta County, Wyoming. The pipeline collects and delivers natural gas from six existing wells to the Whitney Canyon Processing Plant north of Evanston, Wyoming. We describe reclamation of the pipeline, the cooperation received from landowners along the right-of-way, and mitigation measures implemented by UPRC to minimize impacts to wildlife. The reclamation procedure combines a 2-step topsoil separation, mulching with natural vegetation, native seed mixes, and measures designed to reduce the visual impacts of the pipeline. Topsoil is separated into the top 4 inches of soil material, when present. The resulting top dressing is rich in native seed and rhizomes, allowing a reduced seeding rate. The borders of the right-of-way are mowed in a curvilinear pattern to reduce the straight line effects of the pipeline, and the effects of landowner cooperation on revegetation are considered. Specifically, following 2 years of monitoring, significant differences in plant cover (0.01...
SPAWAR Strategic Plan Execution Year 2017
2017-01-11
the PEO C4I domain. Completed C4I Baseline implementation activities including product roadmap system reviews, realignment of product fielding within... preloading applications in the CANES production facility to reduce installation timelines • Implemented Installation Management Office alignment and... software update process • For candidate technologies (endeavors) in the innovation pipeline, identified key attributes and acceleration factors that...
On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays
NASA Technical Reports Server (NTRS)
Shao, H. M.; Deutsch, L. J.; Reed, I. S.
1987-01-01
A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.
On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays
NASA Technical Reports Server (NTRS)
Shao, Howard M.; Reed, Irving S.
1988-01-01
A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.
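The decoders above are hardware designs, but the finite-field arithmetic they pipeline is easy to illustrate in software. The sketch below implements GF(2^8) multiplication (with the commonly used reduction polynomial 0x11d, an assumption, since the papers do not fix one) and the syndrome computation that forms the front end of a Reed-Solomon decoder; a complete decoder would continue with Euclid's algorithm and a Chien search.

```python
def gf_mul(a: int, b: int, poly: int = 0x11d) -> int:
    """Carry-less multiply in GF(2^8) with reduction by `poly`."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_pow(a: int, n: int) -> int:
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def syndromes(received: list, nsym: int, alpha: int = 2) -> list:
    """S_j = R(alpha^j) for j = 1..nsym; all zero iff no detectable error."""
    out = []
    for j in range(1, nsym + 1):
        x = gf_pow(alpha, j)
        s = 0
        for coef in received:       # Horner evaluation of the received poly
            s = gf_mul(s, x) ^ coef
        out.append(s)
    return out
```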
Real time software tools and methodologies
NASA Technical Reports Server (NTRS)
Christofferson, M. J.
1981-01-01
Real time systems are characterized by high speed processing and throughput as well as asynchronous event processing requirements. These requirements give rise to particular implementations of parallel or pipeline multitasking structures, of intertask or interprocess communications mechanisms, and of message (buffer) routing or switching mechanisms. These mechanisms or structures, along with the data structure, describe the essential character of the system. These common structural elements and mechanisms are identified, and their implementations in the form of routines, tasks or macros (in other words, tools) are formalized. The tools developed support or make available the following: reentrant task creation, generalized message routing techniques, generalized task structures/task families, standardized intertask communications mechanisms, and pipeline and parallel processing architectures in a multitasking environment. Tools development raises some interesting prospects in the areas of software instrumentation and software portability. These issues are discussed following the description of the tools themselves.
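A minimal modern sketch of the pipeline-multitasking-plus-message-routing pattern described above uses threads and queues as the intertask communication mechanism; each stage is a reentrant task reading from an input queue and writing to an output queue, with None as an end-of-stream marker. The stage functions are arbitrary placeholders.

```python
import queue
import threading

def stage(func, q_in: queue.Queue, q_out: queue.Queue):
    """One pipeline stage: consume, transform, forward."""
    while True:
        item = q_in.get()
        if item is None:          # propagate shutdown downstream
            q_out.put(None)
            break
        q_out.put(func(item))

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(lambda x: x + 1, q1, q2)).start()
threading.Thread(target=stage, args=(lambda x: x * 2, q2, q3)).start()

for x in range(5):
    q1.put(x)
q1.put(None)
while (y := q3.get()) is not None:
    print(y)                      # (x + 1) * 2 for x = 0..4
```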
A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm
NASA Astrophysics Data System (ADS)
Thirer, Nonel
2013-05-01
With the evolution of digital data storage and exchange, it is essential to protect confidential information from unauthorized access. High-performance encryption algorithms have been developed and implemented in software and hardware, and many methods to attack ciphertexts have been developed as well. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of ciphertexts and also in encryption ciphers. This paper analyses the possibility of using the genetic algorithm as a multiple key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and of using a three-stage pipeline (with four main blocks: Input data, AES Core, Key generator, Output data) to provide fast encryption and storage/transmission of a large amount of data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishkov, A.; Akopova, Gretta; Evans, Meredydd
This article will compare the natural gas transmission systems in the U.S. and Russia and review experience with methane mitigation technologies in the two countries. Russia and the United States (U.S.) are the world's largest consumers and producers of natural gas, and consequently, have some of the largest natural gas infrastructure. This paper compares the natural gas transmission systems in Russia and the U.S., their methane emissions and experiences in implementing methane mitigation technologies. Given the scale of the two systems, many international oil and natural gas companies have expressed interest in better understanding the methane emission volumes and trends as well as the methane mitigation options. This paper compares the two transmission systems and documents experiences in Russia and the U.S. in implementing technologies and programs for methane mitigation. The systems are inherently different. For instance, while the U.S. natural gas transmission system is represented by many companies, which operate pipelines with various characteristics, in Russia predominately one company, Gazprom, operates the gas transmission system. However, companies in both countries found that reducing methane emissions can be feasible and profitable. Examples of technologies in use include replacing wet seals with dry seals, implementing Directed Inspection and Maintenance (DI&M) programs, performing pipeline pump-down, applying composite wrap for non-leaking pipeline defects and installing low-bleed pneumatics. The research methodology for this paper involved a review of information on methane emissions trends and mitigation measures, analytical and statistical data collection; accumulation and analysis of operational data on compressor seals and other emission sources; and analysis of technologies used in both countries to mitigate methane emissions in the transmission sector. Operators of natural gas transmission systems have many options to reduce natural gas losses. Depending on the value of gas, simple, low-cost measures, such as adjusting leaking equipment components, or larger-scale measures, such as installing dry seals on compressors, can be applied.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Distribution Pipeline Integrity Management (IM) § 192.1015 What must...
BigDataScript: a scripting language for data pipelines.
Cingolani, Pablo; Sladek, Rob; Blanchette, Mathieu
2015-01-01
The analysis of large biological datasets often requires complex processing pipelines that run for a long time on large computational infrastructures. We designed and implemented a simple script-like programming language with a clean and minimalist syntax to develop and manage pipeline execution and provide robustness to various types of software and hardware failures as well as portability. We introduce the BigDataScript (BDS) programming language for data processing pipelines, which improves abstraction from hardware resources and assists with robustness. Hardware abstraction allows BDS pipelines to run without modification on a wide range of computer architectures, from a small laptop to multi-core servers, server farms, clusters and clouds. BDS achieves robustness by incorporating the concepts of absolute serialization and lazy processing, thus allowing pipelines to recover from errors. By abstracting pipeline concepts at programming language level, BDS simplifies implementation, execution and management of complex bioinformatics pipelines, resulting in reduced development and debugging cycles as well as cleaner code. BigDataScript is available under open-source license at http://pcingola.github.io/BigDataScript. © The Author 2014. Published by Oxford University Press.
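BDS is its own language; as a hedged Python illustration of the "absolute serialization" idea behind its error recovery, the sketch below checkpoints the pipeline position after every step, so a re-run resumes from the first unfinished step rather than restarting from scratch. The checkpoint file name and step functions are invented.

```python
import os
import pickle

CKPT = "pipeline.ckpt"   # hypothetical checkpoint file

def run_steps(steps):
    """Run steps in order, resuming after the last completed one."""
    start = 0
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            start = pickle.load(f)        # index of first unfinished step
    for i in range(start, len(steps)):
        steps[i]()                        # may raise; position is preserved
        with open(CKPT, "wb") as f:
            pickle.dump(i + 1, f)
    os.remove(CKPT)                       # clean finish: forget checkpoint

# run_steps([align, call_variants, annotate])  # hypothetical step callables
```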
A VLSI pipeline design of a fast prime factor DFT on a finite field
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Shao, H. M.; Reed, I. S.; Shyu, H. C.
1986-01-01
A conventional prime factor discrete Fourier transform (DFT) algorithm is used to realize a discrete Fourier-like transform on the finite field, GF(q^n). A pipeline structure is used to implement this prime factor DFT over GF(q^n). This algorithm is developed to compute cyclic convolutions of complex numbers and to decode Reed-Solomon codes. Such a pipeline fast prime factor DFT algorithm over GF(q^n) is regular, simple, expandable, and naturally suitable for VLSI implementation. An example illustrating the pipeline aspect of a 30-point transform over GF(q^n) is presented.
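As a compact worked analogue of a DFT over a finite field (over the prime field GF(17) rather than GF(q^n), for brevity), the sketch below uses 4 as a primitive 4th root of unity mod 17 (4^4 = 256 ≡ 1) and verifies that the inverse transform recovers the input.

```python
P, N, W = 17, 4, 4                 # modulus, transform length, N-th root of 1

def finite_field_dft(a):
    return [sum(a[n] * pow(W, k * n, P) for n in range(N)) % P
            for k in range(N)]

def finite_field_idft(A):
    w_inv = pow(W, P - 2, P)       # W^-1 mod P via Fermat's little theorem
    n_inv = pow(N, P - 2, P)       # N^-1 mod P
    return [n_inv * sum(A[k] * pow(w_inv, k * n, P) for k in range(N)) % P
            for n in range(N)]

x = [1, 2, 3, 4]
assert finite_field_idft(finite_field_dft(x)) == x
print(finite_field_dft(x))         # [10, 7, 15, 6]
```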
Maria L. Sonett
1999-01-01
Integrated surface management techniques for pipeline construction through arid and semi-arid rangeland ecosystems are presented in a case history of a 412-mile pipeline construction project in New Mexico. Planning, implementation and monitoring for restoration of surface hydrology, soil stabilization, soil cover, and plant species succession are discussed. Planning...
NASA Astrophysics Data System (ADS)
Rerucha, Simon; Sarbort, Martin; Hola, Miroslava; Cizek, Martin; Hucl, Vaclav; Cip, Ondrej; Lazar, Josef
2016-12-01
The homodyne detection with only a single detector represents a promising approach in interferometric applications: it enables a significant reduction of the optical system complexity while preserving the fundamental resolution and dynamic range of single-frequency laser interferometers. We present the design, implementation and analysis of algorithmic methods for computational processing of the single-detector interference signal, based on parallel pipelined processing suitable for real-time implementation on a programmable hardware platform (e.g. an FPGA, Field Programmable Gate Array, or an SoC, System on Chip). The algorithmic methods incorporate (a) the single-detector signal (sine) scaling, filtering, demodulation and mixing necessary for reconstruction of the second (cosine) quadrature signal, followed by a conic section projection in the Cartesian plane, and (b) the phase unwrapping together with the goniometric and linear transformations needed for scale linearization and periodic error correction. The digital computing scheme was designed for bandwidths up to tens of megahertz, which would allow measuring displacements at velocities of around half a metre per second. The algorithmic methods were tested in real-time operation with a PC-based reference implementation that exploited the advantages of pipelined processing by balancing the computational load among multiple processor cores. The results indicate that the algorithmic methods are suitable for a wide range of applications [3] and that they bring fringe counting interferometry closer to industrial applications thanks to their optical setup simplicity and robustness, computational stability, scalability and cost-effectiveness.
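A simple offline analogue of quadrature reconstruction and phase unwrapping uses the analytic signal via the Hilbert transform (the paper's FPGA pipeline uses demodulation and mixing instead, so this is only a conceptual stand-in). The signal parameters, wavelength and double-pass assumption below are invented for the demonstration.

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0.0, 1.0, 4096)
true_phase = 40 * np.pi * t + 2.0 * np.sin(2 * np.pi * 3 * t)
sine = np.sin(true_phase)                     # single-detector fringe signal

analytic = hilbert(sine)                      # adds the quadrature component
phase = np.unwrap(np.angle(analytic))         # continuous fringe phase
displacement = phase * 633e-9 / (4 * np.pi)   # metres, assuming a 633 nm
                                              # double-pass interferometer
print(displacement[-1] - displacement[0])
```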
A novel pipeline based FPGA implementation of a genetic algorithm
NASA Astrophysics Data System (ADS)
Thirer, Nonel
2014-05-01
To solve problems when an analytical solution is not available, more and more bio-inspired computation techniques have been applied in recent years. One efficient algorithm is the Genetic Algorithm (GA), which imitates the biological evolution process, finding the solution by the mechanism of "natural selection", where the strongest have higher chances to survive. A genetic algorithm is an iterative procedure which operates on a population of individuals called "chromosomes" or "possible solutions" (usually represented by a binary code). GA performs several processes with the population individuals to produce a new population, as in biological evolution. To provide a high speed solution, pipelined FPGA hardware implementations are used, with an n-stage pipeline for an n-phase genetic algorithm. The FPGA pipeline implementations are constrained by the different execution time of each stage and by the FPGA chip resources. To minimize these difficulties, we propose a bio-inspired technique to modify the crossover step by using non-identical twins. Thus two of the chosen chromosomes (parents) will build up two new chromosomes (children), rather than only one as in the classical GA. We analyze the contribution of this method to reducing the execution time in the asynchronous and synchronous pipelines and also the possibility of a cheaper FPGA implementation by using smaller populations. The full hardware architecture for an FPGA implementation on our target ALTERA development card is presented and analyzed.
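A hedged software sketch of the crossover variant described above: each selected parent pair produces two complementary ("non-identical twin") children via one-point crossover, doubling the offspring per pair relative to a single-child scheme. Chromosomes are binary strings, as in the abstract; everything else is invented for illustration.

```python
import random

def crossover_twins(p1: str, p2: str):
    """One-point crossover yielding two complementary children."""
    cut = random.randint(1, len(p1) - 1)
    child_a = p1[:cut] + p2[cut:]
    child_b = p2[:cut] + p1[cut:]   # the complementary twin
    return child_a, child_b

random.seed(1)
print(crossover_twins("11110000", "00001111"))
```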
A computational genomics pipeline for prokaryotic sequencing projects.
Kislyuk, Andrey O; Katz, Lee S; Agrawal, Sonia; Hagen, Matthew S; Conley, Andrew B; Jayaraman, Pushkala; Nelakuditi, Viswateja; Humphrey, Jay C; Sammons, Scott A; Govil, Dhwani; Mair, Raydel D; Tatti, Kathleen M; Tondella, Maria L; Harcourt, Brian H; Mayer, Leonard W; Jordan, I King
2010-08-01
New sequencing technologies have accelerated research on prokaryotic genomes and have made genome sequencing operations outside major genome sequencing centers routine. However, no off-the-shelf solution exists for the combined assembly, gene prediction, genome annotation and data presentation necessary to interpret sequencing data. The resulting requirement to invest significant resources into custom informatics support for genome sequencing projects remains a major impediment to the accessibility of high-throughput sequence data. We present a self-contained, automated high-throughput open source genome sequencing and computational genomics pipeline suitable for prokaryotic sequencing projects. The pipeline has been used at the Georgia Institute of Technology and the Centers for Disease Control and Prevention for the analysis of Neisseria meningitidis and Bordetella bronchiseptica genomes. The pipeline is capable of enhanced or manually assisted reference-based assembly using multiple assemblers and modes; gene predictor combining; and functional annotation of genes and gene products. Because every component of the pipeline is executed on a local machine with no need to access resources over the Internet, the pipeline is suitable for projects of a sensitive nature. Annotation of virulence-related features makes the pipeline particularly useful for projects working with pathogenic prokaryotes. The pipeline is licensed under the open-source GNU General Public License and available at the Georgia Tech Neisseria Base (http://nbase.biology.gatech.edu/). The pipeline is implemented with a combination of Perl, Bourne Shell and MySQL and is compatible with Linux and other Unix systems.
Implementing an X-ray validation pipeline for the Protein Data Bank
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gore, Swanand; Velankar, Sameer; Kleywegt, Gerard J., E-mail: gerard@ebi.ac.uk
2012-04-01
The implementation of a validation pipeline, based on community recommendations, for future depositions of X-ray crystal structures in the Protein Data Bank is described. There is an increasing realisation that the quality of the biomacromolecular structures deposited in the Protein Data Bank (PDB) archive needs to be assessed critically using established and powerful validation methods. The Worldwide Protein Data Bank (wwPDB) organization has convened several Validation Task Forces (VTFs) to advise on the methods and standards that should be used to validate all of the entries already in the PDB as well as all structures that will be deposited in the future. The recommendations of the X-ray VTF are currently being implemented in a software pipeline. Here, ongoing work on this pipeline is briefly described as well as ways in which validation-related information could be presented to users of structural data.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-15
... the sales natural gas pipeline or to an emissions control unit when a natural gas sales pipeline is... vapor recovery unit (VRU) to be injected into a natural gas sales pipeline for conveyance to a natural gas plant. In the event that pipeline injection of recoverable natural gas is temporarily infeasible...
Implementing a genomic data management system using iRODS in the Wellcome Trust Sanger Institute
2011-01-01
Background Increasingly large amounts of DNA sequencing data are being generated within the Wellcome Trust Sanger Institute (WTSI). The traditional file system struggles to handle these increasing amounts of sequence data. A good data management system therefore needs to be implemented and integrated into the current WTSI infrastructure. Such a system enables good management of the IT infrastructure of the sequencing pipeline and allows biologists to track their data. Results We have chosen a data grid system, iRODS (integrated Rule-Oriented Data System), to act as the data management system for the WTSI. iRODS provides a rule-based system management approach which makes data replication much easier and provides extra data protection. Unlike the metadata provided by traditional file systems, the metadata system of iRODS is comprehensive and allows users to customize their own application level metadata. Users and IT experts in the WTSI can then query the metadata to find and track data. The aim of this paper is to describe how we designed and used (from both system and user viewpoints) iRODS as a data management system. Details are given about the problems faced and the solutions found when iRODS was implemented. A simple use case describing how users within the WTSI use iRODS is also introduced. Conclusions iRODS has been implemented and works as the production system for the sequencing pipeline of the WTSI. Both biologists and IT experts can now track and manage data, which could not previously be achieved. This novel approach allows biologists to define their own metadata and query the genomic data using those metadata. PMID:21906284
The High Level Data Reduction Library
NASA Astrophysics Data System (ADS)
Ballester, P.; Gabasch, A.; Jung, Y.; Modigliani, A.; Taylor, J.; Coccato, L.; Freudling, W.; Neeser, M.; Marchetti, E.
2015-09-01
The European Southern Observatory (ESO) provides pipelines to reduce data for most of the instruments at its Very Large Telescope (VLT). These pipelines are written as part of the development of VLT instruments, and are used both in ESO's operational environment and by science users who receive VLT data. All the pipelines are highly instrument-specific. However, experience showed that the independently developed pipelines include significant overlap, duplication and slight variations of similar algorithms. In order to reduce the cost of development, verification and maintenance of ESO pipelines, and at the same time improve the scientific quality of pipeline data products, ESO decided to develop a limited set of versatile high-level scientific functions that are to be used in all future pipelines. The routines are provided by the High-level Data Reduction Library (HDRL). To reach this goal, we first compare several candidate algorithms and verify them during a prototype phase using data sets from several instruments. Once the best algorithm and error model have been chosen, we start a design and implementation phase. The coding of HDRL is done in plain C and using the Common Pipeline Library (CPL) functionality. HDRL adopts consistent function naming conventions and a well defined API to minimise future maintenance costs, implements error propagation, uses pixel quality information, employs OpenMP to take advantage of multi-core processors, and is verified with extensive unit and regression tests. This poster describes the status of the project and the lessons learned during the development of reusable code implementing algorithms of high scientific quality.
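HDRL is written in C against CPL; the sketch below is only a Python illustration of the error-propagation and pixel-quality ideas it implements, not the HDRL API. Images carry a value plane, an error plane and a bad-pixel mask, and each operation propagates variances to first order (the multiplicative form assumes non-zero data).

```python
import numpy as np

class Image:
    """Value + 1-sigma error + bad-pixel mask, propagated per operation."""
    def __init__(self, data, error, bad=None):
        self.data = np.asarray(data, float)
        self.error = np.asarray(error, float)
        self.bad = np.zeros_like(self.data, bool) if bad is None else bad

    def __add__(self, other):
        # Gaussian propagation: sigma^2 = sigma_a^2 + sigma_b^2
        return Image(self.data + other.data,
                     np.sqrt(self.error**2 + other.error**2),
                     self.bad | other.bad)

    def __mul__(self, other):
        # Relative errors add in quadrature (assumes non-zero data)
        val = self.data * other.data
        rel = np.sqrt((self.error / self.data)**2 +
                      (other.error / other.data)**2)
        return Image(val, np.abs(val) * rel, self.bad | other.bad)

a = Image([[10.0, 12.0]], [[1.0, 1.0]])
b = Image([[2.0, 3.0]], [[0.1, 0.2]])
print((a * b).error)
```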
On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, H.M.; Reed, I.S.
A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous paper is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area, therefore making it possible to build a pipeline Reed-Solomon decoder on a single VLSI chip.
Planning bioinformatics workflows using an expert system
Chen, Xiaoling; Chang, Jeffrey T.
2017-01-01
Motivation: Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. Results: To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprising a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. Availability and Implementation: https://github.com/jefftc/changlab Contact: jeffrey.t.chang@uth.tmc.edu PMID:28052928
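A miniature backward-chaining planner in the spirit described above can be sketched in a few lines: rules state which data type a tool produces from which inputs, and the engine recurses from the requested goal back to the available data. The rule names and data types below are invented for illustration and are not BETSY's actual knowledge base.

```python
RULES = [
    {"tool": "align_reads",   "out": "bam",    "in": ["fastq", "genome"]},
    {"tool": "call_variants", "out": "vcf",    "in": ["bam", "genome"]},
    {"tool": "annotate",      "out": "report", "in": ["vcf"]},
]

def plan(goal, available, rules=RULES):
    """Return an ordered tool list producing `goal` from `available`,
    or None if no rule chain reaches the goal."""
    if goal in available:
        return []
    for rule in rules:
        if rule["out"] == goal:
            steps = []
            for need in rule["in"]:          # backward-chain on each input
                sub = plan(need, available, rules)
                if sub is None:
                    break
                steps += [s for s in sub if s not in steps]
            else:
                return steps + [rule["tool"]]
    return None

print(plan("report", {"fastq", "genome"}))
# ['align_reads', 'call_variants', 'annotate']
```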
Morales-Navarrete, Hernán; Segovia-Miranda, Fabián; Klukowski, Piotr; Meyer, Kirstin; Nonaka, Hidenori; Marsico, Giovanni; Chernykh, Mikhail; Kalaidzidis, Alexander; Zerial, Marino; Kalaidzidis, Yannis
2015-01-01
A prerequisite for the systems biology analysis of tissues is an accurate digital three-dimensional reconstruction of tissue structure based on images of markers covering multiple scales. Here, we designed a flexible pipeline for the multi-scale reconstruction and quantitative morphological analysis of tissue architecture from microscopy images. Our pipeline includes newly developed algorithms that address specific challenges of thick dense tissue reconstruction. Our implementation allows for a flexible workflow, scalable to high-throughput analysis and applicable to various mammalian tissues. We applied it to the analysis of liver tissue and extracted quantitative parameters of sinusoids, bile canaliculi and cell shapes, recognizing different liver cell types with high accuracy. Using our platform, we uncovered an unexpected zonation pattern of hepatocytes with different size, nuclei and DNA content, thus revealing new features of liver tissue organization. The pipeline also proved effective to analyse lung and kidney tissue, demonstrating its generality and robustness. DOI: http://dx.doi.org/10.7554/eLife.11214.001 PMID:26673893
49 CFR Appendix C to Part 195 - Guidance for Implementation of an Integrity Management Program
Code of Federal Regulations, 2010 CFR
2010-10-01
... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED... understanding and analysis of the failure mechanisms or threats to integrity of each pipeline segment. (2) An... pipeline, information and data used for the information analysis; (13) results of the information analyses...
Capsule injection system for a hydraulic capsule pipelining system
Liu, Henry
1982-01-01
An injection system for injecting capsules into a hydraulic capsule pipelining system, the pipelining system comprising a pipeline adapted for flow of a carrier liquid therethrough, and capsules adapted to be transported through the pipeline by the carrier liquid flowing through the pipeline. The injection system comprises a reservoir of carrier liquid, the pipeline extending within the reservoir and extending downstream out of the reservoir, and a magazine in the reservoir for holding capsules in a series, one above another, for injection into the pipeline in the reservoir. The magazine has a lower end in communication with the pipeline in the reservoir for delivery of capsules from the magazine into the pipeline.
NASA Astrophysics Data System (ADS)
Tomarov, G. V.; Shipkov, A. A.; Lovchev, V. N.; Gutsev, D. F.
2016-10-01
Problems of metal flow-accelerated corrosion (FAC) in the pipelines and equipment of the condensate-feeding and wet-steam paths of NPP power-generating units (PGUs) are examined. The goals, objectives, and main principles of the methodology for implementing an integrated program of AO Concern Rosenergoatom for preventing unacceptable FAC thinning and for increasing the operational flow-accelerated corrosion resistance of NPP equipment and pipelines (EaP) are formulated (hereinafter, the Program). The role and potential of Russian software packages for evaluating and predicting the FAC rate are shown in solving practical problems of timely detection of unacceptable FAC thinning in elements of the pipelines and equipment of the secondary circuit of NPP PGUs. Information is given concerning the structure, properties, and functions of software systems that support plant personnel in monitoring and planning the in-service inspection of FAC-thinned elements of pipelines and equipment of the secondary circuit of NPP PGUs; these systems have been created and implemented at several Russian NPPs equipped with VVER-1000, VVER-440, and BN-600 reactors. One of the most important practical results of such personnel-support software packages is the identification of elements at risk of intense local FAC thinning. Examples are given of successful practice at some Russian NPPs in using these software systems for early detection of secondary-circuit pipeline elements with FAC thinning close to an unacceptable level. Intermediate results of the work on the Program are presented, and new tasks set in 2012 as part of the updated Program are outlined. The prospects of the developed methods and tools within the Program at the design and construction stages of NPP PGUs are discussed, and the main directions of work on solving the problems of flow-accelerated corrosion of pipelines and equipment in Russian NPP PGUs are defined.
iGAS: A framework for using electronic intraoperative medical records for genomic discovery.
Levin, Matthew A; Joseph, Thomas T; Jeff, Janina M; Nadukuru, Rajiv; Ellis, Stephen B; Bottinger, Erwin P; Kenny, Eimear E
2017-03-01
We designed and implemented a HIPAA- and Integrating the Healthcare Enterprise (IHE) profile-compliant automated pipeline, the integrated Genomics Anesthesia System (iGAS), linking genomic data from the Mount Sinai Health System (MSHS) BioMe biobank to electronic anesthesia records, including physiological data collected during the perioperative period. The resulting repository of multi-dimensional data can be used for precision medicine analysis of physiological readouts, acute medical conditions, and adverse events that can occur during surgery. A structured pipeline was developed atop our existing anesthesia data warehouse using open-source tools. The pipeline is automated using scheduled tasks. The pipeline runs weekly, and finds and identifies all new and existing anesthetic records for BioMe participants. The pipeline went live in June 2015 with 49.2% (n=15,673) of BioMe participants linked to 40,947 anesthetics. The pipeline runs weekly in minimal time. After eighteen months, an additional 3671 participants were enrolled in BioMe and the number of matched anesthetic records grew 21% to 49,545. The overall percentage of BioMe patients with anesthetics remained similar at 51.1% (n=18,128). Seven patients opted out during this time. The median number of anesthetics per participant was 2 (range 1-144). Collectively, there were over 35 million physiologic data points and 480,000 medication administrations linked to genomic data. To date, two projects are using the pipeline at MSHS. Automated integration of biobank and anesthetic data sources is feasible and practical. This integration enables large-scale genomic analyses that might inform variable physiological response to anesthetic and surgical stress, and examine genetic factors underlying adverse outcomes during and after surgery. Copyright © 2017 Elsevier Inc. All rights reserved.
Automatising the analysis of stochastic biochemical time-series
2015-01-01
Background Mathematical and computational modelling of biochemical systems has seen a lot of effort devoted to the definition and implementation of high-performance mechanistic simulation frameworks. Within these frameworks it is possible to analyse complex models under a variety of configurations, eventually selecting the best setting of, e.g., parameters for a target system. Motivation This operational pipeline relies on the ability to interpret the predictions of a model, often represented as simulation time-series. Thus, an efficient data analysis pipeline is crucial to automatise time-series analyses, bearing in mind that errors in this phase might mislead the modeller's conclusions. Results For this reason we have developed an intuitive framework-independent Python tool to automate analyses common to a variety of modelling approaches. These include assessment of useful non-trivial statistics for simulation ensembles, e.g., estimation of master equations. Intuitive and domain-independent batch scripts will allow the researcher to automatically prepare reports, thus speeding up the usual model-definition, testing and refinement pipeline. PMID:26051821
Li, Weifeng; Ling, Wencui; Liu, Suoxiang; Zhao, Jing; Liu, Ruiping; Chen, Qiuwen; Qiang, Zhimin; Qu, Jiuhui
2011-01-01
Water leakage in drinking water distribution systems is a serious problem for many cities and a huge challenge for water utilities. An integrated system for the detection, early warning, and control of pipeline leakage has been developed and successfully used to manage the pipeline networks in selected areas of Beijing. A method based on the geographic information system has been proposed to quickly and automatically optimize the layout of the instruments which detect leaks. Methods are also proposed to estimate the probability of each pipe segment leaking (on the basis of historic leakage data), and to assist in locating the leakage points (based on leakage signals). The district metering area (DMA) strategy is used. Guidelines and a flowchart for establishing a DMA to manage the large-scale looped networks in Beijing are proposed. These different functions have been implemented into a central software system to simplify the day-to-day use of the system. In 2007 the system detected 102 non-obvious leakages (i.e., 14.2% of the total detected in Beijing) in the selected areas, which was estimated to save a total volume of 2,385,000 m3 of water. These results indicate the feasibility, efficiency and wider applicability of this system.
Braithwaite, Jeffrey; Churruca, Kate; Long, Janet C; Ellis, Louise A; Herkes, Jessica
2018-04-30
Implementation science has a core aim - to get evidence into practice. Early in the evidence-based medicine movement, this task was construed in linear terms, wherein the knowledge pipeline moved from evidence created in the laboratory through to clinical trials and, finally, via new tests, drugs, equipment, or procedures, into clinical practice. We now know that this straight-line thinking was naïve at best, and little more than an idealization, with multiple fractures appearing in the pipeline. The knowledge pipeline derives from a mechanistic and linear approach to science, which, while delivering huge advances in medicine over the last two centuries, is limited in its application to complex social systems such as healthcare. Instead, complexity science, a theoretical approach to understanding interconnections among agents and how they give rise to emergent, dynamic, systems-level behaviors, represents an increasingly useful conceptual framework for change. Herein, we discuss what implementation science can learn from complexity science, and tease out some of the properties of healthcare systems that enable or constrain the goals we have for better, more effective, more evidence-based care. Two Australian examples, one largely top-down, predicated on applying new standards across the country, and the other largely bottom-up, adopting medical emergency teams in over 200 hospitals, provide empirical support for a complexity-informed approach to implementation. The key lessons are that change can be stimulated in many ways, but a triggering mechanism is needed, such as legislation or widespread stakeholder agreement; that feedback loops are crucial to continue change momentum; that extended sweeps of time are involved, typically much longer than believed at the outset; and that taking a systems-informed, complexity approach, having regard for existing networks and socio-technical characteristics, is beneficial. Construing healthcare as a complex adaptive system implies that getting evidence into routine practice through a step-by-step model is not feasible. Complexity science forces us to consider the dynamic properties of systems and the varying characteristics that are deeply enmeshed in social practices, whilst indicating that multiple forces, variables, and influences must be factored into any change process, and that unpredictability and uncertainty are normal properties of multi-part, intricate systems.
Fluid-structure interaction in straight pipelines with different anchoring conditions
NASA Astrophysics Data System (ADS)
Ferras, David; Manso, Pedro A.; Schleiss, Anton J.; Covas, Dídia I. C.
2017-04-01
This investigation aims at assessing the fluid-structure interaction (FSI) occurring during hydraulic transients in straight pipeline systems fixed to anchor blocks. A two-mode 4-equation model is implemented, incorporating the main interacting mechanisms: Poisson, friction and junction coupling. The resistance to movement due to inertia and dry friction of the anchor blocks is treated as junction coupling. Unsteady skin friction is taken into account in friction coupling. Experimental waterhammer tests collected from a straight copper pipe-rig are used for model validation in terms of wave shape, timing and damping. Numerical results successfully reproduce laboratory measurements for realistic values of the calibration parameters. The novelty of this paper is the presentation of a 1D FSI solver capable of describing the resistance to movement of anchor blocks and its effect on transient pressure wave propagation in straight pipelines.
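For orientation, a minimal method-of-characteristics waterhammer solver for the classical reservoir-pipe-valve test case is sketched below. It deliberately omits the FSI mechanisms the paper models (Poisson, friction and junction coupling) and assumes a rigidly anchored pipe with instantaneous valve closure; all parameter values are illustrative.

```python
import numpy as np

L, a, D, f, g = 20.0, 1200.0, 0.02, 0.02, 9.81  # length, wave speed, bore,
A = np.pi * D**2 / 4                            # friction factor, gravity
N = 20
dx = L / N
dt = dx / a                        # Courant number = 1
B = a / (g * A)                    # characteristic impedance term
R = f * dx / (2 * g * D * A**2)    # Darcy-Weisbach friction term

H0, Q0 = 30.0, 0.0002              # reservoir head (m), initial flow (m^3/s)
H = np.full(N + 1, H0)
Q = np.full(N + 1, Q0)

for step in range(400):
    Hn, Qn = H.copy(), Q.copy()
    cp = H[:-1] + B * Q[:-1] - R * Q[:-1] * np.abs(Q[:-1])  # C+ from i-1
    cm = H[1:] - B * Q[1:] + R * Q[1:] * np.abs(Q[1:])      # C- from i+1
    Hn[1:-1] = 0.5 * (cp[:-1] + cm[1:])
    Qn[1:-1] = (cp[:-1] - cm[1:]) / (2 * B)
    Hn[0], Qn[0] = H0, (H0 - cm[0]) / B        # upstream reservoir
    Qn[-1] = 0.0                               # valve closed at t = 0
    Hn[-1] = cp[-1]
    H, Q = Hn, Qn

print("head at valve after %d steps: %.1f m" % (400, H[-1]))
```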
The CHARIS Integral Field Spectrograph with SCExAO: Data Reduction and Performance
NASA Astrophysics Data System (ADS)
Kasdin, N. Jeremy; Groff, Tyler; Brandt, Timothy; Currie, Thayne; Rizzo, Maxime; Chilcote, Jeffrey K.; Guyon, Olivier; Jovanovic, Nemanja; Lozi, Julien; Norris, Barnaby; Tamura, Motohide
2018-01-01
We summarize the data reduction pipeline and on-sky performance of the CHARIS Integral Field Spectrograph behind the SCExAO Adaptive Optics system on the Subaru Telescope. The open-source pipeline produces data cubes from raw detector reads using a χ2-based spectral extraction technique. It implements a number of advances, including a fit to the full nonlinear pixel response, suppression of up to a factor of ~2 in read noise, and deconvolution of the spectra with the line-spread function. The CHARIS team is currently developing the calibration and postprocessing software that will comprise the second component of the data reduction pipeline. Here, we show a range of CHARIS images, spectra, and contrast curves produced using provisional routines. CHARIS is now characterizing exoplanets simultaneously across the J, H, and K bands.
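The core of a χ2-based extraction is a weighted linear least-squares fit of per-wavelength templates to the pixels of one microspectrum. The sketch below shows only that step; the template array and shapes are stand-ins for the pipeline's calibration products, and the real pipeline adds the nonlinear pixel-response fit and read-noise suppression mentioned above.

```python
import numpy as np

def extract_spectrum(pixels, ivar, templates):
    """Chi^2 (weighted least-squares) spectral extraction.
    pixels: (npix,) detector values; ivar: (npix,) inverse variances;
    templates: (nlam, npix) per-wavelength PSFlet images.
    Returns the best-fit flux at each wavelength, shape (nlam,)."""
    w = np.sqrt(ivar)
    A = (templates * w).T            # whitened design matrix, (npix, nlam)
    b = pixels * w                   # whitened data vector
    flux, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flux
```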
Pipeline active filter utilizing a booth type multiplier
NASA Technical Reports Server (NTRS)
Nathan, Robert (Inventor)
1987-01-01
Multiplier units of the modified Booth decoder and carry-save adder/full adder combination are used to implement a pipeline active filter wherein pixel data is processed sequentially, and each pixel need only be accessed once and multiplied by a predetermined number of weights simultaneously, one multiplier unit for each weight. Each multiplier unit uses only one row of carry-save adders; the results are shifted to less significant multiplier positions, and one row of full adders adds the carry to the sum in order to provide the correct binary number for the product Wp. The full adder is also used to add this product Wp to the sum of products ΣWp from preceding multiply units. If m×m multiplier units are pipelined, the system would be capable of processing a kernel array of m×m weighting factors.
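The recoding performed by each modified Booth decoder can be checked in software: a radix-4 Booth recoder turns the multiplier into digits in {-2, -1, 0, 1, 2}, halving the number of partial products, which is what makes a single row of carry-save adders per unit practical. The sketch below models only the arithmetic, not the carry-save hardware.

```python
def booth_digits(m: int, bits: int):
    """Radix-4 modified Booth digits of a non-negative multiplier, LSB
    first: d_i = b_{2i} + b_{2i-1} - 2*b_{2i+1}, with implicit b_{-1} = 0."""
    digits, prev = [], 0
    for i in range(0, bits, 2):
        b0 = (m >> i) & 1
        b1 = (m >> (i + 1)) & 1
        digits.append(-2 * b1 + b0 + prev)
        prev = b1
    return digits

def booth_multiply(a: int, m: int, bits: int = 8) -> int:
    """Sum of partial products d_i * a * 4^i."""
    return sum((d * a) << (2 * i)
               for i, d in enumerate(booth_digits(m, bits)))

assert booth_multiply(57, 113) == 57 * 113
print(booth_digits(113, 8))   # [1, 0, -1, 2]: four partials for 8 bits
```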
Experimental and analytical study of water pipe's rupture for damage identification purposes
NASA Astrophysics Data System (ADS)
Papakonstantinou, Konstantinos G.; Shinozuka, Masanobu; Beikae, Mohsen
2011-04-01
A malfunction, local damage or sudden pipe break of a pipeline system can trigger significant flow variations. As shown in the paper, pressure variations and pipe vibrations are two strongly correlated parameters. A sudden change in the flow velocity and pressure of a pipeline system can induce pipe vibrations. Thus, based on acceleration data, a rapid detection and localization of a possible damage may be carried out by inexpensive, nonintrusive monitoring techniques. To illustrate this approach, an experiment on a single pipe was conducted in the laboratory. Pressure gauges and accelerometers were installed and their correlation was checked during an artificially created transient flow. The experimental findings validated the correlation between the parameters. The interaction between pressure variations and pipe vibrations was also theoretically justified. The developed analytical model explains the connection among flow pressure, velocity, pressure wave propagation and pipe vibration. The proposed method provides a rapid, efficient and practical way to identify and locate sudden failures of a pipeline system and sets firm foundations for the development and implementation of an advanced, new generation Supervisory Control and Data Acquisition (SCADA) system for continuous health monitoring of pipe networks.
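The correlation check at the heart of the method above can be prototyped with a lag-resolved cross-correlation between the pressure and acceleration records. The signals, sample rate and injected lag below are synthetic stand-ins for the gauge and accelerometer data.

```python
import numpy as np

fs = 2000.0                                   # assumed sample rate, Hz
t = np.arange(0.0, 2.0, 1 / fs)
pressure = np.exp(-5 * t) * np.sin(2 * np.pi * 30 * t)   # decaying surge
rng = np.random.default_rng(0)
accel = 0.8 * np.roll(pressure, 40) + 0.05 * rng.standard_normal(t.size)

# Normalized cross-correlation; the peak lag estimates the coupling delay.
p = (pressure - pressure.mean()) / pressure.std()
a = (accel - accel.mean()) / accel.std()
xcorr = np.correlate(a, p, mode="full") / t.size
lag = np.argmax(np.abs(xcorr)) - (t.size - 1)
print("peak |correlation| %.2f at lag %.1f ms"
      % (np.abs(xcorr).max(), 1000 * lag / fs))
```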
CloudNeo: a cloud pipeline for identifying patient-specific tumor neoantigens.
Bais, Preeti; Namburi, Sandeep; Gatti, Daniel M; Zhang, Xinyu; Chuang, Jeffrey H
2017-10-01
We present CloudNeo, a cloud-based computational workflow for identifying patient-specific tumor neoantigens from next generation sequencing data. Tumor-specific mutant peptides can be detected by the immune system through their interactions with the human leukocyte antigen complex, and neoantigen presence has recently been shown to correlate with anti T-cell immunity and efficacy of checkpoint inhibitor therapy. However computing capabilities to identify neoantigens from genomic sequencing data are a limiting factor for understanding their role. This challenge has grown as cancer datasets become increasingly abundant, making them cumbersome to store and analyze on local servers. Our cloud-based pipeline provides scalable computation capabilities for neoantigen identification while eliminating the need to invest in local infrastructure for data transfer, storage or compute. The pipeline is a Common Workflow Language (CWL) implementation of human leukocyte antigen (HLA) typing using Polysolver or HLAminer combined with custom scripts for mutant peptide identification and NetMHCpan for neoantigen prediction. We have demonstrated the efficacy of these pipelines on Amazon cloud instances through the Seven Bridges Genomics implementation of the NCI Cancer Genomics Cloud, which provides graphical interfaces for running and editing, infrastructure for workflow sharing and version tracking, and access to TCGA data. The CWL implementation is at: https://github.com/TheJacksonLaboratory/CloudNeo. For users who have obtained licenses for all internal software, integrated versions in CWL and on the Seven Bridges Cancer Genomics Cloud platform (https://cgc.sbgenomics.com/, recommended version) can be obtained by contacting the authors. jeff.chuang@jax.org. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
KAnalyze: a fast versatile pipelined K-mer toolkit
Audano, Peter; Vannberg, Fredrik
2014-01-01
Motivation: Converting nucleotide sequences into short overlapping fragments of uniform length, k-mers, is a common step in many bioinformatics applications. While existing software packages count k-mers, few are optimized for speed, offer an application programming interface (API), a graphical interface or contain features that make it extensible and maintainable. We designed KAnalyze to compete with the fastest k-mer counters, to produce reliable output and to support future development efforts through well-architected, documented and testable code. Currently, KAnalyze can output k-mer counts in a sorted tab-delimited file or stream k-mers as they are read. KAnalyze can process large datasets with 2 GB of memory. This project is implemented in Java 7, and the command line interface (CLI) is designed to integrate into pipelines written in any language. Results: As a k-mer counter, KAnalyze outperforms Jellyfish, DSK and a pipeline built on Perl and Linux utilities. Through extensive unit and system testing, we have verified that KAnalyze produces the correct k-mer counts over multiple datasets and k-mer sizes. Availability and implementation: KAnalyze is available on SourceForge: https://sourceforge.net/projects/kanalyze/ Contact: fredrik.vannberg@biology.gatech.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24642064
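To make the k-mer step concrete, here is a minimal Java sketch of k-mer counting by sliding a window over a sequence; it illustrates the operation KAnalyze performs but is not the KAnalyze API, and all names are hypothetical.

    import java.util.HashMap;
    import java.util.Map;

    // Minimal k-mer counting sketch (not the KAnalyze API): slide a window of
    // length k over the sequence and tally each substring in a hash map.
    public final class KmerCount {
        static Map<String, Integer> count(String seq, int k) {
            Map<String, Integer> counts = new HashMap<>();
            for (int i = 0; i + k <= seq.length(); i++) {
                String kmer = seq.substring(i, i + k);
                counts.merge(kmer, 1, Integer::sum);
            }
            return counts;
        }

        public static void main(String[] args) {
            System.out.println(count("ACGTACGT", 4)); // ACGT=2, CGTA=1, GTAC=1, TACG=1
        }
    }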
PROGRAPE-1: A Programmable, Multi-Purpose Computer for Many-Body Simulations
NASA Astrophysics Data System (ADS)
Hamada, Tsuyoshi; Fukushige, Toshiyuki; Kawai, Atsushi; Makino, Junichiro
2000-10-01
We have developed PROGRAPE-1 (PROgrammable GRAPE-1), a programmable multi-purpose computer for many-body simulations. The main difference between PROGRAPE-1 and "traditional" GRAPE systems is that the former uses FPGA (Field Programmable Gate Array) chips as the processing elements, while the latter relies on a hardwired pipeline processor specialized to gravitational interactions. Since the logic implemented in FPGA chips can be reconfigured, we can use PROGRAPE-1 to calculate not only gravitational interactions, but also other forms of interactions, such as the van der Waals force, hydrodynamical interactions in SPH calculations, and so on. PROGRAPE-1 comprises two Altera EPF10K100 FPGA chips, each of which contains nominally 100,000 gates. To evaluate the programmability and performance of PROGRAPE-1, we implemented a pipeline for gravitational interactions similar to that of GRAPE-3. One pipeline fits into a single FPGA chip operated at a 16 MHz clock. Thus, for gravitational interactions, PROGRAPE-1 provided a speed of 0.96 Gflops-equivalent. PROGRAPE will prove to be useful for a wide range of particle-based simulations in which the calculation cost of interactions other than gravity is high, such as the evaluation of SPH interactions.
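The interaction such a pipeline evaluates can be sketched in software as a softened pairwise force summation; the Java below is an illustrative analogue of what a GRAPE-style hardware pipeline computes per particle, not the PROGRAPE design itself.

    // Software analogue of a GRAPE-style gravity pipeline: for particle i,
    // accumulate the softened gravitational acceleration from all particles j.
    public final class GravityPipeline {
        static double[] accel(int i, double[][] pos, double[] mass, double eps) {
            double ax = 0, ay = 0, az = 0;
            for (int j = 0; j < pos.length; j++) {
                if (j == i) continue;
                double dx = pos[j][0] - pos[i][0];
                double dy = pos[j][1] - pos[i][1];
                double dz = pos[j][2] - pos[i][2];
                double r2 = dx * dx + dy * dy + dz * dz + eps * eps; // softening
                double inv = mass[j] / (r2 * Math.sqrt(r2));         // m_j / r^3
                ax += dx * inv; ay += dy * inv; az += dz * inv;
            }
            return new double[] {ax, ay, az};
        }
    }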
Design and implementation of highly parallel pipelined VLSI systems
NASA Astrophysics Data System (ADS)
Delange, Alphonsus Anthonius Jozef
A methodology and its realization as a prototype CAD (Computer Aided Design) system for the design and analysis of complex multiprocessor systems are presented. The design is an iterative process in which the behavioral specifications of the system components are refined into structural descriptions consisting of interconnections and lower-level components. A model for the representation and analysis of multiprocessor systems at several levels of abstraction, and an implementation of a CAD system based on this model, are described. A high-level design language, an object-oriented development kit for tool design, a design data management system, and design and analysis tools such as a high-level simulator and a graphics design interface, all integrated into the prototype system, are described. Procedures are given for the synthesis of semiregular processor arrays, covering the switching of input/output signals, memory management and control of the processor array, and the sequencing and segmentation of input/output data streams that result from partitioning and clustering of the processor array during the subsequent synthesis steps. The architecture and control of a parallel system are designed, and each component is mapped to a module or module generator in a symbolic layout library and compacted for the design rules of VLSI (Very Large Scale Integration) technology. An example is given of the design of a processor that is a useful building block for highly parallel pipelined systems in the signal/image processing domains.
NASA Astrophysics Data System (ADS)
Duan, Yanzhi
2017-01-01
The gas pipeline networks in the Sichuan and Chongqing (Sichuan-Chongqing) region form a fully-fledged gas pipeline transportation system in China, which supports and promotes the rapid development of the gas market in the region. As the market-oriented economy develops further, it is necessary to press ahead with pipeline system reform in the areas of the investment/financing system, the operation system and the pricing system, so as to lay a solid foundation for improving future gas production and marketing capability, to adapt to the national gas system reform, and to achieve the objectives of pipeline construction with multiparty participation, improved pipeline transportation efficiency, and fair and rational pipeline transportation prices. This article addresses the main thinking on reform in these three areas and the major deployment, and recommends corresponding measures: developing a shared pipeline economy, providing financial support for pipeline construction, setting up an independent regulatory agency to strengthen industry supervision of gas pipeline transportation, and promoting the construction of a regional gas trading market.
A single chip VLSI Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Hsu, I. S.; Deutsch, L. J.; Reed, I. S.
1986-01-01
A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous design is replaced by a time domain algorithm. A new architecture that implements such an algorithm permits efficient pipeline processing with minimum circuitry. A systolic array is also developed to perform erasure corrections in the new design. A modified form of Euclid's algorithm is implemented by a new architecture that maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and a significant reduction in silicon area, therefore making it possible to build a pipeline (31,15)RS decoder on a single VLSI chip.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wynne, Adam S.
2011-05-05
In many application domains in science and engineering, data produced by sensors, instruments and networks is naturally processed by software applications structured as a pipeline . Pipelines comprise a sequence of software components that progressively process discrete units of data to produce a desired outcome. For example, in a Web crawler that is extracting semantics from text on Web sites, the first stage in the pipeline might be to remove all HTML tags to leave only the raw text of the document. The second step may parse the raw text to break it down into its constituent grammatical parts, such as nouns, verbs and so on. Subsequent steps may look for names of people or places, interesting events or times so documents can be sequenced on a time line. Each of these steps can be written as a specialized program that works in isolation with other steps in the pipeline. In many applications, simple linear software pipelines are sufficient. However, more complex applications require topologies that contain forks and joins, creating pipelines comprising branches where parallel execution is desirable. It is also increasingly common for pipelines to process very large files or high volume data streams which impose end-to-end performance constraints. Additionally, processes in a pipeline may have specific execution requirements and hence need to be distributed as services across a heterogeneous computing and data management infrastructure. From a software engineering perspective, these more complex pipelines become problematic to implement. While simple linear pipelines can be built using minimal infrastructure such as scripting languages, complex topologies and large, high volume data processing requires suitable abstractions, run-time infrastructures and development tools to construct pipelines with the desired qualities-of-service and flexibility to evolve to handle new requirements. The above summarizes the reasons we created the MeDICi Integration Framework (MIF) that is designed for creating high-performance, scalable and modifiable software pipelines. MIF exploits a low friction, robust, open source middleware platform and extends it with component and service-based programmatic interfaces that make implementing complex pipelines simple. The MIF run-time automatically handles queues between pipeline elements in order to handle request bursts, and automatically executes multiple instances of pipeline elements to increase pipeline throughput. Distributed pipeline elements are supported using a range of configurable communications protocols, and the MIF interfaces provide efficient mechanisms for moving data directly between two distributed pipeline elements.
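A minimal sketch of the linear-pipeline pattern described above, assuming simple string-to-string stages; this illustrates the idea only and is not the MIF programmatic interface.

    import java.util.List;
    import java.util.function.UnaryOperator;

    // Illustrative linear pipeline in the style described above (not the MIF API):
    // each stage transforms a discrete unit of data and hands it to the next.
    public final class TextPipeline {
        private final List<UnaryOperator<String>> stages;

        TextPipeline(List<UnaryOperator<String>> stages) { this.stages = stages; }

        String run(String input) {
            String data = input;
            for (UnaryOperator<String> stage : stages) data = stage.apply(data);
            return data;
        }

        public static void main(String[] args) {
            TextPipeline p = new TextPipeline(List.of(
                html -> html.replaceAll("<[^>]*>", " "), // stage 1: strip HTML tags
                text -> text.trim().toLowerCase()        // stage 2: normalize the raw text
            ));
            System.out.println(p.run("<p>Web <b>Crawler</b> Output</p>"));
        }
    }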
A First Step towards a Clinical Decision Support System for Post-traumatic Stress Disorders.
Ma, Sisi; Galatzer-Levy, Isaac R; Wang, Xuya; Fenyö, David; Shalev, Arieh Y
2016-01-01
PTSD is distressful and debilitating, following a non-remitting course in about 10% to 20% of trauma survivors. Numerous risk indicators of PTSD have been identified, but individual-level prediction remains elusive. As an effort to bridge the gap between scientific discovery and practical application, we designed and implemented a clinical decision support pipeline to provide clinically relevant recommendations for trauma survivors. To meet the specific challenge of early prediction, this work uses data obtained within ten days of a traumatic event. The pipeline creates a personalized predictive model for each individual and computes quality metrics for each predictive model. Clinical recommendations are made based on both the prediction of the model and its quality, thus avoiding potentially detrimental recommendations based on insufficient information or a suboptimal model. The current pipeline outperforms acute stress disorder, a commonly used clinical risk factor for PTSD development, in terms of both sensitivity and specificity.
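The quality-gating idea can be sketched as follows; the metric (AUC), the threshold and all names are assumptions chosen for illustration, not the authors' model.

    // Sketch of a quality-gated recommendation: act on a personalized model's
    // prediction only when its quality metric clears a threshold. The AUC
    // threshold of 0.7 and the labels are illustrative assumptions.
    public final class QualityGatedRecommender {
        enum Recommendation { INTERVENE, MONITOR, INSUFFICIENT_EVIDENCE }

        static Recommendation recommend(double predictedRisk, double modelAuc) {
            if (modelAuc < 0.7) return Recommendation.INSUFFICIENT_EVIDENCE; // weak model
            return predictedRisk > 0.5 ? Recommendation.INTERVENE : Recommendation.MONITOR;
        }
    }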
Low-complexity nonlinear adaptive filter based on a pipelined bilinear recurrent neural network.
Zhao, Haiquan; Zeng, Xiangping; He, Zhengyou
2011-09-01
To reduce the computational complexity of the bilinear recurrent neural network (BLRNN), a novel low-complexity nonlinear adaptive filter with a pipelined bilinear recurrent neural network (PBLRNN) is presented in this paper. The PBLRNN, inheriting the modular architecture of the pipelined RNN proposed by Haykin and Li, comprises a number of BLRNN modules that are cascaded in a chained form. Each module is implemented by a small-scale BLRNN with internal dynamics. Since the modules of the PBLRNN can be executed simultaneously in pipelined parallel fashion, a significant improvement in computational efficiency results. Moreover, due to the nesting of modules, the performance of the PBLRNN can be further improved. To suit the modular architecture, a modified adaptive amplitude real-time recurrent learning algorithm is derived using the gradient descent approach. Extensive simulations are carried out to evaluate the performance of the PBLRNN on nonlinear system identification, nonlinear channel equalization, and chaotic time series prediction. Experimental results show that the PBLRNN provides considerably better performance than the single BLRNN and RNN models.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-22
... interconnect pipelines to four existing offshore pipelines (Dauphin Natural Gas Pipeline, Williams Natural Gas Pipeline, Destin Natural Gas Pipeline, and Viosca Knoll Gathering System [VKGS] Gas Pipeline) that connect to the onshore natural gas transmission pipeline system. Natural gas would be delivered to customers...
Parallel processing in a host plus multiple array processor system for radar
NASA Technical Reports Server (NTRS)
Barkan, B. Z.
1983-01-01
Host plus multiple array processor architecture is demonstrated to yield a modular, fast, and cost-effective system for radar processing. Software methodology for programming such a system is developed. Parallel processing with pipelined data flow among the host, array processors, and discs is implemented. Theoretical analysis of performance is made and experimentally verified. The broad class of problems to which the architecture and methodology can be applied is indicated.
ERIC Educational Resources Information Center
Palmer, Mark H.; Elmore, R. Douglas; Watson, Mary Jo; Kloesel, Kevin; Palmer, Kristen
2009-01-01
Very few Native American students pursue careers in the geosciences. To address this national problem, several units at the University of Oklahoma are implementing a geoscience "pipeline" program that is designed to increase the number of Native American students entering geoscience disciplines. One of the program's strategies includes…
NASA Astrophysics Data System (ADS)
Elkurdi, Yousef; Fernández, David; Souleimanov, Evgueni; Giannacopoulos, Dennis; Gross, Warren J.
2008-04-01
The Finite Element Method (FEM) is a computationally intensive scientific and engineering analysis tool that has diverse applications ranging from structural engineering to electromagnetic simulation. The trends in floating-point performance are moving in favor of Field-Programmable Gate Arrays (FPGAs), hence interest in exploiting this technology has grown in the scientific community. We present an architecture and implementation of an FPGA-based sparse matrix-vector multiplier (SMVM) for use in the iterative solution of large, sparse systems of equations arising from FEM applications. FEM matrices display specific sparsity patterns that can be exploited to improve the efficiency of hardware designs. Our architecture exploits FEM matrix sparsity structure to achieve a balance between performance and hardware resource requirements by relying on external SDRAM for data storage while utilizing the FPGA's computational resources in a stream-through systolic approach. The architecture is based on a pipelined linear array of processing elements (PEs) coupled with a hardware-oriented matrix striping algorithm and a partitioning scheme which enables it to process arbitrarily large matrices without changing the number of PEs in the architecture. Therefore, this architecture is limited only by the amount of external RAM available to the FPGA. The implemented SMVM-pipeline prototype contains 8 PEs and is clocked at 110 MHz, obtaining a peak performance of 1.76 GFLOPS. For 8 GB/s of memory bandwidth, typical of recent FPGA systems, this architecture can achieve 1.5 GFLOPS sustained performance. Using multiple instances of the pipeline, linear scaling of the peak and sustained performance can be achieved. Our stream-through architecture provides the added advantage of enabling an iterative implementation of the SMVM computation required by iterative solution techniques such as the conjugate gradient method, avoiding initialization time due to data loading and setup inside the FPGA internal memory.
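A software analogue of the streamed SMVM kernel, assuming compressed sparse row (CSR) storage; the FPGA design pipelines this accumulation across PEs, so the sketch shows only the arithmetic, not the architecture.

    // Software analogue of the streaming SMVM kernel y = A*x with A stored in
    // compressed sparse row (CSR) form; the hardware streams the nonzeros of
    // each row through a pipelined accumulator.
    public final class SparseMatVec {
        static double[] multiply(int[] rowPtr, int[] colIdx, double[] val, double[] x) {
            int rows = rowPtr.length - 1;
            double[] y = new double[rows];
            for (int r = 0; r < rows; r++) {
                double acc = 0;
                for (int k = rowPtr[r]; k < rowPtr[r + 1]; k++) {
                    acc += val[k] * x[colIdx[k]]; // stream nonzeros of row r
                }
                y[r] = acc;
            }
            return y;
        }
    }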
Scalable Metadata Management for a Large Multi-Source Seismic Data Repository
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaylord, J. M.; Dodge, D. A.; Magana-Zook, S. A.
In this work, we implemented the key metadata management components of a scalable seismic data ingestion framework to address limitations in our existing system, and to position it for anticipated growth in volume and complexity. We began the effort with an assessment of open source data flow tools from the Hadoop ecosystem. We then began the construction of a layered architecture that is specifically designed to address many of the scalability and data quality issues we experience with our current pipeline. This included implementing basic functionality in each of the layers, such as establishing a data lake, designing a unified metadata schema, tracking provenance, and calculating data quality metrics. Our original intent was to test and validate the new ingestion framework with data from a large-scale field deployment in a temporary network. This delivered somewhat unsatisfying results, since the new system immediately identified fatal flaws in the data relatively early in the pipeline. Although this is a correct result, it did not allow us to sufficiently exercise the whole framework. We then widened our scope to process all available metadata from over a dozen online seismic data sources to further test the implementation and validate the design. This experiment also uncovered a higher than expected frequency of certain types of metadata issues that challenged us to further tune our data management strategy to handle them. Our result from this project is a greatly improved understanding of real world data issues, a validated design, and prototype implementations of major components of an eventual production framework. This successfully forms the basis of future development for the Geophysical Monitoring Program data pipeline, which is a critical asset supporting multiple programs. It also positions us very well to deliver valuable metadata management expertise to our sponsors, and has already resulted in an NNSA Office of Defense Nuclear Nonproliferation commitment to a multi-year project for follow-on work.
Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis
Ollenschläger, Malte; Roth, Nils; Klucken, Jochen
2017-01-01
Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In the literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis. PMID:28832511
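One double-integration scheme can be sketched as trapezoidal integration per stride, assuming gravity-compensated acceleration and a stride that starts from rest; this is a generic sketch, not one of the specific methods benchmarked in the paper.

    // Generic double-integration sketch: trapezoidal integration of
    // gravity-compensated acceleration to velocity, then to position,
    // assuming the stride starts from rest (vel[0] = 0, pos[0] = 0).
    public final class StrideIntegrator {
        static double[] integratePosition(double[] accel, double dt) {
            double[] vel = new double[accel.length];
            double[] pos = new double[accel.length];
            for (int i = 1; i < accel.length; i++) {
                vel[i] = vel[i - 1] + 0.5 * (accel[i - 1] + accel[i]) * dt;
                pos[i] = pos[i - 1] + 0.5 * (vel[i - 1] + vel[i]) * dt;
            }
            return pos;
        }
    }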
Design and Implementation of CNEOST Image Database Based on NoSQL System
NASA Astrophysics Data System (ADS)
Wang, X.
2013-07-01
The China Near Earth Object Survey Telescope (CNEOST) is the largest Schmidt telescope in China, and it has acquired more than 3 TB of astronomical image data since it saw first light in 2006. After the upgrade of the CCD camera in 2013, over 10 TB of data will be obtained every year. The management of massive images is not only an indispensable part of the data processing pipeline but also the basis of data sharing. Based on an analysis of the requirements, an image management system is designed and implemented employing a non-relational database.
2017-03-01
Contribution to Project: Ian primarily focuses on developing the tissue imaging pipeline and performing imaging data analysis. Funding Support: Partially... (3D ReconsTruction), a multi-faceted image analysis pipeline permitting quantitative interrogation of the functional implications of heterogeneous... analysis pipeline, to observe and quantify phenotypic metastatic landscape heterogeneity in situ with spatial and molecular resolution. Our implementation...
Combined Natural Gas and Solar Technologies for Heating and Cooling in the City of NIS in Serbia
NASA Astrophysics Data System (ADS)
Stefanović, Velimir P.; Bojić, Milorad Lj.
2010-06-01
The use of conventional systems for heat and electricity production in Niš and in Serbia generally means a constant waste of energy and money. This problem is present in both the industrial and the public sector. Conventional systems are not only energy-inefficient but also rely on very "dirty" technologies that cause heavy environmental pollution. A shortage of electricity in the country and the region is also evident. The gas pipeline in Niš was finished not long ago, and a second gas pipeline is to be built in the next couple of years. This opens a door for implementing new technologies and new methods for the production of heat and electricity while preserving the environment. This paper reports a discussion of this technology with the management of public institutions, which use both heat and electricity.
78 FR 42889 - Pipeline Safety: Reminder of Requirements for Utility LP-Gas and LPG Pipeline Systems
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-18
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part 192 [Docket No. PHMSA-2013-0097] Pipeline Safety: Reminder of Requirements for Utility LP-Gas and LPG Pipeline Systems AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION...
Design of oil pipeline leak detection and communication system based on optical fiber technology
NASA Astrophysics Data System (ADS)
Tu, Yaqing; Chen, Huabo
1999-08-01
The integrity of oil pipelines is always a major concern of operators. A pipeline leak not only leads to loss of oil but also pollutes the environment. A new pipeline leak detection and communication system based on optical fiber technology is presented to ensure pipeline reliability. By combining a direct leak detection method with an indirect one, the system greatly reduces the rate of false alarms. According to the practical features of oil pipelines, the pipeline communication system is designed employing state-of-the-art optical fiber communication technology. The system offers such features as high location accuracy of leak detection and good real-time behavior, which effectively overcome the disadvantages of traditional leak detection methods and communication systems.
NASA Astrophysics Data System (ADS)
Zou, Liang; Fu, Zhuang; Zhao, YanZheng; Yang, JunYan
2010-07-01
This paper proposes a pipelined electric circuit architecture implemented in an FPGA, a very large scale integrated circuit (VLSI), which efficiently handles the real-time non-uniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPA). Dual Nios II soft-core processors and a DSP with a 64+ core together constitute this image system, each processor undertaking its own task and coordinating its work with the others. The system on programmable chip (SOPC) in the FPGA works steadily at a global clock frequency of 96 MHz. Adequate timing margin allows the FPGA to perform the NUC image pre-processing algorithm with ease, which offers a favorable guarantee for the post-processing work in the DSP. In addition, this paper presents a hardware (HW) and software (SW) co-design in the FPGA. This architecture thus yields a multiprocessor image processing system and a smart solution that meets the system's performance requirements.
About U.S. Natural Gas Pipelines
2007-01-01
This information product provides the interested reader with a broad and non-technical overview of how the U.S. natural gas pipeline network operates, along with some insights into the many individual pipeline systems that make up the network. While the focus of the presentation is the transportation of natural gas over the interstate and intrastate pipeline systems, information on subjects related to pipeline development, such as system design and pipeline expansion, is also included.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Distribution systems reporting transmission pipelines; transmission or gathering systems reporting distribution pipelines. 191.13 Section 191.13 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF...
Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha
2016-02-27
Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
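The local-versus-cloud decision mentioned above can be sketched as a break-even comparison; the formula and parameters here are illustrative assumptions, not the authors' published cost model.

    // Hedged sketch of a local-vs-cloud break-even comparison in the spirit of
    // the cost/benefit analysis mentioned above; the formula and rates are
    // illustrative, not the authors' published model.
    public final class CloudBreakEven {
        static boolean cloudIsCheaper(int jobs, double hoursPerJob,
                                      double ec2RatePerHour, double localCostPerHour,
                                      int localConcurrency) {
            double cloudCost = jobs * hoursPerJob * ec2RatePerHour;    // pay per job-hour
            double localHours = jobs * hoursPerJob / localConcurrency; // serialized locally
            double localCost = localHours * localCostPerHour;
            return cloudCost < localCost;
        }
    }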
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-21
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2012-0024] Pipeline Safety: Information Collection Activities, Revision to Gas Transmission and Gathering Pipeline Systems Annual Report, Gas Transmission and Gathering Pipeline Systems Incident Report...
Real-time inspection by submarine images
NASA Astrophysics Data System (ADS)
Tascini, Guido; Zingaretti, Primo; Conte, Giuseppe
1996-10-01
A real-time application of computer vision concerning the tracking and inspection of a submarine pipeline is described. The objective is to develop automatic procedures for supporting human operators in the real-time analysis of images acquired by cameras mounted on underwater remotely operated vehicles (ROVs). Implementation of such procedures gives rise to a human-machine system for underwater pipeline inspection that can automatically detect and signal the presence of the pipe, of its structural or accessory elements, and of dangerous or alien objects in its neighborhood. The possibility of modifying the image acquisition rate in the simulations performed on video-recorded images is used to prove that the system performs all necessary processing with acceptable robustness, working in real time up to a speed of about 2.5 kn, well beyond what actual ROVs and safety requirements allow.
Sato, Yukuto; Miya, Masaki; Fukunaga, Tsukasa; Sado, Tetsuya; Iwasaki, Wataru
2018-06-01
Fish mitochondrial genome (mitogenome) data form a fundamental basis for revealing vertebrate evolution and hydrosphere ecology. Here, we report recent functional updates of MitoFish, which is a database of fish mitogenomes with a precise annotation pipeline MitoAnnotator. Most importantly, we describe implementation of MiFish pipeline for metabarcoding analysis of fish mitochondrial environmental DNA, which is a fast-emerging and powerful technology in fish studies. MitoFish, MitoAnnotator, and MiFish pipeline constitute a key platform for studies of fish evolution, ecology, and conservation, and are freely available at http://mitofish.aori.u-tokyo.ac.jp/ (last accessed April 7th, 2018).
Multinode reconfigurable pipeline computer
NASA Technical Reports Server (NTRS)
Nosenchuck, Daniel M. (Inventor); Littman, Michael G. (Inventor)
1989-01-01
A multinode parallel-processing computer is made up of a plurality of interconnected, large-capacity nodes, each including a reconfigurable pipeline of functional units such as Integer Arithmetic Logic Processors, Floating Point Arithmetic Processors, Special Purpose Processors, etc. The reconfigurable pipeline of each node is connected to a multiplane memory by a Memory-ALU switch NETwork (MASNET). The reconfigurable pipeline includes three (3) basic substructures formed from functional units which have been found to be sufficient to perform the bulk of all calculations. The MASNET controls the flow of signals from the memory planes to the reconfigurable pipeline and vice versa. The nodes are connectable together by an internode data router (hyperspace router) so as to form a hypercube configuration. The capability of the nodes to conditionally configure the pipeline at each tick of the clock, without requiring a pipeline flush, permits many powerful algorithms to be implemented directly.
NASA Technical Reports Server (NTRS)
Couvidat, S.; Zhao, J.; Birch, A. C.; Kosovichev, A. G.; Duvall, T. L., Jr.; Parchevsky, K.; Scherrer, P. H.
2009-01-01
The Helioseismic and Magnetic Imager (HMI) instrument on board the Solar Dynamics Observatory (SDO) satellite is designed to produce high-resolution Doppler velocity maps of oscillations at the solar surface with high temporal cadence. To take advantage of these high-quality oscillation data, a time-distance helioseismology pipeline has been implemented at the Joint Science Operations Center (JSOC) at Stanford University. The aim of this pipeline is to generate maps of acoustic travel times from oscillations on the solar surface, and to infer subsurface 3D flow velocities and sound-speed perturbations. The wave travel times are measured from cross covariances of the observed solar oscillation signals. For implementation into the pipeline, we have investigated three different travel-time definitions developed in time-distance helioseismology: a Gabor wavelet fitting (Kosovichev and Duvall, 1997), a minimization relative to a reference cross-covariance function (Gizon and Birch, 2002), and a linearized version of the minimization method (Gizon and Birch, 2004). Using Doppler velocity data from the Michelson Doppler Imager (MDI) instrument on board SOHO, we tested and compared these definitions for the mean and difference travel-time perturbations measured from reciprocal signals. Although all three procedures return similar travel times in a quiet Sun region, the method of Gizon and Birch (2004) gives travel times that are significantly different from the others in a magnetic (active) region. Thus, for the pipeline implementation we chose the procedures of Kosovichev and Duvall (1997) and Gizon and Birch (2002). We investigated the relationships among these three travel-time definitions and their sensitivities to fitting parameters, and estimated the random errors they produce.
Automated Monitoring of Pipeline Rights-of-Way
NASA Technical Reports Server (NTRS)
Frost, Chard Ritchie
2010-01-01
NASA Ames Research Center and the Pipeline Research Council International, Inc. have partnered in the formation of a research program to identify and develop the key technologies required to enable automated detection of threats to gas and oil transmission and distribution pipelines. This presentation describes the Right-of-way Automated Monitoring (RAM) program and highlights research successes to date, continuing challenges to implementing the RAM objectives, and the program's ongoing work and plans.
Automated generation of image products for Mars Exploration Rover Mission tactical operations
NASA Technical Reports Server (NTRS)
Alexander, Doug; Zamani, Payam; Deen, Robert; Andres, Paul; Mortensen, Helen
2005-01-01
This paper will discuss, from design to implementation, the methodologies applied to MIPL's automated pipeline processing as a 'system of systems' integrated with the MER GDS. Overviews of the interconnected product-generating systems will also be provided, with emphasis on interdependencies, including those for a) geometric rectification of camera lens distortions, b) generation of stereo disparity, c) derivation of 3-dimensional coordinates in XYZ space, d) generation of unified terrain meshes, e) camera-to-target ranging (distance) and f) multi-image mosaicking.
Contributions to modeling functionality of a high frequency damper system
NASA Astrophysics Data System (ADS)
Sirbu, E. A.; Horga, S.; Vrabioiu, G.
2016-08-01
Due to the necessity of improving the handling performance of a motor vehicle, it is imperative to understand the suspension properties that affect ride and directional response. The construction of a ferro-magnetic shock absorber is based on two bellows interconnected by a pipeline, through which the ferro-magnetic fluid is carried between the two bellows. The damping characteristic of the shock absorber is governed by the viscosity of the ferro-magnetic fluid, which is controlled through an electric coil mounted on the pipeline connecting the bellows. Modifying the electrical field of the coil changes the viscosity of the fluid, ultimately altering the damping characteristic of the shock absorber. A recent system called "CCD Pothole Suspension" is implemented on Ford vehicles. By modifying the damping characteristic of the shock absorbers, vehicle dynamics can be improved, and the risk of damaging the suspension is decreased. The approach of this paper is to analyze the behaviour of the ferro-magnetic damper, thus determining how it will affect the performance of the vehicle suspension. The experimental research will provide a better understanding of the behavior of the ferro-magnetic shock absorber and the possible advantages of using this system.
Digital hardware implementation of a stochastic two-dimensional neuron model.
Grassia, F; Kohno, T; Levi, T
2016-11-01
This study explores the feasibility of stochastic neuron simulation in digital systems (FPGA), which realizes an implementation of a two-dimensional neuron model. The stochasticity is added by a source of current noise in the silicon neuron using an Ornstein-Uhlenbeck process. This approach uses digital computation to emulate individual neuron behavior using fixed point arithmetic operation. The neuron model's computations are performed in arithmetic pipelines. It was designed in VHDL language and simulated prior to mapping in the FPGA. The experimental results confirmed the validity of the developed stochastic FPGA implementation, which makes the implementation of the silicon neuron more biologically plausible for future hybrid experiments. Copyright © 2017 Elsevier Ltd. All rights reserved.
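The noise source can be sketched as an Euler-Maruyama discretization of an Ornstein-Uhlenbeck process; the parameters and names are illustrative (the actual design computes this in fixed-point arithmetic pipelines).

    import java.util.Random;

    // Euler-Maruyama discretization of an Ornstein-Uhlenbeck noise current,
    // the stochastic source described above. Floating-point sketch with
    // illustrative parameters; the FPGA uses fixed-point pipelines.
    public final class OUNoise {
        static double[] simulate(int steps, double dt, double tau, double sigma, long seed) {
            Random rng = new Random(seed);
            double[] current = new double[steps];
            for (int t = 1; t < steps; t++) {
                current[t] = current[t - 1] - (current[t - 1] / tau) * dt
                           + sigma * Math.sqrt(dt) * rng.nextGaussian();
            }
            return current;
        }
    }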
49 CFR 192.616 - Public awareness.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 3 2011-10-01 2011-10-01 false Public awareness. 192.616 Section 192.616... BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Operations § 192.616 Public awareness. (a) Except for..., each pipeline operator must develop and implement a written continuing public education program that...
49 CFR 192.616 - Public awareness.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Public awareness. 192.616 Section 192.616... BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Operations § 192.616 Public awareness. (a) Except for..., each pipeline operator must develop and implement a written continuing public education program that...
Chemical laser exhaust pipe design research
NASA Astrophysics Data System (ADS)
Sun, Yunqiang; Huang, Zhilong; Chen, Zhiqiang; Ren, Zebin; Guo, Longde
2016-10-01
In order to weaken the influence of chemical laser exhaust gas on optical transmission, a vent pipe is proposed to discharge the gas outside of the optical transmission area. For a variety of exhaust pipe designs, the flow field characteristics of the pipe are analyzed in detail by numerical simulation. The results show that for a uniformly deflating exhaust pipe, although the pipeline structure is periodic and convenient for engineering implementation, the numerical simulation reveals air backflow at the pipeline entrance slit, so this type of pipeline structure does not guarantee a seal. For design schemes that place the pipeline contraction at the end of the exhaust pipe, or that use local or tail contraction, numerical simulation shows that the backflow phenomenon still exists at the pipeline entrance slit. Preliminary analysis indicates that the contraction of the pipe produces higher static pressure near the wall in the low-speed flow field, creating an adverse pressure gradient at the entrance slit. To eliminate backflow at the pipe entrance slit, a pipeline whose radial size increases gradually along the flow is analyzed in detail by numerical simulation. The results indicate that there is no backflow at the entrance slit of the dilated duct; moreover, cold air drawn in through the slit keeps the channel wall temperature lower than the center temperature. Therefore, this kind of pipeline structure can not only prevent gas leakage but also reduce the wall temperature. In addition, compared with a straight pipe connection, the dilated pipe structure retains a periodic structure, which facilitates system integration and installation.
Camerlengo, Terry; Ozer, Hatice Gulcin; Onti-Srinivasan, Raghuram; Yan, Pearlly; Huang, Tim; Parvin, Jeffrey; Huang, Kun
2012-01-01
Next Generation Sequencing is highly resource intensive. NGS tasks related to data processing, management and analysis require high-end computing servers or even clusters. Additionally, processing NGS experiments requires suitable storage space and significant manual interaction. At The Ohio State University's Biomedical Informatics Shared Resource, we designed and implemented a scalable architecture to address the challenges associated with the resource-intensive nature of NGS secondary analysis, built around Illumina Genome Analyzer II sequencers and Illumina's Gerald data processing pipeline. The software infrastructure includes a distributed computing platform consisting of a LIMS called QUEST (http://bisr.osumc.edu), an Automation Server, a computer cluster for processing NGS pipelines, and a network attached storage device expandable up to 40 TB. The system has been architected to scale to multiple sequencers without requiring additional computing or labor resources. This platform demonstrates how to manage and automate NGS experiments in an institutional or core facility setting.
HESP: Instrument control, calibration and pipeline development
NASA Astrophysics Data System (ADS)
Anantha, Ch.; Roy, Jayashree; Mahesh, P. K.; Parihar, P. S.; Sangal, A. K.; Sriram, S.; Anand, M. N.; Anupama, G. C.; Giridhar, S.; Prabhu, T. P.; Sivarani, T.; Sundararajan, M. S.
Hanle Echelle SPectrograph (HESP) is a fibre-fed, high-resolution (R = 30,000 and 60,000) spectrograph being developed for the 2 m HCT telescope at IAO, Hanle. The major components of the instrument are (a) the Cassegrain unit and (b) the spectrometer instrument. An instrument control system that interacts with a guiding unit at the Cassegrain interface and handles the spectrograph functions is being developed, along with on-axis auto-guiding using the spill-over angular ring around the input pinhole. The stellar light from the Cassegrain unit is taken to the spectrograph using an optical fiber which is being characterized for spectral transmission, focal ratio degradation and scrambling properties. The design of the thermal enclosure and thermal control for the spectrograph housing is presented. A data pipeline for the entire Echelle spectral reduction is being developed. We also plan to implement an instrument physical-model-based calibration into the main data pipeline and in the maintenance and quality control operations.
DKIST visible broadband imager data processing pipeline
NASA Astrophysics Data System (ADS)
Beard, Andrew; Cowan, Bruce; Ferayorni, Andrew
2014-07-01
The Daniel K. Inouye Solar Telescope (DKIST) Data Handling System (DHS) provides the technical framework and building blocks for developing on-summit instrument quality assurance and data reduction pipelines. The DKIST Visible Broadband Imager (VBI) is a first light instrument that alone will create two data streams with a bandwidth of 960 MB/s each. The high data rate and data volume of the VBI require near-real time processing capability for quality assurance and data reduction, and will be performed on-summit using Graphics Processing Unit (GPU) technology. The VBI data processing pipeline (DPP) is the first designed and developed using the DKIST DHS components, and therefore provides insight into the strengths and weaknesses of the framework. In this paper we lay out the design of the VBI DPP, examine how the underlying DKIST DHS components are utilized, and discuss how integration of the DHS framework with GPUs was accomplished. We present our results of the VBI DPP alpha release implementation of the calibration, frame selection reduction, and quality assurance display processing nodes.
Neural-Fuzzy model Based Steel Pipeline Multiple Cracks Classification
NASA Astrophysics Data System (ADS)
Elwalwal, Hatem Mostafa; Mahzan, Shahruddin Bin Hj.; Abdalla, Ahmed N.
2017-10-01
While pipes are cheaper than other means of transportation, this cost saving comes with a major price: pipes are subject to cracks, corrosion, etc., which in turn can cause leakage and environmental damage. In this paper, a Neural-Fuzzy model for multiple crack classification based on Lamb guided waves is presented. Simulation results for 42 samples were collected using ANSYS software. The research carries out numerical simulation and experimental study, aiming at finding an effective way to detect and localize crack and hole defects in the main body of a pipeline. Considering the damage forms of multiple cracks and holes that may exist in a pipeline, the respective positions in the steel pipe are determined. The technique used in this research is a guided Lamb wave based structural health monitoring method in which piezoelectric transducers are used as exciting and receiving sensors in a pitch-catch arrangement. In addition, a simple learning mechanism has been developed specially for the ANN that underlies the fuzzy system.
Implementation of Cloud based next generation sequencing data analysis in a clinical laboratory.
Onsongo, Getiria; Erdmann, Jesse; Spears, Michael D; Chilton, John; Beckman, Kenneth B; Hauge, Adam; Yohe, Sophia; Schomaker, Matthew; Bower, Matthew; Silverstein, Kevin A T; Thyagarajan, Bharat
2014-05-23
The introduction of next generation sequencing (NGS) has revolutionized molecular diagnostics, though several challenges remain that limit the widespread adoption of NGS testing in clinical practice. One such difficulty is the development of a robust bioinformatics pipeline that can handle the volume of data generated by high-throughput sequencing in a cost-effective manner. Analysis of sequencing data typically requires a substantial level of computing power that is often cost-prohibitive for most clinical diagnostics laboratories. To address this challenge, our institution has developed a Galaxy-based data analysis pipeline which relies on a web-based, cloud-computing infrastructure to process NGS data and identify genetic variants. It provides the additional flexibility needed to control storage costs, resulting in a pipeline that is cost-effective on a per-sample basis, and it does not require the use of an EBS disk to run a sample. We demonstrate the validation and feasibility of implementing this bioinformatics pipeline in a molecular diagnostics laboratory. Four samples were analyzed in duplicate pairs and showed 100% concordance in the mutations identified. This pipeline is currently being used in the clinic, and all identified pathogenic variants are confirmed using Sanger sequencing, further validating the software.
Acoustic system for communication in pipelines
Martin, II, Louis Peter; Cooper, John F [Oakland, CA]
2008-09-09
A system for communication in a pipe, or pipeline, or network of pipes containing a fluid. The system includes an encoding and transmitting sub-system connected to the pipe, or pipeline, or network of pipes that transmits a signal in the frequency range of 3-100 kHz into the pipe, or pipeline, or network of pipes containing a fluid, and a receiver and processor sub-system connected to the pipe, or pipeline, or network of pipes containing a fluid that receives said signal and uses said signal for a desired application.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-09
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... installed to lessen the volume of natural gas and hazardous liquid released during catastrophic pipeline... p.m. Panel 3: Considerations for Natural Gas Pipeline Leak Detection Systems 3:30 p.m. Break 3:45 p...
Full image-processing pipeline in field-programmable gate array for a small endoscopic camera
NASA Astrophysics Data System (ADS)
Mostafa, Sheikh Shanawaz; Sousa, L. Natércia; Ferreira, Nuno Fábio; Sousa, Ricardo M.; Santos, Joao; Wäny, Martin; Morgado-Dias, F.
2017-01-01
Endoscopy is an imaging procedure used for diagnosis as well as for some surgical purposes. The camera used for the endoscopy should be small and able to produce a good quality image or video, to reduce discomfort of the patients, and to increase the efficiency of the medical team. To achieve these fundamental goals, a small endoscopy camera with a footprint of 1 mm×1 mm×1.65 mm is used. Due to the physical properties of the sensors and human vision system limitations, different image-processing algorithms, such as noise reduction, demosaicking, and gamma correction, among others, are needed to faithfully reproduce the image or video. A full image-processing pipeline is implemented using a field-programmable gate array (FPGA) to accomplish a high frame rate of 60 fps with minimum processing delay. Along with this, a viewer has also been developed to display and control the image-processing pipeline. The control and data transfer are done by a USB 3.0 end point in the computer. The full developed system achieves real-time processing of the image and fits in a Xilinx Spartan-6LX150 FPGA.
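One stage of such a pipeline, gamma correction, can be sketched with a lookup table as below; in the FPGA this would be a fixed-point LUT, so the floating-point Java version is an illustration only.

    // Gamma-correction stage like the one named in the pipeline above, sketched
    // for 8-bit pixels via a 256-entry lookup table. Illustrative software
    // rendering; the FPGA implements the equivalent in fixed point.
    public final class GammaCorrect {
        static int[] apply(int[] pixels, double gamma) {
            int[] lut = new int[256];
            for (int v = 0; v < 256; v++) {
                lut[v] = (int) Math.round(255.0 * Math.pow(v / 255.0, 1.0 / gamma));
            }
            int[] out = new int[pixels.length];
            for (int i = 0; i < pixels.length; i++) out[i] = lut[pixels[i]];
            return out;
        }
    }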
ML-o-Scope: A Diagnostic Visualization System for Deep Machine Learning Pipelines
2014-05-16
Daniel Bruckner, Electrical Engineering and Computer Sciences... the system is presented as a support for tuning large-scale object-classification pipelines. Introduction: A new generation of pipelined machine learning models...
49 CFR 195.440 - Public awareness.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 3 2011-10-01 2011-10-01 false Public awareness. 195.440 Section 195.440... PIPELINE Operation and Maintenance § 195.440 Public awareness. (a) Each pipeline operator must develop and implement a written continuing public education program that follows the guidance provided in the American...
49 CFR 195.440 - Public awareness.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Public awareness. 195.440 Section 195.440... PIPELINE Operation and Maintenance § 195.440 Public awareness. (a) Each pipeline operator must develop and implement a written continuing public education program that follows the guidance provided in the American...
FPGA-based real-time phase measuring profilometry algorithm design and implementation
NASA Astrophysics Data System (ADS)
Zhan, Guomin; Tang, Hongwei; Zhong, Kai; Li, Zhongwei; Shi, Yusheng
2016-11-01
Phase measuring profilometry (PMP) has been widely used in many fields, such as Computer Aided Verification (CAV) and Flexible Manufacturing Systems (FMS). High frame-rate (HFR) real-time vision-based feedback control will be a common demand in the near future. However, the instruction time delay in the computer caused by numerous repetitive operations greatly limits the efficiency of data processing. FPGAs have the advantages of a pipeline architecture and parallel execution, which fit the PMP algorithm well. In this paper, we design a fully pipelined hardware architecture for PMP. The functions of the hardware architecture include rectification, phase calculation, phase shifting, and stereo matching. Experiments verified the performance of this method, and the factors that may influence the computation accuracy were analyzed.
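The phase-calculation stage of PMP commonly uses the four-step phase-shifting formula; assuming intensities captured at 90-degree phase shifts, the wrapped phase per pixel is atan2(I4 - I2, I1 - I3), as sketched below (a software rendering of what pipelined stages would compute per pixel, not the authors' HDL).

    // Standard four-step phase-shifting formula: with I_n = A + B*cos(phi + n*pi/2),
    // I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi), so phi = atan2(I4-I2, I1-I3).
    public final class PhaseCalc {
        static double wrappedPhase(double i1, double i2, double i3, double i4) {
            return Math.atan2(i4 - i2, i1 - i3); // wrapped phase in (-pi, pi]
        }
    }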
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-12
... natural gas from existing pipeline systems to the LNG terminal facilities. The Project would be... room, warehouse, and shop. Pipeline Header System: A 29-mile-long, 42-inch-diameter natural gas pipeline extending northward from the shoreside facilities to nine natural gas interconnects southwest of...
Method and system for pipeline communication
Richardson, John G [Idaho Falls, ID]
2008-01-29
A pipeline communication system and method includes a pipeline having a surface extending along at least a portion of the length of the pipeline. A conductive bus is formed on and extends along a portion of the surface of the pipeline. The conductive bus includes a first conductive trace and a second conductive trace, the first and second conductive traces being adapted to conformally couple with the pipeline at the surface extending along at least a portion of the length of the pipeline. A transmitter for sending information along the conductive bus on the pipeline is coupled thereto, and a receiver for receiving the information from the conductive bus on the pipeline is also coupled to the conductive bus.
Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C
2008-01-01
As functional magnetic resonance imaging (fMRI) becomes widely used, the demand for evaluation of fMRI processing pipelines and validation of fMRI analysis results is increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules, such as FSL.FEAT and NPAIRS.CVA, were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the rank of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.
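The prediction metric named above, classification accuracy, is simple to state; the sketch below shows it under the assumption that each scan carries one integer condition label (names illustrative).

    // Classification (prediction) accuracy as used for pipeline evaluation:
    // the fraction of scans whose predicted condition label matches the truth.
    public final class PredictionAccuracy {
        static double accuracy(int[] predicted, int[] truth) {
            int correct = 0;
            for (int i = 0; i < predicted.length; i++) {
                if (predicted[i] == truth[i]) correct++;
            }
            return (double) correct / predicted.length;
        }
    }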
TESS Data Processing and Quick-look Pipeline
NASA Astrophysics Data System (ADS)
Fausnaugh, Michael; Huang, Xu; Glidden, Ana; Guerrero, Natalia; TESS Science Office
2018-01-01
We describe the data analysis procedures and pipelines for the Transiting Exoplanet Survey Satellite (TESS). We briefly review the processing pipeline developed and implemented by the Science Processing Operations Center (SPOC) at NASA Ames, including pixel/full-frame image calibration, photometric analysis, pre-search data conditioning, transiting planet search, and data validation. We also describe data-quality diagnostic analyses and photometric performance assessment tests. Finally, we detail a "quick-look pipeline" (QLP) that has been developed by the MIT branch of the TESS Science Office (TSO) to provide a fast and adaptable routine to search for planet candidates in the 30 minute full-frame images.
Structural reliability assessment of the Oman India Pipeline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Sharif, A.M.; Preston, R.
1996-12-31
Reliability techniques are increasingly finding application in design. The special design conditions for the deep water sections of the Oman India Pipeline dictate their use since the experience basis for application of standard deterministic techniques is inadequate. The paper discusses the reliability analysis as applied to the Oman India Pipeline, including selection of a collapse model, characterization of the variability in the parameters that affect pipe resistance to collapse, and implementation of first and second order reliability analyses to assess the probability of pipe failure. The reliability analysis results are used as the basis for establishing the pipe wall thickness requirements for the pipeline.
NASA Astrophysics Data System (ADS)
Cruz Jiménez, Miriam Guadalupe; Meyer Baese, Uwe; Jovanovic Dolecek, Gordana
2017-12-01
New theoretical lower bounds for the number of operators needed in fixed-point constant multiplication blocks are presented. The multipliers are constructed with the shift-and-add approach, where every arithmetic operation is pipelined, and with the generalization that n-input pipelined additions/subtractions are allowed, along with pure pipelining registers. These lower bounds, tighter than the state-of-the-art theoretical limits, are particularly useful in early design stages for a quick assessment of the hardware utilization of low-cost constant multiplication blocks implemented in the newest families of field programmable gate array (FPGA) integrated circuits.
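As a hedged sketch of the shift-and-add construction these bounds count operators for (not the paper's bound derivation), the canonical signed-digit (CSD) recoding below expresses a constant multiplier as a few shifted adds/subtracts, each of which would map to one pipelined operator.

```python
def csd_digits(c):
    """Canonical signed-digit recoding of a positive integer constant:
    returns (weight, sign) pairs with c == sum(sign * 2**weight).
    Each nonzero digit costs one pipelined add/subtract of a shifted input."""
    digits, k = [], 0
    while c:
        if c & 1:
            d = 2 - (c & 3)          # +1 for ...01, -1 for ...11
            digits.append((k, d))
            c -= d
        c >>= 1
        k += 1
    return digits

def multiply_by_constant(x, c):
    """Emulate a shift-and-add constant multiplier block: y = c * x."""
    return sum(sign * (x << w) for w, sign in csd_digits(c))
```

For example, csd_digits(7) yields 8 - 1, i.e. one subtractor where the plain binary form 4 + 2 + 1 would need two adders.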
Prakosa, A.; Malamas, P.; Zhang, S.; Pashakhanloo, F.; Arevalo, H.; Herzka, D. A.; Lardo, A.; Halperin, H.; McVeigh, E.; Trayanova, N.; Vadakkumpadan, F.
2014-01-01
Patient-specific modeling of ventricular electrophysiology requires an interpolated reconstruction of the 3-dimensional (3D) geometry of the patient ventricles from the low-resolution (Lo-res) clinical images. The goal of this study was to implement a processing pipeline for obtaining the interpolated reconstruction, and thoroughly evaluate the efficacy of this pipeline in comparison with alternative methods. The pipeline implemented here involves contouring the epi- and endocardial boundaries in Lo-res images, interpolating the contours using the variational implicit functions method, and merging the interpolation results to obtain the ventricular reconstruction. Five alternative interpolation methods, namely linear, cubic spline, spherical harmonics, cylindrical harmonics, and shape-based interpolation were implemented for comparison. In the thorough evaluation of the processing pipeline, Hi-res magnetic resonance (MR), computed tomography (CT), and diffusion tensor (DT) MR images from numerous hearts were used. Reconstructions obtained from the Hi-res images were compared with the reconstructions computed by each of the interpolation methods from a sparse sample of the Hi-res contours, which mimicked Lo-res clinical images. Qualitative and quantitative comparison of these ventricular geometry reconstructions showed that the variational implicit functions approach performed better than others. Additionally, the outcomes of electrophysiological simulations (sinus rhythm activation maps and pseudo-ECGs) conducted using models based on the various reconstructions were compared. These electrophysiological simulations demonstrated that our implementation of the variational implicit functions-based method had the best accuracy. PMID:25148771
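As a hedged illustration of the simplest of the compared schemes only (linear interpolation between matched contour points; the study's preferred variational implicit functions method fits a smooth implicit surface instead), and assuming point correspondences between slices are already established:

```python
import numpy as np

def interp_contour(c0, c1, z0, z1, z):
    """Linearly interpolate between two matched contours.

    c0, c1: (N, 2) arrays of corresponding boundary points on slices
    at heights z0 < z1; returns the contour estimated at height z.
    """
    w = (z - z0) / (z1 - z0)
    return (1.0 - w) * c0 + w * c1
```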
40 CFR 63.11100 - What definitions apply to this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Distribution Bulk Terminals, Bulk Plants, and Pipeline Facilities Other Requirements and Information § 63.11100... authority to implement the provisions of this subpart). Bulk gasoline plant means any gasoline storage and distribution facility that receives gasoline by pipeline, ship or barge, or cargo tank and has a gasoline...
State of art of seismic design and seismic hazard analysis for oil and gas pipeline system
NASA Astrophysics Data System (ADS)
Liu, Aiwen; Chen, Kun; Wu, Jian
2010-06-01
The purpose of this paper is to adopt the uniform confidence method in both water pipeline design and oil-gas pipeline design. Based on the importance of a pipeline and the consequences of its failure, oil and gas pipelines can be classified into three pipe classes, with exceeding probabilities over 50 years of 2%, 5% and 10%, respectively. Performance-based design requires more information about ground motion, which should be obtained by evaluating seismic safety for the pipeline engineering site. Different from a city’s water pipeline network, the long-distance oil and gas pipeline system is a spatially linearly distributed system. For uniform confidence of seismic safety, a long-distance oil and gas pipeline formed of pump stations and different-class pipe segments should be considered as a whole system when analyzing seismic risk. Considering the uncertainty of earthquake magnitude, design-basis fault displacements corresponding to the different pipeline classes are proposed to improve deterministic seismic hazard analysis (DSHA). A new empirical relationship between the maximum fault displacement and the surface-wave magnitude is obtained with the supplemented earthquake data in East Asia. The estimation of fault displacement for a refined-oil pipeline in the Wenchuan MS 8.0 earthquake is introduced as an example in this paper.
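The paper's new empirical relationship is not reproduced in this abstract; as a hedged illustration only, magnitude-displacement regressions of this kind are customarily written in the log-linear form below, where the coefficients a and b would be refit to the supplemented East Asian data set and the design value capped by each pipe class's exceedance probability.

```latex
\log_{10} D_{\max} = a + b\,M_S
```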
Towards photometry pipeline of the Indonesian space surveillance system
NASA Astrophysics Data System (ADS)
Priyatikanto, Rhorom; Religia, Bahar; Rachman, Abdul; Dani, Tiar
2015-09-01
Optical observation through a sub-meter telescope equipped with a CCD camera has become an alternative method for increasing orbital debris detection and surveillance. This observational mode is expected to cover medium-sized objects in higher orbits (e.g. MEO, GTO, GSO and GEO), beyond the reach of the usual radar systems. However, such observation of fast-moving objects demands special treatment and analysis techniques. In this study, we performed photometric analysis of satellite track images photographed using the rehabilitated Schmidt Bima Sakti telescope at Bosscha Observatory. The Hough transformation was implemented to automatically detect linear streaks in the images. From this analysis and comparison with the USSPACECOM catalog, two satellites were identified and associated with the inactive Thuraya-3 satellite and Satcom-3 debris, which are located in geostationary orbit. Further aperture photometry analysis revealed the periodicity of the tumbling Satcom-3 debris. In the near future, a similar scheme could be applied to establish an analysis pipeline for an optical space surveillance system hosted in Indonesia.
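A minimal, hedged sketch of the streak-detection step using OpenCV's probabilistic Hough transform; the study's exact parameters and imagery are not given here, so the file name and thresholds below are placeholders.

```python
import cv2
import numpy as np

# Placeholder file name for one CCD frame containing stars and streaks.
frame = cv2.imread("satellite_field.png", cv2.IMREAD_GRAYSCALE)

# Otsu thresholding separates bright sources from sky; the probabilistic
# Hough transform then keeps only long linear segments (satellite trails).
_, binary = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
segments = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=80,
                           minLineLength=100, maxLineGap=5)

if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        print(f"streak: ({x1},{y1})-({x2},{y2}), "
              f"length {np.hypot(x2 - x1, y2 - y1):.0f} px")
```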
1996-01-01
Figure 3-1 (Test Bench Pseudo Code) shows pseudo-code for a test bench with two application nodes. The outer test bench wrapper consists of three functions: pipeline_init, pipeline..., and exit_func. The application wrapper is contained in the pipeline routine and similarly consists of an...
Expansive cements for the manufacture of the concrete protective bandages
NASA Astrophysics Data System (ADS)
Yakymechko, Yaroslav; Voloshynets, Vladyslav
2017-12-01
One of the promising directions for the use of expansive cements is making protective bandages for the maintenance of pipelines. Applying expansive bandage compositions to the pipeline reinforces the damaged area and reduces stress owing to the compressive stress created in the cylindrical zone. These requirements are best met by expansive compositions obtained from portland cement and modified quicklime. The article presents results on expansive cements based on quicklime intended for the implementation of protective pipeline bandages.
Toward cognitive pipelines of medical assistance algorithms.
Philipp, Patrick; Maleshkova, Maria; Katic, Darko; Weber, Christian; Götz, Michael; Rettinger, Achim; Speidel, Stefanie; Kämpgen, Benedikt; Nolden, Marco; Wekerle, Anna-Laura; Dillmann, Rüdiger; Kenngott, Hannes; Müller, Beat; Studer, Rudi
2016-09-01
Assistance algorithms for medical tasks have great potential to support physicians with their daily work. However, medicine is also one of the most demanding domains for computer-based support systems, since medical assistance tasks are complex and the practical experience of the physician is crucial. Recent developments in the area of cognitive computing appear to be well suited to tackle medicine as an application domain. We propose a system based on the idea of cognitive computing and consisting of auto-configurable medical assistance algorithms and their self-adapting combination. The system enables automatic execution of new algorithms, given they are made available as Medical Cognitive Apps and are registered in a central semantic repository. Learning components can be added to the system to optimize the results in the cases when numerous Medical Cognitive Apps are available for the same task. Our prototypical implementation is applied to the areas of surgical phase recognition based on sensor data and image progressing for tumor progression mappings. Our results suggest that such assistance algorithms can be automatically configured in execution pipelines, candidate results can be automatically scored and combined, and the system can learn from experience. Furthermore, our evaluation shows that the Medical Cognitive Apps are providing the correct results as they did for local execution and run in a reasonable amount of time. The proposed solution is applicable to a variety of medical use cases and effectively supports the automated and self-adaptive configuration of cognitive pipelines based on medical interpretation algorithms.
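As a loose, hedged sketch of the registry-and-scoring idea only (the task names, apps, and scores below are hypothetical and do not reproduce the paper's semantic repository API):

```python
from typing import Callable, Dict, List

REGISTRY: Dict[str, List[Callable]] = {}   # task name -> registered apps

def register(task: str):
    """Decorator standing in for registration in a central repository."""
    def wrap(app: Callable):
        REGISTRY.setdefault(task, []).append(app)
        return app
    return wrap

@register("surgical_phase_recognition")
def app_sensor_model(data):
    return {"label": "dissection", "score": 0.72}   # stand-in candidate

@register("surgical_phase_recognition")
def app_video_model(data):
    return {"label": "suturing", "score": 0.55}     # stand-in candidate

def run_task(task, data):
    """Execute every registered app for the task, score the candidate
    results, and keep the best one; a learning component could reweight."""
    candidates = [app(data) for app in REGISTRY.get(task, [])]
    return max(candidates, key=lambda c: c["score"])
```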
77 FR 17119 - Pipeline Safety: Cast Iron Pipe (Supplementary Advisory Bulletin)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-23
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... national attention and highlight the need for continued safety improvements to aging gas pipeline systems... 26, 1992) covering the continued use of cast iron pipe in natural gas distribution pipeline systems...
78 FR 6402 - Pipeline Safety: Accident and Incident Notification Time Limit
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-30
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No.... SUMMARY: Owners and operators of gas and hazardous liquid pipeline systems and liquefied natural gas (LNG... operators of gas and hazardous liquids pipeline systems and LNG facilities that, ``at the earliest...
Freight pipelines: Current status and anticipated future use
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-07-01
This report is issued by the Task Committee on Freight Pipelines, Pipeline Division, ASCE. Freight pipelines of various types (including slurry pipeline, pneumatic pipeline, and capsule pipeline) have been used throughout the world for over a century for transporting solid and sometimes even package products. Recent advancements in pipeline technology, aided by advanced computer control systems and trenchless technologies, have greatly facilitated the transportation of solids by pipelines. Today, in many situations, freight pipelines are not only the most economical and practical means for transporting solids, they are also the most reliable, safest and most environmentally friendly transportation mode. Increased use of underground pipelines to transport freight is anticipated in the future, especially as the technology continues to improve and surface transportation modes such as highways become more congested. This paper describes the state of the art and expected future uses of various types of freight pipelines. Obstacles hindering the development and use of the most advanced freight pipeline systems, such as the pneumatic capsule pipeline for interstate transport of freight, are discussed.
Implementation of a pulse coupled neural network in FPGA.
Waldemark, J; Millberg, M; Lindblad, T; Waldemark, K; Becanovic, V
2000-06-01
The pulse-coupled neural network (PCNN) is a biologically inspired neural net that can be used in various image analysis applications, e.g. time-critical image pre-processing tasks such as segmentation and filtering. A VHDL implementation of the PCNN targeting FPGAs was undertaken and the results are presented here. The implementation contains many interesting features. By pipelining the PCNN structure, a very high throughput of 55 million neuron iterations per second could be achieved. By making the coefficients re-configurable during operation, a complete recognition system could be implemented on one, or maybe two, chips. Reconsidering the ranges and resolutions of the constants may save a lot of hardware, since higher resolutions require larger multipliers, adders, memories, etc.
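A hedged software model of the PCNN recurrences the FPGA evaluates per neuron (parameters are illustrative, not the VHDL design's fixed-point constants):

```python
import numpy as np
from scipy.signal import convolve2d

def pcnn_iterate(S, steps=10, beta=0.2,
                 aF=0.1, aL=0.3, aT=0.2, vF=0.5, vL=0.2, vT=20.0):
    """One-layer PCNN sketch: S is a normalized input image; returns the
    accumulated binary pulse maps, a coarse segmentation by firing epoch."""
    k = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    F = np.zeros_like(S); L = np.zeros_like(S)
    Y = np.zeros_like(S); T = np.ones_like(S)
    fired = np.zeros_like(S)
    for _ in range(steps):
        link = convolve2d(Y, k, mode="same")     # pulses from 8-neighbours
        F = np.exp(-aF) * F + vF * link + S      # feeding compartment
        L = np.exp(-aL) * L + vL * link          # linking compartment
        U = F * (1.0 + beta * L)                 # internal activity
        Y = (U > T).astype(float)                # pulse output
        T = np.exp(-aT) * T + vT * Y             # dynamic threshold
        fired += Y
    return fired
```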
Redefining the Data Pipeline Using GPUs
NASA Astrophysics Data System (ADS)
Warner, C.; Eikenberry, S. S.; Gonzalez, A. H.; Packham, C.
2013-10-01
There are two major challenges facing the next generation of data processing pipelines: 1) handling an ever increasing volume of data as array sizes continue to increase and 2) the desire to process data in near real-time to maximize observing efficiency by providing rapid feedback on data quality. Combining the power of modern graphics processing units (GPUs), relational database management systems (RDBMSs), and extensible markup language (XML) to re-imagine traditional data pipelines will allow us to meet these challenges. Modern GPUs contain hundreds of processing cores, each of which can process hundreds of threads concurrently. Technologies such as Nvidia's Compute Unified Device Architecture (CUDA) platform and the PyCUDA (http://mathema.tician.de/software/pycuda) module for Python allow us to write parallel algorithms and easily link GPU-optimized code into existing data pipeline frameworks. This approach has produced speed gains of over a factor of 100 compared to CPU implementations for individual algorithms and overall pipeline speed gains of a factor of 10-25 compared to traditionally built data pipelines for both imaging and spectroscopy (Warner et al., 2011). However, there are still many bottlenecks inherent in the design of traditional data pipelines. For instance, file input/output of intermediate steps is now a significant portion of the overall processing time. In addition, most traditional pipelines are not designed to be able to process data on-the-fly in real time. We present a model for a next-generation data pipeline that has the flexibility to process data in near real-time at the observatory as well as to automatically process huge archives of past data by using a simple XML configuration file. XML is ideal for describing both the dataset and the processes that will be applied to the data. Meta-data for the datasets would be stored using an RDBMS (such as mysql or PostgreSQL) which could be easily and rapidly queried and file I/O would be kept at a minimum. We believe this redefined data pipeline will be able to process data at the telescope, concurrent with continuing observations, thus maximizing precious observing time and optimizing the observational process in general. We also believe that using this design, it is possible to obtain a speed gain of a factor of 30-40 over traditional data pipelines when processing large archives of data.
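A hedged sketch of the PyCUDA linkage the authors describe, here fusing a generic calibration step into a single elementwise GPU kernel; the array sizes and the calibration formula are placeholders, not the pipeline's actual reduction steps.

```python
import numpy as np
import pycuda.autoinit                      # creates a CUDA context
import pycuda.gpuarray as gpuarray
from pycuda.elementwise import ElementwiseKernel

# One fused GPU pass of a classic calibration step: bias subtraction and
# flat-field division; thousands of GPU threads each handle one pixel.
calibrate = ElementwiseKernel(
    "float *out, float *raw, float *bias, float *flat",
    "out[i] = (raw[i] - bias[i]) / flat[i]",
    "calibrate")

raw = gpuarray.to_gpu(np.random.rand(2048, 2048).astype(np.float32))
bias = gpuarray.to_gpu(np.random.rand(2048, 2048).astype(np.float32) * 0.1)
flat = gpuarray.to_gpu(np.ones((2048, 2048), dtype=np.float32))
out = gpuarray.empty_like(raw)
calibrate(out, raw, bias, flat)
```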
The value of pipelines to the transportation system of Texas : year one report
DOT National Transportation Integrated Search
2000-10-01
Pipelines represent a major transporter of petrochemical commodities in Texas. The Texas pipeline system represents as much as 17% of the total pipeline mileage in the U.S. and links many segments of the country with energy sources located on the Gul...
A FPGA Implementation of the CAR-FAC Cochlear Model
Xu, Ying; Thakur, Chetan S.; Singh, Ram K.; Hamilton, Tara Julia; Wang, Runchun M.; van Schaik, André
2018-01-01
This paper presents a digital implementation of the Cascade of Asymmetric Resonators with Fast-Acting Compression (CAR-FAC) cochlear model. The CAR part simulates the basilar membrane's (BM) response to sound. The FAC part models the outer hair cell (OHC), the inner hair cell (IHC), and the medial olivocochlear efferent system functions. The FAC feeds back to the CAR by moving the poles and zeros of the CAR resonators automatically. We have implemented a 70-section, 44.1 kHz sampling rate CAR-FAC system on an Altera Cyclone V Field Programmable Gate Array (FPGA) with 18% ALM utilization by using time-multiplexing and pipeline parallelizing techniques and present measurement results here. The fully digital reconfigurable CAR-FAC system is stable, scalable, easy to use, and provides an excellent input stage to more complex machine hearing tasks such as sound localization, sound segregation, speech recognition, and so on. PMID:29692700
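As a hedged illustration of the cascade structure only (not the CAR-FAC coefficient design), a chain of two-pole resonators whose section outputs stand in for basilar-membrane taps:

```python
import numpy as np
from scipy.signal import lfilter

def resonator_cascade(x, fs=44100.0, n_sections=70, r=0.97):
    """Feed the signal through a chain of two-pole resonators tuned from
    base (high f) to apex (low f); each section's output is one tap."""
    freqs = 20000.0 * (80.0 / 20000.0) ** (np.arange(n_sections) / (n_sections - 1))
    taps = []
    for f in freqs:
        theta = 2.0 * np.pi * f / fs
        b = [1.0 - r]                               # crude gain normalization
        a = [1.0, -2.0 * r * np.cos(theta), r * r]  # conjugate pole pair
        x = lfilter(b, a, x)                        # output feeds the next section
        taps.append(x)
    return np.array(taps)
```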
Pipelining in a changing competitive environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, E.G.; Wishart, D.M.
1996-12-31
The changing competitive environment for the pipeline industry presents a broad spectrum of new challenges and opportunities: international cooperation; globalization of opportunities, organizations and competition; an integrated systems approach to system configuration, financing, contracting strategy, materials sourcing, and operations; cutting-edge and emerging technologies; adherence to high standards of environmental protection; an emphasis on safety; innovative approaches to project financing; and advances in technology and programs to maintain the long-term, cost-effective integrity of operating pipeline systems. These challenges and opportunities are partially a result of the increasingly competitive nature of pipeline development and the public's intolerance of incidents of pipeline failure. A creative systems approach to these challenges is often the key to the project moving ahead. This usually encompasses collaboration among users of the pipeline, pipeline owners and operators, international engineering and construction companies, equipment and materials suppliers, in-country engineers and constructors, international lending agencies and financial institutions.
78 FR 41991 - Pipeline Safety: Potential for Damage to Pipeline Facilities Caused by Flooding
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-12
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No...: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. ACTION: Notice; Issuance of Advisory... Gas and Hazardous Liquid Pipeline Systems. Subject: Potential for Damage to Pipeline Facilities Caused...
78 FR 57455 - Pipeline Safety: Information Collection Activities
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-18
... ``. . . system-specific information, including pipe diameter, operating pressure, product transported, and...) must provide contact information and geospatial data on their pipeline system. This information should... Mapping System (NPMS) to support various regulatory programs, pipeline inspections, and authorized...
Demons registration for in vivo and deformable laser scanning confocal endomicroscopy
NASA Astrophysics Data System (ADS)
Chiew, Wei Ming; Lin, Feng; Seah, Hock Soon
2017-09-01
A critical effect found in noninvasive in vivo endomicroscopic imaging modalities is image distortions due to sporadic movement exhibited by living organisms. In three-dimensional confocal imaging, this effect results in a dataset that is tilted across deeper slices. Apart from that, the sequential flow of the imaging-processing pipeline restricts real-time adjustments due to the unavailability of information obtainable only from subsequent stages. To solve these problems, we propose an approach to render Demons-registered datasets as they are being captured, focusing on the coupling between registration and visualization. To improve the acquisition process, we also propose a real-time visual analytics tool, which complements the imaging pipeline and the Demons registration pipeline with useful visual indicators to provide real-time feedback for immediate adjustments. We highlight the problem of deformation within the visualization pipeline for object-ordered and image-ordered rendering. Visualizations of critical information including registration forces and partial renderings of the captured data are also presented in the analytics system. We demonstrate the advantages of the algorithmic design through experimental results with both synthetically deformed datasets and actual in vivo, time-lapse tissue datasets expressing natural deformations. Remarkably, this algorithm design is for embedded implementation in intelligent biomedical imaging instrumentation with customizable circuitry.
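A hedged numpy sketch of one classic (Thirion-style) demons update, the kind of step such a coupled registration-visualization pipeline iterates; the parameters are illustrative and warping of the moving image between iterations is omitted.

```python
import numpy as np
from scipy import ndimage

def demons_step(fixed, moving, disp, sigma=2.0):
    """One 2-D demons update: compute the demons force from the intensity
    mismatch and the fixed-image gradient, add it to the displacement
    field, then regularize by Gaussian smoothing."""
    gy, gx = np.gradient(fixed)
    diff = moving - fixed
    denom = gx**2 + gy**2 + diff**2
    denom[denom == 0] = np.inf          # avoid division by zero
    disp[0] += diff * gy / denom        # row component of the force
    disp[1] += diff * gx / denom        # column component of the force
    disp[0] = ndimage.gaussian_filter(disp[0], sigma)
    disp[1] = ndimage.gaussian_filter(disp[1], sigma)
    return disp
```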
76 FR 12090 - Commission Information Collection Activities (FERC-73); Comment Request; Extension
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-04
... Pipelines Service Life Data'' (OMB No. 1902-0019) is used by the Commission to implement the statutory... depreciation rates, the pipeline companies are required to provide service life data as part of their data submissions if the proposed depreciation rates are based on the remaining physical life calculations.
Evidence-Based Professional Development Considerations along the School-to-Prison Pipeline
ERIC Educational Resources Information Center
Houchins, David E.; Shippen, Margaret E.; Murphy, Kristin M.
2012-01-01
This article addresses professional development (PD) issues for those who provide services to students in the school-to-prison pipeline (STPP). Emphasis is on implementing evidence-based practices. The authors use a modified version of Desimone's PD framework for the structure of this article involving (a) collective participation and common…
PMAnalyzer: a new web interface for bacterial growth curve analysis.
Cuevas, Daniel A; Edwards, Robert A
2017-06-15
Bacterial growth curves are essential representations for characterizing bacteria metabolism within a variety of media compositions. Using high-throughput spectrophotometers capable of processing tens of 96-well plates, quantitative phenotypic information can be easily integrated into the current data structures that describe a bacterial organism. The PMAnalyzer pipeline performs a growth curve analysis to parameterize the unique features occurring within microtiter wells containing specific growth media sources. We have expanded the pipeline capabilities and provide a user-friendly, online implementation of this automated pipeline. PMAnalyzer version 2.0 provides fast, automatic growth curve parameter analysis, growth identification, high-resolution figures of sample-replicate growth curves, and several statistical analyses. PMAnalyzer v2.0 can be found at https://edwards.sdsu.edu/pmanalyzer/ . Source code for the pipeline can be found on GitHub at https://github.com/dacuevas/PMAnalyzer . Source code for the online implementation can be found on GitHub at https://github.com/dacuevas/PMAnalyzerWeb . dcuevas08@gmail.com. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
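A hedged sketch of the core curve-parameterization idea using one common logistic form; the pipeline's actual model and fitting details may differ, and the data below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, A, mu, lam):
    """Logistic growth curve in the Zwietering parameterization:
    asymptote A, maximum specific growth rate mu, lag time lam."""
    return A / (1.0 + np.exp(4.0 * mu * (lam - t) / A + 2.0))

# Synthetic OD readings for one well over 24 h (placeholder data).
t = np.linspace(0.0, 24.0, 49)
od = logistic(t, 1.2, 0.25, 4.0) + np.random.normal(0.0, 0.01, t.size)

(A, mu, lam), _ = curve_fit(logistic, t, od, p0=[1.0, 0.2, 2.0])
print(f"asymptote {A:.2f}, max growth rate {mu:.3f}/h, lag {lam:.1f} h")
```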
Automatic Generalizability Method of Urban Drainage Pipe Network Considering Multi-Features
NASA Astrophysics Data System (ADS)
Zhu, S.; Yang, Q.; Shao, J.
2018-05-01
Urban drainage systems are an indispensable dataset for storm-flooding simulation. Given data availability and current computing power, the structure and complexity of urban drainage systems need to be simplified. To date, however, the simplification procedure has mainly depended on manual operation, which often leads to mistakes and low work efficiency. This work draws on the classification methodology used for road systems and proposes the concept of a pipeline stroke. Pipeline length, the angle between two pipelines, the road level to which a pipeline belongs, and pipeline diameter were chosen as similarity criteria to generate pipeline strokes. Finally, we designed an automatic method to generalize drainage systems that accounts for these multiple features. This technique can improve the efficiency and accuracy of the generalization of drainage systems. In addition, it is beneficial to the study of urban storm floods.
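A hedged sketch of stroke growth driven by the angle criterion alone; the threshold is hypothetical, and the paper additionally weighs pipeline length, road level, and diameter.

```python
import math

def deflection_deg(seg_a, seg_b):
    """Deflection angle between two connected segments, each given as
    ((x1, y1), (x2, y2)) with seg_a ending where seg_b starts."""
    ax, ay = seg_a[1][0] - seg_a[0][0], seg_a[1][1] - seg_a[0][1]
    bx, by = seg_b[1][0] - seg_b[0][0], seg_b[1][1] - seg_b[0][1]
    ang = math.degrees(math.atan2(by, bx) - math.atan2(ay, ax))
    return abs((ang + 180.0) % 360.0 - 180.0)

def grow_stroke(chain, max_deflection=30.0):
    """Extend a stroke while the next connected pipe continues in nearly
    the same direction (threshold is a hypothetical value)."""
    stroke = [chain[0]]
    for seg in chain[1:]:
        if deflection_deg(stroke[-1], seg) > max_deflection:
            break
        stroke.append(seg)
    return stroke
```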
Sensor and transmitter system for communication in pipelines
Cooper, John F.; Burnham, Alan K.
2013-01-29
A system for sensing and communicating in a pipeline that contains a fluid. An acoustic signal containing information about a property of the fluid is produced in the pipeline. The signal is transmitted through the pipeline. The signal is received, and the information it carries is used by a controller.
DOT National Transportation Integrated Search
1997-07-14
These standards represent a guideline for preparing digital data for inclusion in the National Pipeline Mapping System Repository. The standards were created with input from the pipeline industry and government agencies. They address the submission o...
Sample to answer visualization pipeline for low-cost point-of-care blood cell counting
NASA Astrophysics Data System (ADS)
Smith, Suzanne; Naidoo, Thegaran; Davies, Emlyn; Fourie, Louis; Nxumalo, Zandile; Swart, Hein; Marais, Philip; Land, Kevin; Roux, Pieter
2015-03-01
We present a visualization pipeline from sample to answer for point-of-care blood cell counting applications. Effective and low-cost point-of-care medical diagnostic tests provide developing countries and rural communities with accessible healthcare solutions [1], and can be particularly beneficial for blood cell count tests, which are often the starting point in the process of diagnosing a patient [2]. The initial focus of this work is on total white and red blood cell counts, using a microfluidic cartridge [3] for sample processing. Analysis of the processed samples has been implemented by means of two main optical visualization systems developed in-house: 1) a fluidic operation analysis system using high speed video data to determine volumes, mixing efficiency and flow rates, and 2) a microscopy analysis system to investigate homogeneity and concentration of blood cells. Fluidic parameters were derived from the optical flow [4] as well as color-based segmentation of the different fluids using a hue-saturation-value (HSV) color space. Cell count estimates were obtained using automated microscopy analysis and were compared to a widely accepted manual method for cell counting using a hemocytometer [5]. The results using the first iteration microfluidic device [3] showed that the simplest - and thus lowest-cost - approach for microfluidic component implementation was not adequate as compared to techniques based on manual cell counting principles. An improved microfluidic design has been developed to incorporate enhanced mixing and metering components, which together with this work provides the foundation on which to successfully implement automated, rapid and low-cost blood cell counting tests.
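A hedged sketch of the HSV color-based fluid segmentation step; the hue band, file name, and fill-fraction readout below are assumptions, not the system's calibrated values.

```python
import cv2
import numpy as np

frame = cv2.imread("cartridge_frame.png")          # hypothetical video frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Assumed hue band for a dyed reagent; tune per fluid (OpenCV hue is 0-179).
lower = np.array([100, 60, 40])
upper = np.array([130, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# The filled fraction of a metering channel approximates the fluid volume.
fill_fraction = cv2.countNonZero(mask) / mask.size
print(f"channel fill fraction: {fill_fraction:.2%}")
```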
Algorithms for parallel flow solvers on message passing architectures
NASA Technical Reports Server (NTRS)
Vanderwijngaart, Rob F.
1995-01-01
The purpose of this project has been to identify and test suitable technologies for implementation of fluid flow solvers -- possibly coupled with structures and heat equation solvers -- on MIMD parallel computers. In the course of this investigation much attention has been paid to efficient domain decomposition strategies for ADI-type algorithms. Multi-partitioning derives its efficiency from the assignment of several blocks of grid points to each processor in the parallel computer. A coarse-grain parallelism is obtained, and a near-perfect load balance results. In uni-partitioning every processor receives responsibility for exactly one block of grid points instead of several. This necessitates fine-grain pipelined program execution in order to obtain a reasonable load balance. Although fine-grain parallelism is less desirable on many systems, especially high-latency networks of workstations, uni-partition methods are still in wide use in production codes for flow problems. Consequently, it remains important to achieve good efficiency with this technique that has essentially been superseded by multi-partitioning for parallel ADI-type algorithms. Another reason for the concentration on improving the performance of pipeline methods is their applicability in other types of flow solver kernels with stronger implied data dependence. Analytical expressions can be derived for the size of the dynamic load imbalance incurred in traditional pipelines. From these it can be determined what is the optimal first-processor retardation that leads to the shortest total completion time for the pipeline process. Theoretical predictions of pipeline performance with and without optimization match experimental observations on the iPSC/860 very well. Analysis of pipeline performance also highlights the effect of uncareful grid partitioning in flow solvers that employ pipeline algorithms. If grid blocks at boundaries are not at least as large in the wall-normal direction as those immediately adjacent to them, then the first processor in the pipeline will receive a computational load that is less than that of subsequent processors, magnifying the pipeline slowdown effect. Extra compensation is needed for grid boundary effects, even if all grid blocks are equally sized.
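The analytical load-imbalance expressions referenced above are not quoted in this summary; as a rough, hedged model only, an idealized software pipeline in which each of P processors works through N_b equal sub-blocks of cost t fills and drains once, giving the familiar completion time and efficiency below (communication latency and the first-processor retardation the report optimizes are ignored).

```latex
T_{\text{pipe}} \approx (N_b + P - 1)\,t,
\qquad
E = \frac{T_{\text{seq}}/P}{T_{\text{pipe}}} = \frac{N_b}{N_b + P - 1}
```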
Rodríguez, Manuel; Magdaleno, Eduardo; Pérez, Fernando; García, Cristhian
2017-01-01
Non-equispaced Fast Fourier transform (NFFT) is a very important algorithm in several technological and scientific areas such as synthetic aperture radar, computational photography, medical imaging, telecommunications, seismic analysis and so on. However, its computational complexity is high. In this paper, we describe an efficient NFFT implementation with a hardware coprocessor using an All-Programmable System-on-Chip (APSoC). This is a hybrid device that employs an Advanced RISC Machine (ARM) as Processing System with Programmable Logic for high-performance digital signal processing through parallelism and pipeline techniques. The algorithm has been coded in C language with pragma directives to optimize the architecture of the system. We have used the novel Software-Defined System-on-Chip (SDSoC) development tool that simplifies the interface and partitioning between hardware and software. This provides shorter development cycles and iterative improvements by exploring several architectures of the global system. The computational results show that hardware acceleration significantly outperformed the software-based implementation. PMID:28350358
A Real-Time System for Lane Detection Based on FPGA and DSP
NASA Astrophysics Data System (ADS)
Xiao, Jing; Li, Shutao; Sun, Bin
2016-12-01
This paper presents a real-time lane detection system, comprising an edge detection and improved Hough Transform based lane detection algorithm and its hardware implementation on a field programmable gate array (FPGA) and a digital signal processor (DSP). First, gradient amplitude and direction information are combined to extract lane edge information. Then, the information is used to determine the region of interest. Finally, the lanes are extracted using an improved Hough Transform. The image processing module of the system consists of the FPGA and the DSP. In particular, the algorithms implemented in the FPGA are pipelined and process in parallel so that the system can run in real time. In addition, the DSP realizes the lane-line extraction and display functions with the improved Hough Transform. The experimental results show that the proposed system is able to detect lanes under different road situations efficiently and effectively.
Ali, Salman; Qaisar, Saad Bin; Saeed, Husnain; Farhan Khan, Muhammad; Naeem, Muhammad; Anpalagan, Alagan
2015-01-01
The synergy of computational and physical network components leading to the Internet of Things, Data and Services has been made feasible by the use of Cyber Physical Systems (CPSs). CPS engineering promises to impact system condition monitoring for a diverse range of fields from healthcare, manufacturing, and transportation to aerospace and warfare. CPS for environment monitoring applications completely transforms human-to-human, human-to-machine and machine-to-machine interactions with the use of Internet Cloud. A recent trend is to gain assistance from mergers between virtual networking and physical actuation to reliably perform all conventional and complex sensing and communication tasks. Oil and gas pipeline monitoring provides a novel example of the benefits of CPS, providing a reliable remote monitoring platform to leverage environment, strategic and economic benefits. In this paper, we evaluate the applications and technical requirements for seamlessly integrating CPS with sensor network plane from a reliability perspective and review the strategies for communicating information between remote monitoring sites and the widely deployed sensor nodes. Related challenges and issues in network architecture design and relevant protocols are also provided with classification. This is supported by a case study on implementing reliable monitoring of oil and gas pipeline installations. Network parameters like node-discovery, node-mobility, data security, link connectivity, data aggregation, information knowledge discovery and quality of service provisioning have been reviewed. PMID:25815444
NASA Technical Reports Server (NTRS)
Couvidat, S.; Zhao, J.; Birch, A. C.; Kosovichev, A. G.; Duvall, Thomas L., Jr.; Parchevsky, K.; Scherrer, P. H.
2010-01-01
The Helioseismic and Magnetic Imager (HMI) instrument onboard the Solar Dynamics Observatory (SDO) satellite is designed to produce high-resolution Doppler-velocity maps of oscillations at the solar surface with high temporal cadence. To take advantage of these high-quality oscillation data, a time-distance helioseismology pipeline (Zhao et al., Solar Phys. submitted, 2010) has been implemented at the Joint Science Operations Center (JSOC) at Stanford University. The aim of this pipeline is to generate maps of acoustic travel times from oscillations on the solar surface, and to infer subsurface 3D flow velocities and sound-speed perturbations. The wave travel times are measured from cross-covariances of the observed solar oscillation signals. For implementation into the pipeline we have investigated three different travel-time definitions developed in time-distance helioseismology: a Gabor-wavelet fitting (Kosovichev and Duvall, SCORE'96: Solar Convection and Oscillations and Their Relationship, ASSL, Dordrecht, 241, 1997), a minimization relative to a reference cross-covariance function (Gizon and Birch, Astrophys. J. 571, 966, 2002), and a linearized version of the minimization method (Gizon and Birch, Astrophys. J. 614, 472, 2004). Using Doppler-velocity data from the Michelson Doppler Imager (MDI) instrument onboard SOHO, we tested and compared these definitions for the mean and difference travel-time perturbations measured from reciprocal signals. Although all three procedures return similar travel times in a quiet-Sun region, the method of Gizon and Birch (Astrophys. J. 614, 472, 2004) gives travel times that are significantly different from the others in a magnetic (active) region. Thus, for the pipeline implementation we chose the procedures of Kosovichev and Duvall (SCORE'96: Solar Convection and Oscillations and Their Relationship, ASSL, Dordrecht, 241, 1997) and Gizon and Birch (Astrophys. J. 571, 966, 2002). We investigated the relationships among these three travel-time definitions, their sensitivities to fitting parameters, and estimated the random errors that they produce.
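A hedged numpy sketch of the cross-covariance measurement underlying all three travel-time definitions; the real pipeline fits the covariance (Gabor wavelet or reference-function minimization) rather than peak-picking as done here.

```python
import numpy as np

def travel_time_lag(sig_a, sig_b, dt):
    """Cross-covariance of the Doppler signals at two surface points;
    the lag of its peak is a crude stand-in for the wave travel time."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    cov = np.correlate(b, a, mode="full")          # all lags, length 2N-1
    lags = np.arange(-len(a) + 1, len(a))          # sample offsets
    return lags[np.argmax(cov)] * dt               # seconds, b relative to a
```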
Cohen Freue, Gabriela V.; Meredith, Anna; Smith, Derek; Bergman, Axel; Sasaki, Mayu; Lam, Karen K. Y.; Hollander, Zsuzsanna; Opushneva, Nina; Takhar, Mandeep; Lin, David; Wilson-McManus, Janet; Balshaw, Robert; Keown, Paul A.; Borchers, Christoph H.; McManus, Bruce; Ng, Raymond T.; McMaster, W. Robert
2013-01-01
Recent technical advances in the field of quantitative proteomics have stimulated a large number of biomarker discovery studies of various diseases, providing avenues for new treatments and diagnostics. However, inherent challenges have limited the successful translation of candidate biomarkers into clinical use, thus highlighting the need for a robust analytical methodology to transition from biomarker discovery to clinical implementation. We have developed an end-to-end computational proteomic pipeline for biomarkers studies. At the discovery stage, the pipeline emphasizes different aspects of experimental design, appropriate statistical methodologies, and quality assessment of results. At the validation stage, the pipeline focuses on the migration of the results to a platform appropriate for external validation, and the development of a classifier score based on corroborated protein biomarkers. At the last stage towards clinical implementation, the main aims are to develop and validate an assay suitable for clinical deployment, and to calibrate the biomarker classifier using the developed assay. The proposed pipeline was applied to a biomarker study in cardiac transplantation aimed at developing a minimally invasive clinical test to monitor acute rejection. Starting with an untargeted screening of the human plasma proteome, five candidate biomarker proteins were identified. Rejection-regulated proteins reflect cellular and humoral immune responses, acute phase inflammatory pathways, and lipid metabolism biological processes. A multiplex multiple reaction monitoring mass-spectrometry (MRM-MS) assay was developed for the five candidate biomarkers and validated by enzyme-linked immunosorbent assays (ELISA) and immunonephelometric assays (INA). A classifier score based on corroborated proteins demonstrated that the developed MRM-MS assay provides an appropriate methodology for an external validation, which is still in progress. Plasma proteomic biomarkers of acute cardiac rejection may offer a relevant post-transplant monitoring tool to effectively guide clinical care. The proposed computational pipeline is highly applicable to a wide range of biomarker proteomic studies. PMID:23592955
Visual analysis of trash bin processing on garbage trucks in low resolution video
NASA Astrophysics Data System (ADS)
Sidla, Oliver; Loibner, Gernot
2015-03-01
We present a system for trash can detection and counting from a camera mounted on a garbage collection truck. A working prototype has been successfully implemented and tested with several hours of real-world video. The detection pipeline consists of HOG detectors for two trash can sizes, plus mean-shift tracking and low-level image processing for the analysis of the garbage disposal process. Considering the harsh environment and unfavorable imaging conditions, the process already works well enough that very useful measurements can be extracted from the video data. The false-positive/false-negative rate of the full processing pipeline is about 5-6% in fully automatic operation. Video data of a full day (about 8 hrs) can be processed in about 30 minutes on a standard PC.
Efficient k-Winner-Take-All Competitive Learning Hardware Architecture for On-Chip Learning
Ou, Chien-Min; Li, Hui-Ya; Hwang, Wen-Jyi
2012-01-01
A novel k-winners-take-all (k-WTA) competitive learning (CL) hardware architecture is presented in this paper for on-chip learning. The architecture is based on an efficient pipeline allowing k-WTA competition processes associated with different training vectors to be performed concurrently. The pipeline architecture employs a novel codeword swapping scheme so that neurons failing the competition for a training vector are immediately available for the competitions for the subsequent training vectors. The architecture is implemented on a field programmable gate array (FPGA). It is used as a hardware accelerator in a system on programmable chip (SOPC) for real-time on-chip learning. Experimental results show that the SOPC has significantly lower training time than that of other k-WTA CL counterparts operating with or without hardware support.
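A hedged software model of one k-WTA competitive-learning step; the FPGA's concurrent pipeline and codeword-swapping scheme are not modeled, only the per-vector competition they accelerate.

```python
import numpy as np

def k_wta_update(codebook, x, k=2, lr=0.05):
    """One k-WTA step: the k neurons closest to the training vector x
    win the competition and move toward x; losers stay put (in the FPGA
    design they are swapped back into play for the next vector)."""
    d = np.linalg.norm(codebook - x, axis=1)   # distances to all neurons
    winners = np.argsort(d)[:k]                # k smallest distances win
    codebook[winners] += lr * (x - codebook[winners])
    return winners
```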
The Herschel Data Processing System - Hipe And Pipelines - During The Early Mission Phase
NASA Astrophysics Data System (ADS)
Ardila, David R.; Herschel Science Ground Segment Consortium
2010-01-01
The Herschel Space Observatory, the fourth cornerstone mission in the ESA science program, was launched on 14 May 2009. With a 3.5 m telescope, it is the largest space telescope ever launched. Herschel's three instruments (HIFI, PACS, and SPIRE) perform photometry and spectroscopy in the 55 - 672 micron range and will deliver exciting science for the astronomical community during at least three years of routine observations. Here we summarize the state of the Herschel Data Processing System and give an overview of future development milestones and plans. The development of the Herschel Data Processing System started seven years ago to support the data analysis for Instrument Level Tests. Resources were made available to implement a freely distributable Data Processing System capable of interactively and automatically reducing Herschel data at different processing levels. The system combines data retrieval, pipeline execution and scientific analysis in one single environment. The software is coded in Java and Jython to be platform independent and to avoid the need for commercial licenses. The Herschel Interactive Processing Environment (HIPE) is the user-friendly face of Herschel Data Processing. The first PACS preview observation of M51 was processed with HIPE, using basic pipeline scripts, into a fantastic image within 30 minutes of data reception. Also the first HIFI observations of DR-21 were successfully reduced to high-quality spectra, followed by SPIRE observations of M66 and M74. The Herschel Data Processing System is a joint development by the Herschel Science Ground Segment Consortium, consisting of ESA, the NASA Herschel Science Center, and the HIFI, PACS and SPIRE consortium members.
Pipeline systems - safety for assets and transport regularity
DOT National Transportation Integrated Search
1997-01-01
This review regarding safety for assets and financial interests for pipeline systems has showed how this aspect has been taken care of in the existing petroleum legislation. It has been demonstrated that the integrity of pipeline systems with the res...
Bad Actors Criticality Assessment for Pipeline system
NASA Astrophysics Data System (ADS)
Nasir, Meseret; Chong, Kit wee; Osman, Sabtuni; Siaw Khur, Wee
2015-04-01
Failure of a pipeline system could bring huge economic losses. In order to mitigate such catastrophic losses, it is required to evaluate and rank the impact of each bad actor of the pipeline system. In this study, bad actors are the root causes or any potential factors leading to system downtime. Fault Tree Analysis (FTA) is used to analyze the probability of occurrence of each bad actor. Birnbaum's importance and criticality measure (BICM) is also employed to rank the impact of each bad actor on pipeline system failure. The results demonstrate that internal corrosion, external corrosion and construction damage are critical and contribute strongly to pipeline system failure, with 48.0%, 12.4% and 6.0%, respectively. Thus, a minor improvement in internal corrosion, external corrosion and construction damage would bring significant changes in pipeline system performance and reliability. These results could also be useful for developing an efficient maintenance strategy by identifying the critical bad actors.
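A hedged sketch of the FTA and Birnbaum-importance computation for an OR-gate top event with independent bad actors; the probabilities below are placeholders, not the study's fitted values.

```python
def top_event_prob(p):
    """OR-gate fault tree: the system fails if any independent bad actor
    occurs, so P(top) = 1 - prod(1 - p_i)."""
    q = 1.0
    for pi in p.values():
        q *= 1.0 - pi
    return 1.0 - q

def birnbaum(p, actor):
    """Birnbaum importance: P(top | actor certain) - P(top | actor removed)."""
    return (top_event_prob({**p, actor: 1.0})
            - top_event_prob({**p, actor: 0.0}))

# Placeholder probabilities for illustration only.
p = {"internal corrosion": 0.020, "external corrosion": 0.008,
     "construction damage": 0.004}
ranking = sorted(p, key=lambda a: birnbaum(p, a), reverse=True)
print(ranking)   # bad actors ordered by their impact on system failure
```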
Development of a robotic system of nonstripping pipeline repair by reinforced polymeric compositions
NASA Astrophysics Data System (ADS)
Rybalkin, LA
2018-03-01
The article considers the possibility of creating a robotic system for pipeline repair. The repair is performed by forming an inner layer of special polyurethane compositions reinforced with short glass-fiber strands. This approach makes it possible to repair pipelines without excavation work or pipe replacement.
A 9-Bit 50 MSPS Quadrature Parallel Pipeline ADC for Communication Receiver Application
NASA Astrophysics Data System (ADS)
Roy, Sounak; Banerjee, Swapna
2018-03-01
This paper presents the design and implementation of a pipeline Analog-to-Digital Converter (ADC) for superheterodyne receiver application. Several enhancement techniques have been applied in implementing the ADC, in order to relax the target specifications of its building blocks. The concepts of time interleaving and double sampling have been used simultaneously to enhance the sampling speed and to reduce the number of amplifiers used in the ADC. Removal of a front end sample-and-hold amplifier is possible by employing dynamic comparators with switched capacitor based comparison of input signal and reference voltage. Each module of the ADC comprises two 2.5-bit stages followed by two 1.5-bit stages and a 3-bit flash stage. Four such pipeline ADC modules are time interleaved using two pairs of non-overlapping clock signals. These two pairs of clock signals are in phase quadrature with each other. Hence the term quadrature parallel pipeline ADC has been used. These configurations ensure that the entire ADC contains only eight operational transconductance amplifiers. The ADC is implemented in a 0.18-μm CMOS process and supply voltage of 1.8 V. The prototype is tested at sampling frequencies of 50 and 75 MSPS, producing an Effective Number of Bits (ENOB) of 6.86 and 6.11 bits, respectively. At peak sampling speed, the core ADC consumes only 65 mW of power.
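As a hedged illustration of how a pipeline ADC stage digitizes a coarse code and passes on an amplified residue, the simpler 1.5-bit flavor is sketched below; the reported design uses 2.5-bit front stages and switched-capacitor circuits, not this floating-point model.

```python
def stage_1p5bit(vin, vref=1.0):
    """One 1.5-bit pipeline stage: coarse decision d in {-1, 0, +1}
    (the redundancy absorbs comparator offsets), residue gained by 2."""
    if vin > vref / 4:
        d = 1
    elif vin < -vref / 4:
        d = -1
    else:
        d = 0
    return d, 2 * vin - d * vref

def convert(vin, n_stages=8, vref=1.0):
    """Cascade the stages; overlapping the raw codes performs the
    standard digital error correction."""
    code = 0
    for _ in range(n_stages):
        d, vin = stage_1p5bit(vin, vref)
        code = 2 * code + d          # overlap-and-add correction
    return code
```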
A pipeline for comprehensive and automated processing of electron diffraction data in IPLT
Schenk, Andreas D.; Philippsen, Ansgar; Engel, Andreas; Walz, Thomas
2013-01-01
Electron crystallography of two-dimensional crystals allows the structural study of membrane proteins in their native environment, the lipid bilayer. Determining the structure of a membrane protein at near-atomic resolution by electron crystallography remains, however, a very labor-intense and time-consuming task. To simplify and accelerate the data processing aspect of electron crystallography, we implemented a pipeline for the processing of electron diffraction data using the Image Processing Library & Toolbox (IPLT), which provides a modular, flexible, integrated, and extendable cross-platform, open-source framework for image processing. The diffraction data processing pipeline is organized as several independent modules implemented in Python. The modules can be accessed either from a graphical user interface or through a command line interface, thus meeting the needs of both novice and expert users. The low-level image processing algorithms are implemented in C++ to achieve optimal processing performance, and their interface is exported to Python using a wrapper. For enhanced performance, the Python processing modules are complemented with a central data managing facility that provides a caching infrastructure. The validity of our data processing algorithms was verified by processing a set of aquaporin-0 diffraction patterns with the IPLT pipeline and comparing the resulting merged data set with that obtained by processing the same diffraction patterns with the classical set of MRC programs. PMID:23500887
75 FR 53733 - Pipeline Safety: Information Collection Activities
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-01
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2010-0246] Pipeline Safety: Information Collection Activities AGENCY: Pipeline and Hazardous... liquefied natural gas, hazardous liquid, and gas transmission pipeline systems operated by a company. The...
NOBLE - Flexible concept recognition for large-scale biomedical natural language processing.
Tseytlin, Eugene; Mitchell, Kevin; Legowski, Elizabeth; Corrigan, Julia; Chavan, Girish; Jacobson, Rebecca S
2016-01-14
Natural language processing (NLP) applications are increasingly important in biomedical data analysis, knowledge engineering, and decision support. Concept recognition is an important component task for NLP pipelines, and can be either general-purpose or domain-specific. We describe a novel, flexible, and general-purpose concept recognition component for NLP pipelines, and compare its speed and accuracy against five commonly used alternatives on both a biological and clinical corpus. NOBLE Coder implements a general algorithm for matching terms to concepts from an arbitrary vocabulary set. The system's matching options can be configured individually or in combination to yield specific system behavior for a variety of NLP tasks. The software is open source, freely available, and easily integrated into UIMA or GATE. We benchmarked speed and accuracy of the system against the CRAFT and ShARe corpora as reference standards and compared it to MMTx, MGrep, Concept Mapper, cTAKES Dictionary Lookup Annotator, and cTAKES Fast Dictionary Lookup Annotator. We describe key advantages of the NOBLE Coder system and associated tools, including its greedy algorithm, configurable matching strategies, and multiple terminology input formats. These features provide unique functionality when compared with existing alternatives, including state-of-the-art systems. On two benchmarking tasks, NOBLE's performance exceeded commonly used alternatives, performing almost as well as the most advanced systems. Error analysis revealed differences in error profiles among systems. NOBLE Coder is comparable to other widely used concept recognition systems in terms of accuracy and speed. Advantages of NOBLE Coder include its interactive terminology builder tool, ease of configuration, and adaptability to various domains and tasks. NOBLE provides a term-to-concept matching system suitable for general concept recognition in biomedical NLP pipelines.
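A minimal sketch of greedy longest-match term-to-concept lookup, the general idea behind such matchers. The vocabulary, concept IDs, and data structures here are invented, and NOBLE Coder's actual algorithm and configurable matching strategies are considerably richer:

```python
def greedy_match(tokens, vocabulary):
    """Greedy longest-match of token spans against a term-to-concept map."""
    matches = []
    i = 0
    while i < len(tokens):
        found = None
        # Try the longest span starting at position i first.
        for j in range(len(tokens), i, -1):
            term = " ".join(tokens[i:j]).lower()
            if term in vocabulary:
                found = (i, j, vocabulary[term])
                break
        if found:
            matches.append(found)
            i = found[1]          # greedy: skip past the matched span
        else:
            i += 1
    return matches

vocab = {"heart attack": "CONCEPT:MI", "attack": "CONCEPT:ATTACK"}
print(greedy_match("history of heart attack".split(), vocab))
# [(2, 4, 'CONCEPT:MI')] -- the longer term wins over 'attack'
```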
An Overview of Research and Evaluation Designs for Dissemination and Implementation
Brown, C. Hendricks; Curran, Geoffrey; Palinkas, Lawrence A.; Aarons, Gregory A.; Wells, Kenneth B.; Jones, Loretta; Collins, Linda M.; Duan, Naihua; Mittman, Brian S.; Wallace, Andrea; Tabak, Rachel G.; Ducharme, Lori; Chambers, David; Neta, Gila; Wiley, Tisha; Landsverk, John; Cheung, Ken; Cruden, Gracelyn
2016-01-01
Background The wide variety of dissemination and implementation designs now being used to evaluate and improve health systems and outcomes warrants review of the scope, features, and limitations of these designs. Methods This paper is one product of a design workgroup formed in 2013 by the National Institutes of Health to address dissemination and implementation research, and whose members represented diverse methodologic backgrounds, content focus areas, and health sectors. These experts integrated their collective knowledge on dissemination and implementation designs with searches of published evaluation strategies. Results This paper emphasizes randomized and non-randomized designs for the traditional translational research continuum or pipeline, which builds on existing efficacy and effectiveness trials to examine how one or more evidence-based clinical/prevention interventions are adopted, scaled up, and sustained in community or service delivery systems. We also mention other designs, including hybrid designs that combine effectiveness and implementation research, quality improvement designs for local knowledge, and designs that use simulation modeling. PMID:28384085
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-27
... PHMSA-2013-0248] Pipeline Safety: Random Drug Testing Rate; Contractor Management Information System Reporting; and Obtaining Drug and Alcohol Management Information System Sign-In Information AGENCY: Pipeline... Management Information System (MIS) Data; and New Method for Operators to Obtain User Name and Password for...
Tappis, Hannah; Doocy, Shannon; Amoako, Stephen
2013-01-01
ABSTRACT Despite decades of support for international food assistance programs by the U.S. Agency for International Development (USAID) Office of Food for Peace, relatively little is known about the commodity pipeline and management issues these programs face in post-conflict and politically volatile settings. Based on an audit of the program's commodity tracking system and interviews with 13 key program staff, this case study documents the experiences of organizations implementing the first USAID-funded non-emergency (development) food assistance program approved for Sudan and South Sudan. Key challenges and lessons learned in this experience about food commodity procurement, transport, and management may help improve the design and implementation of future development food assistance programs in a variety of complex, food-insecure settings around the world. Specifically, expanding shipping routes in complex political situations may facilitate reliable and timely commodity delivery. In addition, greater flexibility to procure commodities locally, rather than shipping U.S.-procured commodities, may avoid unnecessary shipping delays and reduce costs. PMID:25276532
A software framework for pipelined arithmetic algorithms in field programmable gate arrays
NASA Astrophysics Data System (ADS)
Kim, J. B.; Won, E.
2018-03-01
Pipelined algorithms implemented in field programmable gate arrays are extensively used for hardware triggers in modern experimental high energy physics, and the complexity of such algorithms is increasing rapidly. For the development of such hardware triggers, algorithms are developed in C++, ported to a hardware description language for synthesizing firmware, and then ported back to C++ for simulating the firmware response down to the single-bit level. We present a C++ software framework which automatically simulates and generates hardware description language code for pipelined arithmetic algorithms.
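The bit-level software simulation of firmware that the abstract describes can be illustrated with a toy model. The sketch below (in Python rather than the framework's C++) advances values through pipeline registers one clock at a time, with entirely invented parameters:

```python
def pipelined_mac(samples, coeff, stages=3, width=16):
    """Bit-accurate simulation of a toy pipelined multiply-accumulate.

    Each clock, values advance one register stage; results appear only
    after a fixed latency of `stages` cycles, as in real firmware.
    """
    mask = (1 << width) - 1
    regs = [0] * stages                # pipeline registers
    acc = 0
    outputs = []
    for s in samples + [0] * stages:   # trailing zeros flush the pipeline
        # Stage 0 computes the product; later stages just delay it.
        regs = [(s * coeff) & mask] + regs[:-1]
        acc = (acc + regs[-1]) & mask  # accumulate the stage that emerges
        outputs.append(acc)
    return outputs

# Final value is 5 * (1 + 2 + 3 + 4) = 50, arriving `stages` cycles late.
print(pipelined_mac([1, 2, 3, 4], coeff=5))
```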
A pipeline design of a fast prime factor DFT on a finite field
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, In-Shek; Shao, H. M.; Reed, Irving S.; Shyu, Hsuen-Chyun
1988-01-01
A conventional prime factor discrete Fourier transform (DFT) algorithm is used to realize a discrete Fourier-like transform on the finite field GF(q^n). This algorithm is developed to compute cyclic convolutions of complex numbers and to decode Reed-Solomon codes. Such a pipeline fast prime factor DFT algorithm over GF(q^n) is regular, simple, expandable, and naturally suitable for VLSI implementation. An example illustrating the pipeline aspect of a 30-point transform over GF(q^n) is presented.
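The underlying idea, a DFT over a finite field used to compute cyclic convolutions, can be sketched in software. The following uses GF(257) and length 16 for readability; the paper's transform over GF(q^n) and its pipeline realization differ:

```python
# Naive transform over GF(p): a software analogue of a DFT on a finite
# field. Parameters chosen purely for illustration.
P = 257                      # Fermat prime, so 16 divides P - 1
N = 16
W = pow(3, (P - 1) // N, P)  # 3 is a primitive root mod 257

def ntt(a, w=W):
    """O(N^2) transform; a has length N, coefficients mod P."""
    return [sum(a[j] * pow(w, i * j, P) for j in range(N)) % P
            for i in range(N)]

def intt(A):
    inv_n = pow(N, P - 2, P)
    a = ntt(A, pow(W, P - 2, P))         # inverse uses w^-1
    return [(x * inv_n) % P for x in a]

def cyclic_convolution(a, b):
    """Pointwise multiply in the transform domain, then invert."""
    A, B = ntt(a), ntt(b)
    return intt([(x * y) % P for x, y in zip(A, B)])

a = [1, 2, 3] + [0] * 13
b = [4, 5] + [0] * 14
print(cyclic_convolution(a, b))  # [4, 13, 22, 15, 0, ...] = direct result mod 257
```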
A 32-bit Ultrafast Parallel Correlator using Resonant Tunneling Devices
NASA Technical Reports Server (NTRS)
Kulkarni, Shriram; Mazumder, Pinaki; Haddad, George I.
1995-01-01
An ultrafast 32-bit pipeline correlator has been implemented using resonant tunneling diodes (RTDs) and heterojunction bipolar transistors (HBTs). The negative differential resistance (NDR) characteristic of RTDs is the basis of logic gates with a self-latching property that eliminates the pipeline area and delay overheads which limit throughput in conventional technologies. The circuit topology also allows threshold logic functions such as minority/majority to be implemented in a compact manner, reducing the overall complexity and delay of arbitrary logic circuits. The parallel correlator is an essential component in code division multiple access (CDMA) transceivers, used for the continuous calculation of correlation between an incoming data stream and a PN sequence. Simulation results show that a nano-pipelined correlator can provide an effective throughput of one 32-bit correlation every 100 picoseconds, using minimal hardware, with a power dissipation of 1.5 watts. RTD+HBT logic gates have been fabricated, and the RTD+HBT correlator is compared with state-of-the-art complementary metal oxide semiconductor (CMOS) implementations.
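The per-cycle operation of such a correlator reduces to counting bit agreements between a data word and a PN word. A one-line software model of that function, assuming 32-bit words:

```python
def correlate32(word, pn):
    """Correlation of a 32-bit word against a PN sequence word.

    agreements - disagreements = 32 - 2 * popcount(word XOR pn),
    i.e. the quantity the parallel hardware evaluates each cycle.
    """
    return 32 - 2 * bin((word ^ pn) & 0xFFFFFFFF).count("1")

pn = 0xACE1ACE1                    # arbitrary illustrative PN word
print(correlate32(pn, pn))         # 32: perfect agreement
print(correlate32(~pn, pn))        # -32: perfect disagreement
print(correlate32(0x00000000, pn)) # 0: half the bits agree
```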
77 FR 74275 - Pipeline Safety: Information Collection Activities
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-13
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No.... These regulations require operators of hazardous liquid pipelines and gas pipelines to develop and... control room. Affected Public: Operators of both natural gas and hazardous liquid pipeline systems. Annual...
An integrated database-pipeline system for studying single nucleotide polymorphisms and diseases.
Yang, Jin Ok; Hwang, Sohyun; Oh, Jeongsu; Bhak, Jong; Sohn, Tae-Kwon
2008-12-12
Studies on the relationship between disease and genetic variations such as single nucleotide polymorphisms (SNPs) are important. Genetic variations can cause disease by influencing important biological regulation processes. Despite the need to analyze SNP and disease correlations, most existing databases provide information only on functional variants at specific locations on the genome, or deal with only a few genes associated with disease. There is no combined resource that widely supports gene-, SNP-, and disease-related information and captures the relationships among such data. Therefore, we developed an integrated database-pipeline system for studying SNPs and diseases. To implement the pipeline system for the integrated database, we first unified complicated and redundant disease terms and gene names using the Unified Medical Language System (UMLS) for classification and noun modification, and the HUGO Gene Nomenclature Committee (HGNC) and NCBI gene databases. Next, we collected and integrated representative databases for three categories of information. For genes and proteins, we examined the NCBI mRNA, UniProt, UCSC Table Track and MitoDat databases. For genetic variants we used the dbSNP, JSNP, ALFRED, and HGVbase databases. For disease, we employed the OMIM, GAD, and HGMD databases. The database-pipeline system provides a disease thesaurus, including genes and SNPs associated with disease. The search results for these categories are available on the web page http://diseasome.kobic.re.kr/, and a genome browser is also available to highlight findings, as well as to permit the convenient review of potentially deleterious SNPs among genes strongly associated with specific diseases and clinical phenotypes. Our system is designed to capture the relationships between SNPs associated with disease and disease-causing genes. The integrated database-pipeline provides a list of candidate genes and SNP markers for evaluation in both epidemiological and molecular biological approaches to disease-gene association studies. Furthermore, researchers can then semi-automatically select data sets for association studies while considering the relationships between genetic variation and diseases. The database is also economical for disease-association studies and facilitates an understanding of the processes which cause disease. Currently, the database contains 14,674 SNP records and 109,715 gene records associated with human diseases, and it is updated at regular intervals.
Computer models of complex multiloop branched pipeline systems
NASA Astrophysics Data System (ADS)
Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.
2013-11-01
This paper describes the principal theoretical concepts of a method for constructing computer models of complex multiloop branched pipeline networks; the method is based on graph theory and Kirchhoff's two laws for electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of a pipeline network when the latter is considered as a single hydraulic system. On the basis of multivariant calculations, the reasons for existing problems can be identified, the least costly methods of their elimination can be proposed, and recommendations for planning the modernization of pipeline systems and the construction of new sections can be made. The results obtained can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the model has been verified by designing a unified computer model of the heat network for the centralized heat supply of the city of Samara.
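For a linearized (laminar-flow) network, the graph-plus-Kirchhoff formulation reduces to the same linear algebra as a resistive circuit. A small sketch under that assumption, with an invented five-edge network:

```python
import numpy as np

# Toy network: nodes 0..3, node 3 is the reference (pressure 0) and is
# deliberately the last node so row indices match node numbers below.
# Each edge is (tail, head, conductance); a constant conductance
# linearizes the pressure-drop/flow relation (laminar flow only).
edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 1.0), (1, 3, 1.0), (2, 3, 2.0)]
n_nodes, ref = 4, 3
inflow = np.array([1.0, 0.0, 0.0])   # injections at nodes 0..2; node 3 absorbs

# Reduced incidence matrix encodes Kirchhoff's flow-balance law per node.
A = np.zeros((n_nodes - 1, len(edges)))
g = np.zeros(len(edges))
for k, (i, j, c) in enumerate(edges):
    if i != ref: A[i, k] = 1.0
    if j != ref: A[j, k] = -1.0
    g[k] = c

# Solve (A diag(g) A^T) p = q for nodal pressures; the loop (pressure)
# law holds automatically because flows derive from a nodal potential.
L = A @ np.diag(g) @ A.T
p = np.linalg.solve(L, inflow)
flows = g * (A.T @ p)                # flow on edge k = g_k * (p_tail - p_head)
print("pressures:", p)
print("edge flows:", flows)
```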
NASA Astrophysics Data System (ADS)
Bigdeli, Abbas; Biglari-Abhari, Morteza; Salcic, Zoran; Tin Lai, Yat
2006-12-01
A new pipelined systolic array-based (PSA) architecture for matrix inversion is proposed. The PSA architecture is suitable for FPGA implementations as it efficiently uses the available resources of an FPGA. It is scalable for different matrix sizes, and its parameterisation makes it suitable for customisation to application-specific needs. The new architecture has a lower processing-element complexity than other systolic array structures for a given input matrix size (the exact complexity bounds appear as inline equations in the full text). The use of the PSA architecture for a Kalman filter, which requires different structures for different numbers of states, is illustrated as an implementation example. The resulting precision error is analysed and shown to be negligible.
Kibinge, Nelson; Ono, Naoaki; Horie, Masafumi; Sato, Tetsuo; Sugiura, Tadao; Altaf-Ul-Amin, Md; Saito, Akira; Kanaya, Shigehiko
2016-06-01
Conventionally, workflows examining transcription regulation networks from gene expression data involve distinct analytical steps. There is a need for pipelines that unify data mining and inference deduction into a singular framework to enhance interpretation and hypothesis generation. We propose a workflow that merges network construction with gene expression data mining, focusing on regulation processes in the context of transcription-factor-driven gene regulation. The pipeline implements pathway-based modularization of expression profiles into functional units to improve biological interpretation. The integrated workflow was implemented as a web application (TransReguloNet) with functions that enable pathway visualization and comparison of transcription factor activity between the sample conditions defined in the experimental design. The pipeline merges differential expression, network construction, pathway-based abstraction, clustering and visualization. The framework was applied in the analysis of actual expression datasets related to lung, breast and prostate cancer. Copyright © 2016 Elsevier Inc. All rights reserved.
A Review of Fatigue Crack Growth for Pipeline Steels Exposed to Hydrogen
Nanninga, N.; Slifka, A.; Levy, Y.; White, C.
2010-01-01
Hydrogen pipeline systems offer an economical means of storing and transporting energy in the form of hydrogen gas. Pipelines can be used to transport hydrogen that has been generated at solar and wind farms to and from salt cavern storage locations. In addition, pipeline transportation systems will be essential before widespread hydrogen fuel cell vehicle technology becomes a reality. Since hydrogen pipeline use is expected to grow, the mechanical integrity of these pipelines will need to be validated under the presence of pressurized hydrogen. This paper focuses on a review of the fatigue crack growth response of pipeline steels when exposed to gaseous hydrogen environments. Because of defect-tolerant design principles in pipeline structures, it is essential that designers consider hydrogen-assisted fatigue crack growth behavior in these applications. PMID:27134796
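Fatigue crack growth of the kind reviewed here is commonly modeled with the Paris law, da/dN = C(ΔK)^m. A sketch that integrates it numerically; the constants are generic illustrative values, not measured hydrogen-environment data from the review:

```python
import math

def cycles_to_failure(a0, ac, dsigma, C=6.9e-12, m=3.0, Y=1.0, steps=10000):
    """Integrate the Paris law da/dN = C * (dK)^m numerically.

    dK = Y * dsigma * sqrt(pi * a). Units: crack length a in m, stress
    range dsigma in MPa, C consistent with m and MPa*sqrt(m). The
    constants here are illustrative only.
    """
    n_cycles, a = 0.0, a0
    da = (ac - a0) / steps
    for _ in range(steps):
        dK = Y * dsigma * math.sqrt(math.pi * a)
        n_cycles += da / (C * dK ** m)   # dN = da / (da/dN)
        a += da
    return n_cycles

# Crack growing from 1 mm to 10 mm under a 100 MPa stress range:
print(f"{cycles_to_failure(1e-3, 10e-3, 100.0):,.0f} cycles")
```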
78 FR 14877 - Pipeline Safety: Incident and Accident Reports
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-07
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID PHMSA-2013-0028] Pipeline Safety: Incident and Accident Reports AGENCY: Pipeline and Hazardous Materials... PHMSA F 7100.2--Incident Report--Natural and Other Gas Transmission and Gathering Pipeline Systems and...
Manifold Coal-Slurry Transport System
NASA Technical Reports Server (NTRS)
Liddle, S. G.; Estus, J. M.; Lavin, M. L.
1986-01-01
Feeding several slurry pipes into main pipeline reduces congestion in coal mines. System based on manifold concept: feeder pipelines from each working entry joined to main pipeline that carries coal slurry out of panel and onto surface. Manifold concept makes coal-slurry haulage much simpler than existing slurry systems.
43 CFR 2801.9 - When do I need a grant?
Code of Federal Regulations, 2014 CFR
2014-10-01
..., pipelines, tunnels, and other systems which impound, store, transport, or distribute water; (2) Pipelines and other systems for transporting or distributing liquids and gases, other than water and other than... and terminal facilities used in connection with them; (3) Pipelines, slurry and emulsion systems, and...
43 CFR 2801.9 - When do I need a grant?
Code of Federal Regulations, 2012 CFR
2012-10-01
..., pipelines, tunnels, and other systems which impound, store, transport, or distribute water; (2) Pipelines and other systems for transporting or distributing liquids and gases, other than water and other than... and terminal facilities used in connection with them; (3) Pipelines, slurry and emulsion systems, and...
43 CFR 2801.9 - When do I need a grant?
Code of Federal Regulations, 2011 CFR
2011-10-01
..., pipelines, tunnels, and other systems which impound, store, transport, or distribute water; (2) Pipelines and other systems for transporting or distributing liquids and gases, other than water and other than... and terminal facilities used in connection with them; (3) Pipelines, slurry and emulsion systems, and...
43 CFR 2801.9 - When do I need a grant?
Code of Federal Regulations, 2013 CFR
2013-10-01
..., pipelines, tunnels, and other systems which impound, store, transport, or distribute water; (2) Pipelines and other systems for transporting or distributing liquids and gases, other than water and other than... and terminal facilities used in connection with them; (3) Pipelines, slurry and emulsion systems, and...
Solving systems of linear equations by GPU-based matrix factorization in a Science Ground Segment
NASA Astrophysics Data System (ADS)
Legendre, Maxime; Schmidt, Albrecht; Moussaoui, Saïd; Lammers, Uwe
2013-11-01
Recently, graphics cards have been used to offload scientific computations from traditional CPUs for greater efficiency. This paper investigates the adaptation of a real-world linear system solver, which plays a central role in the data processing of the Science Ground Segment of ESA's astrometric Gaia mission. The paper quantifies the resource trade-offs between traditional CPU implementations and modern CUDA-based GPU implementations. It also analyses the impact on the pipeline architecture and system development. The investigation starts from a selected baseline algorithm with a reference implementation, a traditional linear system solver, and then explores various modifications to control flow and data layout to achieve higher resource efficiency. It turns out that with the current state of the art, the modifications impact non-technical system attributes. For example, the control flow of the baseline modified Cholesky transform is restructured in ways that reduce code locality and verifiability. The maintainability of the system is affected as well. On the system level, users will have to deal with more complex configuration control and testing procedures.
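A CPU baseline for such a solver can be sketched with a Cholesky factorization of the normal equations. The code below is a generic reference, not the Gaia implementation; a GPU port would swap the factorization call for a CUDA library routine while keeping the same pipeline interface, which is where the architectural impact discussed above arises:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Solve the least-squares normal equations (A^T A) x = A^T b on the CPU.
rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 200))   # tall design matrix (full column rank)
b = rng.standard_normal(2000)

AtA = A.T @ A                          # symmetric positive definite
Atb = A.T @ b
c, low = cho_factor(AtA)               # Cholesky factorization
x = cho_solve((c, low), Atb)           # triangular solves

print("residual norm:", np.linalg.norm(AtA @ x - Atb))
```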
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-31
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2013-0097] Pipeline Safety: Reminder of Requirements for Liquefied Petroleum Gas and Utility Liquefied Petroleum Gas Pipeline Systems AGENCY: Pipeline and Hazardous Materials Safety Administration...
Planning bioinformatics workflows using an expert system.
Chen, Xiaoling; Chang, Jeffrey T
2017-04-15
Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY), which includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprising a data model that can capture the richness of biological data and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. https://github.com/jefftc/changlab. jeffrey.t.chang@uth.tmc.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
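Backward chaining over tool rules can be sketched compactly: each rule states what a tool consumes and produces, and a goal is resolved by recursively planning its inputs. The tool and data-type names below are invented, and BETSY's knowledge base and data model are far richer:

```python
# Each rule: produced data type -> (tool name, required input data types).
RULES = {
    "aligned_reads":   ("bwa_align", ["fastq", "reference"]),
    "variant_calls":   ("call_variants", ["aligned_reads", "reference"]),
    "annotated_calls": ("annotate", ["variant_calls"]),
}

def plan(goal, available, steps=None):
    """Backward-chain from `goal`, returning an ordered list of tools."""
    if steps is None:
        steps = []
    if goal in available:
        return steps
    tool, inputs = RULES[goal]       # the rule that can produce the goal
    for inp in inputs:
        plan(inp, available, steps)  # recursively plan for each input
    if tool not in steps:
        steps.append(tool)
    available.add(goal)
    return steps

print(plan("annotated_calls", {"fastq", "reference"}))
# ['bwa_align', 'call_variants', 'annotate']
```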
Pipelining in structural health monitoring wireless sensor network
NASA Astrophysics Data System (ADS)
Li, Xu; Dorvash, Siavash; Cheng, Liang; Pakzad, Shamim
2010-04-01
The application of wireless sensor networks (WSNs) for structural health monitoring (SHM) is becoming widespread due to their ease of implementation and economic advantage over traditional sensor networks. Besides the advantages that make wireless networks preferable, there are some concerns regarding their performance in certain applications. In long-span bridge monitoring, the need to transfer data over long distances creates challenges in the design of WSN platforms. Due to the geometry of bridge structures, using multi-hop data transfer between remote nodes and the base station is essential. This paper focuses on the performance of pipelining algorithms. We summarize several prevalent pipelining approaches, discuss their performance, and propose a new pipelining algorithm that considers both channel utilization and simplicity of deployment.
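The benefit of pipelining over multi-hop routes is easy to quantify in an idealized slot model, ignoring the radio-interference constraints that real schedules must respect:

```python
def delivery_slots(hops, packets, pipelined=True):
    """Time slots to deliver `packets` over a chain of `hops` links.

    Without pipelining, every packet traverses the whole chain before
    the next starts. With pipelining, relay nodes forward one packet
    while upstream nodes already transmit the next (idealized model).
    """
    if pipelined:
        return hops + packets - 1
    return hops * packets

for h, p in [(10, 1), (10, 50)]:
    print(f"{h} hops, {p:3d} packets: "
          f"{delivery_slots(h, p, False):4d} slots unpipelined vs "
          f"{delivery_slots(h, p):3d} pipelined")
```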
Methods of increasing efficiency and maintainability of pipeline systems
NASA Astrophysics Data System (ADS)
Ivanov, V. A.; Sokolov, S. M.; Ogudova, E. V.
2018-05-01
This study addresses the maintenance of pipeline transportation systems. The article identifies two classes of technical-and-economic indices, which are used to select an optimal pipeline transportation system structure. It then describes various system maintenance strategies and strategy selection criteria. These maintenance strategies, however, turn out to be insufficiently effective when maintenance intervals are non-optimal. This problem can be addressed by running an adaptive maintenance system, which includes a pipeline transportation system reliability improvement algorithm and, in particular, a computer model of equipment degradation. In conclusion, three model-building approaches for determining the optimal duration of technical system verification inspections are considered.
SAND: an automated VLBI imaging and analysing pipeline - I. Stripping component trajectories
NASA Astrophysics Data System (ADS)
Zhang, M.; Collioud, A.; Charlot, P.
2018-02-01
We present our implementation of an automated very long baseline interferometry (VLBI) data-reduction pipeline that is dedicated to interferometric data imaging and analysis. The pipeline can handle massive VLBI data efficiently, which makes it an appropriate tool to investigate multi-epoch multiband VLBI data. Compared to traditional manual data reduction, our pipeline provides more objective results as less human interference is involved. The source extraction is carried out in the image plane, while deconvolution and model fitting are performed in both the image plane and the uv plane for parallel comparison. The output from the pipeline includes catalogues of CLEANed images and reconstructed models, polarization maps, proper motion estimates, core light curves and multiband spectra. We have developed a regression STRIP algorithm to automatically detect linear or non-linear patterns in the jet component trajectories. This algorithm offers an objective method to match jet components at different epochs and to determine their proper motions.
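A simplified stand-in for the trajectory classification step: fit a linear proper motion across epochs and flag a component as non-linear when residuals exceed an assumed positional uncertainty. All numbers are invented, and the actual regression STRIP algorithm is more elaborate:

```python
import numpy as np

def classify_trajectory(epochs, positions, sigma=0.05, tol=3.0):
    """Fit a linear proper motion and flag non-linear trajectories.

    `sigma` is an assumed constant per-epoch position error (mas);
    the component is 'linear' when the residual RMS stays within
    tol * sigma.
    """
    coeffs = np.polyfit(epochs, positions, 1)        # slope = proper motion
    residuals = positions - np.polyval(coeffs, epochs)
    rms = np.sqrt(np.mean(residuals ** 2))
    return coeffs[0], ("linear" if rms < tol * sigma else "non-linear")

t = np.array([2015.0, 2015.5, 2016.0, 2016.5, 2017.0])
r = 0.8 * (t - t[0]) + np.array([0.00, 0.02, -0.01, 0.03, -0.02])
print(classify_trajectory(t, r))   # ~0.8 mas/yr, 'linear'
```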
PipelineDog: a simple and flexible graphic pipeline construction and maintenance tool.
Zhou, Anbo; Zhang, Yeting; Sun, Yazhou; Xing, Jinchuan
2018-05-01
Analysis pipelines are an essential part of bioinformatics research, and ad hoc pipelines are frequently created by researchers for prototyping and proof-of-concept purposes. However, most existing pipeline management systems or workflow engines are too complex for rapid prototyping or for learning the pipeline concept. A lightweight, user-friendly and flexible solution is thus desirable. In this study, we developed a new pipeline construction and maintenance tool, PipelineDog. This is a web-based integrated development environment with a modern web graphical user interface. It offers cross-platform compatibility, project management capabilities, code formatting and error checking functions, and an online repository. It uses an easy-to-read/write script system that encourages code reuse. With the online repository, it also encourages sharing of pipelines, which enhances analysis reproducibility and accountability. For most users, PipelineDog requires no software installation. Overall, this web application provides a way to rapidly create and easily manage pipelines. The PipelineDog web app is freely available at http://web.pipeline.dog. The command line version is available at http://www.npmjs.com/package/pipelinedog and the online repository at http://repo.pipeline.dog. ysun@kean.edu or xing@biology.rutgers.edu or ysun@diagnoa.com. Supplementary data are available at Bioinformatics online.
78 FR 28837 - Acadian Gas Pipeline System; Notice of Petition
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-16
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. PR11-129-001] Acadian Gas Pipeline System; Notice of Petition Take notice that on May 6, 2013, Acadian Gas Pipeline System (Acadian... concerns filed in the September 26, 2011 filing, as more fully detailed in the petition. Any person...
40 CFR 761.60 - Disposal requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... a disposal facility approved under this part. (5) Natural gas pipeline systems containing PCBs. The owner or operator of natural gas pipeline systems containing ≥50 ppm PCBs, when no longer in use, shall... the PCB concentrations in natural gas pipeline systems shall do so in accordance with paragraph (b)(5...
Reducing vibration transfer from power plants by active methods
NASA Astrophysics Data System (ADS)
Kiryukhin, A. V.; Milman, O. O.; Ptakhin, A. V.
2017-12-01
The possibility of applying active damping of vibration and pressure pulsations to reduce their transfer from power plants into the environment, the seating, and industrial premises is considered. The results of experimental work by the authors are presented: active broadband damping of vibration and dynamic forces after shock absorption of up to 15 dB in the frequency band up to 150 Hz, of water pressure pulsations in a pipeline of up to 20 dB in the frequency band up to 600 Hz, and of spatial low-frequency air noise indoors from a diesel generator of up to 20 dB at discrete frequencies. It is shown that reducing vibration transfer through a vibration-isolating junction (expansion joint) of liquid-filled pipelines is the most complicated case and has hardly been developed so far. This problem is essential for the vibration isolation of power equipment from the seating and the environment through pipelines carrying water and steam in power and transport engineering and shipbuilding, and in oil and gas pipelines at pumping stations. For improving efficiency, reducing energy consumption, and decreasing the overall dimensions of equipment, it is advisable to combine an active system with passive damping means, which alone are not always sufficient. The executive component of an active damping system should be placed behind the vibration isolators (expansion joints). It is shown that the presence of the working medium, and the coupling of vibration with pressure pulsations in existing pipeline expansion joint designs, increase the vibration stiffness of the expansion joint by two or more orders of magnitude compared with its static stiffness and hinder the use of active methods. For active damping of vibration transfer through expansion joints of liquid-filled pipelines, it is necessary to develop expansion joint structures with minimal coupling of vibrations and pulsations and minimal vibration stiffness in the specified frequency range. An example of such an expansion joint structure and its test results are presented.
Closha: bioinformatics workflow system for the analysis of massive sequencing data.
Ko, GunHwan; Kim, Pan-Gyu; Yoon, Jongcheol; Han, Gukhee; Park, Seong-Jin; Song, Wangho; Lee, Byungwook
2018-02-19
While next-generation sequencing (NGS) costs have fallen in recent years, the cost and complexity of computation remain substantial obstacles to the use of NGS in biomedical care and genomic research. The rapidly increasing amounts of data available from the new high-throughput methods have made data processing infeasible without automated pipelines. The integration of data and analytic resources into workflow systems provides a solution to the problem by simplifying the task of data analysis. To address this challenge, we developed a cloud-based workflow management system, Closha, to provide fast and cost-effective analysis of massive genomic data. We implemented complex workflows making optimal use of high-performance computing clusters. Closha allows users to create multi-step analyses using drag-and-drop functionality and to modify the parameters of pipeline tools. Users can also import Galaxy pipelines into Closha. Closha is a hybrid system that enables users to use both analysis programs providing traditional tools and MapReduce-based big data analysis programs simultaneously in a single pipeline. Thus, the execution of analytics algorithms can be parallelized, speeding up the whole process. We also developed a high-speed data transmission solution, KoDS, to transmit a large amount of data at a fast rate. KoDS has a file transfer speed of up to 10 times that of normal FTP and HTTP. The computer hardware for Closha is 660 CPU cores and 800 TB of disk storage, enabling 500 jobs to run at the same time. Closha is a scalable, cost-effective, and publicly available web service for large-scale genomic data analysis. Closha supports the reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Closha provides a user-friendly interface that enables genomic scientists to derive accurate results from NGS platform data. The Closha cloud server is freely available for use at http://closha.kobic.re.kr/.
Data streaming in telepresence environments.
Lamboray, Edouard; Würmlin, Stephan; Gross, Markus
2005-01-01
In this paper, we discuss data transmission in telepresence environments for collaborative virtual reality applications. We analyze data streams in the context of networked virtual environments and classify them according to their traffic characteristics. Special emphasis is put on geometry-enhanced (3D) video. We review architectures for real-time 3D video pipelines and derive theoretical bounds on the minimal system latency as a function of the transmission and processing delays. Furthermore, we discuss bandwidth issues of differential update coding for 3D video. In our telepresence system, the blue-c, we use a point-based 3D video technology which allows for differentially encoded 3D representations of human users. While we discuss the considerations which led to the design of our three-stage 3D video pipeline, we also elucidate some critical implementation details regarding the decoupling of acquisition, processing and rendering frame rates, and audio/video synchronization. Finally, we demonstrate the communication and networking features of the blue-c system in its full deployment. We show how the system can be controlled to cope with processing or networking bottlenecks by adapting its multiple components, such as audio, application data, and 3D video.
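The latency and throughput bounds such an analysis derives can be illustrated with a simple staged model: end-to-end latency is at best the sum of stage and transmission delays, while sustained frame rate is limited by the slowest stage. The delay values below are invented:

```python
def pipeline_bounds(stage_delays, transmission_delay):
    """Bounds for a staged video pipeline (illustrative model).

    Returns (minimal end-to-end latency in s, max sustained frame rate).
    """
    latency = sum(stage_delays) + transmission_delay
    throughput = 1.0 / max(stage_delays)   # slowest stage limits the rate
    return latency, throughput

# Acquisition, processing, rendering at 40/30/20 ms, 50 ms network delay:
lat, fps = pipeline_bounds([0.040, 0.030, 0.020], 0.050)
print(f"minimal latency {lat * 1000:.0f} ms, max sustained {fps:.0f} fps")
```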
A Realization of Theoretical Maximum Performance in IPSec on Gigabit Ethernet
NASA Astrophysics Data System (ADS)
Onuki, Atsushi; Takeuchi, Kiyofumi; Inada, Toru; Tokiniwa, Yasuhisa; Ushirozawa, Shinobu
This paper describes an IPSec (IP Security) VPN system and how it attains the theoretical maximum performance on Gigabit Ethernet. Conventional systems are implemented in software; however, they have several bottlenecks which must be overcome to realize the theoretical maximum performance on Gigabit Ethernet. Thus, we propose an IPSec VPN system with an FPGA (Field Programmable Gate Array) based hardware architecture, which transmits packets through pipelined flow processing and has a six-way parallel structure of encryption and authentication engines. We show that our system attains the theoretical maximum performance for short packets, which has been difficult to realize until now.
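The theoretical maximum on Gigabit Ethernet is set by framing overhead: with 8 bytes of preamble and a 12-byte inter-frame gap per frame, minimum-size packets arrive at roughly 1.49 million per second, the short-packet worst case the hardware must sustain. A quick check:

```python
def max_packet_rate(frame_bytes, line_rate_bps=1_000_000_000):
    """Theoretical Ethernet packet rate at line rate.

    Per frame on the wire: the frame itself (minimum 64 bytes)
    plus 8 bytes of preamble and a 12-byte inter-frame gap.
    """
    on_wire = max(frame_bytes, 64) + 8 + 12
    return line_rate_bps / (on_wire * 8)

for size in (64, 512, 1518):
    print(f"{size:5d}-byte frames: {max_packet_rate(size):12,.0f} packets/s")
# 64-byte frames come out near 1,488,095 packets/s.
```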
Natural disasters and the gas pipeline system.
DOT National Transportation Integrated Search
1996-11-01
Episodic descriptions are provided of the effects of the Loma Prieta earthquake (1989) on the gas pipeline systems of Pacific Gas & Electric Company and the City of Palo Alto and of the Northridge earthquake (1994) on Southern California Gas' pipeline...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-27
... Reports for Gas Pipeline Operators AGENCY: Pipeline and Hazardous Materials Safety Administration, DOT... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... Pipeline Systems; PHMSA F 7100.2-1 Annual Report for Calendar Year 20xx Natural and Other Gas Transmission...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-17
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR 191... Reports AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Issuance of... Pipeline and Hazardous Materials Safety Administration (PHMSA) published a final rule on November 26, 2010...
Jayashree, B; Hanspal, Manindra S; Srinivasan, Rajgopal; Vigneshwaran, R; Varshney, Rajeev K; Spurthi, N; Eshwar, K; Ramesh, N; Chandra, S; Hoisington, David A
2007-01-01
The large amounts of EST sequence data available from a single species of an organism, as well as for several species within a genus, provide an easy source for the identification of intra- and interspecies single nucleotide polymorphisms (SNPs). In the case of model organisms, the data available are numerous, given the degree of redundancy in the deposited EST data. There are several available bioinformatics tools that can be used to mine this data; however, using them requires a certain level of expertise: the tools have to be used sequentially with accompanying format conversion, and steps like clustering and assembly of sequences become time-intensive jobs even for moderately sized datasets. We report here a pipeline of open source software, extended to run on multiple CPU architectures, that can be used to mine large EST datasets for SNPs and identify restriction sites for assaying the SNPs so that cost-effective CAPS assays can be developed for SNP genotyping in genetics and breeding applications. At the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), the pipeline has been implemented to run on a Paracel high-performance system consisting of four dual AMD Opteron processors running Linux with MPICH. The pipeline can be accessed through user-friendly web interfaces at http://hpc.icrisat.cgiar.org/PBSWeb and is available on request for academic use. We have validated the pipeline by mining chickpea ESTs for interspecies SNPs, developing CAPS assays for SNP genotyping, and confirming the restriction digestion pattern at the sequence level.
COSMOS: Python library for massively parallel workflows
Gafni, Erik; Luquette, Lovelace J.; Lancaster, Alex K.; Hawkins, Jared B.; Jung, Jae-Yoon; Souilmi, Yassine; Wall, Dennis P.; Tonellato, Peter J.
2014-01-01
Summary: Efficient workflows to shepherd clinically generated genomic data through the multiple stages of a next-generation sequencing pipeline are of critical importance in translational biomedical science. Here we present COSMOS, a Python library for workflow management that allows formal description of pipelines and partitioning of jobs. In addition, it includes a user interface for tracking the progress of jobs, abstraction of the queuing system and fine-grained control over the workflow. Workflows can be created on traditional computing clusters as well as cloud-based services. Availability and implementation: Source code is available for academic non-commercial research purposes. Links to code and documentation are provided at http://lpm.hms.harvard.edu and http://wall-lab.stanford.edu. Contact: dpwall@stanford.edu or peter_tonellato@hms.harvard.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24982428
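The kind of formal pipeline description the summary refers to can be sketched generically as a task graph executed in dependency order. This is an illustration only, with invented task names; it is not the COSMOS API:

```python
from graphlib import TopologicalSorter   # Python 3.9+ standard library

# Formal pipeline description: tasks plus explicit dependencies,
# executed in topological order.
def align(sample):  return f"{sample}.bam"
def call(sample):   return f"{sample}.vcf"
def report(sample): return f"{sample}.html"

TASKS = {"align": align, "call": call, "report": report}
DEPS = {"align": set(), "call": {"align"}, "report": {"call"}}

def run(sample):
    """Execute every task once its dependencies have completed."""
    for task in TopologicalSorter(DEPS).static_order():
        print(f"running {task}: produced {TASKS[task](sample)}")

run("patientA")
```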
Design and Implementation of Data Reduction Pipelines for the Keck Observatory Archive
NASA Astrophysics Data System (ADS)
Gelino, C. R.; Berriman, G. B.; Kong, M.; Laity, A. C.; Swain, M. A.; Campbell, R.; Goodrich, R. W.; Holt, J.; Lyke, J.; Mader, J. A.; Tran, H. D.; Barlow, T.
2015-09-01
The Keck Observatory Archive (KOA), a collaboration between the NASA Exoplanet Science Institute and the W. M. Keck Observatory, serves science and calibration data for all active and inactive instruments from the twin Keck Telescopes located near the summit of Mauna Kea, Hawaii. In addition to the raw data, we produce and provide quick-look reduced data for four instruments (HIRES, LWS, NIRC2, and OSIRIS) so that KOA users can more easily assess the scientific content and the quality of the data, which can often be difficult with raw data. The reduced products derive from both publicly available data reduction packages (when available) and KOA-created reduction scripts. The automation of publicly available data reduction packages has the benefit of providing a good quality product without the additional time and expense of creating a new reduction package, and is easily applied to bulk processing needs. The downside is that the pipeline is not always able to create an ideal product, particularly for spectra, because the processing options for one type of target (e.g., point sources) may not be appropriate for other types of targets (e.g., extended galaxies and nebulae). In this poster we present the design and implementation of the current pipelines used at KOA and discuss our strategies for handling data for which the nature of the targets and the observers' scientific goals and data-taking procedures are unknown. We also discuss our plans for implementing automated pipelines for the remaining six instruments.
Yi, Ming; Mudunuri, Uma; Che, Anney; Stephens, Robert M
2009-06-29
One of the challenges in the analysis of microarray data is to integrate and compare the selected (e.g., differential) gene lists from multiple experiments for common or unique underlying biological themes. A common way to approach this problem is to extract common genes from these gene lists and then subject these genes to enrichment analysis to reveal the underlying biology. However, the capacity of this approach is largely restricted by the limited number of common genes shared by datasets from multiple experiments, which could be caused by the complexity of the biological system itself. We now introduce a new Pathway Pattern Extraction Pipeline (PPEP), which extends the existing WPS application by providing a new pathway-level comparative analysis scheme. To facilitate comparing and correlating results from different studies and sources, PPEP contains new interfaces that allow evaluation of the pathway-level enrichment patterns across multiple gene lists. As an exploratory tool, this analysis pipeline may help reveal the underlying biological themes at both the pathway and gene levels. The analysis scheme provided by PPEP begins with multiple gene lists, which may be derived from different studies in terms of the biological contexts, applied technologies, or methodologies. These lists are then subjected to pathway-level comparative analysis for extraction of pathway-level patterns. This analysis pipeline helps to explore the commonality or uniqueness of these lists at the level of pathways or biological processes from different but relevant biological systems using a combination of statistical enrichment measurements, pathway-level pattern extraction, and graphical display of the relationships of genes and their associated pathways as Gene-Term Association Networks (GTANs) within the WPS platform. As a proof of concept, we have used the new method to analyze many datasets from our collaborators as well as some public microarray datasets. This tool provides a new pathway-level analysis scheme for integrative and comparative analysis of data derived from different but relevant systems. The tool is freely available as a Pathway Pattern Extraction Pipeline implemented in our existing software package WPS, which can be obtained at http://www.abcc.ncifcrf.gov/wps/wps_index.php.
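The statistical enrichment measurements such pipelines rely on are typically one-sided hypergeometric tests. A minimal sketch with invented counts:

```python
from scipy.stats import hypergeom

def enrichment_p(hits, list_size, pathway_size, genome_size):
    """One-sided hypergeometric test for pathway over-representation.

    Returns P(X >= hits) when `list_size` genes are drawn from a
    genome of `genome_size` genes, `pathway_size` of which belong
    to the pathway.
    """
    return hypergeom.sf(hits - 1, genome_size, pathway_size, list_size)

# 12 of a 200-gene list fall in a 150-gene pathway (20,000-gene genome):
print(f"p = {enrichment_p(12, 200, 150, 20000):.2e}")
```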
Identification of microorganisms associated with corrosion of offshore oil production systems
NASA Astrophysics Data System (ADS)
Sørensen, Ketil; Grigoryan, Aleksandr; Holmkvist, Lars; Skovhus, Torben; Thomsen, Uffe; Lundgaard, Thomas
2010-05-01
Microbiologically influenced corrosion (MIC) poses a major challenge to oil producers and distributors. The annual cost associated with MIC-related pipeline failures and the general maintenance and surveillance of installations amounts to several billion dollars in the oil production sector alone. Hence, large efforts are undertaken by some producers to control and monitor microbial growth in pipelines and other installations, and extensive surveillance programs are carried out in order to detect and quantify potential MIC-promoting microorganisms. Traditionally, efforts to mitigate and survey microbial growth in oil production systems have focused on sulfate-reducing Bacteria (SRB), and microorganisms have usually been enumerated by the culture-dependent MPN (most probable number) technique. Culture-independent molecular tools yielding much more detailed information about the microbial communities have now been implemented as a reliable tool for routine surveillance of oil production systems in the North Sea. This has resulted in new and hitherto unattainable information regarding the distribution of different microorganisms in hot reservoirs and associated oil production systems. This presentation provides a review of recent insights regarding thermophilic microbial communities and their implications for steel corrosion in offshore oil production systems. Data collected from solids and biofilms in different corroded pipelines and tubes indicate that in addition to SRB, other groups such as methanogens and sulfate-reducing Archaea (SRA) are also involved in MIC. In the hot parts of the system, where the temperature approaches 80 °C, SRA closely related to Archaeoglobus fulgidus outnumber SRB by several orders of magnitude. Methanogens affiliated with the genus Methanothermococcus were shown to completely dominate the microbial community at the metal surface in a sample of highly corroded piping. Thus, the microbial communities associated with MIC appear to be more complex than previously recognized by the industry.
FPGA based hardware optimized implementation of signal processing system for LFM pulsed radar
NASA Astrophysics Data System (ADS)
Azim, Noor ul; Jun, Wang
2016-11-01
Signal processing is one of the main parts of any radar system. Different signal processing algorithms are used to extract information about parameters such as range, speed, and direction of a target in the field of radar communication. This paper presents LFM (Linear Frequency Modulation) pulsed radar signal processing algorithms which are used to improve target detection and range resolution and to estimate the speed of a target. First, these algorithms are simulated in MATLAB to verify the concept and theory. After the conceptual verification in MATLAB, the simulation is converted into an implementation on hardware using a Xilinx FPGA, the Virtex-6 (XC6VLX75T). For the hardware implementation, pipeline optimization is adopted, and other factors are considered for resource optimization in the process of implementation. The algorithms addressed in this work for improving target detection, range resolution and speed estimation are hardware-optimized, fast-convolution-based pulse compression and pulse Doppler processing.
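Pulse compression, one of the algorithms named above, is a matched-filter correlation with the transmitted chirp. A sketch of the kind of conceptual verification described for the MATLAB stage, written in Python with illustrative parameters:

```python
import numpy as np

# Matched-filter pulse compression of an LFM chirp (illustrative values).
fs, T, B = 100e6, 10e-6, 20e6            # sample rate, pulse width, sweep
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)

# Echo from a target delayed by 3 us, buried in noise.
rng = np.random.default_rng(1)
rx = np.zeros(4096, dtype=complex)
d = int(3e-6 * fs)
rx[d:d + len(chirp)] += chirp
rx += 0.5 * (rng.standard_normal(rx.size) + 1j * rng.standard_normal(rx.size))

# Matched filter = convolution with the conjugated, time-reversed chirp.
compressed = np.abs(np.convolve(rx, np.conj(chirp[::-1]), mode="same"))
print("peak at sample", compressed.argmax(),
      "(expected near", d + len(chirp) // 2, ")")
```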
Bridging the Gap between Research and Practice: Implementation Science
ERIC Educational Resources Information Center
Olswang, Lesley B.; Prelock, Patricia A.
2015-01-01
Purpose: This article introduces implementation science, which focuses on research methods that promote the systematic application of research findings to practice. Method: The narrative defines implementation science and highlights the importance of moving research along the pipeline from basic science to practice as one way to facilitate…
Microcomputers, software combine to provide daily product, movement inventory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cable, T.
1985-06-01
This paper describes the efforts of Santa Fe Pipelines Inc. in keeping track of product inventory on the 810-mile, 12-in. Chapparal Pipeline and the 1,913-mile, 8- and 10-in. Gulf Central Pipeline. The decision to use a PC for monitoring the inventory was significant. The application was completed by TRON, Inc. The system is actually two major subsystems. The pipeline system accounts for injections into the pipeline and deliveries of product. This feeds the storage and terminal inventory system, where inventories are maintained at storage locations by shipper and supplier account. The paper further explains the inventory monitoring process in detail. Communications software is described as well.
49 CFR 195.401 - General requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... its pipeline system according to the following requirements: (1) Non Integrity management repairs... affected part of the system until it has corrected the unsafe condition. (2) Integrity management repairs... this part: (1) An interstate pipeline, other than a low-stress pipeline, on which construction was...
49 CFR 195.401 - General requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... its pipeline system according to the following requirements: (1) Non Integrity management repairs... affected part of the system until it has corrected the unsafe condition. (2) Integrity management repairs... this part: (1) An interstate pipeline, other than a low-stress pipeline, on which construction was...
49 CFR 195.401 - General requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... its pipeline system according to the following requirements: (1) Non Integrity management repairs... affected part of the system until it has corrected the unsafe condition. (2) Integrity management repairs... this part: (1) An interstate pipeline, other than a low-stress pipeline, on which construction was...
49 CFR 195.401 - General requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... its pipeline system according to the following requirements: (1) Non Integrity management repairs... affected part of the system until it has corrected the unsafe condition. (2) Integrity management repairs... this part: (1) An interstate pipeline, other than a low-stress pipeline, on which construction was...
Renaissance of protein crystallization and precipitation in biopharmaceuticals purification.
Dos Santos, Raquel; Carvalho, Ana Luísa; Roque, A Cecília A
The current chromatographic approaches used in protein purification are not keeping pace with increasing biopharmaceutical market demand. With upstream improvements, the bottleneck has shifted towards the downstream process. New approaches rely on Anything But Chromatography methodologies and on revisiting former techniques with a bioprocess perspective. Protein crystallization and precipitation methods are already implemented in the downstream processing of diverse therapeutic biological macromolecules, overcoming the current chromatographic bottlenecks. Promising work is under way to implement crystallization and precipitation in the purification pipelines of high-value therapeutic molecules. This review focuses on the role of these two methodologies in current industrial purification processes and highlights their potential implementation in the purification pipelines of high-value therapeutic molecules, overcoming chromatographic holdups. Copyright © 2016 Elsevier Inc. All rights reserved.
The MiPACQ clinical question answering system.
Cairns, Brian L; Nielsen, Rodney D; Masanz, James J; Martin, James H; Palmer, Martha S; Ward, Wayne H; Savova, Guergana K
2011-01-01
The Multi-source Integrated Platform for Answering Clinical Questions (MiPACQ) is a QA pipeline that integrates a variety of information retrieval and natural language processing systems into an extensible question answering system. We present the system's architecture and an evaluation of MiPACQ on a human-annotated evaluation dataset based on the Medpedia health and medical encyclopedia. Compared with our baseline information retrieval system, the MiPACQ rule-based system demonstrates 84% improvement in Precision at One and the MiPACQ machine-learning-based system demonstrates 134% improvement. Other performance metrics including mean reciprocal rank and area under the precision/recall curves also showed significant improvement, validating the effectiveness of the MiPACQ design and implementation.
75 FR 67807 - Pipeline Safety: Emergency Preparedness Communications
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-03
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... is issuing an Advisory Bulletin to remind operators of gas and hazardous liquid pipeline facilities... Gas Pipeline Systems. Subject: Emergency Preparedness Communications. Advisory: To further enhance the...
76 FR 65778 - Pipeline Safety: Information Collection Activities
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-24
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No...: 12,120. Frequency of Collection: On occasion. 2. Title: Recordkeeping for Natural Gas Pipeline... investigating incidents. Affected Public: Operators of natural gas pipeline systems. Annual Reporting and...
Design and Implementation of CIA, the ISOCAM Interactive Analysis System
NASA Astrophysics Data System (ADS)
Ott, S.; Abergel, A.; Altieri, B.; Augueres, J.-L.; Aussel, H.; Bernard, J.-P.; Biviano, A.; Blommaert, J.; Boulade, O.; Boulanger, F.; Cesarsky, C.; Cesarsky, D. A.; Claret, A.; Delattre, C.; Delaney, M.; Deschamps, T.; Desert, F.-X.; Didelon, P.; Elbaz, D.; Gallais, P.; Gastaud, R.; Guest, S.; Helou, G.; Kong, M.; Lacombe, F.; Li, J.; Landriu, D.; Metcalfe, L.; Okumura, K.; Perault, M.; Pollock, A. M. T.; Rouan, D.; Sam-Lone, J.; Sauvage, M.; Siebenmorgen, R.; Starck, J.-L.; Tran, D.; van Buren, D.; Vigroux, L.; Vivares, F.
This paper presents an overview of the Interactive Analysis System for ISOCAM (CIA). With this system, ISOCAM data can be analysed for calibration and engineering purposes, the ISOCAM pipeline software can be validated and refined, and astronomical data processing can be performed. The system is mainly IDL-based but contains Fortran, C, and C++ parts for special tasks. It represents an effort of 15 man-years and comprises over 1000 IDL and 200 Fortran, C, and C++ modules. CIA is a joint development by the ESA Astrophysics Division and the ISOCAM Consortium led by the ISOCAM PI, C. Cesarsky, Direction des Sciences de la Matiere, C.E.A., France.
A Systolic VLSI Design of a Pipeline Reed-Solomon Decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.
1984-01-01
A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.
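The core of the Euclidean approach is a partial extended GCD on polynomials: iterate until the remainder degree drops below a threshold t, and the accumulated multiplier plays the role of the error-locator polynomial. The sketch below works over GF(7) for readability and, unlike the paper's modification, does use field inversions:

```python
P = 7  # arithmetic over GF(7); RS decoders normally use GF(2^m)

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def poly_divmod(a, b):
    """Quotient and remainder of a / b over GF(P); coefficients are
    stored low-to-high and deg(a) >= deg(b) is assumed."""
    a, q = a[:], [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[-1], P - 2, P)                 # inverse of leading coeff
    for shift in range(len(a) - len(b), -1, -1):
        c = (a[shift + len(b) - 1] * inv) % P
        q[shift] = c
        for i, bc in enumerate(b):
            a[shift + i] = (a[shift + i] - c * bc) % P
    return trim(q), trim(a)

def partial_euclid(r0, r1, t):
    """Run Euclid's recursion until deg(remainder) < t; the running
    multiplier u is the error-locator analogue in RS decoding."""
    u0, u1 = [0], [1]
    while len(r1) - 1 >= t:
        q, r = poly_divmod(r0, r1)
        prod = [0] * (len(q) + len(u1) - 1)    # prod = q * u1
        for i, qi in enumerate(q):
            for j, uj in enumerate(u1):
                prod[i + j] = (prod[i + j] + qi * uj) % P
        u_next = [((u0[i] if i < len(u0) else 0) -
                   (prod[i] if i < len(prod) else 0)) % P
                  for i in range(max(len(u0), len(prod)))]
        r0, r1, u0, u1 = r1, r, u1, trim(u_next)
    return u1, r1

# Demo with arbitrary polynomials (coefficients low-to-high):
multiplier, remainder = partial_euclid([1, 0, 0, 0, 1], [2, 3, 1], t=1)
print("multiplier:", multiplier, "remainder:", remainder)
```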
A VLSI design of a pipeline Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.
1985-01-01
A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.
Atchan, Marjorie; Davis, Deborah; Foureur, Maralyn
2014-06-01
The Baby Friendly Hospital Initiative is a global, evidence-based, public health initiative. The evidence underpinning the Initiative supports practices promoting the initiation and maintenance of breastfeeding and encourages women's informed infant feeding decisions. In Australia, where the Initiative is known as the Baby Friendly Health Initiative (BFHI), the translation of evidence into practice has not been uniform, as demonstrated by the varying number of maternity facilities in each State and Territory currently accredited as 'baby friendly'. This variance has persisted despite BFHI implementation in Australia gaining 'in principle' support at a national and governmental level, as well as inclusion in health policy in several states. There are many stakeholders that exert an influence on policy development and health care practices. The aim of this paper is to identify a theory and model with which to examine where and how barriers occur in the gap between evidence and practice in the uptake of the BFHI in Australia. Knowledge translation theory and the research-to-practice pipeline model are used to examine the identified barriers to BFHI implementation and accreditation in Australia. Australian and international studies have identified similar issues that have either enabled implementation of the BFHI or acted as a barrier. Knowledge translation theory and the research-to-practice pipeline model are of practical value for examining barriers. Recommendations in the form of specific targeted strategies to facilitate knowledge transfer and supportive practices into the Australian health care system and current midwifery practice are included. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
eHive: an artificial intelligence workflow system for genomic analysis.
Severin, Jessica; Beal, Kathryn; Vilella, Albert J; Fitzgerald, Stephen; Schuster, Michael; Gordon, Leo; Ureta-Vidal, Abel; Flicek, Paul; Herrero, Javier
2010-05-11
The Ensembl project produces updates to its comparative genomics resources with each of its several releases per year. During each release cycle approximately two weeks are allocated to generate all the genomic alignments and the protein homology predictions. The number of calculations required for this task grows approximately quadratically with the number of species. We currently support 50 species in Ensembl and we expect the number to continue to grow in the future. We present eHive, a new fault tolerant distributed processing system initially designed to support comparative genomic analysis, based on blackboard systems, network distributed autonomous agents, dataflow graphs and block-branch diagrams. In the eHive system a MySQL database serves as the central blackboard and the autonomous agent, a Perl script, queries the system and runs jobs as required. The system allows us to define dataflow and branching rules to suit all our production pipelines. We describe the implementation of three pipelines: (1) pairwise whole genome alignments, (2) multiple whole genome alignments and (3) gene trees with protein homology inference. Finally, we show the efficiency of the system in real case scenarios. eHive allows us to produce computationally demanding results in a reliable and efficient way with minimal supervision and high throughput. Further documentation is available at: http://www.ensembl.org/info/docs/eHive/.
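The blackboard pattern the abstract describes — a central job database polled by autonomous worker agents — can be sketched in a few lines. In the sketch below, sqlite3 stands in for eHive's MySQL blackboard and the `job` table schema is invented for illustration; it shows the pattern, not eHive's actual schema or API (dataflow/branching rules are omitted).

```python
import sqlite3

def setup(conn):
    # Invented schema standing in for eHive's blackboard tables.
    conn.execute("CREATE TABLE job (id INTEGER PRIMARY KEY,"
                 " payload TEXT, status TEXT DEFAULT 'READY')")
    conn.executemany("INSERT INTO job (payload) VALUES (?)",
                     [("align chr%d" % i,) for i in range(1, 4)])

def claim_job(conn):
    """Atomically claim one READY job, as an autonomous agent would."""
    row = conn.execute("SELECT id, payload FROM job"
                       " WHERE status='READY' LIMIT 1").fetchone()
    if row is None:
        return None
    cur = conn.execute("UPDATE job SET status='RUNNING'"
                       " WHERE id=? AND status='READY'", (row[0],))
    return row if cur.rowcount else None   # lost the race: caller retries

def worker(conn):
    """Agent loop: claim and run jobs until the blackboard is drained.
    A real eHive agent would also write dataflow events to seed new jobs."""
    while (job := claim_job(conn)) is not None:
        job_id, payload = job
        print("running:", payload)       # real agents execute an analysis module
        conn.execute("UPDATE job SET status='DONE' WHERE id=?", (job_id,))

conn = sqlite3.connect(":memory:")
setup(conn)
worker(conn)
```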
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-23
... Federal agency for pipeline security, it is important for TSA to have contact information for company... DEPARTMENT OF HOMELAND SECURITY Transportation Security Administration Extension of Agency Information Collection Activity Under OMB Review: Pipeline System Operator Security Information AGENCY...
Hosseini, Zahra; Liu, Junmin; Solovey, Igor; Menon, Ravi S; Drangova, Maria
2017-04-01
To implement and optimize a new approach for susceptibility-weighted image (SWI) generation from multi-echo multi-channel image data and compare its performance against optimized traditional SWI pipelines. Five healthy volunteers were imaged at 7 Tesla. The inter-echo-variance (IEV) channel combination, which uses the variance of the local frequency shift at multiple echo times as a weighting factor during channel combination, was used to calculate multi-echo local phase shift maps. Linear phase masks were combined with the magnitude to generate IEV-SWI. The performance of the IEV-SWI pipeline was compared with that of two accepted SWI pipelines: channel combination followed by (i) homodyne filtering (HPH-SWI) and (ii) unwrapping and high-pass filtering (SVD-SWI). The filtering steps of each pipeline were optimized. Contrast-to-noise ratio was used as the comparison metric. Qualitative assessment of artifact and vessel conspicuity was performed and the processing time of the pipelines was evaluated. The optimized IEV-SWI pipeline (σ = 7 mm) resulted in continuous vessel visibility throughout the brain. IEV-SWI had significantly higher contrast compared with HPH-SWI and SVD-SWI (P < 0.001, Friedman nonparametric test). Residual background fields and phase wraps in HPH-SWI and SVD-SWI corrupted the vessel signal and/or generated vessel-mimicking artifact. Optimized implementation of the IEV-SWI pipeline processed a six-echo 16-channel dataset in under 10 min. IEV-SWI benefits from channel-by-channel processing of phase data and results in high-contrast images with an optimal balance between contrast and background noise removal, thereby presenting evidence of the importance of the order in which postprocessing techniques are applied for multi-channel SWI generation. Level of Evidence: 2. J. Magn. Reson. Imaging 2017;45:1113-1124. © 2016 International Society for Magnetic Resonance in Medicine.
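As a rough illustration of the two ingredients named above — weighting each channel by the inverse of its inter-echo frequency variance, and applying a linear negative-phase mask to the magnitude — the numpy sketch below may help. The exact weighting, masking and array conventions of the published pipeline may differ; shapes and names here are assumptions.

```python
import numpy as np

def iev_combine(phase, magnitude):
    """Combine channels, weighting by inter-echo stability (the IEV idea as
    we read the abstract): channels whose local frequency estimate varies
    little across echoes receive more weight.
    phase: (channels, echoes, ...) unwrapped phase; magnitude: (channels, ...)."""
    freq = np.diff(phase, axis=1)              # per-interval frequency proxy
    w = 1.0 / (freq.var(axis=1) + 1e-12)       # inverse inter-echo variance
    w /= w.sum(axis=0, keepdims=True)
    return (w * phase[:, -1]).sum(axis=0), (w * magnitude).sum(axis=0)

def linear_swi(mag, phase, n=4):
    """Standard linear negative-phase SWI mask raised to power n,
    multiplied into the magnitude (a textbook recipe, not the paper's)."""
    mask = np.clip(1.0 - np.maximum(phase, 0.0) / np.pi, 0.0, 1.0) ** n
    return mag * mask

# toy data: 16 channels, 6 echoes, one 32x32 slice
rng = np.random.default_rng(0)
ph = rng.uniform(-np.pi, np.pi, (16, 6, 32, 32))
mg = rng.uniform(0, 1, (16, 32, 32))
combined_phase, combined_mag = iev_combine(ph, mg)
swi = linear_swi(combined_mag, combined_phase)
```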
Wiegers, Thomas C; Davis, Allan Peter; Mattingly, Carolyn J
2014-01-01
The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation tasks collectively represent a community-wide effort to evaluate a variety of text-mining and information extraction systems applied to the biological domain. The BioCreative IV Workshop included five independent subject areas, including Track 3, which focused on named-entity recognition (NER) for the Comparative Toxicogenomics Database (CTD; http://ctdbase.org). Previously, CTD had organized document ranking and NER-related tasks for the BioCreative Workshop 2012; a key finding of that effort was that interoperability and integration complexity were major impediments to the direct application of the systems to CTD's text-mining pipeline. This underscored a prevailing problem with software integration efforts. Major interoperability-related issues included lack of process modularity, operating system incompatibility, tool configuration complexity and lack of standardization of high-level inter-process communications. One approach to potentially mitigate interoperability and general integration issues is the use of Web services to abstract implementation details; rather than integrating NER tools directly, HTTP-based calls from CTD's asynchronous, batch-oriented text-mining pipeline could be made to remote NER Web services for recognition of specific biological terms, using BioC (an emerging family of XML formats) for inter-process communications. To test this concept, participating groups developed Representational State Transfer (REST)/BioC-compliant Web services tailored to CTD's NER requirements. Participants were provided with a comprehensive set of training materials. CTD evaluated results obtained from the remote Web service-based URLs against a test data set of 510 manually curated scientific articles. Twelve groups participated in the challenge. Recall, precision, balanced F-scores and response times were calculated. Top balanced F-scores for gene, chemical and disease NER were 61, 74 and 51%, respectively. Response times ranged from fractions of a second to over a minute per article. We present a description of the challenge and summary of results, demonstrating how curation groups can effectively use interoperable NER technologies to simplify text-mining pipeline implementation. Database URL: http://ctdbase.org/ © The Author(s) 2014. Published by Oxford University Press.
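The integration style evaluated here — POSTing a BioC XML collection to a remote NER endpoint over HTTP — looks roughly like the sketch below. The URL is a placeholder and the minimal BioC payload is illustrative only; each participating service defined its own endpoint and annotation details.

```python
# Sketch of the Web-service integration style described above (assumptions:
# the URL is hypothetical; real BioC services define their own payloads).
import urllib.request

BIOC_DOC = """<?xml version="1.0" encoding="UTF-8"?>
<collection><source>CTD</source><document><id>PMID-1</id>
<passage><offset>0</offset>
<text>Cisplatin induces nephrotoxicity in rats.</text>
</passage></document></collection>"""

def call_ner_service(url, bioc_xml, timeout=60):
    """POST a BioC collection and return the annotated BioC response."""
    req = urllib.request.Request(
        url, data=bioc_xml.encode("utf-8"),
        headers={"Content-Type": "application/xml"}, method="POST")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode("utf-8")

# annotated = call_ner_service("https://example.org/ner/chemical", BIOC_DOC)
```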
Airborne LIDAR Pipeline Inspection System (ALPIS) Mapping Tests
DOT National Transportation Integrated Search
2003-06-06
Natural gas and hazardous liquid pipeline operators have a need to identify where leaks are occurring along their pipelines in order to lower the risks the pipelines pose to people and the environment. Current methods of locating natural gas and haza...
A bubble detection system for propellant filling pipeline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wen, Wen; Zong, Guanghua; Bi, Shusheng
2014-06-15
This paper proposes a bubble detection system based on the ultrasound transmission method, mainly for probing high-speed bubbles in the satellite propellant filling pipeline. First, three common ultrasonic detection methods are compared and the ultrasound transmission method is used in this paper. Then, the ultrasound beam in a vertical pipe is investigated, suggesting that the width of the beam used for detection is usually smaller than the internal diameter of the pipe, which means that when bubbles move close to the pipe wall, they may escape from being detected. A special device is designed to solve this problem. It can generate the spiral flow to force all the bubbles to ascend along the central line of the pipe. In the end, experiments are implemented to evaluate the performance of this system. Bubbles of five different sizes are generated and detected. Experiment results show that the sizes and quantity of bubbles can be estimated by this system. Also, the bubbles of different radii can be distinguished from each other. The numerical relationship between the ultrasound attenuation and the bubble radius is acquired and it can be utilized for estimating the unknown bubble size and measuring the total bubble volume.
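The last step described — using the measured attenuation-versus-radius relationship to estimate an unknown bubble size — amounts to inverting a monotonic calibration curve. A sketch with invented calibration numbers (the paper's actual fitted relationship is not reproduced here):

```python
import numpy as np

# Hypothetical calibration: attenuation (dB) measured for known bubble radii (mm).
radii_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
atten_db = np.array([0.8, 2.1, 4.0, 6.6, 9.8])   # invented, monotonic

def estimate_radius(measured_db):
    """Invert the monotonic calibration curve by linear interpolation."""
    return float(np.interp(measured_db, atten_db, radii_mm))

print(estimate_radius(5.0))   # ~1.7 mm on this invented curve
```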
Finite Element Analysis and Experimental Study on Elbow Vibration Transmission Characteristics
NASA Astrophysics Data System (ADS)
Qing-shan, Dai; Zhen-hai, Zhang; Shi-jian, Zhu
2017-11-01
Pipeline system vibration is one of the significant factors leading to the vibration and noise of vessels. Elbows are widely used in pipeline systems, yet research on elbow vibration is scarce and unsystematic. In this study, we first analysed the relationship between elbow vibration transmission characteristics and bending radius by ABAQUS finite element simulation. We then conducted vibration tests to observe the vibration transmission characteristics of elbows with the same diameter but different bending radii under different flow velocities. The results of both simulation and experiment showed that the vibration acceleration levels of the pipeline system decreased with increasing bending radius of the elbow, which is beneficial for reducing the transmission of vibration in the pipeline system. The results can serve as a reference for further studies and for the low-noise installation design of pipeline systems.
Bat detective-Deep learning tools for bat acoustic signal detection.
Mac Aodha, Oisin; Gibb, Rory; Barlow, Kate E; Browning, Ella; Firman, Michael; Freeman, Robin; Harder, Briana; Kinsey, Libby; Mead, Gary R; Newson, Stuart E; Pandourski, Ivan; Parsons, Stuart; Russ, Jon; Szodoray-Paradi, Abigel; Szodoray-Paradi, Farkas; Tilova, Elena; Girolami, Mark; Brostow, Gabriel; Jones, Kate E
2018-03-01
Passive acoustic sensing has emerged as a powerful tool for quantifying anthropogenic impacts on biodiversity, especially for echolocating bat species. To better assess bat population trends there is a critical need for accurate, reliable, and open source tools that allow the detection and classification of bat calls in large collections of audio recordings. The majority of existing tools are commercial or have focused on the species classification task, neglecting the important problem of first localizing echolocation calls in audio, which is particularly problematic in noisy recordings. We developed a convolutional neural network-based open-source pipeline for detecting ultrasonic, full-spectrum, search-phase calls produced by echolocating bats. Our deep learning algorithms were trained on full-spectrum ultrasonic audio collected along road-transects across Europe and labelled by citizen scientists from www.batdetective.org. When compared to other existing algorithms and commercial systems, we show significantly higher detection performance of search-phase echolocation calls with our test sets. As an example application, we ran our detection pipeline on bat monitoring data collected over five years from Jersey (UK), and compared results to a widely used commercial system. Our detection pipeline can be used for the automatic detection and monitoring of bat populations, and further facilitates their use as indicator species on a large scale. Our proposed pipeline makes only a small number of bat-specific design decisions, and with appropriate training data it could be applied to detecting other species in audio. A crucial novelty of our work is showing that with careful, non-trivial, design and implementation considerations, state-of-the-art deep learning methods can be used for accurate and efficient monitoring in audio.
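The detection task itself — localizing candidate calls in a long recording — reduces to scoring windows of a spectrogram. In the sketch below a simple in-band energy score stands in for the paper's CNN; the sampling rate, band limits and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def spectrogram(x, nfft=256, hop=128):
    """Magnitude STFT via a Hann-windowed sliding FFT."""
    win = np.hanning(nfft)
    frames = [np.abs(np.fft.rfft(win * x[i:i + nfft]))
              for i in range(0, len(x) - nfft, hop)]
    return np.array(frames)                     # (time, freq)

def detect_calls(spec, lo_bin, hi_bin, win=8, thresh=3.0):
    """Score each window by in-band energy (a stand-in for the CNN) and
    flag windows exceeding `thresh` times the median score."""
    scores = np.array([spec[t:t + win, lo_bin:hi_bin].sum()
                       for t in range(len(spec) - win)])
    return np.flatnonzero(scores > thresh * np.median(scores))

# toy recording: noise plus a short 30 kHz burst at t = 0.4 s
fs = 100_000
t = np.arange(fs) / fs
x = 0.1 * np.random.default_rng(1).standard_normal(fs)
x[40_000:41_000] += np.sin(2 * np.pi * 30_000 * t[:1_000])
spec = spectrogram(x)
hits = detect_calls(spec, lo_bin=60, hi_bin=100)   # ~23-39 kHz at fs = 100 kHz
```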
Acoustic power delivery to pipeline monitoring wireless sensors.
Kiziroglou, M E; Boyle, D E; Wright, S W; Yeatman, E M
2017-05-01
The use of energy harvesting for powering wireless sensors is made more challenging in most applications by the requirement for customization to each specific application environment, because of specificities of the available energy form, such as precise location, direction and motion frequency, as well as the temporal variation and unpredictability of the energy source. Wireless power transfer from dedicated sources can overcome these difficulties, and in this work, the use of targeted ultrasonic power transfer as a possible method for remote powering of sensor nodes is investigated. A powering system for pipeline monitoring sensors is described and studied experimentally, with a pair of identical, non-inertial piezoelectric transducers used at the transmitter and receiver. Power transmission of 18 mW (root mean square) through 1 m of a 118 mm diameter cast iron pipe with 8 mm wall thickness is demonstrated. By analysis of the delay between transmission and reception, including reflections from the pipeline edges, a transmission speed of 1000 m/s is observed, corresponding to the phase velocity of the L(0,1) axial and F(1,1) radial modes of the pipe structure. A reduction of power delivery with water filling is observed, yet over 4 mW of delivered power through a fully filled pipe is demonstrated. The transmitted power and voltage levels exceed the requirements for efficient power management, including rectification at cold-starting conditions, and for the operation of low-power sensor nodes. The proposed powering technique may allow the implementation of energy-autonomous wireless sensor systems for monitoring industrial and network pipeline infrastructure. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
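The reported transmission speed follows directly from the delay between transmitted and received bursts: 1 m covered in about 1 ms gives 1000 m/s. A cross-correlation sketch with synthetic signals (our illustration; the experimental setup and signal shapes are assumptions):

```python
import numpy as np

def transit_delay(tx, rx, fs):
    """Delay (s) that maximizes the cross-correlation of received vs transmitted."""
    lags = np.correlate(rx, tx, mode="full")
    return (np.argmax(lags) - (len(tx) - 1)) / fs

fs = 1_000_000                       # 1 MHz sampling
n = 5_000
tx = np.zeros(n); rx = np.zeros(n)
burst = np.sin(2 * np.pi * 40_000 * np.arange(200) / fs)
tx[:200] = burst
rx[1_000:1_200] = 0.2 * burst        # received 1 ms later, attenuated

delay = transit_delay(tx, rx, fs)    # ~1.0e-3 s
print(1.0 / delay)                   # ~1000 m/s over a 1 m path
```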
Parallelization of Lower-Upper Symmetric Gauss-Seidel Method for Chemically Reacting Flow
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Jost, Gabriele; Chang, Sherry
2005-01-01
Development of technologies for exploration of the solar system has revived an interest in computational simulation of chemically reacting flows, since planetary probe vehicles exhibit non-equilibrium phenomena during the atmospheric entry of a planet or a moon as well as the reentry to the Earth. Stability in combustion is essential for new propulsion systems. Numerical solution of real-gas flows often increases computational work by an order of magnitude compared to perfect gas flow, partly because of the increased complexity of the equations to solve. Recently, as part of Project Columbia, NASA has integrated a cluster of interconnected SGI Altix systems to provide a ten-fold increase in current supercomputing capacity that includes an SGI Origin system. Both the new and existing machines are based on a cache coherent non-uniform memory access architecture. The Lower-Upper Symmetric Gauss-Seidel (LU-SGS) relaxation method has been implemented in both perfect and real gas flow codes, including the Real-Gas Aerodynamic Simulator (RGAS). However, the vectorized RGAS code runs inefficiently on cache-based shared-memory machines such as the SGI systems. Parallelization of a Gauss-Seidel method is nontrivial due to its sequential nature. The LU-SGS method has been vectorized on an oblique plane in the INS3D-LU code, which has been one of the base codes for the NAS Parallel Benchmarks. The oblique plane has been called a hyperplane by computer scientists. It is straightforward to parallelize a Gauss-Seidel method by partitioning the hyperplanes once they are formed. Another way of parallelization is to schedule processors like a pipeline using software. Both hyperplane and pipeline methods have been implemented using OpenMP directives. The present paper reports the performance of the parallelized RGAS code on SGI Origin and Altix systems.
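The hyperplane idea is easiest to see in two dimensions: in a Gauss-Seidel sweep ordered by i+j, every point on one diagonal depends only on earlier diagonals, so the points of a diagonal can be updated in parallel. A minimal Laplace-equation sketch (ours, not the RGAS or INS3D-LU code):

```python
import numpy as np

def hyperplane_order(nx, ny):
    """Group interior grid points by i+j: points within one group have no
    Gauss-Seidel dependence on each other and may be updated in parallel."""
    planes = {}
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            planes.setdefault(i + j, []).append((i, j))
    return [planes[k] for k in sorted(planes)]

def gs_sweep(u, planes):
    """One Gauss-Seidel sweep for Laplace's equation in hyperplane order."""
    for plane in planes:              # sequential across planes...
        for i, j in plane:            # ...parallelizable within a plane
            u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])

u = np.zeros((32, 32)); u[0, :] = 1.0   # hot top boundary
planes = hyperplane_order(32, 32)
for _ in range(200):
    gs_sweep(u, planes)
```

Within a plane i+j = c, each update reads neighbours on planes c-1 (already updated) and c+1 (not yet), exactly reproducing the lexicographic Gauss-Seidel dependency pattern.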
The MiPACQ Clinical Question Answering System
Cairns, Brian L.; Nielsen, Rodney D.; Masanz, James J.; Martin, James H.; Palmer, Martha S.; Ward, Wayne H.; Savova, Guergana K.
2011-01-01
The Multi-source Integrated Platform for Answering Clinical Questions (MiPACQ) is a QA pipeline that integrates a variety of information retrieval and natural language processing systems into an extensible question answering system. We present the system’s architecture and an evaluation of MiPACQ on a human-annotated evaluation dataset based on the Medpedia health and medical encyclopedia. Compared with our baseline information retrieval system, the MiPACQ rule-based system demonstrates 84% improvement in Precision at One and the MiPACQ machine-learning-based system demonstrates 134% improvement. Other performance metrics including mean reciprocal rank and area under the precision/recall curves also showed significant improvement, validating the effectiveness of the MiPACQ design and implementation. PMID:22195068
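For reference, the two headline metrics are straightforward to compute from ranked answer lists; a small sketch (the variable names and toy data are ours):

```python
def precision_at_one(ranked_lists, gold):
    """Fraction of questions whose top-ranked answer is correct."""
    hits = sum(1 for q, ranked in ranked_lists.items()
               if ranked and ranked[0] in gold[q])
    return hits / len(ranked_lists)

def mean_reciprocal_rank(ranked_lists, gold):
    """Mean over questions of 1/rank of the first correct answer (0 if none)."""
    total = 0.0
    for q, ranked in ranked_lists.items():
        total += next((1.0 / (i + 1) for i, a in enumerate(ranked)
                       if a in gold[q]), 0.0)
    return total / len(ranked_lists)

ranked = {"q1": ["a", "b"], "q2": ["x", "y", "z"]}
gold = {"q1": {"a"}, "q2": {"z"}}
print(precision_at_one(ranked, gold))      # 0.5
print(mean_reciprocal_rank(ranked, gold))  # (1 + 1/3) / 2 ≈ 0.667
```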
NASA Astrophysics Data System (ADS)
Huang, Li-Xin; Gao, Hai-Xia; Li, Chun-Shu; Xiao, Chang-Ming
2009-08-01
In a colloidal system confined by a small cylindric pipeline, the depletion interaction between two large spheres is different to the system confined by two plates, and the influence on depletion interaction from the pipeline is related to both the size and shape of it. In this paper, the depletion interactions in the systems confined by pipelines of different sizes or different shapes are studied by Monte Carlo simulations. The numerical results show that the influence on depletion force from the cylindric pipeline is stronger than that from two parallel plates, and the depletion force will be strengthened when the diameter of the cylinder is decreased. In addition, we also find that the depletion interaction is rather affected if the shape change of the pipeline is slightly changed, and the influence on depletion force from the shape change is stronger than that from the size change.
49 CFR 191.11 - Distribution system: Annual report.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 3 2011-10-01 2011-10-01 false Distribution system: Annual report. 191.11 Section 191.11 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE;...
49 CFR 191.9 - Distribution system: Incident report.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 3 2011-10-01 2011-10-01 false Distribution system: Incident report. 191.9 Section 191.9 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE;...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-09
...'s Belle River-St. Clair Pipeline into the new 21-mile long Dawn Gateway Pipeline system, which... & Optimization, DTE Pipeline/Dawn Gateway LLC, One Energy Plaza, Detroit, MI 48226, phone (313) 235-6531 or e...
2014-06-20
archaeologically sensitive areas for system installation or operation and maintenance. An archaeological inventory survey of the pipeline...though introduced species are present. Birds are quite common in Hawai‘i, and there are many native bird species. Wildlife field surveys were...conducted on Bellows AFS as part of the 1996 Resource Inventory (Bellows AFS, 1996). During the survey, 21 species of birds were observed, including 3
Diroma, Maria Angela; Santorsola, Mariangela; Guttà, Cristiano; Gasparre, Giuseppe; Picardi, Ernesto; Pesole, Graziano; Attimonelli, Marcella
2014-01-01
Motivation: The increasing availability of mitochondria-targeted and off-target sequencing data in whole-exome and whole-genome sequencing studies (WXS and WGS) has increased the demand for effective pipelines to accurately measure heteroplasmy and to easily recognize the most functionally important mitochondrial variants among a huge number of candidates. To this end, we developed MToolBox, a highly automated pipeline to reconstruct and analyze human mitochondrial DNA from high-throughput sequencing data. Results: MToolBox implements an effective computational strategy for mitochondrial genome assembly and haplogroup assignment, also including a prioritization analysis of detected variants. MToolBox provides a Variant Call Format file featuring, for the first time, allele-specific heteroplasmy, and annotation files with prioritized variants. MToolBox was tested on simulated samples and applied on 1000 Genomes WXS datasets. Availability and implementation: The MToolBox package is available at https://sourceforge.net/projects/mtoolbox/. Contact: marcella.attimonelli@uniba.it Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25028726
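At its core, allele-specific heteroplasmy is the per-site fraction of reads supporting each observed allele. A toy sketch (quality filtering and MToolBox's actual logic omitted; this is only the underlying arithmetic):

```python
from collections import Counter

def heteroplasmy(bases):
    """Per-site heteroplasmy fractions from the base calls covering one
    mitochondrial position (read/base quality filtering omitted)."""
    counts = Counter(bases)
    depth = sum(counts.values())
    return {allele: n / depth for allele, n in counts.items()}

# 10x coverage of one position: 7 reads call A, 3 call G
print(heteroplasmy("AAAAAAAGGG"))   # {'A': 0.7, 'G': 0.3}
```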
KAnalyze: a fast versatile pipelined k-mer toolkit.
Audano, Peter; Vannberg, Fredrik
2014-07-15
Converting nucleotide sequences into short overlapping fragments of uniform length, k-mers, is a common step in many bioinformatics applications. While existing software packages count k-mers, few are optimized for speed, offer an application programming interface (API), a graphical interface or contain features that make it extensible and maintainable. We designed KAnalyze to compete with the fastest k-mer counters, to produce reliable output and to support future development efforts through well-architected, documented and testable code. Currently, KAnalyze can output k-mer counts in a sorted tab-delimited file or stream k-mers as they are read. KAnalyze can process large datasets with 2 GB of memory. This project is implemented in Java 7, and the command line interface (CLI) is designed to integrate into pipelines written in any language. As a k-mer counter, KAnalyze outperforms Jellyfish, DSK and a pipeline built on Perl and Linux utilities. Through extensive unit and system testing, we have verified that KAnalyze produces the correct k-mer counts over multiple datasets and k-mer sizes. KAnalyze is available on SourceForge: https://sourceforge.net/projects/kanalyze/. © The Author 2014. Published by Oxford University Press.
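The core k-mer counting operation and the sorted tab-delimited output format mentioned above can be sketched in a few lines (KAnalyze itself is Java and heavily optimized; this Python toy only illustrates the idea):

```python
from collections import Counter

def count_kmers(seq, k):
    """Count all overlapping k-mers in a nucleotide string."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def write_sorted(counts, path):
    """Sorted tab-delimited output, the format the abstract mentions."""
    with open(path, "w") as fh:
        for kmer in sorted(counts):
            fh.write(f"{kmer}\t{counts[kmer]}\n")

counts = count_kmers("ACGTACGTGACG", 4)
print(counts["ACGT"])   # 2
```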
Development of an extensible dual-core wireless sensing node for cyber-physical systems
NASA Astrophysics Data System (ADS)
Kane, Michael; Zhu, Dapeng; Hirose, Mitsuhito; Dong, Xinjun; Winter, Benjamin; Häckell, Moritz; Lynch, Jerome P.; Wang, Yang; Swartz, A.
2014-04-01
The introduction of wireless telemetry into the design of monitoring and control systems has been shown to reduce system costs while simplifying installations. To date, wireless nodes proposed for sensing and actuation in cyber-physical systems have been designed using microcontrollers with one computational pipeline (i.e., single-core microcontrollers). While concurrent code execution can be implemented on single-core microcontrollers, concurrency is emulated by splitting the pipeline's resources to support multiple threads of code execution. For many applications, this approach to multi-threading is acceptable in terms of speed and function. However, some applications such as feedback controls demand deterministic timing of code execution and maximum computational throughput. For these applications, the adoption of multi-core processor architectures represents one effective solution. Multi-core microcontrollers have multiple computational pipelines that can execute embedded code in parallel and can be interrupted independently of one another. In this study, a new wireless platform named Martlet is introduced with a dual-core microcontroller adopted in its design. The dual-core microcontroller design allows Martlet to dedicate one core to standard wireless sensor operations while the other core is reserved for embedded data processing and real-time feedback control law execution. Another distinct feature of Martlet is a standardized hardware interface that allows specialized daughter boards (termed wing boards) to be interfaced to the Martlet baseboard. This extensibility opens the opportunity to encapsulate specialized sensing and actuation functions in a wing board without altering the design of Martlet. In addition to describing the design of Martlet, a few example wings are detailed, along with experiments showing Martlet's ability to monitor and control physical systems such as wind turbines and buildings.
New gas-pipeline system planned for Argentina
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrich, V.
1979-03-01
A new gas-pipeline system planned for Argentina by Gas del Estado will carry up to 10 million cu m/day to Mendoza, San Juan, and San Luis from the Neuquen basin. Operating on a toll system of transport payment, the Centro-Oeste pipeline system will consist of 1100 km of over 30 in. dia trunk line and 600 km of 8, 12, and 18 in. dia lateral lines.
IPeak: An open source tool to combine results from multiple MS/MS search engines.
Wen, Bo; Du, Chaoqin; Li, Guilin; Ghali, Fawaz; Jones, Andrew R; Käll, Lukas; Xu, Shaohang; Zhou, Ruo; Ren, Zhe; Feng, Qiang; Xu, Xun; Wang, Jun
2015-09-01
Liquid chromatography coupled tandem mass spectrometry (LC-MS/MS) is an important technique for detecting peptides in proteomics studies. Here, we present an open source software tool, termed IPeak, a peptide identification pipeline that is designed to combine the Percolator post-processing algorithm and multi-search strategy to enhance the sensitivity of peptide identifications without compromising accuracy. IPeak provides a graphical user interface (GUI) as well as a command-line interface, which is implemented in JAVA and can work on all three major operating system platforms: Windows, Linux/Unix and OS X. IPeak has been designed to work with the mzIdentML standard from the Proteomics Standards Initiative (PSI) as an input and output, and also been fully integrated into the associated mzidLibrary project, providing access to the overall pipeline, as well as modules for calling Percolator on individual search engine result files. The integration thus enables IPeak (and Percolator) to be used in conjunction with any software packages implementing the mzIdentML data standard. IPeak is freely available and can be downloaded under an Apache 2.0 license at https://code.google.com/p/mzidentml-lib/. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Code of Federal Regulations, 2010 CFR
2010-10-01
... addressing time dependent and independent threats for a transmission pipeline operating below 30% SMYS not in... pipeline system are covered for purposes of the integrity management program requirements, an operator must... system, or an operator may apply one method to individual portions of the pipeline system. (Refer to...
Development of Protective Coatings for Co-Sequestration Processes and Pipelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bierwagen, Gordon; Huang, Yaping
2011-11-30
The program, entitled Development of Protective Coatings for Co-Sequestration Processes and Pipelines, examined the sensitivity of existing coating systems to supercritical carbon dioxide (SCCO2) exposure and developed a new coating system to protect pipelines from corrosion under SCCO2 exposure. A literature review was also conducted on pipeline corrosion sensors for monitoring pipes used in handling co-sequestration fluids. The research aimed to ensure safety and reliability for a pipeline transporting SCCO2 from the power plant to the sequestration site to mitigate the greenhouse gas effect. Results showed that one commercial coating and one designed formulation can both serve as potential candidates for an internal pipeline coating for transporting SCCO2.
49 CFR 192.907 - What must an operator do to implement this subpart?
Code of Federal Regulations, 2011 CFR
2011-10-01
... management program must consist, at a minimum, of a framework that describes the process for implementing... Transmission Pipeline Integrity Management § 192.907 What must an operator do to implement this subpart? (a... follow a written integrity management program that contains all the elements described in § 192.911 and...
49 CFR 192.907 - What must an operator do to implement this subpart?
Code of Federal Regulations, 2012 CFR
2012-10-01
... management program must consist, at a minimum, of a framework that describes the process for implementing... Transmission Pipeline Integrity Management § 192.907 What must an operator do to implement this subpart? (a... follow a written integrity management program that contains all the elements described in § 192.911 and...
49 CFR 192.907 - What must an operator do to implement this subpart?
Code of Federal Regulations, 2013 CFR
2013-10-01
... management program must consist, at a minimum, of a framework that describes the process for implementing... Transmission Pipeline Integrity Management § 192.907 What must an operator do to implement this subpart? (a... follow a written integrity management program that contains all the elements described in § 192.911 and...
49 CFR 192.907 - What must an operator do to implement this subpart?
Code of Federal Regulations, 2014 CFR
2014-10-01
... management program must consist, at a minimum, of a framework that describes the process for implementing... Transmission Pipeline Integrity Management § 192.907 What must an operator do to implement this subpart? (a... follow a written integrity management program that contains all the elements described in § 192.911 and...
30 CFR 250.905 - How do I get approval for the installation, modification, or repair of my platform?
Code of Federal Regulations, 2013 CFR
2013-07-01
... foundations; drilling, production, and pipeline risers and riser tensioning systems; turrets and turret-and... component design; pile foundations; drilling, production, and pipeline risers and riser tensioning systems... Loads imposed by jacket; decks; production components; drilling, production, and pipeline risers, and...
30 CFR 250.905 - How do I get approval for the installation, modification, or repair of my platform?
Code of Federal Regulations, 2014 CFR
2014-07-01
... foundations; drilling, production, and pipeline risers and riser tensioning systems; turrets and turret-and... component design; pile foundations; drilling, production, and pipeline risers and riser tensioning systems... Loads imposed by jacket; decks; production components; drilling, production, and pipeline risers, and...
A Systematic Methodology for Verifying Superscalar Microprocessors
NASA Technical Reports Server (NTRS)
Srivas, Mandayam; Hosabettu, Ravi; Gopalakrishnan, Ganesh
1999-01-01
We present a systematic approach to decompose and incrementally build the proof of correctness of pipelined microprocessors. The central idea is to construct the abstraction function by using completion functions, one per unfinished instruction, each of which specifies the effect (on the observables) of completing the instruction. In addition to avoiding the term size and case explosion problem that limits the pure flushing approach, our method helps localize errors, and also handles stages with interactive loops. The technique is illustrated on pipelined and superscalar pipelined implementations of a subset of the DLX architecture. It has also been applied to a processor with out-of-order execution.
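The completion-function idea can be shown on a toy single-latch pipeline: the abstraction function finishes the effect of the in-flight instruction, and correctness is the commutation of the pipeline step with the ISA step through that abstraction. A drastically simplified Python sketch (ours, far simpler than the paper's DLX proofs):

```python
# Toy demonstration of the completion-function proof idea: abstracting a
# pipeline state means completing its unfinished instruction, and one checks
# that stepping the pipeline then abstracting equals abstracting then
# stepping the ISA.

def exec_instr(regs, instr):
    """ISA semantics for a single instruction form: ('add', dst, src1, src2)."""
    op, dst, s1, s2 = instr
    out = dict(regs)
    out[dst] = regs[s1] + regs[s2]
    return out

def completion(latch, regs):
    """Completion function for the single latch: finish the in-flight
    instruction's effect on the observable register file."""
    return exec_instr(regs, latch) if latch else dict(regs)

def pipe_step(state, fetched):
    """Pipeline: commit the latched instruction, latch the fetched one."""
    regs, latch = state
    return (completion(latch, regs), fetched)

def abstraction(state):
    regs, latch = state
    return completion(latch, regs)

# commutation check on one concrete case
regs = {"r1": 2, "r2": 3, "r3": 0, "r4": 0}
state = (regs, ("add", "r3", "r1", "r2"))        # one instruction in flight
fetched = ("add", "r4", "r3", "r1")
lhs = abstraction(pipe_step(state, fetched))      # step pipeline, then abstract
rhs = exec_instr(abstraction(state), fetched)     # abstract, then ISA step
assert lhs == rhs
```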
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washenfelder, D. J.; Girardot, C. L.; Wilson, E. R.
The twenty-eight double-shell underground radioactive waste storage tanks at the U.S. Department of Energy's Hanford Site near Richland, WA are interconnected by the Waste Transfer System network of buried steel encased pipelines and pipe jumpers in below-grade pits. The pipeline material is stainless steel or carbon steel in 51 mm to 152 mm (2 in. to 6 in.) sizes. The pipelines carry slurries ranging up to 20 volume percent solids and supernatants with less than one volume percent solids at velocities necessary to prevent settling. The pipelines, installed between 1976 and 2011, were originally intended to last until the 2028 completion of the double-shell tank storage mission. The mission has been subsequently extended. In 2010 the Tank Operating Contractor began a systematic evaluation of the Waste Transfer System pipeline conditions applying guidelines from API 579-1/ASME FFS-1 (2007), Fitness-For-Service. Between 2010 and 2014 Fitness-for-Service examinations of the Waste Transfer System pipeline materials, sizes, and components were completed. In parallel, waste throughput histories were prepared allowing side-by-side pipeline wall thinning rate comparisons between carbon and stainless steel, slurries and supernatants, and throughput volumes. The work showed that for transfer volumes up to 6.1E+05 m³ (161 million gallons), the highest throughput of any pipeline segment examined, there has been no detectable wall thinning in either stainless or carbon steel pipeline material regardless of waste fluid characteristics or throughput. The paper describes the field and laboratory evaluation methods used for the Fitness-for-Service examinations, the results of the examinations, and the data reduction methodologies used to support Hanford Waste Transfer System pipeline wall thinning conclusions.
Technology Cost and Schedule Estimation (TCASE) Final Report
NASA Technical Reports Server (NTRS)
Wallace, Jon; Schaffer, Mark
2015-01-01
During the 2014-2015 project year, the focus of the TCASE project shifted from collecting historical data from many sources to securing a data pipeline between TCASE and NASA's widely used TechPort system. TCASE v1.0 implements a data import solution that was achievable within the project scope, while still providing the basis for a long-term ability to keep TCASE in sync with TechPort. Conclusion: TCASE data quantity is adequate and the established data pipeline will enable future growth. Data quality is now highly dependent on the quality of data in TechPort. Recommendation: Technology development organizations within NASA should continue to work closely with project/program data tracking and archiving efforts (e.g. TechPort) to ensure that the right data is being captured at the appropriate quality level. TCASE would greatly benefit, for example, if project cost/budget information were included in TechPort in the future.
VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal
NASA Astrophysics Data System (ADS)
Satheeskumaran, S.; Sabrigiriraj, M.
2016-06-01
Least mean square (LMS)-based adaptive filters are widely deployed for removing artefacts in the electrocardiogram (ECG) because of their low computational cost. However, they exhibit a high mean square error (MSE) in noisy environments. The transform-domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed LMS adaptive filter is used to remove artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing, but they are not flexible. By using field programmable gate arrays, pipelined architectures can be used to enhance system performance. The pipelined architecture can enhance the operating efficiency of the adaptive filter and reduce power consumption. This technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.
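The delayed-LMS update the abstract builds on — using the error and input from D samples earlier so the multiply-accumulate chain can be pipelined — is compact. The sketch below uses a fixed step size rather than the paper's variable step-size rule, and demonstrates it on a system-identification toy problem:

```python
import numpy as np

def dlms(x, d, taps=8, mu=0.01, delay=4):
    """Delayed LMS: w[n+1] = w[n] + mu * e[n-D] * x_vec[n-D].

    Using the D-sample-old error/input pair is what lets the update be
    pipelined in hardware; for small mu it converges like ordinary LMS."""
    n = len(x)
    w = np.zeros(taps)
    e = np.zeros(n)
    xbuf = np.zeros((n, taps))
    for i in range(taps - 1, n):
        xbuf[i] = x[i - taps + 1:i + 1][::-1]   # most recent sample first
        e[i] = d[i] - w @ xbuf[i]
        j = i - delay                           # use the stale error and input
        if j >= taps - 1:
            w = w + mu * e[j] * xbuf[j]
    return w, e

# identify a short FIR channel from noisy observations
rng = np.random.default_rng(0)
h = np.array([0.6, -0.3, 0.1, 0.05, 0, 0, 0, 0])
x = rng.standard_normal(20_000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = dlms(x, d)
print(np.round(w, 2))   # approaches h for small mu and delay
```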
Guidelines for riser splash zone design and repair
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-02-01
Many years of offshore oil and gas development have established the subsea pipeline as a reliable and cost-effective means of transportation for produced hydrocarbons. The requirement for subsea pipeline systems will continue to move into deeper water and more remote locations with the future development of oil and gas exploration. The integrity of subsea pipeline and riser systems throughout their operating lifetime is an important area for operators to consider in maximizing reliability and serviceability for economic, contractual and environmental reasons. Adequate design and installation are the basis for ensuring the integrity of any subsea pipeline and riser system. In the event of system damage, from any source, quick and accurate repair and reinstatement of the pipeline system is essential. This report has been developed to provide guidelines for riser and splash zone design, to perform a detailed overview of existing riser repair techniques and products, and to prepare comprehensive guidelines identifying the capabilities and limits of riser reinstatement systems.
Adaptations to a New Physical Training Program in the Combat Controller Training Pipeline
2010-09-01
education regarding optimizing recovery through hydration and nutrition. We designed and implemented a short class that explained the benefits of pre...to poor nutrition and hydration practices. Finally, many of the training methods employed throughout the pipeline were outdated, non-periodized, and...contributing to overtraining. Creation of a nutrition and hydration class. Apart from being told to drink copious amounts of water, trainees had little
NASA Astrophysics Data System (ADS)
Lan, G.; Jiang, J.; Li, D. D.; Yi, W. S.; Zhao, Z.; Nie, L. N.
2013-12-01
The calculation of water-hammer pressure for a single-phase liquid in a pipeline of uniform characteristics is relatively mature, but less research has addressed the calculation of water-hammer pressure in complex pipelines carrying slurry flows with solid particles. In this paper, based on developments in slurry pipelines at home and abroad, the fundamental principles and methods of numerical simulation of transient processes are presented, and several boundary conditions are given. Through numerical simulation and analysis of the transient processes of a practical long-distance slurry transportation pipeline system, effective protection measures and operating suggestions are presented. A model for calculating the water hammer of the solid and fluid phases is established based on a practical long-distance slurry pipeline transportation system. After performing a numerical simulation of the transient process and analyzing and comparing the results, effective protection measures and operating advice are recommended, which has guiding significance for the design and operating management of practical long-distance slurry pipeline transportation systems.
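Transient pipeline simulation of this kind is usually built on the method of characteristics; the standard interior-node update looks like the sketch below. This is the single-phase textbook form (our illustration), not the paper's two-phase slurry model, which must additionally track the solid phase.

```python
def moc_interior(H, Q, B, R):
    """One method-of-characteristics time step for interior nodes of a pipe.

    H, Q: head and flow lists at the previous step; B = a/(g*A) is the
    characteristic impedance; R lumps the friction term. Standard forms:
      C+ : Cp = H[i-1] + B*Q[i-1] - R*Q[i-1]*|Q[i-1]|
      C- : Cm = H[i+1] - B*Q[i+1] + R*Q[i+1]*|Q[i+1]|
      H_new = (Cp + Cm)/2,  Q_new = (Cp - Cm)/(2B)
    """
    Hn, Qn = H[:], Q[:]
    for i in range(1, len(H) - 1):
        Cp = H[i-1] + B*Q[i-1] - R*Q[i-1]*abs(Q[i-1])
        Cm = H[i+1] - B*Q[i+1] + R*Q[i+1]*abs(Q[i+1])
        Hn[i] = 0.5 * (Cp + Cm)
        Qn[i] = (Cp - Cm) / (2.0 * B)
    return Hn, Qn   # boundary nodes (reservoir, valve, relief) handled separately

# one step on a ten-node pipe with illustrative parameters
H = [100.0] * 10
Q = [0.5] * 10
H, Q = moc_interior(H, Q, B=120.0, R=2.0)
```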
Mathematical simulation for compensation capacities area of pipeline routes in ship systems
NASA Astrophysics Data System (ADS)
Ngo, G. V.; Sakhno, K. N.
2018-05-01
In this paper, the authors consider the problem of enhancing the manufacturability of ship system pipelines at the design stage. An analysis of arrangements and of the possibilities for compensating deviations of pipeline routes has been carried out. The task is to produce the "fit pipe" together with the rest of the pipes in the route. It is proposed to compensate for deviations by movement of the pipeline route during pipe installation and to calculate the maximum values of these displacements in the analyzed path. Theoretical bases for deviation compensation of pipeline routes using rotations of parallel pairs of pipe sections are developed. Mathematical and graphical simulations of the compensation capacity areas of pipeline routes with various configurations are completed. Prerequisites have been created for an automated program that will determine the values of the compensatory capacity area for pipeline routes and assign the quantities of necessary allowances.
Leak detectability of the Norman Wells pipeline by mass balance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liou, J.C.P.
Pipeline leak detection using software-based systems is becoming common practice. The detectability of such systems is measured by how small and how quickly a leak can be detected. Algorithms used and measurement uncertainties determine leak detectability. This paper addresses leak detectability using mass balance, establishes leak detectability for the Norman Wells pipeline, and compares it with field leak test results. The pipeline is operated by the Interprovincial Pipe Line (IPL) Inc., of Edmonton, Canada. It is a 12.75-inch outside diameter steel pipe with variable wall thickness. The length of the pipe is approximately 550 miles (868.9 km). The pipeline transports light crude oil at a constant flow rate of about 250 m³/hr. The crude oil can enter the pipeline at two locations. Besides the Norman Wells inlet, there is a side line near Zama terminal that can inject crude oil into the pipeline.
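A mass-balance detector flags a leak when the windowed inflow/outflow imbalance exceeds what metering uncertainty can explain; averaging over N samples shrinks the noise by √N, which is why smaller leaks need longer detection times. A generic sketch (our illustration, not IPL's system; all numbers are invented):

```python
import numpy as np

def leak_alarm(q_in, q_out, window, sigma_meter, k=3.0):
    """Flag a leak when the windowed mean imbalance exceeds k standard
    deviations of the combined two-meter uncertainty, sigma*sqrt(2/N)."""
    imbalance = np.asarray(q_in) - np.asarray(q_out)
    threshold = k * sigma_meter * np.sqrt(2.0 / window)
    alarms = []
    for t in range(window, len(imbalance) + 1):
        if imbalance[t - window:t].mean() > threshold:
            alarms.append(t - 1)
    return alarms

# synthetic line at ~250 m^3/h with a 2 m^3/h leak starting at sample 300
rng = np.random.default_rng(2)
leak = np.where(np.arange(600) > 300, 2.0, 0.0)
q_in = 250 + rng.normal(0, 1.0, 600)
q_out = 250 - leak + rng.normal(0, 1.0, 600)
print(leak_alarm(q_in, q_out, window=60, sigma_meter=1.0)[:1])  # first alarm
```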
NASA Astrophysics Data System (ADS)
Stefan Devlin, Benjamin; Nakura, Toru; Ikeda, Makoto; Asada, Kunihiro
We detail a self-synchronous field programmable gate array (SSFPGA) with a dual-pipeline (DP) architecture to conceal the pre-charge time of dynamic logic, and its throughput optimization by pipeline alignment implemented on benchmark circuits. A self-synchronous LUT (SSLUT) consists of a three-input tree-type structure with 8 bits of SRAM for programming. A self-synchronous switch box (SSSB) consists of both pass transistors and buffers to route signals, with 12 bits of SRAM. One common block with one SSLUT and one SSSB occupies 2.2 Mλ² of area with 35 bits of SRAM, and the prototype SSFPGA with 34 × 30 (1020) blocks is designed and fabricated using 65 nm CMOS. Measured results show 430 MHz and 647 MHz operation at 1.2 V for a 3-bit ripple carry adder, without and with throughput optimization, respectively. Using the proposed pipeline alignment techniques, we achieve the maximum throughput of 647 MHz in various benchmarks on the SSFPGA, and demonstrate up to 56.1 times throughput improvement. The pipeline alignment is carried out within the number of logic elements in the array and pipeline buffers in the switching matrix.
Management experiences and trends for water reuse implementation in Northern California.
Bischel, Heather N; Simon, Gregory L; Frisby, Tammy M; Luthy, Richard G
2012-01-03
In 2010, California fell nearly 300,000 acre-ft per year (AFY) short of its goal to recycle 1,000,000 AFY of municipal wastewater. Growth of recycled water in the 48 Northern California counties represented only 20% of the statewide increase in reuse between 2001 and 2009. To evaluate these trends and experiences, major drivers and challenges that influenced the implementation of recycled water programs in Northern California are presented based on a survey of 71 program managers conducted in 2010. Regulatory requirements limiting discharge, cited by 65% of respondents as a driver for program implementation, historically played an important role in motivating many water reuse programs in the region. More recently, pressures from limited water supplies and needs for system reliability are prevalent drivers. Almost half of respondents (49%) cited ecological protection or enhancement goals as drivers for implementation. However, water reuse for the direct benefit of natural systems and wildlife habitat represents just 6-7% of total recycling in Northern California and few financial incentives exist for such projects. Economic challenges are the greatest barrier to successful project implementation. In particular, the high costs of distribution systems (pipelines) are especially challenging, with costs of $1 to $3 million per mile experienced. Negative perceptions of water reuse were cited by only 26% of respondents as major hindrances to implementation of surveyed programs.
NASA Astrophysics Data System (ADS)
Yang, Xiaojun; Zhu, Xiaofei; Deng, Chi; Li, Junyi; Liu, Cheng; Yu, Wenpeng; Luo, Hui
2017-10-01
To improve the management and monitoring of leakage and abnormal disturbances along long-distance oil pipelines, a distributed optical fiber temperature and vibration sensing system was employed to test the feasibility of health monitoring of a domestic oil pipeline. Simulated leakage and abnormal disturbance events on the oil pipeline were performed in the experiment. It is demonstrated that leakage and abnormal disturbance events can be monitored and located accurately with the distributed optical fiber sensing system, which exhibits good sensitivity, reliability, and ease of operation and maintenance, and shows good prospects for market application.
Pipeline repair development in support of the Oman to India gas pipeline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abadie, W.; Carlson, W.
1995-12-01
This paper provides a summary of the development conducted to date for the ultra-deep, diverless pipeline repair system for the proposed Oman to India Gas Pipeline. The work has addressed critical development areas involving testing and/or prototype development of tools and procedures required to perform a diverless pipeline repair in water depths of up to 3,525 m.
Simulation of systems for shock wave/compression waves damping in technological plants
NASA Astrophysics Data System (ADS)
Sumskoi, S. I.; Sverchkov, A. M.; Lisanov, M. V.; Egorov, A. F.
2016-09-01
During the operation of pipeline systems, the flow velocity in a pipeline can decrease as a result of pump stops or valve closures. As a result, compression waves appear in the pipeline system. These waves can propagate through the system, leading to its destruction. This phenomenon is called water hammer. The most dangerous situations occur when the flow is stopped quickly. Such an urgent flow cutoff often takes place in an emergency situation when liquid hydrocarbons are being loaded into sea tankers: to prevent environmental pollution, the hydrocarbon loading must be stopped urgently, and the flow is cut off within a few seconds. To prevent an increase in pressure in a pipeline system during water hammer, special protective systems (pressure relief systems) are installed. Approaches to modeling such protection systems are described in this paper, and a model of a particular pressure relief system is considered. It is shown that, in the case of an increased hydrocarbon loading rate at a sea tanker, the presence of the pressure relief system allows a safe loading mode to be maintained.
Liu, Wenbin; Liu, Aimin
2018-01-01
With the exploitation of offshore oil and gas gradually moving to deep water, higher temperature and pressure differences are applied to the pipeline system, making global buckling of the pipeline more serious. For unburied deep-water pipelines, lateral buckling is the major buckling form. Initial imperfections widely exist in the pipeline system due to manufacturing defects or the influence of an uneven seabed, and the distribution and geometric features of initial imperfections are random. They can be divided into two kinds based on shape: single-arch imperfections and double-arch imperfections. This paper analyzes the global buckling process of a pipeline with two initial imperfections using a numerical simulation method and reveals how the ratio of the initial imperfections' spacing to the imperfection wavelength, and the combination of imperfections, affect the buckling process. The results show that a pipeline with two initial imperfections may suffer superposition of global buckling. The growth rates of buckling displacement, axial force and bending moment in the superposition zone are several times larger than in a pipeline without buckling superposition. The ratio of the initial imperfections' spacing to the imperfection wavelength decides whether a pipeline suffers buckling superposition. The potential failure point of a pipeline exhibiting buckling superposition is the same as that of a pipeline without buckling superposition, but the failure risk is much higher. The shape and direction of two nearby imperfections also affect the failure risk of a pipeline exhibiting global buckling superposition: the failure risk of a pipeline with two double-arch imperfections is higher than that of a pipeline with two single-arch imperfections. PMID:29554123
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-15
... pipeline infrastructure to receive natural gas produced from Marcellus Shale production areas for delivery to existing interstate pipeline systems of Tennessee Gas Pipeline Company (TGP), CNYOG, and Transcontinental Gas Pipeline Corporation (Transco). It would also provide for bi-directional transportation...
NASA Astrophysics Data System (ADS)
Modigliani, Andrea; Goldoni, Paolo; Royer, Frédéric; Haigron, Regis; Guglielmi, Laurent; François, Patrick; Horrobin, Matthew; Bristow, Paul; Vernet, Joel; Moehler, Sabine; Kerber, Florian; Ballester, Pascal; Mason, Elena; Christensen, Lise
2010-07-01
The X-shooter data reduction pipeline, as part of the ESO-VLT Data Flow System, provides recipes for Paranal Science Operations and for Data Product and Quality Control Operations at the Garching headquarters. At Paranal, it is used for quick-look data evaluation. The pipeline recipes can be executed either with EsoRex at the command-line level or through the Gasgano graphical user interface. The recipes are implemented with the ESO Common Pipeline Library (CPL). X-shooter is the first of the second generation of VLT instruments. It makes it possible to collect in one shot the full spectrum of the target from 300 to 2500 nm, subdivided into three arms optimised for the UVB, VIS and NIR ranges, with an efficiency between 15% and 35% including the telescope and the atmosphere, and a spectral resolution varying between 3000 and 17,000. It allows observations in stare and offset modes, using the slit or an IFU, and observing sequences nodding the target along the slit. Data reduction can be performed either with a classical approach, by determining the spectral format via 2D-polynomial transformations, or with the help of a dedicated physical model of the instrument, to gain insight into the instrument and to allow a constrained solution that depends on a few parameters with a physical meaning. In the present paper we describe the steps of data reduction necessary to fully reduce science observations in the different modes, with examples of typical calibration data and observation sequences.
Innovative Sensors for Pipeline Crawlers: Rotating Permanent Magnet Inspection
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Bruce Nestleroth; Richard J. Davis; Stephanie Flamberg
2006-09-30
Internal inspection of pipelines is an important tool for ensuring safe and reliable delivery of fossil energy products. Current inspection systems that are propelled through the pipeline by the product flow cannot be used to inspect all pipelines because of the various physical barriers they may encounter. To facilitate inspection of these ''unpiggable'' pipelines, recent inspection development efforts have focused on a new generation of powered inspection platforms that are able to crawl slowly inside a pipeline and can maneuver past the physical barriers that limit internal inspection applicability, such as bore restrictions, low product flow rate, and low pressure.more » The first step in this research was to review existing inspection technologies for applicability and compatibility with crawler systems. Most existing inspection technologies, including magnetic flux leakage and ultrasonic methods, had significant implementation limitations including mass, physical size, inspection energy coupling requirements and technology maturity. The remote field technique was the most promising but power consumption was high and anomaly signals were low requiring sensitive detectors and electronics. After reviewing each inspection technology, it was decided to investigate the potential for a new inspection method. The new inspection method takes advantage of advances in permanent magnet strength, along with their wide availability and low cost. Called rotating permanent magnet inspection (RPMI), this patent pending technology employs pairs of permanent magnets rotating around the central axis of a cylinder to induce high current densities in the material under inspection. Anomalies and wall thickness variations are detected with an array of sensors that measure local changes in the magnetic field produced by the induced current flowing in the material. This inspection method is an alternative to the common concentric coil remote field technique that induces low-frequency eddy currents in ferromagnetic pipes and tubes. Since this is a new inspection method, both theory and experiment were used to determine fundamental capabilities and limitations. Fundamental finite element modeling analysis and experimental investigations performed during this development have led to the derivation of a first order analytical equation for designing rotating magnetizers to induce current and positioning sensors to record signals from anomalies. Experimental results confirm the analytical equation and the finite element calculations provide a firm basis for the design of RPMI systems. Experimental results have shown that metal loss anomalies and wall thickness variations can be detected with an array of sensors that measure local changes in the magnetic field produced by the induced current flowing in the material. The design exploits the phenomenon that circumferential currents are easily detectable at distances well away from the magnets. Current changes at anomalies were detectable with commercial low cost Hall Effect sensors. Commercial analog to digital converters can be used to measure the sensor output and data analysis can be performed in real time using PC computer systems. The technology was successfully demonstrated during two blind benchmark tests where numerous metal loss defects were detected. For this inspection technology, the detection threshold is a function of wall thickness and corrosion depth. 
For thinner materials, the detection threshold was experimentally shown to be comparable to magnetic flux leakage. For wall thicknesses greater than three-tenths of an inch, the detection threshold increases with wall thickness. The potential for metal loss anomaly sizing was demonstrated in the second benchmarking study, again with accuracy comparable to existing magnetic flux leakage technologies. The rotating permanent magnet system has the potential for inspecting unpiggable pipelines, since the magnetizer configurations can be made sufficiently small with respect to the bore of the pipe to pass obstructions that limit the application of many inspection technologies. Also, since the largest dimension of the Hall-effect sensor is two-tenths of an inch, the sensor packages can be small, flexible, and light. The power consumption, on the order of ten watts, is low compared to some inspection systems; this would enable autonomous systems to inspect longer distances between battery charges. This project showed there are no technical barriers to building a field-ready unit that can pass through narrow obstructions, such as plug valves. The next step in project implementation is to build a field-ready unit that can begin to establish optimal performance capabilities, including detection thresholds, sizing capability, and wall thickness limitations.
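As a rough illustration of the kind of signal handling such a sensor array implies (not the project's algorithm; the function name, window size, and threshold below are invented for illustration), a Hall-effect channel can be scanned for local departures from its running baseline:

    # Hypothetical anomaly flagging for one Hall-effect channel; window and
    # threshold values are illustrative assumptions, not project parameters.
    import numpy as np

    def flag_anomalies(field, window=200, k=4.0):
        """Flag samples deviating from a running baseline by > k robust sigmas."""
        kernel = np.ones(window) / window
        baseline = np.convolve(field, kernel, mode="same")  # moving average
        residual = field - baseline
        mad = np.median(np.abs(residual - np.median(residual)))
        sigma = 1.4826 * mad                                # robust spread estimate
        return np.abs(residual) > k * sigma

    # Synthetic channel: a slow rotation harmonic plus one injected metal-loss blip.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 5000)
    channel = 0.02 * np.sin(2 * np.pi * 3 * t) + rng.normal(0, 0.01, t.size)
    channel[2400:2450] += 0.15
    print("flagged samples:", np.flatnonzero(flag_anomalies(channel)).size)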
Coordinated Scheduling for Interdependent Electric Power and Natural Gas Infrastructures
Zlotnik, Anatoly; Roald, Line; Backhaus, Scott; ...
2016-03-24
The extensive installation of gas-fired power plants in many parts of the world has led electric systems to depend heavily on reliable gas supplies. The use of gas-fired generators for peak load and reserve provision causes high intraday variability in withdrawals from high-pressure gas transmission systems. Such variability can lead to gas price fluctuations and supply disruptions that affect electric generator dispatch and electricity prices, and threaten the security of power systems and gas pipelines. These infrastructures function on vastly different spatio-temporal scales, which makes coordination difficult under current practices of separate operation and market clearing. In this article, we apply new techniques for control of dynamic gas flows on pipeline networks to examine day-ahead scheduling of electric generator dispatch and gas compressor operation for different levels of integration, spanning from separate forecasting and simulation to combined optimal control. We formulate multiple coordination scenarios and develop tractable, physically accurate computational implementations. These scenarios are compared using an integrated model of test networks for power and gas systems with 24 nodes and 24 pipes, respectively, which are coupled through gas-fired generators. The analysis quantifies the economic efficiency and security benefits of gas-electric coordination and dynamic gas system operation.
1987-09-01
Later, when these allocation strategies become a performance concern, the scheduler can be molded to fit the particular application... distracts attention from the more important points that this example is intended to demonstrate. The implementation, therefore, is described separately in... for the benefit of outsiders. From the object's point of view the pipeline is nothing but a list of messages that tell it how to mutate its own state
Tigres Workflow Library: Supporting Scientific Pipelines on HPC Systems
Hendrix, Valerie; Fox, James; Ghoshal, Devarshi; ...
2016-07-21
The growth in scientific data volumes has resulted in the need for new tools that enable users to operate on and analyze data on large-scale resources. In the last decade, a number of scientific workflow tools have emerged. These tools often target distributed environments and often need expert help to compose and execute the workflows. Data-intensive workflows are often ad hoc; they involve an iterative development process in which users compose and test their workflows on desktops before scaling up to larger systems. In this paper, we present the design and implementation of Tigres, a workflow library that supports the iterative workflow development cycle of data-intensive workflows. Tigres provides an application programming interface to a set of programming templates (sequence, parallel, split, merge) that can be used to compose and execute computational and data pipelines. We discuss the results of our evaluation of scientific and synthetic workflows, showing that Tigres performs with minimal template overhead (a mean of 13 seconds over all experiments). We also discuss various factors (e.g., I/O performance, execution mechanisms) that affect the performance of scientific workflows on HPC systems.
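To make the template idea concrete, the sketch below mimics sequence and parallel (split/merge) composition in plain Python; it illustrates the concepts only and is not the Tigres API:

    # Conceptual stand-ins for workflow templates; NOT the Tigres API.
    from concurrent.futures import ProcessPoolExecutor

    def sequence(tasks, data):
        """Run tasks one after another, piping each output to the next."""
        for task in tasks:
            data = task(data)
        return data

    def parallel(task, chunks):
        """Split: apply one task to many inputs concurrently; merge: collect."""
        with ProcessPoolExecutor() as pool:
            return list(pool.map(task, chunks))

    def normalize(xs):
        return [x / max(xs) for x in xs]

    def square(xs):
        return [x * x for x in xs]

    if __name__ == "__main__":
        merged = sum(parallel(sorted, [[3, 1], [9, 7]]), [])   # split/merge
        print(sequence([normalize, square], merged))           # sequence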
An Efficient Pipeline Wavefront Phase Recovery for the CAFADIS Camera for Extremely Large Telescopes
Magdaleno, Eduardo; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel
2010-01-01
In this paper we show a fast, specialized hardware implementation of the wavefront phase recovery algorithm using the CAFADIS camera. The CAFADIS camera is a new plenoptic sensor patented by the Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can simultaneously measure the wavefront phase and the distance to the light source in a real-time process. The pipeline algorithm is implemented using Field Programmable Gate Arrays (FPGAs). These devices provide an architecture capable of handling the sensor output stream with a massively parallel approach, and they are efficient enough to meet the processing-time requirements of several Adaptive Optics (AO) problems in Extremely Large Telescopes (ELTs). The FPGA implementation of the wavefront phase recovery algorithm using the CAFADIS camera is based on the very fast computation of two-dimensional fast Fourier transforms (FFTs). We have therefore carried out a comparison between our novel FPGA 2D-FFT and other implementations. PMID:22315523
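As background to the FFT-based approach, the sketch below shows the generic Fourier least-squares reconstruction of a phase map from its measured x/y slopes in NumPy; the actual CAFADIS algorithm and its FPGA data path are more elaborate:

    # Generic FFT-based wavefront reconstruction from slope measurements.
    import numpy as np

    def recover_phase(sx, sy):
        n = sx.shape[0]
        fx, fy = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
        Sx, Sy = np.fft.fft2(sx), np.fft.fft2(sy)
        denom = fx**2 + fy**2
        denom[0, 0] = 1.0                      # avoid dividing by zero
        Phi = (-1j / (2 * np.pi)) * (fx * Sx + fy * Sy) / denom
        Phi[0, 0] = 0.0                        # piston term is unobservable
        return np.real(np.fft.ifft2(Phi))

    # Self-check on a synthetic phase screen: slopes in, phase (minus piston) out.
    y, x = np.mgrid[0:64, 0:64] / 64.0
    phi = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
    sy, sx = np.gradient(phi)                  # per-pixel finite differences
    print("rms error:", np.std(recover_phase(sx, sy) - phi))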
Expansion of the U.S. Natural Gas Pipeline Network
2009-01-01
Additions in 2008 and Projects through 2011. This report examines new natural gas pipeline capacity added to the U.S. natural gas pipeline system during 2008. In addition, it discusses and analyzes proposed natural gas pipeline projects that may be developed between 2009 and 2011, and the market factors supporting these initiatives.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daum, Christopher; Zane, Matthew; Han, James
2011-01-31
The U.S. Department of Energy (DOE) Joint Genome Institute's (JGI) Production Sequencing group is committed to the generation of high-quality genomic DNA sequence to support the mission areas of renewable energy generation, global carbon management, and environmental characterization and clean-up. Within the JGI's Production Sequencing group, a robust Illumina Genome Analyzer and HiSeq pipeline has been established. Optimization of these sequencer pipelines has been ongoing, with the aim of continual process improvement of the laboratory workflow, reducing operational costs and project cycle times, increasing sample throughput, and improving the overall quality of the sequence generated. A sequence QC analysis pipeline has been implemented to automatically generate read- and assembly-level quality metrics. The foremost of these optimization projects, along with sequencing and operational strategies, throughput numbers, and sequence quality results, will be presented.
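As a simple illustration of read-level quality metrics of the kind such a QC pass computes (the JGI pipeline's actual metrics and code are not described in this abstract; names and thresholds here are assumptions), the mean Phred score and the fraction of bases at Q30 or better can be tallied straight from a FASTQ file:

    # Hypothetical read-level QC: mean Phred quality and fraction >= Q30 from FASTQ.
    def fastq_quality_metrics(path, offset=33):      # offset 33 = Sanger/Illumina 1.8+
        total_q = q30 = bases = 0
        with open(path) as fh:
            for i, line in enumerate(fh):
                if i % 4 == 3:                       # 4th line of each record: qualities
                    quals = [ord(c) - offset for c in line.strip()]
                    total_q += sum(quals)
                    q30 += sum(q >= 30 for q in quals)
                    bases += len(quals)
        return total_q / bases, q30 / bases          # (mean Q, fraction >= Q30)

    # Example: mean_q, frac_q30 = fastq_quality_metrics("reads.fastq")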
Photometer Performance Assessment in TESS SPOC Pipeline
NASA Astrophysics Data System (ADS)
Li, Jie; Caldwell, Douglas A.; Jenkins, Jon Michael; Twicken, Joseph D.; Wohler, Bill; Chen, Xiaolan; Rose, Mark; TESS Science Processing Operations Center
2018-06-01
This poster describes the Photometer Performance Assessment (PPA) software component in the Transiting Exoplanet Survey Satellite (TESS) Science Processing Operations Center (SPOC) pipeline, which was developed from the Kepler science pipeline. The PPA component performs two tasks: the first is to assess the health and performance of the instrument based on the science data sets collected during each observation sector, identifying out-of-bounds conditions and generating alerts. The second is to combine the astrometric data collected for each CCD readout channel to construct a high-fidelity record of the pointing history for each of the 4 cameras and an attitude solution for the TESS spacecraft for each 2-min data collection interval. PPA is implemented with multiple pipeline modules: PPA Metrics Determination (PMD), PMD Aggregator (PAG), and PPA Attitude Determination (PAD). The TESS Mission is funded by NASA's Science Mission Directorate. The SPOC is managed and operated by NASA Ames Research Center.
A Model for Oil-Gas Pipelines Cost Prediction Based on a Data Mining Process
NASA Astrophysics Data System (ADS)
Batzias, Fragiskos A.; Spanidis, Phillip-Mark P.
2009-08-01
This paper addresses the problems associated with the cost estimation of oil/gas pipelines during the elaboration of feasibility assessments. Techno-economic parameters, i.e., cost, length, and diameter, are critical for such studies at the preliminary design stage. A methodology for the development of a cost prediction model based on a Data Mining (DM) process is proposed. The design and implementation of a Knowledge Base (KB), maintaining data collected from various disciplines of the pipeline industry, are presented. The formulation of a cost prediction equation is demonstrated by applying multiple regression analysis to data sets extracted from the KB. Following the proposed methodology, a learning context is developed inductively as background pipeline data are acquired, grouped, and stored in the KB; a linear regression model then provides statistically sound results useful to project managers and decision makers.
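The regression step can be pictured with a minimal sketch: fit cost against length and diameter by ordinary least squares. The numbers below are invented placeholders, not data from the paper's Knowledge Base:

    import numpy as np

    length = np.array([120.0, 300.0, 85.0, 410.0, 220.0])   # km (hypothetical)
    diameter = np.array([24.0, 36.0, 16.0, 40.0, 30.0])     # inches (hypothetical)
    cost = np.array([95.0, 310.0, 52.0, 470.0, 205.0])      # M$ (hypothetical)

    # Design matrix with an intercept column; solve the least-squares fit.
    X = np.column_stack([np.ones_like(length), length, diameter])
    beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
    print("cost ~ %.1f + %.3f*length + %.2f*diameter" % tuple(beta))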
Study on Resources Assessment of Coal Seams covered by Long-Distance Oil & Gas Pipelines
NASA Astrophysics Data System (ADS)
Han, Bing; Fu, Qiang; Pan, Wei; Hou, Hanfang
2018-01-01
The assessment of mineral resources covered by construction projects plays an important role in reducing the covering of important mineral resources and ensuring the smooth implementation of construction projects. Taking a planned long-distance gas pipeline as an example, the assessment method and principles for coal resources covered by linear projects are introduced. The areas covered by multiple coal seams are determined according to the linear projection method, and the resources covered by pipelines directly and indirectly are estimated using an area-segmentation method on the basis of the original blocks. The research results can provide references for route optimization of projects and compensation for mining rights.
MIEC-SVM: automated pipeline for protein peptide/ligand interaction prediction.
Li, Nan; Ainsworth, Richard I; Wu, Meixin; Ding, Bo; Wang, Wei
2016-03-15
MIEC-SVM is a structure-based method for predicting protein recognition specificity. Here, we present an automated MIEC-SVM pipeline providing an integrated and user-friendly workflow for construction and application of the MIEC-SVM models. This pipeline can handle standard amino acids and those with post-translational modifications (PTMs) or small molecules. Moreover, multi-threading and support for Sun Grid Engine (SGE) are implemented to significantly boost the computational efficiency. The program is available at http://wanglab.ucsd.edu/MIEC-SVM. Contact: wei-wang@ucsd.edu. Supplementary data are available at Bioinformatics online.
IDCDACS: IDC's Distributed Application Control System
NASA Astrophysics Data System (ADS)
Ertl, Martin; Boresch, Alexander; Kianička, Ján; Sudakov, Alexander; Tomuta, Elena
2015-04-01
The Preparatory Commission for the CTBTO is an international organization based in Vienna, Austria. Its mission is to establish a global verification regime to monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), which bans all nuclear explosions. For this purpose, time-series data from a global network of seismic, hydro-acoustic and infrasound (SHI) sensors are transmitted to the International Data Centre (IDC) in Vienna in near-real-time, where they are processed to locate events that may be nuclear explosions. We redesigned the distributed application control system that glues together the various components of the automatic waveform data processing system at the IDC (IDCDACS). Our highly scalable solution preserves the existing architecture of the IDC processing system, which has proved successful over many years of operational use, but replaces proprietary components with open-source solutions and custom-developed software. Existing code was refactored and extended to obtain a reusable software framework that is flexibly adaptable to different types of processing workflows. Automatic data processing is organized in series of self-contained processing steps, each series being referred to as a processing pipeline. Pipelines process data by time intervals, i.e., the time-series data received from monitoring stations are organized in segments based on the time when the data were recorded. So-called data monitor applications queue the data for processing in each pipeline based on specific conditions, e.g., data availability, elapsed time, or completion states of preceding processing pipelines. IDCDACS consists of a configurable number of distributed monitoring and controlling processes, a message broker, and a relational database. All processes communicate through message queues hosted on the message broker. Persistent state information is stored in the database. A configurable processing controller instantiates and monitors all data processing applications. Because the components are decoupled by message queues, the system is highly versatile and failure tolerant. The implementation utilizes the RabbitMQ open-source messaging platform, which is based upon the Advanced Message Queuing Protocol (AMQP), an on-the-wire protocol (like HTTP) and open industry standard. IDCDACS uses the high-availability capabilities provided by RabbitMQ and is equipped with failure-recovery features to survive network and server outages. It is implemented in C and Python and is operated in a Linux environment at the IDC. Although IDCDACS was specifically designed for the existing IDC processing system, its architecture is generic and reusable for different automatic processing workflows, e.g., similar to those described in (Olivieri et al. 2012, Kværna et al. 2012). Major advantages are its independence of the specific data processing applications used and the possibility of reconfiguring IDCDACS for different types of processing, data, and trigger logic. A possible future development would be to use the IDCDACS framework in different scientific domains, e.g., for processing of Earth observation satellite data, extending the one-dimensional time-series intervals to spatio-temporal data cubes. REFERENCES: Olivieri, M., J. Clinton (2012), An almost fair comparison between Earthworm and SeisComp3, Seismological Research Letters, 83(4), 720-727. Kværna, T., S. J. Gibbons, D. B. Harris, D. A. Dodge (2012), Adapting pipeline architectures to track developing aftershock sequences and recurrent explosions, Proceedings of the 2012 Monitoring Research Review: Ground-Based Nuclear Explosion Monitoring Technologies, 776-785.
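The queue-decoupled pattern at the heart of IDCDACS can be sketched with RabbitMQ's Python client, pika: a data monitor publishes a time interval to a durable queue, and a pipeline worker consumes it. The queue name and message format here are illustrative assumptions, not IDCDACS internals:

    import json
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.queue_declare(queue="pipeline.intervals", durable=True)

    # Data-monitor side: enqueue an interval once its trigger condition holds.
    ch.basic_publish(exchange="", routing_key="pipeline.intervals",
                     body=json.dumps({"start": "2015-04-01T00:00Z", "len_s": 600}))

    # Worker side: consume intervals, acknowledge only after processing.
    def on_interval(chan, method, properties, body):
        interval = json.loads(body)
        print("processing", interval)               # run the pipeline step here
        chan.basic_ack(delivery_tag=method.delivery_tag)

    ch.basic_consume(queue="pipeline.intervals", on_message_callback=on_interval)
    ch.start_consuming()                            # blocks; interrupt to stop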
Deliverability on the interstate natural gas pipeline system
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-05-01
Deliverability on the Interstate Natural Gas Pipeline System examines the capability of the national pipeline grid to transport natural gas to various US markets. The report quantifies the capacity levels and utilization rates of major interstate pipeline companies in 1996 and the changes since 1990, as well as changes in markets and end-use consumption patterns. It also discusses the effects of proposed capacity expansions on capacity levels. The report consists of five chapters, several appendices, and a glossary. Chapter 1 discusses some of the operational and regulatory features of the US interstate pipeline system and how they affect overall system design, system utilization, and capacity expansions. Chapter 2 looks at how the exploration, development, and production of natural gas within North America is linked to the national pipeline grid. Chapter 3 examines the capability of the interstate natural gas pipeline network to link production areas to market areas, on the basis of capacity and usage levels along 10 corridors. The chapter also examines capacity expansions that have occurred since 1990 along each corridor and the potential impact of proposed new capacity. Chapter 4 discusses the last step in the transportation chain, that is, deliverability to the ultimate end user. Flow patterns into and out of each market region are discussed, as well as the movement of natural gas between States in each region. Chapter 5 examines how shippers reserve interstate pipeline capacity in the current transportation marketplace and how pipeline companies are handling the secondary market for short-term unused capacity. Four appendices provide supporting data and additional detail on the methodology used to estimate capacity. 32 figs., 15 tabs.
Design and Execution of make-like, distributed Analyses based on Spotify’s Pipelining Package Luigi
NASA Astrophysics Data System (ADS)
Erdmann, M.; Fischer, B.; Fischer, R.; Rieger, M.
2017-10-01
In high-energy particle physics, workflow management systems are primarily used as tailored solutions in dedicated areas such as Monte Carlo production. However, physicists performing data analyses are usually required to steer their individual workflows manually, which is time-consuming and often leads to undocumented relations between particular workloads. We present a generic analysis design pattern that copes with the sophisticated demands of end-to-end HEP analyses and provides a make-like execution system. It is based on the open-source pipelining package Luigi, which was developed at Spotify and enables the definition of arbitrary workloads, so-called Tasks, and the dependencies between them in a lightweight and scalable structure. Further features are multi-user support, automated dependency resolution and error handling, central scheduling, and status visualization on the web. In addition to the already built-in features for remote jobs and file systems like Hadoop and HDFS, we added support for WLCG infrastructure such as LSF and CREAM job submission, as well as remote file access through the Grid File Access Library. Furthermore, we implemented automated resubmission functionality, software sandboxing, and a command-line interface with auto-completion for a convenient working environment. For the implementation of a $t\bar{t}H$ cross-section measurement, we created a generic Python interface that provides programmatic access to all external information such as datasets, physics processes, statistical models, and additional files and values. In summary, the setup enables the execution of the entire analysis in a parallelized and distributed fashion with a single command.
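A minimal Luigi sketch conveys the make-like pattern described above: each Task declares its dependencies and output target, and Luigi resolves the graph and skips work whose targets already exist. The task names and file targets are illustrative, not the authors' analysis code:

    import luigi

    class SelectEvents(luigi.Task):
        dataset = luigi.Parameter()

        def output(self):
            return luigi.LocalTarget("selected_%s.txt" % self.dataset)

        def run(self):
            with self.output().open("w") as f:
                f.write("selected events for %s\n" % self.dataset)

    class MeasureCrossSection(luigi.Task):
        def requires(self):
            return [SelectEvents(dataset=d) for d in ("data", "mc")]

        def output(self):
            return luigi.LocalTarget("xsec.txt")

        def run(self):
            with self.output().open("w") as f:
                f.write("combine %d selections here\n" % len(self.input()))

    if __name__ == "__main__":
        luigi.build([MeasureCrossSection()], local_scheduler=True)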
A method for real-time implementation of HOG feature extraction
NASA Astrophysics Data System (ADS)
Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai
2011-08-01
Histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, the computation of HOG feature extraction is unsuitable for hardware implementation because it includes complicated operations. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction based on FPGA are proposed. The main principle is as follows: first, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Second, the arctangent and square-root calculations were simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that HOG extraction can be completed within one pixel period by these computing units.
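In software form, the core HOG steps the paper maps to hardware look roughly like the NumPy sketch below (the FPGA design replaces the arctangent and square root with simplified operations; this version keeps them for clarity):

    import numpy as np

    def hog_cell_histograms(img, cell=8, bins=9):
        """Per-cell orientation histograms from centered-difference gradients."""
        img = img.astype(float)
        gx = np.zeros_like(img)
        gy = np.zeros_like(img)
        gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
        gy[1:-1, :] = img[2:, :] - img[:-2, :]
        mag = np.hypot(gx, gy)                       # square root, simplified on FPGA
        ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # arctangent, likewise simplified
        h, w = img.shape
        hist = np.zeros((h // cell, w // cell, bins))
        idx = (ang / (180 / bins)).astype(int) % bins
        for i in range(h // cell * cell):
            for j in range(w // cell * cell):
                hist[i // cell, j // cell, idx[i, j]] += mag[i, j]
        return hist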
eHive: An Artificial Intelligence workflow system for genomic analysis
2010-01-01
Background The Ensembl project produces updates to its comparative genomics resources with each of its several releases per year. During each release cycle approximately two weeks are allocated to generate all the genomic alignments and the protein homology predictions. The number of calculations required for this task grows approximately quadratically with the number of species. We currently support 50 species in Ensembl and we expect the number to continue to grow in the future. Results We present eHive, a new fault tolerant distributed processing system initially designed to support comparative genomic analysis, based on blackboard systems, network distributed autonomous agents, dataflow graphs and block-branch diagrams. In the eHive system a MySQL database serves as the central blackboard and the autonomous agent, a Perl script, queries the system and runs jobs as required. The system allows us to define dataflow and branching rules to suit all our production pipelines. We describe the implementation of three pipelines: (1) pairwise whole genome alignments, (2) multiple whole genome alignments and (3) gene trees with protein homology inference. Finally, we show the efficiency of the system in real case scenarios. Conclusions eHive allows us to produce computationally demanding results in a reliable and efficient way with minimal supervision and high throughput. Further documentation is available at: http://www.ensembl.org/info/docs/eHive/. PMID:20459813
NASA Education Implementation Plan 2015-2017
ERIC Educational Resources Information Center
National Aeronautics and Space Administration, 2015
2015-01-01
The NASA Education Implementation Plan (NEIP) provides an understanding of the role of NASA in advancing the nation's STEM education and workforce pipeline. The document outlines the roles and responsibilities that NASA Education has in approaching and achieving the agency's and administration's strategic goals in STEM Education. The specific…
Neuroimaging Study Designs, Computational Analyses and Data Provenance Using the LONI Pipeline
Dinov, Ivo; Lozev, Kamen; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Zamanyan, Alen; Chakrapani, Shruthi; Van Horn, John; Parker, D. Stott; Magsipoc, Rico; Leung, Kelvin; Gutman, Boris; Woods, Roger; Toga, Arthur
2010-01-01
Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges—management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include distributed grid-enabled infrastructure, virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu. PMID:20927408
A Concept for the One Degree Imager (ODI) Data Reduction Pipeline and Archiving System
NASA Astrophysics Data System (ADS)
Knezek, Patricia; Stobie, B.; Michael, S.; Valdes, F.; Marru, S.; Henschel, R.; Pierce, M.
2010-05-01
The One Degree Imager (ODI), currently being built by the WIYN Observatory, will provide tremendous possibilities for conducting diverse scientific programs. ODI will be a complex instrument, using non-conventional Orthogonal Transfer Array (OTA) detectors. Due to its large field of view, small pixel size, use of OTA technology, and expected frequent use, ODI will produce vast amounts of astronomical data. If ODI is to achieve its full potential, a data reduction pipeline must be developed. Long-term archiving must also be incorporated into the pipeline system to ensure the continued value of ODI data. This paper presents a concept for an ODI data reduction pipeline and archiving system. To limit costs and development time, our plan leverages existing software and hardware, including existing pipeline software, Science Gateways, Computational Grid & Cloud Technology, Indiana University's Data Capacitor and Massive Data Storage System, and TeraGrid compute resources. Existing pipeline software will be augmented to add functionality required to meet challenges specific to ODI, enhance end-user control, and enable the execution of the pipeline on grid resources including national grid resources such as the TeraGrid and Open Science Grid. The planned system offers consistent standard reductions and end-user flexibility when working with images beyond the initial instrument signature removal. It also gives end-users access to computational and storage resources far beyond what are typically available at most institutions. Overall, the proposed system provides a wide array of software tools and the necessary hardware resources to use them effectively.
WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves
NASA Astrophysics Data System (ADS)
Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise
2017-10-01
Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances in both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Even if simple in theory, many details are difficult for a practitioner to master, so the implementation of a sea-wave 3D reconstruction pipeline is generally considered a complex task. For instance, camera calibration, reliable stereo feature matching, and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an open-source stereo processing pipeline for sea-wave 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig so that no delicate calibration has to be performed in the field. It implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library and, lastly, it includes a set of filtering techniques, both on the disparity map and on the produced point cloud, to remove the vast majority of the erroneous points that naturally arise when analyzing the optically complex nature of the water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step by step and demonstrated on real datasets acquired at sea.
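The dense-stereo core that WASS builds on can be evoked with a few OpenCV calls; this generic sketch omits WASS's automatic extrinsic calibration, sea-plane estimation, and filtering, and the filenames and matcher settings are placeholders:

    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Semi-global block matching; disparities come back as 16x fixed-point.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disp = matcher.compute(left, right).astype(np.float32) / 16.0

    # Given a 4x4 reprojection matrix Q from stereo calibration, the disparity
    # map lifts to a 3D point cloud: points = cv2.reprojectImageTo3D(disp, Q)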
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pineda Porras, Omar Andrey; Ordaz, Mario
2009-01-01
Though Differential Ground Subsidence (DGS) impacts the seismic response of segmented buried pipelines, increasing their vulnerability, fragility formulations to estimate repair rates under such conditions are not available in the literature. Physical models to estimate pipeline seismic damage considering other cases of permanent ground subsidence (e.g., faulting, tectonic uplift, liquefaction, and landslides) have been extensively reported; this is not the case for DGS. The refined study of two important phenomena in Mexico City - the 1985 Michoacan earthquake scenario and the sinking of the city due to ground subsidence - has contributed to the analysis of the interrelation of pipeline damage, ground motion intensity, and DGS; from the analysis of the 48-inch pipeline network of Mexico City's water system, fragility formulations for segmented buried pipeline systems for two DGS levels are proposed. The novel parameter PGV^2/PGA, where PGV is peak ground velocity and PGA is peak ground acceleration, is used as the seismic parameter in these formulations, since previous studies have shown it correlates better with pipeline damage than PGV alone. By comparing the proposed fragilities, it is concluded that a change in the DGS level (from Low-Medium to High) could increase the pipeline repair rates (number of repairs per kilometer) by factors ranging from 1.3 to 2.0; the higher the seismic intensity, the lower the factor.
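Schematically, the seismic parameter itself is straightforward to compute, and the fragility is some fitted function of it. The power-law form and coefficients below are placeholders to show the usage, not the paper's fitted values:

    # Illustrative only: the paper's fitted fragility coefficients are not
    # reproduced here; a and b are invented placeholders.
    def repair_rate(pgv_cm_s, pga_cm_s2, a=0.05, b=1.0):
        param = pgv_cm_s**2 / pga_cm_s2      # PGV^2/PGA, units of cm
        return a * param**b                  # repairs per km (hypothetical form)

    print(repair_rate(30.0, 150.0))          # PGV = 30 cm/s, PGA = 150 cm/s^2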
Arneson, Douglas; Bhattacharya, Anindya; Shu, Le; Mäkinen, Ville-Petteri; Yang, Xia
2016-09-09
Human diseases are commonly the result of multidimensional changes at molecular, cellular, and systemic levels. Recent advances in genomic technologies have enabled an outpouring of omics datasets that capture these changes. However, separate analyses of these various data only provide fragmented understanding and do not capture the holistic view of disease mechanisms. To meet the urgent need for tools that effectively integrate multiple types of omics data to derive biological insights, we have developed Mergeomics, a computational pipeline that integrates multidimensional disease association data with functional genomics and molecular networks to retrieve biological pathways, gene networks, and central regulators critical for disease development. To make the Mergeomics pipeline available to a wider research community, we have implemented an online, user-friendly web server (http://mergeomics.idre.ucla.edu/). The web server features a modular implementation of the Mergeomics pipeline with detailed tutorials. Additionally, it provides curated genomic resources, including tissue-specific expression quantitative trait loci, ENCODE functional annotations, biological pathways, and molecular networks, and offers interactive visualization of analytical results. Multiple computational tools, including Marker Dependency Filtering (MDF), Marker Set Enrichment Analysis (MSEA), Meta-MSEA, and Weighted Key Driver Analysis (wKDA), can be used separately or in flexible combinations. User-defined summary-level genomic association datasets (e.g., genetic, transcriptomic, epigenomic) related to a particular disease or phenotype can be uploaded and computed in real time to yield biologically interpretable results, which can be viewed online and downloaded for later use. Our Mergeomics web server offers researchers flexible and user-friendly tools to facilitate the integration of multidimensional data into holistic views of disease mechanisms in the form of tissue-specific key regulators, biological pathways, and gene networks.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Implementing Provisions of the Temporary Payroll Tax Cut Continuation Act of 2011 Relating to the Keystone XL Pipeline Permit. Presidential Documents, Other Presidential Documents, Memorandum of January 18, 2012, Implementing Provisions of the Temporary Payroll Tax Cut Continuation Act of 2011 Relating t...
Fang, Wai-Chi; Huang, Kuan-Ju; Chou, Chia-Ching; Chang, Jui-Chung; Cauwenberghs, Gert; Jung, Tzyy-Ping
2014-01-01
This is a proposal for an efficient very-large-scale integration (VLSI) design: a 16-channel online recursive independent component analysis (ORICA) processor ASIC for real-time EEG systems, implemented in TSMC 40 nm CMOS technology. ORICA is well suited to real-time EEG systems for separating artifacts because of its highly efficient, real-time processing features. The proposed ORICA processor is composed of an ORICA processing unit and a singular value decomposition (SVD) processing unit. Compared with previous work [1], the proposed ORICA processor has enhanced effectiveness and reduced hardware complexity by utilizing a deeper pipeline architecture, a shared arithmetic processing unit, and shared registers. 16-channel random signals containing 8-channel super-Gaussian and 8-channel sub-Gaussian components were used to analyze the dependence of the source components; the average correlation coefficient between the original source signals and the extracted ORICA signals is 0.95452. Finally, the proposed ORICA processor ASIC is implemented in TSMC 40 nm CMOS technology and consumes 15.72 mW at a 100 MHz operating frequency.
Underground pipeline laying using the pipe-in-pipe system
NASA Astrophysics Data System (ADS)
Antropova, N.; Krets, V.; Pavlov, M.
2016-09-01
The problems of resource saving and environmental safety during the installation and operation of underwater crossings are always relevant. The paper describes existing trenchless pipeline technologies, the structure of multi-channel pipelines, and the types of supporting and guiding systems. A rational design is suggested for the pipe-in-pipe system. A finite element model is presented for the most dangerous sections of the inner pipes, and the optimum distance between the roller supports is determined.
1. FIRST SECTION OF PIPELINE BETWEEN CONFLUENCE POOL AND FISH ...
1. FIRST SECTION OF PIPELINE BETWEEN CONFLUENCE POOL AND FISH SCREEN. NOTE RETAINING WALL BESIDE PIPE. VIEW TO NORTH-NORTHEAST. - Santa Ana River Hydroelectric System, Pipeline to Fish Screen, Redlands, San Bernardino County, CA
NASA Astrophysics Data System (ADS)
Dudin, S. M.; Novitskiy, D. V.
2018-05-01
The works of researchers at VNIIgaz, Giprovostokneft, Kuibyshev NIINP, the Grozny Petroleum Institute, and others are devoted to modeling heterogeneous medium flows in pipelines under laboratory conditions. Viewed objectively, the empirical relationships obtained and the calculation procedures for pipelines transporting multiphase products constitute a bank of experimental data on the problem of pipeline transportation of multiphase systems. Based on an analysis of the published works, the main design requirements were formulated for experimental installations intended to study the flow regimes of gas-liquid flows in pipelines; these requirements were taken into account by the authors when creating the experimental stand. The article describes the results of experimental studies of the flow regimes of a gas-liquid mixture in a pipeline and gives a methodological description of the experimental installation. The article also describes the software of the experimental scientific and educational stand developed with the authors' participation.
Mathematical modeling of non-stationary gas flow in gas pipeline
NASA Astrophysics Data System (ADS)
Fetisov, V. G.; Nikolaev, A. K.; Lykov, Y. V.; Duchnevich, L. N.
2018-03-01
An analysis of the operation of the gas transportation system shows that for a considerable part of the time, pipelines operate in an unsteady regime of gas flow. Pressure and flow rate vary along the length of the pipeline and over time as a result of uneven consumption and offtake, the switching of compressor units on and off, the closing of stop valves, and the emergence of emergency leaks. The operational management of such regimes is complicated by the difficulty of reconciling the operating modes of individual sections of the gas pipeline with each other, as well as with compressor stations. Identifying the causes of changes in the operating mode of the pipeline system and revealing the patterns of these changes determine the choice of its parameters. Therefore, knowledge of the laws governing the main technological parameters of gas pumping through pipelines under non-stationary flow conditions is of great practical importance.
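For reference, the one-dimensional non-stationary (isothermal) gas-flow model commonly used for such pipeline analyses can be written as follows; the paper's exact formulation may differ:

    \begin{align}
      \frac{\partial \rho}{\partial t} + \frac{\partial (\rho w)}{\partial x} &= 0, \\
      \frac{\partial (\rho w)}{\partial t} + \frac{\partial}{\partial x}\left(\rho w^{2} + p\right)
        &= -\frac{\lambda \rho w \lvert w \rvert}{2D} - \rho g \sin\alpha, \\
      p &= c^{2} \rho,
    \end{align}

where ρ is the gas density, w the flow velocity, p the pressure, λ the hydraulic friction factor, D the pipe diameter, α the pipe inclination, and c the isothermal speed of sound.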
A Pipeline for 3D Digital Optical Phenotyping Plant Root System Architecture
NASA Astrophysics Data System (ADS)
Davis, T. W.; Shaw, N. M.; Schneider, D. J.; Shaff, J. E.; Larson, B. G.; Craft, E. J.; Liu, Z.; Kochian, L. V.; Piñeros, M. A.
2017-12-01
This work presents a new pipeline for digital optical phenotyping of the root system architecture of agricultural crops. The pipeline begins with a 3D root-system imaging apparatus for hydroponically grown crop lines of interest. The apparatus acts as a self-contained darkroom, which includes an imaging tank, a motorized rotating bearing, and a digital camera. The pipeline continues with the Plant Root Imaging and Data Acquisition (PRIDA) software, which is responsible for image capture and storage. Once root images have been captured, image post-processing is performed using the Plant Root Imaging Analysis (PRIA) command-line tool, which extracts root pixels from color images. Following this binarization of the digital root images, 3D trait characterization is performed using the next-generation RootReader3D software. RootReader3D measures global root system architecture traits, such as total root system volume and length, total number of roots, and maximum rooting depth and width. While designed to work together, the four stages of the phenotyping pipeline are modular and stand-alone, which provides flexibility and adaptability for various research endeavors.
Study on the flow in the pipelines of the support system of circulating fluidized bed
NASA Astrophysics Data System (ADS)
Meng, L.; Yang, J.; Zhou, L. J.; Wang, Z. W.; Zhuang, X. H.
2013-12-01
In the support system of a Circulating Fluidized Bed (hereafter CFB) in a thermal power plant, the primary-air pipelines transport cold air to the boiler, which is important for control and combustion performance. The pipeline design strongly affects the energy loss of the system and, accordingly, the thermal power plant's economic benefits and production environment. In this paper, a three-dimensional numerical simulation of the internal flow field of the pipelines of a thermal power plant is carried out. First, three turbulence models were compared; the results showed that the SST k-ω model converged better and that its predicted energy losses were closer to the experimental results. The influence of the pipeline design on the flow characteristics is analysed, and optimized pipeline designs are then proposed according to the energy loss distribution of the flow field, in order to reduce energy loss and improve the efficiency of the duct system. The optimization proved effective: pressure loss is reduced by about 36%.
A midas plugin to enable construction of reproducible web-based image processing pipelines
Grauer, Michael; Reynolds, Patrick; Hoogstoel, Marion; Budin, Francois; Styner, Martin A.; Oguz, Ipek
2013-01-01
Image processing is an important quantitative technique for neuroscience researchers, but difficult for those who lack experience in the field. In this paper we present a web-based platform that allows an expert to create a brain image processing pipeline, enabling execution of that pipeline even by those biomedical researchers with limited image processing knowledge. These tools are implemented as a plugin for Midas, an open-source toolkit for creating web based scientific data storage and processing platforms. Using this plugin, an image processing expert can construct a pipeline, create a web-based User Interface, manage jobs, and visualize intermediate results. Pipelines are executed on a grid computing platform using BatchMake and HTCondor. This represents a new capability for biomedical researchers and offers an innovative platform for scientific collaboration. Current tools work well, but can be inaccessible for those lacking image processing expertise. Using this plugin, researchers in collaboration with image processing experts can create workflows with reasonable default settings and streamlined user interfaces, and data can be processed easily from a lab environment without the need for a powerful desktop computer. This platform allows simplified troubleshooting, centralized maintenance, and easy data sharing with collaborators. These capabilities enable reproducible science by sharing datasets and processing pipelines between collaborators. In this paper, we present a description of this innovative Midas plugin, along with results obtained from building and executing several ITK based image processing workflows for diffusion weighted MRI (DW MRI) of rodent brain images, as well as recommendations for building automated image processing pipelines. Although the particular image processing pipelines developed were focused on rodent brain MRI, the presented plugin can be used to support any executable or script-based pipeline. PMID:24416016
NASA Astrophysics Data System (ADS)
Wen, Shipeng; Xu, Jishang; Hu, Guanghai; Dong, Ping; Shen, Hong
2015-08-01
The safety of submarine pipelines is largely influenced by free spans and corrosion. Previous studies on free spans caused by seabed scour are mainly based on a stable environment, where the background seabed scour is in equilibrium and the soil is homogeneous. To study the effects of background erosion on the free span development of subsea pipelines, a submarine pipeline located at the abandoned Yellow River subaqueous delta lobe was investigated with an integrated surveying system comprising a multibeam bathymetric system, a dual-frequency side-scan sonar, a high-resolution sub-bottom profiler, and a Magnetic Flux Leakage (MFL) sensor. We found that seabed homogeneity has a great influence on the free span development of the pipeline. More specifically, for homogeneous background scour, the morphology of the scour hole below the pipeline is quite similar to that without background scour, whereas for inhomogeneous background scour, the nature of spanning depends mainly on the evolution of the seabed morphology near the pipeline. The MFL detection results also reveal a possible connection between long free spans and accelerated corrosion of the pipeline.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-16
... maps, security plans, etc.); and Actual or suspected cyber-attacks that could impact pipeline... suspected attacks on pipeline systems, facilities, or assets; Bomb threats or weapons of mass destruction...
49 CFR 193.2019 - Mobile and temporary LNG facilities.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Section 193.2019 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY... during gas pipeline systems repair/alteration, or for other short term applications need not meet the...
49 CFR 193.2513 - Transfer procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY LIQUEFIED NATURAL GAS FACILITIES... transfer position; and (7) Verify that transfers into a pipeline system will not exceed the pressure or...
Magnetic pipeline for coal and oil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knolle, E.
1998-07-01
A 1994 analysis of the recorded costs of the Alaska oil pipeline, in a paper entitled 'Maglev Crude Oil Pipeline' (NASA CP-3247, pp. 671-684), concluded that, had the Knolle Magnetrans pipeline technology been available and used, some $10 million per day in transportation costs could have been saved over the 20 years of the Alaska oil pipeline's existence. This more than 800-mile-long pipeline requires about 500 horsepower per mile of pumping power, which, together with the pipeline's capital cost, consumes about one-third of the energy value of the pumped oil. This does not include the cost of getting the oil out of the ground. Maglev technology outperforms conventional pipelines because magnetically levitating the oil into contact-free suspension eliminates drag-causing adhesion. In addition, by using permanent magnets in repulsion, suspension is achieved without expending energy. The pumped oil's adhesion to the inside of the pipe also limits its speed. In the case of the Alaska pipeline, the speed is limited to about 7 miles per hour, which, with its 48-inch pipe diameter and 1200 psi pressure, moves about 2 million barrels per day. The maglev system, as developed by Knolle Magnetrans, would transport oil in magnetically suspended sealed containers and, thus free of adhesion, at speeds 10 to 20 times higher. Furthermore, the diameter of the levitated containers can be made smaller for the same capacity, which makes the construction of the maglev system light and inexpensive. There are similar advantages when using maglev technology to transport coal. A maglev system also has advantages over railroads in the mountainous regions where coal is primarily mined: a maglev pipeline can travel all year and in all weather in a straight line to the end user, and can climb steep hills without much difficulty, whereas railroads must follow difficult, circuitous routes.
The force on the flex: Global parallelism and portability
NASA Technical Reports Server (NTRS)
Jordan, H. F.
1986-01-01
A parallel programming methodology, called the force, supports the construction of programs to be executed in parallel by an unspecified, but potentially large, number of processes. The methodology was originally developed on a pipelined, shared-memory multiprocessor, the Denelcor HEP, and embodies the primitive operations of the force in a set of macros which expand into multiprocessor Fortran code. A small set of primitives is sufficient to write large parallel programs, and the system has been used to produce 10,000-line programs in computational fluid dynamics. The level of complexity of the force primitives is intermediate: high enough to mask detailed architectural differences between multiprocessors, but low enough to give the user control over performance. The system is being ported to a medium-scale multiprocessor, the Flex/32, which is a 20-processor system with a mixture of shared and local memory. Memory organization and the type of processor synchronization supported by the hardware on the two machines lead to some differences in efficient implementations of the force primitives, but the user interface remains the same. An initial implementation was done by retargeting the macros to Flexible Computer Corporation's ConCurrent C language. Subsequently, the macros were changed to produce directly the system calls which form the basis for ConCurrent C. The implementation of the Fortran-based system is in step with Flexible Computer Corporation's implementation of a Fortran system in the parallel environment.
Agent-based re-engineering of ErbB signaling: a modeling pipeline for integrative systems biology.
Das, Arya A; Ajayakumar Darsana, T; Jacob, Elizabeth
2017-03-01
Experiments in systems biology are generally supported by a computational model which quantitatively estimates the parameters of the system by finding the best fit to the experiment. Mathematical models have proved to be successful in reverse engineering the system. The data generated are interpreted to understand the dynamics of the underlying phenomena. The question we have sought to answer is: is it possible to use an agent-based approach to re-engineer a biological process, making use of the available knowledge from experimental and modelling efforts? Can the bottom-up approach benefit from the top-down exercise so as to create an integrated modelling formalism for systems biology? We propose a modelling pipeline that learns from the data given by reverse engineering and uses it for re-engineering the system, to carry out in-silico experiments. A mathematical model that quantitatively predicts co-expression of EGFR-HER2 receptors in activation and trafficking has been taken for this study. The pipeline architecture takes cues from the population model, which gives the rates of biochemical reactions, to formulate knowledge-based rules for the particle model. Agent-based simulations using these rules support the existing facts on EGFR-HER2 dynamics. We conclude that re-engineering models, built using the results of reverse engineering, open up the possibility of harnessing the wealth of data which now lies scattered in the literature. Virtual experiments could then become more realistic when empowered with the findings of empirical cell biology and modelling studies. Implemented on the Agent Modelling Framework developed in-house. C++ code templates are available in the Supplementary material. Contact: liz.csir@gmail.com. Supplementary data are available at Bioinformatics online.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rieber, M.; Soo, S.L.
1977-08-01
A coal slurry pipeline system requires that the coal go through a number of processing stages before it is used by the power plant. Once mined, the coal is delivered to a preparation plant where it is pulverized to sizes between 18 and 325 mesh and then suspended in about an equal weight of water. This 50-50 slurry mixture has a consistency approximating toothpaste. It is pushed through the pipeline via electric pumping stations 70 to 100 miles apart. Flow velocity through the line must be maintained within a narrow range. For example, if a 3.5 mph design is used at 5 mph, the system must be able to withstand double the horsepower, peak pressure, and wear. A minimum flowrate must be maintained to avoid particle settling and plugging. However, in general, once a pipeline system has been designed, because of economic considerations on the one hand and design limits on the other, the flowrate is rather inflexible. Pipelines that have a slowly moving throughput and a water carrier may be subject to freezing in northern areas during periods of severe cold. One of the problems associated with slurry pipeline analyses is the lack of operating experience.
A quantitative non-destructive residual stress assessment tool for pipelines.
DOT National Transportation Integrated Search
2014-09-01
G2MT successfully demonstrated the eStress system, a powerful new nondestructive evaluation system for analyzing through-thickness residual stresses in mechanically damaged areas of steel pipelines. The eStress system is designed to help pipe...
NASA Technical Reports Server (NTRS)
Wheatley, Thomas E.; Michaloski, John L.; Lumia, Ronald
1989-01-01
Analysis of a robot control system leads to a broad range of processing requirements. One fundamental requirement of a robot control system is the necessity of a microcomputer system in order to provide sufficient processing capability. The use of multiple processors in a parallel architecture is beneficial for a number of reasons, including better cost performance, modular growth, increased reliability through replication, and flexibility for testing alternate control strategies via different partitioning. A survey of the progression from low level control synchronizing primitives to higher level communication tools is presented. The system communication and control mechanisms of existing robot control systems are compared to the hierarchical control model. The impact of this design methodology on the current robot control systems is explored.
New Yumurtalik to Kirikkale crude-oil pipeline would boost Turkish industrial area
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonnet, G.
1982-12-13
Plans for a crude oil pipeline linking the 101 cm (40 in.) Iraq to Turkey pipeline terminal located in Yumurtalik to the site of a future refinery to be situated near Ankara are described. Designed for fully unattended operation, the "brain" of the system will be a telecom/telecontrol telemetry system. Support for data information exchanged between the master and local outstations will be a microwave radio carrier system, also permitting the transfer of telephone and telegraph traffic as well as facsimiles.
VizieR Online Data Catalog: LAMOST candidate members of star clusters (Xiang+, 2015)
NASA Astrophysics Data System (ADS)
Xiang, M. S.; Liu, X. W.; Yuan, H. B.; Huang, Y.; Huo, Z. Y.; Zhang, H. W.; Chen, B. Q.; Zhang, H. H.; Sun, N. C.; Wang, C.; Zhao, Y. H.; Shi, J. R.; Luo, A. L.; Li, G. P.; Wu, Y.; Bai, Z. R.; Zhang, Y.; Hou, Y. H.; Yuan, H. L.; Li, G. W.; Wei, Z.
2015-08-01
In this work, we describe the algorithms and implementation of LSP3, the LAMOST Stellar Parameter Pipeline at Peking University, a pipeline developed to determine the stellar parameters (radial velocity Vr, effective temperature Teff, surface gravity logg and metallicity [Fe/H]) from LAMOST spectra based on a template-matching technique. Following the data policy of LAMOST surveys, the data as well as the LSP3 pipeline will be publicly released as value-added products of the first data release of LAMOST (LAMOST DR1; Bai et al., 2015, A&A submitted), currently scheduled for 2014 December, and can be accessed via http://lamost973.pku.edu.cn/site/node/4, along with a description file. (1 data file).
Development of the updated system of city underground pipelines based on Visual Studio
NASA Astrophysics Data System (ADS)
Zhang, Jianxiong; Zhu, Yun; Li, Xiangdong
2009-10-01
Our city operates an integrated pipeline network management system built on ArcGIS Engine 9.1 as the underlying development platform, with Oracle9i as the basic database for data storage. In this system, ArcGIS SDE 9.1 serves as the spatial data engine, and the management software was developed with the Visual Studio visual development tools. Because the system's pipeline update function suffered from slow updates and occasional data loss, and to ensure that the underground pipeline data can be updated conveniently and frequently in real time while preserving its currency and integrity, we developed and added a new update module to the system. The module provides powerful data update functions, including data input and output and rapid bulk updates. The new module was likewise developed with the Visual Studio visual development tools and uses Access as its underlying database. Graphics can be edited in AutoCAD, and the database is updated through a link between the graphics and the system. Practice shows that the update module has good compatibility with the original system and delivers reliable, highly efficient database updates.
Risk Analysis using Corrosion Rate Parameter on Gas Transmission Pipeline
NASA Astrophysics Data System (ADS)
Sasikirono, B.; Kim, S. J.; Haryadi, G. D.; Huda, A.
2017-05-01
In the oil and gas industry, the pipeline is a major component of the oil and gas transmission and distribution process, and distribution routes sometimes carry a pipeline across many types of environmental conditions. A pipeline should therefore operate safely so that it does not harm the surrounding environment. Corrosion is still a major cause of failure in equipment components of a production facility; in pipeline systems, corrosion can cause failures in the wall and damage to the pipeline, so pipeline systems require care and periodic inspection. Every production facility in an industry has a level of risk for damage, determined by the likelihood and the consequences of that damage. The purpose of this research is to analyze the risk level of a 20-inch natural gas transmission pipeline using semi-quantitative risk-based inspection per API 581, which combines the likelihood of failure with the consequences of failure for an equipment component; the result is then used to determine the next inspection plan. Nine pipeline components were observed, including the straight-pipe inlet, connection tee, and straight-pipe outlet. The risk assessment of the nine pipeline components is presented in a risk matrix, and the components were found to lie at medium risk levels. The failure mechanism considered in this research is thinning. From the corrosion rate calculation, the remaining age of each pipeline component can be obtained, so the remaining lifetime of each component is known; the calculated remaining lifetime varies from component to component. The next step is planning the inspection of pipeline components using external NDT methods.
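A minimal sketch of the thinning arithmetic behind such remaining-life figures, assuming a constant corrosion rate between two wall-thickness surveys; the full API 581 semi-quantitative procedure involves considerably more than this, and the numbers below are illustrative:

    def corrosion_rate(t_prev, t_now, years):
        """Average thinning rate (mm/yr) from two wall-thickness surveys."""
        return (t_prev - t_now) / years

    def remaining_life(t_now, t_min, rate):
        """Years until the wall reaches its minimum required thickness."""
        if rate <= 0:
            return float("inf")
        return (t_now - t_min) / rate

    # Example: 9.5 mm measured now vs 10.0 mm five years ago, 6.0 mm minimum.
    r = corrosion_rate(10.0, 9.5, 5.0)   # 0.1 mm/yr
    print(remaining_life(9.5, 6.0, r))   # 35.0 years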
Hybrid Laser/GMAW of High Strength Steel Gas Transmission Pipelines
DOT National Transportation Integrated Search
2008-07-01
Pipelines will be an integral part of our energy distribution systems for the foreseeable future. Operators are currently considering the installation of tens of billions of dollars of pipeline infrastructure. In a number of cases, the cost of export...
Post-SM4 Sensitivity Calibration of the STIS Echelle Modes
NASA Astrophysics Data System (ADS)
Bostroem, K. Azalee; Aloisi, A.; Bohlin, R.; Hodge, P.; Proffitt, C.
2012-01-01
On-orbit sensitivity curves for all echelle modes were derived for post-Servicing Mission 4 data using observations of the DA white dwarf G191-B2B. Additionally, new echelle ripple tables and grating-dependent bad pixel tables were created for the FUV and NUV MAMA. We review the procedures used to derive the adopted throughputs and implement them in the pipeline, as well as the motivation for the modification of the additional reference files and pipeline procedures.
High-speed reconstruction of compressed images
NASA Astrophysics Data System (ADS)
Cox, Jerome R., Jr.; Moore, Stephen M.
1990-07-01
A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system, thereby allowing contrast control of images at video rates. Results for 50 CR chest images are described showing that error-free reconstruction of the original 10-bit CR images can be achieved.
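Because reconstruction sits between an 8-bit display buffer and the video system, it can be realised as a 256-entry lookup table, cheap enough for video-rate hardware. A sketch assuming a square-root companding curve as the 10-bit-to-8-bit mapping; the paper's actual mapping is not given here, so this curve is a stand-in:

    import numpy as np

    def compress10to8(img10):
        # Hypothetical square-root companding: 10-bit values -> 8-bit codes.
        return np.round(np.sqrt(img10.astype(np.float64)) * 255.0
                        / np.sqrt(1023.0)).astype(np.uint8)

    # Reconstruction is a single 256-entry lookup table applied per pixel.
    lut = np.round((np.arange(256) / 255.0 * np.sqrt(1023.0)) ** 2).astype(np.uint16)

    def reconstruct(img8):
        return lut[img8]

    img10 = np.array([0, 100, 512, 1023])
    print(reconstruct(compress10to8(img10)))  # approximates the original 10-bit values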
Blending Hydrogen into Natural Gas Pipeline Networks. A Review of Key Issues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melaina, M. W.; Antonia, O.; Penev, M.
2013-03-01
This study assesses the potential to deliver hydrogen through the existing natural gas pipeline network as a hydrogen and natural gas mixture to defray the cost of building dedicated hydrogen pipelines. Blending hydrogen into the existing natural gas pipeline network has also been proposed as a means of increasing the output of renewable energy systems such as large wind farms.
77 FR 11520 - Commission Information Collection Activities; Comment Request; Extension
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-27
..., Gas Pipeline Certificates: Annual Reports of System Flow Diagrams and System Capacity. DATES: Comments... Certificates: Annual Reports of System Flow Diagrams and System Capacity. OMB Control No.: 1902-0005. Type of... June 1 of each year, diagrams reflecting operating conditions on the pipeline's main transmission...
Argentine gas system underway for Gas del Estado
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bosch, H.
Gas del Estado's giant 1074-mile Centro-Oeste pipeline project - designed to ultimately transport over 350 million CF/day of natural gas from the Neuquen basin to the Campo Duran-Buenos Aires pipeline system - is now underway. The COGASCO consortium of Dutch and Argentine companies awarded the construction project will also operate and maintain the system for 15 years after its completion. In addition to the 30-in. pipelines, the agreement calls for a major compressor station at the gas field, three intermediate compressor stations, a gas-treatment plant, liquids-recovery facilities, and the metering, control, communications, and maintenance equipment for the system. Fabricated in Holland, the internally and externally coated pipe will be double-jointed to 80-ft lengths after shipment to Argentina; welders will use conventional manual-arc techniques to weld the pipeline in the field.
The Brackets Design and Stress Analysis of a Refinery's Hot Water Pipeline
NASA Astrophysics Data System (ADS)
Zhou, San-Ping; He, Yan-Lin
2016-05-01
The reconstruction project, which reroutes the hot water pipeline from a power station to a heat exchange station, requires the new hot water pipeline to share the old pipe racks. Taking into account the allowable span calculated per GB50316 and the design philosophy of pipeline supports, the types and locations of brackets are determined. The pipeline stresses are then analyzed in AutoPIPE, the supports at dangerous segments are adjusted, and the analysis is repeated until the types, locations and numbers of supports are settled. The overall pipeline system then satisfies the requirements of ASME B31.3.
The Hyper Suprime-Cam software pipeline
NASA Astrophysics Data System (ADS)
Bosch, James; Armstrong, Robert; Bickerton, Steven; Furusawa, Hisanori; Ikeda, Hiroyuki; Koike, Michitaro; Lupton, Robert; Mineo, Sogo; Price, Paul; Takata, Tadafumi; Tanaka, Masayuki; Yasuda, Naoki; AlSayyad, Yusra; Becker, Andrew C.; Coulton, William; Coupon, Jean; Garmilla, Jose; Huang, Song; Krughoff, K. Simon; Lang, Dustin; Leauthaud, Alexie; Lim, Kian-Tat; Lust, Nate B.; MacArthur, Lauren A.; Mandelbaum, Rachel; Miyatake, Hironao; Miyazaki, Satoshi; Murata, Ryoma; More, Surhud; Okura, Yuki; Owen, Russell; Swinbank, John D.; Strauss, Michael A.; Yamada, Yoshihiko; Yamanoi, Hitomi
2018-01-01
In this paper, we describe the optical imaging data processing pipeline developed for the Subaru Telescope's Hyper Suprime-Cam (HSC) instrument. The HSC Pipeline builds on the prototype pipeline being developed by the Large Synoptic Survey Telescope's Data Management system, adding customizations for HSC, large-scale processing capabilities, and novel algorithms that have since been reincorporated into the LSST codebase. While designed primarily to reduce HSC Subaru Strategic Program (SSP) data, it is also the recommended pipeline for reducing general-observer HSC data. The HSC pipeline includes high-level processing steps that generate coadded images and science-ready catalogs as well as low-level detrending and image characterizations.
High altitude mine waste remediation -- Implementation of the Idarado remedial action plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, A.J.; Redmond, J.V.; River, R.A.
1999-07-01
The Idarado Mine in Colorado's San Juan Mountains includes 11 tailing areas, numerous waste rock dumps, and a large number of underground openings connected by over 100 miles of raises and drifts. The tailings and mine wastes were generated from different mining and milling operations between 1975 and 1978. The Idarado Remedial Action Plan (RAP) was an innovative 5-year program developed for remediating the impacts of historic mining activities in the San Miguel River and Red Mountain Creek drainages. The challenges during implementation included seasonal access limitations due to the high altitude construction areas, high volumes of runoff during snowmelt, numerous abandoned underground openings and stoped-out veins, and high profile sites adjacent to busy jeep trails and a major ski resort town. Implementation of the RAP has included pioneering efforts in engineering design and construction of remedial measures. Innovative engineering designs included direct revegetation techniques for the stabilization of tailings piles, concrete cutoff walls and French drains to control subsurface flows, underground water controls that included pipelines, weeplines, and portal collection systems, and various underground structures to collect and divert subsurface flows often exceeding 2,000 gpm. Remote work locations have also required the use of innovative construction techniques such as heavy lift helicopters to move construction materials to mines above 10,000 feet. This paper describes the 5-year implementation program which has included over 1,000,000 cubic yards of tailing regrading, application of 5,000 tons of manure and 26,000 tons of limestone, and construction of over 10,000 feet of pipeline and approximately 45,000 feet of diversion channel.
Lambert, Jean-Philippe; Ivosev, Gordana; Couzens, Amber L; Larsen, Brett; Taipale, Mikko; Lin, Zhen-Yuan; Zhong, Quan; Lindquist, Susan; Vidal, Marc; Aebersold, Ruedi; Pawson, Tony; Bonner, Ron; Tate, Stephen; Gingras, Anne-Claude
2013-12-01
Characterizing changes in protein-protein interactions associated with sequence variants (e.g., disease-associated mutations or splice forms) or following exposure to drugs, growth factors or hormones is critical to understanding how protein complexes are built, localized and regulated. Affinity purification (AP) coupled with mass spectrometry permits the analysis of protein interactions under near-physiological conditions, yet monitoring interaction changes requires the development of a robust and sensitive quantitative approach, especially for large-scale studies in which cost and time are major considerations. We have coupled AP to data-independent mass spectrometric acquisition (sequential window acquisition of all theoretical spectra, SWATH) and implemented an automated data extraction and statistical analysis pipeline to score modulated interactions. We used AP-SWATH to characterize changes in protein-protein interactions imparted by the HSP90 inhibitor NVP-AUY922 or melanoma-associated mutations in the human kinase CDK4. We show that AP-SWATH is a robust label-free approach to characterize such changes and propose a scalable pipeline for systems biology studies.
Decoding noises in HIV computational genotyping.
Jia, MingRui; Shaw, Timothy; Zhang, Xing; Liu, Dong; Shen, Ye; Ezeamama, Amara E; Yang, Chunfu; Zhang, Ming
2017-11-01
Lack of a consistent and reliable genotyping system can critically impede HIV genomic research on pathogenesis, fitness, virulence, drug resistance, and genomic-based healthcare and treatment. At present, mis-genotyping, i.e., background noises in molecular genotyping, and its impact on epidemic surveillance is unknown. For the first time, we present a comprehensive assessment of HIV genotyping quality. HIV sequence data were retrieved from worldwide published records, and subjected to a systematic genotyping assessment pipeline. Results showed that mis-genotyped cases occurred at 4.6% globally, with some regional and high-risk population heterogeneities. Results also revealed a consistent mis-genotyping pattern in gp120 in all studied populations except the group of men who have sex with men. Our study also suggests novel virus diversities in the mis-genotyped cases. Finally, this study reemphasizes the importance of implementing a standardized genotyping pipeline to avoid genotyping disparity and to advance our understanding of virus evolution in various epidemiological settings. Copyright © 2017 Elsevier Inc. All rights reserved.
Implementation of the Pan-STARRS Image Processing Pipeline
NASA Astrophysics Data System (ADS)
Fang, Julia; Aspin, C.
2007-12-01
Pan-STARRS, or Panoramic Survey Telescope and Rapid Response System, is a wide-field imaging facility that combines small mirrors with gigapixel cameras. It surveys the entire available sky several times a month, which ultimately requires large amounts of data to be processed and stored right away. Accordingly, the Image Processing Pipeline--the IPP--is a collection of software tools that is responsible for the primary image analysis for Pan-STARRS. It includes data registration, basic image analysis such as obtaining master images and detrending the exposures, mosaic calibration when applicable, and lastly, image sums and differences. In this paper I present my work on the installation of IPP 2.1 and 2.2 on a Linux machine, running Simtest (simulated data used to verify the installation), and finally applying the IPP to two different sets of UH 2.2m Tek data. This work was conducted through a Research Experience for Undergraduates (REU) position at the University of Hawaii's Institute for Astronomy and funded by the NSF.
OpenCluster: A Flexible Distributed Computing Framework for Astronomical Data Processing
NASA Astrophysics Data System (ADS)
Wei, Shoulin; Wang, Feng; Deng, Hui; Liu, Cuiyin; Dai, Wei; Liang, Bo; Mei, Ying; Shi, Congming; Liu, Yingbo; Wu, Jingping
2017-02-01
The volume of data generated by modern astronomical telescopes is extremely large and rapidly growing. However, current high-performance data processing architectures/frameworks are not well suited for astronomers because of their limitations and programming difficulties. In this paper, we therefore present OpenCluster, an open-source distributed computing framework to support rapidly developing high-performance processing pipelines of astronomical big data. We first detail the OpenCluster design principles and implementations and present the APIs facilitated by the framework. We then demonstrate a case in which OpenCluster is used to resolve complex data processing problems for developing a pipeline for the Mingantu Ultrawide Spectral Radioheliograph. Finally, we present our OpenCluster performance evaluation. Overall, OpenCluster provides not only high fault tolerance and simple programming interfaces, but also a flexible means of scaling up the number of interacting entities. OpenCluster thereby provides an easily integrated distributed computing framework for quickly developing a high-performance data processing system of astronomical telescopes and for significantly reducing software development expenses.
Flow-accelerated corrosion 2016 international conference
NASA Astrophysics Data System (ADS)
Tomarov, G. V.; Shipkov, A. A.
2017-05-01
The paper discusses materials and results of the most representative world forum on the problems of flow-accelerated metal corrosion in power engineering—Flow-Accelerated Corrosion (FAC) 2016, the international conference, which was held in Lille (France) from May 23 through May 27, 2016, sponsored by EdF-DTG with the support of the International Atomic Energy Agency (IAEA) and the World Association of Nuclear Operators (WANO). The information on major themes of reports and materials of the exhibition arranged within the framework of the conference is presented. The statistics on operation time and intensity of FAC wall thinning of NPP pipelines and equipment in the world are set out. The paper describes typical examples of flow-accelerated corrosion damage of condensate-feed and wet-steam pipeline components of nuclear and thermal power plants that caused forced shutdowns or accidents. The importance of research projects on the problem of flow-accelerated metal corrosion of nuclear power units coordinated by the IAEA with the participation of leading experts in this field from around the world is considered. The reports presented at the conference considered issues of implementation of an FAC mechanism in single- and two-phase flows, the impact of hydrodynamic and water-chemical factors, the chemical composition of the metal, and other parameters on the intensity and location of FAC wall thinning localized areas in pipeline components and power equipment. Features and patterns of local and general FAC leading to local metal thinning and contamination of the working environment with ferriferous compounds are considered. Main trends of modern practices preventing FAC wear of NPP pipelines and equipment are defined. An increasing role is acknowledged for computer codes for the assessment and prediction of FAC rate, as well as for software systems that support NPP personnel in inspection planning and prevention of FAC wall thinning of equipment operating in single- and two-phase flows. Different lines of attack on the problem of FAC of pipelines and equipment components of existing and future nuclear power units are reviewed. Promising methods of nondestructive inspection of pipelines and equipment are presented.
ERIC Educational Resources Information Center
Anderson, Eleanor Robinson
2017-01-01
Mounting public concern about a school-to-prison pipeline has put schools and districts under increasing pressure to reduce their use of suspensions, expulsions and arrests. Many are turning to restorative justice practices (RJP) as a promising alternative for addressing school discipline and improving school climate. However, implementing RJP in…
49 CFR 192.11 - Petroleum gas systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Petroleum gas systems. 192.11 Section 192.11... BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS General § 192.11 Petroleum gas systems. (a) Each plant that supplies petroleum gas by pipeline to a natural gas distribution system must meet the requirements...
49 CFR 192.11 - Petroleum gas systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 3 2011-10-01 2011-10-01 false Petroleum gas systems. 192.11 Section 192.11... BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS General § 192.11 Petroleum gas systems. (a) Each plant that supplies petroleum gas by pipeline to a natural gas distribution system must meet the requirements...
Inverse Transient Analysis for Classification of Wall Thickness Variations in Pipelines
Tuck, Jeffrey; Lee, Pedro
2013-01-01
Analysis of transient fluid pressure signals has been investigated as an alternative method of fault detection in pipeline systems and has shown promise in both laboratory and field trials. The advantage of the method is that it can potentially provide a fast and cost-effective means of locating faults such as leaks, blockages and pipeline wall degradation within a pipeline while the system remains fully operational. The only requirement is that high speed pressure sensors are placed in contact with the fluid. Further development of the method requires detailed numerical models and enhanced understanding of transient flow within a pipeline where variations in pipeline condition and geometry occur. One such variation commonly encountered is the degradation or thinning of pipe walls, which can increase the susceptibility of a pipeline to leak development. This paper aims to improve transient-based fault detection methods by investigating how changes in pipe wall thickness will affect the transient behaviour of a system; this is done through the analysis of laboratory experiments. The laboratory experiments are carried out on a stainless steel pipeline of constant outside diameter, into which a pipe section of variable wall thickness is inserted. In order to detect the location and severity of these changes in wall conditions within the laboratory system an inverse transient analysis procedure is employed which considers independent variations in wavespeed and diameter. Inverse transient analyses are carried out using a genetic algorithm optimisation routine to match the response from a one-dimensional method of characteristics transient model to the experimental time domain pressure responses. The accuracy of the detection technique is evaluated and benefits associated with various simplifying assumptions and simulation run times are investigated. It is found that for the case investigated, changes in the wavespeed and nominal diameter of the pipeline are both important to the accuracy of the inverse analysis procedure and can be used to differentiate the observed transient behaviour caused by changes in wall thickness from that caused by other known faults such as leaks. Further application of the method to real pipelines is discussed.
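A sketch of the inverse-analysis loop described above, with a plain random search standing in for the paper's genetic algorithm; the forward_model callable (the one-dimensional method-of-characteristics solver) is an assumed interface, not code from the paper:

    import numpy as np

    def misfit(params, t, p_measured, forward_model):
        """Sum-of-squares mismatch between modelled and measured pressure traces."""
        return float(np.sum((forward_model(params, t) - p_measured) ** 2))

    def random_search(bounds, t, p_measured, forward_model, n=500, seed=0):
        """Stand-in for the GA: sample candidate (wavespeed, diameter) pairs
        within bounds and keep the best-fitting one."""
        rng = np.random.default_rng(seed)
        best, best_f = None, np.inf
        for _ in range(n):
            cand = [rng.uniform(lo, hi) for lo, hi in bounds]
            f = misfit(cand, t, p_measured, forward_model)
            if f < best_f:
                best, best_f = cand, f
        return best, best_f

    # bounds = [(900.0, 1400.0), (0.015, 0.025)]  # wavespeed (m/s), diameter (m)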
ASIC implementation of recursive scaled discrete cosine transform algorithm
NASA Astrophysics Data System (ADS)
On, Bill N.; Narasimhan, Sam; Huang, Victor K.
1994-05-01
A program to implement the Recursive Scaled Discrete Cosine Transform (DCT) algorithm as proposed by H. S. Hou has been undertaken at the Institute of Microelectronics. Implementation of the design was done using top-down design methodology with VHDL (VHSIC Hardware Description Language) for chip modeling. Once the VHDL simulation was satisfactorily completed, the design was synthesized into gates using a synthesis tool. The architecture of the design consists of two processing units together with a memory module for data storage and transpose. Each processing unit is composed of four pipelined stages which allow the internal clock to run at one-eighth (1/8) the speed of the pixel clock. Each stage operates on eight pixels in parallel. As the data flows through each stage, various adders and multipliers transform it into the desired coefficients. The Scaled IDCT was implemented in a similar fashion with the adders and multipliers rearranged to perform the inverse DCT algorithm. The chip has been verified using Field Programmable Gate Array devices, and the design is operational. The combination of fewer multiplications and a pipelined architecture gives Hou's Recursive Scaled DCT good potential for achieving high performance at low cost in a Very Large Scale Integration implementation.
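For reference, the transform the pipelined stages compute is the 8-point DCT-II; the sketch below is the defining matrix form, not Hou's recursive factorisation (which is what reduces the multiplication count in the hardware):

    import numpy as np

    def dct8(block):
        """Reference orthonormal 8-point DCT-II of a length-8 vector (or 8xN array)."""
        n = np.arange(8)
        C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / 16.0)
        scale = np.full(8, np.sqrt(2.0 / 8.0))
        scale[0] = np.sqrt(1.0 / 8.0)
        return (scale[:, None] * C) @ block

    print(dct8(np.ones(8)))  # a flat block puts all energy in the DC coefficient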
VLSI architectures for computing multiplications and inverses in GF(2m)
NASA Technical Reports Server (NTRS)
Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.
1985-01-01
Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
VLSI architectures for computing multiplications and inverses in GF(2m)
NASA Technical Reports Server (NTRS)
Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.; Reed, I. S.
1983-01-01
Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
VLSI architectures for computing multiplications and inverses in GF(2m).
Wang, C C; Truong, T K; Shao, H M; Deutsch, L J; Omura, J K; Reed, I S
1985-08-01
Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that can be easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. In this paper, a pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal basis representation used together with this multiplier, a pipeline architecture is developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable, and therefore, naturally suitable for VLSI implementation.
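The inversion trick shared by these three papers is Fermat's identity a^(-1) = a^(2^m - 2), which needs only repeated squaring (nearly free in a normal basis, where squaring is a cyclic shift) plus m-1 multiplications. A sketch in GF(2^8) using a polynomial basis as a stand-in, since normal-basis arithmetic is what the VLSI design exploits but is longer to write out; the AES reduction polynomial is an arbitrary choice for illustration:

    POLY = 0x11B   # x^8 + x^4 + x^3 + x + 1, defines GF(2^8) here
    M = 8

    def gf_mul(a, b):
        """Carry-less multiply with modular reduction (Russian-peasant style)."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & (1 << M):
                a ^= POLY
            b >>= 1
        return r

    def gf_inv(a):
        """Inverse via Fermat: a^(2^m - 2) = a^2 * a^4 * ... * a^(2^(m-1))."""
        result = 1
        square = gf_mul(a, a)          # a^2
        for _ in range(M - 1):         # m-1 multiplications, m-1 squarings
            result = gf_mul(result, square)
            square = gf_mul(square, square)
        return result

    x = 0x57
    assert gf_mul(x, gf_inv(x)) == 1   # sanity check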
Santillán, Moisés
2003-07-21
A simple model of an oxygen exchanging network is presented and studied. This network's task is to transfer a given oxygen rate from a source to an oxygen consuming system. It consists of a pipeline that interconnects the oxygen consuming system and the reservoir, and of a fluid, the active oxygen transporting element, moving through the pipeline. The network's optimal design (total pipeline surface) and dynamics (volumetric flow of the oxygen transporting fluid), which minimize the energy rate expended in moving the fluid, are calculated in terms of the oxygen exchange rate, the pipeline length, and the pipeline cross-section. After the oxygen exchanging network is optimized, the energy converting system is shown to satisfy a 3/4-like allometric scaling law, based upon the assumption that its performance regime is scale invariant as well as on some feasible geometric scaling assumptions. Finally, the possible implications of this result for the allometric scaling properties observed elsewhere in living beings are discussed.
49 CFR 195.444 - CPM leak detection.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false CPM leak detection. 195.444 Section 195.444... PIPELINE Operation and Maintenance § 195.444 CPM leak detection. Each computational pipeline monitoring (CPM) leak detection system installed on a hazardous liquid pipeline transporting liquid in single...
NASA Technical Reports Server (NTRS)
Dowler, W. L.
1979-01-01
High strength steel pipeline carries hot mixture of powdered coal and coal derived oil to electric-power-generating station. Slurry is processed along the way to remove sulfur, ash, and nitrogen and to recycle part of the oil. System eliminates hazards and limitations associated with anticipated coal/water-slurry pipelines.
NASA Astrophysics Data System (ADS)
Pérez-López, F.; Vallejo, J. C.; Martínez, S.; Ortiz, I.; Macfarlane, A.; Osuna, P.; Gill, R.; Casale, M.
2015-09-01
BepiColombo is an interdisciplinary ESA mission to explore the planet Mercury in cooperation with JAXA. The mission consists of two separate orbiters: ESA's Mercury Planetary Orbiter (MPO) and JAXA's Mercury Magnetospheric Orbiter (MMO), which are dedicated to the detailed study of the planet and its magnetosphere. The MPO scientific payload comprises eleven instrument packages covering different disciplines developed by several European teams. This paper describes the design and development approach of the framework required to support the operation of the distributed BepiColombo MPO instrument pipelines, developed and operated from different locations, but designed as a single entity. An architecture based on a primary-redundant configuration, fully integrated into the BepiColombo Science Operations Control System (BSCS), has been selected: some instrument pipelines will be operated from the instrument teams' data processing centres, with a pipeline replica that can be run from the Science Ground Segment (SGS), while others will be executed as primary pipelines from the SGS, with the SGS adopting the pipeline orchestration role.
Carey, Michelle; Ramírez, Juan Camilo; Wu, Shuang; Wu, Hulin
2018-07-01
A biological host response to an external stimulus or intervention such as a disease or infection is a dynamic process, which is regulated by an intricate network of many genes and their products. Understanding the dynamics of this gene regulatory network allows us to infer the mechanisms involved in a host response to an external stimulus, and hence aids the discovery of biomarkers of phenotype and biological function. In this article, we propose a modeling/analysis pipeline for dynamic gene expression data, called Pipeline4DGEData, which consists of a series of statistical modeling techniques to construct dynamic gene regulatory networks from the large volumes of high-dimensional time-course gene expression data that are freely available in the Gene Expression Omnibus repository. This pipeline has a consistent and scalable structure that allows it to simultaneously analyze a large number of time-course gene expression data sets, and then integrate the results across different studies. We apply the proposed pipeline to influenza infection data from nine studies and demonstrate that interesting biological findings can be discovered with its implementation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... when identifying response systems and equipment to be deployed in accordance with a response plan... which those systems or equipment are intended to function. Barrel means 42 United States gallons (159... oil pipeline system or (2) Receive and store oil transported by a pipeline for reinjection and...
Pipeline scada upgrade uses satellite terminal system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conrad, W.; Skovrinski, J.R.
In the recent automation of its supervisory control and data acquisition (scada) system, Transwestern Pipeline Co. has become the first to use very small aperture satellite terminals (VSAT's) for scada. A subsidiary of Enron Interstate Pipeline, Houston, Transwestern moves natural gas through a 4,400-mile system from West Texas, New Mexico, and Oklahoma to southern California markets. Transwestern's modernization, begun in November 1985, addressed problems associated with its aging control equipment which had been installed when the compressor stations were built in 1960. Over the years a combination of three different systems had been added. All were cumbersome to maintain and utilized outdated technology. Problems with reliability, high maintenance time, and difficulty in getting new parts were determining factors in Transwestern's decision to modernize its scada system. In addition, the pipeline was anticipating moving its control center from Roswell, N.M., to Houston and believed it would be impossible to marry the old system with the new computer equipment in Houston.
Method for oil pipeline leak detection based on distributed fiber optic technology
NASA Astrophysics Data System (ADS)
Chen, Huabo; Tu, Yaqing; Luo, Ting
1998-08-01
Pipeline leak detection has remained a difficult problem. Traditional leak detection methods suffer from high rates of false alarm or missed detection and poor location-estimation capability. To address these problems, a method for oil pipeline leak detection based on a distributed optical fiber sensor with a special coating is presented. The fiber's coating interacts with hydrocarbon molecules in oil, which alters the refractive index of the coating and thereby modifies the light-guiding properties of the fiber. Pipeline leak locations can thus be determined by OTDR. An oil pipeline leak detection system is designed on this principle. The system offers real-time operation, simultaneous multi-point detection, and high location accuracy. Finally, some factors that may influence detection are analyzed and preliminary improvements are proposed.
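Locating the leak by OTDR reduces to timing the light returned from the affected fibre section; a sketch of the distance calculation, assuming a typical group index for silica fibre (an illustrative value, not a parameter from the paper):

    C = 299_792_458.0  # speed of light in vacuum, m/s

    def otdr_distance(delay_s, n_group=1.468):
        """Distance along the fibre to an event from the round-trip delay.

        The factor of 2 accounts for the out-and-back path; n_group is the
        fibre's group index.
        """
        return C * delay_s / (2.0 * n_group)

    print(otdr_distance(10e-6))  # a 10 us round trip places the event ~1.02 km out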
NASA Technical Reports Server (NTRS)
Samba, A. S.
1985-01-01
The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail. The performance of the VCR on a three-parameter model problem is also illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the 'Customized Reduction of Augmented Triangles' (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm. CRAT is tailored to the pipeline architecture of the VPS 32 and as a consequence the algorithm is implicitly vectorizable.
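A compact sequential sketch of the odd-even (cyclic) reduction idea behind VCR, for a tridiagonal system of size 2^p - 1; the per-level strided arithmetic below is exactly what a vector machine like the VPS 32 would vectorise:

    import numpy as np

    def cyclic_reduction(a, b, c, d):
        """Solve a tridiagonal system by odd-even (cyclic) reduction.

        a, b, c, d: sub-diagonal, main diagonal, super-diagonal, right-hand side
        (a[0] and c[-1] must be 0). Requires n = 2**p - 1 unknowns.
        """
        n = len(b)
        if n == 1:
            return d / b
        i = np.arange(1, n, 2)                      # equations kept at this level
        alpha = -a[i] / b[i - 1]
        gamma = -c[i] / b[i + 1]
        a2 = alpha * a[i - 1]
        b2 = b[i] + alpha * c[i - 1] + gamma * a[i + 1]
        c2 = gamma * c[i + 1]
        d2 = d[i] + alpha * d[i - 1] + gamma * d[i + 1]
        x = np.zeros(n)
        x[i] = cyclic_reduction(a2, b2, c2, d2)     # solve the half-size system
        xpad = np.concatenate(([0.0], x, [0.0]))    # phantom x[-1] = x[n] = 0
        e = np.arange(0, n, 2)                      # back-substitute even rows
        x[e] = (d[e] - a[e] * xpad[e] - c[e] * xpad[e + 2]) / b[e]
        return x

    # [[2,1,0],[1,2,1],[0,1,2]] x = [3,4,3]  ->  x = [1,1,1]
    print(cyclic_reduction(np.array([0., 1, 1]), np.array([2., 2, 2]),
                           np.array([1., 1, 0]), np.array([3., 4, 3])))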
Modelling of non-equilibrium flow in the branched pipeline systems
NASA Astrophysics Data System (ADS)
Sumskoi, S. I.; Sverchkov, A. M.; Lisanov, M. V.; Egorov, A. F.
2016-09-01
This article presents a mathematical model and a numerical method for solving the water-hammer problem in a branched pipeline system. The problem is considered in a one-dimensional non-stationary formulation that accounts for realities such as changes in the diameter of the pipeline and its branches. Comparison with an existing analytic solution shows that the proposed method possesses good accuracy. With the help of the developed model and numerical method, the problem of propagating a complex of compression waves through a branching pipeline system with several shut-down valves operating has been solved. The model and method may be readily extended to a number of other problems, for example, describing the flow of blood in vessels.
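The severity of the transients such a model propagates is bounded by the classical Joukowsky relation, dp = rho * a * dv, for rapid valve closure; a one-line check with illustrative numbers (not taken from the article):

    def joukowsky_surge(rho_kg_m3, wavespeed_m_s, dv_m_s):
        """Peak water-hammer pressure rise (Pa) for an instantaneous velocity change."""
        return rho_kg_m3 * wavespeed_m_s * dv_m_s

    # Water at ~1000 m/s wavespeed, 2 m/s of flow stopped abruptly:
    print(joukowsky_surge(1000.0, 1000.0, 2.0))  # 2.0e6 Pa, about 20 bar of surge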
40 CFR 52.987 - Control of hydrocarbon emissions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... control systems on a 37,500 barrel capacity crude oil storage tank at Cities Service Pipeline Company, Oil... a 25,000 barrel capacity crude oil storage tank at Cities Service Pipeline Company, Haynesville... barrel capacity crude oil storage tank at Cities Service Pipeline Company, Summerfield, Louisiana with...
40 CFR 52.987 - Control of hydrocarbon emissions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... control systems on a 37,500 barrel capacity crude oil storage tank at Cities Service Pipeline Company, Oil... a 25,000 barrel capacity crude oil storage tank at Cities Service Pipeline Company, Haynesville... barrel capacity crude oil storage tank at Cities Service Pipeline Company, Summerfield, Louisiana with...
40 CFR 52.987 - Control of hydrocarbon emissions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... control systems on a 37,500 barrel capacity crude oil storage tank at Cities Service Pipeline Company, Oil... a 25,000 barrel capacity crude oil storage tank at Cities Service Pipeline Company, Haynesville... barrel capacity crude oil storage tank at Cities Service Pipeline Company, Summerfield, Louisiana with...
49 CFR 195.104 - Variations in pressure.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 3 2014-10-01 2014-10-01 false Variations in pressure. 195.104 Section 195.104... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF HAZARDOUS LIQUIDS BY PIPELINE Design Requirements § 195.104 Variations in pressure. If, within a pipeline system, two or more...
49 CFR 195.104 - Variations in pressure.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Variations in pressure. 195.104 Section 195.104... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF HAZARDOUS LIQUIDS BY PIPELINE Design Requirements § 195.104 Variations in pressure. If, within a pipeline system, two or more...
49 CFR 195.104 - Variations in pressure.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 3 2012-10-01 2012-10-01 false Variations in pressure. 195.104 Section 195.104... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF HAZARDOUS LIQUIDS BY PIPELINE Design Requirements § 195.104 Variations in pressure. If, within a pipeline system, two or more...
49 CFR 195.104 - Variations in pressure.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 3 2013-10-01 2013-10-01 false Variations in pressure. 195.104 Section 195.104... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF HAZARDOUS LIQUIDS BY PIPELINE Design Requirements § 195.104 Variations in pressure. If, within a pipeline system, two or more...
49 CFR 195.104 - Variations in pressure.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 3 2011-10-01 2011-10-01 false Variations in pressure. 195.104 Section 195.104... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF HAZARDOUS LIQUIDS BY PIPELINE Design Requirements § 195.104 Variations in pressure. If, within a pipeline system, two or more...
ISEScan: automated identification of insertion sequence elements in prokaryotic genomes.
Xie, Zhiqun; Tang, Haixu
2017-11-01
The insertion sequence (IS) elements are the smallest but most abundant autonomous transposable elements in prokaryotic genomes, which play a key role in prokaryotic genome organization and evolution. With the fast growing genomic data, it is becoming increasingly critical for biology researchers to be able to accurately and automatically annotate ISs in prokaryotic genome sequences. The available automatic IS annotation systems are either providing only incomplete IS annotation or relying on the availability of existing genome annotations. Here, we present a new IS elements annotation pipeline to address these issues. ISEScan is a highly sensitive software pipeline based on profile hidden Markov models constructed from manually curated IS elements. ISEScan performs better than existing IS annotation systems when tested on prokaryotic genomes with curated annotations of IS elements. Applying it to 2784 prokaryotic genomes, we report the global distribution of IS families across taxonomic clades in Archaea and Bacteria. ISEScan is implemented in Python and released as an open source software at https://github.com/xiezhq/ISEScan. hatang@indiana.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Research on airborne infrared leakage detection of natural gas pipeline
NASA Astrophysics Data System (ADS)
Tan, Dongjie; Xu, Bin; Xu, Xu; Wang, Hongchao; Yu, Dongliang; Tian, Shengjie
2011-12-01
An airborne laser remote sensing technology is proposed for detecting natural gas pipeline leakage from a helicopter carrying a detector that senses traces of methane on the ground at high spatial resolution. The principle of the airborne laser remote sensing system is based on tunable diode laser absorption spectroscopy (TDLAS). The system consists of an optical unit containing the laser, a camera, a helicopter mount, an electronic unit with DGPS antenna, a notebook computer and a pilot monitor, and it is mounted on a helicopter. The principle and the architecture of the airborne laser remote sensing system are presented. Field test experiments carried out on the West-East Natural Gas Pipeline of China show that the airborne detection method is suitable for detecting gas leaks from pipelines on plains, deserts and hills, but unsuitable for areas with large variations in altitude.
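TDLAS retrieval rests on the Beer-Lambert law, I = I0 * exp(-sigma * n * L); a sketch of the inversion for the path-averaged methane density, with illustrative values rather than the instrument's actual parameters:

    import math

    def mean_density(I, I0, sigma_cm2, path_m):
        """Path-averaged number density (molecules/cm^3) from transmitted intensity.

        sigma_cm2: absorption cross-section at the laser line (assumed value);
        path_m: optical path length from helicopter to ground and back, if folded.
        """
        absorbance = -math.log(I / I0)
        return absorbance / (sigma_cm2 * path_m * 100.0)  # path converted to cm

    # 2% absorption over a 100 m path with an assumed 1e-20 cm^2 cross-section:
    print(mean_density(0.98, 1.0, 1e-20, 100.0))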
The Hyper Suprime-Cam software pipeline
Bosch, James; Armstrong, Robert; Bickerton, Steven; ...
2017-10-12
Here in this article, we describe the optical imaging data processing pipeline developed for the Subaru Telescope's Hyper Suprime-Cam (HSC) instrument. The HSC Pipeline builds on the prototype pipeline being developed by the Large Synoptic Survey Telescope's Data Management system, adding customizations for HSC, large-scale processing capabilities, and novel algorithms that have since been reincorporated into the LSST codebase. While designed primarily to reduce HSC Subaru Strategic Program (SSP) data, it is also the recommended pipeline for reducing general-observer HSC data. The HSC pipeline includes high-level processing steps that generate coadded images and science-ready catalogs as well as low-level detrending and image characterizations.
The Hyper Suprime-Cam software pipeline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bosch, James; Armstrong, Robert; Bickerton, Steven
Here in this article, we describe the optical imaging data processing pipeline developed for the Subaru Telescope's Hyper Suprime-Cam (HSC) instrument. The HSC Pipeline builds on the prototype pipeline being developed by the Large Synoptic Survey Telescope's Data Management system, adding customizations for HSC, large-scale processing capabilities, and novel algorithms that have since been reincorporated into the LSST codebase. While designed primarily to reduce HSC Subaru Strategic Program (SSP) data, it is also the recommended pipeline for reducing general-observer HSC data. The HSC pipeline includes high-level processing steps that generate coadded images and science-ready catalogs as well as low-level detrending and image characterizations.
Siretskiy, Alexey; Sundqvist, Tore; Voznesenskiy, Mikhail; Spjuth, Ola
2015-01-01
New high-throughput technologies, such as massively parallel sequencing, have transformed the life sciences into a data-intensive field. The most common e-infrastructure for analyzing this data consists of batch systems that are based on high-performance computing resources; however, the bioinformatics software that is built on this platform does not scale well in the general case. Recently, the Hadoop platform has emerged as an interesting option to address the challenges of increasingly large datasets with distributed storage, distributed processing, built-in data locality, fault tolerance, and an appealing programming methodology. In this work we introduce metrics and report on a quantitative comparison between Hadoop and a single node of conventional high-performance computing resources for the tasks of short read mapping and variant calling. We calculate efficiency as a function of data size and observe that the Hadoop platform is more efficient for biologically relevant data sizes in terms of computing hours for both split and un-split data files. We also quantify the advantages of the data locality provided by Hadoop for NGS problems, and show that a classical architecture with network-attached storage will not scale when computing resources increase in number. Measurements were performed using ten datasets of different sizes, up to 100 gigabases, using the pipeline implemented in Crossbow. To make a fair comparison, we implemented an improved preprocessor for Hadoop with better performance for splittable data files. For improved usability, we implemented a graphical user interface for Crossbow in a private cloud environment using the CloudGene platform. All of the code and data in this study are freely available as open source in public repositories. From our experiments we can conclude that the improved Hadoop pipeline scales better than the same pipeline on high-performance computing resources; we also conclude that Hadoop is an economically viable option for the common data sizes that are currently used in massively parallel sequencing. Given that datasets are expected to increase over time, Hadoop is a framework that we envision will have an increasingly important role in future biological data analysis.
NASA Astrophysics Data System (ADS)
Artana, K. B.; Pitana, T.; Dinariyana, D. P.; Ariana, M.; Kristianto, D.; Pratiwi, E.
2018-06-01
The aim of this research is to develop an algorithm and application that can perform real-time monitoring of the safety operation of offshore platforms and subsea gas pipelines as well as determine the need for ship inspection using data obtained from the automatic identification system (AIS). The research also focuses on the integration of the shipping database, AIS data, and other sources to develop a prototype for designing a real-time monitoring system of offshore platforms and pipelines. A simple concept is used in the development of this prototype, which is achieved by using an overlaying map that outlines the coordinates of the offshore platform and subsea gas pipeline with the ship's coordinates (longitude/latitude) as detected by AIS. Using such information, we can then build an early warning system (EWS) relayed through short message service (SMS), email, or other means when the ship enters the restricted and exclusion zone of platforms and pipelines. The ship inspection system is developed by combining several attributes. Then, decision analysis software is employed to prioritize the vessel's four attributes, namely ship age, ship type, classification, and flag state. Results show that the EWS can increase the safety level of offshore platforms and pipelines, as well as the efficient use of patrol boats in monitoring the safety of the facilities. Meanwhile, ship inspection enables the port to prioritize the ship to be inspected in accordance with the priority ranking inspection score.
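The core of such an early warning system is a geofence test on each incoming AIS position report; a minimal sketch using the haversine distance, with an assumed 500 m exclusion radius (the actual zones would be defined per facility):

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two lat/lon points."""
        R = 6_371_000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        h = (math.sin(dphi / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
        return 2 * R * math.asin(math.sqrt(h))

    def check_exclusion(ship, platform, radius_m=500.0):
        """Return an alert flag when an AIS-tracked ship enters the exclusion zone."""
        d = haversine_m(*ship, *platform)
        return ("ALERT" if d <= radius_m else "clear", d)

    print(check_exclusion((-5.920, 106.100), (-5.921, 106.102)))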
Almazyad, Abdulaziz S.; Seddiq, Yasser M.; Alotaibi, Ahmed M.; Al-Nasheri, Ahmed Y.; BenSaleh, Mohammed S.; Obeid, Abdulfattah M.; Qasim, Syed Manzoor
2014-01-01
Anomalies such as leakage and bursts in water pipelines have severe consequences for the environment and the economy. To ensure the reliability of water pipelines, they must be monitored effectively. Wireless Sensor Networks (WSNs) have emerged as an effective technology for monitoring critical infrastructure such as water, oil and gas pipelines. In this paper, we present a scalable design and simulation of a water pipeline leakage monitoring system using Radio Frequency IDentification (RFID) and WSN technology. The proposed design targets long-distance aboveground water pipelines that have special considerations for maintenance, energy consumption and cost. The design is based on deploying a group of mobile wireless sensor nodes inside the pipeline and allowing them to work cooperatively according to a prescheduled order. Under this mechanism, only one node is active at a time, while the other nodes are sleeping. The node whose turn is next wakes up according to one of three wakeup techniques: location-based, time-based and interrupt-driven. In this paper, mathematical models are derived for each technique to estimate the corresponding energy consumption and memory size requirements. The proposed equations are analyzed and the results are validated using simulation. PMID:24561404
Almazyad, Abdulaziz S; Seddiq, Yasser M; Alotaibi, Ahmed M; Al-Nasheri, Ahmed Y; BenSaleh, Mohammed S; Obeid, Abdulfattah M; Qasim, Syed Manzoor
2014-02-20
Anomalies such as leakage and bursts in water pipelines have severe consequences for the environment and the economy. To ensure the reliability of water pipelines, they must be monitored effectively. Wireless Sensor Networks (WSNs) have emerged as an effective technology for monitoring critical infrastructure such as water, oil and gas pipelines. In this paper, we present a scalable design and simulation of a water pipeline leakage monitoring system using Radio Frequency IDentification (RFID) and WSN technology. The proposed design targets long-distance aboveground water pipelines that have special considerations for maintenance, energy consumption and cost. The design is based on deploying a group of mobile wireless sensor nodes inside the pipeline and allowing them to work cooperatively according to a prescheduled order. Under this mechanism, only one node is active at a time, while the other nodes are sleeping. The node whose turn is next wakes up according to one of three wakeup techniques: location-based, time-based and interrupt-driven. In this paper, mathematical models are derived for each technique to estimate the corresponding energy consumption and memory size requirements. The proposed equations are analyzed and the results are validated using simulation.
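The energy argument for the one-active-node schedule can be sketched with a simple duty-cycle model; the current draws below are typical datasheet-style assumptions, not the figures derived in the papers:

    def node_energy_j(t_active_s, t_sleep_s, i_active_a=0.020, i_sleep_a=20e-6, v=3.0):
        """Energy (joules) spent by one node over a cycle of active plus sleep time."""
        return v * (i_active_a * t_active_s + i_sleep_a * t_sleep_s)

    # One node active for 60 s of an hour while its peers sleep the whole hour:
    active_node = node_energy_j(60.0, 3540.0)   # ~3.8 J
    sleeping_node = node_energy_j(0.0, 3600.0)  # ~0.2 J
    print(active_node, sleeping_node)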
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-23
... facilities 486210 Pipeline transportation of natural gas. Petroleum and Natural Gas Systems. 221210 Natural... and Budget PHMSA Pipeline and Hazardous Material Safety Administration QA/QC quality assurance/quality... distribution pipelines, but also into liquefied natural gas storage or into underground storage. We are...
Thermal interaction of underground pipeline with freezing heaving soil
NASA Astrophysics Data System (ADS)
Podorozhnikov, S. Y.; Mikhailov, P.; Puldas, L.; Shabarov, A.
2018-05-01
A mathematical model and a method for calculating the stress-strain state of a pipeline are offered, describing the thermal and mechanical interaction in the "underground pipeline - soil" system under conditions of negative temperatures in frost-heaving soils. Some results of a computational-parametric study are presented.
The SCUBA Data Reduction Pipeline: ORAC-DR at the JCMT
NASA Astrophysics Data System (ADS)
Jenness, Tim; Economou, Frossie
The ORAC data reduction pipeline, developed for UKIRT, has been designed to be a completely general approach to writing data reduction pipelines. This generality has enabled the JCMT to adapt the system for use with SCUBA with minimal development time using the existing SCUBA data reduction algorithms (Surf).
Distributed shared memory for roaming large volumes.
Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno
2006-01-01
We present a cluster-based volume rendering system for roaming very large volumes. This system allows the user to move a gigabyte-sized probe inside a total volume of several tens or hundreds of gigabytes in real-time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that both aggregates graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming.
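A sketch of the hierarchical cache-miss path the abstract describes (local cache, then peer memory, then disk); the node.fetch and disk.read interfaces are hypothetical placeholders, not the system's API:

    def fetch_brick(brick_id, local_cache, peers, disk):
        """Resolve a volume brick: local cache first, then peer RAM, then disk.

        peers maps a node object to the set of brick ids resident in its memory;
        fetching from a peer's RAM over Gigabit Ethernet is the case the paper
        reports as ~4x faster than a local-disk read.
        """
        if brick_id in local_cache:
            return local_cache[brick_id]
        for node, resident in peers.items():
            if brick_id in resident:
                data = node.fetch(brick_id)   # remote memory, avoids the disk
                local_cache[brick_id] = data
                return data
        data = disk.read(brick_id)            # miss everywhere: pay the disk cost
        local_cache[brick_id] = data
        return data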
Supply Support of Air Force 463L Equipment: An Analysis of the 463L equipment Spare Parts Pipeline
1989-09-01
service; and 4) the order processing system created inherent delays in the pipeline because of outdated and indirect information systems and technology. Keywords: Materials handling equipment, Theses. (AW)
Digital Mapping of Buried Pipelines with a Dual Array System
DOT National Transportation Integrated Search
2003-06-06
The objective of this research is to develop a non-invasive system for detecting, mapping, and inspecting ferrous and plastic pipelines in place using technology that combines and interprets measurements from ground penetrating radar and electromagne...
49 CFR 193.2609 - Support systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Support systems. 193.2609 Section 193.2609 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY LIQUEFIED NATURAL GAS FACILITIES...
49 CFR 193.2519 - Communication systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Communication systems. 193.2519 Section 193.2519 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY LIQUEFIED NATURAL GAS FACILITIES...
Bayesian methods in reliability
NASA Astrophysics Data System (ADS)
Sander, P.; Badoux, R.
1991-11-01
The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
Distributed Sensing and Processing for Multi-Camera Networks
NASA Astrophysics Data System (ADS)
Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.
Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.
78 FR 10614 - Information Collection Activities and Request for Comments
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-14
... superior method of authentication. Under the revised procedures, a regulated natural gas and oil pipeline... particular type of filing. Implementation of these changes will provide increased flexibility for regulated...
NASA Astrophysics Data System (ADS)
Bonoli, Carlotta; Balestra, Andrea; Bortoletto, Favio; D'Alessandro, Maurizio; Farinelli, Ruben; Medinaceli, Eduardo; Stephen, John; Borsato, Enrico; Dusini, Stefano; Laudisio, Fulvio; Sirignano, Chiara; Ventura, Sandro; Auricchio, Natalia; Corcione, Leonardo; Franceschi, Enrico; Ligori, Sebastiano; Morgante, Gianluca; Patrizii, Laura; Sirri, Gabriele; Trifoglio, Massimo; Valenziano, Luca
2016-07-01
The Near Infrared Spectrograph and Photometer (NISP) is one of the two instruments on board the EUCLID mission, now in its implementation phase; VIS, the Visible Imager, is the second instrument working on the same shared optical beam. The NISP focal plane is based on a detector mosaic of 16 HAWAII-II HgCdTe detectors of 2048x2048 pixels each, now in an advanced delivery phase from Teledyne Imaging Scientific (TIS), and will provide NIR imaging in three bands (Y, J, H) plus slit-less spectroscopy in the range 0.9-2.0 micron. All the NISP observational modes will be supported by different parametrizations of the classic multi-accumulation IR detector readout mode, covering the specific needs of spectroscopic, photometric and calibration exposures. Due to the large number of deployed detectors and to the limited satellite telemetry available to ground, a substantial part of the data processing, conventionally performed off-line, will be accomplished on board, in parallel with the flow of data acquisitions. This has led to the development of a specific on-board HW/SW data processing pipeline, and to the design of computationally capable control electronics suited to cope with the time constraints of the NISP acquisition sequences during the sky survey. In this paper we present the architecture of the NISP on-board processing system, directly interfaced to the SIDECAR ASICs managing the detector focal plane, and the implementation of the on-board pipeline performing the basic operations of input frame averaging, final frame interpolation and data-volume compression before ground down-link.
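Two of the on-board operations named above, input frame averaging and data-volume compression, can be illustrated in miniature. A toy sketch, assuming placeholder frame sizes and a generic lossless codec rather than the mission's actual algorithms:

```python
# Toy sketch of two of the on-board steps named above: averaging a group
# of non-destructive readout frames, then compressing the averaged frame
# before downlink. Sizes, dtypes and the zlib codec are placeholders;
# the real multi-accumulation ramp processing is not reproduced.

import zlib
import numpy as np

def average_frames(frames):
    """Average a stack of readout frames (n_frames, ny, nx) to one frame."""
    return np.round(frames.mean(axis=0)).astype(np.uint16)

def compress_frame(frame):
    """Lossless compression of the averaged frame for downlink."""
    return zlib.compress(frame.tobytes(), level=6)

rng = np.random.default_rng(0)
stack = rng.poisson(1000.0, size=(16, 256, 256)).astype(np.uint16)
avg = average_frames(stack)
blob = compress_frame(avg)
print(f"raw {avg.nbytes} B -> compressed {len(blob)} B")
```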
Aerodynamics of electrically driven freight pipeline system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundgren, T.S.; Zhao, Y.
2000-06-01
This paper examines the aerodynamic characteristics of a freight pipeline system in which freight capsules are individually propelled by electrical motors. The fundamental difference between this system and the more extensively studied pneumatic capsule pipeline is the different role played by aerodynamic forces. In a driven system the propelled capsules are resisted by aerodynamic forces and, in reaction, pump air through the tube. In contrast, in a pneumatically propelled system external blowers pump air through the tubes, and this provides the thrust for the capsules. An incompressible transient analysis is developed to study the aerodynamics of multiple capsules in a cross-linked two-bore pipeline. An aerodynamic friction coefficient is used as a cost parameter to compare the effects of capsule blockage and headway and to assess the merits of adits and vents. The authors conclude that optimum efficiency for off-design operation is obtained with long platoons of capsules in vented or adit connected tubes.
Fuchs, Helmut; Aguilar-Pimentel, Juan Antonio; Amarie, Oana V; Becker, Lore; Calzada-Wack, Julia; Cho, Yi-Li; Garrett, Lillian; Hölter, Sabine M; Irmler, Martin; Kistler, Martin; Kraiger, Markus; Mayer-Kuckuk, Philipp; Moreth, Kristin; Rathkolb, Birgit; Rozman, Jan; da Silva Buttkus, Patricia; Treise, Irina; Zimprich, Annemarie; Gampe, Kristine; Hutterer, Christine; Stöger, Claudia; Leuchtenberger, Stefanie; Maier, Holger; Miller, Manuel; Scheideler, Angelika; Wu, Moya; Beckers, Johannes; Bekeredjian, Raffi; Brielmeier, Markus; Busch, Dirk H; Klingenspor, Martin; Klopstock, Thomas; Ollert, Markus; Schmidt-Weber, Carsten; Stöger, Tobias; Wolf, Eckhard; Wurst, Wolfgang; Yildirim, Ali Önder; Zimmer, Andreas; Gailus-Durner, Valérie; Hrabě de Angelis, Martin
2017-09-29
For decades, model organisms have provided an important approach for understanding the mechanistic basis of human diseases. The German Mouse Clinic (GMC) was the first phenotyping facility to establish a collaboration-based platform for phenotype characterization of mouse lines. In order to address individual projects with a tailor-made phenotyping strategy, the GMC has developed a series of pipelines with tests for the analysis of specific disease areas. For a general broad analysis, there is a screening pipeline that covers the key parameters for the most relevant disease areas. For hypothesis-driven phenotypic analyses, there are thirteen additional pipelines focused on neurological and behavioral disorders, metabolic dysfunction, respiratory system malfunctions, immune-system disorders and imaging techniques. In this article, we give an overview of the pipelines and describe the scientific rationale behind the different test combinations. Copyright © 2017 Elsevier B.V. All rights reserved.
The CWF pipeline system from Shen mu to the Yellow Sea
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ercolani, D.
1993-12-31
A feasibility study on the applicability of coal-water fuel (CWF) technology in the People's Republic of China (PRC) is in progress. This study, awarded to Snamprogetti by the International Centre for Scientific Culture (World Laboratory) of Geneva, Switzerland, is performed on behalf of Chinese organizations led by the Ministry of Energy Resources and the Academy of Sciences of the People's Republic of China. Slurry pipelines appear to be a solution for the logistic problems created by progressively increasing coal consumption and the limited availability of conventional transport infrastructure in the PRC. Within this framework, CWF pipelines are the most innovative technological option in consideration of the various advantages the technology offers with respect to conventional slurry pipelines. The PRC CWF pipeline system study evaluates two alternative transport streams, both originating from the same slurry production plant, located at Shachuanguo, about 100 km from Sheng Mu.
Simulation and Experiment Research on Fatigue Life of High Pressure Air Pipeline Joint
NASA Astrophysics Data System (ADS)
Shang, Jin; Xie, Jianghui; Yu, Jian; Zhang, Deman
2017-12-01
The high pressure air pipeline joint is an important part of a high pressure air system, whose reliability is related to the safety and stability of the system. This work developed a new type of high pressure air pipeline joint and carried out a dynamics study of the CB316-1995 joint and the new joint using the finite element method, analysing in depth the joint forms of different design schemes and the effect of materials on the stress, tightening torque and fatigue life of the joint. The research team set up a vibration/pulse test bench and carried out comparative joint fatigue life tests. The results show that the maximum stress of the joint occurs on the inner side of the outer sleeve nut, which is consistent with the crack failure mode observed on the outer sleeve nut in practice. In both simulation and experiment, the fatigue life and tightening torque of the new joint are better than those of the CB316-1995 joint.
Albi, Angela; Meola, Antonio; Zhang, Fan; Kahali, Pegah; Rigolo, Laura; Tax, Chantal M W; Ciris, Pelin Aksit; Essayed, Walid I; Unadkat, Prashin; Norton, Isaiah; Rathi, Yogesh; Olubiyi, Olutayo; Golby, Alexandra J; O'Donnell, Lauren J
2018-03-01
Diffusion magnetic resonance imaging (dMRI) provides preoperative maps of neurosurgical patients' white matter tracts, but these maps suffer from echo-planar imaging (EPI) distortions caused by magnetic field inhomogeneities. In clinical neurosurgical planning, these distortions are generally not corrected and thus contribute to the uncertainty of fiber tracking. Multiple image processing pipelines have been proposed for image-registration-based EPI distortion correction in healthy subjects. In this article, we perform the first comparison of such pipelines in neurosurgical patient data. Five pipelines were tested in a retrospective clinical dMRI dataset of 9 patients with brain tumors. Pipelines differed in the choice of fixed and moving images and the similarity metric for image registration. Distortions were measured in two important tracts for neurosurgery, the arcuate fasciculus and corticospinal tracts. Significant differences in distortion estimates were found across processing pipelines. The most successful pipeline used dMRI baseline and T2-weighted images as inputs for distortion correction. This pipeline gave the most consistent distortion estimates across image resolutions and brain hemispheres. Quantitative results of mean tract distortions on the order of 1-2 mm are in line with other recent studies, supporting the potential need for distortion correction in neurosurgical planning. Novel results include significantly higher distortion estimates in the tumor hemisphere and greater effect of image resolution choice on results in the tumor hemisphere. Overall, this study demonstrates possible pitfalls and indicates that care should be taken when implementing EPI distortion correction in clinical settings. Copyright © 2018 by the American Society of Neuroimaging.
Implementation of MPEG-2 encoder to multiprocessor system using multiple MVPs (TMS320C80)
NASA Astrophysics Data System (ADS)
Kim, HyungSun; Boo, Kenny; Chung, SeokWoo; Choi, Geon Y.; Lee, YongJin; Jeon, JaeHo; Park, Hyun Wook
1997-05-01
This paper presents an efficient algorithm mapping for real-time MPEG-2 encoding on the KAIST image computing system (KICS), which has a parallel architecture using five multimedia video processors (MVPs). The MVP is a general purpose digital signal processor (DSP) from Texas Instruments. It combines one floating-point processor and four fixed-point DSPs on a single chip. The KICS uses the MVP as its primary processing element (PE). Two PEs form a cluster, and there are two processing clusters in the KICS. The real-time MPEG-2 encoder is implemented through spatial and functional partitioning strategies. The encoding of one spatially partitioned half of the video input frame is assigned to one processing cluster. The two PEs of a cluster perform the functionally partitioned MPEG-2 encoding tasks in pipelined operation mode: one PE carries out the transform coding part and the other performs the predictive coding part of the MPEG-2 encoding algorithm. One of the five MVPs is used for system control and interface with the host computer. This paper introduces an implementation of the MPEG-2 algorithm on a parallel processing architecture.
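The two-level partitioning described, spatial across clusters and functional across the two PEs of a cluster, maps naturally onto a two-stage pipeline. A toy sketch with ordinary threads and placeholder stage bodies, not the MVP implementation:

```python
# Sketch of the spatial + functional partitioning described above:
# each half-frame goes to one "cluster", and within a cluster two
# workers form a two-stage pipeline (predictive coding stage feeding
# a transform coding stage). Stage bodies are placeholders.

import threading, queue

def predictive_stage(inp, out):
    for frame_half in iter(inp.get, None):
        residual = frame_half            # placeholder: motion estimation etc.
        out.put(residual)
    out.put(None)

def transform_stage(inp, results):
    for residual in iter(inp.get, None):
        results.append(residual)         # placeholder: DCT + quantization + VLC

def run_cluster(frame_halves):
    q_in, q_mid, results = queue.Queue(), queue.Queue(), []
    t1 = threading.Thread(target=predictive_stage, args=(q_in, q_mid))
    t2 = threading.Thread(target=transform_stage, args=(q_mid, results))
    t1.start(); t2.start()
    for half in frame_halves:
        q_in.put(half)
    q_in.put(None)
    t1.join(); t2.join()
    return results

print(run_cluster(["top_half_0", "top_half_1"]))
```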
Ground motion values for use in the seismic design of the Trans-Alaska Pipeline system
Page, Robert A.; Boore, D.M.; Joyner, W.B.; Coulter, H.W.
1972-01-01
The proposed trans-Alaska oil pipeline, which would traverse the state north to south from Prudhoe Bay on the Arctic coast to Valdez on Prince William Sound, will be subject to serious earthquake hazards over much of its length. To be acceptable from an environmental standpoint, the pipeline system is to be designed to minimize the potential of oil leakage resulting from seismic shaking, faulting, and seismically induced ground deformation. The design of the pipeline system must accommodate the effects of earthquakes with magnitudes ranging from 5.5 to 8.5 as specified in the 'Stipulations for Proposed Trans-Alaskan Pipeline System.' This report characterizes ground motions for the specified earthquakes in terms of peak levels of ground acceleration, velocity, and displacement and of duration of shaking. Published strong motion data from the Western United States are critically reviewed to determine the intensity and duration of shaking within several kilometers of the slipped fault. For magnitudes 5 and 6, for which sufficient near-fault records are available, the adopted ground motion values are based on data. For larger earthquakes the values are based on extrapolations from the data for smaller shocks, guided by simplified theoretical models of the faulting process.
NASA Astrophysics Data System (ADS)
Tomarov, G. V.; Povarov, V. P.; Shipkov, A. A.; Gromov, A. F.; Kiselev, A. N.; Shepelev, S. V.; Galanin, A. V.
2015-02-01
Specific features of the development of an information-analytical system for the problem of flow-accelerated corrosion of pipeline elements in the secondary coolant circuit of the VVER-440-based power units at the Novovoronezh nuclear power plant are considered. The results of a statistical analysis of data on the quantity, location, and operating conditions of the elements and preinserted segments of pipelines used in the condensate-feedwater and wet-steam paths are presented. The principles of preparing and using the information-analytical system for determining the lifetime before inadmissible wall thinning is reached in elements of pipelines used in the secondary coolant circuit of the VVER-440-based power units at the Novovoronezh NPP are considered.
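The lifetime criterion described, time until wall thinning reaches an inadmissible value, amounts to dividing the remaining wall allowance by a wear rate. A one-function sketch assuming a constant thinning rate and made-up numbers:

```python
# Sketch of a residual-lifetime estimate for a pipeline element subject
# to flow-accelerated corrosion. Assumes a constant wall-thinning rate;
# all numbers are illustrative, not plant data.

def residual_life_years(t_measured_mm: float,
                        t_min_allowed_mm: float,
                        thinning_rate_mm_per_yr: float) -> float:
    """Years until the wall reaches the inadmissible thickness."""
    if thinning_rate_mm_per_yr <= 0:
        raise ValueError("thinning rate must be positive")
    return (t_measured_mm - t_min_allowed_mm) / thinning_rate_mm_per_yr

# Example: 6.2 mm measured, 4.5 mm minimum allowed, 0.15 mm/yr wear.
print(f"{residual_life_years(6.2, 4.5, 0.15):.1f} years")  # ~11.3
```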
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-16
... DEPARTMENT OF HOMELAND SECURITY Transportation Security Administration New Agency Information Collection Activity Under OMB Review: Pipeline System Operator Security Information AGENCY: Transportation... INFORMATION CONTACT: Joanna Johnson, Office of Information Technology, TSA-11, Transportation Security...
Safety and integrity of pipeline systems - philosophy and experience in Germany
DOT National Transportation Integrated Search
1997-01-01
The design, construction and operation of gas pipeline systems in Germany are subject to the Energy Act and associated regulations. This legal structure is based on a deterministic rather than a probabilistic safety philosophy, consisting of technica...
Real Time Global Tests of the ALICE High Level Trigger Data Transport Framework
NASA Astrophysics Data System (ADS)
Becker, B.; Chattopadhyay, S.; Cicalo, C.; Cleymans, J.; de Vaux, G.; Fearick, R. W.; Lindenstruth, V.; Richter, M.; Rohrich, D.; Staley, F.; Steinbeck, T. M.; Szostak, A.; Tilsner, H.; Weis, R.; Vilakazi, Z. Z.
2008-04-01
The High Level Trigger (HLT) system of the ALICE experiment is an online event filter and trigger system designed for input bandwidths of up to 25 GB/s at event rates of up to 1 kHz. The system is designed as a scalable PC cluster, implementing several hundred nodes. The transport of data in the system is handled by an object-oriented data flow framework operating on the basis of the publisher-subscriber principle, designed to be fully pipelined with the lowest processing overhead and communication latency in the cluster. In this paper, we report the latest measurements in which this framework has been operated on five different sites over a global north-south link extending more than 10,000 km, processing a "real-time" data flow.
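The publisher-subscriber principle named here is easy to sketch: each component subscribes to its upstream queue and publishes downstream, so events stream through fully pipelined. A toy version, unrelated to the actual HLT framework API:

```python
# Toy publisher-subscriber pipeline: each component subscribes to the
# previous one's output queue and publishes to the next, so event blocks
# stream through fully pipelined. Not the HLT framework's API.

import threading, queue

class Component(threading.Thread):
    def __init__(self, name, process, upstream, downstream):
        super().__init__(name=name)
        self.process, self.upstream, self.downstream = process, upstream, downstream

    def run(self):
        for event in iter(self.upstream.get, None):   # None = end of run
            self.downstream.put(self.process(event))
        self.downstream.put(None)

source, mid, sink = queue.Queue(), queue.Queue(), queue.Queue()
stages = [Component("filter", lambda e: e, source, mid),
          Component("trigger", lambda e: e, mid, sink)]
for s in stages:
    s.start()
for event_id in range(5):
    source.put({"event": event_id})
source.put(None)
for result in iter(sink.get, None):
    print(result)
```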
Specht, Michael; Kuhlgert, Sebastian; Fufezan, Christian; Hippler, Michael
2011-04-15
We present Proteomatic, an operating system independent and user-friendly platform that enables the construction and execution of MS/MS data evaluation pipelines using free and commercial software. Required external programs such as for peptide identification are downloaded automatically in the case of free software. Due to a strict separation of functionality and presentation, and support for multiple scripting languages, new processing steps can be added easily. Proteomatic is implemented in C++/Qt, scripts are implemented in Ruby, Python and PHP. All source code is released under the LGPL. Source code and installers for Windows, Mac OS X, and Linux are freely available at http://www.proteomatic.org. michael.specht@uni-muenster.de Supplementary data are available at Bioinformatics online.
Programming the Navier-Stokes computer: An abstract machine model and a visual editor
NASA Technical Reports Server (NTRS)
Middleton, David; Crockett, Tom; Tomboulian, Sherry
1988-01-01
The Navier-Stokes computer is a parallel computer designed to solve Computational Fluid Dynamics problems. Each processor contains several floating point units which can be configured under program control to implement a vector pipeline with several inputs and outputs. Since the development of an effective compiler for this computer appears to be very difficult, machine level programming seems necessary and support tools for this process have been studied. These support tools are organized into a graphical program editor. A programming process is described by which appropriate computations may be efficiently implemented on the Navier-Stokes computer. The graphical editor would support this programming process, verifying various programmer choices for correctness and deducing values such as pipeline delays and network configurations. Step by step details are provided and demonstrated with two example programs.
Natural Gas Pipeline and System Expansions
1997-01-01
This special report examines recent expansions to the North American natural gas pipeline network and the nature and type of proposed pipeline projects announced or approved for construction during the next several years in the United States. It includes those projects in Canada and Mexico that tie in with U.S. markets or projects.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-20
... that would permit gas to be received by pipeline at the terminal and liquefied for subsequent export... the natural gas proposed for export will come from the United States natural gas pipeline system... authorization to construct additional pipeline facilities necessary to provide feed gas to the proposed...
Algeria LPG pipeline is built by Bechtel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horner, C.
1984-08-01
The construction of the 313 mile long, 24 in. LPG pipeline from Hassi R'Mel to Arzew, Algeria is described. The pipeline was designed to deliver 6 million tons of LPG annually using one pumping station. Eventually an additional pumping station will be added to raise the system capacity to 9 million tons annually.
Testing the School-to-Prison Pipeline
ERIC Educational Resources Information Center
Owens, Emily G.
2017-01-01
The School-to-Prison Pipeline is a social phenomenon where students become formally involved with the criminal justice system as a result of school policies that use law enforcement, rather than discipline, to address behavioral problems. A potentially important part of the School-to-Prison Pipeline is the use of sworn School Resource Officers…
Mittal, Varun; Hung, Ling-Hong; Keswani, Jayant; Kristiyanto, Daniel; Lee, Sung Bong; Yeung, Ka Yee
2017-04-01
Software container technology such as Docker can be used to package and distribute bioinformatics workflows consisting of multiple software implementations and dependencies. However, Docker is a command line-based tool, and many bioinformatics pipelines consist of components that require a graphical user interface. We present a container tool called GUIdock-VNC that uses a graphical desktop sharing system to provide a browser-based interface for containerized software. GUIdock-VNC uses the Virtual Network Computing protocol to render the graphics within most commonly used browsers. We also present a minimal image builder that can add our proposed graphical desktop sharing system to any Docker packages, with the end result that any Docker packages can be run using a graphical desktop within a browser. In addition, GUIdock-VNC uses the Oauth2 authentication protocols when deployed on the cloud. As a proof-of-concept, we demonstrated the utility of GUIdock-noVNC in gene network inference. We benchmarked our container implementation on various operating systems and showed that our solution creates minimal overhead. © The Authors 2017. Published by Oxford University Press.
76 FR 30197 - Notice of Lodging of Consent Decree Under The Clean Air Act
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-24
... Pipeline System, LLC, et al., Civil Action No. 11-CV-1188RPM-CBS was lodged with the United States District... System, LLC, Western Convenience Stores, Inc., and Offen Petroleum, Inc. (collectively, the ``Defendants... environmental mitigation project requires Rocky Mountain Pipeline System to install a domed cover on an...
Third-Generation Partnerships for P-16 Pipelines and Cradle-through-Career Education Systems
ERIC Educational Resources Information Center
Lawson, Hal A.
2013-01-01
Amid unprecedented novelty, complexity, turbulence, and conflict, it is apparent that a new education system is needed. Focused on a new outcome--postsecondary education completion with advanced competence--heretofore separate systems for early childhood, K-12 schools, and postsecondary education are being joined in P-16 pipelines and…
30 CFR 250.910 - Which of my facilities are subject to the Platform Verification Program?
Code of Federal Regulations, 2010 CFR
2010-07-01
... pipeline risers, and riser tensioning systems (each platform must be designed to accommodate all the loads imposed by all risers and riser does not have tensioning systems);(ii) Turrets and turret-and-hull... Platform Verification Program: (i) Drilling, production, and pipeline risers, and riser tensioning systems...
DOT National Transportation Integrated Search
1978-12-01
This study is the final phase of a muck pipeline program begun in 1973. The objective of the study was to evaluate a pneumatic pipeline system for muck haulage from a tunnel excavated by a tunnel boring machine. The system was comprised of a muck pre...
Accident Prevention and Diagnostics of Underground Pipeline Systems
NASA Astrophysics Data System (ADS)
Trokhimchuk, M.; Bakhracheva, Y.
2017-11-01
Up to forty thousand accidents involving underground pipelines occur annually due to corrosion. A comparison of methods for assessing the quality of anti-corrosion coatings is provided. It is proposed to use a device for tie-in to existing pipelines that offers greater functionality than other device types, owing to the possibility of tie-in to pipelines of different diameters. Existing technologies and applied materials allow industrial production of the proposed device to be organized.
Acoustic energy transmission in cast iron pipelines
NASA Astrophysics Data System (ADS)
Kiziroglou, Michail E.; Boyle, David E.; Wright, Steven W.; Yeatman, Eric M.
2015-12-01
In this paper we propose acoustic power transfer as a method for the remote powering of pipeline sensor nodes. A theoretical framework of acoustic power propagation in the ceramic transducers and the metal structures is drawn, based on the Mason equivalent circuit. The effect of mounting on the electrical response of piezoelectric transducers is studied experimentally. Using two identical transducer structures, power transmission of 0.33 mW through a 1 m long, 118 mm diameter cast iron pipe with 8 mm wall thickness is demonstrated, at 1 V received voltage amplitude. A near-linear relationship between input and output voltage is observed. These results show that it is possible to deliver significant power to sensor nodes through acoustic waves in solid structures. The proposed method may enable the implementation of acoustically powered wireless sensor nodes for structural and operation monitoring of pipeline infrastructure.
A Pipeline Tool for CCD Image Processing
NASA Astrophysics Data System (ADS)
Bell, Jon F.; Young, Peter J.; Roberts, William H.; Sebo, Kim M.
MSSSO is part of a collaboration developing a wide field imaging CCD mosaic (WFI). As part of this project, we have developed a GUI based pipeline tool that is an integrated part of MSSSO's CICADA data acquisition environment and processes CCD FITS images as they are acquired. The tool is also designed to run as a stand alone program to process previously acquired data. IRAF tasks are used as the central engine, including the new NOAO mscred package for processing multi-extension FITS files. The STScI OPUS pipeline environment may be used to manage data and process scheduling. The Motif GUI was developed using SUN Visual Workshop. C++ classes were written to facilitate launching of IRAF and OPUS tasks. While this first version implements calibration processing up to and including flat field corrections, there is scope to extend it to other processing.
Andersson, Neil
2011-12-21
Social audits are typically observational studies, combining qualitative and quantitative uptake of evidence with consultative interpretation of results. This often falters on issues of causality because their cross-sectional design limits interpretation of time relations and separation out of other indirect associations. Social audits drawing on methods of randomised controlled cluster trials (RCCT) allow more certainty about causality. Randomisation means that exposure occurs independently of all events that precede it--it converts potential confounders and other covariates into random differences. In 2008, CIET social audits introduced randomisation of the knowledge translation component with subsequent measurement of impact in the changes introduced. This "proof of impact" generates an additional layer of evidence in a cost-effective way, providing implementation-ready solutions for planners. Pipeline planning is a social audit that incorporates stepped wedge RCCTs. From a listing of districts/communities as a sampling frame, individual entities (communities, towns, districts) are randomly assigned to waves of intervention. Measurement of the impact takes advantage of the delay occasioned by the reality that there are insufficient resources to implement everywhere at the same time. The impact in the first wave contrasts with the second wave, which in turn contrasts with a third wave, and so on until all have received the intervention. Provided care is taken to achieve reasonable balance in the random allocation of communities, towns or districts to the waves, the resulting analysis can be straightforward. Where there is sufficient management interest in and commitment to evidence, pipeline planning can be integrated in the roll-out of programmes where real time information can improve the pipeline. Not all interventions can be randomly allocated, however, and random differences can still distort measurement. Other issues include contamination of the subsequent waves, ambiguity of indicators, "participant effects" that result from lack of blinding and lack of placebos, ethics and, not least important, the skills to do pipeline planning correctly.
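The core mechanic of pipeline planning, random assignment of entities to implementation waves so that later waves act as controls for earlier ones, is simple to make concrete. A minimal sketch with hypothetical district names:

```python
# Minimal sketch of stepped-wedge assignment for pipeline planning:
# shuffle the sampling frame, then deal entities into successive
# implementation waves. District names are hypothetical.

import random

def assign_waves(entities, n_waves, seed=42):
    """Randomly allocate entities (districts/communities) to waves."""
    rng = random.Random(seed)
    shuffled = entities[:]
    rng.shuffle(shuffled)
    waves = [[] for _ in range(n_waves)]
    for i, entity in enumerate(shuffled):
        waves[i % n_waves].append(entity)   # round-robin keeps waves balanced
    return waves

districts = [f"district_{i:02d}" for i in range(12)]
for w, members in enumerate(assign_waves(districts, n_waves=3), start=1):
    print(f"wave {w}: {members}")
```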
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thaule, S.B.; Postvoll, W.
Installation by den norske stats oljeselskap A.S. (Statoil) of a powerful pipeline-modeling system on Zeepipe has allowed this major North Sea gas pipeline to meet the growing demands and seasonal variations of the European gas market. The Troll gas-sales agreement (TGSA) in 1986 called for large volumes of Norwegian gas to begin arriving from the North Sea Sleipner East field in October 1993. It is important to Statoil to maintain regular gas deliveries from its integrated transport network. In addition, high utilization of transport capacity maximizes profits. In advance of operations, Statoil realized that state-of-the-art supervisory control and data acquisition (SCADA) and pipeline-modeling systems (PMS) would be necessary to meet its goals and to remain the most efficient North Sea operator. The paper describes the linking of Troll and Zeebrugge, contractual issues, the supervisory system, the SCADA module, pipeline modeling, the real-time model, the look-ahead model, the predictive model, and model performance.
Hardware/software codesign for embedded RISC core
NASA Astrophysics Data System (ADS)
Liu, Peng
2001-12-01
This paper describes the hardware/software codesign method of the extensible embedded RISC core VIRGO, which is based on the MIPS-I instruction set architecture. VIRGO is described in the Verilog hardware description language, has a five-stage pipeline with a shared 32-bit cache/memory interface, and is controlled by a distributed control scheme. Every pipeline stage has one small controller, which controls the stage's status and the cooperation among the pipeline phases. Since the description uses a high-level language and the control structure is distributed, the VIRGO core is highly extensible and can meet the requirements of the application. Taking the high-definition television MPEG2 MP@HL decoder chip as a case study, we constructed a hardware/software codesign virtual prototyping machine that supports research on the VIRGO core instruction set architecture, system-on-chip memory size requirements, system-on-chip software, etc. The system-on-chip design and the RISC instruction set can also be evaluated on the virtual prototyping machine platform.
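As a loose illustration of the five-stage pipeline organization mentioned above, here is a cycle-stepped toy model; it says nothing about VIRGO's actual distributed controllers, and models no hazards or stalls:

```python
# Cycle-stepped toy model of a classic five-stage RISC pipeline
# (IF, ID, EX, MEM, WB): each stage holds at most one instruction and
# passes it on every clock. No hazards or stalls are modeled.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def run_pipeline(program):
    latches = [None] * len(STAGES)     # one instruction (or None) per stage
    fetch_queue = list(program)
    cycle = 0
    while any(latches) or fetch_queue:
        for i in range(len(latches) - 1, 0, -1):   # advance every stage
            latches[i] = latches[i - 1]
        latches[0] = fetch_queue.pop(0) if fetch_queue else None
        cycle += 1
        print(f"cycle {cycle:2d}: " + " | ".join(
            f"{s}:{x or '--'}" for s, x in zip(STAGES, latches)))
        latches[-1] = None             # WB completes; instruction retires
    return cycle

# 4 instructions drain through 5 stages in 4 + 5 - 1 = 8 cycles.
assert run_pipeline(["add", "lw", "sw", "beq"]) == 4 + len(STAGES) - 1
```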
1983-10-24
Gas Transport System Examined (R. Verdiyan; VYSHKA, 26 Jul 83); National Pipeline Transport System Examined (L. Korenev; SOVETSKAYA LATVIYA, 9 Aug 83, p 2). From the article by L. Korenev, "The Country's Pipeline Transport": The Soviet Union produces more steel pipes
Electrically reconfigurable logic array
NASA Technical Reports Server (NTRS)
Agarwal, R. K.
1982-01-01
To compose the complicated systems using algorithmically specialized logic circuits or processors, one solution is to perform relational computations such as union, division and intersection directly on hardware. These relations can be pipelined efficiently on a network of processors having an array configuration. These processors can be designed and implemented with a few simple cells. In order to determine the state-of-the-art in Electrically Reconfigurable Logic Array (ERLA), a survey of the available programmable logic array (PLA) and the logic circuit elements used in such arrays was conducted. Based on this survey some recommendations are made for ERLA devices.
Adapt or Perish – Updating the Pre-doctoral Training Model
Chabowski, Dawid; Kadlec, Andrew; Dellostritto, Daniel; Gutterman, David
2017-01-01
The fate of biomedical research lies in the hands of future generations of scientists. In recent decades the diversity of scientific career opportunities has exploded multidimensionally. However, the educational system for maintaining a pipeline of talented biomedical trainees remains unidimensional and has become outdated. This Viewpoint identifies inadequacies in training and offers potential solutions and implementation strategies to stimulate interest in science at a younger age and to better align individualized training pathways with career opportunities (“precision training”). Both interventions support the ultimate goal of attracting the best possible future leaders in biomedical science. PMID:28360347
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-05
... integrated U.S. natural gas pipeline system. GLLC notes that due to the Gulf LNG Terminal's direct access to multiple major interstate pipelines and indirect access to the national gas pipeline grid, the Project's... possible impacts that the Export Project might have on natural gas supply and pricing. Navigant's analysis...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-27
..., of pipeline- quality natural gas. Pangea states that such gas will be delivered to the ST LNG Project... systems \\4\\ via the South Texas Pipeline, thereby allowing natural gas to be supplied through displacement... the siting, construction, and operation of an affiliated natural gas pipeline that will bring feed gas...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-03
..., Regulatory Certainty, and Job Creation Act of 2011 (PL112-90), have imposed additional demands on their... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part 192 [Docket ID PHMSA-2011-0009] RIN 2137-AE71 Pipeline Safety: Expanding the Use of Excess Flow Valves...
(Un)Doing Hegemony in Education: Disrupting School-to-Prison Pipelines for Black Males
ERIC Educational Resources Information Center
Dancy, T. Elon, II
2014-01-01
The school-to-prison pipeline refers to the disturbing national trend in which children are funneled out of public schools and into juvenile and criminal justice systems. The purpose of this article is to theorize how this pipeline fulfills societal commitments to black male over-incarceration. First, the author reviews the troublesome perceptions…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-02
... integrated pipeline system which connects producers and shippers of crude oil and natural gas liquids in....enbridgepartners.com ). Enbridge Partners provides pipeline transportation of petroleum and natural gas in the mid... or importation of liquid petroleum, petroleum products, or other non-gaseous fuels to or from a...
77 FR 68115 - Millennium Pipeline Company, L.L.C.; Notice of Application
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-15
...] Millennium Pipeline Company, L.L.C.; Notice of Application Take notice that on November 1, 2012, Millennium Pipeline Company, L.L.C. (Millennium), One Blue Hill Plaza, Seventh Floor, P.O. Box 1565, Pearl River, New... system to the existing interconnection with Algonquin Gas Transmission, L.L.C. in Ramapo, New York and...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zellmer, S.D.; Rastorfer, J.R.; Van Dyke, G.D.
Implementation of recent federal and state regulations promulgated to protect wetlands makes information on effects of gas pipeline rights-of-way (ROWs) in wetlands essential to the gas pipeline industry. This study is designed to record vegetational changes induced by the construction of a large-diameter gas pipeline through deciduous forested wetlands. Two second-growth forested wetland sites mapped as Lenawee soils, one mature and one subjected to recent selective logging, were selected in Midland County, Michigan. Changes in the adjacent forest and successional development on the ROW are being documented. Cover-class estimates are being made for understory and ROW plant species using 1 × 1-m quadrats. Counts are also being made for all woody species with stems < 2 cm in diameter at breast height (dbh) in the same plots used for cover-class estimates. Individual stem diameters and species counts are being recorded for all woody understory and overstory plants with stems ≥ 2 cm dbh in 10 × 10-m plots. Although analyses of the data have not been completed, preliminary analyses indicate that some destruction of vegetation at the ROW forest edge may have been avoidable during pipeline construction. Rapid regrowth of many native wetland plant species on the ROW occurred because remnants of native vegetation and soil-bearing propagules of existing species survived on the ROW after pipeline construction and seeding operations. 91 refs., 11 figs., 3 tabs.
Assessing fugitive emissions of CH4 from high-pressure gas pipelines in the UK
NASA Astrophysics Data System (ADS)
Clancy, S.; Worrall, F.; Davies, R. J.; Almond, S.; Boothroyd, I.
2016-12-01
Concern over the greenhouse gas impact of the exploitation of unconventional natural gas from shale deposits has caused a spotlight to be shone on the entire hydrocarbon industry. Numerous studies have developed life-cycle emissions inventories to assess the impact that hydraulic fracturing has upon greenhouse gas emissions. Incorporated within life-cycle assessments are transmission and distribution losses, including infrastructure such as pipelines and compressor stations that pressurise natural gas for transport along pipelines. Estimates of fugitive emissions from transmission, storage and distribution have been criticized for reliance on old data from inappropriate sources (1970s Russian gas pipelines). In this study, we investigate fugitive emissions of CH4 from the UK high pressure national transmission system. The study took two approaches. First, CH4 concentration was measured by driving along roads bisecting high pressure gas pipelines and along an equivalent distance on routes where no high pressure gas pipeline was nearby. Five pipeline routes and five equivalent control routes were driven, the test being that CH4 measurements, when adjusted for distance and wind speed, should be greater on any route with a pipe than on any route without one. Second, 5 km of a high pressure gas pipeline and 5 km of equivalent farmland were walked, and soil gas (above the pipeline where present) was analysed every 7 m using a tunable diode laser. In total, 92 km of high pressure pipeline and 72 km of control route were driven over a 10-day period. When adjusted for wind and distance, CH4 fluxes were significantly greater on routes with a pipeline than on those without. The smallest detectable leak was 3% above ambient (1.03 relative concentration); any reading below this threshold was assumed to be ambient. The number of leaks detected along the pipelines correlates to the estimated length of pipe joints, suggesting that there are constant fugitive CH4 emissions from these joints. Scaled up to the UK National Transmission System's pipeline length of 7600 km, this gives a fugitive CH4 flux of 62.6 kt CH4/yr, with a CO2 equivalent of 1570 kt CO2eq/yr - this fugitive emission from high pressure pipelines is 0.14% of the annual gas supply.
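The scale-up quoted at the end is a rate-times-length calculation followed by a global warming potential conversion. A sketch that reproduces the quoted figures; the GWP factor of about 25 is inferred from the numbers given, not stated in the abstract:

```python
# Reproduce the scale-up quoted above: the 7600 km National Transmission
# System flux converted to a per-km rate and to CO2-equivalent. The GWP
# of ~25 is inferred from the quoted kt figures, not stated in the study.

NTS_LENGTH_KM = 7600
FLUX_KT_CH4_PER_YR = 62.6
GWP_CH4 = 25  # 100-yr global warming potential (inferred assumption)

per_km = FLUX_KT_CH4_PER_YR / NTS_LENGTH_KM          # kt CH4 / km / yr
co2_eq = FLUX_KT_CH4_PER_YR * GWP_CH4                # kt CO2-eq / yr
print(f"{per_km * 1e6:.0f} kg CH4 per km per year")  # ~8237 kg/km/yr
print(f"{co2_eq:.0f} kt CO2-eq/yr")                  # ~1565, vs 1570 quoted
```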
NASA Astrophysics Data System (ADS)
Tejedor, J.; Macias-Guarasa, J.; Martins, H. F.; Piote, D.; Pastor-Graells, J.; Martin-Lopez, S.; Corredera, P.; De Pauw, G.; De Smet, F.; Postvoll, W.; Ahlen, C. H.; Gonzalez-Herraez, M.
2017-04-01
This paper presents the first report of on-line and final blind field test results for a pipeline integrity threat surveillance system. The system integrates a machine+activity identification mode and a threat detection mode. Two different pipeline sections were selected for the blind tests: one close to the sensor position, and the other 35 km away from it. Results for the machine+activity identification mode showed that about 46% of the time the machine, the activity or both were correctly identified. In the threat detection mode, 8 out of 10 threats were correctly detected, with 1 false alarm.
Alizadeh, Mahdi; Conklin, Chris J; Middleton, Devon M; Shah, Pallav; Saksena, Sona; Krisa, Laura; Finsterbusch, Jürgen; Faro, Scott H; Mulcahey, M J; Mohamed, Feroze B
2018-04-01
Ghost artifacts are a major contributor to degradation of spinal cord diffusion tensor images. A multi-stage post-processing pipeline was designed, implemented and validated to automatically remove ghost artifacts arising from reduced field of view diffusion tensor imaging (DTI) of the pediatric spinal cord. A total of 12 pediatric subjects including 7 healthy subjects (mean age=11.34 years) with no evidence of spinal cord injury or pathology and 5 patients (mean age=10.96 years) with cervical spinal cord injury were studied. Ghost/true cords, labeled as regions of interest (ROIs), in non-diffusion weighted b0 images were segmented automatically using mathematical morphological processing. Initially, 21 texture features were extracted from each segmented ROI, including 5 first-order features based on the histogram of the image (mean, variance, skewness, kurtosis and entropy) and 16 second-order feature vector elements, incorporating four statistical measures (contrast, correlation, homogeneity and energy) calculated from co-occurrence matrices in the directions of 0°, 45°, 90° and 135°. Next, ten features with a high value of mutual information (MI) relative to the pre-defined target class and within the features were selected as final features, which were input to a trained classifier (adaptive neuro-fuzzy inference system) to separate the true cord from the ghost cord. The implemented pipeline was successfully able to separate the ghost artifacts from true cord structures. The results obtained from the classifier showed a sensitivity of 91%, specificity of 79%, and accuracy of 84% in separating the true cord from ghost artifacts. The results show that the proposed method is promising for the automatic detection of ghost cords present in DTI images of the spinal cord. This step is crucial towards the development of accurate, automatic DTI spinal cord post-processing pipelines. Copyright © 2017 Elsevier Inc. All rights reserved.
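The second-order measures listed (contrast, correlation, homogeneity, energy over four directions) are standard grey-level co-occurrence matrix (GLCM) features. A sketch using scikit-image, which is an assumption; the authors do not name their toolchain, and the ROI here is a random stand-in:

```python
# Sketch of the second-order texture features described above, computed
# from grey-level co-occurrence matrices (GLCM) at 0/45/90/135 degrees.
# scikit-image is used for illustration only; the ROI is a random
# stand-in for a segmented cord region.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi = rng.integers(0, 64, size=(32, 32), dtype=np.uint8)  # stand-in ROI

glcm = graycomatrix(roi, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=64, symmetric=True, normed=True)

features = {}
for prop in ("contrast", "correlation", "homogeneity", "energy"):
    # one value per (distance, angle) pair -> 4 values per property
    features[prop] = graycoprops(glcm, prop).ravel()

# 4 properties x 4 directions = the 16 second-order elements described.
vector = np.concatenate([features[p] for p in features])
print(vector.shape)  # (16,)
```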
19. PIPELINE INTERSECTION AT THE MOUTH OF WAIKOLU VALLEY ON ...
19. PIPELINE INTERSECTION AT THE MOUTH OF WAIKOLU VALLEY ON THE BEACH. VALVE AT RIGHT (WITH WRENCH NEARBY) OPENS TO FLUSH VALLEY SYSTEM OUT. VALVE AT LEFT CLOSES TO KEEP WATER FROM ENTERING SYSTEM ALONG THE PALI DURING REPAIRS. - Kalaupapa Water Supply System, Waikolu Valley to Kalaupapa Settlement, Island of Molokai, Kalaupapa, Kalawao County, HI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawn Lenz; Raymond T. Lines; Darryl Murdock
ITT Industries Space Systems Division (Space Systems) has developed an airborne natural gas leak detection system designed to detect, image, quantify, and precisely locate leaks from natural gas transmission pipelines. This system is called the Airborne Natural Gas Emission Lidar (ANGEL) system. The ANGEL system uses a highly sensitive differential absorption Lidar technology to remotely detect pipeline leaks. The ANGEL System is operated from a fixed wing aircraft and includes automatic scanning, pointing system, and pilot guidance systems. During a pipeline inspection, the ANGEL system aircraft flies at an elevation of 1000 feet above the ground at speeds of between 100 and 150 mph. Under this contract with DOE/NETL, Space Systems was funded to integrate the ANGEL sensor into a test aircraft and conduct a series of flight tests over a variety of test targets including simulated natural gas pipeline leaks. Following early tests in upstate New York in the summer of 2004, the ANGEL system was deployed to Casper, Wyoming to participate in a set of DOE-sponsored field tests at the Rocky Mountain Oilfield Testing Center (RMOTC). At RMOTC the Space Systems team completed integration of the system and flew an operational system for the first time. The ANGEL system flew 2 missions/day for the duration of the 5-day test. Over the course of the week the ANGEL System detected leaks ranging from 100 to 5,000 scfh.
49 CFR 195.402 - Procedural manual for operations, maintenance, and emergencies.
Code of Federal Regulations, 2013 CFR
2013-10-01
... effective. This manual shall be prepared before initial operations of a pipeline system commence, and... in a timely and effective manner. (3) Operating, maintaining, and repairing the pipeline system in... public officials to learn the responsibility and resources of each government organization that may...
The Environmental Technology Verification report discusses the technology and performance of the Parametric Emissions Monitoring System (PEMS) manufactured by ANR Pipeline Company, a subsidiary of Coastal Corporation, now El Paso Corporation. The PEMS predicts carbon dioxide (CO2...
78 FR 39717 - Iroquois Gas Transmission System, LP; Notice of Application
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-02
... associated with these new and modified facilities to Constitution Pipeline Company, LLC (Constitution), a... to establish a new receipt interconnection with Constitution and create an incremental 650,000... Constitution to interconnections with Iroquois' mainline system as well as Tennessee Gas Pipeline Company, LLC...
49 CFR 191.12 - Distribution Systems: Mechanical Fitting Failure Reports
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 3 2011-10-01 2011-10-01 false Distribution Systems: Mechanical Fitting Failure Reports 191.12 Section 191.12 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER...
Thermographic identification of wetted insulation on pipelines in the arctic oilfields
NASA Astrophysics Data System (ADS)
Miles, Jonathan J.; Dahlquist, A. L.; Dash, L. C.
2006-04-01
Steel pipes used at Alaskan oil-producing facilities to transport production crude, gas, and injection water between well house and drill site manifold building, and along cross-country lines to and from central processing facilities, must be insulated in order to protect against the severely cold temperatures that are common during the arctic winter. A problem inherent in this system is that the sealed joints between adjacent layers of the outer wrap will degrade over time and can allow water to breach the system and migrate into and through the insulation. The moisture can ultimately interact with the steel pipe and trigger external corrosion which, if left unchecked, can lead to pipe failure and spillage. A New Technology Evaluation Guideline prepared for ConocoPhillips Alaska, Inc. in 2001 is intended to guide the consideration of new technologies for pipeline inspection in a manner that is safer, faster, and more cost-effective than existing techniques. Infrared thermography (IRT) was identified as promising for identification of wetted insulation regions given that it offers the means to scan a large area quickly from a safe distance and measure the temperature field associated with that area. However, it was also recognized that there are limiting factors associated with an IRT-based approach, including instrument sensitivity, cost, portability, functionality in hostile (arctic) environments, and the training required for proper interpretation of data. A methodology was developed and tested in the field that provides a technique for large-scale screening for wetted regions along insulated pipelines. The results of predictive modeling analysis and testing demonstrate the feasibility, under certain conditions, of identifying wetted insulation areas. The results of the study and recommendations for implementation are described.
DataMed - an open source discovery index for finding biomedical datasets.
Chen, Xiaoling; Gururaj, Anupama E; Ozyurt, Burak; Liu, Ruiling; Soysal, Ergin; Cohen, Trevor; Tiryaki, Firat; Li, Yueling; Zong, Nansu; Jiang, Min; Rogith, Deevakar; Salimi, Mandana; Kim, Hyeon-Eui; Rocca-Serra, Philippe; Gonzalez-Beltran, Alejandra; Farcas, Claudiu; Johnson, Todd; Margolis, Ron; Alter, George; Sansone, Susanna-Assunta; Fore, Ian M; Ohno-Machado, Lucila; Grethe, Jeffrey S; Xu, Hua
2018-01-13
Finding relevant datasets is important for promoting data reuse in the biomedical domain, but it is challenging given the volume and complexity of biomedical data. Here we describe the development of an open source biomedical data discovery system called DataMed, with the goal of promoting the building of additional data indexes in the biomedical domain. DataMed, which can efficiently index and search diverse types of biomedical datasets across repositories, is developed through the National Institutes of Health-funded biomedical and healthCAre Data Discovery Index Ecosystem (bioCADDIE) consortium. It consists of 2 main components: (1) a data ingestion pipeline that collects and transforms original metadata information to a unified metadata model, called DatA Tag Suite (DATS), and (2) a search engine that finds relevant datasets based on user-entered queries. In addition to describing its architecture and techniques, we evaluated individual components within DataMed, including the accuracy of the ingestion pipeline, the prevalence of the DATS model across repositories, and the overall performance of the dataset retrieval engine. Our manual review shows that the ingestion pipeline could achieve an accuracy of 90% and core elements of DATS had varied frequency across repositories. On a manually curated benchmark dataset, the DataMed search engine achieved an inferred average precision of 0.2033 and a precision at 10 (P@10, the proportion of relevant results in the top 10 search results) of 0.6022, by implementing advanced natural language processing and terminology services. Currently, we have made the DataMed system publicly available as an open source package for the biomedical community. © The Author 2018. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
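Of the two retrieval metrics reported, precision at 10 is the simpler to make concrete. A small sketch over binary relevance judgments; the ranked list is a made-up example:

```python
# Sketch of precision-at-10, the simpler of the two retrieval metrics
# quoted above: the fraction of the top 10 ranked results judged
# relevant. The ranked list below is a made-up example.

def precision_at_k(ranked_relevance, k=10):
    """ranked_relevance: list of 0/1 judgments in ranked order."""
    top = ranked_relevance[:k]
    return sum(top) / k

# Example: 6 of the first 10 results relevant -> P@10 = 0.6,
# comparable in form to the 0.6022 reported for DataMed.
judgments = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1]
print(precision_at_k(judgments))  # 0.6
```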
Turning Noise into Signal: Utilizing Impressed Pipeline Currents for EM Exploration
NASA Astrophysics Data System (ADS)
Lindau, Tobias; Becken, Michael
2017-04-01
Impressed Current Cathodic Protection (ICCP) systems are extensively used for the protection of central Europe's dense network of oil-, gas- and water pipelines against destruction by electrochemical corrosion. While ICCP systems usually provide protection by injecting a DC current into the pipeline, mandatory pipeline integrity surveys demand a periodical switching of the current. Consequently, the resulting time varying pipe currents induce secondary electric and magnetic fields in the surrounding earth. While these fields are usually considered to be unwanted cultural noise in electromagnetic exploration, this work aims at utilizing the fields generated by the ICCP system for determining the electrical resistivity of the subsurface. The fundamental period of the switching cycles typically amounts to 15 seconds in Germany and thereby roughly corresponds to periods used in controlled source EM applications (CSEM). For detailed studies we chose an approximately 30 km long pipeline segment near Herford, Germany as a test site. The segment is located close to the southern margin of the Lower Saxony Basin (LSB) and is part of a larger gas pipeline composed of multiple segments. The current injected into the pipeline segment originates in a rectified 50 Hz AC signal which is periodically switched on and off. In contrast to the usual dipole sources used in CSEM surveys, the current distribution along the pipeline is unknown and expected to be non-uniform due to coating defects that cause current to leak into the surrounding soil. However, an accurate current distribution is needed to model the fields generated by the pipeline source. We measured the magnetic fields at several locations above the pipeline and used Biot-Savart's law to estimate the current's decay function. The resulting frequency dependent current distribution shows a current decay away from the injection point as well as a frequency dependent phase shift which increases with distance from the injection point. Electric field data were recorded at 45 stations located in an area of about 60 square kilometers in the vicinity of the pipeline. Additionally, the injected source current was recorded directly at the injection point. Transfer functions between the local electric fields and the injected source current are estimated for frequencies ranging from 0.03 Hz to 15 Hz using robust time series processing techniques. The resulting transfer functions are inverted for a 3D conductivity model of the subsurface using an elaborate pipeline model. We interpret the model with regard to the local geologic setting, demonstrating the method's capability to image the subsurface.
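Estimating the pipe current from magnetic fields measured above the line follows from the Biot-Savart law; for an idealized infinite straight conductor it reduces to B = mu0 I / (2 pi r), so the inversion is one line. A sketch under that simplifying assumption, with illustrative field values:

```python
# Sketch of the current estimate from magnetic-field measurements above
# the pipeline. Idealizes the pipe as an infinite straight conductor, so
# B = mu0 * I / (2 * pi * r); the field value below is illustrative.

import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T*m/A)

def pipe_current_amps(b_tesla: float, distance_m: float) -> float:
    """Invert B = mu0*I/(2*pi*r) for the current I in the pipe."""
    return 2 * math.pi * distance_m * b_tesla / MU0

# Example: 100 nT measured 2 m from a buried pipe -> I = 1 A.
print(f"{pipe_current_amps(100e-9, 2.0):.2f} A")
```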
Designing a reliable leak bio-detection system for natural gas pipelines.
Batzias, F A; Siontorou, C G; Spanidis, P-M P
2011-02-15
Monitoring of natural gas (NG) pipelines is an important task for economical and safe operation, loss prevention and environmental protection. Timely and reliable leak detection of gas pipelines, therefore, plays a key role in the overall integrity management of the pipeline system. Owing to the various limitations of the currently available techniques and the surveillance area that needs to be covered, research on new detector systems is still thriving. Biosensors are widely considered a niche technology in the environmental market, since they afford the desired detector capabilities at low cost, provided they have been properly designed/developed and rationally placed/networked/maintained with the aid of operational research techniques. This paper addresses NG leakage surveillance through a robust cooperative/synergistic scheme between biosensors and conventional detector systems; the network is validated in situ and optimized in order to provide reliable information at the required granularity level. The proposed scheme is substantiated through a knowledge-based approach and relies on Fuzzy Multicriteria Analysis (FMCA) for selecting the biosensor design that best suits both the target analyte and the operational micro-environment. This approach is illustrated in the design of leak surveying over a pipeline network in Greece. Copyright © 2010 Elsevier B.V. All rights reserved.
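The FMCA selection step could look roughly like the following toy ranking, with triangular fuzzy scores defuzzified by centroid and combined by weighted sum. The criteria, weights, and scores are invented for illustration and are not the authors' model.

```python
# Toy fuzzy multicriteria ranking of candidate biosensor designs.
# All numbers are illustrative placeholders.

def defuzzify(tri):
    """Centroid of a triangular fuzzy number (low, mode, high)."""
    low, mode, high = tri
    return (low + mode + high) / 3.0

designs = {
    "amperometric": {"sensitivity": (6, 8, 9), "robustness": (4, 5, 7), "cost": (6, 7, 8)},
    "optical":      {"sensitivity": (7, 9, 10), "robustness": (3, 4, 6), "cost": (3, 4, 5)},
}
weights = {"sensitivity": 0.5, "robustness": 0.3, "cost": 0.2}

scores = {name: sum(weights[c] * defuzzify(f) for c, f in crit.items())
          for name, crit in designs.items()}
print(max(scores, key=scores.get), scores)  # best design and all scores
```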
Use of FBG sensors for health monitoring of pipelines
NASA Astrophysics Data System (ADS)
Felli, Ferdinando; Paolozzi, Antonio; Vendittozzi, Cristian; Paris, Claudio; Asanuma, Hiroshi
2016-04-01
The infrastructures for oil and gas production and distribution need reliable monitoring systems. The risks for pipelines, in particular, are not limited to natural disasters (landslides, earthquakes, extreme environmental conditions) and accidents, but also include damage related to criminal activities, such as oil theft. Existing monitoring systems are not adequate for detecting damage from oil theft, and on several occasions the illegal activities have resulted in leakage of oil and catastrophic environmental pollution. Systems based on fiber optic FBG (Fiber Bragg Grating) sensors present a number of advantages for pipeline monitoring. FBG sensors can withstand harsh environments, are immune to interference, and can be used to develop a smart system for simultaneously monitoring several physical quantities, such as strain, temperature, acceleration, pressure, and vibration. The monitoring station can be positioned tens of kilometers away from the measuring points, lowering the costs and the complexity of the system. This paper describes tests on a sensor, based on FBG technology, developed specifically for detecting damage to pipelines due to illegal activities (drilling of the pipes), that can be integrated into a smart monitoring chain.
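For context, strain is conventionally recovered from an FBG wavelength shift with the standard relation sketched below. The coefficients are typical silica values, not the calibration of the sensor described in this paper.

```python
# Standard FBG relation (textbook, not the authors' calibration):
#   dLambda / lambda0 = (1 - p_e) * strain + (alpha + xi) * dT
P_E = 0.22          # effective photo-elastic coefficient of silica (typical)
ALPHA_XI = 6.67e-6  # combined thermal-expansion + thermo-optic term, 1/degC (typical)

def strain_from_shift(lambda0_nm, dlambda_nm, dtemp_c=0.0):
    """Return mechanical strain, compensating a known temperature change."""
    total = dlambda_nm / lambda0_nm
    return (total - ALPHA_XI * dtemp_c) / (1 - P_E)

# A 1550 nm grating shifted by +0.012 nm at constant temperature:
print(strain_from_shift(1550.0, 0.012))  # ~1e-5 strain, i.e. 10 microstrain
```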
RGAugury: a pipeline for genome-wide prediction of resistance gene analogs (RGAs) in plants.
Li, Pingchuan; Quan, Xiande; Jia, Gaofeng; Xiao, Jin; Cloutier, Sylvie; You, Frank M
2016-11-02
Resistance gene analogs (RGAs), such as NBS-encoding proteins, receptor-like protein kinases (RLKs) and receptor-like proteins (RLPs), are potential R-genes that contain specific conserved domains and motifs. Thus, RGAs can be predicted based on their conserved structural features using bioinformatics tools. Computer programs have been developed for the identification of individual domains and motifs from the protein sequences of RGAs, but none offer a systematic assessment of the different types of RGAs. A user-friendly and efficient pipeline is needed for large-scale genome-wide RGA prediction in the growing number of sequenced plant genomes. An integrative pipeline, named RGAugury, was developed to automate RGA prediction. The pipeline first identifies RGA-related protein domains and motifs, namely nucleotide binding site (NB-ARC), leucine-rich repeat (LRR), transmembrane (TM), serine/threonine and tyrosine kinase (STTK), lysin motif (LysM), coiled-coil (CC) and Toll/Interleukin-1 receptor (TIR). RGA candidates are identified and classified into four major families based on the presence of combinations of these RGA domains and motifs: NBS-encoding, TM-CC, and membrane-associated RLP and RLK. All time-consuming analyses of the pipeline are parallelized to improve performance. The pipeline was evaluated using the well-annotated Arabidopsis genome. A total of 98.5, 85.2, and 100% of the reported NBS-encoding genes, membrane-associated RLPs and RLKs were validated, respectively. The pipeline was also successfully applied to predict RGAs for 50 sequenced plant genomes. A user-friendly web interface was implemented to ease command line operations, facilitate visualization and simplify result management for multiple datasets. RGAugury is an efficient, integrative bioinformatics tool for large-scale genome-wide identification of RGAs. It is freely available at Bitbucket: https://bitbucket.org/yaanlpc/rgaugury.
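A hedged sketch of the family-assignment logic such a pipeline applies once domains and motifs have been detected; RGAugury's actual decision rules may differ in detail.

```python
# Rule-based assignment of an RGA candidate to a family from its detected
# domains/motifs. Illustrative rules only, not RGAugury's exact logic.

def classify_rga(domains):
    d = set(domains)
    if "NB-ARC" in d:
        return "NBS-encoding"      # incl. CC-NBS-LRR / TIR-NBS-LRR subclasses
    if {"TM", "LRR", "STTK"} <= d:
        return "RLK"               # membrane-associated receptor-like kinase
    if {"TM", "LRR"} <= d:
        return "RLP"               # receptor-like protein (no kinase domain)
    if {"TM", "CC"} <= d:
        return "TM-CC"
    return "unclassified"

print(classify_rga(["TIR", "NB-ARC", "LRR"]))  # NBS-encoding
print(classify_rga(["TM", "LRR", "STTK"]))     # RLK
```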
GRAPE: a graphical pipeline environment for image analysis in adaptive magnetic resonance imaging.
Gabr, Refaat E; Tefera, Getaneh B; Allen, William J; Pednekar, Amol S; Narayana, Ponnada A
2017-03-01
We present a platform, GRAphical Pipeline Environment (GRAPE), to facilitate the development of patient-adaptive magnetic resonance imaging (MRI) protocols. GRAPE is an open-source project implemented in the Qt C++ framework to enable graphical creation, execution, and debugging of real-time image analysis algorithms integrated with the MRI scanner. The platform provides the tools and infrastructure to design new algorithms, to build and execute an array of image analysis routines, and to include existing analysis libraries, all within a graphical environment. The application of GRAPE is demonstrated in multiple MRI applications, and the software is described in detail for both the user and the developer. GRAPE was successfully used to implement and execute three applications in MRI of the brain, performed on a 3.0-T MRI scanner: (i) a multi-parametric pipeline for segmenting the brain tissue and detecting lesions in multiple sclerosis (MS), (ii) patient-specific optimization of the 3D fluid-attenuated inversion recovery MRI scan parameters to enhance the contrast of brain lesions in MS, and (iii) an algebraic image method for combining two MR images for improved lesion contrast. GRAPE allows graphical development and execution of image analysis algorithms for inline, real-time, and adaptive MRI applications.
Control structures for high speed processors
NASA Technical Reports Server (NTRS)
Maki, G. K.; Mankin, R.; Owsley, P. A.; Kim, G. M.
1982-01-01
A special processor was designed to function as a Reed Solomon decoder with a throughput data rate in the MHz range. This data rate is significantly greater than is possible with conventional digital architectures. To achieve this rate, the processor design includes sequential, pipelined, distributed, and parallel processing. The processor was designed using a high-level register transfer language (RTL). The RTL can be used to describe how the different processes are implemented in hardware. One problem of special interest was the development of dependent processes, which are analogous to software subroutines. For greater flexibility, the RTL control structure was implemented in ROM. The special-purpose hardware required approximately 1000 SSI and MSI components. The data rate throughput is 2.5 megabits/second, achieved through the use of pipelined and distributed processing. This data rate can be compared with 800 kilobits/second in a recently proposed very large scale integration design of a Reed Solomon encoder.
Numerical Modeling of Mechanical Behavior for Buried Steel Pipelines Crossing Subsidence Strata
Han, C. J.
2015-01-01
This paper addresses the mechanical behavior of buried steel pipelines crossing subsidence strata. The investigation is based on numerical simulation of the nonlinear response of the pipeline-soil system through the finite element method, considering large strain and displacement, inelastic material behavior of the buried pipeline and the surrounding soil, as well as contact and friction on the pipeline-soil interface. The effects of key parameters on the mechanical behavior of the buried pipeline were investigated, such as strata subsidence, diameter-thickness ratio, burial depth, internal pressure, friction coefficient and soil properties. The results show that the maximum strain appears on the outer transition subsidence section of the pipeline, and its cross section becomes concave. As strata subsidence and diameter-thickness ratio increase, the out-of-roundness, longitudinal strain and equivalent plastic strain increase gradually. As burial depth increases, the deflection, out-of-roundness and strain of the pipeline decrease. Internal pressure and friction coefficient have little effect on the deflection of the buried pipeline. With increasing internal pressure, out-of-roundness is reduced and strain gradually increases. The physical properties of the soil have a great influence on the mechanical behavior of the buried pipeline. The results from the present study can be used for the development of optimization design and preventive maintenance for buried steel pipelines. PMID:26103460
SCRAM: a pipeline for fast index-free small RNA read alignment and visualization.
Fletcher, Stephen J; Boden, Mikael; Mitter, Neena; Carroll, Bernard J
2018-03-15
Small RNAs play key roles in gene regulation, defense against viral pathogens and maintenance of genome stability, though many aspects of their biogenesis and function remain to be elucidated. SCRAM (Small Complementary RNA Mapper) is a novel, simple-to-use short read aligner and visualization suite that enhances exploration of small RNA datasets. The SCRAM pipeline is implemented in Go and Python, and is freely available under MIT license. Source code, multiplatform binaries and a Docker image can be accessed via https://sfletc.github.io/scram/. s.fletcher@uq.edu.au. Supplementary data are available at Bioinformatics online.
Commanding Constellations (Pipeline Architecture)
NASA Technical Reports Server (NTRS)
Ray, Tim; Condron, Jeff
2003-01-01
Providing ground command software for constellations of spacecraft is a challenging problem. Reliable command delivery requires a feedback loop; for a constellation there will likely be an independent feedback loop for each constellation member. Each command must be sent via the proper Ground Station, which may change from one contact to the next (and may be different for different members). Dynamic configuration of the ground command software is usually required (e.g. directives to configure each member's feedback loop and assign the appropriate Ground Station). For testing purposes, there must be a way to insert command data at any level in the protocol stack. The Pipeline architecture described in this paper can support all these capabilities with a sequence of software modules (the pipeline), and a single self-identifying message format (for all types of command data and configuration directives). The Pipeline architecture is quite simple, yet it can solve some complex problems. The resulting solutions are conceptually simple, and therefore, reliable. They are also modular, and therefore, easy to distribute and extend. We first used the Pipeline architecture to design a CCSDS (Consultative Committee for Space Data Systems) Ground Telecommand system (to command one spacecraft at a time with a fixed Ground Station interface). This pipeline was later extended to include gateways to any of several Ground Stations. The resulting pipeline was then extended to handle a small constellation of spacecraft. The use of the Pipeline architecture allowed us to easily handle the increasing complexity. This paper will describe the Pipeline architecture, show how it was used to solve each of the above commanding situations, and how it can easily be extended to handle larger constellations.
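A toy sketch of the Pipeline idea in Python: a chain of modules passing a single self-identifying message type, so configuration directives and command data travel through the same path. All module and field names here are invented, not the CCSDS system's actual interfaces.

```python
# Minimal pipeline of modules with one self-identifying message format.

class Message:
    def __init__(self, kind, payload):
        self.kind = kind        # self-identifying tag: command data or directive
        self.payload = payload

class Module:
    """A pipeline stage: handle the messages it understands, pass the rest on."""
    def __init__(self, handles, action):
        self.handles, self.action, self.next = handles, action, None

    def send(self, msg):
        if msg.kind in self.handles:
            msg = self.action(msg)
        if self.next and msg is not None:
            self.next.send(msg)

def build_pipeline(*modules):
    for a, b in zip(modules, modules[1:]):
        a.next = b
    return modules[0]

# Configuration directives and command data flow through the same chain:
framer  = Module({"command"}, lambda m: Message("frame", b"hdr" + m.payload))
station = Module({"frame", "configure"},
                 lambda m: print("to ground station:", m.kind) or None)
head = build_pipeline(framer, station)
head.send(Message("configure", {"ground_station": "A"}))
head.send(Message("command", b"\x01\x02"))
```

Because each stage only inspects the message tag, extending the chain (for example, inserting a gateway stage per Ground Station or per constellation member) does not disturb the existing modules.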
Finite-Element Modeling of a Damaged Pipeline Repaired Using the Wrap of a Composite Material
NASA Astrophysics Data System (ADS)
Lyapin, A. A.; Chebakov, M. I.; Dumitrescu, A.; Zecheru, G.
2015-07-01
The nonlinear static problem of FEM modeling of a damaged pipeline repaired by a composite material and subjected to internal pressure is considered. The calculation is carried out using plasticity theory for the pipeline material and considering the polymeric filler and the composite wrap. The level of stresses in various zones of the structure is analyzed. The most widespread alloy used for oil pipelines is selected as pipe material. The contribution of each component of the pipeline-filler-wrap system to the level of stresses is investigated. The effect of the number of composite wrap layers is estimated. The results obtained allow one to decrease the costs needed for producing test specimens.
A bipolar population counter using wave pipelining to achieve 2.5 x normal clock frequency
NASA Technical Reports Server (NTRS)
Wong, Derek C.; De Micheli, Giovanni; Flynn, Michael J.; Huston, Robert E.
1992-01-01
Wave pipelining is a technique for pipelining digital systems that can increase clock frequency in practical circuits without increasing the number of storage elements. In wave pipelining, multiple coherent waves of data are sent through a block of combinational logic by applying new inputs faster than the delay through the logic. The throughput of a 63-b CML population counter was increased from 97 to 250 MHz using wave pipelining. The internal circuit is flowthrough combinational logic. Novel CAD methods have balanced all input-to-output paths to about the same delay. This allows multiple data waves to propagate in sequence when the circuit is clocked faster than its propagation delay.
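A back-of-envelope calculation of why this works: in a wave-pipelined block the usable clock period is bounded by the spread of path delays plus register overhead, rather than by the maximum delay. The delay numbers below are assumptions chosen to reproduce the 97 and 250 MHz figures, not measurements from the paper.

```python
# Wave pipelining rate limit: period >= (d_max - d_min) + setup + skew,
# versus the conventional limit period >= d_max + overhead.

def max_clock_mhz(d_max_ns, d_min_ns, setup_ns, skew_ns):
    t_min = (d_max_ns - d_min_ns) + setup_ns + skew_ns  # minimum usable period
    return 1e3 / t_min

# Conventional clocking of a 10 ns logic block (plus 0.3 ns overhead):
print(1e3 / (10.0 + 0.3))                   # ~97 MHz
# Wave pipelining with CAD-balanced paths (small d_max - d_min spread):
print(max_clock_mhz(10.0, 6.6, 0.3, 0.3))   # 250 MHz with a 3.4 ns spread
```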
2013-12-01
Siberian Pipeline Explosion (1982): In 1982, intruders planted a Trojan horse in the SCADA system that controls the Siberian Pipeline.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-05
... is a new natural gas pipeline system that would transport natural gas produced on the Alaska North... provisions of section 7(c) of the Natural Gas Act (NGA) and the Alaska Natural Gas Pipeline Act of 2004... pipeline to Valdez, Alaska for delivery into a liquefied natural gas (LNG) plant for liquefaction and...
ERIC Educational Resources Information Center
Boeke, Marianne; Zis, Stacey; Ewell, Peter
2011-01-01
With support from the Bill and Melinda Gates Foundation, the National Center for Higher Education Management Systems (NCHEMS) is engaged in a two-year project centered on state policies that foster student progression and success in the "adult re-entry pipeline." The adult re-entry pipeline consists of the many alternative pathways to…
Pipelined CPU Design with FPGA in Teaching Computer Architecture
ERIC Educational Resources Information Center
Lee, Jong Hyuk; Lee, Seung Eun; Yu, Heon Chang; Suh, Taeweon
2012-01-01
This paper presents a pipelined CPU design project with a field programmable gate array (FPGA) system in a computer architecture course. The class project is a five-stage pipelined 32-bit MIPS design with experiments on the Altera DE2 board. For proper scheduling, milestones were set every one or two weeks to help students complete the project on…
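As a classroom-style illustration of the five-stage idea (not the students' Altera design), the sketch below traces instructions advancing one stage per cycle, ignoring hazards.

```python
# Trace instructions through the five classic MIPS stages, one stage per
# cycle, with no hazard or forwarding logic. Purely illustrative.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def trace(instructions, cycles):
    for c in range(cycles):
        row = []
        for i, name in enumerate(instructions):
            s = c - i  # instruction i enters the pipeline at cycle i
            row.append(f"{name}:{STAGES[s]}" if 0 <= s < len(STAGES)
                       else " " * (len(name) + 3))
        print(f"cycle {c + 1}: " + "  ".join(row))

trace(["lw", "add", "sw"], 7)  # three instructions complete in 7 cycles
```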
Using steady-state equations for transient flow calculation in natural gas pipelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddox, R.N.; Zhou, P.
1984-04-02
Maddox and Zhou have extended their technique for calculating the unsteady-state behavior of straight gas pipelines to complex pipeline systems and networks. After developing the steady-state flow rate and pressure profile for each pipe in the network, analysts can perform the transient-state analysis in the real-time step-wise manner described for this technique.
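A minimal sketch of the successive-steady-states idea: at each time step a steady flow is recomputed, then the stored gas (linepack) is updated from the flow imbalance. The one-pipe model, constants, and crude pressure-inventory relation are illustrative assumptions, not Maddox and Zhou's formulation.

```python
# Step-wise transient approximation for one pipe using a simplified
# isothermal steady-flow relation Q = C * sqrt(p_in^2 - p_out^2).

def steady_flow(p_in, p_out, c=1.0e-4):
    return c * max(p_in**2 - p_out**2, 0.0) ** 0.5

def simulate(p_in, p_out, demand, dt=60.0, steps=5, inventory=1000.0, vol_coef=0.5):
    for _ in range(steps):
        q_in = steady_flow(p_in, p_out)
        inventory += (q_in - demand) * dt   # linepack mass balance
        p_out = inventory / vol_coef        # crude pressure-inventory relation
        print(f"q_in={q_in:.3f}  p_out={p_out:.1f}  linepack={inventory:.1f}")

simulate(p_in=6000.0, p_out=2000.0, demand=0.35)
```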
Coupling Network Computing Applications in Air-cooled Turbine Blades Optimization
NASA Astrophysics Data System (ADS)
Shi, Liang; Yan, Peigang; Xie, Ming; Han, Wanjin
2018-05-01
By establishing control parameters from the blade exterior to its interior, a parametric design of air-cooled turbine blades based on the airfoil has been implemented. Building on fast updating of structural features and solid-model generation, a complex cooling system has been created. Different flow units are modeled into a complex network topology with parallel and serial connections. Applying one-dimensional flow theory, programs were written to obtain the physical quantities of the pipeline network along the flow path, including flow rate, pressure, temperature and other parameters. These internal-unit parameters are set, by interpolation, as inner boundary conditions for the external flow field calculation program HIT-3D, thus achieving a full-field thermal coupling simulation. Studies from the literature were referenced to verify the effectiveness of the pipeline network program and the coupling algorithm. After that, on the basis of a modified design and with the help of iSIGHT-FD, an optimization platform was established. Through the MIGA (multi-island genetic algorithm) mechanism, the target of enhancing cooling efficiency was reached, and thermal stress was effectively reduced. The work in this paper is significant for rapidly deploying cooling structure designs.
Blau, Ashley; Brown, Alison; Mahanta, Lisa; Amr, Sami S.
2016-01-01
The Translational Genomics Core (TGC) at Partners Personalized Medicine (PPM) serves as a fee-for-service core laboratory for Partners Healthcare researchers, providing access to technology platforms and analysis pipelines for genomic, transcriptomic, and epigenomic research projects. The interaction of the TGC with various components of PPM provides it with a unique infrastructure that allows for greater IT and bioinformatics opportunities, such as sample tracking and data analysis. The following article describes some of the unique opportunities available to an academic research core operating within PPM, such as the ability to develop analysis pipelines with a dedicated bioinformatics team and maintain a flexible Laboratory Information Management System (LIMS) with the support of an internal IT team, as well as the operational challenges encountered in responding to emerging technologies, diverse investigator needs, and high staff turnover. In addition, the implementation and operational role of the TGC in the Partners Biobank genotyping project of over 25,000 samples is presented as an example of core activities working with other components of PPM. PMID:26927185
The Raptor Real-Time Processing Architecture
NASA Astrophysics Data System (ADS)
Galassi, M.; Starr, D.; Wozniak, P.; Brozdin, K.
The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor ``fovea'' cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback, etc.) is implemented with a ``component'' approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as ``open source'' software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.
Raptor -- Mining the Sky in Real Time
NASA Astrophysics Data System (ADS)
Galassi, M.; Borozdin, K.; Casperson, D.; McGowan, K.; Starr, D.; White, R.; Wozniak, P.; Wren, J.
2004-06-01
The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor ``fovea'' cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback...) is implemented with a ``component'' approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as "open source" software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.
BiDiBlast: comparative genomics pipeline for the PC.
de Almeida, João M G C F
2010-06-01
Bi-directional BLAST is a simple approach to detect, annotate, and analyze candidate orthologous or paralogous sequences in a single go. This procedure is usually confined to the realm of customized Perl scripts, typically tuned for UNIX-like environments. Porting those scripts to other operating systems involves refactoring them, as well as installing the Perl programming environment with the required libraries. To overcome these limitations, a data pipeline was implemented in Java. This application submits two batches of sequences to local versions of the NCBI BLAST tool, manages result lists, and refines both bi-directional and simple hits. GO Slim terms are attached to hits, several statistics are derived, and molecular evolution rates are estimated through PAML. The results are written to a set of delimited text tables intended for further analysis. The provided graphical user interface allows friendly interaction with this application, which is documented and available to download at http://moodle.fct.unl.pt/course/view.php?id=2079 or https://sourceforge.net/projects/bidiblast/ under the GNU GPL license. Copyright 2010 Beijing Genomics Institute. Published by Elsevier Ltd. All rights reserved.
Aerial surveillance for gas and liquid hydrocarbon pipelines using a flame ionization detector (FID)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riquetti, P.V.; Fletcher, J.I.; Minty, C.D.
1996-12-31
A novel application for the detection of airborne hydrocarbons has been successfully developed by means of a highly sensitive, fast-responding Flame Ionization Detector (FID). The traditional way to monitor pipeline leaks has been by ground crews using specific sensors or by airborne crews highly trained to observe anomalies associated with leaks during periodic surveys of the pipeline right-of-way. The goal has been to detect leaks in a fast and cost-effective way before the associated spill becomes a costly and hazardous problem. This paper describes a leak detection system combined with a global positioning system (GPS) and a computerized data output designed to pinpoint the presence of hydrocarbons in the air space of the pipeline's right-of-way. Fixed-wing aircraft as well as helicopters have been successfully used as airborne platforms. Natural gas, crude oil and finished products pipelines in Canada and the US have been surveyed using this technology, with excellent correlation between the aircraft detection and in situ ground detection. The information obtained is processed with proprietary software and reduced to simple coordinates. Results are transferred to ground crews to effect the necessary repairs.
Ho, Cheng-I; Lin, Min-Der; Lo, Shang-Lien
2010-07-01
A methodology based on the integration of a seismic-based artificial neural network (ANN) model and a geographic information system (GIS) to assess water leakage and to prioritize pipeline replacement is developed in this work. Qualified pipeline break-event data derived from the Taiwan Water Corporation Pipeline Leakage Repair Management System were analyzed. "Pipe diameter," "pipe material," and "the number of magnitude-3(+) earthquakes" were employed as the input factors of the ANN, while "the number of monthly breaks" was used as the prediction output. This study is the first attempt to incorporate earthquake data into a break-event ANN prediction model. The spatial distribution of the pipeline break-event data was analyzed and visualized by GIS, through which users can swiftly identify leakage hotspots. A northeastern township in Taiwan, frequently affected by earthquakes, is chosen as the case study. Compared to traditional processes for determining the priorities of pipeline replacement, the methodology developed is more effective and efficient. Likewise, the methodology can overcome the difficulty of prioritizing pipeline replacement even in situations where break-event records are unavailable.
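A minimal sketch of the described setup, substituting scikit-learn for whatever framework the authors used: the three inputs and the target follow the abstract, while the training data are fabricated placeholders.

```python
# Toy ANN regression: inputs are pipe diameter (mm), pipe material (encoded),
# and count of magnitude-3+ earthquakes; target is monthly break count.
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = [[100, 0, 2], [150, 1, 0], [200, 0, 5], [100, 2, 1], [300, 1, 3]]
y = [4, 1, 7, 3, 5]  # monthly breaks (fabricated numbers)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
print(model.predict([[120, 0, 4]]))  # predicted monthly breaks for a new pipe
```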
U.S. pipeline industry enters new era
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnsen, M.R.
1999-11-01
The largest construction project in North America this year and next--the Alliance Pipeline--marks some advances for the US pipeline industry. With the Alliance Pipeline system (Alliance), mechanized welding and ultrasonic testing are making their debuts in the US as primary mainline construction techniques. Particularly in Canada and Europe, mechanized welding technology has been used for both onshore and offshore pipeline construction for at least 15 years. However, it has never before been used to build a cross-country pipeline in the US, although it has been tested on short segments. This time, an accelerated construction schedule, among other reasons, necessitated the use of mechanized gas metal arc welding (GMAW). The $3-billion pipeline will deliver natural gas from northwestern British Columbia and northeastern Alberta in Canada to a hub near Chicago, Ill., where it will connect to the North American pipeline grid. Once the pipeline is completed and buried, crews will return the topsoil. Corn and other crops will reclaim the land. While the casual passerby probably won't know the Alliance pipeline is there, it may have a far-reaching effect on the way mainline pipelines are built in the US. For even though mechanized welding and ultrasonic testing are being used for the first time in the United States on this project, some US workers had already gained experience with the technology on projects elsewhere. And work on this pipeline has certainly developed a much larger pool of experienced workers for industry to draw from. The Alliance project could well signal the start of a new era in US pipeline construction.
77 FR 76024 - Combined Notice of Filings
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-26
... that the Commission has received the following Natural Gas Pipeline Rate and Refund Report filings...: Natural Gas Pipeline Company of America. Description: Removal of Agreements to be effective 1/19/2013... Natural Gas System, L.L.C. Description: Gulfstream Natural Gas System, L.L.C. submits tariff filing per...
Continuous Turbidity Monitoring in the Indian Creek Watershed, Tazewell County, Virginia, 2006-08
Moyer, Douglas; Hyer, Kenneth
2009-01-01
Thousands of miles of natural gas pipelines are installed annually in the United States. These pipelines commonly cross streams, rivers, and other water bodies during pipeline construction. A major concern associated with pipelines crossing water bodies is increased sediment loading and the subsequent impact on the ecology of the aquatic system. Several studies have investigated the techniques used to install pipelines across surface-water bodies and their effect on downstream suspended-sediment concentrations. These studies frequently employ the evaluation of suspended-sediment or turbidity data that were collected using discrete sample-collection methods. No studies, however, have evaluated the utility of continuous turbidity monitoring for identifying real-time sediment input and providing a robust dataset for the evaluation of long-term changes in suspended-sediment concentration as it relates to a pipeline crossing. In 2006, the U.S. Geological Survey, in cooperation with East Tennessee Natural Gas and the U.S. Fish and Wildlife Service, began a study to monitor the effects of construction of the Jewell Ridge Lateral natural gas pipeline on turbidity conditions below pipeline crossings of Indian Creek and an unnamed tributary to Indian Creek, in Tazewell County, Virginia. The potential for increased sediment loading to Indian Creek is of major concern for watershed managers because Indian Creek is listed as one of Virginia's Threatened and Endangered Species Waters and contains critical habitat for two freshwater mussel species, purple bean (Villosa perpurpurea) and rough rabbitsfoot (Quadrula cylindrica strigillata). Additionally, Indian Creek contains the last known reproducing population of the tan riffleshell (Epioblasma florentina walkeri). Therefore, the objectives of the U.S. Geological Survey monitoring effort were to (1) develop a continuous turbidity monitoring network that attempted to measure real-time changes in suspended sediment (using turbidity as a surrogate) downstream from the pipeline crossings, and (2) provide continuous turbidity data that enable the development of a real-time turbidity-input warning system and assessment of long-term changes in turbidity conditions. Water-quality conditions were assessed using continuous water-quality monitors deployed upstream and downstream from the pipeline crossings in Indian Creek and the unnamed tributary. These paired upstream and downstream monitors were outfitted with turbidity, pH (for Indian Creek only), specific-conductance, and water-temperature sensors. Water-quality data were collected continuously (every 15 minutes) during three phases of the pipeline construction: pre-construction, during construction, and post-construction. Continuous turbidity data were evaluated at various time steps to determine whether the construction of the pipeline crossings had an effect on downstream suspended-sediment conditions in Indian Creek and the unnamed tributary. These continuous turbidity data were analyzed in real time with the aid of a turbidity-input warning system. A warning occurred when turbidity values downstream from the pipeline were 6 Formazin Nephelometric Units or 15 percent (depending on the observed range) greater than turbidity upstream from the pipeline crossing. Statistical analyses also were performed on monthly and phase-of-construction turbidity data to determine if the pipeline crossing served as a long-term source of sediment.
Results of this intensive water-quality monitoring effort indicate that values of turbidity in Indian Creek increased significantly between the upstream and downstream water-quality monitors during the construction of the Jewell Ridge pipeline. The magnitude of the significant turbidity increase, however, was small (less than 2 Formazin Nephelometric Units). Patterns in the continuous turbidity data indicate that the actual pipeline crossing of Indian Creek had little influence on downstream water quality; co
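The warning rule stated above lends itself to a direct sketch. The 40-FNU changeover between the absolute and relative criteria is an assumption, since the abstract only says the criterion depends on the observed range.

```python
# Turbidity-input warning: flag a reading when downstream turbidity exceeds
# upstream by more than 6 FNU (low range) or by more than 15 percent
# (higher range). The 40-FNU changeover point is assumed.

def turbidity_warning(upstream_fnu, downstream_fnu, changeover=40.0):
    if upstream_fnu < changeover:
        return downstream_fnu > upstream_fnu + 6.0
    return downstream_fnu > upstream_fnu * 1.15

print(turbidity_warning(3.0, 10.0))   # True  (absolute criterion trips)
print(turbidity_warning(80.0, 90.0))  # False (within 15 percent)
```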
Microfluidics‐based 3D cell culture models: Utility in novel drug discovery and delivery research
Gupta, Nilesh; Liu, Jeffrey R.; Patel, Brijeshkumar; Solomon, Deepak E.; Vaidya, Bhuvaneshwar
2016-01-01
The implementation of microfluidic devices within life sciences has furthered the possibilities of both academic and industrial applications such as rapid genome sequencing, predictive drug studies, and single cell manipulation. In contrast to the preferred two-dimensional cell-based screening, three-dimensional (3D) systems have more in vivo relevance as well as the ability to perform as a predictive tool for the success or failure of a drug screening campaign. 3D cell culture has shown an adaptive response to the recent advancements in microfluidic technologies, which has allowed better control over spheroid sizes and subsequent drug screening studies. In this review, we highlight the most significant developments in the field of microfluidic 3D culture over the past half-decade, with a special focus on their benefits and challenges down the road. With newer technologies emerging, implementation of microfluidic 3D culture systems into the drug discovery pipeline is right around the corner. PMID:29313007
Forecasting and Evaluation of Gas Pipelines Geometric Forms Breach Hazard
NASA Astrophysics Data System (ADS)
Voronin, K. S.
2016-10-01
During operation, main gas pipelines are subject to permanent pressure drops, which lead to lengthening and, as a result, to instability of their position in space. In dynamic systems that have feedback, phenomena preceding emergencies should be observable. The article discusses the forced vibrations of the gas pipeline's cylindrical surface under dynamic loads caused by pressure surges, and the process by which its geometric shape deforms. The frequency of vibrations arising in the pipeline at the stage preceding its bending is determined. Identification of this frequency can serve as the basis for a method of monitoring the technical condition of the gas pipeline, and forecasting possible emergency situations allows reconstruction work to be planned and carried out in due time on sections of the gas pipeline that may deviate from the design position.
Nayor, Jennifer; Borges, Lawrence F; Goryachev, Sergey; Gainer, Vivian S; Saltzman, John R
2018-07-01
The adenoma detection rate (ADR) is a widely used colonoscopy quality indicator. Calculation of ADR is labor-intensive and cumbersome using current electronic medical databases. Natural language processing (NLP) is a method used to extract meaning from unstructured or free-text data. Our aim was to develop and validate an accurate automated process for calculating ADR and the serrated polyp detection rate (SDR) on data stored in widely used electronic health record systems, specifically the Epic electronic health record system, the Provation® endoscopy reporting system, and the Sunquest PowerPath pathology reporting system. Screening colonoscopies performed between June 2010 and August 2015 were identified using the Provation® reporting tool. An NLP pipeline was developed to identify adenomas and sessile serrated polyps (SSPs) in pathology reports corresponding to these colonoscopy reports. The pipeline was validated using a manual search. Precision, recall, and effectiveness of the NLP pipeline were calculated, and ADR and SDR were then computed. We identified 8032 screening colonoscopies that were linked to 3821 pathology reports (47.6%). The NLP pipeline had an accuracy of 100% for adenomas and 100% for SSPs. Mean total ADR was 29.3% (range 14.7-53.3%); mean male ADR was 35.7% (range 19.7-62.9%); and mean female ADR was 24.9% (range 9.1-51.0%). Mean total SDR was 4.0% (0-9.6%). We developed and validated an NLP pipeline that accurately and automatically calculates ADR and SDR using data stored in Epic, Provation®, and Sunquest PowerPath. This NLP pipeline can be used to evaluate colonoscopy quality parameters at both the individual and practice levels.
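Once each colonoscopy is linked to a labeled pathology report, the rate computation itself is simple; in this sketch a keyword matcher stands in for the validated NLP pipeline and is not a reproduction of it.

```python
# Toy ADR/SDR computation over colonoscopies with linked pathology text.
import re

ADENOMA = re.compile(r"\b(tubular|tubulovillous|villous) adenoma\b", re.I)
SSP = re.compile(r"\bsessile serrated (polyp|adenoma|lesion)\b", re.I)

def detection_rates(colonoscopies):
    """colonoscopies: list of dicts with an optional 'pathology' report text."""
    n = len(colonoscopies)
    adr = sum(bool(ADENOMA.search(c.get("pathology", ""))) for c in colonoscopies) / n
    sdr = sum(bool(SSP.search(c.get("pathology", ""))) for c in colonoscopies) / n
    return adr, sdr

cases = [{"pathology": "Tubular adenoma, 5 mm."},
         {"pathology": "Sessile serrated polyp."},
         {}]  # screening exam with no polyps -> no linked pathology report
print(detection_rates(cases))  # (0.333..., 0.333...)
```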
The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC
Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi
2018-03-19
The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and the Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop eco-system to provide the necessary flexibility to reliably process, store, and aggregate O(1M) documents on a daily basis. We describe the data transformation, the short- and long-term storage layers, and the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.
esATAC: An Easy-to-use Systematic pipeline for ATAC-seq data analysis.
Wei, Zheng; Zhang, Wei; Fang, Huan; Li, Yanda; Wang, Xiaowo
2018-03-07
ATAC-seq is rapidly emerging as one of the major experimental approaches to probe chromatin accessibility genome-wide. Here, we present "esATAC", a highly integrated, easy-to-use R/Bioconductor package for systematic ATAC-seq data analysis. It covers the essential steps of the full analysis procedure, including raw data processing, quality control and downstream statistical analysis such as peak calling, enrichment analysis and transcription factor footprinting. esATAC supports one-command-line execution for preset pipelines and provides flexible interfaces for building customized pipelines. The esATAC package is open source under the GPL-3.0 license. It is implemented in R and C++. Source code and binaries for Linux, Mac OS X and Windows are available through Bioconductor (https://www.bioconductor.org/packages/release/bioc/html/esATAC.html). xwwang@tsinghua.edu.cn. Supplementary data are available at Bioinformatics online.
Extraction of UMLS® Concepts Using Apache cTAKES™ for German Language.
Becker, Matthias; Böckmann, Britta
2016-01-01
Automatic extraction of medical concepts from medical reports and their classification with semantic standards is useful for standardization and for clinical research. This paper presents an approach to UMLS concept extraction with a customized natural language processing pipeline for German clinical notes using Apache cTAKES. The objective is to test whether the natural language processing tool is suitable for identifying UMLS concepts in German text and mapping them to SNOMED-CT. The German UMLS database and German OpenNLP models extended the natural language processing pipeline, so the pipeline can normalize to domain ontologies such as SNOMED-CT using the German concepts. For testing, the ShARe/CLEF eHealth 2013 training dataset translated into German was used. The implemented algorithms were tested with a set of 199 German reports, obtaining an average F1 measure of 0.36 without German stemming or pre- and post-processing of the reports.
Reid, Jeffrey G; Carroll, Andrew; Veeraraghavan, Narayanan; Dahdouli, Mahmoud; Sundquist, Andreas; English, Adam; Bainbridge, Matthew; White, Simon; Salerno, William; Buhay, Christian; Yu, Fuli; Muzny, Donna; Daly, Richard; Duyk, Geoff; Gibbs, Richard A; Boerwinkle, Eric
2014-01-29
Massively parallel DNA sequencing generates staggering amounts of data. Decreasing cost, increasing throughput, and improved annotation have expanded the diversity of genomics applications in research and clinical practice. This expanding scale creates analytical challenges: accommodating peak compute demand, coordinating secure access for multiple analysts, and sharing validated tools and results. To address these challenges, we have developed the Mercury analysis pipeline and deployed it in local hardware and the Amazon Web Services cloud via the DNAnexus platform. Mercury is an automated, flexible, and extensible analysis workflow that provides accurate and reproducible genomic results at scales ranging from individuals to large cohorts. By taking advantage of cloud computing and with Mercury implemented on the DNAnexus platform, we have demonstrated a powerful combination of a robust and fully validated software pipeline and a scalable computational resource that, to date, we have applied to more than 10,000 whole genome and whole exome samples.
BioQueue: a novel pipeline framework to accelerate bioinformatics analysis.
Yao, Li; Wang, Heming; Song, Yuanyuan; Sui, Guangchao
2017-10-15
With the rapid development of Next-Generation Sequencing, a large amount of data is now available for bioinformatics research. Meanwhile, the presence of many pipeline frameworks makes it possible to analyse these data. However, these tools concentrate mainly on their syntax and design paradigms, and dispatch jobs based on users' estimates of the resources needed to execute a certain step in a protocol. As a result, it is difficult for these tools to maximize the potential of computing resources and to avoid errors caused by overload, such as memory overflow. Here, we have developed BioQueue, a web-based framework that contains a checkpoint before each step to automatically estimate the system resources (CPU, memory and disk) needed by the step and then dispatch jobs accordingly. BioQueue possesses a shell-command-like syntax instead of implementing a new script language, which means most biologists without a computer programming background can access the efficient queue system with ease. BioQueue is freely available at https://github.com/liyao001/BioQueue. The extensive documentation can be found at http://bioqueue.readthedocs.io. li_yao@outlook.com or gcsui@nefu.edu.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
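The checkpoint idea can be sketched as follows: estimate a step's memory need from input size using past runs and hold the job if the estimate exceeds free memory. The linear estimator and safety margin are assumptions, not BioQueue's actual model.

```python
# Resource checkpoint sketch: least-squares fit of peak memory vs input size
# over past runs, then a dispatch gate with a safety margin.

def fit_linear(history):
    """Fit mem = a * input_gb + b over (input_gb, mem_gb) pairs."""
    n = len(history)
    sx = sum(x for x, _ in history); sy = sum(y for _, y in history)
    sxx = sum(x * x for x, _ in history); sxy = sum(x * y for x, y in history)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def should_dispatch(input_gb, history, free_mem_gb, margin=1.2):
    a, b = fit_linear(history)
    return (a * input_gb + b) * margin <= free_mem_gb

history = [(1.0, 2.1), (2.0, 3.9), (4.0, 8.2)]  # past (input, peak mem) in GB
print(should_dispatch(3.0, history, free_mem_gb=16.0))  # True: run now
print(should_dispatch(8.0, history, free_mem_gb=8.0))   # False: hold the job
```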
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dellinger, M.; Allen, E.
A unique public/private partnership of local, state, federal and corporate stakeholders is constructing the world's first wastewater-to-electricity system at The Geysers. A rare example of a genuinely "sustainable" energy system, three Lake County communities will recycle their treated wastewater effluent through the southeast portion of The Geysers steamfield to produce approximately 625,000 MWh annually from six existing geothermal power plants. In effect, the communities' effluent will produce enough power to indefinitely sustain their electric needs, along with enough extra power for thousands of other California consumers. Because of the project's unique sponsorship, function and environmental impacts, its implementation has required: (1) preparation of a consolidated state environmental impact report (EIR) and federal environmental impact statement (EIS), and seven related environmental agreements and management plans; (2) acquisition of 25 local, state, and federal permits; (3) negotiation of six federal and state financial assistance agreements; (4) negotiation of six participant agreements on construction, operation and financing of the project; and (5) acquisition of 163 easements from private land owners for pipeline construction access and ongoing maintenance. The project's success in efficiently and economically completing these requirements is a model for geothermal innovation and partnering throughout the Pacific Rim and elsewhere internationally.
Design of cylindrical pipe automatic welding control system based on STM32
NASA Astrophysics Data System (ADS)
Chen, Shuaishuai; Shen, Weicong
2018-04-01
The development of the modern economy has rapidly increased the demand for pipeline construction, and welding has become an important link in it. At present, manual welding methods are still widely used at home and abroad, and field pipe welding in particular lacks miniature, portable automatic welding equipment. An automated welding system consists of a control system, comprising a lower-computer control panel and a host-computer operating interface, together with the automatic welding machine mechanisms and welding power systems that operate in coordination with the control system. In this paper, a new control system for automatic pipe welding based on the lower-computer control panel and the host-computer interface is proposed, which has many advantages over traditional automatic welding machines.
System for corrosion monitoring in pipeline applying fuzzy logic mathematics
NASA Astrophysics Data System (ADS)
Kuzyakov, O. N.; Kolosova, A. L.; Andreeva, M. A.
2018-05-01
A list of factors influencing the corrosion rate on the external side of an underground pipeline is determined. Principles of constructing a corrosion monitoring system are described; the system's performance algorithm and program are elaborated. A comparative analysis of methods for calculating the corrosion rate is undertaken. Fuzzy logic mathematics is applied to reduce computation while considering a wider range of corrosion factors.
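A hedged sketch of how a fuzzy estimate might combine two of the influencing factors (soil resistivity and moisture); the membership functions, rule base, and output values are invented for illustration.

```python
# Toy Mamdani-style fuzzy estimate of external corrosion rate.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def corrosion_rate(resistivity_ohm_m, moisture_pct):
    low_res = tri(resistivity_ohm_m, 0, 10, 30)  # low resistivity: aggressive soil
    wet = tri(moisture_pct, 10, 25, 40)
    high_rate = min(low_res, wet)                # IF low resistivity AND wet THEN high
    low_rate = 1.0 - max(low_res, wet)           # ELSE low
    # Weighted-average defuzzification over two singleton outputs (mm/yr):
    return (high_rate * 0.8 + low_rate * 0.1) / max(high_rate + low_rate, 1e-9)

print(round(corrosion_rate(8.0, 28.0), 3))  # ~0.66 mm/yr for aggressive wet soil
```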
Real-time implementing wavefront reconstruction for adaptive optics
NASA Astrophysics Data System (ADS)
Wang, Caixia; Li, Mei; Wang, Chunhong; Zhou, Luchun; Jiang, Wenhan
2004-12-01
The capability of real-time wave-front reconstruction is important for an adaptive optics (AO) system. The bandwidth of the system and the real-time processing ability of the wave-front processor are mainly limited by the speed of calculation. The system requires a sufficient number of subapertures and a high sampling frequency to compensate for atmospheric turbulence, which increases the number of reconstruction operations accordingly. Since the performance of an AO system improves as calculation latency decreases, it is necessary to study how to increase the speed of wavefront reconstruction. There are two ways to improve the real-time performance of the reconstruction: one is to transform the wavefront reconstruction matrix, for example by wavelet or FFT; the other is to enhance the performance of the processing element. Analysis shows that the former method cuts latency at the cost of reconstruction precision. In this article, the latter method is adopted. Based on the characteristics of the wavefront reconstruction algorithm, a systolic array is designed on an FPGA to implement real-time wavefront reconstruction. The system delay is greatly reduced through pipelining and parallel processing. The minimum latency of reconstruction equals the reconstruction calculation of one subaperture.
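The core real-time operation being accelerated is a matrix-vector product of a precomputed reconstructor with the measured slopes; the NumPy sketch below stands in for the FPGA systolic datapath, with illustrative sizes.

```python
# Wavefront reconstruction as a matrix-vector product. On the FPGA this is
# evaluated by a pipelined systolic array; NumPy is a stand-in here.
import numpy as np

n_subaps, n_actuators = 64, 60
rng = np.random.default_rng(0)
R = rng.standard_normal((n_actuators, 2 * n_subaps))  # precomputed reconstructor
slopes = rng.standard_normal(2 * n_subaps)            # x/y slopes per subaperture

phase_cmd = R @ slopes  # one reconstruction step: n_act * 2*n_subaps MACs
print(phase_cmd.shape)  # (60,) actuator commands
```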
Pipeline oil fire detection with MODIS active fire products
NASA Astrophysics Data System (ADS)
Ogungbuyi, M. G.; Martinez, P.; Eckardt, F. D.
2017-12-01
We investigate 85,129 MODIS satellite active fire events from 2007 to 2015 in the Niger Delta of Nigeria. The region is the oil base of the Nigerian economy and the hub of oil exploration, where oil facilities (i.e. flowlines, flow stations, trunklines, oil wells and oil fields) are domiciled, and from where crude oil and refined products are transported to different Nigerian locations through a network of pipeline systems. Pipelines and other oil facilities are consistently susceptible to oil leaks due to operational or maintenance error, and to acts of deliberate sabotage of the pipeline equipment, which often result in explosions and fire outbreaks. We used ground oil spill reports obtained from the National Oil Spill Detection and Response Agency (NOSDRA) database (see www.oilspillmonitor.ng) to validate the MODIS satellite data. The NOSDRA database shows an estimate of 10,000 spill events from 2007 - 2015. The spill events were filtered to include the largest spills by volume and events occurring only in the Niger Delta (i.e. 386 spills). By projecting both MODIS fire and spill events as 'input vector' layers with 'Points' geometry, and the Nigerian pipeline networks as 'from vector' layers with 'LineString' geometry in a geographical information system, we extracted the MODIS events nearest to the pipelines (i.e. 2,192 within a 1000 m distance) by spatial vector analysis. The extraction distance is based on the global practice of the Right of Way (ROW) in pipeline management, which earmarks a 30 m strip of land for the pipeline. The KML files of the extracted fires in a Google map confirmed that their sources were oil facilities. Land cover mapping confirmed fire anomalies. The aim of the study is to propose near-real-time monitoring of spill events along pipeline routes using the 250 m spatial resolution of the MODIS active fire detection sensor when such spills are accompanied by fire events in the study location.
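The nearest-distance extraction can be sketched with shapely: buffer the pipeline geometry by 1000 m in a projected (metre-based) CRS and keep the fire points inside the corridor. The coordinates below are toy values, not the study's data.

```python
# Select fire detections within 1000 m of a pipeline route.
from shapely.geometry import LineString, Point

pipeline = LineString([(0, 0), (5000, 0), (9000, 3000)])  # projected coords, metres
fires = [Point(2500, 400), Point(2500, 2500), Point(8800, 2900)]

corridor = pipeline.buffer(1000.0)  # 1000 m band around the route
near_pipeline = [p for p in fires if corridor.contains(p)]
print(len(near_pipeline))  # 2 of the 3 toy points fall inside the corridor
```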
Assessing fugitive emissions of CH4 from high-pressure gas pipelines
NASA Astrophysics Data System (ADS)
Worrall, Fred; Boothroyd, Ian; Davies, Richard
2017-04-01
The impact of unconventional natural gas production using hydraulic fracturing methods in shale gas basins has been assessed using life-cycle emissions inventories, covering areas such as pre-production, production and transmission processes. The transmission of natural gas from well pad to processing plants and its transport to domestic sites is an important source of fugitive CH4, yet emission factors and fluxes from transmission processes are often based upon very outdated measurements. It is important to measure natural gas losses accurately when gas is compressed and transported between production and processing facilities, so as to accurately determine life-cycle CH4 emissions. This study considers CH4 emissions from the UK National Transmission System (NTS) of high-pressure natural gas pipelines. Mobile surveys of CH4 emissions using a Picarro Surveyor cavity ring-down spectrometer were conducted across four areas in the UK, with routes bisecting high-pressure pipelines and separate control routes away from the pipelines. A manual survey of soil gas measurements was also conducted along one of the high-pressure pipelines using a tunable diode laser. In total, 92 km of high-pressure pipeline routes and 72 km of control routes were driven over a 10-day period. When adjusted for wind and distance, CH4 fluxes were significantly greater on routes with a pipeline than on those without. The smallest detectable leak was 3% above ambient (1.03 relative concentration), with any leaks below 3% above ambient assumed ambient. The number of leaks detected along the pipelines correlates with the estimated length of pipe joints, suggesting constant fugitive CH4 emissions from these joints. Scaling up to the UK's National Transmission System pipeline length of 7600 km gives a fugitive CH4 flux of 4700 ± 2864 kt CH4/yr; this fugitive emission from high-pressure pipelines is 0.016% of the annual gas supply.
External Corrosion Direct Assessment for Unique Threats to Underground Pipelines
DOT National Transportation Integrated Search
2007-11-01
External corrosion direct assessment process (ECDA) implemented in accordance with the NACE Recommended Practice RP0502-02 relies on above ground DA techniques to prioritize locations at risk for corrosion. Two special cases warrant special considera...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-01
... registration account with a superior method of authentication. Under the revised procedures, each regulated... pipeline has authorized to submit a particular type of filing. Implementation of these changes will provide...
Hong, Na; Wen, Andrew; Shen, Feichen; Sohn, Sunghwan; Liu, Sijia; Liu, Hongfang; Jiang, Guoqian
2018-01-01
Standards-based modeling of electronic health record (EHR) data holds great significance for data interoperability and large-scale usage. Integration of unstructured data into a standard data model, however, poses unique challenges, partially due to the heterogeneous type systems used in existing clinical NLP systems. We introduce a scalable and standards-based framework for integrating structured and unstructured EHR data, leveraging the HL7 Fast Healthcare Interoperability Resources (FHIR) specification. We implemented a clinical NLP pipeline enhanced with an FHIR-based type system and performed a case study using medication data from Mayo Clinic's EHR. Two UIMA-based NLP tools, MedXN and MedTime, were integrated in the pipeline to extract FHIR MedicationStatement resources and related attributes from unstructured medication lists. We developed a rule-based approach for assigning the NLP output types to the FHIR elements represented in the type system, and investigated which FHIR elements derive from the structured EHR data. We use the FHIR resource MedicationStatement as an example to illustrate our integration framework and methods. For evaluation, we manually annotated FHIR elements in 166 medication statements from 14 clinical notes generated by Mayo Clinic in the course of patient care, and used standard performance measures (precision, recall and F-measure). The F-scores achieved ranged from 0.73 to 0.99 for the various FHIR element representations. The results demonstrate that our framework based on the FHIR type system is feasible for normalizing and integrating both structured and unstructured EHR data.
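For orientation, here is a minimal sketch of the target shape of the normalization: NLP output rendered as a FHIR MedicationStatement-like structure. Only a few standard R4 elements are shown, and the study's actual mapping rules are more involved.

```python
# Build a minimal FHIR MedicationStatement-shaped dict from extracted
# medication attributes. Element names follow FHIR R4; the mapping itself
# is a simplification of the framework described above.

def to_medication_statement(drug, dose_value, dose_unit, frequency_per_day):
    return {
        "resourceType": "MedicationStatement",
        "status": "active",
        "medicationCodeableConcept": {"text": drug},
        "dosage": [{
            "doseAndRate": [{"doseQuantity": {"value": dose_value, "unit": dose_unit}}],
            "timing": {"repeat": {"frequency": frequency_per_day,
                                  "period": 1, "periodUnit": "d"}},
        }],
    }

# "lisinopril 10 mg daily" extracted from an unstructured medication list:
print(to_medication_statement("lisinopril", 10, "mg", 1))
```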
Pipeline inspection using an autonomous underwater vehicle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egeskov, P.; Bech, M.; Bowley, R.
1995-12-31
Pipeline inspection can be carried out by means of small Autonomous Underwater Vehicles (AUVs), operating either with a control link to a surface vessel or totally independently. The AUV offers an attractive alternative to conventional inspection methods where Remotely Operated Vehicles (ROVs) or paravanes are used. A flatfish-type AUV "MARTIN" (Marine Tool for Inspection) has been developed for this purpose. The paper describes the proposed types of inspection jobs to be carried out by "MARTIN". The design and construction of the vessel, its hydrodynamic properties, and its propulsion and control systems are discussed. The pipeline tracking and survey systems, as well as the launch and recovery systems, are described.