Finite Element Modeling of the World Federation's Second MFL Benchmark Problem
NASA Astrophysics Data System (ADS)
Zeng, Zhiwei; Tian, Yong; Udpa, Satish; Udpa, Lalita
2004-02-01
This paper presents results obtained by simulating the second magnetic flux leakage benchmark problem proposed by the World Federation of NDE Centers. The geometry consists of notches machined on the internal and external surfaces of a rotating steel pipe placed between two yokes forming part of a magnetic circuit energized by an electromagnet. The model calculates the radial component of the leaked field at specific positions. The nonlinear material property of the ferromagnetic pipe is taken into account in simulating the problem. The velocity effect caused by the rotation of the pipe is, however, ignored for simplicity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bezler, P.; Hartzman, M.; Reich, M.
1980-08-01
A set of benchmark problems and solutions has been developed for verifying the adequacy of computer programs used for the dynamic analysis and design of nuclear piping systems by the response spectrum method. The problems range from simple to complex configurations, all assumed to exhibit linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components, and internal force and moment components. Solutions to the associated anchor-point-motion static problems are not included.
Numerical Prediction of Signal for Magnetic Flux Leakage Benchmark Task
NASA Astrophysics Data System (ADS)
Lunin, V.; Alexeevsky, D.
2003-03-01
Numerical results predicted by a finite-element-based code are presented. The nonlinear, time-dependent magnetic benchmark problem proposed by the World Federation of Nondestructive Evaluation Centers involves numerical prediction of the normal (radial) component of the leaked field in the vicinity of two nearly rectangular notches machined on a rotating steel pipe with a known nonlinear magnetic characteristic. One notch is located on the external surface of the pipe and the other on the internal surface; both are oriented axially.
MFL Benchmark Problem 2: Laboratory Measurements
NASA Astrophysics Data System (ADS)
Etcheverry, J.; Pignotti, A.; Sánchez, G.; Stickar, P.
2003-03-01
This experiment involves the measurement of the magnetic flux leaked from a rotating seamless steel tube with two machined notches. The signal measured is the radial component of the leaked field at a fixed point in space, as a function of the notch position, for four values of the liftoff and two notches. As the pipe tangential velocity was varied between 0.23 and 0.62 m/s, the sole observed effect was that of increasing the signal by a value that grows linearly with the velocity and is independent of the notch angular position.
Modifications to the Conduit Flow Process Mode 2 for MODFLOW-2005
Reimann, T.; Birk, S.; Rehrl, C.; Shoemaker, W.B.
2012-01-01
As a result of rock dissolution processes, karst aquifers exhibit highly conductive features such as caves and conduits. Within these structures, groundwater flow can become turbulent and must therefore be described by nonlinear gradient functions. Some numerical groundwater flow models explicitly account for pipe hydraulics by coupling the continuum model with a pipe network that represents the conduit system. In contrast, the Conduit Flow Process Mode 2 (CFPM2) for MODFLOW-2005 approximates turbulent flow by reducing the hydraulic conductivity within the existing linear head-gradient framework of the MODFLOW continuum model. This approach reduces the practical as well as the numerical effort of simulating turbulence. The original formulation was intended for large-pore aquifers, where the onset of turbulence occurs at low Reynolds numbers (1 to 100), not for conduits or pipes. In addition, the existing code requires multiple time steps for convergence because of the iterative adjustment of the hydraulic conductivity. The existing CFPM2 was modified by implementing a generalized power function with a user-defined exponent, which allows turbulence in either porous media or pipes to be matched and eliminates the extra time steps required for the iterative adjustment of hydraulic conductivity. The modified CFPM2 successfully replicated simple benchmark test problems. © 2011 The Author(s). Ground Water © 2011, National Ground Water Association.
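The conductivity-reduction idea can be sketched in a few lines; the functional form, argument names, and critical-gradient threshold below are illustrative assumptions, not CFPM2's actual implementation:

```python
def effective_conductivity(k_laminar, grad_h, grad_crit, exponent=0.5):
    """Reduce hydraulic conductivity once the head gradient exceeds the
    critical gradient for the onset of turbulence (hypothetical form of
    the generalized power function described in the abstract)."""
    g = abs(grad_h)
    if g <= grad_crit:
        return k_laminar  # laminar regime: Darcy's law unchanged
    # turbulent regime: scale K down so flux grows sublinearly with gradient
    return k_laminar * (grad_crit / g) ** exponent
```

Because Darcy-Weisbach pipe flow scales roughly with the square root of the head gradient, an exponent of 0.5 makes the computed flux grow as the square root of the gradient once turbulence sets in; a user-defined exponent then lets the same function match porous-media or pipe behavior.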
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-01
... of businesses, which the GOK deemed ``harmful to juveniles, affecting public morals, certain private... (August 30, 2002), and accompanying Issues and Decision Memorandum (Wire Rod Memorandum) at ``Benchmark... from Turkey, 71 FR 43111 (July 31, 2006) (2004 Pipe Final), and accompanying Issues and Decision...
Application of program generation technology in solving heat and flow problems
NASA Astrophysics Data System (ADS)
Wan, Shui; Wu, Bangxian; Chen, Ningning
2007-05-01
Based on a new DIY concept for software development, an automatic program-generating technology embedded in a software system called the Finite Element Program Generator (FEPG) provides a platform for developing programs, through which a researcher can submit a specific physico-mathematical problem to the system in a direct and convenient way for solution. For solving flow and heat problems with the finite element method, stabilization technologies and fractional-step methods are adopted to overcome the numerical difficulties caused mainly by dominant convection. Several benchmark problems are given in this paper as examples to illustrate the usage and the advantages of the automatic program-generation technique, including the flow in a lid-driven cavity, the starting flow in a circular pipe, the natural convection in a square cavity, and the flow past a circular cylinder. These problems also serve to verify the algorithms.
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-03-09
This work represents a first-of-its-kind successful application of advanced numerical methods to realistic two-phase flow problems with the two-fluid, six-equation two-phase flow model. These advanced numerical methods include a high-resolution spatial discretization scheme on staggered grids, high-order fully implicit time integration schemes, and the Jacobian-free Newton–Krylov (JFNK) method as the nonlinear solver. The computer code developed in this work has been extensively validated against existing experimental flow-boiling data in vertical pipes and rod bundles, covering wide ranges of experimental conditions such as pressure, inlet mass flux, wall heat flux, and exit void fraction. An additional code-to-code benchmark with the RELAP5-3D code further verifies the correct code implementation. The combined methods exhibit strong robustness in solving two-phase flow problems even when phase appearance (boiling) and realistic discrete flow regimes are considered. Transitional flow regimes used in existing system-analysis codes, normally introduced to overcome numerical difficulty, were completely removed in this work. This in turn opens the possibility of utilizing more sophisticated flow-regime maps in the future to further improve simulation accuracy.
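The key building block of a JFNK solver is the Jacobian-vector product approximated by a finite difference of the residual, so the Jacobian never needs to be formed. A minimal sketch follows; the toy two-equation system and direct 2x2 solve stand in for the Krylov iteration (e.g. GMRES) a production code would use, and all names are illustrative:

```python
def jfnk_matvec(F, u, v, eps=1e-7):
    """Jacobian-free approximation of J(u)@v via a forward difference of
    the residual function F: (F(u + eps*v) - F(u)) / eps."""
    u_pert = [ui + eps * vi for ui, vi in zip(u, v)]
    return [(fp - f0) / eps for fp, f0 in zip(F(u_pert), F(u))]

def newton_2x2(F, u, iters=20):
    """Newton's method on a 2-equation system; the Jacobian columns are
    recovered from two Jacobian-free matvecs with unit vectors.  This is
    for illustration only -- a real JFNK solver hands jfnk_matvec to a
    Krylov method instead of reconstructing the Jacobian."""
    for _ in range(iters):
        c0 = jfnk_matvec(F, u, [1.0, 0.0])   # first Jacobian column
        c1 = jfnk_matvec(F, u, [0.0, 1.0])   # second Jacobian column
        a, b, c, d = c0[0], c1[0], c0[1], c1[1]
        f = F(u)
        det = a * d - b * c
        # Newton update du = -J^{-1} F(u), via the 2x2 inverse
        du = [(-f[0] * d + f[1] * b) / det, (f[0] * c - f[1] * a) / det]
        u = [ui + dui for ui, dui in zip(u, du)]
    return u
```

The point of the finite-difference matvec is that only residual evaluations are needed, which is what makes the approach attractive for large two-fluid systems where assembling an analytic Jacobian is impractical.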
NASA Astrophysics Data System (ADS)
Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin
2014-07-01
Flow entropy is a measure of the uniformity of pipe flows in water distribution systems (WDSs). By maximizing flow entropy one can identify reliable layouts or connectivity in networks. To overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed against other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (the resilience index) integrated with water-demand and pipe-failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability, and a comparative analysis between the common flow entropy and the new method is conducted. The results demonstrate that diameter-sensitive flow entropy shows a consistently much stronger correlation with the three reliability measures than simple flow entropy. The new flow entropy can therefore be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem for WDSs. Sensitivity analysis shows that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
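The baseline (diameter-insensitive) flow entropy is a Shannon-type measure of flow uniformity. A minimal single-level sketch, not the paper's full nodal-weighted network definition:

```python
from math import log

def flow_entropy(pipe_flows):
    """Shannon-type entropy of pipe flows: flows are normalized to
    fractions of the total, and uniform flow distributions score
    highest.  Simplified single-level form for illustration; network
    definitions sum nodal entropies weighted by nodal inflow fractions."""
    total = sum(pipe_flows)
    fractions = [q / total for q in pipe_flows if q > 0]
    return -sum(p * log(p) for p in fractions)
```

For n pipes the maximum value ln(n) is attained when all flows are equal, which is why maximizing this quantity favors layouts without dominant single paths.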
Branch-pipe-routing approach for ships using improved genetic algorithm
NASA Astrophysics Data System (ADS)
Sui, Haiteng; Niu, Wentie
2016-09-01
Branch-pipe routing plays a fundamental and critical role in ship pipe design. The branch-pipe-routing problem is a complex combinatorial optimization problem that is difficult to solve relying only on human experts. A modified genetic-algorithm-based approach is proposed in this paper to solve it. The simplified layout space is first divided into three-dimensional (3D) grids to build its mathematical model. Branch pipes in the layout space are regarded as a combination of several two-point pipes, and the pipe route between two connection points is generated using an improved maze algorithm. The coding of branch pipes is then defined, and the genetic operators are devised, in particular a complete crossover strategy that greatly accelerates the convergence speed. Finally, simulation tests demonstrate the performance of the proposed method.
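The maze algorithm used to route between two connection points is classically a breadth-first wave expansion (Lee's algorithm). A minimal 3D-grid sketch, assuming a 0/1 occupancy grid; this is a plain BFS, not the paper's improved variant:

```python
from collections import deque

def maze_route(grid, start, goal):
    """Lee-style maze routing on a 3D grid via BFS: returns one shortest
    obstacle-avoiding path of (z, y, x) cells between two connection
    points, or None if no route exists.  grid[z][y][x] == 1 is blocked."""
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    nz, ny, nx = len(grid), len(grid[0]), len(grid[0][0])
    prev = {start: None}        # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:        # backtrack through predecessors
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        z, y, x = cell
        for dz, dy, dx in moves:
            nxt = (z + dz, y + dy, x + dx)
            if (0 <= nxt[0] < nz and 0 <= nxt[1] < ny and 0 <= nxt[2] < nx
                    and nxt not in prev and grid[nxt[0]][nxt[1]][nxt[2]] == 0):
                prev[nxt] = cell
                queue.append(nxt)
    return None
```

In the GA framing, each two-point route produced this way becomes part of a chromosome encoding the whole branch structure, and the genetic operators recombine those routes.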
NASA Astrophysics Data System (ADS)
Shao, Zhenlu; Revil, André; Mao, Deqiang; Wang, Deming
2018-04-01
The location of buried utility pipes is often unknown. We use the time-domain induced polarization method to non-intrusively localize metallic pipes. A new approach is used, based on injecting a primary electrical current between a pair of electrodes and measuring the time-lapse voltage response on a set of potential electrodes after shutting down this primary current. The secondary voltage is measured on all the electrodes with respect to a single electrode used as a reference for the electrical potential, in a way similar to a time-lapse self-potential survey. This secondary voltage is due to the formation of a secondary current density in the ground associated with the polarization of the metallic pipes. An algorithm is designed to localize the metallic object from the secondary voltage distribution by performing a tomography of the secondary source current density associated with the polarization of the pipes. This algorithm is first benchmarked on a synthetic case. Then, two laboratory sandbox experiments are performed with metallic pipes buried in a sandbox filled with clean sand. In Experiment #1 we use a horizontal copper pipe, while in Experiment #2 we use an inclined stainless steel pipe. The results show that the method is effective in localizing these two pipes. By contrast, electrical resistivity tomography is not effective in localizing the pipes, because they may appear resistive at low frequencies; this is due to the polarization of the metallic pipes, which blocks the charge carriers at their external boundaries.
Application of USNRC NUREG/CR-6661 and draft DG-1108 to evolutionary and advanced reactor designs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang 'Apollo', Chen
2006-07-01
For the seismic design of evolutionary and advanced nuclear reactor power plants, there are definite financial advantages in the application of USNRC NUREG/CR-6661 and draft Regulatory Guide DG-1108. NUREG/CR-6661, 'Benchmark Program for the Evaluation of Methods to Analyze Non-Classically Damped Coupled Systems', was prepared by Brookhaven National Laboratory (BNL) for the USNRC, and draft Regulatory Guide DG-1108 is the proposed revision to the current Regulatory Guide (RG) 1.92, Revision 1, 'Combining Modal Responses and Spatial Components in Seismic Response Analysis'. The draft Regulatory Guide DG-1108 is available at http://members.cox.net/apolloconsulting, which also provides a link to the USNRC ADAMS site to search for NUREG/CR-6661 in text or image format. Draft Regulatory Guide DG-1108 removes unnecessary conservatism in the modal combinations for closely spaced modes in seismic response spectrum analysis. Its application will be very helpful in coupled seismic analysis of structures and heavy equipment, reducing seismic responses, and in piping system seismic design. In the NUREG/CR-6661 benchmark program, which investigated coupled seismic analysis of structures and equipment or piping systems with different damping values, three of the four participants applied the complex-mode solution method to handle the different damping values of structures, equipment, and piping systems. The fourth participant applied the classical normal-mode method with equivalent weighted damping values. Coupled analysis will reduce the equipment responses when equipment or piping systems and the structure are at or close to resonance. However, this reduction in responses occurs only if the realistic DG-1108 modal response combination method is applied, because closely spaced modes are produced when the structure and equipment or piping systems are at or close to resonance.
Otherwise, the conservatism in the current Regulatory Guide 1.92, Revision 1, will overshadow the advantage of coupled analysis. All four participants applied the realistic modal combination method of DG-1108, and consequently more realistic and reduced responses were obtained. (authors)
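A realistic combination of correlated closely spaced modes is, in RG 1.92 Revision 2 terms, a complete quadratic combination (CQC). A sketch using the standard Der Kiureghian correlation coefficient, with peak modal responses, modal frequencies, and damping ratios as inputs (illustrative only, not the benchmark program's implementation):

```python
from math import sqrt

def cqc_combine(responses, freqs, damping):
    """Complete Quadratic Combination of peak modal responses using the
    Der Kiureghian correlation coefficient rho_ij.  For well-separated
    modes rho_ij ~ 0 and the result reduces to SRSS; for coincident
    modes rho_ij -> 1 and the result approaches the absolute sum."""
    n = len(responses)
    total = 0.0
    for i in range(n):
        for j in range(n):
            zi, zj = damping[i], damping[j]
            r = freqs[j] / freqs[i]            # modal frequency ratio
            num = 8.0 * sqrt(zi * zj) * (zi + r * zj) * r ** 1.5
            den = ((1.0 - r * r) ** 2 + 4.0 * zi * zj * r * (1.0 + r * r)
                   + 4.0 * (zi * zi + zj * zj) * r * r)
            total += (num / den) * responses[i] * responses[j]
    return sqrt(total)
```

This is exactly the behavior the abstract argues matters: with closely spaced modes, the over-conservative grouping rules of RG 1.92 Revision 1 inflate the combined response, while a correlation-based combination tracks the realistic peak.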
Benchmark problems for numerical implementations of phase field models
Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...
2016-10-01
Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
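The spinodal decomposition physics referenced above is typically modeled by the Cahn-Hilliard equation. A 1-D explicit-Euler sketch with periodic boundaries, assuming a double-well free energy f = W*c^2*(1-c)^2; the actual benchmark specification fixes the domain, free energy, and parameters, so everything here is illustrative:

```python
def laplacian(u, dx):
    """Second-difference Laplacian with periodic boundary conditions."""
    n = len(u)
    return [(u[(i - 1) % n] - 2.0 * u[i] + u[(i + 1) % n]) / dx ** 2
            for i in range(n)]

def cahn_hilliard_step(c, dx, dt, W=1.0, kappa=1.0, M=1.0):
    """One explicit Euler step of the 1-D Cahn-Hilliard equation
    dc/dt = M * Laplacian(mu), with chemical potential
    mu = f'(c) - kappa * Laplacian(c) and f = W*c^2*(1-c)^2."""
    fprime = [2.0 * W * ci * (1.0 - ci) * (1.0 - 2.0 * ci) for ci in c]
    lap_c = laplacian(c, dx)
    mu = [fp - kappa * lc for fp, lc in zip(fprime, lap_c)]
    lap_mu = laplacian(mu, dx)
    return [ci + dt * M * lm for ci, lm in zip(c, lap_mu)]
```

A useful sanity check for any implementation of this model, explicit or adaptive, is that the step conserves total solute exactly (up to round-off), since the equation is in conservative form.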
Modeling of the Edwards pipe experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tiselj, I.; Petelin, S.
1995-12-31
The Edwards pipe experiment is used as one of the basic benchmarks for two-phase flow codes due to its simple geometry and the wide range of phenomena it covers. Edwards and O'Brien filled a 4-m-long pipe with liquid water at 7 MPa and 502 K and ruptured one end of the tube, measuring pressure and void fraction during the blowdown. Important phenomena observed were the pressure rarefaction wave, flashing onset, critical two-phase flow, and a void fraction wave. The experimental data were used to analyze the capabilities of the RELAP5/MOD3.1 six-equation two-phase flow model and to examine two different numerical schemes: one from the RELAP5/MOD3.1 code and one from our own code, which was based on characteristic upwind discretization.
Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set
NASA Astrophysics Data System (ADS)
Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.
2017-05-01
A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.
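The governing equations TUNA-RP solves are the nonlinear shallow water equations. A 1-D, first-order Lax-Friedrichs sketch for wet cells only (the wet-dry moving-boundary algorithm and the 2-D treatment are omitted; function names and parameters are illustrative, not TUNA-RP's scheme):

```python
def swe_lax_friedrichs_step(h, hu, dx, dt, g=9.81):
    """One Lax-Friedrichs step for the 1-D nonlinear shallow water
    equations in conservative form, with state (h, hu) = (depth,
    depth-velocity).  Boundary cells are simply copied; all cells are
    assumed wet (h > 0)."""
    n = len(h)

    def flux(hi, hui):
        u = hui / hi
        return (hui, hui * u + 0.5 * g * hi * hi)

    new_h, new_hu = h[:], hu[:]
    for i in range(1, n - 1):
        f_l = flux(h[i - 1], hu[i - 1])
        f_r = flux(h[i + 1], hu[i + 1])
        new_h[i] = 0.5 * (h[i - 1] + h[i + 1]) - dt / (2.0 * dx) * (f_r[0] - f_l[0])
        new_hu[i] = 0.5 * (hu[i - 1] + hu[i + 1]) - dt / (2.0 * dx) * (f_r[1] - f_l[1])
    return new_h, new_hu
```

Two elementary checks that benchmark suites like OAR-PMEL-135 implicitly rely on: a still-water state must stay still, and a surface gradient must accelerate flow toward the lower water level.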
MARC calculations for the second WIPP structural benchmark problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, H.S.
1981-05-01
This report describes calculations made with the MARC structural finite element code for the second WIPP structural benchmark problem. Specific aspects of problem implementation such as element choice, slip line modeling, creep law implementation, and thermal-mechanical coupling are discussed in detail. Also included are the computational results specified in the benchmark problem formulation.
All-in-one model for designing optimal water distribution pipe networks
NASA Astrophysics Data System (ADS)
Aklog, Dagnachew; Hosoi, Yoshihiko
2017-05-01
This paper discusses the development of an easy-to-use, all-in-one model for designing optimal water distribution networks. The model combines different optimization techniques into a single package in which a user can easily choose what optimizer to use and compare the results of different optimizers to gain confidence in the performances of the models. At present, three optimization techniques are included in the model: linear programming (LP), genetic algorithm (GA) and a heuristic one-by-one reduction method (OBORM) that was previously developed by the authors. The optimizers were tested on a number of benchmark problems and performed very well in terms of finding optimal or near-optimal solutions with a reasonable computation effort. The results indicate that the model effectively addresses the issues of complexity and limited performance trust associated with previous models and can thus be used for practical purposes.
Benchmark problems and solutions
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.
1995-01-01
The scientific committee, after careful consideration, adopted six categories of benchmark problems for the workshop. These problems do not cover all the important computational issues relevant to Computational Aeroacoustics (CAA); the deciding factor in limiting the number of categories to six was the amount of effort needed to solve the problems. For reference purposes, the benchmark problems are provided here, followed by exact or approximate analytical solutions. At present, an exact solution for the Category 6 problem is not available.
Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0
NASA Technical Reports Server (NTRS)
Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine
2004-01-01
We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassan, Yassin; Anand, Nk
2016-03-30
A 1/16th-scale VHTR experimental model was constructed and preliminary tests were performed in this study. To produce benchmark data for future CFD validation, the facility was first run at partial operation with five pipes heated. PIV was performed to extract the velocity vector field for three adjacent naturally convective jets at statistically steady state. A small recirculation zone was found between the pipes, and the jets entered the merging zone 3 cm from the pipe outlet but diverged as the flow approached the top of the test geometry. Turbulence analysis shows that the turbulence intensity peaked at 41-45% as the jets mixed. A sensitivity analysis confirmed that 1000 frames were sufficient to measure statistically steady state. The results were then validated by extracting the flow rate from the PIV jet velocity profile and comparing it with an analytic flow rate and an ultrasonic flowmeter; all flow rates lie within the uncertainty of the other two methods for Tests 1 and 2. This test facility can be used for further analysis of naturally convective mixing and can eventually produce benchmark data for CFD validation of the VHTR during a PCC or DCC accident scenario. Next, a PTV study of 3000 images (1500 image pairs) was used to quantify the velocity field in the upper plenum. A sensitivity analysis confirmed that 1500 frames were sufficient to precisely estimate the flow. Subsequently, three Y-lines (3, 9, and 15 cm) from the pipe outlet were extracted to consider the output differences between 50 and 1500 frames. The average velocity field and the standard-deviation error accrued in the three different tests were calculated to assess repeatability. The error varied from 1 to 14%, depending on Y-elevation, and decreased as the flow moved farther from the outlet pipe. In addition, the turbulent intensity was calculated and found to be high near the outlet.
Reynolds stresses and turbulent intensity were used to validate the data by comparison with benchmark data; the experimental data showed the same pattern as the benchmark data. A turbulent single buoyant jet study was performed for the case of LOFC in the upper plenum of the scaled VHTR. Time-averaged profiles show that 3000 frames of images were sufficient for the study up to second-order statistics. Self-similarity is an important feature of jets, since the behavior of jets is independent of Reynolds number and a sole function of geometry. Self-similarity profiles were well observed in the axial velocity and velocity-magnitude profiles regardless of z/D, whereas the radial velocity did not show any similarity pattern. The normal components of the Reynolds stresses exhibit self-similarity within the expected range. The study shows that large vortices were observed close to the dome wall, indicating that the geometry of the VHTR has a significant impact on its safety and performance. Near the dome surface, large vortices were shown to inhibit the flows, resulting in reduced axial jet velocity; the vortices that develop subsequently reduce the Reynolds stresses and their impact on the integrity of the VHTR upper-plenum surface. Multiple-jet configurations, including two, three, and five jets, were also investigated.
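The turbulence intensity reported above is conventionally the rms velocity fluctuation normalized by the mean velocity. A single-component sketch over a series of PIV/PTV velocity samples:

```python
from math import sqrt

def turbulence_intensity(samples):
    """Turbulence intensity as rms fluctuation / mean velocity for one
    velocity component; a population (1/N) variance is used here."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((u - mean) ** 2 for u in samples) / n
    return sqrt(var) / mean
```

A value of 0.41-0.45, as reported where the jets merge, means the rms fluctuation is nearly half the local mean velocity, i.e. strongly turbulent mixing.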
Willemse, Elias J; Joubert, Johan W
2016-09-01
In this article we present benchmark datasets for the Mixed Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities (MCARPTIF). The problem is a generalisation of the Capacitated Arc Routing Problem (CARP) and closely represents waste collection routing. Four different test sets are presented, each consisting of multiple instance files, which can be used to benchmark different solution approaches for the MCARPTIF. An in-depth description of the datasets can be found in "Constructive heuristics for the Mixed Capacity Arc Routing Problem under Time Restrictions with Intermediate Facilities" (Willemse and Joubert, 2016) [2] and "Splitting procedures for the Mixed Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities" (Willemse and Joubert, in press) [4]. The datasets are publicly available from "Library of benchmark test sets for variants of the Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities" (Willemse and Joubert, 2016) [3].
Multi-terminal pipe routing by Steiner minimal tree and particle swarm optimisation
NASA Astrophysics Data System (ADS)
Liu, Qiang; Wang, Chengen
2012-08-01
Computer-aided design of pipe routing is of fundamental importance for the development of complex equipment. In this article, non-rectilinear branch pipe routing with multiple terminals, which can be formulated as a Euclidean Steiner Minimal Tree with Obstacles (ESMTO) problem, is studied in the context of aeroengine-integrated design engineering. Unlike traditional methods that connect pipe terminals sequentially, this article presents a new branch-pipe-routing algorithm based on Steiner tree theory. The article begins with a new algorithm for solving the ESMTO problem using particle swarm optimisation (PSO), and then extends the method to surface cases by using geodesics to meet the requirements of routing non-rectilinear pipes on the surfaces of aeroengines. Subsequently, an adaptive region strategy and the basic visibility-graph method are adopted to increase computational efficiency. Numerical computations show that the proposed routing algorithm can find satisfactory routing layouts while running in polynomial time.
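As a toy instance of the ESMTO formulation, a particle swarm can search for a single Steiner point minimizing the total Euclidean length to a set of terminals (no obstacles, no geodesics; the coefficients and names below are illustrative assumptions, not the paper's algorithm):

```python
import random
from math import hypot

def pso_steiner_point(terminals, iters=200, swarm=20, seed=1):
    """Particle swarm search for one Steiner point minimizing the sum of
    Euclidean distances to the given 2-D terminals (the Fermat point for
    three terminals).  Standard PSO: inertia 0.7, cognitive/social 1.5."""
    rng = random.Random(seed)

    def cost(p):
        return sum(hypot(p[0] - tx, p[1] - ty) for tx, ty in terminals)

    xs = [t[0] for t in terminals]
    ys = [t[1] for t in terminals]
    pos = [[rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys))]
           for _ in range(swarm)]
    vel = [[0.0, 0.0] for _ in range(swarm)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=cost)[:]             # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest, cost(gbest)
```

The full problem additionally chooses how many Steiner points to insert and routes around obstacles, which is where the adaptive region strategy and visibility graph come in.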
KENTUCKY STRAIGHT PIPES REPORT, DECEMBER 2002
The poor sanitary conditions and water pollution problems EPA observed in the Kentucky counties of Harlan, Martin, Bath, and Montgomery were of the highest concern. The widespread scale of both the straight pipe issues as well as package plant wastewater problems present an envir...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Peiyuan; Brown, Timothy; Fullmer, William D.
Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to granular (particles-only), bidisperse, and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is spent on particle-particle force calculations, drag force calculations, and interpolation between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
Sonic limitations and startup problems of heat pipes
NASA Technical Reports Server (NTRS)
Deverall, J. E.; Kemme, J. E.; Florschuetz, L. W.
1972-01-01
Introduction of small amounts of inert, noncombustible gas aids startup in certain types of heat pipes. When the heat pipe is closely coupled to the heat sink, the startup system must be designed to bring the heat sink on-line slowly.
The MCNP6 Analytic Criticality Benchmark Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
2016-06-16
Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
Underground pipeline laying using the pipe-in-pipe system
NASA Astrophysics Data System (ADS)
Antropova, N.; Krets, V.; Pavlov, M.
2016-09-01
The problems of resource saving and environmental safety during the installation and operation of underwater crossings are always relevant. The paper describes existing trenchless pipeline-laying methods, the structure of multi-channel pipelines, and the types of supporting and guiding systems. A rational design is suggested for the pipe-in-pipe system. A finite element model of the most dangerous sections of the inner pipes is presented, and the optimum distance between the roller supports is determined.
Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects
Lovaglio, Pietro Giorgio
2012-01-01
Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed. PMID:22666140
Within-Group Effect-Size Benchmarks for Problem-Solving Therapy for Depression in Adults
ERIC Educational Resources Information Center
Rubin, Allen; Yu, Miao
2017-01-01
This article provides benchmark data on within-group effect sizes from published randomized clinical trials that supported the efficacy of problem-solving therapy (PST) for depression among adults. Benchmarks are broken down by type of depression (major or minor), type of outcome measure (interview or self-report scale), whether PST was provided…
Noise control of waste water pipes
NASA Astrophysics Data System (ADS)
Lilly, Jerry
2005-09-01
Noise radiated by waste water pipes is a major concern in multifamily housing projects. While the most common solution to this problem is to use cast-iron pipes in lieu of plastic pipes, this may not be sufficient in high-end applications. It should also be noted that many (if not most) multifamily housing projects in the U.S.A. are constructed with plastic waste piping. This paper discusses some of the measures that developers are currently using to control noise from both plastic and cast-iron waste pipes. In addition, results of limited noise measurements of transient water flow in plastic and cast-iron waste pipes will be presented.
Predoi, Mihai Valentin
2014-09-01
The dispersion curves for hollow multilayered cylinders are prerequisites in any practical guided-wave application on such structures. The equations for homogeneous isotropic materials were established more than 120 years ago. The difficulties in finding numerical solutions to the analytic expressions remain considerable, especially if the materials are orthotropic and visco-elastic, as in the composite pipes of recent decades. Among other numerical techniques, the semi-analytical finite element method has proven its capability of solving this problem. Two possibilities exist for modeling a finite element eigenvalue problem: a two-dimensional cross-section model of the pipe, or a radial segment model intersecting the layers between the inner and the outer radius of the pipe. The latter possibility is adopted here, and distinct differential problems are deduced for longitudinal L(0,n), torsional T(0,n), and flexural F(m,n) modes. Eigenvalue problems are deduced for the three mode classes, offering explicit forms of each coefficient for the matrices used in an available general-purpose finite element code. Comparisons with existing solutions for pipes filled with non-linear viscoelastic fluid or with visco-elastic coatings, as well as for a fully orthotropic hollow cylinder, all prove the reliability and ease of use of this method. Copyright © 2014 Elsevier B.V. All rights reserved.
Phase field benchmark problems for dendritic growth and linear elasticity
Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...
2018-03-26
We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.
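The time-integrator sensitivity discussed above can be illustrated with a minimal sketch (not the CHiMaD/NIST benchmark itself): integrating a simple Allen-Cahn-type relaxation ODE with forward Euler versus classical RK4 at the same step size and comparing against the closed-form solution. All parameter values here are illustrative.

```python
import math

def f(phi):
    # Allen-Cahn-type relaxation toward the well at phi = 1
    return phi - phi**3

def exact(phi0, t):
    # closed form: u = phi^2 obeys the logistic equation du/dt = 2u(1-u)
    e = math.exp(2.0 * t)
    return math.sqrt(phi0**2 * e / (1.0 - phi0**2 + phi0**2 * e))

def euler(phi, dt, n):
    for _ in range(n):
        phi += dt * f(phi)
    return phi

def rk4(phi, dt, n):
    for _ in range(n):
        k1 = f(phi)
        k2 = f(phi + 0.5 * dt * k1)
        k3 = f(phi + 0.5 * dt * k2)
        k4 = f(phi + dt * k3)
        phi += dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0
    return phi

phi0, dt, n = 0.1, 0.1, 50            # integrate to t = 5
ref = exact(phi0, 5.0)
err_euler = abs(euler(phi0, dt, n) - ref)
err_rk4 = abs(rk4(phi0, dt, n) - ref)
print(err_euler, err_rk4)             # RK4 is far more accurate at the same dt
```

The same comparison idea, applied to a full dendritic-growth PDE, is what the benchmark problems above are designed to make reproducible.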
WATER QUALITY AND TREATMENT CONSIDERATIONS FOR CEMENT-LINED AND A-C PIPE
Both cement mortar lined (CML) and asbestos-cement pipes (A-C) are widely used in many water systems. Cement linings are also commonly applied in-situ after pipe cleaning, usually to prevent the recurrence of red water or tuberculation problems. Unfortunately, little consideratio...
Benchmarking--Measuring and Comparing for Continuous Improvement.
ERIC Educational Resources Information Center
Henczel, Sue
2002-01-01
Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…
Satellite Radar Interferometry For Risk Management Of Gas Pipeline Networks
NASA Astrophysics Data System (ADS)
Ianoschi, Raluca; Schouten, Mathijs; Bas Leezenberg, Pieter; Dheenathayalan, Prabu; Hanssen, Ramon
2013-12-01
InSAR time series analyses can be fine-tuned for specific applications, yielding a potential increase in benchmark density, precision and reliability. Here we demonstrate the algorithms developed for gas pipeline monitoring, enabling operators to precisely pinpoint unstable locations. This helps asset management in planning, prioritizing and focusing in-situ inspections, thus reducing maintenance costs. In unconsolidated Quaternary soils, ground settlement contributes to possible failure of brittle cast iron gas pipes and their connections to houses. Other risk factors include the age and material of the pipe. The soil dynamics have led to a catastrophic explosion in the city of Amsterdam, which triggered an increased awareness for the significance of this problem. As the extent of the networks can be very wide, InSAR is shown to be a valuable source of information for identifying the hazard regions. We monitor subsidence affecting an urban gas transportation network in the Netherlands using both medium and high resolution SAR data. Results for the 2003-2010 period provide clear insights on the differential subsidence rates in the area. This enables characterization of underground motion that affects the integrity of the pipeline. High resolution SAR data add extra detail of door-to-door pipeline connections, which are vulnerable due to different settlements between house connections and main pipelines. The rates which we measure represent important input in planning of maintenance works. Managers can decide the priority and timing for inspecting the pipelines. The service helps manage the risk and reduce operational cost in gas transportation networks.
Shift Verification and Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G
2016-09-07
This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and other simulated Monte Carlo radiation transport code results, and found very good agreement in a variety of comparison measures. These include prediction of critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation of Shift, we are confident in Shift to provide reference results for CASL benchmarking.
Effects of Stormwater Pipe Size and Rainfall on Sediment and Nutrients Delivered to a Coastal Bayou
Pollutants discharged from stormwater pipes can cause water quality and ecosystem problems in coastal bayous. A study was conducted to characterize sediment and nutrients discharged by small and large (<20 cm and >20 cm in internal diameter, respectively) pipes under different ...
Benchmark Problems for Spacecraft Formation Flying Missions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Leitner, Jesse A.; Burns, Richard D.; Folta, David C.
2003-01-01
To provide high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions.
Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems
NASA Technical Reports Server (NTRS)
Tam, C. K. W. (Editor); Hardin, J. C. (Editor)
1997-01-01
The proceedings of the Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems held at Florida State University are the subject of this report. For this workshop, problems arising in typical industrial applications of CAA were chosen. Comparisons between numerical solutions and exact solutions are presented where possible.
A proposed benchmark problem for cargo nuclear threat monitoring
NASA Astrophysics Data System (ADS)
Wesley Holmes, Thomas; Calderon, Adan; Peeples, Cody R.; Gardner, Robin P.
2011-10-01
There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991, [1]). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. The benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions on a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials: lead, aluminum, and plywood. The first two materials are in right circular cylindrical form, while the third is a cube. The entire system rests on a sufficiently thick lead base to reduce undesired scattering events. The configuration is arranged such that as a gamma ray moves from the source outward, it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in.×4 in.×16 in. box-style NaI(Tl) detector is placed 1 m from the point source, located in the center, with the 4 in.×16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.
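For intuition about such nested-shield geometries, the uncollided (narrow-beam) transmission can be sketched with simple exponential attenuation. The attenuation coefficients and thicknesses below are illustrative placeholders, not the benchmark's actual specification, and the sketch ignores the scattered component that a full Monte Carlo model captures.

```python
import math

# Hypothetical narrow-beam attenuation: I = I0 * exp(-sum(mu_i * t_i)).
# mu values (1/cm) are illustrative, roughly for ~662 keV gammas; t in cm.
layers = [
    ("lead",     1.25, 1.0),   # (name, mu [1/cm], thickness [cm])
    ("aluminum", 0.20, 2.0),
    ("plywood",  0.06, 5.0),
]

def transmitted_fraction(layers):
    # product of per-layer exponential attenuation factors
    total = sum(mu * t for _, mu, t in layers)
    return math.exp(-total)

frac = transmitted_fraction(layers)
print(f"uncollided transmitted fraction: {frac:.4f}")
```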
NASA Technical Reports Server (NTRS)
Marshburn, J. P.
1972-01-01
The OAO-C spacecraft has three circular heat pipes, each of a different internal design, located in the space between the spacecraft structural tube and the experiment tube, which are designed to isothermalize the structure. Two of the pipes are used to transport high heat loads, and the third is for low heat loads. The test problems deal with the charging of the pipes, modifications, the mobile tilt table, the position indicator, and the heat input mechanisms. The final results showed that the techniques used were adequate for thermal-vacuum testing of heat pipes.
Internal erosion during soil pipe flow: Role in gully erosion and hillslope instability
USDA-ARS?s Scientific Manuscript database
Many field observations have led to speculation on the role of piping in embankment failures, landslides, and gully erosion. However, there has not been a consensus on the subsurface flow and erosion processes involved, and inconsistent use of terms has exacerbated the problem. One such piping proc...
USDA-ARS?s Scientific Manuscript database
Locating buried agricultural drainage pipes is a difficult problem confronting farmers and land improvement contractors, especially in the Midwest U.S., where the removal of excess soil water using subsurface drainage systems is a common farm practice. Enhancing the efficiency of soil water removal ...
Merton's problem for an investor with a benchmark in a Barndorff-Nielsen and Shephard market.
Lennartsson, Jan; Lindberg, Carl
2015-01-01
To try to outperform an externally given benchmark with known weights is the most common equity mandate in the financial industry. For quantitative investors, this task is predominantly approached by optimizing their portfolios consecutively over short time horizons with one-period models. We seek in this paper to provide a theoretical justification to this practice when the underlying market is of Barndorff-Nielsen and Shephard type. This is done by verifying that an investor who seeks to maximize her expected terminal exponential utility of wealth in excess of her benchmark will in fact use an optimal portfolio equivalent to the one-period Markowitz mean-variance problem in continuum under the corresponding Black-Scholes market. Further, we can represent the solution to the optimization problem as in Feynman-Kac form. Hence, the problem, and its solution, is analogous to Merton's classical portfolio problem, with the main difference that Merton maximizes expected utility of terminal wealth, not wealth in excess of a benchmark.
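As a hedged sketch of the objective described above (the notation is assumed here for illustration, not quoted from the paper): an investor with portfolio wealth $X_T^{\pi}$ and benchmark wealth $B_T$ solves

$$\max_{\pi}\; \mathbb{E}\left[-\exp\bigl(-\gamma\,(X_T^{\pi}-B_T)\bigr)\right],$$

which, in the corresponding Black-Scholes market, reduces period by period to the one-period Markowitz mean-variance problem $\max_{w}\left(w^{\top}\mu - \tfrac{\gamma}{2}\,w^{\top}\Sigma w\right)$ for an excess-return mean $\mu$ and covariance $\Sigma$.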
Third Computational Aeroacoustics (CAA) Workshop on Benchmark Problems
NASA Technical Reports Server (NTRS)
Dahl, Milo D. (Editor)
2000-01-01
The proceedings of the Third Computational Aeroacoustics (CAA) Workshop on Benchmark Problems cosponsored by the Ohio Aerospace Institute and the NASA Glenn Research Center are the subject of this report. Fan noise was the chosen theme for this workshop with representative problems encompassing four of the six benchmark problem categories. The other two categories were related to jet noise and cavity noise. For the first time in this series of workshops, the computational results for the cavity noise problem were compared to experimental data. All the other problems had exact solutions, which are included in this report. The Workshop included a panel discussion by representatives of industry. The participants gave their views on the status of applying computational aeroacoustics to solve practical industry related problems and what issues need to be addressed to make CAA a robust design tool.
Isentropic fluid dynamics in a curved pipe
NASA Astrophysics Data System (ADS)
Colombo, Rinaldo M.; Holden, Helge
2016-10-01
In this paper we study isentropic flow in a curved pipe. We focus on the consequences of the geometry of the pipe on the dynamics of the flow. More precisely, we present the solution of the general Cauchy problem for isentropic fluid flow in an arbitrarily curved, piecewise smooth pipe. We consider initial data in the subsonic regime, with small total variation about a stationary solution. The proof relies on the front-tracking method and is based on [1].
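In a standard formulation of isentropic pipe flow (notation assumed here for illustration, not taken from the paper), the dynamics are governed by the compressible Euler system with a geometry source term:

$$\partial_t \rho + \partial_x(\rho u) = 0, \qquad \partial_t(\rho u) + \partial_x\bigl(\rho u^2 + p(\rho)\bigr) = G, \qquad p(\rho) = \kappa\rho^{\gamma},$$

where $G$ accounts for the pipe's curvature; the front-tracking method then approximates solutions by piecewise-constant states separated by shock and rarefaction fronts.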
PMLB: a large benchmark suite for machine learning evaluation and comparison.
Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H
2017-01-01
The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
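The dataset-by-algorithm comparison described above can be illustrated with a minimal benchmarking harness (a generic sketch in plain Python, not PMLB's actual API): run several predictors over several toy datasets and tabulate accuracy so that methods can be compared dataset by dataset.

```python
# Toy datasets: (features, labels); predictors: callables feature -> label.
datasets = {
    "threshold": ([0.1, 0.4, 0.6, 0.9], [0, 0, 1, 1]),
    "inverted":  ([0.2, 0.3, 0.7, 0.8], [1, 1, 0, 0]),
}

predictors = {
    "above_half": lambda x: int(x > 0.5),
    "always_one": lambda x: 1,
}

def accuracy(predict, xs, ys):
    # fraction of examples the predictor labels correctly
    return sum(predict(x) == y for x, y in zip(xs, ys)) / len(ys)

results = {
    (d, p): accuracy(fn, xs, ys)
    for d, (xs, ys) in datasets.items()
    for p, fn in predictors.items()
}

for (d, p), acc in sorted(results.items()):
    print(f"{d:10s} {p:10s} {acc:.2f}")
```

The point of a curated suite like PMLB is to replace the toy dictionaries above with a large, diverse, consistently formatted collection, so that such performance tables become comparable across studies.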
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.
Centrifugal compressor modifications and their effect on high-frequency pipe wall vibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Motriuk, R.W.; Harvey, D.P.
1998-08-01
High-frequency pulsation generated by centrifugal compressors, with pressure wavelengths much smaller than the attached pipe diameter, can cause fatigue failures of the compressor internals, impair compressor performance, and damage the attached compressor piping. There are numerous sources producing pulsation in centrifugal compressors. Some of them are discussed in the literature at large (Japikse, 1995; Niese, 1976). NGTL has experienced extreme high-frequency discharge pulsation and pipe wall vibration on many of its radial inlet high-flow centrifugal gas compressor facilities. These pulsations led to several piping attachment failures and compressor internal component failures while the compressor operated within the design envelope. This paper considers several pulsation conditions at an NGTL compression facility which resulted in unacceptable piping vibration. Significant vibration attenuation was achieved by modifying the compressor (the pulsation source) through removal of the diffuser vanes and partial removal of the inlet guide vanes (IGV). Direct comparisons of the changes in vibration, pulsation, and performance are made for each of the modifications. The vibration problem, probable causes, options available to address the problem, and the results of implementation are reviewed. The effects of diffuser vane removal on discharge pipe wall vibration, as well as changes in compressor performance, are described.
Research on the ITOC based scheduling system for ship piping production
NASA Astrophysics Data System (ADS)
Li, Rui; Liu, Yu-Jun; Hamada, Kunihiro
2010-12-01
Manufacturing of ship piping systems is one of the major production activities in shipbuilding. The schedule of pipe production has an important impact on the master schedule of shipbuilding. In this research, the ITOC concept was introduced to solve the scheduling problems of a piping factory, and an intelligent scheduling system was developed. The system, in which a product model, an operation model, a factory model, and a knowledge database of piping production were integrated, automated the planning process and production scheduling. Details of the above points were discussed. Moreover, an application of the system in a piping factory, which achieved a higher level of performance as measured by tardiness, lead time, and inventory, was demonstrated.
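The tardiness objective mentioned above can be illustrated with a classic single-machine sketch (a generic scheduling example, not the ITOC system itself): sequencing jobs by earliest due date (EDD), which minimizes maximum tardiness on a single machine. The job data are hypothetical.

```python
# Jobs: (name, processing_time, due_date) for a single fabrication station.
jobs = [("spool_A", 4, 10), ("spool_B", 2, 5), ("spool_C", 3, 14)]

def max_tardiness(sequence):
    # process jobs in order, tracking the worst lateness seen
    t, worst = 0, 0
    for _, proc, due in sequence:
        t += proc
        worst = max(worst, t - due)
    return worst

edd = sorted(jobs, key=lambda j: j[2])      # earliest-due-date order
print(max_tardiness(edd), max_tardiness(jobs))
```

A knowledge-based scheduler like the one described above layers factory, product, and operation models on top of dispatch rules of this kind.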
Benchmarking on Tsunami Currents with ComMIT
NASA Astrophysics Data System (ADS)
Sharghi vand, N.; Kanoglu, U.
2015-12-01
There were no standards for the validation and verification of tsunami numerical models before the 2004 Indian Ocean tsunami. Indeed, a number of numerical models had been used for inundation mapping efforts, evaluation of critical structures, etc., without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which are used in the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental, and field benchmark problems aimed at estimating maximum runup, which are widely accepted by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents, held on February 9-10, 2015 in Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems concentrate on the validation and verification of tsunami numerical models for tsunami currents. Three of the benchmark problems were: current measurements of the Japan 2011 tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), a user-friendly interface to the validated and verified Method of Splitting Tsunami (MOST) model (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316) developed by NCTR. The modeling results are compared with the required benchmark data, providing good agreement; the results are discussed.
Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe)
A suite of benchmark and challenge problems for enhanced geothermal systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Mark; Fu, Pengcheng; McClure, Mark
A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research, stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery.
Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners. We present the suite of benchmark and challenge problems developed for the GTO-CCS, providing problem descriptions and sample solutions.
Acoustic Emission Analysis of Prestressed Concrete Structures
NASA Astrophysics Data System (ADS)
Elfergani, H. A.; Pullin, R.; Holford, K. M.
2011-07-01
Corrosion is a substantial problem in numerous structures; in particular, corrosion is very serious in reinforced and prestressed concrete and must, in certain applications, be given special consideration because failure may result in loss of life and high financial cost. Furthermore, corrosion cannot be considered only a long-term problem, with many studies reporting failures of bridges and concrete pipes due to corrosion within a short period after construction. The concrete pipes which transport water are examples of structures that have suffered from corrosion; for example, the pipes of The Great Man-Made River Project of Libya. Five pipe failures due to corrosion have occurred since their installation. The main reason for the damage is corrosion of the prestressed wires in the pipes due to the attack of chloride ions from the surrounding soil. Detection of corrosion in its initial stages is very important to avoid further failures and the interruption of water flow. Even though most non-destructive methods used in the project are able to detect wire breaks, they cannot detect the presence of corrosion. Hence, in areas where no excavation has been completed, areas of serious damage can go undetected. Therefore, the major problem facing engineers is to find the best way to detect the corrosion and prevent the pipes from deteriorating. This paper reports on the use of the Acoustic Emission (AE) technique to detect the early stages of corrosion prior to deterioration of concrete structures.
NASA Astrophysics Data System (ADS)
Liu, Shuyong; Jiang, J.; Parr, Nicola
2016-09-01
Water loss in distribution systems is a global problem for the water industry and governments. According to the International Water Supply Association (IWSA), as a result of leaks from distribution pipes, 20% to 30% of water is lost while in transit from treatment plants to consumers. Although governments have tried to push the water industry to reduce water leaks, many experts have pointed out that the wide use of plastic pipes instead of metal pipes in recent years has made leak detection with current acoustic technology difficult. Leaks from plastic pipes are much quieter than those from traditional metal pipes, and compared to metal pipes, plastic pipes have very different coupling characteristics with soil, water, and surrounding structures, such as other pipes, road surfaces, and building foundations. The dispersion characteristics of waves propagating along buried plastic pipes are investigated in this paper using finite element and boundary element based models. Both empty and water-filled pipes were considered. Influences from nearby pipes and building foundations were carefully studied. The results show that soil conditions and nearby structures have significant influences on the dispersion characteristics of waves propagating along buried plastic pipes.
Verification and benchmark testing of the NUFT computer code
NASA Astrophysics Data System (ADS)
Lee, K. H.; Nitao, J. J.; Kulshrestha, A.
1993-10-01
This interim report presents results of work completed in the ongoing verification and benchmark testing of the NUFT (Nonisothermal Unsaturated-saturated Flow and Transport) computer code. NUFT is a suite of multiphase, multicomponent models for numerical solution of thermal and isothermal flow and transport in porous media, with application to subsurface contaminant transport problems. The code simulates the coupled transport of heat, fluids, and chemical components, including volatile organic compounds. Grid systems may be Cartesian or cylindrical, with one-, two-, or fully three-dimensional configurations possible. In this initial phase of testing, the NUFT code was used to solve seven one-dimensional unsaturated flow and heat transfer problems: three verification and four benchmarking problems. In the verification testing, excellent agreement was observed between NUFT results and the analytical or quasi-analytical solutions. In the benchmark testing, the results of the code intercomparison were very satisfactory. From these results, it is concluded that the NUFT code is ready for application to field and laboratory problems similar to those addressed here. Multidimensional problems, including those dealing with chemical transport, will be addressed in a subsequent report.
NASA Astrophysics Data System (ADS)
Hooda, Nikhil; Damani, Om
2017-06-01
The classic problem of the capital cost optimization of branched piped networks consists of choosing pipe diameters for each pipe in the network from a discrete set of commercially available pipe diameters. Each pipe in the network can consist of multiple segments of differing diameters. Water networks also contain intermediate tanks that act as buffers between the incoming flow from the primary source and the outgoing flow to the demand nodes. The network from the primary source to the tanks is called the primary network, and the network from the tanks to the demand nodes is called the secondary network. During the design stage, the primary and secondary networks are optimized separately, with the tanks acting as demand nodes for the primary network. Typically, the choice of tank locations, their elevations, and the set of demand nodes to be served by each tank is made manually, in an ad hoc fashion, before any optimization is done. It is desirable, therefore, to include this tank configuration choice in the cost optimization process itself. In this work, we explain why the choice of tank configuration is important to the design of a network and describe an integer linear program model that integrates the tank configuration choice into the standard pipe diameter selection problem. In order to aid the designers of piped-water networks, the improved cost optimization formulation is incorporated into our existing network design system called JalTantra.
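A minimal sketch of the underlying discrete choice (one diameter per link, exhaustive search, Hazen-Williams head loss) may help fix ideas. The diameters, unit costs, lengths, and flows below are hypothetical; the actual JalTantra formulation is an integer linear program that additionally splits each link into segments of two diameters and integrates the tank configuration:

```python
import itertools

# Hypothetical commercial diameters (m) and unit costs (per metre) - illustrative only
DIAMETERS = [0.10, 0.15, 0.20]
UNIT_COST = {0.10: 500.0, 0.15: 900.0, 0.20: 1400.0}

def headloss(length, flow, diameter, c_hw=130.0):
    """Hazen-Williams friction head loss (SI units: m, m^3/s)."""
    return 10.67 * length * flow**1.852 / (c_hw**1.852 * diameter**4.87)

def cheapest_design(links, available_head):
    """Pick one diameter per link to minimize cost while keeping the total
    head loss on the route within the available head (brute force)."""
    best = None
    for choice in itertools.product(DIAMETERS, repeat=len(links)):
        cost = sum(UNIT_COST[d] * length for d, (length, _) in zip(choice, links))
        loss = sum(headloss(length, q, d) for d, (length, q) in zip(choice, links))
        if loss <= available_head and (best is None or cost < best[0]):
            best = (cost, choice)
    return best

# Two links in series: (length_m, flow_m3s); 5 m of head available
design = cheapest_design([(300.0, 0.01), (200.0, 0.008)], available_head=5.0)
```

Exhaustive search is exponential in the number of links, which is exactly why the paper's ILP formulation matters for real networks.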
Sensitivity Analysis of OECD Benchmark Tests in BISON
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON fuels performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
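The two correlation measures named in the abstract are straightforward to compute from sampled input/response pairs. A minimal pure-Python sketch (no tie correction in the rank transform, unlike production statistics libraries) is:

```python
def pearson(x, y):
    """Pearson linear correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(v):
    """Rank transform, 1..n (ties broken arbitrarily - no midrank correction)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = float(rank + 1)
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

# A monotone but non-linear input-response relation: Spearman is 1, Pearson is not
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [x**3 for x in xs]
```

The contrast between the two is the reason sensitivity studies like this one report both: Spearman captures monotone non-linear dependence that Pearson understates.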
Analyzing the BBOB results by means of benchmarking concepts.
Mersmann, O; Preuss, M; Trautmann, H; Bischl, B; Weihs, C
2015-01-01
We present methods to answer two basic questions that arise when benchmarking optimization algorithms: which algorithm is the "best" one, and which algorithm should I use for my real-world problem? The two questions are connected, and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments, as a first step toward answering them. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework, and we derive insights regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus, its theoretical background, and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these groups are reflected by previously proposed test problem characteristics, finding that this is not always the case.
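One simple consensus rule for aggregating per-problem rankings is the Borda count; the sketch below is illustrative only (the paper analyzes consensus rankings and their pitfalls far more carefully), with hypothetical algorithms A, B, and C:

```python
from collections import defaultdict

def borda_consensus(rankings):
    """Aggregate per-problem rankings (each listed best-first) into a consensus
    order by Borda count: on each problem the best of n algorithms gets n-1
    points, the worst gets 0; ties in total score are broken alphabetically."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, algo in enumerate(ranking):
            scores[algo] += n - 1 - position
    return sorted(scores, key=lambda a: (-scores[a], a))

# Three hypothetical per-problem rankings
consensus = borda_consensus([["A", "B", "C"],
                             ["A", "C", "B"],
                             ["B", "A", "C"]])
```

Borda is only one of many consensus rules, and different rules can disagree on the same data, which is one of the pitfalls the paper warns about.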
NASA Astrophysics Data System (ADS)
Kumpel, E.; Nelson, K. L.
2012-12-01
An increasing number of urban residents in low- and middle-income countries have access to piped water; however, this water is often not available continuously. 84% of reporting utilities in low-income countries provide piped water for fewer than 24 hours per day (van den Berg and Danilenko, 2010), while no major city in India has continuous piped water supply. Intermittent water supply leaves pipes vulnerable to contamination and forces households to store water or rely on alternative unsafe sources, posing a health threat to consumers. In these systems, pipes are empty for long periods of time and experience low or negative pressure even when water is being supplied, leaving them susceptible to intrusion from sewage, soil, or groundwater. Households with a non-continuous supply must collect and store water, presenting more opportunities for recontamination. Upgrading to a continuous water supply, while an obvious solution to these challenges, is currently out of reach for many resource-constrained utilities. Despite its widespread prevalence, there are few data on the mechanisms causing contamination in an intermittent supply and the frequency with which it occurs. Understanding the impact of intermittent operation on water quality can lead to strategies to improve access to safe piped water for the millions of people currently served by these systems. We collected over 100 hours of continuous measurements of pressure and physico-chemical water quality indicators and tested over 1,000 grab samples for indicator bacteria over 14 months throughout the distribution system in Hubli-Dharwad, India. This data set is used to explore and explain the mechanisms influencing water quality when piped water is provided for a few hours every 3-5 days. 
These data indicate that contamination occurs along the distribution system as water travels from the treatment plant to reservoirs and through intermittently supplied pipes to household storage containers, while real-time measurements document variability in water quality throughout the 2-8 hour supply period. Our results show that piped water is not always safe water, but that safe water can be achieved in an intermittent supply under certain physical and operational conditions. Intermittent piped water supply is an important constraint on access to safe water in towns and cities in low-income countries, and strategies that improve these existing systems can help urban residents gain access to safe water. References van den Berg, C., and Danilenko, A. (2010). "The IBNET Water Supply and Sanitation Performance Blue Book: The International Benchmarking Network for Water and Sanitation Utilities Databook." World Bank Washington, DC.
Least-Squares Spectral Element Solutions to the CAA Workshop Benchmark Problems
NASA Technical Reports Server (NTRS)
Lin, Wen H.; Chan, Daniel C.
1997-01-01
This paper presents computed results for some of the CAA benchmark problems via the acoustic solver developed at the Rocketdyne CFD Technology Center under the corporate agreement between Boeing North American, Inc. and NASA for the Aerospace Industry Technology Program. The calculations are considered benchmark tests of the functionality, accuracy, and performance of the solver. The results of these computations demonstrate that the solver is capable of simulating the propagation of aeroacoustic signals. Testing on sound generation and on more realistic problems is now being pursued for industrial applications of the solver. Numerical calculations were performed for the second problem of Category 1 of the current workshop problems, an acoustic pulse scattered from a rigid circular cylinder, and for two of the first CAA workshop problems, i.e., the first problem of Category 1, the propagation of a linear wave, and the first problem of Category 4, an acoustic pulse reflected from a rigid wall in a uniform flow of Mach 0.5. The aim of including the last two problems in this workshop is to test the effectiveness of some boundary conditions set up in the solver. Numerical results for the last two benchmark problems have been compared with their corresponding exact solutions, and the agreement is excellent. This demonstrates the high fidelity of the solver in handling wave propagation problems, and makes the method attractive for developing a computational acoustic solver for calculating aero/hydrodynamic noise in a violent flow environment.
Asymptotic scalings of developing curved pipe flow
NASA Astrophysics Data System (ADS)
Ault, Jesse; Chen, Kevin; Stone, Howard
2015-11-01
Asymptotic velocity and pressure scalings are identified for the developing curved pipe flow problem in the limit of small pipe curvature and high Reynolds numbers. The continuity and Navier-Stokes equations in toroidal coordinates are linearized about Dean's analytical curved pipe flow solution (Dean 1927). Applying appropriate scaling arguments to the perturbation pressure and velocity components and taking the limits of small curvature and large Reynolds number yields a set of governing equations and boundary conditions for the perturbations, independent of any Reynolds number and pipe curvature dependence. Direct numerical simulations are used to confirm these scaling arguments. Fully developed straight pipe flow is simulated entering a curved pipe section for a range of Reynolds numbers and pipe-to-curvature radius ratios. The maximum values of the axial and secondary velocity perturbation components along with the maximum value of the pressure perturbation are plotted along the curved pipe section. The results collapse when the scaling arguments are applied. The numerically solved decay of the velocity perturbation is also used to determine the entrance/development lengths for the curved pipe flows, which are shown to scale linearly with the Reynolds number.
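The governing parameter for loosely coiled pipe flow and the reported linear entrance-length scaling can be sketched numerically. The prefactor in the development-length estimate below is a placeholder assumption, not a value from the paper:

```python
import math

def dean_number(reynolds, pipe_radius, curvature_radius):
    """Dean number De = Re * sqrt(a/R), the governing parameter for curved
    pipe flow in the small-curvature limit a/R << 1."""
    return reynolds * math.sqrt(pipe_radius / curvature_radius)

def entrance_length(reynolds, pipe_radius, prefactor=1.0):
    """Development length of the curved-pipe perturbations; the paper reports
    linear scaling with Re (the O(1) prefactor here is a placeholder)."""
    return prefactor * reynolds * pipe_radius

# Doubling Re doubles the predicted development length at fixed geometry
L1 = entrance_length(500.0, 0.01)
L2 = entrance_length(1000.0, 0.01)
```

The linear-in-Re scaling is the key practical result: unlike straight-pipe entrance lengths, it lets development lengths for different flows collapse onto one curve.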
Influence of dimension parameters of the gravity heat pipe on the thermal performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kosa, Ľuboš, E-mail: lubos.kosa@fstroj.uniza.sk; Nemec, Patrik, E-mail: patrik.nemec@fstroj.uniza.sk; Jobb, Marián, E-mail: marian.jobb@fstroj.uniza.sk
A growing problem with the increasing number of electronic devices is the removal of Joule heat. Joule heating, also known as ohmic heating or resistive heating, is the process by which the passage of an electric current through a conductor releases heat. Reliable, dust-proof cooling of electronic components ensures a longer equipment life. One alternative for transferring heat without mechanical equipment is the heat pipe. Heat pipes are easy to manufacture and maintain, with low initial investment cost. A further advantage of the heat pipe is that it can be used in a hermetically closed electronic device, in which no air is exchanged between the device and the environment. This experiment deals with the influence of the working-tube diameter and the working fluid on the performance parameters: changing the working fluid and the tube diameter changes the thermal performance of the heat pipe. The result of this paper is the optimal diameter and the ideal working fluid for the greatest heat transfer for a tube of 1 cm² cross-sectional area.
Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)
2013-01-01
Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.
OPEN PROBLEM: Turbulence transition in pipe flow: some open questions
NASA Astrophysics Data System (ADS)
Eckhardt, Bruno
2008-01-01
The transition to turbulence in pipe flow is a longstanding problem in fluid dynamics. In contrast to many other transitions it is not connected with linear instabilities of the laminar profile and hence follows a different route. Experimental and numerical studies within the last few years have revealed many unexpected connections to the nonlinear dynamics of strange saddles and have considerably improved our understanding of this transition. The text summarizes some of these insights and points to some outstanding problems in areas where valuable contributions from nonlinear dynamics can be expected.
Memory-Intensive Benchmarks: IRAM vs. Cache-Based Machines
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Gaeke, Brian R.; Husbands, Parry; Li, Xiaoye S.; Oliker, Leonid; Yelick, Katherine A.; Biegel, Bryan (Technical Monitor)
2002-01-01
The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic control structures, and the ratio of computation to memory operations.
Research on Buckling State of Prestressed Fiber-Strengthened Steel Pipes
NASA Astrophysics Data System (ADS)
Wang, Ruheng; Lan, Kunchang
2018-01-01
The main methods for restoring damaged oil and gas pipelines include welding reinforcement, fixture reinforcement, and fiber-material reinforcement. Owing to the severe corrosion of pipes in practical use, research on the renovation and consolidation of damaged pipes has gained extensive attention from experts and scholars both at home and abroad. This paper analyzes the mechanical behavior of reinforced pressure pipelines, with further study of the critical buckling and failure strength of pressure pipelines, providing a theoretical basis for prestressed fiber-strengthened steel pipes. Deformation compatibility equations and buckling control equations for steel pipes under prestress are deduced using the Rayleigh-Ritz method, an approximation method based on the stationary-potential-energy theorem and the minimum-potential-energy principle. From the deformation of the prestressed steel pipes, the deflection differential equation is established, and the critical buckling value under prestress is obtained.
NASA Technical Reports Server (NTRS)
1997-01-01
Small Business Innovation Research contracts from Goddard Space Flight Center to Thermacore Inc. have fostered the company's work on devices called "heat pipes" for space applications. Heat pipes are important to spacecraft for controlling the extreme temperature ranges encountered in space. The problem was to maintain an 8-watt central processing unit (CPU) at less than 90 C in a notebook computer using no power, with very little space available, and without forced convection. Thermacore's answer was a powder-metal wick that transfers CPU heat from a tightly confined spot to an area near the available airflow. The heat pipe technology permits a notebook computer to be operated in any position without loss of performance. Miniature heat pipe technology has been applied successfully, for example in Pentium-processor notebook computers, and the company expects its heat pipes to accommodate desktop computers as well. Cellular phones, camcorders, and other hand-held electronics are possible applications for heat pipes.
Radiation Detection Computational Benchmark Scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.
2013-09-24
Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessment of the operational performance of radiation detection systems. This can, however, result in large and complex scenarios which are time-consuming to model. A variety of approaches to radiation transport modeling exist, with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL's ADVANTG) which combine the benefits of multiple approaches, illustrates the need for a means of evaluating and comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios which include experimental data or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations.
The results of those ADVANTG calculations were then sent to PNNL for compilation. This report describes the details of the selected benchmarks and the results from the various transport codes.
Model Prediction Results for 2007 Ultrasonic Benchmark Problems
NASA Astrophysics Data System (ADS)
Kim, Hak-Joon; Song, Sung-Jin
2008-02-01
The World Federation of NDE Centers (WFNDEC) posed two types of problems for the 2007 ultrasonic benchmark: prediction of side-drilled hole responses with 45° and 60° refracted shear waves, and the effects of surface curvature on the ultrasonic response of a flat-bottomed hole. To solve this year's ultrasonic benchmark problems, we applied multi-Gaussian beam models for the calculation of ultrasonic beam fields, and the Kirchhoff approximation and the separation-of-variables method for the calculation of the far-field scattering amplitudes of flat-bottomed holes and side-drilled holes, respectively. In this paper, we present comparisons of model predictions to experiments for side-drilled holes, and discuss the effect of interface curvature on ultrasonic responses by comparing the peak-to-peak amplitudes of flat-bottomed hole responses for different sizes and interface curvatures.
Analytical scaling relations to evaluate leakage and intrusion in intermittent water supply systems.
Taylor, David D J; Slocum, Alexander H; Whittle, Andrew J
2018-01-01
Intermittent water supplies (IWS) deliver piped water to one billion people; this water is often microbially contaminated. Contaminants that accumulate while IWS are depressurized are flushed into customers' homes when these systems become pressurized. In addition, during the steady-state phase of IWS, contaminants from higher-pressure sources (e.g., sewers) may continue to intrude where pipe pressure is low. To guide the operation and improvement of IWS, this paper proposes an analytic model relating supply pressure, supply duration, leakage, and the volume of intruded, potentially-contaminated, fluids present during flushing and steady-state. The proposed model suggests that increasing the supply duration may improve water quality during the flushing phase, but decrease the subsequent steady-state water quality. As such, regulators and academics should take more care in reporting if water quality samples are taken during flushing or steady-state operational conditions. Pipe leakage increases with increased supply pressure and/or duration. We propose using an equivalent orifice area (EOA) to quantify pipe quality. This provides a more stable metric for regulators and utilities tracking pipe repairs. Finally, we show that the volume of intruded fluid decreases in proportion to reductions in EOA. The proposed relationships are applied to self-reported performance indicators for IWS serving 108 million people described in the IBNET database and in the Benchmarking and Data Book of Water Utilities in India. This application shows that current high-pressure, continuous water supply targets will require extensive EOA reductions. For example, in order to achieve national targets, utilities in India will need to reduce their EOA by a median of at least 90%.
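The orifice-equation relationship underlying the EOA metric can be sketched as follows, with the discharge coefficient folded into the equivalent orifice area (so Cd = 1 here) and purely illustrative numbers:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def leak_flow(eoa, head):
    """Orifice-equation leak (or intrusion) flow Q = EOA * sqrt(2 g h),
    with the discharge coefficient absorbed into the equivalent orifice area."""
    return eoa * math.sqrt(2.0 * G * head)

def eoa_from_leakage(flow, head):
    """Back out the equivalent orifice area from a measured leak flow
    at a known pressure head - the pipe-quality metric proposed above."""
    return flow / math.sqrt(2.0 * G * head)

# At fixed head, flow scales linearly with EOA: a 90% EOA reduction
# cuts the leak/intrusion flow by 90%
q_before = leak_flow(eoa=1e-4, head=10.0)  # 1 cm^2 of equivalent orifice
q_after = leak_flow(eoa=1e-5, head=10.0)   # after a 90% EOA reduction
```

This linearity in EOA (at fixed head) is why the paper argues EOA is a more stable repair-tracking metric than leakage volume, which also varies with supply pressure and duration.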
NASA Astrophysics Data System (ADS)
Steefel, C. I.
2015-12-01
Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of the subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of the subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed-form solutions are necessary and useful, but limited to situations far simpler than typical applications, which combine many physical and chemical processes, often in coupled form. In the absence of closed-form yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable for qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code (the reactive transport codes play a supporting role in this regard), but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally relevant benchmark problem that tests the conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include 1) microbially-mediated reactions, 2) isotopes, 3) multi-component diffusion, 4) uranium fate and transport, 5) metal mobility in mining-affected systems, and 6) waste repositories and related aspects.
ERIC Educational Resources Information Center
Herman, Joan L.; Baker, Eva L.
2005-01-01
Many schools are moving to develop benchmark tests to monitor their students' progress toward state standards throughout the academic year. Benchmark tests can provide the ongoing information that schools need to guide instructional programs and to address student learning problems. The authors discuss six criteria that educators can use to…
NASA Astrophysics Data System (ADS)
Trindade, B. C.; Reed, P. M.
2017-12-01
The growing access to and reduced cost of computing power in recent years has promoted rapid development and application of multi-objective water supply portfolio planning. As this trend continues, there is a pressing need for flexible risk-based simulation frameworks and improved algorithm benchmarking for emerging classes of water supply planning and management problems. This work contributes the Water Utilities Management and Planning (WUMP) model: a generalizable and open source simulation framework designed to capture how water utilities can minimize operational and financial risks by regionally coordinating planning and management choices, i.e., making more efficient and coordinated use of restrictions, water transfers, and financial hedging, combined with possible construction of new infrastructure. We introduce the WUMP simulation framework as part of a new multi-objective benchmark problem for the planning and management of regionally integrated water utility companies. In this problem, a group of fictitious water utilities seek to balance the use of the reliability-driven actions mentioned above (e.g., restrictions, water transfers, and infrastructure pathways) against their inherent financial risks. Several traits make this an ideal benchmark problem, namely the presence of (1) strong non-linearities and discontinuities in the Pareto front, caused by the step-wise nature of the decision-making formulation and by the abrupt addition of storage through infrastructure construction, (2) noise due to the stochastic nature of the streamflows and water demands, and (3) non-separability resulting from the cooperative formulation of the problem, in which decisions made by one stakeholder may substantially impact others. Both the open source WUMP simulation framework and its demonstration in a challenging benchmarking example hold value for promoting broader advances in urban water supply portfolio planning for regions confronting change.
Development of a curved pipe capability for the NASTRAN finite element program
NASA Technical Reports Server (NTRS)
Jeter, J. W., Jr.
1977-01-01
A curved pipe element capability for the NASTRAN structural analysis program is developed using the NASTRAN dummy element feature. A description is given of the theory involved in the subroutines which describe stiffness, mass, thermal and enforced deformation loads, and force and stress recovery for the curved pipe element. Incorporation of these subroutines into NASTRAN is discussed. Test problems are proposed. Instructions on use of the new element capability are provided.
Benchmarking: A Process for Improvement.
ERIC Educational Resources Information Center
Peischl, Thomas M.
One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, is partially solving that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…
Solution of the neutronics code dynamic benchmark by finite element method
NASA Astrophysics Data System (ADS)
Avvakumov, A. V.; Vabishchevich, P. N.; Vasilev, A. O.; Strizhov, V. F.
2016-10-01
The objective is to analyze the dynamic benchmark developed by Atomic Energy Research for the verification of best-estimate neutronics codes. The benchmark scenario includes the asymmetrical ejection of a control rod in a water-type hexagonal reactor at hot zero power. A simple Doppler feedback mechanism assuming adiabatic fuel temperature heating is proposed. The finite element method on triangular calculation grids is used to solve the three-dimensional neutron kinetics problem. The software has been developed using the engineering and scientific calculation library FEniCS. The matrix spectral problem is solved using the scalable and flexible toolkit SLEPc. The solution accuracy of the dynamic benchmark is analyzed by successively refining the calculation grid and varying the degree of the finite elements.
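The matrix spectral problem mentioned in the abstract is solved with SLEPc; as a toy analogue, the dominant eigenpair of a small matrix can be found by power iteration. This is a deliberately simplified stand-in for the fundamental-mode spectral solve, not the method SLEPc actually uses:

```python
def power_iteration(matrix, iters=200):
    """Dominant eigenvalue and eigenvector of a square matrix by power
    iteration, normalizing by the max-magnitude component each step."""
    n = len(matrix)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)      # Rayleigh-like estimate of |lambda_max|
        v = [x / lam for x in w]          # renormalize to avoid overflow
    return lam, v

# Symmetric 2x2 example: eigenvalues are 3 and 1, so power iteration
# converges to lambda = 3 with eigenvector proportional to (1, 1)
lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])
```

Production eigensolvers such as SLEPc's use Krylov subspace methods with far better convergence, but the fixed-point structure (apply operator, renormalize, repeat) is the same.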
Investigation on size tolerance of pore defect of girth weld pipe.
Li, Yan; Shuai, Jian; Xu, Kui
2018-01-01
Welding quality control is an important factor in the safe operation of oil and gas pipes, especially high-strength steel pipes. Size control of welding defects is a bottleneck problem in current pipe construction. As a key part of the construction procedure for butt-welding of pipes, pore defects in girth welds cannot be ignored. A three-dimensional non-linear finite element model is established to study the applicability of size control indices, accounting for the groove shape and the softening of material in the heat-affected zone of a practical pipe girth weld. Taking the design criteria of the pipe as a basis, basic tensile, extreme tensile and extreme compressive loading conditions are determined for pipe stress analysis, and a failure criterion based on flow stress is employed to perform stress analysis of pipe girth welds with pore defects. Results show that girth welding stresses are similar for pores at various radial locations, whereas stresses for pores of different sharpness vary significantly. In addition, the tolerance of API 5L X90 grade pipe to pore defects in girth welds is lower than that of API 5L X80 grade pipe, and the 3 mm size control index for pore defects in current standards is applicable to API 5L X80 and X90 grade girth-welded pipes with radially non-sharp pore defects. PMID:29364986
New technique for installing screen wicking into Inconel 718 heat pipe
NASA Astrophysics Data System (ADS)
Giriunas, Julius A.; Watson, Gordon K.; Tower, Leonard K.
1993-01-01
The creep behavior of superalloys, including Inconel 718, in the presence of liquid sodium is not yet known. To study this problem, the NASA Lewis Research Center has initiated a program with the Energy Technology Engineering Center (ETEC) of Rockwell International Corporation to fill with sodium and creep-test three small cylindrical heat pipes of Inconel 718 for a period of 1000 hours each. This report documents the design and the construction methods that were used at NASA Lewis to fabricate these heat pipes. Of particular importance in the heat pipe construction was the installation of the screen wicking by using an expandable mandrel and differential thermal expansion. This installation technique differs from anything known to have been reported in the heat pipe literature and may be of interest to other workers in the heat pipe field.
[Study on the automatic parameters identification of water pipe network model].
Jia, Hai-Feng; Zhao, Qi-Feng
2010-01-01
Based on an analysis of problems in the development and application of water pipe network models, automatic identification of model parameters is regarded as a key bottleneck for model application in water supply enterprises. A methodology for automatic parameter identification of water pipe network models based on GIS and SCADA databases is proposed. The kernel algorithms are then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, and MCS (Monte Carlo Sampling) is used for automatic identification of parameter values; a detailed technical route based on RSA and MCS is presented. A module for automatic parameter identification of water pipe network models is developed. Finally, taking a typical water pipe network as a case, a case study on automatic parameter identification is conducted and satisfactory results are achieved.
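The RSA/MCS combination described above can be illustrated on a toy calibration problem. The quadratic head-loss model, parameter range, and acceptance threshold below are all invented for illustration; a real application would wrap a hydraulic network simulator instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network model": head loss h = r * q**2, with unknown resistance r.
# This stands in for a hydraulic simulator; the true value below is invented.
def model(r, q):
    return r * q**2

q_obs = np.array([1.0, 2.0, 3.0])
h_obs = model(2.5, q_obs)                     # synthetic observations, r_true = 2.5

# MCS: Monte Carlo sampling of the parameter prior
r_samples = rng.uniform(0.5, 5.0, 10000)
rmse = np.array([np.sqrt(np.mean((model(r, q_obs) - h_obs) ** 2))
                 for r in r_samples])
r_best = r_samples[np.argmin(rmse)]           # identified parameter value

# RSA: split samples into behavioural / non-behavioural sets and compare
# their marginal distributions; a large KS distance flags a sensitive parameter.
behavioural = rmse < 0.5

def ks_distance(a, b):
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

ks = ks_distance(r_samples[behavioural], r_samples[~behavioural])
```

Here the resistance parameter is strongly sensitive (large KS distance) and the Monte Carlo search recovers the synthetic true value closely.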
A Methodology for Benchmarking Relational Database Machines,
1984-01-01
user benchmarks is to compare the multiple users to the best-case performance. The data for each query classification coll… and the performance…called a benchmark. The term benchmark originates from the markers used by surveyors in establishing common reference points for their measure…formatted databases. In order to further simplify the problem, we restrict our study to those DBMs which support the relational model. A survey…
Creation of problem-dependent Doppler-broadened cross sections in the KENO Monte Carlo code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Shane W. D.; Celik, Cihangir; Maldonado, G. Ivan
2015-11-06
In this paper, we introduce a quick method for improving the accuracy of Monte Carlo simulations by generating one- and two-dimensional cross sections at a user-defined temperature before performing transport calculations. A finite difference method is used to Doppler-broaden cross sections to the desired temperature, and unit-base interpolation is done to generate the probability distributions for double-differential two-dimensional thermal moderator cross sections at any arbitrary user-defined temperature. The accuracy of these methods is tested using a variety of contrived problems. In addition, various benchmarks at elevated temperatures are modeled, and results are compared with benchmark results. Lastly, the problem-dependent cross sections are observed to produce eigenvalue estimates that are closer to the benchmark results than those without the problem-dependent cross sections.
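As a hedged sketch of temperature-dependent cross-section preparation: one common scheme interpolates tabulated cross sections linearly in the square root of temperature between bounding library temperatures. The values below are invented, and this interpolation stands in for, not reproduces, the finite-difference broadening used in KENO.

```python
import numpy as np

# Cross sections tabulated at two bounding library temperatures (values invented)
T_lo, T_hi = 293.6, 600.0                    # K
sigma_lo = np.array([10.0, 8.0, 5.0])        # barns
sigma_hi = np.array([11.2, 8.9, 5.4])

def sigma_at(T):
    """Linear interpolation in sqrt(T) between the bounding temperatures --
    a common approximation for temperature-dependent cross sections, used
    here as a stand-in for KENO's finite-difference Doppler broadening."""
    f = (np.sqrt(T) - np.sqrt(T_lo)) / (np.sqrt(T_hi) - np.sqrt(T_lo))
    return (1.0 - f) * sigma_lo + f * sigma_hi
```

At the bounding temperatures the scheme reproduces the library values exactly, and at any intermediate temperature it returns values between them.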
NASA Technical Reports Server (NTRS)
Enginer, J. E.; Luedke, E. E.; Wanous, D. J.
1976-01-01
Continuing efforts toward large gains in heat-pipe performance are reported. It was found that gas-controlled variable-conductance heat pipes can perform reliably for long periods in space and can effectively provide temperature stabilization for spacecraft electronics. A solution was formulated that allows the control gas to vent through the arterial heat-pipe walls, thus eliminating the problem of arterial failure under load due to trace impurities of noncondensable gas trapped in an arterial bubble during priming. This solution functions well in zero gravity. Another solution was found that allows priming at a much lower fluid charge. A high-capacity heat pipe was fabricated that provides close temperature control of the heat source, independent of large variations in sink temperature.
Flow behaviour and transitions in surfactant-laden gas-liquid vertical flows
NASA Astrophysics Data System (ADS)
Zadrazil, Ivan; Chakraborty, Sourojeet; Matar, Omar; Markides, Christos
2016-11-01
The aim of this work is to elucidate the effect of surfactant additives on vertical gas-liquid counter-current pipe flows. Two experimental campaigns were undertaken, one with water and one with a light oil (Exxsol D80) as the liquid phase; in both cases air was used as the gaseous phase. Suitable surfactants were added to the liquid phase up to the critical micelle concentration (CMC); measurements in the absence of additives were also taken, for benchmarking. The experiments were performed in a 32-mm-bore, 5-m-long vertical pipe, over a range of superficial velocities (liquid: 1 to 7 m/s, gas: 1 to 44 m/s). High-speed axial- and side-view imaging was performed at different lengths along the pipe, together with pressure drop measurements. Flow regime maps were then obtained describing the observed flow behaviour and related phenomena, i.e., downwards/upwards annular flow, flooding, bridging, gas/liquid entrainment, oscillatory film flow, standing waves, climbing films, churn flow and dryout. Comparisons of the air-water and air-oil results will be presented and discussed, along with the role of the surfactants in affecting overall and detailed flow behaviour and transitions; in particular, a possible mechanism underlying the phenomenon of flooding will be presented. EPSRC UK Programme Grant EP/K003976/1.
Benchmarking a Visual-Basic based multi-component one-dimensional reactive transport modeling tool
NASA Astrophysics Data System (ADS)
Torlapati, Jagadish; Prabhakar Clement, T.
2013-01-01
We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does not require any additional software tools. The code can be easily adapted by others for simulating different types of laboratory-scale reactive transport experiments. We illustrate the capabilities of the tool by solving five benchmark problems with varying levels of reaction complexity. These literature-derived benchmarks are used to highlight the versatility of the code for solving a variety of practical reactive transport problems. The benchmarks are described in detail to provide a comprehensive database, which can be used by model developers to test other numerical codes. The VBA code presented in the study is a practical tool that can be used by laboratory researchers for analyzing both batch and column datasets within an EXCEL platform.
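A minimal kernel in the spirit of such a one-dimensional reactive transport code (though in Python rather than the paper's Visual Basic) is explicit upwind advection with first-order decay; the grid, velocity, and rate constant below are illustrative.

```python
import numpy as np

# dC/dt = -v dC/dx - k1 C, explicit upwind in space, forward Euler in time.
# Velocity, rate, and grid are illustrative; inlet held at C_in = 1.
v, k1 = 0.5, 0.1                  # m/d, 1/d
dx, dt, nx, nt = 0.1, 0.1, 100, 200
assert v * dt / dx <= 1.0         # CFL condition for the explicit upwind scheme

C = np.zeros(nx)
for _ in range(nt):
    upwind = np.concatenate(([1.0], C[:-1]))       # constant inlet boundary
    C = C + dt * (-v * (C - upwind) / dx - k1 * C)

# Discrete steady state decays cell-to-cell by rho = (v/dx) / (v/dx + k1),
# approximating the analytic profile C(x) = exp(-k1 * x / v).
rho = (v / dx) / (v / dx + k1)
```

After the front has swept the domain, the profile matches the analytic exponential to within the scheme's numerical diffusion, which is the kind of benchmark comparison the paper performs for its more complex reaction networks.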
NASA Astrophysics Data System (ADS)
Thienel, Lee; Stouffer, Chuck
1995-09-01
This paper presents an overview of the Cryogenic Test Bed (CTB) experiments including experiment results, integration techniques used, and lessons learned during integration, test and flight phases of the Cryogenic Heat Pipe Flight Experiment (STS-53) and the Cryogenic Two Phase Flight Experiment (OAST-2, STS-62). We will also discuss the Cryogenic Flexible Diode Heat Pipe (CRYOFD) experiment which will fly in the 1996/97 time frame and the fourth flight of the CTB which will fly in the 1997/98 time frame. The two missions tested two oxygen axially grooved heat pipes, a nitrogen fibrous wick heat pipe and a 2-methylpentane phase change material thermal storage unit. Techniques were found for solving problems with vibration from the cryo-coolers transmitted through the compressors and the cold heads, and for mounting the heat pipe without introducing parasitic heat leaks. A thermally conductive interface material was selected that would meet the requirements and perform over the temperature range of 55 to 300 K. Problems are discussed with the bi-metallic thermostats used for heater circuit protection and the S-Glass suspension straps originally used to secure the BETSU PCM in the CRYOTP mission. Flight results will be compared to 1-g test results and differences will be discussed.
Introduction to the IWA task group on biofilm modeling.
Noguera, D R; Morgenroth, E
2004-01-01
An International Water Association (IWA) Task Group on Biofilm Modeling was created with the purpose of comparatively evaluating different biofilm modeling approaches. The task group developed three benchmark problems for this comparison, and used a diversity of modeling techniques that included analytical, pseudo-analytical, and numerical solutions to the biofilm problems. Models in one, two, and three dimensional domains were also compared. The first benchmark problem (BM1) described a monospecies biofilm growing in a completely mixed reactor environment and had the purpose of comparing the ability of the models to predict substrate fluxes and concentrations for a biofilm system of fixed total biomass and fixed biomass density. The second problem (BM2) represented a situation in which substrate mass transport by convection was influenced by the hydrodynamic conditions of the liquid in contact with the biofilm. The third problem (BM3) was designed to compare the ability of the models to simulate multispecies and multisubstrate biofilms. These three benchmark problems allowed identification of the specific advantages and disadvantages of each modeling approach. A detailed presentation of the comparative analyses for each problem is provided elsewhere in these proceedings.
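The flavor of BM1 can be conveyed with a simplified stand-in: diffusion-reaction in a flat biofilm with first-order (rather than BM1's Monod) kinetics, for which the substrate flux into the biofilm has a closed-form solution against which a finite-difference model can be checked. All parameter values are illustrative.

```python
import numpy as np

# D * S'' = k1 * S on 0 <= x <= L_f, S'(0) = 0 (substratum), S(L_f) = S_b (bulk).
# First-order kinetics instead of BM1's Monod; all values illustrative.
D, k1, L_f, S_b = 2e-5, 10.0, 0.01, 1.0      # m^2/d, 1/d, m, g/m^3
n = 200
x = np.linspace(0.0, L_f, n)
h = x[1] - x[0]

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(1, n - 1):                     # interior finite-difference rows
    A[i, i - 1], A[i, i], A[i, i + 1] = D / h**2, -2 * D / h**2 - k1, D / h**2
A[0, 0], A[0, 1] = -1.0, 1.0                  # zero-flux at the substratum
A[-1, -1], b[-1] = 1.0, S_b                   # fixed bulk concentration
S = np.linalg.solve(A, b)

# Substrate flux into the biofilm: numerical vs. closed-form
J_num = D * (S[-1] - S[-2]) / h
J_ana = S_b * np.sqrt(k1 * D) * np.tanh(np.sqrt(k1 / D) * L_f)
```

This mirrors the BM1 comparison in miniature: a numerical model's predicted flux is verified against an analytical solution for a fixed-thickness, fixed-density biofilm.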
Modification of equation of motion of fluid-conveying pipe for laminar and turbulent flow profiles
NASA Astrophysics Data System (ADS)
Guo, C. Q.; Zhang, C. H.; Païdoussis, M. P.
2010-07-01
Considering the non-uniformity of the flow velocity distribution in fluid-conveying pipes caused by the viscosity of real fluids, the centrifugal force term in the equation of motion of the pipe is modified for laminar and turbulent flow profiles. The flow-profile-modification factors are found to be 1.333, 1.015-1.040 and 1.035-1.055 for laminar flow in circular pipes, turbulent flow in smooth-wall circular pipes and turbulent flow in rough-wall circular pipes, respectively. The critical flow velocities for divergence in the above-mentioned three cases are found to be 13.4%, 0.74-1.9% and 1.7-2.6%, respectively, lower than that with plug flow, while those for flutter are even lower, which could reach 36% for the laminar flow profile. By introducing two new concepts of equivalent flow velocity and equivalent mass, fluid-conveying pipe problems with different flow profiles can be solved with the equation of motion for plug flow.
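The laminar value 1.333 quoted above is the momentum-flux correction factor beta = ∫u² dA / (U²A) = 4/3 for the parabolic Hagen-Poiseuille profile, which can be verified by direct numerical integration:

```python
import numpy as np

# beta = integral(u^2 dA) / (U^2 * A) for u(r) = 2U(1 - (r/R)^2), circular pipe
R, U = 1.0, 1.0
r = np.linspace(0.0, R, 100001)
u = 2.0 * U * (1.0 - (r / R) ** 2)

f = u**2 * 2.0 * np.pi * r                    # integrand of the momentum flux
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))   # trapezoidal rule
beta = integral / (U**2 * np.pi * R**2)
# beta converges to 4/3 = 1.333..., the laminar factor quoted in the abstract
```

The turbulent factors (1.015-1.055) are smaller because turbulent profiles are flatter, closer to the plug-flow limit beta = 1.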
Benchmark Problems for Space Mission Formation Flying
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Leitner, Jesse A.; Folta, David C.; Burns, Richard
2003-01-01
To provide a high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested for space mission formation flying. The problems cover formation flying in low altitude, near-circular Earth orbit, high altitude, highly elliptical Earth orbits, and large amplitude lissajous trajectories about co-linear libration points of the Sun-Earth/Moon system. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions that are of interest to various agencies.
Simplified Numerical Analysis of ECT Probe - Eddy Current Benchmark Problem 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sikora, R.; Chady, T.; Gratkowski, S.
2005-04-09
In this paper a third eddy current benchmark problem is considered. The objective of the benchmark is to determine the optimal operating frequency and size of a pancake coil designed for testing tubes made of Inconel. This can be achieved by maximizing the change in the impedance of the coil due to a flaw. Approximation functions of the probe (coil) characteristic were developed and used to reduce the number of required calculations, resulting in a significant speed-up of the optimization process. An optimal testing frequency and probe size were obtained as the final result of the calculation.
Strain Modal Analysis of Small and Light Pipes Using Distributed Fibre Bragg Grating Sensors
Huang, Jun; Zhou, Zude; Zhang, Lin; Chen, Juntao; Ji, Chunqian; Pham, Duc Truong
2016-01-01
Vibration fatigue failure is a critical problem of hydraulic pipes under severe working conditions. Strain modal testing of small and light pipes is a good option for dynamic characteristic evaluation, structural health monitoring and damage identification. Unique features such as small size, light weight, and high multiplexing capability enable Fibre Bragg Grating (FBG) sensors to measure structural dynamic responses where sensor size and placement are critical. In this paper, experimental strain modal analysis of pipes using distributed FBG sensors is presented. Strain modal analysis and parameter identification methods are introduced. Experimental strain modal testing and finite element analysis for a cantilever pipe have been carried out. The analysis results indicate that the natural frequencies and strain mode shapes of the tested pipe acquired by FBG sensors are in good agreement with the results obtained by a reference accelerometer and simulation outputs. The strain modal parameters of a hydraulic pipe were obtained by the proposed strain modal testing method. FBG sensors have been shown to be useful in the experimental strain modal analysis of small and light pipes in mechanical, aeronautic and aerospace applications. PMID:27681728
Benchmarking Gas Path Diagnostic Methods: A Public Approach
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene
2008-01-01
Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.
A numerical analysis of high-temperature heat pipe startup from the frozen state
NASA Technical Reports Server (NTRS)
Cao, Y.; Faghri, A.
1993-01-01
Continuum and rarefied vapor flows co-exist along the heat pipe length for most of the startup period. A two-region model is proposed in which the vapor flow in the continuum region is modeled by the compressible Navier-Stokes equations, and the vapor flow in the rarefied region is simulated by a self-diffusion model. The two vapor regions are linked with appropriate boundary conditions, and the heat pipe wall, wick, and vapor flow are solved as a conjugate problem. The numerical solutions for the entire heat pipe startup process from the frozen state are compared with the corresponding experimental data with good agreement.
The PAC-MAN model: Benchmark case for linear acoustics in computational physics
NASA Astrophysics Data System (ADS)
Ziegelwanger, Harald; Reiter, Paul
2017-10-01
Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well known example for such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.
Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems
NASA Technical Reports Server (NTRS)
Dahl, Milo D. (Editor)
2004-01-01
This publication contains the proceedings of the Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems. In this workshop, as in previous workshops, the problems were devised to gauge the technological advancement of computational techniques to calculate all aspects of sound generation and propagation in air directly from the fundamental governing equations. A variety of benchmark problems have been previously solved ranging from simple geometries with idealized acoustic conditions to test the accuracy and effectiveness of computational algorithms and numerical boundary conditions; to sound radiation from a duct; to gust interaction with a cascade of airfoils; to the sound generated by a separating, turbulent viscous flow. By solving these and similar problems, workshop participants have shown the technical progress from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The fourth CAA workshop emphasized the application of CAA methods to the solution of realistic problems. The workshop was held at the Ohio Aerospace Institute in Cleveland, Ohio, on October 20 to 22, 2003. At that time, workshop participants presented their solutions to problems in one or more of five categories. Their solutions are presented in this proceedings along with the comparisons of their solutions to the benchmark solutions or experimental data. The five categories for the benchmark problems were as follows: Category 1: Basic Methods. The numerical computation of sound is affected by, among other issues, the choice of grid used and by the boundary conditions. Category 2: Complex Geometry. The ability to compute the sound in the presence of complex geometric surfaces is important in practical applications of CAA. Category 3: Sound Generation by Interacting With a Gust. The practical application of CAA for computing noise generated by turbomachinery involves the modeling of the noise source mechanism as a vortical gust interacting with an airfoil. Category 4: Sound Transmission and Radiation. Category 5: Sound Generation in Viscous Problems. Sound is generated under certain conditions by a viscous flow as the flow passes an object or a cavity.
Surface Characterization on Corrosion By-products on Cu in Drinking Water Pipes
Copper is widely used in household plumbing due to its anti-corrosion properties. However, as water travels within the distribution system into corroded copper pipes, copper may be released into the consumer's tap, causing major problems. In an attempt to understand the mechanism and...
Large variable conductance heat pipe. Transverse header
NASA Technical Reports Server (NTRS)
Edelstein, F.
1975-01-01
The characteristics of gas-loaded, variable conductance heat pipes (VCHP) are discussed. The difficulties involved in developing a large VCHP header are analyzed. The construction of the large capacity VCHP is described. A research project to eliminate some of the problems involved in large capacity VCHP operation is explained.
Internal Erosion During Soil Pipe Flow: State of Science for Experimental and Numerical Analysis
Many field observations have led to speculation on the role of piping in embankment failures, landslides, and gully erosion. However, there has not been a consensus on the subsurface flow and erosion processes involved, and inconsistent use of terms has exacerbated the problem. ...
Lead (Pb) in tap water (released from Pb-based plumbing materials) poses a serious public health concern. Water utilities experiencing Pb problems often use orthophosphate treatment, based on the theory of forming insoluble Pb(II)-orthophosphate compounds on the pipe wall to inhibit ...
Cost of Water Distribution System Infrastructure Rehabilitation, Repair, and Replacement.
1985-03-01
terms of the Hazen-Williams C-factor. New pipes have C-factors on the order of 140. Severely tuberculated pipes can have C-factors as low as 40. The C... tuberculation, and significantly higher pressures being required to push a pig through a small opening. Unit costs level off above the 10-in. diam and... additional costs of lining. Lining the pipe will: (a) prevent reoccurrence of tuberculation, (b) seal small leaks, and (c) eliminate "red-water" problems
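The hydraulic penalty of the C-factor range quoted above can be quantified with the Hazen-Williams formula in its SI form, h_f = 10.67 L Q^1.852 / (C^1.852 d^4.87); the pipe length, flow, and diameter below are illustrative.

```python
# Hazen-Williams head loss (SI): h_f = 10.67 * L * Q**1.852 / (C**1.852 * d**4.87)
def hw_headloss(L_m, Q_m3s, C, d_m):
    """Friction head loss in metres of water for pipe length L_m [m],
    flow Q_m3s [m^3/s], roughness coefficient C, inner diameter d_m [m]."""
    return 10.67 * L_m * Q_m3s**1.852 / (C**1.852 * d_m**4.87)

# Illustrative pipe: 1 km long, 250 mm diameter, 50 L/s
h_new = hw_headloss(1000.0, 0.05, 140.0, 0.25)   # new pipe, C = 140
h_old = hw_headloss(1000.0, 0.05, 40.0, 0.25)    # severely tuberculated, C = 40
ratio = h_old / h_new         # = (140/40)**1.852, roughly a tenfold head loss
```

Dropping from C = 140 to C = 40 multiplies the friction head loss by (140/40)^1.852, about an order of magnitude, which is why lining or replacement pays for itself in pumping cost.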
Meteoroid Protection Methods for Spacecraft Radiators Using Heat Pipes
NASA Technical Reports Server (NTRS)
Ernst, D. M.
1979-01-01
Various aspects of achieving a low mass heat pipe radiator for the nuclear electric propulsion spacecraft were studied. Specific emphasis was placed on a concept applicable to a closed Brayton cycle power sub-system. Three aspects of inter-related problems were examined: (1) the armor for meteoroid protection, (2) emissivity of the radiator surface, and (3) the heat pipe itself. The study revealed several alternatives for the achievement of the stated goal, but a final recommendation for the best design requires further investigation.
NASA Technical Reports Server (NTRS)
1993-01-01
A complex of high pressure piping at Stennis Space Center carries rocket propellants and other fluids/gases through the Center's Component Test Facility. Conventional clamped connectors tend to leak when propellant lines are chilled to extremely low temperatures. Reflange, Inc. customized an existing piping connector for Stennis to include a secondary seal more tolerant of severe thermal gradients. The T-Con connector solved the problem, and the company is now marketing a commercial version that permits testing, monitoring or collecting any emissions that may escape the primary seal during severe thermal transition.
Implementing Cognitive Strategy Instruction across the School: The Benchmark Manual for Teachers.
ERIC Educational Resources Information Center
Gaskins, Irene; Elliot, Thorne
Improving reading instruction has been the primary focus at the Benchmark School in Media, Pennsylvania. This book describes the various phases of Benchmark's development of a program to create strategic learners, thinkers, and problem solvers across the curriculum. The goal is to provide teachers and administrators with a handbook that can be…
Adaptive unified continuum FEM modeling of a 3D FSI benchmark problem.
Jansson, Johan; Degirmenci, Niyazi Cem; Hoffman, Johan
2017-09-01
In this paper, we address a 3D fluid-structure interaction benchmark problem that represents important characteristics of biomedical modeling. We present a goal-oriented adaptive finite element methodology for incompressible fluid-structure interaction based on a streamline diffusion-type stabilization of the balance equations for mass and momentum for the entire continuum in the domain, which is implemented in the Unicorn/FEniCS software framework. A phase marker function and its corresponding transport equation are introduced to select the constitutive law, where the mesh tracks the discontinuous fluid-structure interface. This results in a unified simulation method for fluids and structures. We present detailed results for the benchmark problem compared with experiments, together with a mesh convergence study. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Morris, J. F.
1981-01-01
Thermionic energy conversion (TEC) and metallic-fluid heat pipes (MFHPs), offering unique advantages in terrestrial and space energy processing by virtue of operating on working-fluid vaporization/condensation cycles that accept great thermal power densities at high temperatures, share complex materials problems. Simplified equations are presented that verify and solve such problems, suggesting the possibility of cost-effective applications in the near term for TEC and MFHP devices. Among the problems discussed are: the limitation of alkali-metal corrosion, protection against hot external gases, external and internal vaporization, interfacial reactions and diffusion, expansion coefficient matching, and creep deformation.
Unstructured Adaptive Meshes: Bad for Your Memory?
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Feng, Hui-Yu; VanderWijngaart, Rob
2003-01-01
This viewgraph presentation explores the need for a NASA Advanced Supercomputing (NAS) parallel benchmark for problems with irregular dynamical memory access. This benchmark is important and necessary because: 1) Problems with localized error source benefit from adaptive nonuniform meshes; 2) Certain machines perform poorly on such problems; 3) Parallel implementation may provide further performance improvement but is difficult. Some examples of problems which use irregular dynamical memory access include: 1) Heat transfer problem; 2) Heat source term; 3) Spectral element method; 4) Base functions; 5) Elemental discrete equations; 6) Global discrete equations. Nonconforming Mesh and Mortar Element Method are covered in greater detail in this presentation.
Dynamic vehicle routing with time windows in theory and practice.
Yang, Zhiwei; van Osta, Jan-Paul; van Veen, Barry; van Krevelen, Rick; van Klaveren, Richard; Stam, Andries; Kok, Joost; Bäck, Thomas; Emmerich, Michael
2017-01-01
The vehicle routing problem is a classical combinatorial optimization problem. This work is about a variant of the vehicle routing problem with dynamically changing orders and time windows. In real-world applications often the demands change during operation time. New orders occur and others are canceled. In this case new schedules need to be generated on-the-fly. Online optimization algorithms for dynamical vehicle routing address this problem but so far they do not consider time windows. Moreover, to match the scenarios found in real-world problems adaptations of benchmarks are required. In this paper, a practical problem is modeled based on the procedure of daily routing of a delivery company. New orders by customers are introduced dynamically during the working day and need to be integrated into the schedule. A multiple ant colony algorithm combined with powerful local search procedures is proposed to solve the dynamic vehicle routing problem with time windows. The performance is tested on a new benchmark based on simulations of a working day. The problems are taken from Solomon's benchmarks but a certain percentage of the orders are only revealed to the algorithm during operation time. Different versions of the MACS algorithm are tested and a high performing variant is identified. Finally, the algorithm is tested in situ: In a field study, the algorithm schedules a fleet of cars for a surveillance company. We compare the performance of the algorithm to that of the procedure used by the company and we summarize insights gained from the implementation of the real-world study. The results show that the multiple ant colony algorithm can get a much better solution on the academic benchmark problem and also can be integrated in a real-world environment.
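The on-the-fly integration of new orders can be sketched with a much simpler heuristic than the paper's multiple ant colony system: cheapest feasible insertion under time-window checks. The depot location, distance metric, and customer data below are invented for illustration.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def feasible(route, customers):
    """Time-window check for a route; depot at the origin, departure at t = 0."""
    t, pos = 0.0, (0.0, 0.0)
    for c in route:
        x, y, ready, due, service = customers[c]
        t = max(t + dist(pos, (x, y)), ready)   # wait if arriving early
        if t > due:
            return False
        t += service
        pos = (x, y)
    return True

def route_cost(route, customers):
    pts = [(0.0, 0.0)] + [customers[c][:2] for c in route] + [(0.0, 0.0)]
    return sum(dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

def insert_order(route, new, customers):
    """Place a newly revealed order at the cheapest feasible position."""
    best = None
    for i in range(len(route) + 1):
        cand = route[:i] + [new] + route[i:]
        if feasible(cand, customers):
            cost = route_cost(cand, customers)
            if best is None or cost < best[0]:
                best = (cost, cand)
    return best[1] if best else route            # reject if no feasible slot

# customers: id -> (x, y, ready, due, service); all values invented
customers = {1: (10, 0, 0, 50, 2), 2: (20, 0, 0, 80, 2), 3: (15, 5, 10, 60, 2)}
route = [1, 2]                                   # schedule at start of day
route = insert_order(route, 3, customers)        # order 3 arrives mid-operation
```

In the dynamic setting studied in the paper, such an insertion step is where ant-colony pheromone information and local search replace the naive cheapest-cost choice.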
Issues in Benchmark Metric Selection
NASA Astrophysics Data System (ADS)
Crolotte, Alain
It is true that a metric can influence a benchmark, but will esoteric metrics create more problems than they solve? We answer this question affirmatively by examining the case of the TPC-D metric, which used the much-debated geometric mean for the single-stream test. We show how this simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives, we conclude that the "real" measure for a decision-support benchmark is the arithmetic mean.
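A tiny numerical illustration of the debate (ours, not from the paper): the geometric mean credits a 2x speedup identically wherever it occurs, while the arithmetic mean weights it by the absolute time saved. The query times below are invented.

```python
import math

times_a = [1.0, 10.0, 100.0]   # hypothetical per-query times for system A
times_b = [0.5, 10.0, 100.0]   # A with its fastest query halved
times_c = [1.0, 10.0, 50.0]    # A with its slowest query halved

def arith(xs):
    return sum(xs) / len(xs)

def geo(xs):
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# The geometric mean rewards both changes by the identical factor 2 ** (1/3) ...
assert abs(geo(times_b) - geo(times_c)) < 1e-9
# ... while the arithmetic mean distinguishes a 0.5 s saving from a 50 s saving.
assert abs((arith(times_a) - arith(times_b)) - 0.5 / 3) < 1e-9
assert abs((arith(times_a) - arith(times_c)) - 50.0 / 3) < 1e-9
```

This is exactly the property the paper criticizes: under the geometric mean, a vendor can tune an already-fast query and gain as much as by improving the query that dominates total run time.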
A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation
NASA Technical Reports Server (NTRS)
Majumdar, Alok
1998-01-01
An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable to a flow network consisting of pipes and various fittings in which the flow is assumed to be one-dimensional. It can also be used to simulate flow in a component by modeling multi-dimensional flow with the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from neighboring control volumes; in addition, they include the sources of each conserved variable and time-dependent terms. The source term of the entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method that combines simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with analytical and numerical solutions of several benchmark problems.
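The kind of nonlinear system such a procedure iterates can be illustrated on the smallest possible flow network: two pipes in series between fixed boundary pressures, with the junction pressure as the unknown. The sketch below is our own Newton-Raphson toy under an assumed Q = C·sqrt(dP) pipe law, not the procedure's actual code; for these numbers the analytic junction pressure is 120.

```python
import math

P_in, P_out = 200.0, 100.0   # fixed boundary pressures (arbitrary units)
C1, C2 = 1.0, 2.0            # hypothetical pipe conductances

def residual(P):
    """Net mass imbalance at the junction for Q = C * sqrt(dP) per pipe."""
    return C1 * math.sqrt(P_in - P) - C2 * math.sqrt(P - P_out)

def solve(P=150.0, tol=1e-10, h=1e-6):
    """Newton-Raphson with a numerical derivative (a real code builds the Jacobian)."""
    for _ in range(50):
        dfdP = (residual(P + h) - residual(P - h)) / (2 * h)
        step = residual(P) / dfdP
        P -= step
        if abs(step) < tol:
            break
    return P

P = solve()
assert abs(P - 120.0) < 1e-6    # analytic solution of C1^2(200-P) = C2^2(P-100)
assert abs(residual(P)) < 1e-8  # mass balance satisfied at the junction
```

The paper's hybrid scheme would fall back to successive substitution where Newton's method struggles; here a single Newton loop suffices for the one-unknown network.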
Initial Coupling of the RELAP-7 and PRONGHORN Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Ortensi; D. Andrs; A.A. Bingham
2012-10-01
Modern nuclear reactor safety codes require the ability to solve detailed coupled neutronic-thermal fluids problems. For larger cores, this implies fully coupled higher dimensionality spatial dynamics with appropriate feedback models that can provide enough resolution to accurately compute core heat generation and removal during steady and unsteady conditions. The reactor analysis code PRONGHORN is being coupled to RELAP-7 as a first step to extend RELAP's current capabilities. This report details the mathematical models, the type of coupling, and the testing results from the integrated system. RELAP-7 is a MOOSE-based application that solves the continuity, momentum, and energy equations in 1-D for a compressible fluid. The pipe and joint capabilities enable it to model parts of the power conversion unit. The PRONGHORN application, also developed on the MOOSE infrastructure, solves the coupled equations that define the neutron diffusion, fluid flow, and heat transfer in a full core model. The two systems are loosely coupled to simplify the transition towards a more complex infrastructure. The integration is tested on a simplified version of the OECD/NEA MHTGR-350 Coupled Neutronics-Thermal Fluids benchmark model.
NASA Astrophysics Data System (ADS)
Amran, T. S. T.; Ismail, M. P.; Ahmad, M. R.; Amin, M. S. M.; Ismail, M. A.; Sani, S.; Masenwat, N. A.; Basri, N. S. M.
2018-01-01
Water is among the most treasured natural resources, yet a huge amount of water is lost during distribution, leading to water leakage problems. Leaks waste money and create further economic loss through treating and fixing the damaged pipe. Researchers and engineers have put tremendous effort into solving the water leakage problem, especially for buried pipelines. Ground penetrating radar (GPR) has been established as a non-destructive testing (NDT) method for detecting leaks in underground water pipes. This paper focuses on the capability of GPR in the water utility field, especially for detecting water leaks in the underground pipeline distribution network. A series of laboratory experiments was carried out using an 800-MHz antenna, in which the performance of GPR in detecting an underground pipeline and locating water leakage was investigated and validated. A prototype water-leaking system was constructed using a 4-inch PVC pipe. Holes of different diameters, i.e. ¼ inch, ½ inch, and ¾ inch, were drilled into the pipe to simulate leaks. The PVC pipe was buried at a depth of 60 cm in a test bed filled with dry sand, and 15 litres of water was injected into the pipe. The water leakage patterns were gathered as radargram data. After the results were collected and verified, the effectiveness of GPR in locating underground water leakage was ascertained.
Titanium-alloy, metallic-fluid heat pipes for space service
NASA Technical Reports Server (NTRS)
Morris, J. F.
1979-01-01
Reactivities of titanium limit its long-term terrestrial use for unprotected heat-pipe envelopes to about 870 K (1100 F). But this external thermochemical limitation disappears when considerations shift to space applications. In such hard-vacuum utilization, much higher operating temperatures are possible. Primary restrictions in the space environment result from vaporization, thermal creep, and internal compatibilities. Unfortunately, a respected heat-pipe reference indicates that titanium is compatible only with cesium from the alkali-metal working-fluid family. This problem and others are subjects of the present paper, which advocates titanium-alloy, metallic-fluid heat pipes for long-lived, weight-effective space service between 500 and 1300 K (440 and 1880 F).
Higher Education Ranking and Leagues Tables: Lessons Learned from Benchmarking
ERIC Educational Resources Information Center
Proulx, Roland
2007-01-01
The paper intends to contribute to the debate on ranking and league tables by adopting a critical approach to ranking methodologies from the point of view of a university benchmarking exercise. The absence of a strict benchmarking exercise in the ranking process has been, in the opinion of the author, one of the major problems encountered in the…
Land, Sander; Gurev, Viatcheslav; Arens, Sander; Augustin, Christoph M; Baron, Lukas; Blake, Robert; Bradley, Chris; Castro, Sebastian; Crozier, Andrew; Favino, Marco; Fastl, Thomas E; Fritz, Thomas; Gao, Hao; Gizzi, Alessio; Griffith, Boyce E; Hurtado, Daniel E; Krause, Rolf; Luo, Xiaoyu; Nash, Martyn P; Pezzuto, Simone; Plank, Gernot; Rossi, Simone; Ruprecht, Daniel; Seemann, Gunnar; Smith, Nicolas P; Sundnes, Joakim; Rice, J Jeremy; Trayanova, Natalia; Wang, Dafang; Jenny Wang, Zhinuo; Niederer, Steven A
2015-12-08
Models of cardiac mechanics are increasingly used to investigate cardiac physiology. These models are characterized by a high level of complexity, including the particular anisotropic material properties of biological tissue and the actively contracting material. A large number of independent simulation codes have been developed, but a consistent way of verifying the accuracy and replicability of simulations is lacking. To aid in the verification of current and future cardiac mechanics solvers, this study provides three benchmark problems for cardiac mechanics. These benchmark problems test the ability to accurately simulate pressure-type forces that depend on the deformed object's geometry, anisotropic and spatially varying material properties similar to those seen in the left ventricle, and active contractile forces. The benchmark was solved by 11 different groups to generate consensus solutions, with typical differences between higher-resolution solutions of approximately 0.5%, and consistent results between linear, quadratic and cubic finite elements as well as between different approaches to simulating incompressible materials. Online tools and solutions are made available to allow these tests to be used effectively in the verification of future cardiac mechanics software.
Brandenburg, Marcus; Hahn, Gerd J
2018-06-01
Process industries typically involve complex manufacturing operations and thus require adequate decision support for aggregate production planning (APP). The need for powerful and efficient approaches to solving complex APP problems persists. Problem-specific solution approaches are advantageous compared to standardized approaches, which are designed to provide basic decision support for a broad range of planning problems but are inadequate for optimizing under specific settings. This in turn calls for methods to compare different approaches regarding their computational performance and solution quality. In this paper, we present a benchmarking problem for APP in the chemical process industry. The presented problem focuses on (i) sustainable operations planning involving multiple alternative production modes/routings with specific production-related carbon emissions and the social dimension of varying operating rates, and (ii) integrated campaign planning with production mix/volume on the operational level. The mutual trade-offs between economic, environmental and social factors can be considered as externalized factors (production-related carbon emissions and overtime working hours) as well as internalized ones (resulting costs). We provide data for all problem parameters in addition to a detailed verbal problem statement. We refer to Hahn and Brandenburg [1] for a first numerical analysis based on this benchmarking problem and for the future research perspectives arising from it.
Internal erosion during soil pipeflow: state of science for experimental and numerical analysis
USDA-ARS?s Scientific Manuscript database
Many field observations have led to speculation on the role of piping in embankment failures, landslides, and gully erosion. However, there has been no consensus on the subsurface flow and erosion processes involved, and inconsistent use of terms has exacerbated the problem. One such piping proc...
Extensive localized or pitting corrosion of copper pipes used in household drinking-water plumbing can eventually lead to pinhole water leaks that may result in water damage, mold growth, and costly repairs. A growing number of problems have been associated with high pH and low ...
NASA Astrophysics Data System (ADS)
Kolpakov, G. N.; Zakusilov, V. V.; Demyanenko, N. V.; Mishin, A. S.
2016-06-01
Stainless steel pipes used to cool a reactor plant have a high cost, and after a reactor is taken out of service they must be buried together with other radioactive waste. An important problem, therefore, is rinsing the pipes of contamination so that they can be returned to operation.
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Caldeira, K.; Ricke, K.
2014-12-01
With the increasing risk of dangerous climate change, geoengineering solutions to Earth's climate problems have attracted much attention. One proposed geoengineering approach considers the use of ocean pipes as a means to increase ocean carbon uptake and the storage of thermal energy in the deep ocean. We use a latest-generation Earth System Model (ESM) to perform simulations of idealised extreme implementations of ocean pipes. In our simulations, downward transport of thermal energy by ocean pipes strongly cools the near-surface atmosphere, by up to 11 °C in the global mean. The ocean pipes cause net thermal energy to be transported from the terrestrial environment to the deep ocean while increasing the global net transport of water to land. By cooling the ocean surface more than the land, ocean pipes tend to promote a monsoonal-type circulation, resulting in increased water vapour transport to land. Throughout their implementation, ocean pipes prevent energy from escaping to space, increasing the amount of energy stored in Earth's climate system despite reductions in surface temperature. As a consequence, our results indicate that an abrupt termination of ocean pipes could cause dramatic increases in surface temperatures beyond those that would have occurred had ocean pipes not been implemented.
High-Accuracy Finite Element Method: Benchmark Calculations
NASA Astrophysics Data System (ADS)
Gusev, Alexander; Vinitsky, Sergue; Chuluunbaatar, Ochbadrakh; Chuluunbaatar, Galmandakh; Gerdt, Vladimir; Derbov, Vladimir; Góźdź, Andrzej; Krassovitskiy, Pavel
2018-02-01
We describe a new high-accuracy finite element scheme with simplex elements for solving elliptic boundary-value problems and demonstrate its efficiency on benchmark solutions of the Helmholtz equation for a triangular membrane and a hypercube.
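For the hypercube case, benchmark solutions can be checked against the classical closed-form Dirichlet eigenvalues of the Laplacian on the unit d-cube, lambda = pi^2 (n_1^2 + ... + n_d^2). The snippet below states that standard reference result; it is not taken from the paper's scheme.

```python
import math

def hypercube_eigenvalue(modes):
    """Dirichlet Laplacian eigenvalue on the unit d-cube for mode numbers n_i >= 1."""
    return math.pi ** 2 * sum(n * n for n in modes)

# Ground state of the unit cube (d = 3): lambda = 3 * pi^2
assert abs(hypercube_eigenvalue([1, 1, 1]) - 3 * math.pi ** 2) < 1e-12
# First excited level is triply degenerate: (2,1,1), (1,2,1), (1,1,2)
assert hypercube_eigenvalue([2, 1, 1]) == hypercube_eigenvalue([1, 1, 2])
```

Comparing computed eigenvalues against such exact spectra is the usual way convergence rates of high-order finite element schemes are documented.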
NASA Astrophysics Data System (ADS)
Apperl, Benjamin; Pressl, Alexander; Schulz, Karsten
2016-04-01
This contribution describes a laboratory feasibility study on the detection of leakages in lake pressure pipes using high-resolution fiber-optic distributed temperature sensing (DTS). The DTS technology provides spatiotemporally high-resolution temperature measurements along a fiber-optic cable: an opto-electrical device serves both as light emitter and as spectrometer measuring the scattering of light, while the fiber-optic cable serves as a linear sensor. Measurements can be taken at a spatial resolution of up to 25 cm with a temperature accuracy better than 0.1 °C. The first warmer days after the winter stagnation provoke a temperature rise in the superficial layers of lakes with barely stable temperature stratification; the warmer layer in the epilimnion differs by 4 to 5 °C from the cold layers in the meta- or hypolimnion before the spring water circulation starts. The warmer water from the surface layer can be flushed through the entire length of the pipe. By generating a slight negative pressure in the pipe, water is made to intrude at leakages. This provokes a local temperature change, provided the penetrating water (seawater) differs in temperature from the water pumped through the pipe. These temperature changes should be detectable and localizable with a DTS cable introduced into the pipe. A laboratory experiment was carried out to determine the feasibility as well as the limits and problems of this methodology. A 6 m long pipe, submerged in a water tank at constant temperature, was flushed with water 5-10 °C warmer than the water in the tank. Temperature measurements were taken continuously along the pipe. A negative pressure of 0.1 bar provoked the intrusion of colder water from the tank into the pipe through the leakages, resulting in local temperature changes. Experiments were conducted with different temperature gradients, leakage sizes, and numbers of leaks, as well as with different positioning of the DTS cable inside the pipe.
Results showed that even small leakages (4 mm) can be detected. Problems arose from positioning the DTS cable inside the pipe, where it measures a reduced temperature difference across the transition layer at the inside wall of the pipe.
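A minimal sketch of the post-processing such measurements invite: flag cable positions whose temperature dips below a local moving median by more than a threshold, as intruding colder water would produce at a leak. This is our illustration with invented numbers, not the study's analysis.

```python
import statistics

def find_leaks(temps, window=5, threshold=0.5):
    """Return indices where temperature dips below the local median by > threshold."""
    leaks = []
    half = window // 2
    for i in range(len(temps)):
        lo, hi = max(0, i - half), min(len(temps), i + half + 1)
        baseline = statistics.median(temps[lo:hi])
        if baseline - temps[i] > threshold:
            leaks.append(i)
    return leaks

profile = [20.0] * 20      # synthetic DTS trace: pipe flushed with warm water
profile[12] = 18.2         # local dip where colder tank water intrudes at a leak
assert find_leaks(profile) == [12]
```

With a 25 cm sampling interval, index 12 here would correspond to a leak roughly 3 m along the cable; a real trace would also need smoothing against sensor noise near the 0.1 °C accuracy limit.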
High-Order Moving Overlapping Grid Methodology in a Spectral Element Method
NASA Astrophysics Data System (ADS)
Merrill, Brandon E.
A moving overlapping mesh methodology that achieves spectral accuracy in space and up to second-order accuracy in time is developed for the solution of the unsteady incompressible flow equations in three-dimensional domains. The targeted applications are in the aerospace and mechanical engineering domains and involve problems in turbomachinery, rotary aircraft, wind turbines and others. The methodology is built within the dual-session communication framework initially developed for stationary overlapping meshes. It employs semi-implicit spectral element discretization of the equations in each subdomain and explicit treatment of subdomain interfaces, with spectrally accurate spatial interpolation and high-order accurate temporal extrapolation, and requires few, if any, iterations, yet maintains the global accuracy and stability of the underlying flow solver. Mesh movement is enabled through the Arbitrary Lagrangian-Eulerian formulation of the governing equations, which allows for the prescription of arbitrary velocity values at discrete mesh points. The stationary and moving overlapping mesh methodologies are thoroughly validated using two- and three-dimensional benchmark problems in laminar and turbulent flows. The spatial and temporal global convergence, for both methods, is documented and is in agreement with the nominal order of accuracy of the underlying solver. The stationary overlapping mesh methodology was further validated to assess the influence of long integration times and inflow-outflow global boundary conditions on its performance. In a benchmark of fully developed turbulent pipe flow, the turbulence statistics are validated against the available data. Moving overlapping mesh simulations are validated on the problems of a two-dimensional oscillating cylinder and a three-dimensional rotating sphere. The aerodynamic forces acting on these moving rigid bodies are determined, and all results are compared with published data.
Scaling tests with both methodologies show near-linear strong scaling, even for moderately large processor counts. The moving overlapping mesh methodology is utilized to investigate the effect of an upstream turbulent wake on a three-dimensional oscillating NACA0012 extruded airfoil. A direct numerical simulation (DNS) at Reynolds number 44,000 is performed for steady inflow incident upon the airfoil oscillating between angles of attack of 5.6° and 25° with reduced frequency k=0.16. Results are contrasted with a subsequent DNS of the same oscillating airfoil in a turbulent wake generated by a stationary upstream cylinder.
Transient Simulation of Accumulating Particle Deposition in Pipe Flow
NASA Astrophysics Data System (ADS)
Hewett, James; Sellier, Mathieu
2015-11-01
Colloidal particles that deposit in pipe systems can lead to fouling, which is an expensive problem in both the geothermal and oil and gas industries. We investigate the gradual accumulation of deposited colloids in pipe flow using numerical simulations. An Euler-Lagrangian approach is employed for modelling the fluid and particle phases. Particle transport to the pipe wall is modelled with Brownian motion and turbulent diffusion. A two-way coupling exists between the fouled material and the pipe flow: the local mass flux of depositing particles is affected by the surrounding fluid in the near-wall region. This coupling is modelled by changing cells from fluid to solid as the deposited particles exceed each local cell volume. A similar method has been used to model fouling in engine exhaust systems (Paz et al., Heat Transfer Eng., 34(8-9):674-682, 2013). We compare our deposition velocities and deposition profiles with an experiment on silica scaling in turbulent pipe flow (Kokhanenko et al., 19th AFMC, 2014).
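A stripped-down sketch of the Euler-Lagrangian idea, ours rather than the paper's model: particles are advected axially while taking Gaussian radial steps standing in for Brownian and turbulent diffusion, and a particle reaching the wall is counted as deposited. All parameters are arbitrary, and the two-way coupling (solidifying fouled cells) is omitted.

```python
import random

def simulate(n_particles=2000, radius=1.0, length=50.0,
             axial_speed=1.0, dt=0.1, diffusion_step=0.02, seed=1):
    """Fraction of particles that deposit on the wall before exiting the pipe."""
    random.seed(seed)
    deposited = 0
    for _ in range(n_particles):
        r, x = 0.0, 0.0                       # release on the centreline at the inlet
        while x < length:
            x += axial_speed * dt             # axial advection
            r += random.gauss(0.0, diffusion_step)  # radial Brownian step
            r = abs(r)                        # reflect paths through the axis
            if r >= radius:                   # reached the wall: deposit and stop
                deposited += 1
                break
    return deposited / n_particles

frac = simulate()
assert 0.0 < frac < 1.0   # some, but not all, particles deposit for these settings
```

A model like the paper's would additionally convert wall cells from fluid to solid as deposit accumulates, so that the growing fouling layer feeds back on the flow field.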
Performance of Multi-chaotic PSO on a shifted benchmark functions set
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
In this paper the performance of the Multi-chaotic PSO algorithm is investigated using two shifted benchmark functions. The purpose of shifted benchmark functions is to simulate time-variant real-world problems. The results of the chaotic PSO are compared with the canonical version of the algorithm. It is concluded that the multi-chaotic approach can lead to better results in the optimization of shifted functions.
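For readers unfamiliar with shifted benchmarks, the idea fits in a few lines: translating the optimum prevents an optimizer from exploiting an origin-centred bias. The sketch below uses the sphere function as an example; the paper's actual shifted functions are not reproduced here.

```python
def shifted_sphere(x, shift):
    """Sphere benchmark function with its optimum translated to `shift`."""
    return sum((xi - si) ** 2 for xi, si in zip(x, shift))

shift = [1.5, -2.0, 0.3]
assert shifted_sphere(shift, shift) == 0.0            # optimum now sits at the shift
assert shifted_sphere([0.0, 0.0, 0.0], shift) > 0.0   # the origin is no longer optimal
```

Re-drawing the shift vector between runs (or over time) yields the time-variant landscapes the paper uses to stress-test the multi-chaotic PSO.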
OTEC cold water pipe design for problems caused by vortex-excited oscillations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffin, O. M.
1980-03-14
Vortex-excited oscillations of marine structures result in reduced fatigue life, large hydrodynamic forces and induced stresses, and sometimes lead to structural damage and destructive failures. The cold water pipe of an OTEC plant is nominally a bluff, flexible cylinder with a large aspect ratio (L/D = length/diameter), and is likely to be susceptible to resonant vortex-excited oscillations. The objective of this report is to survey recent results pertaining to the vortex-excited oscillations of structures in general and to consider the application of these findings to the design of the OTEC cold water pipe. Practical design calculations are given as examples throughout the various sections of the report. The report is limited in scope to the problems of vortex shedding from bluff, flexible structures in steady currents and the resulting vortex-excited oscillations. The effects of flow non-uniformities, surface roughness of the cylinder, and inclination to the incident flow are considered in addition to the case of a smooth cylinder in a uniform stream. Emphasis is placed upon design procedures, hydrodynamic coefficients applicable in practice, and the specification of structural response parameters relevant to the OTEC cold water pipe. There are important problems associated with the shedding of vortices from cylinders in waves and under the combined action of waves and currents, but these complex fluid/structure interactions are not considered in this report.
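The first screening step that such design procedures formalize can be done in a few lines: compare the vortex shedding frequency f = St·U/D against a structural natural frequency to flag possible lock-in. The numbers and the 30% proximity band below are illustrative assumptions, not values from the report.

```python
def shedding_frequency(U, D, St=0.2):
    """Vortex shedding frequency (Hz) for current speed U (m/s) and diameter D (m).
    St ~ 0.2 is typical for a circular cylinder over a wide Reynolds number range."""
    return St * U / D

def lock_in_risk(f_shed, f_natural, band=0.3):
    """Flag resonance risk when shedding falls within an assumed band of f_natural."""
    return abs(f_shed - f_natural) <= band * f_natural

# Illustrative large-diameter cold water pipe in a 1 m/s current:
f = shedding_frequency(U=1.0, D=10.0)
assert abs(f - 0.02) < 1e-12           # very low shedding frequency
assert lock_in_risk(0.02, 0.021)       # close natural frequency: flagged
assert not lock_in_risk(0.02, 0.05)    # well-separated: not flagged
```

The report's design procedures then go well beyond this, accounting for mode shapes, sheared currents, and hydrodynamic damping along the pipe.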
Benchmarking image fusion system design parameters
NASA Astrophysics Data System (ADS)
Howell, Christopher L.
2013-06-01
A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively assist in the design and modeling of image fusion systems. Specifically, it is postulated that human task performance using image fusion should be benchmarked against whether the fusion algorithm, at a minimum, retains the performance benefit achievable by each independent spectral band being fused. The established benchmark then clearly represents the threshold a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters, using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constrained optimization problem, one can effectively look backwards through the image acquisition process, optimizing the fused system parameters by minimizing the difference between the modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment, in which human observers were asked to identify a standard set of military targets, are presented and used to demonstrate the effectiveness of the benchmarking process.
Bin packing problem solution through a deterministic weighted finite automaton
NASA Astrophysics Data System (ADS)
Zavala-Díaz, J. C.; Pérez-Ortega, J.; Martínez-Rebollar, A.; Almanza-Ortega, N. N.; Hidalgo-Reyes, M.
2016-06-01
In this article, the solution of the one-dimensional Bin Packing problem by means of a deterministic weighted finite automaton is presented. The construction of the automaton and its application to three different instances are presented: one synthetic data set and two benchmarks, N1C1W1_A.BPP belonging to data set Set_1 and BPP13.BPP belonging to hard28. The optimal solution of the synthetic data set is obtained. On the first benchmark, the solution obtained uses one container more than the ideal number of containers; on the second benchmark, it uses two containers more than the ideal solution (approximately 2.5%). The runtime in all three cases was less than one second.
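As a point of comparison for the automaton's results, the standard first-fit-decreasing heuristic (a classical baseline, not the paper's method) packs one-dimensional items in a handful of lines:

```python
def first_fit_decreasing(items, capacity):
    """Classic FFD heuristic: sort items descending, place each in the first
    open bin with room, opening a new bin only when none fits."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

bins = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
assert all(sum(b) <= 10 for b in bins)  # no bin overflows
assert len(bins) == 2                   # ideal: total size 20 over capacity 10
```

FFD is guaranteed to use at most 11/9 of the optimal number of bins plus a constant, which is the kind of bound exact or automaton-based approaches try to beat on hard instances such as hard28.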
Electric discharge effects on a XeCl pumped S2 heat-pipe laser
NASA Technical Reports Server (NTRS)
Killeen, K.; Greenberg, K.; Verdeyen, J. T.
1982-01-01
It is shown that an electrical discharge can dissociate the higher-order sulfur molecules S(3-8) into dimers S2 and thus create the proper environment for efficient conversion of XeCl radiation at 308 nm to the blue-green. The use of a heat-pipe configuration greatly alleviates the technological problems.
An Investigation of the Cryogenic Freezing of Water in Non-Metallic Pipelines
NASA Astrophysics Data System (ADS)
Martin, C. I.; Richardson, R. N.; Bowen, R. J.
2004-06-01
Pipe freezing is increasingly used in a range of industries to solve otherwise intractable pipeline maintenance and servicing problems. This paper presents interim results from an experimental study on the deliberate freezing of polymeric pipelines. Previous and contemporary work is reviewed. The object of the current research is to confirm the feasibility of ice plug formation within a polymeric pipe as a method of isolation. Tests have been conducted on a range of polymeric pipes of various sizes; the results reported here all relate to the freezing of horizontal pipelines. In each case the process of plug formation was photographed, the frozen plug pressure tested, and the pipe inspected for signs of damage resulting from the freeze procedure. The time to freeze was recorded and various temperatures logged. These tests have demonstrated that, despite the poor thermal and mechanical properties of the polymers, freezing offers a viable alternative method of isolation in polymeric pipelines.
High Energy Vibration for Gas Piping
NASA Astrophysics Data System (ADS)
Lee, Gary Y. H.; Chan, K. B.; Lee, Aylwin Y. S.; Jia, ShengXiang
2017-07-01
In September 2016, a gas compressor offshore Sarawak had its rotor changed out. Prior to this change-out, a pipe vibration study was carried out by the project team to evaluate potential high-energy pipe vibration problems in the pipes downstream of the compressor's existing relief valve due to process condition changes after the rotor change-out. This paper covers high frequency acoustic excitation (HFAE) vibration, also known as acoustic induced vibration (AIV), and discusses detailed methodologies as a companion to the Energy Institute Guidelines for the avoidance of vibration induced fatigue failure, a common industry practice for assessing and mitigating AIV-induced fatigue failure. Such detailed theoretical studies can help minimize or entirely avoid physical pipe modification, so that plant shutdowns are required only to accommodate gas compressor upgrades, reducing cost without compromising process safety.
Towards unbiased benchmarking of evolutionary and hybrid algorithms for real-valued optimisation
NASA Astrophysics Data System (ADS)
MacNish, Cara
2007-12-01
Randomised population-based algorithms, such as evolutionary, genetic and swarm-based algorithms, and their hybrids with traditional search techniques, have proven successful and robust on many difficult real-valued optimisation problems. This success, along with the readily applicable nature of these techniques, has led to an explosion in the number of algorithms and variants proposed. In order for the field to advance it is necessary to carry out effective comparative evaluations of these algorithms, and thereby better identify and understand those properties that lead to better performance. This paper discusses the difficulties of providing benchmarking of evolutionary and allied algorithms that is both meaningful and logistically viable. To be meaningful the benchmarking test must give a fair comparison that is free, as far as possible, from biases that favour one style of algorithm over another. To be logistically viable it must overcome the need for pairwise comparison between all the proposed algorithms. To address the first problem, we begin by attempting to identify the biases that are inherent in commonly used benchmarking functions. We then describe a suite of test problems, generated recursively as self-similar or fractal landscapes, designed to overcome these biases. For the second, we describe a server that uses web services to allow researchers to 'plug in' their algorithms, running on their local machines, to a central benchmarking repository.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mkhabela, P.; Han, J.; Tyobeka, B.
2006-07-01
The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of their official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events, through a set of multi-dimensional computational test problems. The benchmark includes three steady state exercises and six transient exercises. This paper describes the first two steady state exercises, their objectives, and the international participation in terms of organization, country and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help to focus development efforts on the most critical areas. The first two exercises also allow for the removal of user-related modeling errors and prepare the core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark. (authors)
NASA Astrophysics Data System (ADS)
Jacques, Diederik
2017-04-01
As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models for these interacting processes are needed. Coupled reactive transport models are a typical example of such coupled tools, mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). The mathematical and numerical complexity, of the tool itself or of a specific conceptual model, can increase rapidly. Therefore, numerical verification of such models is a prerequisite for guaranteeing reliability and confidence and for qualifying simulation tools and approaches for any further model application. In 2011, a first SeSBench (Subsurface Environmental Simulation Benchmarking) workshop was held in Berkeley (USA), followed by four others. The objective is to benchmark subsurface environmental simulation models and methods, with a current focus on reactive transport processes. The final outcome was a special issue in Computational Geosciences (2015, issue 3, Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks, proposed by the participants of the workshops, should be relevant for environmental or geo-engineering applications; the latter were mostly related to radioactive waste disposal issues, excluding benchmarks defined for purely mathematical reasons. Another important feature is the tiered approach within a benchmark, with the definition of a single principal problem and different subproblems. The latter typically benchmark individual or simplified processes (e.g. inert solute transport, a simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, three codes should be involved in a benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes.
Furthermore, it illustrates the use of this type of model for different environmental and geo-engineering applications. SeSBench will organize new workshops to add new benchmarks in a new special issue. Steefel, C. I., et al. (2015). "Reactive transport codes for subsurface environmental simulation." Computational Geosciences 19: 445-478.
NASA Astrophysics Data System (ADS)
Giménez, Rafael Barrionuevo
2016-06-01
TSM is an escape pipe for use in case of terrain collapse. The TSM is a passive safety tool placed underground to connect the work area with a secure area (mainly a mining gallery). The TSM is a light, hand-portable pipe made of aramid (Kevlar), carbon fibre, or other new materials. It would be deployed as a pipeline network with many entrances and exits, to reach problem work areas and connect them safely with other parts of the workings. Different levels of instrumentation could be added inside, such as micro-LED escape-way guidance and sensors for temperature, humidity, oxygen level, etc. Open hardware and software such as Arduino would form the heart of the control and automation system.
NASA Astrophysics Data System (ADS)
Saxena, Nishank; Hofmann, Ronny; Alpak, Faruk O.; Berg, Steffen; Dietderich, Jesse; Agarwal, Umang; Tandon, Kunj; Hunter, Sander; Freeman, Justin; Wilson, Ove Bjorn
2017-11-01
We generate a novel reference dataset to quantify the impact of numerical solvers, boundary conditions, and simulation platforms. We consider a variety of microstructures ranging from idealized pipes to digital rocks. Pore throats of the digital rocks considered are large enough to be well resolved with state-of-the-art micro-computed tomography. Permeability is computed using 12 numerical engines in total, including Lattice-Boltzmann, computational fluid dynamics, voxel-based, and fast semi-analytical solvers, as well as known empirical models. Thus, we provide a measure of the uncertainty associated with flow computations for digital media. Moreover, the reference and standards dataset generated is the first of its kind and can be used to test and improve new fluid flow algorithms. We find an overall good agreement between solvers for pipes of idealized cross-section shape. As expected, the disagreement increases with increasing complexity of the pore space. Numerical solutions for pipes with sinusoidal variation of cross section show larger variability than pipes of constant cross-section shape. We notice relatively larger variability in the computed permeability of digital rocks, with a coefficient of variation of up to 25% between solvers. Still, these differences are small given other subsurface uncertainties. The observed differences between solvers can be attributed to several causes, including differences in boundary conditions, numerical convergence criteria, and parameterization of the fundamental physics equations. Solvers that perform additional meshing of irregular pore shapes require an extra step in practical workflows, which involves skill and can introduce further uncertainty. Computation times for digital rocks vary from minutes to several days depending on the algorithm and available computational resources.
We find that more stringent convergence criteria can improve solver accuracy but at the expense of longer computation time.
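The value of the idealized pipes is that their flow has a closed-form reference: for a straight circular pipe, Hagen-Poiseuille gives the flux exactly. The following toy voxel-style check (a generic sketch, not one of the 12 engines from the study) solves the Poisson problem for unidirectional Stokes flow on a voxelized disk cross-section by Jacobi iteration and compares the computed flux with the analytic value pi*R^4/8.

```python
import numpy as np

# Unidirectional Stokes flow in a straight pipe reduces to a Poisson problem
# on the cross-section: laplacian(w) = -1 (unit pressure gradient over
# viscosity), with w = 0 on the wall. Analytic flux for a disk: pi*R^4/8.
R, h = 1.0, 0.025
n = int(2 * R / h) + 1
x = np.linspace(-R, R, n)
X, Y = np.meshgrid(x, x)
inside = X**2 + Y**2 < R**2          # voxelized pore space (staircase wall)

w = np.zeros((n, n))
for _ in range(40000):               # Jacobi iteration with a convergence check
    nb = (np.roll(w, 1, 0) + np.roll(w, -1, 0) +
          np.roll(w, 1, 1) + np.roll(w, -1, 1))
    w_new = np.where(inside, 0.25 * (nb + h * h), 0.0)
    if np.max(np.abs(w_new - w)) < 1e-9:
        w = w_new
        break
    w = w_new

flux_numeric = w.sum() * h * h       # integrate velocity over the cross-section
flux_exact = np.pi * R**4 / 8
print(flux_numeric, flux_exact)
```

The residual disagreement comes mostly from the staircase representation of the circular wall, which is exactly the kind of voxelization error the reference dataset is designed to expose.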
Liu, Ze-Hua; Yin, Hua; Dang, Zhi
2017-01-01
With the widespread application of plastic pipes in drinking water distribution systems, the effects of various leachable organic chemicals have been investigated and their occurrence in drinking water supplies is monitored. Most studies focus on the odor problems these substances may cause. This study investigates the potential endocrine-disrupting effects of the migrating compound 2,4-di-tert-butylphenol (2,4-d-t-BP). The summarized results show that the migration of 2,4-d-t-BP from plastic pipes could result in chronic exposure, and that migration levels varied greatly among different plastic pipe materials and manufacturing brands. Based on estrogen equivalents (EEQ), the levels of the leachable compound 2,4-d-t-BP migrating from most plastic pipes were relatively low. However, the EEQ levels in drinking water from four out of 15 pipes may pose significant adverse effects. With the increasingly strict requirements on the regulation of drinking water quality, these results indicate that some drinking water transported in plastic pipes may not be safe for human consumption due to the occurrence of 2,4-d-t-BP. Moreover, 2,4-d-t-BP is not the only estrogenic compound that migrates from plastic pipes; other compounds such as 2-tert-butylphenol (2-t-BP) and 4-tert-butylphenol (4-t-BP) may also leach from plastic pipe materials.
Effect of PVC and iron materials on Mn(II) deposition in drinking water distribution systems.
Cerrato, José M; Reyes, Lourdes P; Alvarado, Carmen N; Dietrich, Andrea M
2006-08-01
Polyvinyl chloride (PVC) and iron pipe materials differentially impacted manganese deposition within a drinking water distribution system that experiences black water problems because it receives soluble manganese from a surface water reservoir that undergoes biogeochemical cycling of manganese. The water quality study was conducted in a section of the distribution system of Tegucigalpa, Honduras and evaluated the influence of iron and PVC pipe materials on the concentrations of soluble and particulate iron and manganese, and determined the composition of scales formed on PVC and iron pipes. As expected, total Fe concentrations were highest in water from iron pipes. Water samples obtained from PVC pipes showed higher total Mn concentrations and more black color than that obtained from iron pipes. Scanning electron microscopy demonstrated that manganese was incorporated into the iron tubercles and thus not readily dislodged from the pipes by water flow. The PVC pipes contained a thin surface scale consisting of white and brown layers of different chemical composition; the brown layer was in contact with the water and contained 6% manganese by weight. Mn composed a greater percentage by weight of the PVC scale than the iron pipe scale; the PVC scale was easily dislodged by flowing water. This research demonstrates that interactions between water and the infrastructure used for its supply affect the quality of the final drinking water.
Revisiting Yasinsky and Henry's benchmark using modern nodal codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feltus, M.A.; Becker, M.W.
1995-12-31
The numerical experiments analyzed by Yasinsky and Henry are quite trivial by today's standards because they used the finite difference code WIGLE for their benchmark. Also, the problem is a simple slab (one-dimensional) case with no feedback mechanisms. This research attempts to obtain STAR (Ref. 2) and NEM (Ref. 3) code results in order to produce a more modern kinetics benchmark with results comparable to WIGLE's.
Simulation of a manual electric-arc welding in a working gas pipeline. 1. Formulation of the problem
NASA Astrophysics Data System (ADS)
Baikov, V. I.; Gishkelyuk, I. A.; Rus', A. M.; Sidorovich, T. V.; Tonkonogov, B. A.
2010-11-01
Problems of mathematical simulation of the temperature stresses arising in the wall of a pipe of a cross-country gas pipeline in the process of electric-arc welding of defects in it have been considered. Mathematical models of formation of temperatures, deformations, and stresses in a gas pipe subjected to phase transformations have been developed. These models were numerically realized in the form of algorithms representing a part of an application-program package. Results of verification of the computational complex and calculation results obtained with it are presented.
Benchmarks for target tracking
NASA Astrophysics Data System (ADS)
Dunham, Darin T.; West, Philip D.
2011-09-01
The term benchmark originates from the chiseled horizontal marks that surveyors made, into which an angle-iron could be placed to bracket ("bench") a leveling rod, thus ensuring that the leveling rod can be repositioned in exactly the same place in the future. A benchmark in computer terms is the result of running a computer program, or a set of programs, in order to assess the relative performance of an object by running a number of standard tests and trials against it. This paper will discuss the history of simulation benchmarks that are being used by multiple branches of the military and agencies of the US government. These benchmarks range from missile defense applications to chemical biological situations. Typically, a benchmark is used with Monte Carlo runs in order to tease out how algorithms deal with variability and the range of possible inputs. We will also describe problems that can be solved by a benchmark.
Pipe Flow Simulation Software: A Team Approach to Solve an Engineering Education Problem.
ERIC Educational Resources Information Center
Engel, Renata S.; And Others
1996-01-01
A computer simulation program for use in the study of fluid mechanics is described. The package is an interactive tool to explore the fluid flow characteristics of a pipe system by manipulating the physical construction of the system. The motivation, software design requirements, and specific details on how its objectives were met are presented.…
Pipes, Petrol, Paint and Pewter: The Rise and Fall of Lead
ERIC Educational Resources Information Center
Peacock, Alan
2010-01-01
Lead is a good example of a metal that was used for many things over centuries--in water pipes, paints, on roofs, and in leaded petrol, for example--but was superseded as scientists discovered "new" metals, and because its toxicity became a problem. It was originally an important element in pewter utensils, alloyed with tin; it made the…
NASA Technical Reports Server (NTRS)
1988-01-01
Solar Fundamentals, Inc.'s hot water system employs space-derived heat pipe technology. It is used by a meat packing plant to heat water for cleaning processing machinery. Unit is complete system with water heater, hot water storage, electrical controls and auxiliary components. Other than fans and a circulating pump, there are no moving parts. System's unique design eliminates problems of balancing, leaking, corroding, and freezing.
Evaluating Heat Pipe Performance in 1/6 g Acceleration: Problems and Prospects
NASA Technical Reports Server (NTRS)
Jaworske, Donald A.; McCollum, Timothy A.; Gibson, Marc A.; Sanzi, James L.; Sechkar, Edward A.
2011-01-01
Heat pipes composed of titanium and water are being considered for use in the heat rejection system of a fission power system option for lunar exploration. Placed vertically on the lunar surface, the heat pipes would operate as thermosyphons in the 1/6 g environment. The design of thermosyphons for such an application is determined, in part, by the flooding limit. Flooding is governed by two components: the thickness of the fluid film on the walls of the thermosyphon and the interaction of the fluid flow with the countercurrent vapor flow. Both the fluid-thickness contribution and the interfacial-shear contribution are inversely proportional to gravity. Hence, evaluating the performance of a thermosyphon in a 1 g environment on Earth may inadvertently lead to overestimating the performance of the same thermosyphon as experienced in the 1/6 g environment on the Moon. Several concepts of varying complexity have been proposed for evaluating thermosyphon performance in reduced gravity, ranging from tilting the thermosyphons on Earth based on a cosine function, to flying heat pipes on a low-g aircraft. This paper summarizes the problems and prospects for evaluating thermosyphon performance in 1/6 g.
NASA Astrophysics Data System (ADS)
Plaut, R. H.
2006-01-01
Fluid-conveying pipes with supported ends buckle when the fluid velocity reaches a critical value. For higher velocities, the postbuckled equilibrium shape can be directly related to that for a column under a follower end load. However, the corresponding vibration frequencies are different due to the Coriolis force associated with the fluid flow. Clamped-clamped, pinned-pinned, and clamped-pinned pipes are considered first. Axial sliding is permitted at the downstream end. The pipe is modeled as an inextensible elastica. The equilibrium shape may have large displacements, and small motions about that shape are analyzed. The behavior is conservative in the prebuckling range and nonconservative in the postbuckling range (during which the Coriolis force does work and the motions decay). Next, related columns are studied, first with a concentrated follower load at the axially sliding end, and then with a distributed follower load. In all cases, a shooting method is used to solve the nonlinear boundary-value problem for the equilibrium configuration, and to solve the linear boundary-value problem for the first four vibration frequencies. The results for the three different types of loading are compared.
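The shooting method mentioned above converts a boundary-value problem into an initial-value problem whose free initial slope is adjusted until the far boundary condition is met. A toy version on a linear BVP with a known solution (y'' = -y, y(0) = 0, y(pi/2) = 1, so y = sin x) is sketched below; it is purely illustrative and far simpler than the nonlinear elastica problem of the paper.

```python
import math

def deriv(y, v):
    """Right-hand side of y'' = -y written as a first-order system."""
    return v, -y

def integrate(slope, n=1000):
    """RK4 from x = 0 to pi/2 with y(0) = 0, y'(0) = slope; returns y(pi/2)."""
    y, v = 0.0, slope
    dt = (math.pi / 2) / n
    for _ in range(n):
        k1y, k1v = deriv(y, v)
        k2y, k2v = deriv(y + dt / 2 * k1y, v + dt / 2 * k1v)
        k3y, k3v = deriv(y + dt / 2 * k2y, v + dt / 2 * k2v)
        k4y, k4v = deriv(y + dt * k3y, v + dt * k3v)
        y += dt / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return y

def shoot(target=1.0, lo=0.0, hi=5.0, tol=1e-10):
    """Bisection on the unknown initial slope until y(pi/2) hits the target."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if integrate(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

slope = shoot()
print(slope)   # the exact solution is y = sin(x), so y'(0) = 1
```

For the nonlinear pipe elastica, the same structure applies, except that several unknown initial values are adjusted simultaneously (e.g. with a Newton iteration) rather than by scalar bisection.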
Benchmarking the Collocation Stand-Alone Library and Toolkit (CSALT)
NASA Technical Reports Server (NTRS)
Hughes, Steven; Knittel, Jeremy; Shoan, Wendy; Kim, Youngkwang; Conway, Claire; Conway, Darrel J.
2017-01-01
This paper describes the processes and results of Verification and Validation (V&V) efforts for the Collocation Stand-Alone Library and Toolkit (CSALT). We describe the test program and environments, the tools used for independent test data, and comparison results. The V&V effort employs classical problems with known analytic solutions, solutions from other available software tools, and comparisons to benchmarking data available in the public literature. Presenting all test results is beyond the scope of a single paper. Here we present high-level test results for a broad range of problems, and detailed comparisons for selected problems.
NASA Astrophysics Data System (ADS)
Hirsch, Piotr; Duzinkiewicz, Kazimierz; Grochowski, Michał
2017-11-01
District Heating (DH) systems are commonly supplied by local heat sources. Nowadays, modern insulation materials allow for effective and economically viable heat transportation over long distances (over 20 km). In the paper, a method for the optimized selection of design and operating parameters of a long-distance Heat Transportation System (HTS) is proposed. The method allows evaluation of the feasibility and effectiveness of heat transportation from the considered heat sources. The optimized selection is formulated as a multicriteria decision-making problem. The constraints for this problem include a static HTS model, allowing consideration of the system life cycle, time variability, and spatial topology. Variation of heat demand and ground temperature within the DH area, insulation and pipe aging, and the terrain elevation profile are thereby taken into account in the decision-making process. The HTS construction costs, pumping power, and heat losses are considered as objective functions. Inner pipe diameter, insulation thickness, temperatures, and pumping station locations are optimized during the decision-making process. Moreover, pipe-laying variants, e.g. one pipeline with a larger diameter or two with smaller diameters, can be considered during the optimization. The analyzed optimization problem is multicriteria, hybrid, and nonlinear; because of these properties, a genetic solver was applied.
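For orientation, the heat-loss objective above is driven by the standard formula for steady radial conduction through cylindrical insulation: heat loss per metre grows with the temperature difference and shrinks logarithmically with insulation thickness. A sketch with illustrative parameter values (not taken from the paper):

```python
import math

def heat_loss_per_meter(t_water, t_ground, r_pipe, t_ins, k_ins):
    """Steady radial conduction through cylindrical insulation, in W/m:
    q' = 2*pi*k*(T_water - T_ground) / ln(r_outer / r_inner).
    Ground and pipe-wall resistances are neglected for simplicity."""
    r_outer = r_pipe + t_ins
    return 2 * math.pi * k_ins * (t_water - t_ground) / math.log(r_outer / r_pipe)

# Thicker insulation lowers heat loss (at higher construction cost),
# which is exactly the trade-off the multicriteria optimization balances.
for t_ins in (0.05, 0.10, 0.20):
    q = heat_loss_per_meter(t_water=90.0, t_ground=10.0, r_pipe=0.25,
                            t_ins=t_ins, k_ins=0.03)
    print(t_ins, round(q, 1))
```

The logarithmic dependence explains why doubling the insulation thickness does not halve the loss, and why insulation thickness appears as a genuine decision variable rather than being maximized outright.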
NASA Astrophysics Data System (ADS)
Saitou, Yutaka; Kikuchi, Yoshiaki; Kusakabe, Osamu; Kiyomiya, Osamu; Yoneyama, Haruo; Kawakami, Taiji
Steel pipe sheet pile foundations with large-diameter piles were used for the foundation of the main pier of the Tokyo Gateway Bridge. For large-diameter steel pipe piles, however, the bearing mechanism, including the pile-tip plugging effect, remains unclear owing to a lack of practical examinations, even though loading tests were performed for the Trans-Tokyo Bay Highway. In light of these problems, static pile loading tests in both the vertical and horizontal directions, a dynamic loading test, and cone penetration tests were conducted to determine proper design parameters of the ground for the foundations. Design parameters were determined rationally based on the test results, and a rational design verification was obtained from this research.
PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Frederick, J. M.
2016-12-01
In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification checks whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively-parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations given by Oberkampf and Trucano (2007), which describes four essential elements in high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information.
Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
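The kind of check such a QA suite automates can be illustrated in miniature: solve 1D transient heat conduction in a semi-infinite medium with an explicit finite-difference scheme and compare against the closed-form erfc solution. This is a generic sketch under assumed parameter values, unrelated to PFLOTRAN's actual test harness.

```python
import math

alpha, dx, dt = 1.0e-6, 0.001, 0.4   # diffusivity (m^2/s), grid (m), step (s)
r = alpha * dt / dx**2               # explicit stability requires r <= 0.5
assert r <= 0.5

nx, nt = 200, 2500                   # domain long enough to stay "semi-infinite"
T = [0.0] * nx
T[0] = 1.0                           # step change in surface temperature at t = 0

for _ in range(nt):
    Tn = T[:]
    for i in range(1, nx - 1):
        Tn[i] = T[i] + r * (T[i+1] - 2 * T[i] + T[i-1])
    T = Tn

# Closed-form benchmark: T(x, t) = erfc(x / (2*sqrt(alpha*t))).
t = nt * dt
analytic = [math.erfc(i * dx / (2 * math.sqrt(alpha * t))) for i in range(nx)]
max_err = max(abs(a - b) for a, b in zip(T[:100], analytic[:100]))
print(max_err)
```

A verification suite would record this maximum error, assert it stays below a documented tolerance, and demonstrate that the error shrinks under grid refinement, which is the "accuracy assessment" element of the Oberkampf-Trucano recommendations.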
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem, predominantly by applying traditional optimization theory: the cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has previously been applied to compliant mechanism design. The technique combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation, genetic algorithms and differential evolution, to optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, a multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
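Differential evolution, one of the two evolutionary methods combined here, mutates candidate design vectors with scaled differences of other population members and keeps a trial only if it does not worsen the objective. A bare-bones DE/rand/1/bin sketch minimizing a simple test function (not the truss objective from the TP):

```python
import random

def differential_evolution(f, bounds, pop_size=30, F=0.7, CR=0.9, gens=200, seed=1):
    """Minimal DE/rand/1/bin minimizer for a vector objective f."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)   # clamp to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = f(trial)
            if tc <= cost[i]:                 # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# Sphere test function: global minimum 0 at the origin.
x, fx = differential_evolution(lambda v: sum(t * t for t in v),
                               bounds=[(-5, 5)] * 3)
print(x, fx)
```

In a structural setting, the design vector would hold member areas or control-mesh coordinates and the objective would call a finite element analysis instead of an algebraic test function.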
Benchmarking FEniCS for mantle convection simulations
NASA Astrophysics Data System (ADS)
Vynnytska, L.; Rognes, M. E.; Clark, S. R.
2013-01-01
This paper evaluates the usability of the FEniCS Project for mantle convection simulations by numerical comparison to three established benchmarks. The benchmark problems all concern convection processes in an incompressible fluid induced by temperature or composition variations, and cover three cases: (i) steady-state convection with depth- and temperature-dependent viscosity, (ii) time-dependent convection with constant viscosity and internal heating, and (iii) a Rayleigh-Taylor instability. These problems are modeled by the Stokes equations for the fluid and advection-diffusion equations for the temperature and composition. The FEniCS Project provides a novel platform for the automated solution of differential equations by finite element methods. In particular, it offers a significant flexibility with regard to modeling and numerical discretization choices; we have here used a discontinuous Galerkin method for the numerical solution of the advection-diffusion equations. Our numerical results are in agreement with the benchmarks, and demonstrate the applicability of both the discontinuous Galerkin method and FEniCS for such applications.
NASA Astrophysics Data System (ADS)
Guo, Zhouchao; Lu, Tao; Liu, Bo
2017-04-01
Turbulent penetration can occur when hot and cold fluids mix in a horizontal T-junction pipe at nuclear plants. Caused by the unstable turbulent penetration, temperature fluctuations with large amplitude and high frequency can lead to time-varying wall thermal stress and even thermal fatigue on the inner wall. Numerous cases, however, exist where inner wall temperatures cannot be measured and only outer wall temperature measurements are feasible. Therefore, it is one of the popular research areas in nuclear science and engineering to estimate temperature fluctuations on the inner wall from measurements of outer wall temperatures without damaging the structure of the pipe. In this study, both the one-dimensional (1D) and the two-dimensional (2D) inverse heat conduction problem (IHCP) were solved to estimate the temperature fluctuations on the inner wall. First, numerical models of both the 1D and the 2D direct heat conduction problem (DHCP) were structured in MATLAB, based on the finite difference method with an implicit scheme. Second, both the 1D IHCP and the 2D IHCP were solved by the steepest descent method (SDM), and the DHCP results of temperatures on the outer wall were used to estimate the temperature fluctuations on the inner wall. Third, we compared the temperature fluctuations on the inner wall estimated by the 1D IHCP with those estimated by the 2D IHCP in four cases: (1) when the maximum disturbance of temperature of fluid inside the pipe was 3°C, (2) when the maximum disturbance of temperature of fluid inside the pipe was 30°C, (3) when the maximum disturbance of temperature of fluid inside the pipe was 160°C, and (4) when the fluid temperatures inside the pipe were random from 50°C to 210°C.
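The direct-problem building block described above, an implicit finite-difference scheme for 1D transient conduction through the pipe wall, can be sketched as follows. This is a generic illustration of a backward-Euler step with Dirichlet wall temperatures (in Python rather than the authors' MATLAB, and not their code):

```python
import numpy as np

def implicit_step(T, r, T_inner, T_outer):
    """One backward-Euler step of the 1D heat equation with Dirichlet BCs.
    Solves (I - r*Laplacian) T_new = T_old on the interior nodes,
    where r = alpha*dt/dx^2."""
    n = len(T)
    A = np.zeros((n - 2, n - 2))
    rhs = T[1:-1].copy()
    for i in range(n - 2):
        A[i, i] = 1 + 2 * r
        if i > 0:
            A[i, i - 1] = -r
        if i < n - 3:
            A[i, i + 1] = -r
    rhs[0] += r * T_inner            # boundary values move to the right-hand side
    rhs[-1] += r * T_outer
    T_new = T.copy()
    T_new[0], T_new[-1] = T_inner, T_outer
    T_new[1:-1] = np.linalg.solve(A, rhs)
    return T_new

# Unconditionally stable: even a large r relaxes cleanly toward the
# linear steady-state profile between the two wall temperatures.
n, r = 21, 5.0                       # r well above the explicit limit of 0.5
T = np.zeros(n)
for _ in range(500):
    T = implicit_step(T, r, T_inner=100.0, T_outer=0.0)
print(T[n // 2])                     # midpoint approaches 50.0
```

The inverse problem then wraps such a forward solver in an iterative loop (here the steepest descent method), adjusting the assumed inner-wall history until the computed outer-wall temperatures match the measurements.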
TRUST. I. A 3D externally illuminated slab benchmark for dust radiative transfer
NASA Astrophysics Data System (ADS)
Gordon, K. D.; Baes, M.; Bianchi, S.; Camps, P.; Juvela, M.; Kuiper, R.; Lunttila, T.; Misselt, K. A.; Natale, G.; Robitaille, T.; Steinacker, J.
2017-07-01
Context. The radiative transport of photons through arbitrary three-dimensional (3D) structures of dust is a challenging problem due to the anisotropic scattering of dust grains and strong coupling between different spatial regions. The radiative transfer problem in 3D is solved using Monte Carlo or Ray Tracing techniques as no full analytic solution exists for the true 3D structures. Aims: We provide the first 3D dust radiative transfer benchmark composed of a slab of dust with uniform density externally illuminated by a star. This simple 3D benchmark is explicitly formulated to provide tests of the different components of the radiative transfer problem including dust absorption, scattering, and emission. Methods: The details of the external star, the slab itself, and the dust properties are provided. This benchmark includes models with a range of dust optical depths fully probing cases that are optically thin at all wavelengths to optically thick at most wavelengths. The dust properties adopted are characteristic of the diffuse Milky Way interstellar medium. This benchmark includes solutions for the full dust emission including single photon (stochastic) heating as well as two simplifying approximations: One where all grains are considered in equilibrium with the radiation field and one where the emission is from a single effective grain with size-distribution-averaged properties. A total of six Monte Carlo codes and one Ray Tracing code provide solutions to this benchmark. Results: The solution to this benchmark is given as global spectral energy distributions (SEDs) and images at select diagnostic wavelengths from the ultraviolet through the infrared. Comparison of the results revealed that the global SEDs are consistent on average to a few percent for all but the scattered stellar flux at very high optical depths. The image results are consistent within 10%, again except for the stellar scattered flux at very high optical depths. 
The lack of agreement between different codes of the scattered flux at high optical depths is quantified for the first time. Convergence tests using one of the Monte Carlo codes illustrate the sensitivity of the solutions to various model parameters. Conclusions: We provide the first 3D dust radiative transfer benchmark and validate the accuracy of this benchmark through comparisons between multiple independent codes and detailed convergence tests.
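The Monte Carlo approach used by six of the seven codes can be illustrated in miniature: launch photons into a plane-parallel slab, sample free paths from the exponential attenuation law, and scatter isotropically with probability given by the albedo. In the zero-albedo limit the transmitted fraction must recover exp(-tau). This is a toy sketch, far from a full dust radiative transfer code (no wavelength dependence, no anisotropic phase function, no emission):

```python
import math, random

def slab_transmission(tau, albedo, n_photons, seed=42):
    """Fraction of photons escaping through a slab of optical depth tau,
    entering normally; isotropic scattering with the given albedo."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                        # optical depth, direction cosine
        while True:
            step = -math.log(1.0 - rng.random())  # exponential free path
            z += mu * step
            if z >= tau:                        # escaped through the far face
                escaped += 1
                break
            if z < 0:                           # back-scattered out the near face
                break
            if rng.random() > albedo:           # absorbed at the interaction
                break
            mu = 2 * rng.random() - 1           # isotropic re-emission direction
    return escaped / n_photons

# Pure absorption: transmission should match the analytic exp(-tau).
t = slab_transmission(tau=1.0, albedo=0.0, n_photons=100000)
print(t, math.exp(-1.0))
```

At high optical depth and high albedo, a photon needs many scatterings to escape, which is precisely the regime where the benchmark found the largest code-to-code scatter in the emergent flux.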
NAS Grid Benchmarks: A Tool for Grid Space Exploration
NASA Technical Reports Server (NTRS)
Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)
2001-01-01
We present an approach for benchmarking services provided by computational Grids. It is based on the NAS Parallel Benchmarks (NPB) and is called NAS Grid Benchmark (NGB) in this paper. We present NGB as a data flow graph encapsulating an instance of an NPB code in each graph node, which communicates with other nodes by sending/receiving initialization data. These nodes may be mapped to the same or different Grid machines. Like NPB, NGB will specify several different classes (problem sizes). NGB also specifies the generic Grid services sufficient for running the benchmark. The implementor has the freedom to choose any specific Grid environment. However, we describe a reference implementation in Java, and present some scenarios for using NGB.
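A data flow graph of benchmark nodes is straightforward to sketch: each node runs an NPB-like task once all of its upstream nodes have delivered their initialization data. Below is a minimal topological-order scheduler (the node names reuse NPB kernel names, but the graph itself is hypothetical; NGB's actual graphs and Grid services differ):

```python
from collections import deque

# Hypothetical NGB-style graph: edges point from producer to consumer.
graph = {
    "BT.small": ["MG.small"],
    "SP.small": ["MG.small"],
    "MG.small": ["FT.small"],
    "FT.small": [],
}

def run_order(graph):
    """Kahn's algorithm: schedule each node only after all its producers."""
    indeg = {n: 0 for n in graph}
    for n, outs in graph.items():
        for m in outs:
            indeg[m] += 1
    ready = deque(sorted(n for n, d in indeg.items() if d == 0))
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)              # a real launcher would dispatch n here
        for m in graph[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                ready.append(m)
    if len(order) != len(graph):
        raise ValueError("cycle in data flow graph")
    return order

print(run_order(graph))
```

On a Grid, the interesting part is that independent nodes (here BT.small and SP.small) may run concurrently on different machines; the scheduler only enforces the data dependencies.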
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We describe a new problem size, called Class D, for the NAS Parallel Benchmarks (NPB), whose MPI source code implementation is being released as NPB 2.4. A brief rationale is given for how the new class is derived. We also describe the modifications made to the MPI (Message Passing Interface) implementation to allow the new class to be run on systems with 32-bit integers, and with moderate amounts of memory. Finally, we give the verification values for the new problem size.
Electrohydrodynamic heat pipe research
NASA Technical Reports Server (NTRS)
Jones, T. B.; Perry, M. P.
1973-01-01
Experimental and theoretical results from electrohydrodynamic heat pipe (EHDHP) research are presented. Two problems discussed are the prediction of the effective thermal conductance of an EHDHP and the use of threaded grooves for fluid distribution to the evaporator. Hydrodynamic equations are included, along with a discussion of boundary conditions and burn-out conditions. A discussion of the theoretical and experimental results is presented.
NASA Astrophysics Data System (ADS)
Sekine, Hideki; Yoshida, Kimiaki
This paper deals with the optimization of material composition for minimizing the stress intensity factor of a radial edge crack in thick-walled functionally graded material (FGM) circular pipes under steady-state thermomechanical loading. Homogenizing the FGM circular pipes, by simulating the inhomogeneity of thermal conductivity with a distribution of equivalent eigentemperature gradient and the inhomogeneity of Young's modulus and Poisson's ratio with a distribution of equivalent eigenstrain, we present an approximation method for the stress intensity factor of a radial edge crack in the FGM circular pipes. The optimum material composition for minimizing the stress intensity factor is determined using a nonlinear mathematical programming method. Numerical results obtained for a thick-walled TiC/Al2O3 FGM circular pipe reveal that the stress intensity factor of a radial edge crack can be decreased remarkably by choosing the optimum material composition profile.
NASA Technical Reports Server (NTRS)
Jones, J. A.
1983-01-01
In the Space Telescope's Wide Field Planetary Camera (WFPC) project, eight heat pipes (HPs) are used to carry heat from the camera's inner electronic sensors to the spacecraft's outer, cold radiator surface. For proper device functioning and maximization of the signal-to-noise ratios, the Charge Coupled Devices (CCDs) must be maintained at -95 C or lower. Thermoelectric coolers (TECs) cool the CCDs, and heat pipes deliver each TEC's nominal six to eight watts of heat to the space radiator, which reaches an equilibrium temperature between -15 C and -70 C. An initial problem was the difficulty of producing gas-free aluminum/ammonia heat pipes. An investigation was therefore conducted to determine the cause of the gas generation and the impact of this gas on CCD cooling. In order to study the effect of gas slugs in the WFPC system, a separate HP was made. Attention is given to fabrication, testing, and heat pipe gas-generation chemistry studies.
Thermodynamic analysis of alternate energy carriers, hydrogen and chemical heat pipes
NASA Technical Reports Server (NTRS)
Cox, K. E.; Carty, R. H.; Conger, W. L.; Soliman, M. A.; Funk, J. E.
1976-01-01
The paper discusses the production concepts and efficiencies of two new energy transmission and storage media intended to overcome the disadvantages of electricity as an overall energy carrier: hydrogen produced by water-splitting, and the chemical heat pipe. Hydrogen can be transported or stored, and burned as energy is needed, forming only water and thus obviating pollution problems. The chemical heat pipe envisions a system in which heat is stored as the heat of reaction of chemical species. The thermodynamic analysis of these two methods is discussed in terms of first-law and second-law efficiency. It is concluded that chemical heat pipes offer large advantages over thermochemical hydrogen generation schemes on a first-law efficiency basis, except that the thermal energy is degraded in temperature, thus providing only a source of low-temperature (800 K) heat for process heat applications. On a second-law efficiency basis, hydrogen schemes are superior in that the amount of available work delivered is greater than for chemical heat pipes.
Methods for calculating conjugate problems of heat transfer
NASA Astrophysics Data System (ADS)
Kalinin, E. K.; Dreitser, G. A.; Kostiuk, V. V.; Berlin, I. I.
Methods are examined for calculating various conjugate problems of heat transfer in channels and closed vessels in cases of single-phase and two-phase flow in steady and unsteady conditions. The single-phase-flow studies involve the investigation of gaseous and liquid heat-carriers in pipes, annular and plane channels, and pipe bundles in cases of cooling and heating. General relationships are presented for heat transfer in cases of film, transition, and nucleate boiling, as well as for boiling crises. Attention is given to methods for analyzing the filling and cooling of conduits and tanks by cryogenic liquids; and ways to intensify heat transfer in these conditions are examined.
Spence, Lisa A; Aschengrau, Ann; Gallagher, Lisa E; Webster, Thomas F; Heeren, Timothy C; Ozonoff, David M
2008-01-01
Background From May 1968 through March 1980, vinyl-lined asbestos-cement (VL/AC) water distribution pipes were installed in New England to avoid taste and odor problems associated with asbestos-cement pipes. The vinyl resin was applied to the inner pipe surface in a solution of tetrachloroethylene (perchloroethylene, PCE). Substantial amounts of PCE remained in the liner and subsequently leached into public drinking water supplies. Methods Once aware of the leaching problem and prior to remediation (April-November 1980), Massachusetts regulators collected drinking water samples from VL/AC pipes to determine the extent and severity of the PCE contamination. This study compares newly obtained historical records of PCE concentrations in water samples (n = 88) with concentrations estimated using an exposure model employed in epidemiologic studies on the cancer risk associated with PCE-contaminated drinking water. The exposure model was developed by Webler and Brown to estimate the mass of PCE delivered to subjects' residences. Results The mean and median measured PCE concentrations in the water samples were 66 and 0.5 μg/L, respectively, and the range extended from non-detectable to 2432 μg/L. The model-generated concentration estimates and water sample concentrations were moderately correlated (Spearman rank correlation coefficient = 0.48, p < 0.0001). Correlations were higher in samples taken at taps and spigots vs. hydrants (ρ = 0.84 vs. 0.34), in areas with simple vs. complex geometry (ρ = 0.51 vs. 0.38), and near pipes installed in 1973–1976 vs. other years (ρ = 0.56 vs. 0.42 for 1968–1972 and 0.37 for 1977–1980). Overall, 24% of the variance in measured PCE concentrations was explained by the model-generated concentration estimates (p < 0.0001). Almost half of the water samples had undetectable concentrations of PCE. 
Undetectable levels were more common in areas with the earliest installed VL/AC pipes, at the beginning and middle of VL/AC pipes, at hydrants, and in complex pipe configurations. Conclusion PCE concentration estimates generated using the Webler-Brown model were moderately correlated with measured water concentrations. The present analysis suggests that the exposure assessment process used in prior epidemiological studies could be improved with more accurate characterization of water flow. This study illustrates one method of validating an exposure model in an epidemiological study when historical measurements are not available. PMID:18518975
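The rank correlations quoted above can be reproduced with a short routine. A minimal sketch using made-up modeled-vs-measured concentration pairs (not the study's data) and no tie handling:

```python
# Spearman rank correlation computed from scratch (no tie handling), on
# hypothetical modeled-vs-measured concentration pairs.

def rank(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def spearman_rho(x, y):
    # Pearson correlation of the rank vectors
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

modeled  = [0.5, 2.0, 10.0, 80.0, 400.0]   # hypothetical values, microg/L
measured = [0.0, 0.5, 30.0, 60.0, 2432.0]
print(spearman_rho(modeled, measured))     # perfectly monotone pairs → 1.0
```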
NASA Technical Reports Server (NTRS)
Issacci, F.; Roche, G. L.; Klein, D. B.; Catton, I.
1988-01-01
The vapor flow in a heat pipe was mathematically modeled and the equations governing the transient behavior of the core were solved numerically. The modeled vapor flow is transient, axisymmetric (or two-dimensional) compressible viscous flow in a closed chamber. Two methods of solution are described: the more promising one, a mixed Galerkin finite difference method, failed, whereas a more conventional finite difference method was successful. Preliminary results are presented showing that multi-dimensional flows need to be treated. A model of the liquid phase of a high temperature heat pipe was developed. The model is intended to be coupled to a vapor phase model for the complete solution of the heat pipe problem. The mathematical equations are formulated consistent with physical processes while allowing a computationally efficient solution. The model simulates time dependent characteristics of concern to the liquid phase including input phase change, output heat fluxes, liquid temperatures, container temperatures, liquid velocities, and liquid pressure. Preliminary results were obtained for two heat pipe startup cases. The heat pipe studied used lithium as the working fluid and an annular wick configuration. Recommendations for implementation based on the results obtained are presented. Experimental studies were initiated using a rectangular heat pipe. Both twin beam laser holography and laser Doppler anemometry were investigated. Preliminary experiments were completed and results are reported.
NASA Technical Reports Server (NTRS)
Ganapol, Barry D.; Townsend, Lawrence W.; Wilson, John W.
1989-01-01
Nontrivial benchmark solutions are developed for the galactic ion transport (GIT) equations in the straight-ahead approximation. These equations are used to predict potential radiation hazards in the upper atmosphere and in space. Two levels of difficulty are considered: (1) energy independent, and (2) spatially independent. The analysis emphasizes analytical methods never before applied to the GIT equations. Most of the representations derived have been numerically implemented and compared to more approximate calculations. Accurate ion fluxes are obtained (3 to 5 digits) for nontrivial sources. For monoenergetic beams, both accurate doses and fluxes are found. The benchmarks presented are useful in assessing the accuracy of transport algorithms designed to accommodate more complex radiation protection problems. In addition, these solutions can provide fast and accurate assessments of relatively simple shield configurations.
A novel discrete PSO algorithm for solving job shop scheduling problem to minimize makespan
NASA Astrophysics Data System (ADS)
Rameshkumar, K.; Rajendran, C.
2018-02-01
In this work, a discrete version of the PSO algorithm is proposed to minimize the makespan of a job shop. A novel schedule builder has been utilized to generate active schedules. The discrete PSO is tested using well-known benchmark problems available in the literature. The solutions produced by the proposed algorithm are compared with the best-known solutions published in the literature, as well as with a hybrid particle swarm algorithm and a variable neighborhood search PSO algorithm. The solution construction methodology adopted in this study is found to be effective in producing good-quality solutions for the various benchmark job-shop scheduling problems.
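The fitness evaluation underlying any such PSO is makespan computation. A sketch assuming an operation-based permutation encoding and a semi-active schedule decoder (simpler than the paper's active-schedule builder):

```python
# Sketch of a semi-active schedule decoder and makespan evaluation.
# A particle is decoded from an operation-based permutation: the k-th
# occurrence of job j schedules job j's k-th operation as early as possible.

def makespan(jobs, op_sequence):
    next_op = [0] * len(jobs)      # next operation index for each job
    job_ready = [0] * len(jobs)    # completion time of each job's last op
    mach_ready = {}                # completion time of each machine
    for j in op_sequence:
        machine, duration = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(machine, 0))
        job_ready[j] = start + duration
        mach_ready[machine] = start + duration
        next_op[j] += 1
    return max(job_ready)

# Two jobs on two machines: job 0 = M0(3) then M1(2); job 1 = M1(2) then M0(4)
jobs = [[(0, 3), (1, 2)], [(1, 2), (0, 4)]]
print(makespan(jobs, [0, 1, 0, 1]))  # → 7
```

In a full discrete PSO this decoder would score every particle each iteration; the velocity and position updates over permutations are the part the paper tailors.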
NASA Astrophysics Data System (ADS)
Akhmedagaev, R.; Listratov, Y.
2017-11-01
Direct numerical simulation (DNS) of MHD heat transfer in turbulent liquid metal (LM) flow in a horizontal pipe under the joint effect of a longitudinal magnetic field (MF) and thermo-gravitational convection (TGC) is presented. The authors calculated the effect of TGC in a strong longitudinal MF for homogeneous heating. The averaged velocity and temperature fields, the heat transfer characteristics, and the distribution of wall temperature along the perimeter of the pipe cross section were investigated. TGC affects the velocity field more strongly than the temperature field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lebelo, Ramoshweu Solomon, E-mail: sollyl@vut.ac.za
In this paper the CO2 emission and thermal stability in a long cylindrical pipe of combustible reactive material with variable thermal conductivity are investigated. It is assumed that the cylindrical pipe loses heat by both convection and radiation at the surface. The nonlinear differential equations governing the problem are tackled numerically using the Runge-Kutta-Fehlberg method coupled with a shooting technique. The effects of various thermophysical parameters on the temperature and carbon dioxide fields, together with critical conditions for thermal ignition, are illustrated and discussed quantitatively.
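The numerical strategy named here, integrating an initial value problem and adjusting the unknown initial slope until the far boundary condition is met, can be sketched on a linear test problem. Classical RK4 and bisection stand in for Runge-Kutta-Fehlberg and the paper's actual equations:

```python
# Shooting-method sketch for the two-point BVP u'' = -u, u(0) = 0, u(L) = 1:
# integrate the initial value problem with classical RK4 (standing in for
# Runge-Kutta-Fehlberg) and bisect on the unknown slope s = u'(0).
import math

def rk4(f, y0, x0, x1, n=200):
    h = (x1 - x0) / n
    x, y = x0, list(y0)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return y

def shoot(s, L):
    f = lambda x, y: [y[1], -y[0]]      # state y = [u, u']
    return rk4(f, [0.0, s], 0.0, L)[0]  # u(L) for trial slope s

L, lo, hi = math.pi / 2, 0.0, 2.0
for _ in range(60):                     # bisection on the trial slope
    mid = (lo + hi) / 2
    if shoot(mid, L) < 1.0:
        lo = mid
    else:
        hi = mid
slope = (lo + hi) / 2
print(round(slope, 6))                  # exact answer is u'(0) = 1 (u = sin x)
```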
Study of the collector/heat pipe cooled externally configured thermionic diode
NASA Technical Reports Server (NTRS)
1973-01-01
A collector/heat pipe cooled, externally configured (heated) thermionic diode module was designed for use in a laboratory test to demonstrate the applicability of this concept as the fuel element/converter module of an in-core thermionic electric power source. During the course of the program, this module evolved from a simple experimental mock-up into an advanced unit which was more reactor prototypical. Detailed analysis of all diode components led to their engineering design, fabrication, and assembly, with the exception of the collector/heat pipe. While several designs of high power annular wicked heat pipes were fabricated and tested, each exhibited unexpected performance difficulties. It was concluded that the basic cause of these problems was the formation of crud which interfered with the liquid flow in the annular passage of the evaporator region.
Nations that develop water quality benchmark values have relied primarily on standard data and methods. However, experience with chemicals such as Se, ammonia, and tributyltin has shown that standard methods do not adequately address some taxa, modes of exposure and effects. Deve...
Benchmark Problems of the Geothermal Technologies Office Code Comparison Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Mark D.; Podgorney, Robert; Kelkar, Sharad M.
A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study: benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas some others were designed for different applications but can simulate processes similar to those in EGS. Solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995.
The problems involved two phases of research (stimulation, development, and circulation) in two separate reservoirs. The challenge problems posed specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners.
BioPreDyn-bench: a suite of benchmark problems for dynamic modelling in systems biology.
Villaverde, Alejandro F; Henriques, David; Smallbone, Kieran; Bongard, Sophia; Schmid, Joachim; Cicin-Sain, Damjan; Crombach, Anton; Saez-Rodriguez, Julio; Mauch, Klaus; Balsa-Canto, Eva; Mendes, Pedro; Jaeger, Johannes; Banga, Julio R
2015-02-20
Dynamic modelling is one of the cornerstones of systems biology. Many research efforts are currently being invested in the development and exploitation of large-scale kinetic models. The associated problems of parameter estimation (model calibration) and optimal experimental design are particularly challenging. The community has already developed many methods and software packages which aim to facilitate these tasks. However, there is a lack of suitable benchmark problems which allow a fair and systematic evaluation and comparison of these contributions. Here we present BioPreDyn-bench, a set of challenging parameter estimation problems which aspire to serve as reference test cases in this area. This set comprises six problems including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The level of description includes metabolism, transcription, signal transduction, and development. For each problem we provide (i) a basic description and formulation, (ii) implementations ready-to-run in several formats, (iii) computational results obtained with specific solvers, (iv) a basic analysis and interpretation. This suite of benchmark problems can be readily used to evaluate and compare parameter estimation methods. Further, it can also be used to build test problems for sensitivity and identifiability analysis, model reduction and optimal experimental design methods. The suite, including codes and documentation, can be freely downloaded from the BioPreDyn-bench website, https://sites.google.com/site/biopredynbenchmarks/ .
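The core task in such benchmarks, estimating parameters by fitting model output to data, can be illustrated at toy scale. A one-parameter exponential model and a grid search stand in for the suite's large kinetic models and solvers:

```python
# Toy model-calibration sketch: recover the rate constant of a tiny
# "kinetic model" (one-parameter exponential decay, far simpler than the
# suite's models) from synthetic data by least-squares grid search.
import math

def simulate(k, times, y0=1.0):
    return [y0 * math.exp(-k * t) for t in times]

def sse(k, times, data):
    # sum of squared errors between model output and measurements
    return sum((m - d) ** 2 for m, d in zip(simulate(k, times), data))

times = [0.0, 1.0, 2.0, 3.0]
data = simulate(0.5, times)   # synthetic measurements, true k = 0.5
best_k = min((i / 1000.0 for i in range(1, 2001)),
             key=lambda k: sse(k, times, data))
print(best_k)  # → 0.5
```

Real calibration problems of the benchmark's scale replace the grid search with global optimizers and add noise, identifiability, and experimental-design questions.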
Benchmark Problems Used to Assess Computational Aeroacoustics Codes
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Envia, Edmane
2005-01-01
The field of computational aeroacoustics (CAA) encompasses numerical techniques for calculating all aspects of sound generation and propagation in air directly from fundamental governing equations. Aeroacoustic problems typically involve flow-generated noise, with and without the presence of a solid surface, and the propagation of the sound to a receiver far away from the noise source. It is a challenge to obtain accurate numerical solutions to these problems. The NASA Glenn Research Center has been at the forefront in developing and promoting the development of CAA techniques and methodologies for computing the noise generated by aircraft propulsion systems. To assess the technological advancement of CAA, Glenn, in cooperation with the Ohio Aerospace Institute and the AeroAcoustics Research Consortium, organized and hosted the Fourth CAA Workshop on Benchmark Problems. Participants from industry and academia from both the United States and abroad joined to present and discuss solutions to benchmark problems. These demonstrated technical progress ranging from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The results are documented in the proceedings of the workshop. Problems were solved in five categories. In three of the five categories, exact solutions were available for comparison with CAA results. A fourth category of problems representing sound generation from either a single airfoil or a blade row interacting with a gust (i.e., problems relevant to fan noise) had approximate analytical or completely numerical solutions. The fifth category of problems involved sound generation in a viscous flow. In this case, the CAA results were compared with experimental data.
Fluid-structure interaction with pipe-wall viscoelasticity during water hammer
NASA Astrophysics Data System (ADS)
Keramat, A.; Tijsseling, A. S.; Hou, Q.; Ahmadi, A.
2012-01-01
Fluid-structure interaction (FSI) due to water hammer in a pipeline which has viscoelastic wall behaviour is studied. Appropriate governing equations are derived and numerically solved. In the numerical implementation of the hydraulic and structural equations, viscoelasticity is incorporated using the Kelvin-Voigt mechanical model. The equations are solved by two different approaches, namely the Method of Characteristics-Finite Element Method (MOC-FEM) and full MOC. In both approaches two important effects of FSI in fluid-filled pipes, namely Poisson and junction coupling, are taken into account. The study proposes a more comprehensive model for studying fluid transients in pipelines as compared to previous works, which take into account either FSI or viscoelasticity. To verify the proposed mathematical model and its numerical solutions, the following problems are investigated: axial vibration of a viscoelastic bar subjected to a step uniaxial loading, FSI in an elastic pipe, and hydraulic transients in a pressurised polyethylene pipe without FSI. The results of each case are checked with available exact and experimental results. Then, to study the simultaneous effects of FSI and viscoelasticity, which is the new element of the present research, one problem is solved by the two different numerical approaches. Both numerical methods give the same results, thus confirming the correctness of the solutions.
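The Kelvin-Voigt model mentioned above represents pipe-wall viscoelasticity through a creep-compliance function. A minimal sketch with hypothetical coefficients (not the paper's calibration):

```python
# Generalized Kelvin-Voigt creep-compliance function used to represent
# viscoelastic pipe-wall behaviour; the coefficients below are
# hypothetical, not the paper's calibrated values.
import math

def creep_compliance(t, J0, elements):
    """J(t) = J0 + sum_k Jk * (1 - exp(-t / tau_k)), t >= 0."""
    return J0 + sum(Jk * (1.0 - math.exp(-t / tau)) for Jk, tau in elements)

J0 = 0.7e-9                                  # instantaneous compliance, 1/Pa
elements = [(0.1e-9, 0.05), (0.2e-9, 0.5)]   # (Jk in 1/Pa, tau_k in s)
print(creep_compliance(0.0, J0, elements))   # instantaneous response: J0
```

In a transient simulation the retarded terms make the wall strain lag the pressure, which is what damps and smears the water-hammer wave relative to an elastic pipe.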
Investigation of Freeze and Thaw Cycles of a Gas-Charged Heat Pipe
NASA Technical Reports Server (NTRS)
Ku, Jentung; Ottenstein, Laura; Krimchansky, Alexander
2012-01-01
The traditional constant conductance heat pipes (CCHPs) currently used on most spacecraft run the risk of bursting the pipe when the working fluid is frozen and later thawed. One method to avoid pipe bursting is to use a gas-charged heat pipe (GCHP) that can sustain repeated freeze/thaw cycles. The construction of the GCHP is similar to that of the traditional CCHP except that a small amount of non-condensable gas (NCG) is introduced and a small length is added to the CCHP condenser to serve as the NCG reservoir. During the normal operation, the NCG is mostly confined to the reservoir, and the GCHP functions as a passive variable conductance heat pipe (VCHP). When the liquid begins to freeze in the condenser section, the NCG will expand to fill the central core of the heat pipe, and ice will be formed only in the grooves located on the inner surface of the heat pipe in a controlled fashion. The ice will not bridge the diameter of the heat pipe, thus avoiding the risk of pipe bursting during freeze/thaw cycles. A GCHP using ammonia as the working fluid was fabricated and then tested inside a thermal vacuum chamber. The GCHP demonstrated a heat transport capability of more than 200W at 298K as designed. Twenty-seven freeze/thaw cycles were conducted under various conditions where the evaporator temperature ranged from 163K to 253K and the condenser/reservoir temperatures ranged from 123K to 173K. In all tests, the GCHP restarted without any problem with heat loads between 10W and 100W. No performance degradation was noticed after 27 freeze/thaw cycles. The ability of the GCHP to sustain repeated freeze/thaw cycles was thus successfully demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gylenhaal, J.; Bronevetsky, G.
2007-05-25
CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading (such as NUMA memory layouts, memory contention, and cache effects) in order to influence future system design. Current best-in-class implementations of OpenMP have overheads at least ten times larger than is required by many of our applications for effective use of OpenMP. This benchmark shows the significant negative performance impact of these relatively large overheads and of other thread effects. The CLOMP benchmark is highly configurable to allow a variety of problem sizes and threading effects to be studied, and it carefully checks its results to catch many common threading errors. This benchmark is expected to be included as part of the Sequoia Benchmark suite for the Sequoia procurement.
Ultrasonic Measurement of Erosion/corrosion Rates in Industrial Piping Systems
NASA Astrophysics Data System (ADS)
Sinclair, A. N.; Safavi, V.; Honarvar, F.
2011-06-01
Industrial piping systems that carry aggressive corrosion or erosion agents may suffer from a gradual wall thickness reduction that eventually threatens pipe integrity. Thinning rates could be estimated from the very small change in wall thickness values measured by conventional ultrasound over a time span of at least a few months. However, measurements performed over shorter time spans would yield no useful information—minor signal distortions originating from grain noise and ultrasonic equipment imperfections prevent a meaningful estimate of the minuscule reduction in echo travel time. Using a Model-Based Estimation (MBE) technique, a signal processing scheme has been developed that enables the echo signals from the pipe wall to be separated from the noise. This was implemented in a laboratory experimental program, featuring accelerated erosion/corrosion on the inner wall of a test pipe. The result was a reduction in the uncertainty in the wall thinning rate by a factor of four. This improvement enables a more rapid response by system operators to a change in plant conditions that could pose a pipe integrity problem. It also enables a rapid evaluation of the effectiveness of new corrosion inhibiting agents under plant operating conditions.
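The underlying arithmetic, wall thickness from the round-trip echo time and thinning rate from repeated measurements, can be sketched as follows. The wave speed is a typical figure for steel, not the paper's value, and the travel times are hypothetical:

```python
# Pulse-echo arithmetic behind the thinning-rate estimate: thickness from
# round-trip travel time, then rate from two measurements. The wave speed
# is a typical longitudinal velocity for steel, not the paper's figure.

def wall_thickness_mm(round_trip_s, velocity_m_s=5900.0):
    """Echo crosses the wall twice, hence the factor of two."""
    return velocity_m_s * round_trip_s / 2.0 * 1000.0

# Hypothetical measurements three months apart
t0, t1 = 3.39e-6, 3.38e-6            # round-trip times, s
loss_mm = wall_thickness_mm(t0) - wall_thickness_mm(t1)
print(round(loss_mm * 4, 3))         # annualized thinning rate, mm/year
```

The 0.01 microsecond shift in this example is exactly the kind of minuscule change that grain noise obscures, which is why the paper's model-based echo separation is needed for short observation windows.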
Building Bridges Between Geoscience and Data Science through Benchmark Data Sets
NASA Astrophysics Data System (ADS)
Thompson, D. R.; Ebert-Uphoff, I.; Demir, I.; Gel, Y.; Hill, M. C.; Karpatne, A.; Güereque, M.; Kumar, V.; Cabral, E.; Smyth, P.
2017-12-01
The changing nature of observational field data demands richer and more meaningful collaboration between data scientists and geoscientists. Thus, among other efforts, the Working Group on Case Studies of the NSF-funded RCN on Intelligent Systems Research To Support Geosciences (IS-GEO) is developing a framework to strengthen such collaborations through the creation of benchmark datasets. Benchmark datasets provide an interface between disciplines without requiring extensive background knowledge. The goals are to create (1) a means for two-way communication between geoscience and data science researchers; (2) new collaborations, which may lead to new approaches for data analysis in the geosciences; and (3) a public, permanent repository of complex data sets, representative of geoscience problems, useful to coordinate efforts in research and education. The group identified 10 key elements and characteristics for ideal benchmarks. High impact: A problem with high potential impact. Active research area: A group of geoscientists should be eager to continue working on the topic. Challenge: The problem should be challenging for data scientists. Data science generality and versatility: It should stimulate development of new general and versatile data science methods. Rich information content: Ideally the data set provides stimulus for analysis at many different levels. Hierarchical problem statement: A hierarchy of suggested analysis tasks, from relatively straightforward to open-ended tasks. Means for evaluating success: Data scientists and geoscientists need means to evaluate whether the algorithms are successful and achieve intended purpose. Quick start guide: Introduction for data scientists on how to easily read the data to enable rapid initial data exploration. Geoscience context: Summary for data scientists of the specific data collection process, instruments used, any pre-processing and the science questions to be answered. 
Citability: A suitable identifier to facilitate tracking the use of the benchmark later on, e.g. allowing search engines to find all research papers using it. A first sample benchmark developed in collaboration with the Jet Propulsion Laboratory (JPL) deals with the automatic analysis of imaging spectrometer data to detect significant methane sources in the atmosphere.
Guturu, Parthasarathy; Dantu, Ram
2008-06-01
Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) for the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of the maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.
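The unification rests on reductions to the maximum clique problem; for example, a maximum independent set of G is a maximum clique of G's complement. A brute-force sketch of that reduction (illustration only, nothing like the paper's IEA-PTS):

```python
# Brute-force sketch of the reduction the paper exploits: a maximum
# independent set of G is a maximum clique of the complement of G, so a
# single clique solver serves both problems. (Illustration only; the
# paper's IEA-PTS is an evolutionary heuristic, not exhaustive search.)
from itertools import combinations

def complement(n, edges):
    all_pairs = {frozenset(p) for p in combinations(range(n), 2)}
    return all_pairs - {frozenset(e) for e in edges}

def max_clique(n, edges):
    edge_set = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):               # try largest sizes first
        for nodes in combinations(range(n), size):
            if all(frozenset(p) in edge_set for p in combinations(nodes, 2)):
                return set(nodes)
    return set()

# A 5-cycle: its maximum independent set (size 2) emerges as a maximum
# clique of the complement graph.
n, edges = 5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
mis = max_clique(n, complement(n, edges))
print(len(mis))  # → 2
```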
NASA Astrophysics Data System (ADS)
Altabey, Wael A.; Noori, Mohammed
2017-05-01
Novel modulation electrical potential change (EPC) method for fatigue crack detection in a basalt fibre reinforced polymer (FRP) laminate composite pipe is carried out in this paper. The technique is applied to a laminate pipe with an embedded crack in three layers [0º/90º/0º]s. EPC is applied for evaluating the dielectric properties of basalt FRP pipe by using an electrical capacitance sensor (ECS) to discern damages in the pipe. Twelve electrodes are mounted on the outer surface of the pipe and the changes in the modulation dielectric properties of the piping system are analyzed to detect damages in the pipe. An embedded crack is created by a fatigue internal pressure test. The capacitance values, capacitance change and node potential distribution of ECS electrodes are calculated before and after crack initiates using a finite element method (FEM) by ANSYS and MATLAB, which are combined to simulate sensor characteristics and fatigue behaviour. The crack lengths of the basalt FRP are investigated for various number of cycles to failure for determining crack growth rate. Response surfaces are adopted as a tool for solving inverse problems to estimate crack lengths from the measured electric potential differences of all segments between electrodes to validate the FEM results. The results show that, the good convergence between the FEM and estimated results. Also the results of this study show that the electrical potential difference of the basalt FRP laminate increases during cyclic loading, caused by matrix cracking. The results indicate that the proposed method successfully provides fatigue crack detection for basalt FRP laminate composite pipes.
Benchmarking in national health service procurement in Scotland.
Walker, Scott; Masson, Ron; Telford, Ronnie; White, David
2007-11-01
The paper reports the results of a study on benchmarking activities undertaken by the procurement organization within the National Health Service (NHS) in Scotland, namely National Procurement (previously Scottish Healthcare Supplies Contracts Branch). NHS performance is of course politically important, and benchmarking is increasingly seen as a means to improve performance, so the study was carried out to determine if the current benchmarking approaches could be enhanced. A review of the benchmarking activities used by the private sector, local government and NHS organizations was carried out to establish a framework of the motivations, benefits, problems and costs associated with benchmarking. This framework was used to carry out the research through case studies and a questionnaire survey of NHS procurement organizations both in Scotland and other parts of the UK. Nine of the 16 Scottish Health Boards surveyed reported carrying out benchmarking during the last three years. The findings of the research were that there were similarities in approaches between local government and NHS Scotland Health, but differences between NHS Scotland and other UK NHS procurement organizations. Benefits were seen as significant and it was recommended that National Procurement should pursue the formation of a benchmarking group with members drawn from NHS Scotland and external benchmarking bodies to establish measures to be used in benchmarking across the whole of NHS Scotland.
Microbially Mediated Kinetic Sulfur Isotope Fractionation: Reactive Transport Modeling Benchmark
NASA Astrophysics Data System (ADS)
Wanner, C.; Druhan, J. L.; Cheng, Y.; Amos, R. T.; Steefel, C. I.; Ajo Franklin, J. B.
2014-12-01
Microbially mediated sulfate reduction is a ubiquitous process in many subsurface systems. Isotopic fractionation is characteristic of this anaerobic process, since sulfate reducing bacteria (SRB) favor the reduction of the lighter sulfate isotopologue (32SO42-) over the heavier isotopologue (34SO42-). Detection of isotopic shifts has been utilized as a proxy for the onset of sulfate reduction in subsurface systems such as oil reservoirs and aquifers undergoing uranium bioremediation. Reactive transport modeling (RTM) of kinetic sulfur isotope fractionation has been applied to field and laboratory studies. These RTM approaches employ different mathematical formulations to represent kinetic sulfur isotope fractionation. In order to test the various formulations, we propose a benchmark problem set for the simulation of kinetic sulfur isotope fractionation during microbially mediated sulfate reduction. The benchmark problem set comprises four problem levels and is based on a recent laboratory column experimental study of sulfur isotope fractionation. Pertinent processes impacting sulfur isotopic composition, such as microbial sulfate reduction and dispersion, are included in the problem set. To date, the participating RTM codes are CrunchTope, TOUGHREACT, MIN3P, and The Geochemist's Workbench. Preliminary results from the various codes show reasonable agreement for the problem levels simulating sulfur isotope fractionation in 1D.
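The isotopic shift these codes must reproduce can be illustrated with the closed-system Rayleigh model, a far simpler description than the benchmark's full reactive transport formulations; the fractionation factor used below is an assumed, illustrative value, not one taken from the benchmark:

```python
def rayleigh_delta34s(delta0_permil, f_remaining, alpha):
    """delta34S (permil) of the residual sulfate pool under closed-system
    Rayleigh fractionation. f_remaining is the fraction of sulfate left;
    alpha is the kinetic fractionation factor (alpha < 1 for SRB, which
    preferentially reduce the light isotopologue)."""
    return (delta0_permil + 1000.0) * f_remaining ** (alpha - 1.0) - 1000.0
```

With an assumed fractionation of alpha = 0.980 (about -20 permil enrichment), reducing half of an initially 0-permil sulfate pool shifts the residual sulfate by roughly +14 permil, the kind of signal the benchmark levels are designed to capture.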
Decoupling pipeline influences in soil resistivity measurements with finite element techniques
NASA Astrophysics Data System (ADS)
Deo, R. N.; Azoor, R. M.; Zhang, C.; Kodikara, J. K.
2018-03-01
Periodic inspection of pipeline condition is an important asset management strategy used by water and sewer utilities for efficient and economical operation of their assets in the field. Level 1 pipeline condition assessment, involving resistivity profiling along the pipeline right-of-way, is a common technique for delineating pipe sections that might be installed in highly corrosive soil environments. However, the technique can suffer from significant perturbations arising from the buried pipe itself, resulting in errors in native soil characterisation. To address this problem, a finite element model was developed to investigate the degree to which pipes of different (a) diameters, (b) burial depths, and (c) surface conditions (bare or coated) can influence in-situ soil resistivity measurements made with the Wenner method. It was found that the greatest errors can arise when conducting measurements over a bare pipe with the array aligned parallel to the pipe. Depending upon the pipe surface condition, in-situ resistivity measurements can either underestimate or overestimate the true soil resistivity. From the simulation results, decoupling equations and a guiding framework for removing pipe influences from soil resistivity measurements were developed that can easily be used to correct measurements. The equations require simple a priori information on the pipe diameter, burial depth, surface condition, and the array length and orientation used. Findings from this study have immediate application and are envisaged to be useful for critical civil infrastructure monitoring and assessment.
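For reference, the Wenner array converts a measured voltage and injected current into an apparent resistivity through the standard geometric factor 2*pi*a; a minimal sketch (the function name and argument units are illustrative, and this is not the paper's decoupling framework):

```python
import math

def wenner_apparent_resistivity(spacing_m, voltage_v, current_a):
    """Apparent resistivity (ohm*m) from a Wenner four-electrode sounding:
    rho_a = 2 * pi * a * V / I, with equal electrode spacing a."""
    return 2.0 * math.pi * spacing_m * voltage_v / current_a
```

A buried bare pipe running parallel to the array shunts injected current and lowers the measured V/I ratio, which is precisely the bias in rho_a that the decoupling equations are meant to remove.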
NAS Parallel Benchmark Results 11-96. 1.0
NASA Technical Reports Server (NTRS)
Bailey, David H.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The NAS Parallel Benchmarks have been developed at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion. In other words, the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. These results represent the best results reported to us by the vendors for the specific systems listed. In this report, we present new NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, SGI Origin200, and SGI Origin2000. We also report High Performance Fortran (HPF) based NPB results for IBM SP2 Wide Nodes, HP/Convex Exemplar SPP2000, and SGI/CRAY T3D. These results have been submitted by Applied Parallel Research (APR) and Portland Group Inc. (PGI). We also present sustained performance per dollar for Class B LU, SP and BT benchmarks.
Particle swarm optimization with recombination and dynamic linkage discovery.
Chen, Ying-Ping; Peng, Wen-Chih; Jian, Ming-Chung
2007-12-01
In this paper, we try to improve the performance of the particle swarm optimizer by incorporating the linkage concept, which is an essential mechanism in genetic algorithms, and design a new linkage identification technique called dynamic linkage discovery to address the linkage problem in real-parameter optimization problems. Dynamic linkage discovery is a costless and effective linkage recognition technique that adapts the linkage configuration by employing only the selection operator without extra judging criteria irrelevant to the objective function. Moreover, a recombination operator that utilizes the discovered linkage configuration to promote the cooperation of particle swarm optimizer and dynamic linkage discovery is accordingly developed. By integrating the particle swarm optimizer, dynamic linkage discovery, and recombination operator, we propose a new hybridization of optimization methodologies called particle swarm optimization with recombination and dynamic linkage discovery (PSO-RDL). In order to study the capability of PSO-RDL, numerical experiments were conducted on a set of benchmark functions as well as on an important real-world application. The benchmark functions used in this paper were proposed in the 2005 Institute of Electrical and Electronics Engineers Congress on Evolutionary Computation. The experimental results on the benchmark functions indicate that PSO-RDL can provide a level of performance comparable to that given by other advanced optimization techniques. In addition to the benchmark, PSO-RDL was also used to solve the economic dispatch (ED) problem for power systems, which is a real-world problem and highly constrained. The results indicate that PSO-RDL can successfully solve the ED problem for the three-unit power system and obtain the currently known best solution for the 40-unit system.
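The baseline that PSO-RDL builds on is the canonical global-best particle swarm optimizer; a minimal sketch on the sphere function, without the recombination and dynamic linkage discovery extensions (all parameter values below are generic defaults, not the paper's settings):

```python
import random

def pso_sphere(dim=2, n_particles=20, iters=200, seed=1):
    """Minimal global-best PSO minimizing the sphere function sum(x_i^2)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia weight and acceleration coefficients
    f = lambda x: sum(v * v for v in x)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    gbest = min(pbest, key=f)[:]                # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return f(gbest)
```

The paper's contribution sits on top of this loop: dynamic linkage discovery groups decision variables that should move together, and the recombination operator exchanges those groups between particles.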
Steam generator feedwater nozzle transition piece replacement experience at Salem Unit 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patten, D.B.; Perrin, J.S.; Roberts, A.T.
Cracking of steam generator feedwater piping adjacent to the feedwater nozzles has been a recurring problem since 1979 at Salem Unit 1, owned and operated by Public Service Electric and Gas Company. In addition to the cracking problem, erosion-corrosion at the leading edge of the feedwater nozzle thermal sleeve was also observed in 1992. To provide a long-term solution for the pipe cracking and thermal sleeve erosion-corrosion problems, a unique transition piece forging was specially designed, fabricated, and installed for each of the four steam generators during the 1995 outage. This paper discusses the design, fabrication, and installation of the transition piece forgings at Salem Unit 1, and the experiences gained from this project. It is believed that these experiences may help other utilities when planning similar replacements in the future.
Benchmarking and Threshold Standards in Higher Education. Staff and Educational Development Series.
ERIC Educational Resources Information Center
Smith, Helen, Ed.; Armstrong, Michael, Ed.; Brown, Sally, Ed.
This book explores the issues involved in developing standards in higher education, examining the practical issues involved in benchmarking and offering a critical analysis of the problems associated with this developmental tool. The book focuses primarily on experience in the United Kingdom (UK), but looks also at international activity in this…
Improving Federal Education Programs through an Integrated Performance and Benchmarking System.
ERIC Educational Resources Information Center
Department of Education, Washington, DC. Office of the Under Secretary.
This document highlights the problems with current federal education program data collection activities and lists several factors that make movement toward a possible solution, then discusses the vision for the Integrated Performance and Benchmarking System (IPBS), a vision of an Internet-based system for harvesting information from states about…
A Critical Thinking Benchmark for a Department of Agricultural Education and Studies
ERIC Educational Resources Information Center
Perry, Dustin K.; Retallick, Michael S.; Paulsen, Thomas H.
2014-01-01
Due to an ever changing world where technology seemingly provides endless answers, today's higher education students must master a new skill set reflecting an emphasis on critical thinking, problem solving, and communications. The purpose of this study was to establish a departmental benchmark for critical thinking abilities of students majoring…
Benchmarking NNWSI flow and transport codes: COVE 1 results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayden, N.K.
1985-06-01
The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs.
Three dimensional contact/impact methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulak, R.F.
1987-01-01
The simulation of three-dimensional interface mechanics between reactor components and structures during static contact or dynamic impact is necessary to realistically evaluate their structural integrity under off-normal loads. In our studies of postulated core energy release events, we have found that significant structure-structure interactions occur in some reactor vessel head closure designs and that fluid-structure interactions occur within the reactor vessel. Other examples in which three-dimensional interface mechanics play an important role are: (1) impact response of shipping casks containing spent fuel, (2) whipping pipe impact on reinforced concrete panels or pipe-to-pipe impact after a pipe break, (3) aircraft crash on secondary containment structures, (4) missiles generated by turbine failures or tornados, and (5) drops of heavy components due to lifting accidents. The above is a partial list of reactor safety problems that require adequate treatment of interface mechanics and are discussed in this paper.
Hybrid Heat Pipes for Lunar and Martian Surface and High Heat Flux Space Applications
NASA Technical Reports Server (NTRS)
Ababneh, Mohammed T.; Tarau, Calin; Anderson, William G.; Farmer, Jeffery T.; Alvarez-Hernandez, Angel R.
2016-01-01
Novel hybrid wick heat pipes have been developed to operate against gravity on planetary surfaces, operate in space carrying power over long distances, and act as thermosyphons on the planetary surface for Lunar and Martian landers and rovers. These hybrid heat pipes will be capable of operating at the higher heat fluxes expected in NASA's future spacecraft and on the next generation of polar rovers and equatorial landers. In addition, the sintered evaporator wicks mitigate start-up problems in vertical gravity-aided heat pipes, because the large number of nucleation sites in the wicks allows easy boiling initiation. ACT, NASA Marshall Space Flight Center, and NASA Johnson Space Center are working together on the Advanced Passive Thermal experiment (APTx) to test and validate the operation of a hybrid wick VCHP with a warm reservoir and HiK™ plates in the microgravity environment on the ISS.
Augmented neural networks and problem structure-based heuristics for the bin-packing problem
NASA Astrophysics Data System (ADS)
Kasap, Nihat; Agarwal, Anurag
2012-08-01
In this article, we report on a research project where we applied augmented-neural-networks (AugNNs) approach for solving the classical bin-packing problem (BPP). AugNN is a metaheuristic that combines a priority rule heuristic with the iterative search approach of neural networks to generate good solutions fast. This is the first time this approach has been applied to the BPP. We also propose a decomposition approach for solving harder BPP, in which subproblems are solved using a combination of AugNN approach and heuristics that exploit the problem structure. We discuss the characteristics of problems on which such problem structure-based heuristics could be applied. We empirically show the effectiveness of the AugNN and the decomposition approach on many benchmark problems in the literature. For the 1210 benchmark problems tested, 917 problems were solved to optimality and the average gap between the obtained solution and the upper bound for all the problems was reduced to under 0.66% and computation time averaged below 33 s per problem. We also discuss the computational complexity of our approach.
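AugNN couples a priority-rule heuristic with neural-network-style iterative search; the classical first-fit-decreasing rule is a typical example of such a priority-rule heuristic for the BPP (this sketch is the textbook heuristic, not the authors' AugNN formulation):

```python
def first_fit_decreasing(items, capacity):
    """First-fit-decreasing heuristic for 1D bin packing: sort items by
    non-increasing size, place each in the first bin with room, opening a
    new bin when none fits. Returns the list of bins (lists of sizes)."""
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins
```

On the instance [7, 5, 4, 3, 1] with capacity 10, the rule packs everything into two bins, matching the lower bound ceil(20/10) = 2; AugNN-style approaches perturb such a rule's priorities iteratively to escape cases where the greedy choice is suboptimal.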
Dynamic Inertia Weight Binary Bat Algorithm with Neighborhood Search.
Huang, Xingwang; Zeng, Xuewen; Han, Rui
2017-01-01
Binary bat algorithm (BBA) is a binary version of the bat algorithm (BA). It has been proven that BBA is competitive with other binary heuristic algorithms. Since the velocity update process in the algorithm is the same as in BA, in some cases this algorithm also faces the premature convergence problem. This paper proposes an improved binary bat algorithm (IBBA) to solve this problem. To evaluate the performance of IBBA, standard benchmark functions and zero-one knapsack problems have been employed. The numerical results of the benchmark function experiments show that the proposed approach greatly outperforms the original BBA and binary particle swarm optimization (BPSO). Comparisons with several other heuristic algorithms on zero-one knapsack problems also verify that the proposed algorithm is better able to avoid local minima.
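The step that distinguishes binary variants such as BBA and BPSO from their continuous counterparts is the transfer function that maps a continuous velocity to a bit; a common sigmoid-based sketch (this is the generic binary-swarm update, not IBBA's specific neighborhood-search variant):

```python
import math
import random

def sigmoid(v):
    """Transfer function mapping a continuous velocity to a probability."""
    return 1.0 / (1.0 + math.exp(-v))

def update_bit(velocity, rng):
    """Set a solution bit to 1 with probability sigmoid(velocity)."""
    return 1 if rng.random() < sigmoid(velocity) else 0
```

Because the bit is resampled from sigmoid(velocity) each iteration, large positive velocities drive the bit towards 1 and large negative velocities towards 0, while near-zero velocities keep the bit effectively random.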
Implementation and verification of global optimization benchmark problems
NASA Astrophysics Data System (ADS)
Posypkin, Mikhail; Usov, Alexander
2017-12-01
The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the generation, from a single description, of the value of a function and its gradient at a given point, and of interval estimates of the function and its gradient on a given box. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that the literature contains mistakes in the descriptions of some benchmarks. The library and the test suite are available for download and can be used freely.
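The interval estimates such a library produces can be sketched with minimal interval arithmetic; the toy class below supports only addition and multiplication and is not the authors' C++ library:

```python
class Interval:
    """Minimal interval arithmetic: [lo, hi] endpoints with the standard
    rules for sums and products of intervals."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        ps = (self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi)
        return Interval(min(ps), max(ps))

def bound_x_squared_minus_x(x):
    """Interval enclosure of f(x) = x*x - x evaluated over the interval x."""
    minus_one = Interval(-1.0, -1.0)
    return x * x + minus_one * x
```

On x = [0, 1] this yields the enclosure [-1, 1], which contains but overestimates the true range [-0.25, 0]; such guaranteed (if loose) bounds are what make interval estimates usable for verifying global optimization benchmarks.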
Benchmarking optimization software with COPS 3.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolan, E. D.; More, J. J.; Munson, T. S.
2004-05-24
The authors describe version 3.0 of the COPS set of nonlinearly constrained optimization problems. They have added new problems, as well as streamlined and improved most of the problems. They also provide a comparison of the FILTER, KNITRO, LOQO, MINOS, and SNOPT solvers on these problems.
EXFILTRATION IN SEWER SYSTEMS: IS IT A NATIONAL PROBLEM?
Many municipalities throughout the US have sewerage systems (separate and combined) that may experience exfiltration of untreated wastewater. This study was conducted to focus on the magnitude of the exfiltration problem from sewer pipes on a national basis. The method for estima...
How to handle stuck pipe and fishing problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brouse, M.
1983-01-01
This paper presents valuable information on evaluating fishing problems, including a rule of thumb equation for estimating the number of days that should be spent fishing. Also given is a description and usage breakdown of the numerous tools available to the operator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pawlus, Witold, E-mail: witold.p.pawlus@ieee.org; Ebbesen, Morten K.; Hansen, Michael R.
Design of offshore drilling equipment is a task that involves not only analysis of strict machine specifications and safety requirements but also consideration of changeable weather conditions and a harsh environment. These challenges call for a multidisciplinary approach and make the design process complex. Various modeling software products are currently available to aid design engineers in their effort to test and redesign equipment before it is manufactured. However, given the number of available modeling tools and methods, the choice of the proper modeling methodology becomes not obvious and, in some cases, troublesome. Therefore, we present a comparative analysis of two popular approaches used in modeling and simulation of mechanical systems: multibody and analytical modeling. A gripper arm of an offshore vertical pipe handling machine is selected as a case study for which both models are created. In contrast to some other works, the current paper shows verification of both systems by benchmarking their simulation results against each other. Criteria such as modeling effort and result accuracy are evaluated to assess which modeling strategy is the most suitable given its eventual application.
NASA Astrophysics Data System (ADS)
Ivankovic, A.; Muzaferija, S.; Demirdzic, I.
1997-07-01
Rapid Crack Propagation (RCP) along pressurised plastic pipes is by far the most dangerous pipe failure mode. Despite the economic benefits offered by increasing pipe size and operating pressure, both strategies increase the risk and the potential consequences of RCP. It is therefore extremely important to account for RCP in establishing safe operational conditions. A combined experimental-numerical study is the only reliable approach to addressing the problem, and extensive research is undertaken by various fracture groups (e.g. Southwest Research Institute, USA; Imperial College, UK). This paper presents numerical results from finite volume modelling of full-scale tests on medium density polyethylene gas pressurised pipes. The crack speed and pressure profile are prescribed in the analysis. Both steady-state and transient RCP are considered, and the comparison between the two is shown. The steady-state results are efficiently achieved by employing a full multigrid acceleration technique, in which sets of progressively finer grids are used in V-cycles. Also, the effect of the inelastic behaviour of polyethylene on RCP results is demonstrated.
The design and fabrication of a Stirling engine heat exchanger module with an integral heat pipe
NASA Technical Reports Server (NTRS)
Schreiber, Jeffrey G.
1988-01-01
The conceptual design of a free-piston Stirling Space Engine (SSE) intended for space power applications has been generated. The engine was designed to produce 25 kW of electric power with heat supplied by a nuclear reactor. A novel heat exchanger module was designed to reduce the number of critical joints in the heat exchanger assembly while also incorporating a heat pipe as the link between the engine and the heat source. Two inexpensive verification tests are proposed. The SSE heat exchanger module is described and the operating conditions for the module are outlined. The design process of the heat exchanger modules, including the sodium heat pipe, is briefly described. Similarities between the proposed SSE heat exchanger modules and the LeRC test modules for two test engines are presented. The benefits and weaknesses of using a sodium heat pipe to transport heat to a Stirling engine are discussed. Similarly, the problems encountered when using a true heat pipe, as opposed to a more simple reflux boiler, are described. The instruments incorporated into the modules and the test program are also outlined.
Coliform non-compliance nightmares in water-supply distribution systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geldreich, E.E.
1988-01-01
Coliform occurrences in distribution systems have created great concern for both utilities and water authorities because of the implied public-health implications and failure to meet Federal regulations. Many of the known cases involve systems in the east and midwest, the common denominator being systems that have significant amounts of pipe network over 75 years old and that treat surface waters. Origins of these contamination events can be found in source-water fluctuations, failures in treatment-barrier protection, or loss of pipe-network integrity. Once passage into the distribution network has been achieved, some of the coliforms (Klebsiella, Enterobacter, Citrobacter) and other heterotrophic bacteria adapt to the pipe environment, finding protection and nutrient support in pipe sediments. Under conditions of seasonally warm waters (10 °C) and availability of assimilable organics in the pipe sediments and tubercles, colonization grows into biofilms that may slough off into the water supply, creating a coliform non-compliance problem. The significance of these occurrences and control measures are part of a realistic action plan presented for guidance.
NASA Astrophysics Data System (ADS)
Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey
2016-04-01
Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. Interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes that analyze tsunami propagation and inundation patterns, FLOW 3D and NAMI DANCE, are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite difference computational method to solve the 2D depth-averaged linear and nonlinear forms of the shallow water equations (NSWE) for long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach; the experimental setup is a 1:400 scale model of Monai Valley, located on the west coast of Okushiri Island, Japan. The other benchmark problem was presented at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) annual meeting in Portland, USA; it is a field dataset recording the 2011 Japan tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons show that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons.
Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT-Japan Joint Call and Istanbul Metropolitan Municipality are all acknowledged.
Test One to Test Many: A Unified Approach to Quantum Benchmarks
NASA Astrophysics Data System (ADS)
Bai, Ge; Chiribella, Giulio
2018-04-01
Quantum benchmarks are routinely used to validate the experimental demonstration of quantum information protocols. Many relevant protocols, however, involve an infinite set of input states, of which only a finite subset can be used to test the quality of the implementation. This is a problem, because the benchmark for the finitely many states used in the test can be higher than the original benchmark calculated for infinitely many states. This situation arises in the teleportation and storage of coherent states, for which the benchmark of 50% fidelity is commonly used in experiments, although finite sets of coherent states normally lead to higher benchmarks. Here, we show that the average fidelity over all coherent states can be indirectly probed with a single setup, requiring only two-mode squeezing, a 50-50 beam splitter, and homodyne detection. Our setup enables a rigorous experimental validation of quantum teleportation, storage, amplification, attenuation, and purification of noisy coherent states. More generally, we prove that every quantum benchmark can be tested by preparing a single entangled state and measuring a single observable.
Optimally Stopped Optimization
NASA Astrophysics Data System (ADS)
Vinci, Walter; Lidar, Daniel A.
2016-11-01
We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N =1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
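In the simplest special case, where every call to a randomized solver independently returns a solution of the target quality with probability p at cost c per call, the stopping time is geometric and the expected total cost reduces to c/p (a toy illustration of the figure of merit, not the paper's general optimal stopping formulation):

```python
def expected_cost_repeat_until_success(p_success, cost_per_call):
    """Expected total cost of calling a randomized solver until it returns
    a solution of the target quality, when each call independently succeeds
    with probability p_success: E[calls] = 1/p (geometric distribution)."""
    if not 0.0 < p_success <= 1.0:
        raise ValueError("p_success must be in (0, 1]")
    return cost_per_call / p_success
```

This already captures the key trade-off the benchmark formalizes: a solver that is twice as fast per call but half as likely to succeed has the same expected cost, and only the full optimal stopping treatment breaks such ties when solution quality is itself part of the cost.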
Mitchell, L
1996-01-01
The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.
Microwave-based medical diagnosis using particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Modiri, Arezoo
This dissertation proposes and investigates a novel architecture intended for microwave-based medical diagnosis (MBMD). Furthermore, this investigation proposes novel modifications of the particle swarm optimization algorithm for achieving enhanced convergence performance. MBMD has been investigated through a variety of innovative techniques in the literature since the 1990s and has shown significant promise in early detection of some specific health threats. In comparison to X-ray- and gamma-ray-based diagnostic tools, MBMD does not expose patients to ionizing radiation, and due to the maturity of microwave technology, it lends itself to miniaturization of the supporting systems. This modality has been shown to be effective in detecting breast malignancy, and hence this study focuses on that modality. A novel radiator device and detection technique is proposed and investigated in this dissertation. As expected, hardware design and implementation are of paramount importance in such a study, and a good deal of research, analysis, and evaluation has been done in this regard, which is reported in the ensuing chapters of this dissertation. It is noteworthy that an important element of any detection system is the algorithm used for extracting signatures. Herein, the strong intrinsic potential of swarm-intelligence-based algorithms in solving complicated electromagnetic problems is brought to bear. This task is accomplished by addressing both mathematical and electromagnetic problems. These problems are called benchmark problems throughout this dissertation, since they have known answers. After evaluating the performance of the algorithm on the chosen benchmark problems, the algorithm is applied to the MBMD tumor detection problem. The chosen benchmark problems have already been tackled by solution techniques other than the particle swarm optimization (PSO) algorithm, the results of which can be found in the literature.
However, due to the relatively high level of complexity and randomness inherent in electromagnetic benchmark problems, the literature shows a tendency to oversimplify such problems in order to arrive at tractable solutions when analytical techniques are used. Here, an attempt has been made to avoid such oversimplification when using the proposed swarm-based optimization algorithms.
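The canonical global-best PSO update that the dissertation's modifications build on can be sketched as follows. This is a minimal textbook variant, not the author's modified algorithm; the sphere function, swarm size, and coefficient values are illustrative assumptions.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Minimal global-best PSO minimizing f over [-bound, bound]^dim."""
    pos = [[random.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # per-particle best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull toward pbest + social pull toward gbest
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: a mathematical benchmark with known answer f(0) = 0
random.seed(1)
best, val = pso(lambda x: sum(v * v for v in x), dim=5)
```

Benchmarks with known optima, as the dissertation notes, make it possible to measure convergence directly against the true answer.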
NASA Astrophysics Data System (ADS)
Hanssen, R. F.
2017-12-01
In traditional geodesy, one is interested in determining the coordinates, or the change in coordinates, of predefined benchmarks. These benchmarks are clearly identifiable and are especially established to be representative of the signal of interest. This holds, e.g., for leveling benchmarks, for triangulation/trilateration benchmarks, and for GNSS benchmarks. The desired coordinates are not identical to the basic measurements, and need to be estimated using robust estimation procedures, where the stochastic nature of the measurements is taken into account. For InSAR, however, the `benchmarks' are not predefined. In fact, usually we do not know where an effective benchmark is located, even though we can determine its dynamic behavior pretty well. This poses several significant problems. First, we cannot describe the quality of the measurements, unless we already know the dynamic behavior of the benchmark. Second, if we don't know the quality of the measurements, we cannot compute the quality of the estimated parameters. Third, rather harsh assumptions need to be made to produce a result. These (usually implicit) assumptions differ between processing operators and the used software, and are severely affected by the amount of available data. Fourth, the `relative' nature of the final estimates is usually not explicitly stated, which is particularly problematic for non-expert users. Finally, whereas conventional geodesy applies rigorous testing to check for measurement or model errors, this is hardly ever done in InSAR-geodesy. These problems make it rather impossible to provide a precise, reliable, repeatable, and `universal' InSAR product or service. Here we evaluate the requirements and challenges to move towards InSAR as a geodetically-proof product. 
In particular this involves the explicit inclusion of contextual information, as well as InSAR procedures, standards and a technical protocol, supported by the International Association of Geodesy and the international scientific community.
How Much Debt Is Too Much? Defining Benchmarks for Manageable Student Debt
ERIC Educational Resources Information Center
Baum, Sandy; Schwartz, Saul
2006-01-01
Many discussions of student loan repayment focus on those students for whom repayment is a problem and conclude that the reliance on debt to finance postsecondary education is excessive. However, from both a pragmatic perspective and a logical perspective, a more appropriate approach is to develop different benchmarks for students in different…
Stratified Shear Flows In Pipe Geometries
NASA Astrophysics Data System (ADS)
Harabin, George; Camassa, Roberto; McLaughlin, Richard; UNC Joint Fluids Lab Team
2015-11-01
Exact and series solutions to the full Navier-Stokes equations coupled to the advection diffusion equation are investigated in tilted three-dimensional pipe geometries. Analytic techniques for studying the three-dimensional problem provide a means for tackling interesting questions such as the optimal domain for mass transport, and provide new avenues for experimental investigation of diffusion driven flows. Both static and time dependent solutions will be discussed. NSF RTG DMS-0943851, NSF RTG ARC-1025523, NSF DMS-1009750.
Resilience-based optimal design of water distribution network
NASA Astrophysics Data System (ADS)
Suribabu, C. R.
2017-11-01
Optimal design of a water distribution network generally aims to minimize the capital cost of investments in tanks, pipes, pumps, and other appurtenances. Minimizing the cost of pipes is usually taken as the prime objective, as its proportion of the capital cost of a water distribution project is very high. However, minimizing the capital cost of the pipeline alone may yield an economical network configuration that is unpromising from a resilience point of view. Resilience of the water distribution network is a popular surrogate measure of the network's ability to withstand failure scenarios. To improve resilience, the pipe network optimization can be performed with two objectives: minimizing the capital cost and maximizing a resilience measure of the configuration. In the present work, these two objectives are combined into a single objective and the optimization problem is solved by the differential evolution technique. The paper illustrates the procedure for normalizing objective functions having distinct metrics. Two existing resilience indices and power efficiency are considered for optimal design of the water distribution network. The proposed normalized objective function is found to be efficient under the weighted method of handling the multi-objective water distribution design problem. The numerical results indicate the importance of sizing pipes telescopically along the shortest path of flow to obtain enhanced resilience indices.
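The normalization step described above can be illustrated with a min-max scalarization. This is a generic sketch, not the paper's specific formula; the bounds and weight below are hypothetical values, assumed known (e.g. from single-objective runs).

```python
def combined_objective(cost, resilience, cost_bounds, res_bounds, w=0.5):
    """Scalarize two objectives with distinct metrics via min-max normalization.

    Cost is minimized and resilience maximized; both are mapped to [0, 1] so
    the weighted sum is dimensionless and neither metric dominates by scale."""
    c_lo, c_hi = cost_bounds
    r_lo, r_hi = res_bounds
    c_norm = (cost - c_lo) / (c_hi - c_lo)        # 0 = cheapest design
    r_norm = (r_hi - resilience) / (r_hi - r_lo)  # 0 = most resilient design
    return w * c_norm + (1.0 - w) * r_norm        # minimize this

# A cheap but fragile design vs. a costlier, more resilient one (hypothetical)
f1 = combined_objective(1.0e6, 0.2, cost_bounds=(1.0e6, 5.0e6), res_bounds=(0.1, 0.9))
f2 = combined_objective(3.0e6, 0.8, cost_bounds=(1.0e6, 5.0e6), res_bounds=(0.1, 0.9))
```

A differential evolution solver can then minimize this single scalar, with the weight w trading cost against resilience.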
Comparison of turbulence models and CFD solution options for a plain pipe
NASA Astrophysics Data System (ADS)
Canli, Eyub; Ates, Ali; Bilir, Sefik
2018-06-01
This paper partly reports the state of an ongoing PhD project on turbulent flow in a thick-walled pipe, undertaken to analyze conjugate heat transfer. An ongoing effort on CFD investigation of this problem using cylindrical coordinates and dimensionless governing equations is identified alongside a literature review. The PhD work will be conducted using an in-house code, which first requires preliminary evaluation against commercial codes available in the field. Accordingly, ANSYS CFD was used to evaluate mesh requirements and to assess turbulence models and solution options in terms of computational cost versus the significance of differences in the results. The present work contains a literature survey, an arrangement of the governing equations of the PhD work, the CFD essentials of the preliminary analysis, and findings about the mesh structure and solution options. The mesh element number was varied between 5,000 and 320,000. The k-ɛ, k-ω, Spalart-Allmaras and viscous-laminar models were compared, with Reynolds numbers between 1,000 and 50,000. As expected from the literature, k-ɛ yields more favorable results near the pipe axis and k-ω yields more accurate results near the wall. However, k-ɛ is found sufficient to resolve turbulent structures for a conjugate heat transfer problem in a thick-walled plain pipe.
NASA Astrophysics Data System (ADS)
Wu, Shengli; Du, Kaiping; Xu, Jian; Shen, Wei; Kou, Mingyin; Zhang, Zhekai
2014-07-01
In recent years, two parallel pipes of areal gas distribution (AGD) were installed into the COREX shaft furnace to improve the furnace efficiency. A three-dimensional mathematical model at steady state, which takes a modified three-interface unreacted core model into consideration, is developed in the current work to describe the effect of the AGD pipe on the inner characteristics of shaft furnace. The accuracy of the model is evaluated using the plant operational data. The AGD pipe effectively improves the uniformity of reducing gas distribution, which leads to an increase in gas temperature and concentration of CO or H2 around the AGD pipe, and hence it further contributes to the iron oxide reduction. As a result, the top gas utilization rate and the solid metallization rate (MR) at the bottom outlet are increased by 0.015 and 0.11, respectively. In addition, the optimizations of the flow volume ratio (FVR) of the reducing gas fed through the AGD inlet and the AGD pipe arrangement are further discussed based on the gas flow distribution and the solid MR. Despite the relative suitability of the current FVR (60%), it is still meaningful to enable a manual adjustment of FVR, instead of having it driven by pressure difference, to solve certain production problems. On the other hand, considering the flatter distribution of gas flow, the higher solid MR, and easy installation and replacement, the cross distribution arrangement of AGD pipe with a length of 3 m is recommended to replace the current AGD pipe arrangement.
Sarin, P; Snoeyink, V L; Bebee, J; Jim, K K; Beckett, M A; Kriven, W M; Clement, J A
2004-03-01
Iron release from corroded iron pipes is the principal cause of "colored water" problems in drinking water distribution systems. The corrosion scales present in corroded iron pipes restrict the flow of water and can also deteriorate water quality. This research focused on understanding the effect of dissolved oxygen (DO), a key water quality parameter, on iron release from old corroded iron pipes. Corrosion scales from a 70-year-old galvanized iron pipe were characterized as porous deposits of Fe(III) phases (goethite (alpha-FeOOH), magnetite (Fe(3)O(4)), and maghemite (gamma-Fe(2)O(3))) with a shell-like, dense layer near the top of the scales. High concentrations of readily soluble Fe(II) were present inside the scales. Iron release from these corroded pipes was investigated under both flowing and stagnant water conditions. Our studies confirmed that iron was released to bulk water primarily in the ferrous form. When DO was present in water, higher amounts of iron release were observed during stagnation than under flowing conditions. Additionally, increasing the DO concentration in water during stagnation reduced the amount of iron release. Our studies substantiate that increasing the concentration of oxidants in water and maintaining flowing conditions can reduce the amount of iron release from corroded iron pipes. Based on these studies, it is proposed that iron is released from corroded iron pipes by dissolution of corrosion scales, and that the microstructure and composition of the scales are important parameters influencing the amount of iron released from such systems.
Application of CdZnTe Gamma-Ray Detector for Imaging Corrosion under Insulation
NASA Astrophysics Data System (ADS)
Abdullah, J.; Yahya, R.
2007-05-01
Corrosion under insulation (CUI) on the external wall of steel pipes is a common problem in many types of industrial plants, mainly due to the presence of moisture or water in the insulation materials. This type of corrosion can cause failures in areas that are not normally of primary concern to an inspection program. The failures are often the result of localised corrosion rather than general wasting over a large area, and they can be catastrophic in nature or at least have an adverse economic effect in terms of downtime and repairs. A number of techniques are used today for CUI investigations, chiefly profile radiography, pulsed eddy current, ultrasonic spot readings, and insulation removal. A new system now available is the portable Pipe-CUI-Profiler, a nucleonic system based on a dual-beam gamma-ray absorption technique using Cadmium Zinc Telluride (CdZnTe) semiconductor detectors. The Pipe-CUI-Profiler is designed to inspect pipes of internal diameter 50, 65, 80, 90, 100, 125 and 150 mm. Pipelines of these sizes with aluminium or thin steel sheathing, containing fibreglass or calcium silicate insulation of thickness 25, 40 and 50 mm, can be inspected. The system has proven to be a safe, fast and effective method of inspecting pipe in industrial plant operations. This paper describes the application of gamma-ray techniques and CdZnTe semiconductor detectors in the development of the Pipe-CUI-Profiler for non-destructive imaging of corrosion under insulation of steel pipes. Some results of actual pipe testing in a large-scale industrial plant are presented.
A novel hybrid meta-heuristic technique applied to the well-known benchmark optimization problems
NASA Astrophysics Data System (ADS)
Abtahi, Amir-Reza; Bijari, Afsane
2017-03-01
In this paper, a hybrid meta-heuristic algorithm based on the imperialist competitive algorithm (ICA), harmony search (HS), and simulated annealing (SA) is presented. The body of the proposed hybrid algorithm is based on ICA. The hybrid inherits the harmony-creation process of HS to improve the exploitation phase of ICA, and uses SA to balance the exploration and exploitation phases. The proposed hybrid algorithm is compared with several meta-heuristic methods, including the genetic algorithm (GA), HS, and ICA, on several well-known benchmark instances. Comprehensive experiments and statistical analysis on standard benchmark functions confirm the superiority of the proposed method over the other algorithms. The efficacy of the proposed hybrid algorithm is promising, and it can be applied to several real-life engineering and management problems.
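The SA component used to balance exploration and exploitation rests on the Metropolis acceptance rule. The sketch below shows that rule and a toy 1-D annealing loop; the test function, cooling schedule, and parameter values are illustrative assumptions, not the paper's hybrid.

```python
import math
import random

def sa_accept(delta, temperature):
    """Metropolis rule: always accept an improving move; accept a worsening
    move with probability exp(-delta/T). High T favors exploration,
    low T favors exploitation."""
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)

def sa_minimize(f, x0, t0=1.0, cooling=0.95, iters=300, step=0.5, seed=0):
    """Toy simulated annealing on a 1-D function (illustrative only)."""
    random.seed(seed)
    x, fx = x0, f(x0)
    t = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        if sa_accept(fc - fx, t):
            x, fx = cand, fc
        t *= cooling  # geometric cooling: gradually shift toward exploitation
    return x, fx

# Hypothetical convex test function with minimum at x = 2
x, fx = sa_minimize(lambda v: (v - 2.0) ** 2, x0=-5.0)
```

Embedding this acceptance rule inside a population method such as ICA is one common way to inject the exploration/exploitation balance the abstract describes.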
A Bayesian approach to traffic light detection and mapping
NASA Astrophysics Data System (ADS)
Hosseinyalamdary, Siavash; Yilmaz, Alper
2017-03-01
Automatic traffic light detection and mapping is an open research problem. Traffic lights vary in color, shape, geolocation, activation pattern, and installation, which complicates their automated detection. In addition, the image of a traffic light may be noisy, overexposed, underexposed, or occluded. To address this problem, we propose a Bayesian inference framework to detect and map traffic lights. In addition to the spatio-temporal consistency constraint, traffic light characteristics such as color, shape and height are shown to further improve the accuracy of the proposed approach. The proposed approach has been evaluated on two benchmark datasets and shown to outperform earlier studies. The precision and recall rates are 95.78% and 92.95%, respectively, on the KITTI benchmark, and 98.66% and 94.65% on the LARA benchmark.
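The reported rates follow the standard definitions of precision and recall over true positives, false positives, and false negatives. The sketch below states those definitions; the counts are hypothetical, not the paper's actual confusion matrix.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP): fraction of detections that are correct.
    Recall    = TP / (TP + FN): fraction of true lights that are detected."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical detection counts, for illustration only
p, r = precision_recall(tp=90, fp=10, fn=5)
```

On a benchmark, precision penalizes spurious detections while recall penalizes missed lights, which is why both are reported together.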
NASA Technical Reports Server (NTRS)
Lockard, David P.
2011-01-01
Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark Problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured, and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and against experimental data from two facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details that would be necessary to compute the noise remains challenging. In particular, how best to simulate the effects of the experimental transition strip, and the associated high-Reynolds-number effects, was unclear. Furthermore, capturing the spanwise variation proved difficult.
Novel probabilistic neuroclassifier
NASA Astrophysics Data System (ADS)
Hong, Jiang; Serpen, Gursel
2003-09-01
A novel probabilistic potential function neural network classifier algorithm to deal with classes which are multi-modally distributed and formed from sets of disjoint pattern clusters is proposed in this paper. The proposed classifier has a number of desirable properties which distinguish it from other neural network classifiers. A complete description of the algorithm in terms of its architecture and the pseudocode is presented. Simulation analysis of the newly proposed neuro-classifier algorithm on a set of benchmark problems is presented. Benchmark problems tested include IRIS, Sonar, Vowel Recognition, Two-Spiral, Wisconsin Breast Cancer, Cleveland Heart Disease and Thyroid Gland Disease. Simulation results indicate that the proposed neuro-classifier performs consistently better for a subset of problems for which other neural classifiers perform relatively poorly.
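A potential-function classifier of the family described above can be sketched as a Parzen-window kernel sum per class: each class score is an average of Gaussian kernels centered on that class's training samples, which handles multi-modal classes formed from disjoint clusters naturally. This is a generic sketch under assumed data, not the paper's specific architecture; the kernel width and training points below are hypothetical.

```python
import math

def pnn_classify(x, train, sigma=0.5):
    """Assign x to the class whose averaged Gaussian potential at x is largest."""
    scores = {}
    for label, samples in train.items():
        total = 0.0
        for center in samples:
            d2 = sum((a - b) ** 2 for a, b in zip(x, center))
            total += math.exp(-d2 / (2.0 * sigma ** 2))  # kernel at this sample
        scores[label] = total / len(samples)
    return max(scores, key=scores.get)

# Class "a" is bimodal (two disjoint clusters); the kernel sum still covers both
train = {"a": [(0.0, 0.0), (5.0, 5.0)], "b": [(2.5, 2.5)]}
```

Because every training sample contributes its own kernel, no single prototype per class is assumed, which is exactly what multi-modal distributions require.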
NASA Astrophysics Data System (ADS)
Bernatek-Jakiel, Anita; Jakiel, Michał; Krzemień, Kazimierz
2017-04-01
Soil erosion is caused not only by overland flow, but also by subsurface flow. Piping, the mechanical removal of soil particles by concentrated subsurface flow, is frequently overlooked and not accounted for in soil erosion studies, yet it appears to be far more widespread than has often been supposed. Furthermore, current knowledge of piping dynamics and its quantification relies on a limited number of data, available mainly for loess-mantled areas and marl badlands. This research therefore aims to characterize piping dynamics in mid-altitude mountains under a temperate climate, where piping occurs in Cambisols, not previously considered piping-prone soils. The survey was carried out in the Bereźnica Wyżna catchment (305 ha) in the Bieszczady Mts. (Eastern Carpathians, Poland), where 188 collapsed pipes were mapped. The research was based on the monitoring of selected piping systems located within grasslands (1971-1974, 2013-2016). The development of piping systems is driven mainly by the elongation of pipes and the creation of new collapses (closed depressions and sinkholes), rather than by the enlargement of existing piping forms or the deepening of pipes. This draws attention to the role of dense vegetation (grasslands) in delaying pipe collapses, and to the soil-bedrock interface as the boundary of pipe development. The results reveal the episodic, even stochastic nature of piping activity, expressed by varied one-year and short-term (3-year) erosion rates and pipe elongation. Soil loss varies significantly between years (up to 27.36 t ha-1 y-1), reaching 1.34 t ha-1 y-1 over the 45-year study period. Pipe elongation also varies, from no change to 36 m in a single year.
The results indicate that piping can cause high soil loss even in densely vegetated land (grasslands), generally considered free of significant erosion problems. The scale of piping in the study area is at least three orders of magnitude higher than surface erosion rates (i.e. sheet and rill erosion) under similar land use (grasslands), and is comparable to surface soil erosion on arable land. Piping is thus an important sediment source for fluvial systems, and it leads to significant soil loss in mid-altitude mountains under a temperate climate. This study is supported by the National Science Centre of Poland, as a part of the first author's project - PRELUDIUM 3 (DEC-2012/05/N/ST10/03926). The first author was also granted the ETIUDA 3 doctoral scholarship (UMO-2015/16/T/ST10/00505) financed by the National Science Centre of Poland.
High-resolution Self-Organizing Maps for advanced visualization and dimension reduction.
Saraswati, Ayu; Nguyen, Van Tuc; Hagenbuchner, Markus; Tsoi, Ah Chung
2018-05-04
Kohonen's Self-Organizing feature Map (SOM) provides an effective way to project high-dimensional input features onto a low-dimensional display space while preserving the topological relationships among the input features. Recent advances in algorithms that take advantage of modern computing hardware introduced the concept of high-resolution SOMs (HRSOMs). This paper investigates the capabilities and applicability of the HRSOM as a visualization tool for cluster analysis and its suitability as a pre-processor in ensemble learning models. The evaluation is conducted on a number of established benchmarks and real-world learning problems, namely the policeman benchmark, two web spam detection problems, a network intrusion detection problem, and a malware detection problem. It is found that the visualization resulting from an HRSOM provides new insights into these learning problems. It is furthermore shown empirically that broad benefits can be expected from the use of HRSOMs in both clustering and classification problems.
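The core SOM training loop that HRSOMs scale up can be sketched as follows: find the best-matching unit (BMU) for a sample, then pull the BMU and its grid neighbors toward the sample under a Gaussian neighborhood that shrinks over time. This is a minimal textbook sketch, not the HRSOM algorithm; the grid size, schedules, and toy data are assumptions.

```python
import math
import random

def train_som(data, grid_w, grid_h, iters=500, lr0=0.5, seed=0):
    """Minimal SOM: each node (i, j) on the grid holds a weight vector that
    gradually organizes to mirror the topology of the input data."""
    rng = random.Random(seed)
    dim = len(data[0])
    sigma0 = max(grid_w, grid_h) / 2.0
    weights = {(i, j): [rng.random() for _ in range(dim)]
               for i in range(grid_w) for j in range(grid_h)}
    for t in range(iters):
        x = rng.choice(data)
        frac = t / iters
        lr = lr0 * (1.0 - frac)               # learning rate decays to 0
        sigma = sigma0 * (1.0 - frac) + 0.01  # neighborhood radius shrinks
        bmu = min(weights,
                  key=lambda n: sum((w - v) ** 2 for w, v in zip(weights[n], x)))
        for n, wn in weights.items():
            d2 = (n[0] - bmu[0]) ** 2 + (n[1] - bmu[1]) ** 2
            h = math.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian neighborhood
            for k in range(dim):
                wn[k] += lr * h * (x[k] - wn[k])
    return weights

# Two well-separated clusters in the unit square (hypothetical data)
data = [(0.05, 0.05), (0.1, 0.0), (0.95, 0.95), (0.9, 1.0)]
som = train_som(data, grid_w=3, grid_h=3)
```

A high-resolution SOM uses the same update on a much finer grid, which is what makes the resulting visualization detailed enough for cluster analysis.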
I/O-Efficient Scientific Computation Using TPIE
NASA Technical Reports Server (NTRS)
Vengroff, Darren Erik; Vitter, Jeffrey Scott
1996-01-01
In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.
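The classic I/O-efficient paradigm TPIE supports can be illustrated with external merge sort: sort memory-sized runs, spill each run to disk, then k-way merge the runs. This is a generic Python sketch of the idea, not TPIE's C++ API; the one-integer-per-line run format is an assumption for illustration.

```python
import heapq
import os
import tempfile

def _read_run(name):
    """Stream one sorted run back from disk, one integer per line."""
    with open(name) as fh:
        for line in fh:
            yield int(line)

def external_sort(values, chunk_size):
    """Sort data larger than memory: sorted runs on disk + heap-based merge."""
    run_files = []
    for i in range(0, len(values), chunk_size):
        run = sorted(values[i:i + chunk_size])        # in-memory sort of one run
        f = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
        f.write("\n".join(map(str, run)))
        f.close()
        run_files.append(f.name)
    # k-way merge reads each run sequentially, which is the I/O-efficient part
    merged = list(heapq.merge(*(_read_run(n) for n in run_files)))
    for name in run_files:
        os.remove(name)
    return merged

out = external_sort([5, 3, 8, 1, 9, 2, 7], chunk_size=3)
```

Because every run is read strictly sequentially, the merge performs O(n / B) block transfers per pass, which is the kind of access pattern a system like TPIE manages for the programmer.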
Splitting of turbulent spot in transitional pipe flow
NASA Astrophysics Data System (ADS)
Wu, Xiaohua; Moin, Parviz; Adrian, Ronald J.
2017-11-01
A recent study (Wu et al., PNAS, 1509451112, 2015) demonstrated the feasibility and accuracy of direct computation of Osborne Reynolds' pipe transition problem without the unphysical, axially periodic boundary condition. Here we use this approach to study the splitting of turbulent spots in transitional pipe flow, a feature first discovered by E.R. Lindgren (Arkiv Fysik 15, 1959). It has been widely believed that spot splitting is a mysterious stochastic process with general implications for the lifetime and sustainability of wall turbulence. We address two questions: (1) What is the dynamics of turbulent spot splitting in pipe transition? Specifically, is there a connection between the instantaneous strain rate field and the spot splitting? (2) How does the passive scalar field behave during pipe spot splitting? In this study, the turbulent spot is introduced at the inlet plane through a sixty-degree-wide numerical wedge within which fully developed turbulent profiles are assigned over a short time interval; the simulation Reynolds numbers are 2400 for a 500-radii-long pipe and 2300 for a 1000-radii-long pipe. Numerical dye is tagged on the imposed turbulent spot at the inlet, and splitting of the imposed spot is detected very easily. Preliminary analysis of the DNS results suggests that turbulent spot splitting can be understood from the instantaneous strain rate field, and that such splitting may not be relevant in external flows such as the flat-plate boundary layer.
Hospital-affiliated practices reduce 'red ink'.
Bohlmann, R C
1998-01-01
Many complain that hospital-group practice affiliations are a failed model and should be abandoned. The author argues for a less rash approach, saying the goal should be to understand the problems precisely, then fix them. Benchmarking is a good place to start. The article outlines the basic definition and ground rules of benchmarking and explains what resources help accomplish the task.
1989-07-01
were checked by means of a cone penetrometer. Because of concerns that clogging would occur in the random zones, a special filter cloth sock was...that surrounded the pipes was dirty. Figure 3: Old 24-in. BCCMP from toe drain; perforations are essentially plugged due to incrustation. Figure 4:...associated deposits of ferric hydroxide have resulted in discolored water, unpalatable taste and odors, and reductions in flow through pipes. Additionally
NASA Astrophysics Data System (ADS)
Gornostaev, K. K.; Kovalev, A. V.; Malygina, Y. V.
2018-03-01
In this article the authors consider the problem of determining the stress-strain state of an elastoplastic pipe with the von Mises yield condition under plane strain, for a compressible material and taking temperature into account. The problem was solved using the small parameter method. Expressions for the stress and displacement fields were obtained, as well as the ratio of the radius of the elastoplastic boundary, in the zeroth and first approximations.
The Puzzle of a Marble in a Spinning Pipe
2015-05-01
Approved for public release; distribution unlimited. Published in Physics Education 50 (3) 279. Abstract: What trajectory does a marble follow if it is held...Problem statement: A marble is placed one-third of the length along a
Benchmarking the Multidimensional Stellar Implicit Code MUSIC
NASA Astrophysics Data System (ADS)
Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.
2017-04-01
We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment dominated by radiative effects; in this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed, each with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes, including ATHENA and the PENCIL code. MUSIC reproduces both the behaviour of established, widely used codes and the results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
Analysis of fluid-structure interaction in a frame pipe undergoing plastic deformations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khamlichi, A.; Jezequel, L.; Jacques, Y.
1995-11-01
Water hammer pressure waves of sufficiently large magnitude can cause plastic flexural deformations in a frame pipe. In this study, the authors propose a modelization of this problem based on plane wave approximation for the fluid equations and approximation of the structure motion by a single-degree-of-freedom elastic-plastic oscillator. Direct analytical integration of elastic-plastic equations through pipe sections, then over the pipe length, is performed in order to identify the oscillator parameters. Comparison of the global load-displacement relationship obtained with the finite element solution was considered and has shown good agreement. Fluid-structure coupling is achieved by assuming elbows to act like plane monopole sources, where localized jumps of fluid velocity occur and where net pressure forces are exerted on the structure. The authors have applied this method to analyze the fluid-structure interaction in this range of deformations. Energy exchange between the fluid and the structure and energy dissipation are quantified.
Reactivity Studies of Inconel 625 with Sodium and Lunar Regolith Simulant
NASA Technical Reports Server (NTRS)
Gillies, Donald; Salvail, Pat; Reid, Bob; Colebaugh, James; Easterling, Greg
2008-01-01
In the event of the need for nuclear power in exploration, high-flux heat pipes will be needed to transfer heat from space nuclear reactors to various energy conversion devices and to safely dissipate excess heat. Successful habitation will necessitate continuous operation of alkali-metal-filled heat pipes for 10 or more years in a hostile environment with little maintenance. They must be chemical- and creep-resistant in the high vacuum of space, and they must operate reliably in low-gravity conditions with intermittent high radiation fluxes. One candidate material for the heat pipe shell, Inconel 625, has been tested to determine its compatibility with liquid sodium, since any reactivity could manifest itself as a problem over the long time periods anticipated. In addition, possible reactions with the lunar regolith will take place, as will evaporation of selected elements at the external surfaces of the heat pipes, so there is a need for extensive long-term testing under simulated lunar conditions.
Analysis of collapse in flattening a micro-grooved heat pipe by lateral compression
NASA Astrophysics Data System (ADS)
Li, Yong; He, Ting; Zeng, Zhixin
2012-11-01
The collapse of thin-walled micro-grooved heat pipes is a common phenomenon in the tube flattening process, which seriously degrades the heat transfer performance and appearance of the heat pipe, and for which no good solution currently exists. A new method, heating the heat pipe during flattening, is proposed to eliminate the collapse. The effectiveness of the proposed method is investigated through a theoretical model, a finite element (FE) analysis, and experiments. First, a theoretical model based on a deformation model of six plastic hinges and the Antoine equation of the working fluid is established to analyze the collapse of thin walls at different temperatures. Then, FE simulations and experiments on the flattening process at different temperatures are carried out and compared with the theoretical model. Finally, the FE model is used to study the loads on the plates at different temperatures and heights of flattened heat pipes. The results of the theoretical model agree with those of the FE simulation and experiments in the flattened zone. The collapse occurs at room temperature; as the temperature increases, the collapse decreases and finally disappears at approximately 130 °C for various heights of flattened heat pipes. The loads on the moving plate increase as the temperature increases, so a reasonable temperature for eliminating the collapse while limiting the load is approximately 130 °C. The advantage of the proposed method is that the collapse is reduced or eliminated by exploiting the thermal deformation characteristics of the heat pipe itself instead of external support. As a result, the heat transfer efficiency of the heat pipe is raised.
Accurate ω-ψ Spectral Solution of the Singular Driven Cavity Problem
NASA Astrophysics Data System (ADS)
Auteri, F.; Quartapelle, L.; Vigevano, L.
2002-08-01
This article provides accurate spectral solutions of the driven cavity problem, calculated in the vorticity-stream function representation without smoothing the corner singularities—a prima facie impossible task. As in a recent benchmark spectral calculation by primitive variables of Botella and Peyret, closed-form contributions of the singular solution for both zero and finite Reynolds numbers are subtracted from the unknown of the problem tackled here numerically in biharmonic form. The method employed is based on a split approach to the vorticity and stream function equations, a Galerkin-Legendre approximation of the problem for the perturbation, and an evaluation of the nonlinear terms by Gauss-Legendre numerical integration. Results computed for Re=0, 100, and 1000 compare well with the benchmark steady solutions provided by the aforementioned collocation-Chebyshev projection method. The validity of the proposed singularity subtraction scheme for computing time-dependent solutions is also established.
NASA Astrophysics Data System (ADS)
Job, Joshua; Wang, Zhihui; Rønnow, Troels; Troyer, Matthias; Lidar, Daniel
2014-03-01
We report on experimental work benchmarking the performance of the D-Wave Two programmable annealer on its native Ising problem and comparing it to available classical algorithms. In this talk we will focus on the comparison with an algorithm originally proposed and implemented by Alex Selby. This algorithm uses dynamic programming to repeatedly optimize over randomly selected maximal induced trees of the problem graph, starting from a random initial state. If one is looking for a quantum advantage over classical algorithms, one should compare to classical algorithms that are designed and optimized to take maximal advantage of the structure of the type of problem used for the comparison. In that light, this classical algorithm should serve as a good gauge for any potential quantum speedup for the D-Wave Two.
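The core of the tree-based classical approach is that the Ising energy can be minimized exactly on a tree by dynamic programming from the leaves up. The sketch below performs that DP on a given tree; Selby's full algorithm repeatedly extracts random maximal induced trees from the problem graph and re-optimizes, which is not shown. All data structures here are illustrative:

```python
# Exact Ising ground-state energy on a tree by leaf-to-root dynamic programming.
# E(s) = sum_i h[i]*s_i + sum_(i,j) J[(i,j)]*s_i*s_j, spins s_i in {-1, +1}.
# On a tree, the minimum over each subtree can be tabulated per root spin.

def tree_ising_min(adj, J, h, root=0):
    """adj: node -> list of neighbours (must form a tree); J keyed by (i, j)."""
    def coupling(a, b):
        return J[(a, b)] if (a, b) in J else J[(b, a)]

    def best(node, parent):
        # cost[s] = min energy of the subtree rooted at `node`, given spin s
        cost = {s: h[node] * s for s in (-1, 1)}
        for child in adj[node]:
            if child == parent:
                continue
            sub = best(child, node)
            for s in (-1, 1):
                cost[s] += min(coupling(node, child) * s * sc + sub[sc]
                               for sc in (-1, 1))
        return cost

    return min(best(root, None).values())

# Ferromagnetic 3-spin chain, J = -1, no fields: ground-state energy is -2
adj = {0: [1], 1: [0, 2], 2: [1]}
J = {(0, 1): -1.0, (1, 2): -1.0}
h = {0: 0.0, 1: 0.0, 2: 0.0}
print(tree_ising_min(adj, J, h))  # -2.0
```

Because the DP is exact on each tree, repeated passes over random induced trees monotonically improve the spin configuration, which is what makes this a strong classical baseline.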
(U) Analytic First and Second Derivatives of the Uncollided Leakage for a Homogeneous Sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favorite, Jeffrey A.
2017-04-26
The second-order adjoint sensitivity analysis methodology (2nd-ASAM), developed by Cacuci, has been applied to derive second derivatives of a response with respect to input parameters for uncollided particles in an inhomogeneous transport problem. In this memo, we present an analytic benchmark for verifying the derivatives of the 2nd-ASAM. The problem is a homogeneous sphere, and the response is the uncollided total leakage. This memo does not repeat the formulas given in Ref. 2. We are preparing a journal article that will include the derivation of Ref. 2 and the benchmark of this memo.
A Study of Fixed-Order Mixed Norm Designs for a Benchmark Problem in Structural Control
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.; Hsu, C. C.
1998-01-01
This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full-order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of the H2 design to unmodelled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full-order compensators that are robust to both unmodelled dynamics and parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.
Integrated Sensing Processor, Phase 2
2005-12-01
performance analysis for several baseline classifiers including neural nets, linear classifiers, and kNN classifiers. Use of CCDR as a preprocessing step ... below the level of the benchmark non-linear classifier for this problem (kNN). Furthermore, the CCDR-preconditioned kNN achieved a 10% improvement over ... the benchmark kNN without CCDR. Finally, we found an important connection between intrinsic dimension estimation via entropic graphs and the optimal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pointer, William David; Shaver, Dillon; Liu, Yang
The U.S. Department of Energy, Office of Nuclear Energy charges participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with the development of advanced modeling and simulation capabilities that can be used to address design, performance, and safety challenges in the development and deployment of advanced reactor technology. NEAMS has established a high impact problem (HIP) team to demonstrate the applicability of these tools to the identification and mitigation of sources of steam generator flow induced vibration (SGFIV). The SGFIV HIP team is working to evaluate vibration sources in an advanced helical coil steam generator using computational fluid dynamics (CFD) simulations of the turbulent primary coolant flow over the outside of the tubes and CFD simulations of the turbulent multiphase boiling secondary coolant flow inside the tubes, integrated with high resolution finite element method assessments of the tubes and their associated structural supports. This report summarizes the demonstration of a methodology for the multiphase boiling flow analysis inside the helical coil steam generator tube. A helical coil steam generator configuration has been defined based on the experiments completed by Politecnico di Milano in the SIET helical coil steam generator tube facility. Simulations of the defined problem have been completed using the Eulerian-Eulerian multi-fluid modeling capabilities of the commercial CFD code STAR-CCM+. Simulations suggest that the two phases will quickly stratify in the slightly inclined pipe of the helical coil steam generator. These results have been successfully benchmarked against both empirical correlations for pressure drop and simulations using an alternate CFD methodology, the dispersed-phase mixture modeling capabilities of the open source CFD code Nek5000.
Predicting, examining, and evaluating FAC in US power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohn, M.J.; Garud, Y.S.; Raad, J. de
1999-11-01
There have been many pipe failures in fossil and nuclear power plant piping systems caused by flow-accelerated corrosion (FAC). In some piping systems, this failure mechanism may be the most important type of damage to mitigate because FAC damage has led to catastrophic failures and fatalities. Detecting the damage and mitigating the problem can significantly reduce future forced outages and increase personnel safety. This article discusses the implementation of recent developments to select FAC inspection locations, perform cost-effective examinations, evaluate results, and mitigate FAC failures. These advances include implementing a combination of software to assist in selecting examination locations and an improved pulsed eddy current technique to scan for wall thinning without removing insulation. The use of statistical evaluation methodology and possible mitigation strategies are also discussed.
Numerical Boundary Conditions for Computational Aeroacoustics Benchmark Problems
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Kurbatskii, Konstantin A.; Fang, Jun
1997-01-01
Category 1, Problems 1 and 2; Category 2, Problem 2; and Category 3, Problem 2 are solved computationally using the Dispersion-Relation-Preserving (DRP) scheme. All these problems are governed by the linearized Euler equations. The resolution requirements of the DRP scheme for maintaining low numerical dispersion and dissipation, as well as accurate wave speeds, in solving the linearized Euler equations are now well understood. As long as 8 or more mesh points per wavelength are employed in the numerical computation, high quality results are assured. For the first three categories of benchmark problems, therefore, the real challenge is to develop high quality numerical boundary conditions. For Category 1, Problems 1 and 2, it is the curved wall boundary conditions. For Category 2, Problem 2, it is the internal radiation boundary conditions inside the duct. For Category 3, Problem 2, they are the inflow and outflow boundary conditions upstream and downstream of the blade row. These are the foci of the present investigation. Special nonhomogeneous radiation boundary conditions are developed that generate the incoming disturbances and at the same time allow the outgoing reflected or scattered acoustic disturbances to leave the computation domain without significant reflection. Numerical results based on these boundary conditions are provided.
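The quoted resolution rule (at least 8 mesh points per wavelength) translates directly into a grid-sizing check. This helper is an illustrative sketch of that bookkeeping, not part of the DRP scheme itself:

```python
# Resolution check for a wave-propagation grid: the DRP guideline quoted above
# calls for >= 8 mesh points per wavelength for high-quality results.

def max_resolved_frequency(c: float, dx: float, ppw: int = 8) -> float:
    """Highest frequency (Hz) resolved with `ppw` points per wavelength
    on a grid of spacing dx (m), for wave speed c (m/s)."""
    return c / (ppw * dx)

def required_dx(c: float, f: float, ppw: int = 8) -> float:
    """Grid spacing (m) needed to resolve frequency f at `ppw` points/wavelength."""
    return c / (ppw * f)

# Example: sound at 340 m/s on a 1 cm grid resolves frequencies up to 4250 Hz
print(max_resolved_frequency(340.0, 0.01))
```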
Supply network configuration—A benchmarking problem
NASA Astrophysics Data System (ADS)
Brandenburg, Marcus
2018-03-01
Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.
NASA Astrophysics Data System (ADS)
Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter
2018-05-01
This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing, and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using the CPLEX solver, small-size problems are solved to optimality. Two metaheuristics, a restarted simulated annealing algorithm and a co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original versions and the benchmarked methods, and solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.
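A restart mechanism of the kind described, where the current temperature is replaced by a fresh one so the search can continue, can be sketched generically. The toy below minimizes an arbitrary cost over bitstrings; it is an illustrative sketch, not the paper's RMALB/S algorithm, whose moves operate on task, sequence, and robot assignments:

```python
import math
import random

def restarted_sa(cost, n_bits, iters=5000, t0=2.0, cooling=0.995,
                 t_restart=1e-3, seed=0):
    """Generic restarted simulated annealing over bitstrings.

    When the temperature decays below t_restart it is reset to t0 -- the
    restart mechanism -- letting the search escape local minima.
    Illustrative sketch only; real neighborhood moves are problem-specific.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    c = cost(x)
    best, best_c = x[:], c
    t = t0
    for _ in range(iters):
        y = x[:]
        y[rng.randrange(n_bits)] ^= 1          # flip one random bit
        cy = cost(y)
        if cy <= c or rng.random() < math.exp(-(cy - c) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x[:], c
        t *= cooling
        if t < t_restart:                      # restart the temperature
            t = t0
    return best, best_c

# Toy cost: number of 1-bits; the optimum is the all-zero string (cost 0)
sol, val = restarted_sa(lambda b: sum(b), 12)
print(val)
```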
The rotating movement of three immiscible fluids - A benchmark problem
Bakker, M.; Oude, Essink G.H.P.; Langevin, C.D.
2004-01-01
A benchmark problem involving the rotating movement of three immiscible fluids is proposed for verifying the density-dependent flow component of groundwater flow codes. The problem consists of a two-dimensional strip in the vertical plane filled with three fluids of different densities separated by interfaces. Initially, the interfaces between the fluids make a 45° angle with the horizontal. Over time, the fluids rotate to the stable position whereby the interfaces are horizontal; all flow is caused by density differences. Two cases of the problem are presented, one resulting in a symmetric flow field and one resulting in an asymmetric flow field. An exact analytical solution for the initial flow field is presented by application of vortex theory and complex variables. Numerical results are obtained using three variable-density groundwater flow codes (SWI, MOCDENS3D, and SEAWAT). Initial horizontal velocities of the interfaces, as simulated by the three codes, compare well with the exact solution. The three codes are used to simulate the positions of the interfaces at two times; the three codes produce nearly identical results. The agreement between the results is evidence that the specific rotational behavior predicted by the models is correct. It also shows that the proposed problem may be used to benchmark variable-density codes. It is concluded that the three models can be used to model accurately the movement of interfaces between immiscible fluids, and have little or no numerical dispersion.
A new numerical benchmark for variably saturated variable-density flow and transport in porous media
NASA Astrophysics Data System (ADS)
Guevara, Carlos; Graf, Thomas
2016-04-01
In subsurface hydrological systems, spatial and temporal variations in solute concentration and/or temperature may affect fluid density and viscosity. These variations can lead to potentially unstable situations in which a dense fluid overlies a less dense fluid. Such situations can produce instabilities that appear as dense plume fingers migrating downwards, counteracted by vertical upward flow of freshwater (Simmons et al., Transp. Porous Media, 2002). As a result of unstable variable-density flow, solute transport rates are increased over large distances and times compared to constant-density flow. The numerical simulation of variable-density flow in saturated and unsaturated media requires corresponding benchmark problems against which a computer model is validated (Diersch and Kolditz, Adv. Water Resour., 2002). Recorded data from a laboratory-scale experiment of variable-density flow and solute transport in saturated and unsaturated porous media (Simmons et al., 2002) are used to define a new numerical benchmark. The HydroGeoSphere code (Therrien et al., 2004), coupled with PEST (www.pesthomepage.org), is used to obtain an optimized parameter set capable of adequately representing the data set of Simmons et al. (2002). Fingering in the numerical model is triggered using random hydraulic conductivity fields. Due to the inherent randomness, a large number of simulations were conducted in this study. The optimized benchmark model adequately predicts the plume behavior and the fate of solutes. This benchmark is useful for model verification of variable-density flow problems in saturated and/or unsaturated media.
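Triggering fingers with a random hydraulic conductivity field is commonly done by perturbing a mean conductivity with lognormal noise. The sketch below generates one such field; the mean, variance, and grid dimensions are illustrative assumptions, not the study's calibrated parameters, and the cell-by-cell noise is uncorrelated, whereas real fields usually carry spatial correlation:

```python
import math
import random

def lognormal_k_field(nx, nz, k_mean=1e-5, sigma_lnk=0.5, seed=42):
    """Generate an (nz x nx) grid of hydraulic conductivities (m/s) whose
    natural log is normally distributed around ln(k_mean) with std sigma_lnk.
    Uncorrelated noise; spatial correlation is omitted for brevity."""
    rng = random.Random(seed)
    mu = math.log(k_mean)
    return [[rng.lognormvariate(mu, sigma_lnk) for _ in range(nx)]
            for _ in range(nz)]

field = lognormal_k_field(50, 20)
print(len(field), len(field[0]))  # 20 50
```

Each Monte Carlo realization uses a different seed, which is why the study reports results over a large number of simulations.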
NASA Technical Reports Server (NTRS)
Yee, Karl Y.; Ganapathi, Gani B.; Sunada, Eric T.; Bae, Youngsam; Miller, Jennifer R.; Beinsford, Daniel F.
2013-01-01
Improved methods of heat dissipation are required for modern, high-power density electronic systems. As increased functionality is progressively compacted into decreasing volumes, this need will be exacerbated. High-performance chip power is predicted to increase monotonically and rapidly with time. Systems utilizing these chips are currently reliant upon decades-old cooling technology. Heat pipes offer a solution to this problem. Heat pipes are passive, self-contained, two-phase heat dissipation devices. Heat conducted into the device through a wick structure converts the working fluid into a vapor, which then releases the heat via condensation after being transported away from the heat source. Heat pipes have high thermal conductivities, are inexpensive, and have been utilized in previous space missions. However, the cylindrical geometry of commercial heat pipes is a poor fit to the planar geometries of microelectronic assemblies, the copper that commercial heat pipes are typically constructed of is a poor CTE (coefficient of thermal expansion) match to the semiconductor die utilized in these assemblies, and the functionality and reliability of heat pipes in general is strongly dependent on the orientation of the assembly with respect to the gravity vector. What is needed is a planar, semiconductor-based heat pipe array that can be used for cooling of generic MCM (multichip module) assemblies and that can also function in all orientations. Such a structure would not only have applications in the cooling of space electronics, but would have commercial applications as well (e.g. cooling of microprocessors and high-power laser diodes). This technology is an improvement over existing heat pipe designs due to the finer porosity of the wick, which enhances capillary pumping pressure, resulting in greater effective thermal conductivity and performance in any orientation with respect to the gravity vector.
In addition, it is constructed of silicon, and thus is better suited for the cooling of semiconductor devices.
A review of nondestructive examination technology for polyethylene pipe in nuclear power plant
NASA Astrophysics Data System (ADS)
Zheng, Jinyang; Zhang, Yue; Hou, Dongsheng; Qin, Yinkang; Guo, Weican; Zhang, Chuck; Shi, Jianfeng
2018-05-01
Polyethylene (PE) pipe, particularly high-density polyethylene (HDPE) pipe, has been successfully utilized to transport cooling water for both non-safety- and safety-related applications in nuclear power plants (NPP). ASME Code Case N755, the first code case related to NPP HDPE pipe, requires a thorough nondestructive examination (NDE) of HDPE joints; however, no executable regulations presently exist because of the lack of a feasible NDE technique for HDPE pipe in NPP. This work presents a review of current developments in NDE technology for HDPE pipe in NPP, both for pipe with a diameter of less than 400 mm and for larger sizes. For the former category, the phased array ultrasonic technique has proven effective for inspecting typical defects in HDPE pipe, and is thus adopted in the Chinese national standards GB/T 29460 and GB/T 29461. A defect-recognition technique has been developed based on pattern recognition, and a safety assessment principle has been summarized from a database of destructive testing. For the latter, recent research and practical studies reveal that the absence of an effective ultrasonic inspection method for large-size pipe stems from the lack of consideration, in current ultrasonic-inspection technology, of the viscoelastic effect of PE on acoustic wave propagation. Furthermore, the main technical problems in achieving an effective ultrasonic test method that meets the safety and efficiency requirements of related regulations and standards are analyzed. Finally, the development trends and challenges of NDE technology for HDPE in NPP are discussed.
NASA Astrophysics Data System (ADS)
John, Timm; Svensen, Henrik; Weyer, Stefan; Polozov, Alexander; Planke, Sverre
2010-05-01
The Siberian iron-bearing phreatomagmatic pipes represent a world-class Fe-ore deposit, and 5-6 of them are currently mined in eastern Siberia. The pipes formed within the vast Tunguska Basin, cutting thick accumulations of carbonates (dolostones) and evaporites (anhydrite, halite, dolostone). These sediments were intruded by the sub-volcanic part of the Siberian Traps at 252 Ma, and sills and dykes are abundant throughout the basin. The pipes formed during sediment-magma interactions in the deep parts of the basin, and the associated degassing is believed to have triggered the end-Permian environmental crisis. A major problem in understanding pipe formation is the source of the iron. Available hypotheses state that the iron was leached from an Fe-enriched magmatic melt that incorporated dolostones. It is currently unclear how the magmatic, hydrothermal, and sedimentary processes interacted to form the deposits, as there are no firm constraints to pin down the iron source. We hypothesize two end-member scenarios to account for the magnetite enrichment and deposition, which are testable by analyzing Fe-isotopes of magnetite: 1) iron sourced from dolerite magma through leaching and metasomatism by chloride brines; 2) leaching of iron from sedimentary rocks (shale, dolostone) during magma-sediment interactions. We focus on understanding the Fe-isotopic architecture of the pipes in order to constrain the source of the Fe and the mechanism that caused this significant Fe redistribution. We further evaluate possible fractionation during the fast metasomatic ore-forming process that took place soon after pipe formation.
Development of the monitoring system to detect the piping thickness reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, N. Y.; Ryu, K. H.; Oh, Y. J.
2006-07-01
As nuclear piping ages, secondary piping that was once considered safe now suffers from wall-thickness reduction. After several accidents caused by flow-accelerated corrosion (FAC), guidelines and recommendations for thinned-pipe management were issued, and the need for monitoring has increased. Under a thinned-pipe management program, monitoring activities based on various analyses and on case studies of other plants also increase. As the number of monitoring points grows, the time needed to cover the recommended inspection area increases, while the time available to inspect the piping during an overhaul shrinks. The existing ultrasonic technique (UT) can cover only a small area in a given time; moreover, it cannot be applied to complex-geometry piping or to certain locations such as welded parts. In this paper, we suggest a switching direct current potential drop (S-DCPD) method by which the FAC-susceptible area can be narrowed down. To apply DCPD, we developed both a resistance model and a finite element method (FEM) model to predict DCPD feasibility. We tested an elbow specimen to compare DCPD monitoring results with UT results and confirm their consistency. For the validation test, we designed a simulation loop. To determine the test conditions, we analyzed environmental parameters and introduced an applicable wearing-rate model. To obtain the model parameters, we developed electrodes and analyzed the velocity profile in the test loop using the CFX code. Based on the prediction model and prototype testing results, we plan to perform a validation test to identify the applicability of S-DCPD in the NPP environment. The validation test plan is described as future work. (authors)
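The idea behind a DCPD resistance model can be illustrated with a one-dimensional approximation: for a constant injected current, the potential drop across a plate of uniform cross-section scales inversely with wall thickness, so thickness can be tracked relative to a reference measurement. This is a simplified illustrative model, not the paper's FEM formulation, and all numbers are hypothetical:

```python
# 1-D DCPD approximation: V = I * rho * L / (W * t), so for fixed current I,
# resistivity rho, probe spacing L, and width W, the wall thickness is
# inversely proportional to the measured potential drop:
#   t / t_ref = V_ref / V
# Real components (elbows, welds) need the FEM model described in the paper;
# this sketch only shows the scaling used for a quick screening estimate.

def thickness_from_dcpd(v_measured: float, v_ref: float, t_ref: float) -> float:
    """Estimate current wall thickness from potential-drop readings.

    v_ref, t_ref: reference voltage and known thickness at installation.
    """
    return t_ref * v_ref / v_measured

# Hypothetical numbers: reference 100 uV at 8.0 mm; reading rises to 125 uV
print(round(thickness_from_dcpd(125e-6, 100e-6, 8.0), 2))  # 6.4
```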
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Signe K.; Purohit, Sumit; Boyd, Lauren W.
The Geothermal Technologies Office Code Comparison Study (GTO-CCS) aims to support the DOE Geothermal Technologies Office in organizing and executing a model comparison activity. This project is directed at testing, diagnosing differences, and demonstrating modeling capabilities of a worldwide collection of numerical simulators for evaluating geothermal technologies. Teams of researchers are collaborating in this code comparison effort, and it is important to be able to share results in a forum where technical discussions can easily take place without requiring teams to travel to a common location. Pacific Northwest National Laboratory has developed an open-source, flexible framework called Velo that provides a knowledge management infrastructure and tools to support modeling and simulation for a variety of project types in a number of scientific domains. GTO-Velo is a customized version of the Velo Framework that is being used as the collaborative tool in support of the GTO-CCS project. Velo is designed around a novel integration of a collaborative Web-based environment and a scalable enterprise Content Management System (CMS). The underlying framework provides a flexible and unstructured data storage system that allows for easy upload of files in any format. Data files are organized in hierarchical folders, and each folder and file has a corresponding wiki page for metadata. The user interacts with Velo through web-browser-based wiki technology, providing the benefit of familiarity and ease of use. High-level folders have been defined in GTO-Velo for the benchmark problem descriptions, descriptions of simulator/code capabilities, a project notebook, and folders for participating teams. Each team has a subfolder, with write access limited to the team members, where they can upload their simulation results.
The GTO-CCS participants are charged with defining the benchmark problems for the study, and as each GTO-CCS benchmark problem is defined, the problem creator can provide a description using a template on the metadata page corresponding to the benchmark problem folder. Project documents, references, and videos of the weekly online meetings are shared via GTO-Velo. A results comparison tool allows users to plot their uploaded simulation results on the fly, along with those of other teams, to facilitate weekly discussions of the benchmark problem results being generated by the teams. GTO-Velo is an invaluable tool providing the project coordinators and team members with a framework for collaboration among geographically dispersed organizations.
Problems due to superheating of cryogenic liquids
NASA Astrophysics Data System (ADS)
Hands, B. A.
1988-12-01
Superheating can cause several problems in the storage of cryogenic liquids: stratification can cause unexpectedly high tank pressures or, in multicomponent liquids, rollover with its consequential high vaporization rate; geysering causes the rapid expulsion of static liquid from a vertical tube; chugging is a similar phenomenon observed when liquid flows through a reasonably well-insulated pipe.
Forced Convection Heat Transfer in Circular Pipes
ERIC Educational Resources Information Center
Tosun, Ismail
2007-01-01
One of the pitfalls of engineering education is to lose the physical insight of the problem while tackling the mathematical part. Forced convection heat transfer (the Graetz-Nusselt problem) certainly falls into this category. The equation of energy together with the equation of motion leads to a partial differential equation subject to various…
NASA Astrophysics Data System (ADS)
Oon, Cheen Sean; Nee Yew, Sin; Chew, Bee Teng; Salim Newaz, Kazi Md; Al-Shamma'a, Ahmed; Shaw, Andy; Amiri, Ahmad
2015-05-01
Flow separation and reattachment of 0.2% TiO2 nanofluid in an asymmetric abrupt expansion is studied in this paper. Such flows occur in various engineering and heat transfer applications. The computational fluid dynamics package FLUENT is used to investigate turbulent nanofluid flow in a horizontal double-tube heat exchanger. The mesh of this model consists of 43383 nodes and 74891 elements. Only a quarter of the annular pipe is modeled and simulated, as the geometry is symmetric. A standard k-epsilon model with a second-order implicit, pressure-based solver is applied. Reynolds numbers between 17050 and 44545, step height ratios of 1 and 1.82, and a constant heat flux of 49050 W/m2 were used in the simulation. Water was used as the working fluid to provide a benchmark for the heat transfer enhancement. Numerical simulation results show that increasing the Reynolds number increases the heat transfer coefficient and the Nusselt number of the flowing fluid. Moreover, the surface temperature drops to its lowest value just after the expansion and then gradually increases along the pipe. Finally, the chaotic movement and higher thermal conductivity of the TiO2 nanoparticles contribute to the overall heat transfer enhancement of the nanofluid compared to water.
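A water baseline for such a study is often sanity-checked against a standard single-phase correlation such as Dittus-Boelter for fully developed turbulent pipe flow. The sketch below is a generic benchmarking aid, not the FLUENT model, and the property values (Pr, k) are illustrative:

```python
# Dittus-Boelter correlation for fully developed turbulent pipe flow (heating):
#   Nu = 0.023 * Re^0.8 * Pr^0.4
# Valid roughly for Re > 1e4 and 0.6 < Pr < 160; commonly used as a sanity
# benchmark for CFD heat-transfer predictions in plain pipes (it does not
# account for the abrupt-expansion geometry studied in the paper).

def nusselt_dittus_boelter(re: float, pr: float) -> float:
    return 0.023 * re ** 0.8 * pr ** 0.4

def heat_transfer_coeff(re: float, pr: float, k: float, d: float) -> float:
    """h = Nu * k / d, with fluid conductivity k (W/m-K) and diameter d (m)."""
    return nusselt_dittus_boelter(re, pr) * k / d

# Illustrative values for water near room temperature (Pr ~ 7, k ~ 0.6 W/m-K)
for re in (17050, 44545):
    print(re, round(nusselt_dittus_boelter(re, 7.0), 1))
```

The Re^0.8 dependence is consistent with the reported trend that the heat transfer coefficient rises with Reynolds number.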
SPACE PROPULSION SYSTEM PHASED-MISSION PROBABILITY ANALYSIS USING CONVENTIONAL PRA METHODS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis Smith; James Knudsen
As part of a series of papers on the topic of advanced probabilistic methods, a benchmark phased-mission problem has been suggested. This problem consists of modeling a space mission using an ion propulsion system, where the mission consists of seven phases. The mission requires that the propulsion system operate for several phases, with the configuration changing as a function of phase. The ion propulsion system itself consists of five thruster assemblies and a single propellant supply, where each thruster assembly has one propulsion power unit and two ion engines. In this paper, we evaluate the probability of mission failure using the conventional methodology of event tree/fault tree analysis. The event tree and fault trees are developed and analyzed using the Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE). While the benchmark problem is nominally a "dynamic" problem, in our analysis the mission phases are modeled in a single event tree to show the progression from one phase to the next. The propulsion system is modeled in fault trees to account for the operation, or in this case the failure, of the system. Specifically, the propulsion system is decomposed into each of the five thruster assemblies and fed into the appropriate N-out-of-M gate to evaluate mission failure. A separate fault tree for the propulsion system is developed to account for the different success criteria of each mission phase. Common-cause failure modeling is treated using traditional (i.e., parametric) methods. As part of this paper, we discuss the overall results in addition to the positive and negative aspects of modeling dynamic situations with non-dynamic modeling techniques. One insight from the use of this conventional method for analyzing the benchmark problem is that it requires significant manual manipulation of the fault trees and of how they are linked into the event tree.
The conventional method also requires editing the resultant cut sets to obtain the correct results. While conventional methods may be used to evaluate a dynamic system like that in the benchmark, the level of effort required may preclude their use on real-world problems.
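The N-out-of-M success criterion mentioned above has a simple closed form when the M components are independent with equal success probability. This sketch uses the binomial tail expression with hypothetical reliabilities, not SAPHIRE's cut-set results, and it ignores the common-cause failures the paper models parametrically:

```python
import math

def k_out_of_n_reliability(k: int, n: int, p: float) -> float:
    """Probability that at least k of n independent components succeed,
    each with success probability p (binomial tail sum)."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(k, n + 1))

# Hypothetical: a mission phase needs at least 4 of 5 thruster assemblies,
# each with 0.99 reliability for that phase
print(round(k_out_of_n_reliability(4, 5, 0.99), 6))
```

Because the success criterion (k) changes from phase to phase, the paper builds a separate fault tree per phase; in this sketch that simply means calling the function with a different k.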
Standardised Benchmarking in the Quest for Orthologs
Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe
2016-01-01
The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882
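The precision-recall trade-off mentioned above reduces to the standard pair of ratios computed from true/false positives and false negatives. A minimal sketch with made-up ortholog-prediction counts:

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Standard precision and recall from confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical ortholog predictions: 80 correct pairs, 20 spurious, 40 missed
p, r = precision_recall(80, 20, 40)
print(p, r)  # 0.8 0.6666666666666666
```

A method tuned for phylogenetic reconstruction might favor high precision, while one feeding function transfer might favor recall, which is why the benchmark service reports both rather than a single score.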
Algorithm and Architecture Independent Benchmarking with SEAK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.
2016-05-23
Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.
Benchmarking Problems Used in Second Year Level Organic Chemistry Instruction
ERIC Educational Resources Information Center
Raker, Jeffrey R.; Towns, Marcy H.
2010-01-01
Investigations of the problem types used in college-level general chemistry examinations have been reported in this Journal and were first reported in the "Journal of Chemical Education" in 1924. This study extends the findings from general chemistry to the problems of four college-level organic chemistry courses. Three problem…
Advanced spacecraft thermal control techniques
NASA Technical Reports Server (NTRS)
Fritz, C. H.
1977-01-01
The problems of rejecting large amounts of heat from spacecraft were studied. Shuttle Space Laboratory heat rejection uses 1 kW for pumps and fans for every 5 kW (thermal) heat rejection. This is rather inefficient, and for future programs more efficient methods were examined. Two advanced systems were studied and compared to the present pumped-loop system. The advanced concepts are the air-cooled semipassive system, which features rejection of a large percentage of the load through the outer skin, and the heat pipe system, which incorporates heat pipes for every thermal control function.
NASA Astrophysics Data System (ADS)
Tomilenko, A. A.; Kuzmin, D. V.; Bul'bak, T. A.; Sobolev, N. V.
2017-08-01
The primary melt and fluid inclusions in regenerated zonal crystals of olivine and homogeneous phenocrysts of olivine from kimberlites of the Udachnaya-East pipe were first studied by means of microthermometry, optical and scanning electron microscopy, electron and ion microprobe analysis (SIMS), inductively coupled plasma mass spectrometry (ICP-MS), and Raman spectroscopy. It was established that olivine crystals were regenerated from silicate-carbonate melts at a temperature of 1100°C.
Elasticity of fractal materials using the continuum model with non-integer dimensional space
NASA Astrophysics Data System (ADS)
Tarasov, Vasily E.
2015-01-01
Using a generalization of vector calculus for spaces with non-integer dimension, we consider elastic properties of fractal materials. Fractal materials are described by continuum models with non-integer dimensional space. A generalization of the elasticity equations for non-integer dimensional space, and its solutions for the equilibrium case of fractal materials, are suggested. Elasticity problems for a fractal hollow ball and a cylindrical fractal elastic pipe with inside and outside pressures, for a rotating cylindrical fractal pipe, and for gradient elasticity and thermoelasticity of fractal materials are solved.
Taylor dispersion of colloidal particles in narrow channels
NASA Astrophysics Data System (ADS)
Sané, Jimaan; Padding, Johan T.; Louis, Ard A.
2015-09-01
We use a mesoscopic particle-based simulation technique to study the classic convection-diffusion problem of Taylor dispersion for colloidal discs in confined flow. When the disc diameter becomes non-negligible compared to the diameter of the pipe, there are important corrections to the original Taylor picture. For example, the colloids can flow more rapidly than the underlying fluid, and their Taylor dispersion coefficient is decreased. For narrow pipes, there are also further hydrodynamic wall effects. The long-time tails in the velocity autocorrelation functions are altered by the Poiseuille flow.
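For context, the "original Taylor picture" against which the abstract reports corrections is the classical Taylor-Aris result for point-like tracers in a cylindrical pipe of radius \(a\), mean flow speed \(\langle u\rangle\), and molecular diffusivity \(D\); this standard textbook form is quoted here as background, not taken from the abstract:

```latex
% Classical Taylor-Aris effective axial dispersion coefficient
% (point tracers in a cylindrical pipe; symbols as defined above)
D_{\mathrm{eff}} \;=\; D \;+\; \frac{a^{2}\,\langle u \rangle^{2}}{48\,D}
```

The abstract's finding that finite-size colloids disperse less than this baseline, and can travel faster than the mean fluid, is precisely a correction to this formula.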
NASA Astrophysics Data System (ADS)
Velioglu Sogut, Deniz; Yalciner, Ahmet Cevdet
2018-06-01
Field observations provide valuable data regarding nearshore tsunami impact, yet only in inundation areas where tsunami waves have already flooded. Therefore, tsunami modeling is essential to understand tsunami behavior and prepare for tsunami inundation. It is necessary that all numerical models used in tsunami emergency planning be subject to benchmark tests for validation and verification. This study focuses on two numerical codes, NAMI DANCE and FLOW-3D®, for validation and performance comparison. NAMI DANCE is an in-house tsunami numerical model developed by the Ocean Engineering Research Center of Middle East Technical University, Turkey and the Laboratory of Special Research Bureau for Automation of Marine Research, Russia. FLOW-3D® is a general-purpose computational fluid dynamics software package developed by scientists who pioneered the design of the Volume-of-Fluid technique. The codes are validated and their performances are compared via analytical, experimental and field benchmark problems, which are documented in the "Proceedings and Results of the 2011 National Tsunami Hazard Mitigation Program (NTHMP) Model Benchmarking Workshop" and the "Proceedings and Results of the NTHMP 2015 Tsunami Current Modeling Workshop". The variations between the numerical solutions of these two models are evaluated through statistical error analysis.
Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system
NASA Astrophysics Data System (ADS)
Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.
2017-05-01
We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulated and/or experimentally measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the nonlinear element in the problem with a priori knowledge about its position.
Benchmark results in the 2D lattice Thirring model with a chemical potential
NASA Astrophysics Data System (ADS)
Ayyar, Venkitesh; Chandrasekharan, Shailesh; Rantaharju, Jarno
2018-03-01
We study the two-dimensional lattice Thirring model in the presence of a fermion chemical potential. Our model is asymptotically free and contains massive fermions that mimic a baryon and light bosons that mimic pions. Hence, it is a useful toy model for QCD, especially since it, too, suffers from a sign problem in the auxiliary field formulation in the presence of a fermion chemical potential. In this work, we formulate the model in both the world line and fermion-bag representations and show that the sign problem can be completely eliminated with open boundary conditions when the fermions are massless. Hence, we are able to accurately compute a variety of interesting quantities in the model, and these results could provide benchmarks for other methods that are being developed to solve the sign problem in QCD.
Comas, J; Rodríguez-Roda, I; Poch, M; Gernaey, K V; Rosen, C; Jeppsson, U
2006-01-01
Wastewater treatment plant operators encounter complex operational problems related to the activated sludge process and usually respond to these by applying their own intuition and by taking advantage of what they have learnt from past experiences of similar problems. However, previous process experiences are not easy to integrate in numerical control, and new tools must be developed to enable re-use of plant operating experience. The aim of this paper is to investigate the usefulness of a case-based reasoning (CBR) approach to apply learning and re-use of knowledge gained during past incidents to confront actual complex problems through the IWA/COST Benchmark protocol. A case study shows that the proposed CBR system achieves a significant improvement of the benchmark plant performance when facing a high-flow event disturbance.
ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics (CAA)
NASA Technical Reports Server (NTRS)
Hardin, Jay C. (Editor); Ristorcelli, J. Ray (Editor); Tam, Christopher K. W. (Editor)
1995-01-01
The proceedings of the Benchmark Problems in Computational Aeroacoustics Workshop held at NASA Langley Research Center are the subject of this report. The purpose of the Workshop was to assess the utility of a number of numerical schemes in the context of the unusual requirements of aeroacoustical calculations. The schemes were assessed from the viewpoint of dispersion and dissipation -- issues important to long time integration and long distance propagation in aeroacoustics. Also investigated was the effect of implementing different boundary conditions. The Workshop included a forum in which practical engineering problems related to computational aeroacoustics were discussed. This discussion took the form of a dialogue between an industrial panel and the workshop participants and was an effort to suggest the direction of evolution of this field in the context of current engineering needs.
Residual stresses in a stainless steel - titanium alloy joint made with the explosive technique
NASA Astrophysics Data System (ADS)
Taran, Yu V.; Balagurov, A. M.; Sabirov, B. M.; Evans, A.; Davydov, V.; Venter, A. M.
2012-02-01
The joining of pipes of stainless steel (SS) and titanium (Ti) alloy still experiences serious technical problems. Recently, reliable and hermetic joining of SS and Ti pipes has been achieved with the explosive bonding technique at the Russian Federal Nuclear Center. Such adapters are earmarked for use at the future International Linear Collider. The manufactured SS-Ti adapters show excellent mechanical behavior at room and liquid nitrogen temperatures, during high-pressure tests and under thermal cycling. We here report the first neutron diffraction investigation of the residual stresses in an SS-Ti adapter on the POLDI instrument at the SINQ spallation source. The strain scanning across the adapter walls into the SS-SS and SS-Ti pipe sections encompassed measurement of the axial, radial and hoop strain components, which were transformed into residual stresses. The full stress information was successfully determined for the three steel pipes involved in the joint. The residual stresses do not exceed 300 MPa in magnitude. All stress components have tensile values close to the adapter's internal surface, whilst they are compressive close to the outer surface. The strong incoherent and weak coherent neutron scattering cross-sections of Ti did not allow for the reliable determination of stresses inside the titanium pipe.
Infrasound-array-element frequency response: in-situ measurement and modeling
NASA Astrophysics Data System (ADS)
Gabrielson, T.
2011-12-01
Most array elements at the infrasound stations of the International Monitoring System use some variant of a multiple-inlet pipe system for wind-noise suppression. These pipe systems have a significant impact on the overall frequency response of the element. The spatial distribution of acoustic inlets introduces a response dependence that is a function of frequency and of vertical and horizontal arrival angle; the system of inlets, pipes, and summing junctions further shapes that response as the signal is ducted to the transducer. In-situ measurements, using a co-located reference microphone, can determine the overall frequency response and diagnose problems with the system. As of July 2011, the in-situ frequency responses for 25 individual elements at 6 operational stations (I10, I53, I55, I56, I57, and I99) have been measured. In support of these measurements, a fully thermo-viscous model for the acoustics of these multiple-inlet pipe systems has been developed. In addition to measurements at operational stations, comparative analyses have been done on experimental systems: a multiple-inlet radial-pipe system with varying inlet hole size; a one-quarter scale model of a 70-meter rosette system; and vertical directionality of a small rosette system using aircraft flyovers. [Funded by the US Army Space and Missile Defense Command.]
NASA Astrophysics Data System (ADS)
Afanasyev, Andrey
2017-04-01
Numerical modelling of multiphase flows in porous medium is necessary in many applications concerning subsurface utilization. An incomplete list of those applications includes oil and gas fields exploration, underground carbon dioxide storage and geothermal energy production. The numerical simulations are conducted using complicated computer programs called reservoir simulators. A robust simulator should include a wide range of modelling options covering various exploration techniques, rock and fluid properties, and geological settings. In this work we present a recent development of new options in MUFITS code [1]. The first option concerns modelling of multiphase flows in double-porosity double-permeability reservoirs. We describe internal representation of reservoir models in MUFITS, which are constructed as a 3D graph of grid blocks, pipe segments, interfaces, etc. In case of double porosity reservoir, two linked nodes of the graph correspond to a grid cell. We simulate the 6th SPE comparative problem [2] and a five-spot geothermal production problem to validate the option. The second option concerns modelling of flows in porous medium coupled with flows in horizontal wells that are represented in the 3D graph as a sequence of pipe segments linked with pipe junctions. The well completions link the pipe segments with reservoir. The hydraulics in the wellbore, i.e. the frictional pressure drop, is calculated in accordance with Haaland's formula. We validate the option against the 7th SPE comparative problem [3]. We acknowledge financial support by the Russian Foundation for Basic Research (project No RFBR-15-31-20585). References [1] Afanasyev, A. MUFITS Reservoir Simulation Software (www.mufits.imec.msu.ru). [2] Firoozabadi A. et al. Sixth SPE Comparative Solution Project: Dual-Porosity Simulators // J. Petrol. Tech. 1990. V.42. N.6. P.710-715. [3] Nghiem L., et al. 
Seventh SPE Comparative Solution Project: Modelling of Horizontal Wells in Reservoir Simulation // SPE Symp. Res. Sim., 1991. DOI: 10.2118/21221-MS.
Terahertz inline wall thickness monitoring system for plastic pipe extrusion
NASA Astrophysics Data System (ADS)
Hauck, J.; Stich, D.; Heidemeyer, P.; Bastian, M.; Hochrein, T.
2014-05-01
Conventional and commercially available inline wall thickness monitoring systems for pipe extrusion are usually based on ultrasonic or x-ray technology. Disadvantages of ultrasonic systems are the usual need for water as a coupling medium and the high damping in thick-walled or foamed pipes. For x-ray systems, special safety requirements have to be taken into account because of the ionizing radiation. Terahertz (THz) technology offers a novel approach to solving these problems. THz waves have many properties which are suitable for the non-destructive testing of plastics. The absorption of electrical insulators is typically very low, and the radiation is non-ionizing in comparison to x-rays. Owing to their electromagnetic nature, THz waves can be used for contact-free measurements. Foams show a much lower absorption for THz waves than for acoustic waves. The developed system uses THz pulses which are generated by stimulating photoconductive switches with femtosecond laser pulses. The time of flight of THz pulses can be determined with a resolution on the order of several tens of femtoseconds. Hence the thickness of an object such as a plastic pipe can be determined with high accuracy by measuring the time delay between two reflections at material interfaces, e.g. at the pipe's inner and outer surfaces, similar to the ultrasonic technique. Knowing the refractive index of the sample, the absolute layer thickness can be calculated easily from the transit-time difference. This method in principle also allows the measurement of multilayer systems and the characterization of foamed pipes.
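The time-of-flight relation described in the abstract is simple enough to sketch. The function below is an illustrative reading of that relation, not code from the paper; the refractive index value and all names are assumptions for the example:

```python
# Hedged sketch of THz pulse-echo thickness measurement: the pulse
# reflects once at the outer and once at the inner pipe surface, so
# it crosses the wall twice at speed c0/n. Hence d = c0 * dt / (2 n).

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def wall_thickness(delta_t_s: float, n_refractive: float) -> float:
    """Wall thickness (m) from the delay between the two surface
    reflections; the factor of 2 accounts for the round trip."""
    return C0 * delta_t_s / (2.0 * n_refractive)

# Example: a 20 ps transit-time difference, with n = 1.53 assumed
# as a typical polyethylene value (not taken from the paper).
d = wall_thickness(20e-12, 1.53)
print(f"{d * 1e3:.3f} mm")  # prints "1.959 mm"
```

The same relation, applied per layer with each layer's refractive index, is what makes the multilayer measurement mentioned at the end of the abstract possible.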
Effects of anisotropic conduction and heat pipe interaction on minimum mass space radiators
NASA Technical Reports Server (NTRS)
Baker, Karl W.; Lund, Kurt O.
1991-01-01
Equations are formulated for the two dimensional, anisotropic conduction of heat in space radiator fins. The transverse temperature field was obtained by the integral method, and the axial field by numerical integration. A shape factor, defined for the axial boundary condition, simplifies the analysis and renders the results applicable to general heat pipe/conduction fin interface designs. The thermal results are summarized in terms of the fin efficiency, a radiation/axial conductance number, and a transverse conductance surface Biot number. These relations, together with those for mass distribution between fins and heat pipes, were used in predicting the minimum radiator mass for fixed thermal properties and fin efficiency. This mass is found to decrease monotonically with increasing fin conductivity. Sensitivities of the minimum mass designs to the problem parameters are determined.
Studies on Single-phase and Multi-phase Heat Pipe for LED Panel for Efficient Heat Dissipation
NASA Astrophysics Data System (ADS)
Vyshnave, K. C.; Rohit, G.; Maithreya, D. V. N. S.; Rakesh, S. G.
2017-08-01
The popularity of LED panels as a source of illumination has soared recently due to their high efficiency. However, the removal of the heat that is produced in the chip is still a major challenge in their design, since it has an adverse effect on reliability. If a high junction temperature develops, the colour of the emitted light may diminish over prolonged usage, or even a colour shift may occur. In this paper, a solution has been developed to address this problem by using a combination of heat pipe and heat fin technology. A single-phase and a two-phase heat pipe have been designed theoretically, and computational simulations were carried out using ANSYS FLUENT. The results of the theoretical calculations and those obtained from the simulations are found to be in agreement with each other.
Nonlinear gas oscillations in pipes. I - Theory.
NASA Technical Reports Server (NTRS)
Jimenez, J.
1973-01-01
The problem of forced acoustic oscillations in a pipe is studied theoretically. The oscillations are produced by a moving piston in one end of the pipe, while a variety of boundary conditions ranging from a completely closed to a completely open mouth at the other end are considered. The linear theory predicts large amplitudes near resonance and that nonlinear effects become crucially important. By expanding the equations of motion in a series in the Mach number, both the amplitude and waveform of the oscillation are predicted there. In both the open- and closed-end cases the need for shock waves in some range of parameters is found. The amplitude of the oscillation is different for the two cases, however, being proportional to the square root of the piston amplitude in the closed-end case and to the cube root for the open end.
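The amplitude scalings stated in the last sentence can be written compactly; here \(\epsilon\) denotes the dimensionless piston amplitude and \(A\) the resonant oscillation amplitude, symbols assumed for this summary rather than taken from the paper:

```latex
% Near-resonance amplitude scaling with piston amplitude \epsilon
A_{\text{closed}} \;\propto\; \epsilon^{1/2},
\qquad
A_{\text{open}} \;\propto\; \epsilon^{1/3}.
```

The weaker (cube-root) growth in the open-end case reflects the different nonlinear balance at the open mouth.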
NASA Technical Reports Server (NTRS)
Chen, C. P.
1990-01-01
An existing Computational Fluid Dynamics code for simulating complex turbulent flows inside a liquid rocket combustion chamber was validated and further developed. The Advanced Rocket Injector/Combustor Code (ARICC) is simplified and validated against benchmark flow situations for laminar and turbulent flows. The numerical method used in the ARICC Code is re-examined for incompressible flow calculations. For turbulent flows, both the subgrid and the two-equation k-epsilon turbulence models are studied. Cases tested include the idealized Burger's equation in complex geometries and boundaries, a laminar pipe flow, a high Reynolds number turbulent flow, and a confined coaxial jet with recirculations. The accuracy of the algorithm is examined by comparing the numerical results with analytical solutions as well as experimental data for different grid sizes.
A Benchmark Problem for Development of Autonomous Structural Modal Identification
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Woodard, Stanley E.; Juang, Jer-Nan
1996-01-01
This paper summarizes modal identification results obtained using an autonomous version of the Eigensystem Realization Algorithm on a dynamically complex, laboratory structure. The benchmark problem uses 48 of 768 free-decay responses measured in a complete modal survey test. The true modal parameters of the structure are well known from two previous, independent investigations. Without user involvement, the autonomous data analysis identified 24 to 33 structural modes with good to excellent accuracy in 62 seconds of CPU time (on a DEC Alpha 4000 computer). The modal identification technique described in the paper is the baseline algorithm for NASA's Autonomous Dynamics Determination (ADD) experiment scheduled to fly on International Space Station assembly flights in 1997-1999.
Developing a benchmark for emotional analysis of music
Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad
2017-01-01
Music emotion recognition (MER) field rapidly expanded in the last decade. Many new methods and new audio features are developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the data representation diversity and scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons with 2Hz time resolution). Using DEAM, we organized the ‘Emotion in Music’ task at MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature-sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that the recurrent neural network based approaches combined with large feature-sets work best for dynamic MER. PMID:28282400
A large-scale benchmark of gene prioritization methods.
Guala, Dimitri; Sonnhammer, Erik L L
2017-04-21
In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology(GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in construction of robust benchmarks, objective to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion, NetRank and two implementations of Random Walk with Restart, and MaxLink that utilizes network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
Wu, Yong-li; Shi, Bao-you; Sun, Hui-fang; Zhang, Zhi-huan; Gu, Jun-nong; Wang, Dong-sheng
2013-09-01
To understand the processes of corrosion by-product release and the consequent "red water" problems caused by variations in the chemical composition of water in drinking water distribution systems, the effect of sulfate and dissolved oxygen (DO) concentration on total iron release in corroded old iron pipe sections that had historically transported groundwater was investigated in the laboratory using small-scale pipe section reactors. The release behaviors of some low-level metals, such as Mn, As, Cr, Cu, Zn and Ni, during iron release were also monitored. The results showed that total iron and Mn release increased significantly with increasing sulfate concentration, and apparent red water occurred when the sulfate concentration was above 400 mg x L(-1). With increasing sulfate concentration, the effluent concentrations of As, Cr, Cu, Zn and Ni also increased obviously; however, the effluent concentrations of these metals were lower than the influent concentrations under most circumstances, which indicated that adsorption of these metals by pipe corrosion scales occurred. Increasing DO within a certain range could significantly inhibit iron release.
A hybrid heuristic for the multiple choice multidimensional knapsack problem
NASA Astrophysics Data System (ADS)
Mansi, Raïd; Alves, Cláudio; Valério de Carvalho, J. M.; Hanafi, Saïd
2013-08-01
In this article, a new solution approach for the multiple choice multidimensional knapsack problem is described. The problem is a variant of the multidimensional knapsack problem where items are divided into classes, and exactly one item per class has to be chosen. Both problems are NP-hard. However, the multiple choice multidimensional knapsack problem appears to be more difficult to solve in part because of its choice constraints. Many real applications lead to very large scale multiple choice multidimensional knapsack problems that can hardly be addressed using exact algorithms. A new hybrid heuristic is proposed that embeds several new procedures for this problem. The approach is based on the resolution of linear programming relaxations of the problem and reduced problems that are obtained by fixing some variables of the problem. The solutions of these problems are used to update the global lower and upper bounds for the optimal solution value. A new strategy for defining the reduced problems is explored, together with a new family of cuts and a reformulation procedure that is used at each iteration to improve the performance of the heuristic. An extensive set of computational experiments is reported for benchmark instances from the literature and for a large set of hard instances generated randomly. The results show that the approach outperforms other state-of-the-art methods described so far, providing the best known solution for a significant number of benchmark instances.
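The problem structure described above can be made concrete with a toy instance. The sketch below is emphatically not the authors' hybrid heuristic: it is a brute-force solver over tiny illustrative data, intended only to pin down the "exactly one item per class, several capacity constraints" structure of the multiple choice multidimensional knapsack problem:

```python
# Hedged illustration of the MCMKP: items are grouped into classes,
# exactly one item per class must be chosen, and the chosen items must
# jointly respect a capacity in every resource dimension. Brute force
# over all class combinations; usable only for tiny instances.
from itertools import product

def solve_mcmkp(classes, capacities):
    """classes: list of classes, each a list of (profit, weights) pairs,
    where weights has one entry per resource dimension.
    Returns (best_profit, best_choice), or (None, None) if infeasible."""
    best_profit, best_choice = None, None
    for choice in product(*classes):  # one item from each class
        totals = [sum(w[d] for _, w in choice) for d in range(len(capacities))]
        if all(t <= c for t, c in zip(totals, capacities)):
            profit = sum(p for p, _ in choice)
            if best_profit is None or profit > best_profit:
                best_profit, best_choice = profit, choice
    return best_profit, best_choice

# Invented two-class, two-dimension instance for illustration.
classes = [
    [(10, (3, 2)), (6, (1, 1))],   # class 1: pick exactly one
    [(8, (2, 4)), (7, (2, 1))],    # class 2: pick exactly one
]
profit, _ = solve_mcmkp(classes, capacities=(5, 4))
print(profit)  # prints 17: items (10, (3,2)) and (7, (2,1)) fit
```

The exponential blow-up of this enumeration is exactly why the article resorts to LP relaxations, variable fixing, cuts and reformulation for large instances.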
Wang, Yang; Zhang, Xiao-jian; Chen, Chao; Pan, An-jun; Xu, Yang; Liao, Ping-an; Zhang, Su-xia; Gu, Jun-nong
2009-12-01
A red water phenomenon occurred in some communities of a city in China in the days following a water source switch. The origin of this red water problem and the mechanism of iron release were investigated in this study. The water quality of the local and new water sources was tested, and tap water quality in the affected area was monitored for 3 months after the red water occurred. Interior corrosion scales on pipe obtained from the affected area were analyzed by XRD, SEM, and EDS. Corrosion rates of cast iron under the conditions of the two source waters were obtained with an annular reactor. The influence of the different source waters on iron release was studied with pipe section reactors that simulate the distribution systems. The results indicated that the large increase of sulfate concentration caused by the water source shift was the cause of the red water problem. The Larson ratio increased from about 0.4 to 1.7-1.9, and the red water problem appeared in the taps of some urban communities just several days after the new water source was applied. The mechanism of iron release was concluded to be that the stable shell of the scales in the pipes had been corrupted by this high-sulfate-concentration source water and was hard to recover soon spontaneously. The effect of sulfate on iron release from the old cast iron was more significant than its effect on enhancing iron corrosion. The rate of iron release increased with increasing Larson ratio, and the correlation between them was nonlinear for the old cast iron. The problem persisted for quite a long time even when the water source was shifted back to a blend containing only a small proportion of the new source and the Larson ratio was reduced to about 0.6.
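The Larson ratio quoted in the abstract is commonly computed as the ratio of the corrosive anions (chloride plus sulfate) to bicarbonate, all expressed in milliequivalents per litre. A minimal sketch assuming that convention follows; the concentrations below are invented for illustration and are not data from the study:

```python
# Hedged sketch of a Larson (Larson-Skold) ratio calculation:
# (Cl- + SO4^2-) / HCO3-, with each species converted from mg/L
# to meq/L via its molar mass and charge. Conventions vary, so
# treat this as illustrative rather than the study's exact method.

SPECIES = {  # molar mass (g/mol), charge
    "Cl": (35.45, 1),
    "SO4": (96.06, 2),
    "HCO3": (61.02, 1),
}

def meq_per_l(mg_per_l: float, species: str) -> float:
    molar_mass, charge = SPECIES[species]
    return mg_per_l / molar_mass * charge

def larson_ratio(cl_mg: float, so4_mg: float, hco3_mg: float) -> float:
    corrosive = meq_per_l(cl_mg, "Cl") + meq_per_l(so4_mg, "SO4")
    return corrosive / meq_per_l(hco3_mg, "HCO3")

# Invented numbers: a large jump in sulfate alone moves the ratio
# from roughly the 0.4 regime to roughly the 1.7 regime.
print(round(larson_ratio(25, 30, 200), 2))   # prints 0.41
print(round(larson_ratio(25, 230, 200), 2))  # prints 1.68
```

This makes concrete why the abstract singles out sulfate: with chloride and bicarbonate held fixed, sulfate alone can drive the ratio across the corrosive threshold.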
A dynamic fault tree model of a propulsion system
NASA Technical Reports Server (NTRS)
Xu, Hong; Dugan, Joanne Bechta; Meshkat, Leila
2006-01-01
We present a dynamic fault tree model of the benchmark propulsion system, and solve it using Galileo. Dynamic fault trees (DFT) extend traditional static fault trees with special gates to model spares and other sequence dependencies. Galileo solves DFT models using a judicious combination of automatically generated Markov and Binary Decision Diagram models. Galileo easily handles the complexities exhibited by the benchmark problem. In particular, Galileo is designed to model phased mission systems.
Global-local methodologies and their application to nonlinear analysis
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1989-01-01
An assessment is made of the potential of different global-local analysis strategies for predicting the nonlinear and postbuckling responses of structures. Two postbuckling problems of composite panels are used as benchmarks and the application of different global-local methodologies to these benchmarks is outlined. The key elements of each of the global-local strategies are discussed and future research areas needed to realize the full potential of global-local methodologies are identified.
2017-01-01
The authors use four criteria to examine a novel community detection algorithm: (a) effectiveness, in terms of producing high values of normalized mutual information (NMI) and modularity on well-known social networks; (b) the ability to mitigate resolution limit problems, examined using NMI values and synthetic networks; (c) correctness, meaning the ability to identify useful community structure, in terms of NMI values on Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks; and (d) scalability, the ability to produce comparable modularity values with fast execution times on large-scale real-world networks. In addition to describing a simple hierarchical arc-merging (HAM) algorithm that uses network topology information, we introduce rule-based arc-merging strategies for identifying community structures. Five well-studied social network datasets and eight sets of LFR benchmark networks were employed to validate correctness against ground-truth communities, eight large-scale real-world complex networks were used to measure efficiency, and two synthetic networks were used to determine susceptibility to two resolution limit problems. Our experimental results indicate that the proposed HAM algorithm exhibited satisfactory performance efficiency, that HAM-identified and ground-truth communities were comparable for the social and LFR benchmark networks, and that resolution limit problems were mitigated. PMID:29121100
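Of the four criteria above, NMI is the most precisely defined. As a reference point, a minimal sketch of NMI between two community assignments follows; it uses the arithmetic-mean normalization, which is one common variant, and is not necessarily the exact formula used in the paper.

```python
import math
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized mutual information between two community assignments,
    normalized by the arithmetic mean of the two label entropies."""
    n = len(labels_a)
    pa = Counter(labels_a)
    pb = Counter(labels_b)
    pab = Counter(zip(labels_a, labels_b))
    # mutual information from joint and marginal label frequencies
    mi = sum(
        (c / n) * math.log((c / n) / ((pa[a] / n) * (pb[b] / n)))
        for (a, b), c in pab.items()
    )
    ha = -sum((c / n) * math.log(c / n) for c in pa.values())
    hb = -sum((c / n) * math.log(c / n) for c in pb.values())
    if ha == 0.0 and hb == 0.0:
        return 1.0  # both partitions trivial, hence identical
    return 2.0 * mi / (ha + hb)
```

NMI is invariant under relabeling of communities, which is why it is the standard yardstick against LFR ground truth.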
Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah
2016-01-01
The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of species migration to derive an algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy, but this process often ruins the quality of solutions in the QAP. In this paper, we propose a hybrid technique to overcome this weakness of the classical BBO algorithm for the QAP, replacing the mutation operator with a tabu search procedure. Our experiments using benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions within reasonable computational times. Out of 61 benchmark instances tested, the proposed method obtains the best known solutions for 57 of them. PMID:26819585
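To make the setting concrete, here is a small sketch of the QAP objective together with a BBO-style migration step; the migration rate, the duplicate-repair rule, and all names are illustrative, not the authors' implementation.

```python
import random

def qap_cost(perm, flow, dist):
    """QAP objective: sum over i, j of flow[i][j] * dist[perm[i]][perm[j]],
    where perm[i] is the location assigned to facility i."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def migrate(donor, receiver, rate, rng):
    """BBO-style migration: the receiver copies facility placements from
    the fitter donor with probability `rate`, then the result is repaired
    so it remains a valid permutation."""
    n = len(donor)
    child = [donor[i] if rng.random() < rate else receiver[i] for i in range(n)]
    # repair: replace repeated locations with the unused ones
    missing = iter([v for v in range(n) if v not in set(child)])
    seen = set()
    for i, v in enumerate(child):
        if v in seen:
            child[i] = next(missing)
        seen.add(child[i])
    return child
```

In the hybrid of the paper, a tabu search pass would then refine each migrated solution instead of applying random mutation.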
Benchmarking Multilayer-HySEA model for landslide generated tsunami. NTHMP validation process.
NASA Astrophysics Data System (ADS)
Macias, J.; Escalante, C.; Castro, M. J.
2017-12-01
Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated the NTHMP to benchmark models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models when the source is seismic. To perform this validation, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid-slide and deformable-slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks, for a total of seven. The Multilayer-HySEA model including non-hydrostatic effects has been used to perform all the benchmark problems dealing with laboratory experiments proposed in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017 by the NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; ...
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
40 CFR 141.86 - Monitoring requirements for lead and copper in tap water.
Code of Federal Regulations, 2011 CFR
2011-07-01
... procedures specified in this paragraph. To avoid problems of residents handling nitric acid, acidification of... no plastic pipes which contain lead plasticizers, or plastic service lines which contain lead...
40 CFR 141.86 - Monitoring requirements for lead and copper in tap water.
Code of Federal Regulations, 2010 CFR
2010-07-01
... procedures specified in this paragraph. To avoid problems of residents handling nitric acid, acidification of... no plastic pipes which contain lead plasticizers, or plastic service lines which contain lead...
Validation of optimization strategies using the linear structured production chains
NASA Astrophysics Data System (ADS)
Kusiak, Jan; Morkisz, Paweł; Oprocha, Piotr; Pietrucha, Wojciech; Sztangret, Łukasz
2017-06-01
Different optimization strategies applied to a sequence of several stages of production chains were validated in this paper. Two benchmark problems described by ordinary differential equations (ODEs) were considered: a water tank and a passive CR-RC filter, exemplary objects described by first- and second-order differential equations, respectively. The optimization problems considered in this work serve as validators of the strategies elaborated by the authors. The main goal of the research, however, is selection of the best strategy for optimizing two real metallurgical processes that will be investigated in on-going projects. The first is the oxidizing roasting of zinc sulphide concentrate, where the sulphur in the input concentrate should be eliminated and a minimal concentration of sulphide sulphur in the roasted product has to be achieved. The second is the lead refining process, consisting of three stages: roasting to the oxide, reduction of the oxide to metal, and oxidizing refining. The strategies that prove most effective on the benchmark problems will be candidates for optimization of the industrial processes mentioned above.
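The water tank benchmark can be made concrete with a first-order model of the form dh/dt = (q_in - a*sqrt(h))/A and a trivial strategy that picks the inflow whose steady level best matches a target. All parameter values, the candidate-search strategy, and the names below are illustrative assumptions, not the authors' benchmark definition.

```python
import math

def tank_level(q_in, a=0.5, area=1.0, h0=0.0, t_end=50.0, dt=0.01):
    """Integrate the first-order tank ODE dh/dt = (q_in - a*sqrt(h))/area
    with explicit Euler and return the final level (illustrative model;
    steady state is h* = (q_in/a)**2)."""
    h = h0
    for _ in range(int(t_end / dt)):
        h += dt * (q_in - a * math.sqrt(max(h, 0.0))) / area
    return h

def best_inflow(target, candidates):
    """Pick the candidate inflow whose settled level is closest to the
    target: a crude stand-in for the strategies being compared."""
    return min(candidates, key=lambda q: abs(tank_level(q) - target))
```

A real strategy comparison would replace the candidate list with, e.g., gradient-free or evolutionary search over a continuous control.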
A Proposed Benchmark Problem for Scatter Calculations in Radiographic Modelling
NASA Astrophysics Data System (ADS)
Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.
2009-03-01
Code validation is a permanent concern in computer modelling and has been addressed repeatedly in eddy current and ultrasonic modeling. A good benchmark problem is sufficiently simple to be handled by various codes without strong requirements on geometry representation capabilities; focuses on few or even a single aspect of the problem at hand, to facilitate interpretation and to avoid compound errors compensating each other; yields a quantitative result; and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modeling: the prediction of scattered radiation. Many NDT applications cannot neglect scattered radiation, and the scatter calculation is thus important to faithfully simulate the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single-wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data are compared at the photon level, without a film model, which allows for comparisons with reference codes like MCNP. We compare results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first-order scattering code (VXI), and confront them with results obtained with MCNP. The comparison with an analytical scatter model provides insights into the application domain where this kind of approach can successfully replace Monte Carlo calculations.
NASA Astrophysics Data System (ADS)
Sutanto, G. R.; Kim, S.; Kim, D.; Sutanto, H.
2018-03-01
One of the problems in dealing with the capacitated facility location problem (CFLP) arises from the mismatch between facility capacities and the number of customers to be served. A facility with small capacity may leave some customers uncovered; these customers need to be re-allocated to another facility that still has available capacity. Therefore, an approach is proposed to handle the CFLP by using the k-means clustering algorithm for customer allocation, with the decision of whether re-allocation is needed based on the overall average distance between customers and facilities. This new approach is benchmarked against the existing approach by Liao and Guo, which also uses k-means clustering as the basis for deciding facility locations and customer allocation. Both approaches are evaluated using three clustering evaluation methods based on connectedness, compactness, and separation factors.
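As a sketch of the general idea (not the authors' exact procedure), customers can be clustered with k-means to site the facilities, then greedily allocated so that no facility exceeds its capacity; all parameters and names below are illustrative.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means; the final centroids serve as candidate facility sites."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda c: math.dist(p, centers[c]))].append(p)
        centers = [
            tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[c]
            for c, g in enumerate(groups)
        ]
    return centers

def allocate(points, centers, capacity):
    """Greedy capacitated allocation: each customer goes to the nearest
    facility that still has room (assumes total capacity is sufficient)."""
    load = [0] * len(centers)
    assign = []
    for p in points:
        order = sorted(range(len(centers)), key=lambda c: math.dist(p, centers[c]))
        c = next(c for c in order if load[c] < capacity)
        load[c] += 1
        assign.append(c)
    return assign, load
```

The re-allocation step in the paper would then compare average customer-facility distances before deciding whether to move overflow customers.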
Integrating CFD, CAA, and Experiments Towards Benchmark Datasets for Airframe Noise Problems
NASA Technical Reports Server (NTRS)
Choudhari, Meelan M.; Yamamoto, Kazuomi
2012-01-01
Airframe noise corresponds to the acoustic radiation due to turbulent flow in the vicinity of airframe components such as high-lift devices and landing gears. The combination of geometric complexity, high Reynolds number turbulence, multiple regions of separation, and a strong coupling with adjacent physical components makes the problem of airframe noise highly challenging. Since 2010, the American Institute of Aeronautics and Astronautics has organized an ongoing series of workshops devoted to Benchmark Problems for Airframe Noise Computations (BANC). The BANC workshops are aimed at enabling a systematic progress in the understanding and high-fidelity predictions of airframe noise via collaborative investigations that integrate state of the art computational fluid dynamics, computational aeroacoustics, and in depth, holistic, and multifacility measurements targeting a selected set of canonical yet realistic configurations. This paper provides a brief summary of the BANC effort, including its technical objectives, strategy, and selective outcomes thus far.
Simulated annealing with probabilistic analysis for solving traveling salesman problems
NASA Astrophysics Data System (ADS)
Hong, Pei-Yee; Lim, Yai-Fung; Ramli, Razamin; Khalid, Ruzelan
2013-09-01
Simulated annealing (SA) is a widely used meta-heuristic inspired by the annealing process of recrystallization of metals; its efficiency is therefore highly affected by the annealing schedule. In this paper, we present an empirical study to provide a comparable annealing schedule for solving symmetric traveling salesman problems (TSP). A randomized complete block design is also used in this study. The results show that different parameters do affect the efficiency of SA, and we propose the best found annealing schedule based on the post hoc test. SA was tested on seven selected benchmark problems of the symmetric TSP with the proposed annealing schedule. The performance of SA was evaluated empirically alongside benchmark solutions, with a simple analysis to validate the quality of solutions. Computational results show that the proposed annealing schedule provides good-quality solutions.
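For reference, a generic SA for the symmetric TSP with a geometric cooling schedule can be sketched as follows; the schedule parameters here are illustrative placeholders, not the schedule proposed in the paper.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour given a symmetric distance matrix."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def simulated_annealing_tsp(dist, t0=100.0, alpha=0.95,
                            iters_per_temp=100, t_min=1e-3, seed=0):
    """Generic SA for the symmetric TSP; t0, alpha, iters_per_temp and
    t_min form an assumed geometric annealing schedule."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best = tour[:]
    best_len = cur_len = tour_length(tour, dist)
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            # 2-opt style neighbour: reverse a random segment
            i, j = sorted(rng.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            cand_len = tour_length(cand, dist)
            # Metropolis acceptance: always downhill, uphill with prob e^(-delta/t)
            if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / t):
                tour, cur_len = cand, cand_len
                if cur_len < best_len:
                    best, best_len = tour[:], cur_len
        t *= alpha  # geometric cooling
    return best, best_len
```

The paper's contribution is precisely the tuning of t0, alpha and the iteration budget, which this sketch leaves as free parameters.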
Modified reactive tabu search for the symmetric traveling salesman problems
NASA Astrophysics Data System (ADS)
Lim, Yai-Fung; Hong, Pei-Yee; Ramli, Razamin; Khalid, Ruzelan
2013-09-01
Reactive tabu search (RTS) is an improvement of tabu search (TS) that dynamically adjusts the tabu list size based on how the search is performing; RTS thus avoids a disadvantage of TS, namely the parameter tuning of the tabu list size. In this paper, we propose a modified RTS approach for solving symmetric traveling salesman problems (TSP). The tabu list size of the proposed algorithm depends on the number of iterations during which solutions do not override the aspiration level, to achieve a good balance between diversification and intensification. The proposed algorithm was tested on seven chosen benchmark problems of the symmetric TSP. Its performance is compared with that of TS using empirical testing, benchmark solutions, and simple probabilistic analysis to validate the quality of solutions. The computational results and comparisons show that the proposed algorithm provides better-quality solutions than TS.
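A toy version of the reactive mechanism (grow the tabu tenure when previously visited tours recur, shrink it after a quiet stretch) can be sketched as follows; the reaction thresholds and the cycling test are illustrative, not the authors' exact aspiration-based rule.

```python
import random

def tour_length(tour, dist):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def reactive_tabu_tsp(dist, max_iters=200, seed=0):
    """Toy reactive tabu search for the symmetric TSP over 2-opt moves."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, dist)
    tenure, tabu, seen = 5, {}, set()
    since_repeat = 0
    for it in range(max_iters):
        # best non-tabu 2-opt neighbour; aspiration lets a tabu move
        # through if it improves on the best tour found so far
        cand_best, cand_len, cand_move = None, float("inf"), None
        for i in range(n - 1):
            for j in range(i + 1, n):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                length = tour_length(cand, dist)
                is_tabu = tabu.get((i, j), -1) >= it
                if (not is_tabu or length < best_len) and length < cand_len:
                    cand_best, cand_len, cand_move = cand, length, (i, j)
        if cand_best is None:
            tabu.clear()  # every move tabu: reset and continue
            continue
        tour = cand_best
        tabu[cand_move] = it + tenure
        key = tuple(tour)
        if key in seen:                     # cycling detected: react
            tenure = min(tenure + 2, n * (n - 1) // 2)
            since_repeat = 0
        else:
            seen.add(key)
            since_repeat += 1
            if since_repeat > 50:           # calm phase: relax tenure
                tenure = max(3, tenure - 1)
                since_repeat = 0
        if cand_len < best_len:
            best, best_len = tour[:], cand_len
    return best, best_len
```

The point of the reaction is that the list size never has to be hand-tuned per instance, which is the weakness of plain TS that the paper targets.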
Beauchamp, Kyle A; Behr, Julie M; Rustenburg, Ariën S; Bayly, Christopher I; Kroenlein, Kenneth; Chodera, John D
2015-10-08
Atomistic molecular simulations are a powerful way to make quantitative predictions, but the accuracy of these predictions depends entirely on the quality of the force field employed. Although experimental measurements of fundamental physical properties offer a straightforward approach for evaluating force field quality, the bulk of this information has been tied up in formats that are not machine-readable. Compiling benchmark data sets of physical properties from non-machine-readable sources requires substantial human effort and is prone to the accumulation of human errors, hindering the development of reproducible benchmarks of force-field accuracy. Here, we examine the feasibility of benchmarking atomistic force fields against the NIST ThermoML data archive of physicochemical measurements, which aggregates thousands of experimental measurements in a portable, machine-readable, self-annotating IUPAC-standard format. As a proof of concept, we present a detailed benchmark of the generalized Amber small-molecule force field (GAFF) using the AM1-BCC charge model against experimental measurements (specifically, bulk liquid densities and static dielectric constants at ambient pressure) automatically extracted from the archive and discuss the extent of data available for use in larger scale (or continuously performed) benchmarks. The results of even this limited initial benchmark highlight a general problem with fixed-charge force fields in the representation of low-dielectric environments, such as those seen in binding cavities or biological membranes.
INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom; Javier Ortensi; Sonat Sen
2013-09-01
The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes.
A final OECD/NEA comparison report will compare the Phase I and III results of all other international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.
Geochemical exploration for mineralized breccia pipes in northern Arizona, U.S.A.
Wenrich, K.J.
1986-01-01
Thousands of solution-collapse breccia pipes crop out in the canyons and on the plateaus of northern Arizona. Over 80 of these are known to contain U or Cu mineralized rock. The high-grade U ore associated with potentially economic concentrations of Ag, Pb, Zn, Cu, Co and Ni in some of these pipes has continued to stimulate mining and exploration activity in northern Arizona, despite periods of depressed U prices. Large expanses of northern Arizona consist of undissected high plateaus; recognition of pipes in these areas is particularly important because mining access to the plateaus is far better than to the canyons. The small size of the pipes, generally less than 600 ft (200 m) in diameter, and limited rock outcrop on the plateaus compound the recognition problem. Although the breccia pipes, which bottom in the Mississippian Redwall Limestone, are occasionally exposed on the plateaus as circular features, so are unmineralized near-surface collapse features that bottom in the Permian Kaibab and Toroweap Formations. The distinction between these two classes of circular features is critical during exploration for this unique type of U deposit. Various geochemical and geophysical exploration methods have been tested over these classes of collapse features. Because of the small size of the deposits, and the low-level geochemical signatures in the overlying rock that are rarely dispersed for distances in excess of several hundred feet, most reconnaissance geochemical surveys, such as hydrogeochemistry or stream sediment surveys, will not delineate mineralized pipes.
Several types of detailed geochemical surveys made over collapse features, located through examination of aerial photographs and later field mapping, have been successful at distinguishing collapse features from the surrounding host rock: (1) rock geochemistry commonly shows low-level Ag, As, Ba, Co, Cu, Ni, Pb, Se and Zn anomalies over mineralized breccia pipes; (2) soil surveys appear to have the greatest potential for distinguishing mineralized breccia pipes from the surrounding terrane; although the soil anomalies are only twice the background concentrations for most anomalous elements, traverses made over collapse features show consistent enrichment inside the feature as compared to outside; (3) B. cereus surveys over a known mineralized pipe show significantly more anomalous samples collected from within the ring fracture than from outside the breccia pipe; (4) helium soil-gas surveys were made over 7 collapse features, with discouraging results from 5 of the 7. Geophysical surveys indicate that scalar audio-magnetotelluric (AMT) and E-field telluric profile data show diagnostic conductivity differences over mineralized pipes as compared to the surrounding terrane. These surveys, coupled with the geochemical surveys conducted as detailed studies over features mapped by field and aerial photograph examination, can be a significant asset in the selection of potential breccia pipes for drilling.
Alternative design of pipe sleeve for liquid removal mechanism in mortar slab layer
NASA Astrophysics Data System (ADS)
Nazri, W. M. H. Wan; Anting, N.; Lim, A. J. M. S.; Prasetijo, J.; Shahidan, S.; Din, M. F. Md; Anuar, M. A. Mohd
2017-11-01
Porosity is one of the characteristics of mortar that can cause problems, especially in rooms where large amounts of water are used, such as bathrooms. Waterproofing is a technology normally used to minimize this problem by preventing deep penetration of liquid water or moisture into the underlying concrete layers. However, without a proper mechanism to remove liquid water and moisture from the mortar system, the waterproofing layer tends to be damaged over time by water and moisture standing in the mortar layer. A solution has therefore been proposed to drain out water that penetrates into the mortar layer. This paper introduces a Modified Pipe Sleeve (MPS) installed in the mortar layer. The MPS was designed by varying the percentage of the pipe sleeve's surface area in contact with the mortar layer (2%, 4%, 6%, 8% and 10%) with a hole angle of 60°. Infiltration and flow-rate tests were conducted to assess the effectiveness of the MPS in draining liquid water and moisture from the mortar layer. The study shows that the MPS with 10% surface area and 60° holes functions most effectively for water removal compared to the other designs.
Spherical Harmonic Solutions to the 3D Kobayashi Benchmark Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, P.N.; Chang, B.; Hanebutte, U.R.
1999-12-29
Spherical harmonic solutions of order 5, 9 and 21 on spatial grids containing up to 3.3 million cells are presented for the Kobayashi benchmark suite. This suite of three problems with simple geometry of a pure absorber with a large void region was proposed by Professor Kobayashi at an OECD/NEA meeting in 1996. Each of the three problems contains a source, a void and a shield region. Problem 1 can best be described as a box-in-a-box problem, where a source region is surrounded by a square void region which is itself embedded in a square shield region. Problems 2 and 3 represent a shield with a void duct, problem 2 having a straight duct and problem 3 a dog-leg-shaped duct. A pure absorber and a 50% scattering case are considered for each of the three problems. The solutions have been obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The Ardra code takes advantage of a two-level parallelization strategy, which combines message passing between processing nodes and thread-based parallelism amongst processors on each node. All calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL.
Mathematical simulation for compensation capacities area of pipeline routes in ship systems
NASA Astrophysics Data System (ADS)
Ngo, G. V.; Sakhno, K. N.
2018-05-01
In this paper, the authors considered the problem of enhancing the manufacturability of ship-system pipelines at the design stage. The arrangements and possibilities for compensating deviations of pipeline routes were analyzed. The task was set to produce the "fit pipe" together with the rest of the pipes in the route. It was proposed to compensate for deviations by moving the pipeline route during pipe installation and to calculate the maximum values of these displacements along the analyzed path. Theoretical bases for deviation compensation of pipeline routes, using rotations of parallel pairs of pipe sections, are assembled. Mathematical and graphical simulations of the compensation capacities of pipeline routes with various configurations are completed. This creates the prerequisites for an automated program that determines the compensatory capacity of a pipeline route and assigns the quantities of necessary allowances.
Buoyant miscible displacement flow of shear-thinning fluids: Experiments and Simulations
NASA Astrophysics Data System (ADS)
Ale Etrati Khosroshahi, Seyed Ali; Frigaard, Ian
2017-11-01
We study the displacement flow of two miscible fluids with density and viscosity contrast in an inclined pipe. Our focus is mainly on displacements where transverse mixing is not significant and thus a two-layer, stratified flow develops. Our experiments are carried out in a long pipe, covering a wide range of flow rates, inclination angles and viscosity ratios. Density and viscosity contrasts are achieved by adding glycerol and xanthan gum to water, respectively. At each angle, the flow rate and viscosity ratio are varied while the density contrast is fixed. We identify and map different flow regimes, instabilities and front dynamics based on Fr, Re cos β/Fr and the viscosity ratio m. The problem is also studied numerically to gain better insight into the flow structure and shear-thinning effects. Numerical simulations are performed using OpenFOAM in both pipe and channel geometries and are compared against the experiments. Funding from Schlumberger and NSERC is acknowledged.
Reactive Transport in a Pipe in Soluble Rock: a Theoretical and Experimental Study
NASA Astrophysics Data System (ADS)
Li, W.; Opolot, M.; Sousa, R.; Einstein, H. H.
2015-12-01
Reactive transport processes within the dominant underground flow pathways such as fractures can lead to the widening or narrowing of rock fractures, potentially altering the flow and transport processes in the fractures. A flow-through experiment was designed to study the reactive transport process in a pipe in soluble rock to serve as a simplified representation of a fracture in soluble rock. Assumptions were made to formulate the problem as three coupled, one-dimensional partial differential equations: one for the flow, one for the transport and one for the radius change due to dissolution. Analytical and numerical solutions were developed to predict the effluent concentration and the change in pipe radius. The positive feedback of the radius increase is captured by the experiment and the numerical model. A comparison between the experiment and the simulation results demonstrates the validity of the analytical and numerical models.
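Under similar simplifying assumptions (quasi-steady advection along the pipe, first-order surface dissolution, slow radius change), the coupled system can be sketched numerically. All parameter values and names below are illustrative, not those of the experiment or of the authors' models.

```python
import math

def simulate_pipe_dissolution(
    n=50, length=1.0, r0=1e-3, q=1e-6, k=1e-4, c_eq=1.0, c_in=0.0,
    rho=2700.0, dt=10.0, steps=1000,
):
    """Explicit sketch of coupled transport and dissolution in a pipe.

    At each time step, a quasi-steady advection-reaction balance sets the
    axial concentration profile c(x); the local undersaturation
    (c_eq - c) then grows the radius, giving the positive feedback the
    paper describes. Returns the final radius and concentration profiles.
    """
    dx = length / n
    r = [r0] * n
    c = [c_in] * n
    for _ in range(steps):
        c_prev = c_in
        for i in range(n):
            # per-cell balance: q * dc/dx = k * 2*pi*r * (c_eq - c)
            a = k * 2.0 * math.pi * r[i] * dx / q
            c[i] = (c_prev + a * c_eq) / (1.0 + a)
            c_prev = c[i]
        for i in range(n):
            # wall retreat: dissolved mass flux divided by rock density
            r[i] += dt * k * (c_eq - c[i]) / rho
    return r, c
```

Because the inlet sees the largest undersaturation, the pipe widens fastest there, which in turn speeds up local dissolution: the feedback captured by the experiment and the numerical model.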
Jamil, H; Templin, T; Fakhouri, M; Rice, V H; Khouri, R; Fakhouri, H; Al-Omran, Hasan; Al-Fauori, Ibrahim; Baker, Omar
2009-08-01
This study compared and contrasted personal characteristics, tobacco use (cigarette and water pipe smoking), and health states in Chaldean, Arab American and non-Middle Eastern White adults attending an urban community service center. The average age was 39.4 (SD = 14.2). The three groups differed significantly (P < .006) on ethnicity, age, gender distribution, marital status, language spoken, education, employment, and annual income. Current cigarette smoking was highest for non-Middle Eastern White adults (35.4%) and current water pipe smoking was highest for Arab Americans (3.6%). Arab Americans were more likely to smoke both cigarettes and the narghile (4.3%). Health problems were highest among former smokers in all three ethnic groups. Being male, older, unmarried, and non-Middle Eastern White predicted current cigarette smoking; being Arab or Chaldean and having less formal education predicted current water pipe use.
Benchmark and Framework for Encouraging Research on Multi-Threaded Testing Tools
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Stoller, Scott D.; Ur, Shmuel
2003-01-01
A problem that has been gaining prominence in testing is that of looking for intermittent bugs. Multi-threaded code is becoming very common, mostly on the server side. As there is no silver-bullet solution, research focuses on a variety of partial solutions. In this paper (invited by PADTAD 2003) we outline a proposed project to facilitate research. The project goals are as follows. The first goal is to create a benchmark that can be used to evaluate different solutions. The benchmark, apart from containing programs with documented bugs, will include other artifacts, such as traces, that are useful for evaluating some of the technologies. The second goal is to create a set of tools with open APIs that can be used to check ideas without building a large system; for example, an instrumentor will be available that could be used to test temporal noise-making heuristics. The third goal is to create a focus for the research in this area around which a community of people who try to solve similar problems with different techniques could congregate.
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1986-01-01
An assessment is made of the potential of different global-local analysis strategies for predicting the nonlinear and postbuckling responses of structures. Two postbuckling problems of composite panels are used as benchmarks and the application of different global-local methodologies to these benchmarks is outlined. The key elements of each of the global-local strategies are discussed and future research areas needed to realize the full potential of global-local methodologies are identified.
Integrated control/structure optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Zeiler, Thomas A.; Gilbert, Michael G.
1990-01-01
A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, J; Dossa, D; Gokhale, M
Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, a text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB NAND Flash parallel disk array, the Fusion-io.
The Fusion system specs are as follows: SuperMicro X7DBE Xeon dual-socket Blackford server motherboard; two Intel Xeon dual-core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512 MB); 80 GB hard drive (Seagate SATA II Barracuda). The Fusion board presently runs at 4X in a PCIe slot. The image resampling benchmark was run on a dual-Xeon workstation with an NVIDIA graphics card (see Chapter 5 for the full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that spent greater than 50% of its time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling and language classification benchmarks showed order-of-magnitude speedups over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit in boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid-state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.
The role of trees in urban stormwater management
Urban impervious surfaces convert precipitation to stormwater runoff, which causes water quality and quantity problems. While traditional stormwater management has relied on gray infrastructure such as piped conveyances to collect and convey stormwater to wastewater treatment fac...
Thermal Control System for a Small, Extended Duration Lunar Surface Science Platform
NASA Technical Reports Server (NTRS)
Bugby, D.; Farmer, J.; OConnor, B.; Wirzburger, M.; Abel, E.; Stouffer, C.
2010-01-01
The presentation slides include: Introduction: lunar mission definition, Problem: requirements/methodology, Concept: thermal switching options, Analysis: system evaluation, Plans: dual-radiator LHP (loop heat pipe) test bed, and Conclusions: from this study.
NASA Technical Reports Server (NTRS)
Feng, Hui-Yu; VanderWijngaart, Rob; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
We describe the design of a new method for the measurement of the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. The method involves the solution of a stylized heat transfer problem on an unstructured, adaptive grid. A Spectral Element Method (SEM) with an adaptive, nonconforming mesh is selected to discretize the transport equation. The relatively high order of the SEM lowers the fraction of wall clock time spent on inter-processor communication, which eases the load balancing task and allows us to concentrate on the memory accesses. The benchmark is designed to be three-dimensional. Parallelization and load balance issues of a reference implementation will be described in detail in future reports.
PID controller tuning using metaheuristic optimization algorithms for benchmark problems
NASA Astrophysics Data System (ADS)
Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.
2017-11-01
This paper finds optimal PID controller parameters using particle swarm optimization (PSO), a genetic algorithm (GA), and a simulated annealing (SA) algorithm. The algorithms tune the PID controller through simulation of a chemical process and an electrical system. Two different fitness functions, Integral Time Absolute Error (ITAE) and time-domain specifications, were applied with PSO, GA, and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled-tank system and a DC motor. Finally, a comparative study of the algorithms has been carried out based on best cost, number of iterations, and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
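As a rough illustration of the tuning loop described above, the sketch below uses a bare-bones PSO to minimize ITAE for a PID controller on a first-order plant. The plant model, its time constant, the gain bounds, and all PSO hyperparameters are illustrative assumptions, not values from the paper:

```python
import random

def itae(gains, dt=0.01, t_end=5.0):
    """Integral Time Absolute Error of a closed-loop unit-step response.

    Plant: first-order process dy/dt = (-y + u)/tau, an illustrative
    stand-in for the paper's DC-motor/coupled-tank benchmarks."""
    kp, ki, kd = gains
    tau = 0.5
    y, integ, prev_err, cost, t = 0.0, 0.0, 1.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y                       # unit-step setpoint
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += dt * (-y + u) / tau            # explicit Euler plant step
        t += dt
        cost += t * abs(err) * dt           # ITAE accumulation
        if not -1e6 < y < 1e6:              # penalize unstable gain sets
            return 1e9
    return cost

def pso(fitness, n_particles=20, iters=60, lo=0.0, hi=10.0, dim=3):
    """Bare-bones global-best particle swarm with constant coefficients."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)
gains, cost = pso(itae)   # gains = [Kp, Ki, Kd]
```

Swapping `itae` for a time-domain-specification cost (overshoot, settling time) changes only the fitness function, which is the comparison the paper sets up.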
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
1998-01-01
This report discusses the design implications for spacecraft radiators made possible by the successful fabrication and proof-of-concept testing of a graphite-fiber-carbon-matrix composite (i.e., carbon-carbon (C-C)) heat pipe. The prototype heat pipe, or space radiator element, consists of a C-C composite shell with integrally woven fins. It has a thin-walled furnace-brazed metallic (Nb-1%Zr) liner with end caps for containment of the potassium working fluid. A short extension of this liner, at increased wall thickness beyond the C-C shell, forms the heat pipe evaporator section, which is in thermal contact with the radiator fluid that needs to be cooled. From geometric and thermal transport properties of the C-C composite heat pipe tested, a specific radiator mass of 1.45 kg/m2 can be derived. This is less than one-fourth the specific mass of present-day satellite radiators. The report also discusses the advantage of segmented space radiator designs utilizing heat pipe elements, or segments, in their survivability to micrometeoroid damage. This survivability is further raised by the use of condenser sections with attached fins, which also improve the radiation heat transfer rate. Since the problem of heat radiation from a fin does not lend itself to a closed analytical solution, a derivation of the governing differential equation and boundary conditions is given in appendix A, along with solutions for rectangular and parabolic fin profile geometries obtained by use of a finite difference computer code written by the author.
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
2002-01-01
This report discusses the design implications for spacecraft radiators made possible by the successful fabrication and proof-of-concept testing of a graphite-fiber-carbon-matrix composite (i.e., carbon-carbon (C-C)) heat pipe. The prototype heat pipe, or space radiator element, consists of a C-C composite shell with integrally woven fins. It has a thin-walled furnace-brazed metallic (Nb-1%Zr) liner with end caps for containment of the potassium working fluid. A short extension of this liner, at increased wall thickness beyond the C-C shell, forms the heat pipe evaporator section which is in thermal contact with the radiator fluid that needs to be cooled. From geometric and thermal transport properties of the C-C composite heat pipe tested, a specific radiator mass of 1.45 kg/sq m can be derived. This is less than one-fourth the specific mass of present day satellite radiators. The report also discusses the advantage of segmented space radiator designs utilizing heat pipe elements, or segments, in their survivability to micrometeoroid damage. This survivability is further raised by the use of condenser sections with attached fins, which also improve the radiation heat transfer rate. Since the problem of heat radiation from a fin does not lend itself to a closed analytical solution, a derivation of the governing differential equation and boundary conditions is given in appendix A, along with solutions for rectangular and parabolic fin profile geometries obtained by use of a finite difference computer code written by the author.
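The fin-radiation problem mentioned at the end of both reports (no closed-form solution; finite-difference treatment in appendix A) can be sketched as follows for a rectangular fin radiating from both faces into deep space. All material properties and dimensions here are invented placeholders, not the report's values:

```python
# Finite-difference solution of a radiating rectangular fin:
#   k * t * d^2T/dx^2 = 2 * eps * sigma * T^4   (both faces radiate)
# with T(0) = T_base and an adiabatic tip, dT/dx(L) = 0.
SIGMA = 5.670e-8                       # Stefan-Boltzmann constant, W/(m^2 K^4)
k, thick, eps = 200.0, 1.0e-3, 0.85    # illustrative fin conductivity, thickness, emissivity
L, T_base = 0.05, 800.0                # fin length (m), root temperature (K)
n = 51
dx = L / (n - 1)
beta = 2.0 * eps * SIGMA / (k * thick)

T = [T_base] * n
for _ in range(20000):                 # Gauss-Seidel sweeps, T^4 source held explicit
    max_change = 0.0
    for i in range(1, n - 1):
        new = 0.5 * (T[i - 1] + T[i + 1] - dx * dx * beta * T[i] ** 4)
        max_change = max(max_change, abs(new - T[i]))
        T[i] = new
    T[n - 1] = T[n - 2]                # adiabatic tip via mirror node
    if max_change < 1e-8:
        break

# Fin efficiency: actual radiated power over the power radiated if the
# entire fin sat at the root temperature.
q_actual = sum(2.0 * eps * SIGMA * Ti ** 4 * dx for Ti in T[1:])
q_ideal = 2.0 * eps * SIGMA * T_base ** 4 * L
eta = q_actual / q_ideal
```

The resulting fin efficiency is the quantity that feeds the specific-mass trade-off the report discusses: longer fins radiate more area but at progressively lower temperature.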
Ultrasonic multi-skip tomography for pipe inspection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volker, Arno; Zon, Tim van
The inspection of wall loss corrosion is difficult at pipe supports due to limited accessibility. The recently developed ultrasonic Multi-Skip screening technique is suitable for this problem. The method employs ultrasonic transducers in a pitch-catch geometry positioned on opposite sides of the pipe support. Shear waves are transmitted in the axial direction within the pipe wall, reflecting multiple times between the inner and outer surfaces before reaching the receivers. Along this path, the signals accumulate information on the integral wall thickness (e.g., via variations in travel time). The method is very sensitive in detecting the presence of wall loss, but it is difficult to quantify both the extent and depth of the loss. Multi-skip tomography has been developed to reconstruct the wall thickness profile along the axial direction of the pipe. The method uses model-based full wave field inversion; this consists of a forward model for predicting the measured wave field and an iterative process that compares the predicted and measured wave fields and minimizes the differences with respect to the model parameters (i.e., the wall thickness profile). Experimental results are very encouraging. Various defects (slot and flat bottom hole) are reconstructed using the tomographic inversion. The general shape and width are well recovered. The current sizing accuracy is on the order of 1 mm.
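A toy version of the model-based inversion idea, reduced to a single unknown: assume a uniform wall thickness, take a zig-zag shear-wave path as the forward model, and invert one measured travel time by bisection. The real method inverts a full thickness profile against the measured wave field; the wave speed and geometry below are assumed values:

```python
import math

C_SHEAR = 3.2  # mm/us, assumed nominal shear-wave speed in steel

def travel_time(thickness, pitch=20.0):
    """Forward model: zig-zag shear path through the wall.

    thickness: wall thickness per skip (mm); pitch: axial advance per skip (mm).
    Each skip consists of two legs, each crossing the wall once."""
    return sum(2.0 * math.hypot(pitch / 2.0, h) / C_SHEAR for h in thickness)

def invert_thickness(t_meas, n_skips=10, lo=1.0, hi=20.0, tol=1e-6):
    """Invert the forward model for a uniform wall thickness by bisection;
    travel time increases monotonically with thickness, so the root is unique."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if travel_time([mid] * n_skips) < t_meas:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Replacing the scalar unknown with a per-skip thickness vector, and the bisection with an iterative least-squares fit of the predicted to the measured wave field, recovers the structure of the tomographic inversion described above.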
Using Toyota's A3 Thinking for Analyzing MBA Business Cases
ERIC Educational Resources Information Center
Anderson, Joe S.; Morgan, James N.; Williams, Susan K.
2011-01-01
A3 Thinking is fundamental to Toyota's benchmark management philosophy and to their lean production system. It is used to solve problems, gain agreement, mentor team members, and lead organizational improvements. A structured problem-solving approach, A3 Thinking builds improvement opportunities through experience. We used "The Toyota…
For QSAR and QSPR modeling of biological and physicochemical properties, estimating the accuracy of predictions is a critical problem. The “distance to model” (DM) can be defined as a metric that defines the similarity between the training set molecules and the test set compound ...
Benditz, A; Drescher, J; Greimel, F; Zeman, F; Grifka, J; Meißner, W; Völlner, F
2016-12-05
Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analysis and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in terms of activity-related pain and 9th in patient satisfaction among 47 anonymized hospitals participating in the benchmarking project. At the end of the study, we had improved to 1st in activity-related pain and to 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and for increasing patient satisfaction after TKA.
New NAS Parallel Benchmarks Results
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)
1997-01-01
NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.
Clarke, Frank Eldridge; Barnes, Ivan
1969-01-01
Seepage from rivers and irrigation canals has contributed to waterlogging and soil salinization problems in much of the Indus Plains of West Pakistan. These problems are being overcome in part by tube-well dewatering and deep leaching of salinized soils. The ground waters described here are anaerobic and some are supersaturated with troublesome minerals such as calcium carbonate (calcite) and iron carbonate (siderite). These waters are moderately corrosive to steel. Some wells contain sulfate-reducing bacteria, which catalyze corrosion, and pH-electrode potential relationships favorable to the solution of iron also are rather common. Corrosion is concentrated in the relatively active (anodic) saw slots of water-well filter pipes (screens), where metal loss is least tolerable. Local changes in chemical properties of the water, because of corrosion, apparently cause deposition of calcium carbonate, iron carbonate, and other minerals which clog the filter pipes. In some places well capacities are seriously reduced in very short periods of time. There appears to be no practicable preventive treatment for corrosion and encrustation in these wells. Even chemical sterilization for bacterial control has yielded poor results. Periodic rehabilitation by down-hole blasting or by other effective mechanical or chemical cleaning methods will prolong well life. It may be possible to repair severely damaged well screens by inserting perforated sleeves of plastic or other inert material. The most promising approach to future well-field development is to use filter pipes of epoxy-resin-bonded fiberglass, stainless steel, or other inert material which minimizes both corrosion and corrosion-catalyzed encrustation. Fiberglass plastic pipe appears to be the most economically practicable construction material at this time and already is being used with promising results.
Direct numerical simulation of turbulent pipe flow using the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Peng, Cheng; Geneva, Nicholas; Guo, Zhaoli; Wang, Lian-Ping
2018-03-01
In this paper, we present a first direct numerical simulation (DNS) of turbulent pipe flow using the mesoscopic lattice Boltzmann method (LBM) on both a D3Q19 lattice grid and a D3Q27 lattice grid. DNS of turbulent pipe flow using LBM has never been reported previously, perhaps due to the inaccuracy and numerical instability associated with previous implementations of LBM in the presence of a curved solid surface. In fact, it was even speculated that the D3Q19 lattice might be inappropriate as a DNS tool for turbulent pipe flows. In this paper, we show that, through careful implementation, accurate turbulence statistics can be obtained on both D3Q19 and D3Q27 lattice grids. In the simulation with the D3Q19 lattice, a few problems related to the numerical stability of the simulation are exposed, and discussions and solutions for those problems are provided. The simulation with the D3Q27 lattice, on the other hand, is found to be more stable than its D3Q19 counterpart. The resulting turbulent flow statistics at a friction Reynolds number of Reτ = 180 are compared systematically with both published experimental and other DNS results based on solving the Navier-Stokes equations. The comparisons cover the mean-flow profile, the r.m.s. velocity and vorticity profiles, the mean and r.m.s. pressure profiles, the velocity skewness and flatness, and spatial correlations and energy spectra of velocity and vorticity. Overall, we conclude that both the D3Q19 and D3Q27 simulations yield accurate turbulent flow statistics. The use of the D3Q27 lattice is shown to suppress a weak secondary flow pattern in the mean flow caused by numerical artifacts.
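For readers unfamiliar with the two stencils: D3Q19 and D3Q27 differ only in whether the eight corner velocities of the 3x3x3 neighborhood are retained. A small sketch (not from the paper) that constructs both velocity sets and checks the low-order isotropy moments any valid LBM lattice must satisfy:

```python
from itertools import product

def d3q19():
    """Velocity set and weights for the D3Q19 lattice (cs^2 = 1/3)."""
    vels, weights = [], []
    for c in product((-1, 0, 1), repeat=3):
        speed2 = sum(abs(x) for x in c)
        if speed2 == 0:
            vels.append(c); weights.append(1.0 / 3.0)    # rest particle
        elif speed2 == 1:
            vels.append(c); weights.append(1.0 / 18.0)   # 6 axis neighbors
        elif speed2 == 2:
            vels.append(c); weights.append(1.0 / 36.0)   # 12 face diagonals; corners dropped
    return vels, weights

def d3q27():
    """Velocity set and weights for the D3Q27 lattice (cs^2 = 1/3)."""
    w_by_speed = {0: 8.0 / 27.0, 1: 2.0 / 27.0, 2: 1.0 / 54.0, 3: 1.0 / 216.0}
    vels = list(product((-1, 0, 1), repeat=3))
    weights = [w_by_speed[sum(abs(x) for x in c)] for c in vels]
    return vels, weights

def moments_ok(vels, weights, cs2=1.0 / 3.0, tol=1e-12):
    """Check sum(w) = 1, zero first moment, and second moment cs^2 * delta_ab."""
    if abs(sum(weights) - 1.0) > tol:
        return False
    for a in range(3):
        if abs(sum(w * c[a] for c, w in zip(vels, weights))) > tol:
            return False
        for b in range(3):
            target = cs2 if a == b else 0.0
            if abs(sum(w * c[a] * c[b] for c, w in zip(vels, weights)) - target) > tol:
                return False
    return True
```

Both lattices satisfy the same moment constraints up to the order shown; differences such as the secondary-flow artifact discussed above enter only through higher-order moments.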
Noise Radiation Of A Strongly Pulsating Tailpipe Exhaust
NASA Astrophysics Data System (ADS)
Peizi, Li; Genhua, Dai; Zhichi, Zhu
1993-11-01
The method of characteristics is used to solve the problem of the propagation of a strongly pulsating flow in an exhaust system tailpipe. For a strongly pulsating exhaust, the flow may become choked at the pipe's open end at points in the pulsation cycle where the flow pressure exceeds its critical value. The method fails if one insists on setting the flow pressure equal to the atmospheric pressure as the pipe-end boundary condition. To solve the problem, we set the Mach number equal to 1 as the boundary condition whenever the flow pressure exceeds its critical value. For a strongly pulsating flow, the fluctuations of flow variables may be much larger than their respective time averages; therefore, the acoustic radiation method would fail in the computation of the noise radiation from the pipe's open end. We instead simulate the exhaust flow out of the open end as a simple sound source to compute the noise radiation, an approach that has been successfully applied in reference [1]. The simple-source strength is proportional to the volume acceleration of the exhaust gas. Also computed is the noise radiation from the turbulence of the exhaust flow, as was done in reference [1]. Noise from a reciprocating valve simulator has been treated in detail. The radiation efficiency is very low for the pressure range considered, about 10^-5, and the radiation efficiency coefficient increases with the square of the frequency. Computation of the pipe-length dependence of the noise radiation and mass flux allows us to design a suitable length for an aerodynamic noise generator or a reciprocating internal combustion engine: for the former, powerful noise radiation is preferable; for the latter, maximum mass flux is desired because a freer exhaust is preferable.
Pipe and Solids Analysis: What Can I Learn?
This presentation gives a brief overview of techniques that regulators, utilities and consultants might want to request from laboratories to anticipate or solve water treatment and distribution system water quality problems. Actual examples will be given from EPA collaborations,...
Application of the gravity search algorithm to multi-reservoir operation optimization
NASA Astrophysics Data System (ADS)
Bozorg-Haddad, Omid; Janbaz, Mahdieh; Loáiciga, Hugo A.
2016-12-01
Complexities in river discharge, variable rainfall regimes, and drought severity merit the use of advanced optimization tools in multi-reservoir operation. The gravity search algorithm (GSA) is an evolutionary optimization algorithm based on the law of gravity and mass interactions. This paper explores the GSA's efficacy for solving benchmark functions and single-reservoir and four-reservoir operation optimization problems. The GSA's solutions are compared with those of the well-known genetic algorithm (GA) in three optimization problems. The results show that the GSA's results are closer to the optimal solutions than the GA's results in minimizing the benchmark functions. The average values of the objective function equal 1.218 and 1.746 with the GSA and GA, respectively, in solving the single-reservoir hydropower operation problem, for which the global solution equals 1.213. The GSA converged to 99.97% of the global solution in its average-performing history, while the GA converged to 97% of the global solution of the four-reservoir problem. Requiring fewer parameters for algorithmic implementation and reaching the optimal solution in fewer function evaluations are additional advantages of the GSA over the GA. The results of the three optimization problems demonstrate a superior performance of the GSA for optimizing general mathematical problems and the operation of reservoir systems.
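A minimal sketch of the GSA on a standard benchmark function (the sphere function), following the usual formulation with fitness-derived masses and a decaying gravitational constant; the population size, G0, alpha, and bounds are illustrative choices, not the paper's settings:

```python
import math
import random

def gsa(f, dim=2, n=30, iters=200, lo=-5.0, hi=5.0, g0=100.0, alpha=20.0):
    """Minimal gravitational search algorithm for minimization.

    Masses are normalized fitnesses (best agent -> heaviest); each agent is
    accelerated toward the others by 'gravity' that weakens over time."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fit = [f(x) for x in X]
        b, w = min(fit), max(fit)
        if b < best_f:
            best_f = b
            best_x = X[fit.index(b)][:]
        m = [(fi - w) / (b - w) if b != w else 1.0 for fi in fit]
        s = sum(m)
        M = [mi / s for mi in m]                     # normalized masses
        G = g0 * math.exp(-alpha * t / iters)        # decaying gravitational constant
        for i in range(n):
            acc = [0.0] * dim
            for j in range(n):
                if i == j:
                    continue
                R = math.dist(X[i], X[j])
                for d in range(dim):
                    acc[d] += random.random() * G * M[j] * (X[j][d] - X[i][d]) / (R + 1e-12)
            for d in range(dim):
                V[i][d] = random.random() * V[i][d] + acc[d]
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
    return best_x, best_f

random.seed(1)
sphere = lambda x: sum(v * v for v in x)
x_best, f_best = gsa(sphere)
```

Replacing `sphere` with a penalized reservoir-release objective (storage balance and release bounds as penalty terms) gives the single-reservoir setting compared in the paper.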
Finite element modeling of borehole heat exchanger systems. Part 1. Fundamentals
NASA Astrophysics Data System (ADS)
Diersch, H.-J. G.; Bauer, D.; Heidemann, W.; Rühaak, W.; Schätzl, P.
2011-08-01
Single borehole heat exchangers (BHE) and arrays of BHE are modeled using the finite element method. The first part of the paper derives the fundamental equations for BHE systems and their finite element representations, where the thermal exchange between the borehole components is modeled via thermal transfer relations. For this purpose, improved relationships for the thermal resistances and capacities of BHE are introduced. Pipe-to-grout thermal transfer uses multiple grout points for double U-shape and single U-shape BHE to attain more accurate modeling. The numerical solution of the final 3D problems is performed via a widely non-sequential (essentially non-iterative) coupling strategy for the BHE and porous medium discretizations. Four types of vertical BHE are supported: double U-shape (2U) pipe, single U-shape (1U) pipe, and coaxial pipe with annular (CXA) or centred (CXC) inlet. Two computational strategies are used: (1) an analytical BHE method based on Eskilson and Claesson's (1988) solution, and (2) a numerical BHE method based on Al-Khoury et al.'s (2005) solution. The second part of the paper focuses on BHE meshing aspects, the validation of BHE solutions, and practical applications for borehole thermal energy store systems.
A bubble detection system for propellant filling pipeline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wen, Wen; Zong, Guanghua; Bi, Shusheng
2014-06-15
This paper proposes a bubble detection system based on the ultrasound transmission method, mainly for probing high-speed bubbles in the satellite propellant filling pipeline. First, three common ultrasonic detection methods are compared, and the ultrasound transmission method is adopted in this paper. Then, the ultrasound beam in a vertical pipe is investigated, suggesting that the width of the beam used for detection is usually smaller than the internal diameter of the pipe, which means that when bubbles move close to the pipe wall, they may escape detection. A special device is designed to solve this problem: it generates a spiral flow to force all the bubbles to ascend along the central line of the pipe. In the end, experiments are implemented to evaluate the performance of this system. Bubbles of five different sizes are generated and detected. Experimental results show that the sizes and quantity of bubbles can be estimated by this system, and that bubbles of different radii can be distinguished from each other. The numerical relationship between the ultrasound attenuation and the bubble radius is acquired, and it can be utilized for estimating unknown bubble sizes and measuring the total bubble volume.
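The acquired attenuation-radius relationship can be used for size estimation by simple inverse interpolation. The sketch below assumes the attenuation grows monotonically with bubble radius over the calibrated range; the calibration numbers are invented placeholders, not the paper's measurements:

```python
def make_estimator(calib):
    """Build a radius estimator from (attenuation_dB, radius_mm) calibration pairs.

    Linearly interpolates between calibration points and clamps outside the
    calibrated range; assumes attenuation is monotone in radius."""
    pts = sorted(calib)
    def radius(atten_db):
        if atten_db <= pts[0][0]:
            return pts[0][1]
        if atten_db >= pts[-1][0]:
            return pts[-1][1]
        for (a0, r0), (a1, r1) in zip(pts, pts[1:]):
            if a0 <= atten_db <= a1:
                f = (atten_db - a0) / (a1 - a0)   # position within the segment
                return r0 + f * (r1 - r0)
    return radius

# Five bubble sizes, echoing the five-size experiment (values are made up).
calibration = [(1.2, 0.5), (2.8, 1.0), (4.9, 1.5), (7.5, 2.0), (10.6, 2.5)]
estimate = make_estimator(calibration)
```

Summing the volumes of the estimated radii over all detected bubbles then yields the total bubble volume figure mentioned above.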
Osborne Reynolds pipe flow: direct numerical simulation from laminar to fully-developed turbulence
NASA Astrophysics Data System (ADS)
Adrian, R. J.; Wu, X.; Moin, P.; Baltzer, J. R.
2014-11-01
Osborne Reynolds' pipe experiment marked the onset of modern viscous flow research, yet the detailed mechanism carrying the laminar state to fully-developed turbulence has remained quite elusive, despite notable progress related to dynamic edge-state theory. Here, we continue our direct numerical simulation study of this problem using a 250R-long, spatially-developing pipe configuration with various Reynolds numbers, inflow disturbances, and inlet base flow states. For the inlet base flow, both the fully-developed laminar profile and the uniform plug profile are considered. Inlet disturbances consist of rings of turbulence of different width and radial location. In all six cases examined so far, energy norms show exponential growth with axial distance until transition, after an initial decay near the inlet. Skin friction overshoots the Moody correlation in most, but not all, of the cases. Another common theme is that lambda vortices amplified out of susceptible elements in the inlet disturbances trigger rapidly growing hairpin packets at random locations and times, after which infant turbulent spots appear. Mature turbulent spots in pipe transition are actually tight concentrations of hairpin packets resembling a hairpin forest. The plug-flow inlet profile requires much stronger disturbances to transition than the parabolic profile.
NASA Astrophysics Data System (ADS)
Cas, R. A. F.; Hayman, P.; Pittari, A.; Porritt, L.
2008-06-01
Five significant problems hinder advances in understanding of the volcanology of kimberlites: (1) kimberlite geology is very model-driven; (2) a highly genetic terminology drives deposit or facies interpretation; (3) the effects of alteration on preserved depositional textures have been grossly underestimated; (4) the level of understanding of the physical process significance of preserved textures is limited; and (5) some inferred processes and deposits are not based on actual, modern volcanological processes. These issues need to be addressed in order to advance understanding of kimberlite volcanological pipe forming processes and deposits. The traditional, steep-sided southern African pipe model (Class I) consists of a steep tapering pipe with a deep root zone, a middle diatreme zone and an upper crater zone (if preserved). Each zone is thought to be dominated by distinctive facies, respectively: hypabyssal kimberlite (HK, descriptively called here massive coherent porphyritic kimberlite), tuffisitic kimberlite breccia (TKB, descriptively here called massive, poorly sorted lapilli tuff) and crater zone facies, which include variably bedded pyroclastic kimberlite and resedimented and reworked volcaniclastic kimberlite (RVK). Porphyritic coherent kimberlite may, however, also be emplaced at different levels in the pipe, as later stage intrusions, as well as dykes in the surrounding country rock. The relationship between HK and TKB is not always clear. Sub-terranean fluidisation as an emplacement process is a largely unsubstantiated hypothesis; modern in-vent volcanological processes should initially be considered to explain observed deposits. Crater zone volcaniclastic deposits can occur within the diatreme zone of some pipes, indicating that the pipe was largely empty at the end of the eruption, and subsequently began to fill in largely through resedimentation and sourcing of pyroclastic deposits from nearby vents.
Classes II and III Canadian kimberlite models have a more factual, descriptive basis, but are still inadequately documented given the recency of their discovery. The diversity amongst kimberlite bodies suggests that a three-model classification is an over-simplification. Every kimberlite is altered to varying degrees, which is an intrinsic consequence of the ultrabasic composition of kimberlite and the in-vent context; few preserve original textures. The effects of syn- to post-emplacement alteration on original textures have not been adequately considered to date, and should be back-stripped to identify original textural elements and configurations. Applying sedimentological textural configurations as a guide to emplacement processes would be useful. The traditional terminology has many connotations about spatial position in pipe and of process. Perhaps the traditional terminology can be retained in the industrial situation as a general lithofacies-mining terminological scheme because it is so entrenched. However, for research purposes a more descriptive lithofacies terminology should be adopted to facilitate detailed understanding of deposit characteristics, important variations in these, and the process origins. For example every deposit of TKB is different in componentry, texture, or depositional structure. However, because so many deposits in many different pipes are called TKB, there is an implication that they are all similar and that similar processes were involved, which is far from clear.
A benchmark for subduction zone modeling
NASA Astrophysics Data System (ADS)
van Keken, P.; King, S.; Peacock, S.
2003-04-01
Our understanding of subduction zones hinges critically on the ability to discern their thermal structure and dynamics. Computational modeling has become an essential complement to observational and experimental studies. Accurate modeling of subduction zones is challenging due to the unique geometry, complicated rheological description, and influence of fluid and melt formation. The complicated physics causes problems for the accurate numerical solution of the governing equations. As a consequence, it is essential for the subduction zone community to be able to evaluate the abilities and limitations of various modeling approaches. The participants of a workshop on the modeling of subduction zones, held at the University of Michigan at Ann Arbor, MI, USA in 2002, formulated a number of case studies to be developed into a benchmark similar to previous mantle convection benchmarks (Blankenbach et al., 1989; Busse et al., 1991; Van Keken et al., 1997). Our initial benchmark focuses on the dynamics of the mantle wedge and investigates three different rheologies: constant viscosity, diffusion creep, and dislocation creep. In addition, we investigate the ability of codes to accurately model dynamic pressure and advection-dominated flows. Proceedings of the workshop and the formulation of the benchmark are available at www.geo.lsa.umich.edu/~keken/subduction02.html. We strongly encourage interested research groups to participate in this benchmark. At Nice 2003 we will provide an update and a first set of benchmark results. Interested researchers are encouraged to contact one of the authors for further details.
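The three rheologies differ mainly in how effective viscosity responds to temperature and stress. A minimal sketch of that contrast, with placeholder flow-law constants (the A, E, and n values below are illustrative, not the parameters prescribed by the benchmark cases):

```python
import math

R_GAS = 8.314  # gas constant, J/(mol*K)

def effective_viscosity(T, stress, mechanism,
                        A_diff=1.0e-10, E_diff=335.0e3,
                        A_disl=1.0e-16, E_disl=540.0e3, n=3.5):
    """Effective viscosity (Pa*s) under the three wedge rheologies.

    Flow-law prefactors and activation energies are placeholders, not
    the values specified in the benchmark cases.
    """
    if mechanism == "constant":
        return 1.0e21
    if mechanism == "diffusion":
        # Newtonian: temperature-dependent, independent of stress
        return math.exp(E_diff / (R_GAS * T)) / A_diff
    if mechanism == "dislocation":
        # Power-law (n > 1): viscosity falls as stress**(1 - n)
        return (stress ** (1.0 - n)
                * math.exp(E_disl / (n * R_GAS * T))
                / A_disl ** (1.0 / n))
    raise ValueError("unknown mechanism: %s" % mechanism)
```

The stress dependence of the dislocation-creep branch is what makes the flow non-Newtonian and couples the momentum solve nonlinearly, which is precisely where codes are expected to diverge.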
Numerical benchmarking of a Coarse-Mesh Transport (COMET) Method for medical physics applications
NASA Astrophysics Data System (ADS)
Blackburn, Megan Satterfield
2009-12-01
Radiation therapy has become a very important method for treating cancer patients. It is therefore extremely important to accurately determine the location of energy deposition during these treatments, maximizing dose to the tumor region and minimizing it to healthy tissue. The Coarse-Mesh Transport Method (COMET), developed at the Georgia Institute of Technology in the Computational Reactor and Medical Physics Group, has been used very successfully for neutron transport in whole-core criticality analysis. COMET works by decomposing a large, heterogeneous system into a set of smaller fixed-source problems. For each unique local problem, a solution is obtained that we call a response function. These response functions are pre-computed and stored in a library for future use. The overall solution to the global problem can then be found by a linear superposition of these local solutions. The method has now been extended to the transport of photons and electrons for use in medical physics problems, to determine energy deposition from radiation therapy treatments. The main goal of this work was to develop benchmarks for evaluating the COMET code and determining its strengths and weaknesses for these medical physics applications. Response function calculations require Legendre polynomial expansions in space and in angle (polar and azimuthal). An initial sensitivity study was done to determine the best orders for future testing. After the expansion orders were found, three simple benchmarks were tested: a water phantom, a simplified lung phantom, and a non-clinical slab phantom. Each of these benchmarks was decomposed into 1 cm x 1 cm and 0.5 cm x 0.5 cm coarse meshes. Three more clinically relevant problems were developed from patient CT scans. These benchmarks modeled a lung patient, a prostate patient, and a beam re-entry situation. As before, the problems were divided into 1 cm x 1 cm, 0.5 cm x 0.5 cm, and 0.25 cm x 0.25 cm coarse-mesh cases.
Multiple beam energies were also tested for each case. The COMET solutions for each case were compared to a reference solution obtained from pure Monte Carlo simulation with EGSnrc. When comparing the COMET results to the reference cases, a pattern of differences appeared in each phantom case. Better results were obtained for lower-energy incident photon beams as well as for larger mesh sizes; changes may need to be made to the expansion orders used for energy and angle to better model high-energy secondary electrons. Heterogeneity did not pose a problem for the COMET methodology: heterogeneous results were obtained in a comparable amount of time to the homogeneous water phantom. The COMET results were typically obtained in minutes to hours of computational time, whereas the reference cases typically required hundreds or thousands of hours. A second sensitivity study was also performed on a more stringent problem and with smaller coarse meshes. Previously, the same expansion order had been used for each incident photon beam energy so that better comparisons could be made; this second study found it optimal to use different expansion orders depending on the incident beam energy. Recommendations for future work with this method include testing higher expansion orders, or possibly modifying the code to better handle secondary electrons. The method also needs to handle more clinically relevant beam descriptions, with an energy and angular distribution associated with them.
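The linear superposition at the heart of the method can be sketched in a few lines. The data layout below (one stored response per unique local-problem type, one value per voxel) is a toy stand-in for the actual COMET response-function library:

```python
def global_dose(mesh_ids, library, weights):
    """Toy linear superposition of precomputed response functions.

    library maps a unique local-problem id to its response (here simply
    dose per unit incoming weight, one value per voxel); weights are the
    coupling coefficients found by the global solver. All names are
    illustrative, not the actual COMET data structures.
    """
    n_vox = len(next(iter(library.values())))
    dose = [0.0] * n_vox
    for mesh, w in zip(mesh_ids, weights):
        resp = library[mesh]  # identical meshes reuse one stored solution
        for i in range(n_vox):
            dose[i] += w * resp[i]
    return dose
```

Because identical meshes share one precomputed response, the global sweep is cheap, which is consistent with COMET solutions arriving in minutes to hours while the Monte Carlo references take hundreds or thousands of hours.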
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, Timothy P.; Martz, Roger L.; Kiedrowski, Brian C.
New unstructured mesh capabilities in MCNP6 (developmental version during summer 2012) show potential for conducting multi-physics analyses by coupling MCNP to a finite element solver such as Abaqus/CAE [2]. Before these new capabilities can be utilized, the ability of MCNP to accurately estimate eigenvalues and pin powers using an unstructured mesh must first be verified. Previous work to verify the unstructured mesh capabilities in MCNP was accomplished using the Godiva sphere [1], and this work attempts to build on that. To accomplish this, a criticality benchmark and a fuel assembly benchmark were used for calculations in MCNP using both the Constructive Solid Geometry (CSG) native to MCNP and the unstructured mesh geometry generated using Abaqus/CAE. The Big Ten criticality benchmark [3] was modeled due to its geometry being similar to that of a reactor fuel pin. The C5G7 3-D Mixed Oxide (MOX) Fuel Assembly Benchmark [4] was modeled to test the unstructured mesh capabilities on a reactor-type problem.
The Suite for Embedded Applications and Kernels
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-05-10
Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed SEAK, a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions to them, and (b) to facilitate rigorous, objective, end-user evaluation of those solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user blackbox evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.
Paraffin problems in crude oil production and transportation: A review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Misra, S.; Baruah, S.; Singh, K.
1995-02-01
Problems related to crystallization and deposition of paraffin waxes during production and transportation of crude oil cause losses of billions of dollars yearly to the petroleum industry. The goal of this paper is to present the knowledge on such problems in a systematic and comprehensive form. The fundamental aspects of these problems are defined, and the characterization of paraffins and their solubility tendencies are discussed. It has been established conclusively that n-paraffins are predominantly responsible for this problem. A comprehensive discussion of the mechanism of crystallization of paraffins is included. Compounds other than n-paraffins, especially asphaltenes and resins, have profound effects on the solubility of n-paraffins. In evaluating the wax potential of a crude, the climate of the area concerned should be considered. Under the most favorable conditions, n-paraffins form clearly defined orthorhombic crystals, but unfavorable conditions and the presence of impurities lead to hexagonal and/or amorphous crystallization. The gelation characteristics are affected in the same way. An attempt was made to classify the paraffin problems into those resulting from high pipeline pressure, high restarting pressure, and deposition on pipe surfaces. Fundamental aspects and mechanisms of these problems are described. Wax deposition depends on flow rate, the temperature differential between the crude and the pipe surface, the cooling rate, and surface properties. Finally, methods available in the literature for predicting these problems and evaluating mitigation techniques are reviewed. The available methods present a very diversified picture; hence, using them to evaluate these problems becomes taxing. A top priority is standardizing these methods for the benefit of the industry. 56 refs.
The ab-initio density matrix renormalization group in practice.
Olivares-Amaya, Roberto; Hu, Weifeng; Nakatani, Naoki; Sharma, Sandeep; Yang, Jun; Chan, Garnet Kin-Lic
2015-01-21
The ab-initio density matrix renormalization group (DMRG) is a tool that can be applied to a wide variety of interesting problems in quantum chemistry. Here, we examine the density matrix renormalization group from the vantage point of the quantum chemistry user. What kinds of problems is the DMRG well-suited to? What are the largest systems that can be treated at practical cost? What sort of accuracies can be obtained, and how do we reason about the computational difficulty in different molecules? By examining a diverse benchmark set of molecules: π-electron systems, benchmark main-group and transition metal dimers, and the Mn-oxo-salen and Fe-porphine organometallic compounds, we provide some answers to these questions, and show how the density matrix renormalization group is used in practice.
Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture
NASA Astrophysics Data System (ADS)
Meng, Chunfang
2017-03-01
We present Defmod, an open source (linear) finite element code that enables us to efficiently model crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow, and frictional fault slip. The original code (Ali, 2015) provides an implicit solver for (quasi-)static problems and an explicit solver for dynamic problems; the fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static and dynamic states. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against some established results.
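A sketch of how such a hybrid solver might decide which state to run. The function name, thresholds, and the Coulomb form below are assumptions made for illustration, not Defmod's actual interface:

```python
def choose_solver(shear, normal, mu=0.6, cohesion=0.0,
                  slipping=False, slip_rate=0.0, v_quiet=1e-9):
    """Illustrative hybrid-solver state switch (all names hypothetical).

    Stay with the implicit quasi-static solver until a Coulomb failure
    criterion is met, run the explicit dynamic solver while the fault
    slips, and drop back to quasi-static once the slip rate falls below
    a quiescence threshold v_quiet (m/s).
    """
    if not slipping:
        coulomb = shear - (cohesion + mu * normal)  # >= 0 means failure
        return "dynamic" if coulomb >= 0.0 else "quasi-static"
    return "quasi-static" if slip_rate < v_quiet else "dynamic"
```

The point of the switch is efficiency: long inter-seismic loading periods take large implicit steps, while the short rupture episodes they trigger are resolved explicitly.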
Encoding color information for visual tracking: Algorithms and benchmark.
Liang, Pengpeng; Blasch, Erik; Ling, Haibin
2015-12-01
While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.
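As a minimal illustration of "encoding color information", here is a toy chromatic descriptor: a hue histogram built with the standard-library colorsys module. It is a stand-in for the ten chromatic models evaluated in the paper, which are not reproduced here:

```python
import colorsys

def hue_histogram(pixels, bins=8):
    """Encode an RGB patch as a normalized hue histogram.

    pixels is a list of (r, g, b) tuples in 0..255. This is a toy
    chromatic encoding, not one of the paper's ten models.
    """
    hist = [0] * bins
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        hist[min(int(h * bins), bins - 1)] += 1  # h is in [0, 1)
    total = float(len(pixels))
    return [count / total for count in hist]
```

A tracker would compare such descriptors between the target template and candidate windows, which is where the choice of chromatic model starts to matter.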
The cost of simplifying air travel when modeling disease spread.
Lessler, Justin; Kaufman, James H; Ford, Daniel A; Douglas, Judith V
2009-01-01
Air travel plays a key role in the spread of many pathogens. Modeling the long distance spread of infectious disease in these cases requires an air travel model. Highly detailed air transportation models can be overdetermined and computationally problematic. We compared the predictions of a simplified air transport model with those of a model of all routes and assessed the impact of differences on models of infectious disease. Using U.S. ticket data from 2007, we compared a simplified "pipe" model, in which individuals flow in and out of the air transport system based on the number of arrivals and departures from a given airport, to a fully saturated model where all routes are modeled individually. We also compared the pipe model to a "gravity" model where the probability of travel is scaled by physical distance; the gravity model did not differ significantly from the pipe model. The pipe model roughly approximated actual air travel, but tended to overestimate the number of trips between small airports and underestimate travel between major east and west coast airports. For most routes, the maximum number of false (or missed) introductions of disease is small (<1 per day), but for a few routes this rate is greatly underestimated by the pipe model. If our interest is in large-scale regional and national effects of disease, the simplified pipe model may be adequate. If we are interested in specific effects of interventions on particular air routes or the time for the disease to reach a particular location, a more complex point-to-point model will be more accurate. For many problems a hybrid model that independently models some frequently traveled routes may be the best choice. Regardless of the model used, the effect of simplifications and sensitivity to errors in parameter estimation should be analyzed.
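The contrast between the pipe and gravity allocations can be sketched with made-up airport data. The per-origin renormalization below is an illustrative simplification, not the paper's fitted model:

```python
def pipe_model(departures, arrivals):
    """Pipe model sketch: a traveler leaving airport i is routed to
    airport j in proportion to j's share of total arrivals; there is
    no route-level structure at all."""
    probs = {}
    for i in departures:
        w = {j: arrivals[j] for j in arrivals if j != i}
        z = float(sum(w.values()))
        probs[i] = {j: wj / z for j, wj in w.items()}
    return probs

def gravity_model(departures, arrivals, dist):
    """Gravity model sketch: the same arrival weights, scaled down by
    physical distance between origin and destination."""
    probs = {}
    for i in departures:
        w = {j: arrivals[j] / dist[i][j] for j in arrivals if j != i}
        z = float(sum(w.values()))
        probs[i] = {j: wj / z for j, wj in w.items()}
    return probs
```

With the same arrival counts, the gravity model shifts probability mass from distant airports to nearby ones, which is exactly the correction the paper found to make little difference in practice.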
NASA Astrophysics Data System (ADS)
Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhou, Lei
2018-03-01
The artificial ground freezing (AGF) method is widely used in civil and mining engineering, and the thermal regime of the frozen soil around the freezing pipe affects the safety of design and construction. The thermal parameters can be truly random due to the heterogeneity of soil properties, which leads to randomness in the thermal regime of frozen soil around the freezing pipe. The purpose of this paper is to study the one-dimensional (1D) random thermal regime problem on the basis of a stochastic analysis model and the Monte Carlo (MC) method. Treating the uncertain thermal parameters of frozen soil as random variables, stochastic processes, and random fields, the corresponding stochastic thermal regimes of frozen soil around a single freezing pipe are obtained and analyzed. By taking the variability of each stochastic parameter into account individually, the influence of each stochastic thermal parameter on the stochastic thermal regime is investigated. The results show that the mean temperatures of frozen soil around the single freezing pipe are the same for the three analogy methods, while the standard deviations differ. The distributions of standard deviation differ greatly between radial coordinate locations, and the larger standard deviations occur mainly in the phase change area. The temperatures computed with the random variable method and the stochastic process method differ greatly from the measured data, while those computed with the random field method agree well with the measured data. Each uncertain thermal parameter has a different effect on the standard deviation of the frozen soil temperature around the single freezing pipe. These results can provide a theoretical basis for the design and construction of AGF.
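A minimal Monte Carlo sketch in the spirit of the random-variable method: sample uncertain inputs, evaluate a deterministic temperature model, and collect the mean and standard deviation at a radius. For simplicity it uses the steady radial log profile around a single pipe and treats the boundary temperatures as the uncertain inputs; all numbers are illustrative, not the paper's soil parameters:

```python
import math
import random

def mc_temperature(r, r_pipe=0.05, r_far=2.0, n=20000, seed=1):
    """Monte Carlo sketch (random-variable method) of the temperature at
    radius r (m) around a single freezing pipe. The pipe-wall and
    far-field temperatures are the random inputs here; values are
    illustrative placeholders."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        t_pipe = rng.gauss(-30.0, 2.0)  # freezing-pipe wall temperature, degC
        t_far = rng.gauss(15.0, 1.5)    # undisturbed ground temperature, degC
        # Steady radial conduction between two fixed-temperature circles
        frac = math.log(r / r_pipe) / math.log(r_far / r_pipe)
        samples.append(t_pipe + (t_far - t_pipe) * frac)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var)
```

The paper's stochastic-process and random-field treatments replace the scalar draws with spatially or temporally correlated samples, which is what changes the standard deviation field while leaving the mean unchanged.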
Analytical study of the liquid phase transient behavior of a high temperature heat pipe. M.S. Thesis
NASA Technical Reports Server (NTRS)
Roche, Gregory Lawrence
1988-01-01
The transient operation of the liquid phase of a high temperature heat pipe is studied. The study was conducted in support of advanced heat pipe applications that require reliable heat transport at high temperatures, with small temperature drops, over significant distances, under a broad spectrum of operating conditions. The heat pipe configuration studied consists of a sealed cylindrical enclosure containing a capillary wick structure and sodium working fluid. The wick is an annular flow channel formed between the enclosure interior wall and a concentric cylindrical tube of fine pore screen. The study approach is analytical, through the solution of the governing equations. The energy equation is solved over the pipe wall and liquid region using the finite difference Peaceman-Rachford alternating direction implicit numerical method. The continuity and momentum equations are solved over the liquid region by the integral method. The energy equation and the liquid dynamics equation are tightly coupled due to the phase change process at the liquid-vapor interface. A kinetic theory model is used to define the phase change process in terms of the temperature jump between the liquid-vapor surface and the bulk vapor. Extensive auxiliary relations, including sodium properties as functions of temperature, are used to close the analytical system. The solution procedure is implemented in a FORTRAN algorithm with some optimization features to take advantage of the IBM System/370 Model 3090 vectorization facility. The code was intended for coupling to a vapor phase algorithm so that the entire heat pipe problem could be solved. As a test of code capabilities, the vapor phase was approximated in a simple manner.
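Peaceman-Rachford ADI advances each half time step by solving tridiagonal systems, one per grid line, via the Thomas algorithm. Below is the 1D building block of such a sweep, a sketch of the numerical kernel only, not the thesis's coupled wall-liquid solver:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a sub-, b main, c super-diagonal, d RHS."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = (c[i] / m) if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_heat_step(u, alpha, dx, dt):
    """One implicit step of u_t = alpha * u_xx with fixed-value ends.
    In Peaceman-Rachford ADI, each spatial direction takes one such
    tridiagonal sweep per half time step."""
    r = alpha * dt / dx ** 2
    n = len(u)
    a = [0.0] + [-r] * (n - 2) + [0.0]
    b = [1.0] + [1.0 + 2.0 * r] * (n - 2) + [1.0]
    c = [0.0] + [-r] * (n - 2) + [0.0]
    return thomas(a, b, c, list(u))
```

Each sweep costs O(n) rather than the O(n^3) of a general solve, which is what makes the implicit method practical on 1988-era hardware.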
Burner balancing Salem Harbor Station
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sload, A.W.; Dube, R.J.
The traditional method of burner balancing is first to determine the fuel distribution, then to measure the economizer outlet excess oxygen distribution and to adjust the burners accordingly. Fuel distribution is typically measured by clean and dirty air probing. Coal pipe flow can then be adjusted, if necessary, through the use of coal pipe orificing or by other means. Primary air flow must be adjusted to meet the design criteria of the burner. Once coal pipe flow is balanced to within the desired criteria, secondary air flow to individual burners can be changed by adjusting windbox dampers, burner registers, shrouds or other devices in the secondary air stream. This paper discusses problems encountered in measuring excess O2 at the economizer outlet. It is important to recognize that O2 measurements at the economizer outlet, by themselves, can be very misleading. If measurement problems are suspected or encountered, an alternate approach similar to that described should be considered. The alternate method is not only useful for burner balancing but also can be used to help calibrate the plant excess O2 instruments and provide an on-line means of cross-checking excess air measurements. Balanced burners operate closer to their design stoichiometry, providing better NOx reduction. For Salem Harbor Station, this means a significant saving in urea consumption.
NASA Astrophysics Data System (ADS)
Izah Anuar, Nurul; Saptari, Adi
2016-02-01
This paper addresses particle representation (encoding) procedures used by a population-based stochastic optimization technique to solve scheduling problems in the job-shop manufacturing environment. It evaluates and compares the performance of different particle representation procedures in Particle Swarm Optimization (PSO) when solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between the particle position in PSO and the scheduling solution in JSP. This mapping is an essential step, as it allows each particle in PSO to represent a schedule in JSP. Three procedures are used in this study: Operation and Particle Position Sequence (OPPS), random-keys representation, and the random-key encoding scheme. These procedures have been tested on the FT06 and FT10 benchmark problems available in the OR-Library, where the objective is to minimize the makespan, using MATLAB. Based on the experimental results, OPPS gives the best performance on both benchmark problems. The contribution of this paper is that it demonstrates to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
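Random-key decoding, one of the procedures compared, maps a continuous particle position to a job sequence by ranking its coordinates. The sketch below is a common textbook variant of the decoding, not necessarily the exact scheme tested in the paper:

```python
def decode_random_keys(position, n_jobs, n_machines):
    """Random-key decoding for JSP: rank the continuous coordinates of a
    particle; rank block r // n_machines is the job owning that slot, so
    each job id appears exactly n_machines times in the result.
    A common textbook variant, not necessarily the paper's exact scheme."""
    assert len(position) == n_jobs * n_machines
    order = sorted(range(len(position)), key=lambda i: position[i])
    rank = [0] * len(position)
    for r, i in enumerate(order):
        rank[i] = r
    # Read left to right: an operation-based schedule (repetition encoding)
    return [rank[i] // n_machines for i in range(len(position))]
```

Because any real-valued position decodes to a feasible operation sequence, standard continuous PSO velocity updates can be applied without repair operators.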
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilkowski, Gery M.; Rudland, David L.; Shim, Do-Jun
2008-06-30
The potential to save trillions of BTUs in energy usage and billions of dollars in cost on an annual basis through the use of higher strength steel in major oil and gas transmission pipeline construction is a compelling opportunity recognized by the US Department of Energy (DOE). The use of high-strength steels (X100) is expected to result in energy savings across the spectrum, from manufacturing the pipe to transportation and fabrication, including welding of line pipe. Elementary examples of energy savings include more than 25 trillion BTUs saved annually based on lower energy costs to produce the thinner-walled high-strength steel pipe, with the US part of the Alaskan pipeline alone potentially saving more than 7 trillion BTUs in production and much more in transportation and assembly. Annual production, maintenance and installation of just US domestic transmission pipeline is likely to save 5 to 10 times this amount based on current planned and anticipated expansions of oil and gas lines in North America. Among the most important conclusions from these studies were: • While computational weld models to predict residual stress and distortions are well-established and accurate, related microstructure models need improvement. • The Fracture Initiation Transition Temperature (FITT) Master Curve properly predicts the surface-cracked pipe brittle-to-ductile initiation temperature. It has value in developing codes and standards to better correlate full-scale behavior from either CTOD or Charpy test results, using the proper temperature shifts from the FITT master curve method. • For stress-based flaw evaluation criteria, the new circumferentially cracked pipe limit-load solution in the 2007 API 1104 Appendix A approach is overly conservative by a factor of 4/π, which has additional implications.
• For strain-based design of girth weld defects, the hoop stress effect is the most significant parameter impacting the CTOD driving force and can increase the crack-driving force by a factor of 2, depending on strain-hardening, pressure level as a percentage of SMYS, and flaw size. • Despite years of experience in circumferential fracture analyses and experimentation, there has not been sufficient integration of work performed for other industries into analogous problems facing the oil and gas pipeline markets. Some very basic concepts and problems solved previously in these fields could have circumvented inconsistencies seen in the stress-based and strain-based analysis efforts. For example, in nuclear utility piping work, more detailed elastic-plastic fracture analyses were always validated in their ability to predict loads and displacements (stresses and strains). The eventual implementation of these methodologies will accelerate industry adoption of higher-strength line-pipe steels.
Principles for Developing Benchmark Criteria for Staff Training in Responsible Gambling.
Oehler, Stefan; Banzer, Raphaela; Gruenerbl, Agnes; Malischnig, Doris; Griffiths, Mark D; Haring, Christian
2017-03-01
One approach to minimizing the negative consequences of excessive gambling is staff training to reduce the rate of development of new cases of harm or disorder among customers. The primary goal of the present study was to assess suitable benchmark criteria for the training of gambling employees at casinos and lottery retailers. The study utilised the Delphi Method, a survey with one qualitative and two quantitative phases. A total of 21 invited international experts in the responsible gambling field participated in all three phases. A total of 75 performance indicators were outlined and assigned to six categories: (1) criteria of content, (2) modelling, (3) qualification of trainer, (4) framework conditions, (5) sustainability and (6) statistical indicators. Nine of the 75 indicators were rated as very important by 90% or more of the experts. Unanimous support was given to indicators such as (1) comprehensibility and (2) concrete action-guidance for dealing with problem gamblers. Additionally, the study examined the implementation of benchmarking, when it should be conducted, and who should be responsible. Results indicated that benchmarking should be conducted regularly, every 1-2 years, and that one institution should be clearly defined and primarily responsible for benchmarking. The results of the present study provide the basis for developing benchmarking for staff training in responsible gambling.
Hierarchical Artificial Bee Colony Algorithm for RFID Network Planning Optimization
Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong
2014-01-01
This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization, called HABC, to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species can be aggregated by the subpopulations from lower level. In the bottom level, each subpopulation employing the canonical ABC method searches the part-dimensional optimum in parallel, which can be constructed into a complete solution for the upper level. At the same time, the comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. Then HABC is used for solving the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP, in terms of optimization accuracy and computation robustness. PMID:24592200
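The canonical ABC that each bottom-level subpopulation runs can be sketched compactly. Parameter values below are illustrative, and the objective is assumed non-negative (e.g. the sphere function) for the fitness transform used by the onlooker roulette:

```python
import random

def abc_minimize(f, dim=5, n_food=10, limit=20, iters=200,
                 lo=-5.0, hi=5.0, seed=0):
    """Minimal canonical ABC (employed/onlooker/scout phases), the
    bottom-level search that HABC runs in each subpopulation.
    Illustrative sketch; f is assumed non-negative."""
    rng = random.Random(seed)
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fit = [f(x) for x in foods]
    trials = [0] * n_food

    def greedy(i):
        # Perturb one dimension of food source i toward a random partner;
        # keep the move only if it improves the objective.
        k = rng.randrange(n_food - 1)
        k = k if k < i else k + 1
        j = rng.randrange(dim)
        x = list(foods[i])
        x[j] += rng.uniform(-1.0, 1.0) * (foods[i][j] - foods[k][j])
        x[j] = min(hi, max(lo, x[j]))
        fx = f(x)
        if fx < fit[i]:
            foods[i], fit[i], trials[i] = x, fx, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                  # employed bees
            greedy(i)
        for _ in range(n_food):                  # onlookers: roulette wheel
            w = [1.0 / (1.0 + v) for v in fit]
            r, acc, i = rng.uniform(0.0, sum(w)), 0.0, 0
            while i < n_food - 1 and acc + w[i] < r:
                acc += w[i]
                i += 1
            greedy(i)
        for i in range(n_food):                  # scouts abandon stale sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fit[i], trials[i] = f(foods[i]), 0

    best = min(range(n_food), key=lambda i: fit[i])
    return foods[best], fit[best]
```

HABC layers aggregation, crossover, and mutation on top of such subpopulations; this sketch shows only the canonical part-dimensional search at the bottom level.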
NASA Astrophysics Data System (ADS)
Umbarkar, A. J.; Balande, U. T.; Seth, P. D.
2017-06-01
The field of nature-inspired computing and optimization techniques has evolved to solve difficult optimization problems in diverse fields of engineering, science and technology. The Firefly Algorithm (FA) mimics the firefly attraction process to solve optimization problems, and ranks the fireflies using a sorting algorithm. The original FA was proposed with bubble sort for ranking the fireflies. In this paper, quicksort replaces bubble sort to decrease the time complexity of FA. The test set consists of unconstrained benchmark functions from CEC 2005 [22]. FA with bubble sort and FA with quicksort are compared with respect to best, worst and mean values, standard deviation, number of comparisons, and execution time. The experimental results show that FA with quicksort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and the algorithm performed better at lower dimensions than at higher ones.
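The comparison-count difference between the two ranking strategies is easy to demonstrate: naive bubble sort always performs n(n-1)/2 comparisons, while quicksort needs roughly n log n on shuffled brightness values. A small counting demo (illustrative, not the paper's implementation):

```python
def bubble_sort_rank(vals):
    """Naive bubble sort; returns (sorted values, comparison count)."""
    a, comps = list(vals), 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comps += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comps

def quick_sort_rank(vals):
    """Simple first-element-pivot quicksort with a comparison counter
    (one logical comparison per element partitioned against the pivot)."""
    comps = 0

    def qs(a):
        nonlocal comps
        if len(a) <= 1:
            return a
        pivot, rest = a[0], a[1:]
        comps += len(rest)
        return (qs([x for x in rest if x < pivot]) + [pivot] +
                qs([x for x in rest if x >= pivot]))

    return qs(list(vals)), comps
```

That quicksort nonetheless cost more wall-clock time in the paper's experiments is plausibly down to constant factors and implementation details, which the abstract does not break down.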
ASSESSING ECOLOGICAL RISKS AT LARGE SPATIAL SCALES
The history of environmental management and regulation in the United States has been one of movement from an initial focus on localized, end-of-the-pipe problems to increasing attention to multi-scalar, multi-stressor, and multi-resource issues. Concomitant with this reorientation is the need fo...
EXFILTRATION IN SANITARY SEWER SYSTEMS IN THE U.S.
Many municipalities throughout the US have sewerage systems (separate and combined) that may experience exfiltration of untreated wastewater. This study was conducted to focus on the magnitude of the exfiltration problem from sewer pipes on a national basis. The method for estima...
UNSOLVED PROBLEMS WITH CORROSION AND DISTRIBUTION SYSTEM INORGANICS
This presentation provides an overview of new research results and remaining research needs with respect to both corrosion control issues (lead, copper, iron) and to issues of inorganic contaminants that can form or accumulate in distribution system water, pipe scales and distrib...
The Paucity Problem: Where Have All the Space Reactor Experiments Gone?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Marshall, Margaret A.
2016-10-01
The Handbooks of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) together contain a plethora of documented and evaluated experiments essential to the validation of nuclear data, neutronics codes, and models of various nuclear systems. Unfortunately, only a minute selection of handbook data (twelve evaluations) is of actual experimental facilities and mockups designed specifically for space nuclear research. There is a paucity problem: the multitude of space nuclear experimental activities performed over the past several decades have yet to be recovered and made available in such detail that the international community could benefit from these valuable historical research efforts. Those experiments represent extensive investments in infrastructure, expertise, and cost, and constitute significantly valuable resources of data supporting past, present, and future research activities. The ICSBEP and IRPhEP were established to identify and verify comprehensive sets of benchmark data; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data. See full abstract in attached document.
NASA Astrophysics Data System (ADS)
Maestrelli, Daniele; Jihad, Ali; Iacopini, David; Bond, Clare
2016-04-01
Fluid escape pipes are key features of primary interest for the analysis of vertical fluid flow and secondary hydrocarbon migration in sedimentary basins. Identified worldwide (Løset et al., 2009), they have acquired increasing importance as they represent critical pathways for the supply of methane and potential leakage structures in storage reservoirs (Cartwright & Santamarina, 2015). Therefore, understanding their genesis, internal characteristics, and seismic expression is of great significance for the exploration industry. Here we propose a detailed characterization of the internal seismic texture of seal bypass systems (e.g., fluid escape pipes) from a 4D seismic survey (released by BP) recently acquired in the Loyal Field. The seal bypass structures are large-scale fluid escape pipes affecting the Upper Paleogene/Neogene stratigraphic succession in the Loyal Field, Scotland (UK). The Loyal Field is located on the edge of the Faroe-Shetland Channel slope, about 130 km west of Shetland (Quadrants 204/205 of the UKCS), and has recently been re-appraised and redeveloped by a consortium led by BP. Detailed 3D mapping of the full and partial stack surveys (processed using amplitude preservation workflows) shows a complex system of fluid pipe structures rooted in the pre-Lista Formation and developed across the Paleogene and Neogene units. Geometrical analysis shows that the pipes have diameters of 100-300 m and lengths of 500 m to 2 km. Most pipes seem to terminate abruptly at discrete subsurface horizons or in diffuse terminations, suggesting multiple overpressure events and lateral fluid migration (through Darcy flow) across the overburden units.
Internal texture analysis of the large pipes (across both the root and main conduit zones), using near-, mid-, and far-offset stack datasets (processed through an amplitude-preserving PSTM workflow), shows a tendency toward up-bending of reflections (rather than pull-up artefacts) affected by large-scale fractures (semblance imaging), consistent with a suspended, non-fluidized mud/sand mixture flow. Near-, mid-, and far-offset amplitude analysis confirms that most of the amplitude anomalies within the pipe conduits and termini are only partly related to gas. An interpretation of the observed textures is proposed, with a discussion of the noise and artefacts induced by resolution and migration problems. Possible formation mechanisms for these pipes are discussed.
An automatic detection method for the boiler pipe header based on real-time image acquisition
NASA Astrophysics Data System (ADS)
Long, Yi; Liu, YunLong; Qin, Yongliang; Yang, XiangWei; Li, DengKe; Shen, DingJie
2017-06-01
Generally, an endoscope is used to inspect the interior of a thermal power plant boiler pipe header. However, because the endoscope hose is operated manually, the length and angle of the inserted probe cannot be controlled, and there is a large observation blind spot limited by the length of the endoscope wire. To solve these problems, an automatic detection method for the boiler pipe header based on real-time image acquisition and simulation comparison techniques was proposed. A magnetic crawler with permanent-magnet wheels carries the real-time image acquisition device, performing the crawling work and collecting live scene images. Using the location obtained from a positioning auxiliary device, the position of each real-time detection image is calibrated in a virtual 3-D model. By comparing the real-time detection images with computer simulation images, defects or fallen-in foreign matter can be accurately located, facilitating repair and cleaning.
On solving the compressible Navier-Stokes equations for unsteady flows at very low Mach numbers
NASA Technical Reports Server (NTRS)
Pletcher, R. H.; Chen, K.-H.
1993-01-01
The properties of a preconditioned, coupled, strongly implicit finite difference scheme for solving the compressible Navier-Stokes equations in primitive variables are investigated for two unsteady flows at low speeds, namely the impulsively started driven cavity and the startup of pipe flow. For the shear-driven cavity flow, the computational effort was observed to be nearly independent of Mach number, especially at the low end of the range considered. This Mach number independence was also observed for steady pipe flow calculations; however, rather different conclusions were drawn for the unsteady calculations. In the pressure-driven pipe startup problem, the compressibility of the fluid began to significantly influence the physics of the flow development at quite low Mach numbers. The present scheme was observed to produce the expected characteristics of completely incompressible flow when the Mach number was set at very low values. Good agreement with incompressible results available in the literature was observed.
Sinha, S K; Karray, F
2002-01-01
Pipeline surface defects such as holes and cracks cause major problems for utility managers, particularly when the pipeline is buried underground. Manual inspection for surface defects in the pipeline has a number of drawbacks, including subjectivity, varying standards, and high costs. An automatic inspection system using image processing and artificial intelligence techniques can overcome many of these disadvantages and offer utility managers an opportunity to significantly improve quality and reduce costs. An approach for recognition and classification of pipe cracks using image analysis and a neuro-fuzzy algorithm is proposed. In the preprocessing step, the scanned images of the pipe are analyzed and crack features are extracted. In the classification step, a neuro-fuzzy algorithm is developed that employs a fuzzy membership function and the error backpropagation algorithm. The idea behind the proposed approach is that the fuzzy membership function will absorb variation in feature values, and the backpropagation network, with its learning ability, will deliver good classification efficiency.
Evaluating Biology Achievement Scores in an ICT Integrated PBL Environment
ERIC Educational Resources Information Center
Osman, Kamisah; Kaur, Simranjeet Judge
2014-01-01
Students' achievement in Biology is often used as a benchmark to evaluate the mode of teaching and learning in higher education. Problem-based learning (PBL) is an approach in which students solve a problem in collaborative groups. Eighty samples were involved in this study, divided into three groups: ICT…
Wilderness visitor management practices: a benchmark and an assessment of progress
Alan E. Watson
1989-01-01
In the short time that wilderness visitor management practices have been monitored, some obvious trends have developed. The managing agencies, however, have appeared to provide different solutions to similar problems. In the early years, these problems revolved around concern about overuse of the resource and crowded conditions. Some of those concerns exist today, but...
A multiagent evolutionary algorithm for constraint satisfaction problems.
Liu, Jing; Zhong, Weicai; Jiao, Licheng
2006-02-01
With the intrinsic properties of constraint satisfaction problems (CSPs) in mind, we divide CSPs into two types, namely, permutation CSPs and nonpermutation CSPs. According to their characteristics, several behaviors are designed for agents by making use of the ability of agents to sense and act on the environment. These behaviors are controlled by means of evolution, so that the multiagent evolutionary algorithm for constraint satisfaction problems (MAEA-CSPs) results. To overcome the disadvantages of the general encoding methods, the minimum conflict encoding is also proposed. Theoretical analyses show that MAEA-CSPs has a linear space complexity and converges to the global optimum. The first part of the experiments uses 250 benchmark binary CSPs and 79 graph coloring problems from the DIMACS challenge to test the performance of MAEA-CSPs for nonpermutation CSPs. MAEA-CSPs is compared with six well-defined algorithms and the effect of the parameters is analyzed systematically. The second part of the experiments uses a classical CSP, the n-queens problem, and a more practical case, job-shop scheduling problems (JSPs), to test the performance of MAEA-CSPs for permutation CSPs. The scalability of MAEA-CSPs in n for n-queens problems is studied with great care. The results show that MAEA-CSPs achieves good performance as n increases from 10^4 to 10^7, and has a linear time complexity. Even for 10^7-queens problems, MAEA-CSPs finds solutions within 150 seconds. For JSPs, 59 benchmark problems are used, and good performance is also obtained.
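The n-queens formulation used in these experiments is easy to reproduce. The sketch below is not MAEA-CSPs itself but the classic min-conflicts local search on the same problem, which serves as a common baseline for such scalability studies; the function name and parameters are illustrative.

```python
import random

def min_conflicts_nqueens(n, max_steps=20000, seed=0):
    """Classic min-conflicts search for n-queens: one queen per column;
    repeatedly move a conflicted queen to the row minimizing its conflicts."""
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]

    def conflicts(col, row):
        return sum(1 for c in range(n)
                   if c != col and (rows[c] == row
                                    or abs(rows[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, rows[c]) > 0]
        if not conflicted:
            return rows
        col = rng.choice(conflicted)
        # random tie-breaking among minimal-conflict rows helps escape plateaus
        rows[col] = min(range(n), key=lambda r: (conflicts(col, r), rng.random()))
    return None

# A few random restarts; min-conflicts almost always succeeds quickly.
sol = next(s for s in (min_conflicts_nqueens(20, seed=k) for k in range(10)) if s)
```

Like MAEA-CSPs, min-conflicts uses O(n) space for the assignment, which is why very large n (up to 10^7 in the paper) is feasible for this class of method.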
Better Medicare Cost Report data are needed to help hospitals benchmark costs and performance.
Magnus, S A; Smith, D G
2000-01-01
To evaluate costs and achieve cost control in the face of new technology and demands for efficiency from both managed care and governmental payers, hospitals need to benchmark their costs against those of other comparable hospitals. Since they typically use Medicare Cost Report (MCR) data for this purpose, a variety of cost accounting problems with the MCR may hamper hospitals' understanding of their relative costs and performance. Managers and researchers alike need to investigate the validity, accuracy, and timeliness of the MCR's cost accounting data.
DOT National Transportation Integrated Search
1985-03-01
Louisiana's Office of Highways reacts to a major problem when it attempts to shape and control drainage patterns along its right-of-ways. The Office's design engineers meet this challenge through proper section design and appropriate application of d...
COPPER PITTING CORROSION: A CASE STUDY
Localized or pitting corrosion of copper pipes used in household drinking-water plumbing is a problem for many water utilities and their customers. Extreme attack can lead to pinhole water leaks that may result in water damage, mold growth, and costly repairs. Water quality has b...
THE IMPACT OF PHOSPHATE ON COPPER PITTING CORROSION
Pinhole leaks caused by extensive localized or pitting corrosion of copper pipes is a problem for many homeowners. Pinhole water leaks may result in water damage, mold growth, and costly repairs. A large water system in Florida has been addressing a widespread pinhole leak proble...
DOT National Transportation Integrated Search
1981-11-01
The Louisiana Department of Transportation and Development reacts to a major problem when it attempts to shape and control drainage patterns along its right-of-ways. The Department's design engineers meet this challenge through proper section design ...
Water utilities adjust water quality treatment procedures to minimize corrosion and to remain in compliance with local, state, and federal regulations. Some treatment changes though can adversely affect tubercle stability and cause red water and/or other related problems. Therefo...
Reconstruction of gas distribution pipelines in MOZG in Poland using PE and PA pipes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borowicz, W.; Podziemski, T.; Kramek, E.
1996-12-31
MOZG--Warsaw Regional Gas Distribution Company was established in 1856 and is now one of six gas distribution companies in Poland. Due to steadily increasing safety demands, some of the pipelines need reconstruction; the majority of the substandard piping is located in urban areas. The company wanted to gain experience in applying reconstruction technologies using two different plastic materials, polyethylene and polyamide, and to assess the technical and economic practicalities of relining. A PE project, large-diameter (450 mm) polyethylene relining conducted in Warsaw in 1994/95, and PA projects, relining with polyamide pipes conducted in Radom and in Warsaw during 1993 and 1994, are the most interesting and representative of this kind of work. Thanks to the experience obtained while carrying out these projects, reconstruction of old gas pipelines has become routine: the company now often uses polyethylene relining at smaller diameters and continues both construction and reconstruction of the gas network using PA pipes. This paper presents the accumulated knowledge, showing the advantages and disadvantages of the applied methods. It describes project design and implementation in detail and reports on the necessary preparation work, on-site job organization, and the most common problems arising during construction.
Monitoring of hot pipes at the power plant Neurath using guided waves
NASA Astrophysics Data System (ADS)
Weihnacht, Bianca; Klesse, Thomas; Neubeck, Robert; Schubert, Lars
2013-04-01
In order to reduce CO2 emissions and increase energy efficiency, the operating temperatures of power plants will be increased up to 720°C. This demands novel high-performance steels in the piping systems. Higher temperatures lead to a higher risk of damage and have a direct impact on structural stability and the deposition structure. Adequately trusted results for predicting the residual service life of these high-strength steels are not yet available. To overcome these problems, the implementation of an online monitoring system in addition to periodic testing is needed. RWE operates the lignite power plant Neurath. All test and research activities have to be checked for safety and coordinated with the business operation of the plant. An extra bypass was established for this research, making the investigations independent of power plant operation. In order to protect the actuators and sensors from the heat radiated by the pipe, waveguides were welded to the bypass. The data were evaluated for their dependence on environmental influences such as temperature, and correction algorithms were developed. Furthermore, defects with diameters of 8 mm to 10 mm were introduced into the pipe and successfully detected by the acoustic method.
Computational Efficiency of the Simplex Embedding Method in Convex Nondifferentiable Optimization
NASA Astrophysics Data System (ADS)
Kolosnitsyn, A. V.
2018-02-01
The simplex embedding method for solving convex nondifferentiable optimization problems is considered. Modifications of this method based on a shift of the cutting plane, intended to cut off the maximum number of simplex vertices, are described. These modifications speed up the solution of the problem. A numerical comparison of the efficiency of the proposed modifications, based on the numerical solution of benchmark convex nondifferentiable optimization problems, is presented.
Modeling Flue Pipes: Subsonic Flow, Lattice Boltzmann, and Parallel Distributed Computers.
1995-01-01
The problem of simulating the hydrodynamics and the acoustic waves inside wind musical instruments such as the recorder, the organ, and the flute is considered. The problem is attacked by developing suitable local... applications such as the simulation of fluid dynamics inside wind musical instruments. In the past, he has also worked on numerical methods for ordinary differential equations...
Listening to the occupants: a Web-based indoor environmental quality survey.
Zagreus, Leah; Huizenga, Charlie; Arens, Edward; Lehrer, David
2004-01-01
Building occupants are a rich source of information about indoor environmental quality and its effect on comfort and productivity. The Center for the Built Environment has developed a Web-based survey and accompanying online reporting tools to quickly and inexpensively gather, process and present this information. The core questions assess occupant satisfaction with the following IEQ areas: office layout, office furnishings, thermal comfort, indoor air quality, lighting, acoustics, and building cleanliness and maintenance. The survey can be used to assess the performance of a building, identify areas needing improvement, and provide useful feedback to designers and operators about specific aspects of building design features and operating strategies. The survey has been extensively tested and refined and has been conducted in more than 70 buildings, creating a rapidly growing database of standardized survey data that is used for benchmarking. We present three case studies that demonstrate different applications of the survey: a pre/post analysis of occupants moving to a new building, a survey used in conjunction with physical measurements to determine how environmental factors affect occupants' perceived comfort and productivity levels, and a benchmarking example of using the survey to establish how new buildings are meeting a client's design objectives. In addition to its use in benchmarking a building's performance against other buildings, the CBE survey can be used as a diagnostic tool to identify specific problems and their sources. Whenever a respondent indicates dissatisfaction with an aspect of building performance, a branching page follows with more detailed questions about the nature of the problem. This systematically collected information provides a good resource for solving indoor environmental problems in the building. By repeating the survey after a problem has been corrected it is also possible to assess the effectiveness of the solution.
NASA Technical Reports Server (NTRS)
Morris, J. F.
1981-01-01
Thermionic energy converters and metallic-fluid heat pipes are well suited to serve together synergistically. The two operating cycles appear as simple and isolated as their material problems seem forebodingly deceptive and complicated. Simplified equations verify material properties and interactions as primary influences on the operational effectiveness of both. Each experiences flow limitations in thermal emission and vaporization because of temperature restrictions redounding from thermophysicochemical stability considerations. Topics discussed include: (1) successful limitation of alkali-metal corrosion; (2) protection against external hot corrosive gases; (3) coping with external and internal vaporization; (4) controlling interfacial reactions and diffusion; and (5) meeting other thermophysical challenges; expansion matches and creep.
A suite of exercises for verifying dynamic earthquake rupture codes
Harris, Ruth A.; Barall, Michael; Aagaard, Brad T.; Ma, Shuo; Roten, Daniel; Olsen, Kim B.; Duan, Benchun; Liu, Dunyu; Luo, Bin; Bai, Kangchen; Ampuero, Jean-Paul; Kaneko, Yoshihiro; Gabriel, Alice-Agnes; Duru, Kenneth; Ulrich, Thomas; Wollherr, Stephanie; Shi, Zheqiang; Dunham, Eric; Bydlon, Sam; Zhang, Zhenguo; Chen, Xiaofei; Somala, Surendra N.; Pelties, Christian; Tago, Josue; Cruz-Atienza, Victor Manuel; Kozdon, Jeremy; Daub, Eric; Aslam, Khurram; Kase, Yuko; Withers, Kyle; Dalguer, Luis
2018-01-01
We describe a set of benchmark exercises that are designed to test if computer codes that simulate dynamic earthquake rupture are working as intended. These types of computer codes are often used to understand how earthquakes operate, and they produce simulation results that include earthquake size, amounts of fault slip, and the patterns of ground shaking and crustal deformation. The benchmark exercises examine a range of features that scientists incorporate in their dynamic earthquake rupture simulations. These include implementations of simple or complex fault geometry, off‐fault rock response to an earthquake, stress conditions, and a variety of formulations for fault friction. Many of the benchmarks were designed to investigate scientific problems at the forefronts of earthquake physics and strong ground motions research. The exercises are freely available on our website for use by the scientific community.
DOT National Transportation Integrated Search
1978-03-01
Louisiana's Office of Highways reacts to a major problem when it attempts to shape and control drainage patterns along its right-of-ways. The Office's design engineers meet this challenge through proper section design and appropriate application of d...
DOT National Transportation Integrated Search
1977-03-01
Louisiana's Office of Highways reacts to a major problem when it attempts to shape and control drainage patterns along its right-of-ways. The Office's design engineers meet this challenge through proper section design and appropriate application of d...
CHARACTERIZATION OF LOCALIZED CORROSION OF COPPER PIPES USED IN DRINKING WATER
Localized corrosion of copper, or "copper pitting," in water distribution tubing is a large problem at many utilities. Pitting can lead to pinhole leaks in less than a year. Tubing affected by copper pitting will often fail in multiple locations, resulting in a frustrating situation ...
COPPER PITTING CORROSION AND PINHOLE LEAKS: A CASE STUDY
Localized corrosion, or "pitting," of copper drinking water pipe continues to be a problem for many water utilities and their customers. Extreme attack leads to pinhole leaks that can potentially lead to water damage, mold growth, and costly repairs for homeowners, as well as th...
CASE STUDIES IN THE INTEGRATED USE OF SCALE ANALYSES TO SOLVE LEAD PROBLEMS
All methods of controlling lead corrosion involve immobilizing lead into relatively insoluble compounds that deposit on the interior wall of water pipes. Many different solid phases can form under the disparate conditions that exist in distribution systems, which range in how the...
A Comprehensive Investigation of Copper Pitting Corrosion in a Drinking Water Distribution System
Copper pipe pitting is a complicated corrosion process for which exact causes and solutions are uncertain. This paper presents the findings of a comprehensive investigation of a cold water copper pitting corrosion problem in a drinking water distribution system, including a refi...
The Effect of Water Chemistry on the Release of Iron from Pipe Walls
Colored water problems originating from distribution system materials may be reduced by controlling corrosion, iron released from corrosion scales, and better understanding of the form and properties of the iron particles. The objective of this research was to evaluate the effect...
Problems with the random number generator RANF implemented on the CDC cyber 205
NASA Astrophysics Data System (ADS)
Kalle, Claus; Wansleben, Stephan
1984-10-01
We show that using RANF may lead to wrong results when lattice models are simulated by Monte Carlo methods. We present a shift-register-sequence random number generator that generates two random numbers per cycle on a two-pipe CDC Cyber 205.
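A shift-register-sequence generator of the kind proposed here can be sketched as a lagged-XOR recurrence. The lags (250, 103) below are the classic R250 choice for such generators and are not necessarily the values used in the paper; the class and method names are illustrative.

```python
import random

class ShiftRegisterRNG:
    """Lagged-XOR shift-register generator: x[n] = x[n-p] ^ x[n-q] (R250-style)."""

    def __init__(self, p=250, q=103, seed=12345):
        r = random.Random(seed)          # seed the initial register contents
        self.p, self.q = p, q
        self.state = [r.getrandbits(32) for _ in range(p)]
        self.i = 0                       # index of the oldest word, x[n-p]

    def next_u32(self):
        j = (self.i + self.p - self.q) % self.p   # index of x[n-q]
        x = self.state[self.i] ^ self.state[j]
        self.state[self.i] = x                    # overwrite oldest with newest
        self.i = (self.i + 1) % self.p
        return x

    def uniform(self):
        return self.next_u32() / 2**32            # map to [0, 1)

rng = ShiftRegisterRNG()
u = [rng.uniform() for _ in range(1000)]
```

The appeal on vector machines like the Cyber 205 is that each XOR step is independent of the previous few, so many words of the register can be updated in one vector operation.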
Child Sacrifice: Black America's Price of Paying the Media Piper.
ERIC Educational Resources Information Center
Orange, Carolyn M.; George, Amiso M.
2000-01-01
Explores the sacrifice of African American children to the broadcast media and video games in terms of the players ("media pipers"), the messages ("piping"), and the consequences to children. Proposes some solutions for the problems associated with excessive television viewing and undesirable programming. (SLD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnis Judzis
2002-10-01
This document details progress on the OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE -- A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING contract for the quarter from July 2002 through September 2002. While awaiting the optimization portion of the testing program, accomplishments include the following: (1) Smith International agreed to participate in the DOE Mud Hammer program. (2) Smith International chromed collars for upcoming benchmark tests at TerraTek, now scheduled for 4Q 2002. (3) ConocoPhillips held a field trial of the Smith fluid hammer offshore Vietnam. The hammer functioned properly, though the well encountered hole conditions and reaming problems; ConocoPhillips plans another field trial as a result. (4) DOE/NETL extended the contract for the fluid hammer program to allow Novatek to ''optimize'' their much-delayed tool into 2003 and to allow Smith International to add ''benchmarking'' tests in light of SDS Digger Tools' current financial inability to participate. (5) ConocoPhillips joined the Industry Advisors for the mud hammer program. (6) TerraTek acknowledges Smith International, BP America, PDVSA, and ConocoPhillips for cost-sharing the Smith benchmarking tests, allowing extension of the contract to complete the optimizations.
Robust visual tracking via multiple discriminative models with object proposals
NASA Astrophysics Data System (ADS)
Zhang, Yuanqiang; Bi, Duyan; Zha, Yufei; Li, Huanyu; Ku, Tao; Wu, Min; Ding, Wenshan; Fan, Zunlin
2018-04-01
Model drift is an important cause of tracking failure. In this paper, multiple discriminative models with object proposals are used to improve model discrimination and relieve this problem. First, changes in target location and scale are captured by many high-quality object proposals, which are represented by deep convolutional features for target semantics. Then, by sharing a feature map obtained from a pre-trained network, ROI pooling is exploited to warp object proposals of various sizes into vectors of the same length, from which a discriminative model is conveniently learned. Lastly, these historical snapshot vectors are used to train models with different lifetimes. Based on an entropy decision mechanism, a model degraded by drift can be corrected by selecting the best discriminative model, which significantly improves the robustness of the tracker. We extensively evaluate our tracker on two popular benchmarks, the OTB 2013 benchmark and the UAV20L benchmark. On both, our tracker achieves the best precision and success rate compared with state-of-the-art trackers.
2012-02-09
...For this benchmark problem we contacted Bertrand LeCun, who in the CHOC project (2005-2008) had applied their parallel B&B framework BOB++ to the RLT1...
Alternative industrial carbon emissions benchmark based on input-output analysis
NASA Astrophysics Data System (ADS)
Han, Mengyao; Ji, Xi
2016-12-01
Some problems exist in current carbon emissions benchmark setting systems. The primary consideration for industrial carbon emissions standards relates largely to direct carbon emissions (power-related emissions), and only a portion of indirect emissions is considered in current carbon emissions accounting processes. This practice is insufficient and may cause some double counting due to mixed emission sources. To better integrate and quantify direct and indirect carbon emissions, an embodied industrial carbon emissions benchmark setting method is proposed to guide the establishment of carbon emissions benchmarks based on input-output analysis. This method links direct carbon emissions with inter-industry economic exchanges and systematically quantifies the carbon emissions embodied in total product delivery chains. The purpose of this study is to design a practical new set of embodied-intensity-based benchmarks for both direct and indirect carbon emissions. Beijing, in the first tier of carbon emissions trading pilot schemes in China, plays a significant role in the establishment of these schemes and is chosen as the example in this study. The newly proposed method relates emissions directly to each responsibility in a practical way through the measurement of complex production and supply chains, reducing carbon emissions at their original sources. The method is expected to be developed further under uncertain internal and external contexts and to be generalized to guide the establishment of industrial benchmarks for carbon emissions trading schemes in China and other countries.
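The embodied-intensity idea rests on standard input-output algebra: direct intensities f are propagated through the Leontief inverse, so the embodied intensity vector is eps = f (I - A)^(-1), capturing both direct and indirect emissions. A toy sketch follows; the three-sector flow matrix, outputs, and intensities are invented purely for illustration and are not the paper's data.

```python
import numpy as np

# Hypothetical 3-sector economy: Z = inter-industry flows, x = total output,
# f = direct emission intensity (e.g., tCO2 per unit of output).
Z = np.array([[10., 20.,  5.],
              [15.,  5., 25.],
              [ 5., 10., 10.]])
x = np.array([100., 120., 90.])
f = np.array([0.8, 0.3, 0.5])

A = Z / x                          # technical coefficients a_ij = z_ij / x_j
L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse (I - A)^(-1)
eps = f @ L                        # embodied (direct + indirect) intensities
```

Because L = I + A + A^2 + ..., every embodied intensity is at least the direct one, which is exactly the double-counting-free accounting the benchmark method relies on.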
Instruction-matrix-based genetic programming.
Li, Gang; Wang, Jin Feng; Lee, Kin Hong; Leung, Kwong-Sak
2008-08-01
In genetic programming (GP), evolving tree nodes separately would reduce the huge solution space. However, tree nodes are highly interdependent with respect to their fitness. In this paper, we propose a new GP framework, namely, instruction-matrix (IM)-based GP (IMGP), to handle their interactions. IMGP maintains an IM to evolve tree nodes and subtrees separately. IMGP extracts program trees from an IM and updates the IM with the information of the extracted program trees. As the IM actually keeps most of the information of the schemata of GP and evolves the schemata directly, IMGP is effective and efficient. Our experimental results on benchmark problems have verified that IMGP is not only better than canonical GP in terms of solution quality and the number of program evaluations, but also better than some related GP algorithms. IMGP can also be used to evolve programs for classification problems. The classifiers obtained have higher classification accuracies than four other GP classification algorithms on four benchmark classification problems. The testing errors are also comparable to or better than those obtained with well-known classifiers. Furthermore, an extended version, called condition matrix for rule learning, has been used successfully to handle multiclass classification problems.
Helmholtz and parabolic equation solutions to a benchmark problem in ocean acoustics.
Larsson, Elisabeth; Abrahamsson, Leif
2003-05-01
The Helmholtz equation (HE) describes wave propagation in applications such as acoustics and electromagnetics. For realistic problems, solving the HE is often too expensive. Instead, approximations like the parabolic wave equation (PE) are used. For low-frequency shallow-water environments, one persistent problem is to assess the accuracy of the PE model. In this work, a recently developed HE solver that can handle a smoothly varying bathymetry, variable material properties, and layered materials, is used for an investigation of the errors in PE solutions. In the HE solver, a preconditioned Krylov subspace method is applied to the discretized equations. The preconditioner combines domain decomposition and fast transform techniques. A benchmark problem with upslope-downslope propagation over a penetrable lossy seamount is solved. The numerical experiments show that, for the same bathymetry, a soft and slow bottom gives very similar HE and PE solutions, whereas the PE model is far from accurate for a hard and fast bottom. A first attempt to estimate the error is made by computing the relative deviation from the energy balance for the PE solution. This measure gives an indication of the magnitude of the error, but cannot be used as a strict error bound.
Andrews, William J.; Burrough, Steven P.
2002-01-01
The Bromide Pavilion in Chickasaw National Recreation Area drew many thousands of people annually to drink the mineral-rich waters piped from nearby Bromide and Medicine Springs. Periodic detection of fecal coliform bacteria in water piped to the pavilion from the springs, low yields of the springs, or flooding by adjacent Rock Creek prompted National Park Service officials to discontinue piping of the springs to the pavilion in the 1970s. Park officials would like to resume piping mineralized spring water to the pavilion to restore it as a visitor attraction, but they are concerned about the ability of the springs to provide sufficient quantities of potable water. Pumping and sampling of Bromide and Medicine Springs and Rock Creek six times during 2000 indicate that these springs may not provide sufficient water for Bromide Pavilion to supply large numbers of visitors. A potential problem with piping water from Medicine Spring is the presence of an undercut, overhanging cliff composed of conglomerate, which may collapse. Evidence of intermittent inundation of the springs by Rock Creek and seepage of surface water into the spring vaults from the adjoining creek pose a threat of contamination of the springs. Escherichia coli, fecal coliform, and fecal streptococcal bacteria were detected in some samples from the springs, indicating possible fecal contamination. Cysts of Giardia lamblia and oocysts of Cryptosporidium parvum protozoa were not detected in the creek or the springs. Total culturable enteric viruses were detected in only one water sample taken from Rock Creek.
Innately Split Model for Job-shop Scheduling Problem
NASA Astrophysics Data System (ADS)
Ikeda, Kokolo; Kobayashi, Sigenobu
The job-shop scheduling problem (JSP) is one of the most difficult benchmark problems. GA approaches often fail to find the global optimum because of the deceptive UV-structure of JSPs. In this paper, we introduce a novel GA framework, the Innately Split Model (ISM), which prevents the UV-phenomenon, and discuss its power in detail. We then analyze the structure of JSPs with the help of the UV-structure hypothesis, and finally we show ISM's excellent performance on JSP.
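For readers unfamiliar with how a GA scores a JSP candidate, the standard operation-based encoding (a permutation with repetition of job indices) decodes into a schedule whose makespan is the fitness; this decoding is common GA practice, not a description of ISM itself:

```python
def makespan(jobs, seq):
    """jobs[j] = list of (machine, duration) in technological order;
    seq = job indices, each job j appearing len(jobs[j]) times."""
    next_op = [0] * len(jobs)    # next operation index per job
    job_ready = [0] * len(jobs)  # earliest start time per job
    mach_ready = {}              # earliest start time per machine
    for j in seq:
        machine, dur = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(machine, 0))
        job_ready[j] = mach_ready[machine] = start + dur
        next_op[j] += 1
    return max(job_ready)

# Two jobs, two machines: job 0 = M0 for 3 then M1 for 2; job 1 = M1 for 2 then M0 for 4.
jobs = [[(0, 3), (1, 2)], [(1, 2), (0, 4)]]
print(makespan(jobs, [0, 1, 0, 1]))  # -> 7
```

Interleaving the jobs ([0, 1, 0, 1]) exploits both machines in parallel; a serial ordering like [0, 0, 1, 1] yields a makespan of 11 on the same data.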
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Yidong; Andrs, David; Martineau, Richard Charles
This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow and an outline of the requisite constitutive relations. A second-order finite volume method for solving compressible fluid flow problems is presented next, followed by a Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration. The multi-fluid formulation is still under development, but BIGHORN has been designed to handle multi-fluid problems, and owing to the flexibility of the underlying MOOSE framework it is quite extensible, accommodating both multi-species and multi-phase formulations. This document also presents a suite of verification and validation benchmark test problems for BIGHORN. The intent of this suite is to provide baseline comparison data demonstrating the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and to suggest best practices when using BIGHORN.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.D.; Kornreich, D.E.
Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard against which production neutron transport codes may be compared in order to verify proper operation. The overall investigation, as modified in the second-year renewal application, includes the following three primary tasks. Task 1, on two-dimensional neutron transport, is divided into (a) the single-medium searchlight problem (SLP) and (b) the two-adjacent-half-space SLP. Task 2, on three-dimensional neutron transport, covers (a) a point source in arbitrary geometry, (b) the single-medium SLP, and (c) the two-adjacent-half-space SLP. Task 3, on code verification, includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP form a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, extensions of the benchmarks to multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.
Mamo, Dereje; Hazel, Elizabeth; Lemma, Israel; Guenther, Tanya; Bekele, Abeba; Demeke, Berhanu
2014-10-01
Program managers require feasible, timely, reliable, and valid measures of iCCM implementation to identify problems and assess progress. The global iCCM Task Force developed benchmark indicators to guide implementers in developing or improving monitoring and evaluation (M&E) systems. This study assesses Ethiopia's iCCM M&E system by determining the availability and feasibility of the iCCM benchmark indicators. We conducted a desk review of iCCM policy documents, monitoring tools, survey reports, and other relevant documents, and key informant interviews with government and implementing partners involved in iCCM scale-up and M&E. Currently, Ethiopia collects data to inform most (70% [33/47]) of the iCCM benchmark indicators, and modest extra effort could boost this to 83% (39/47). Eight (17%) are not available given the current system. Most benchmark indicators that track coordination and policy, human resources, service delivery and referral, supervision, and quality assurance are available through the routine monitoring systems or periodic surveys. Indicators for supply chain management are less available due to limited consumption data and a weak link with treatment data. Little information is available on iCCM costs. Benchmark indicators can detail the status of iCCM implementation; however, some indicators may not fit country priorities, and others may be difficult to collect. The government of Ethiopia and partners should review and prioritize the benchmark indicators to determine which should be included in the routine M&E system, especially since iCCM data are being reviewed for addition to the HMIS. Moreover, the Health Extension Workers' reporting burden can be minimized by an integrated reporting approach.
Benchmarking in pathology: development of an activity-based costing model.
Burnett, Leslie; Wilson, Roger; Pfeffer, Sally; Lowry, John
2012-12-01
Benchmarking in Pathology (BiP) allows pathology laboratories to determine the unit cost of all laboratory tests and procedures, and also provides organisational productivity indices allowing comparisons of performance with other BiP participants. We describe 14 years of progressive enhancement to a BiP program, including the implementation of 'avoidable costs' as the accounting basis for allocation of costs rather than previous approaches using 'total costs'. A hierarchical tree-structured activity-based costing model distributes 'avoidable costs' attributable to the pathology activities component of a pathology laboratory operation. The hierarchical tree model permits costs to be allocated across multiple laboratory sites and organisational structures. This has enabled benchmarking on a number of levels, including test profiles and non-testing related workload activities. The development of methods for dealing with variable cost inputs, allocation of indirect costs using imputation techniques, panels of tests, and blood-bank record keeping have been successfully integrated into the costing model. A variety of laboratory management reports are produced, including the 'cost per test' of each pathology 'test' output. Benchmarking comparisons may be undertaken at any of several levels: 'cost per test' and 'cost per Benchmarking Complexity Unit', 'discipline/department' (sub-specialty), or overall laboratory/site and organisational levels. We have completed development of a national BiP program. An activity-based costing methodology based on avoidable costs overcomes many problems of previous benchmarking studies based on total costs. The use of benchmarking complexity adjustment permits correction for varying test-mix and diagnostic complexity between laboratories. Use of iterative communication strategies with program participants can overcome many obstacles and lead to innovations.
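One way to picture a hierarchical tree-structured allocation of avoidable costs is a proportional push-down from a parent cost centre to leaf departments; the hierarchy, figures, and allocation basis below are hypothetical illustrations, not the BiP model's actual rules:

```python
# Hypothetical two-level hierarchy: the lab node's shared avoidable cost is
# pushed down to leaf departments in proportion to their direct costs.
tree = {
    "lab":         {"cost": 1000.0, "children": ["chemistry", "haematology"]},
    "chemistry":   {"cost": 600.0,  "children": [], "tests": 400},
    "haematology": {"cost": 200.0,  "children": [], "tests": 100},
}

def cost_per_test(name, inherited=0.0):
    """Recursively allocate a node's own cost plus any inherited share,
    returning a unit cost per test for every leaf under the node."""
    node = tree[name]
    if not node["children"]:
        return {name: (node["cost"] + inherited) / node["tests"]}
    direct = sum(tree[k]["cost"] for k in node["children"])
    out = {}
    for k in node["children"]:
        share = (node["cost"] + inherited) * tree[k]["cost"] / direct
        out.update(cost_per_test(k, share))
    return out

print(cost_per_test("lab"))  # chemistry: 3.375, haematology: 4.5 per test
```

Because allocation is recursive, the same function handles deeper trees (multiple sites or sub-departments) without change, which is the practical appeal of the tree-structured model.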
Zhao, Xin-Feng; Yang, Li-Rong; Shi, Qian; Ma, Yan; Zhang, Yan-Yan; Chen, Li-Ding; Zheng, Hai-Feng
2008-11-01
Nitrate pollution in groundwater has become a worldwide problem. It may affect the quality of water for daily use and thus people's health. The temporal and spatial characteristics of nitrate pollution in groundwater were addressed by analyzing samples of drinkable water from 157 wells in Hailun, Heilongjiang, northeastern China. The mean nitrate concentration across all wells was 14.01 mg x L(-1), and the nitrate concentrations of 26.11% of the wells exceeded the drinking water standard (10.00 mg x L(-1)). A significant difference was found in the spatial distribution of nitrate pollution in the study area: the degree of nitrate pollution was in the order central rolling hills and flooding plain > northeastern mountain area > southwestern rolling hills and plain. Based on these results, the factors causing the pollution were analyzed in terms of well properties and pollution sources. Among well properties, the type of pipe material plays a critical role in groundwater nitrate pollution: wells with seamless pipes were less polluted than those with multiple-section pipes. The mean concentrations for seamless-pipe and multiple-section-pipe wells were 5.08 and 32.57 mg x L(-1), respectively, and 12.26% and 82.35% of these two kinds of wells exceeded the 10.00 mg x L(-1) national drinking water standard. Across Hailun as a whole, there was no statistically significant relationship between nitrate-N levels and well depth; however, nitrate-N was statistically lower in deep wells than in shallow ones. The mean nitrate concentrations of seamless-pipe deep wells, seamless-pipe shallow wells, multiple-section-pipe deep wells, and multiple-section-pipe shallow wells were 1.84, 12.02, 25.14, and 45.61 mg x L(-1), respectively.
Analysis of pollution sources shows that the heavily polluted regions are usually associated with heavy use of nitrogen fertilizer and the keeping of household livestock or poultry. This indicates a positive correlation between groundwater nitrate-N pollution and nitrogen fertilizer use and household livestock and poultry raising.
NASA Astrophysics Data System (ADS)
Nizamuddin, Syed
Glass fiber reinforced vinylester (GFRV) and glass fiber reinforced epoxy (GFRE) pipes have been used for more than three decades to mitigate corrosion problems in oil fields and in chemical and industrial plants. In these services, both GFRV and GFRE pipes are exposed to various environmental conditions. The long-term mechanical durability of these pipes after exposure to environmental conditions, including natural weathering under seasonal temperature variation, seawater, humidity, and other corrosive fluids such as crude oil, should be well known. Although extensive research has been undertaken, several major issues pertaining to the performance of these pipes under a number of environmental conditions remain unresolved. The main objective of this study is to investigate the effects of natural weathering, and of natural weathering combined with seawater and crude oil exposure, for time periods ranging from 3 to 36 months, on the tensile and stress rupture behavior of GFRV and GFRE pipes. Ring specimens were machined from GFRV and GFRE pipes and tested before and after exposure to the different weathering conditions prevalent in the eastern region (Dhahran) of Saudi Arabia and present under service conditions. Natural weathering, and natural weathering combined with crude oil exposure, of GFRV specimens revealed increased tensile strength even after 36 months of exposure compared with the as-received samples. However, GFRV samples exposed to combined natural weathering and seawater showed better tensile behavior only up to 24 months of exposure; after 36 months their tensile strength fell below that of the as-received GFRV samples. The stress rupture behavior of naturally weathered GFRV samples showed an improvement after 12 months of exposure and decreased after 24 and 36 months of exposure compared with the as-received GFRV samples.
GFRV samples exposed to combined natural weathering with crude oil and seawater showed improved stress rupture behavior after 12 months of exposure. The as-received GFRE pipe specimens showed higher average tensile strength than the as-received GFRV samples, whereas their stress rupture behavior was comparatively low. Seawater exposure of the GFRE specimens resulted in a drastic reduction in both tensile and stress rupture properties. Fractographic analysis was performed using an optical microscope and SEM in order to explain the possible controlling mechanisms of failure.
Basic problems and new potentials in monitoring sediment transport using Japanese pipe type geophone
NASA Astrophysics Data System (ADS)
Sakajo, Saiichi
2016-04-01
The authors conducted many series of measurements of sediment transport with a pipe-type geophone in a model hydrological channel, using various gradients and water discharges and particle sizes from 2 to 21 mm in diameter. For the case of casting soil into the water channel particle by particle, 1,000 test cases were conducted; for the case of casting all the soil at once, 100 test cases were conducted. All test results were analyzed by the conventional method, with visual judgement from video pictures. Several important basic problems were then found in estimating volumes and particle distributions by the conventional method, problems not identified in past similar studies because those studies did not consider the types of collision between sediment particles and the pipe. Based on these experiments, the authors incorporated this idea into the old formula for estimating the amount of sediment transport. In the formula, two factors were considered concretely: (1) the rate of sensing in a single collision, and (2) the rate of collided particles among all cast soil particles. The parameters of these factors could be determined from the experimental results, and it was found that the resulting formula could estimate the grain size distribution. In this paper, the authors explain the prototype formula for estimating both the volume and the distribution of sediment transport. Another finding of this study is the proposal of the single collision as a river index for characterizing sediment transport; this result could support risk ranking of sediment transport in rivers and of mudflow in mountainous rivers. Furthermore, the authors explain how the precision of the pipe geophone in sensing smaller sediment particles, which until now could not be sensed, can be improved.
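The two correction factors described above suggest a simple multiplicative inversion of the detection losses; the sketch below is purely illustrative (the rates, the functional form, and all numbers are assumptions, not the authors' calibrated formula):

```python
def estimate_cast_volume(detected, p_sense, p_collide):
    """Invert the two loss factors: detected = cast * p_collide * p_sense,
    where p_sense is the sensing rate in a single collision and p_collide
    is the fraction of cast particles that actually hit the pipe."""
    return detected / (p_sense * p_collide)

def estimate_size_distribution(detected_by_size, rates):
    """Per size class d, rates[d] = (p_sense, p_collide); returns the estimated
    cast amount per class, from which a grain-size distribution follows."""
    return {d: estimate_cast_volume(n, *rates[d])
            for d, n in detected_by_size.items()}

# Illustrative numbers only: 30 detections, 50% sensing rate, 50% collision rate.
print(estimate_cast_volume(30, 0.5, 0.5))  # -> 120.0
```

Since small particles both collide and trigger the sensor less reliably, their size class carries the largest correction, which is why sensor precision for fine sediment matters so much in this scheme.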
Estimation procedure of the efficiency of the heat network segment
NASA Astrophysics Data System (ADS)
Polivoda, F. A.; Sokolovskii, R. I.; Vladimirov, M. A.; Shcherbakov, V. P.; Shatrov, L. A.
2017-07-01
An extensive city heat network contains many segments, and each segment transfers heat energy with a different efficiency. This work proposes an original technical approach that evaluates the energy efficiency function of a heat network segment by interpreting two hyperbolic functions in the form of a transcendental equation. In essence, the problem studied is how the efficiency of the heat network changes with ambient temperature. Using methods of functional analysis, criterion dependences were derived for evaluating the efficiency of a given segment of the heat network and for finding the parameters for optimal control of heat supply to remote users. In general, the efficiency function of a heat network segment is interpreted as a multidimensional surface, which allows it to be illustrated graphically. It was shown that the inverse problem can be solved as well: the required heating-agent flow rate and temperature may be found from a specified segment efficiency and ambient temperature, and requirements on heat insulation and pipe diameters may be formulated. The calculation results were obtained in a strictly analytical form, which allows the derived functional dependences to be examined for extremums (maximums) under given external parameters. It is concluded that this calculation procedure is expedient in two practically important cases: for an existing (built) network, where only the heat-agent flow rate and pipe temperatures can be changed, and for a projected (under construction) network, where the material parameters of the network can still be modified. The procedure allows the diameter and length of the pipes, the type of insulation, and similar parameters to be refined.
Pipe length may be treated as an independent parameter in the calculations; its optimization is carried out according to other, economic, criteria specific to the project.
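The direct and inverse problems described above can be illustrated with a much simpler lumped model than the authors' transcendental formulation; the exponential form and every parameter value below are assumptions for illustration only:

```python
import math

def segment_efficiency(u, perim, length, m_dot, cp):
    """Fraction of the supply temperature excess over ambient retained at the
    segment outlet, for a pipe losing heat to ambient along its length.
    u: overall loss coefficient, perim: pipe perimeter, m_dot: flow, cp: heat capacity."""
    return math.exp(-u * perim * length / (m_dot * cp))

def outlet_temperature(t_in, t_amb, **seg):
    # Direct problem: outlet temperature given segment parameters and ambient.
    return t_amb + (t_in - t_amb) * segment_efficiency(**seg)

def required_length(eta, u, perim, m_dot, cp):
    # Inverse problem: pipe length that yields a target segment efficiency.
    return -m_dot * cp * math.log(eta) / (u * perim)

# Placeholder parameters: U = 1.0 W/(m^2 K), 0.3 m perimeter, 2 kg/s, water cp.
seg = dict(u=1.0, perim=0.3, length=100.0, m_dot=2.0, cp=4186.0)
eta = segment_efficiency(**seg)
t_out = outlet_temperature(95.0, -20.0, **seg)
```

The lumped model reproduces the qualitative behavior in the abstract: efficiency falls as ambient temperature drops the delivered excess, and inverting the same relation yields required lengths or insulation levels for a specified efficiency.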
Features of a SINDA/FLUINT model of a liquid oxygen supply line
NASA Astrophysics Data System (ADS)
Simmonds, Boris G.
1993-11-01
The modeling features used in a steady-state heat transfer problem solved with SINDA/FLUINT are described. The problem modeled is a 125-foot-long, 3-inch-diameter pipe filled with liquid oxygen flow driven by a given pressure gradient. The pipe is fully insulated in five sections: three sections of 1-inch-thick spray-on foam and two sections of vacuum jacket. The model evaluates friction, turn losses, and convection heat transfer between the fluid and the pipe wall. There is conduction through the foam insulation with temperature-dependent thermal conductivity. The vacuum space is modeled with radiation and, if present, gas molecular conduction in the annular gap. Heat is transferred between the outer surface and the surrounding ambient by natural convection and radiation, and by axial conduction along the pipe and through the vacuum jacket spacers and welded seal flanges. The model makes extensive use of SINDA/FLUINT basic capabilities such as the GEN option for nodes and conductors (to generate groups of nodes or conductors), the SIV option (to generate single, temperature-varying conductors), the SIM option (for multiple, temperature-varying conductors), and the M HX macros for fluids (to generate strings of lumps, paths, and ties representing a diabatic duct). It calls subroutine CONTRN (which returns the relative location in the G-array of a network conductor, given an actual conductor number), enabling extensive manipulation of conductors (calculation and assignment of their values) with DO loops. Models like this illustrate, to new and even to experienced SINDA/FLUINT users, features of the program that are not obvious or well known and that are extremely handy when trying to take advantage of both the automation of the DATA headers and the ability to make surgical modifications to specific parameters of the thermal or fluid elements in the OPERATIONS portion of the model.
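The foam-insulated sections lend themselves to a back-of-envelope check with a series thermal-resistance network; the sketch below uses placeholder geometry and property values (not the model's actual inputs) and omits the radiation and axial-conduction paths the full SINDA/FLUINT model includes:

```python
import math

def heat_leak_per_length(t_fluid, t_amb, r_in, r_out, k_foam, h_out):
    """Heat inflow per metre of pipe: cylindrical conduction through the foam
    in series with external natural convection (radiation omitted here)."""
    r_cond = math.log(r_out / r_in) / (2 * math.pi * k_foam)  # K*m/W
    r_conv = 1.0 / (h_out * 2 * math.pi * r_out)              # K*m/W
    return (t_amb - t_fluid) / (r_cond + r_conv)

# Placeholder values: roughly a 3 in pipe with 1 in foam, LOX at 90 K, ambient 300 K.
q = heat_leak_per_length(t_fluid=90.0, t_amb=300.0, r_in=0.038,
                         r_out=0.0635, k_foam=0.03, h_out=5.0)
```

With these assumed numbers the leak is on the order of tens of watts per metre, dominated by the foam conduction resistance; a network solver like SINDA/FLUINT adds the temperature-dependent conductivity, radiation, and axial paths that this hand estimate ignores.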
Pitting Corrosion of Copper in Waters with High pH and Low Alkalinity
Localized or pitting corrosion of copper pipes used in household drinking-water plumbing is a problem for many water utilities and their customers. Extreme attack can lead to pinhole water leaks that may result in water damage, mold growth, and costly repairs. Water quality has b...
Kindergarteners, Fish, and Worms ... Oh My!
ERIC Educational Resources Information Center
Plevyak, Linda; Arlington, Rebecca
2012-01-01
Children are natural scientists. They do what professional scientists do, but for slightly different and less conscious reasons--whether observing water flowing down a pipe, investigating how to make different colors with paints, or reasoning through a series of problems in relation to building a bridge. A kindergarten teacher wanted to expand and…
Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique
Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep
2015-01-01
In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on a randomized leader set of size k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, so it is prone to converging on suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performance, to allow rapid convergence. The performance of SL-GSA was analyzed on six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution on both real-world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032
A comparative study of upwind and MacCormack schemes for CAA benchmark problems
NASA Technical Reports Server (NTRS)
Viswanathan, K.; Sankar, L. N.
1995-01-01
In this study, upwind schemes and MacCormack schemes are evaluated as to their suitability for aeroacoustic applications. The governing equations are cast in a curvilinear coordinate system and discretized using finite volume concepts. A flux splitting procedure is used for the upwind schemes, where the signals crossing the cell faces are grouped into two categories: signals that bring information from outside into the cell, and signals that leave the cell. These signals may be computed in several ways, with the desired spatial and temporal accuracy achieved by choosing appropriate interpolating polynomials. The classical MacCormack schemes employed here are fourth order accurate in time and space. Results for categories 1, 4, and 6 of the workshop's benchmark problems are presented. Comparisons are also made with the exact solutions, where available. The main conclusions of this study are finally presented.
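The signal-splitting idea, separating information by its direction of travel across a cell face, can be seen in miniature on scalar advection u_t + a u_x = 0 with a first-order split (the paper's schemes are higher-order and multidimensional):

```python
def upwind_step(u, a, dx, dt):
    """One first-order flux-split step for u_t + a*u_x = 0 on a periodic grid.
    The speed a is split into a+ (signals arriving from the left) and
    a- (signals arriving from the right), and each is differenced upwind."""
    ap, am = max(a, 0.0), min(a, 0.0)
    n = len(u)
    return [u[i] - dt / dx * (ap * (u[i] - u[i - 1]) +
                              am * (u[(i + 1) % n] - u[i]))
            for i in range(n)]

# At CFL = a*dt/dx = 1 the scheme shifts the profile exactly one cell.
print(upwind_step([0.0, 1.0, 0.0, 0.0], a=1.0, dx=1.0, dt=1.0))  # -> [0.0, 0.0, 1.0, 0.0]
```

Higher-order accuracy, as in the paper, comes from replacing the one-sided differences with wider interpolating polynomials while keeping the same directional split of the signals.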
FY16 Status Report on NEAMS Neutronics Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C. H.; Shemon, E. R.; Smith, M. A.
2016-09-30
The goal of the NEAMS neutronics effort is to develop a neutronics toolkit for use on sodium-cooled fast reactors (SFRs) which can be extended to other reactor types. The neutronics toolkit includes the high-fidelity deterministic neutron transport code PROTEUS and many supporting tools such as the cross section generation code MC2-3, a cross section library generation code, alternative cross section generation tools, mesh generation and conversion utilities, and an automated regression test tool. The FY16 effort for NEAMS neutronics focused on supporting the release of the SHARP toolkit and its existing and new users, continuing to develop PROTEUS functions necessary for performance improvement as well as for the SHARP release, verifying PROTEUS against available existing benchmark problems, and developing new benchmark problems as needed. The FY16 research effort was focused on further updates of PROTEUS-SN and PROTEUS-MOCEX and cross section generation capabilities as needed.
NASA Astrophysics Data System (ADS)
Birgin, Ernesto G.; Ronconi, Débora P.
2012-10-01
The single machine scheduling problem with a common due date and non-identical ready times for the jobs is examined in this work. Performance is measured by the minimization of the weighted sum of earliness and tardiness penalties of the jobs. Since this problem is NP-hard, the application of constructive heuristics that exploit specific characteristics of the problem to improve their performance is investigated. The proposed approaches are examined through a computational comparative study on a set of 280 benchmark test problems with up to 1000 jobs.
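Concretely, with ready times r_j, processing times p_j, weights (alpha_j, beta_j), and a common due date d, a sequence is scored by the sum of alpha_j*max(0, d - C_j) + beta_j*max(0, C_j - d) over its completion times C_j; a small sketch with hypothetical data:

```python
def completions(order, ready, proc):
    """Completion time of each job on a single machine, with no idle time
    inserted beyond waiting for release (ready) dates."""
    t, comp = 0, [0] * len(ready)
    for j in order:
        t = max(t, ready[j]) + proc[j]
        comp[j] = t
    return comp

def wet(order, ready, proc, due, alpha, beta):
    """Weighted earliness-tardiness cost for a common due date."""
    comp = completions(order, ready, proc)
    return sum(a * max(0, due - c) + b * max(0, c - due)
               for c, a, b in zip(comp, alpha, beta))

# Two jobs, common due date 5: job 0 finishes early, job 1 exactly on time.
print(wet([0, 1], ready=[0, 2], proc=[3, 2], due=5, alpha=[1, 1], beta=[2, 2]))  # -> 2
```

Swapping the order to [1, 0] makes job 0 finish at 7 (tardy by 2, cost 4) and job 1 early by 1 (cost 1), for a total of 5, which is exactly the kind of trade-off the constructive heuristics in the paper exploit.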
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bill Bruce; Nancy Porter; George Ritter
2005-07-20
The two broad categories of fiber-reinforced composite liner repair and deposited weld metal repair technologies were reviewed and evaluated for potential application for internal repair of gas transmission pipelines. Both are used to some extent for other applications and could be further developed for internal, local, structural repair of gas transmission pipelines. Principal conclusions from a survey of natural gas transmission industry pipeline operators can be summarized in terms of the following performance requirements for internal repair: (1) Use of internal repair is most attractive for river crossings, under other bodies of water, in difficult soil conditions, under highways, under congested intersections, and under railway crossings. (2) Internal pipe repair offers a strong potential advantage over the high cost of horizontal direct drilling when a new bore must be created to solve a leak or other problem. (3) Typical travel distances can be divided into three distinct groups: up to 305 m (1,000 ft.); between 305 m and 610 m (1,000 ft. and 2,000 ft.); and beyond 914 m (3,000 ft.). All three groups require pig-based systems. A despooled umbilical system would suffice for the first two groups, which represent 81% of survey respondents. The third group would require an onboard self-contained power unit for propulsion and welding/liner repair energy needs. (4) The most common size range for 80% to 90% of operators surveyed is 508 mm (20 in.) to 762 mm (30 in.), with 95% using 558.8 mm (22 in.) pipe. Evaluation trials were conducted on pipe sections with simulated corrosion damage repaired with glass fiber-reinforced composite liners, carbon fiber-reinforced composite liners, and weld deposition. Additional un-repaired pipe sections were evaluated in the virgin condition and with simulated damage.
Hydrostatic failure pressures for pipe sections repaired with glass fiber-reinforced composite liner were only marginally greater than those of pipe sections without liners, indicating that this type of liner is only marginally effective at restoring the pressure-containing capability of pipelines. Failure pressures for larger-diameter pipe repaired with a semi-circular patch of carbon fiber-reinforced composite liner were also marginally greater than that of a pipe section with un-repaired simulated damage and no liner. These results indicate that fiber-reinforced composite liners have the potential to increase the burst pressure of pipe sections with external damage. Carbon-fiber-based liners are viewed as more promising than glass-fiber-based liners because of the potential for more closely matching the mechanical properties of steel. Pipe repaired with weld deposition failed at pressures lower than those of un-repaired pipe in both the virgin and damaged conditions, indicating that this repair technology is less effective at restoring the pressure-containing capability of pipe than a carbon fiber-reinforced liner repair. Physical testing indicates that carbon fiber-reinforced liner repair is the most promising technology evaluated to date. In lieu of a field installation on an abandoned pipeline, a preliminary nondestructive testing protocol is being developed to determine the success or failure of the fiber-reinforced liner pipeline repairs. Optimization and validation activities for carbon-fiber repair methods are ongoing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robin Gordon; Bill Bruce; Ian Harris
2004-12-31
The two broad categories of fiber-reinforced composite liner repair and deposited weld metal repair technologies were reviewed and evaluated for potential application for internal repair of gas transmission pipelines. Both are used to some extent for other applications and could be further developed for internal, local, structural repair of gas transmission pipelines. Principal conclusions from a survey of natural gas transmission industry pipeline operators can be summarized in terms of the following performance requirements for internal repair: (1) Use of internal repair is most attractive for river crossings, under other bodies of water, in difficult soil conditions, under highways, under congested intersections, and under railway crossings. (2) Internal pipe repair offers a strong potential advantage over the high cost of horizontal direct drilling when a new bore must be created to solve a leak or other problem. (3) Typical travel distances can be divided into three distinct groups: up to 305 m (1,000 ft.); between 305 m and 610 m (1,000 ft. and 2,000 ft.); and beyond 914 m (3,000 ft.). All three groups require pig-based systems. A despooled umbilical system would suffice for the first two groups, which represent 81% of survey respondents. The third group would require an onboard self-contained power unit for propulsion and welding/liner repair energy needs. (4) The most common size range for 80% to 90% of operators surveyed is 508 mm (20 in.) to 762 mm (30 in.), with 95% using 558.8 mm (22 in.) pipe. Evaluation trials were conducted on pipe sections with simulated corrosion damage repaired with glass fiber-reinforced composite liners, carbon fiber-reinforced composite liners, and weld deposition. Additional un-repaired pipe sections were evaluated in the virgin condition and with simulated damage.
Hydrostatic failure pressures for pipe sections repaired with a glass fiber-reinforced composite liner were only marginally greater than those of pipe sections without liners, indicating that this type of liner is only marginally effective at restoring the pressure-containing capability of pipelines. Failure pressures for larger-diameter pipe repaired with a semi-circular patch of carbon fiber-reinforced composite liner were also marginally greater than that of a pipe section with un-repaired simulated damage and no liner. These results indicate that fiber-reinforced composite liners have the potential to increase the burst pressure of pipe sections with external damage. Carbon fiber-based liners are viewed as more promising than glass fiber-based liners because of the potential for more closely matching the mechanical properties of steel. Pipe repaired with weld deposition failed at pressures lower than those of un-repaired pipe in both the virgin and damaged conditions, indicating that this repair technology is less effective at restoring the pressure-containing capability of pipe than a carbon fiber-reinforced liner repair. Physical testing indicates that carbon fiber-reinforced liner repair is the most promising technology evaluated to date. The first round of optimization and validation activities for carbon-fiber repairs is complete. Development of a comprehensive test plan for this process is recommended for use in the field trial portion of this program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robin Gordon; Bill Bruce; Ian Harris
2004-08-17
The two broad categories of fiber-reinforced composite liner repair and deposited weld metal repair technologies were reviewed and evaluated for potential application for internal repair of gas transmission pipelines. Both are used to some extent for other applications and could be further developed for internal, local, structural repair of gas transmission pipelines. Principal conclusions from a survey of natural gas transmission industry pipeline operators can be summarized in terms of the following performance requirements for internal repair: (1) Use of internal repair is most attractive for river crossings, under other bodies of water, in difficult soil conditions, under highways, under congested intersections, and under railway crossings. (2) Internal pipe repair offers a strong potential advantage over the high cost of horizontal directional drilling when a new bore must be created to solve a leak or other problem. (3) Typical travel distances can be divided into three distinct groups: up to 305 m (1,000 ft.); between 305 m and 610 m (1,000 ft. and 2,000 ft.); and beyond 914 m (3,000 ft.). All three groups require pig-based systems. A despooled umbilical system would suffice for the first two groups, which represent 81% of survey respondents. The third group would require an onboard self-contained power unit for propulsion and welding/liner repair energy needs. (4) The most common size range for 80% to 90% of operators surveyed is 508 mm (20 in.) to 762 mm (30 in.), with 95% using 558.8 mm (22 in.) pipe. Evaluation trials were conducted on pipe sections with simulated corrosion damage repaired with glass fiber-reinforced composite liners, carbon fiber-reinforced composite liners, and weld deposition. Additional un-repaired pipe sections were evaluated in the virgin condition and with simulated damage.
Hydrostatic failure pressures for pipe sections repaired with a glass fiber-reinforced composite liner were only marginally greater than those of pipe sections without liners, indicating that this type of liner is only marginally effective at restoring the pressure-containing capability of pipelines. Failure pressures for larger-diameter pipe repaired with a semi-circular patch of carbon fiber-reinforced composite liner were also marginally greater than that of a pipe section with un-repaired simulated damage and no liner. These results indicate that fiber-reinforced composite liners have the potential to increase the burst pressure of pipe sections with external damage. Carbon fiber-based liners are viewed as more promising than glass fiber-based liners because of the potential for more closely matching the mechanical properties of steel. Pipe repaired with weld deposition failed at pressures lower than those of un-repaired pipe in both the virgin and damaged conditions, indicating that this repair technology is less effective at restoring the pressure-containing capability of pipe than a carbon fiber-reinforced liner repair. Physical testing indicates that carbon fiber-reinforced liner repair is the most promising technology evaluated to date. Development of a comprehensive test plan for this process is recommended for use in the field trial portion of this program.
NASA Astrophysics Data System (ADS)
Lau, Chun Sing
This thesis studies two types of problems in financial derivatives pricing. The first type is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary conditions. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in detail, namely the valuation of fixed-rate mortgages and CEV American options. The second type is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first-derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising accuracy. The multi-stage scheme further allows the approximate results to converge systematically to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of multi-asset basket-spread options.
Since the final formula is in closed form, all the hedging parameters can also be derived in closed form. Numerical examples demonstrate that the pricing and hedging errors are in general less than 1% relative to the benchmark prices obtained by numerical integration or Monte Carlo simulation. By exploiting an explicit relationship between the option price and the underlying probability distribution, we further derive an approximate distribution function for the general basket-spread variable. It can be used to approximate the transition probability distribution of any linear combination of correlated GBMs. Finally, an implicit perturbation is applied to reduce the pricing errors by factors of up to 100. When compared against the existing methods, the basket-spread option formula coupled with the implicit perturbation turns out to be one of the most robust and accurate approximation methods.
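The Kirk (1995) two-asset formula that the thesis generalizes can be sketched as follows; the function name and sample parameters are illustrative, not taken from the thesis:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def kirk_spread_call(f1, f2, strike, sigma1, sigma2, rho, r, t):
    """Kirk (1995) approximation for a European call on the spread F1 - F2.

    Treats F2 + K as approximately lognormal, which yields a Black-like
    formula with an effective volatility blending sigma1, sigma2, and rho.
    """
    w = f2 / (f2 + strike)                 # weight of the short asset
    sigma_k = sqrt(sigma1 ** 2
                   - 2.0 * rho * sigma1 * sigma2 * w
                   + (sigma2 * w) ** 2)    # effective spread volatility
    d1 = (log(f1 / (f2 + strike)) + 0.5 * sigma_k ** 2 * t) / (sigma_k * sqrt(t))
    d2 = d1 - sigma_k * sqrt(t)
    return exp(-r * t) * (f1 * norm_cdf(d1) - (f2 + strike) * norm_cdf(d2))

price = kirk_spread_call(f1=110.0, f2=100.0, strike=5.0,
                         sigma1=0.2, sigma2=0.15, rho=0.5, r=0.05, t=1.0)
```

Because the formula is in closed form, the hedging parameters follow by differentiating it directly, which is the property the thesis exploits in the multi-asset basket-spread generalization.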
Gluon and ghost correlation functions of 2-color QCD at finite density
NASA Astrophysics Data System (ADS)
Hajizadeh, Ouraman; Boz, Tamer; Maas, Axel; Skullerud, Jon-Ivar
2018-03-01
2-color QCD, i.e. QCD with the gauge group SU(2), is the simplest non-Abelian gauge theory without a sign problem at finite quark density. Therefore, its study on the lattice is a benchmark for other non-perturbative approaches at finite density. To provide such benchmarks, we determine the minimal-Landau-gauge 2-point and 3-gluon correlation functions of the gauge sector and the running gauge coupling at finite density. We observe no significant effects, except for some low-momentum screening of the gluons at and above the supposed high-density phase transition.
NASA Astrophysics Data System (ADS)
Capo-Lugo, Pedro A.
Formation flying consists of multiple spacecraft orbiting in a required configuration about a planet or through space. The National Aeronautics and Space Administration (NASA) Benchmark Tetrahedron Constellation is one of the proposed constellations to be launched in the year 2009 and provides the motivation for this investigation. The problem researched here consists of three stages. The first stage contains the deployment of the satellites; the second stage is the reconfiguration process to transfer the satellites through different specific sizes of the NASA benchmark problem; and the third stage is the station-keeping procedure for the tetrahedron constellation. Every stage contains different control schemes and transfer procedures to obtain/maintain the proposed tetrahedron constellation. In the first stage, the deployment procedure depends on a combination of two techniques in which impulsive maneuvers and a digital controller are used to deploy the satellites and to maintain the tetrahedron constellation at the following apogee point. The second stage, which corresponds to the reconfiguration procedure, uses a different control scheme in which intelligent control systems are implemented to perform this procedure. In this research work, intelligent systems eliminate the use of complex mathematical models and reduce the computational time to perform different maneuvers. Finally, the station-keeping process, which is the third stage of this research problem, is implemented with a two-level hierarchical control scheme to maintain the separation distance constraints of the NASA Benchmark Tetrahedron Constellation. For this station-keeping procedure, the system of equations defining the dynamics of a pair of satellites is transformed to take into account the perturbation due to the oblateness of the Earth and the disturbances due to solar pressure.
The control procedures used in this research are transformed from a continuous control system to a digital control system, which simplifies implementation in the computer onboard the satellite. In addition, this research includes an introductory chapter on attitude dynamics that can be used to maintain the orientation of the satellites, and an adaptive intelligent control scheme is proposed to maintain the desired orientation of the spacecraft. In conclusion, a solution for the dynamics of the NASA Benchmark Tetrahedron Constellation is presented in this research work. The main contribution of this work is the use of discrete control schemes, impulsive maneuvers, and intelligent control schemes that reduce the computational time so that these control schemes can be easily implemented in the computer onboard the satellite. These contributions are explained through the deployment, reconfiguration, and station-keeping processes of the proposed NASA Benchmark Tetrahedron Constellation.
NASA Astrophysics Data System (ADS)
Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.
2014-12-01
As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish whether an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem describes a fictional town on a lake that hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using six algorithms, Borg, MOEA/D, ε-MOEA, ε-NSGA-II, GDE3, and NSGA-II, to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate-related management applications where there is the potential for crossing irreversible, nonlinear thresholds.
We show that many modern MOEAs can fail on this test problem, indicating its suitability as a useful and nontrivial benchmarking problem.
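The lake dynamics underlying this benchmark are typically written as X_{t+1} = X_t + a_t + X_t^q/(1 + X_t^q) - bX_t + ε_t. A deterministic (noise-free) sketch follows; the parameter values b = 0.42 and q = 2 are common defaults from the lake-problem literature, not taken from this abstract:

```python
def lake_step(x, loading, b=0.42, q=2.0):
    """One step of the classic lake model: anthropogenic phosphorus loading,
    plus nonlinear natural recycling, minus first-order removal."""
    return x + loading + x ** q / (1.0 + x ** q) - b * x

def simulate(loading, steps=200, x0=0.0):
    """Propagate the lake state under constant loading with noise switched off."""
    x = x0
    for _ in range(steps):
        x = lake_step(x, loading)
    return x

low = simulate(loading=0.01)   # modest loading: lake settles near zero
high = simulate(loading=0.10)  # heavy loading: locks into the eutrophic basin
```

The two runs illustrate the threshold behavior the benchmark exploits: below a critical loading the state settles in a clean equilibrium, while above it the recycling term dominates and the lake converges to a polluted equilibrium that reduced loading cannot easily reverse.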
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, B.C.J.; Sha, W.T.; Doria, M.L.
1980-11-01
The governing equations, i.e., conservation equations for mass, momentum, and energy, are solved as a boundary-value problem in space and an initial-value problem in time. The BODYFIT-1FE code uses the technique of boundary-fitted coordinate systems, in which all the physical boundaries are transformed to be coincident with constant coordinate lines in the transformed space. By using this technique, one can prescribe boundary conditions accurately without interpolation. The transformed governing equations in terms of the boundary-fitted coordinates are then solved using an implicit cell-by-cell procedure with a choice of either central or upwind convective derivatives. It is a true benchmark rod-bundle code without invoking any assumptions in the case of laminar flow. However, for turbulent flow, some empiricism must be employed due to the closure problem of turbulence modeling. The detailed velocity and temperature distributions calculated from the code can be used to benchmark and calibrate empirical coefficients employed in subchannel codes and porous-medium analyses.
Fuzzy Kernel k-Medoids algorithm for anomaly detection problems
NASA Astrophysics Data System (ADS)
Rustam, Z.; Talita, A. S.
2017-07-01
An Intrusion Detection System (IDS) is an essential part of security systems to strengthen the security of information systems. An IDS can be used to detect abuse by intruders who try to get into the network system in order to access and utilize the available data sources in the system. There are two approaches to IDS: Misuse Detection and Anomaly Detection (behavior-based intrusion detection). Fuzzy clustering-based methods have been widely used to solve Anomaly Detection problems. Besides using the fuzzy membership concept to assign objects to clusters, other approaches, such as combining fuzzy and possibilistic memberships or feature-weighted methods, are also used. We propose Fuzzy Kernel k-Medoids, which combines fuzzy and possibilistic memberships, as a powerful method to solve the anomaly detection problem, since in numerical experiments it is able to classify IDS benchmark data into five different classes simultaneously. We classify the KDDCup'99 IDS benchmark data set into five different classes simultaneously; the best performance was achieved using 30% of the data for training, with clustering accuracy reaching 90.28%.
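A minimal sketch of a fuzzy kernel k-medoids iteration in the spirit of the method described above; the Gaussian kernel width, the fuzzifier m = 2, and the update rules are generic textbook choices, not the authors' exact formulation:

```python
from math import exp

def kernel_dist(x, y, sigma=1.0):
    """Kernel-induced squared distance: 2 - 2*K(x, y) with a Gaussian kernel."""
    return 2.0 - 2.0 * exp(-((x - y) ** 2) / (2.0 * sigma ** 2))

def memberships(data, medoids, m=2.0, eps=1e-12):
    """Fuzzy c-means-style membership of each point in each medoid's cluster."""
    u = []
    for x in data:
        d = [kernel_dist(x, c) + eps for c in medoids]
        u.append([1.0 / sum((d[j] / d[k]) ** (1.0 / (m - 1.0))
                            for k in range(len(d)))
                  for j in range(len(d))])
    return u

def update_medoids(data, u, m=2.0):
    """Each medoid becomes the data point minimizing the weighted kernel distance."""
    k = len(u[0])
    return [min(data, key=lambda c: sum(u[i][j] ** m * kernel_dist(x, c)
                                        for i, x in enumerate(data)))
            for j in range(k)]

# Two well-separated 1-D groups; alternate membership / medoid updates.
data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
medoids = [data[0], data[3]]
for _ in range(5):
    u = memberships(data, medoids)
    medoids = update_medoids(data, u)
```

On this toy data the two medoids settle one in each group, the 2-cluster analogue of separating the five traffic classes in KDDCup'99.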
Once-through integral system (OTIS): Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gloudemans, J R
1986-09-01
A scaled experimental facility, designated the once-through integral system (OTIS), was used to acquire post-small-break loss-of-coolant accident (SBLOCA) data for benchmarking system codes. OTIS was also used to investigate the application of the Abnormal Transient Operating Guidelines (ATOG) used in the Babcock and Wilcox (B and W) designed nuclear steam supply system (NSSS) during the course of an SBLOCA. OTIS was a single-loop facility with a plant-to-model power scale factor of 1686. OTIS maintained the key elevations, approximate component volumes, and loop flow resistances, and simulated the major component phenomena of a B and W raised-loop nuclear plant. A test matrix consisting of 15 tests divided into four categories was performed. The largest group contained 10 tests and was defined to parametrically obtain an extensive set of plant-typical experimental data for code benchmarking. Parameters such as leak size, leak location, and high-pressure injection (HPI) shut-off head were individually varied. The remaining categories were specified to study the impact of the ATOGs (2 tests), to note the effect of guard heater operation on observed phenomena (2 tests), and to provide a data set for comparison with previous test experience (1 test). A summary of the test results and a detailed discussion of Test 220100 are presented. Test 220100 was the nominal or reference test for the parametric studies. This test was performed with a scaled 10-cm² leak located in the cold leg suction piping.
NaK Variable Conductance Heat Pipe for Radioisotope Stirling Systems
NASA Technical Reports Server (NTRS)
Tarau, Calin; Anderson, William G.; Walker, Kara
2008-01-01
In a Stirling radioisotope power system, heat must continually be removed from the General Purpose Heat Source (GPHS) modules to maintain the modules and surrounding insulation at acceptable temperatures. The Stirling convertor normally provides most of this cooling. If the Stirling convertor stops in the current system, the insulation is designed to spoil, preventing damage to the GPHS but also ending use of that convertor for the mission. An alkali-metal Variable Conductance Heat Pipe (VCHP) was designed to allow multiple stops and restarts of the Stirling convertor. In the design of the VCHP for the Advanced Stirling Radioisotope Generator, the VCHP reservoir temperature can vary between 40 and 120 °C. While sodium, potassium, or cesium could be used as the working fluid, their melting temperatures are above the minimum reservoir temperature, allowing working fluid to freeze in the reservoir. In contrast, the melting point of NaK is -12 °C, so NaK cannot freeze in the reservoir. One potential problem with NaK as a working fluid is that previous tests with NaK heat pipes have shown that they can develop temperature non-uniformities in the evaporator due to NaK's binary composition. A NaK heat pipe was fabricated to measure the temperature non-uniformities in a scale model of the VCHP for the Stirling radioisotope system. The temperature profiles in the evaporator and condenser were measured as a function of operating temperature and power. The largest delta T across the condenser was 25 °C. However, the condenser delta T decreased to 16 °C for the 775 °C vapor temperature at the highest heat flux applied, 7.21 W/cm². This decrease with increasing heat flux was caused by the increased mixing of the sodium and potassium in the vapor. This temperature differential is similar to the temperature variation in the ASRG heat transfer interface without a heat pipe, so NaK can be used as the VCHP working fluid.
Optimally stopped variational quantum algorithms
NASA Astrophysics Data System (ADS)
Vinci, Walter; Shabani, Alireza
2018-04-01
Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure by benchmarking VQA as a solver for quadratic unconstrained binary optimization (QUBO) problems. Moreover, we show that a better choice of cost function for the classical routine can significantly improve the performance of VQA and even improve its scaling properties.
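The QUBO instances used in such benchmarks minimize the energy E(x) = xᵀQx over binary vectors x. A brute-force reference solver for small instances is sketched below; it provides the exact baseline an optimally stopped VQA run would be scored against. The toy matrix is made up for illustration:

```python
from itertools import product

def qubo_energy(q, x):
    """Energy of the binary assignment x under the QUBO matrix q: E = x^T Q x."""
    n = len(x)
    return sum(q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_ground_state(q):
    """Exhaustively enumerate all 2^n assignments; exact for small n only."""
    n = len(q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(q, x))

q = [[-1.0, 2.0],
     [0.0, -1.0]]          # toy instance: each bit helps, setting both is penalized
best = brute_force_ground_state(q)
energy = qubo_energy(q, best)
```

For this instance the ground states are (1, 0) and (0, 1) with energy -1; a VQA benchmark would compare the energy its stopped run returns against this exact minimum, with the stopping rule trading solution quality against optimization time.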
An Integrated Development Environment for Adiabatic Quantum Programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; McCaskey, Alex; Bennink, Ryan S
2014-01-01
Adiabatic quantum computing is a promising route to the computational power afforded by quantum information processing. The recent availability of adiabatic hardware raises the question of how well quantum programs perform. Benchmarking behavior is challenging since the multiple steps to synthesize an adiabatic quantum program are highly tunable. We present an adiabatic quantum programming environment called JADE that provides control over all the steps taken during program development. JADE captures the workflow needed to rigorously benchmark performance while also allowing a variety of problem types, programming techniques, and processor configurations. We have also integrated JADE with a quantum simulation engine that enables program profiling using numerical calculation. The computational engine supports plug-ins for simulation methodologies tailored to various metrics and computing resources. We present the design, integration, and deployment of JADE and discuss its use for benchmarking adiabatic quantum programs.
Benchmarks for single-phase flow in fractured porous media
NASA Astrophysics Data System (ADS)
Flemisch, Bernd; Berre, Inga; Boon, Wietse; Fumagalli, Alessio; Schwenck, Nicolas; Scotti, Anna; Stefansson, Ivar; Tatomir, Alexandru
2018-01-01
This paper presents several test cases intended to be benchmarks for numerical schemes for single-phase fluid flow in fractured porous media. A number of solution strategies are compared, including a vertex and two cell-centred finite volume methods, a non-conforming embedded discrete fracture model, a primal and a dual extended finite element formulation, and a mortar discrete fracture model. The proposed benchmarks test the schemes by increasing the difficulties in terms of network geometry, e.g. intersecting fractures, and physical parameters, e.g. low and high fracture-matrix permeability ratio as well as heterogeneous fracture permeabilities. For each problem, the results presented are the number of unknowns, the approximation errors in the porous matrix and in the fractures with respect to a reference solution, and the sparsity and condition number of the discretized linear system. All data and meshes used in this study are publicly available for further comparisons.
BIOREL: the benchmark resource to estimate the relevance of the gene networks.
Antonov, Alexey V; Mewes, Hans W
2006-02-06
The progress of high-throughput methodologies in functional genomics has led to the development of statistical procedures to infer gene networks from various types of high-throughput data. However, due to the lack of common standards, the biological significance of the results of different studies is hard to compare. To overcome this problem, we propose a benchmark procedure and have developed a web resource (BIOREL), which is useful for estimating the biological relevance of any genetic network by integrating different sources of biological information. The associations of each gene from the network are classified as biologically relevant or not. The proportion of genes in the network classified as "relevant" is used as the overall network relevance score. Employing synthetic data, we demonstrated that such a score ranks networks fairly with respect to relevance level. Using BIOREL as the benchmark resource, we compared the quality of experimental and theoretically predicted protein interaction data.
A note on bound constraints handling for the IEEE CEC'05 benchmark function suite.
Liao, Tianjun; Molina, Daniel; de Oca, Marco A Montes; Stützle, Thomas
2014-01-01
The benchmark functions and some of the algorithms proposed for the special session on real parameter optimization of the 2005 IEEE Congress on Evolutionary Computation (CEC'05) have played and still play an important role in the assessment of the state of the art in continuous optimization. In this article, we show that if bound constraints are not enforced for the final reported solutions, state-of-the-art algorithms produce infeasible best candidate solutions for the majority of functions of the IEEE CEC'05 benchmark function suite. This occurs even though the optima of the CEC'05 functions are within the specified bounds. This phenomenon has important implications on algorithm comparisons, and therefore on algorithm designs. This article's goal is to draw the attention of the community to the fact that some authors might have drawn wrong conclusions from experiments using the CEC'05 problems.
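Two standard repair strategies for enforcing bound constraints on candidate solutions, of the kind whose omission the article documents, can be sketched as follows; this is a generic illustration, not the CEC'05 reference code:

```python
def clamp(x, lo, hi):
    """Project each component onto [lo, hi] (saturation repair)."""
    return [min(max(v, lo), hi) for v in x]

def reflect(x, lo, hi):
    """Reflect out-of-bound components back inside the box."""
    out = []
    for v in x:
        while v < lo or v > hi:
            v = 2.0 * lo - v if v < lo else 2.0 * hi - v
        out.append(v)
    return out

def is_feasible(x, lo, hi):
    """Audit a reported best solution against the stated bounds."""
    return all(lo <= v <= hi for v in x)

candidate = [1.3, -0.2, 0.5]
assert not is_feasible(candidate, 0.0, 1.0)   # the situation the article flags
repaired = clamp(candidate, 0.0, 1.0)
```

Running `is_feasible` on every final reported solution is exactly the audit the authors apply to published CEC'05 results; which repair operator is used (clamping, reflection, resampling) can itself change algorithm rankings.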
Numerical methods for the inverse problem of density functional theory
Jensen, Daniel S.; Wasserman, Adam
2017-07-17
Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations, but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.
Self-growing neural network architecture using crisp and fuzzy entropy
NASA Technical Reports Server (NTRS)
Cios, Krzysztof J.
1992-01-01
The paper briefly describes the self-growing neural network algorithm, CID2, which makes decision trees equivalent to hidden layers of a neural network. The algorithm generates a feedforward architecture using crisp and fuzzy entropy measures. The results of a real-life recognition problem of distinguishing defects in a glass ribbon and of a benchmark problem of differentiating two spirals are shown and discussed.
Information Based Numerical Practice.
1987-02-01
…characterization by comparative computational studies of various benchmark problems. See e.g. [MacNeal, Harder (1985)], [Robinson, Blackham (1981)] … FOR NONADAPTIVE METHODS 2.1. THE QUADRATURE FORMULA: The simplest example studied in detail in the literature is the problem of the optimal quadrature … formulae and the functional analytic prerequisites for the study of optimal formulae, we refer to the large monograph (808 pp.) of [Sobolev (1974)]. Let us …
A stable partitioned FSI algorithm for incompressible flow and deforming beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, L., E-mail: lil19@rpi.edu; Henshaw, W.D., E-mail: henshw@rpi.edu; Banks, J.W., E-mail: banksj3@rpi.edu
2016-05-01
An added-mass partitioned (AMP) algorithm is described for solving fluid–structure interaction (FSI) problems coupling incompressible flows with thin elastic structures undergoing finite deformations. The new AMP scheme is fully second-order accurate and stable, without sub-time-step iterations, even for very light structures when added-mass effects are strong. The fluid, governed by the incompressible Navier–Stokes equations, is solved in velocity-pressure form using a fractional-step method; large deformations are treated with a mixed Eulerian-Lagrangian approach on deforming composite grids. The motion of the thin structure is governed by a generalized Euler–Bernoulli beam model, and these equations are solved in a Lagrangian frame using two approaches, one based on finite differences and the other on finite elements. The key AMP interface condition is a generalized Robin (mixed) condition on the fluid pressure. This condition, which is derived at a continuous level, has no adjustable parameters and is applied at the discrete level to couple the partitioned domain solvers. Special treatment of the AMP condition is required to couple the finite-element beam solver with the finite-difference-based fluid solver, and two coupling approaches are described. A normal-mode stability analysis is performed for a linearized model problem involving a beam separating two fluid domains, and it is shown that the AMP scheme is stable independent of the ratio of the mass of the fluid to that of the structure. A traditional partitioned (TP) scheme using a Dirichlet–Neumann coupling for the same model problem is shown to be unconditionally unstable if the added mass of the fluid is too large. A series of benchmark problems of increasing complexity are considered to illustrate the behavior of the AMP algorithm, and to compare the behavior with that of the TP scheme. The results of all these benchmark problems verify the stability and accuracy of the AMP scheme.
Results for one benchmark problem modeling blood flow in a deforming artery are also compared with corresponding results available in the literature.
Validation and Performance Comparison of Numerical Codes for Tsunami Inundation
NASA Astrophysics Data System (ADS)
Velioglu, D.; Kian, R.; Yalciner, A. C.; Zaytsev, A.
2015-12-01
In inundation zones, tsunami motion turns from wave motion into a flow of water. Modelling this phenomenon is a complex problem, since many parameters affect the tsunami flow. In this respect, the performance of numerical codes that analyze tsunami inundation patterns becomes important. The computation of water surface elevation alone is not sufficient for proper analysis of tsunami behaviour in shallow water zones and on land, and hence for the development of mitigation strategies. Velocity and velocity patterns are also crucial parameters and have to be computed to the highest accuracy. There are numerous numerical codes for simulating tsunami inundation. In this study, the FLOW-3D and NAMI DANCE codes are selected for validation and performance comparison. FLOW-3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations, and is used specifically for flood problems. NAMI DANCE uses a finite-difference computational method to solve the linear and nonlinear forms of the shallow water equations (NSWE) for long wave problems, specifically tsunamis. In this study, these codes are validated and their performances are compared using two benchmark problems discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) Annual Meeting in Portland, USA. One of the problems is an experiment of a single long-period wave propagating up a piecewise linear slope and onto a small-scale model of the town of Seaside, Oregon. The other benchmark problem is an experiment of a single solitary wave propagating up a triangular-shaped shelf with an island feature located at the offshore point of the shelf. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. All results are presented with discussions and comparisons.
The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement No 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe)
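NAMI DANCE's finite-difference treatment of the shallow water equations can be illustrated, in heavily simplified form, by a one-dimensional linear shallow-water step on a staggered grid. This is a generic sketch under assumed parameters (depth, grid, initial hump), not the actual NAMI DANCE scheme:

```python
import numpy as np

# 1D LINEAR shallow-water solver on a staggered grid: a heavily
# simplified illustration of the finite-difference approach used by
# NSWE codes. Not the actual NAMI DANCE scheme; parameters are assumed.
g = 9.81                 # gravity [m/s^2]
depth = 10.0             # still-water depth [m]
nx, dx = 200, 10.0       # grid cells and spacing [m]
dt = 0.5 * dx / np.sqrt(g * depth)   # CFL-limited time step [s]

x = np.arange(nx) * dx
eta = np.exp(-((x - x.mean()) / 100.0) ** 2)   # initial free-surface hump [m]
u = np.zeros(nx + 1)                           # velocities on cell faces [m/s]

def step(eta, u):
    """Advance one time step; closed (reflective) boundaries."""
    u = u.copy(); eta = eta.copy()
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx   # momentum equation
    u[0] = u[-1] = 0.0                              # no flow through walls
    eta -= dt * depth * (u[1:] - u[:-1]) / dx       # continuity equation
    return eta, u

mass0 = eta.sum() * dx
for _ in range(500):
    eta, u = step(eta, u)
mass = eta.sum() * dx    # conserved with closed boundaries
```

With closed boundaries the flux divergence telescopes to zero, so total mass is conserved to machine precision; this is a standard sanity check on any such scheme.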
Classification and assessment tools for structural motif discovery algorithms.
Badr, Ghada; Al-Turaiki, Isra; Mathkour, Hassan
2013-01-01
Motif discovery is the problem of finding recurring patterns in biological data. Patterns can be sequential, mainly when discovered in DNA sequences. They can also be structural (e.g. when discovering RNA motifs). Finding common structural patterns helps to gain a better understanding of the mechanism of action (e.g. post-transcriptional regulation). Unlike DNA motifs, which are sequentially conserved, RNA motifs exhibit conservation in structure, which may be common even if the sequences are different. Over the past few years, hundreds of algorithms have been developed to solve the sequential motif discovery problem, while less work has been done for the structural case. In this paper, we survey, classify, and compare different algorithms that solve the structural motif discovery problem, where the underlying sequences may be different. We highlight their strengths and weaknesses. We start by proposing a benchmark dataset and a measurement tool that can be used to evaluate different motif discovery approaches. Then, we proceed by proposing our experimental setup. Finally, results are obtained using the proposed benchmark to compare available tools. To the best of our knowledge, this is the first attempt to compare tools solely designed for structural motif discovery. Results show that the accuracy of discovered motifs is relatively low. The results also suggest a complementary behavior among tools where some tools perform well on simple structures, while other tools are better for complex structures. We have classified and evaluated the performance of available structural motif discovery tools. In addition, we have proposed a benchmark dataset with tools that can be used to evaluate newly developed tools.
Handbook of experiences in the design and installation of solar heating and cooling systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, D.S.; Oberoi, H.S.
1980-07-01
A large array of problems encountered is detailed, including design errors, installation mistakes, cases of inadequate durability of materials and unacceptable reliability of components, and wide variations in the performance and operation of different solar systems. Durability, reliability, and design problems are reviewed for solar collector subsystems, heat transfer fluids, thermal storage, passive solar components, piping/ducting, and reliability/operational problems. The following performance topics are covered: criteria for design and performance analysis, domestic hot water systems, passive space heating systems, active space heating systems, space cooling systems, analysis of systems performance, and performance evaluations. (MHR)
Comparison of the CENTRM resonance processor to the NITAWL resonance processor in SCALE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollenbach, D.F.; Petrie, L.M.
1998-01-01
This report compares the NITAWL and CENTRM resonance processors in the SCALE code system. The cases examined consist of the International OECD/NEA Criticality Working Group Benchmark 20 problem. These cases represent fuel pellets partially dissolved in a borated solution. The assumptions inherent to the Nordheim Integral Treatment, used in NITAWL, are not valid for these problems. CENTRM resolves this limitation by explicitly calculating a problem-dependent point flux from point cross sections, which is then used to create group cross sections.
Comparison of higher order modes damping techniques for 800 MHz single cell superconducting cavities
NASA Astrophysics Data System (ADS)
Shashkov, Ya. V.; Sobenin, N. P.; Petrushina, I. I.; Zobov, M. M.
2014-12-01
At present, applications of 800 MHz harmonic cavities in both bunch lengthening and shortening regimes are under consideration and discussion in the framework of the High Luminosity LHC project. In this paper we study electromagnetic characteristics of higher order modes (HOMs) for a single cell 800 MHz superconducting cavity and arrays of such cavities connected by drift tubes. Different techniques for HOM damping, such as beam pipe grooves, coaxial-notch loads, fluted beam pipes etc., are investigated and compared. The influence of the sizes and geometry of the drift tubes on the HOM damping is analyzed. The problem of multipacting discharge in the considered structures is discussed, and the operating frequency detuning due to the Lorentz force is evaluated.
The Stress Corrosion Performance Research of Three Kinds of Commonly Used Pipe Materials
NASA Astrophysics Data System (ADS)
Hu, Yayun; Zhang, Yiliang; Jia, Xiaoliang
Pipe corrosion is one of the most common problems in the oil and gas industry. In this article, three kinds of tube are analyzed in terms of their resistance to stress corrosion: N80/1, N80/Q and P110. The loading method chosen for this test is constant tensile stress loading. In the test, samples are separated into groups, gradually loaded to specific stress levels, and then soaked in H2S-saturated solution. The test yields the threshold stress for stress corrosion and the stress-life curve, which can be used to evaluate the stress corrosion properties of the materials, as well as to give guidance for practical engineering.
Li, Guiwei; Ding, Yuanxun; Xu, Hongfu; Jin, Junwei; Shi, Baoyou
2018-04-01
Inorganic contaminant accumulation in drinking water distribution systems (DWDS) is a great threat to water quality and safety. This work assessed the main risk factors for different water pipes and characterized the release profile of accumulated materials in a full-scale distribution system that frequently suffered from water discoloration problems. Physicochemical characterization of pipe deposits was performed using X-ray fluorescence, scanning electron microscopy, X-ray diffraction, X-ray photoelectron spectroscopy and Fourier transform infrared spectroscopy. The metal release profile was obtained through continuous monitoring of a full-scale DWDS area. The results showed that aluminum and manganese were the main metals in deposits in nonmetallic pipes, while iron was dominant in iron-based pipe corrosion scales. Manganese primarily existed as MnO2 with a poorly crystalline form. The relative abundance of Mn and Fe in deposits changed with distance from the water treatment plant. Compared with iron in corrosion scales, Mn and Al were more readily released back into the bulk water during the unidirectional flushing process. A main finding of this work is the co-release behavior of Mn and Al in particulate form, with a significant correlation between these two metals. Dual control of manganese and aluminum in treated water is proposed as essential to cope with discoloration and trace metal contamination in DWDS. Copyright © 2018 Elsevier Ltd. All rights reserved.
Development of High-power LED Lighting Luminaires Using Loop Heat Pipe
NASA Astrophysics Data System (ADS)
Huang, Bin-Juine; Huang, Huan-Hsiang; Chen, Chun-Wei; Wu, Min-Sheng
A high-power LED must reject about six times the heat of a conventional lighting device while keeping the LED junction temperature below 80°C to assure reliability and low light decay. In addition, no fan is allowed, and the heat dissipation design should not interfere with the industrial design of the lighting fixture and must be light in weight. This creates an extreme thermal management problem. The present study has shown that, using a special heat dissipation technology (the loop heat pipe), high-power LED lighting luminaires with input power from 36 to 150 W for outdoor and indoor applications can be achieved with light weight, from 0.96 to 1.57 kg per 1,000 lumens of net luminous flux output from the luminaire. The loop heat pipe uses a flexible connecting pipe as the condenser, which can be wound around the reflector of the luminaire to dissipate the heat to the ambient air by natural convection. For roadway or street lighting applications, the present study shows that a better optical design of LED lamps can further reduce power consumption for the same illumination on the road surface. The high-power LED luminaires developed in the present study show energy savings of more than 50% in road lighting applications compared to sodium lights, and more than 70% compared to mercury lights.
Needleless Electrospinning Experimental Study and Nanofiber Application in Semiconductor Packaging
NASA Astrophysics Data System (ADS)
Sun, Tianwei
Electronics, especially mobile electronics such as smart phones, tablet PCs, notebooks and digital cameras, are undergoing rapid development nowadays and have thoroughly changed our lives. With the requirements of more transistors, higher power, smaller size, lighter weight and even bendability, thermal management of these devices has become one of the key challenges. Compared to active heat management systems, the heat pipe, a passive fluidic system, is considered promising to solve this problem. However, traditional heat pipes have size, weight and capillary limitations, so a new type of heat pipe with smaller size, lighter weight and higher capillary pressure is needed. Nanofibers have been shown to possess superior properties and have been applied in multiple areas. This study discusses the possibility of applying nanofibers in heat pipes as a new wick structure. A needleless electrospinning device with a high production rate was built on site to systematically investigate the effect of processing parameters on fiber properties, as well as to generate nanofiber mats for evaluating their capability in electronics cooling. Polyethylene oxide (PEO) and polyvinyl alcohol (PVA) nanofibers were generated. A tensiometer was used for wettability measurements. The results show that spinneret type, working distance, solution concentration and polymer type are more strongly correlated with fiber morphology than the other parameters. The results also show that the fabricated nanofiber mat has high capillary pressure.
Boundary element analysis of corrosion problems for pumps and pipes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyasaka, M.; Amaya, K.; Kishimoto, K.
1995-12-31
Three-dimensional (3D) and axi-symmetric boundary element methods (BEM) were developed to quantitatively estimate cathodic protection and macro-cell corrosion. For 3D analysis, a multiple-region method (MRM) was developed in addition to a single-region method (SRM). The validity and usefulness of the BEMs were demonstrated by comparing numerical results with experimental data from galvanic corrosion systems of a cylindrical model and a seawater pipe, and from a cathodic protection system of an actual seawater pump. It was shown that a highly accurate analysis could be performed for fluid machines handling seawater with complex 3D fields (e.g. a seawater pump) by taking into account the flow rate and time dependencies of the polarization curve. Compared to the 3D BEM, the axi-symmetric BEM permitted large reductions in the numbers of elements and nodes, which greatly simplified analysis of axi-symmetric fields such as pipes. Computational accuracy and CPU time were compared between analyses using two approximation methods for polarization curves: a logarithmic-approximation method and a linear-approximation method.
NASA Technical Reports Server (NTRS)
Shih, C. C.
1973-01-01
In order to establish a foundation of scaling laws for the highly nonlinear waves associated with the launch vehicle, basic knowledge of the relationships among the parameters pertinent to the energy dissipation process associated with the propagation of nonlinear pressure waves in thermoviscous media is required. The problem of interest is to experimentally investigate the temporal and spatial velocity profiles of fluid flow in a 3-inch open-end pipe of various lengths, produced by the propagation of nonlinear pressure waves for various diaphragm burst pressures of a pressure wave generator. As a result, temporal and spatial characteristics of wave propagation for a parametric set of nonlinear pressure waves in the pipe containing air under atmospheric conditions were determined. Velocity measurements at five sections along pipes of up to 210 ft in length were made with hot-film anemometers for five pressure waves produced by a piston. The piston was driven by diaphragm burst pressures of 20, 40, 60, 80 and 100 psi in the driver chamber of the pressure wave generator.
Numerical Investigation of Ice Slurry Flow in a Horizontal Pipe
NASA Astrophysics Data System (ADS)
Rawat, K. S.; Pratihar, A. K.
2018-02-01
In the last decade, phase change material slurry (PCMS) has gained much attention as a cooling medium due to its high energy storage capacity and transportability. However, the flow of PCM slurry is a complex phenomenon, as it is affected by various parameters, e.g., fluid properties, velocity, particle size and concentration. In the present work, ice is used as the PCM, and a numerical investigation of heterogeneous slurry flow in a horizontal pipe has been carried out using the Eulerian KTGF model. First, the present model is validated against existing experimental results available in the literature, and then the model is applied to the present problem. Results show that flow is almost homogeneous for ethanol-based ice slurry with a particle diameter of 0.1 mm at a velocity of 1 m/s. It is also found that the ice particle distribution is more uniform at higher velocity and at higher concentrations of ice and ethanol in the slurry. Results also show that ice concentration increases at the top of the pipe, and the effect of particle-wall collisions is more significant at larger particle diameters.
Determination of the Residence Time of Food Particles During Aseptic Sterilization
NASA Technical Reports Server (NTRS)
Carl, J. R.; Arndt, G. D.; Nguyen, T. X.
1994-01-01
The paper describes a non-invasive method to measure the time an individual particle takes to move through a length of stainless steel pipe. The food product is in two-phase flow (liquids and solids) and passes through a pipe at pressures of approximately 60 psig and temperatures of 270-285 F. The proposed solution is based on the detection of transitory amplitude and/or phase changes in a microwave transmission path caused by the passage of the particles of interest. The particles are enhanced in some way, as will be discussed later, such that they provide transitory changes distinctive enough not to be mistaken for normal variations in the received signal (caused by the non-homogeneous nature of the medium). Two detectors (transmission paths across the pipe) are required, placed at a known separation. A minimum transit time calculation is made, from which the maximum velocity can be determined. This provides the minimum residence time. The average velocity and statistical variations can also be computed, so that the amount of 'over-cooking' can be determined.
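The minimum-residence-time calculation described above reduces to dividing the detector spacing by the shortest observed transit time to get the maximum velocity. A minimal sketch with made-up numbers (the separation, pipe length and transit times are illustrative, not from the paper):

```python
# Worst-case ("minimum") residence time from the two-detector transit
# measurement described above. All numbers are illustrative assumptions.
detector_separation_ft = 2.0   # known spacing between the two microwave paths
pipe_length_ft = 40.0          # length of pipe the particle must traverse

def min_residence_time(transit_times_s):
    """The shortest detector-to-detector transit time gives the maximum
    particle velocity, hence the minimum residence time in the pipe."""
    v_max = detector_separation_ft / min(transit_times_s)
    return pipe_length_ft / v_max

t_min = min_residence_time([0.45, 0.52, 0.48, 0.60])  # seconds
```

The same transit-time list also yields the average velocity and its spread, from which the degree of over-cooking can be bounded.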
Numerical Analysis of Flow-Induced Vibrations in Closed Side Branches
NASA Astrophysics Data System (ADS)
Knížat, Branislav; Troják, Michal
2011-12-01
Vibrations occurring in closed side branches connected to a main pipe are a frequent problem during pipeline system operation. At the design stage of pipeline systems, this problem is sometimes overlooked or underestimated, which can later lead to a shortening of the system's life cycle or may even cause injury. The aim of this paper is a numerical analysis of the onset of self-induced vibrations at the edge of a closed side branch. Calculation conditions and obtained results are presented within.
The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook
NASA Astrophysics Data System (ADS)
Mai, P. M.
2017-12-01
Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.
Caoili, Salvador Eugenio C.
2014-01-01
B-cell epitope prediction can enable novel pharmaceutical product development. However, a mechanistically framed consensus has yet to emerge on benchmarking such prediction, thus presenting an opportunity to establish standards of practice that circumvent epistemic inconsistencies of casting the epitope prediction task as a binary-classification problem. As an alternative to conventional dichotomous qualitative benchmark data, quantitative dose-response data on antibody-mediated biological effects are more meaningful from an information-theoretic perspective in the sense that such effects may be expressed as probabilities (e.g., of functional inhibition by antibody) for which the Shannon information entropy (SIE) can be evaluated as a measure of informativeness. Accordingly, half-maximal biological effects (e.g., at median inhibitory concentrations of antibody) correspond to maximally informative data while undetectable and maximal biological effects correspond to minimally informative data. This applies to benchmarking B-cell epitope prediction for the design of peptide-based immunogens that elicit antipeptide antibodies with functionally relevant cross-reactivity. Presently, the Immune Epitope Database (IEDB) contains relatively few quantitative dose-response data on such cross-reactivity. Only a small fraction of these IEDB data is maximally informative, and many more of them are minimally informative (i.e., with zero SIE). Nevertheless, the numerous qualitative data in IEDB suggest how to overcome the paucity of informative benchmark data. PMID:24949474
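The informativeness argument can be made concrete with the binary-outcome Shannon entropy: a half-maximal biological effect maximizes the entropy, while undetectable or maximal effects carry none. A minimal sketch:

```python
import math

def shannon_entropy_bits(p):
    """Shannon information entropy (bits) of a binary outcome with
    probability p, e.g. antibody-mediated inhibition occurring or not."""
    if p in (0.0, 1.0):
        return 0.0  # undetectable or maximal effect: minimally informative
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

# A half-maximal effect (p = 0.5, e.g. antibody at its median inhibitory
# concentration) is maximally informative; extreme effects carry 0 bits.
h_half = shannon_entropy_bits(0.5)        # 1.0 bit
h_max_effect = shannon_entropy_bits(1.0)  # 0.0 bits
```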
NASA Astrophysics Data System (ADS)
Komkov, M. A.; Moiseev, V. A.; Tarasov, V. A.; Timofeev, M. P.
2015-12-01
Some ecological problems related to heavy-oil extraction and ways of minimizing the negative impacts of this process on the biosphere are discussed. The ecological hazard of, for example, the frequently used multistage hydraulic fracturing of formations is noted, and the advantages and prospects of superheated steam injection are considered. Steam generators of a new type, together with ecologically clean and cost-effective insulation for tubing pipes (TPs), are necessary to develop the superheated steam injection method. The article is devoted to solving one of the most important and urgent tasks, i.e., the development and use of lightweight, nonflammable, environmentally safe, and cost-effective insulating materials. It is shown that, for tubing shielding operating at temperatures up to 420°C, the most effective thermal insulation is a highly porous material based on basalt fiber. A process is substantiated for the filtration deposition, from liquid pulp, of short basalt fibers bonded with alumina to form thermal-insulation tubing-pipe coatings in the shape of cylinders and cylindrical shells. Based on the thermophysical characteristics of basalt fibers and on the technological features of manufacturing highly porous insulation coatings, the required insulation thickness for a tubing pipe is determined. During prolonged pumping of air at an operating temperature of 400°C through a model tubing-pipe sample with insulation and a protective layer, we found that the surface temperature of the thermal barrier coating does not exceed 60°C. Introducing the described technology will considerably reduce the negative impact of heavy-oil extraction on the biosphere.
NASA Astrophysics Data System (ADS)
Gao, Yan; Liu, Yuyou; Ma, Yifan; Cheng, Xiaobin; Yang, Jun
2018-11-01
One major challenge currently facing pipeline networks across the world is the improvement of leak detection technologies in urban environments. There is an imperative to accurately locate leaks in buried water pipes to avoid serious environmental, social and economic consequences. Much attention has been paid to time delay estimation (TDE) for determining the position of a leak by utilising cross-correlation, which has been applied with varying degrees of success over the past half century. Previous research in the published literature has demonstrated the effectiveness of the pre-whitening process for accentuating the peak in the cross-correlation associated with the time delay. This paper is concerned with the implementation of the differentiation process for TDE, with particular focus on the problem of locating a leak in pipelines by means of pipe pressure measurements. Rather than the pre-whitening operation, the proposed cross-correlation via the differentiation process, termed here DIF, changes the characteristics of the pipe system so that the pipe effectively acts as a band-pass filter. This method has the potential to eliminate some ambiguity caused by interference at low frequencies and to allow more high-frequency information to pass. Given an appropriate differentiation order, a more pronounced and reliable peak is obtained in the cross-correlation result. The differentiation process may thus provide a viable cross-correlation method suited to water leak detection. Its performance is further compared to the basic cross-correlation and pre-whitening methods for TDE in detecting a leak in actual PVC water pipes. Experimental results are presented to show an additional property of the DIF, namely compensation for the resonance effects that may exist in cross-spectral density measurements, and hence better performance for TDE.
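Basic cross-correlation TDE, and the effect of differentiating both channels before correlating (the DIF idea), can be sketched on synthetic signals; the white-noise signal model and all parameters below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

# Time-delay estimation by cross-correlation, with optional differentiation
# of both channels first (the DIF idea: differentiation acts as a
# high-pass weighting that can sharpen the correlation peak).
rng = np.random.default_rng(0)
true_delay = 25                        # samples
n = 4096
src = rng.normal(size=n + true_delay)  # common leak signal (synthetic)
near = src[true_delay:] + 0.1 * rng.normal(size=n)  # sensor nearer the leak
far = src[:n] + 0.1 * rng.normal(size=n)            # arrives true_delay later

def estimate_delay(a, b, diff_order=1):
    """Delay of a relative to b (in samples) from the cross-correlation
    peak, after differentiating both channels diff_order times."""
    for _ in range(diff_order):
        a, b = np.diff(a), np.diff(b)
    c = np.correlate(a, b, mode="full")
    return int(np.argmax(c)) - (len(b) - 1)

delay = estimate_delay(far, near)      # expect +true_delay samples
```

Dividing the estimated lag by the sampling rate, and combining it with the wave speed in the pipe wall, converts the sample delay into a leak position.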
Ultrasonic multi-skip tomography for pipe inspection
NASA Astrophysics Data System (ADS)
Volker, Arno; Vos, Rik; Hunter, Alan; Lorenz, Maarten
2012-05-01
The inspection of wall loss corrosion is difficult at pipe support locations due to limited accessibility. However, the recently developed ultrasonic Multi-Skip screening technique is suitable for this problem. The method employs ultrasonic transducers in a pitch-catch geometry positioned on opposite sides of the pipe support. Shear waves are transmitted in the axial direction within the pipe wall, reflecting multiple times between the inner and outer surfaces before reaching the receivers. Along this path, the signals accumulate information on the integral wall thickness (e.g., via variations in travel time). The method is very sensitive in detecting the presence of wall loss, but it is difficult to quantify both the extent and depth of the loss. If the extent is unknown, then only a conservative estimate of the depth can be made due to the cumulative nature of the travel time variations. Multi-Skip tomography is an extension of Multi-Skip screening and has shown promise as a complementary follow-up inspection technique. In recent work, we have developed the technique and demonstrated its use for reconstructing high-resolution estimates of pipe wall thickness profiles. The method operates via a model-based full wave field inversion; this consists of a forward model for predicting the measured wave field and an iterative process that compares the predicted and measured wave fields and minimizes the differences with respect to the model parameters (i.e., the wall thickness profile). This paper presents our recent developments in Multi-Skip tomographic inversion, focusing on the initial localization of corrosion regions for efficient parameterization of the surface profile model and utilization of the signal phase information for improving resolution.
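The model-based inversion loop described above (forward-predict, compare with measurements, update the wall-thickness parameters) can be sketched with a toy linear forward operator standing in for the Multi-Skip physics; the operator G, thickness profile and noise level are all assumptions for illustration:

```python
import numpy as np

# Model-based inversion sketch: a forward model predicts the data, and the
# misfit to the measurements is iteratively minimized over the thickness
# parameters. G is a toy linear stand-in, NOT the Multi-Skip wave physics.
rng = np.random.default_rng(1)
n_params, n_data = 8, 32
G = rng.normal(size=(n_data, n_params))          # stand-in forward operator
grid = np.arange(n_params)
true_thickness = 10.0 - 2.0 * np.exp(-0.5 * (grid - 4.0) ** 2)  # wall-loss dip
measured = G @ true_thickness + 0.01 * rng.normal(size=n_data)

def invert(G, d, n_iter=2000):
    """Steepest descent on the least-squares misfit |G m - d|^2."""
    m = np.zeros(G.shape[1])
    lr = 1.0 / np.linalg.norm(G, 2) ** 2         # step below stability limit
    for _ in range(n_iter):
        residual = G @ m - d                     # predicted minus measured
        m -= lr * (G.T @ residual)               # gradient step
    return m

est = invert(G, measured)                        # recovered thickness profile
```

The real method replaces G with a full wave-field forward model, but the predict-compare-update structure of the loop is the same.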
NASA Technical Reports Server (NTRS)
McNelis, Mark E.; Staab, Lucas D.; Akers, James C.; Hughes, William O.; Chang, Li C.; Hozman, Aron D.; Henry, Michael W.
2012-01-01
The National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) has led the design and build of the new world-class vibroacoustic test capabilities at the NASA GRC's Plum Brook Station in Sandusky, Ohio, USA from 2007 to 2011. SAIC-Benham has completed construction of a new reverberant acoustic test facility to support the future testing needs of NASA's space exploration program and commercial customers. The large Reverberant Acoustic Test Facility (RATF) is approximately 101,000 cubic feet in volume and was designed to operate at a maximum empty chamber acoustic overall sound pressure level (OASPL) of 163 dB. This combination of size and acoustic power is unprecedented amongst the world's known active reverberant acoustic test facilities. Initial checkout acoustic testing was performed in March 2011 by SAIC-Benham at test levels up to 161 dB OASPL. During testing, several branches of the gaseous nitrogen (GN2) piping system, which supply the fluid to the noise generating acoustic modulators, failed at their T-junctions connecting the 12 in. supply line to their respective 4 in. branch lines. The problem was initially detected when the oxygen sensors in the horn room indicated a lower than expected oxygen level, from which GN2 leaks in the piping system were inferred. In subsequent follow-up inspections, cracks were identified in the failed T-junction connections through non-destructive evaluation testing. Through structural dynamic modeling of the piping system, the root cause of the T-junction connection failures was determined. The structural dynamic assessment identified several possible corrective design improvements to the horn room piping system. The effectiveness of the chosen design repairs was subsequently evaluated in September 2011 during acoustic verification testing to 161 dB OASPL.
Closing the Attainment Gap--A Realistic Proposition or an Elusive Pipe-Dream?
ERIC Educational Resources Information Center
Mowat, Joan Gaynor
2018-01-01
The attainment gap associated with socio-economic status is an international problem that is highly resistant to change. This conceptual paper critiques the drive by the Scottish Government to address the attainment gap through the Scottish Attainment Challenge and the National Improvement Framework. It draws upon a range of theoretical…
Cloud Network Helps Stretch IT Dollars
ERIC Educational Resources Information Center
Collins, Hilton
2012-01-01
No matter how many car washes or bake sales schools host to raise money, adding funds to their coffers is a recurring problem. This perpetual financial difficulty makes expansive technology purchases or changes seem like a pipe dream for school CIOs and has education technologists searching for ways to stretch money. In 2005, state K-12 school…
This project puts the U.S. Environmental Protection Agency (EPA) into a unique position of being able to bring analytical tools to bear to solve or anticipate future drinking water infrastructure water quality and metallic or cement material performance problems, for which little...
Strategic Defense Initiative: Splendid Defense or Pipe Dream? Headline Series No. 275.
ERIC Educational Resources Information Center
Armstrong, Scott; Grier, Peter
This pamphlet presents a discussion of the various components of President Reagan's Strategic Defense Initiative (SDI) including the problem of pulling together various new technologies into an effective defensive system and the politics of the so-called "star wars" system. An important part of the defense initiative is the…
Turbulence as a Problem in Non-equilibrium Statistical Mechanics
NASA Astrophysics Data System (ADS)
Goldenfeld, Nigel; Shih, Hong-Yan
2017-05-01
The transitional and well-developed regimes of turbulent shear flows exhibit a variety of remarkable scaling laws that are only now beginning to be systematically studied and understood. In the first part of this article, we summarize recent progress in understanding the friction factor of turbulent flows in rough pipes and quasi-two-dimensional soap films, showing how the data obey a two-parameter scaling law known as roughness-induced criticality, and exhibit power-law scaling of friction factor with Reynolds number that depends on the precise nature of the turbulent cascade. These results hint at a non-equilibrium fluctuation-dissipation relation that applies to turbulent flows. The second part of this article concerns the lifetime statistics in smooth pipes around the transition, showing how the remarkable super-exponential scaling with Reynolds number reflects deep connections between large deviation theory, extreme value statistics, directed percolation and the onset of coexistence in predator-prey ecosystems. Both these phenomena reflect the way in which turbulence can be fruitfully approached as a problem in non-equilibrium statistical mechanics.
HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN
While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...
Optimally Stopped Optimization
NASA Astrophysics Data System (ADS)
Vinci, Walter; Lidar, Daniel
We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is one to two orders of magnitude faster than the HFS solver.
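The expected-cost figure of merit described above can be made concrete with a small simulation. The sketch below is a hypothetical stand-in, not the paper's formulation: a seeded randomized "solver" is called repeatedly under a simple threshold stopping rule, and the expected total cost (best objective found plus a fixed cost per call) is estimated by Monte Carlo, with no knowledge of the true optimum required.

```python
import random

def one_call(rng):
    # Hypothetical randomized solver: each call returns one objective
    # value (lower is better) drawn from an unknown distribution.
    return rng.gauss(10.0, 2.0)

def expected_total_cost(threshold, cost_per_call, n_trials=1000, max_calls=200, seed=0):
    """Monte Carlo estimate of E[best objective + cost_per_call * calls]
    under the stopping rule 'stop once a value <= threshold is found,
    or after max_calls'. The figure of merit needs no knowledge of the
    true optimum, mirroring the expected-cost idea in the abstract."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        best, calls = float("inf"), 0
        while calls < max_calls:
            calls += 1
            best = min(best, one_call(rng))
            if best <= threshold:
                break
        total += best + cost_per_call * calls
    return total / n_trials

# A stricter threshold buys a better objective at the price of more calls;
# the best stopping rule minimizes the expected total cost.
costs = {t: round(expected_total_cost(t, cost_per_call=0.05), 2) for t in (6, 8, 10, 12)}
```

Scanning the threshold (or, more generally, the stopping rule) traces the trade-off between solution quality and solver calls that the optimal stopping formulation resolves.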
Pairwise measures of causal direction in the epidemiology of sleep problems and depression.
Rosenström, Tom; Jokela, Markus; Puttonen, Sampsa; Hintsanen, Mirka; Pulkki-Råback, Laura; Viikari, Jorma S; Raitakari, Olli T; Keltikangas-Järvinen, Liisa
2012-01-01
Depressive mood is often preceded by sleep problems, suggesting that they increase the risk of depression. Sleep problems can also reflect a prodromal symptom of depression, so temporal precedence alone is insufficient to confirm causality. The authors applied recently introduced statistical causal-discovery algorithms that can estimate causality from cross-sectional samples in order to infer the direction of causality between the two sets of symptoms from a novel perspective. Two general-population samples were used; one from the Young Finns study (690 men and 997 women, average age 37.7 years, range 30-45), and another from the Wisconsin Longitudinal study (3101 men and 3539 women, average age 53.1 years, range 52-55). These included three depression questionnaires (two in the Young Finns data) and two sleep problem questionnaires. Three different causality estimates were constructed for each data set, tested on benchmark data with a (practically) known causality, and tested for assumption violations using simulated data. The causality algorithms performed well on the benchmark data and in simulations, and a prediction was drawn for future empirical studies to confirm: for minor depression/dysphoria, sleep problems cause significantly more dysphoria than dysphoria causes sleep problems. The situation may change as depression becomes more severe, or as more severe levels of symptoms are evaluated; in addition, artefacts arising because severe depression is less well represented than minor depression in population data may interfere with the estimates for depression scales that emphasize severe symptoms. The findings are consistent with other emerging epidemiological and biological evidence.
Least-squares Legendre spectral element solutions to sound propagation problems.
Lin, W H
2001-02-01
This paper presents a novel algorithm and numerical results of sound wave propagation. The method is based on a least-squares Legendre spectral element approach for spatial discretization and the Crank-Nicolson [Proc. Cambridge Philos. Soc. 43, 50-67 (1947)] and Adams-Bashforth [D. Gottlieb and S. A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications (CBMS-NSF Monograph, Siam 1977)] schemes for temporal discretization to solve the linearized acoustic field equations for sound propagation. Two types of NASA Computational Aeroacoustics (CAA) Workshop benchmark problems [ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics, edited by J. C. Hardin, J. R. Ristorcelli, and C. K. W. Tam, NASA Conference Publication 3300, 1995a] are considered: a narrow Gaussian sound wave propagating in a one-dimensional space without flows, and the reflection of a two-dimensional acoustic pulse off a rigid wall in the presence of a uniform flow of Mach 0.5 in a semi-infinite space. The first problem was used to examine the numerical dispersion and dissipation characteristics of the proposed algorithm. The second problem was to demonstrate the capability of the algorithm in treating sound propagation in a flow. Comparisons were made of the computed results with analytical results and results obtained by other methods. It is shown that all results computed by the present method are in good agreement with the analytical solutions and results of the first problem agree very well with those predicted by other schemes.
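As a rough illustration of how the first benchmark exercises a scheme's dispersion and dissipation, the sketch below advects a narrow 1-D Gaussian pulse with a first-order upwind finite-difference scheme. This is an assumption for illustration only: it is not the paper's least-squares Legendre spectral element method, and the pulse parameters are illustrative, not the workshop's exact values. The exact solution is simply the shifted pulse, so the damping and smearing of the numerical answer are directly measurable.

```python
import math

def upwind_advect(p, c, dx, dt, steps):
    # First-order upwind scheme for p_t + c p_x = 0 on a periodic domain;
    # stable for 0 <= c*dt/dx <= 1, but numerically dissipative.
    lam = c * dt / dx
    n = len(p)
    for _ in range(steps):
        p = [p[i] - lam * (p[i] - p[i - 1]) for i in range(n)]
    return p

n, L, c = 400, 100.0, 1.0
dx = L / n
x = [i * dx for i in range(n)]
gauss = lambda x0: [math.exp(-math.log(2.0) * ((xi - x0) / 3.0) ** 2) for xi in x]
p0 = gauss(30.0)                                  # narrow Gaussian pulse at x = 30
dt = 0.5 * dx / c                                 # CFL number 0.5
p = upwind_advect(p0, c, dx, dt, int(20.0 / dt))  # advance to t = 20
exact = gauss(50.0)                               # exact solution: pulse shifted by c*t
err = max(abs(a - b) for a, b in zip(p, exact))
peak = max(p)                                     # below 1 due to numerical dissipation
```

High-order spectral element methods of the kind the paper proposes keep both the amplitude loss and the phase error far smaller than this low-order stand-in, which is precisely what the benchmark is designed to expose.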
Finite element analysis of wrinkling membranes
NASA Technical Reports Server (NTRS)
Miller, R. K.; Hedgepeth, J. M.; Weingarten, V. I.; Das, P.; Kahyai, S.
1984-01-01
The development of a nonlinear numerical algorithm for the analysis of stresses and displacements in partly wrinkled flat membranes, and its implementation on the SAP VII finite-element code are described. A comparison of numerical results with exact solutions of two benchmark problems reveals excellent agreement, with good convergence of the required iterative procedure. An exact solution of a problem involving axisymmetric deformations of a partly wrinkled shallow curved membrane is also reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robin Gordon; Bill Bruce; Ian Harris
2004-04-12
The two broad categories of deposited weld metal repair and fiber-reinforced composite liner repair technologies were reviewed for potential application for internal repair of gas transmission pipelines. Both are used to some extent for other applications and could be further developed for internal, local, structural repair of gas transmission pipelines. Preliminary test programs were developed for both deposited weld metal repair and for fiber-reinforced composite liner repair. Evaluation trials have been conducted using a modified fiber-reinforced composite liner provided by RolaTube and pipe sections without liners. All pipe section specimens failed in areas of simulated damage. Pipe sections containing fiber-reinforced composite liners failed at pressures marginally greater than the pipe sections without liners. The next step is to evaluate a liner material with a modulus of elasticity approximately 95% of the modulus of elasticity for steel. Preliminary welding parameters were developed for deposited weld metal repair in preparation for the receipt of Pacific Gas & Electric's internal pipeline welding repair system (designed specifically for 559 mm (22 in.) diameter pipe) and the receipt of 559 mm (22 in.) pipe sections from Panhandle Eastern. The next steps are to transfer welding parameters to the PG&E system and to pressure test repaired pipe sections to failure. A survey of pipeline operators was conducted to better understand the needs and performance requirements of the natural gas transmission industry regarding internal repair. Completed surveys supported the following principal conclusions: (1) Use of internal weld repair is most attractive for river crossings, under other bodies of water, in difficult soil conditions, under highways, under congested intersections, and under railway crossings.
(2) Internal pipe repair offers a strong potential advantage over the high cost of horizontal directional drilling (HDD) when a new bore must be created to solve a leak or other problem. (3) Typical travel distances can be divided into three distinct groups: up to 305 m (1,000 ft.); between 305 m and 610 m (1,000 ft. and 2,000 ft.); and beyond 914 m (3,000 ft.). All three groups require pig-based systems. A despooled umbilical system would suffice for the first two groups, which represent 81% of survey respondents. The third group would require an onboard self-contained power unit for propulsion and welding/liner repair energy needs. (4) Pipe diameter sizes range from 50.8 mm (2 in.) through 1,219.2 mm (48 in.). The most common size range for 80% to 90% of operators surveyed is 508 mm to 762 mm (20 in. to 30 in.), with 95% using 558.8 mm (22 in.) pipe. An evaluation of potential repair methods clearly indicates that the project should continue to focus on the development of a repair process involving the use of GMAW welding and on the development of a repair process involving the use of fiber-reinforced composite liners.
Improved artificial bee colony algorithm for vehicle routing problem with time windows
Yan, Qianqian; Zhang, Mengjie; Yang, Yunong
2017-01-01
This paper investigates a well-known complex combinatorial problem known as the vehicle routing problem with time windows (VRPTW). Unlike the standard vehicle routing problem, each customer in the VRPTW must be served within a given time window. This paper solves the VRPTW using an improved artificial bee colony (IABC) algorithm. The performance of this algorithm is improved by a local optimization based on a crossover operation and a scanning strategy. Finally, the effectiveness of the IABC is evaluated on some well-known benchmarks. The results demonstrate the power of the IABC algorithm in solving the VRPTW.
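The time-window constraint that distinguishes the VRPTW can be made concrete with a feasibility check for a single route. The sketch below is a generic illustration (the data and names are hypothetical, and it is not part of the IABC algorithm itself): a vehicle may arrive early and wait, but arriving after a customer's due time makes the route infeasible.

```python
def route_feasible(route, travel, ready, due, service, depot_due=float("inf")):
    """Check time-window feasibility of one VRPTW route (a list of
    customer indices; the depot is index 0). Early arrivals wait until
    ready[i]; arrival after due[i] is infeasible."""
    t, prev = 0.0, 0
    for c in route:
        t += travel[prev][c]                 # drive to the next customer
        if t > due[c]:                       # too late: window violated
            return False
        t = max(t, ready[c]) + service[c]    # wait if early, then serve
        prev = c
    return t + travel[prev][0] <= depot_due  # and return to the depot

# Tiny illustrative instance: visiting customer 1 before 2 is feasible,
# but the reverse order violates customer 1's due time.
travel = [[0, 2, 5], [2, 0, 2], [5, 2, 0]]
ready, due, service = [0, 3, 4], [100, 6, 8], [0, 1, 1]
```

Heuristics such as the IABC evaluate many candidate routes against exactly this kind of check, so it sits in the inner loop of the search.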
Adane, Metadel; Mengistie, Bezatu; Medhin, Girmay; Kloos, Helmut; Mulat, Worku
2017-01-01
The problem of intermittent piped water supplies that exists in low- and middle-income countries is particularly severe in the slums of sub-Saharan Africa. However, little is known about whether the microbiological quality of an intermittent piped water supply deteriorates at the household level and whether it is a factor in reducing or increasing the occurrence of acute diarrhea among under-five children in slums of Addis Ababa. This study aimed to determine the association of intermittent piped water supplies and point-of-use (POU) contamination of household stored water by Escherichia coli (E. coli) with acute diarrhea among under-five children in slums of Addis Ababa. A community-based matched case-control study was conducted from November to December 2014. Cases were defined as under-five children with acute diarrhea during the two weeks before the survey. Controls were individually matched with cases by age and neighborhood. Data were collected using a pre-tested structured questionnaire and E. coli analysis of water from piped water supplies and household stored water. The five-tube Most Probable Number (MPN)/100 ml standard procedure was used for E. coli analysis. Multivariable conditional logistic regression with 95% confidence intervals (CI) was used for data analysis, controlling for potential confounding effects of selected socio-demographic characteristics. During the two weeks before the survey, 87.9% of case households and 51.0% of control households had an intermittent piped water supply, for an average of 4.3 days and 3.9 days, respectively. POU contamination of household stored water by E. coli was found in 83.3% of the case households and 52.1% of the control households. In a fully adjusted model, a periodically intermittent piped water supply (adjusted matched odds ratio (adjusted mOR) = 4.8; 95% CI: 1.3-17.8), POU contamination of household stored water by E. coli (adjusted mOR = 3.3; 95% CI: 1.1-10.1), water retrieved from water storage containers using handle-less vessels (adjusted mOR = 16.3; 95% CI: 4.4-60.1), and water retrieved by interchangeably using vessels both with and without handles (adjusted mOR = 5.4; 95% CI: 1.1-29.1) were independently associated with acute diarrhea. We conclude that provision of continuously available piped water supplies and education of caregivers about proper retrieval methods for household stored water can effectively reduce POU contamination of water at the household level and thereby reduce acute diarrhea among under-five children in slums of Addis Ababa. Promotion of household water treatment is also highly encouraged until the City's water authority is able to deliver continuously available piped water supplies.
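The five-tube MPN procedure cited above reads results from published statistical tables; as a rough illustration of the underlying estimate, the sketch below uses Thomas' classical approximation. This is an assumption for illustration: laboratory practice uses the standard tables, and the tube counts shown are hypothetical, not from the study.

```python
import math

def thomas_mpn(results):
    """Thomas' approximation to the Most Probable Number per 100 ml:
    MPN ~ 100 * P / sqrt(V_neg * V_all), where P is the total number of
    positive tubes, V_neg the sample volume (ml) in negative tubes, and
    V_all the total sample volume (ml). `results` lists
    (volume_ml, positive_tubes, total_tubes) per dilution."""
    p = sum(pos for _, pos, _ in results)
    v_all = sum(v * n for v, _, n in results)
    v_neg = sum(v * (n - pos) for v, pos, n in results)
    if v_neg == 0:
        return float("inf")   # every tube positive: off-scale high
    return 100.0 * p / math.sqrt(v_neg * v_all)

# Hypothetical five-tube series at 10, 1, and 0.1 ml inocula:
mpn = thomas_mpn([(10.0, 5, 5), (1.0, 3, 5), (0.1, 1, 5)])
```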
Double wall vacuum tubing and method of manufacture
Stahl, Charles R.; Gibson, Michael A.; Knudsen, Christian W.
1989-01-01
An evacuated double wall tubing is shown together with a method for the manufacture of such tubing, which includes providing a first pipe of predetermined larger diameter and a second pipe having an O.D. substantially smaller than the I.D. of the first pipe. An evacuation opening is then formed in the first pipe. The second pipe is inserted inside the first pipe with an annular space therebetween. The pipes are welded together at one end. A stretching tool is secured to the other end of the second pipe after welding. The second pipe is then prestressed mechanically with the stretching tool by an amount sufficient to prevent substantial buckling of the second pipe under normal operating conditions of the double wall pipe. The other ends of the first pipe and the prestressed second pipe are welded together, preferably by explosion welding, without the introduction of mechanical spacers between the pipes. The annulus between the pipes is evacuated through the evacuation opening, and the evacuation opening is finally sealed. The first pipe is preferably of steel and the second pipe is preferably of titanium. The pipes may be of a size and wall thickness sufficient for the double wall pipe to be structurally load bearing, or they may be of a size and wall thickness insufficient for load bearing, in which case the double wall pipe is positioned with a sliding fit inside a third pipe of a load-bearing size.
A spectral, quasi-cylindrical and dispersion-free Particle-In-Cell algorithm
Lehe, Remi; Kirchen, Manuel; Andriyash, Igor A.; ...
2016-02-17
We propose a spectral Particle-In-Cell (PIC) algorithm that is based on the combination of a Hankel transform and a Fourier transform. For physical problems that have close-to-cylindrical symmetry, this algorithm can be much faster than full 3D PIC algorithms. In addition, unlike standard finite-difference PIC codes, the proposed algorithm is free of spurious numerical dispersion in vacuum. This algorithm is benchmarked in several situations that are of interest for laser-plasma interactions. These benchmarks show that it avoids a number of numerical artifacts that would otherwise affect the physics in a standard PIC algorithm, including the zero-order numerical Cherenkov effect.
Building America Industrialized Housing Partnership (BAIHP)
DOE Office of Scientific and Technical Information (OSTI.GOV)
McIlvaine, Janet; Chandra, Subrato; Barkaszi, Stephen
This final report summarizes the work conducted by the Building America Industrialized Housing Partnership (www.baihp.org) for the period 9/1/99-6/30/06. BAIHP is led by the Florida Solar Energy Center of the University of Central Florida and focuses on factory-built housing. In partnership with over 50 factory and site builders, work was performed in two main areas--research and technical assistance. In the research area--through site visits to over 75 problem homes, we discovered the prime causes of moisture problems in some manufactured homes, and our industry partners adopted our solutions to nearly eliminate this vexing problem. Through testing conducted in over two dozen housing factories of six factory builders, we documented the value of leak-free duct design and construction, which was embraced by our industry partners and implemented in all the thousands of homes they built. Through laboratory test facilities and measurements in real homes we documented the merits of 'cool roof' technologies and developed an innovative night sky radiative cooling concept currently being tested. We patented an energy efficient condenser fan design, documented energy efficient home retrofit strategies after hurricane damage, developed improved specifications for federal procurement for future temporary housing, compared the Building America benchmark to the HERS Index and IECC 2006, developed a toolkit for improving the accuracy and speed of benchmark calculations, monitored the field performance of over a dozen prototype homes, and initiated research on the effectiveness of occupancy feedback in reducing household energy use. In the technical assistance area we provided systems engineering analysis and conducted training, testing, and commissioning that have resulted in over 128,000 factory-built and over 5,000 site-built homes which are saving their owners over $17,000,000 annually in energy bills.
These include homes built by Palm Harbor Homes, Fleetwood, Southern Energy Homes, Cavalier, and the manufacturers participating in the Northwest Energy Efficient Manufactured Home program. We worked with over two dozen Habitat for Humanity affiliates and helped them build over 700 Energy Star or near-Energy Star homes. We have provided technical assistance to several show homes constructed for the International Builders' Show in Orlando, FL, and assisted with other prototype homes in cold climates that save 40% over the benchmark reference. In the Gainesville, FL area we have several builders that are consistently producing 15 to 30 homes per month in several subdivisions that meet the 30% benchmark savings goal. We have contributed to the 2006 DOE Joule goals by providing two community case studies meeting the 30% benchmark goal in marine climates.
Investigation of propulsion system for large LNG ships
NASA Astrophysics Data System (ADS)
Sinha, R. P.; Nik, Wan Mohd Norsani Wan
2012-09-01
Requirements to move away from coal for power generation have made LNG the most sought-after fuel source, raising steep demands on its supply and production. Added to this scenario is the gradual depletion of offshore oil and gas fields, which is pushing future exploration and production activities far away into the hostile environment of the deep sea. Production of gas in such an environment has great technical and commercial impacts on the gas business. For instance, laying gas pipes from the deep sea to distant receiving terminals will be technically and economically challenging. The alternative to laying gas pipes will require installing re-liquefaction units on board FPSOs to convert gas into liquid for transportation by sea. But then, because of the increased distance between gas sources and receiving terminals, the current medium-size LNG ships will no longer remain economical to operate. Recognizing this business scenario, shipowners are making huge investments in the acquisition of large LNG ships. As the power needs of large LNG ships are very different from those of the current small ones, a variety of propulsion derivatives such as UST, DFDE, 2-stroke DRL, and combined-cycle GT have been proposed by leading engine manufacturers. Since the propulsion system constitutes a major element of a ship's capital and life-cycle cost, which of these options is most suited for large LNG ships is currently a major concern of the shipping industry and must be thoroughly assessed. In this paper the authors investigate the relative merits of these propulsion options against the benchmark performance criteria of BOG disposal, fuel consumption, gas emissions, plant availability, and overall life-cycle cost.
Optimal portfolio selection in a Lévy market with uncontrolled cash flow and only risky assets
NASA Astrophysics Data System (ADS)
Zeng, Yan; Li, Zhongfei; Wu, Huiling
2013-03-01
This article considers an investor who has an exogenous cash flow evolving according to a Lévy process and invests in a financial market consisting of only risky assets, whose prices are governed by exponential Lévy processes. Two continuous-time portfolio selection problems are studied for the investor. One is a benchmark problem, and the other is a mean-variance problem. The first problem is solved by adopting the stochastic dynamic programming approach, and the obtained results are extended to the second problem by employing the duality theory. Closed-form solutions of these two problems are derived. Some existing results are found to be special cases of our results.
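A feel for the mean-variance side of the problem can be had from a small Monte Carlo sketch. The model below is a toy stand-in, not the paper's closed-form solution: the single risky asset follows a Merton-style jump-diffusion (one concrete exponential Lévy process), all parameter values are illustrative, and wealth is rebalanced to a constant risky fraction pi at discrete steps.

```python
import math
import random

def poisson(rng, lam):
    # Knuth's inversion method for small rates (number of jumps per step).
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def terminal_wealth_stats(pi, n_paths=1000, n_steps=252, T=1.0, seed=7):
    """Mean and variance of terminal wealth for a constant risky fraction
    pi, with the asset price an exponential Levy (jump-diffusion) process.
    All parameters are illustrative."""
    mu, sig = 0.08, 0.20               # diffusive drift / volatility
    lam, jmu, jsig = 0.5, -0.05, 0.10  # jump intensity / log-jump size
    dt = T / n_steps
    rng = random.Random(seed)
    ws = []
    for _ in range(n_paths):
        w = 1.0
        for _ in range(n_steps):
            dlog = (mu - 0.5 * sig * sig) * dt + sig * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            for _ in range(poisson(rng, lam * dt)):  # compound-Poisson jumps
                dlog += rng.gauss(jmu, jsig)
            w *= 1.0 + pi * (math.exp(dlog) - 1.0)   # rebalance to fraction pi
        ws.append(w)
    m = sum(ws) / n_paths
    v = sum((x - m) ** 2 for x in ws) / (n_paths - 1)
    return m, v
```

Sweeping pi traces a mean-variance trade-off numerically; the paper derives the corresponding continuous-time solutions in closed form via dynamic programming and duality.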
AN OPTIMAL ADAPTIVE LOCAL GRID REFINEMENT APPROACH TO MODELING CONTAMINANT TRANSPORT
A Lagrangian-Eulerian method with an optimal adaptive local grid refinement is used to model contaminant transport equations. pplication of this approach to two bench-mark problems indicates that it completely resolves difficulties of peak clipping, numerical diffusion, and spuri...
Real-case benchmark for flow and tracer transport in the fractured rock
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hokr, M.; Shao, H.; Gardner, W. P.
The paper is intended to define a benchmark problem related to groundwater flow and natural tracer transport using observations of discharge and isotopic tracers in fractured, crystalline rock. Three numerical simulators, Flow123d, OpenGeoSys, and PFLOTRAN, are compared. The data utilized in the project were collected in a water-supply tunnel in granite of the Jizera Mountains, Bedrichov, Czech Republic. The problem configuration combines subdomains of different dimensions, a 3D continuum for hard-rock blocks or matrix and 2D features for fractures or fault zones, together with realistic boundary conditions for tunnel-controlled drainage. Steady-state and transient flow and a pulse-injection tracer transport problem are solved. The results confirm mostly consistent behavior of the codes. The codes Flow123d and OpenGeoSys, both with 3D-2D coupling implemented, differ by several percent in most cases, which is attributable to, e.g., effects of discrete unknown placement in the mesh. Some of the PFLOTRAN results differ more, which can be explained by effects of the dispersion tensor evaluation scheme and of numerical diffusion; this phenomenon can become stronger with fracture/matrix coupling and with contrasts in parameter magnitude. Although the study was not aimed at inverse solution, the models were fit to the measured data approximately, demonstrating the intended real-case relevance of the benchmark.
NASA Technical Reports Server (NTRS)
Baskaran, S.
1974-01-01
The cut-off frequencies for high-order circumferential modes were calculated for various eccentricities of an elliptic duct section. The problem was studied with a view to the reduction of jet engine compressor noise by elliptic ducts instead of circular ducts. The cut-off frequencies for even functions decrease with increasing eccentricity. The third-order eigenfrequencies for odd functions oscillate as the eccentricity increases. The eigenfrequencies decrease for higher-order odd functions inasmuch as, for higher orders, they assume the same values as those for even functions. Deformation of a circular pipe into an elliptic one of sufficiently large eccentricity produces only a small reduction in the cut-off frequency, provided the area of the pipe section is kept constant.
NASA Astrophysics Data System (ADS)
Haavisto, Sanna; Cardona, Maria J.; Salmela, Juha; Powell, Robert L.; McCarthy, Michael J.; Kataja, Markku; Koponen, Antti I.
2017-11-01
A hybrid multi-scale velocimetry method utilizing Doppler optical coherence tomography in combination with either magnetic resonance imaging or ultrasound velocity profiling is used to investigate pipe flow of four rheologically different working fluids under varying flow regimes. These fluids include water, an aqueous xanthan gum solution, a softwood fiber suspension, and a microfibrillated cellulose suspension. The measurement setup enables not only the analysis of the rheological (bulk) behavior of a studied fluid but simultaneously gives information on its wall-layer dynamics, both of which are needed for analyzing and solving practical fluid flow-related problems. Preliminary novel results on the rheological and boundary-layer flow properties of the working fluids are reported, and the potential of the hybrid measurement setup is demonstrated.
I-NERI Quarterly Technical Report (April 1 to June 30, 2005)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang Oh; Prof. Hee Cheon NO; Prof. John Lee
2005-06-01
The objective of this Korean/United States/laboratory/university collaboration is to develop new advanced computational methods for safety analysis codes for very-high-temperature gas-cooled reactors (VHTGRs) and numerical and experimental validation of these computer codes. This study consists of five tasks for FY-03: (1) development of computational methods for the VHTGR, (2) theoretical modification of the aforementioned computer codes for molecular diffusion (RELAP5/ATHENA) and modeling CO and CO2 equilibrium (MELCOR), (3) development of a state-of-the-art methodology for VHTGR neutronic analysis and calculation of accurate power distributions and decay heat deposition rates, (4) a reactor cavity cooling system experiment, and (5) a graphite oxidation experiment. Second quarter of Year 3: (A) Prof. NO and Kim continued Task 1. As a further plant application of the GAMMA code, we conducted two analyses: an IAEA GT-MHR benchmark calculation for LPCC and an air ingress analysis for the PMR 600MWt. The GAMMA code shows peak fuel temperature trends comparable to those of other countries' codes. The analysis results for air ingress show a much different trend from that of the previous PBR analysis: later onset of natural circulation and a less significant rise in graphite temperature. (B) Prof. Park continued Task 2. We have designed a new separate-effect test device having the same heat transfer area but different diameter and total number of U-bends of the air cooling pipe. The new design has a smaller pressure drop in the air cooling pipe than the previous one, as it was designed with a larger diameter and fewer U-bends. With the device, additional experiments have been performed to obtain temperature distributions of the water tank and of the surface and center of the cooling pipe along its axis. The results will be used to optimize the design of SNU-RCCS. (C) Prof. NO continued Task 3.
The experimental work on air ingress is going on without any concern: with nuclear graphite IG-110, various kinetic parameters and reaction rates for the C/CO2 reaction were measured. The rates of the C/CO2 reaction were then compared to those of the C/O2 reaction. The rate equation for C/CO2 has been developed. (D) INL added models to RELAP5/ATHENA to calculate the chemical reactions in a VHTR during an air ingress accident. Limited testing of the models indicates that they are calculating a correct spatial distribution of gas compositions. (E) INL benchmarked NACOK natural circulation data. (F) Professor Lee et al. at the University of Michigan (UM) continued Task 5. The funding was received from the DOE Richland Office at the end of May and the subcontract paperwork was delivered to the UM on the sixth of June. The objective of this task is to develop a state-of-the-art neutronics model for determining power distributions and decay heat deposition rates in a VHTGR core. Our effort during the reporting period covered reactor physics analysis of coated particles and coupled nuclear-thermal-hydraulic (TH) calculations, together with initial calculations for decay heat deposition rates in the core.
Heat Pipe Vapor Dynamics. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Issacci, Farrokh
1990-01-01
The dynamic behavior of the vapor flow in heat pipes is investigated at startup and during operational transients. The vapor is modeled as two-dimensional, compressible viscous flow in an enclosure with inflow and outflow boundary conditions. For steady-state and operating transients, the SIMPLER method is used. In this method a control volume approach is employed on a staggered grid which makes the scheme very stable. It is shown that for relatively low input heat fluxes the compressibility of the vapor flow is low and the SIMPLER scheme is suitable for the study of transient vapor dynamics. When the input heat flux is high or the process under a startup operation starts at very low pressures and temperatures, the vapor is highly compressible and a shock wave is created in the evaporator. It is shown that for a wide range of input heat fluxes, the standard methods, including the SIMPLER scheme, are not suitable. A nonlinear filtering technique, along with the centered difference scheme, are then used for shock capturing as well as for the solution of the cell Reynolds-number problem. For high heat flux, the startup transient phase involves multiple shock reflections in the evaporator region. Each shock reflection causes a significant increase in the local pressure and a large pressure drop along the heat pipe. Furthermore, shock reflections cause flow reversal in the evaporation region and flow circulations in the adiabatic region. The maximum and maximum-averaged pressure drops in different sections of the heat pipe oscillate periodically with time because of multiple shock reflections. The pressure drop converges to a constant value at steady state. However, it is significantly higher than its steady-state value at the initiation of the startup transient. The time for the vapor core to reach steady-state condition depends on the input heat flux, the heat pipe geometry, the working fluid, and the condenser conditions. 
However, the vapor transient time for an Na-filled heat pipe is on the order of seconds. Depending on the time constant for the overall system, the vapor transient time may be very short. Therefore, the vapor core may be assumed to be quasi-steady in the transient analysis of heat pipe operation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom; Cristian Rabiti; Andrea Alfonsi
2012-10-01
PHISICS is a neutronics code system currently under development at the Idaho National Laboratory (INL). Its goal is to provide state-of-the-art simulation capability to reactor designers. The different modules of PHISICS currently under development are a nodal and semi-structured transport core solver (INSTANT), a depletion module (MRTAU), and a cross section interpolation (MIXER) module. The INSTANT module is the most developed of the modules mentioned above. Basic functionalities are ready to use, but the code is still in continuous development to extend its capabilities. This paper reports on the effort of coupling the nodal kinetics code package PHISICS (INSTANT/MRTAU/MIXER) to the thermal hydraulics system code RELAP5-3D, to enable full core and system modeling. This opens the possibility of modeling coupled (thermal-hydraulics and neutronics) problems with more options for 3D neutron kinetics, compared to the existing diffusion theory neutron kinetics module in RELAP5-3D (NESTLE). In the second part of the paper, an overview of the OECD/NEA MHTGR-350 MW benchmark is given. This benchmark has been approved by the OECD and is based on the General Atomics 350 MW Modular High Temperature Gas Reactor (MHTGR) design. The benchmark includes coupled neutronics thermal hydraulics exercises that require more capabilities than RELAP5-3D with NESTLE offers. Therefore, the MHTGR benchmark makes extensive use of the new PHISICS/RELAP5-3D coupling capabilities. The paper presents the preliminary results of the three steady-state exercises specified in Phase I of the benchmark using PHISICS/RELAP5-3D.
ERIC Educational Resources Information Center
Binous, Housam
2007-01-01
We study four non-Newtonian fluid mechanics problems using Mathematica[R]. Constitutive equations describing the behavior of power-law, Bingham and Carreau models are recalled. The velocity profile is obtained for the horizontal flow of power-law fluids in pipes and annuli. For the vertical laminar film flow of a Bingham fluid we determine the…
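For the pipe-flow case, the power-law velocity profile has a well-known closed form; a brief sketch follows (parameter values are hypothetical, and the function name is ours, not from the paper):

```python
import numpy as np

def power_law_pipe_velocity(r, R, dP_per_L, m, n):
    """Axial velocity for laminar pipe flow of a power-law fluid
    (consistency m, flow index n) under pressure gradient dP/dx:

        u(r) = n/(n+1) * (dP_per_L / (2*m))**(1/n)
                       * (R**((n+1)/n) - r**((n+1)/n))

    Reduces to the Newtonian (Poiseuille) profile when n = 1, m = mu.
    """
    k = (n + 1.0) / n
    return (n / (n + 1.0)) * (dP_per_L / (2.0 * m)) ** (1.0 / n) * (
        R**k - np.abs(r) ** k
    )

# Illustrative numbers: R = 0.01 m, dP/dx = 1e4 Pa/m, water-like mu = 1e-3
R, dPL = 0.01, 1.0e4
u_center = power_law_pipe_velocity(0.0, R, dPL, m=1.0e-3, n=1.0)
# Poiseuille check: u_max = (dP/dx) * R**2 / (4*mu) = 1e4 * 1e-4 / 4e-3 = 250
```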
NASA Astrophysics Data System (ADS)
Wang, Kuo-Lung; Lin, Jun-Tin; Lee, Yi-Hsuan; Lin, Meei-Ling; Chen, Chao-Wei; Liao, Ray-Tang; Chi, Chung-Chi; Lin, Hsi-Hung
2016-04-01
Landslides become hazards only where development encroaches on high-potential terrain. This study attempts to map a deep-seated landslide before its initiation. The study area, in central Taiwan, has a distinctive geological setting: the bedrock is slate, the major bedding direction is northeast, and dips range from 30 to 75 degrees to the southeast. Several deep-seated landslides on dip slopes of this bedding have been triggered by rainfall events. Benchmark survey data from 2002 to 2009 are used in this study; the benchmarks were measured along Highway No. 14B, which was constructed along the mountain ridgeline. Taiwan lies between oceanic plates and a continental plate, and most GPS stations and benchmarks on the island record rising mountain elevations. The same trend is seen in the benchmarks of this area, but benchmarks located within the landslide area show below-average and even negative elevation change. Aerial photos from 1979 to 2007 were used to generate orthophotos; the changes in land use over those 30 years are obvious, and enlargement of the river channel is also observed. Both the benchmarks and the aerial photos indicate that landslide potential exists in the area, but the extent of the landslide is difficult to define from them alone. SAR data are therefore adopted. DInSAR and SBAS analyses are applied to ALOS/PALSAR data from 2006 to 2010. DInSAR analysis shows that the landslide can be mapped, but the error, which likely arises from vegetation, clouds, water vapor, and similar conditions, is difficult to reduce. To overcome this, the time-series SBAS analysis is used; its results show that large deep-seated landslides in the area are readily mapped and the accuracy of the vertical displacement is reasonable.
Computation of Turbulent Recirculating Flow in Channels, and for Impingement Cooling
NASA Technical Reports Server (NTRS)
Chang, Byong Hoon
1992-01-01
Fully elliptic forms of the transport equations have been solved numerically for two flow configurations. The first is turbulent flow in a channel with transverse rectangular ribs, and the second is impingement cooling of a plane surface. Both flows are relevant to proposed designs for active cooling of hypersonic vehicles using supercritical hydrogen as the coolant. Flow downstream of an abrupt pipe expansion and of a backward-facing step were also solved with various near-wall turbulence models as benchmark problems. A simple form of periodicity boundary condition was used for the channel flow with transverse rectangular ribs. The effects of various parameters on heat transfer in channel flow with transverse ribs and in impingement cooling were investigated using the Yap modified Jones and Launder low Reynolds number k-epsilon turbulence model. For the channel flow, predictions were in adequate agreement with experiment for constant property flow, with the results for friction superior to those for heat transfer. For impingement cooling, the agreement with experiment was generally good, but the results suggest that improved modelling of the dissipation rate of turbulence kinetic energy is required in order to obtain improved heat transfer prediction, especially near the stagnation point. The k-epsilon turbulence model was used to predict the mean flow and heat transfer for constant and variable property flows. The effect of variable properties for channel flow was investigated using the same turbulence model, but comparison with experiment yielded no clear conclusions. Also, the wall function method was modified for use in the variable properties flow with a non-adiabatic surface, and an empirical model is suggested to correctly account for the behavior of the viscous sublayer with heating.
Simulation of underwater explosion benchmark experiments with ALE3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couch, R.; Faux, D.
1997-05-19
Some code improvements have been made during the course of this study. One immediately obvious need was for more flexibility in the constitutive representation for materials in shell elements. To remedy this situation, a model with a tabular representation of stress versus strain and rate-dependent effects was implemented. This was required in order to obtain reasonable results in the IED cylinder simulation. Another deficiency was in the ability to extract and plot variables associated with shell elements. The pipe whip analysis required the development of a scheme to tally and plot time-dependent shell quantities such as stresses and strains. This capability had previously existed only for solid elements. Work was initiated to provide the same range of plotting capability for structural elements that exists with the DYNA3D/TAURUS tools. One of the characteristics of these problems is the disparity between the zoning required in the vicinity of the charge and bubble and that needed in the far field. This disparity can cause the equipotential relaxation logic to provide a less than optimal solution. Various approaches were utilized to bias the relaxation toward more optimal meshing. Extensions of these techniques have been developed to provide more powerful options, but more work still needs to be done. The results presented here are representative of what can be produced with an ALE code structured like ALE3D. They are not necessarily the best results that could have been obtained. More experience in assessing sensitivities to meshing and boundary conditions would be very useful. A number of code deficiencies discovered in the course of this work have been corrected and are available for any future investigations.
Ge, Liang; Sotiropoulos, Fotis
2007-08-01
A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [1]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus.
Ge, Liang; Sotiropoulos, Fotis
2008-01-01
A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [1]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus. PMID:19194533
NASA Astrophysics Data System (ADS)
Yang, Jinyeol; Lee, Hyeonseok; Lim, Hyung Jin; Kim, Nakhyeon; Yeo, Hwasoo; Sohn, Hoon
2013-08-01
This study develops an embeddable optical fiber-guided laser ultrasonic system for structural health monitoring (SHM) of pipelines exposed to high temperature and gamma radiation inside nuclear power plants (NPPs). Recently, noncontact laser ultrasonics has gained popularity in the SHM community because of its advantageous characteristics, such as (a) scanning capability, (b) immunity against electromagnetic interference (EMI), and (c) applicability to high-temperature surfaces. However, its application to NPP pipelines has been hampered because pipes inside NPPs are often covered by insulation and/or the target surfaces are not easily accessible. To overcome this problem, this study designs embeddable optical fibers and fixtures so that the laser beams used for ultrasonic inspection can be transmitted between the laser sources and the target pipe. For guided-wave generation, an Nd:YAG pulsed laser coupled with an optical fiber is used. A high-power pulsed laser beam is guided through the optical fiber onto a target structure. Based on the principle of laser interferometry, the corresponding response is measured using a different type of laser beam guided by another optical fiber. All devices are specially designed to withstand high temperature and gamma radiation. The robustness and resilience of the proposed measurement system installed on a stainless steel pipe specimen have been experimentally verified by exposing the specimen to temperatures of up to 350 °C and the optical fibers to gamma radiation of up to 125 kGy (20 kGy h-1).
A mathematical model to predict the effect of heat recovery on the wastewater temperature in sewers.
Dürrenmatt, David J; Wanner, Oskar
2014-01-01
Raw wastewater contains considerable amounts of energy that can be recovered by means of a heat pump and a heat exchanger installed in the sewer. The technique is well established, and there are approximately 50 facilities in Switzerland, many of which have been successfully using this technique for years. The planning of new facilities requires predictions of the effect of heat recovery on the wastewater temperature in the sewer because altered wastewater temperatures may cause problems for the biological processes used in wastewater treatment plants and receiving waters. A mathematical model is presented that calculates the discharge in a sewer conduit and the spatial profiles and dynamics of the temperature in the wastewater, sewer headspace, pipe, and surrounding soil. The model was implemented in the simulation program TEMPEST and was used to evaluate measured time series of discharge and temperatures. It was found that the model adequately reproduces the measured data and that the temperature and thermal conductivity of the soil and the distance between the sewer pipe and undisturbed soil are the most sensitive model parameters. The temporary storage of heat in the pipe wall and the exchange of heat between wastewater and the pipe wall are the most important processes for heat transfer. The model can be used as a tool to determine the optimal site for heat recovery and the maximal amount of extractable heat. Copyright © 2013 Elsevier Ltd. All rights reserved.
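The heat-balance idea behind such a model can be sketched in a few lines; the lumped formulation and every parameter value below are illustrative assumptions, not the TEMPEST equations:

```python
import math

def sewer_temperature(x, T_in, T_soil, U, perimeter, mdot, cp=4181.0):
    """Steady-state wastewater temperature at distance x [m] downstream of a
    heat-recovery site, from a lumped heat balance

        mdot * cp * dT/dx = -U * perimeter * (T - T_soil),

    whose solution relaxes exponentially toward the soil temperature. U lumps
    pipe-wall and soil resistance into one coefficient [W/(m^2 K)]; this is a
    much-simplified stand-in for the full model, not its actual equations."""
    k = U * perimeter / (mdot * cp)  # inverse relaxation length [1/m]
    return T_soil + (T_in - T_soil) * math.exp(-k * x)

# Hypothetical numbers: 15 degC inflow, 8 degC soil, U = 5 W/(m^2 K),
# wetted perimeter 1 m, dry-weather flow 20 kg/s
T_after_200m = sewer_temperature(200.0, 15.0, 8.0, 5.0, 1.0, 20.0)
```

With larger U or longer reach, the predicted temperature approaches the soil temperature, which is why soil properties are among the most sensitive parameters.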
The Case for a Heat-Pipe Phase of Planet Evolution on the Moon
NASA Technical Reports Server (NTRS)
Simon, J. I.; Moore, W. B.; Webb, A. A. G.
2015-01-01
The prevalence of anorthosite in the lunar highlands is generally attributed to the flotation of less dense plagioclase in the late stages of the solidification of the lunar magma ocean. It is not clear, however, that these models are capable of producing the extremely high plagioclase contents (near 100%) observed in both Apollo samples and remote sensing data, since a mostly solid lithosphere forms (at 60-70% solidification) before plagioclase feldspar reaches saturation (at approximately 80% solidification). Formation as a floating cumulate is made even more problematic by the near uniformity of the alkali composition of the plagioclase, even as the mafic phases record significant variations in Mg/(Mg+Fe) ratios. These problems can be resolved for the Moon if the plagioclase-rich crust is produced and refined through a widespread episode of heat-pipe magmatism rather than a process dominated by density-driven plagioclase flotation. Heat-pipes are an important feature of terrestrial planets at high heat flow, as illustrated by Io's present activity. Evidence for their operation early in Earth's history suggests that all terrestrial bodies should experience an early episode of heat-pipe cooling. As the Moon likely represents the most well-preserved example of early planetary thermal evolution in our solar system, studies of the lunar surface and of lunar materials provide useful data to test the idea of a universal model of the way terrestrial bodies transition from a magma ocean state into subsequent single-plate, rigid-lid convection or plate tectonic phases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bufe, M.
Every leaking gas main presents a challenge. But when that leaking main follows a series of sweeping bends on its route through North America's largest urban wildlife park, it presents a special challenge. That was the situation Consolidated Edison Company of New York Inc. recently faced when numerous leaks were discovered along 3,000 feet of a 95-year-old, 12-inch cast-iron main that winds through New York City's Bronx Zoo. Con Edison knew it needed a long-term solution to this increasingly common problem. Thanks to the cooperative efforts of the utility's operations and research departments and the Gas Research Institute (GRI), Con Edison also knew about a trenchless technology that could renew the pipe with minimal impact to the environment, while saving the utility a substantial sum of money in the process. In the Paltem system, a woven polyester hose is injected with an epoxy resin and then inverted (turned inside out) into a pipe at an access pit. The liner then bonds to the inside wall of the pipe, forming a smooth, flexible and pressure-resistant lining. The process is environmentally safe, requires very little excavation and can be installed in days, rather than the weeks or months it can take to dig up and replace existing pipe.
Exploring New Physics Frontiers Through Numerical Relativity.
Cardoso, Vitor; Gualtieri, Leonardo; Herdeiro, Carlos; Sperhake, Ulrich
2015-01-01
The demand to obtain answers to highly complex problems within strong-field gravity has been met with significant progress in the numerical solution of Einstein's equations - along with some spectacular results - in various setups. We review techniques for solving Einstein's equations in generic spacetimes, focusing on fully nonlinear evolutions but also on how to benchmark those results with perturbative approaches. The results address problems in high-energy physics, holography, mathematical physics, fundamental physics, astrophysics and cosmology.
2017-01-01
Computational scientists have designed many useful algorithms by exploring biological processes or imitating natural evolution, and these algorithms can be used to solve engineering optimization problems. Inspired by changes in the state of matter, we propose a novel optimization algorithm called the differential cloud particles evolution algorithm based on a data-driven mechanism (CPDD). In the proposed algorithm, the optimization process is divided into two stages: a fluid stage and a solid stage. In the fluid stage, the algorithm integrates global exploration with local exploitation; in the solid stage, it carries out mainly local exploitation. The quality of the solution and the efficiency of the search are influenced greatly by the control parameters, so the data-driven mechanism is designed to obtain better control parameters and ensure good performance on numerical benchmark problems. To verify the effectiveness of CPDD, numerical experiments are carried out on all of the CEC2014 contest benchmark functions. Finally, two application problems involving artificial neural networks are examined. The experimental results show that CPDD is competitive with eight other state-of-the-art intelligent optimization algorithms. PMID:28761438
A new effective operator for the hybrid algorithm for solving global optimisation problems
NASA Astrophysics Data System (ADS)
Duc, Le Anh; Li, Kenli; Nguyen, Tien Trong; Yen, Vu Minh; Truong, Tung Khac
2018-04-01
Hybrid algorithms have recently been used to solve complex single-objective optimisation problems, with the ultimate goal of finding an optimal global solution. Building on existing algorithms (HP_CRO, PSO, RCCRO), this study proposes a new hybrid algorithm called MPC (Mean-PSO-CRO), which utilises a new Mean-Search Operator. By employing this operator, the proposed algorithm improves its search ability in areas of the solution space that the operators of previous algorithms do not explore, and thereby finds better solutions. Moreover, the authors propose two parameters: one for balancing local and global search, and one for balancing the various types of local search. In addition, three versions of this operator, which use different constraints, are introduced. Experimental results on 23 benchmark functions used in previous works show that the proposed framework finds better optimal or close-to-optimal solutions with faster convergence for most of the benchmark functions, especially the high-dimensional ones. Thus, the proposed algorithm is more effective at solving single-objective optimisation problems than the other existing algorithms.
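The abstract does not give the operator's exact definition, so the following is only a plausible sketch of a mean-based search operator: form the centroid of the better half of the population, perturb it, and keep the trial point only if it beats the current best. The selection size and perturbation scale are our assumptions.

```python
import random

def sphere(x):
    """Benchmark objective: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(v * v for v in x)

def mean_search_operator(population, fitness, sigma=0.1, rng=random):
    """Illustrative mean-based search step (not the MPC paper's exact rules):
    centroid of the elite half, plus Gaussian noise, accepted greedily."""
    ranked = sorted(population, key=fitness)
    elite = ranked[: max(2, len(ranked) // 2)]
    dim = len(elite[0])
    centroid = [sum(ind[d] for ind in elite) / len(elite) for d in range(dim)]
    trial = [c + rng.gauss(0.0, sigma) for c in centroid]
    best = ranked[0]
    return trial if fitness(trial) < fitness(best) else best

population = [[random.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(20)]
new_best = mean_search_operator(population, sphere)
```

Because the trial is accepted only when it improves on the incumbent, the operator can never make the best-so-far solution worse.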
A Diagnostic Assessment of Evolutionary Multiobjective Optimization for Water Resources Systems
NASA Astrophysics Data System (ADS)
Reed, P.; Hadka, D.; Herman, J.; Kasprzyk, J.; Kollat, J.
2012-04-01
This study contributes a rigorous diagnostic assessment of state-of-the-art multiobjective evolutionary algorithms (MOEAs) and highlights key advances that the water resources field can exploit to better discover the critical tradeoffs constraining our systems. This study provides the most comprehensive diagnostic assessment of MOEAs for water resources to date, exploiting more than 100,000 MOEA runs and trillions of design evaluations. The diagnostic assessment measures the effectiveness, efficiency, reliability, and controllability of ten benchmark MOEAs for a representative suite of water resources applications addressing rainfall-runoff calibration, long-term groundwater monitoring (LTM), and risk-based water supply portfolio planning. The suite of problems encompasses a range of challenging problem properties including (1) many-objective formulations with 4 or more objectives, (2) multi-modality (or false optima), (3) nonlinearity, (4) discreteness, (5) severe constraints, (6) stochastic objectives, and (7) non-separability (also called epistasis). The applications are representative of the dominant problem classes that have shaped the history of MOEAs in water resources and that will be dominant foci in the future. Recommendations are provided for which modern MOEAs should serve as tools and benchmarks in the future water resources literature.
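Diagnostic assessments of MOEA effectiveness commonly rely on the hypervolume indicator; a minimal two-objective (minimization) version is sketched below (the study's actual metric suite may differ):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective minimization front relative to a
    reference point: the area dominated by the front and bounded by ref.
    Larger is better; it rewards both convergence and spread."""
    # Keep only points that dominate the reference point, then sweep by f1.
    pts = sorted(p for p in front if p[0] <= ref[0] and p[1] <= ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # point is non-dominated in the sweep
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

hv = hypervolume_2d([(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)], ref=(4.0, 4.0))  # 6.0
```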
A hybridizable discontinuous Galerkin method for modeling fluid-structure interaction
NASA Astrophysics Data System (ADS)
Sheldon, Jason P.; Miller, Scott T.; Pitt, Jonathan S.
2016-12-01
This work presents a novel application of the hybridizable discontinuous Galerkin (HDG) finite element method to the multi-physics simulation of coupled fluid-structure interaction (FSI) problems. Recent applications of the HDG method have primarily been for single-physics problems including both solids and fluids, which are necessary building blocks for FSI modeling. Utilizing these established models, HDG formulations for linear elastostatics, a nonlinear elastodynamic model, and arbitrary Lagrangian-Eulerian Navier-Stokes are derived. The elasticity formulations are written in a Lagrangian reference frame, with the nonlinear formulation restricted to hyperelastic materials. With these individual solid and fluid formulations, the remaining challenge in FSI modeling is coupling together their disparate mathematics on the fluid-solid interface. This coupling is presented, along with the resultant HDG FSI formulation. Verification of the component models, through the method of manufactured solutions, is performed and each model is shown to converge at the expected rate. The individual components, along with the complete FSI model, are then compared to the benchmark problems proposed by Turek and Hron [1]. The solutions from the HDG formulation presented in this work trend towards the benchmark as the spatial polynomial order and the temporal order of integration are increased.
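The claim that each model "converges at the expected rate" is typically checked by computing an observed order of accuracy from errors on successively refined meshes; a minimal helper (the error values below are made up for illustration):

```python
import math

def observed_order(h_coarse, e_coarse, h_fine, e_fine):
    """Observed order of accuracy from errors on two meshes, the standard
    check behind a method-of-manufactured-solutions verification:
        p = log(e_coarse / e_fine) / log(h_coarse / h_fine)"""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# A second-order scheme halving h should roughly quarter the error:
p = observed_order(0.1, 4.0e-3, 0.05, 1.0e-3)  # approximately 2.0
```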
A hybridizable discontinuous Galerkin method for modeling fluid–structure interaction
Sheldon, Jason P.; Miller, Scott T.; Pitt, Jonathan S.
2016-08-31
This study presents a novel application of the hybridizable discontinuous Galerkin (HDG) finite element method to the multi-physics simulation of coupled fluid–structure interaction (FSI) problems. Recent applications of the HDG method have primarily been for single-physics problems including both solids and fluids, which are necessary building blocks for FSI modeling. Utilizing these established models, HDG formulations for linear elastostatics, a nonlinear elastodynamic model, and arbitrary Lagrangian–Eulerian Navier–Stokes are derived. The elasticity formulations are written in a Lagrangian reference frame, with the nonlinear formulation restricted to hyperelastic materials. With these individual solid and fluid formulations, the remaining challenge in FSI modeling is coupling together their disparate mathematics on the fluid–solid interface. This coupling is presented, along with the resultant HDG FSI formulation. Verification of the component models, through the method of manufactured solutions, is performed and each model is shown to converge at the expected rate. The individual components, along with the complete FSI model, are then compared to the benchmark problems proposed by Turek and Hron [1]. The solutions from the HDG formulation presented in this work trend towards the benchmark as the spatial polynomial order and the temporal order of integration are increased.
A symmetry measure for damage detection with mode shapes
NASA Astrophysics Data System (ADS)
Chen, Justin G.; Büyüköztürk, Oral
2017-11-01
This paper introduces a feature for detecting damage or changes in structures, the continuous symmetry measure, which can quantify the amount of a particular rotational, mirror, or translational symmetry in a mode shape of a structure. Many structures in the built environment have geometries that are either symmetric or almost symmetric, however damage typically occurs in a local manner causing asymmetric changes in the structure's geometry or material properties, and alters its mode shapes. The continuous symmetry measure can quantify these changes in symmetry as a novel indicator of damage for data-based structural health monitoring approaches. This paper describes the concept as a basis for detecting changes in mode shapes and detecting structural damage. Application of the method is demonstrated in various structures with different symmetrical properties: a pipe cross-section with a finite element model and experimental study, the NASA 8-bay truss model, and the simulated IASC-ASCE structural health monitoring benchmark structure. The applicability and limitations of the feature in applying it to structures of varying geometries is discussed.
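The abstract does not give the measure's exact formula; one simple continuous-symmetry-style formulation, assuming the symmetry operation is available as a matrix S, is sketched below:

```python
import numpy as np

def symmetry_measure(mode_shape, S):
    """Squared distance from a mode shape to its projection onto the subspace
    invariant under the symmetry operation S (a permutation/reflection
    matrix), normalized by the shape's squared norm. 0 means perfectly
    symmetric; local damage that breaks the symmetry raises the value.
    This is one plausible formulation, not necessarily the paper's."""
    v = np.asarray(mode_shape, dtype=float)
    v_sym = 0.5 * (v + S @ v)  # symmetric part of v under S
    return float(np.linalg.norm(v - v_sym) ** 2 / np.linalg.norm(v) ** 2)

S_mirror = np.fliplr(np.eye(4))  # mirror symmetry: reverse the 4 sample points
m_intact = symmetry_measure([1.0, 2.0, 2.0, 1.0], S_mirror)   # 0.0
m_damaged = symmetry_measure([1.0, 2.0, 2.1, 1.0], S_mirror)  # small positive
```

A monitoring scheme would then track this scalar over time and flag growth beyond the measurement-noise floor.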
Selecting and implementing the PBS scheduler on an SGI Onyx 2/Origin 2000.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bittner, S.
1999-06-28
In the Mathematics and Computer Science Division at Argonne, the demand for resources on the Onyx 2 exceeds the resources available. To distribute these scarce resources effectively, we need a scheduling and resource management package with multiple capabilities. In particular, it must accept standard interactive user logins, allow batch jobs, backfill the system based on available resources, and permit system activities such as accounting to proceed without interruption. The package must include a mechanism to treat the graphics pipes as a schedulable resource. Also required are the ability to create advance reservations, dedicated system modes for large resource runs and benchmarking, and tracking of the resources consumed by each job. Furthermore, our users want to be able to obtain repeatable timing results on job runs. And, of course, package costs must be carefully considered. We explored several options, including NQE and various third-party products, before settling on the PBS scheduler.
Modeling Longitudinal Dynamics in the Fermilab Booster Synchrotron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ostiguy, Jean-Francois; Bhat, Chandra; Lebedev, Valeri
2016-06-01
The PIP-II project will replace the existing 400 MeV linac with a new, CW-capable, 800 MeV superconducting one. With respect to current operations, a 50% increase in beam intensity in the rapid-cycling Booster synchrotron is expected. Booster batches are combined in the Recycler ring; this process limits the allowed longitudinal emittance of the extracted Booster beam. To suppress eddy currents, the Booster has no beam pipe; its magnets are evacuated, exposing the beam to the core laminations, and this has a substantial impact on the longitudinal impedance. Noticeable longitudinal emittance growth is already observed at transition crossing, and operation at higher intensity will likely necessitate mitigation measures. We describe systematic efforts to construct a predictive model for current operating conditions. A longitudinal-only code including a laminated-wall impedance model, space-charge effects, and feedback loops is developed. Parameter validation is performed using detailed measurements of relevant beam, RF, and control parameters. An attempt is made to benchmark the code at operationally favorable machine settings.
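At its core, a longitudinal-only code iterates a phase/energy map turn by turn; a bare-bones sketch follows, without the impedance, space-charge, and feedback physics the paper adds, and with invented parameter values:

```python
import math

def track_longitudinal(phi0, dE0, turns, V, phi_s, h, eta, beta, E):
    """Minimal single-particle longitudinal tracking map (all example
    parameter values are illustrative, not Booster settings):

        dE_{n+1}  = dE_n + V*(sin(phi_n) - sin(phi_s))       # rf kick [eV]
        phi_{n+1} = phi_n + 2*pi*h*eta*dE_{n+1}/(beta^2 * E)  # phase slip
    """
    phi, dE, out = phi0, dE0, []
    for _ in range(turns):
        dE += V * (math.sin(phi) - math.sin(phi_s))
        phi += 2.0 * math.pi * h * eta * dE / (beta * beta * E)
        out.append((phi, dE))
    return out

# Small synchrotron oscillation about a stationary bucket (made-up numbers)
traj = track_longitudinal(phi0=0.1, dE0=0.0, turns=2000, V=1.0e5,
                          phi_s=0.0, h=84, eta=-0.02, beta=0.7, E=1.0e9)
```

A production code would add an impedance-induced voltage and space-charge term to the kick each turn; this sketch only shows the bare synchrotron motion.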
NASA Technical Reports Server (NTRS)
Bandyopadhyay, Alak; Majumdar, Alok
2007-01-01
The present paper describes the verification and validation of a quasi-one-dimensional pressure-based finite volume algorithm, implemented in the Generalized Fluid System Simulation Program (GFSSP), for predicting compressible flow with friction, heat transfer, and area change. The numerical predictions were compared with two classical solutions of compressible flow, i.e., Fanno and Rayleigh flow. Fanno flow provides an analytical solution for compressible flow in a long slender pipe in which incoming subsonic flow can be choked due to friction. Rayleigh flow, on the other hand, provides an analytical solution for frictionless compressible flow with heat transfer, in which incoming subsonic flow can be choked at the outlet boundary by heat addition to the control volume. Nonuniform grid distribution improves the accuracy of the numerical prediction. A benchmark numerical solution of compressible flow in a converging-diverging nozzle with friction and heat transfer has been developed to verify GFSSP's numerical predictions. The numerical predictions compare favorably in all cases.
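The Fanno solution used for verification has a closed form for the choking length; a small helper with a check against the tabulated value (the GFSSP comparison itself is not reproduced here):

```python
import math

def fanno_4fLstar_over_D(M, gamma=1.4):
    """Fanno-flow parameter 4*f*L*/D: the normalized duct length needed to
    choke adiabatic flow with friction starting from Mach number M."""
    g = gamma
    return (1.0 - M * M) / (g * M * M) + (g + 1.0) / (2.0 * g) * math.log(
        (g + 1.0) * M * M / (2.0 + (g - 1.0) * M * M)
    )

# A sonic inlet needs no additional length to choke:
assert abs(fanno_4fLstar_over_D(1.0)) < 1e-12

fL = fanno_4fLstar_over_D(0.5)  # about 1.069 for gamma = 1.4 (tabulated value)
```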
GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collin, Blaise Paul
This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accident transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation examination (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts:
• The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release.
• The modeling of the AGR-1 and HFR-EU1bis safety testing experiments.
• The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data.
The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps involve the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give methodology guidelines and recommendations in order to make all results suitable for comparison with each other.
The participants should read this document thoroughly to make sure all the data needed for their calculations is provided. Missing data will be added to a revision of the document if necessary. 09/2016: Tables 6 and 8 updated; AGR-2 input data added.