Sample records for simulated PWR benchmark

  1. Coupled Neutronics Thermal-Hydraulic Solution of a Full-Core PWR Using VERA-CS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clarno, Kevin T; Palmtag, Scott; Davidson, Gregory G

    2014-01-01

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) is developing a core simulator called VERA-CS to model operating PWR reactors with high resolution. This paper describes how the development of VERA-CS is being driven by a set of progression benchmark problems that specify the delivery of useful capability in discrete steps. As part of this development, this paper will describe the current capability of VERA-CS to perform a multiphysics simulation of an operating PWR at Hot Full Power (HFP) conditions using a set of existing computer codes coupled together in a novel method. Results for several single-assembly cases are shown that demonstrate coupling for different boron concentrations and power levels. Finally, high-resolution results are shown for a full-core PWR reactor modeled in quarter-symmetry.

  2. Integral Full Core Multi-Physics PWR Benchmark with Measured Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forget, Benoit; Smith, Kord; Kumar, Shikhar

    In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio, with concrete examples in nuclear engineering in the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. Like all analysis tools, verification and validation are essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g., critical experiments, flow loops), and there is a lack of relevant multiphysics benchmark measurements necessary to validate the high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools.
    This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.

  3. MC21 analysis of the MIT PWR benchmark: Hot zero power results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly Iii, D. J.; Aviles, B. N.; Herman, B. R.

    2013-07-01

    MC21 Monte Carlo results have been compared with hot zero power measurements from an operating pressurized water reactor (PWR), as specified in a new full core PWR performance benchmark from the MIT Computational Reactor Physics Group. Included in the comparisons are axially integrated full core detector measurements, axial detector profiles, control rod bank worths, and temperature coefficients. Power depressions from grid spacers are seen clearly in the MC21 results. Application of Coarse Mesh Finite Difference (CMFD) acceleration within MC21 has been accomplished, resulting in a significant reduction of inactive batches necessary to converge the fission source. CMFD acceleration has also been shown to work seamlessly with the Uniform Fission Site (UFS) variance reduction method. (authors)

  4. Sensitivity Analysis of OECD Benchmark Tests in BISON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON fuels performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
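
    The Pearson/Spearman screening step described above can be sketched with synthetic data. The input distributions, response model, and parameter names below are hypothetical stand-ins, not the actual BISON/Dakota study inputs; the sketch only shows how linear (Pearson) and rank (Spearman) correlation coefficients flag influential inputs.

```python
import numpy as np

def pearson(x, y):
    """Linear correlation between a sampled input and a response."""
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

def spearman(x, y):
    """Rank (monotonic) correlation: Pearson on rank-transformed data."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return pearson(rx, ry)

rng = np.random.default_rng(0)
n = 300                                  # sample size used in the study
gap = rng.normal(20e-6, 2e-6, n)         # hypothetical gap thickness [m]
cond = rng.normal(3.5, 0.2, n)           # hypothetical fuel conductivity [W/m-K]
# Toy response: centerline temperature rises with gap, falls with conductivity.
t_cl = 1200.0 + 8e6 * gap - 60.0 * cond + rng.normal(0.0, 5.0, n)

print(pearson(gap, t_cl), spearman(gap, t_cl))    # strongly positive
print(pearson(cond, t_cl), spearman(cond, t_cl))  # weaker, negative
```

Rank correlations matter here because fuel-performance responses are often monotonic but nonlinear in the inputs, which depresses Pearson values while leaving Spearman values high.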

  5. Validation Data and Model Development for Fuel Assembly Response to Seismic Loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bardet, Philippe; Ricciardi, Guillaume

    2016-01-31

    Vibrations are inherently present in nuclear reactors, especially in the cores and steam generators of pressurized water reactors (PWRs). They can have significant effects on local heat transfer and on wear and tear in the reactor, and they often set safety margins. The simulation of these multiphysics phenomena from first principles requires the coupling of several codes, which is one of the most challenging tasks in modern computer simulation. Here an ambitious multiphysics, multidisciplinary validation campaign was conducted. It relied on an integrated team of experimentalists and code developers to acquire benchmark and validation data for fluid-structure interaction codes. Data are focused on PWR fuel bundle behavior during seismic transients.

  6. Main steam line break accident simulation of APR1400 using the model of ATLAS facility

    NASA Astrophysics Data System (ADS)

    Ekariansyah, A. S.; Deswandri; Sunaryo, Geni R.

    2018-02-01

    A main steam line break simulation for the APR1400, an advanced PWR design, has been performed using the RELAP5 code. The simulation was conducted on a model of the thermal-hydraulic test facility ATLAS, which represents a scaled-down facility of the APR1400 design. The main steam line break event is described in an open-access safety report document, whose initial conditions and assumptions for the analysis were utilized in performing the simulation and analysis of the selected parameters. The objective of this work was to conduct a benchmark activity by comparing the simulation results of the CESEC-III code, a conservative-approach code, with the results of RELAP5 as a best-estimate code. Based on the simulation results, a general similarity in the behavior of selected parameters was observed between the two codes. However, the degree of accuracy still needs further research and analysis by comparison with another best-estimate code. Uncertainties arising from the ATLAS model should be minimized by taking into account more specific data in developing the APR1400 model.

  7. Structural Integrity of Water Reactor Pressure Boundary Components.

    DTIC Science & Technology

    1981-02-20

    environment, and load waveform parameters. A theory of the influence of dissolved oxygen content on the fatigue crack growth results in simulated PWR … simulated PWR coolant is … (Continues) … not seem to influence the data, which was produced for a load ratio of 0.2 and a simulated PWR coolant environment. Test results for A106 Grade C piping …

  8. The underwater coincidence counter (UWCC) for plutonium measurements in mixed oxide fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eccleston, G.W.; Menlove, H.O.; Abhold, M.

    1998-12-31

    The use of fresh uranium-plutonium mixed oxide (MOX) fuel in light-water reactors (LWRs) is increasing in Europe and Japan, and it is necessary to verify the plutonium content in the fuel for international safeguards purposes. The UWCC is a new instrument that has been designed to operate underwater and nondestructively measure the plutonium in unirradiated MOX fuel assemblies. The UWCC can be quickly configured to measure either boiling-water reactor (BWR) or pressurized-water reactor (PWR) fuel assemblies. The plutonium loading per unit length is measured using the UWCC to precisions of less than 1% in a measurement time of 2 to 3 minutes. Initial calibrations of the UWCC were completed on measurements of MOX fuel in Mol, Belgium. The MCNP-REN Monte Carlo simulation code is being benchmarked against the calibration measurements to allow accurate simulations for extended calibrations of the UWCC.

  9. Uniaxial low cycle fatigue behavior for pre-corroded 16MND5 bainitic steel in simulated pressurized water reactor environment

    NASA Astrophysics Data System (ADS)

    Chen, Xu; Ren, Bin; Yu, Dunji; Xu, Bin; Zhang, Zhe; Chen, Gang

    2018-06-01

    The uniaxial tension properties and low cycle fatigue behavior of 16MND5 bainitic steel cylinders pre-corroded in a simulated pressurized water reactor (PWR) environment were investigated by fatigue testing at room temperature in air and in an immersion test system, together with scanning electron microscopy (SEM) and energy-dispersive spectroscopy (EDS). The experimental results indicated that the corrosion fatigue lives of the 16MND5 specimens were significantly affected by the strain amplitude and the simulated PWR environment. The corrosion products formed in the simulated PWR environment had complex compositions. The porous corrosion surface of the pre-corroded material tended to generate pits, which enlarged the contact area with the fresh metal and promoted crack initiation. For the original material, fatigue cracks initiated at inclusions embedded in the micro-cracks. Moreover, the simulated PWR environment remarkably degraded the mechanical properties and low cycle fatigue behavior of the 16MND5 specimens. Pre-corrosion of the 16MND5 specimens mainly affected the plastic term of the Coffin-Manson equation.

  10. Validation of the BUGJEFF311.BOLIB, BUGENDF70.BOLIB and BUGLE-B7 broad-group libraries on the PCA-Replica (H2O/Fe) neutron shielding benchmark experiment

    NASA Astrophysics Data System (ADS)

    Pescarini, Massimo; Orsi, Roberto; Frisoni, Manuela

    2016-03-01

    The PCA-Replica 12/13 (H2O/Fe) neutron shielding benchmark experiment was analysed using the TORT-3.2 3D SN code. PCA-Replica reproduces a PWR ex-core radial geometry with alternate layers of water and steel, including a pressure vessel simulator. Three broad-group coupled neutron/photon working cross section libraries in FIDO-ANISN format with the same energy group structure (47 n + 20 γ) and based on different nuclear data were alternatively used: the ENEA BUGJEFF311.BOLIB (JEFF-3.1.1) and BUGENDF70.BOLIB (ENDF/B-VII.0) libraries and the ORNL BUGLE-B7 (ENDF/B-VII.0) library. Dosimeter cross sections derived from the IAEA IRDF-2002 dosimetry file were employed. The calculated reaction rates for the Rh-103(n,n')Rh-103m, In-115(n,n')In-115m and S-32(n,p)P-32 threshold activation dosimeters and the calculated neutron spectra are compared with the corresponding experimental results.
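
    The dosimeter reaction rates compared above are obtained by folding a dosimeter cross section with the calculated neutron spectrum; in multigroup form, R = Σ_g σ_g φ_g per target nucleus. A minimal sketch with illustrative three-group values, not IRDF-2002 data or PCA-Replica results:

```python
import numpy as np

# Multigroup folding of a dosimeter response: R = sum_g sigma_g * phi_g.
# Group cross sections and fluxes are illustrative placeholders only.
sigma_g = np.array([0.30, 0.05, 0.001])      # barns, fast -> thermal groups
phi_g = np.array([1.0e10, 5.0e10, 2.0e11])   # group fluxes [n/cm^2/s]

barn = 1.0e-24  # cm^2
rate = float(np.sum(sigma_g * barn * phi_g))  # reactions per nucleus per second
print(rate)
```

For a threshold dosimeter such as S-32(n,p), only the fast groups contribute, which is why such reactions probe the pressure-vessel fast flux.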

  11. Qualification of CASMO5 / SIMULATE-3K against the SPERT-III E-core cold start-up experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grandi, G.; Moberg, L.

    SIMULATE-3K (S3K) is a three-dimensional kinetics code applicable to LWR reactivity-initiated accidents. S3K has been used to calculate several internationally recognized benchmarks. However, the feedback models in the benchmark exercises are different from the feedback models that SIMULATE-3K uses for LWRs. For this reason, it is worth comparing the SIMULATE-3K capabilities for reactivity-initiated accidents against kinetics experiments. The Special Power Excursion Reactor Test III (SPERT-III) was a pressurized-water nuclear research facility constructed to analyze reactor kinetic behavior under initial conditions similar to those of commercial LWRs. The SPERT-III E-core resembles a PWR in terms of fuel type, moderator, coolant flow rate, and system pressure. The initial test conditions (power, core flow, system pressure, core inlet temperature) are representative of cold start-up, hot start-up, hot standby, and hot full power. The qualification of S3K against the SPERT-III E-core measurements is ongoing work at Studsvik. In this paper, the results for the 30 cold start-up tests are presented. The results show good agreement with the experiments for the main reactivity-initiated accident parameters: peak power, energy release, and compensated reactivity. Predicted and measured peak powers differ at most by 13%. Measured and predicted reactivity compensations at the time of the peak power differ by less than 0.01 $. Predicted and measured energy releases differ at most by 13%. All differences are within the experimental uncertainty. (authors)

  12. Fabrication of simulated DUPIC fuel

    NASA Astrophysics Data System (ADS)

    Kang, Kweon Ho; Song, Ki Chan; Park, Hee Sung; Moon, Je Sun; Yang, Myung Seung

    2000-12-01

    Simulated DUPIC fuel provides a convenient way to investigate DUPIC fuel properties and behavior such as thermal conductivity, thermal expansion, fission gas release, and leaching without the complications of handling radioactive materials. Several pellets simulating the composition and microstructure of DUPIC fuel were fabricated by resintering powder that had been treated through the OREOX process of simulated spent PWR fuel pellets, which had been prepared from a mixture of UO2 and stable forms of constituent nuclides. The key issues for producing simulated pellets that replicate the phases and microstructure of irradiated fuel are to achieve a submicrometre dispersion during mixing and diffusional homogeneity during sintering. This study describes the powder treatment, OREOX, compaction and sintering used to fabricate simulated DUPIC fuel from the simulated spent PWR fuel. The homogeneity of additives in the powder was observed after attrition milling. The microstructure of the simulated spent PWR fuel agrees well with other studies. The leading structural features observed are as follows: rare earth and other oxides dissolved in the UO2 matrix, small metallic precipitates distributed throughout the matrix, and a perovskite phase finely dispersed on grain boundaries.

  13. FY2012 summary of tasks completed on PROTEUS-thermal work.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C.H.; Smith, M.A.

    2012-06-06

    PROTEUS is a suite of neutronics codes, both old and new, that can be used within the SHARP codes being developed under the NEAMS program. Discussion here is focused on updates and verification and validation activities of the SHARP neutronics code, DeCART, for application to thermal reactor analysis. As part of the development of SHARP tools, the different versions of the DeCART code created for PWR, BWR, and VHTR analysis were integrated. Verification and validation tests for the integrated version were started, and the generation of cross section libraries based on the subgroup method was revisited for the targeted reactor types. The DeCART code has been reorganized in preparation for an efficient integration of the different versions for PWR, BWR, and VHTR analysis. In DeCART, the old-fashioned common blocks and header files have been replaced by advanced memory structures. However, the changing of variable names was minimized in order to limit problems with the code integration. Since the remaining stability problems of DeCART were mostly caused by the CMFD methodology and modules, significant work was performed to determine whether they could be replaced by more stable methods and routines. The cross section library is a key element in obtaining accurate solutions. Thus, the procedure for generating cross section libraries was revisited to provide libraries tailored for the targeted reactor types. To improve accuracy in the cross section library, an attempt was made to replace the CENTRM code with the MCNP Monte Carlo code as the tool for obtaining reference resonance integrals. The use of the Monte Carlo code allows us to minimize problems or approximations that CENTRM introduces, since the accuracy of the subgroup data is limited by that of the reference solutions.
    The use of MCNP requires an additional set of libraries without resonance cross sections so that reference calculations can be performed for a unit cell in which, among the isotopes in the composition, only the one isotope of interest includes resonance cross sections. The OECD MHTGR-350 benchmark core was simulated using DeCART as the initial focus of the verification/validation efforts. Among the benchmark problems, Exercise 1 of Phase 1 is a steady-state benchmark case for the neutronics calculation for which block-wise cross sections were provided in 26 energy groups. This type of problem was designed for a homogenized-geometry solver like DIF3D rather than the high-fidelity code DeCART. Instead of the homogenized block cross sections given in the benchmark, the VHTR-specific 238-group ENDF/B-VII.0 library of DeCART was directly used for preliminary calculations. Initial results showed that the multiplication factors of a fuel pin and of a fuel block with and without a control rod hole were off by 6, -362, and -183 pcm Δk from comparable MCNP solutions, respectively. The 2-D and 3-D one-third core calculations were also conducted for the all-rods-out (ARO) and all-rods-in (ARI) configurations, producing reasonable results. Figure 1 illustrates the intermediate (1.5 eV - 17 keV) and thermal (below 1.5 eV) group flux distributions. As seen in VHTR cores with annular fuels, the intermediate group fluxes are relatively high in the fuel region, but the thermal group fluxes are higher in the inner and outer graphite reflector regions than in the fuel region. To support the current project, a new three-year I-NERI collaboration involving ANL and KAERI was started in November 2011, focused on performing in-depth verification and validation of high-fidelity multi-physics simulation codes for LWRs and VHTRs.
    The work scope includes generating improved cross section libraries for the targeted reactor types, developing benchmark models for verification and validation of the neutronics code with or without thermo-fluid feedback, and performing detailed comparisons of predicted reactor parameters against both Monte Carlo solutions and experimental measurements. The following list summarizes the work conducted so far for the PROTEUS-Thermal tasks: (1) Unification of the different versions of DeCART was initiated, and at the same time code modernization was conducted to make code unification efficient; (2) Regeneration of cross section libraries was attempted for the targeted reactor types, and the procedure for generating cross section libraries was updated by replacing CENTRM with MCNP for reference resonance integrals; (3) The MHTGR-350 benchmark core was simulated using DeCART with the VHTR-specific 238-group ENDF/B-VII.0 library, and MCNP calculations were performed for comparison; and (4) Benchmark problems for PWR and BWR analysis were prepared for the DeCART verification/validation effort. In the coming months, the work listed above will be completed. Cross section libraries will be generated with optimized group structures for specific reactor types.

  14. VERA Core Simulator Methodology for PWR Cycle Depletion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kochunas, Brendan; Collins, Benjamin S; Jabaay, Daniel

    2015-01-01

    This paper describes the methodology developed and implemented in MPACT for performing high-fidelity pressurized water reactor (PWR) multi-cycle core physics calculations. MPACT is being developed primarily for application within the Consortium for the Advanced Simulation of Light Water Reactors (CASL) as one of the main components of the VERA Core Simulator, the others being COBRA-TF and ORIGEN. The methods summarized in this paper include a methodology for performing resonance self-shielding and computing macroscopic cross sections, 2-D/1-D transport, nuclide depletion, thermal-hydraulic feedback, and other supporting methods. These methods represent a minimal set needed to simulate high-fidelity models of a realistic nuclear reactor. Results demonstrating this are presented from the simulation of a realistic model of the first cycle of Watts Bar Unit 1. The simulation, which approximates the cycle operation, is observed to be within 50 ppm boron (ppmB) reactivity for all simulated points in the cycle and approximately 15 ppmB for a consistent statepoint. The verification and validation of the PWR cycle depletion capability in MPACT is the focus of two companion papers.

  15. Surrogate fuel assembly multi-axis shaker tests to simulate normal conditions of rail and truck transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McConnell, Paul E.; Koenig, Greg John; Uncapher, William Leonard

    2016-05-01

    This report describes the third set of tests (the “DCLa shaker tests”) of an instrumented surrogate PWR fuel assembly. The purpose of this set of tests was to measure strains and accelerations on Zircaloy-4 fuel rods when the PWR assembly was subjected to rail and truck loadings simulating normal conditions of transport when affixed to a multi-axis shaker. This is the first set of tests of the assembly simulating rail normal conditions of transport.

  17. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courau, T.; Plagne, L.; Ponicot, A.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, the need for three-dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best-estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies of less than 25 pcm for the k{sub eff}, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)
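
    Eigenvalue discrepancies like the "< 25 pcm" quoted above come from comparing two k_eff solutions through the usual reactivity difference Δρ = 1/k_ref − 1/k_test, expressed in units of 10⁻⁵ (pcm). A minimal sketch with illustrative eigenvalues, not the actual PENTRAN/MCNP5 values:

```python
def delta_rho_pcm(k_ref, k_test):
    """Reactivity difference between two eigenvalue solutions, in pcm.
    Uses rho = (k - 1)/k, so delta_rho = 1/k_ref - 1/k_test,
    scaled by 1e5 to express the result in pcm."""
    return (1.0 / k_ref - 1.0 / k_test) * 1.0e5

# Illustrative pair of eigenvalues differing by 25e-5 in k.
print(delta_rho_pcm(1.00000, 1.00025))
```

Near k ≈ 1 this is numerically close to the simple difference (k_test − k_ref) × 10⁵, which is why pcm figures are often quoted either way.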

  18. Cyclic and SCC Behavior of Alloy 690 HAZ in a PWR Environment

    NASA Astrophysics Data System (ADS)

    Alexandreanu, Bogdan; Chen, Yiren; Natesan, Ken; Shack, Bill

    The objective of this work is to determine the cyclic and stress corrosion cracking (SCC) crack growth rates (CGRs) in a simulated PWR water environment for Alloy 690 heat-affected zone (HAZ) material. In order to meet the objective, an Alloy 152 J-weld was produced on a piece of Alloy 690 tubing, and the test specimens were aligned with the HAZ. The environmental enhancement of cyclic CGRs for Alloy 690 HAZ was comparable to that measured for the same alloy in the as-received condition. The two Alloy 690 HAZ samples tested exhibited maximum SCC CGRs of 10^-11 m/s in the simulated PWR environment at 320°C; however, on average, these rates are similar to or only slightly higher than those for the as-received alloy.

  19. Xenon-induced power oscillations in a generic small modular reactor

    NASA Astrophysics Data System (ADS)

    Kitcher, Evans Damenortey

    As world demand for energy continues to grow at unprecedented rates, the world energy portfolio of the future will inevitably include a nuclear energy contribution. It has been suggested that the Small Modular Reactor (SMR) could play a significant role in the spread of civilian nuclear technology to nations previously without nuclear energy. As part of the design process, the SMR design must be assessed for the threat to operations posed by xenon-induced power oscillations. In this research, a generic SMR design was analyzed with respect to just such a threat. In order to do so, a multi-physics coupling routine was developed with MCNP/MCNPX as the neutronics solver. Thermal hydraulic assessments were performed using a single channel analysis tool developed in Python. Fuel and coolant temperature profiles were implemented in the form of temperature dependent fuel cross sections generated using the SIGACE code and reactor core coolant densities. The Power Axial Offset (PAO) and Xenon Axial Offset (XAO) parameters were chosen to quantify any oscillatory behavior observed. The methodology was benchmarked against results from literature of startup tests performed at a four-loop PWR in Korea. The developed benchmark model replicated the pertinent features of the reactor within ten percent of the literature values. The results of the benchmark demonstrated that the developed methodology captured the desired phenomena accurately. Subsequently, a high fidelity SMR core model was developed and assessed. Results of the analysis revealed an inherently stable SMR design at beginning of core life and end of core life under full-power and half-power conditions. The effect of axial discretization, stochastic noise and convergence of the Monte Carlo tallies in the calculations of the PAO and XAO parameters was investigated. All were found to be quite small and the inherently stable nature of the core design with respect to xenon-induced power oscillations was confirmed. 
    Finally, a preliminary investigation into excess reactivity control options for the SMR design was conducted, confirming the generally held notion that existing PWR control mechanisms can be used in iPWR SMRs with similar effectiveness. Given the desire to operate the SMR with boron-free coolant, erbium oxide integral burnable absorber fuel rods were identified as a possible replacement that retains the dispersed-absorber effect of soluble boron in the reactor coolant.
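
    The Power Axial Offset (PAO) and Xenon Axial Offset (XAO) metrics used above follow the standard axial-offset form: the fractional imbalance between top-half and bottom-half core power (or xenon concentration). A minimal sketch, assuming the conventional (P_top − P_bot)/(P_top + P_bot) definition rather than the dissertation's exact implementation:

```python
def axial_offset(top, bottom):
    """Axial offset: fractional top/bottom imbalance of a core quantity.
    AO = (top - bottom) / (top + bottom); applied to power halves this is
    the PAO, and to xenon concentration halves the XAO."""
    return (top - bottom) / (top + bottom)

# A xenon-induced oscillation appears as AO swinging in sign over tens of hours.
print(axial_offset(55.0, 45.0))  # power tilted toward the top half
print(axial_offset(45.0, 55.0))  # power tilted toward the bottom half
```

A stable core shows the AO decaying back toward its equilibrium value after a perturbation; a diverging AO amplitude indicates instability.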

  20. Best estimate plus uncertainty analysis of departure from nucleate boiling limiting case with CASL core simulator VERA-CS in response to PWR main steam line break event

    DOE PAGES

    Brown, Cameron S.; Zhang, Hongbin; Kucukboyaci, Vefa; ...

    2016-09-07

    VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics subchannel code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS was used to simulate a typical pressurized water reactor (PWR) full core response with 17x17 fuel assemblies for a main steam line break (MSLB) accident scenario with the most reactive rod cluster control assembly stuck out of the core. The accident scenario was initiated at hot zero power (HZP) at the end of the first fuel cycle, with return-to-power state points determined by a system analysis code, and the most limiting state point was chosen for core analysis. The best estimate plus uncertainty (BEPU) analysis method was applied using Wilks’ nonparametric statistical approach. In this way, 59 full core simulations were performed to provide the minimum departure from nucleate boiling ratio (MDNBR) at the 95/95 (95% probability with 95% confidence level) tolerance limit. The results show that this typical PWR core remains within MDNBR safety limits for the MSLB accident.
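
    The choice of 59 runs follows from Wilks' first-order, one-sided formula: the smallest sample size n for which the sample extremum bounds the 95th percentile with 95% confidence satisfies 1 − 0.95ⁿ ≥ 0.95. A minimal sketch of that sample-size calculation:

```python
def wilks_sample_size(prob=0.95, conf=0.95):
    """Smallest number of code runs n such that the most limiting observed
    value bounds the `prob` quantile with confidence `conf` (first-order,
    one-sided Wilks criterion: 1 - prob**n >= conf)."""
    n = 1
    while 1.0 - prob**n < conf:
        n += 1
    return n

print(wilks_sample_size())  # 59, matching the 59 full-core runs in the study
```

Tightening either the probability or confidence target raises the run count quickly, which is why 95/95 is the common regulatory compromise for BEPU analyses.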

  21. New core-reflector boundary conditions for transient nodal reactor calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, E.K.; Kim, C.H.; Joo, H.K.

    1995-09-01

    New core-reflector boundary conditions designed for the exclusion of the reflector region in transient nodal reactor calculations are formulated. Spatially flat frequency approximations for the temporal neutron behavior and two types of transverse leakage approximations in the reflector region are introduced to solve the transverse-integrated time-dependent one-dimensional diffusion equation and then to obtain relationships between net current and flux at the core-reflector interfaces. To examine the effectiveness of the new core-reflector boundary conditions in transient nodal reactor computations, nodal expansion method (NEM) computations with and without explicit representation of the reflector are performed for the Laboratorium fuer Reaktorregelung und Anlagen (LRA) boiling water reactor (BWR) and Nuclear Energy Agency Committee on Reactor Physics (NEACRP) pressurized water reactor (PWR) rod ejection kinetics benchmark problems. Good agreement between the two NEM computations is demonstrated in all the important transient parameters of the two benchmark problems. A significant amount of CPU time saving is also demonstrated with the boundary condition model with transverse leakage (BCMTL) approximations in the reflector region. In the three-dimensional LRA BWR, the BCMTL and the explicit reflector model computations differ by approximately 4% in transient peak power density, while the BCMTL results in >40% CPU time saving by excluding both the axial and the radial reflector regions from explicit computational nodes. In the NEACRP PWR problem, which includes six different transient cases, the largest difference is 24.4% in the transient maximum power in the one-node-per-assembly B1 transient results. This difference in the transient maximum power of the B1 case is shown to reduce to 11.7% in the four-node-per-assembly computations. As for the computing time, BCMTL is shown to reduce the CPU time by >20% in all six transient cases of the NEACRP PWR.

  3. SINGLE PHASE ANALYTICAL MODELS FOR TERRY TURBINE NOZZLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Haihua; Zhang, Hongbin; Zou, Ling

    All BWR RCIC (Reactor Core Isolation Cooling) systems and PWR AFW (Auxiliary Feed Water) systems use a Terry turbine, which is composed of a wheel with turbine buckets and several groups of fixed nozzles and reversing chambers inside the turbine casing. The inlet steam is accelerated through the turbine nozzle and impacts the wheel buckets, generating work to drive the RCIC pump. As part of the efforts to understand the unexpected "self-regulating" mode of the RCIC systems in the Fukushima accidents and to extend the BWR RCIC and PWR AFW operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia National Laboratories' original work, have been developed and implemented in the RELAP-7 code to simulate the RCIC system. RELAP-7 is a new reactor system code currently under development with funding support from the U.S. Department of Energy. The RELAP-7 code is fully implicit, and the preconditioned Jacobian-free Newton-Krylov (JFNK) method is used to solve the discretized nonlinear system. This paper presents a set of analytical models for simulating the flow through the Terry turbine nozzles when the inlet fluid is pure steam. The implementation of the models into RELAP-7 is briefly discussed. In the Sandia model, the turbine bucket inlet velocity is provided by a reduced-order model obtained from a large number of CFD simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions at the turbine bucket inlet. The models include both the adiabatic expansion process inside the nozzle and the free expansion process outside the nozzle to reach the ambient pressure. The combined models are able to predict the steam mass flow rate and the supersonic velocity at the Terry turbine bucket entrance, which are the necessary input conditions for the Terry turbine rotor model.
The nozzle analytical models were validated against experimental data and benchmarked against CFD simulations, and generally agree well with both. The analytical models are suitable for implementation into a reactor system analysis code or severe accident code as part of mechanistic and dynamical models for understanding RCIC behavior. Cases with two-phase flow at the turbine inlet will be pursued in future work.
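    The adiabatic nozzle expansion described above can be sketched with the standard choked-flow relations for an ideal gas. The stagnation conditions, specific gas constant, heat-capacity ratio, and throat area below are illustrative assumptions, not values from the RELAP-7 model:

```python
import math

# Illustrative stagnation conditions for saturated steam (assumed values)
p0 = 7.0e6         # stagnation pressure, Pa
T0 = 559.0         # stagnation temperature, K
R = 461.5          # specific gas constant for steam, J/(kg*K)
gamma = 1.3        # ratio of specific heats (ideal-gas approximation)
A_throat = 1.0e-4  # nozzle throat area, m^2 (assumed)

# Choked (critical) mass flow rate through the nozzle throat:
#   mdot = A * p0 * sqrt(gamma/(R*T0)) * (2/(gamma+1))^((gamma+1)/(2*(gamma-1)))
mdot = (A_throat * p0 * math.sqrt(gamma / (R * T0))
        * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))

# Throat (sonic) conditions from the adiabatic expansion
T_star = T0 * 2.0 / (gamma + 1.0)
v_star = math.sqrt(gamma * R * T_star)  # sonic velocity at the throat, m/s

print(f"choked mass flow: {mdot:.3f} kg/s, throat velocity: {v_star:.0f} m/s")
```

    Further free expansion of the under-expanded jet down to ambient pressure, as in the proposed model, would then accelerate the flow to the supersonic velocity seen at the bucket inlet.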

  4. Development and Testing of Neutron Cross Section Covariance Data for SCALE 6.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Williams, Mark L; Wiarda, Dorothea

    2015-01-01

    Neutron cross-section covariance data are essential for many sensitivity/uncertainty and uncertainty quantification assessments performed both within the TSUNAMI suite and more broadly throughout the SCALE code system. The release of ENDF/B-VII.1 included a more complete set of neutron cross-section covariance data; these data form the basis for a new cross-section covariance library to be released in SCALE 6.2. A range of testing is conducted to investigate the properties of these covariance data and ensure that the data are reasonable. These tests include examination of the uncertainty in critical experiment benchmark model keff values due to nuclear data uncertainties, as well as similarity assessments of irradiated pressurized water reactor (PWR) and boiling water reactor (BWR) fuel with suites of critical experiments. The contents of the new covariance library, the testing performed, and the behavior of the new covariance data are described in this paper. The neutron cross-section covariances can be combined with a sensitivity data file generated using the TSUNAMI suite of codes within SCALE to determine the uncertainty in system keff caused by nuclear data uncertainties. The Verified, Archived Library of Inputs and Data (VALID) maintained at Oak Ridge National Laboratory (ORNL) contains over 400 critical experiment benchmark models, and sensitivity data are generated for each of these models. The nuclear data uncertainty in keff is generated for each experiment, and the resulting uncertainties are tabulated and compared to the differences in measured and calculated results. The magnitude of the uncertainty for categories of nuclides (such as actinides, fission products, and structural materials) is calculated for irradiated PWR and BWR fuel to quantify the effect of covariance library changes between the SCALE 6.1 and 6.2 libraries.
One of the primary applications of sensitivity/uncertainty methods within SCALE is the assessment of similarities between benchmark experiments and safety applications. This is described by a c_k value for each experiment with each application. Several studies have analyzed typical c_k values for a range of critical experiments compared with hypothetical irradiated fuel applications. The c_k value is sensitive to the cross-section covariance data because the contribution of each nuclide is influenced by its uncertainty; large uncertainties indicate more likely bias sources and are thus given more weight. Changes in c_k values resulting from different covariance data can be used to examine and assess underlying data changes. These comparisons are performed for PWR and BWR fuel in storage and transportation systems.
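    The c_k similarity index described above is, in essence, an uncertainty-weighted correlation between the application's and the experiment's sensitivity vectors through the covariance matrix. A minimal sketch, using a hypothetical 3-group covariance matrix and invented sensitivity profiles (not SCALE data):

```python
def sandwich(s_a, cov, s_b):
    """Sandwich rule: s_a^T C s_b, a (co)variance contribution to keff."""
    return sum(s_a[i] * cov[i][j] * s_b[j]
               for i in range(len(s_a)) for j in range(len(s_b)))

def ck(s_app, s_exp, cov):
    """Correlation coefficient c_k between application and experiment."""
    num = sandwich(s_app, cov, s_exp)
    den = (sandwich(s_app, cov, s_app) * sandwich(s_exp, cov, s_exp)) ** 0.5
    return num / den

# Hypothetical 3-group relative covariance matrix (symmetric, positive definite)
cov = [[4.0e-4, 1.0e-4, 0.0],
       [1.0e-4, 2.0e-4, 5.0e-5],
       [0.0,    5.0e-5, 1.0e-4]]
# Hypothetical group-wise sensitivities (dk/k per unit dσ/σ)
s_application = [0.30, 0.20, 0.10]
s_experiment  = [0.28, 0.22, 0.05]

print(f"c_k = {ck(s_application, s_experiment, cov):.3f}")
```

    A c_k near 1.0 indicates that the experiment's nuclear-data-induced uncertainty closely mirrors the application's, making the experiment a relevant validation benchmark.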

  5. Evaluation of neutron thermalization parameters and benchmark reactor calculations using a synthetic scattering function for molecular gases

    NASA Astrophysics Data System (ADS)

    Gillette, V. H.; Patiño, N. E.; Granada, J. R.; Mayer, R. E.

    1989-08-01

    Using a synthetic incoherent scattering function which describes the interaction of neutrons with molecular gases, we provide analytical expressions for the zero- and first-order scattering kernels, σ0(E0 → E) and σ1(E0 → E), and the total cross section σ0(E0). Based on these quantities, we have performed calculations of thermalization parameters and transport coefficients for H2O, D2O, C6H6 and (CH2)n at room temperature. Comparison of these values with available experimental data and other calculations is satisfactory. We also generated nuclear data libraries for H2O with 47 thermal groups at 300 K and performed some benchmark calculations (235U, 239Pu, PWR cell and typical APWR cell); the resulting reactivities are compared with experimental data and ENDF/B-IV calculations.

  6. Grid-to-rod flow-induced impact study for PWR fuel in reactor

    DOE PAGES

    Jiang, Hao; Qu, Jun; Lu, Roger Y.; ...

    2016-06-10

    The source of grid-to-rod fretting in a pressurized water reactor (PWR) is the dynamic contact impact arising from hydraulic flow-induced fuel assembly vibration. To support grid-to-rod fretting wear mitigation research, finite element analysis (FEA) was used to evaluate the hydraulic flow-induced impact intensity between the fuel rods and the spacer grids. Three-dimensional FEA models, with detailed geometries of the dimples and springs of the actual spacer grids along with the fuel rods, were developed for flow impact simulation. The grid-to-rod dynamic impact simulation provided insight into the contact phenomena at the grid-rod interface. It offers an essential and effective way to evaluate contact forces and to provide guidance for simulative bench fretting-impact tests.

  7. Simulation of German PKL refill/reflood experiment K9A using RELAP4/MOD7. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, M.T.; Davis, C.B.; Behling, S.R.

    This paper describes a RELAP4/MOD7 simulation of West Germany's Kraftwerk Union (KWU) Primary Coolant Loop (PKL) refill/reflood experiment K9A. RELAP4/MOD7, a best-estimate computer program for the calculation of thermal and hydraulic phenomena in a nuclear reactor or related system, is the latest version in the RELAP4 code development series. This study was the first major simulation using RELAP4/MOD7 since its release by the Idaho National Engineering Laboratory (INEL). The PKL facility is a reduced-scale (1:134) representation of a typical West German four-loop 1300 MW pressurized water reactor (PWR). The prototypical total volume to power ratio was maintained. The test facility was designed specifically for experiments simulating the refill/reflood phase of a loss-of-coolant accident (LOCA).

  8. Design, Construction and Testing of an In-Pile Loop for PWR (Pressurized Water Reactor) Simulation.

    DTIC Science & Technology

    1987-06-01

    computer modeling remains at best semiempirical (C-i), this large variation in scaling factor makes extrapolation of data impossible. The DIDO Water...in a full scale PWR are not practical. The reactor plant is not controlled to tolerances necessary for research, and utilities are reluctant to vary...MIT Reactor Safeguards Committee, in revision 1 to the PCCL Safety Evaluation Report (SER), for final approval to begin in-pile testing and

  9. MELCOR model for an experimental 17x17 spent fuel PWR assembly.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardoni, Jeffrey

    2010-11-01

    A MELCOR model has been developed to simulate a pressurized water reactor (PWR) 17 x 17 assembly in a spent fuel pool rack cell undergoing severe accident conditions. To the extent possible, the MELCOR model reflects the actual geometry, materials, and masses present in the experimental arrangement for the Sandia Fuel Project (SFP). The report presents an overview of the SFP experimental arrangement, the MELCOR model specifications, demonstration calculation results, and the input model listing.

  10. Effects of Thermo-Mechanical Treatments on Deformation Behavior and IGSCC Susceptibility of Stainless Steels in Pwr Primary Water Chemistry

    NASA Astrophysics Data System (ADS)

    Nouraei, S.; Tice, D. R.; Mottershead, K. J.; Wright, D. M.

    Field experience with 300-series stainless steels in the primary circuit of PWR plants has been good. Stress corrosion cracking (SCC) of components has been infrequent and mainly associated with contamination by impurities/oxygen in occluded locations. However, some instances of failure have been observed which cannot readily be attributed to deviations in the water chemistry. These failures appear to be associated with the presence of cold work produced by surface finishing and/or by welding-induced shrinkage. Recent data indicate that some heats of stainless steel show an increased susceptibility to SCC; relatively high crack growth rates were observed even when the crack growth direction was orthogonal to the cold-work direction. SCC of cold-worked stainless steel in PWR coolant is therefore determined by a complex interaction of material composition, microstructure, prior cold work and heat treatment. This paper focuses on the interacting effects of these parameters on crack propagation in simulated PWR conditions.

  11. Generation of the V4.2m5 and AMPX and MPACT 51 and 252-Group Libraries with ENDF/B-VII.0 and VII.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog

    The v4.1m3 MPACT 47-group library, based on Evaluated Nuclear Data File (ENDF)/B-VII.0, has been used as the main library for the Consortium for Advanced Simulation of Light Water Reactors (CASL) neutronics simulator in simulating pressurized water reactor (PWR) problems. Recent analysis of high-void boiling water reactor (BWR) fuels and burnt fuels indicates that the 47-group library introduces a relatively large reactivity bias. Since the 47-group structure does not match the SCALE 6.2 252-group boundaries, the CASL Virtual Environment for Reactor Applications Core Simulator (VERA-CS) MPACT library must be maintained independently, which raises quality assurance concerns. To address this issue, a new 51-group structure has been proposed based on the MPACT 47-group and SCALE 252-group structures. In addition, the new CASL library will include a 19-group structure for gamma production and interaction cross-section data based on the SCALE 19-group structure. New AMPX and MPACT 51-group libraries have been developed with the ENDF/B-VII.0 and VII.1 evaluated nuclear data. The 19-group gamma data have also been generated for future use, but they are only available in the AMPX 51-group library. In addition, ENDF/B-VII.0 and VII.1 MPACT 252-group libraries have been generated for verification purposes. Various benchmark calculations have been performed to verify and validate the newly developed libraries.

  12. Effects of crack tip plastic zone on corrosion fatigue cracking of alloy 690(TT) in pressurized water reactor environments

    NASA Astrophysics Data System (ADS)

    Xiao, J.; Qiu, S. Y.; Chen, Y.; Fu, Z. H.; Lin, Z. X.; Xu, Q.

    2015-01-01

    Alloy 690(TT) is widely used for steam generator tubes in pressurized water reactors (PWRs), where it is susceptible to corrosion fatigue. In this study, the corrosion fatigue behavior of Alloy 690(TT) in simulated PWR environments was investigated. The microstructure of the plastic zone near the crack tip was examined, and labyrinth structures were observed. The relationship between the crack tip plastic zone, the fatigue crack growth rates, and the environmental factor Fen was elucidated.

  13. Modeling of a Flooding Induced Station Blackout for a Pressurized Water Reactor Using the RISMC Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Prescott, Steven R; Smith, Curtis L

    2011-07-01

    In the Risk Informed Safety Margin Characterization (RISMC) approach we want to understand not just the frequency of an event like core damage, but how close we are (or are not) to key safety-related events and how we might increase our safety margins. The RISMC Pathway uses the probabilistic margin approach to quantify impacts to reliability and safety by coupling both probabilistic (via stochastic simulation) and mechanistic (via physics models) approaches. This coupling takes place through the interchange of physical parameters and operational or accident scenarios. In this paper we apply the RISMC approach to evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., system activation) and to perform statistical analyses (e.g., run multiple RELAP-7 simulations where the sequencing/timing of events has been changed according to a set of stochastic distributions). By using the RISMC toolkit, we can evaluate how a power uprate affects the system recovery measures needed to avoid core damage after the PWR loses all available AC power to tsunami-induced flooding. The simulation of the actual flooding is performed using a smoothed particle hydrodynamics code: NEUTRINO.

  14. A flooding induced station blackout analysis for a pressurized water reactor using the RISMC toolkit

    DOE PAGES

    Mandelli, Diego; Prescott, Steven; Smith, Curtis; ...

    2015-05-17

    In this paper we evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., component/system activation) and to perform statistical analyses. In our case, the simulation of the flooding is performed using an advanced smoothed particle hydrodynamics code called NEUTRINO. The obtained results allow the user to investigate and quantify the impact of the timing and sequencing of events on system safety. The impact of the power uprate is determined in terms of both core damage probability and safety margins.
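    The RAVEN-style statistical treatment described above amounts to running the system model many times with stochastically sampled event timing and tabulating the outcomes. A toy sketch; the battery endurance, recovery-time distribution, and sample count are invented for illustration, not plant data:

```python
import random

random.seed(42)

BATTERY_LIFE_H = 4.0   # assumed DC battery endurance after station blackout
MEAN_RECOVERY_H = 6.0  # assumed mean AC power recovery time (exponential)
N_SAMPLES = 20000

def core_damage(recovery_time_h):
    """Toy system model: core damage if AC power is not restored
    before the batteries (and hence turbine-driven cooling control)
    are exhausted."""
    return recovery_time_h > BATTERY_LIFE_H

failures = sum(core_damage(random.expovariate(1.0 / MEAN_RECOVERY_H))
               for _ in range(N_SAMPLES))
p_cd = failures / N_SAMPLES
print(f"estimated core damage probability: {p_cd:.3f}")
```

    In the actual toolkit each sample is a full RELAP-7 transient rather than a closed-form check, but the statistical layer has this structure. Comparing the estimated probability before and after a power uprate, which shortens the time available before core damage, quantifies the margin impact.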

  15. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.

    1991-01-01

    A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification: all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  16. SeSBench - An initiative to benchmark reactive transport models for environmental subsurface processes

    NASA Astrophysics Data System (ADS)

    Jacques, Diederik

    2017-04-01

    As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models for these interacting processes are needed. Coupled reactive transport models are a typical example of such tools, mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). The mathematical and numerical complexity of both the tool itself and the specific conceptual model can increase rapidly. Numerical verification of such models is therefore a prerequisite for guaranteeing reliability and confidence and for qualifying simulation tools and approaches for further model application. In 2011, a first SeSBench (Subsurface Environmental Simulation Benchmarking) workshop was held in Berkeley (USA), followed by four others. The objective is to benchmark subsurface environmental simulation models and methods, with a current focus on reactive transport processes. The outcome was a special issue in Computational Geosciences (2015, issue 3 - Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks, proposed by the participants of the workshops, should be relevant for environmental or geo-engineering applications; the latter were mostly related to radioactive waste disposal issues - excluding benchmarks defined for purely mathematical reasons. Another important feature is the tiered approach within a benchmark, with the definition of a single principal problem and different subproblems. The subproblems typically benchmark individual or simplified processes (e.g. inert solute transport, a simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, three codes should be involved in each benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes.
Furthermore, it illustrates the use of these types of models for different environmental and geo-engineering applications. SeSBench will organize new workshops to add new benchmarks in a new special issue. Steefel, C. I., et al. (2015). "Reactive transport codes for subsurface environmental simulation." Computational Geosciences 19: 445-478.

  17. Irradiation performance of (Th,Pu)O2 fuel under Pressurized Water Reactor conditions

    NASA Astrophysics Data System (ADS)

    Boer, B.; Lemehov, S.; Wéber, M.; Parthoens, Y.; Gysemans, M.; McGinley, J.; Somers, J.; Verwerft, M.

    2016-04-01

    This paper examines the in-pile safety performance of (Th,Pu)O2 fuel pins under simulated Pressurized Water Reactor (PWR) conditions. Both sol-gel and SOLMAS-produced (Th,Pu)O2 fuels, at enrichments of 7.9% and 12.8% Pu/HM, have been irradiated at SCK·CEN. The irradiation has been performed under PWR conditions (155 bar, 300 °C) in a dedicated loop of the BR-2 reactor. The loop is instrumented with flow and temperature monitors at the inlet and outlet, which allow an accurate measurement of the deposited enthalpy.

  18. Benchmarking nitrogen removal suspended-carrier biofilm systems using dynamic simulation.

    PubMed

    Vanhooren, H; Yuan, Z; Vanrolleghem, P A

    2002-01-01

    We are witnessing an enormous growth in biological nitrogen removal from wastewater, which presents specific challenges beyond traditional COD (carbon) removal. One possibility for optimised process design is the use of biomass-supporting media. In this paper, attached growth processes (AGP) are evaluated using dynamic simulations. The advantages of these systems, which were qualitatively described elsewhere, are validated quantitatively against a simulation benchmark for activated sludge treatment systems. This simulation benchmark is extended with a biofilm model that allows fast and accurate simulation of the conversion of different substrates in a biofilm. The economic feasibility of the system is evaluated using the data generated with the benchmark simulations. Capital savings due to volume reduction and reduced sludge production are weighed against increased aeration costs. In this evaluation, effluent quality is integrated as well.

  19. A collision history-based approach to Sensitivity/Perturbation calculations in the continuous energy Monte Carlo code SERPENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giuseppe Palmiotti

    In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbations on several response functions: the effective multiplication factor, reaction rate ratios and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.
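    The sensitivity coefficients such calculations produce feed a first-order perturbation estimate of the reactivity effect of a nuclear data change. A minimal sketch with invented sensitivity values and perturbations, not results from SERPENT:

```python
# First-order perturbation estimate:
#   dk/k ≈ Σ_i S_i * (dσ_i / σ_i),
# where S_i = (σ_i/k)(∂k/∂σ_i) is the sensitivity coefficient for datum i.

# Hypothetical sensitivity coefficients (per unit relative change in data)
sensitivities = {
    "U-235 nu-bar":   0.95,
    "U-238 capture": -0.12,
    "H-1 scatter":    0.05,
}
# Hypothetical relative data perturbations (e.g., +1% on nu-bar)
perturbations = {
    "U-235 nu-bar":   0.01,
    "U-238 capture":  0.02,
    "H-1 scatter":   -0.01,
}

dk_over_k = sum(sensitivities[r] * perturbations[r] for r in sensitivities)
print(f"predicted dk/k = {dk_over_k:+.5f}")
```

    The collision history-based estimators in the paper compute the S_i terms directly during the Monte Carlo random walk; the linear combination above is then cheap to evaluate for any proposed data perturbation.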

  20. Nuclear power plant digital system PRA pilot study with the dynamic flow-graph methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yau, M.; Motamed, M.; Guarro, S.

    2006-07-01

    Current Probabilistic Risk Assessment (PRA) methodology is well established for analyzing hardware and some of the key human interactions. However, processes for analyzing the software functions of digital systems within a plant PRA framework, and for accounting for the digital system contribution to the overall risk, are not generally available, nor are they well understood and established. A recent study reviewed a number of methodologies that have potential applicability to modeling and analyzing digital systems within a PRA framework. This study identified the Dynamic Flow-graph Methodology (DFM) and the Markov methodology as the most promising tools. As a result of this study, a task was defined under the framework of a collaborative agreement between the U.S. Nuclear Regulatory Commission (NRC) and the Ohio State University (OSU). The objective of this task is to set up benchmark systems representative of digital systems used in nuclear power plants and to evaluate DFM and the Markov methodology with these benchmark systems. The first benchmark system is a typical Pressurized Water Reactor (PWR) Steam Generator (SG) Feedwater System (FWS) level control system based on earlier ASCA work with the U.S. NRC, upgraded with modern control laws. ASCA, Inc. is currently under contract to OSU to apply DFM to this benchmark system. The goal is to investigate the feasibility of using DFM to analyze and quantify digital system risk, and to integrate the DFM analytical results back into the plant event tree/fault tree PRA model. (authors)

  1. ACHILLES: Heat Transfer in PWR Core During LOCA Reflood Phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-11-01

    1. NAME AND TITLE OF DATA LIBRARY ACHILLES - Heat Transfer in PWR Core During LOCA Reflood Phase. 2. NAME AND TITLE OF DATA RETRIEVAL PROGRAMS N/A 3. CONTRIBUTOR AEA Technology, Winfrith Technology Centre, Dorchester DT2 8DH United Kingdom, through the OECD Nuclear Energy Agency Data Bank, Issy-les-Moulineaux, France. 4. DESCRIPTION OF TEST FACILITY The most important features of the Achilles rig were the shroud vessel, which contained the test section, and the downcomer. These may be thought of as representing the core barrel and the annular downcomer in the reactor pressure vessel. The test section comprised a cluster of 69 rods in a square array within a circular shroud vessel. The rod diameter and pitch (9.5 mm and 12.6 mm) were typical of PWR dimensions. The internal diameter of the shroud vessel was 128 mm. Each rod was electrically heated over a length of 3.66 m, which is typical of the nuclear heated length of a PWR fuel rod, and each contained 6 internal thermocouples. These were arranged in one of 8 groupings which concentrated the thermocouples in different axial zones. The spacer grids were at prototypic PWR locations. Each grid had two thermocouples attached to its trailing edge at radial locations. The axial power profile along the rods was an 11-step approximation to a "chopped cosine". The shroud vessel had 5 heating zones whose power could be independently controlled. 5. DESCRIPTION OF TESTS The Achilles experiments investigated the heat transfer in the core of a Pressurized Water Reactor during the reflood phase of a postulated large break loss of coolant accident. The results provided data to validate codes and to improve modeling. Different types of experiments were carried out, including single phase cooling, reflood under low flow conditions, level swell, and reflood under high flow conditions. Three series of experiments were performed.
The first and the third used the same test section, but the second used another test section, similar in all respects except that it contained a partial blockage formed by attaching sleeves (or "balloons") to some of the rods. 6. SOURCE AND SCOPE OF DATA Phenomena Tested - Heat transfer in the core of a PWR during the reflood phase of a postulated large break LOCA. Test Designation - Achilles Rig. The programme includes the following types of experiments: - on an unballooned cluster: -- single phase air flow -- low pressure level swell -- low flooding rate reflood -- high flooding rate reflood - on a ballooned cluster containing an 80% blockage formed by 16 balloon sleeves: -- single phase air flow -- low flooding rate reflood 7. DISCUSSION OF THE DATA RETRIEVAL PROGRAM N/A 8. DATA FORMAT AND COMPUTER Many Computers (M00019MNYCP00). 9. TYPICAL RUNNING TIME N/A 11. CONTENTS OF LIBRARY The ACHILLES package contains test data and associated data processing software as well as the documentation listed above. 12. DATE OF ABSTRACT November 2013. KEYWORDS: DATABASES, BENCHMARKS, HEAT TRANSFER, LOSS-OF-COOLANT ACCIDENT, PWR REACTORS, REFLOODING
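    The "11-step approximation to a chopped cosine" axial power profile described above can be sketched as follows. The extrapolated length Le is an assumed value chosen only to give a plausible peaking factor; the heated length and step count come from the rig description:

```python
import math

L = 3.66       # heated length (m), from the Achilles rig description
n_steps = 11   # 11-step approximation, per the abstract
Le = 4.2       # extrapolated length (m): assumed, sets the peaking factor

def segment_average(z0, z1):
    """Average of cos(pi*(z - L/2)/Le) over [z0, z1], via the analytic
    integral: int cos(a*(z - L/2)) dz = sin(a*(z - L/2)) / a."""
    a = math.pi / Le
    return (math.sin(a * (z1 - L / 2)) - math.sin(a * (z0 - L / 2))) \
        / (a * (z1 - z0))

edges = [i * L / n_steps for i in range(n_steps + 1)]
raw = [segment_average(edges[i], edges[i + 1]) for i in range(n_steps)]
mean = sum(raw) / n_steps
profile = [r / mean for r in raw]  # normalised so the bundle average is 1.0

print("step powers:", [round(p, 3) for p in profile])
```

    Each of the 11 values would set the relative power of one axial heater zone; the profile is symmetric about mid-height and peaks in the central step.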

  2. PWR FLECHT SEASET 163-Rod Bundle Flow Blockage Task data report. NRC/EPRI/Westinghouse report No. 13, August-October 1982

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loftus, M J; Hochreiter, L E; McGuire, M F

    This report presents data from the 163-Rod Bundle Flow Blockage Task of the Full-Length Emergency Cooling Heat Transfer Systems Effects and Separate Effects Test Program (FLECHT SEASET). The task consisted of forced and gravity reflooding tests utilizing electrical heater rods with a cosine axial power profile to simulate PWR nuclear core fuel rod arrays. These tests were designed to determine the effects of flow blockage and flow bypass on reflooding behavior and to aid in the assessment of computational models in predicting the reflooding behavior of flow blockage in rod bundle arrays.

  3. Fatigue crack growth rates in a pressure vessel steel under various conditions of loading and the environment

    NASA Astrophysics Data System (ADS)

    Hicks, P. D.; Robinson, F. P. A.

    1986-10-01

    Corrosion fatigue (CF) tests have been carried out on SA508 Cl 3 pressure vessel steel in simulated PWR environments. The test variables investigated included air and PWR water environments, frequency variation over the range 1 Hz to 10 Hz, transverse and longitudinal crack growth directions, temperatures of 20 °C and 50 °C, and R-ratios of 0.2 and 0.7. It was found that decreasing the test frequency increased fatigue crack growth rates (FCGRs) in PWR environments; PWR environment testing gave enhanced crack growth compared with air tests; FCGRs were greater for cracks growing in the longitudinal direction; slight increases in temperature gave noticeable accelerations in FCGR; and several air tests gave FCGRs greater than those predicted by the existing ASME codes. Fractographic evidence indicates that FCGRs were accelerated by a hydrogen embrittlement mechanism. The presence of elongated MnS inclusions aided both mechanical fatigue and hydrogen embrittlement processes, thus producing synergistically fast FCGRs. Both anodic dissolution and hydrogen embrittlement mechanisms have been proposed for the environmental enhancement of crack growth rates. Electrochemical potential measurements and potentiostatic tests have shown that isolation of the test specimens from the clevises in the apparatus is not essential during low-temperature corrosion fatigue testing.
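    Fatigue crack growth rates of the kind compared above are commonly described by a Paris-type law, da/dN = C (ΔK)^m. The sketch below uses illustrative placeholder constants, not fitted SA508 values, and a simple edge-crack stress-intensity estimate:

```python
import math

# Paris-law sketch: da/dN = C * (dK)^m, with dK in MPa*sqrt(m), a in m.
# C_air, m_exp and env_factor are illustrative placeholders only.
C_air = 1.0e-11
m_exp = 3.0
env_factor = 5.0  # assumed environmental enhancement in PWR water

def delta_K(stress_range, a):
    """Stress-intensity range for a shallow edge crack:
    dK = Y * dS * sqrt(pi * a), with geometry factor Y ~ 1.12."""
    Y = 1.12
    return Y * stress_range * math.sqrt(math.pi * a)

def grow(a0, cycles, stress_range, C):
    """Cycle-by-cycle integration of the Paris law from initial size a0."""
    a = a0
    for _ in range(cycles):
        a += C * delta_K(stress_range, a) ** m_exp
    return a

a_air = grow(1e-3, 10000, 200.0, C_air)               # air environment
a_pwr = grow(1e-3, 10000, 200.0, C_air * env_factor)  # PWR water (enhanced)
print(f"final crack sizes: air {a_air*1e3:.2f} mm, PWR {a_pwr*1e3:.2f} mm")
```

    The environmental enhancement factor stands in for the frequency- and chemistry-dependent acceleration reported in the abstract; in practice it would itself be a function of loading frequency, temperature and water chemistry.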

  4. Neutron-gamma flux and dose calculations in a Pressurized Water Reactor (PWR)

    NASA Astrophysics Data System (ADS)

    Brovchenko, Mariya; Dechenaux, Benjamin; Burn, Kenneth W.; Console Camprini, Patrizio; Duhamel, Isabelle; Peron, Arthur

    2017-09-01

    The present work deals with Monte Carlo simulations aiming to determine the neutron and gamma responses outside the vessel and in the basemat of a Pressurized Water Reactor (PWR). The model is based on the Tihange-I Belgian nuclear reactor. With a large set of information and measurements available, this reactor has the advantage of being easily modelled and allows validation against the experimental measurements. Power distribution calculations were therefore performed with the MCNP code at IRSN and compared to the available in-core measurements. Results showed good agreement between calculated and measured values over the whole core. In this paper, the methods and hypotheses used for the particle transport simulation, from the fission distribution in the core to the detectors outside the vessel of the reactor, are also summarized. The results of the simulations are presented, including the neutron and gamma doses and flux energy spectra. MCNP6 computational results comparing the JEFF-3.1 and ENDF/B-VII.1 nuclear data evaluations, and the sensitivity of the results to some model parameters, are presented.

  5. Shift Verification and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G

    2016-09-07

    This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and to results from other Monte Carlo radiation transport codes, and found very good agreement across a variety of comparison measures. These include prediction of the critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation, we are confident that Shift can provide reference results for CASL benchmarking.

  6. Competency based training in robotic surgery: benchmark scores for virtual reality robotic simulation.

    PubMed

    Raison, Nicholas; Ahmed, Kamran; Fossati, Nicola; Buffi, Nicolò; Mottrie, Alexandre; Dasgupta, Prokar; Van Der Poel, Henk

    2017-05-01

To develop benchmark scores of competency for use within a competency-based virtual reality (VR) robotic training curriculum. This longitudinal, observational study analysed results from nine European Association of Urology hands-on training courses in VR simulation. In all, 223 participants ranging from novice to expert robotic surgeons completed 1565 exercises. Competency was set at 75% of the mean expert score. Benchmark scores for all general performance metrics generated by the simulator were calculated. Assessment exercises were selected by expert consensus and through learning-curve analysis; three basic-skill and two advanced-skill exercises were identified. Benchmark scores based on expert performance offered viable targets for novice and intermediate trainees in robotic surgery. Novice participants met the competency standards for most basic-skill exercises; however, advanced exercises were significantly more challenging. Intermediate participants performed better across the seven metrics but still did not achieve the benchmark standard in the more difficult exercises. Benchmark scores derived from expert performances offer relevant and challenging targets for trainees to achieve during VR simulation training. Objective feedback allows both participants and trainers to monitor educational progress and ensures that training remains effective. Furthermore, the well-defined goals set through benchmarking offer clear targets for trainees and enable training to move to a more efficient competency-based curriculum.
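A tiny sketch of the competency-threshold idea described above: the benchmark for each simulator metric is set at 75% of the mean expert score. The scores below are hypothetical; the study's actual metrics and values are not reproduced here.

```python
# Hypothetical illustration: benchmark = 75% of the mean expert score.

def benchmark_score(expert_scores, fraction=0.75):
    """Competency benchmark: a fixed fraction of the mean expert score."""
    return fraction * sum(expert_scores) / len(expert_scores)

expert_scores = [82.0, 90.0, 88.0]            # hypothetical expert results
threshold = benchmark_score(expert_scores)    # 0.75 * 86.67 = 65.0
trainee_meets = 70.0 >= threshold             # a trainee scoring 70 meets it
```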

  7. SIGACE Code for Generating High-Temperature ACE Files; Validation and Benchmarking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Amit R.; Ganesan, S.; Trkov, A.

    2005-05-24

A code named SIGACE has been developed as a tool for MCNP users within the scope of a research contract awarded by the Nuclear Data Section of the International Atomic Energy Agency (IAEA) (Ref: 302-F4-IND-11566 B5-IND-29641). A new recipe has been evolved for generating high-temperature ACE files for use with the MCNP code. Under this scheme the low-temperature ACE file is first converted to an ENDF-formatted file using the ACELST code and then Doppler broadened, essentially limited to the data in the resolved resonance region, to any desired higher temperature using SIGMA1. The SIGACE code then generates a high-temperature ACE file for use with the MCNP code. A thinning routine has also been introduced in the SIGACE code for reducing the size of the ACE files. The SIGACE code and the recipe for generating ACE files at higher temperatures have been applied to the SEFOR fast reactor benchmark problem (the sodium-cooled fast reactor benchmark described in the ENDF-202/BNL-19302, 1974 document). The calculated Doppler coefficient is in good agreement with the experimental value. A similar calculation using ACE files generated directly with the NJOY system also agrees with our SIGACE-computed results. The SIGACE code and the recipe were further applied to study the numerical benchmark configuration of selected idealized PWR pin cell configurations with five different fuel enrichments as reported by Mosteller and Eisenhart. The SIGACE code, which has been tested with several FENDL/MC files, will be available free of cost, upon request, from the Nuclear Data Section of the IAEA.

  8. Grain boundary damage evolution and SCC initiation of cold-worked alloy 690 in simulated PWR primary water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Ziqing; Toloczko, Mychailo B.; Kruska, Karen

Long-term grain boundary (GB) damage evolution and stress corrosion crack initiation in alloy 690 are being investigated by constant-load tensile testing in high-temperature, simulated PWR primary water. Six commercial alloy 690 heats are being tested in various cold-work conditions loaded at their yield stress. This paper reviews the basic test approach and the detailed characterizations performed on selected specimens after an exposure time of ~1 year. Intergranular crack nucleation was observed under constant stress in certain highly cold-worked (CW) alloy 690 heats and was found to be associated with the formation of GB cavities. Somewhat surprisingly, the heats most susceptible to cavity formation and crack nucleation were thermally treated materials with the most uniform coverage of small GB carbides. Microstructure, % cold work and applied stress comparisons are made among the alloy 690 heats to better understand the factors influencing GB cavity formation and crack initiation.

  9. Effects of iron content in Ni-Cr-xFe alloys and immersion time on the oxide films formed in a simulated PWR water environment

    NASA Astrophysics Data System (ADS)

    Ru, Xiangkun; Lu, Zhanpeng; Chen, Junjie; Han, Guangdong; Zhang, Jinlong; Hu, Pengfei; Liang, Xue

    2017-12-01

    The iron content in Ni-Cr-xFe (x = 0-9 at.%) alloys strongly affected the properties of oxide films after 978 h of immersion in the simulated PWR primary water environment at 310 °C. Increasing the iron content in the alloys increased the amount of iron-bearing polyhedral spinel oxide particles in the outer oxide layer and increased the local oxidation penetrations into the alloy matrix from the chromium-rich inner oxide layer. The effects of iron content in the alloys on the oxide film properties after 500 h of immersion were less significant than those after 978 h. Iron content increased, and chromium content decreased, in the outer oxide layer with increasing iron content in the alloys. Increasing the immersion time facilitated the formation of the local oxidation penetrations along the matrix/film interface and the nickel-bearing spinel oxides in the outer oxide layer.

  10. Application of the TEMPEST computer code for simulating hydrogen distribution in model containment structures. [PWR; BWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trent, D.S.; Eyler, L.L.

    In this study several aspects of simulating hydrogen distribution in geometric configurations relevant to reactor containment structures were investigated using the TEMPEST computer code. Of particular interest was the performance of the TEMPEST turbulence model in a density-stratified environment. Computed results illustrated that the TEMPEST numerical procedures predicted the measured phenomena with good accuracy under a variety of conditions and that the turbulence model used is a viable approach in complex turbulent flow simulation.

  11. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
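A minimal sketch, in the spirit of the motor-control benchmarks described above (not the paper's code), of an error-driven learning rule operating in a closed loop: a PD controller on a 1-D point mass is augmented with an adaptive bias that learns to cancel an unknown constant external force. All gains, the plant model, and the force value are assumptions for illustration.

```python
# Closed-loop error-driven learning sketch: PD control plus an adaptive bias.
# The plant, gains and unknown force are invented for this illustration.

def run(steps=20000, dt=0.001, target=1.0, unknown_force=-3.0):
    x, v = 0.0, 0.0                  # position and velocity of a unit point mass
    w = 0.0                          # learned compensation term
    kp, kd, lr = 50.0, 10.0, 25.0    # PD gains and learning rate (assumed)
    for _ in range(steps):
        err = target - x
        u = kp * err - kd * v + w        # PD control plus the learned bias
        w += lr * err * dt               # error-driven update (integral-like)
        v += (u + unknown_force) * dt    # plant feels control + unknown force
        x += v * dt
    return x, w

x_final, w_final = run()
# The bias converges toward cancelling the unknown force, driving the
# steady-state error toward zero, which pure PD control cannot do.
```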

  12. Analysis of the return to power scenario following a LBLOCA in a PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macian, R.; Tyler, T.N.; Mahaffy, J.H.

    1995-09-01

The risk of reactivity accidents has been considered an important safety issue since the beginning of the nuclear power industry. In particular, several events leading to such scenarios for PWRs have been recognized and studied to assess the potential risk of fuel damage. The present paper analyzes one such event: the possible return to power during the reflooding phase following a LBLOCA. TRAC-PF1/MOD2, coupled with a three-dimensional neutronic model of the core based on the Nodal Expansion Method (NEM), was used to perform the analysis. The system computer model contains a detailed representation of a complete typical 4-loop PWR. Thus, the simulation can follow complex system interactions during reflooding, which may influence the neutronics feedback in the core. Analyses were made with core models based on cross sections generated by LEOPARD. A standard case and a potentially more limiting case, with increased pressurizer and accumulator inventories, were run. In both simulations, the reactor reaches a stable state after the reflooding is completed. The lower core region, filled with cold water, generates enough power to boil part of the incoming liquid, thus preventing the core average liquid fraction from reaching a value high enough to cause a return to power. At the same time, the mass flow rate through the core is adequate to maintain the rod temperature well below the fuel damage limit.

  13. Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set

    NASA Astrophysics Data System (ADS)

    Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.

    2017-05-01

    A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.

  14. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  15. 40 CFR 59.506 - How do I demonstrate compliance if I manufacture multi-component kits?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

... multi-component kits as defined in § 59.503, then the Kit PWR must not exceed the Total Reactivity Limit. (b) You must calculate the Kit PWR and the Total Reactivity Limit as follows: (1) Kit PWR = (PWR(1) × W1) + (PWR(2) × W2) + ... + (PWR(n) × Wn) (2) Total Reactivity Limit = (RL1 × W1) + (RL2 × W2...
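A worked sketch of the § 59.506 kit calculation quoted above, where both quantities are weight-fraction-weighted sums over the n kit components (in this regulation "PWR" denotes product-weighted reactivity of an aerosol coating component, not a reactor). All numeric values below are invented for illustration.

```python
# Hypothetical 40 CFR 59.506 kit compliance check: both quantities are
# weighted sums over the kit's components; the numbers are invented.

def kit_pwr(pwrs, weights):
    """Kit PWR = sum over components of PWR(i) * W(i)."""
    return sum(p * w for p, w in zip(pwrs, weights))

def total_reactivity_limit(limits, weights):
    """Total Reactivity Limit = sum over components of RL(i) * W(i)."""
    return sum(rl * w for rl, w in zip(limits, weights))

pwrs    = [0.30, 0.50]   # hypothetical per-component PWR values
limits  = [0.45, 0.55]   # hypothetical per-component reactivity limits
weights = [0.60, 0.40]   # weight fractions, summing to 1

kit = kit_pwr(pwrs, weights)                     # 0.38
limit = total_reactivity_limit(limits, weights)  # 0.49
compliant = kit <= limit                         # Kit PWR does not exceed the limit
```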

  16. 40 CFR 59.506 - How do I demonstrate compliance if I manufacture multi-component kits?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

... multi-component kits as defined in § 59.503, then the Kit PWR must not exceed the Total Reactivity Limit. (b) You must calculate the Kit PWR and the Total Reactivity Limit as follows: (1) Kit PWR = (PWR(1) × W1) + (PWR(2) × W2) + ... + (PWR(n) × Wn) (2) Total Reactivity Limit = (RL1 × W1) + (RL2 × W2...

  17. 40 CFR 59.506 - How do I demonstrate compliance if I manufacture multi-component kits?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

... multi-component kits as defined in § 59.503, then the Kit PWR must not exceed the Total Reactivity Limit. (b) You must calculate the Kit PWR and the Total Reactivity Limit as follows: (1) Kit PWR = (PWR(1) × W1) + (PWR(2) × W2) + ... + (PWR(n) × Wn) (2) Total Reactivity Limit = (RL1 × W1) + (RL2 × W2...

  18. 40 CFR 59.506 - How do I demonstrate compliance if I manufacture multi-component kits?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

... multi-component kits as defined in § 59.503, then the Kit PWR must not exceed the Total Reactivity Limit. (b) You must calculate the Kit PWR and the Total Reactivity Limit as follows: (1) Kit PWR = (PWR(1) × W1) + (PWR(2) × W2) + ... + (PWR(n) × Wn) (2) Total Reactivity Limit = (RL1 × W1) + (RL2 × W2...

  19. Calculation and benchmarking of an azimuthal pressure vessel neutron fluence distribution using the BOXER code and scraping experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzgrewe, F.; Hegedues, F.; Paratte, J.M.

    1995-03-01

The light water reactor BOXER code was used to determine the fast azimuthal neutron fluence distribution at the inner surface of the reactor pressure vessel after the tenth cycle of a pressurized water reactor (PWR). Using a cross-section library in 45 groups, fixed-source calculations in transport theory and x-y geometry were carried out to determine the fast azimuthal neutron flux distribution at the inner surface of the pressure vessel for four different cycles. From these results, the fast azimuthal neutron fluence after the tenth cycle was estimated and compared with the results obtained from scraping test experiments. In these experiments, small samples of material were taken from the inner surface of the pressure vessel. The fast neutron fluence was then determined from the measured activity of the samples. The BOXER and scraping-test results show maximal differences of 15%, which is very good considering the factor of 10³ neutron attenuation between the reactor core and the pressure vessel. To compare the BOXER results with an independent code, the 21st cycle of the PWR was also calculated with the TWODANT two-dimensional transport code, using the same group structure and cross-section library. Deviations in the fast azimuthal flux distribution were found to be <3%, which verifies the accuracy of the BOXER results.

  20. SCC Initiation Behavior of Alloy 182 in PWR Primary Water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toloczko, Mychailo B.; Zhai, Ziqing; Bruemmer, Stephen M.

SCC initiation behavior of 15% cold-forged specimens cut from four different alloy 182 weldments was investigated in 360°C simulated PWR primary water under constant load at the yield stress, using direct current potential drop to perform in-situ monitoring of SCC initiation time. Within each weldment, one or more specimens underwent SCC initiation within 24 hours of reaching full load, while some specimens had much longer initiation times, in a few cases exceeding 2500 hours. Detailed examinations were conducted on these specimens with a focus on different microstructural features such as preexisting defects, grain orientation and second phases, highlighting an important role of microstructure in crack initiation of alloy 182.

  1. WWTP dynamic disturbance modelling--an essential module for long-term benchmarking development.

    PubMed

    Gernaey, K V; Rosen, C; Jeppsson, U

    2006-01-01

    Intensive use of the benchmark simulation model No. 1 (BSM1), a protocol for objective comparison of the effectiveness of control strategies in biological nitrogen removal activated sludge plants, has also revealed a number of limitations. Preliminary definitions of the long-term benchmark simulation model No. 1 (BSM1_LT) and the benchmark simulation model No. 2 (BSM2) have been made to extend BSM1 for evaluation of process monitoring methods and plant-wide control strategies, respectively. Influent-related disturbances for BSM1_LT/BSM2 are to be generated with a model, and this paper provides a general overview of the modelling methods used. Typical influent dynamic phenomena generated with the BSM1_LT/BSM2 influent disturbance model, including diurnal, weekend, seasonal and holiday effects, as well as rainfall, are illustrated with simulation results. As a result of the work described in this paper, a proposed influent model/file has been released to the benchmark developers for evaluation purposes. Pending this evaluation, a final BSM1_LT/BSM2 influent disturbance model definition is foreseen. Preliminary simulations with dynamic influent data generated by the influent disturbance model indicate that default BSM1 activated sludge plant control strategies will need extensions for BSM1_LT/BSM2 to efficiently handle 1 year of influent dynamics.
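A hedged sketch of the kind of influent disturbance generator described above for BSM1_LT/BSM2: a base dry-weather flow modulated by diurnal and weekend effects. The functional forms and constants are assumptions for illustration, not the published benchmark model.

```python
# Synthetic influent flow with diurnal and weekend effects (illustrative only;
# the published BSM1_LT/BSM2 influent model is richer and differently shaped).
import math

def influent_flow(t_days, base=20000.0):
    """Synthetic influent flow (m3/d) at time t in days (t = 0 is midnight Monday)."""
    # Diurnal sinusoid: minimum at midnight, peak at midday.
    diurnal = 1.0 + 0.25 * math.sin(2.0 * math.pi * (t_days % 1.0) - math.pi / 2.0)
    weekend = 0.85 if (t_days % 7.0) >= 5.0 else 1.0   # reduced load on weekends
    return base * diurnal * weekend

week = [influent_flow(h / 24.0) for h in range(7 * 24)]  # one week, hourly
```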

  2. Simulation of Benchmark Cases with the Terminal Area Simulation System (TASS)

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.; Proctor, Fred H.

    2011-01-01

    The hydrodynamic core of the Terminal Area Simulation System (TASS) is evaluated against different benchmark cases. In the absence of closed form solutions for the equations governing atmospheric flows, the models are usually evaluated against idealized test cases. Over the years, various authors have suggested a suite of these idealized cases which have become standards for testing and evaluating the dynamics and thermodynamics of atmospheric flow models. In this paper, simulations of three such cases are described. In addition, the TASS model is evaluated against a test case that uses an exact solution of the Navier-Stokes equations. The TASS results are compared against previously reported simulations of these benchmark cases in the literature. It is demonstrated that the TASS model is highly accurate, stable and robust.

  3. Assessment of Static Delamination Propagation Capabilities in Commercial Finite Element Codes Using Benchmark Analysis

    NASA Technical Reports Server (NTRS)

    Orifici, Adrian C.; Krueger, Ronald

    2010-01-01

With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc™ and MD Nastran™. Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus. The results demonstrated that the VCCT implementation in Marc™ and MD Nastran™ was capable of accurately replicating the benchmark delamination growth results, and that the use of the numerical benchmarks offers advantages over benchmarking using experimental and analytical results.

  4. In-situ Condition Monitoring of Components in Small Modular Reactors Using Process and Electrical Signature Analysis. Final report, volume 1. Development of experimental flow control loop, data analysis and plant monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhyaya, Belle; Hines, J. Wesley; Damiano, Brian

The research and development under this project was focused on the following three major objectives: Objective 1: Identification of critical in-vessel SMR components for remote monitoring and development of their low-order dynamic models, along with a simulation model of an integral pressurized water reactor (iPWR). Objective 2: Development of an experimental flow control loop with motor-driven valves and pumps, incorporating a data acquisition and on-line monitoring interface. Objective 3: Development of stationary and transient signal processing methods for electrical signatures, machinery vibration, and for characterizing process variables for equipment monitoring. This objective includes the development of a data analysis toolbox. The following is a summary of the technical accomplishments under this project: - A detailed literature review of various SMR types and electrical signature analysis of motor-driven systems was completed. A bibliography of literature is provided at the end of this report. Assistance was provided by ORNL in identifying some key references. - A review of literature on pump-motor modeling and digital signal processing methods was performed. - An existing flow control loop was upgraded with new instrumentation, data acquisition hardware and software. The upgrading of the experimental loop included the installation of a new submersible pump driven by a three-phase induction motor. All the sensors were calibrated before full-scale experimental runs were performed. - A MATLAB-Simulink model of a three-phase induction motor and pump system was completed. The model was used to simulate normal operation and fault conditions in the motor-pump system, and to identify changes in the electrical signatures. - A simulation model of an integral PWR (iPWR) was updated and the MATLAB-Simulink model was validated for known transients.
The pump-motor model was interfaced with the iPWR model for testing the impact of primary flow perturbations (upsets) on plant parameters and the pump electrical signatures. Additionally, the reactor simulation is being used to generate normal operation data and data with instrumentation faults and process anomalies. A frequency controller was interfaced with the motor power supply in order to vary the electrical supply frequency. The experimental flow control loop was used to generate operational data under varying motor performance characteristics. Coolant leakage events were simulated by varying the bypass loop flow rate. The accuracy of the motor power calculation was improved by incorporating the power factor, computed from motor current and voltage in each phase of the induction motor. - A variety of experimental runs were made for steady-state and transient pump operating conditions. Process, vibration, and electrical signatures were measured using a submersible pump with variable supply frequency. High correlation was seen between motor current and the pump discharge pressure signal; similarly high correlation was exhibited between pump motor power and flow rate. Wide-band analysis indicated high coherence (in the frequency domain) between motor current and vibration signals. - Wide-band operational data from a PWR were acquired from AMS Corporation and used to develop time-series models, and to estimate signal spectra and sensor time constants. All the data were from different pressure transmitters in the system, including the primary and secondary loops. These signals were pre-processed using the wavelet transform for filtering both low-frequency and high-frequency bands. This technique of signal pre-processing provides minimum distortion of the data, and results in a more optimal estimation of the time constants of plant sensors using time-series modeling techniques.
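An illustrative sketch (not the project's data analysis toolbox) of estimating a first-order sensor time constant by time-series modelling, as described above: a first-order lag sampled at interval dt behaves as an AR(1) process x[k] = a·x[k-1] + e[k] with a = exp(-dt/tau), so tau = -dt / ln(a). The record below is synthetic.

```python
# Recover a sensor time constant from a synthetic AR(1) noise record.
import math, random

random.seed(0)
dt, tau_true = 0.01, 0.25                 # 100 Hz sampling, 250 ms time constant
a_true = math.exp(-dt / tau_true)

x, series = 0.0, []
for _ in range(200000):                   # synthetic detrended sensor noise
    x = a_true * x + random.gauss(0.0, 1.0)
    series.append(x)

# Least-squares AR(1) fit: a_hat = sum(x[k]*x[k-1]) / sum(x[k-1]^2)
num = sum(series[k] * series[k - 1] for k in range(1, len(series)))
den = sum(series[k - 1] ** 2 for k in range(1, len(series)))
a_hat = num / den
tau_hat = -dt / math.log(a_hat)           # recovered time constant (s)
```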

  5. Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks

    NASA Astrophysics Data System (ADS)

    Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.

    2015-12-01

A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real-world topography can be compared to recent real-world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES to correctly match early stages of the 2012-2013 Tolbachik flow, Kamchatka, Russia, to 80%. We can also evaluate model performance given uncertain input parameters using a Monte Carlo setup. This illuminates sensitivity to model uncertainty.
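A minimal sketch of the fit metrics named above for comparing a simulated flow footprint (A) against an observed one (B) cell by cell: P(A|B), P(¬A|¬B), and, for contrast, the Jaccard fitness coefficient. The toy grids are invented for illustration.

```python
# Cell-by-cell comparison of simulated vs. observed inundation footprints.

def fit_metrics(simulated, observed):
    """simulated, observed: equal-length lists of booleans (cell inundated?)."""
    pairs = list(zip(simulated, observed))
    tp = sum(1 for s, o in pairs if s and o)          # inundated in both
    fp = sum(1 for s, o in pairs if s and not o)      # simulated only
    fn = sum(1 for s, o in pairs if o and not s)      # observed only
    tn = sum(1 for s, o in pairs if not s and not o)  # inundated in neither
    p_a_given_b = tp / (tp + fn)       # P(A|B): observed cells also simulated
    p_na_given_nb = tn / (tn + fp)     # P(not A | not B)
    jaccard = tp / (tp + fp + fn)      # Jaccard fitness coefficient
    return p_a_given_b, p_na_given_nb, jaccard

sim = [True, True, True, False, False, False]   # toy simulated footprint
obs = [True, True, False, True, False, False]   # toy observed footprint
p_ab, p_nanb, jac = fit_metrics(sim, obs)       # 2/3, 2/3, 1/2
```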

  6. Simulator for SUPO, a Benchmark Aqueous Homogeneous Reactor (AHR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Steven Karl; Determan, John C.

    2015-10-14

    A simulator has been developed for SUPO (Super Power) an aqueous homogeneous reactor (AHR) that operated at Los Alamos National Laboratory (LANL) from 1951 to 1974. During that period SUPO accumulated approximately 600,000 kWh of operation. It is considered the benchmark for steady-state operation of an AHR. The SUPO simulator was developed using the process that resulted in a simulator for an accelerator-driven subcritical system, which has been previously reported.

  7. Posttest analysis of international standard problem 10 using RELAP4/MOD7. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, M.; Davis, C.B.; Peterson, A.C. Jr.

RELAP4/MOD7, a best-estimate computer code for the calculation of thermal and hydraulic phenomena in a nuclear reactor or related system, is the latest version in the RELAP4 code development series. This paper evaluates the capability of RELAP4/MOD7 to calculate refill/reflood phenomena. This evaluation uses the data of International Standard Problem 10, which is based on West Germany's KWU PKL refill/reflood experiment K9A. The PKL test facility represents a typical West German four-loop, 1300 MW pressurized water reactor (PWR) in reduced scale while maintaining a prototypical volume-to-power ratio. The PKL facility was designed to specifically simulate the refill/reflood phase of a hypothetical loss-of-coolant accident (LOCA).

  8. Development of on-line monitoring system for Nuclear Power Plant (NPP) using neuro-expert, noise analysis, and modified neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subekti, M.; Center for Development of Reactor Safety Technology, National Nuclear Energy Agency of Indonesia, Puspiptek Complex BO.80, Serpong-Tangerang, 15340; Ohno, T.

    2006-07-01

The neuro-expert approach has been utilized in previous monitoring-system research on Pressurized Water Reactors (PWRs). This research improved the monitoring system by utilizing the neuro-expert method, conventional noise analysis and modified neural networks for capability extension. The parallel method applications required a distributed computer-network architecture for performing real-time tasks. The research aimed to improve the previous monitoring system, which could detect sensor degradation, and to perform the monitoring demonstration in the High Temperature Engineering Test Reactor (HTTR). The monitoring system under development, based on methods that have been tested using data from an online PWR simulator as well as from RSG-GAS (a 30 MW research reactor in Indonesia), will be applied in the HTTR for more complex monitoring. (authors)

  9. Electrochemical study of pre- and post-transition corrosion of Zr alloys in PWR coolant

    NASA Astrophysics Data System (ADS)

    Macák, Jan; Novotný, Radek; Sajdl, Petr; Renčiuková, Veronika; Vrtílková, Věra

Corrosion properties of Zr-Sn and Zr-Nb zirconium alloys were studied under simulated PWR conditions (or, more exactly, VVER conditions — boric acid, potassium hydroxide, lithium hydroxide) at temperatures up to 340°C and 15 MPa using in-situ electrochemical impedance spectroscopy (EIS) and polarization measurements. EIS spectra were obtained over a wide range of frequencies (typically 100 kHz — 100 μHz). This made it possible to gain information on both the dielectric properties of the oxide layers developing on the Zr-alloy surfaces and the kinetics of the corrosion process and the associated charge- and mass-transfer phenomena. Experiments were run for more than 380 days; thus, the study of all the corrosion stages (pre-transition, transition, post-transition) was possible.
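A toy sketch of how EIS spectra like those above are often interpreted: the impedance of a simple oxide-film equivalent circuit (solution resistance Rs in series with a parallel Rp||C film element), evaluated across the quoted frequency range. The component values are invented; real PWR oxide films require richer equivalent circuits.

```python
# Impedance spectrum of an Rs + (Rp || C) equivalent circuit (values invented).
import math

def z_film(freq_hz, rs=10.0, rp=1.0e5, c=2.0e-6):
    """Complex impedance (ohm) of Rs in series with a parallel Rp||C element."""
    jw = 1j * 2.0 * math.pi * freq_hz
    return rs + rp / (1.0 + jw * rp * c)

freqs = [10.0 ** e for e in range(-4, 6)]     # 100 uHz .. 100 kHz, decade steps
spectrum = [z_film(f) for f in freqs]

# Limits: |Z| -> Rs + Rp at low frequency, |Z| -> Rs at high frequency.
z_low, z_high = abs(spectrum[0]), abs(spectrum[-1])
```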

  10. EMERALD REV.1. PWR Accident Activity Release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunot, W.K.; Fray, R.R.; Gillespie, S.G.

    1975-10-01

The EMERALD program is designed for the calculation of radiation releases and exposures resulting from abnormal operation of a large pressurized water reactor (PWR). The approach used in EMERALD is similar to an analog simulation of a real system. Each component or volume in the plant which contains a radioactive material is represented by a subroutine which keeps track of the production, transfer, decay and absorption of radioactivity in that volume. During the course of the analysis of an accident, activity is transferred from subroutine to subroutine in the program as it would be transferred from place to place in the plant. For example, in the calculation of the doses resulting from a loss-of-coolant accident, the program first calculates the activity built up in the fuel before the accident, then releases some of this activity to the containment volume. Some of this activity is then released to the atmosphere. The rates of transfer, leakage, production, cleanup, decay, and release are read in as input to the program. Subroutines are also included which calculate the on-site and off-site radiation exposures at various distances for individual isotopes and sums of isotopes. The program contains a library of physical data for the twenty-five isotopes of most interest in licensing calculations, and other isotopes can be added or substituted. Because of the flexible nature of the simulation approach, the EMERALD program can be used for most calculations involving the production and release of radioactive materials during abnormal operation of a PWR. These include design, operational, and licensing studies.
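A hedged sketch of the compartment bookkeeping the EMERALD description implies: each plant volume tracks activity subject to transfer, leakage and decay terms. Here activity released to containment leaks slowly to the atmosphere while decaying; the rate constants and initial inventory are invented (the decay constant corresponds to I-131's roughly 8.02-day half-life).

```python
# Two-compartment decay/leak model (containment -> atmosphere), Euler scheme.
# Rates and inventory are invented; this is not the EMERALD code.
import math

def simulate(a0=1.0e6, leak_per_h=0.01, half_life_h=8.02 * 24.0,
             dt=0.1, hours=48.0):
    """Euler integration of a two-compartment decay/leak model (units arbitrary)."""
    lam = math.log(2.0) / half_life_h          # decay constant (1/h)
    a_cont, a_atmos = a0, 0.0
    for _ in range(int(hours / dt)):
        leaked = leak_per_h * a_cont * dt      # transfer containment -> atmosphere
        a_cont += -leaked - lam * a_cont * dt
        a_atmos += leaked - lam * a_atmos * dt
    return a_cont, a_atmos

a_cont, a_atmos = simulate()
# Activity is conserved up to radioactive decay: together the two compartments
# only lose activity through the decay terms.
```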

  11. Preliminary coupling of the Monte Carlo code OpenMC and the Multiphysics Object-Oriented Simulation Environment (MOOSE) for analyzing Doppler feedback in Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Ellis; Derek Gaston; Benoit Forget

    In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that for a simplified model the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
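    The Functional Expansion Tally idea can be sketched in a few lines: tally scores are accumulated against Legendre polynomials rather than spatial bins, and the continuous distribution is then reconstructed from the coefficients at arbitrary points of the receiving mesh. The sampled "power shape" below is a made-up chopped cosine, not OpenMC output:

```python
import numpy as np
from numpy.polynomial import legendre

# Sample "collision sites" from a made-up 1-D power shape by rejection
# sampling, accumulate Legendre moments (the FET step), then reconstruct
# the normalized density at arbitrary points.
def true_power(x):
    return np.cos(0.5 * np.pi * x)  # x in [-1, 1], normalized coordinate

rng = np.random.default_rng(42)
xs = np.empty(0)
while xs.size < 200_000:
    cand = rng.uniform(-1, 1, 50_000)
    keep = rng.uniform(0, 1, 50_000) < true_power(cand)
    xs = np.concatenate([xs, cand[keep]])
xs = xs[:200_000]

# FET coefficients: c_n = (2n+1)/2 * E[P_n(x)] under the sampled density.
order = 6
coeffs = np.array([(2 * n + 1) / 2 * legendre.legval(xs, np.eye(order + 1)[n]).mean()
                   for n in range(order + 1)])

# Reconstruct the density on an arbitrary (e.g. finite element) point set.
pts = np.array([-0.9, -0.3, 0.1, 0.55, 0.8])
recon = legendre.legval(pts, coeffs)
# Exact normalized density for comparison: (pi/4) * cos(pi*x/2).
exact = (np.pi / 4) * np.cos(0.5 * np.pi * pts)
print(np.max(np.abs(recon - exact)))
```

    Because only `order + 1` numbers are transferred, this gives a compact, mesh-independent way to hand tallied power shapes to an unstructured finite element code.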

  12. Development of burnup dependent fuel rod model in COBRA-TF

    NASA Astrophysics Data System (ADS)

    Yilmaz, Mine Ozdemir

    The purpose of this research was to develop a burnup-dependent fuel thermal conductivity model within the Pennsylvania State University Reactor Dynamics and Fuel Management Group (RDFMG) version of the subchannel thermal-hydraulics code COBRA-TF (CTF). The model takes into account, first, the degradation of fuel thermal conductivity with high burnup and, second, the dependence of fuel thermal conductivity on gadolinium content, for both UO2 and MOX fuel rods. The modified Nuclear Fuel Industries (NFI) model for UO2 fuel rods and the Duriez/modified NFI model for MOX fuel rods were incorporated into CTF, and fuel centerline predictions were compared against Halden experimental test data and FRAPCON-3.4 predictions to validate the burnup-dependent fuel thermal conductivity model in CTF. Experimental test cases from Halden reactor fuel rods were simulated with CTF for UO2 fuel rods at Beginning of Life (BOL), through lifetime without Gd2O3, and through lifetime with Gd2O3, as well as for a MOX fuel rod. Since test fuel rod and FRAPCON-3.4 results were based on single-rod measurements, CTF was run for a single fuel rod surrounded by a single channel configuration. Input decks for CTF were developed for one fuel rod located at the center of a subchannel (rod-centered subchannel approach). Fuel centerline temperatures predicted by CTF were compared against the measurements from Halden experimental test data and the predictions from FRAPCON-3.4. After implementing the new fuel thermal conductivity model in CTF and validating the model with experimental data, the CTF model was applied to steady-state and transient calculations. A 4x4 PWR fuel bundle configuration from the Purdue MOX benchmark was used to apply the new model for steady-state and transient calculations.
First, one high-burnup UO2 and one high-burnup MOX fuel rod from the 4x4 matrix were selected to carry out single fuel rod calculations, and fuel centerline temperatures predicted by CTF/TORT-TD were compared against CTF/TORT-TD/FRAPTRAN predictions. After confirming that the new fuel thermal conductivity model in CTF worked and provided consistent results with FRAPTRAN predictions for a single fuel rod configuration, the same type of analysis was carried out for a larger system, the 4x4 PWR bundle consisting of 15 fuel pins and one control guide tube. Steady-state calculations at Hot Full Power (HFP) conditions with the control guide tube out (unrodded) were performed for the 4x4 PWR array with the CTF/TORT-TD coupled code system. Fuel centerline, surface and average temperatures predicted by CTF/TORT-TD with and without the new fuel thermal conductivity model were compared against CTF/TORT-TD/FRAPTRAN predictions to demonstrate the improvement in fuel centerline predictions when the new model was used. In addition, constant and CTF dynamic gap conductance models were used with the new thermal conductivity model to show the performance of the CTF dynamic gap conductance model and its impact on fuel centerline and surface temperatures. Finally, a Rod Ejection Accident (REA) scenario using the same 4x4 PWR array was run at both Hot Zero Power (HZP) and Hot Full Power (HFP) conditions, starting at a position where half of the control rod is inserted. This scenario was run using the CTF/TORT-TD coupled code system with and without the new fuel thermal conductivity model. The purpose of this transient analysis was to show the impact of thermal conductivity degradation (TCD) on feedback effects, specifically the Doppler Reactivity Coefficient (DRC) and, eventually, total core reactivity.
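    The general shape of a burnup-dependent conductivity correlation of the modified-NFI type can be illustrated as a phonon term degraded by a burnup-dependent defect resistance, plus an electronic term that matters only at high temperature. The coefficients below are rough placeholders chosen to show the trend, not the values used in FRAPCON or CTF:

```python
import math

# Illustrative k(T, Bu) of the general form 1/(A + B*T + C*Bu) + D/T^2*exp(-E/T):
# lattice (phonon) conduction degraded by burnup-induced defects, plus an
# electronic contribution. All coefficients are placeholder values.
def k_fuel(temp_k, burnup_gwd_t):
    phonon = 1.0 / (0.0452 + 2.46e-4 * temp_k + 1.87e-3 * burnup_gwd_t)
    electronic = 3.5e9 / temp_k**2 * math.exp(-16361.0 / temp_k)
    return phonon + electronic  # W/m-K

fresh = k_fuel(800.0, 0.0)
burned = k_fuel(800.0, 60.0)
print(fresh, burned)
```

    At 800 K this sketch gives roughly 4 W/m-K for fresh fuel and under 3 W/m-K at 60 GWd/tU, the kind of degradation that motivates burnup-dependent conductivity in fuel centerline temperature predictions.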

  13. Results of the GABLS3 diurnal-cycle benchmark for wind energy applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodrigo, J. Sanz; Allaerts, D.; Avila, M.

    We present results of the GABLS3 model intercomparison benchmark revisited for wind energy applications. The case consists of a diurnal cycle, measured at the 200-m tall Cabauw tower in the Netherlands, including a nocturnal low-level jet. The benchmark includes a sensitivity analysis of WRF simulations using two input meteorological databases and five planetary boundary-layer schemes. A reference set of mesoscale tendencies is used to drive microscale simulations using RANS k-ϵ and LES turbulence models. The validation is based on rotor-based quantities of interest. Cycle-integrated mean absolute errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Altogether, the microscale simulations produce a consistent coupling with mesoscale forcings.

  14. Results of the GABLS3 diurnal-cycle benchmark for wind energy applications

    DOE PAGES

    Rodrigo, J. Sanz; Allaerts, D.; Avila, M.; ...

    2017-06-13

    We present results of the GABLS3 model intercomparison benchmark revisited for wind energy applications. The case consists of a diurnal cycle, measured at the 200-m tall Cabauw tower in the Netherlands, including a nocturnal low-level jet. The benchmark includes a sensitivity analysis of WRF simulations using two input meteorological databases and five planetary boundary-layer schemes. A reference set of mesoscale tendencies is used to drive microscale simulations using RANS k-ϵ and LES turbulence models. The validation is based on rotor-based quantities of interest. Cycle-integrated mean absolute errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Altogether, the microscale simulations produce a consistent coupling with mesoscale forcings.

  15. 77 FR 37795 - Airworthiness Directives; Dassault Aviation Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-25

    ... display of ELEC:LH ESS PWR LO or ELEC:LH ESS NO PWR (Abnormal procedure 3-190-40), land at nearest suitable airport Upon display of ELEC:RH ESS PWR LO and ELEC:RH ESS NO PWR (Abnormal procedure 3-190-45...

  16. Benchmark Simulation Model No 2: finalisation of plant layout and default control strategy.

    PubMed

    Nopens, I; Benedetti, L; Jeppsson, U; Pons, M-N; Alex, J; Copp, J B; Gernaey, K V; Rosen, C; Steyer, J-P; Vanrolleghem, P A

    2010-01-01

    The COST/IWA Benchmark Simulation Model No 1 (BSM1) has been available for almost a decade. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the research work related to the benchmark simulation models has resulted in more than 300 publications worldwide demonstrates the interest in and need for such tools within the research community. Recent efforts within the IWA Task Group on "Benchmarking of control strategies for WWTPs" have focused on an extension of the benchmark simulation model. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently, includes both pretreatment of wastewater as well as the processes describing sludge treatment. The motivation for the extension is the increasing interest in and need to operate and control wastewater treatment systems not only at an individual process level but also on a plant-wide basis. To facilitate the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion within the one-week BSM1 evaluation period. In this paper, the finalised plant layout is summarised and, as was done for BSM1, a default control strategy is proposed. A demonstration of how BSM2 can be used to evaluate control strategies is also given.

  17. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE PAGES

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...

    2018-03-26

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  18. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  19. PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality

    NASA Astrophysics Data System (ADS)

    Hammond, G. E.; Frederick, J. M.

    2016-12-01

    In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification establishes whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively-parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations given by Oberkampf and Trucano (2007), which describes four essential elements in high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information. 
Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
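    The verification pattern described above (numerical solution compared against a closed-form analytical solution, judged by an error norm) can be sketched with a 1-D transient diffusion problem whose analytical solution is erf-based. The parameters are illustrative, not from any actual PFLOTRAN benchmark:

```python
import numpy as np
from math import erf

# Verification sketch: diffuse a concentration step into a semi-infinite
# domain with explicit finite differences, then compare against the
# analytical solution c(x,t) = 1 - erf(x / (2*sqrt(D*t))).
D = 1e-9            # diffusion coefficient, m^2/s (illustrative)
L, nx = 0.05, 201   # domain length (m) and grid points
dx = L / (nx - 1)
dt = 0.25 * dx**2 / D          # stable explicit time step
t_end = 1.0e5                  # seconds

c = np.zeros(nx)
c[0] = 1.0                     # fixed c=1 boundary at x=0
for _ in range(int(t_end / dt)):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0], c[-1] = 1.0, 0.0

x = np.linspace(0, L, nx)
analytic = np.array([1.0 - erf(xi / (2 * np.sqrt(D * t_end))) for xi in x])
l2_error = np.sqrt(np.mean((c - analytic)**2))
print(l2_error)
```

    A QA suite wraps exactly this comparison in a pass/fail criterion, e.g. requiring the L2 error to fall below a documented tolerance.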

  20. Simulation Studies for Inspection of the Benchmark Test with PATRASH

    NASA Astrophysics Data System (ADS)

    Shimosaki, Y.; Igarashi, S.; Machida, S.; Shirakata, M.; Takayama, K.; Noda, F.; Shigaki, K.

    2002-12-01

    In order to delineate the halo-formation mechanisms in a typical FODO lattice, a 2-D simulation code PATRASH (PArticle TRAcking in a Synchrotron for Halo analysis) has been developed. The electric field originating from the space charge is calculated by the Hybrid Tree code method. Benchmark tests utilizing the three simulation codes ACCSIM, PATRASH and SIMPSONS were carried out, and the results were confirmed to be in fair agreement with each other. The details of the PATRASH simulation are discussed with some examples.

  1. The influence of psychological factors on post-partum weight retention at 9 months.

    PubMed

    Phillips, Joanne; King, Ross; Skouteris, Helen

    2014-11-01

    Post-partum weight retention (PWR) has been identified as a critical pathway for long-term overweight and obesity. In recent years, psychological factors have been demonstrated to play a key role in contributing to and maintaining PWR. Therefore, the aim of this study was to explore the relationship between post-partum psychological distress and PWR at 9 months, after controlling for maternal weight factors, sleep quality, sociocontextual influences, and maternal behaviours. Pregnant women (N = 126) completed a series of questionnaires at multiple time points from early pregnancy until 9 months post-partum. Hierarchical regression indicated that gestational weight gain, shorter duration (6 months or less) of breastfeeding, and post-partum body dissatisfaction at 3 and 6 months are associated with higher PWR at 9 months; stress, depression, and anxiety had minimal influence. Interventions aimed at preventing excessive PWR should specifically target the prevention of body dissatisfaction and excessive weight gain during pregnancy. What is already known on this subject? Post-partum weight retention (PWR) is a critical pathway for long-term overweight and obesity. Causes of PWR are complex and multifactorial. There is increasing evidence that psychological factors play a key role in predicting high PWR. What does this study add? Post-partum body dissatisfaction at 3 and 6 months is associated with PWR at 9 months post-birth. Post-partum depression, stress and anxiety have less influence on PWR at 9 months. Interventions aimed at preventing excessive PWR should target body dissatisfaction. © 2013 The British Psychological Society.

  2. Verification of cardiac mechanics software: benchmark problems and solutions for testing active and passive material behaviour.

    PubMed

    Land, Sander; Gurev, Viatcheslav; Arens, Sander; Augustin, Christoph M; Baron, Lukas; Blake, Robert; Bradley, Chris; Castro, Sebastian; Crozier, Andrew; Favino, Marco; Fastl, Thomas E; Fritz, Thomas; Gao, Hao; Gizzi, Alessio; Griffith, Boyce E; Hurtado, Daniel E; Krause, Rolf; Luo, Xiaoyu; Nash, Martyn P; Pezzuto, Simone; Plank, Gernot; Rossi, Simone; Ruprecht, Daniel; Seemann, Gunnar; Smith, Nicolas P; Sundnes, Joakim; Rice, J Jeremy; Trayanova, Natalia; Wang, Dafang; Jenny Wang, Zhinuo; Niederer, Steven A

    2015-12-08

    Models of cardiac mechanics are increasingly used to investigate cardiac physiology. These models are characterized by a high level of complexity, including the particular anisotropic material properties of biological tissue and the actively contracting material. A large number of independent simulation codes have been developed, but a consistent way of verifying the accuracy and replicability of simulations is lacking. To aid in the verification of current and future cardiac mechanics solvers, this study provides three benchmark problems for cardiac mechanics. These benchmark problems test the ability to accurately simulate pressure-type forces that depend on the deformed object's geometry, anisotropic and spatially varying material properties similar to those seen in the left ventricle, and active contractile forces. The benchmark was solved by 11 different groups to generate consensus solutions, with typical differences in higher-resolution solutions at approximately 0.5%, and consistent results between linear, quadratic and cubic finite elements as well as different approaches to simulating incompressible materials. Online tools and solutions are made available to allow these tests to be effectively used in verification of future cardiac mechanics software.

  3. 40 CFR Appendix A to Part 76 - Phase I Affected Coal-Fired Utility Units With Group 1 or Cell Burner Boilers

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... MONTROSE 2 KANSAS CITY PWR & LT. MISSOURI MONTROSE 3 KANSAS CITY PWR & LT. NEW YORK DUNKIRK 3 NIAGARA MOHAWK PWR. NEW YORK DUNKIRK 4 NIAGARA MOHAWK PWR. NEW YORK GREENIDGE 6 NY STATE ELEC & GAS. NEW YORK...

  4. 40 CFR Appendix A to Part 76 - Phase I Affected Coal-Fired Utility Units With Group 1 or Cell Burner Boilers

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... MONTROSE 2 KANSAS CITY PWR & LT. MISSOURI MONTROSE 3 KANSAS CITY PWR & LT. NEW YORK DUNKIRK 3 NIAGARA MOHAWK PWR. NEW YORK DUNKIRK 4 NIAGARA MOHAWK PWR. NEW YORK GREENIDGE 6 NY STATE ELEC & GAS. NEW YORK...

  5. 40 CFR Appendix A to Part 76 - Phase I Affected Coal-Fired Utility Units With Group 1 or Cell Burner Boilers

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... MONTROSE 2 KANSAS CITY PWR & LT. MISSOURI MONTROSE 3 KANSAS CITY PWR & LT. NEW YORK DUNKIRK 3 NIAGARA MOHAWK PWR. NEW YORK DUNKIRK 4 NIAGARA MOHAWK PWR. NEW YORK GREENIDGE 6 NY STATE ELEC & GAS. NEW YORK...

  6. Simulation-based comprehensive benchmarking of RNA-seq aligners

    PubMed Central

    Baruzzo, Giacomo; Hayer, Katharina E; Kim, Eun Ji; Di Camillo, Barbara; FitzGerald, Garret A; Grant, Gregory R

    2018-01-01

    Alignment is the first step in most RNA-seq analysis pipelines, and the accuracy of downstream analyses depends heavily on it. Unlike most steps in the pipeline, alignment is particularly amenable to benchmarking with simulated data. We performed a comprehensive benchmarking of 14 common splice-aware aligners for base, read, and exon junction-level accuracy and compared default with optimized parameters. We found that performance varied by genome complexity, and accuracy and popularity were poorly correlated. The most widely cited tool underperforms for most metrics, particularly when using default settings. PMID:27941783
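    A minimal version of the read-level scoring used in this kind of simulation-based benchmarking: because the simulator records where each read was generated, an alignment counts as correct if it lands at (or near) that true position. The records below are toy data, not from the paper's simulations:

```python
# Score a set of alignments against simulated ground truth. A read is
# "correct" if it maps to the true chromosome within a position tolerance.
def read_level_accuracy(truth, aligned, tolerance=5):
    """truth/aligned: dicts read_id -> (chrom, pos); unaligned reads absent
    from `aligned`. Returns (precision, recall) at the read level."""
    correct = 0
    for read_id, (chrom, pos) in truth.items():
        hit = aligned.get(read_id)
        if hit and hit[0] == chrom and abs(hit[1] - pos) <= tolerance:
            correct += 1
    recall = correct / len(truth)
    precision = correct / len(aligned) if aligned else 0.0
    return precision, recall

truth = {"r1": ("chr1", 100), "r2": ("chr1", 500), "r3": ("chr2", 42), "r4": ("chr3", 7)}
aligned = {"r1": ("chr1", 102), "r2": ("chr2", 500), "r3": ("chr2", 42)}  # r4 unaligned
precision, recall = read_level_accuracy(truth, aligned)
print(precision, recall)  # → 0.6666666666666666 0.5
```

    Base-level and junction-level accuracy follow the same pattern with finer-grained truth records (per-base placements and splice junction coordinates).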

  7. Development of the V4.2m5 and V5.0m0 Multigroup Cross Section Libraries for MPACT for PWR and BWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Clarno, Kevin T.; Gentry, Cole

    2017-03-01

    The MPACT neutronics module of the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator is a 3-D whole core transport code being developed for the CASL toolset, Virtual Environment for Reactor Analysis (VERA). Key characteristics of the MPACT code include (1) a subgroup method for resonance self-shielding and (2) a whole-core transport solver with a 2-D/1-D synthesis method. The MPACT code requires a cross section library to support all of its core simulation capabilities; this library is the component with the greatest influence on simulation accuracy.

  8. Hot zero power reactor calculations using the Insilico code

    DOE PAGES

    Hamilton, Steven P.; Evans, Thomas M.; Davidson, Gregory G.; ...

    2016-03-18

    In this paper we describe the reactor physics simulation capabilities of the Insilico code. A description of the various capabilities of the code is provided, including detailed discussion of the geometry, meshing, cross section processing, and neutron transport options. Numerical results demonstrate that the Insilico SPN solver with pin-homogenized cross section generation is capable of delivering highly accurate full-core simulations of various PWR problems. Comparisons to both Monte Carlo calculations and measured plant data are provided.

  9. Three-dimensional benchmark for variable-density flow and transport simulation: matching semi-analytic stability modes for steady unstable convection in an inclined porous box

    USGS Publications Warehouse

    Voss, Clifford I.; Simmons, Craig T.; Robinson, Neville I.

    2010-01-01

    This benchmark for three-dimensional (3D) numerical simulators of variable-density groundwater flow and solute or energy transport consists of matching simulation results with the semi-analytical solution for the transition from one steady-state convective mode to another in a porous box. Previous experimental and analytical studies of natural convective flow in an inclined porous layer have shown that there are a variety of convective modes possible depending on system parameters, geometry and inclination. In particular, there is a well-defined transition from the helicoidal mode consisting of downslope longitudinal rolls superimposed upon an upslope unicellular roll to a mode consisting of purely an upslope unicellular roll. Three-dimensional benchmarks for variable-density simulators are currently (2009) lacking and comparison of simulation results with this transition locus provides an unambiguous means to test the ability of such simulators to represent steady-state unstable 3D variable-density physics.

  10. A Simulation Environment for Benchmarking Sensor Fusion-Based Pose Estimators.

    PubMed

    Ligorio, Gabriele; Sabatini, Angelo Maria

    2015-12-19

    In-depth analysis and performance evaluation of sensor fusion-based estimators may be critical when performed using real-world sensor data. For this reason, simulation is widely recognized as one of the most powerful tools for algorithm benchmarking. In this paper, we present a simulation framework suitable for assessing the performance of sensor fusion-based pose estimators. The systems used for implementing the framework were magnetic/inertial measurement units (MIMUs) and a camera, although the addition of further sensing modalities is straightforward. Typical nuisance factors were also included for each sensor. The proposed simulation environment was validated using real-life sensor data employed for motion tracking. The highest mismatch between real and simulated sensors was about 5% of the measured quantity (for the camera simulation), whereas the lowest correlation was found for one axis of the gyroscope (0.90). In addition, a real benchmarking example of an extended Kalman filter for pose estimation from MIMU and camera data is presented.
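    The sensor-simulation idea (a noise-free ground truth corrupted by per-sensor nuisance factors) can be sketched for a single gyroscope channel with a constant bias, white noise, and a slow random-walk bias drift. Parameter values are illustrative, not those identified in the paper:

```python
import numpy as np

# Corrupt a ground-truth angular rate with typical gyroscope nuisance factors.
# All nuisance parameters below are hypothetical illustration values.
def simulate_gyro(true_rate, bias=0.02, noise_std=0.005, drift_std=1e-4, rng=None):
    rng = rng or np.random.default_rng(0)
    n = true_rate.shape[0]
    drift = np.cumsum(rng.normal(0.0, drift_std, n))   # random-walk bias drift
    noise = rng.normal(0.0, noise_std, n)              # white measurement noise
    return true_rate + bias + drift + noise

t = np.arange(0, 10, 0.01)                       # 100 Hz, 10 s
true_rate = 0.5 * np.sin(2 * np.pi * 0.2 * t)    # rad/s ground truth
meas = simulate_gyro(true_rate)

# The corrupted signal stays highly correlated with the truth but is biased.
residual = meas - true_rate
print(residual.mean())
```

    Feeding such simulated streams (plus a camera model with its own nuisance factors) to an estimator is what allows benchmarking against a perfectly known pose trajectory.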

  11. Human Factors and Technical Considerations for a Computerized Operator Support System Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulrich, Thomas Anthony; Lew, Roger Thomas; Medema, Heather Dawne

    2015-09-01

    A prototype computerized operator support system (COSS) has been developed in order to demonstrate the concept and provide a test bed for further research. The prototype is based on four underlying elements consisting of a digital alarm system, computer-based procedures, P&ID system representations, and a recommender module for mitigation actions. At this point, the prototype simulates an interface to a sensor validation module and a fault diagnosis module. These two modules will be fully integrated in the next version of the prototype. The initial version of the prototype is now operational at the Idaho National Laboratory using the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Human Systems Simulation Laboratory (HSSL). The HSSL is a full-scope, full-scale glass top simulator capable of simulating existing and future nuclear power plant main control rooms. The COSS is interfaced to the Generic Pressurized Water Reactor (gPWR) simulator with industry-typical control board layouts. The glass top panels display realistic images of the control boards that can be operated by touch gestures. A section of the simulated control board was dedicated to the COSS human-system interface (HSI), which resulted in a seamless integration of the COSS into the normal control room environment. A COSS demonstration scenario has been developed for the prototype involving the Chemical & Volume Control System (CVCS) of the PWR simulator. It involves a primary coolant leak outside of containment that would require tripping the reactor if not mitigated in a very short timeframe. The COSS prototype presents a series of operator screens that provide the needed information and soft controls to successfully mitigate the event.

  12. Determining the sample size required to establish whether a medical device is non-inferior to an external benchmark.

    PubMed

    Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W

    2017-08-28

    The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is a widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that implants used are safe and effective. However, it is not currently clear how, or how many, implants should be statistically compared with a benchmark to assess whether that implant is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking. We performed a simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess if a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining if an implant meets the required performance defined by an external benchmark. Current contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. It is clear that when benchmarking implant performance, net failure estimated using 1-Kaplan-Meier is preferential to crude failure estimated by competing risk models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
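    The 1-Kaplan-Meier net-failure estimate favoured above can be computed directly from censored follow-up data. The toy revision data below are illustrative, and for brevity this simplified sketch ignores ties between failures and censorings:

```python
# 1 - Kaplan-Meier estimate of implant failure probability at a time horizon,
# from follow-up times with event indicators (1 = revision/failure,
# 0 = censored). Toy data only; no tie handling.
def one_minus_km(times, events, horizon):
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    for i in order:
        if times[i] > horizon:
            break
        if events[i] == 1:
            surv *= (at_risk - 1) / at_risk
        at_risk -= 1
    return 1.0 - surv

times  = [1.0, 2.0, 2.5, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]  # years
events = [0,   1,   0,   0,   1,   0,   0,   0,   0,   0]    # 1 = revision
failure_10y = one_minus_km(times, events, horizon=10.0)
print(failure_10y)
```

    The resulting net failure estimate is the quantity one would then test against an external benchmark rate (e.g. a 5% failure threshold) in the non-inferiority framework the abstract describes.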

  13. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  14. Experiment data report for Semiscale Mod-1 Test S-05-1 (alternate ECC injection test)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feldman, E. M.; Patton, Jr., M. L.; Sackett, K. E.

    Recorded test data are presented for Test S-05-1 of the Semiscale Mod-1 alternate ECC injection test series. These tests are among several Semiscale Mod-1 experiments conducted to investigate the thermal and hydraulic phenomena accompanying a hypothesized loss-of-coolant accident in a pressurized water reactor (PWR) system. Test S-05-1 was conducted from initial conditions of 2263 psia and 544°F to investigate the response of the Semiscale Mod-1 system to a depressurization and reflood transient following a simulated double-ended offset shear of the cold leg broken loop piping. During the test, cooling water was injected into the vessel lower plenum to simulate emergency core coolant injection in a PWR, with the flow rate based on system volume scaling.

  15. Development of cement solidification process for sodium borate waste generated from PWR plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirofumi Okabe; Tatsuaki Sato; Yuichi Shoji

    2013-07-01

    A cement solidification process for treating sodium borate waste produced in pressurized water reactor (PWR) plants was studied. To obtain high volume reduction and high mechanical strength of the waste, simulated concentrated borate liquid waste with a sodium/boron (Na/B) mole ratio of 0.27 was dehydrated and powdered by using a wiped film evaporator. To investigate the effect of the Na/B mole ratio on the solidification process, a sodium tetraborate decahydrate reagent with a Na/B mole ratio of 0.5 was also used. Ordinary Portland cement (OPC) and some additives were used for the solidification. Solidified cement prepared from powdered waste with a Na/B mole ratio of 0.24 and having a high silica sand content (silica sand/cement>2) showed improved uniaxial compressive strength. (authors)

  16. EMERALD REVISION 1; PWR accident activity release. [IBM360,370; FORTRAN IV]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fowler, T.B.; Tobias, M.L.; Fox, J.N.

    The EMERALD program is designed for the calculation of radiation releases and exposures resulting from abnormal operation of a large pressurized water reactor (PWR). The approach used in EMERALD is similar to an analog simulation of a real system. Each component or volume in the plant which contains a radioactive material is represented by a subroutine which keeps track of the production, transfer, decay and absorption of radioactivity in that volume. During the course of the analysis of an accident, activity is transferred from subroutine to subroutine in the program as it would be transferred from place to place in the plant. For example, in the calculation of the doses resulting from a loss-of-coolant accident the program first calculates the activity built up in the fuel before the accident, then releases some of this activity to the containment volume. Some of this activity is then released to the atmosphere. The rates of transfer, leakage, production, cleanup, decay, and release are read in as input to the program. Subroutines are also included which calculate the on-site and off-site radiation exposures at various distances for individual isotopes and sums of isotopes. The program contains a library of physical data for the twenty-five isotopes of most interest in licensing calculations, and other isotopes can be added or substituted. Because of the flexible nature of the simulation approach, the EMERALD program can be used for most calculations involving the production and release of radioactive materials during abnormal operation of a PWR. These include design, operational, and licensing studies. IBM360,370; FORTRAN IV; OS/360,370 (IBM360,370); 520K bytes of memory are required.
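EMERALD's subroutine-per-volume bookkeeping can be illustrated with a toy compartment model. This is a sketch of the idea, not EMERALD itself: three compartments (fuel, containment, atmosphere) exchange activity at constant transfer rates while everything decays, stepped with explicit Euler; all rate constants are invented for illustration.

```python
import math

def step(state, dt, decay, k_fc, k_ca):
    fuel, cont, atmo = state
    d_fc = k_fc * fuel * dt        # release from fuel to containment
    d_ca = k_ca * cont * dt        # leakage from containment to atmosphere
    fuel += -d_fc - decay * fuel * dt
    cont += d_fc - d_ca - decay * cont * dt
    atmo += d_ca - decay * atmo * dt
    return (fuel, cont, atmo)

decay = math.log(2.0) / 8.02       # decay constant for an 8.02-day half-life (I-131)
state = (1.0e6, 0.0, 0.0)          # all activity starts in the fuel (arbitrary units)
for _ in range(1000):              # 10 days in 0.01-day explicit Euler steps
    state = step(state, dt=0.01, decay=decay, k_fc=0.5, k_ca=0.1)
print([round(x, 1) for x in state])
```

A real release calculation would read the transfer, leakage and cleanup rates from input and track many isotopes, as the abstract describes; the structure here only mirrors the flow of activity between volumes.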

  17. Low Platelet to White Blood Cell Ratio Indicates Poor Prognosis for Acute-On-Chronic Liver Failure.

    PubMed

    Jie, Yusheng; Gong, Jiao; Xiao, Cuicui; Zhu, Shuguang; Zhou, Wenying; Luo, Juan; Chong, Yutian; Hu, Bo

    2018-01-01

    Background. The platelet to white blood cell ratio (PWR) is an independent prognostic predictor of outcomes in several diseases. However, the prognostic role of PWR is still unclear in patients with hepatitis B related acute-on-chronic liver failure (ACLF). In this study, we evaluated the clinical performance of PWR in predicting prognosis in HBV-related ACLF. Methods. A total of 530 subjects were recruited, including 97 healthy controls and 433 with HBV-related ACLF. Liver function, prothrombin time activity (PTA), international normalized ratio (INR), HBV DNA measurement, and routine hematological testing were performed at admission. Results. At baseline, PWR in patients with HBV-related ACLF (14.03 ± 7.17) was significantly decreased compared with that in healthy controls (39.16 ± 9.80). Reduced PWR values were clinically associated with the severity of liver disease and an increased mortality rate. Furthermore, PWR may be an inexpensive, easily accessible, and significant independent prognostic index for mortality on multivariate analysis (HR = 0.660, 95% CI: 0.438-0.996, p = 0.048), as is the model for end-stage liver disease (MELD) score. Conclusions. PWR values were markedly decreased in ACLF patients compared with healthy controls and were associated with severe liver disease. Moreover, PWR was an independent prognostic indicator of the mortality rate in patients with ACLF. This investigation highlights that PWR is a useful biomarker for predicting liver disease severity.

  18. Establishing objective benchmarks in robotic virtual reality simulation at the level of a competent surgeon using the RobotiX Mentor simulator.

    PubMed

    Watkinson, William; Raison, Nicholas; Abe, Takashige; Harrison, Patrick; Khan, Shamim; Van der Poel, Henk; Dasgupta, Prokar; Ahmed, Kamran

    2018-05-01

    To establish objective benchmarks at the level of a competent robotic surgeon across different exercises and metrics for the RobotiX Mentor virtual reality (VR) simulator, suitable for use within a robotic surgical training curriculum. This retrospective observational study analysed results from multiple data sources, all of which used the RobotiX Mentor VR simulator, and was conducted at King's College London. 123 participants with experience ranging from novice to expert completed the exercises: 84 novices, 26 beginner intermediates, 9 advanced intermediates and 4 experts. Three basic skill exercises and two advanced skill exercises were used. Competency was defined as the 25th centile of the mean advanced-intermediate score. Objective benchmarks derived from the 25th centile of the mean scores of the advanced intermediates provided suitably challenging yet achievable targets for training surgeons. The disparity in scores was greatest for the advanced exercises. Novice surgeons were able to achieve the benchmarks in the majority of metrics across all exercises. We have successfully created this proof-of-concept study, which requires validation in a larger cohort. Objective benchmarks obtained from the 25th centile of the mean scores of advanced intermediates provide clinically relevant benchmarks at the standard of a competent robotic surgeon that are challenging yet attainable. These can be used within a VR training curriculum, allowing participants to track and monitor their progress in a structured, progressive manner through five exercises, providing clearly defined targets and ensuring that a universal training standard is achieved across training surgeons.
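The benchmark rule used in the study, the 25th centile of the advanced-intermediate scores, is easy to reproduce. The nine scores below are invented; only the percentile rule comes from the abstract.

```python
import statistics

# Invented scores for the nine advanced intermediates on one exercise.
advanced_intermediate_scores = [62, 68, 71, 74, 75, 79, 81, 85, 90]

# statistics.quantiles with n=4 returns the 25th/50th/75th centile cut points.
q1, _median, _q3 = statistics.quantiles(advanced_intermediate_scores, n=4)
benchmark = q1

def meets_benchmark(score, benchmark):
    return score >= benchmark

print(benchmark, meets_benchmark(76, benchmark))
```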

  19. Access to a simulator is not enough: the benefits of virtual reality training based on peer-group-derived benchmarks--a randomized controlled trial.

    PubMed

    von Websky, Martin W; Raptis, Dimitri A; Vitz, Martina; Rosenthal, Rachel; Clavien, P A; Hahnloser, Dieter

    2013-11-01

    Virtual reality (VR) simulators are widely used to familiarize surgical novices with laparoscopy, but VR training methods differ in efficacy. In the present trial, self-controlled basic VR training (SC-training) was tested against training based on peer-group-derived benchmarks (PGD-training). Novice laparoscopic residents were randomized into an SC group (n = 34) and a group using PGD benchmarks (n = 34) for basic laparoscopic training. After completing basic training, both groups performed 60 VR laparoscopic cholecystectomies for performance analysis. Primary endpoints were simulator metrics; secondary endpoints were program adherence, trainee motivation, and training efficacy. Altogether, 66 residents completed basic training, and 3,837 of 3,960 (96.8 %) cholecystectomies were available for analysis. Course adherence was good, with only two dropouts, both in the SC group. The PGD group spent more time and repetitions in basic training until the benchmarks were reached and subsequently showed better performance in the readout cholecystectomies: median time to gallbladder extraction was 520 s (IQR 354-738 s) with SC-training versus 390 s (IQR 278-536 s) in the PGD group (p < 0.001), compared with 215 s (IQR 175-276 s) for experts. Path length of the right instrument also differed significantly, again with the PGD-training group being more efficient. Basic VR laparoscopic training based on PGD benchmarks with external assessment is superior to SC training, resulting in higher trainee motivation and better performance in simulated laparoscopic cholecystectomies. We recommend such a basic course based on PGD benchmarks before advancing to more elaborate VR training.

  20. Effect of surface state on the oxidation behavior of welded 308L in simulated nominal primary water of PWR

    NASA Astrophysics Data System (ADS)

    Ming, Hongliang; Zhang, Zhiming; Wang, Jiazhen; Zhu, Ruolin; Ding, Jie; Wang, Jianqiu; Han, En-Hou; Ke, Wei

    2015-05-01

    The oxidation behavior of 308L weld metal (WM) with different surface states in the simulated nominal primary water of a pressurized water reactor (PWR) was studied by scanning electron microscopy (SEM) equipped with energy dispersive X-ray spectroscopy (EDS), X-ray diffraction (XRD) and X-ray photoelectron spectroscopy (XPS). After 480 h of immersion, a duplex oxide film composed of a Fe-rich outer layer (Fe3O4, Fe2O3 and a small amount of NiFe2O4, Ni(OH)2, Cr(OH)3 and (Ni, Fe)Cr2O4) and a Cr-rich inner layer (FeCr2O4 and NiCr2O4) formed on the 308L WM samples with different surface states. The surface state has no influence on the phase composition of the oxide films but clearly affects the thickness of the oxide films and the morphology (number and size) of the oxides. With increasing density of dislocations and subgrain boundaries in the cold-worked superficial layer, the thickness of the oxide film and the number and size of the oxides decrease.

  1. Joining dissimilar stainless steels for pressure vessel components

    NASA Astrophysics Data System (ADS)

    Sun, Zheng; Han, Huai-Yue

    1994-03-01

    A series of studies was carried out to examine the weldability and properties of dissimilar steel joints between martensitic and austenitic stainless steels - F6NM (OCr13Ni4Mo) and AISI 347, respectively. Such joints are important parts in, e.g., the primary circuit of a pressurized water reactor (PWR). This kind of joint requires good mechanical properties, corrosion resistance and stable magnetic permeability, in addition to good weldability. The weldability tests included weld thermal simulation of the martensitic steel to investigate the influence of weld thermal cycles and post-weld heat treatment (PWHT) on the mechanical properties of the heat-affected zone (HAZ); implant testing to examine the tendency of the martensitic steel toward cold cracking; and rigid restraint testing to determine the hot-crack susceptibility of the multi-pass dissimilar steel joints. The joints were subjected to various mechanical tests, including tensile, bending and impact tests at various temperatures, as well as slow strain-rate tests to examine the stress corrosion cracking tendency in the simulated environment of a PWR primary circuit. The results of the various tests indicated that the quality of the tube/tube joints is satisfactory and meets all the design requirements.

  2. Multidimensional Mixing Behavior of Steam-Water Flow in a Downcomer Annulus During LBLOCA Reflood Phase with a Direct Vessel Injection Mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwon, Tae-Soon; Yun, Byong-Jo; Euh, Dong-Jin

    Multidimensional thermal-hydraulic behavior in the downcomer annulus of a pressurized water reactor (PWR) vessel with a direct vessel injection mode is presented based on experimental observation in the MIDAS (multidimensional investigation in downcomer annulus simulation) steam-water test facility. From the steady-state test results simulating the late reflood phase of a large-break loss-of-coolant accident (LBLOCA), isothermal lines show the multidimensional phenomena of the phasic interaction between steam and water in the downcomer annulus very well. MIDAS is a steam-water separate effect test facility, linearly scaled down (1:4.93) from a 1400-MW(electric) PWR, focused on understanding multidimensional thermal-hydraulic phenomena in a downcomer annulus with various types of safety injection during the refill or reflood phase of an LBLOCA. The initial and boundary conditions are scaled from the pretest analysis based on a preliminary calculation using the TRAC code. Superheated steam with a superheating degree of 80 K at a given downcomer pressure of 180 kPa is injected equally through three intact cold legs into the downcomer.

  3. Annual progress report on the NSRR experiments, (21)

    NASA Astrophysics Data System (ADS)

    1992-05-01

    Fuel behavior studies under simulated reactivity-initiated accident (RIA) conditions have been performed in the Nuclear Safety Research Reactor (NSRR) since 1975. This report gives the results of experiments performed from April 1989 through March 1990 and discussions of them. A total of 41 tests were carried out during this period. The tests are divided into pre-irradiated fuel tests and fresh fuel tests; the former includes 2 JMTR pre-irradiated fuel tests, 2 PWR pre-irradiated fuel tests, and 2 BWR pre-irradiated fuel tests, and the latter includes 6 standard fuel tests (6 SP·CP scoping tests), 7 power and cooling condition parameter tests (4 flow shrouded fuel tests, 1 bundle simulation test, 1 fully water-filled vessel test, 1 high pressure/high temperature loop test), 12 special fuel tests (3 stainless steel clad fuel tests, 3 improved PWR fuel tests, 6 improved BWR fuel tests), 3 severe fuel damage tests (1 high temperature flooding test, 1 flooding behavior observation test, 1 debris coolability test), 3 fast breeder reactor fuel tests (2 moderator material characteristic measurement tests, 1 fuel behavior observation test), and 2 miscellaneous tests (2 preliminary tests for pre-irradiated fuel tests).

  4. A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator

    DOE PAGES

    Engelmann, Christian; Naughton, III, Thomas J.

    2016-03-22

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim offers a highly accurate simulation mode for better tracking of injected MPI process failures. Furthermore, with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.

  5. Benchmark problems for numerical implementations of phase field models

    DOE PAGES

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...

    2016-10-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
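As a flavor of what such a benchmark problem exercises, here is a minimal 1-D spinodal decomposition (Cahn-Hilliard) solver with an explicit Euler update and periodic boundaries. It is a toy sketch, not the CHiMaD/NIST benchmark specification: the grid size, time step and parameters are arbitrary.

```python
import random

# Toy 1-D Cahn-Hilliard solver: explicit Euler in time, second-order finite
# differences in space, periodic boundaries. Parameters are illustrative.
N, dx, dt, M, kappa = 64, 1.0, 0.005, 1.0, 1.0
random.seed(0)
c = [0.01 * (random.random() - 0.5) for _ in range(N)]  # small noise about c = 0
c0_mean = sum(c) / N

def lap(f):
    # periodic 1-D Laplacian
    return [(f[(i - 1) % N] - 2.0 * f[i] + f[(i + 1) % N]) / dx**2 for i in range(N)]

for _ in range(200):
    lc = lap(c)
    mu = [c[i]**3 - c[i] - kappa * lc[i] for i in range(N)]  # chemical potential
    lmu = lap(mu)
    c = [c[i] + dt * M * lmu[i] for i in range(N)]           # conservative update

print(round(sum(c) / N - c0_mean, 12))  # mean composition is conserved by the flux form
```

Because the update is the Laplacian of a potential, the mean composition is conserved to rounding error, one of the invariants a benchmark problem would check.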

  6. Pretest analysis of natural circulation on the PWR model PACTEL with horizontal steam generators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kervinen, T.; Riikonen, V.; Ritonummi, T.

    A new test facility - the parallel channel test loop (PACTEL) - has been designed and built to simulate the major components and system behavior of pressurized water reactors (PWRs) during postulated small- and medium-break loss-of-coolant accidents. Pretest calculations have been performed for the first test series, and the results of these calculations are being used for planning experiments, for adjusting the data acquisition system, and for choosing the optimal position and type of instrumentation. PACTEL is a volumetrically scaled (1:305) model of the VVER-440 PWR. In all the calculated cases, natural circulation was found to be effective in removing the heat from the core to the steam generator. The loop mass flow rate peaked at 60% mass inventory. The straightening of the loop seals increased the mass flow rate significantly.

  7. Posttest RELAP5 simulations of the Semiscale S-UT series experiments. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leonard, M.T.

    The RELAP5/MOD1 computer code was used to perform posttest calculations simulating six experiments run in the Semiscale Mod-2A facility to investigate the effects of upper head injection on small-break transient behavior. The results of these calculations and the corresponding test data are presented in this report. The capability of RELAP5 to calculate the thermal-hydraulic response of the Mod-2A system over a spectrum of break sizes, with and without upper head injection, is evaluated.

  8. BACT Simulation User Guide (Version 7.0)

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1997-01-01

    This report documents the structure and operation of a simulation model of the Benchmark Active Control Technology (BACT) Wind-Tunnel Model. The BACT system was designed, built, and tested at NASA Langley Research Center as part of the Benchmark Models Program and was developed to perform wind-tunnel experiments to obtain benchmark quality data to validate computational fluid dynamics and computational aeroelasticity codes, to verify the accuracy of current aeroservoelasticity design and analysis tools, and to provide an active controls testbed for evaluating new and innovative control algorithms for flutter suppression and gust load alleviation. The BACT system has been especially valuable as a control system testbed.

  9. An overview of the ENEA activities in the field of coupled codes NPP simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parisi, C.; Negrenti, E.; Sepielli, M.

    2012-07-01

    In the framework of the nuclear research activities in the fields of safety, training and education, ENEA (the Italian National Agency for New Technologies, Energy and the Sustainable Development) is in charge of defining and pursuing all the necessary steps for the development of a NPP engineering simulator at the 'Casaccia' Research Center near Rome. A summary of the activities in the field of nuclear power plant simulation by coupled codes is presented here, with the long-term strategy for the engineering simulator development. Specifically, results from participation in international benchmarking activities like the OECD/NEA 'Kalinin-3' benchmark and the 'AER-DYN-002' benchmark, together with simulations of relevant events like the Fukushima accident, are reported here. The ultimate goal of such activities performed using state-of-the-art technology is the re-establishment of top level competencies in the NPP simulation field in order to facilitate the development of Enhanced Engineering Simulators and to upgrade competencies for supporting national energy strategy decisions, the nuclear national safety authority, and the R and D activities on NPP designs. (authors)

  10. [Benchmark experiment to verify radiation transport calculations for dosimetry in radiation therapy].

    PubMed

    Renner, Franziska

    2016-09-01

    Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide.
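The comparison implied by the abstract, agreement of simulation and experiment within their combined uncertainty, can be written as a one-line check. The dose values below are invented; only the roughly 1.0 % (simulation) and 0.7 % (experiment) relative uncertainties come from the abstract, and the coverage-factor criterion is a generic choice, not necessarily the authors' exact procedure.

```python
import math

def agrees(sim, u_sim_rel, exp, u_exp_rel, k=2.0):
    # combined standard uncertainty of the difference, then a k-sigma check
    u = math.hypot(u_sim_rel * sim, u_exp_rel * exp)
    return abs(sim - exp) <= k * u

print(agrees(sim=1.012, u_sim_rel=0.010, exp=1.000, u_exp_rel=0.007))
```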

  11. Generalizable open source urban water portfolio simulation framework demonstrated using a multi-objective risk-based planning benchmark problem.

    NASA Astrophysics Data System (ADS)

    Trindade, B. C.; Reed, P. M.

    2017-12-01

    Growing access to computing power and its reduced cost in recent years have promoted rapid development and application of multi-objective water supply portfolio planning. As this trend continues there is a pressing need for flexible risk-based simulation frameworks and improved algorithm benchmarking for emerging classes of water supply planning and management problems. This work contributes the Water Utilities Management and Planning (WUMP) model: a generalizable and open source simulation framework designed to capture how water utilities can minimize operational and financial risks by regionally coordinating planning and management choices, i.e. making more efficient and coordinated use of restrictions, water transfers and financial hedging combined with possible construction of new infrastructure. We introduce the WUMP simulation framework as part of a new multi-objective benchmark problem for planning and management of regionally integrated water utility companies. In this problem, a group of fictitious water utilities seek to balance the use of these reliability-driven actions (e.g., restrictions, water transfers and infrastructure pathways) against their inherent financial risks. Several traits of this problem make it ideal as a benchmark, namely the presence of (1) strong non-linearities and discontinuities in the Pareto front caused by the step-wise nature of the decision-making formulation and by the abrupt addition of storage through infrastructure construction, (2) noise due to the stochastic nature of the streamflows and water demands, and (3) non-separability resulting from the cooperative formulation of the problem, in which decisions made by one stakeholder may substantially impact others. Both the open source WUMP simulation framework and its demonstration in a challenging benchmarking example hold value for promoting broader advances in urban water supply portfolio planning for regions confronting change.

  12. VERA Core Simulator methodology for pressurized water reactor cycle depletion

    DOE PAGES

    Kochunas, Brendan; Collins, Benjamin; Stimpson, Shane; ...

    2017-01-12

    This paper describes the methodology developed and implemented in the Virtual Environment for Reactor Applications Core Simulator (VERA-CS) to perform high-fidelity, pressurized water reactor (PWR), multicycle, core physics calculations. Depletion of the core with pin-resolved power and nuclide detail is a significant advance in the state of the art for reactor analysis, providing the level of detail necessary to address the problems of the U.S. Department of Energy Nuclear Reactor Simulation Hub, the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS has three main components: the neutronics solver MPACT, the thermal-hydraulic (T-H) solver COBRA-TF (CTF), and the nuclide transmutation solver ORIGEN. This paper focuses on MPACT and provides an overview of the resonance self-shielding methods, macroscopic-cross-section calculation, two-dimensional/one-dimensional (2-D/1-D) transport, nuclide depletion, T-H feedback, and other supporting methods representing a minimal set of the capabilities needed to simulate high-fidelity models of a commercial nuclear reactor. Results are presented from the simulation of a model of the first cycle of Watts Bar Unit 1. The simulation is within 16 parts per million boron (ppmB) reactivity for all state points compared to cycle measurements, with an average reactivity bias of <5 ppmB for the entire cycle. Comparisons to cycle 1 flux map data are also provided, and the average 2-D root-mean-square (rms) error during cycle 1 is 1.07%. To demonstrate the multicycle capability, a state point at beginning of cycle (BOC) 2 was also simulated and compared to plant data. The comparison of the cycle 2 BOC state has a reactivity difference of +3 ppmB from measurement, and the 2-D rms of the comparison in the flux maps is 1.77%. Lastly, these results provide confidence in VERA-CS's capability to perform high-fidelity calculations for practical PWR reactor problems.
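The 2-D rms figures quoted for the flux-map comparisons can be computed as the root-mean-square of relative differences between measured and predicted detector responses. The map values in this sketch are invented, and the formula is the standard rms definition, which may differ in detail from the exact figure of merit used by the authors.

```python
import math

def rms_percent(measured, predicted):
    # RMS of relative differences, expressed in percent
    diffs = [(p - m) / m for m, p in zip(measured, predicted)]
    return 100.0 * math.sqrt(sum(d * d for d in diffs) / len(diffs))

measured  = [1.02, 0.98, 1.10, 0.95, 1.00]   # illustrative assembly responses
predicted = [1.03, 0.97, 1.09, 0.96, 1.00]
print(round(rms_percent(measured, predicted), 3))
```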

  13. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is spent on particle-particle force calculations, drag force calculations and interpolation between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments of the code as well as a preliminary indicator of where best to focus performance optimizations.
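Weak-scaling efficiency, as used in the scaling analysis above, holds the per-core problem size fixed and reports t(1 core)/t(p cores). The timings below are invented, not MFiX measurements.

```python
def weak_scaling_efficiency(t1, tp):
    # ideal weak scaling keeps runtime constant, so efficiency = t1 / tp
    return t1 / tp

# illustrative wall-clock times (seconds) at fixed work per core
timings = {1: 100.0, 8: 104.0, 64: 115.0, 512: 150.0, 4096: 400.0}
for cores, t in sorted(timings.items()):
    print(cores, round(weak_scaling_efficiency(timings[1], t), 2))
```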

  14. A chemical EOR benchmark study of different reservoir simulators

    NASA Astrophysics Data System (ADS)

    Goudarzi, Ali; Delshad, Mojdeh; Sepehrnoori, Kamy

    2016-09-01

    Interest in chemical EOR processes has intensified in recent years due to advancements in chemical formulations and injection techniques. Injecting polymer (P), surfactant/polymer (SP), and alkaline/surfactant/polymer (ASP) are techniques for improving sweep and displacement efficiencies with the aim of improving oil production in both secondary and tertiary floods. There has recently been great interest in chemical flooding for a range of challenging situations. These include high temperature reservoirs, formations with extreme salinity and hardness, naturally fractured carbonates, and sandstone reservoirs with heavy and viscous crude oils. More oil reservoirs are reaching maturity where secondary polymer floods and tertiary surfactant methods have become increasingly important. This significance has added to the industry's interest in using reservoir simulators as tools for reservoir evaluation and management to minimize costs and increase process efficiency. Reservoir simulators with special features are needed to represent the coupled chemical and physical processes present in chemical EOR. The simulators need to be validated first against well-controlled lab and pilot scale experiments to reliably predict full field implementations. The available laboratory-scale data include 1) phase behavior and rheological data; and 2) results of secondary and tertiary coreflood experiments for P, SP, and ASP floods under reservoir conditions, i.e. chemical retentions, pressure drop, and oil recovery. Data collected from corefloods are used as benchmark tests comparing numerical reservoir simulators with chemical EOR modeling capabilities, such as STARS from CMG, ECLIPSE-100 from Schlumberger, and REVEAL from Petroleum Experts. The research UTCHEM simulator from The University of Texas at Austin is also included, since it has been the benchmark for chemical flooding simulation for over 25 years.
The results of this benchmark comparison will be utilized to improve chemical design for field-scale studies using commercial simulators. The benchmark tests illustrate the potential of commercial simulators for chemical flooding projects and provide a comprehensive table of strengths and limitations of each simulator for a given chemical EOR process. Mechanistic simulations of chemical EOR processes will provide predictive capability and can aid in optimization of the field injection projects. The objective of this paper is not to compare the computational efficiency and solution algorithms; it only focuses on the process modeling comparison.
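Among the coupled chemical-physical effects these simulators must represent, the concentration dependence of polymer-solution viscosity is a simple representative example. The sketch below uses a Flory-Huggins-type polynomial of the kind common in chemical-flooding simulators; the coefficient values are illustrative placeholders, not fitted data from this study.

```python
def polymer_viscosity(c, mu_w=1.0, a1=80.0, a2=2000.0, a3=30000.0):
    """Zero-shear polymer-solution viscosity (cp) as a cubic polynomial in
    polymer concentration c (wt%). Coefficients a1..a3 are illustrative
    placeholders, not values from the benchmark study."""
    return mu_w * (1.0 + a1 * c + a2 * c ** 2 + a3 * c ** 3)

if __name__ == "__main__":
    # viscosity rises steeply with concentration, the effect polymer
    # floods exploit to improve sweep efficiency
    for c in (0.0, 0.05, 0.1):
        print(c, polymer_viscosity(c))
```

The steep nonlinearity is one reason simulators must be validated against corefloods before field-scale prediction.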

  15. A suite of exercises for verifying dynamic earthquake rupture codes

    USGS Publications Warehouse

    Harris, Ruth A.; Barall, Michael; Aagaard, Brad T.; Ma, Shuo; Roten, Daniel; Olsen, Kim B.; Duan, Benchun; Liu, Dunyu; Luo, Bin; Bai, Kangchen; Ampuero, Jean-Paul; Kaneko, Yoshihiro; Gabriel, Alice-Agnes; Duru, Kenneth; Ulrich, Thomas; Wollherr, Stephanie; Shi, Zheqiang; Dunham, Eric; Bydlon, Sam; Zhang, Zhenguo; Chen, Xiaofei; Somala, Surendra N.; Pelties, Christian; Tago, Josue; Cruz-Atienza, Victor Manuel; Kozdon, Jeremy; Daub, Eric; Aslam, Khurram; Kase, Yuko; Withers, Kyle; Dalguer, Luis

    2018-01-01

We describe a set of benchmark exercises that are designed to test whether computer codes that simulate dynamic earthquake rupture are working as intended. These types of computer codes are often used to understand how earthquakes operate, and they produce simulation results that include earthquake size, amounts of fault slip, and the patterns of ground shaking and crustal deformation. The benchmark exercises examine a range of features that scientists incorporate in their dynamic earthquake rupture simulations. These include implementations of simple or complex fault geometry, off‐fault rock response to an earthquake, stress conditions, and a variety of formulations for fault friction. Many of the benchmarks were designed to investigate scientific problems at the forefront of earthquake physics and strong ground motion research. The exercises are freely available on our website for use by the scientific community.
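As a concrete instance of one fault-friction formulation such exercises cover, the sketch below implements the standard linear slip-weakening law; the parameter values are illustrative, not taken from any particular benchmark.

```python
def slip_weakening_mu(slip, mu_s=0.677, mu_d=0.525, d_c=0.4):
    """Linear slip-weakening friction: the friction coefficient falls
    linearly from the static value mu_s to the dynamic value mu_d over
    the critical slip distance d_c (m), then stays at mu_d.
    Parameter values are illustrative placeholders."""
    if slip >= d_c:
        return mu_d
    return mu_s - (mu_s - mu_d) * slip / d_c

if __name__ == "__main__":
    print(slip_weakening_mu(0.0))  # static friction, before any slip
    print(slip_weakening_mu(0.2))  # halfway through weakening
    print(slip_weakening_mu(1.0))  # fully weakened
```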

  16. Plasmid partition system of the P1par family from the pWR100 virulence plasmid of Shigella flexneri.

    PubMed

    Sergueev, Kirill; Dabrazhynetskaya, Alena; Austin, Stuart

    2005-05-01

    P1par family members promote the active segregation of a variety of plasmids and plasmid prophages in gram-negative bacteria. Each has genes for ParA and ParB proteins, followed by a parS partition site. The large virulence plasmid pWR100 of Shigella flexneri contains a new P1par family member: pWR100par. Although typical parA and parB genes are present, the putative pWR100parS site is atypical in sequence and organization. However, pWR100parS promoted accurate plasmid partition in Escherichia coli when the pWR100 Par proteins were supplied. Unique BoxB hexamer motifs within parS define species specificities among previously described family members. Although substantially different from P1parS from the P1 plasmid prophage of E. coli, pWR100parS has the same BoxB sequence. As predicted, the species specificity of the two types proved identical. They also shared partition-mediated incompatibility, consistent with the proposed mechanistic link between incompatibility and species specificity. Among several informative sequence differences between pWR100parS and P1parS is the presence of a 21-bp insert at the center of the pWR100parS site. Deletion of this insert left much of the parS activity intact. Tolerance of central inserts with integral numbers of helical DNA turns reflects the critical topology of these sites, which are bent by binding the host IHF protein.

  17. Monte Carlo characterization of PWR spent fuel assemblies to determine the detectability of pin diversion

    NASA Astrophysics Data System (ADS)

    Burdo, James S.

This research is based on the concept that detecting the diversion of nuclear fuel pins from Light Water Reactor (LWR) spent fuel assemblies is feasible through a careful comparison of spontaneous fission neutron and gamma levels in the guide tube locations of the fuel assemblies. The goal is to be able to determine whether some of the assembly fuel pins are either missing or have been replaced with dummy or fresh fuel pins. It is known that for typical commercial power spent fuel assemblies, the dominant spontaneous neutron emissions come from Cm-242 and Cm-244. Because of the shorter half-life of Cm-242 (0.45 yr) relative to that of Cm-244 (18.1 yr), Cm-244 is practically the only neutron source contributing to the neutron source term after the spent fuel assemblies are more than two years old. Initially, this research focused upon developing MCNP5 models of PWR fuel assemblies, modeling their depletion using the MONTEBURNS code, and carrying out a preliminary depletion of a ¼ model 17x17 assembly from the TAKAHAMA-3 PWR. Later, the depletion and more accurate isotopic distribution in the pins at discharge were modeled using the TRITON depletion module of the SCALE computer code. Benchmarking comparisons were performed with the MONTEBURNS and TRITON results. Subsequently, the neutron flux in each of the guide tubes of the TAKAHAMA-3 PWR assembly at two years after discharge as calculated by the MCNP5 computer code was determined for various scenarios. Cases were considered for all spent fuel pins present and for replacement of a single pin at a position near the center of the assembly (10,9) and at the corner (17,1). Some scenarios were duplicated with a gamma flux calculation for high energies associated with Cm-244. For each case, the difference between the flux (neutron or gamma) for all spent fuel pins and with a pin removed or replaced is calculated for each guide tube. Different detection criteria were established. 
The first was whether the relative error of the difference was less than 1.00, allowing for the existence of the difference within the margin of error. The second was whether the difference between the two values was large enough to prevent their error bars from overlapping. Error analysis was performed using both a one-second count and pseudo-Maxwell statistics for a projected 60-second count, giving four criteria for detection. The number of guide tubes meeting these criteria was compared and graphed for each case. Further analysis at extremes of high and low enrichment and long and short burnup times was done using data from assemblies at the Beaver Valley 1 and 2 PWRs. In all neutron flux cases, at least two guide tube locations meet all the criteria for detection of pin diversion. At least one location in almost all of the gamma flux cases does. These results show that placing detectors in the empty guide tubes of spent fuel bundles to identify possible pin diversion is feasible.
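The second detection criterion described above (non-overlapping error bars) can be sketched as a simple predicate; the flux values and uncertainties in the example are hypothetical, not results from the study.

```python
def pin_diversion_detected(flux_full, err_full, flux_mod, err_mod):
    """Non-overlapping error-bar criterion: a removed or replaced pin is
    flagged when the change in guide-tube flux exceeds the sum of the two
    one-sigma uncertainties, so the error bars cannot overlap."""
    return abs(flux_full - flux_mod) > (err_full + err_mod)

if __name__ == "__main__":
    # illustrative numbers: clear separation vs. overlapping bars
    print(pin_diversion_detected(1.00, 0.02, 0.90, 0.03))  # detected
    print(pin_diversion_detected(1.00, 0.05, 0.97, 0.05))  # not detected
```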

  18. Root-cause analysis of the better performance of the coarse-mesh finite-difference method for CANDU-type reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, W.

    2012-07-01

Recent assessment results indicate that the coarse-mesh finite-difference method (FDM) gives consistently smaller percent differences in channel powers than the fine-mesh FDM when compared to the reference MCNP solution for CANDU-type reactors. However, there is an impression that the fine-mesh FDM should always give more accurate results than the coarse-mesh FDM in theory. To answer the question of whether the better performance of the coarse-mesh FDM for CANDU-type reactors was just a coincidence (cancellation of errors) or caused by the use of heavy water or the use of lattice-homogenized cross sections for the cluster fuel geometry in the diffusion calculation, three benchmark problems were set up with three different fuel lattices: CANDU, HWR and PWR. These benchmark problems were then used to analyze the root cause of the better performance of the coarse-mesh FDM for CANDU-type reactors. The analyses confirm that the better performance of the coarse-mesh FDM for CANDU-type reactors is mainly caused by the use of lattice-homogenized cross sections for the sub-meshes of the cluster fuel geometry in the diffusion calculation. Based on the analyses, it is recommended to use 2 x 2 coarse-mesh FDM to analyze CANDU-type reactors when lattice-homogenized cross sections are used in the core analysis.
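The "impression" that mesh refinement should improve accuracy can be illustrated with a toy problem far simpler than the CANDU case: for a bare 1-D one-group slab with exact cross sections, the finite-difference eigenvalue converges to the analytic value as the mesh is refined. The geometry and cross sections below are arbitrary illustrative numbers, not from the paper.

```python
import math

def k_inf_fdm(n, length=100.0, diff=1.0, sig_a=0.01, nu_sig_f=0.012):
    """k-effective of a bare 1-D slab from an n-mesh finite-difference
    discretization of one-group diffusion with zero-flux boundaries.

    The smallest eigenvalue of the FDM Laplacian on mesh size h = length/n
    is 2*(1 - cos(pi*h/length)) / h**2, which approaches the analytic
    buckling (pi/length)**2 as the mesh is refined."""
    h = length / n
    buckling = 2.0 * (1.0 - math.cos(math.pi * h / length)) / h ** 2
    return nu_sig_f / (sig_a + diff * buckling)

if __name__ == "__main__":
    k_exact = 0.012 / (0.01 + (math.pi / 100.0) ** 2)
    # refining the mesh moves the FDM eigenvalue toward the analytic one
    for n in (5, 10, 50):
        print(n, abs(k_inf_fdm(n) - k_exact))
```

The paper's point is that this expectation can be upset when lattice-homogenized cross sections are imposed on fine sub-meshes of a cluster geometry; the sketch only shows the idealized behavior with exact cross sections.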

  19. Benchmarking of dynamic simulation predictions in two software platforms using an upper limb musculoskeletal model

    PubMed Central

    Saul, Katherine R.; Hu, Xiao; Goehler, Craig M.; Vidt, Meghan E.; Daly, Melissa; Velisar, Anca; Murray, Wendy M.

    2014-01-01

Several open-source or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated to what extent differences in technical implementations in popular simulation software environments result in differences in kinematic predictions for single and multijoint movements using EMG- and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms. PMID:24995410

  20. Benchmarking of dynamic simulation predictions in two software platforms using an upper limb musculoskeletal model.

    PubMed

    Saul, Katherine R; Hu, Xiao; Goehler, Craig M; Vidt, Meghan E; Daly, Melissa; Velisar, Anca; Murray, Wendy M

    2015-01-01

Several open-source or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated to what extent differences in technical implementations in popular simulation software environments result in differences in kinematic predictions for single and multijoint movements using EMG- and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms.

  1. 77 FR 15293 - Airworthiness Directives; Dassault Aviation Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-15

    ...-190-20), land at nearest suitable airport Upon display of ELEC:LH ESS PWR LO or ELEC:LH ESS NO PWR (Abnormal procedure 3-190-40), land at nearest suitable airport Upon display of ELEC:RH ESS PWR LO and ELEC...

  2. A suite of benchmark and challenge problems for enhanced geothermal systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark; Fu, Pengcheng; McClure, Mark

A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research (stimulation, development, and circulation) in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. 
Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners. We present the suite of benchmark and challenge problems developed for the GTO-CCS, providing problem descriptions and sample solutions.

  3. Computers for real time flight simulation: A market survey

    NASA Technical Reports Server (NTRS)

    Bekey, G. A.; Karplus, W. J.

    1977-01-01

    An extensive computer market survey was made to determine those available systems suitable for current and future flight simulation studies at Ames Research Center. The primary requirement is for the computation of relatively high frequency content (5 Hz) math models representing powered lift flight vehicles. The Rotor Systems Research Aircraft (RSRA) was used as a benchmark vehicle for computation comparison studies. The general nature of helicopter simulations and a description of the benchmark model are presented, and some of the sources of simulation difficulties are examined. A description of various applicable computer architectures is presented, along with detailed discussions of leading candidate systems and comparisons between them.

  4. Performance of Multi-chaotic PSO on a shifted benchmark functions set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan

    2015-03-10

In this paper, the performance of the Multi-chaotic PSO algorithm is investigated using two shifted benchmark functions. The purpose of shifted benchmark functions is to simulate time-variant real-world problems. The results of chaotic PSO are compared with the canonical version of the algorithm. It is concluded that using the multi-chaotic approach can lead to better results in optimization of shifted functions.
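A minimal sketch of a shifted benchmark function (the shifted sphere) follows; the shift vector is an arbitrary example. Shifting the optimum away from the origin is what simulates the time-variant problem described above.

```python
def shifted_sphere(x, shift):
    """Shifted sphere benchmark f(x) = sum_i (x_i - o_i)^2. The global
    optimum moves to the shift vector o, so an optimizer tuned to the
    origin must track the displaced optimum."""
    return sum((xi - oi) ** 2 for xi, oi in zip(x, shift))

if __name__ == "__main__":
    o = [1.5, -2.0, 0.5]                       # arbitrary example shift
    print(shifted_sphere(o, o))                # optimum relocated to o
    print(shifted_sphere([0.0, 0.0, 0.0], o))  # origin no longer optimal
```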

  5. Benchmarking of Neutron Production of Heavy-Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  6. Benchmarking of Heavy Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kochunas, Brendan; Collins, Benjamin; Stimpson, Shane

This paper describes the methodology developed and implemented in the Virtual Environment for Reactor Applications Core Simulator (VERA-CS) to perform high-fidelity, pressurized water reactor (PWR), multicycle, core physics calculations. Depletion of the core with pin-resolved power and nuclide detail is a significant advance in the state of the art for reactor analysis, providing the level of detail necessary to address the problems of the U.S. Department of Energy Nuclear Reactor Simulation Hub, the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS has three main components: the neutronics solver MPACT, the thermal-hydraulic (T-H) solver COBRA-TF (CTF), and the nuclide transmutation solver ORIGEN. This paper focuses on MPACT and provides an overview of the resonance self-shielding methods, macroscopic-cross-section calculation, two-dimensional/one-dimensional (2-D/1-D) transport, nuclide depletion, T-H feedback, and other supporting methods representing a minimal set of the capabilities needed to simulate high-fidelity models of a commercial nuclear reactor. Results are presented from the simulation of a model of the first cycle of Watts Bar Unit 1. The simulation is within 16 parts per million boron (ppmB) reactivity for all state points compared to cycle measurements, with an average reactivity bias of <5 ppmB for the entire cycle. Comparisons to cycle 1 flux map data are also provided, and the average 2-D root-mean-square (rms) error during cycle 1 is 1.07%. To demonstrate the multicycle capability, a state point at beginning of cycle (BOC) 2 was also simulated and compared to plant data. The comparison of the cycle 2 BOC state has a reactivity difference of +3 ppmB from measurement, and the 2-D rms of the comparison in the flux maps is 1.77%. Lastly, these results provide confidence in VERA-CS’s capability to perform high-fidelity calculations for practical PWR reactor problems.

  8. Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems

    NASA Astrophysics Data System (ADS)

    Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald

A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no possibility for a fair measurement of the quality of such optimizations. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today’s benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction, access patterns are constantly shifting. We present a benchmark that simulates a web information system with interaction of large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark considers the temporal dependency of user interaction. The main focus is to measure the adaptability of a database management system according to shifting workloads. We will give details on our design approach that uses sophisticated pattern analysis and data mining techniques.
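The shifting, user-driven workload idea can be sketched as an hour-dependent mix of request types; the request categories and weights below are hypothetical, not taken from the analyzed eLearning system.

```python
import random

def request_mix(hour):
    """Hypothetical time-of-day dependent mix of request types for a web
    information system with large user groups; weights are illustrative."""
    if 8 <= hour < 18:                                      # working hours: read-heavy
        return {"browse": 0.6, "search": 0.3, "submit": 0.1}
    return {"browse": 0.3, "search": 0.2, "submit": 0.5}    # off-hours: submission-heavy

def draw_request(hour, rng):
    """Sample one request type from the hour's mix (inverse-CDF draw)."""
    mix = request_mix(hour)
    r, cum = rng.random(), 0.0
    for kind, weight in mix.items():
        cum += weight
        if r <= cum:
            return kind
    return kind  # guard against floating-point round-off

if __name__ == "__main__":
    rng = random.Random(7)
    print([draw_request(h, rng) for h in (9, 14, 22)])
```

Replaying such a stream against a database and tracking throughput as the mix drifts is the kind of measurement the benchmark targets.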

  9. Photosynthetic productivity and its efficiencies in ISIMIP2a biome models: benchmarking for impact assessment studies

    NASA Astrophysics Data System (ADS)

    Ito, Akihiko; Nishina, Kazuya; Reyer, Christopher P. O.; François, Louis; Henrot, Alexandra-Jane; Munhoven, Guy; Jacquemin, Ingrid; Tian, Hanqin; Yang, Jia; Pan, Shufen; Morfopoulos, Catherine; Betts, Richard; Hickler, Thomas; Steinkamp, Jörg; Ostberg, Sebastian; Schaphoff, Sibyll; Ciais, Philippe; Chang, Jinfeng; Rafique, Rashid; Zeng, Ning; Zhao, Fang

    2017-08-01

    Simulating vegetation photosynthetic productivity (or gross primary production, GPP) is a critical feature of the biome models used for impact assessments of climate change. We conducted a benchmarking of global GPP simulated by eight biome models participating in the second phase of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP2a) with four meteorological forcing datasets (30 simulations), using independent GPP estimates and recent satellite data of solar-induced chlorophyll fluorescence as a proxy of GPP. The simulated global terrestrial GPP ranged from 98 to 141 Pg C yr-1 (1981-2000 mean); considerable inter-model and inter-data differences were found. Major features of spatial distribution and seasonal change of GPP were captured by each model, showing good agreement with the benchmarking data. All simulations showed incremental trends of annual GPP, seasonal-cycle amplitude, radiation-use efficiency, and water-use efficiency, mainly caused by the CO2 fertilization effect. The incremental slopes were higher than those obtained by remote sensing studies, but comparable with those by recent atmospheric observation. Apparent differences were found in the relationship between GPP and incoming solar radiation, for which forcing data differed considerably. The simulated GPP trends co-varied with a vegetation structural parameter, leaf area index, at model-dependent strengths, implying the importance of constraining canopy properties. In terms of extreme events, GPP anomalies associated with a historical El Niño event and large volcanic eruption were not consistently simulated in the model experiments due to deficiencies in both forcing data and parameterized environmental responsiveness. Although the benchmarking demonstrated the overall advancement of contemporary biome models, further refinements are required, for example, for solar radiation data and vegetation canopy schemes.
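For readers unfamiliar with the efficiency diagnostics mentioned above, a minimal Monteith-style light-use-efficiency sketch follows; the efficiency coefficient is an illustrative placeholder, not a parameter of any ISIMIP2a model.

```python
def gpp_light_use_efficiency(par, fapar, epsilon=1.8):
    """Monteith-type light-use-efficiency model: GPP (g C m-2 d-1) equals
    epsilon (g C MJ-1) times fAPAR (fraction of absorbed PAR) times
    incoming PAR (MJ m-2 d-1). epsilon here is an illustrative value."""
    return epsilon * fapar * par

def radiation_use_efficiency(gpp, par):
    """Benchmarking diagnostic: realized GPP per unit incoming radiation."""
    return gpp / par

if __name__ == "__main__":
    gpp = gpp_light_use_efficiency(par=10.0, fapar=0.5)
    print(gpp, radiation_use_efficiency(gpp, 10.0))
```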

  10. An analytical benchmark and a Mathematica program for MD codes: Testing LAMMPS on the 2nd generation Brenner potential

    NASA Astrophysics Data System (ADS)

    Favata, Antonino; Micheletti, Andrea; Ryu, Seunghwa; Pugno, Nicola M.

    2016-10-01

An analytical benchmark and a simple consistent Mathematica program are proposed for graphene and carbon nanotubes that may serve to test any molecular dynamics code implemented with REBO potentials. By exploiting the benchmark, we checked the results produced by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) when adopting the second-generation Brenner potential, showed that this code in its current implementation produces results offset from those of the benchmark by a significant amount, and provide evidence of the reason.

  11. Experimental benchmarking of a Monte Carlo dose simulation code for pediatric CT

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Samei, Ehsan; Yoshizumi, Terry; Colsher, James G.; Jones, Robert P.; Frush, Donald P.

    2007-03-01

    In recent years, there has been a desire to reduce CT radiation dose to children because of their susceptibility and prolonged risk for cancer induction. Concerns arise, however, as to the impact of dose reduction on image quality and thus potentially on diagnostic accuracy. To study the dose and image quality relationship, we are developing a simulation code to calculate organ dose in pediatric CT patients. To benchmark this code, a cylindrical phantom was built to represent a pediatric torso, which allows measurements of dose distributions from its center to its periphery. Dose distributions for axial CT scans were measured on a 64-slice multidetector CT (MDCT) scanner (GE Healthcare, Chalfont St. Giles, UK). The same measurements were simulated using a Monte Carlo code (PENELOPE, Universitat de Barcelona) with the applicable CT geometry including bowtie filter. The deviations between simulated and measured dose values were generally within 5%. To our knowledge, this work is one of the first attempts to compare measured radial dose distributions on a cylindrical phantom with Monte Carlo simulated results. It provides a simple and effective method for benchmarking organ dose simulation codes and demonstrates the potential of Monte Carlo simulation for investigating the relationship between dose and image quality for pediatric CT patients.

  12. Statistical evaluation of the metallurgical test data in the ORR-PSF-PVS irradiation experiment. [PWR; BWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stallmann, F.W.

    1984-08-01

    A statistical analysis of Charpy test results of the two-year Pressure Vessel Simulation metallurgical irradiation experiment was performed. Determination of transition temperature and upper shelf energy derived from computer fits compare well with eyeball fits. Uncertainties for all results can be obtained with computer fits. The results were compared with predictions in Regulatory Guide 1.99 and other irradiation damage models.

  13. IgSimulator: a versatile immunosequencing simulator.

    PubMed

    Safonova, Yana; Lapidus, Alla; Lill, Jennie

    2015-10-01

The recent introduction of next-generation sequencing technologies to antibody studies has resulted in a growing number of immunoinformatics tools for antibody repertoire analysis. However, benchmarking these newly emerging tools remains problematic since the gold-standard datasets needed to validate them are typically not available. Since simulating antibody repertoires is often the only feasible way to benchmark new immunoinformatics tools, we developed the IgSimulator tool, which addresses various complications in generating realistic antibody repertoires. IgSimulator's code has a modular structure and can be easily adapted to new simulation requirements. IgSimulator is open source and freely available as a C++ and Python program running on all Unix-compatible platforms. The source code is available from yana-safonova.github.io/ig_simulator. Supplementary data are available at Bioinformatics online.

  14. Benchmarks for target tracking

    NASA Astrophysics Data System (ADS)

    Dunham, Darin T.; West, Philip D.

    2011-09-01

    The term benchmark originates from the chiseled horizontal marks that surveyors made, into which an angle-iron could be placed to bracket ("bench") a leveling rod, thus ensuring that the leveling rod can be repositioned in exactly the same place in the future. A benchmark in computer terms is the result of running a computer program, or a set of programs, in order to assess the relative performance of an object by running a number of standard tests and trials against it. This paper will discuss the history of simulation benchmarks that are being used by multiple branches of the military and agencies of the US government. These benchmarks range from missile defense applications to chemical biological situations. Typically, a benchmark is used with Monte Carlo runs in order to tease out how algorithms deal with variability and the range of possible inputs. We will also describe problems that can be solved by a benchmark.

  15. Benchmarking of neutron production of heavy-ion transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, I.; Ronningen, R. M.; Heilbronn, L.

Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  16. Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ye; Ma, Xiaosong; Liu, Qing Gary

    2015-01-01

Parallel application benchmarks are indispensable for evaluating/optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, reconfigure, and often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.

  17. Microstructural Effects on SCC Initiation PWR Primary Water Cold-Worked Alloy 600

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Ziqing; Toloczko, Mychailo B.; Bruemmer, Stephen M.

The SCC initiation behavior of one mill-annealed alloy 600 plate heat was investigated in simulated PWR primary water under constant load at yield stress with in-situ direct current potential drop (DCPD) monitoring for crack initiation. Twelve specimens were tested at similar cold work levels, among which three showed much shorter SCC initiation times (<400 hrs) than the others (>1200 hrs). Post-test examinations revealed that these three specimens all feature an inhomogeneous microstructure where the primary crack always nucleated along the boundary of large elongated grains protruding normally into the gauge. In contrast, such microstructure was either not observed or did not extend deep enough into the gauge in the other specimens exhibiting ~3-6X longer initiation times. In order to better understand the role of this microstructural inhomogeneity in SCC initiation, high-resolution microscopy was performed to compare carbide morphology and strain distribution between the long grains and normal grains, and their potential effects on SCC initiation are discussed in this paper.

  18. Development code for sensitivity and uncertainty analysis of input on the MCNPX for neutronic calculation in PWR core

    NASA Astrophysics Data System (ADS)

    Hartini, Entin; Andiwijayakusuma, Dinan

    2014-09-01

    This research developed a code for uncertainty analysis based on a statistical approach to assessing uncertainty in input parameters. In the burn-up calculation of fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density, and fuel temperature. The calculation was performed during irradiation using Monte Carlo N-Particle Transport (MCNPX). The uncertainty method is based on probability density functions. The code was developed as a Python script that couples with MCNPX for criticality and burn-up calculations. The simulation models the geometry of a PWR core with MCNPX at a power of 54 MW with UO2 pellet fuel. The calculation uses the ENDF/B-VI continuous-energy cross-section library. MCNPX requires nuclear data in ACE format, so interfaces were developed to obtain ACE-format nuclear data from ENDF through NJOY processing for temperature changes over a certain range.
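    The statistical propagation step described above can be sketched in Python. The MCNPX run is replaced here by a hypothetical surrogate function (a real workflow would write an input deck, execute MCNPX, and parse k-eff from the output); all nominal values, sensitivities, and uncertainties below are illustrative assumptions, not values from the paper:

```python
import random
import statistics

# Hypothetical stand-in for an MCNPX criticality run. In the real workflow
# this function would write an MCNPX input deck, invoke the code, and parse
# k-eff from the output; a linear surrogate keeps the sketch runnable.
def keff_surrogate(fuel_density, coolant_density, fuel_temp):
    return (1.000
            + 0.010 * (fuel_density - 10.4)     # g/cm^3, assumed sensitivity
            + 0.005 * (coolant_density - 0.71)  # g/cm^3, assumed sensitivity
            - 2.0e-5 * (fuel_temp - 900.0))     # K, Doppler feedback sign

def propagate_uncertainty(n_samples=1000, seed=42):
    """Sample the three input parameters and collect k-eff statistics."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        rho_f = rng.gauss(10.4, 0.05)   # fuel density and assumed 1-sigma
        rho_c = rng.gauss(0.71, 0.01)   # coolant density
        t_f = rng.gauss(900.0, 25.0)    # fuel temperature
        samples.append(keff_surrogate(rho_f, rho_c, t_f))
    return statistics.mean(samples), statistics.stdev(samples)

mean_k, sigma_k = propagate_uncertainty()
print(f"k-eff = {mean_k:.5f} +/- {sigma_k:.5f}")
```

    Swapping the surrogate for a subprocess call to MCNPX (plus an output parser) turns this sampling loop into the coupling driver the abstract describes.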

  19. Corrosion fatigue characterization of reactor pressure vessel steels. [PWR; BWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Der Sluys, W.A.

    1982-12-01

    During routine operation, light water reactor (LWR) pressure vessels are subjected to a variety of transients that result in time-varying stresses. Consequently, fatigue and environmentally assisted fatigue are relevant growth mechanisms for flaws in these pressure vessels. To provide a better understanding of the resistance of nuclear pressure vessel steels to these flaw growth processes, fracture mechanics data were generated on the rates of fatigue crack growth for SA508-2 and SA533B-1 steels in both room-temperature air and 288°C water. Areas investigated were: the relationship of crack growth rate to prior loading history; the effects of loading frequency and R ratio (K_min/K_max) on crack growth rate as a function of the stress intensity factor range (ΔK); transient aspects of the fatigue crack growth behavior; the effect of material chemistry (sulphur content) on fatigue crack growth rate; and water chemistry effects (high-purity water versus simulated pressurized water reactor (PWR) primary coolant).

  20. Development code for sensitivity and uncertainty analysis of input on the MCNPX for neutronic calculation in PWR core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartini, Entin, E-mail: entin@batan.go.id; Andiwijayakusuma, Dinan, E-mail: entin@batan.go.id

    2014-09-30

    This research developed a code for uncertainty analysis based on a statistical approach to assessing uncertainty in input parameters. In the burn-up calculation of fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density, and fuel temperature. The calculation was performed during irradiation using Monte Carlo N-Particle Transport (MCNPX). The uncertainty method is based on probability density functions. The code was developed as a Python script that couples with MCNPX for criticality and burn-up calculations. The simulation models the geometry of a PWR core with MCNPX at a power of 54 MW with UO2 pellet fuel. The calculation uses the ENDF/B-VI continuous-energy cross-section library. MCNPX requires nuclear data in ACE format, so interfaces were developed to obtain ACE-format nuclear data from ENDF through NJOY processing for temperature changes over a certain range.

  1. Improved Biomolecular Thin-Film Sensor based on Plasmon Waveguide Resonance

    NASA Astrophysics Data System (ADS)

    Byard, Courtney; Aslan, Mustafa; Mendes, Sergio

    2009-05-01

    The design, fabrication, and characterization of a plasmon waveguide resonance (PWR) sensor are presented. Glass substrates are coated with a 35 nm gold film using electron beam evaporation and then covered with a 143 nm aluminum oxide waveguide using an atomic layer deposition process, creating a smooth, highly transparent dielectric film. When probed in the Kretschmann configuration, the structure allows for efficient conversion of an incident optical beam into a surface wave, which is mainly confined in the dielectric layer and exhibits a deep and narrow angular resonance. The performance (reflectance vs. incidence angle in TE polarization) is modeled using a transfer-matrix approach implemented in Mathematica. Our simulations and experimental data are compared with those of a surface plasmon resonance (SPR) sensor using the same criteria. We show that the resolution of PWR is approximately ten times better than that of SPR, opening opportunities for more sensitive studies in various applications, including research in protein interactions, pharmaceutical drug development, and food analysis.

  2. Optimization of burnable poison design for Pu incineration in fully fertile free PWR core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fridman, E.; Shwageraus, E.; Galperin, A.

    2006-07-01

    The design challenges of fertile-free fuel (FFF) can be addressed by careful and elaborate use of burnable poisons (BP). A practical fully FFF core design for a PWR has been reported in the past [1]. However, the burnable poison option used in that design resulted in a significant end-of-cycle reactivity penalty due to incomplete BP depletion. Consequently, excessive Pu loadings were required to maintain the target fuel cycle length, which in turn decreased the Pu burning efficiency. A systematic evaluation of commercially available BP materials in all configurations currently used in PWRs is the main objective of this work. The BP materials considered are boron, Gd, Er, and Hf. The BP geometries were based on the Wet Annular Burnable Absorber (WABA), the Integral Fuel Burnable Absorber (IFBA), and homogeneous poison/fuel mixtures. Several of the most promising combinations of BP designs were selected for full-core 3D simulation. All major core performance parameters for the analyzed cases are very close to those of a standard PWR with conventional UO2 fuel, including possibility of reactivity control, power peaking factors, and cycle length. The MTC of all FFF cores was found to be negative at full-power conditions at all times and very close to that of the UO2 core. The Doppler coefficient of the FFF cores is also negative but somewhat lower in magnitude compared to the UO2 core. The soluble boron worth of the FFF cores was calculated to be lower than that of the UO2 core by about a factor of two, which still allows core reactivity control with acceptable soluble boron concentrations. The main conclusion of this work is that judicious application of burnable poisons for fertile-free fuel has the potential to produce a core design with performance characteristics close to those of the reference PWR core with conventional UO2 fuel. (authors)

  3. Benchmarking Heavy Ion Transport Codes FLUKA, HETC-HEDS, MARS15, MCNPX, and PHITS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronningen, Reginald Martin; Remec, Igor; Heilbronn, Lawrence H.

    Powerful accelerators such as spallation neutron sources, muon-collider/neutrino facilities, and rare isotope beam facilities must be designed with the consideration that they handle the beam power reliably and safely, and they must be optimized to yield maximum performance relative to their design requirements. The simulation codes used for design purposes must produce reliable results. If not, component and facility designs can become costly, have limited lifetime and usefulness, and could even be unsafe. The objective of this proposal is to assess the performance of the currently available codes PHITS, FLUKA, MARS15, MCNPX, and HETC-HEDS that could be used for design simulations involving heavy ion transport. We plan to assess their performance by performing simulations and comparing results against experimental data of benchmark quality. Quantitative knowledge of the biases and uncertainties of the simulations is essential, as this potentially impacts the safe, reliable, and cost-effective design of any future radioactive ion beam facility. Further benchmarking of heavy-ion transport codes was one of the actions recommended in the Report of the 2003 RIA R&D Workshop.

  4. Humidification of Blow-By Oxygen During Recovery of Postoperative Pediatric Patients: One Unit's Journey.

    PubMed

    Donahue, Suzanne; DiBlasi, Robert M; Thomas, Karen

    2018-02-02

    To examine the practice of nebulizer cool mist blow-by oxygen administered to spontaneously breathing postanesthesia care unit (PACU) pediatric patients during Phase one recovery. Existing evidence was evaluated. Informal benchmarking documented practices in peer organizations. An in vitro study was then conducted to simulate clinical practice and determine depth and amount of airway humidity delivery with blow-by oxygen. Informal benchmarking information was obtained by telephone interview. Using a three-dimensional printed simulation model of the head connected to a breathing lung simulator, depth and amount of moisture delivery in the respiratory tree were measured. Evidence specific to PACU administration of cool mist blow-by oxygen was limited. Informal benchmarking revealed that routine cool mist oxygenated blow-by administration was not widely practiced. The laboratory experiment revealed minimal moisture reaching the mid-tracheal area of the simulated airway model. Routine use of oxygenated cool mist in spontaneously breathing pediatric PACU patients is not supported. Copyright © 2017 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.

  5. Benchmarking MARS (accident management software) with the Browns Ferry fire

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, S.M.; Liu, L.Y.; Raines, J.C.

    1992-01-01

    The MAAP Accident Response System (MARS) is user-friendly computer software developed to provide management and engineering staff with the most needed insights, during actual or simulated accidents, into the current and future conditions of the plant based on current plant data and its trends. To demonstrate the reliability of the MARS code in simulating a plant transient, MARS is being benchmarked with the available reactor pressure vessel (RPV) pressure and level data from the Browns Ferry fire. The MARS software uses the Modular Accident Analysis Program (MAAP) code as its basis to calculate plant response under accident conditions. MARS uses a limited set of plant data to initialize and track the accident progression. To perform this benchmark, a simulated set of plant data was constructed based on actual report data containing the information necessary to initialize MARS and keep track of plant system status throughout the accident progression. The initial Browns Ferry fire data were produced by performing a MAAP run to simulate the accident. The remaining accident simulation used actual plant data.

  6. Opto-Electronic and Interconnects Hierarchical Design Automation System (OE-IDEAS)

    DTIC Science & Technology

    2004-05-01

    8.2 Simulation of critical path from the Mayo “10G” system MCM board. Benchmarks from the DaVinci Netbook website: In May 2002, CFDRC downloaded all the materials from the DaVinci Netbook website containing the benchmark

  7. Benchmarking a Visual-Basic based multi-component one-dimensional reactive transport modeling tool

    NASA Astrophysics Data System (ADS)

    Torlapati, Jagadish; Prabhakar Clement, T.

    2013-01-01

    We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does not require any additional software tools. The code can be easily adapted by others for simulating different types of laboratory-scale reactive transport experiments. We illustrate the capabilities of the tool by solving five benchmark problems with varying levels of reaction complexity. These literature-derived benchmarks are used to highlight the versatility of the code for solving a variety of practical reactive transport problems. The benchmarks are described in detail to provide a comprehensive database, which can be used by model developers to test other numerical codes. The VBA code presented in the study is a practical tool that can be used by laboratory researchers for analyzing both batch and column datasets within an EXCEL platform.
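    As a rough illustration of the class of problem RT1D solves, here is a minimal explicit finite-difference sketch of 1D advection-dispersion with first-order decay. This is an independent toy scheme in Python, not the RT1D algorithm itself (which runs in VBA and handles coupled multi-component reactions), and all parameter values are illustrative:

```python
# Explicit upwind finite-difference sketch of 1D advection-dispersion with
# first-order decay. Parameters satisfy the stability limits
# v*dt/dx <= 1 and 2*D*dt/dx**2 <= 1 for this explicit scheme.
def simulate(nx=50, nt=200, dx=1.0, dt=0.1, v=0.5, D=0.2, k=0.01, c_in=1.0):
    c = [0.0] * nx
    for _ in range(nt):
        new = c[:]
        for i in range(1, nx - 1):
            adv = -v * (c[i] - c[i - 1]) / dx             # upwind advection
            disp = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2
            new[i] = c[i] + dt * (adv + disp - k * c[i])  # decay sink
        new[0] = c_in          # constant-concentration inlet
        new[-1] = new[-2]      # zero-gradient outlet
        c = new
    return c

profile = simulate()
```

    Replacing the single decay term with a user-supplied reaction function per species is the step that turns a scheme like this into a multi-component reactive transport solver of the kind the abstract describes.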

  8. Benchmarking FEniCS for mantle convection simulations

    NASA Astrophysics Data System (ADS)

    Vynnytska, L.; Rognes, M. E.; Clark, S. R.

    2013-01-01

    This paper evaluates the usability of the FEniCS Project for mantle convection simulations by numerical comparison to three established benchmarks. The benchmark problems all concern convection processes in an incompressible fluid induced by temperature or composition variations, and cover three cases: (i) steady-state convection with depth- and temperature-dependent viscosity, (ii) time-dependent convection with constant viscosity and internal heating, and (iii) a Rayleigh-Taylor instability. These problems are modeled by the Stokes equations for the fluid and advection-diffusion equations for the temperature and composition. The FEniCS Project provides a novel platform for the automated solution of differential equations by finite element methods. In particular, it offers a significant flexibility with regard to modeling and numerical discretization choices; we have here used a discontinuous Galerkin method for the numerical solution of the advection-diffusion equations. Our numerical results are in agreement with the benchmarks, and demonstrate the applicability of both the discontinuous Galerkin method and FEniCS for such applications.

  9. Evaluation of Neutron Radiography Reactor LEU-Core Start-Up Measurements

    DOE PAGES

    Bess, John D.; Maddock, Thomas L.; Smolinski, Andrew T.; ...

    2014-11-04

    Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. Experiments include criticality, control-rod worth measurements, shutdown margin, and excess reactivity for four core loadings with 56, 60, 62, and 64 fuel elements. The worth of four graphite reflector block assemblies and an empty dry tube used for experiment irradiations were also measured and evaluated for the 60-fuel-element core configuration. Dominant uncertainties in the experimental k-eff come from uncertainties in the manganese content and impurities in the stainless steel fuel cladding, as well as the 236U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 neutron nuclear data are approximately 1.4% (9σ) greater than the benchmark model eigenvalues, which is commonly seen in Monte Carlo simulations of other TRIGA reactors. Simulations of the worth measurements are within the 2σ uncertainty for most of the benchmark experiment worth values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  10. Evaluation of Neutron Radiography Reactor LEU-Core Start-Up Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Maddock, Thomas L.; Smolinski, Andrew T.

    Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. Experiments include criticality, control-rod worth measurements, shutdown margin, and excess reactivity for four core loadings with 56, 60, 62, and 64 fuel elements. The worth of four graphite reflector block assemblies and an empty dry tube used for experiment irradiations were also measured and evaluated for the 60-fuel-element core configuration. Dominant uncertainties in the experimental k-eff come from uncertainties in the manganese content and impurities in the stainless steel fuel cladding, as well as the 236U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 neutron nuclear data are approximately 1.4% (9σ) greater than the benchmark model eigenvalues, which is commonly seen in Monte Carlo simulations of other TRIGA reactors. Simulations of the worth measurements are within the 2σ uncertainty for most of the benchmark experiment worth values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  11. Simulation of Benchmark Cases with the Terminal Area Simulation System (TASS)

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at; Proctor, Fred

    2011-01-01

    The hydrodynamic core of the Terminal Area Simulation System (TASS) is evaluated against different benchmark cases. In the absence of closed-form solutions for the equations governing atmospheric flows, such models are usually evaluated against idealized test cases. Over the years, various authors have suggested a suite of these idealized cases, which have become standards for testing and evaluating the dynamics and thermodynamics of atmospheric flow models. In this paper, simulations of three such cases are described. In addition, the TASS model is evaluated against a test case that uses an exact solution of the Navier-Stokes equations. The TASS results are compared against previously reported simulations of these benchmark cases in the literature. It is demonstrated that the TASS model is highly accurate, stable, and robust.

  12. Electron Microscopy Characterizations and Atom Probe Tomography of Intergranular Attack in Alloy 600 Exposed to PWR Primary Water

    NASA Astrophysics Data System (ADS)

    Olszta, Matthew J.; Schreiber, Daniel K.; Thomas, Larry E.; Bruemmer, Stephen M.

    Detailed examinations of intergranular attack (IGA) in alloy 600 were performed after exposure to simulated PWR primary water at 325°C for 500 h. High-resolution analyses of IGA characteristics were conducted on specimens with either a 1 µm diamond or 1200-grit SiC surface finish using scanning electron microscopy, transmission electron microscopy, and atom probe tomography techniques. The diamond-polish finish, with very little preexisting subsurface damage, revealed attack of high-energy grain boundaries that intersected the exposed surface to depths approaching 2 µm. In all cases, IGA from the surface is localized oxidation consisting of porous, nanocrystalline MO-structure and spinel particles along with regions of faceted wall oxidation. Surprisingly, this continuous IG oxidation transitions to discontinuous, discrete Cr-rich sulfide particles up to 50 nm in diameter. In the vicinity of the sulfides, the grain boundaries were severely Cr depleted (to <1 at%) and enriched in S. The 1200-grit SiC finish surface exhibited a preexisting, highly strained recrystallized layer of elongated nanocrystalline matrix grains. Similar IG oxidation and leading sulfide particles were found, but the IGA depth was typically confined to the near-surface (~400 nm) recrystallized region. The difference in IGA for the two surface finishes indicates that the formation of grain boundary sulfides occurs during the exposure to PWR primary water. The source of S remains unclear; however, it is not present as sulfides in the bulk alloy, nor is it segregated to bulk grain boundaries.

  13. The increase in fatigue crack growth rates observed for Zircaloy-4 in a PWR environment

    NASA Astrophysics Data System (ADS)

    Cockeram, B. V.; Kammenzind, B. F.

    2018-02-01

    Cyclic stresses produced during the operation of nuclear reactors can result in the extension of cracks by fatigue. Although fatigue crack growth rate (FCGR) data for Zircaloy-4 in air are available, little testing has been performed in a PWR primary water environment. Test programs performed by Gee et al. (1989) and Picker and Pickles (1984) at the UK Atomic Energy Authority, and by Wisner et al. (1994), have shown an enhancement in FCGR for Zircaloy-2 and Zircaloy-4 in high-temperature water. In this work, FCGR testing is performed on Zircaloy-4 in a PWR environment in the hydrided and non-hydrided conditions over a range of stress intensity. Measurements of crack extension are performed using a direct current potential drop (DCPD) method. The cyclic rate in the PWR primary water environment is varied between 1 cycle per minute and 0.1 cycle per minute. Higher FCGRs are observed in water in comparison to FCGR testing performed in air for the hydrided material. Hydrided and non-hydrided materials had similar FCGR values in air, but the non-hydrided material exhibited much lower FCGRs in a PWR primary water environment than the hydrided material. Hydrides are shown to exhibit an increased tendency for cracking or decohesion in a PWR primary water environment, which results in an enhancement in FCGR values. The FCGR in PWR primary water increased only slightly with decreasing cycle frequency in the range of 1 cycle per minute to 0.1 cycle per minute. Comparisons between the FCGR in water and air show that the enhancement from the PWR environment is affected by the applied stress intensity.
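    The FCGR behavior discussed above is conventionally described by a Paris-type law, da/dN = C(ΔK)^m with ΔK = YΔσ√(πa); a minimal integration sketch, using illustrative constants rather than the Zircaloy-4 values measured in this work:

```python
from math import pi, sqrt

# Paris-law sketch of fatigue crack growth, da/dN = C * (dK)^m, with
# dK = Y * dSigma * sqrt(pi * a). C, m, and Y here are illustrative
# placeholders, not fitted Zircaloy-4 constants.
def grow_crack(a0_m, dsigma_mpa, cycles, C=1e-11, m=3.0, Y=1.12):
    a = a0_m
    for _ in range(cycles):
        dk = Y * dsigma_mpa * sqrt(pi * a)  # stress intensity range, MPa*sqrt(m)
        a += C * dk ** m                    # crack extension this cycle, m
    return a

a_final = grow_crack(a0_m=1e-3, dsigma_mpa=100.0, cycles=10000)
print(f"final crack length: {a_final * 1e3:.3f} mm")
```

    An environmental enhancement like the one reported here would appear in this framework as a larger effective C (and possibly a different m) for the PWR-water data than for the air data.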

  14. Benchmark simulation Model no 2 in Matlab-simulink: towards plant-wide WWTP control strategy evaluation.

    PubMed

    Vreck, D; Gernaey, K V; Rosen, C; Jeppsson, U

    2006-01-01

    In this paper, implementation of the Benchmark Simulation Model No 2 (BSM2) within Matlab-Simulink is presented. The BSM2 is developed for plant-wide WWTP control strategy evaluation on a long-term basis. It consists of a pre-treatment process, an activated sludge process, and sludge treatment processes. Extended evaluation criteria are proposed for plant-wide control strategy assessment. Default open-loop and closed-loop strategies are also proposed to be used as references with which to compare other control strategies. Simulations indicate that the BSM2 is an appropriate tool for plant-wide control strategy evaluation.

  15. A CUMULATIVE MIGRATION METHOD FOR COMPUTING RIGOROUS TRANSPORT CROSS SECTIONS AND DIFFUSION COEFFICIENTS FOR LWR LATTICES WITH MONTE CARLO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhaoyuan Liu; Kord Smith; Benoit Forget

    2016-05-01

    A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used, and only recently published, CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied “out-scatter” transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of infinite-medium hydrogen. The new method has also been applied to computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.
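    For reference, the conventional “out-scatter” transport correction that the paper critiques takes the form Σ_tr = Σ_t − μ̄Σ_s, with diffusion coefficient D = 1/(3Σ_tr); a minimal sketch using illustrative hydrogen-like numbers (not BEAVRS data):

```python
# The conventional "out-scatter" transport correction:
#   sigma_tr = sigma_t - mu_bar * sigma_s,   D = 1 / (3 * sigma_tr),
# where mu_bar = 2/(3A) is the average scattering cosine for elastic
# scattering that is isotropic in the center-of-mass frame.
def outscatter_diffusion(sigma_t, sigma_s, A):
    mu_bar = 2.0 / (3.0 * A)
    sigma_tr = sigma_t - mu_bar * sigma_s   # transport cross section (1/cm)
    return sigma_tr, 1.0 / (3.0 * sigma_tr)  # and diffusion coefficient (cm)

# Illustrative hydrogen-like values (1/cm), chosen for readability only.
sigma_tr, D = outscatter_diffusion(sigma_t=0.65, sigma_s=0.60, A=1)
print(f"sigma_tr = {sigma_tr:.3f} 1/cm, D = {D:.4f} cm")
```

    The paper's point is that this simple correction is inaccurate for strongly anisotropic, heterogeneous cases, which is what motivates the rigorous cumulative migration method.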

  16. A study of workstation computational performance for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey M.; Cleveland, Jeff I., II

    1995-01-01

    With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.

  17. Proficiency performance benchmarks for removal of simulated brain tumors using a virtual reality simulator NeuroTouch.

    PubMed

    AlZhrani, Gmaan; Alotaibi, Fahad; Azarnoush, Hamed; Winkler-Schwartz, Alexander; Sabbagh, Abdulrahman; Bajunaid, Khalid; Lajoie, Susanne P; Del Maestro, Rolando F

    2015-01-01

    Assessment of neurosurgical technical skills involved in the resection of cerebral tumors in operative environments is complex. Educators emphasize the need to develop and use objective and meaningful assessment tools that are reliable and valid for assessing trainees' progress in acquiring surgical skills. The purpose of this study was to develop proficiency performance benchmarks for a newly proposed set of objective measures (metrics) of neurosurgical technical skills performance during simulated brain tumor resection using a new virtual reality simulator (NeuroTouch). Each participant performed the resection of 18 simulated brain tumors of different complexity using the NeuroTouch platform. Surgical performance was computed using Tier 1 and Tier 2 metrics derived from NeuroTouch simulator data consisting of (1) safety metrics, including (a) volume of surrounding simulated normal brain tissue removed, (b) sum of forces utilized, and (c) maximum force applied during tumor resection; (2) quality of operation metric, which involved the percentage of tumor removed; and (3) efficiency metrics, including (a) instrument total tip path lengths and (b) frequency of pedal activation. All studies were conducted in the Neurosurgical Simulation Research Centre, Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada. A total of 33 participants were recruited, including 17 experts (board-certified neurosurgeons) and 16 novices (7 senior and 9 junior neurosurgery residents). The results demonstrated that "expert" neurosurgeons resected less surrounding simulated normal brain tissue and less tumor tissue than residents. These data are consistent with the concept that "experts" focused more on safety of the surgical procedure compared with novices. By analyzing experts' neurosurgical technical skills performance on these different metrics, we were able to establish benchmarks for goal proficiency performance training of neurosurgery residents. 
This study furthers our understanding of expert neurosurgical performance during the resection of simulated virtual reality tumors and provides neurosurgical trainees with predefined proficiency performance benchmarks designed to maximize the learning of specific surgical technical skills. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  18. Development and Experimental Benchmark of Simulations to Predict Used Nuclear Fuel Cladding Temperatures during Drying and Transfer Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greiner, Miles

    Radial hydride formation in high-burnup used fuel cladding has the potential to radically reduce its ductility and suitability for long-term storage and eventual transport. To avoid this formation, the maximum post-reactor temperature must remain sufficiently low to limit the cladding hoop stress, so that hydrogen from the existing circumferential hydrides will not dissolve and become available to re-precipitate into radial hydrides under the slow cooling conditions during drying, transfer, and early dry-cask storage. The objective of this research is to develop and experimentally benchmark computational fluid dynamics simulations of heat transfer in post-pool-storage drying operations, when high-burnup fuel cladding is likely to experience its highest temperature. These benchmarked tools can play a key role in evaluating dry cask storage systems for extended storage of high-burnup fuels and post-storage transportation, including fuel retrievability. The benchmarked tools will be used to aid the design of efficient drying processes, as well as to estimate variations of surface temperatures as a means of inferring helium integrity inside the canister or cask. This work will be conducted effectively because the principal investigator has experience developing these types of simulations and has constructed a test facility that can be used to benchmark them.

  19. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    PubMed

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
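    The benchmark-sweep workflow such a suite supports (run each method over many datasets and tabulate accuracy) can be sketched as below; synthetic separable datasets and two toy classifiers stand in for the PMLB datasets and real ML libraries, so the loop structure, not the models, is the point:

```python
import random

# Sketch of a benchmark sweep: evaluate several methods over many datasets.
# Synthetic 2-D datasets replace PMLB data; a majority-class baseline and a
# hand-rolled 1-nearest-neighbor classifier replace real ML methods.
def make_dataset(rng, n=200):
    data = []
    for _ in range(n):
        x = [rng.random(), rng.random()]
        y = 1 if x[0] + x[1] > 1.0 else 0   # a learnable decision boundary
        data.append((x, y))
    return data[:150], data[150:]           # train / test split

def majority(train, test):
    """Accuracy of always predicting the most common training class."""
    pred = max((sum(1 for _, y in train if y == c), c) for c in (0, 1))[1]
    return sum(1 for _, y in test if y == pred) / len(test)

def one_nn(train, test):
    """Accuracy of 1-nearest-neighbor classification (squared Euclidean)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    hits = 0
    for x, y in test:
        _, yhat = min((dist2(x, xt), yt) for xt, yt in train)
        if yhat == y:
            hits += 1
    return hits / len(test)

rng = random.Random(0)
for i in range(3):                          # three stand-in "datasets"
    train, test = make_dataset(rng)
    print(i, majority(train, test), one_nn(train, test))
```

    With the real suite, the dataset loop would instead iterate over named PMLB datasets, and the per-dataset accuracy table is what enables the clustering of datasets and algorithms by performance that the abstract describes.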

  20. Stress corrosion of low alloy steels used in external bolting on pressurised water reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skeldon, P.; Hurst, P.; Smart, N.R.

    1992-12-31

    The stress corrosion cracking (SCC) susceptibility of AISI 4140 and AISI 4340 steels has been evaluated in five environments: three simulating a leaking aqueous boric acid environment and two simulating ambient external conditions, i.e., moist air and salt spray. Both steels were found to be highly susceptible to SCC in all environments at hardnesses of 400 VPN and above. The susceptibility was greatly reduced at hardnesses below 330 VPN, but in one environment, viz. refluxing PWR primary water, SCC was observed at hardnesses as low as 260 VPN. Threshold stress intensities for SCC were frequently lower than those in the literature.

  1. Current and anticipated uses of thermal-hydraulic codes in Germany

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teschendorff, V.; Sommer, F.; Depisch, F.

    1997-07-01

    In Germany, one third of the electrical power is generated by nuclear plants. ATHLET and S-RELAP5 are successfully applied for safety analyses of the existing PWR and BWR reactors and possible future reactors, e.g. EPR. Continuous development and assessment of thermal-hydraulic codes are necessary in order to meet present and future needs of licensing organizations, utilities, and vendors. Desired improvements include thermal-hydraulic models, multi-dimensional simulation, computational speed, interfaces to coupled codes, and code architecture. Real-time capability will be essential for application in full-scope simulators. Comprehensive code validation and quantification of uncertainties are prerequisites for future best-estimate analyses.

  2. Design and implementation of a simple nuclear power plant simulator

    NASA Astrophysics Data System (ADS)

    Miller, William H.

    1983-02-01

A simple PWR nuclear power plant simulator has been designed and implemented on a minicomputer system. The system is intended for student use in understanding the power operation of a nuclear power plant. A PDP-11 minicomputer calculates reactor parameters in real time, displays the results on a graphics terminal, and accepts control inputs from a keyboard and joystick. Plant parameters calculated by the model include the core reactivity (based upon control rod positions, soluble boron concentration, and reactivity feedback effects), the total core power, the axial core power distribution, the temperature and pressure in the primary and secondary coolant loops, etc.

  3. Emergy assessment of three home courtyard agriculture production systems in Tibet Autonomous Region, China*

    PubMed Central

    Guan, Fa-chun; Sha, Zhi-peng; Zhang, Yu-yang; Wang, Jun-feng; Wang, Chao

    2016-01-01

    Home courtyard agriculture is an important model of agricultural production on the Tibetan plateau. Because of the sensitive and fragile plateau environment, it needs to have optimal performance characteristics, including high sustainability, low environmental pressure, and high economic benefit. Emergy analysis is a promising tool for evaluation of the environmental-economic performance of these production systems. In this study, emergy analysis was used to evaluate three courtyard agricultural production models: Raising Geese in Corn Fields (RGICF), Conventional Corn Planting (CCP), and Pea-Wheat Rotation (PWR). The results showed that the RGICF model produced greater economic benefits, and had higher sustainability, lower environmental pressure, and higher product safety than the CCP and PWR models. The emergy yield ratio (EYR) and emergy self-support ratio (ESR) of RGICF were 0.66 and 0.11, respectively, lower than those of the CCP production model, and 0.99 and 0.08, respectively, lower than those of the PWR production model. The impact of RGICF (1.45) on the environment was lower than that of CCP (2.26) and PWR (2.46). The emergy sustainable indices (ESIs) of RGICF were 1.07 and 1.02 times higher than those of CCP and PWR, respectively. With regard to the emergy index of product safety (EIPS), RGICF had a higher safety index than those of CCP and PWR. Overall, our results suggest that the RGICF model is advantageous and provides higher environmental benefits than the CCP and PWR systems. PMID:27487808

  4. Emergy assessment of three home courtyard agriculture production systems in Tibet Autonomous Region, China.

    PubMed

    Guan, Fa-Chun; Sha, Zhi-Peng; Zhang, Yu-Yang; Wang, Jun-Feng; Wang, Chao

    2016-08-01

    Home courtyard agriculture is an important model of agricultural production on the Tibetan plateau. Because of the sensitive and fragile plateau environment, it needs to have optimal performance characteristics, including high sustainability, low environmental pressure, and high economic benefit. Emergy analysis is a promising tool for evaluation of the environmental-economic performance of these production systems. In this study, emergy analysis was used to evaluate three courtyard agricultural production models: Raising Geese in Corn Fields (RGICF), Conventional Corn Planting (CCP), and Pea-Wheat Rotation (PWR). The results showed that the RGICF model produced greater economic benefits, and had higher sustainability, lower environmental pressure, and higher product safety than the CCP and PWR models. The emergy yield ratio (EYR) and emergy self-support ratio (ESR) of RGICF were 0.66 and 0.11, respectively, lower than those of the CCP production model, and 0.99 and 0.08, respectively, lower than those of the PWR production model. The impact of RGICF (1.45) on the environment was lower than that of CCP (2.26) and PWR (2.46). The emergy sustainable indices (ESIs) of RGICF were 1.07 and 1.02 times higher than those of CCP and PWR, respectively. With regard to the emergy index of product safety (EIPS), RGICF had a higher safety index than those of CCP and PWR. Overall, our results suggest that the RGICF model is advantageous and provides higher environmental benefits than the CCP and PWR systems.
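    The indices reported in the two emergy records above (EYR, ESR, ELR-style environmental loading, ESI) follow standard emergy-accounting definitions. The sketch below uses the common textbook formulations; the papers' exact formulations, data categories, and transformities may differ, so treat every formula and name here as an assumption for illustration.

```python
def emergy_indices(renewable, nonrenewable, purchased):
    """Common emergy indices; all inputs in solar emjoules (sej).
    Formulas are the standard textbook definitions (assumed, not
    taken from the papers above)."""
    total = renewable + nonrenewable + purchased
    eyr = total / purchased                        # Emergy Yield Ratio
    elr = (nonrenewable + purchased) / renewable   # Environmental Loading Ratio
    esi = eyr / elr                                # Emergy Sustainability Index
    esr = (renewable + nonrenewable) / total       # Emergy Self-support Ratio
    return {"EYR": eyr, "ELR": elr, "ESI": esi, "ESR": esr}
```

    A higher ESI (a high yield ratio relative to environmental loading) is what the RGICF model's reported advantage over CCP and PWR corresponds to.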

  5. Cyclic crack growth behavior of reactor pressure vessel steels in light water reactor environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Der Sluys, W.A.; Emanuelson, R.H.

    1986-01-01

During normal operation, light water reactor (LWR) pressure vessels are subjected to a variety of transients resulting in time-varying stresses. Consequently, fatigue and environmentally assisted fatigue are growth mechanisms relevant to flaws in these pressure vessels. In order to provide a better understanding of the resistance of nuclear pressure vessel steels to the flaw growth process, a series of fracture mechanics experiments was conducted to generate data on the rate of cyclic crack growth in SA508-2 and SA533B-1 steels in simulated 550°F boiling water reactor (BWR) and 550°F pressurized water reactor (PWR) environments. Areas investigated over the course of the test program included the effects of loading frequency and R ratio (Kmin/Kmax) on crack growth rate as a function of the stress intensity factor range (ΔK). In addition, the effect of sulfur content of the test material on the cyclic crack growth rate was studied. Cyclic crack growth rates were found to be controlled by ΔK, R ratio, and loading frequency. The sulfur impurity content of the reactor pressure vessel steels studied had a significant effect on the cyclic crack growth rates. The higher growth rates were always associated with materials of higher sulfur content. For a given level of sulfur, growth rates were higher in a 550°F simulated BWR environment than in a 550°F simulated PWR environment. In both environments cyclic crack growth rates were a strong function of the loading frequency.
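    The ΔK-controlled behavior described above is conventionally summarized by a Paris-law power relation, da/dN = C·(ΔK)^m. The sketch below uses illustrative placeholder constants, not values fitted to the SA508/SA533 data in this study; environment, R ratio, and frequency effects would enter through C and m.

```python
def paris_growth_rate(delta_k, c=1e-11, m=3.0):
    """Cyclic crack growth per cycle, da/dN = C * (dK)^m (Paris' law).
    delta_k in MPa*sqrt(m); c and m are illustrative material constants,
    not fitted values from the study above."""
    return c * delta_k ** m
```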

  6. Comparison of fresh fuel experimental measurements to MCNPX calculations using self-interrogation neutron resonance densitometry

    NASA Astrophysics Data System (ADS)

    LaFleur, Adrienne M.; Charlton, William S.; Menlove, Howard O.; Swinhoe, Martyn T.

    2012-07-01

    A new non-destructive assay technique called Self-Interrogation Neutron Resonance Densitometry (SINRD) is currently being developed at Los Alamos National Laboratory (LANL) to improve existing nuclear safeguards measurements for Light Water Reactor (LWR) fuel assemblies. SINRD consists of four 235U fission chambers (FCs): bare FC, boron carbide shielded FC, Gd covered FC, and Cd covered FC. Ratios of different FCs are used to determine the amount of resonance absorption from 235U in the fuel assembly. The sensitivity of this technique is based on using the same fissile materials in the FCs as are present in the fuel because the effect of resonance absorption lines in the transmitted flux is amplified by the corresponding (n,f) reaction peaks in the fission chamber. In this work, experimental measurements were performed in air with SINRD using a reference Pressurized Water Reactor (PWR) 15×15 low enriched uranium (LEU) fresh fuel assembly at LANL. The purpose of this experiment was to assess the following capabilities of SINRD: (1) ability to measure the effective 235U enrichment of the PWR fresh LEU fuel assembly and (2) sensitivity and penetrability to the removal of fuel pins from an assembly. These measurements were compared to Monte Carlo N-Particle eXtended transport code (MCNPX) simulations to verify the accuracy of the MCNPX model of SINRD. The reproducibility of experimental measurements via MCNPX simulations is essential to validating the results and conclusions obtained from the simulations of SINRD for LWR spent fuel assemblies.

  7. Evaluation of control strategies using an oxidation ditch benchmark.

    PubMed

Abusam, A.; Keesman, K. J.; Spanjers, H.; van Straten, G.; Meinema, K.

    2002-01-01

    This paper presents validation and implementation results of a benchmark developed for a specific full-scale oxidation ditch wastewater treatment plant. A benchmark is a standard simulation procedure that can be used as a tool in evaluating various control strategies proposed for wastewater treatment plants. It is based on model and performance criteria development. Testing of this benchmark, by comparing benchmark predictions to real measurements of the electrical energy consumptions and amounts of disposed sludge for a specific oxidation ditch WWTP, has shown that it can (reasonably) be used for evaluating the performance of this WWTP. Subsequently, the validated benchmark was then used in evaluating some basic and advanced control strategies. Some of the interesting results obtained are the following: (i) influent flow splitting ratio, between the first and the fourth aerated compartments of the ditch, has no significant effect on the TN concentrations in the effluent, and (ii) for evaluation of long-term control strategies, future benchmarks need to be able to assess settlers' performance.

  8. Comparison of mapping algorithms used in high-throughput sequencing: application to Ion Torrent data

    PubMed Central

    2014-01-01

    Background The rapid evolution in high-throughput sequencing (HTS) technologies has opened up new perspectives in several research fields and led to the production of large volumes of sequence data. A fundamental step in HTS data analysis is the mapping of reads onto reference sequences. Choosing a suitable mapper for a given technology and a given application is a subtle task because of the difficulty of evaluating mapping algorithms. Results In this paper, we present a benchmark procedure to compare mapping algorithms used in HTS using both real and simulated datasets and considering four evaluation criteria: computational resource and time requirements, robustness of mapping, ability to report positions for reads in repetitive regions, and ability to retrieve true genetic variation positions. To measure robustness, we introduced a new definition for a correctly mapped read taking into account not only the expected start position of the read but also the end position and the number of indels and substitutions. We developed CuReSim, a new read simulator, that is able to generate customized benchmark data for any kind of HTS technology by adjusting parameters to the error types. CuReSim and CuReSimEval, a tool to evaluate the mapping quality of the CuReSim simulated reads, are freely available. We applied our benchmark procedure to evaluate 14 mappers in the context of whole genome sequencing of small genomes with Ion Torrent data for which such a comparison has not yet been established. Conclusions A benchmark procedure to compare HTS data mappers is introduced with a new definition for the mapping correctness as well as tools to generate simulated reads and evaluate mapping quality. 
The application of this procedure to Ion Torrent data from the whole genome sequencing of small genomes has allowed us to validate our benchmark procedure and demonstrate that it is helpful for selecting a mapper based on the intended application, questions to be addressed, and the technology used. This benchmark procedure can be used to evaluate existing or in-development mappers as well as to optimize parameters of a chosen mapper for any application and any sequencing platform. PMID:24708189
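    The correctness definition introduced above (start position, end position, and edit counts must all agree) can be sketched as a predicate. The field names and tolerance parameters below are assumptions for illustration, not CuReSimEval's actual interface.

```python
def is_correctly_mapped(expected, observed, pos_tol=0, edit_tol=0):
    """A read counts as correctly mapped (in the sense sketched above)
    if both its start and end positions match the expected alignment
    within pos_tol, and its reported indel and substitution counts
    match within edit_tol. Field names are hypothetical."""
    return (abs(expected["start"] - observed["start"]) <= pos_tol
            and abs(expected["end"] - observed["end"]) <= pos_tol
            and abs(expected["indels"] - observed["indels"]) <= edit_tol
            and abs(expected["subs"] - observed["subs"]) <= edit_tol)
```

    Requiring the end position and edit counts, not just the start, is what distinguishes this definition from the start-position-only criterion used in earlier mapper comparisons.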

  9. Benchmarking of measurement and simulation of transverse rms-emittance growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeon, Dong-O

    2008-01-01

Transverse emittance growth along the Alvarez DTL section is a major concern with respect to the preservation of beam quality of high current beams at the GSI UNILAC. In order to define measures to reduce this growth, appropriate tools to simulate the beam dynamics are indispensable. This paper describes the benchmarking of three beam dynamics simulation codes, i.e. DYNAMION, PARMILA, and PARTRAN, against systematic measurements of beam emittances for different machine settings. Experimental set-ups, data reduction, the preparation of the simulations, and the evaluation of the simulations are described. It was found that the measured 100%-rms-emittances behind the DTL exceed the simulated values. Comparing measured 90%-rms-emittances to the simulated 95%-rms-emittances gives fair to good agreement instead. The sum of horizontal and vertical emittances is even described well by the codes as long as experimental 90%-rms-emittances are compared to simulated 95%-rms-emittances. Finally, the successful reduction of transverse emittance growth by systematic beam matching is reported.
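    The rms emittance being benchmarked above has the standard statistical definition ε_rms = sqrt(⟨x²⟩⟨x′²⟩ − ⟨xx′⟩²) over centered second moments of the particle distribution. A minimal sketch of that formula (the x-plane only; fractional 90%/95% emittances would additionally require discarding outlying particles):

```python
import math

def rms_emittance(x, xp):
    """Transverse rms emittance from particle coordinates x and x'
    using centered second moments:
        eps = sqrt(<x^2><x'^2> - <x x'>^2)."""
    n = len(x)
    mx = sum(x) / n
    mxp = sum(xp) / n
    x2 = sum((xi - mx) ** 2 for xi in x) / n
    xp2 = sum((xpi - mxp) ** 2 for xpi in xp) / n
    xxp = sum((xi - mx) * (xpi - mxp) for xi, xpi in zip(x, xp)) / n
    return math.sqrt(x2 * xp2 - xxp ** 2)
```

    A fully correlated (linearly focused) beam has zero rms emittance; uncorrelated spread in x and x' maximizes it, which is why emittance growth tracks degradation of beam quality.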

  10. RELAP-7 Level 2 Milestone Report: Demonstration of a Steady State Single Phase PWR Simulation with RELAP-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Andrs; Ray Berry; Derek Gaston

The document contains the simulation results of a steady-state model PWR problem with the RELAP-7 code. The RELAP-7 code is the next-generation nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework, MOOSE (Multi-Physics Object-Oriented Simulation Environment). This report summarizes the initial results of simulating a model steady-state single-phase PWR problem using the current version of the RELAP-7 code. The major purpose of this demonstration simulation is to show that the RELAP-7 code can be rapidly developed to simulate single-phase reactor problems. RELAP-7 is a new project started on October 1st, 2011. It will become the main reactor systems simulation toolkit for RISMC (Risk Informed Safety Margin Characterization) and the next-generation tool in the RELAP reactor safety/systems analysis application series (the replacement for RELAP5). The key to the success of RELAP-7 is the simultaneous advancement of physical models, numerical methods, and software design while maintaining a solid user perspective. Physical models include both PDEs (Partial Differential Equations) and ODEs (Ordinary Differential Equations) as well as experiment-based closure models. RELAP-7 will eventually utilize well-posed governing equations for multiphase flow, which can be strictly verified. Closure models used in RELAP5 and newly developed models will be reviewed and selected to reflect the progress made during the past three decades. RELAP-7 uses modern numerical methods, which allow implicit time integration, higher-order schemes in both time and space, and strongly coupled multi-physics simulations. RELAP-7 is written in the object-oriented programming language C++. Its development follows modern software design paradigms. The code is easy to read, develop, maintain, and couple with other codes. 
Most importantly, the modern software design allows the RELAP-7 code to evolve with time. RELAP-7 is a MOOSE-based application; MOOSE is a framework for solving computational engineering problems in a well-planned, managed, and coordinated way. By leveraging millions of lines of open-source software packages, such as PETSc (a nonlinear solver library developed at Argonne National Laboratory) and libMesh (a finite element analysis package developed at the University of Texas), MOOSE significantly reduces the expense and time required to develop new applications. Numerical integration methods and mesh management for parallel computation are provided by MOOSE, so RELAP-7 code developers need only focus on physics and user experience. By using the MOOSE development environment, the RELAP-7 code is developed following the same modern software design paradigms used for other MOOSE development efforts. There are currently over 20 different MOOSE-based applications, ranging from 3-D transient neutron transport and detailed 3-D transient fuel performance analysis to long-term material aging. Multi-physics and multi-dimensional analysis capabilities can be obtained by coupling RELAP-7 with other MOOSE-based applications and by leveraging capabilities developed by other DOE programs. This allows the focus of RELAP-7 to be restricted to systems analysis-type simulations while retaining and significantly extending RELAP5's capabilities.

  11. Evaluation and comparison of gross primary production estimates for the Northern Great Plains grasslands

    USGS Publications Warehouse

    Zhang, Li; Wylie, Bruce K.; Loveland, Thomas R.; Fosnight, Eugene A.; Tieszen, Larry L.; Ji, Lei; Gilmanov, Tagir

    2007-01-01

Two spatially-explicit estimates of gross primary production (GPP) are available for the Northern Great Plains. An empirical piecewise regression (PWR) GPP model was developed from flux tower measurements to map carbon flux across the region. The Moderate Resolution Imaging Spectrometer (MODIS) GPP model is a process-based model that uses flux tower data to calibrate its parameters. Verification and comparison of the regional PWR GPP and the global MODIS GPP are important for the modeling of grassland carbon flux. This study compared GPP estimates from the PWR and MODIS models with data from five flux towers in the grasslands. Among them, PWR GPP and MODIS GPP showed good agreement with tower-based GPP at three towers. The global MODIS GPP, however, did not agree well with tower-based GPP at the two other towers, probably because of the insensitivity of the MODIS model to regional ecosystem and climate change and extreme soil moisture conditions. Cross-validation indicated that the PWR model is relatively robust for predicting regional grassland GPP. However, the PWR model should include a wide variety of flux tower data as the training data sets to obtain more accurate results. In addition, GPP maps based on the PWR and MODIS models were compared for the entire region. In the northwest and south, PWR GPP was much higher than MODIS GPP. These areas were characterized by higher water holding capacity, with a lower proportion of C4 grasses in the northwest and a higher proportion of C4 grasses in the south. In the central and southeastern regions, PWR GPP was much lower than MODIS GPP under complicated conditions with generally mixed C3/C4 grasses. 
The analysis indicated that the global MODIS GPP model has some limitations on detecting moisture stress, which may have been caused by the facts that C3 and C4 grasses are not distinguished, water stress is driven by vapor pressure deficit (VPD) from coarse meteorological data, and MODIS land cover data are unable to differentiate the sub-pixel cropland components.
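    A piecewise regression model of the kind named above fits different linear segments on either side of an estimated breakpoint in the predictor. The toy two-segment sketch below is illustrative only; the actual PWR model's predictors, breakpoints, and slopes are fitted from flux tower data and are not reproduced here.

```python
def piecewise_gpp(ndvi, breakpoint=0.5, slope_lo=2.0, slope_hi=8.0):
    """Toy two-segment piecewise-linear GPP model in the spirit of the
    PWR approach: one slope below a breakpoint in the predictor
    (e.g. a vegetation index), a steeper slope above it, with the two
    segments joined continuously at the breakpoint. All constants are
    illustrative assumptions."""
    if ndvi <= breakpoint:
        return slope_lo * ndvi
    return slope_lo * breakpoint + slope_hi * (ndvi - breakpoint)
```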

  12. A NIST Kinetic Data Base for PAH Reaction and Soot Particle Inception During Combusion

    DTIC Science & Technology

    2007-12-01

in Computational Fluid Dynamics (CFD) codes that have led to the capability of describing complex reactive flow problems and thus simulating... parameters. However, in the absence of data, estimates must be made. Since the chemistry of combustion is extremely complex and for proper description...118:381-389 9. Babushok, V. and Tsang, W., J. Prop. and Pwr. 20 (2004) 403-414. 10. Fournet, R., Warth, V., Glaude, P.A., Battin-Leclerc, F

  13. Engine dynamic analysis with general nonlinear finite element codes. Part 2: Bearing element implementation, overall numerical characteristics and benchmarking

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Adams, M.; Fertis, J.; Zeid, I.; Lam, P.

    1982-01-01

Finite element codes are used in modelling the rotor-bearing-stator structure common to the turbine industry. Strategies are developed that enable engine dynamic simulation with available finite element codes. The elements developed are benchmarked by incorporation into a general purpose code (ADINA), and the numerical characteristics of finite element type rotor-bearing-stator simulations are evaluated through the use of various types of explicit/implicit numerical integration operators. The overall numerical efficiency of the procedure is improved.

  14. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeHart, Mark D.; Mausolff, Zander; Weems, Zach

    2016-08-01

One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data have shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuel performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validation of the transient solution of Rattlesnake using other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considers both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.

  15. Targeting the affordability of cigarettes: a new benchmark for taxation policy in low-income and middle-income countries.

    PubMed

    Blecher, Evan

    2010-08-01

To investigate the appropriateness of tax incidence (the percentage of the retail price occupied by taxes) benchmarking in low-income and middle-income countries (LMICs) with rapidly growing economies and to explore the viability of an alternative tax policy rule based on the affordability of cigarettes. The paper outlines criticisms of tax incidence benchmarking, particularly in the context of LMICs. It then considers an affordability-based benchmark using relative income price (RIP) as a measure of affordability. The RIP measures the percentage of annual per capita GDP required to purchase 100 packs of cigarettes. Using South Africa as a case study of an LMIC, future consumption is simulated using both tax incidence benchmarks and affordability benchmarks. I show that a tax incidence benchmark is not an optimal policy tool in South Africa and that an affordability benchmark could be a more effective means of reducing tobacco consumption in the future. Although a tax incidence benchmark was successful in increasing prices and reducing tobacco consumption in South Africa in the past, this approach has drawbacks, particularly in the context of a rapidly growing LMIC economy. An affordability benchmark represents an appropriate alternative that would be more effective in reducing future cigarette consumption.
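    The RIP measure defined above (the percentage of annual per capita GDP required to purchase 100 packs of cigarettes) reduces to a one-line computation; the function name and argument names below are assumptions for illustration.

```python
def relative_income_price(pack_price, gdp_per_capita):
    """Relative income price (RIP): the cost of 100 packs of cigarettes
    expressed as a percentage of annual per-capita GDP, per the
    definition in the abstract above."""
    return (100.0 * pack_price) / gdp_per_capita * 100.0
```

    Under this benchmark, rising incomes require rising prices just to hold RIP (affordability) constant, which is the property a fixed tax-incidence rule lacks in a fast-growing economy.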

  16. An Enriched Shell Element for Delamination Simulation in Composite Laminates

    NASA Technical Reports Server (NTRS)

    McElroy, Mark

    2015-01-01

    A formulation is presented for an enriched shell finite element capable of delamination simulation in composite laminates. The element uses an adaptive splitting approach for damage characterization that allows for straightforward low-fidelity model creation and a numerically efficient solution. The Floating Node Method is used in conjunction with the Virtual Crack Closure Technique to predict delamination growth and represent it discretely at an arbitrary ply interface. The enriched element is verified for Mode I delamination simulation using numerical benchmark data. After determining important mesh configuration guidelines for the vicinity of the delamination front in the model, a good correlation was found between the enriched shell element model results and the benchmark data set.

  17. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    NASA Astrophysics Data System (ADS)

    Kahler, A. C.; MacFarlane, R. E.; Mosteller, R. D.; Kiedrowski, B. C.; Frankle, S. C.; Chadwick, M. B.; McKnight, R. D.; Lell, R. M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S. F.; Sublet, J. C.; Trkov, A.; Trumbull, T. H.; Dunn, M.

    2011-12-01

The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 having been released in 2006. This revision expands upon that library, including the addition of new evaluated files (previously 393 neutron files, now 423, including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and the extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., "ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data," Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected, and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections, such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems, and LEU oxide lattice systems that mimic commercial PWR configurations, continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten, are greatly reduced. 
Improvements are also confirmed for selected actinide reaction rates such as 236U, 238,242Pu and 241,243Am capture in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical eigenvalues and a decreasing trend in calculated eigenvalue for 233U fueled systems as a function of Above-Thermal Fission Fraction remain. The comprehensive nature of this critical benchmark suite and the generally accurate calculated eigenvalues obtained with ENDF/B-VII.1 neutron cross sections support the conclusion that this is the most accurate general purpose ENDF/B cross section library yet released to the technical community.

  18. First benchmark of the Unstructured Grid Adaptation Working Group

    NASA Technical Reports Server (NTRS)

    Ibanez, Daniel; Barral, Nicolas; Krakos, Joshua; Loseille, Adrien; Michal, Todd; Park, Mike

    2017-01-01

Unstructured grid adaptation is a technology that holds the potential to improve the automation and accuracy of computational fluid dynamics and other computational disciplines. Difficulty producing the highly anisotropic elements that satisfy a resolution request on complex curved geometries has limited this technology's widespread adoption. The Unstructured Grid Adaptation Working Group is an open gathering of researchers working on adapting simplicial meshes to conform to a metric field. Current members span a wide range of institutions, including academia, industry, and national laboratories. The purpose of this group is to create a common basis for understanding and improving mesh adaptation. We present our first major contribution: a common set of benchmark cases, including input meshes and analytic metric specifications, that are publicly available to be used for evaluating any mesh adaptation code. We also present the results of several existing codes on these benchmark cases, to illustrate their utility in identifying key challenges common to all codes and important differences between available codes. Future directions are defined to expand this benchmark to mature the technology necessary to impact practical simulation workflows.

  19. Comparing Hospital Processes and Outcomes in California Medicare Beneficiaries: Simulation Prompts Reconsideration.

    PubMed

    Escobar, Gabriel J; Baker, Jennifer M; Turk, Benjamin J; Draper, David; Liu, Vincent; Kipnis, Patricia

    2017-01-01

This article is not a traditional research report. It describes how conducting a specific set of benchmarking analyses led us to broader reflections on hospital benchmarking. We reexamined an issue that has received far less attention from researchers than in the past: how variations in the hospital admission threshold might affect hospital rankings. Considering this threshold made us reconsider what benchmarking is and what future benchmarking studies might be like. Although we recognize that some of our assertions are speculative, they are based on our reading of the literature and previous and ongoing data analyses being conducted in our research unit. We describe the benchmarking analyses that led to these reflections. The Centers for Medicare and Medicaid Services' Hospital Compare Web site includes data on fee-for-service Medicare beneficiaries but does not control for severity of illness, which requires physiologic data now available in most electronic medical records. To address this limitation, we compared hospital processes and outcomes among Kaiser Permanente Northern California's (KPNC) Medicare Advantage beneficiaries and non-KPNC California Medicare beneficiaries between 2009 and 2010. We assigned a simulated severity of illness measure to each record and explored the effect of having the additional information on outcomes. We found that if the admission severity of illness in non-KPNC hospitals increased, KPNC hospitals' mortality performance would appear worse; conversely, if admission severity at non-KPNC hospitals decreased, KPNC hospitals' performance would appear better. Future hospital benchmarking should consider the impact of variation in admission thresholds.

  20. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Javier Ortensi; Sonat Sen

    2013-09-01

The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of simulation problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes.
A final OECD/NEA comparison report will compare the Phase I and III results of all other international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.

  1. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    DOE PAGES

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
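The computational bias quoted above is simply the relative deviation of the calculated keff from the benchmark value; a trivial sketch with hypothetical eigenvalues (not the evaluation's numbers):

```python
def bias_percent(k_calc, k_benchmark):
    """Relative bias of a calculated k-eff against the benchmark value, in %."""
    return 100.0 * (k_calc - k_benchmark) / k_benchmark

# Hypothetical eigenvalues chosen only to land inside the 0.9%-2.7% range
# reported above:
print(round(bias_percent(1.0205, 1.0115), 2))  # roughly 0.89
```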

  2. Ultracool dwarf benchmarks with Gaia primaries

    NASA Astrophysics Data System (ADS)

    Marocco, F.; Pinfield, D. J.; Cook, N. J.; Zapatero Osorio, M. R.; Montes, D.; Caballero, J. A.; Gálvez-Ortiz, M. C.; Gromadzki, M.; Jones, H. R. A.; Kurtev, R.; Smart, R. L.; Zhang, Z.; Cabrera Lavers, A. L.; García Álvarez, D.; Qi, Z. X.; Rickard, M. J.; Dover, L.

    2017-10-01

    We explore the potential of Gaia for the field of benchmark ultracool/brown dwarf companions, and present the results of an initial search for metal-rich/metal-poor systems. A simulated population of resolved ultracool dwarf companions to Gaia primary stars is generated and assessed. Of the order of ˜24 000 companions should be identifiable outside of the Galactic plane (|b| > 10 deg) with large-scale ground- and space-based surveys including late M, L, T and Y types. Our simulated companion parameter space covers 0.02 ≤ M/M⊙ ≤ 0.1, 0.1 ≤ age/Gyr ≤ 14 and -2.5 ≤ [Fe/H] ≤ 0.5, with systems required to have a false alarm probability <10-4, based on projected separation and expected constraints on common distance, common proper motion and/or common radial velocity. Within this bulk population, we identify smaller target subsets of rarer systems whose collective properties still span the full parameter space of the population, as well as systems containing primary stars that are good age calibrators. Our simulation analysis leads to a series of recommendations for candidate selection and observational follow-up that could identify ˜500 diverse Gaia benchmarks. As a test of the veracity of our methodology and simulations, our initial search uses UKIRT Infrared Deep Sky Survey and Sloan Digital Sky Survey to select secondaries, with the parameters of primaries taken from Tycho-2, Radial Velocity Experiment, Large sky Area Multi-Object fibre Spectroscopic Telescope and Tycho-Gaia Astrometric Solution. We identify and follow up 13 new benchmarks. These include M8-L2 companions, with metallicity constraints ranging in quality, but robust in the range -0.39 ≤ [Fe/H] ≤ +0.36, and with projected physical separation in the range 0.6 < s/kau < 76. Going forward, Gaia offers a very high yield of benchmark systems, from which diverse subsamples may be able to calibrate a range of foundational ultracool/sub-stellar theory and observation.

  3. Experimental benchmark of kinetic simulations of capacitively coupled plasmas in molecular gases

    NASA Astrophysics Data System (ADS)

    Donkó, Z.; Derzsi, A.; Korolov, I.; Hartmann, P.; Brandt, S.; Schulze, J.; Berger, B.; Koepke, M.; Bruneau, B.; Johnson, E.; Lafleur, T.; Booth, J.-P.; Gibson, A. R.; O'Connell, D.; Gans, T.

    2018-01-01

    We discuss the origin of uncertainties in the results of numerical simulations of low-temperature plasma sources, focusing on capacitively coupled plasmas. These sources can be operated in various gases/gas mixtures, over a wide domain of excitation frequency, voltage, and gas pressure. At low pressures, the non-equilibrium character of the charged particle transport prevails and particle-based simulations become the primary tools for their numerical description. The particle-in-cell method, complemented with Monte Carlo type description of collision processes, is a well-established approach for this purpose. Codes based on this technique have been developed by several authors/groups, and have been benchmarked with each other in some cases. Such benchmarking demonstrates the correctness of the codes, but the underlying physical model remains unvalidated. This is a key point, as this model should ideally account for all important plasma chemical reactions as well as for the plasma-surface interaction via including specific surface reaction coefficients (electron yields, sticking coefficients, etc). In order to test the models rigorously, comparison with experimental ‘benchmark data’ is necessary. Examples will be given regarding the studies of electron power absorption modes in O2, and CF4-Ar discharges, as well as on the effect of modifications of the parameters of certain elementary processes on the computed discharge characteristics in O2 capacitively coupled plasmas.
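In the particle-in-cell/Monte Carlo collision approach mentioned above, each simulated particle is tested for a collision every timestep. A minimal sketch of that test under an assumed constant total cross-section (illustrative numbers, not taken from the paper):

```python
import math
import random

def collision_probability(n_gas, sigma, speed, dt):
    """P(collision within dt) = 1 - exp(-n*sigma*v*dt), for a constant
    total cross-section sigma (m^2), gas density n_gas (m^-3), speed v (m/s)."""
    return 1.0 - math.exp(-n_gas * sigma * speed * dt)

# Assumed illustrative values: an electron at 1e6 m/s in a background gas of
# density 3.3e21 m^-3 (~100 mTorr) with a 1e-19 m^2 cross-section.
p = collision_probability(3.3e21, 1e-19, 1e6, 1e-10)

rng = random.Random(0)
hits = sum(rng.random() < p for _ in range(100_000))
print(round(p, 4), hits / 100_000)  # empirical collision rate tracks p
```

In a full PIC/MCC code the cross-section is energy dependent and the test is usually done with the null-collision method, but the per-step probability above is the core idea.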

  4. Issues in benchmarking human reliability analysis methods : a literature review.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lois, Erasmia; Forester, John Alan; Tran, Tuan Q.

There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  5. Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester

There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  6. The philosophy of benchmark testing a standards-based picture archiving and communications system.

    PubMed

    Richardson, N E; Thomas, J A; Lyche, D K; Romlein, J; Norton, G S; Dolecek, Q E

    1999-05-01

    The Department of Defense issued its requirements for a Digital Imaging Network-Picture Archiving and Communications System (DIN-PACS) in a Request for Proposals (RFP) to industry in January 1997, with subsequent contracts being awarded in November 1997 to the Agfa Division of Bayer and IBM Global Government Industry. The Government's technical evaluation process consisted of evaluating a written technical proposal as well as conducting a benchmark test of each proposed system at the vendor's test facility. The purpose of benchmark testing was to evaluate the performance of the fully integrated system in a simulated operational environment. The benchmark test procedures and test equipment were developed through a joint effort between the Government, academic institutions, and private consultants. Herein the authors discuss the resources required and the methods used to benchmark test a standards-based PACS.

  7. Assessment of PWR Steam Generator modelling in RELAP5/MOD2. International Agreement Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Putney, J.M.; Preece, R.J.

    1993-06-01

An assessment of Steam Generator (SG) modelling in the PWR thermal-hydraulic code RELAP5/MOD2 is presented. The assessment is based on a review of code assessment calculations performed in the UK and elsewhere, detailed calculations against a series of commissioning tests carried out on the Wolf Creek PWR, and analytical investigations of the phenomena involved in normal and abnormal SG operation. A number of modelling deficiencies are identified and their implications for PWR safety analysis are discussed -- including methods for compensating for the deficiencies through changes to the input deck. Consideration is also given as to whether the deficiencies will still be present in the successor code RELAP5/MOD3.

  8. Intercomparison of Monte Carlo radiation transport codes to model TEPC response in low-energy neutron and gamma-ray fields.

    PubMed

    Ali, F; Waker, A J; Waller, E J

    2014-10-01

Tissue-equivalent proportional counters (TEPC) can potentially be used as a portable and personal dosemeter in mixed neutron and gamma-ray fields, but what hinders this use is their typically large physical size. To formulate compact TEPC designs, the use of a Monte Carlo transport code is necessary to predict the performance of compact designs in these fields. To perform this modelling, three candidate codes were assessed: MCNPX 2.7.E, FLUKA 2011.2 and PHITS 2.24. In each code, benchmark simulations were performed involving the irradiation of a 5-in. TEPC with monoenergetic neutron fields and a 4-in. wall-less TEPC with monoenergetic gamma-ray fields. The frequency and dose mean lineal energies and dose distributions calculated from each code were compared with experimentally determined data. For the neutron benchmark simulations, PHITS produces data closest to the experimental values and for the gamma-ray benchmark simulations, FLUKA yields data closest to the experimentally determined quantities.
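The frequency- and dose-mean lineal energies compared above are simple moments of the measured spectrum. A sketch using the standard microdosimetry definitions and a hypothetical two-bin spectrum (illustration only, not measured TEPC data):

```python
def mean_lineal_energies(y, f):
    """Frequency-mean y_F and dose-mean y_D lineal energy from a spectrum:
    y_F = sum(f*y)/sum(f),  y_D = sum(f*y^2)/sum(f*y).
    y in keV/um; f are relative frequencies (any normalization)."""
    s_f = sum(f)
    s_fy = sum(fi * yi for fi, yi in zip(f, y))
    s_fy2 = sum(fi * yi * yi for fi, yi in zip(f, y))
    return s_fy / s_f, s_fy2 / s_fy

# Hypothetical two-bin spectrum: 90% of events at 1 keV/um, 10% at 10 keV/um.
y_f, y_d = mean_lineal_energies([1.0, 10.0], [0.9, 0.1])
print(y_f, y_d)  # y_D weights the rare high-lineal-energy events more heavily
```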

  9. Integrated modeling of second phase precipitation in cold-worked 316 stainless steels under irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamivand, Mahmood; Yang, Ying; Busby, Jeremy T.

The current work combines the Cluster Dynamics (CD) technique and CALPHAD-based precipitation modeling to address the second phase precipitation in cold-worked (CW) 316 stainless steels (SS) under irradiation at 300–400 °C. CD provides the radiation enhanced diffusion and dislocation evolution as inputs for the precipitation model. The CALPHAD-based precipitation model treats the nucleation, growth and coarsening of precipitation processes based on classical nucleation theory and evolution equations, and simulates the composition, size and size distribution of precipitate phases. We benchmark the model against available experimental data at fast reactor conditions (9.4 × 10⁻⁷ dpa/s and 390 °C) and then use the model to predict the phase instability of CW 316 SS under light water reactor (LWR) extended life conditions (7 × 10⁻⁸ dpa/s and 275 °C). The model accurately predicts the γ' (Ni3Si) precipitation evolution under fast reactor conditions and that the formation of this phase is dominated by radiation enhanced segregation. The model also predicts a carbide volume fraction that agrees well with available experimental data from a PWR but is much higher than the volume fraction observed in fast reactors. We propose that radiation enhanced dissolution and/or carbon depletion at sinks that occurs at high flux could be the main sources of this inconsistency. The integrated model predicts ~1.2% volume fraction for carbide and ~3.0% volume fraction for γ' for typical CW 316 SS (with 0.054 wt% carbon) under LWR extended life conditions. Finally, this work provides valuable insights into the magnitudes and mechanisms of precipitation in irradiated CW 316 SS for nuclear applications.
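The classical nucleation theory underlying the precipitation model gives a critical radius and an energy barrier for a spherical nucleus; a sketch with order-of-magnitude inputs assumed for illustration (not the paper's fitted CALPHAD parameters):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def cnt_critical(gamma, d_gv):
    """Classical nucleation theory for a spherical nucleus:
    critical radius r* = 2*gamma/|dGv|, barrier dG* = 16*pi*gamma^3/(3*dGv^2).
    gamma: interfacial energy (J/m^2); d_gv: volumetric driving force (J/m^3)."""
    r_star = 2.0 * gamma / abs(d_gv)
    dg_star = 16.0 * math.pi * gamma**3 / (3.0 * d_gv**2)
    return r_star, dg_star

# Assumed illustrative values of the order seen for coherent precipitates:
r_star, dg_star = cnt_critical(gamma=0.1, d_gv=-2.0e8)
print(r_star * 1e9, "nm critical radius;", dg_star / (K_B * 573), "kT at 300 °C")
```

The nucleation rate then scales as exp(-dG*/kT), which is why radiation-enhanced changes to the driving force or interfacial energy shift precipitation so strongly.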

  10. Integrated modeling of second phase precipitation in cold-worked 316 stainless steels under irradiation

    DOE PAGES

    Mamivand, Mahmood; Yang, Ying; Busby, Jeremy T.; ...

    2017-03-11

The current work combines the Cluster Dynamics (CD) technique and CALPHAD-based precipitation modeling to address the second phase precipitation in cold-worked (CW) 316 stainless steels (SS) under irradiation at 300–400 °C. CD provides the radiation enhanced diffusion and dislocation evolution as inputs for the precipitation model. The CALPHAD-based precipitation model treats the nucleation, growth and coarsening of precipitation processes based on classical nucleation theory and evolution equations, and simulates the composition, size and size distribution of precipitate phases. We benchmark the model against available experimental data at fast reactor conditions (9.4 × 10⁻⁷ dpa/s and 390 °C) and then use the model to predict the phase instability of CW 316 SS under light water reactor (LWR) extended life conditions (7 × 10⁻⁸ dpa/s and 275 °C). The model accurately predicts the γ' (Ni3Si) precipitation evolution under fast reactor conditions and that the formation of this phase is dominated by radiation enhanced segregation. The model also predicts a carbide volume fraction that agrees well with available experimental data from a PWR but is much higher than the volume fraction observed in fast reactors. We propose that radiation enhanced dissolution and/or carbon depletion at sinks that occurs at high flux could be the main sources of this inconsistency. The integrated model predicts ~1.2% volume fraction for carbide and ~3.0% volume fraction for γ' for typical CW 316 SS (with 0.054 wt% carbon) under LWR extended life conditions. Finally, this work provides valuable insights into the magnitudes and mechanisms of precipitation in irradiated CW 316 SS for nuclear applications.

  11. On the Solidification and Structure Formation during Casting of Large Inserts in Ferritic Nodular Cast Iron

    NASA Astrophysics Data System (ADS)

    Tadesse, Abel; Fredriksson, Hasse

    2018-06-01

    The graphite nodule count and size distributions for boiling water reactor (BWR) and pressurized water reactor (PWR) inserts were investigated by taking samples at heights of 2160 and 1150 mm, respectively. In each cross section, two locations were taken into consideration for both the microstructural and solidification modeling. The numerical solidification modeling was performed in a two-dimensional model by considering the nucleation and growth in eutectic ductile cast iron. The microstructural results reveal that the nodule size and count distribution along the cross sections are different in each location for both inserts. Finer graphite nodules appear in the thinner sections and close to the mold walls. The coarser nodules are distributed mostly in the last solidified location. The simulation result indicates that the finer nodules are related to a higher cooling rate and a lower degree of microsegregation, whereas the coarser nodules are related to a lower cooling rate and a higher degree of microsegregation. The solidification time interval and the last solidifying locations in the BWR and PWR are also different.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Overman, Nicole R.; Toloczko, Mychailo B.; Olszta, Matthew J.

High chromium, nickel-base Alloy 690 exhibits an increased resistance to stress corrosion cracking (SCC) in pressurized water reactor (PWR) primary water environments over lower chromium Alloy 600. As a result, Alloy 690 has been used to replace Alloy 600 for steam generator tubing, reactor pressure vessel nozzles and other pressure boundary components. However, recent laboratory crack-growth testing has revealed that heavily cold-worked Alloy 690 materials can become susceptible to SCC. To evaluate reasons for this increased SCC susceptibility, detailed characterizations have been performed on as-received and cold-worked Alloy 690 materials using electron backscatter diffraction (EBSD) and Vickers hardness measurements. Examinations were performed on cross sections of compact tension specimens that were used for SCC crack growth rate testing in simulated PWR primary water. Hardness and the EBSD integrated misorientation density could both be related to the degree of cold work for materials of similar grain size. However, a microstructural dependence was observed for strain correlations using EBSD and hardness, which should be considered if this technique is to be used for gaining insight on SCC growth rates.

  13. Technical Report: Benchmarking for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLoughlin, K.

    2016-01-22

    The software application “MetaQuant” was developed by our group at Lawrence Livermore National Laboratory (LLNL). It is designed to profile microbial populations in a sample using data from whole-genome shotgun (WGS) metagenomic DNA sequencing. Several other metagenomic profiling applications have been described in the literature. We ran a series of benchmark tests to compare the performance of MetaQuant against that of a few existing profiling tools, using real and simulated sequence datasets. This report describes our benchmarking procedure and results.
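Abundance estimates from read counts can carry confidence intervals in several ways; one minimal sketch is a Wilson score interval on a taxon's read fraction (a generic statistical method, not necessarily MetaQuant's approach):

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score confidence interval for a proportion k/n -- one simple way
    to attach an interval to an inferred taxon abundance from read counts."""
    phat = k / n
    denom = 1.0 + z * z / n
    center = (phat + z * z / (2 * n)) / denom
    half = z * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Hypothetical profile: 120 of 10,000 classified reads assigned to one taxon.
lo, hi = wilson_interval(120, 10_000)
print(round(lo, 4), round(hi, 4))  # interval bracketing the 1.2% point estimate
```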

  14. MoMaS reactive transport benchmark using PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Park, H.

    2017-12-01

The MoMaS benchmark was developed to enhance numerical simulation capability for reactive transport modeling in porous media. The benchmark was published in late September of 2009; it is not taken from a real chemical system, but it provides realistic and numerically challenging tests. PFLOTRAN is a state-of-the-art massively parallel subsurface flow and reactive transport code that is being used in multiple nuclear waste repository projects at Sandia National Laboratories, including the Waste Isolation Pilot Plant and Used Fuel Disposition. The MoMaS benchmark has three independent tests of easy, medium, and hard chemical complexity. This paper demonstrates how PFLOTRAN is applied to this benchmark exercise and shows results of the easy benchmark test case, which includes mixing of aqueous components and surface complexation. The surface complexation reactions include monodentate and bidentate reactions, which introduces difficulty in defining the selectivity coefficient if the reaction applies to a bulk reference volume: the selectivity coefficient becomes porosity dependent for the bidentate reaction in heterogeneous porous media. The benchmark is solved by PFLOTRAN with minimal modification to address the issue, and unit conversions were made to suit PFLOTRAN.
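The porosity dependence of the bidentate selectivity coefficient follows directly from the mass-action expression: rescaling the sorbed-phase concentrations (as a porosity change does when sites are tracked per bulk volume) leaves a monodentate quotient unchanged but divides a bidentate quotient by the scale factor. A numerical sketch with hypothetical concentrations:

```python
def mass_action_ratio(sorbed, free_site, aqueous, n_dentate):
    """Mass-action quotient Q = [>SnX] / ([>S]^n [X]), with sorbed-phase
    concentrations expressed per unit bulk volume."""
    return sorbed / (free_site**n_dentate * aqueous)

# Rescale sorbed-phase concentrations by alpha (e.g. a porosity change moves
# moles of surface sites per bulk volume); the aqueous basis is unchanged.
alpha = 2.0
q_mono = mass_action_ratio(1e-4, 1e-3, 1e-5, 1)
q_mono_scaled = mass_action_ratio(alpha * 1e-4, alpha * 1e-3, 1e-5, 1)
q_bi = mass_action_ratio(1e-4, 1e-3, 1e-5, 2)
q_bi_scaled = mass_action_ratio(alpha * 1e-4, alpha * 1e-3, 1e-5, 2)

print(q_mono_scaled / q_mono)  # 1.0 -> monodentate quotient is scale-invariant
print(q_bi_scaled / q_bi)      # 0.5 -> bidentate quotient picks up 1/alpha
```

This is why a bidentate log K referenced to bulk volume must be corrected when porosity varies across a heterogeneous domain.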

  15. Multidimensional effects in the thermal response of fuel rod simulators. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dabbs, R.D.; Ott, L.J.

    1980-01-01

One of the primary objectives of the Oak Ridge National Laboratory Pressurized-Water Reactor Blowdown Heat Transfer Separate-Effects Program is the determination of the transient surface temperature and surface heat flux of fuel pin simulators (FPSs) from internal thermocouple signals obtained during a loss-of-coolant experiment (LOCE) in the Thermal-Hydraulics Test Facility. This analysis requires the solution of the classical inverse heat conduction problem. The assumptions that allow the governing differential equation to be reduced to one dimension can introduce significant errors in the computed surface heat flux and surface temperature. The degree to which these computed variables are perturbed is addressed and quantified.
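The inverse problem above infers surface conditions from interior thermocouples. As a zeroth-order illustration only, the steady-state one-dimensional version reduces to linear extrapolation plus Fourier's law (hypothetical depths and conductivity; the transient inverse problem treated in the report is far harder and ill-posed):

```python
def surface_from_interior(t1, t2, x1, x2, k):
    """Estimate surface temperature and outgoing surface heat flux from two
    interior thermocouples at depths x1 < x2 (m) below the surface, assuming
    steady one-dimensional conduction with conductivity k (W/m-K)."""
    grad = (t2 - t1) / (x2 - x1)   # K/m, temperature rises into the solid
    t_surface = t1 - grad * x1     # linear extrapolation back to the surface
    q_out = k * grad               # Fourier's law: flux leaving the surface
    return t_surface, q_out

# Hypothetical readings from an internally heated fuel pin simulator:
t_s, q = surface_from_interior(t1=600.0, t2=650.0, x1=0.001, x2=0.002, k=20.0)
print(t_s, q)  # roughly 550.0 K surface and 1.0e6 W/m^2 outgoing flux
```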

  16. Large-break LOCA, in-reactor fuel bundle Materials Test MT-6A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, C.L.; Hesson, G.M.; Pilger, J.P.

    1993-09-01

This is a report on one of a series of experiments to simulate a loss-of-coolant accident (LOCA) using full-length fuel rods for pressurized water reactors (PWRs). The experiments were conducted by Pacific Northwest Laboratory (PNL) under the LOCA Simulation Program sponsored by the US Nuclear Regulatory Commission (NRC). The major objectives of this program were, first, to cause the maximum possible expansion of the cladding on the fuel rods through a short-term adiabatic temperature transient to 1200 K (1700 °F), leading to the rupture of the cladding, and, second, to determine, by reflooding the fuel rods, the rate at which the fuel bundle is cooled.

  17. Performance of exchange-correlation functionals in density functional theory calculations for liquid metal: A benchmark test for sodium.

    PubMed

    Han, Jeong-Hwan; Oda, Takuji

    2018-04-14

The performance of exchange-correlation functionals in density-functional theory (DFT) calculations for liquid metal has not been sufficiently examined. In the present study, benchmark tests of the Perdew-Burke-Ernzerhof (PBE), Armiento-Mattsson 2005 (AM05), PBE re-parameterized for solids, and local density approximation (LDA) functionals are conducted for liquid sodium. The pair correlation function, equilibrium atomic volume, bulk modulus, and relative enthalpy are evaluated at 600 K and 1000 K. Compared with the available experimental data, the errors range from -11.2% to 0.0% for the atomic volume, from -5.2% to 22.0% for the bulk modulus, and from -3.5% to 2.5% for the relative enthalpy, depending on the DFT functional. The generalized gradient approximation functionals are superior to the LDA functional, and the PBE and AM05 functionals exhibit the best performance. In addition, we assess whether the error tendency in liquid simulations is comparable to that in solid simulations; the results suggest that the atomic volume and relative enthalpy performances are comparable between solid and liquid states but that the bulk modulus performance is not. These benchmark test results indicate that the results of liquid simulations are significantly dependent on the exchange-correlation functional and that the DFT functional performance in solid simulations can be used to roughly estimate the performance in liquid simulations.
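The error figures above are signed relative deviations from experiment; a trivial sketch with placeholder values (invented numbers, not the paper's data):

```python
def relative_error_percent(calculated, experimental):
    """Signed relative deviation of a computed property from experiment, in %."""
    return 100.0 * (calculated - experimental) / experimental

# Placeholder atomic volumes (A^3/atom) for liquid Na against an assumed
# experimental value of 42.0 -- illustration only:
v_expt = 42.0
for name, v_calc in [("PBE", 41.6), ("LDA", 37.8)]:
    print(name, round(relative_error_percent(v_calc, v_expt), 1))
```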

  18. Performance of exchange-correlation functionals in density functional theory calculations for liquid metal: A benchmark test for sodium

    NASA Astrophysics Data System (ADS)

    Han, Jeong-Hwan; Oda, Takuji

    2018-04-01

The performance of exchange-correlation functionals in density-functional theory (DFT) calculations for liquid metal has not been sufficiently examined. In the present study, benchmark tests of the Perdew-Burke-Ernzerhof (PBE), Armiento-Mattsson 2005 (AM05), PBE re-parameterized for solids, and local density approximation (LDA) functionals are conducted for liquid sodium. The pair correlation function, equilibrium atomic volume, bulk modulus, and relative enthalpy are evaluated at 600 K and 1000 K. Compared with the available experimental data, the errors range from -11.2% to 0.0% for the atomic volume, from -5.2% to 22.0% for the bulk modulus, and from -3.5% to 2.5% for the relative enthalpy, depending on the DFT functional. The generalized gradient approximation functionals are superior to the LDA functional, and the PBE and AM05 functionals exhibit the best performance. In addition, we assess whether the error tendency in liquid simulations is comparable to that in solid simulations; the results suggest that the atomic volume and relative enthalpy performances are comparable between solid and liquid states but that the bulk modulus performance is not. These benchmark test results indicate that the results of liquid simulations are significantly dependent on the exchange-correlation functional and that the DFT functional performance in solid simulations can be used to roughly estimate the performance in liquid simulations.

  19. An Integrated Development Environment for Adiabatic Quantum Programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; McCaskey, Alex; Bennink, Ryan S

    2014-01-01

Adiabatic quantum computing is a promising route to the computational power afforded by quantum information processing. The recent availability of adiabatic hardware raises the question of how well quantum programs perform. Benchmarking behavior is challenging since the multiple steps to synthesize an adiabatic quantum program are highly tunable. We present an adiabatic quantum programming environment called JADE that provides control over all the steps taken during program development. JADE captures the workflow needed to rigorously benchmark performance while also allowing a variety of problem types, programming techniques, and processor configurations. We have also integrated JADE with a quantum simulation engine that enables program profiling using numerical calculation. The computational engine supports plug-ins for simulation methodologies tailored to various metrics and computing resources. We present the design, integration, and deployment of JADE and discuss its use for benchmarking adiabatic quantum programs.

  20. Development of Hplc Techniques for the Analysis of Trace Metal Species in the Primary Coolant of a Pressurised Water Reactor.

    NASA Astrophysics Data System (ADS)

    Barron, Keiron Robert Philip

Available from UMI in association with The British Library. The need to monitor corrosion products in the primary circuit of a pressurised water reactor (PWR), at a concentration of 10 pg ml⁻¹, is discussed. A review of trace and ultra-trace metal analysis, relevant to the specific requirements imposed by primary coolant chemistry, indicated that high performance liquid chromatography (HPLC), coupled with preconcentration of the sample, was an ideal technique. An HPLC system was developed to determine trace metal species in simulated PWR primary coolant. In order to achieve the desired detection limit, an on-line preconcentration system had to be developed. Separations were performed on Aminex A9 and Benson BC-X10 analytical columns. Detection was by post-column reaction with Eriochrome Black T and Calmagite. Linear calibrations of 2.5-100 ng of cobalt (the main species of interest) were achieved using up to 200 ml samples. The detection limit for a 200 ml sample was 10 pg ml⁻¹. In order to achieve the desired aim of on-line collection of species at 300 °C, the use of inorganic ion-exchangers is essential. A novel application, utilising the attractive features of the inorganic ion-exchangers titanium dioxide, zirconium dioxide, zirconium arsenophosphate and controlled-pore glass beads, was developed for the preconcentration of trace metal species at temperature and pressure. The performance of these exchangers, at ambient temperature and at 300 °C, was assessed by their inclusion in the developed analytical system and by the use of radioisotopes. The particular emphasis during development has been upon accuracy, reproducibility of recovery, stability of reagents and system contamination, studied by the use of radioisotopes and response to post-column reagents. This study, in conjunction with work carried out at Winfrith, resulted in a monitoring system that could follow changes in coolant chemistry, on deposition and release of metal species, in simulated PWR water loops. On-line detection of cobalt at 11 pg ml⁻¹ was recorded, which previously could not be achieved by other techniques.
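The linear calibration described above can be reproduced with an ordinary least-squares fit; the detector responses below are invented, perfectly linear placeholders over the stated 2.5-100 ng range, not measured data:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Invented responses, exactly 0.02 signal units per ng of cobalt:
mass_ng = [2.5, 10.0, 25.0, 50.0, 100.0]
response = [0.05, 0.20, 0.50, 1.00, 2.00]
slope, intercept = fit_line(mass_ng, response)
print(round(slope, 4), round(intercept, 4))
```

An unknown sample's cobalt mass would then be recovered as `(signal - intercept) / slope`.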

  1. Effects of the weld thermal cycle on the microstructure of alloy 690

    NASA Astrophysics Data System (ADS)

    Tuttle, James R.

Alloy 690 has been introduced as a material for use as the heat exchanger tubes in the steam generators (SGs) of pressurised water reactor (PWR) nuclear power plant. Its immediate predecessor, alloy 600, suffered from a number of degradation modes and another alternative, alloy 800, has also had in-service problems. In laboratory tests, alloy 690 in both mill annealed (MA) and special thermally treated (STT) condition has shown a high degree of resistance to degradation in simulated PWR primary side environments and other test media. Limited research has previously been undertaken to investigate the effects of welding on alloy 690, when the material is used in SG applications. It was deemed important to increase knowledge in this area since fabrication of PWR SGs involves gas tungsten arc welding (GTAW) of the heat exchanger tubes to a clad tubeplate. For this research investigation welded samples of alloy 690 have been produced in the laboratory using a range of thermal cycles based around recommended weld parameters for SG fabrication. These samples have been compared with archive welds from PWR SG manufacturers. A number of welds incorporating alloy 600 and a number using alloy 800 tubing material have also been fabricated in the laboratory for comparative purposes. Two experimental melts have been produced to study the effects of Nb substitution for Ti in alloy 690 type materials. Welded and unwelded specimens have been studied, analysed and tested using a variety of methods and techniques. A method of metallographic sample preparation for transmission electron microscope (TEM) thin foil specimens has been developed and documented which ensures foil perforation in a specific region. The effects of Nb substitution for Ti have been discussed. Chemical balances and microstructures in the fusion zone of welds manufactured from alloy 690 tubing incorporating alloy 82 weld consumable have been shown to be non-ideal.
Within the heat affected zone (HAZ) of both laboratory produced and archive welds the microstructures have been identified as detrimentally altered from the STT condition original tubing material(s). A number of conclusions have been drawn and recommendations have been made for future work.

  2. Heuristic rules embedded genetic algorithm for in-core fuel management optimization

    NASA Astrophysics Data System (ADS)

    Alim, Fatih

The objective of this study was to develop a unique methodology and a practical tool for designing the loading pattern (LP) and burnable poison (BP) pattern for a given Pressurized Water Reactor (PWR) core. Because of the large number of possible combinations for the fuel assembly (FA) loading in the core, the design of the core configuration is a complex optimization problem. It requires finding an optimal FA arrangement and BP placement in order to achieve maximum cycle length while satisfying the safety constraints. Genetic Algorithms (GA) have already been used to solve this problem for LP optimization for both PWR and Boiling Water Reactor (BWR) cores. The GA, which is a stochastic method, works with a group of solutions and uses random variables to make decisions. Based on the theory of evolution, the GA involves natural selection and reproduction of the individuals in the population for the next generation. The GA works by creating an initial population, evaluating it, and then improving the population by applying evolutionary operators. To solve this optimization problem, a LP optimization package, the GARCO (Genetic Algorithm Reactor Code Optimization) code, was developed in the framework of this thesis. This code is applicable to all types of PWR cores having different geometries and structures, with an unlimited number of FA types in the inventory. To reach this goal, an innovative GA was developed by modifying the classical representation of the genotype. To obtain the best result in a shorter time, not only the representation but also the algorithm was changed, so as to use in-core fuel management heuristic rules. The improved GA code was tested to demonstrate and verify the advantages of the new enhancements. The developed methodology is explained in this thesis, and preliminary results are shown for the hexagonal-geometry core of the VVER-1000 reactor and for the TMI-1 PWR.
The core physics code used for the VVER in this research is Moby-Dick, which was developed by SKODA Inc. to analyze the VVER. The SIMULATE-3 code, an advanced two-group nodal code, is used to analyze the TMI-1.
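The optimization loop described above (a permutation genotype evolved under selection, crossover, and mutation, with a fitness that rewards cycle length while penalizing safety-constraint violations) can be sketched as a toy example. Everything below is illustrative: the eight-position core, the assembly worths, and the peaking penalty are invented stand-ins, not GARCO's actual genotype or physics evaluation.

```python
import random

random.seed(1)

N_POS = 8  # hypothetical row of core loading positions
K_INF = [1.30, 1.25, 1.20, 1.15, 1.10, 1.05, 1.00, 0.95]  # toy assembly worths

def fitness(pattern):
    # Reward placing reactive assemblies at high-importance positions,
    # penalize adjacent "hot" pairs (a crude power-peaking proxy).
    worth = sum(K_INF[a] * (i + 1) for i, a in enumerate(pattern))
    peaking = sum(1 for i in range(N_POS - 1)
                  if K_INF[pattern[i]] + K_INF[pattern[i + 1]] > 2.45)
    return worth - 5.0 * peaking

def crossover(p1, p2):
    # Order crossover keeps the child a valid permutation of assemblies.
    a, b = sorted(random.sample(range(N_POS), 2))
    child = [None] * N_POS
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for i in range(N_POS):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def evolve(pop_size=40, gens=60):
    pop = [random.sample(range(N_POS), N_POS) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        next_gen = pop[:pop_size // 4]  # truncation selection with elitism
        while len(next_gen) < pop_size:
            child = crossover(*random.sample(next_gen[:pop_size // 4], 2))
            if random.random() < 0.2:   # swap mutation
                i, j = random.sample(range(N_POS), 2)
                child[i], child[j] = child[j], child[i]
            next_gen.append(child)
        pop = next_gen
    return max(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 2))
```

The heuristic-rule idea described in the abstract would replace the blind random initialization and mutation here with moves that respect fuel-management practice (e.g. repairing patterns that violate peaking limits before evaluation).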

  3. The PPP Simulator: User’s Manual and Report

    DTIC Science & Technology

    1986-11-01

[Unrecoverable OCR of a terminal transcript: a script session (started Thu Aug 28 09:16:15 1986) shows the ppp simulator loading benchmark programs from Benchmarks/Par (e.g. ccon6.w, concatOP.w), followed by fragments of the simulator's C source code.]

  4. Summary of the Tandem Cylinder Solutions from the Benchmark Problems for Airframe Noise Computations-I Workshop

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2011-01-01

Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark Problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured, and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and against experimental data from two facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details that would be necessary to compute the noise remains challenging. In particular, it was unclear how best to simulate the effects of the experimental transition strip and the associated high Reynolds number effects. Furthermore, capturing the spanwise variation proved difficult.

  5. Performance modeling & simulation of complex systems (A systems engineering design & analysis approach)

    NASA Technical Reports Server (NTRS)

    Hall, Laverne

    1995-01-01

Modeling of the Multi-mission Image Processing System (MIPS) is described as an example of the use of a modeling tool to design a distributed system that supports multiple application scenarios. This paper examines: (a) modeling tool selection, capabilities, and operation (namely NETWORK 2.5 by CACI), (b) pointers for building or constructing a model and how the MIPS model was developed, (c) the importance of benchmarking or testing the performance of equipment/subsystems being considered for incorporation into the design/architecture, (d) the essential step of model validation and/or calibration using the benchmark results, (e) sample simulation results from the MIPS model, and (f) how modeling and simulation analysis had a supportive and informative impact on the MIPS design process.

  6. Assessment for advanced fuel cycle options in CANDU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morreale, A.C.; Luxat, J.C.; Friedlander, Y.

    2013-07-01

The possible options for advanced fuel cycles in CANDU reactors, including actinide burning options and thorium cycles, were explored and are feasible options to increase the efficiency of uranium utilization and help close the fuel cycle. The actinide burning TRUMOX approach uses a mixed oxide fuel of reprocessed transuranic actinides from PWR spent fuel blended with natural uranium in the CANDU-900 reactor. This system reduced actinide content by 35% and decreased natural uranium consumption by 24% over a PWR once-through cycle. The thorium cycles evaluated used two CANDU-900 units, a generator and a burner unit, along with a driver fuel feedstock. The driver fuels included plutonium reprocessed from PWR fuel, plutonium reprocessed from CANDU fuel, and low enriched uranium (LEU). All three cycles were effective options and reduced natural uranium consumption over a PWR once-through cycle. The LEU-driven system saw the largest reduction, with a 94% savings, while the plutonium-driven cycles achieved 75% savings for PWR and 87% for CANDU. The high neutron economy, online fuelling, and flexible compact fuel make the CANDU system an ideal reactor platform for many advanced fuel cycles.

  7. Noninvasive and Real-Time Plasmon Waveguide Resonance Thermometry

    PubMed Central

    Zhang, Pengfei; Liu, Le; He, Yonghong; Zhou, Yanfei; Ji, Yanhong; Ma, Hui

    2015-01-01

In this paper, noninvasive and real-time plasmon waveguide resonance (PWR) thermometry is reported theoretically and demonstrated experimentally. Owing to the enhanced evanescent field and the thermal shield effect of its dielectric layer, a PWR thermometer permits accurate temperature sensing and has a wide dynamic range. A temperature measurement sensitivity of 9.4 × 10−3 °C is achieved, and the thermo-optic coefficient nonlinearity is measured in the experiment. The measurement of water cooling processes distributed in one dimension reveals that a PWR thermometer allows real-time temperature sensing and has potential to be applied to thermal gradient analysis. Apart from this, the PWR thermometer has the advantages of low cost and simple structure, since our transduction scheme can be constructed with conventional optical components and commercial coating techniques. PMID:25871718

  8. Comparing Hospital Processes and Outcomes in California Medicare Beneficiaries: Simulation Prompts Reconsideration

    PubMed Central

    Escobar, Gabriel J; Baker, Jennifer M; Turk, Benjamin J; Draper, David; Liu, Vincent; Kipnis, Patricia

    2017-01-01

Introduction This article is not a traditional research report. It describes how conducting a specific set of benchmarking analyses led us to broader reflections on hospital benchmarking. We reexamined an issue that has received far less attention from researchers than it did in the past: how variations in the hospital admission threshold might affect hospital rankings. Considering this threshold made us reconsider what benchmarking is and what future benchmarking studies might be like. Although we recognize that some of our assertions are speculative, they are based on our reading of the literature and on previous and ongoing data analyses conducted in our research unit. We describe the benchmarking analyses that led to these reflections. Objectives The Centers for Medicare and Medicaid Services’ Hospital Compare Web site includes data on fee-for-service Medicare beneficiaries but does not control for severity of illness, which requires physiologic data now available in most electronic medical records. To address this limitation, we compared hospital processes and outcomes among Kaiser Permanente Northern California’s (KPNC) Medicare Advantage beneficiaries and non-KPNC California Medicare beneficiaries between 2009 and 2010. Methods We assigned a simulated severity of illness measure to each record and explored the effect of having the additional information on outcomes. Results We found that if the admission severity of illness in non-KPNC hospitals increased, KPNC hospitals’ mortality performance would appear worse; conversely, if admission severity at non-KPNC hospitals decreased, KPNC hospitals’ performance would appear better. Conclusion Future hospital benchmarking should consider the impact of variation in admission thresholds. PMID:29035176

  9. Benchmark Problems of the Geothermal Technologies Office Code Comparison Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark D.; Podgorney, Robert; Kelkar, Sharad M.

A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study represented U.S. national laboratories, universities, and industry, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study: benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications, whereas others were designed for different applications but can simulate processes similar to those in EGS; solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995.
The problems involved two phases of research (stimulation, development, and circulation) in two separate reservoirs. The challenge problems posed specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners.

  10. Cyber-Based Turbulent Combustion Simulation

    DTIC Science & Technology

    2012-02-28

    flame thickness by comparing with benchmark of AFRL/RZ ( UNICORN ) suppressing the oscillatory numerical behavior. These improvements in numerical...fraction with the benchmark results of AFRL/RZ. This validating base is generated by the UNICORN program on the finest mesh available and the local...shared kinematic and thermodynamic data from the UNICORN program. The most important and meaningful conclusion can be drawn from this comparison is

  11. The future of simulation technologies for complex cardiovascular procedures.

    PubMed

    Cates, Christopher U; Gallagher, Anthony G

    2012-09-01

Changing work practices and the evolution of more complex interventions in cardiovascular medicine are forcing a paradigm shift in the way doctors are trained. Implantable cardioverter defibrillator (ICD), transcatheter aortic valve implantation (TAVI), carotid artery stenting (CAS), and acute stroke intervention procedures are forcing these changes at a faster pace than in other disciplines. As a consequence, cardiovascular medicine has had to develop a sophisticated understanding of precisely what is meant by 'training' and 'skill'. An evolving conclusion is that procedure training on a virtual reality (VR) simulator presents a viable current solution. These simulations should characterize the important performance features of procedural skill, with metrics derived and defined from, and then benchmarked to, the performance of experienced operators (i.e. a proficiency level). Simulation training is optimal with metric-based feedback, particularly formative trainee error assessments, delivered proximate to the performance. In prospective, randomized studies, learners who trained to a benchmarked proficiency level on the simulator performed significantly better than learners who were traditionally trained. In addition, cardiovascular medicine now has available the most sophisticated virtual reality simulators in medicine, and these have been used for the roll-out of interventions such as CAS in the USA and globally with cardiovascular society and industry partnered training programmes. The Food and Drug Administration has advocated the use of VR simulation as part of the approval of new devices, and the American Board of Internal Medicine has adopted simulation as part of its maintenance of certification. Simulation is rapidly becoming a mainstay of cardiovascular education, training, certification, and the safe adoption of new technology.
If cardiovascular medicine is to continue to lead in the adoption and integration of simulation, then it must take a proactive position in the development of metric-based simulation curricula, adopt proficiency benchmarking definitions, and resolve to commit resources so as to continue to lead this revolution in physician training.

  12. Importance of inlet boundary conditions for numerical simulation of combustor flows

    NASA Technical Reports Server (NTRS)

    Sturgess, G. J.; Syed, S. A.; Mcmanus, K. R.

    1983-01-01

    Fluid dynamic computer codes for the mathematical simulation of problems in gas turbine engine combustion systems are required as design and diagnostic tools. To eventually achieve a performance standard with these codes of more than qualitative accuracy it is desirable to use benchmark experiments for validation studies. Typical of the fluid dynamic computer codes being developed for combustor simulations is the TEACH (Teaching Elliptic Axisymmetric Characteristics Heuristically) solution procedure. It is difficult to find suitable experiments which satisfy the present definition of benchmark quality. For the majority of the available experiments there is a lack of information concerning the boundary conditions. A standard TEACH-type numerical technique is applied to a number of test-case experiments. It is found that numerical simulations of gas turbine combustor-relevant flows can be sensitive to the plane at which the calculations start and the spatial distributions of inlet quantities for swirling flows.

  13. Towards Systematic Benchmarking of Climate Model Performance

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2014-12-01

The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research.
Making the results from routine performance tests readily accessible will help advance a more transparent model evaluation process.

  14. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    DOE PAGES

    Bess, John D.; Montierth, Leland; Köberl, Oliver; ...

    2014-10-09

Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ), except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues but within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
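The "within 1% and within 3σ" comparisons quoted above reduce to a simple calculation, sketched here; the keff and uncertainty values are invented for illustration, not taken from the handbook evaluation.

```python
# Compare a calculated keff against a benchmark value and its 1-sigma
# uncertainty, reporting the deviation in pcm and in sigma units.
# All numbers below are hypothetical, for illustration only.
def compare_keff(k_calc, k_bench, sigma_bench):
    rel_pcm = (k_calc - k_bench) / k_bench * 1.0e5  # relative deviation, pcm
    n_sigma = (k_calc - k_bench) / sigma_bench      # deviation in sigma units
    return rel_pcm, n_sigma

rel, n = compare_keff(k_calc=1.0112, k_bench=1.0080, sigma_bench=0.0027)
print(f"{rel:+.0f} pcm, {n:+.2f} sigma")
```

For these made-up inputs the deviation is about +317 pcm (0.32%, i.e. within 1%) and about +1.2σ (within 3σ), the kind of statement made in the evaluation.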

  15. Implementing ADM1 for plant-wide benchmark simulations in Matlab/Simulink.

    PubMed

    Rosen, C; Vrecko, D; Gernaey, K V; Pons, M N; Jeppsson, U

    2006-01-01

The IWA Anaerobic Digestion Model No. 1 (ADM1) was presented in 2002 and is expected to represent the state-of-the-art model within this field in the future. Due to its complexity, the implementation of the model is not a simple task, and several computational aspects need to be considered, in particular if the ADM1 is to be included in dynamic simulations of plant-wide or even integrated systems. In this paper, the experiences gained from a Matlab/Simulink implementation of ADM1 into the extended COST/IWA Benchmark Simulation Model (BSM2) are presented. Aspects related to system stiffness, model interfacing with the ASM family, mass balances, acid-base equilibrium and algebraic solvers for pH and other troublesome state variables, numerical solvers, and simulation time are discussed. The main conclusion is that, if implemented properly, the ADM1 will also produce high-quality results in dynamic plant-wide simulations including noise, discrete sub-systems, etc., without imposing any major restrictions due to extensive computational efforts.
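The "algebraic solver for pH" aspect mentioned above can be illustrated with a stripped-down version of the acid-base subproblem: find the hydrogen-ion concentration that closes the charge balance. Real ADM1/BSM2 implementations track more acid-base pairs and typically use a Newton iteration; the dissociation constants are textbook values, and the concentrations below are illustrative only.

```python
import math

# Toy acid-base system: acetate + inorganic carbon + net cations.
KW     = 1.0e-14      # water ion product
KA_AC  = 10 ** -4.76  # acetic acid dissociation constant
KA_CO2 = 10 ** -6.35  # CO2/HCO3- first dissociation constant
S_AC   = 0.005        # total acetate, mol/L (illustrative)
S_IC   = 0.020        # total inorganic carbon, mol/L (illustrative)
S_CAT  = 0.021        # net cations (cations - anions), mol/L (illustrative)

def charge_balance(h):
    ac   = S_AC * KA_AC / (KA_AC + h)    # acetate anion fraction of total
    hco3 = S_IC * KA_CO2 / (KA_CO2 + h)  # bicarbonate fraction of total
    return S_CAT + h - ac - hco3 - KW / h

def solve_ph(lo=1.0e-12, hi=1.0e-2, tol=1.0e-12):
    # Bisection on [H+]: the charge balance is monotonic in h, so the
    # bracketing half containing the sign change is kept each iteration.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if charge_balance(lo) * charge_balance(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return -math.log10((lo + hi) / 2.0)

print(round(solve_ph(), 2))
```

Solving this algebraic subsystem separately at each time step is one way to remove the fastest (instantaneous) equilibria from the ODE set and thereby reduce stiffness.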

  16. Benchmarking computational fluid dynamics models of lava flow simulation for hazard assessment, forecasting, and risk management

    USGS Publications Warehouse

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.

    2017-01-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.
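A benchmark goodness-of-fit assessment like the one described can be summarized by a simple error metric between simulated and observed flow measurements. The sketch below uses RMSE on invented thickness values; the paper's actual metrics and data differ.

```python
import math

# Root-mean-square error between simulated and observed flow thickness
# at matched measurement points (values are made up for illustration).
def rmse(simulated, observed):
    assert len(simulated) == len(observed)
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed))
                     / len(simulated))

sim = [1.2, 1.5, 1.9, 2.3]  # hypothetical modeled thickness, cm
obs = [1.0, 1.6, 2.0, 2.5]  # hypothetical measured thickness, cm
print(round(rmse(sim, obs), 3))
```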

  17. PWR and BWR spent fuel assembly gamma spectra measurements

    NASA Astrophysics Data System (ADS)

    Vaccaro, S.; Tobin, S. J.; Favalli, A.; Grogan, B.; Jansson, P.; Liljenfeldt, H.; Mozin, V.; Hu, J.; Schwalbach, P.; Sjöland, A.; Trellue, H.; Vo, D.

    2016-10-01

    A project to research the application of nondestructive assay (NDA) to spent fuel assemblies is underway. The research team comprises the European Atomic Energy Community (EURATOM), embodied by the European Commission, DG Energy, Directorate EURATOM Safeguards; the Swedish Nuclear Fuel and Waste Management Company (SKB); two universities; and several United States national laboratories. The Next Generation of Safeguards Initiative-Spent Fuel project team is working to achieve the following technical goals more easily and efficiently than in the past using nondestructive assay measurements of spent fuel assemblies: (1) verify the initial enrichment, burnup, and cooling time of facility declaration; (2) detect the diversion or replacement of pins, (3) estimate the plutonium mass, (4) estimate the decay heat, and (5) determine the reactivity of spent fuel assemblies. This study focuses on spectrally resolved gamma-ray measurements performed on a diverse set of 50 assemblies [25 pressurized water reactor (PWR) assemblies and 25 boiling water reactor (BWR) assemblies]; these same 50 assemblies will be measured with neutron-based NDA instruments and a full-length calorimeter. Given that encapsulation/repository and dry storage safeguards are the primarily intended applications, the analysis focused on the dominant gamma-ray lines of 137Cs, 154Eu, and 134Cs because these isotopes will be the primary gamma-ray emitters during the time frames of interest to these applications. This study addresses the impact on the measured passive gamma-ray signals due to the following factors: burnup, initial enrichment, cooling time, assembly type (eight different PWR and six different BWR fuel designs), presence of gadolinium rods, and anomalies in operating history. To compare the measured results with theory, a limited number of ORIGEN-ARP simulations were performed.
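One reason these particular gamma lines matter: because ¹³⁴Cs (half-life 2.065 a) decays much faster than ¹³⁷Cs (30.08 a), their activity ratio acts as a clock for cooling time. The sketch below inverts that relation; the discharge ratio is a made-up placeholder (in practice it depends on burnup and would come from depletion calculations such as ORIGEN-ARP).

```python
import math

T_HALF_CS134 = 2.065   # years (well-known half-life)
T_HALF_CS137 = 30.08   # years (well-known half-life)

LAM_134 = math.log(2) / T_HALF_CS134
LAM_137 = math.log(2) / T_HALF_CS137

def cooling_time(ratio_measured, ratio_discharge):
    # R(t) = R0 * exp(-(lam134 - lam137) * t), solved for t.
    return math.log(ratio_discharge / ratio_measured) / (LAM_134 - LAM_137)

# Hypothetical 134Cs/137Cs activity ratios: 1.0 at discharge, 0.05 measured.
t = cooling_time(ratio_measured=0.05, ratio_discharge=1.0)
print(f"estimated cooling time: {t:.1f} years")
```

For these placeholder ratios the inferred cooling time is roughly a decade, which is within the dry-storage time frames the project targets.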

  18. PWR and BWR spent fuel assembly gamma spectra measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaccaro, S.; Tobin, Stephen J.; Favalli, Andrea

A project to research the application of nondestructive assay (NDA) to spent fuel assemblies is underway. The research team comprises the European Atomic Energy Community (EURATOM), embodied by the European Commission, DG Energy, Directorate EURATOM Safeguards; the Swedish Nuclear Fuel and Waste Management Company (SKB); two universities; and several United States national laboratories. The Next Generation of Safeguards Initiative–Spent Fuel project team is working to achieve the following technical goals more easily and efficiently than in the past using nondestructive assay measurements of spent fuel assemblies: (1) verify the initial enrichment, burnup, and cooling time of facility declaration; (2) detect the diversion or replacement of pins, (3) estimate the plutonium mass, (4) estimate the decay heat, and (5) determine the reactivity of spent fuel assemblies. This study focuses on spectrally resolved gamma-ray measurements performed on a diverse set of 50 assemblies [25 pressurized water reactor (PWR) assemblies and 25 boiling water reactor (BWR) assemblies]; these same 50 assemblies will be measured with neutron-based NDA instruments and a full-length calorimeter. Given that encapsulation/repository and dry storage safeguards are the primarily intended applications, the analysis focused on the dominant gamma-ray lines of 137Cs, 154Eu, and 134Cs because these isotopes will be the primary gamma-ray emitters during the time frames of interest to these applications. This study addresses the impact on the measured passive gamma-ray signals due to the following factors: burnup, initial enrichment, cooling time, assembly type (eight different PWR and six different BWR fuel designs), presence of gadolinium rods, and anomalies in operating history. To compare the measured results with theory, a limited number of ORIGEN-ARP simulations were performed.

  19. PWR and BWR spent fuel assembly gamma spectra measurements

    DOE PAGES

    Vaccaro, S.; Tobin, Stephen J.; Favalli, Andrea; ...

    2016-07-17

A project to research the application of nondestructive assay (NDA) to spent fuel assemblies is underway. The research team comprises the European Atomic Energy Community (EURATOM), embodied by the European Commission, DG Energy, Directorate EURATOM Safeguards; the Swedish Nuclear Fuel and Waste Management Company (SKB); two universities; and several United States national laboratories. The Next Generation of Safeguards Initiative–Spent Fuel project team is working to achieve the following technical goals more easily and efficiently than in the past using nondestructive assay measurements of spent fuel assemblies: (1) verify the initial enrichment, burnup, and cooling time of facility declaration; (2) detect the diversion or replacement of pins, (3) estimate the plutonium mass, (4) estimate the decay heat, and (5) determine the reactivity of spent fuel assemblies. This study focuses on spectrally resolved gamma-ray measurements performed on a diverse set of 50 assemblies [25 pressurized water reactor (PWR) assemblies and 25 boiling water reactor (BWR) assemblies]; these same 50 assemblies will be measured with neutron-based NDA instruments and a full-length calorimeter. Given that encapsulation/repository and dry storage safeguards are the primarily intended applications, the analysis focused on the dominant gamma-ray lines of 137Cs, 154Eu, and 134Cs because these isotopes will be the primary gamma-ray emitters during the time frames of interest to these applications. This study addresses the impact on the measured passive gamma-ray signals due to the following factors: burnup, initial enrichment, cooling time, assembly type (eight different PWR and six different BWR fuel designs), presence of gadolinium rods, and anomalies in operating history. To compare the measured results with theory, a limited number of ORIGEN-ARP simulations were performed.

  20. Implementing a Nuclear Power Plant Model for Evaluating Load-Following Capability on a Small Grid

    NASA Astrophysics Data System (ADS)

    Arda, Samet Egemen

A pressurized water reactor (PWR) nuclear power plant (NPP) model is introduced into the Positive Sequence Load Flow (PSLF) software by General Electric in order to evaluate the load-following capability of NPPs. The nuclear steam supply system (NSSS) consists of a reactor core, hot and cold legs, plenums, and a U-tube steam generator. The physical systems listed above are represented by mathematical models utilizing a state-variable lumped-parameter approach. A steady-state control program for the reactor and simple turbine and governor models are also developed. Adequacy of the isolated reactor core, the isolated steam generator, and the complete PWR models is tested in Matlab/Simulink, and dynamic responses are compared with the test results obtained from the H. B. Robinson NPP. Test results illustrate that the developed models represent the dynamic features of the real physical systems and are capable of predicting responses to small perturbations of external reactivity and steam valve opening. Subsequently, the NSSS representation is incorporated into PSLF and coupled with built-in excitation system and generator models. Different simulation cases are run in which sudden loss of generation occurs in a small power system that includes hydroelectric and natural gas power plants in addition to the developed PWR NPP. The conclusion is that the NPP can respond to a disturbance in the power system without exceeding any design and safety limits if appropriate operational conditions, such as achieving NPP turbine control by adjusting the speed of the steam valve, are met. In other words, the NPP can participate in the control of system frequency and improve the overall power system performance.
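The state-variable lumped-parameter approach can be sketched with a minimal example: one-delayed-group point kinetics coupled to a single fuel-temperature node with Doppler feedback, integrated by explicit Euler. The parameters are generic textbook-style values, not those of the H. B. Robinson plant, and the actual model contains many more nodes (plenums, legs, steam generator).

```python
# Generic illustrative parameters (NOT plant-specific):
BETA    = 0.0065   # delayed neutron fraction
LAMBDA  = 0.08     # one-group precursor decay constant, 1/s
GEN     = 1.0e-4   # neutron generation time, s
ALPHA_F = -2.0e-5  # fuel (Doppler) temperature coefficient, dk/k per K
H_FC    = 0.1      # lumped fuel-to-coolant heat transfer rate, 1/s
T_COOL  = 560.0    # coolant node temperature, K (held fixed here)
P0, TF0 = 1.0, 900.0  # initial relative power and fuel temperature

def step_response(rho_ext, t_end=50.0, dt=1.0e-3):
    p  = P0
    c  = BETA * P0 / (LAMBDA * GEN)  # precursor equilibrium concentration
    tf = TF0
    q  = H_FC * (TF0 - T_COOL)       # heating rate per unit power (steady state)
    for _ in range(int(t_end / dt)):
        rho = rho_ext + ALPHA_F * (tf - TF0)          # net reactivity
        dp  = ((rho - BETA) / GEN) * p + LAMBDA * c   # point kinetics
        dc  = (BETA / GEN) * p - LAMBDA * c           # precursor balance
        dtf = q * p - H_FC * (tf - T_COOL)            # fuel energy balance
        p, c, tf = p + dp * dt, c + dc * dt, tf + dtf * dt
    return p, tf

p, tf = step_response(rho_ext=5.0e-4)  # +50 pcm external reactivity step
print(f"relative power {p:.3f}, fuel temperature {tf:.1f} K")
```

After a small positive reactivity step, power jumps promptly and then the negative Doppler feedback drives the system to a new, slightly higher-power equilibrium, the same qualitative behavior the paper verifies against plant test data.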

  1. Finite Element Modeling of the World Federation's Second MFL Benchmark Problem

    NASA Astrophysics Data System (ADS)

    Zeng, Zhiwei; Tian, Yong; Udpa, Satish; Udpa, Lalita

    2004-02-01

    This paper presents results obtained by simulating the second magnetic flux leakage benchmark problem proposed by the World Federation of NDE Centers. The geometry consists of notches machined on the internal and external surfaces of a rotating steel pipe that is placed between two yokes forming part of a magnetic circuit energized by an electromagnet. The model calculates the radial component of the leakage field at specific positions. The nonlinear material property of the ferromagnetic pipe is taken into account in simulating the problem. The velocity effect caused by the rotation of the pipe is, however, ignored for reasons of simplicity.

  2. Two-dimensional free-surface flow under gravity: A new benchmark case for SPH method

    NASA Astrophysics Data System (ADS)

    Wu, J. Z.; Fang, L.

    2018-02-01

    Currently there are few free-surface benchmark cases with analytical results for Smoothed Particle Hydrodynamics (SPH) simulation. In the present contribution we introduce a two-dimensional free-surface flow under gravity and obtain an analytical expression for the surface height difference and a theoretical estimate of the surface fractal dimension. These results are preliminarily validated and supported by SPH calculations.

  3. Development and Implementation of Mechanistic Terry Turbine Models in RELAP-7 to Simulate RCIC Normal Operation Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Haihua; Zou, Ling; Zhang, Hongbin

    As part of the efforts to understand the unexpected “self-regulating” mode of the RCIC (Reactor Core Isolation Cooling) systems in the Fukushima accidents and to extend BWR RCIC and PWR AFW (Auxiliary Feed Water) operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia’s original work [1], have been developed and implemented in the RELAP-7 code to simulate the RCIC system. In 2016, our effort focused on normal working conditions of the RCIC system. More complex off-design conditions will be pursued in later years when more data are available. In the Sandia model, the turbine stator inlet velocity is provided according to a reduced-order model obtained from a large number of CFD (computational fluid dynamics) simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions at the turbine stator inlet. The models include both an adiabatic expansion process inside the nozzle and a free expansion process outside of the nozzle to ambient pressure. The combined models are able to predict the steam mass flow rate and the supersonic velocity at the Terry turbine bucket entrance, which are the necessary inputs for the Terry turbine rotor model. The analytical models for the nozzle were validated with experimental data and benchmarked with CFD simulations, and they generally agree well with both. The analytical models are suitable for implementation in a reactor system analysis code or severe accident code as part of mechanistic and dynamical models to understand RCIC behavior. The newly developed nozzle models and the turbine rotor model modified from Sandia’s original work have been implemented in RELAP-7, along with the original Sandia Terry turbine model. A new pump model has also been developed and implemented to couple with the Terry turbine model.
An input model was developed to test the Terry turbine RCIC system, and it generates reasonable results. Both the INL RCIC model and the Sandia RCIC model produce results matching major rated parameters, such as the rotational speed, pump torque, and turbine shaft work, for the normal operation condition. The Sandia model is more sensitive to the turbine outlet pressure than the INL model. The next step will be to further refine the Terry turbine models by including two-phase flow cases so that off-design conditions can be simulated. The pump model could also be enhanced with the use of homologous curves.
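
    The adiabatic expansion step inside the nozzle can be illustrated with ideal-gas isentropic relations. This is a hedged stand-in for the paper's steam nozzle model: steam is not an ideal gas, and the gamma and cp values below are rough placeholders chosen for illustration only.

```python
import math

# Ideal-gas isentropic nozzle expansion sketch (not the paper's actual model).
GAMMA = 1.3     # heat-capacity ratio, placeholder value roughly like steam
CP = 2000.0     # specific heat at constant pressure, J/(kg*K), placeholder

def exit_state(T0, p0, p_exit):
    """Exit temperature (K) and velocity (m/s) for stagnation state (T0, p0)."""
    # Isentropic temperature drop to the exit pressure:
    T_exit = T0 * (p_exit / p0) ** ((GAMMA - 1.0) / GAMMA)
    # Energy balance h0 = h + v^2/2 with h = cp*T:
    v_exit = math.sqrt(2.0 * CP * (T0 - T_exit))
    return T_exit, v_exit

# Expansion from a reactor-pressure-like stagnation state to ~1 atm:
T_e, v_e = exit_state(T0=550.0, p0=7.0e6, p_exit=1.0e5)
print(round(T_e, 1), round(v_e, 1))
```

    Even this crude sketch yields a strongly supersonic exit velocity for a large pressure ratio, consistent with the abstract's statement that the nozzle delivers supersonic flow to the turbine bucket entrance.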

  4. Using GTO-Velo to Facilitate Communication and Sharing of Simulation Results in Support of the Geothermal Technologies Office Code Comparison Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Signe K.; Purohit, Sumit; Boyd, Lauren W.

    The Geothermal Technologies Office Code Comparison Study (GTO-CCS) aims to support the DOE Geothermal Technologies Office in organizing and executing a model comparison activity. This project is directed at testing, diagnosing differences, and demonstrating modeling capabilities of a worldwide collection of numerical simulators for evaluating geothermal technologies. Teams of researchers are collaborating in this code comparison effort, and it is important to be able to share results in a forum where technical discussions can easily take place without requiring teams to travel to a common location. Pacific Northwest National Laboratory has developed an open-source, flexible framework called Velo that provides a knowledge management infrastructure and tools to support modeling and simulation for a variety of project types in a number of scientific domains. GTO-Velo is a customized version of the Velo framework that is being used as the collaboration tool in support of the GTO-CCS project. Velo is designed around a novel integration of a collaborative Web-based environment and a scalable enterprise Content Management System (CMS). The underlying framework provides a flexible, unstructured data storage system that allows easy upload of files in any format. Data files are organized in hierarchical folders, and each folder and each file has a corresponding wiki page for metadata. The user interacts with Velo through web browser-based wiki technology, providing the benefit of familiarity and ease of use. High-level folders have been defined in GTO-Velo for the benchmark problem descriptions, descriptions of simulator/code capabilities, a project notebook, and folders for participating teams. Each team has a subfolder with write access limited only to the team members, where they can upload their simulation results.
The GTO-CCS participants are charged with defining the benchmark problems for the study, and as each GTO-CCS benchmark problem is defined, the problem creator can provide a description using a template on the metadata page corresponding to the benchmark problem folder. Project documents, references, and videos of the weekly online meetings are shared via GTO-Velo. A results comparison tool allows users to plot their uploaded simulation results on the fly, along with those of other teams, to facilitate weekly discussions of the benchmark problem results being generated by the teams. GTO-Velo is an invaluable tool, providing the project coordinators and team members with a framework for collaboration among geographically dispersed organizations.

  5. ViSAPy: a Python tool for biophysics-based generation of virtual spiking activity for evaluation of spike-sorting algorithms.

    PubMed

    Hagen, Espen; Ness, Torbjørn V; Khosrowshahi, Amir; Sørensen, Christina; Fyhn, Marianne; Hafting, Torkel; Franke, Felix; Einevoll, Gaute T

    2015-04-30

    New, silicon-based multielectrodes comprising hundreds or more electrode contacts offer the possibility to record spike trains from thousands of neurons simultaneously. This potential cannot be realized unless accurate, reliable automated methods for spike sorting are developed, which in turn requires benchmarking data sets with known ground-truth spike times. We here present a general simulation tool, ViSAPy (Virtual Spiking Activity in Python), for computing benchmarking data for the evaluation of spike-sorting algorithms. The tool is based on a well-established biophysical forward-modeling scheme and is implemented as a Python package built on top of the neuronal simulator NEURON and the Python tool LFPy. ViSAPy allows for arbitrary combinations of multicompartmental neuron models and geometries of recording multielectrodes. Three example benchmarking data sets are generated, i.e., tetrode and polytrode data mimicking in vivo cortical recordings and microelectrode array (MEA) recordings of in vitro activity in salamander retinas. The synthesized benchmarking data mimic salient features of typical experimental recordings, for example, spike waveforms depending on interspike interval. ViSAPy goes beyond existing methods in that it includes biologically realistic model noise, synaptic activation by recurrent spiking networks, and finite-sized electrode contacts, and it allows for inhomogeneous electrical conductivities. ViSAPy is optimized to allow for generation of long time series of benchmarking data, spanning minutes of biological time, by parallel execution on multi-core computers. ViSAPy is an open-ended tool as it can be generalized to produce benchmarking data for arbitrary recording-electrode geometries and with various levels of complexity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
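
    The biophysical forward-modeling scheme that tools like ViSAPy build on rests on volume-conductor theory. A minimal sketch of the textbook point-source approximation is below; this is the standard formula, not ViSAPy's actual API, and the current and conductivity values are illustrative.

```python
import math

# Point-source approximation from volume-conductor theory: extracellular
# potential of a transmembrane current i_amp at distance r in a homogeneous
# medium of conductivity sigma. Illustrative only, not ViSAPy code.
def point_source_potential(i_amp, sigma, r):
    """Extracellular potential (V) at distance r (m) from a current i_amp (A)."""
    return i_amp / (4.0 * math.pi * sigma * r)

# A 1 nA current in tissue (sigma ~ 0.3 S/m) seen 50 um and 100 um away:
phi_near = point_source_potential(1e-9, 0.3, 50e-6)
phi_far = point_source_potential(1e-9, 0.3, 100e-6)
print(phi_near, phi_far)
```

    The potential falls off as 1/r, which is why electrode geometry and contact size matter so much when synthesizing realistic benchmarking recordings.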

  6. A Benchmarking Initiative for Reactive Transport Modeling Applied to Subsurface Environmental Applications

    NASA Astrophysics Data System (ADS)

    Steefel, C. I.

    2015-12-01

    Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of the subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed-form solutions are necessary and useful, but they are limited to situations far simpler than typical applications, which combine many physical and chemical processes, in many cases in coupled form. In the absence of closed-form and yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable for qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code (the reactive transport codes play a supporting role in this regard) but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally relevant benchmark problem that tests conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include 1) microbially-mediated reactions, 2) isotopes, 3) multi-component diffusion, 4) uranium fate and transport, 5) metal mobility in mining-affected systems, and 6) waste repositories and related aspects.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, D.L.; Simonen, F.A.; Strosnider, J. Jr.

    The VISA (Vessel Integrity Simulation Analysis) code was developed as part of the NRC staff evaluation of pressurized thermal shock. VISA uses Monte Carlo simulation to evaluate the failure probability of a pressurized water reactor (PWR) pressure vessel subjected to a pressure and thermal transient specified by the user. Linear elastic fracture mechanics is used to model crack initiation and propagation. Parameters for initial crack size, copper content, initial RT/sub NDT/, fluence, crack-initiation fracture toughness, and arrest fracture toughness are treated as random variables. This report documents the version of VISA used in the NRC staff report (Policy Issue from J.W. Dircks to NRC Commissioners, Enclosure A: NRC Staff Evaluation of Pressurized Thermal Shock, November 1982, SECY-82-465) and includes a user's guide for the code.
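
    The probabilistic fracture mechanics idea can be sketched in a few lines: sample the random variables, compare an applied stress intensity against a sampled toughness, and count exceedances. Everything below (the distributions, the linear crack-depth model, the parameter values) is an illustrative placeholder in the spirit of VISA, not VISA's actual models.

```python
import random

# Toy Monte Carlo fracture sketch in the spirit of VISA. All distributions
# and the stress-intensity model are illustrative placeholders.
def applied_K(crack_depth_mm):
    """Hypothetical applied stress intensity (MPa*sqrt(m)) vs crack depth."""
    return 20.0 + 8.0 * crack_depth_mm

def simulate(n_trials, seed=1):
    """Estimate the crack-initiation probability over n_trials samples."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        depth = rng.lognormvariate(0.0, 0.5)   # initial crack depth, mm
        k_ic = rng.normalvariate(60.0, 10.0)   # initiation toughness
        if applied_K(depth) > k_ic:
            failures += 1
    return failures / n_trials

print(simulate(100_000))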

  8. A new numerical benchmark for variably saturated variable-density flow and transport in porous media

    NASA Astrophysics Data System (ADS)

    Guevara, Carlos; Graf, Thomas

    2016-04-01

    In subsurface hydrological systems, spatial and temporal variations in solute concentration and/or temperature may affect fluid density and viscosity. These variations can lead to potentially unstable situations in which a dense fluid overlies a less dense fluid. Such situations can produce instabilities that appear as dense plume fingers migrating downwards, counteracted by vertical upward flow of freshwater (Simmons et al., Transp. Porous Media, 2002). As a result of unstable variable-density flow, solute transport rates are increased over large distances and times as compared to constant-density flow. The numerical simulation of variable-density flow in saturated and unsaturated media requires corresponding benchmark problems against which a computer model can be validated (Diersch and Kolditz, Adv. Water Resour., 2002). Recorded data from a laboratory-scale experiment of variable-density flow and solute transport in saturated and unsaturated porous media (Simmons et al., Transp. Porous Media, 2002) are used to define a new numerical benchmark. The HydroGeoSphere code (Therrien et al., 2004), coupled with PEST (www.pesthomepage.org), is used to obtain an optimized parameter set capable of adequately representing the data set of Simmons et al. (2002). Fingering in the numerical model is triggered using random hydraulic conductivity fields. Due to the inherent randomness, a large number of simulations were conducted in this study. The optimized benchmark model adequately predicts the plume behavior and the fate of solutes. This benchmark is useful for model verification of variable-density flow problems in saturated and/or unsaturated media.
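
    A common way to introduce the random perturbations that trigger fingering is to draw the hydraulic conductivity of each grid cell from a lognormal distribution. The sketch below shows the idea only; the grid size and statistics are illustrative, not the study's calibrated values, and no spatial correlation is imposed.

```python
import random

# Hedged sketch: an uncorrelated lognormal hydraulic-conductivity field,
# illustrating one way to seed fingering instabilities in a numerical model.
def lognormal_K_field(nx, nz, mean_log10_K=-5.0, sigma_log10=0.25, seed=42):
    """nx-by-nz grid of conductivities (m/s), lognormal about 10**mean_log10_K."""
    rng = random.Random(seed)
    return [[10.0 ** rng.normalvariate(mean_log10_K, sigma_log10)
             for _ in range(nz)] for _ in range(nx)]

field = lognormal_K_field(50, 30)
print(len(field), len(field[0]))
```

    Each random realization produces a different fingering pattern, which is why the study runs a large ensemble of simulations rather than a single case.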

  9. Construct validity and expert benchmarking of the haptic virtual reality dental simulator.

    PubMed

    Suebnukarn, Siriwan; Chaisombat, Monthalee; Kongpunwijit, Thanapohn; Rhienmora, Phattanapon

    2014-10-01

    The aim of this study was to demonstrate construct validation of the haptic virtual reality (VR) dental simulator and to define expert benchmarking criteria for skills assessment. Thirty-four self-selected participants (fourteen novices, fourteen intermediates, and six experts in endodontics) at one dental school performed ten repetitions of an endodontic cavity preparation task in three modes: easy (mandibular premolar with one canal), medium (maxillary premolar with two canals), and hard (mandibular molar with three canals). The virtual instrument's path length was registered by the simulator. The outcomes were assessed by an expert. The error scores in the easy and medium modes accurately distinguished the experts from novices and intermediates at the onset of training, when there was a significant difference between groups (ANOVA, p<0.05). The trend was consistent until trial 5. From trial 6 on, the three groups achieved similar scores, and no significant difference was found between groups at the end of training. Error score analysis was not able to distinguish any group at the hard level of training. Instrument path length showed a difference in performance between groups at the onset of training (ANOVA, p<0.05). This study established construct validity for the haptic VR dental simulator by demonstrating its capability to discriminate between experts and non-experts. The experts' error scores and path lengths were used to define benchmarking criteria for optimal performance.

  10. Identification of poor households for premium exemptions in Ghana's National Health Insurance Scheme: empirical analysis of three strategies.

    PubMed

    Aryeetey, Genevieve Cecilia; Jehu-Appiah, Caroline; Spaan, Ernst; D'Exelle, Ben; Agyepong, Irene; Baltussen, Rob

    2010-12-01

    To evaluate the effectiveness of three alternative strategies to identify poor households: means testing (MT), proxy means testing (PMT) and participatory wealth ranking (PWR) in urban, rural and semi-urban settings in Ghana. The primary motivation was to inform implementation of the National Health Insurance policy of premium exemptions for the poorest households. Survey of 145-147 households per setting to collect data on consumption expenditure to estimate MT measures and of household assets to estimate PMT measures. We organized focus group discussions to derive PWR measures. We compared errors of inclusion and exclusion of PMT and PWR relative to MT, the latter being considered the gold-standard measure to identify poor households. Compared to MT, the errors of exclusion and inclusion of PMT ranged between 0.46-0.63 and 0.21-0.36, respectively, and those of PWR between 0.03-0.73 and 0.17-0.60, respectively, depending on the setting. Proxy means testing and PWR have considerable errors of exclusion and inclusion in comparison with MT. PWR is a subjective measure of poverty and has appeal because it reflects a community's perceptions of poverty. However, as its definition of the poor varies across settings, its acceptability as a uniform strategy to identify the poor in Ghana may be questionable. PMT and MT are potential strategies to identify the poor, and their relative societal attractiveness should be judged in a broader economic analysis. This study also holds relevance to other programmes that require identification of the poor in low-income countries. © 2010 Blackwell Publishing Ltd.
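
    The two error measures compared here have a simple definition: the exclusion error is the fraction of truly poor households (per the gold standard) that a strategy misses, and the inclusion error is the fraction of households the strategy flags that are not truly poor. A minimal sketch with made-up labels (not the survey data) is below.

```python
# Errors of exclusion and inclusion of a targeting strategy, taking means
# testing (MT) as the gold standard as in the study. The label lists are
# made-up illustrative data (True = classified poor), not the survey data.
def targeting_errors(gold, candidate):
    """Return (exclusion error, inclusion error) of candidate vs gold."""
    poor = [c for g, c in zip(gold, candidate) if g]
    flagged = [(g, c) for g, c in zip(gold, candidate) if c]
    exclusion = sum(1 for c in poor if not c) / len(poor)          # poor missed
    inclusion = sum(1 for g, _ in flagged if not g) / len(flagged)  # non-poor flagged
    return exclusion, inclusion

mt = [True, True, True, True, False, False, False, False]
pmt = [True, True, False, False, True, False, False, False]
print(targeting_errors(mt, pmt))
```

    In this toy example the candidate strategy misses half of the truly poor households (exclusion 0.5) and a third of the households it flags are not poor (inclusion 1/3).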

  11. Fundamental metallurgical aspects of axial splitting in zircaloy cladding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, H. M.

    Fundamental metallurgical aspects of axial splitting in irradiated Zircaloy cladding have been investigated by microstructural characterization and analytical modeling, with emphasis on application of the results to understanding high-burnup fuel failure under RIA situations. Optical microscopy, SEM, and TEM were conducted on BWR and PWR fuel cladding tubes that were irradiated to fluence levels of 3.3 x 10{sup 21} n cm{sup {minus}2} to 5.9 x 10{sup 21} n cm{sup {minus}2} (E > 1 MeV) and tested in a hot cell at 292-325 C in Ar. The morphology, distribution, and habit planes of macroscopic and microscopic hydrides in as-irradiated and posttest cladding were determined by stereo-TEM. The type and magnitude of the residual stress produced in association with oxide-layer growth and dense hydride precipitation, and several synergistic factors that strongly influence axial-splitting behavior, were analyzed. The results of the microstructural characterization and stress analyses were then correlated with the axial-splitting behavior of high-burnup PWR cladding reported for simulated-RIA conditions. The effects of key test procedures and their implications for the interpretation of RIA test results are discussed.

  12. Characterization of interfacial reactions and oxide films on 316L stainless steel in various simulated PWR primary water environments

    NASA Astrophysics Data System (ADS)

    Chen, Junjie; Xiao, Qian; Lu, Zhanpeng; Ru, Xiangkun; Peng, Hao; Xiong, Qi; Li, Hongjuan

    2017-06-01

    The effect of water chemistry on the electrochemical and oxidation behaviors of 316L SS was investigated in hydrogenated, deaerated, and oxygenated PWR primary water at 310 °C. Water chemistry significantly influenced the electrochemical impedance spectroscopy parameters: the highest charge-transfer resistance and oxide-film resistance occurred in oxygenated water, while the highest electric double-layer capacitance and oxide-film constant phase element occurred in hydrogenated water. The oxide films formed in deaerated and hydrogenated environments were similar in composition but different in morphology. An oxide film with spinel outer particles and a compact, Cr-rich inner layer was formed in both hydrogenated and deaerated water, with larger and more loosely distributed outer oxide particles in deaerated water. In oxygenated water, an oxide film with hematite outer particles and a porous, Ni-rich inner layer was formed. The reaction kinetics parameters obtained from electrochemical impedance spectroscopy measurements, together with the oxide-film properties under the steady or quasi-steady-state conditions prevailing during the measurements, provide fundamental information for understanding stress corrosion cracking processes and their controlling parameters.
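
    A hedged illustration of how quantities like the charge-transfer resistance and a constant phase element (CPE) enter an equivalent-circuit interpretation of EIS spectra is below. The circuit (solution resistance in series with a charge-transfer resistance in parallel with a CPE) and all parameter values are generic textbook choices, not fitted values from this study.

```python
# Generic equivalent-circuit impedance sketch: Rs + (Rct || CPE).
# Parameter values are illustrative placeholders, not fits from the study.
def impedance(omega, r_s=10.0, r_ct=1.0e4, q=2.0e-5, n=0.9):
    """Complex impedance (ohm) at angular frequency omega (rad/s)."""
    z_cpe = 1.0 / (q * (1j * omega) ** n)     # constant phase element
    return r_s + 1.0 / (1.0 / r_ct + 1.0 / z_cpe)

low, high = impedance(1e-3), impedance(1e5)
print(abs(low), abs(high))
```

    At low frequency the CPE blocks current and the magnitude approaches Rs + Rct, which is how charge-transfer resistance is read off the low-frequency end of a measured spectrum.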

  13. Laser anemometry measurements of natural circulation flow in a scale model PWR reactor system. [Pressurized Water Reactor

    NASA Technical Reports Server (NTRS)

    Kadambi, J. R.; Schneider, S. J.; Stewart, W. A.

    1986-01-01

    The natural circulation of a single-phase fluid in a scale model of a pressurized water reactor system during a postulated degraded-core accident is analyzed. The fluids utilized were water and SF6. The design of the reactor model and the similitude requirements are described. Four LDA tests were conducted: water with 28 kW of heat in the simulated core, with and without the participation of simulated steam generators; water with 28 kW of heat in the simulated core, with the participation of simulated steam generators and with cold upflow of 12 lbm/min from the lower plenum; and SF6 with 0.9 kW of heat in the simulated core, without the participation of the simulated steam generators. For the water tests, the velocity of the water in the center of the core increases with vertical height and continues to increase in the upper plenum. For SF6, the velocities are an order of magnitude higher than those of water; however, the velocity patterns are similar.

  14. Benchmark Simulations of the Thermal-Hydraulic Responses during EBR-II Inherent Safety Tests using SAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui; Sumner, Tyler S.

    2016-04-17

    An advanced system analysis tool, SAM, is being developed for fast-running, improved-fidelity, whole-plant transient analyses at Argonne National Laboratory under DOE-NE's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. As an important part of code development, companion validation activities are being conducted to ensure the performance and validity of the SAM code. This paper presents benchmark simulations of two EBR-II tests, SHRT-45R and BOP-302R, whose data are available through the support of DOE-NE's Advanced Reactor Technology (ART) program. The code predictions of major primary coolant system parameters are compared with the test results. Additionally, SAS4A/SASSYS-1 code simulation results are included for a code-to-code comparison.

  15. Nonparametric estimation of benchmark doses in environmental risk assessment

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits' small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
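
    The core idea can be sketched in a few lines: fit a monotone dose-response curve by pool-adjacent-violators (PAVA), then read off the smallest dose whose fitted extra risk reaches the benchmark response (BMR). This is an illustration of the concept under made-up data, not the paper's exact estimator or its bootstrap confidence limits.

```python
# Hedged sketch of nonparametric BMD estimation: isotonic fit via PAVA,
# then invert the fitted curve at the benchmark response. Illustrative only.
def pava(y, w):
    """Weighted isotonic (non-decreasing) regression via pool-adjacent-violators."""
    blocks = [[yi, wi, 1] for yi, wi in zip(y, w)]  # [mean, weight, n groups]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0] + 1e-12:   # violation: pool blocks
            m0, w0, n0 = blocks[i]
            m1, w1, n1 = blocks[i + 1]
            blocks[i] = [(m0 * w0 + m1 * w1) / (w0 + w1), w0 + w1, n0 + n1]
            del blocks[i + 1]
            i = max(i - 1, 0)
        else:
            i += 1
    fitted = []
    for m, _, n in blocks:
        fitted.extend([m] * n)    # expand pooled mean back to its groups
    return fitted

def bmd(doses, fitted, bmr=0.10):
    """Smallest tested dose whose fitted extra risk over control reaches bmr."""
    p0 = fitted[0]
    for d, p in zip(doses, fitted):
        if d > 0 and (p - p0) / (1.0 - p0) >= bmr:
            return d
    return None

doses = [0.0, 0.5, 1.0, 2.0, 4.0]
responders = [2, 6, 4, 12, 18]          # out of 25 per group (made-up data)
props = [r / 25 for r in responders]
fit = pava(props, [25] * len(props))
print(fit, bmd(doses, fit))
```

    PAVA pools the non-monotone middle groups, and the BMD is then the first dose whose pooled extra risk crosses the 10% benchmark response.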

  16. Evaluation of a Powered Ankle-Foot Prosthesis during Slope Ascent Gait

    PubMed Central

    2016-01-01

    Passive prosthetic feet lack active plantarflexion and push-off power resulting in gait deviations and compensations by individuals with transtibial amputation (TTA) during slope ascent. We sought to determine the effect of active ankle plantarflexion and push-off power provided by a powered prosthetic ankle-foot (PWR) on lower extremity compensations in individuals with unilateral TTA as they walked up a slope. We hypothesized that increased ankle plantarflexion and push-off power would reduce compensations commonly observed with a passive, energy-storing-returning prosthetic ankle-foot (ESR). We compared the temporal spatial, kinematic, and kinetic measures of ten individuals with TTA (age: 30.2 ± 5.3 yrs) to matched able-bodied (AB) individuals during 5° slope ascent. The TTA group walked with an ESR and separately with a PWR. The PWR produced significantly greater prosthetic ankle plantarflexion and push-off power generation compared to an ESR and more closely matched AB values. The PWR functioned similar to a passive ESR device when transitioning onto the prosthetic limb due to limited prosthetic dorsiflexion, which resulted in similar deviations and compensations. In contrast, when transitioning off the prosthetic limb, increased ankle plantarflexion and push-off power provided by the PWR contributed to decreased intact limb knee extensor power production, lessening demand on the intact limb knee. PMID:27977681

  17. Evaluation of a Powered Ankle-Foot Prosthesis during Slope Ascent Gait.

    PubMed

    Rábago, Christopher A; Aldridge Whitehead, Jennifer; Wilken, Jason M

    2016-01-01

    Passive prosthetic feet lack active plantarflexion and push-off power resulting in gait deviations and compensations by individuals with transtibial amputation (TTA) during slope ascent. We sought to determine the effect of active ankle plantarflexion and push-off power provided by a powered prosthetic ankle-foot (PWR) on lower extremity compensations in individuals with unilateral TTA as they walked up a slope. We hypothesized that increased ankle plantarflexion and push-off power would reduce compensations commonly observed with a passive, energy-storing-returning prosthetic ankle-foot (ESR). We compared the temporal spatial, kinematic, and kinetic measures of ten individuals with TTA (age: 30.2 ± 5.3 yrs) to matched able-bodied (AB) individuals during 5° slope ascent. The TTA group walked with an ESR and separately with a PWR. The PWR produced significantly greater prosthetic ankle plantarflexion and push-off power generation compared to an ESR and more closely matched AB values. The PWR functioned similar to a passive ESR device when transitioning onto the prosthetic limb due to limited prosthetic dorsiflexion, which resulted in similar deviations and compensations. In contrast, when transitioning off the prosthetic limb, increased ankle plantarflexion and push-off power provided by the PWR contributed to decreased intact limb knee extensor power production, lessening demand on the intact limb knee.

  18. Multirecycling of Plutonium from LMFBR Blanket in Standard PWRs Loaded with MOX Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonat Sen; Gilles Youinou

    2013-02-01

    It is now well-known that, from a physics standpoint, Pu, or even TRU (i.e. Pu+M.A.), originating from LEU fuel irradiated in PWRs can also be multirecycled in PWRs using MOX fuel. However, the degradation of the isotopic composition during irradiation necessitates using enriched U in conjunction with the MOX fuel, either homogeneously or heterogeneously, to maintain the Pu (or TRU) content at a level allowing safe operation of the reactor, i.e. below about 10%. This study is related to another possible utilization of the excess Pu produced in the blanket of an LMFBR, namely in a PWR(MOX). In this case, the more Pu is bred in the LMFBR, the more PWR(MOX) reactors it can sustain. The important difference between the Pu coming from the blanket of an LMFBR and that coming from a PWR(LEU) is its isotopic composition: the first contains about 95% fissile isotopes, whereas the second contains only about 65%. As will be shown later, this difference allows a PWR fed by Pu from the LMFBR blanket to operate with natural U instead of the enriched U needed when it is fed by Pu from a PWR(LEU).

  19. Surrogate model approach for improving the performance of reactive transport simulations

    NASA Astrophysics Data System (ADS)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2016-04-01

    Reactive transport models can serve a large number of important geoscientific applications involving underground resources in industry and scientific research. It is common for a reactive transport simulation to consist of at least two coupled simulation models. The first is a hydrodynamics simulator responsible for simulating the flow of groundwater and the transport of solutes. Hydrodynamics simulators are a well-established technology and can be very efficient: when hydrodynamics simulations are performed without coupled geochemistry, their spatial geometries can span millions of elements even when running on desktop workstations. The second is a geochemical simulation model coupled to the hydrodynamics simulator. Geochemical simulation models are much more computationally costly, which makes reactive transport simulations spanning millions of spatial elements very difficult to achieve. To address this problem we propose to replace the coupled geochemical simulation model with a surrogate model. A surrogate is a statistical model created to include only the necessary subset of simulator complexity for a particular scenario. To demonstrate the viability of this approach we tested it on a popular published reactive transport benchmark problem involving 1D calcite transport (Kolditz, 2012). We trained a number of statistical models available through the caret and DiceEval packages for R, to be used as surrogate models, on a randomly sampled subset of the input-output data from the geochemical simulation model used in the original reactive transport simulation. For validation we used the surrogate model to predict the simulator output from the part of the sampled input data that was not used for training.
For this scenario we find that the multivariate adaptive regression splines (MARS) method provides the best trade-off between speed and accuracy. This proof of concept forms an essential step towards building an interactive visual analytics system that enables user-driven, systematic creation of geochemical surrogate models. Such a system would make reactive transport simulations with unprecedented spatial and temporal detail possible. References: Kolditz, O., Görke, U.J., Shao, H. and Wang, W., 2012. Thermo-hydro-mechanical-chemical processes in porous media: benchmarks and examples (Vol. 86). Springer Science & Business Media.
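
    The train-on-a-sample, validate-on-the-holdout workflow described above can be sketched as follows. The original work trained MARS models via R's caret/DiceEval on real geochemistry output; here a toy "simulator" function and a simple polynomial surrogate stand in, purely to illustrate the workflow.

```python
import numpy as np

# Hedged sketch of the surrogate-model workflow: sample a (toy) simulator,
# train a cheap statistical model on part of the samples, validate on the
# held-out rest. The toy_simulator function is a stand-in, not geochemistry.
rng = np.random.default_rng(0)

def toy_simulator(x):
    """Stand-in for an expensive geochemical solver (illustrative only)."""
    return np.exp(-x) + 0.1 * np.sin(5 * x)

x = rng.uniform(0.0, 2.0, 200)
y = toy_simulator(x)

train = np.arange(200) < 150            # 150 training samples
test = ~train                           # 50 held-out validation samples
coeffs = np.polyfit(x[train], y[train], deg=8)   # train the surrogate
pred = np.polyval(coeffs, x[test])               # predict held-out inputs
rmse = float(np.sqrt(np.mean((pred - y[test]) ** 2)))
print(round(rmse, 4))
```

    A small holdout error indicates the surrogate captures the simulator's response over the sampled input range; in the coupled setting, each surrogate call then replaces a far more expensive geochemistry solve.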

  20. Simulation of guided-wave ultrasound propagation in composite laminates: Benchmark comparisons of numerical codes and experiment.

    PubMed

    Leckey, Cara A C; Wheeler, Kevin R; Hafiychuk, Vasyl N; Hafiychuk, Halyna; Timuçin, Doğan A

    2018-03-01

    Ultrasonic wave methods constitute the leading physical mechanism for nondestructive evaluation (NDE) and structural health monitoring (SHM) of solid composite materials, such as carbon fiber reinforced polymer (CFRP) laminates. Computational models of ultrasonic wave excitation, propagation, and scattering in CFRP composites can be extremely valuable in designing practicable NDE and SHM hardware, software, and methodologies that accomplish the desired accuracy, reliability, efficiency, and coverage. The development and application of ultrasonic simulation approaches for composite materials is an active area of research in the field of NDE. This paper presents comparisons of guided wave simulations for CFRP composites implemented using four different simulation codes: the commercial finite element modeling (FEM) packages ABAQUS, ANSYS, and COMSOL, and a custom code executing the Elastodynamic Finite Integration Technique (EFIT). Benchmark comparisons are made between the simulation tools and both experimental laser Doppler vibrometry data and theoretical dispersion curves. Both a pristine case and a delamination-type case (Teflon insert in the experimental specimen) are studied. A summary is given of the accuracy of simulation results and the respective computational performance of the four different simulation tools. Published by Elsevier B.V.

  1. Deepthi Vaidhynathan | NREL

    Science.gov Websites

    Complex Systems Simulation and Optimization Group, working on performance analysis and benchmarking. Research interests: High Performance Computing | Embedded Systems | Microprocessors & Microcontrollers

  2. Benchmarking Data for the Proposed Signature of Used Fuel Casks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauch, Eric Benton

    2016-09-23

    A set of benchmarking measurements to test facets of the proposed extended storage signature was conducted on May 17, 2016. The measurements were designed to test the overall concept of how the proposed signature can be used to identify a used fuel cask based only on the distribution of neutron sources within the cask. To simulate the distribution, 4 Cf-252 sources were chosen and arranged on a 3x3 grid in 3 different patterns, and raw total neutron counts were taken at 6 locations around the grid. This is a very simplified test of the typical geometry studied previously in simulation with simulated used nuclear fuel.
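
    The mapping from a source arrangement to detector totals can be caricatured with a toy inverse-square model. All geometry, emission rates, and patterns below are hypothetical stand-ins, and the sketch ignores scattering and absorption entirely:

```python
import numpy as np

# Hypothetical 3x3 grid of source positions (metres) and 6 detector locations.
grid = np.array([(i, j) for i in (0.0, 0.5, 1.0) for j in (0.0, 0.5, 1.0)])
detectors = np.array([(-0.5, 0.5), (1.5, 0.5), (0.5, -0.5),
                      (0.5, 1.5), (-0.5, -0.5), (1.5, 1.5)])

def detector_totals(source_mask, emission=1.0e6):
    """Total count rate at each detector from the active grid cells, 1/r^2 only."""
    totals = []
    for d in detectors:
        r2 = np.sum((grid[source_mask] - d) ** 2, axis=1)
        totals.append(float(np.sum(emission / (4.0 * np.pi * r2))))
    return np.array(totals)

# Two of the "3 different patterns": 4 sources at the corners vs. 4 on the edges.
corners = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1], bool)
edges = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0], bool)
a, b = detector_totals(corners), detector_totals(edges)
distinct = not np.allclose(a, b)   # the two loadings yield distinguishable signatures
```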

  3. Numerical modeling of fluid and electrical currents through geometries based on synchrotron X-ray tomographic images of reservoir rocks using Avizo and COMSOL

    NASA Astrophysics Data System (ADS)

    Bird, M. B.; Butler, S. L.; Hawkes, C. D.; Kotzer, T.

    2014-12-01

    The use of numerical simulations to model physical processes occurring within subvolumes of rock samples that have been characterized using advanced 3D imaging techniques is becoming increasingly common. Not only do these simulations allow for the determination of macroscopic properties like hydraulic permeability and electrical formation factor, but they also allow the user to visualize processes taking place at the pore scale and they allow for multiple different processes to be simulated on the same geometry. Most efforts to date have used specialized research software for the purpose of simulations. In this contribution, we outline the steps taken to use commercial software Avizo to transform a 3D synchrotron X-ray-derived tomographic image of a rock core sample to an STL (STereoLithography) file which can be imported into the commercial multiphysics modeling package COMSOL. We demonstrate the use of COMSOL to perform fluid and electrical current flow simulations through the pore spaces. The permeability and electrical formation factor of the sample are calculated and compared with laboratory-derived values and benchmark calculations. Although the simulation domains that we were able to model on a desktop computer were significantly smaller than representative elementary volumes, we were able to establish Kozeny-Carman and Archie's Law trends on which laboratory measurements and previous benchmark solutions fall. The rock core samples include a Fontainebleau sandstone used for benchmarking and a marly dolostone sampled from a well in the Weyburn oil field of southeastern Saskatchewan, Canada. Such carbonates are known to have complicated pore structures compared with sandstones, yet we are able to calculate reasonable macroscopic properties. We discuss the computing resources required.
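
    The two trends mentioned are commonly written as the Kozeny-Carman relation for permeability and Archie's law for the formation factor. A minimal sketch with illustrative parameter values (the constants c, a, and m, the porosity, and the specific surface below are generic textbook choices, not values from this study):

```python
def kozeny_carman(phi, S, c=5.0):
    """Permeability (m^2) from porosity phi and specific surface S (1/m);
    c is the dimensionless Kozeny constant."""
    return phi ** 3 / (c * S ** 2 * (1.0 - phi) ** 2)

def archie_formation_factor(phi, a=1.0, m=2.0):
    """Electrical formation factor, Archie's law: F = a * phi^(-m)."""
    return a * phi ** (-m)

phi = 0.20                         # hypothetical sandstone-like porosity
k = kozeny_carman(phi, S=1.0e5)    # S chosen for illustration only
F = archie_formation_factor(phi)   # F = 25 for phi = 0.2, a = 1, m = 2
```

    Plotting simulated (k, F) pairs for several subvolumes against these curves is one way to check whether pore-scale results fall on the expected laboratory trends.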

  4. The Stock Market Game: A Simulation of Stock Market Trading. Grades 5-8.

    ERIC Educational Resources Information Center

    Draze, Dianne

    This guide to a unit on a simulation game about the stock market contains an instructional text and two separate simulations. Through directed lessons and reproducible worksheets, the unit teaches students about business ownership, stock exchanges, benchmarks, commissions, why prices change, the logistics of buying and selling stocks, and how to…

  5. Modeling and Simulation With Operational Databases to Enable Dynamic Situation Assessment & Prediction

    DTIC Science & Technology

    2010-11-01

    subsections discuss the design of the simulations. 3.12.1 Lanchester5D Simulation A Lanchester simulation was developed to conduct performance...benchmarks using the WarpIV Kernel and HyperWarpSpeed. The Lanchester simulation contains a user-definable number of grid cells in which blue and red...forces engage in battle using Lanchester equations. Having a user-definable number of grid cells enables the simulation to be stressed with high entity
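
    Lanchester square-law attrition, in which each side's loss rate is proportional to the opposing force size, can be sketched for a single grid cell as follows (forward-Euler integration; all force sizes and effectiveness coefficients are hypothetical, not values from the report):

```python
def lanchester(blue, red, b_eff, r_eff, dt=0.01, t_end=50.0):
    """Lanchester square law for one grid cell: dB/dt = -r_eff * R,
    dR/dt = -b_eff * B, integrated with forward Euler until annihilation."""
    t = 0.0
    while t < t_end and blue > 0.0 and red > 0.0:
        blue, red = blue - r_eff * red * dt, red - b_eff * blue * dt
        t += dt
    return max(blue, 0.0), max(red, 0.0), t

# Equal effectiveness: the square law predicts sqrt(100^2 - 80^2) = 60 survivors.
blue, red, t = lanchester(100.0, 80.0, b_eff=0.05, r_eff=0.05)
```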

  6. Multivariate analysis of gamma spectra to characterize used nuclear fuel

    DOE PAGES

    Coble, Jamie; Orton, Christopher; Schwantes, Jon

    2017-01-17

    The Multi-Isotope Process (MIP) Monitor provides an efficient means to monitor the process conditions in used nuclear fuel reprocessing facilities to support process verification and validation. The MIP Monitor applies multivariate analysis to gamma spectroscopy of key stages in the reprocessing stream in order to detect small changes in the gamma spectrum, which may indicate changes in process conditions. This research extends the MIP Monitor by characterizing a used fuel sample after initial dissolution according to the type of reactor of origin (pressurized or boiling water reactor; PWR and BWR, respectively), initial enrichment, burnup, and cooling time. Simulated gamma spectra were used in this paper to develop and test three fuel characterization algorithms. The classification and estimation models employed are based on the partial least squares regression (PLS) algorithm. A PLS discriminant analysis model was developed which perfectly classified reactor type for the three PWR and three BWR reactor designs studied. Locally weighted PLS models were fitted on-the-fly to estimate the remaining fuel characteristics. For the simulated gamma spectra considered, burnup was predicted with 0.1% root mean squared percent error (RMSPE) and both cooling time and initial enrichment with approximately 2% RMSPE. Finally, this approach to automated fuel characterization can be used to independently verify operator declarations of used fuel characteristics and to inform the MIP Monitor anomaly detection routines at later stages of the fuel reprocessing stream to improve sensitivity to changes in operational parameters that may indicate issues with operational control or malicious activities.
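
    The core of the approach is PLS regression. A minimal textbook PLS1 (NIPALS) implementation on synthetic stand-in data (not the MIP Monitor code, and with made-up "spectra") might look like:

```python
import numpy as np

def pls1(X, y, n_components):
    """Minimal PLS1 via NIPALS: returns regression coefficients b such that
    centered(X) @ b approximates centered(y)."""
    X = X - X.mean(0)
    y = y - y.mean()
    W, P, Q = [], [], []
    Xr, yr = X.copy(), y.copy()
    for _ in range(n_components):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)          # weight vector
        t = Xr @ w                      # score vector
        p = Xr.T @ t / (t @ t)          # X loading
        q = (yr @ t) / (t @ t)          # y loading
        Xr = Xr - np.outer(t, p)        # deflate
        yr = yr - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.inv(P.T @ W) @ Q

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))                  # stand-in spectral features
beta = np.zeros(20); beta[:3] = [2.0, -1.0, 0.5]
y = X @ beta + 0.01 * rng.standard_normal(100)      # stand-in burnup response
b = pls1(X, y, n_components=5)
pred = (X - X.mean(0)) @ b + y.mean()
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

    PLS discriminant analysis reuses the same machinery with class-indicator columns as the response.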

  7. EMERALD REV. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunot, W.K.; Fray, R.R.; Gillespie, S.G.

    1974-03-01

    The EMERALD program is designed for the calculation of radiation releases and exposures resulting from abnormal operation of a large pressurized water reactor (PWR). The approach used in EMERALD is similar to an analog simulation of a real system. Each component or volume in the plant which contains a radioactive material is represented by a subroutine which keeps track of the production, transfer, decay and absorption of radioactivity in that volume. During the course of the analysis of an accident, activity is transferred from subroutine to subroutine in the program as it would be transferred from place to place in the plant. For example, in the calculation of the doses resulting from a loss-of-coolant accident the program first calculates the activity built up in the fuel before the accident, then releases some of this activity to the containment volume. Some of this activity is then released to the atmosphere. The rates of transfer, leakage, production, cleanup, decay, and release are read in as input to the program. Subroutines are also included which calculate the on-site and off-site radiation exposures at various distances for individual isotopes and sums of isotopes. The program contains a library of physical data for the twenty-five isotopes of most interest in licensing calculations, and other isotopes can be added or substituted. Because of the flexible nature of the simulation approach, the EMERALD program can be used for most calculations involving the production and release of radioactive materials during abnormal operation of a PWR. These include design, operational, and licensing studies.
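
    The compartment bookkeeping described (production, transfer, decay, release) amounts to a linear system of rate equations. A toy three-volume sketch with hypothetical rate constants, not EMERALD's actual rates or isotope library:

```python
import numpy as np

# Hypothetical rates (1/h): fuel-to-containment release, containment-to-atmosphere
# leak, and radioactive decay (I-131-like half-life of ~8.02 days).
lam = np.log(2) / (8.02 * 24)
k_fc, k_ca = 0.5, 0.01

def step(state, dt):
    """One forward-Euler step of the fuel -> containment -> atmosphere chain."""
    fuel, cont, atm = state
    d_fuel = -(k_fc + lam) * fuel
    d_cont = k_fc * fuel - (k_ca + lam) * cont
    d_atm = k_ca * cont - lam * atm
    return state + dt * np.array([d_fuel, d_cont, d_atm])

state = np.array([1.0e6, 0.0, 0.0])   # all activity starts in the fuel
dt, hours = 0.01, 24.0
for _ in range(int(hours / dt)):
    state = step(state, dt)
```

    Dose subroutines would then operate on the atmospheric-release term at each distance of interest.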

  9. Performance evaluation of two-stage fuel cycle from SFR to PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fei, T.; Hoffman, E.A.; Kim, T.K.

    2013-07-01

    One potential fuel cycle option being considered is a two-stage fuel cycle system involving the continuous recycle of transuranics in a fast reactor and the use of bred plutonium in a thermal reactor. The first stage is a Sodium-cooled Fast Reactor (SFR) fuel cycle with metallic U-TRU-Zr fuel. The SFRs need to have a breeding ratio greater than 1.0 in order to produce fissile material for use in the second stage. The second stage is a PWR fuel cycle with uranium and plutonium mixed oxide fuel based on the design and performance of the current state-of-the-art commercial PWRs with an average discharge burnup of 50 MWd/kgHM. This paper evaluates the possibility of this fuel cycle option and discusses its fuel cycle performance characteristics. The study focuses on an equilibrium stage of the fuel cycle. Results indicate that, in order to avoid a positive coolant void reactivity feedback in the stage-2 PWR, the reactor requires high-quality plutonium from the first stage, and the minor actinides in the PWR discharge fuel need to be separated and sent back to the stage-1 SFR. The electricity-sharing ratio between the two stages is 87.0% (SFR) to 13.0% (PWR) for a TRU inventory ratio (the mass of TRU in the discharge fuel divided by the mass of TRU in the fresh fuel) of 1.06. A sensitivity study indicated that by increasing the TRU inventory ratio to 1.13, the electricity generation fraction of the stage-2 PWR increases to 28.9%. The two-stage fuel cycle system considered in this study was found to provide a high uranium utilization (>80%). (authors)

  10. Association between gestational weight gain according to body mass index and postpartum weight in a large cohort of Danish women.

    PubMed

    Rode, Line; Kjærgaard, Hanne; Ottesen, Bent; Damm, Peter; Hegaard, Hanne K

    2012-02-01

    Our aim was to investigate the association between gestational weight gain (GWG) and postpartum weight retention (PWR) in pre-pregnancy underweight, normal weight, overweight or obese women, with emphasis on the American Institute of Medicine (IOM) recommendations. We performed secondary analyses on data based on questionnaires from 1,898 women from the "Smoke-free Newborn Study" conducted 1996-1999 at Hvidovre Hospital, Denmark. The relationship between GWG and PWR was examined according to BMI as a continuous variable and in four groups. The association between PWR and GWG according to IOM recommendations was tested by linear regression analysis, and the association between PWR ≥ 5 kg (11 lbs) and GWG by logistic regression analysis. Mean GWG and mean PWR were constant for all BMI units until 26-27 kg/m^2. After this cut-off, mean GWG and mean PWR decreased with increasing BMI. Nearly 40% of normal weight, 60% of overweight and 50% of obese women gained more than recommended during pregnancy. For normal weight and overweight women with GWG above recommendations, the OR of gaining ≥ 5 kg (11 lbs) 1 year postpartum was 2.8 (95% CI 2.0-4.0) and 2.8 (95% CI 1.3-6.2), respectively, compared to women with GWG within recommendations. GWG above IOM recommendations significantly increases normal weight, overweight and obese women's risk of retaining weight 1 year after delivery. Health personnel face a challenge in prenatal counseling as 40-60% of these women gain more weight than recommended for their BMI. As GWG is potentially modifiable, our study should be followed by intervention studies focusing on GWG.
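
    The reported odds ratios with 95% confidence intervals follow the standard 2x2-table calculation (Woolf's method). A sketch with hypothetical counts chosen only to land near the paper's OR of 2.8; these are not the study's actual cell counts:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table (Woolf's method):
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: GWG above recommendations vs. within, PWR >= 5 kg vs. not.
or_, lo, hi = odds_ratio_ci(140, 360, 60, 440)
```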

  11. Higher representations on the lattice: Numerical simulations, SU(2) with adjoint fermions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Debbio, Luigi; Patella, Agostino; Pica, Claudio

    2010-05-01

    We discuss the lattice formulation of gauge theories with fermions in arbitrary representations of the color group and present in detail the implementation of the hybrid Monte Carlo (HMC)/rational HMC algorithm for simulating dynamical fermions. We discuss the validation of the implementation through an extensive set of tests and the stability of simulations by monitoring the distribution of the lowest eigenvalue of the Wilson-Dirac operator. Working with two flavors of Wilson fermions in the adjoint representation, benchmark results for realistic lattice simulations are presented. Runs are performed on different lattice sizes ranging from 4^3 x 8 to 24^3 x 64 sites. For the two smallest lattices we also report the measured values of benchmark mesonic observables. These results can be used as a baseline for rapid cross-checks of simulations in higher representations. The results presented here are the first steps toward more extensive investigations with controlled systematic errors, aiming at a detailed understanding of the phase structure of these theories, and of their viability as candidates for strong dynamics beyond the standard model.
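
    The HMC algorithm mentioned can be illustrated on the simplest possible target, a 1D Gaussian, rather than a lattice gauge field. This toy sketch (leapfrog integration plus a Metropolis accept/reject step) shows only the structure of the method; lattice HMC evolves entire field configurations with a far more complex action:

```python
import numpy as np

def hmc_sample(n_samples, eps=0.1, n_leapfrog=20, seed=2):
    """Toy HMC for a 1D standard Gaussian target, i.e. potential U(x) = x^2/2."""
    rng = np.random.default_rng(seed)
    grad = lambda x: x                 # dU/dx for U(x) = x^2/2
    x, out = 0.0, []
    for _ in range(n_samples):
        p = rng.standard_normal()      # refresh momentum
        x_new, p_new = x, p
        p_new -= 0.5 * eps * grad(x_new)       # leapfrog half step
        for _ in range(n_leapfrog - 1):
            x_new += eps * p_new
            p_new -= eps * grad(x_new)
        x_new += eps * p_new
        p_new -= 0.5 * eps * grad(x_new)       # final half step
        dH = (x_new**2 - x**2) / 2 + (p_new**2 - p**2) / 2
        if rng.random() < np.exp(-dH):         # Metropolis accept/reject
            x = x_new
        out.append(x)
    return np.array(out)

samples = hmc_sample(5000)
```

    Exact energy conservation by the integrator would give 100% acceptance; the Metropolis step corrects for the leapfrog discretization error.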

  12. Groundwater flow with energy transport and water-ice phase change: Numerical simulations, benchmarks, and application to freezing in peat bogs

    USGS Publications Warehouse

    McKenzie, J.M.; Voss, C.I.; Siegel, D.I.

    2007-01-01

    In northern peatlands, subsurface ice formation is an important process that can control heat transport, groundwater flow, and biological activity. Temperature was measured over one and a half years in a vertical profile in the Red Lake Bog, Minnesota. To successfully simulate the transport of heat within the peat profile, the U.S. Geological Survey's SUTRA computer code was modified. The modified code simulates fully saturated, coupled porewater-energy transport, with freezing and melting porewater, and includes proportional heat capacity and thermal conductivity of water and ice, decreasing matrix permeability due to ice formation, and latent heat. The model is verified by correctly simulating the Lunardini analytical solution for ice formation in a porous medium with a mixed ice-water zone. The modified SUTRA model correctly simulates the temperature and ice distributions in the peat bog. Two possible benchmark problems for groundwater and energy transport with ice formation and melting are proposed that may be used by other researchers for code comparison. © 2006 Elsevier Ltd. All rights reserved.

  13. Recent operating experiences with steam generators in Japanese NPPs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashima, Seiji

    1997-02-01

    In 1994, the Genkai-3 of Kyushu Electric Power Co., Inc. and the Ikata-3 of Shikoku Electric Power Co., Inc. started commercial operation, and now 22 PWR plants are being operated in Japan. Since the first PWR plant started operation, Japanese PWR plants have accumulated approximately 280 reactor-years of operating experience. During that period, many tube degradations have been experienced in steam generators (SGs), and in 1991 a steam generator tube rupture (SGTR) occurred in the Mihama-2 of Kansai Electric Power Co., Inc. However, the occurrence of tube degradation of SGs has decreased owing to instructions from MITI as the regulatory authority, the efforts of the electric utilities, and technical support from the SG manufacturers. Here the author describes recent SG experience in Japan on the following points: (1) recent operating experiences; (2) lessons learned from the Mihama-2 SGTR; (3) SG replacement; (4) safety regulations on SGs; (5) research and development on SGs.

  14. Design study of long-life PWR using thorium cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subkhi, Moh. Nurul; Su'ud, Zaki; Waris, Abdul

    2012-06-06

    A design study of a long-life Pressurized Water Reactor (PWR) using the thorium cycle has been performed. The thorium cycle in general has a higher conversion ratio in the thermal spectrum domain than the uranium cycle. Cell, burnup, and multigroup diffusion calculations were performed with the PIJ-CITATION-SRAC code using libraries based on JENDL 3.2. The neutronic analysis of the infinite cell calculation shows that 231Pa performs better than 237Np as a burnable poison in the thorium fuel system. A thorium oxide system with 8% 233U enrichment and 7.6-8% 231Pa is the most suitable fuel for a small long-life PWR core because it gives a reactivity swing of less than 1% Δk/k and a longer burnup period (more than 20 years). Using this result, a small long-life PWR core can be designed for long-term operation with reduced excess reactivity as low as 0.53% Δk/k and reduced power peaking during its operation.

  15. What are the assets and weaknesses of HFO detectors? A benchmark framework based on realistic simulations

    PubMed Central

    Pizzo, Francesca; Bartolomei, Fabrice; Wendling, Fabrice; Bénar, Christian-George

    2017-01-01

    High-frequency oscillations (HFO) have been suggested as biomarkers of epileptic tissues. While visual marking of these short and small oscillations is tedious and time-consuming, automatic HFO detectors have not yet achieved broad consensus. Even though detectors have been shown to perform well when validated against visual marking, the large number of false detections due to their lack of robustness hinders their clinical application. In this study, we developed a validation framework based on realistic and controlled simulations to quantify precisely the assets and weaknesses of current detectors. We constructed a dictionary of synthesized elements—HFOs and epileptic spikes—from different patients and brain areas by extracting these elements from the original data using discrete wavelet transform coefficients. These elements were then added to their corresponding simulated background activity (preserving patient- and region-specific spectra). We tested five existing detectors against this benchmark. Unlike other studies comparing detectors, we not only ranked them according to their performance but also investigated the reasons leading to these results. Our simulations, thanks to their realism and their variability, enabled us to highlight unreported issues of current detectors: (1) the lack of robust estimation of the background activity, (2) the underestimated impact of the 1/f spectrum, and (3) the inadequate criteria defining an HFO. We believe that our benchmark framework could be a valuable tool to translate HFOs into a clinical environment. PMID:28406919
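
    The benchmark idea, injecting synthesized elements into realistic background and testing detectors against known ground truth, can be caricatured in a few lines. Everything below (sampling rate, burst parameters, the crude band-energy detector) is a hypothetical stand-in for the paper's far more careful framework:

```python
import numpy as np

fs, n = 2000, 4000
rng = np.random.default_rng(3)

# 1/f-shaped background via spectral shaping of white noise.
white = np.fft.rfft(rng.standard_normal(n))
freqs = np.fft.rfftfreq(n, 1 / fs)
shape = np.zeros_like(freqs)
shape[1:] = 1.0 / np.sqrt(freqs[1:])
background = np.fft.irfft(white * shape, n)
background /= background.std()

# Synthetic "HFO": a short Gaussian-windowed 250 Hz burst at a known time.
t = np.arange(n) / fs
center = 1.0
burst = 2.0 * np.exp(-((t - center) ** 2) / (2 * 0.01 ** 2)) \
        * np.sin(2 * np.pi * 250 * (t - center))
signal = background + burst

# Crude detector: peak of band-limited (200-300 Hz) energy.
spec = np.fft.rfft(signal)
band = (freqs >= 200) & (freqs <= 300)
filtered = np.fft.irfft(np.where(band, spec, 0), n)
detected = int(np.argmax(filtered ** 2))
```

    Because the ground-truth burst time is known by construction, detector hits and false alarms can be scored exactly, which is the point of a simulation-based benchmark.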

  16. High-order continuum kinetic method for modeling plasma dynamics in phase space

    DOE PAGES

    Vogman, G. V.; Colella, P.; Shumlak, U.

    2014-12-15

    Continuum methods offer a high-fidelity means of simulating plasma kinetics. While computationally intensive, these methods are advantageous because they can be cast in conservation-law form, are not susceptible to noise, and can be implemented using high-order numerical methods. Advances in continuum method capabilities for modeling kinetic phenomena in plasmas require the development of validation tools in higher dimensional phase space and an ability to handle non-Cartesian geometries. To that end, a new benchmark for validating Vlasov-Poisson simulations in 3D (x, vx, vy) is presented. The benchmark is based on the Dory-Guest-Harris instability and is successfully used to validate a continuum finite volume algorithm. To address challenges associated with non-Cartesian geometries, unique features of cylindrical phase space coordinates are described. Preliminary results of continuum kinetic simulations in 4D (r, z, vr, vz) phase space are presented.

  17. Benchmarks for time-domain simulation of sound propagation in soft-walled airways: Steady configurations

    PubMed Central

    Titze, Ingo R.; Palaparthi, Anil; Smith, Simeon L.

    2014-01-01

    Time-domain computer simulation of sound production in airways is a widely used tool, both for research and synthetic speech production technology. Speed of computation is generally the rationale for one-dimensional approaches to sound propagation and radiation. Transmission line and wave-reflection (scattering) algorithms are used to produce formant frequencies and bandwidths for arbitrarily shaped airways. Some benchmark graphs and tables are provided for formant frequencies and bandwidth calculations based on specific mathematical terms in the one-dimensional Navier–Stokes equation. Some rules are provided here for temporal and spatial discretization in terms of desired accuracy and stability of the solution. Kinetic losses, which have been difficult to quantify in frequency-domain simulations, are quantified here on the basis of the measurements of Scherer, Torkaman, Kucinschi, and Afjeh [(2010). J. Acoust. Soc. Am. 128(2), 828–838]. PMID:25480071
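
    A standard sanity check for such 1D airway propagation codes is the uniform tube closed at the glottis and open at the lips, whose resonances follow the quarter-wavelength formula F_n = (2n - 1) * c / (4 * L). A minimal sketch (tube length and sound speed are generic illustrative values, not figures from this paper):

```python
def uniform_tube_formants(length_m, c=350.0, n=3):
    """First n formant frequencies (Hz) of a uniform closed-open tube:
    F_n = (2n - 1) * c / (4 * L)."""
    return [(2 * k - 1) * c / (4 * length_m) for k in range(1, n + 1)]

# ~17.5 cm vocal-tract length gives formants near 500, 1500, 2500 Hz,
# the classic neutral-vowel check for a 1D simulation code.
formants = uniform_tube_formants(0.175)
```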

  18. Development of ECT/UT inspection system for bottom mounted instrumentation nozzle of PWR reactor vessels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanaka, H.; Fukui, S.; Iwahashi, Y.

    1994-12-31

    An inspection technique and tooling for the Bottom Mounted Instrumentation (BMI) nozzles of PWR plants were developed as a countermeasure to the leakage accident at an in-core instrument nozzle of Hamaoka-1 (BWR). MHI, targeting PWR reactor vessels, carried out the following development: (1) an ECT/UT multi-sensored probe; (2) an inspection system; and (3) a data processing system. The inspection system was functionally tested using a full-scale mock-up. The functional test confirmed that the system is very effective and promising for actual on-site application.

  19. Plasmon waveguide resonance sensor using an Au-MgF2 structure.

    PubMed

    Zhou, Yanfei; Zhang, Pengfei; He, Yonghong; Xu, Zihao; Liu, Le; Ji, Yanhong; Ma, Hui

    2014-10-01

    We report an Au-MgF2 plasmon waveguide resonance (PWR) sensor in this work. The characteristics of this sensing structure are compared with a surface plasmon resonance (SPR) structure theoretically and experimentally. The transverse-magnetic-polarized PWR sensor has a refractive index resolution of 9.3 × 10^-7 RIU, which is 6 times smaller than that of SPR at the incident light wavelength of 633 nm, and the transverse-electric-polarized PWR sensor has a refractive index resolution of 3.0 × 10^-6 RIU. This high-resolution sensor is easy to build and is less sensitive to film coating deviations.

  20. Global Gridded Crop Model Evaluation: Benchmarking, Skills, Deficiencies and Implications.

    NASA Technical Reports Server (NTRS)

    Muller, Christoph; Elliott, Joshua; Chryssanthacopoulos, James; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Folberth, Christian; Glotter, Michael; Hoek, Steven; et al.

    2017-01-01

    Crop models are increasingly used to simulate crop yields at the global scale, but so far there is no general framework on how to assess model performance. Here we evaluate the simulation results of 14 global gridded crop modeling groups that have contributed historic crop yield simulations for maize, wheat, rice and soybean to the Global Gridded Crop Model Intercomparison (GGCMI) of the Agricultural Model Intercomparison and Improvement Project (AgMIP). Simulation results are compared to reference data at global, national and grid cell scales and we evaluate model performance with respect to time series correlation, spatial correlation and mean bias. We find that global gridded crop models (GGCMs) show mixed skill in reproducing time series correlations or spatial patterns at the different spatial scales. Generally, maize, wheat and soybean simulations of many GGCMs are capable of reproducing larger parts of observed temporal variability (time series correlation coefficients (r) of up to 0.888 for maize, 0.673 for wheat and 0.643 for soybean at the global scale) but rice yield variability cannot be well reproduced by most models. Yield variability can be well reproduced for most major producing countries by many GGCMs and for all countries by at least some. A comparison with gridded yield data and a statistical analysis of the effects of weather variability on yield variability shows that the ensemble of GGCMs can explain more of the yield variability than an ensemble of regression models for maize and soybean, but not for wheat and rice. We identify future research needs in global gridded crop modeling and for all individual crop modeling groups. In the absence of a purely observation-based benchmark for model evaluation, we propose that the best performing crop model per crop and region establishes the benchmark for all others, and modelers are encouraged to investigate how crop model performance can be increased. We make our evaluation system accessible to all crop modelers so that other modeling groups can also test their model performance against the reference data and the GGCMI benchmark.
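
    The headline metrics, time series correlation and mean bias, are straightforward to compute. A sketch on hypothetical yield data (not GGCMI reference data; the bias and noise levels below are invented for illustration):

```python
import numpy as np

def evaluate(sim, obs):
    """Time-series correlation coefficient r and mean bias of simulated
    versus observed annual yields."""
    r = float(np.corrcoef(sim, obs)[0, 1])
    bias = float(np.mean(sim - obs))
    return r, bias

rng = np.random.default_rng(4)
obs = 5.0 + 0.5 * rng.standard_normal(30)         # hypothetical yields (t/ha), 30 years
sim = obs + 0.3 + 0.2 * rng.standard_normal(30)   # a model with a +0.3 t/ha bias
r, bias = evaluate(sim, obs)
```

    A model can have high r (it tracks interannual variability) while still carrying a systematic bias, which is why both metrics are reported separately.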

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonen, F.A.; Johnson, K.I.; Liebetrau, A.M.

    The VISA-II (Vessel Integrity Simulation Analysis) code was originally developed as part of the NRC staff evaluation of pressurized thermal shock. VISA-II uses Monte Carlo simulation to evaluate the failure probability of a pressurized water reactor (PWR) pressure vessel subjected to a pressure and thermal transient specified by the user. Linear elastic fracture mechanics methods are used to model crack initiation and propagation. Parameters for initial crack size and location, copper content, initial reference temperature of the nil-ductility transition, fluence, crack-initiation fracture toughness, and arrest fracture toughness are treated as random variables. This report documents an upgraded version of the original VISA code as described in NUREG/CR-3384. Improvements include a treatment of cladding effects, a more general simulation of flaw size, shape and location, a simulation of inservice inspection, an updated simulation of the reference temperature of the nil-ductility transition, and treatment of vessels with multiple welds and initial flaws. The code has been extensively tested and verified and is written in FORTRAN for ease of installation on different computers. 38 refs., 25 figs.
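
    The Monte Carlo structure of such a calculation (sample random material and loading parameters, then count the fraction of trials in which the applied stress intensity exceeds the fracture toughness) can be sketched as follows. The distributions and parameter values are invented for illustration and are not VISA-II's models:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# Hypothetical lognormal applied stress intensity K_I and normal
# crack-initiation fracture toughness K_Ic, both in MPa*sqrt(m).
K_I = rng.lognormal(mean=np.log(60.0), sigma=0.25, size=n)
K_Ic = rng.normal(loc=120.0, scale=20.0, size=n)

# Failure criterion for one trial: applied K_I exceeds toughness K_Ic.
p_fail = float(np.mean(K_I > K_Ic))
```

    A production code like VISA-II layers many more sampled variables (flaw geometry, fluence, copper content, transient history) on top of this same counting scheme.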

  2. Delay Tolerant Networking - Bundle Protocol Simulation

    NASA Technical Reports Server (NTRS)

    SeGui, John; Jenning, Esther

    2006-01-01

    In this paper, we report on the addition of MACHETE models needed to support DTN, namely the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance. We demonstrate the use of the DTN protocol and discuss statistics gathered concerning the total time needed to simulate numerous bundle transmissions.

  3. A Modular Simulation Framework for Assessing Swarm Search Models

    DTIC Science & Technology

    2014-09-01

    SUBTITLE A MODULAR SIMULATION FRAMEWORK FOR ASSESSING SWARM SEARCH MODELS 5. FUNDING NUMBERS 6. AUTHOR(S) Blake M. Wanier 7. PERFORMING ORGANIZATION...Numerical studies demonstrate the ability to leverage the developed simulation and analysis framework to investigate three canonical swarm search models ...as benchmarks for future exploration of more sophisticated swarm search scenarios. 14. SUBJECT TERMS Swarm Search, Search Theory, Modeling Framework

  4. Computer simulation to predict energy use, greenhouse gas emissions and costs for production of fluid milk using alternative processing methods

    USDA-ARS?s Scientific Manuscript database

    Computer simulation is a useful tool for benchmarking the electrical and fuel energy consumption and water use in a fluid milk plant. In this study, a computer simulation model of the fluid milk process based on high temperature short time (HTST) pasteurization was extended to include models for pr...

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Nicholas R.; Carlsen, Brett W.; Dixon, Brent W.

    Dynamic fuel cycle simulation tools are intended to model holistic transient nuclear fuel cycle scenarios. As with all simulation tools, fuel cycle simulators require verification through unit tests, benchmark cases, and integral tests. Model validation is a vital aspect as well. Although comparative studies have been performed, there is no comprehensive unit test and benchmark library for fuel cycle simulator tools. The objective of this paper is to identify the must-test functionalities of a fuel cycle simulator tool within the context of specific problems of interest to the Fuel Cycle Options Campaign within the U.S. Department of Energy's Office of Nuclear Energy. The approach in this paper identifies the features needed to cover the range of promising fuel cycle options identified in the DOE-NE Fuel Cycle Evaluation and Screening (E&S) and categorizes these features to facilitate prioritization. Features were categorized as essential functions, integrating features, and exemplary capabilities. One objective of this paper is to propose a library of unit tests applicable to each of the essential functions. Another underlying motivation for this paper is to encourage an international dialog on the functionalities and standard test methods for fuel cycle simulator tools.

  6. Suggestion on the safety classification of spent fuel dry storage in China’s pressurized water reactor nuclear power plant

    NASA Astrophysics Data System (ADS)

    Liu, Ting; Qu, Yunhuan; Meng, De; Zhang, Qiaoer; Lu, Xinhua

    2018-01-01

    Spent fuel from China's pressurized water reactors (PWRs) is currently held in wet storage. With the rapid development of the nuclear power industry, China's nuclear power plants (NPPs) will be unable to keep up with spent fuel production. The world's major nuclear power countries currently use dry storage for spent fuel, so in recent years China has focused on adding spent fuel dry storage systems, and some PWR NPPs are ready to apply for them. Spent fuel dry storage facilities at PWRs also require safety classification, but China has no standard for classifying them. Because dry storage facilities are not part of the NPP itself, the classification standard for China's NPPs is not applicable. This paper proposes a safety classification for spent fuel dry storage at China's PWR NPPs, based on a study of the safety classification principles in "Classification for the items of pressurized water reactor nuclear power plants (GB/T 17569-2013)" and of the safety classification of spent fuel dry storage systems in the United States' NUREG/CR-6407.

  7. Annual report, FY 1979 Spent fuel and fuel pool component integrity.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, A.B. Jr.; Bailey, W.J.; Schreiber, R.E.

    International meetings under the BEFAST program and under INFCE Working Group No. 6 during 1978 and 1979 continue to indicate that no cases of fuel cladding degradation have developed on pool-stored fuel from water reactors. A section from a spent fuel rack stand, exposed for 1.5 y in the Yankee Rowe (PWR) pool, had 0.001- to 0.003-in.-deep (25- to 75-µm) intergranular corrosion in weld heat-affected zones but no evidence of stress corrosion cracking. A section of a 304 stainless steel spent fuel storage rack exposed 6.67 y in the Point Beach reactor (PWR) spent fuel pool showed no significant corrosion. A section of 304 stainless steel 8-in.-dia pipe from the Three Mile Island No. 1 (PWR) spent fuel pool heat exchanger plumbing developed a through-wall crack. The crack was intergranular, initiating from the inside surface in a weld heat-affected zone. The zone where the crack occurred was severely sensitized during field welding. The Kraftwerk Union (Erlangen, GFR) disassembled a stainless-steel fuel-handling machine that operated for 12 y in a PWR (boric acid) spent fuel pool. There was no evidence of deterioration, and the fuel-handling machine was reassembled for further use. A spent fuel pool at a Swedish PWR was decontaminated; the procedure is outlined in this report.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohr, C.L.; Rausch, W.N.; Hesson, G.M.

    The LOCA Simulation Program in the NRU reactor is the first set of experiments to provide data on the behavior of full-length, nuclear-heated PWR fuel bundles during the heatup, reflood, and quench phases of a loss-of-coolant accident (LOCA). This paper compares the temperature-time histories of 4 experimental test cases with predictions from 4 computer codes: CE-THERM, FRAP-T5, GT3-FLECHT, and TRUMP-FLECHT. The preliminary comparisons between prediction and experiment show that state-of-the-art fuel codes have large uncertainties and are not necessarily conservative in predicting peak temperatures, turnaround times, and bundle quench times.

  9. Thermal modeling of a vertical dry storage cask for used nuclear fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jie; Liu, Yung Y.

    2016-05-01

    Thermal modeling of temperature profiles of dry casks has been identified as a high-priority item in a U.S. Department of Energy gap analysis. In this work, a three-dimensional model of a vertical dry cask has been constructed for computer simulation using the ANSYS/FLUENT code. The vertical storage cask contains a welded canister for 32 Pressurized Water Reactor (PWR) used-fuel assemblies with a total decay heat load of 34 kW. To simplify thermal calculations, an effective thermal conductivity model for a 17 x 17 PWR used (or spent)-fuel assembly was developed and used in the simulation of thermal performance. The effects of canister fill gas (helium or nitrogen), internal pressure (1-6 atm), and basket material (stainless steel or aluminum alloy) were studied to determine the peak cladding temperature (PCT) and the canister surface temperatures (CSTs). The results showed that high thermal conductivity of the basket material greatly enhances heat transfer and reduces the PCT. The results also showed that natural convection affects both the PCT and the CST profile, while the latter depends strongly on the type of fill gas and the canister internal pressure. Of particular interest to condition and performance monitoring is the identification of canister locations where significant temperature change occurs after a canister is breached and the fill gas changes from high-pressure helium to ambient air. This study provided insight into the thermal performance of a vertical storage cask containing high-burnup fuel, and helped advance the concept of monitoring CSTs as a means to detect helium leakage from a welded canister. The effects of blockage of the air inlet vents on the cask's thermal performance were also studied. The simulations were validated by comparing the results against data obtained from temperature measurements of a commercial cask.

  10. PID controller tuning using metaheuristic optimization algorithms for benchmark problems

    NASA Astrophysics Data System (ADS)

    Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.

    2017-11-01

    This paper finds optimal PID controller parameters using particle swarm optimization (PSO), a genetic algorithm (GA), and simulated annealing (SA). The algorithms were applied, through simulation of a chemical process and an electrical system, to tune the PID controller. Two different fitness functions, integral time absolute error (ITAE) and time-domain specifications, were chosen and used with PSO, GA, and SA while tuning the controller. The proposed algorithms are applied to two benchmark problems: a coupled tank system and a DC motor. Finally, a comparative study across the algorithms has been carried out based on best cost, number of iterations, and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system and each fitness function.
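    The workflow described above (simulate the closed loop, score it with ITAE, and let a metaheuristic search the gain space) can be sketched in a few lines. The first-order plant, gain bounds, and annealing schedule below are hypothetical stand-ins for illustration, not the paper's benchmark models:

```python
import math
import random

def itae(gains, tau=0.5, dt=0.01, t_end=5.0):
    """Integral Time Absolute Error for a unit step on a first-order plant
    dy/dt = (u - y)/tau under PID control (explicit Euler integration)."""
    kp, ki, kd = gains
    y, integ, prev_err = 0.0, 0.0, 1.0
    cost, t = 0.0, 0.0
    while t < t_end:
        err = 1.0 - y                      # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += (u - y) / tau * dt            # plant update
        cost += t * abs(err) * dt          # ITAE accumulation
        prev_err = err
        t += dt
    return cost

def tune_sa(iters=2000, t0=1.0, seed=1):
    """Simulated annealing over (Kp, Ki, Kd) with a linear cooling schedule."""
    rng = random.Random(seed)
    x = [1.0, 0.1, 0.01]                   # initial guess
    fx = itae(x)
    best, fbest = x[:], fx
    for k in range(iters):
        temp = t0 * (1.0 - k / iters) + 1e-6
        cand = [max(0.0, g + rng.gauss(0, 0.2)) for g in x]
        fc = itae(cand)
        # Metropolis acceptance: always take improvements, sometimes worse moves
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x[:], fx
    return best, fbest
```

The same `itae` cost function could be plugged into a PSO or GA loop unchanged; only the search strategy differs.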

  11. Optimization and benchmarking of a perturbative Metropolis Monte Carlo quantum mechanics/molecular mechanics program

    NASA Astrophysics Data System (ADS)

    Feldt, Jonas; Miranda, Sebastião; Pratas, Frederico; Roma, Nuno; Tomás, Pedro; Mata, Ricardo A.

    2017-12-01

    In this work, we present an optimized perturbative quantum mechanics/molecular mechanics (QM/MM) method for use in Metropolis Monte Carlo simulations. The model adopted is particularly tailored for the simulation of molecular systems in solution but can be readily extended to other applications, such as catalysis in enzymatic environments. The electrostatic coupling between the QM and MM systems is simplified by applying perturbation theory to estimate the energy changes caused by a movement in the MM system. This approximation, together with the effective use of GPU acceleration, leads to a negligible added computational cost for the sampling of the environment. Benchmark calculations are carried out to evaluate the impact of the approximations applied and the overall computational performance.
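    The central trick described above, estimating the energy change of an MM move without re-solving the QM problem, can be illustrated with a deliberately tiny model. Here the QM region is frozen into fixed ESP-like point charges and the QM/MM coupling is pure Coulomb electrostatics; the charges, move size, and inverse temperature are all invented for illustration and do not correspond to the authors' implementation:

```python
import math
import random

# Toy QM region: fixed point charges standing in for the frozen QM density,
# so an MM move only changes the electrostatic coupling term.
QM_CHARGES = [((0.0, 0.0, 0.0), -0.8), ((1.0, 0.0, 0.0), 0.4), ((-0.3, 0.9, 0.0), 0.4)]

def coupling_energy(pos, q):
    """First-order (perturbative) estimate of the QM/MM electrostatic energy
    of one MM charge q at position pos (arbitrary units)."""
    e = 0.0
    for center, qi in QM_CHARGES:
        r = math.dist(pos, center)
        e += qi * q / max(r, 0.1)   # soft core avoids the r -> 0 singularity
    return e

def metropolis_mm(steps=5000, beta=2.0, seed=7):
    """Metropolis sampling of one MM charge; no 'QM' recalculation per move."""
    rng = random.Random(seed)
    pos, q = [3.0, 0.0, 0.0], 0.5
    e = coupling_energy(pos, q)
    acc = 0
    for _ in range(steps):
        trial = [c + rng.uniform(-0.3, 0.3) for c in pos]
        e_new = coupling_energy(trial, q)
        if e_new < e or rng.random() < math.exp(-beta * (e_new - e)):
            pos, e, acc = trial, e_new, acc + 1
    return e, acc / steps
```

In the real method the perturbative estimate replaces a full SCF evaluation for each of thousands of MM moves, which is where the reported cost savings come from.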

  12. Optimization and benchmarking of a perturbative Metropolis Monte Carlo quantum mechanics/molecular mechanics program.

    PubMed

    Feldt, Jonas; Miranda, Sebastião; Pratas, Frederico; Roma, Nuno; Tomás, Pedro; Mata, Ricardo A

    2017-12-28

    In this work, we present an optimized perturbative quantum mechanics/molecular mechanics (QM/MM) method for use in Metropolis Monte Carlo simulations. The model adopted is particularly tailored for the simulation of molecular systems in solution but can be readily extended to other applications, such as catalysis in enzymatic environments. The electrostatic coupling between the QM and MM systems is simplified by applying perturbation theory to estimate the energy changes caused by a movement in the MM system. This approximation, together with the effective use of GPU acceleration, leads to a negligible added computational cost for the sampling of the environment. Benchmark calculations are carried out to evaluate the impact of the approximations applied and the overall computational performance.

  13. Novel probabilistic neuroclassifier

    NASA Astrophysics Data System (ADS)

    Hong, Jiang; Serpen, Gursel

    2003-09-01

    A novel probabilistic potential-function neural network classifier algorithm is proposed in this paper to deal with classes that are multi-modally distributed and formed from sets of disjoint pattern clusters. The proposed classifier has a number of desirable properties that distinguish it from other neural network classifiers. A complete description of the algorithm in terms of its architecture and pseudocode is presented. A simulation analysis of the newly proposed neuro-classifier algorithm on a set of benchmark problems is presented. Benchmark problems tested include IRIS, Sonar, Vowel Recognition, Two-Spiral, Wisconsin Breast Cancer, Cleveland Heart Disease, and Thyroid Gland Disease. Simulation results indicate that the proposed neuro-classifier performs consistently better on a subset of problems for which other neural classifiers perform relatively poorly.
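    A minimal sketch of a potential-function (Parzen-window) classifier shows why such models handle multi-modal classes naturally: each class score is a sum of kernels, so disjoint clusters of the same class simply contribute separate peaks. The kernel width and toy training set below are invented for illustration and are not the paper's algorithm:

```python
import math

def pnn_classify(x, train, sigma=0.5):
    """Score each class as a sum of Gaussian potential functions centred on
    that class's training patterns; return the highest-scoring class."""
    scores = {}
    for pattern, label in train:
        d2 = sum((a - b) ** 2 for a, b in zip(x, pattern))
        scores[label] = scores.get(label, 0.0) + math.exp(-d2 / (2 * sigma ** 2))
    return max(scores, key=scores.get)

# Class "A" is deliberately bimodal (two disjoint clusters far apart).
TRAIN = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"),
         ((5.0, 5.0), "A"), ((5.1, 4.9), "A"),
         ((2.5, 2.5), "B"), ((2.6, 2.4), "B")]
```

A single-prototype or linear classifier would struggle with class "A" here, while the kernel sum assigns both of its clusters to the same label.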

  14. "Aid to Thought"--Just Simulate It!

    ERIC Educational Resources Information Center

    Kinczkowski, Linda; Cardon, Phillip; Speelman, Pamela

    2015-01-01

    This paper provides examples of Aid-to-Thought uses in urban decision making, classroom laboratory planning, and in a ship antiaircraft defense system. Aid-to-Thought modeling and simulations are tools students can use effectively in a STEM classroom while meeting Standards for Technological Literacy Benchmarks O and R. These projects prepare…

  15. Toward Automated Benchmarking of Atomistic Force Fields: Neat Liquid Densities and Static Dielectric Constants from the ThermoML Data Archive.

    PubMed

    Beauchamp, Kyle A; Behr, Julie M; Rustenburg, Ariën S; Bayly, Christopher I; Kroenlein, Kenneth; Chodera, John D

    2015-10-08

    Atomistic molecular simulations are a powerful way to make quantitative predictions, but the accuracy of these predictions depends entirely on the quality of the force field employed. Although experimental measurements of fundamental physical properties offer a straightforward approach for evaluating force field quality, the bulk of this information has been tied up in formats that are not machine-readable. Compiling benchmark data sets of physical properties from non-machine-readable sources requires substantial human effort and is prone to the accumulation of human errors, hindering the development of reproducible benchmarks of force-field accuracy. Here, we examine the feasibility of benchmarking atomistic force fields against the NIST ThermoML data archive of physicochemical measurements, which aggregates thousands of experimental measurements in a portable, machine-readable, self-annotating IUPAC-standard format. As a proof of concept, we present a detailed benchmark of the generalized Amber small-molecule force field (GAFF) using the AM1-BCC charge model against experimental measurements (specifically, bulk liquid densities and static dielectric constants at ambient pressure) automatically extracted from the archive and discuss the extent of data available for use in larger scale (or continuously performed) benchmarks. The results of even this limited initial benchmark highlight a general problem with fixed-charge force fields in the representation of low-dielectric environments, such as those seen in binding cavities or biological membranes.

  16. CFD-Based Design of Turbopump Inlet Duct for Reduced Dynamic Loads

    NASA Technical Reports Server (NTRS)

    Rothermel, Jeffry; Dorney, Suzanne M.; Dorney, Daniel J.

    2003-01-01

    Numerical simulations have been completed for a variety of designs for a 90 deg elbow duct. The objective is to identify a design that minimizes the dynamic load entering a LOX turbopump located at the elbow exit. Designs simulated to date indicate that simpler duct geometries result in lower losses. Benchmark simulations have verified that the compressible flow codes used in this study are applicable to these incompressible flow simulations.

  17. CFD-based Design of LOX Pump Inlet Duct for Reduced Dynamic Loads

    NASA Technical Reports Server (NTRS)

    Rothermel, Jeffry; Dorney, Daniel J.; Dorney, Suzanne M.

    2003-01-01

    Numerical simulations have been completed for a variety of designs for a 90 deg elbow duct. The objective is to identify a design that minimizes the dynamic load entering a LOX turbopump located at the elbow exit. Designs simulated to date indicate that simpler duct geometries result in lower losses. Benchmark simulations have verified that the compressible flow code used in this study is applicable to these incompressible flow simulations.

  18. Assessment of the effect of population and diary sampling methods on estimation of school-age children exposure to fine particles.

    PubMed

    Che, W W; Frey, H Christopher; Lau, Alexis K H

    2014-12-01

    Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5) are quantified via case studies for children in Wake County, NC, for July 2002. The estimated mean daily average exposure is 12.9 μg/m(3) for simulated children using the stratified population sampling method, and 12.2 μg/m(3) using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m(3) due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In the case studies evaluated here, the MCC method led to a 10% higher estimate of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding the implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.
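    A toy simulation illustrates why the diary sampling method matters for multiday exceedance statistics: redrawing a diary independently each day and holding one diary for the whole period give very different counts of people with repeated high-exposure days. The population size, exceedance probability, and two-day threshold below are hypothetical, not the study's values:

```python
import random

def multiday_exceedances(n_people=10000, n_days=31, p_high=0.2,
                         persistent=False, seed=3):
    """Count simulated people whose exposure exceeds a benchmark on 2+ days.
    persistent=False redraws the diary every day (random resampling);
    persistent=True keeps one diary type for the whole month, a crude
    stand-in for longitudinal diaries with strong day-to-day autocorrelation."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n_people):
        if persistent:
            high = rng.random() < p_high          # one draw for the month
            days = n_days if high else 0
        else:
            days = sum(rng.random() < p_high for _ in range(n_days))
        if days >= 2:
            count += 1
    return count
```

The two diary assumptions produce sharply different repeated-exceedance counts from identical daily exposure probabilities, which is the sensitivity the study quantifies with far more realistic diary models.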

  19. Identification of fuel cycle simulator functionalities for analysis of transition to a new fuel cycle

    DOE PAGES

    Brown, Nicholas R.; Carlsen, Brett W.; Dixon, Brent W.; ...

    2016-06-09

    Dynamic fuel cycle simulation tools are intended to model holistic transient nuclear fuel cycle scenarios. As with all simulation tools, fuel cycle simulators require verification through unit tests, benchmark cases, and integral tests. Model validation is a vital aspect as well. Although comparative studies have been performed, there is no comprehensive unit test and benchmark library for fuel cycle simulator tools. The objective of this paper is to identify the must-test functionalities of a fuel cycle simulator tool within the context of specific problems of interest to the Fuel Cycle Options Campaign within the U.S. Department of Energy's Office of Nuclear Energy. The approach in this paper identifies the features needed to cover the range of promising fuel cycle options identified in the DOE-NE Fuel Cycle Evaluation and Screening (E&S) and categorizes these features to facilitate prioritization. Features were categorized as essential functions, integrating features, and exemplary capabilities. One objective of this paper is to propose a library of unit tests applicable to each of the essential functions. Another underlying motivation for this paper is to encourage an international dialog on the functionalities and standard test methods for fuel cycle simulator tools.

  20. International benchmarking of longitudinal train dynamics simulators: results

    NASA Astrophysics Data System (ADS)

    Wu, Qing; Spiryagin, Maksym; Cole, Colin; Chang, Chongyi; Guo, Gang; Sakalo, Alexey; Wei, Wei; Zhao, Xubao; Burgelman, Nico; Wiersma, Pier; Chollet, Hugues; Sebes, Michel; Shamdani, Amir; Melzi, Stefano; Cheli, Federico; di Gialleonardo, Egidio; Bosso, Nicola; Zampieri, Nicolò; Luo, Shihui; Wu, Honghua; Kaza, Guy-Léon

    2018-03-01

    This paper presents the results of the International Benchmarking of Longitudinal Train Dynamics Simulators which involved participation of nine simulators (TABLDSS, UM, CRE-LTS, TDEAS, PoliTo, TsDyn, CARS, BODYSIM and VOCO) from six countries. Longitudinal train dynamics results and computing time of four simulation cases are presented and compared. The results show that all simulators had basic agreement in simulations of locomotive forces, resistance forces and track gradients. The major differences among different simulators lie in the draft gear models. TABLDSS, UM, CRE-LTS, TDEAS, TsDyn and CARS had general agreement in terms of the in-train forces; minor differences exist as reflections of draft gear model variations. In-train force oscillations were observed in VOCO due to the introduction of wheel-rail contact. In-train force instabilities were sometimes observed in PoliTo and BODYSIM due to the velocity controlled transitional characteristics which could have generated unreasonable transitional stiffness. Regarding computing time per train operational second, the following list is in order of increasing computing speed: VOCO, TsDyn, PoliTO, CARS, BODYSIM, UM, TDEAS, CRE-LTS and TABLDSS (fastest); all simulators except VOCO, TsDyn and PoliTo achieved faster speeds than real-time simulations. Similarly, regarding computing time per integration step, the computing speeds in order are: CRE-LTS, VOCO, CARS, TsDyn, UM, TABLDSS and TDEAS (fastest).

  1. Optimization of small long-life PWR based on thorium fuel

    NASA Astrophysics Data System (ADS)

    Subkhi, Moh Nurul; Suud, Zaki; Waris, Abdul; Permana, Sidik

    2015-09-01

    A conceptual design of a small long-life Pressurized Water Reactor (PWR) using thorium fuel has been investigated from the neutronics standpoint. The cell burnup calculations were performed with the PIJ module of the SRAC code using a nuclear data library based on JENDL 3.2, while the multi-energy-group diffusion calculations were optimized in three-dimensional X-Y-Z core geometry with COREBN. The excess reactivity of thorium nitride fuel with ZIRLO cladding is followed over 5 years of burnup without refueling. Optimizations of the 350 MWe long-life PWR with 5% 233U & 2.8% 231Pa, 6% 233U & 2.8% 231Pa, and 7% 233U & 6% 231Pa give low excess reactivity.

  2. 2D Quantum Simulation of MOSFET Using the Non Equilibrium Green's Function Method

    NASA Technical Reports Server (NTRS)

    Svizhenko, Alexel; Anantram, M. P.; Govindan, T. R.; Yan, Jerry (Technical Monitor)

    2000-01-01

    The objectives summarized in this viewgraph presentation include: (1) the development of a quantum mechanical simulator for ultra-short-channel MOSFET simulation, including theory, physical approximations, and computer code; (2) exploration of physics that is not accessible by semiclassical methods; (3) benchmarking of semiclassical and classical methods; and (4) study of other two-dimensional devices and molecular structures, from discretized Hamiltonians to tight-binding Hamiltonians.

  3. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
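    The flavor of such analytical benchmarks can be shown with the simplest case: in one-group diffusion theory, k-infinity and the critical size of a bare slab have closed-form solutions against which a code's answer can be checked exactly. The one-group constants below are hypothetical and are not taken from the suite:

```python
import math

def k_inf(nu_sigma_f, sigma_a):
    """Infinite-medium multiplication factor, k_inf = nu*Sigma_f / Sigma_a."""
    return nu_sigma_f / sigma_a

def critical_half_thickness(nu_sigma_f, sigma_a, diff_coeff):
    """Bare-slab critical half-thickness (extrapolation length neglected):
    material buckling B^2 = (nu*Sigma_f - Sigma_a)/D must equal the
    geometric buckling (pi/(2a))^2, so a = pi/(2B)."""
    b2 = (nu_sigma_f - sigma_a) / diff_coeff
    return math.pi / (2.0 * math.sqrt(b2))

# Hypothetical one-group constants (cm^-1, cm); not from any real benchmark.
NU_SF, SA, D = 0.37, 0.30, 1.0
```

At the computed half-thickness, k_eff = nu*Sigma_f / (Sigma_a + D*B_g^2) evaluates to exactly 1, which is the kind of closed-form check an analytic suite provides.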

  4. 78 FR 56752 - Interim Staff Guidance Specific Environmental Guidance for Integral Pressurized Water Reactors...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-13

    ... (iPWR). This guidance applies to environmental reviews associated with iPWR applications for limited... received on or before this date. ADDRESSES: You may submit comments by any of the following methods (unless... this document. You may access publicly-available information related to this document by any of the...

  5. Fretting wear behaviors of a dual-cooled nuclear fuel rod under a simulated rod vibration

    NASA Astrophysics Data System (ADS)

    Lee, Young-Ho; Kim, Hyung-Kyu; Kang, Heung-Seok; Yoon, Kyung-Ho; Kim, Jae-Yong; Lee, Kang-Hee

    2012-06-01

    Recently, a dual-cooled fuel (i.e., annular fuel) that is compatible with currently operating PWR plants has been proposed in order to realize both a considerable amount of power uprating and an increase in safety margins. Because the design concept must remain compatible with currently operating PWR plants, however, it has a narrower gap between the fuel rods than current solid nuclear fuel arrays and requires modified spacer grid shapes and positions. In this study, fretting wear tests have been performed to evaluate the wear resistance of a dual-cooled fuel using a proposed spacer grid spring and dimple of cantilever type and hemispherical shape, respectively. As a result, the wear volume of the spring specimen gradually increases as the contact condition changes from a small gap, through just-in-contact, to a positive contact force. In the dimple specimen, however, the just-in-contact condition shows a large wear volume. In addition, a circular rod motion at the upper region of the contact surface gradually increases, and its diametric size depends on the increase in wear depth. Based on the test results, the fretting wear resistance of the proposed spring and dimple is analyzed by comparing the wear measurements and the rod motion in detail.

  6. Investigation into the effect of water chemistry on corrosion product formation in areas of accelerated flow

    NASA Astrophysics Data System (ADS)

    McGrady, John; Scenini, Fabio; Duff, Jonathan; Stevens, Nicholas; Cassineri, Stefano; Curioni, Michele; Banks, Andrew

    2017-09-01

    The deposition of CRUD (Chalk River Unidentified Deposit) in the primary circuit of a Pressurised Water Reactor (PWR) is known to occur preferentially in regions of the circuit where the coolant flow accelerates. A micro-fluidic flow cell was used to recreate accelerated flow under simulated PWR conditions by flowing water through a disc with a central micro-orifice. CRUD deposition was reproduced on the disc, and CRUD Build-Up Rates (BUR) in various regions of the disc were analysed. The effect of the local environment on BUR was investigated; in particular, the effects of flow velocity, specimen material, and Fe concentration were considered. The morphology and composition of the deposits were analysed with respect to the experimental conditions. The BUR of CRUD was found to be sensitive to flow velocity and Fe concentration, suggesting that mass transfer is an important factor. The morphology of the deposit was affected by the specimen material, indicating a dependence on surface/particle electrostatics and showing that surface chemistry plays an important role in deposition. The preferential deposition of CRUD in accelerated flow regions due to electrokinetic effects was observed, and it was shown that higher Fe concentrations in solution increased BURs within the orifice, whereas increased flow velocity reduced BURs.

  7. Penetrative Internal Oxidation from Alloy 690 Surfaces and Stress Corrosion Crack Walls during Exposure to PWR Primary Water

    NASA Astrophysics Data System (ADS)

    Olszta, Matthew J.; Schreiber, Daniel K.; Thomas, Larry E.; Bruemmer, Stephen M.

    Analytical electron microscopy and three-dimensional atom probe tomography (APT) examinations of surface and near-surface oxidation have been performed on Ni-30%Cr alloy 690 materials after exposure to high-temperature, simulated PWR primary water. The oxidation nanostructures have been characterized at crack walls after stress-corrosion crack growth tests and at polished surfaces of unstressed specimens for the same alloys. Localized oxidation was discovered for both crack walls and surfaces as continuous filaments (typically <10 nm in diameter) extending from the water interface into the alloy 690 matrix, reaching depths of 500 nm. These filaments consisted of discrete, plate-shaped Cr2O3 particles surrounded by a distribution of nanocrystalline, rock-salt (Ni-Cr-Fe) oxide. The oxide-containing filament depth was found to increase with exposure time and, at longer times, the filaments became very dense at the surface, leaving only isolated islands of metal. Individual dislocations were oxidized in non-deformed materials, while the oxidation path appeared to follow more complex dislocation substructures in heavily deformed materials. This paper will highlight the use of high-resolution scanning and transmission electron microscopy in combination with APT to better elucidate the microstructure and microchemistry of the filamentary oxidation.

  8. TRIGA MARK-II source term

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Usang, M. D., E-mail: mark-dennis@nuclearmalaysia.gov.my; Hamzah, N. S., E-mail: mark-dennis@nuclearmalaysia.gov.my; Abi, M. J. B., E-mail: mark-dennis@nuclearmalaysia.gov.my

    ORIGEN 2.2 is employed to obtain the γ source term and the radioactivity of irradiated TRIGA fuel. The fuel composition is specified in grams for use as input data. Three types of fuel are irradiated in the reactor, each differing from the others in the amount of uranium relative to the total weight. Each fuel type is irradiated for 365 days with a 50-day time step. We obtain results for the total radioactivity of the fuel, the composition of activated materials, the composition of fission products, and the photon spectrum of the burned fuel. We investigate the differences between results obtained with the BWR and PWR libraries for ORIGEN. Finally, we compare the compositions of the major nuclides after 1 year of irradiation for both ORIGEN libraries with results from WIMS. We found only minor disagreements between the yields of the PWR and BWR libraries. In comparison with WIMS, the errors are somewhat more pronounced. To reduce these errors, the irradiation power used in ORIGEN could be increased slightly, so that the differences between the ORIGEN and WIMS yields are reduced. A more permanent solution is to use a different burnup code altogether, such as DRAGON or ORIGEN-S. The results of this study are essential for the design of radiation shielding for the fuel.
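    For a single nuclide produced at a constant rate during irradiation, the buildup-and-decay bookkeeping at the heart of codes like ORIGEN reduces to the saturation-activity formula N(t) = (P/λ)(1 − e^(−λt)) with activity A = λN. The sketch below uses hypothetical production rates and an I-131-like half-life purely for illustration:

```python
import math

def number_density(prod_rate, lam, t):
    """Atoms present at time t for constant production rate prod_rate
    (atoms per unit time) and decay constant lam: N = (P/lam)*(1 - exp(-lam*t))."""
    return prod_rate / lam * (1.0 - math.exp(-lam * t))

def activity(prod_rate, lam, t):
    """Activity A = lam * N; saturates at A = prod_rate as t -> infinity."""
    return lam * number_density(prod_rate, lam, t)
```

After one half-life the activity reaches half the production rate, and after many half-lives it saturates at the production rate, which is why short-lived fission products dominate the source term during irradiation but disappear quickly afterward.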

  9. Characterization of ion irradiation effects on the microstructure, hardness, deformation and crack initiation behavior of austenitic stainless steel:Heavy ions vs protons

    NASA Astrophysics Data System (ADS)

    Gupta, J.; Hure, J.; Tanguy, B.; Laffont, L.; Lafont, M.-C.; Andrieu, E.

    2018-04-01

    Irradiation Assisted Stress Corrosion Cracking (IASCC) is a complex degradation phenomenon that can have a significant influence on the maintenance time and cost of the core internals of a Pressurized Water Reactor (PWR). Hence, it is an issue of concern, especially in the context of lifetime extension of PWRs. Proton irradiation is generally used as a representative alternative to neutron irradiation to improve the current understanding of the mechanisms involved in IASCC. This study assesses the possibility of using heavy-ion irradiation to evaluate IASCC mechanisms by comparing the irradiation-induced modifications (in microstructure and mechanical properties) and the cracking susceptibility of SA 304L after both types of irradiation: Fe irradiation at 450 °C and proton irradiation at 350 °C. Irradiation-induced defects are characterized and quantified along with nano-hardness measurements, showing a correlation between irradiation hardening and the density of Frank loops that is well captured by Orowan's formula. Both irradiations (iron and proton) increase the susceptibility of SA 304L to intergranular cracking when subjected to Constant Extension Rate Tensile (CERT) tests in a simulated nominal PWR primary water environment at 340 °C. For these conditions, cracking susceptibility is found to be quantitatively similar for both irradiations, despite significant differences in hardening and degree of localization.
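    The correlation reported above, hardening tracking the Frank-loop population, follows the dispersed-barrier (Orowan-type) relation Δσ = M·α·μ·b·√(N·d), with N the loop number density and d the mean loop diameter. The sketch below uses typical literature constants for austenitic steels as assumed values, not the paper's fitted parameters:

```python
import math

def orowan_hardening(loop_density_m3, loop_diam_m,
                     taylor_m=3.06, alpha=0.35,
                     shear_mod_pa=76e9, burgers_m=2.55e-10):
    """Dispersed-barrier hardening increment in Pa:
    delta_sigma = M * alpha * mu * b * sqrt(N * d).
    Defaults (Taylor factor, barrier strength alpha, shear modulus,
    Burgers vector) are representative values for austenitic stainless
    steel, used here only for illustration."""
    return taylor_m * alpha * shear_mod_pa * burgers_m * math.sqrt(
        loop_density_m3 * loop_diam_m)
```

With a loop density of order 10^22-10^23 m^-3 and ~10 nm loops, the formula predicts a strength increment of a few hundred MPa, the right order of magnitude for irradiated stainless steels.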

  10. ADVANCEMENTS IN TIME-SPECTRA ANALYSIS METHODS FOR LEAD SLOWING-DOWN SPECTROSCOPY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Leon E.; Anderson, Kevin K.; Gesh, Christopher J.

    2010-08-11

    Direct measurement of Pu in spent nuclear fuel remains a key challenge for safeguarding nuclear fuel cycles of today and tomorrow. Lead slowing-down spectroscopy (LSDS) is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic mass with an uncertainty lower than the approximately 10 percent typical of today’s confirmatory assay methods. Pacific Northwest National Laboratory’s (PNNL) previous work to assess the viability of LSDS for the assay of pressurized water reactor (PWR) assemblies indicated that the method could provide direct assay of Pu-239 and U-235 (and possibly Pu-240 and Pu-241) with uncertainties less than a few percent, assuming suitably efficient instrumentation, an intense pulsed neutron source, and improvements in the time-spectra analysis methods used to extract isotopic information from a complex LSDS signal. This previous simulation-based evaluation used relatively simple PWR fuel assembly definitions (e.g. constant burnup across the assembly) and a constant initial enrichment and cooling time. The time-spectra analysis method was founded on a preliminary analytical model of self-shielding intended to correct for assay-signal nonlinearities introduced by attenuation of the interrogating neutron flux within the assembly.

  11. Conceptual Core Analysis of Long Life PWR Utilizing Thorium-Uranium Fuel Cycle

    NASA Astrophysics Data System (ADS)

    Rouf; Su'ud, Zaki

    2016-08-01

    A conceptual core analysis of a long-life PWR utilizing thorium-uranium-based fuel has been conducted. The purpose of this study is to evaluate the neutronic behavior of a reactor core using combined thorium and enriched uranium fuel. With this fuel composition the core has a higher conversion ratio than with conventional fuel, which allows a longer operating cycle. The simulation was performed with the SRAC code system using the SRACLIB-JDL32 library. The calculations were carried out for (Th-U)O2 and (Th-U)C fuel with a uranium fraction of 30 - 40% and gadolinium (Gd2O3) as burnable poison at 0.0125%. The fuel composition was adjusted to obtain a burnup length of 10 - 15 years at a thermal power of 600 - 1000 MWt. Key parameters such as uranium enrichment, fuel volume fraction, and uranium percentage were evaluated. The core calculation in this study adopted R-Z geometry divided into 3 regions, each with a different uranium enrichment. The results show the multiplication factor at every burnup step over the 15-year operating length, the power distribution behavior, the power peaking factor, and the conversion ratio. The optimum core design was achieved at a thermal power of 600 MWt, a uranium percentage of 35%, and a U-235 enrichment of 11 - 13%, giving a 14-year operating length with axial and radial power peaking factors of about 1.5 and 1.2, respectively.

  12. NetBenchmark: a bioconductor package for reproducible benchmarks of gene regulatory network inference.

    PubMed

    Bellot, Pau; Olsen, Catharina; Salembier, Philippe; Oliveras-Vergés, Albert; Meyer, Patrick E

    2015-09-29

    In the last decade, a great number of methods for reconstructing gene regulatory networks from expression data have been proposed. However, very few tools and datasets allow those methods to be evaluated accurately and reproducibly. Hence, we propose here a new tool, able to perform a systematic, yet fully reproducible, evaluation of transcriptional network inference methods. Our open-source and freely available Bioconductor package aggregates a large set of tools to assess the robustness of network inference algorithms against different simulators, topologies, sample sizes and noise intensities. The benchmarking framework, which uses various datasets, highlights the specialization of some methods toward network types and data. As a result, it is possible to identify the techniques that have broad overall performance.
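
    The core comparison such benchmarking tools automate, scoring a ranked list of predicted regulatory edges against a gold-standard network, can be sketched in a few lines. The following pure-Python example is illustrative only: it is not the NetBenchmark API (an R/Bioconductor package), and the toy gene names and edges are invented.

```python
# Score a ranked list of predicted regulatory edges against a gold standard.
# Edges are (regulator, target) tuples; the ranking is by inference confidence.

def precision_recall(ranked_edges, true_edges):
    """Return (precision, recall) points for each prefix of the ranking."""
    tp = 0
    points = []
    total_true = len(true_edges)
    for k, edge in enumerate(ranked_edges, start=1):
        if edge in true_edges:
            tp += 1
        points.append((tp / k, tp / total_true))
    return points

def average_precision(ranked_edges, true_edges):
    """AUPR-style summary: mean precision at each true-positive hit."""
    tp = 0
    precisions = []
    for k, edge in enumerate(ranked_edges, start=1):
        if edge in true_edges:
            tp += 1
            precisions.append(tp / k)
    return sum(precisions) / len(true_edges) if true_edges else 0.0

# Toy example: 3 true edges, 5 predictions ranked by confidence.
gold = {("G1", "G2"), ("G1", "G3"), ("G2", "G4")}
predicted = [("G1", "G2"), ("G3", "G4"), ("G1", "G3"), ("G2", "G4"), ("G4", "G1")]
print(average_precision(predicted, gold))
```

    Running many inference methods over many simulated datasets and summarizing each run with a score of this kind is what makes a benchmark of network inference reproducible.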

  13. A benchmark for fault tolerant flight control evaluation

    NASA Astrophysics Data System (ADS)

    Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.

    2013-12-01

    A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.

  14. PHISICS/RELAP5-3D RESULTS FOR EXERCISES II-1 AND II-2 OF THE OECD/NEA MHTGR-350 BENCHMARK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydom, Gerhard

    2016-03-01

    The Idaho National Laboratory (INL) Advanced Reactor Technologies (ART) High-Temperature Gas-Cooled Reactor (HTGR) Methods group currently leads the Modular High-Temperature Gas-Cooled Reactor (MHTGR) 350 benchmark. The benchmark consists of a set of lattice-depletion, steady-state, and transient problems that can be used by HTGR simulation groups to assess the performance of their code suites. The paper summarizes the results obtained for the first two transient exercises defined for Phase II of the benchmark. The Parallel and Highly Innovative Simulation for INL Code System (PHISICS), coupled with the INL system code RELAP5-3D, was used to generate the results for the Depressurized Conduction Cooldown (DCC) (exercise II-1a) and Pressurized Conduction Cooldown (PCC) (exercise II-2) transients. These exercises require the time-dependent simulation of coupled neutronics and thermal-hydraulics phenomena, and utilize the steady-state solution previously obtained for exercise I-3 of Phase I. This paper also includes a comparison of the benchmark results obtained with a traditional system code “ring” model against a more detailed “block” model that includes kinetics feedback on an individual block level and thermal feedbacks on a triangular sub-mesh. The higher spatial fidelity that can be obtained by the block model is illustrated with comparisons of the maximum fuel temperatures, especially in the case of natural convection conditions that dominate the DCC and PCC events. Differences up to 125 K (or 10%) were observed between the ring and block model predictions of the DCC transient, mostly due to the block model’s capability of tracking individual block decay powers and more detailed helium flow distributions. In general, the block model only required DCC and PCC calculation times twice as long as the ring models, and it therefore seems that the additional development and calculation time required for the block model is worth the gain in spatial resolution.

  15. Simulation-based inter-professional education to improve attitudes towards collaborative practice: a prospective comparative pilot study in a Chinese medical centre

    PubMed Central

    Yang, Ling-Yu; Yang, Ying-Ying; Huang, Chia-Chang; Liang, Jen-Feng; Lee, Fa-Yauh; Cheng, Hao-Min; Huang, Chin-Chou; Kao, Shou-Yen

    2017-01-01

    Objectives Inter-professional education (IPE) builds the inter-professional collaboration (IPC) attitudes/skills of health professionals. This interventional IPE programme evaluates whether benchmarking sharing can successfully cultivate seed instructors responsible for improving their team members’ IPC attitudes. Design Prospective, pre-post comparative cross-sectional pilot study. Setting/participants Thirty-four physicians, 30 nurses and 24 pharmacists, who volunteered to be trained as seed instructors, participated in 3.5-hour preparation and 3.5-hour simulation courses. Then, participants (n=88) drew lots to decide 44 presenters, half of each profession, who needed to prepare IPC benchmarking and formed Group 1. The remaining participants formed Group 2 (regular). Facilitators rated the Group 1 participants’ degree of appropriate transfer and sustainable practice of the learnt IPC skills in the workplace according to successful IPC examples in their benchmarking sharing. Results For the three professions, improvement in IPC attitude was identified by a sequential increase in the post-course (second month, T2) and end-of-study (third month, T3) Interdisciplinary Education Perception Scale (IEPS) and Attitudes Towards Healthcare Teams Scale (ATHCTS) scores, compared with pre-course (first month, T1) scores. By IEPS- and ATHCTS-based assessment, the degree of sequential improvement in IPC attitude was found to be higher among nurses and pharmacists than among physicians. In benchmarking sharing, the facilitators’ agreement about the degree of participants’ appropriate transfer and sustainable practice of the learnt ‘communication and teamwork’ skills in the workplace was significantly higher among pharmacists and nurses than among physicians. The post-intervention random sampling survey (sixth month, Tpost) found that the IPC attitude of the three professions improved after on-site IPC skill promotion by new programme-trained seed instructors within teams. 
Conclusions Addition of benchmark sharing to a diamond-based IPE simulation programme enhances participants’ IPC attitudes, self-reflection, workplace transfer and practice of the learnt skills. Furthermore, IPC promotion within teams by newly trained seed instructors improved the IPC attitudes across all three professions. PMID:29122781

  16. Validation of Shielding Analysis Capability of SuperMC with SINBAD

    NASA Astrophysics Data System (ADS)

    Chen, Chaobin; Yang, Qi; Wu, Bin; Han, Yuncheng; Song, Jing

    2017-09-01

    The shielding analysis capability of SuperMC was validated with the Shielding Integral Benchmark Archive Database (SINBAD). SINBAD, compiled by RSICC and the NEA, includes numerous benchmark experiments performed with the D-T fusion neutron source facilities of OKTAVIAN, FNS, IPPE, etc. The results from the SuperMC simulations were compared with experimental data and MCNP results. Very good agreement, with deviations lower than 1%, was achieved, suggesting that SuperMC is reliable for shielding calculations.

  17. PWR-related integral safety experiments in the PKL III test facility: SBLOCA under beyond-design-basis accident conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weber, P.; Umminger, K.J.; Schoen, B.

    1995-09-01

    The thermal-hydraulic behavior of a PWR during beyond-design-basis accident scenarios is of vital interest for the verification and optimization of accident management procedures. Within the scope of the German reactor safety research program, experiments were performed in the volumetrically scaled PKL III test facility by Siemens/KWU. This highly instrumented test rig simulates a KWU-design PWR (1300 MWe). In particular, the latest tests performed related to an SBLOCA with additional system failures, e.g. nitrogen entering the primary system. In the case of an SBLOCA, it is the goal of the operator to put the plant in a condition where the decay heat can be removed, first using the low pressure emergency core cooling system and then the residual heat removal system. The experimental investigation presented assumed the following beyond-design-basis accident conditions: a 0.5% break in a cold leg; 2 of 4 steam generators (SGs) isolated on the secondary side (feedwater and steam line valves closed) and filled with steam on the primary side; cooldown of the primary system using the remaining two steam generators; high pressure injection available only in the two loops with intact steam generators; and, if possible, no operator actions to reach the conditions for residual heat removal system activation. Furthermore, it was postulated that 2 of the 4 hot leg accumulators had a reduced initial water inventory (increased nitrogen inventory), allowing nitrogen to enter the primary system at a pressure of 15 bar and nearly preventing the heat transfer in the SGs (“passivating” the U-tubes). Due to this, the heat transfer regime in the intact steam generators changed remarkably. The primary system showed self-regulating effects and heat transfer improved again (reflux-condenser mode in the U-tube inlet region).

  18. J-2X Turbopump Cavitation Diagnostics

    NASA Technical Reports Server (NTRS)

    Santi, I. Michael; Butas, John P.; Tyler, Thomas R., Jr.; Aguilar, Robert; Sowers, T. Shane

    2010-01-01

    The J-2X is the upper stage engine currently being designed by Pratt & Whitney Rocketdyne (PWR) for the Ares I Crew Launch Vehicle (CLV). Propellant supply requirements for the J-2X are defined by the Ares Upper Stage to J-2X Interface Control Document (ICD). Supply conditions outside ICD-defined start or run boxes can induce turbopump cavitation leading to interruption of J-2X propellant flow during hot fire operation. In severe cases, cavitation can lead to uncontained engine failure with the potential to cause a vehicle catastrophic event. Turbopump and engine system performance models supported by system design information and test data are required to predict the existence, severity, and consequences of a cavitation event. A cavitation model for each of the J-2X fuel and oxidizer turbopumps was developed using data from pump water flow test facilities at PWR and Marshall Space Flight Center (MSFC) together with data from Powerpack 1A testing at Stennis Space Center (SSC) and from heritage systems. These component models were implemented within the PWR J-2X Real Time Model (RTM) to provide a foundation for predicting system level effects following turbopump cavitation. The RTM serves as a general failure simulation platform supporting estimation of J-2X redline system effectiveness. A study to compare cavitation induced conditions with component level structural limit thresholds throughout the engine was performed using the RTM. Results provided insight into system level turbopump cavitation effects and redline system effectiveness in preventing structural limit violations. A need to better understand structural limits and redline system failure mitigation potential in the event of fuel side cavitation was indicated. 
This paper examines study results, efforts to mature J-2X turbopump cavitation models and structural limits, and issues with engine redline detection of cavitation and the use of vehicle-side abort triggers to augment the engine redline system.

  19. Estimating irradiated nuclear fuel characteristics by nonlinear multivariate regression of simulated gamma-ray emissions

    NASA Astrophysics Data System (ADS)

    Åberg Lindell, M.; Andersson, P.; Grape, S.; Håkansson, A.; Thulin, M.

    2018-07-01

    In addition to verifying operator-declared parameters of spent nuclear fuel, the ability to experimentally infer such parameters with a minimum of intrusiveness is of great interest and has long been sought after in the nuclear safeguards community. It can also be anticipated that such ability would be of interest for quality assurance in e.g. recycling facilities in future Generation IV nuclear fuel cycles. One way to obtain information regarding spent nuclear fuel is to measure various gamma-ray intensities using high-resolution gamma-ray spectroscopy. While intensities from a few isotopes obtained from such measurements have traditionally been used pairwise, the approach in this work is to simultaneously analyze correlations between all available isotopes, using multivariate analysis techniques. Based on this approach, a methodology for inferring burnup, cooling time, and initial fissile content of PWR fuels using passive gamma-ray spectroscopy data has been investigated. PWR nuclear fuels, of UOX and MOX type, and their gamma-ray emissions were simulated using the Monte Carlo code Serpent. Data comprising relative isotope activities were analyzed with decision trees and support vector machines for predicting fuel parameters and their associated uncertainties. From this work it may be concluded that up to a cooling time of twenty years, the 95% prediction intervals of burnup, cooling time and initial fissile content could be inferred to within approximately 7 MWd/kgHM, 8 months, and 1.4 percentage points, respectively. An attempt aiming to estimate the plutonium content in spent UOX fuel, using the developed multivariate analysis model, is also presented. The results for Pu mass estimation are promising and call for further studies.
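
    The multivariate-regression idea described above can be illustrated with a minimal decision-tree regressor, one of the model families the study used. Everything below is a hypothetical sketch: the features (toy activity ratios), training values, and tree settings are invented for illustration and are not the paper's Serpent-derived data.

```python
# Minimal regression tree: predict burnup (MWd/kgHM) from gamma-emitter
# activity ratios. Recursively split on the (feature, threshold) pair that
# minimizes the sum of squared errors (SSE) of the child means.

def build_tree(X, y, depth=0, max_depth=3, min_leaf=2):
    if depth >= max_depth or len(y) < 2 * min_leaf:
        return sum(y) / len(y)                      # leaf: mean target
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if len(left) < min_leaf or len(right) < min_leaf:
                continue
            sse = sum((v - sum(s) / len(s)) ** 2
                      for s in (left, right) for v in s)
            if best is None or sse < best[0]:
                best = (sse, f, t)
    if best is None:
        return sum(y) / len(y)
    _, f, t = best
    L = [(row, yi) for row, yi in zip(X, y) if row[f] <= t]
    R = [(row, yi) for row, yi in zip(X, y) if row[f] > t]
    return (f, t,
            build_tree([r for r, _ in L], [yi for _, yi in L], depth + 1, max_depth, min_leaf),
            build_tree([r for r, _ in R], [yi for _, yi in R], depth + 1, max_depth, min_leaf))

def predict(tree, x):
    while isinstance(tree, tuple):          # descend until a leaf (a float)
        f, t, left, right = tree
        tree = left if x[f] <= t else right
    return tree

# Invented training data: [Cs134/Cs137 ratio, Eu154/Cs137 ratio] -> burnup.
X = [[0.2, 0.05], [0.4, 0.09], [0.6, 0.14], [0.8, 0.20],
     [1.0, 0.26], [1.2, 0.33], [1.4, 0.41], [1.6, 0.50]]
y = [15, 22, 29, 35, 41, 47, 52, 57]
tree = build_tree(X, y)
print(predict(tree, [0.9, 0.23]))   # mean burnup of the matching leaf
```

    A production analysis would of course train on simulated spectra with many more channels and validate the prediction intervals, but the splitting-and-averaging mechanism is the same.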

  20. Preliminary LOCA analysis of the westinghouse small modular reactor using the WCOBRA/TRAC-TF2 thermal-hydraulics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, J.; Kucukboyaci, V. N.; Nguyen, L.

    2012-07-01

    The Westinghouse Small Modular Reactor (SMR) is an 800 MWt (> 225 MWe) integral pressurized water reactor (iPWR) with all primary components, including the steam generator and the pressurizer, located inside the reactor vessel. The reactor core is based on a partial-height 17x17 fuel assembly design used in the AP1000® reactor core. The Westinghouse SMR utilizes passive safety systems and proven components from the AP1000 plant design with a compact containment that houses the integral reactor vessel and the passive safety systems. A preliminary loss of coolant accident (LOCA) analysis of the Westinghouse SMR has been performed using the WCOBRA/TRAC-TF2 code, simulating a transient caused by a double ended guillotine (DEG) break in the direct vessel injection (DVI) line. WCOBRA/TRAC-TF2 is a new generation Westinghouse LOCA thermal-hydraulics code evolving from the US NRC licensed WCOBRA/TRAC code. It is designed to simulate PWR LOCA events from the smallest break size to the largest break size (DEG cold leg). A significant number of fluid dynamics models and heat transfer models were developed or improved in WCOBRA/TRAC-TF2. A large number of separate effects and integral effects tests were performed for a rigorous code assessment and validation. WCOBRA/TRAC-TF2 was introduced into the Westinghouse SMR design phase to assist a quick and robust passive cooling system design and to identify thermal-hydraulic phenomena for the development of the SMR Phenomena Identification Ranking Table (PIRT). The LOCA analysis of the Westinghouse SMR demonstrates that the DEG DVI break LOCA is mitigated by the injection and venting from the Westinghouse SMR passive safety systems without core heat up, achieving long term core cooling. (authors)

  1. Corrosion behavior and oxide properties of Zr-1.1 wt%Nb-0.05 wt%Cu alloy

    NASA Astrophysics Data System (ADS)

    Park, Jeong-Yong; Choi, Byung-Kwon; Yoo, Seung Jo; Jeong, Yong Hwan

    2006-12-01

    The corrosion behavior and oxide properties of Zr-1.1 wt%Nb-0.05 wt%Cu (ZrNbCu) and Zircaloy-4 have been investigated. The corrosion rate of the ZrNbCu alloy was much lower than that of the Zircaloy-4 in the 360 °C water and 360 °C PWR-simulating loop condition without a neutron flux, and it increased with an increase of the final annealing temperature from 470 °C to 570 °C. TEM observations revealed that the precipitates in the ZrNbCu were β-Nb and a ZrNbFe precipitate, with β-Nb being more frequently observed, and that the precipitates were more finely distributed in the ZrNbCu alloy. It was also observed that the oxides of the ZrNbCu and Zircaloy-4 consisted of two and seven layers, respectively, after 1000 days in the PWR-simulating loop condition and that the thickness of a fully-developed layer was higher in the ZrNbCu than in the Zircaloy-4. It was also found that the β-Nb in ZrNbCu was oxidized more slowly when compared to the Zr(Fe,Cr)2 in Zircaloy-4 when the precipitates in the oxide were observed by TEM. Cracks were observed in the vicinity of the oxidized Zr(Fe,Cr)2, while no cracks were formed near β-Nb, which had retained a metallic state. From the results obtained, it is suggested that the oxide formed on the ZrNbCu has a more protective nature against corrosion when compared to that of the Zircaloy-4.

  2. Benchmark simulation model no 2: general protocol and exploratory case studies.

    PubMed

    Jeppsson, U; Pons, M-N; Nopens, I; Alex, J; Copp, J B; Gernaey, K V; Rosen, C; Steyer, J-P; Vanrolleghem, P A

    2007-01-01

    Over a decade ago, the concept of objectively evaluating the performance of control strategies by simulating them using a standard model implementation was introduced for activated sludge wastewater treatment plants. The resulting Benchmark Simulation Model No 1 (BSM1) has been the basis for a significant new development that is reported on here: Rather than only evaluating control strategies at the level of the activated sludge unit (bioreactors and secondary clarifier) the new BSM2 now allows the evaluation of control strategies at the level of the whole plant, including primary clarifier and sludge treatment with anaerobic sludge digestion. In this contribution, the decisions that have been made over the past three years regarding the models used within the BSM2 are presented and argued, with particular emphasis on the ADM1 description of the digester, the interfaces between activated sludge and digester models, the included temperature dependencies and the reject water storage. BSM2-implementations are now available in a wide range of simulation platforms and a ring test has verified their proper implementation, consistent with the BSM2 definition. This guarantees that users can focus on the control strategy evaluation rather than on modelling issues. Finally, for illustration, twelve simple operational strategies have been implemented in BSM2 and their performance evaluated. Results show that it is an interesting control engineering challenge to further improve the performance of the BSM2 plant (which is the whole idea behind benchmarking) and that integrated control (i.e. acting at different places in the whole plant) is certainly worthwhile to achieve overall improvement.

  3. Benchmarking urban flood models of varying complexity and scale using high resolution terrestrial LiDAR data

    NASA Astrophysics Data System (ADS)

    Fewtrell, Timothy J.; Duncan, Alastair; Sampson, Christopher C.; Neal, Jeffrey C.; Bates, Paul D.

    2011-01-01

    This paper describes benchmark testing of a diffusive and an inertial formulation of the de St. Venant equations implemented within the LISFLOOD-FP hydraulic model using high resolution terrestrial LiDAR data. The models are applied to a hypothetical flooding scenario in a section of Alcester, UK, which experienced significant surface water flooding in the June and July 2007 floods. The sensitivity of water elevation and velocity simulations to model formulation and grid resolution is analyzed. The differences in depth and velocity estimates between the diffusive and inertial approximations are within 10% of the simulated value, but inertial effects persist at the wetting front in steep catchments. Both models portray a similar scale dependency between 50 cm and 5 m resolution, which reiterates previous findings that errors in coarse scale topographic data sets are significantly larger than differences between numerical approximations. In particular, these results confirm the need to distinctly represent the camber and curbs of roads in the numerical grid when simulating surface water flooding events. Furthermore, although water depth estimates at grid scales coarser than 1 m appear robust, velocity estimates at these scales seem to be inconsistent compared to the 50 cm benchmark. The inertial formulation is shown to reduce computational cost by up to three orders of magnitude at high resolutions, thus making simulations at this scale viable in practice compared to diffusive models. For the first time, this paper highlights the utility of high resolution terrestrial LiDAR data to inform small-scale flood risk management studies.
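
    The inertial formulation benchmarked here follows the local-inertia scheme of Bates et al. (2010), in which friction is treated semi-implicitly so the explicit time step stays stable at fine resolutions. The following is a minimal 1D sketch of that update, not LISFLOOD-FP code; the channel geometry, Manning's n, and time step are invented for illustration.

```python
# One explicit local-inertia step on a 1D channel:
#   momentum:   q_new = (q - g*h*dt*S) / (1 + g*dt*n^2*|q| / h^(7/3))
#   continuity: dh/dt = -(dq/dx)
# where q is unit discharge at cell interfaces, h is depth, S the water
# surface slope. Closed boundaries (no flow at the ends).

G = 9.81           # gravity, m/s^2
N_MANNING = 0.03   # Manning roughness (illustrative)
DX, DT = 5.0, 0.5  # cell size (m), time step (s); CFL-safe for h <= 1 m

def step(h, q, z):
    eta = [zi + hi for zi, hi in zip(z, h)]       # water surface elevation
    for i in range(len(q)):                       # interface i: cells i, i+1
        hflow = max(eta[i], eta[i + 1]) - max(z[i], z[i + 1])
        if hflow <= 0.0:
            q[i] = 0.0
            continue
        slope = (eta[i + 1] - eta[i]) / DX
        q[i] = (q[i] - G * hflow * DT * slope) / \
               (1.0 + G * DT * N_MANNING ** 2 * abs(q[i]) / hflow ** (7.0 / 3.0))
    for i in range(len(h)):                       # mass update per cell
        inflow = q[i - 1] if i > 0 else 0.0
        outflow = q[i] if i < len(q) else 0.0
        h[i] += DT * (inflow - outflow) / DX
    return h, q

# Small dam-break-like test: a mound of water levelling out over a flat bed.
z = [0.0] * 6
h = [1.0, 1.0, 1.0, 0.2, 0.2, 0.2]
q = [0.0] * 5
for _ in range(100):
    h, q = step(h, q, z)
print(h)  # depths relax toward a flat surface; total volume is conserved
```

    The semi-implicit denominator is what lets the scheme take much larger stable steps than a fully explicit friction term, which is the source of the speed-up over diffusive solvers reported above.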

  4. Developing a molecular dynamics force field for both folded and disordered protein states.

    PubMed

    Robustelli, Paul; Piana, Stefano; Shaw, David E

    2018-05-07

    Molecular dynamics (MD) simulation is a valuable tool for characterizing the structural dynamics of folded proteins and should be similarly applicable to disordered proteins and proteins with both folded and disordered regions. It has been unclear, however, whether any physical model (force field) used in MD simulations accurately describes both folded and disordered proteins. Here, we select a benchmark set of 21 systems, including folded and disordered proteins, simulate these systems with six state-of-the-art force fields, and compare the results to over 9,000 available experimental data points. We find that none of the tested force fields simultaneously provided accurate descriptions of folded proteins, of the dimensions of disordered proteins, and of the secondary structure propensities of disordered proteins. Guided by simulation results on a subset of our benchmark, however, we modified parameters of one force field, achieving excellent agreement with experiment for disordered proteins, while maintaining state-of-the-art accuracy for folded proteins. The resulting force field, a99SB-disp, should thus greatly expand the range of biological systems amenable to MD simulation. A similar approach could be taken to improve other force fields. Copyright © 2018 the Author(s). Published by PNAS.

  5. Analysis of a Neutronic Experiment on a Simulated Mercury Spallation Neutron Target Assembly Bombarded by Giga-Electron-Volt Protons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maekawa, Fujio; Meigo, Shin-ichiro; Kasugai, Yoshimi

    2005-05-15

    A neutronic benchmark experiment on a simulated spallation neutron target assembly was conducted by using the Alternating Gradient Synchrotron at Brookhaven National Laboratory and was analyzed to investigate the prediction capability of Monte Carlo simulation codes used in neutronic designs of spallation neutron sources. The target assembly, consisting of a mercury target, a light water moderator, and a lead reflector, was bombarded by 1.94-, 12-, and 24-GeV protons, and the fast neutron flux distributions around the target and the spectra of thermal neutrons leaking from the moderator were measured in the experiment. In this study, the Monte Carlo particle transport simulation codes NMTC/JAM, MCNPX, and MCNP-4A with associated cross-section data in JENDL and LA-150 were verified based on benchmark analysis of the experiment. As a result, all the calculations predicted the measured quantities adequately; calculated integral fluxes of fast and thermal neutrons agreed within approximately ±40% with the experiments although the overall energy range encompassed more than 12 orders of magnitude. Accordingly, it was concluded that these simulation codes and cross-section data were adequate for neutronics designs of spallation neutron sources.

  6. Benchmarking Model Variants in Development of a Hardware-in-the-Loop Simulation System

    NASA Technical Reports Server (NTRS)

    Aretskin-Hariton, Eliot D.; Zinnecker, Alicia M.; Kratz, Jonathan L.; Culley, Dennis E.; Thomas, George L.

    2016-01-01

    Distributed engine control architecture presents a significant increase in complexity over traditional implementations when viewed from the perspective of system simulation and hardware design and test. Even if the overall function of the control scheme remains the same, the hardware implementation can have a significant effect on the overall system performance due to differences in the creation and flow of data between control elements. A Hardware-in-the-Loop (HIL) simulation system is under development at NASA Glenn Research Center that enables the exploration of these hardware dependent issues. The system is based on, but not limited to, the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k). This paper describes the step-by-step conversion from the self-contained baseline model to the hardware in the loop model, and the validation of each step. As the control model hardware fidelity was improved during HIL system development, benchmarking simulations were performed to verify that engine system performance characteristics remained the same. The results demonstrate the goal of the effort; the new HIL configurations have similar functionality and performance compared to the baseline C-MAPSS40k system.

  7. Using Machine Learning to Predict MCNP Bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grechanuk, Pavel Aleksandrovi

    For many real-world applications in radiation transport where simulations are compared to experimental measurements, as in nuclear criticality safety, the bias (simulated minus experimental k-eff) in the calculation is an extremely important quantity used for code validation. The objective of this project is to accurately predict the bias of MCNP6 [1] criticality calculations using machine learning (ML) algorithms, with the intention of creating a tool that can complement the current nuclear criticality safety methods. In the latest release of MCNP6, the Whisper tool is available for criticality safety analysts and includes a large catalogue of experimental benchmarks, sensitivity profiles, and nuclear data covariance matrices. This data, coming from 1100+ benchmark cases, is used in this study of ML algorithms for criticality safety bias predictions.
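
    As a toy illustration of the general idea behind data-driven bias prediction, an application's k-eff bias can be estimated as a similarity-weighted average of benchmark biases, with similarity computed between nuclear-data sensitivity profiles. This is emphatically not Whisper's actual algorithm nor the ML models of the study above; the profiles and bias values are invented.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sensitivity vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def predicted_bias(app_profile, benchmarks):
    """benchmarks: list of (sensitivity_profile, measured_bias_pcm).
    Weight each benchmark's bias by its (non-negative) similarity to the
    application, so dissimilar benchmarks contribute little."""
    weights = [max(cosine(app_profile, p), 0.0) for p, _ in benchmarks]
    total = sum(weights)
    if total == 0.0:
        raise ValueError("no similar benchmarks")
    return sum(w * b for w, (_, b) in zip(weights, benchmarks)) / total

# Invented sensitivity profiles over a few (nuclide, reaction) channels,
# with biases in pcm (simulated minus experimental k-eff).
benchmarks = [
    ([0.9, 0.1, 0.0], +120.0),
    ([0.7, 0.3, 0.1], +80.0),
    ([0.0, 0.2, 0.9], -150.0),
]
app = [0.8, 0.2, 0.05]
print(round(predicted_bias(app, benchmarks), 1))
```

    The estimate lands near the two similar positive-bias benchmarks; an ML regressor generalizes this weighting by learning it from the benchmark catalogue instead of fixing it by hand.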

  8. Numerical simulation of air distribution in a room with a sidewall jet under benchmark test conditions

    NASA Astrophysics Data System (ADS)

    Zasimova, Marina; Ivanov, Nikolay

    2018-05-01

    The goal of the study is to validate Large Eddy Simulation (LES) data on mixing ventilation in an isothermal room at conditions of benchmark experiments by Hurnik et al. (2015). The focus is on the accuracy of the mean and rms velocity fields prediction in the quasi-free jet zone of the room with 3D jet supplied from a sidewall rectangular diffuser. Calculations were carried out using the ANSYS Fluent 16.2 software with an algebraic wall-modeled LES subgrid-scale model. CFD results on the mean velocity vector are compared with the Laser Doppler Anemometry data. The difference between the mean velocity vector and the mean air speed in the jet zone, both LES-computed, is presented and discussed.

  9. Simulation of differential die-away instrument’s response to asymmetrically burned spent nuclear fuel

    DOE PAGES

    Martinik, Tomas; Henzl, Vladimir; Grape, Sophie; ...

    2015-03-04

    Here, previous simulation studies of the Differential Die-Away (DDA) instrument’s response to active interrogation of spent nuclear fuel from a pressurized water reactor (PWR) yielded promising results in terms of its capability to accurately measure or estimate basic spent fuel assembly (SFA) characteristics, such as multiplication, initial enrichment (IE) and burn-up (BU), as well as the total plutonium content. These studies were, however, performed only for a subset of idealized SFAs with a symmetric BU with respect to the longitudinal axis. Therefore, to complement the previous results, additional simulations have been performed of the DDA instrument’s response to interrogation of asymmetrically burned spent nuclear fuel in order to determine whether detailed assay of SFAs from all 4 sides will be necessary in real life applications or whether a cost- and time-saving single sided assay could be used to achieve results of similar quality as previously reported in the case of symmetrically burned SFAs.

  10. Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Benjamin, E-mail: collinsbs@ornl.gov; Stimpson, Shane, E-mail: stimpsonsg@ornl.gov; Kelley, Blake W., E-mail: kelleybl@umich.edu

    2016-12-01

    A consistent “2D/1D” neutron transport method is derived from the 3D Boltzmann transport equation, to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower order transport solution to discretize the axial variable. This paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulations of Light Water Reactors (CASL) core simulator VERA-CS. Several applications have been performed on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.

  11. Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT

    DOE PAGES

    Collins, Benjamin; Stimpson, Shane; Kelley, Blake W.; ...

    2016-08-25

    We derived a consistent “2D/1D” neutron transport method from the 3D Boltzmann transport equation, to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower order transport solution to discretize the axial variable. Our paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulations of Light Water Reactors (CASL) core simulator VERA-CS. We also performed several applications on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.

  13. Spiking neural network simulation: memory-optimal synaptic event scheduling.

    PubMed

    Stewart, Robert D; Gurney, Kevin N

    2011-06-01

    Spiking neural network simulations incorporating variable transmission delays require synaptic events to be scheduled prior to delivery. Conventional methods have memory requirements that scale with the total number of synapses in a network. We introduce novel scheduling algorithms for both discrete and continuous event delivery, where the memory requirement scales instead with the number of neurons. Superior algorithmic performance is demonstrated using large-scale, benchmarking network simulations.
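    The memory saving described above can be illustrated with a minimal sketch of one such scheduler: a ring buffer with one slot per future time step, so storage grows with the number of neurons and the maximum delay rather than with the total synapse count. The class and its names are illustrative, not taken from the paper, and assume discrete time steps with a known maximum delay.

    ```python
    class DelayRingScheduler:
        """Schedule spike deliveries with memory O(neurons x max_delay),
        not O(synapses): each future slot stores only the ids of neurons
        that fired; per-synapse targets are resolved at delivery time."""

        def __init__(self, max_delay_steps):
            self.max_delay = max_delay_steps
            self.slots = [[] for _ in range(max_delay_steps)]  # ring buffer
            self.t = 0

        def schedule(self, neuron_id, delay_steps):
            # A spike emitted now arrives delay_steps later.
            assert 0 < delay_steps <= self.max_delay
            slot = (self.t + delay_steps) % self.max_delay
            self.slots[slot].append(neuron_id)

        def advance(self):
            """Step the clock and return the spikes due at the new time."""
            self.t += 1
            slot = self.t % self.max_delay
            due, self.slots[slot] = self.slots[slot], []
            return due
    ```

    Because a slot is emptied the moment its events are delivered, it can immediately be reused for events exactly `max_delay` steps in the future, which is what keeps the buffer size fixed.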

  14. Neutron Collar Evolution and Fresh PWR Assembly Measurements with a New Fast Neutron Passive Collar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menlove, Howard Olsen; Geist, William H.; Root, Margaret A.

    The passive neutron collar approach removes the effect of poison rods when a 1 mm Gd liner is used. This project sets out to address the following challenges: BWR fuel assemblies have less mass and less neutron multiplication than PWR assemblies, and the effective removal of cosmic-ray spallation neutron bursts must be demonstrated through QC tests.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kastenberg, W.E.; Apostolakis, G.; Dhir, V.K.

    Severe accident management can be defined as the use of existing and/or alternative resources, systems and actions to prevent or mitigate a core-melt accident. For each accident sequence and each combination of severe accident management strategies, there may be several options available to the operator, and each involves phenomenological and operational considerations regarding uncertainty. Operational uncertainties include operator, system and instrumentation behavior during an accident. A framework based on decision trees and influence diagrams has been developed which incorporates such criteria as feasibility, effectiveness, and adverse effects for evaluating potential severe accident management strategies. The framework is also capable of propagating both data and model uncertainty. It is applied to several potential strategies including PWR cavity flooding, BWR drywell flooding, PWR depressurization, and PWR feed and bleed.

  16. Secondary Startup Neutron Sources as a Source of Tritium in a Pressurized Water Reactor (PWR) Reactor Coolant System (RCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaver, Mark W.; Lanning, Donald D.

    2010-02-01

    The hypothesis of this paper is that the Zircaloy clad fuel source is minimal and that secondary startup neutron sources are the significant contributors of the tritium in the RCS that was previously assigned to release from fuel. Currently there are large uncertainties in the attribution of tritium in a Pressurized Water Reactor (PWR) Reactor Coolant System (RCS). The measured amount of tritium in the coolant cannot be separated out empirically into its individual sources. Therefore, to quantify individual contributors, all sources of tritium in the RCS of a PWR must be understood theoretically and verified by the sum of the individual components equaling the measured values.

  17. Optimization of small long-life PWR based on thorium fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subkhi, Moh Nurul, E-mail: nsubkhi@students.itb.ac.id; Physics Dept., Faculty of Science and Technology, State Islamic University of Sunan Gunung Djati Bandung Jalan A.H Nasution 105 Bandung; Suud, Zaki, E-mail: szaki@fi.itb.ac.id

    2015-09-30

    A conceptual design of a small long-life Pressurized Water Reactor (PWR) using thorium fuel has been investigated from the neutronics standpoint. The cell burnup calculations were performed by the PIJ SRAC code using a nuclear data library based on JENDL 3.2, while the multi-energy-group diffusion calculations were optimized in three-dimensional X-Y-Z core geometry by COREBN. The excess reactivity of thorium nitride fuel with ZIRLO cladding is considered during 5 years of burnup without refueling. Optimization of the 350 MWe long-life PWR with 5% ²³³U & 2.8% ²³¹Pa, 6% ²³³U & 2.8% ²³¹Pa, and 7% ²³³U & 6% ²³¹Pa gives low excess reactivity.

  18. Current and planned numerical development for improving computing performance for long duration and/or low pressure transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faydide, B.

    1997-07-01

    This paper presents the current and planned numerical developments for improving computing performance in Cathare applications needing real time, such as simulator applications. Cathare is a thermal-hydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the general characteristics of the code are presented, covering physical models, numerical topics, and validation strategy. Then, the current and planned applications of Cathare in the field of simulators are discussed. Some of these applications were made in the past using a simplified and fast-running version of Cathare (Cathare-Simu); the status of the numerical improvements obtained with Cathare-Simu is presented. The planned developments concern mainly the Simulator Cathare Release (SCAR) project, which deals with the use of the most recent version of Cathare inside simulators. In this frame, the numerical developments are related to speeding up the calculation process using parallel processing and to improving code reliability on a large set of NPP transients.

  19. A wind energy benchmark for ABL modelling of a diurnal cycle with a nocturnal low-level jet: GABLS3 revisited

    DOE PAGES

    Rodrigo, J. Sanz; Churchfield, M.; Kosović, B.

    2016-10-03

    The third GEWEX Atmospheric Boundary Layer Studies (GABLS3) model intercomparison study, around the Cabauw met tower in the Netherlands, is revisited as a benchmark for wind energy atmospheric boundary layer (ABL) models. The case was originally developed by the boundary layer meteorology community, interested in analysing the performance of single-column and large-eddy simulation atmospheric models dealing with a diurnal cycle leading to the development of a nocturnal low-level jet. The case addresses fundamental questions related to the definition of the large-scale forcing, the interaction of the ABL with the surface and the evaluation of model results with observations. The characterization of mesoscale forcing for asynchronous microscale modelling of the ABL is discussed based on momentum budget analysis of WRF simulations. Then a single-column model is used to demonstrate the added value of incorporating different forcing mechanisms in microscale models. The simulations are evaluated in terms of wind energy quantities of interest.

  20. FLUKA Monte Carlo simulations and benchmark measurements for the LHC beam loss monitors

    NASA Astrophysics Data System (ADS)

    Sarchiapone, L.; Brugger, M.; Dehning, B.; Kramer, D.; Stockner, M.; Vlachoudis, V.

    2007-10-01

    One of the crucial elements in terms of machine protection for CERN's Large Hadron Collider (LHC) is its beam loss monitoring (BLM) system. On-line loss measurements must prevent the superconducting magnets from quenching and protect the machine components from damage due to unforeseen critical beam losses. In order to ensure the BLM's design quality, in the final design phase of the LHC detailed FLUKA Monte Carlo simulations were performed for the betatron collimation insertion. In addition, benchmark measurements were carried out with LHC-type BLMs installed at the CERN-EU high-energy Reference Field facility (CERF). This paper presents results of FLUKA calculations performed for BLMs installed in the collimation region, compares the results of the CERF measurement with FLUKA simulations and evaluates related uncertainties. This, together with the fact that the CERF source spectra at the respective BLM locations are comparable with those at the LHC, allows assessing the sensitivity of the performed LHC design studies.

  1. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas

    2009-01-01

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three-million- and five-million-atom biological systems scale well up to 30k cores, producing 30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.

  2. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer.

    PubMed

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas; Smith, Jeremy C

    2009-10-13

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to ∼30k cores, producing ∼30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.

  3. SU-E-J-30: Benchmark Image-Based TCP Calculation for Evaluation of PTV Margins for Lung SBRT Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, M; Chetty, I; Zhong, H

    2014-06-01

    Purpose: Tumor control probability (TCP) calculated with accumulated radiation doses may help design appropriate treatment margins. Image registration errors, however, may compromise the calculated TCP. The purpose of this study is to develop benchmark CT images to quantify registration-induced errors in the accumulated doses and their corresponding TCP. Methods: 4DCT images were registered from end-inhale (EI) to end-exhale (EE) using a “demons” algorithm. The demons DVFs were corrected by an FEM model to get realistic deformation fields. The FEM DVFs were used to warp the EI images to create the FEM-simulated images. The two images combined with the FEM DVF formed a benchmark model. Maximum intensity projection (MIP) images, created from the EI and simulated images, were used to develop IMRT plans. Two plans with 3 and 5 mm margins were developed for each patient. With these plans, radiation doses were recalculated on the simulated images and warped back to the EI images using the FEM DVFs to get the accumulated doses. The Elastix software was used to register the FEM-simulated images to the EI images. TCPs calculated with the Elastix-accumulated doses were compared with those generated by the FEM to get the TCP error of the Elastix registrations. Results: For six lung patients, the mean Elastix registration error ranged from 0.93 to 1.98 mm. Their relative dose errors in PTV were between 0.28% and 6.8% for 3 mm-margin plans, and between 0.29% and 6.3% for 5 mm-margin plans. As the PTV margin was reduced from 5 to 3 mm, the mean TCP error of the Elastix-reconstructed doses increased from 2.0% to 2.9%, and the mean NTCP error decreased from 1.2% to 1.1%. Conclusion: Patient-specific benchmark images can be used to evaluate the impact of registration errors on the computed TCPs, and may help select appropriate PTV margins for lung SBRT patients.
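    The sensitivity of TCP to dose errors in part of the target can be sketched with a generic voxel-wise Poisson TCP model. This is a textbook formulation, not the study's own model, and the clonogen density and radiosensitivity values below are illustrative assumptions only.

    ```python
    import math

    def poisson_tcp(voxel_doses_gy, clonogens_per_voxel=1e4, alpha=0.35):
        """Poisson TCP for a tumour split into voxels: the expected number
        of surviving clonogens in a voxel receiving dose D is
        N * exp(-alpha * D), and TCP = exp(-total expected survivors).
        alpha and N are illustrative, not fitted values from the study."""
        survivors = sum(clonogens_per_voxel * math.exp(-alpha * d)
                        for d in voxel_doses_gy)
        return math.exp(-survivors)

    # A registration error that under-doses part of the accumulated-dose
    # target lowers TCP; the difference quantifies the TCP error.
    true_doses = [54.0] * 100
    shifted    = [54.0] * 95 + [48.0] * 5   # 5 voxels under-dosed
    tcp_err = poisson_tcp(true_doses) - poisson_tcp(shifted)
    ```

    Even a few under-dosed voxels dominate the survivor sum because of the exponential dose response, which is why small registration errors can produce a measurable TCP error.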

  4. A Monte-Carlo Benchmark of TRIPOLI-4® and MCNP on ITER neutronics

    NASA Astrophysics Data System (ADS)

    Blanchet, David; Pénéliau, Yannick; Eschbach, Romain; Fontaine, Bruno; Cantone, Bruno; Ferlet, Marc; Gauthier, Eric; Guillon, Christophe; Letellier, Laurent; Proust, Maxime; Mota, Fernando; Palermo, Iole; Rios, Luis; Guern, Frédéric Le; Kocan, Martin; Reichle, Roger

    2017-09-01

    Radiation protection and shielding studies are often based on the extensive use of 3D Monte-Carlo neutron and photon transport simulations. The ITER organization hence recommends the use of the MCNP-5 code (version 1.60), in association with the FENDL-2.1 neutron cross section data library, specifically dedicated to fusion applications. The MCNP reference model of the ITER tokamak, the 'C-lite', is being continuously developed and improved. This article proposes to develop an alternative model, equivalent to the 'C-lite', but for the Monte-Carlo code TRIPOLI-4®. A benchmark study is defined to test this new model. Since one of the most critical areas for ITER neutronics analysis concerns the assessment of radiation levels and Shutdown Dose Rates (SDDR) behind the Equatorial Port Plugs (EPP), the benchmark is conducted to compare the neutron flux through the EPP. This problem is quite challenging with regard to the complex geometry and considering the important neutron flux attenuation, ranging from 10¹⁴ down to 10⁸ n·cm⁻²·s⁻¹. Such code-to-code comparison provides independent validation of the Monte-Carlo simulations, improving the confidence in neutronic results.

  5. Bio-inspired benchmark generator for extracellular multi-unit recordings

    PubMed Central

    Mondragón-González, Sirenia Lizbeth; Burguière, Eric

    2017-01-01

    The analysis of multi-unit extracellular recordings of brain activity has led to the development of numerous tools, ranging from signal processing algorithms to electronic devices and applications. Currently, the evaluation and optimisation of these tools are hampered by the lack of ground-truth databases of neural signals. These databases must be parameterisable, easy to generate and bio-inspired, i.e. containing features encountered in real electrophysiological recording sessions. Towards that end, this article introduces an original computational approach to create fully annotated and parameterised benchmark datasets, generated from the summation of three components: neural signals from compartmental models and recorded extracellular spikes, non-stationary slow oscillations, and a variety of different types of artefacts. We present three application examples. (1) We reproduced in-vivo extracellular hippocampal multi-unit recordings from either tetrode or polytrode designs. (2) We simulated recordings in two different experimental conditions: anaesthetised and awake subjects. (3) Last, we also conducted a series of simulations to study the impact of different level of artefacts on extracellular recordings and their influence in the frequency domain. Beyond the results presented here, such a benchmark dataset generator has many applications such as calibration, evaluation and development of both hardware and software architectures. PMID:28233819
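    The three-component summation described above can be sketched in a toy form: known spike waveforms, a slow non-stationary oscillation, and broadband artefact noise are added to produce a trace whose ground truth (the spike times) is known by construction. All waveform shapes, amplitudes, and function names below are illustrative assumptions, not the generator's actual components.

    ```python
    import math
    import random

    def benchmark_trace(n, fs=20000.0, seed=0):
        """Toy ground-truth extracellular trace built from three summed
        components, mirroring the generator's structure; returns both the
        signal and the known spike times for annotation."""
        rng = random.Random(seed)
        spike_times = sorted(rng.sample(range(100, n - 100), 20))
        trace = [0.0] * n
        for t0 in spike_times:                  # 1) spike waveforms
            for k in range(40):                 # biphasic, decaying shape
                trace[t0 + k] += -80.0 * math.sin(2 * math.pi * k / 40) \
                                 * math.exp(-k / 15.0)
        for i in range(n):                      # 2) slow 2 Hz oscillation
            trace[i] += 30.0 * math.sin(2 * math.pi * 2.0 * i / fs)
        for i in range(n):                      # 3) artefact/noise floor
            trace[i] += rng.gauss(0.0, 5.0)
        return trace, spike_times               # data + ground truth
    ```

    Because the spike times are returned alongside the trace, any spike-sorting pipeline run on the trace can be scored against exact ground truth, which is the point of such a generator.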

  6. How well does your model capture the terrestrial ecosystem dynamics of the Arctic-Boreal Region?

    NASA Astrophysics Data System (ADS)

    Stofferahn, E.; Fisher, J. B.; Hayes, D. J.; Huntzinger, D. N.; Schwalm, C.

    2016-12-01

    The Arctic-Boreal Region (ABR) is a major source of uncertainties for terrestrial biosphere model (TBM) simulations. These uncertainties are precipitated by a lack of observational data from the region, affecting the parameterizations of cold environment processes in the models. Addressing these uncertainties requires a coordinated effort of data collection and integration of the following key indicators of the ABR ecosystem: disturbance, flora / fauna and related ecosystem function, carbon pools and biogeochemistry, permafrost, and hydrology. We are developing a model-data integration framework for NASA's Arctic Boreal Vulnerability Experiment (ABoVE), wherein data collection for the key ABoVE indicators is driven by matching observations and model outputs to the ABoVE indicators. The data are used as reference datasets for a benchmarking system which evaluates TBM performance with respect to ABR processes. The benchmarking system utilizes performance metrics to identify intra-model and inter-model strengths and weaknesses, which in turn provides guidance to model development teams for reducing uncertainties in TBM simulations of the ABR. The system is directly connected to the International Land Model Benchmarking (ILaMB) system, as an ABR-focused application.

  7. Adaptive unified continuum FEM modeling of a 3D FSI benchmark problem.

    PubMed

    Jansson, Johan; Degirmenci, Niyazi Cem; Hoffman, Johan

    2017-09-01

    In this paper, we address a 3D fluid-structure interaction benchmark problem that represents important characteristics of biomedical modeling. We present a goal-oriented adaptive finite element methodology for incompressible fluid-structure interaction based on a streamline diffusion-type stabilization of the balance equations for mass and momentum for the entire continuum in the domain, which is implemented in the Unicorn/FEniCS software framework. A phase marker function and its corresponding transport equation are introduced to select the constitutive law, where the mesh tracks the discontinuous fluid-structure interface. This results in a unified simulation method for fluids and structures. We present detailed results for the benchmark problem compared with experiments, together with a mesh convergence study. Copyright © 2016 John Wiley & Sons, Ltd.

  8. A Level-set based framework for viscous simulation of particle-laden supersonic flows

    NASA Astrophysics Data System (ADS)

    Das, Pratik; Sen, Oishik; Jacobs, Gustaaf; Udaykumar, H. S.

    2017-06-01

    Particle-laden supersonic flows are important in natural and industrial processes, such as volcanic eruptions, explosions, and pneumatic conveyance of particles in material processing. Numerical study of such high-speed particle-laden flows at the mesoscale calls for a numerical framework which allows simulation of supersonic flow around multiple moving solid objects. Only a few efforts have been made toward the development of numerical frameworks for viscous simulation of particle-fluid interaction in the supersonic flow regime. The current work presents a Cartesian grid based sharp-interface method for viscous simulations of the interaction between supersonic flow and moving rigid particles. The no-slip boundary condition is imposed at the solid-fluid interfaces using a modified ghost fluid method (GFM). The current method is validated against the similarity solution of the compressible boundary layer over a flat plate and a benchmark numerical solution for steady supersonic flow over a cylinder. Further validation is carried out against benchmark numerical results for shock-induced lift-off of a cylinder in a shock tube. A 3D simulation of steady supersonic flow over a sphere is performed to compare the numerically obtained drag coefficient with experimental results. A particle-resolved viscous simulation of shock interaction with a cloud of particles is performed to demonstrate that the current method is suitable for large-scale particle-resolved simulations of particle-laden supersonic flows.

  9. Benchmark of multi-phase method for the computation of fast ion distributions in a tokamak plasma in the presence of low-amplitude resonant MHD activity

    NASA Astrophysics Data System (ADS)

    Bierwage, A.; Todo, Y.

    2017-11-01

    The transport of fast ions in a beam-driven JT-60U tokamak plasma subject to resonant magnetohydrodynamic (MHD) mode activity is simulated using the so-called multi-phase method, where 4 ms intervals of classical Monte-Carlo simulations (without MHD) are interlaced with 1 ms intervals of hybrid simulations (with MHD). The multi-phase simulation results are compared to results obtained with continuous hybrid simulations, which were recently validated against experimental data (Bierwage et al., 2017). It is shown that the multi-phase method, in spite of causing significant overshoots in the MHD fluctuation amplitudes, accurately reproduces the frequencies and positions of the dominant resonant modes, as well as the spatial profile and velocity distribution of the fast ions, while consuming only a fraction of the computation time required by the continuous hybrid simulation. The present paper is limited to low-amplitude fluctuations consisting of a few long-wavelength modes that interact only weakly with each other. The success of this benchmark study paves the way for applying the multi-phase method to the simulation of Abrupt Large-amplitude Events (ALE), which were seen in the same JT-60U experiments but at larger time intervals. Possible implications for the construction of reduced models for fast ion transport are discussed.

  10. Interfacing VPSC with finite element codes. Demonstration of irradiation growth simulation in a cladding tube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patra, Anirban; Tome, Carlos

    This Milestone report shows good progress in interfacing VPSC with the FE codes ABAQUS and MOOSE to perform component-level simulations of irradiation-induced deformation in zirconium alloys. In this preliminary application, we have performed an irradiation growth simulation in the quarter geometry of a cladding tube. We have benchmarked VPSC-ABAQUS and VPSC-MOOSE predictions against VPSC-SA predictions to verify the accuracy of the VPSC-FE interface. Predictions from the FE simulations are in general agreement with VPSC-SA simulations and also with experimental trends.

  11. Development of a computer code to couple PWR-GALE output and PC-CREAM input

    NASA Astrophysics Data System (ADS)

    Kuntjoro, S.; Budi Setiawan, M.; Nursinta Adi, W.; Deswandri; Sunaryo, G. R.

    2018-02-01

    Radionuclide dispersion analysis is an important part of reactor safety analysis; from it, the doses received by radiation workers and by the communities around a nuclear reactor can be obtained. Under normal operating conditions, the dispersion analysis is carried out using the PC-CREAM code, which requires input data such as the source term and the population distribution. These inputs are derived from the output of another program, PWR-GALE, and the population distribution data must be written in a specific format. Compiling PC-CREAM inputs manually requires high accuracy, since it involves large amounts of data in fixed formats, and manual compilation often introduces errors. To minimize such errors, this work developed a coupling program between PWR-GALE output and PC-CREAM input, together with a program that writes population distribution data in the required format. The programs were written in the Python programming language, which has the advantages of being multiplatform, object-oriented and interactive. The result of this work is software that couples the source-term data and writes the population distribution data, so that PC-CREAM inputs can be generated easily and without formatting errors.
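    A coupling of this kind can be sketched as two small routines: one parsing nuclide release rates out of a text report, one writing a population grid as fixed-width records. Both file layouts below are hypothetical stand-ins (the actual PWR-GALE and PC-CREAM formats differ), and all function names are ours.

    ```python
    def read_gale_source_terms(path):
        """Parse nuclide release rates from a text report. The two-column
        'NUCLIDE  CI/YR' layout assumed here is illustrative only."""
        terms = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) == 2:
                    try:
                        terms[parts[0]] = float(parts[1])
                    except ValueError:
                        pass  # skip header/footer lines
        return terms

    def write_cream_population(grid, path):
        """Write a population grid as fixed-width records (8 characters
        per cell) -- a hypothetical stand-in for the real population-file
        layout expected by the dose-assessment code."""
        with open(path, "w") as f:
            for row in grid:
                f.write("".join(f"{int(n):8d}" for n in row) + "\n")
    ```

    Writing the fixed-width records programmatically is exactly the step that eliminates the manual formatting errors the abstract describes.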

  12. Stress corrosion crack initiation of alloy 600 in PWR primary water

    DOE PAGES

    Zhai, Ziqing; Toloczko, Mychailo B.; Olszta, Matthew J.; ...

    2017-04-27

    Stress corrosion crack (SCC) initiation of three mill-annealed alloy 600 heats in simulated pressurized water reactor primary water has been investigated using constant load tests equipped with in-situ direct current potential drop (DCPD) measurement capabilities. SCC initiation times were greatly reduced by a small amount of cold work. Shallow intergranular attack and/or cracks were found on most high-energy grain boundaries intersecting the surface with only a small fraction evolving into larger cracks and intergranular SCC growth. Crack depth profiles were measured and related to DCPD-detected initiation response. Lastly, we discuss processes controlling the SCC initiation in mill-annealed alloy 600.

  13. Stress corrosion crack initiation of alloy 600 in PWR primary water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Ziqing; Toloczko, Mychailo B.; Olszta, Matthew J.

    Stress corrosion crack (SCC) initiation of three mill-annealed alloy 600 heats in simulated pressurized water reactor primary water has been investigated using constant load tests equipped with in-situ direct current potential drop (DCPD) measurement capabilities. SCC initiation times were greatly reduced by a small amount of cold work. Shallow intergranular attack and/or cracks were found on most high-energy grain boundaries intersecting the surface with only a small fraction evolving into larger cracks and intergranular SCC growth. Crack depth profiles were measured and related to DCPD-detected initiation response. Lastly, we discuss processes controlling the SCC initiation in mill-annealed alloy 600.

  14. A Simple Graphical Method for Quantification of Disaster Management Surge Capacity Using Computer Simulation and Process-control Tools.

    PubMed

    Franc, Jeffrey Michael; Ingrassia, Pier Luigi; Verde, Manuela; Colombo, Davide; Della Corte, Francesco

    2015-02-01

    Surge capacity, or the ability to manage an extraordinary volume of patients, is fundamental for hospital management of mass-casualty incidents. However, quantification of surge capacity is difficult: no universal standard for its measurement has emerged, nor has a standardized statistical method been advocated. As mass-casualty incidents are rare, simulation may represent a viable alternative for measuring surge capacity. Hypothesis/Problem: The objective of the current study was to develop a statistical method for the quantification of surge capacity using a combination of computer simulation and simple process-control statistical tools. Length-of-stay (LOS) and patient volume (PV) were used as metrics. The use of this method was then demonstrated on a subsequent computer simulation of an emergency department (ED) response to a mass-casualty incident. In the derivation phase, 357 participants in five countries performed 62 computer simulations of an ED response to a mass-casualty incident. Benchmarks for ED response were derived from these simulations, including LOS and PV metrics for triage, bed assignment, physician assessment, and disposition. In the application phase, 13 students of the European Master in Disaster Medicine (EMDM) program completed the same simulation scenario, and the results were compared to the standards obtained in the derivation phase. Patient-volume metrics included the number of patients to be triaged, assigned to rooms, assessed by a physician, and disposed. Length-of-stay metrics included median time to triage, room assignment, physician assessment, and disposition. Simple graphical methods were used to compare the application-phase group to the derived benchmarks using process-control statistical tools. The group in the application phase failed to meet the indicated standard for LOS from admission to disposition decision.
This study demonstrates how simulation software can be used to derive values for objective benchmarks of ED surge capacity using PV and LOS metrics. These objective metrics can then be applied to other simulation groups using simple graphical process-control tools to provide a numeric measure of surge capacity. Repeated use in simulations of actual EDs may represent a potential means of objectively quantifying disaster management surge capacity. It is hoped that the described statistical method, which is simple and reusable, will be useful for investigators in this field to apply to their own research.
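    The benchmark-comparison step can be sketched with a simple Shewhart-style control check: derive limits from the derivation-phase simulations, then test whether an application-phase group's median LOS falls inside them. The k=2 multiplier and all numbers below are invented for illustration, not taken from the study.

    ```python
    import statistics

    def control_limits(benchmark_values, k=2.0):
        """Shewhart-style limits from derivation-phase results:
        mean +/- k standard deviations (k=2 is an illustrative choice)."""
        mu = statistics.mean(benchmark_values)
        sd = statistics.stdev(benchmark_values)
        return mu - k * sd, mu + k * sd

    def meets_standard(group_median_los, benchmark_medians):
        """True if the group's median LOS lies within the control limits."""
        lo, hi = control_limits(benchmark_medians)
        return lo <= group_median_los <= hi

    # Median minutes from triage to disposition in derivation-phase runs
    # (values invented for illustration):
    derivation = [42, 45, 39, 47, 44, 41, 43, 46, 40, 44]
    # meets_standard(58, derivation) would flag a group exceeding the limit.
    ```

    Plotting the group value against the two limit lines gives the "simple graphical method" the abstract refers to, with the numeric pass/fail falling out of the same comparison.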

  15. Shuttle Engine Designs Revolutionize Solar Power

    NASA Technical Reports Server (NTRS)

    2014-01-01

    The Space Shuttle Main Engine was built under contract to Marshall Space Flight Center by Rocketdyne, now part of Pratt & Whitney Rocketdyne (PWR). PWR applied its NASA experience to solar power technology and licensed the technology to Santa Monica, California-based SolarReserve. The company now develops concentrating solar power projects, including a plant in Nevada that has created 4,300 jobs during construction.

  16. Fourier Transform-Plasmon Waveguide Spectroscopy: A Nondestructive Multifrequency Method for Simultaneously Determining Polymer Thickness and Apparent Index of Refraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobbitt, Jonathan M; Weibel, Stephen C; Elshobaki, Moneim

    2014-12-16

    Fourier transform (FT)-plasmon waveguide resonance (PWR) spectroscopy measures light reflectivity at a waveguide interface as the incident frequency and angle are scanned. Under conditions of total internal reflection, the reflected light intensity is attenuated when the incident frequency and angle satisfy conditions for exciting surface plasmon modes in the metal as well as guided modes within the waveguide. Expanding upon the concept of two-frequency surface plasmon resonance developed by Peterlinz and Georgiadis [Opt. Commun. 1996, 130, 260], the apparent index of refraction and the thickness of a waveguide can be measured precisely and simultaneously by FT-PWR with an average percent relative error of 0.4%. Measuring reflectivity for a range of frequencies extends the analysis to a wide variety of sample compositions and thicknesses, since frequencies with the maximum attenuation can be selected to optimize the analysis. Additionally, the ability to measure reflectivity curves with both p- and s-polarized light provides anisotropic indices of refraction. FT-PWR is demonstrated using polystyrene waveguides of varying thickness, and the validity of FT-PWR measurements is verified by comparing the results to data from profilometry and atomic force microscopy (AFM).

  17. Fourier transform-plasmon waveguide spectroscopy: a nondestructive multifrequency method for simultaneously determining polymer thickness and apparent index of refraction.

    PubMed

    Bobbitt, Jonathan M; Weibel, Stephen C; Elshobaki, Moneim; Chaudhary, Sumit; Smith, Emily A

    2014-12-16

    Fourier transform (FT)-plasmon waveguide resonance (PWR) spectroscopy measures light reflectivity at a waveguide interface as the incident frequency and angle are scanned. Under conditions of total internal reflection, the reflected light intensity is attenuated when the incident frequency and angle satisfy conditions for exciting surface plasmon modes in the metal as well as guided modes within the waveguide. Expanding upon the concept of two-frequency surface plasmon resonance developed by Peterlinz and Georgiadis [Opt. Commun. 1996, 130, 260], the apparent index of refraction and the thickness of a waveguide can be measured precisely and simultaneously by FT-PWR with an average percent relative error of 0.4%. Measuring reflectivity for a range of frequencies extends the analysis to a wide variety of sample compositions and thicknesses since frequencies with the maximum attenuation can be selected to optimize the analysis. Additionally, the ability to measure reflectivity curves with both p- and s-polarized light provides anisotropic indices of refraction. FT-PWR is demonstrated using polystyrene waveguides of varying thickness, and the validity of FT-PWR measurements are verified by comparing the results to data from profilometry and atomic force microscopy (AFM).

  18. Analysis of 2D Torus and Hub Topologies of 100Mb/s Ethernet for the Whitney Commodity Computing Testbed

    NASA Technical Reports Server (NTRS)

    Pedretti, Kevin T.; Fineberg, Samuel A.; Kutler, Paul (Technical Monitor)

    1997-01-01

    A variety of network technologies and topologies are currently being evaluated as part of the Whitney Project. This paper reports on the implementation and performance of a Fast Ethernet network configured in a 4x4 2D torus topology in a testbed cluster of 'commodity' Pentium Pro PCs. Several benchmarks were used for performance evaluation: an MPI point-to-point message passing benchmark, an MPI collective communication benchmark, and the NAS Parallel Benchmarks version 2.2 (NPB2). Our results show that for point-to-point communication on an unloaded network, the hub and one-hop routes on the torus have about the same bandwidth and latency. However, the bandwidth decreases and the latency increases on the torus for each additional route hop. Collective communication benchmarks show that the torus provides roughly four times more aggregate bandwidth and eight times faster MPI barrier synchronizations than a hub-based network for 16-processor systems. Finally, the SOAPBOX benchmarks, which simulate real-world CFD applications, generally demonstrated substantially better performance on the torus than on the hub. In the few cases where the hub was faster, the difference was negligible. In total, our experimental results lead to the conclusion that for Fast Ethernet networks, the torus topology performs and scales better than a hub-based network.
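As a rough illustration of the hop-count effect described above, the sketch below models minimal routing distance on a 4x4 2D torus and a latency that grows with each extra routed hop. The constants (`base_us`, `per_hop_us`) are hypothetical placeholders, not the measured Whitney values.

```python
# Hypothetical latency model for a 4x4 2D torus (illustrative constants,
# not the measured Whitney numbers).
def torus_hops(src, dst, dim=4):
    """Minimal hop count between two nodes on a dim x dim 2D torus."""
    def axis_dist(a, b):
        d = abs(a - b)
        return min(d, dim - d)  # wraparound links shorten the path
    (sx, sy), (dx, dy) = src, dst
    return axis_dist(sx, dx) + axis_dist(sy, dy)

def point_to_point_latency(hops, base_us=100.0, per_hop_us=75.0):
    """Latency grows with each routed hop beyond the first (illustrative)."""
    return base_us + per_hop_us * max(hops - 1, 0)

# On a 4x4 torus the worst-case minimal route is 2 hops per axis = 4 hops.
worst = max(torus_hops((0, 0), (x, y)) for x in range(4) for y in range(4))
print(worst)                          # 4
print(point_to_point_latency(1))      # 100.0 (one hop, comparable to a hub)
print(point_to_point_latency(worst))  # 325.0
```

The wraparound links are what bound the worst-case route at 4 hops on a 4x4 mesh; without them it would be 6.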

  19. Benchmark results for few-body hypernuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruffino, Fabrizio Ferrari; Lonardoni, Diego; Barnea, Nir

    2017-03-16

    Here, the Non-Symmetrized Hyperspherical Harmonics method (NSHH) is introduced in the hypernuclear sector and benchmarked against three different ab initio methods, namely the Auxiliary Field Diffusion Monte Carlo method, the Faddeev–Yakubovsky approach, and the Gaussian Expansion Method. Binding energies and hyperon separation energies of three- to five-body hypernuclei are calculated by employing the two-body ΛN component of the phenomenological Bodmer–Usmani potential, and a hyperon-nucleon interaction simulating the scattering phase shifts given by NSC97f. The range of applicability of the NSHH method is briefly discussed.

  20. Using SPARK as a Solver for Modelica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Wetter, Michael; Haves, Philip

    Modelica is an object-oriented acausal modeling language that is well positioned to become a de facto standard for expressing models of complex physical systems. To simulate a model expressed in Modelica, the model needs to be translated into executable code. To generate run-time efficient code, such a translation needs to employ algebraic formula manipulations. Since the SPARK solver has been shown to be competitive for generating such code but currently cannot be used with the Modelica language, we report in this paper how SPARK's symbolic and numerical algorithms can be implemented in OpenModelica, an open-source implementation of a Modelica modeling and simulation environment. We also report benchmark results showing that for our air flow network simulation benchmark, the SPARK solver is competitive with Dymola, which is believed to provide the best solver for Modelica.

  1. Engine dynamic analysis with general nonlinear finite element codes. II - Bearing element implementation, overall numerical characteristics and benchmarking

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Adams, M.; Lam, P.; Fertis, D.; Zeid, I.

    1982-01-01

    Second-year efforts within a three-year study to develop and extend finite element (FE) methodology to efficiently handle the transient/steady-state response of the rotor-bearing-stator structures associated with gas turbine engines are outlined. The two main areas of work are (1) implementing the squeeze film damper element into a general-purpose FE code for testing and evaluation, and (2) determining the numerical characteristics of the FE-generated rotor-bearing-stator simulation scheme. The governing FE field equations are set out and the solution methodology is presented. The choice of ADINA as the general-purpose FE code is explained, and the numerical operational characteristics of the direct integration approach to FE-generated rotor-bearing-stator simulations are determined, including benchmarking, comparison of explicit vs. implicit direct integration methodologies, and demonstration problems.
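The explicit direct time integration mentioned above can be illustrated on a single-degree-of-freedom oscillator with the central difference scheme. This is a generic sketch of the technique, not the ADINA rotor-bearing-stator formulation; the spring-mass parameters are chosen only to make the behavior easy to check.

```python
import math

# Minimal sketch of explicit direct time integration (central difference)
# for an undamped spring-mass system m*x'' + k*x = 0.
def central_difference(m, k, x0, v0, dt, steps):
    """Return the displacement history under explicit central differencing."""
    xs = [x0]
    a0 = -(k / m) * x0
    x_prev = x0 - dt * v0 + 0.5 * dt**2 * a0   # fictitious step n = -1
    x = x0
    for _ in range(steps):
        a = -(k / m) * x                       # acceleration from equilibrium
        x_next = 2.0 * x - x_prev + dt**2 * a  # central difference update
        x_prev, x = x, x_next
        xs.append(x)
    return xs

m, k = 1.0, (2.0 * math.pi) ** 2   # natural period T = 1 s
xs = central_difference(m, k, x0=1.0, v0=0.0, dt=0.001, steps=1000)
print(round(xs[-1], 3))            # close to 1.0 after one full period
```

The scheme is conditionally stable (it requires a time step small relative to the highest natural period), which is exactly the explicit-vs-implicit trade-off the study examines.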

  2. The development of a virtual reality training curriculum for colonoscopy.

    PubMed

    Sugden, Colin; Aggarwal, Rajesh; Banerjee, Amrita; Haycock, Adam; Thomas-Gibson, Siwan; Williams, Christopher B; Darzi, Ara

    2012-07-01

    To develop a structured virtual reality (VR) training curriculum for colonoscopy using high-fidelity simulation. Colonoscopy requires detailed knowledge and technical skill. Changes to working practices in recent times have reduced the availability of traditional training opportunities. Much might, therefore, be achieved by applying novel technologies such as VR simulation to colonoscopy. Scientifically developed, device-specific curricula aim to maximize the yield of laboratory-based training by focusing on validated modules and linking progression to the attainment of benchmarked proficiency criteria. Fifty participants were recruited: 30 novices (<10 colonoscopies), 10 intermediates (100 to 500 colonoscopies), and 10 experienced colonoscopists (>500 colonoscopies). Surrogates of proficiency, such as the number of procedures undertaken, determined prospective allocation to 1 of 3 groups (novice, intermediate, and experienced). Construct validity and learning value (comparison between groups and within groups, respectively) for each task and metric on the chosen simulator model determined suitability for inclusion in the curriculum. Eight tasks in possession of construct validity and significant learning curves were included in the curriculum: 3 abstract tasks, 4 part-procedural tasks, and 1 procedural task. The whole-procedure task was valid for 11 metrics including the following: "time taken to complete the task" (1238, 343, and 293 s; P < 0.001) and "insertion length with embedded tip" (23.8, 3.6, and 4.9 cm; P = 0.005). Learning curves consistently plateaued at or beyond the ninth attempt. Valid metrics were used to define benchmarks, derived from the performance of the experienced cohort, for each included task. A comprehensive, stratified, benchmarked, whole-procedure curriculum has been developed for a modern high-fidelity VR colonoscopy simulator.

  3. Physics of hydride fueled PWR

    NASA Astrophysics Data System (ADS)

    Ganda, Francesco

    The first part of the work presents the neutronic results of a detailed and comprehensive study of the feasibility of using hydride fuel in pressurized water reactors (PWR). The primary hydride fuel examined is U-ZrH1.6 having 45 w/o uranium; two acceptable design approaches were identified: (1) use of erbium as a burnable poison; (2) replacement of a fraction of the ZrH1.6 by thorium hydride along with addition of some IFBA. The replacement of 25 v/o of ZrH1.6 by ThH2 along with use of IFBA was identified as the preferred design approach, as it gives a slight cycle length gain whereas use of erbium burnable poison results in a cycle length penalty. The feasibility of a single recycling of plutonium in PWRs in the form of U-PuH2-ZrH1.6 has also been assessed. This fuel was found superior to MOX in terms of the TRU fractional transmutation (53% for U-PuH2-ZrH1.6 versus 29% for MOX) and proliferation resistance. A thorough investigation of the physics characteristics of hydride fuels has been performed to understand the reasons for the trends in the reactivity coefficients. The second part of this work assessed the feasibility of multi-recycling plutonium in PWRs using hydride fuel. It was found that the fertile-free hydride fuel PuH2-ZrH1.6 enables multi-recycling of Pu in PWRs an unlimited number of times. This unique feature of hydride fuels is due to the incorporation of a significant fraction of the hydrogen moderator in the fuel, thereby mitigating the effect of spectrum hardening due to coolant voiding accidents. An equivalent oxide fuel PuO2-ZrO2 was investigated as well and found to enable up to 10 recycles. The feasibility of recycling Pu and all the TRU using hydride fuels was investigated as well. It was found that hydride fuels allow recycling of Pu+Np at least 6 times. If it were desired to recycle all the TRU in PWRs using hydrides, the number of possible recycles is limited to 3; the limit is imposed by the large positive void reactivity feedback.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barry, Kenneth

    The Nuclear Energy Institute (NEI) Small Modular Reactor (SMR) Licensing Task Force (TF) has been evaluating licensing issues unique and important to iPWRs, ranking these issues, and developing NEI position papers for submittal to the U.S. Nuclear Regulatory Commission (NRC) during the past three years. Papers have been developed and submitted to the NRC in a range of areas including: Price-Anderson Act, NRC annual fees, security, modularity, and staffing. In December 2012, NEI completed a draft position paper on SMR source terms and participated in an NRC public meeting presenting a summary of this paper, which was subsequently submitted to the NRC. One important conclusion of the source term paper was the evaluation and selection of high importance areas where additional research would have a significant impact on source terms. The highest ranked research area was iPWR containment aerosol natural deposition. The NRC accepts the use of existing aerosol deposition correlations in Regulatory Guide 1.183, but these were developed for large light water reactor (LWR) containments. Application of these correlations to an iPWR design has resulted in greater than a ten-fold reduction of containment airborne aerosol inventory as compared to large LWRs. Development and experimental justification of containment aerosol natural deposition correlations specifically for the unique iPWR containments is expected to result in a large reduction of design basis and beyond-design-basis accident source terms with concomitantly smaller dose to workers and the public. Therefore, NRC acceptance of iPWR containment aerosol natural deposition correlations will directly support the industry’s goal of reducing the Emergency Planning Zone (EPZ) for SMRs. Based on the results in this work, it is clear that thermophoresis is relatively unimportant for iPWRs. Gravitational settling is well understood, and may be the dominant process for a dry environment.
Diffusiophoresis and enhanced settling by particle growth are the dominant processes for determining DFs for expected conditions in an iPWR containment. These processes are dependent on the area-to-volume (A/V) ratio, which should benefit iPWR designs because these reactors have higher A/Vs compared to existing LWRs.

  5. New methods to benchmark simulations of accreting black holes systems against observations

    NASA Astrophysics Data System (ADS)

    Markoff, Sera; Chatterjee, Koushik; Liska, Matthew; Tchekhovskoy, Alexander; Hesp, Casper; Ceccobello, Chiara; Russell, Thomas

    2017-08-01

    The field of black hole accretion has been significantly advanced by the use of complex ideal general relativistic magnetohydrodynamics (GRMHD) codes, now capable of simulating scales from the event horizon out to ~10^5 gravitational radii at high resolution. The challenge remains how to test these simulations against data, because the self-consistent treatment of radiation is still in its early days, and is complicated by dependence on non-ideal/microphysical processes not yet included in the codes. On the other extreme, a variety of phenomenological models (disk, corona, jet, wind) can well-describe spectra or variability signatures in a particular waveband, although often not both. To bring these two methodologies together, we need robust observational “benchmarks” that can be identified and studied in simulations. I will focus on one example of such a benchmark, from recent observational campaigns on black holes across the mass scale: the jet break. I will describe new work attempting to understand what drives this feature by searching for regions that share similar trends in terms of dependence on accretion power or magnetisation. Such methods can allow early tests of simulation assumptions and help pinpoint which regions will dominate the light production, well before full radiative processes are incorporated, and will help guide the interpretation of, e.g. Event Horizon Telescope data.

  6. Genomic prediction in animals and plants: simulation of data, validation, reporting, and benchmarking.

    PubMed

    Daetwyler, Hans D; Calus, Mario P L; Pong-Wong, Ricardo; de Los Campos, Gustavo; Hickey, John M

    2013-02-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. 
In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits with fewer QTL variable selection did have some advantages. In the real data sets examined here all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing accuracy and bias of new methods to results from genomic best linear unbiased prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to application of genomic prediction in plants and animals.
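The two performance indicators the review recommends reporting, accuracy (correlation of predicted and true genetic values) and bias (regression slope of true on predicted), can be sketched on synthetic data as below. The predictor here is a hypothetical stand-in (true value plus Gaussian noise), not one of the benchmarked methods.

```python
import random, math

# Accuracy and bias of genomic prediction, computed on synthetic data.
def accuracy_and_bias(true_vals, pred_vals):
    n = len(true_vals)
    mt = sum(true_vals) / n
    mp = sum(pred_vals) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(true_vals, pred_vals)) / (n - 1)
    var_t = sum((t - mt) ** 2 for t in true_vals) / (n - 1)
    var_p = sum((p - mp) ** 2 for p in pred_vals) / (n - 1)
    accuracy = cov / math.sqrt(var_t * var_p)  # Pearson correlation
    bias = cov / var_p                         # slope of true ~ predicted; 1.0 means no inflation
    return accuracy, bias

random.seed(1)
true_bv = [random.gauss(0.0, 1.0) for _ in range(500)]     # "true" breeding values
pred_bv = [t + random.gauss(0.0, 0.5) for t in true_bv]    # hypothetical noisy predictor
acc, bias = accuracy_and_bias(true_bv, pred_bv)
print(round(acc, 2), round(bias, 2))
```

With this noise level the expected accuracy is about 1/sqrt(1.25) ≈ 0.89, and the slope is below 1 because the predictor's variance is inflated by noise, which is exactly the kind of dispersion bias the review says should be reported.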

  7. Genomic Prediction in Animals and Plants: Simulation of Data, Validation, Reporting, and Benchmarking

    PubMed Central

    Daetwyler, Hans D.; Calus, Mario P. L.; Pong-Wong, Ricardo; de los Campos, Gustavo; Hickey, John M.

    2013-01-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. 
In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits with fewer QTL variable selection did have some advantages. In the real data sets examined here all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing accuracy and bias of new methods to results from genomic best linear unbiased prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to application of genomic prediction in plants and animals. PMID:23222650

  8. Computational Investigation of In-Flight Temperature in Shaped Charge Jets and Explosively Formed Penetrators

    NASA Astrophysics Data System (ADS)

    Sable, Peter; Helminiak, Nathaniel; Harstad, Eric; Gullerud, Arne; Hollenshead, Jeromy; Hertel, Eugene; Sandia National Laboratories Collaboration; Marquette University Collaboration

    2017-06-01

    With the increasing use of hydrocodes in modeling and system design, experimental benchmarking of software has never been more important. While this has been a large area of focus since the inception of computational design, comparisons with temperature data are sparse due to experimental limitations. A novel temperature measurement technique, magnetic diffusion analysis, has enabled the acquisition of in-flight temperature measurements of hypervelocity projectiles. Using this technique, an AC-14 bare shaped charge and an LX-14 EFP, both with copper linings, were simulated using CTH to benchmark temperature against experimental results. Particular attention was given to the slug temperature profiles after separation, and to the effect of varying equation-of-state and strength models. Simulations are in agreement with experiment, attaining better than 2% error relative to observed shaped-charge temperatures; this varied notably depending on the strength model used. Similar observations were made simulating the EFP case, with a minimum 4% deviation. Jet structures compare well with radiographic images and are consistent with ALEGRA simulations previously conducted. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  9. Measurements and FLUKA simulations of bismuth and aluminium activation at the CERN Shielding Benchmark Facility (CSBF)

    NASA Astrophysics Data System (ADS)

    Iliopoulou, E.; Bamidis, P.; Brugger, M.; Froeschl, R.; Infantino, A.; Kajimoto, T.; Nakao, N.; Roesler, S.; Sanami, T.; Siountas, A.

    2018-03-01

    The CERN High Energy AcceleRator Mixed field facility (CHARM) is located in the CERN Proton Synchrotron (PS) East Experimental Area. The facility receives a pulsed proton beam from the CERN PS with a beam momentum of 24 GeV/c, 5×10^11 protons per pulse, a pulse length of 350 ms, and a maximum average beam intensity of 6.7×10^10 p/s; the beam impacts on the CHARM target. The shielding of the CHARM facility also includes the CERN Shielding Benchmark Facility (CSBF) situated laterally above the target. This facility consists of 80 cm of cast iron and 360 cm of concrete, with barite concrete in some places. Activation samples of bismuth and aluminium were placed in the CSBF and in the CHARM access corridor in July 2015. Monte Carlo simulations with the FLUKA code have been performed to estimate the specific production yields for these samples. The results estimated by FLUKA Monte Carlo simulations are compared to activation measurements of these samples. The comparison between FLUKA simulations and the measured values from γ-spectrometry gives an agreement better than a factor of 2.
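The factor-of-2 agreement criterion quoted above amounts to a simple ratio test between simulated and measured activities. The sample names and activity values below are invented placeholders, not the CSBF measurements.

```python
# Simulation-to-experiment comparison: a sim/meas ratio between 1/2 and 2
# counts as agreement "better than a factor of 2". Values are placeholders.
def within_factor(sim, meas, factor=2.0):
    ratio = sim / meas
    return 1.0 / factor <= ratio <= factor

samples = {                     # name: (simulated, measured), arbitrary units
    "Bi-sample-1": (5.2e3, 4.1e3),
    "Bi-sample-2": (8.9e2, 1.4e3),
    "Al-sample-1": (3.3e4, 2.9e4),
}
for name, (sim, meas) in samples.items():
    print(name, round(sim / meas, 2), within_factor(sim, meas))
```

Reporting the ratio itself, not just the pass/fail flag, is what lets such comparisons expose systematic over- or under-prediction.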

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold H. Kritz

    PTRANSP, which is the predictive version of the TRANSP code, was developed in a collaborative effort involving the Princeton Plasma Physics Laboratory, General Atomics Corporation, Lawrence Livermore National Laboratory, and Lehigh University. The PTRANSP/TRANSP suite of codes is the premier integrated tokamak modeling software in the United States. A production service for PTRANSP/TRANSP simulations is maintained at the Princeton Plasma Physics Laboratory; the server has a simple command line client interface and is subscribed to by about 100 researchers from tokamak projects in the US, Europe, and Asia. This service produced nearly 13,000 PTRANSP/TRANSP simulations in the four-year period FY 2005 through FY 2008. Major archives of TRANSP results are maintained at PPPL, MIT, General Atomics, and JET. Recent utilization, counting experimental analysis simulations as well as predictive simulations, more than doubled from slightly over 2000 simulations per year in FY 2005 and FY 2006 to over 4300 simulations per year in FY 2007 and FY 2008. PTRANSP predictive simulations applied to ITER increased eight-fold from 30 simulations per year in FY 2005 and FY 2006 to 240 simulations per year in FY 2007 and FY 2008, accounting for more than half of combined PTRANSP/TRANSP service CPU resource utilization in FY 2008. PTRANSP studies focused on ITER played a key role in journal articles. Examples of validation studies carried out for momentum transport in PTRANSP simulations were presented at the 2008 IAEA conference. The increase in the number of PTRANSP simulations has continued (more than 7000 TRANSP/PTRANSP simulations in 2010) and results of PTRANSP simulations appear in conference proceedings, for example the 2010 IAEA conference, and in peer-reviewed papers. PTRANSP provides a bridge to the Fusion Simulation Program (FSP) and to the future of integrated modeling.
Through years of widespread usage, each of the many parts of the PTRANSP suite of codes has been thoroughly validated against experimental data and benchmarked against other codes. At the same time, architectural modernizations are improving the modularity of the PTRANSP code base. The NUBEAM neutral beam and fusion products fast ion model, the Plasma State data repository (developed originally in the SWIM SciDAC project and adapted for use in PTRANSP), and other components are already shared with the SWIM, FACETS, and CPES SciDAC FSP prototype projects. Thus, the PTRANSP code is already serving as a bridge between our present integrated modeling capability and future capability. As the Fusion Simulation Program builds toward the facility currently available in the PTRANSP suite of codes, early versions of the FSP core plasma model will need to be benchmarked against the PTRANSP simulations. This will be necessary to build user confidence in FSP, but this benchmarking can only be done if PTRANSP itself is maintained and developed.

  11. An End-to-End simulator for the development of atmospheric corrections and temperature - emissivity separation algorithms in the TIR spectral domain

    NASA Astrophysics Data System (ADS)

    Rock, Gilles; Fischer, Kim; Schlerf, Martin; Gerhards, Max; Udelhoven, Thomas

    2017-04-01

    The development and optimization of image processing algorithms requires the availability of datasets depicting every step from the earth's surface to the sensor's detector. The lack of ground-truth data makes it necessary to develop algorithms on simulated data. The simulation of hyperspectral remote sensing data is a useful tool for a variety of tasks such as the design of systems, the understanding of the image formation process, and the development and validation of data processing algorithms. An end-to-end simulator has been set up consisting of a forward simulator, a backward simulator, and a validation module. The forward simulator derives radiance datasets based on laboratory sample spectra, applies atmospheric contributions using radiative transfer equations, and simulates the instrument response using configurable sensor models. This is followed by the backward simulation branch, consisting of an atmospheric correction (AC), a temperature and emissivity separation (TES), or a hybrid AC and TES algorithm. An independent validation module allows the comparison between input and output datasets and the benchmarking of different processing algorithms. In this study, hyperspectral thermal infrared scenes of a variety of surfaces have been simulated to analyze existing AC and TES algorithms. The ARTEMISS algorithm was optimized and benchmarked against the original implementations. The errors in TES were found to be related to incorrect water vapor retrieval. The atmospheric characterization could be optimized, resulting in increased accuracy in temperature and emissivity retrieval. Airborne datasets of different spectral resolutions were simulated from terrestrial HyperCam-LW measurements. The simulated airborne radiance spectra were subjected to atmospheric correction and TES and further used for a plant species classification study analyzing effects related to noise and mixed pixels.
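The forward branch described above rests on the standard TIR radiative transfer equation, L_sensor = tau * (eps * B(lam, T) + (1 - eps) * L_down) + L_up, with B the Planck function. A minimal sketch follows; the atmospheric terms (tau, L_up, L_down) are placeholder values rather than output of a radiative transfer code.

```python
import math

# Forward simulation of at-sensor TIR radiance from surface temperature and
# emissivity. Atmospheric terms are illustrative placeholders.
H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck(lam_m, t_k):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    return (2.0 * H * C**2 / lam_m**5) / (math.exp(H * C / (lam_m * KB * t_k)) - 1.0)

def at_sensor_radiance(lam_m, t_k, eps, tau, l_up, l_down):
    """L = tau * (eps * B + (1 - eps) * L_down) + L_up."""
    return tau * (eps * planck(lam_m, t_k) + (1.0 - eps) * l_down) + l_up

lam = 10e-6                      # 10 micrometres, in the TIR window
b = planck(lam, 300.0)           # blackbody radiance of a 300 K surface
l = at_sensor_radiance(lam, 300.0, eps=0.95, tau=0.8,
                       l_up=0.1 * b, l_down=0.3 * b)
print(l < b)   # with these terms the at-sensor radiance stays below blackbody
```

The backward (AC and TES) branch then has to invert exactly this equation for eps and T, which is why water vapor errors in tau and L_down propagate directly into the retrieved temperature.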

  12. Differential Die-Away Instrument: Report on Benchmark Measurements and Comparison with Simulation for the Effects of Neutron Poisons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsell, Alison Victoria; Swinhoe, Martyn Thomas; Henzl, Vladimir

    2015-03-30

    In this report, new experimental data and MCNPX simulation results of the differential die-away (DDA) instrument response to the presence of neutron absorbers are evaluated. In our previous fresh nuclear fuel experiments and simulations, no neutron absorbers or poisons were included in the fuel definition. These new results showcase the capability of the DDA instrument to acquire data from a system that better mimics spent nuclear fuel.

  13. Nuclear Data Uncertainties for Typical LWR Fuel Assemblies and a Simple Reactor Core

    NASA Astrophysics Data System (ADS)

    Rochman, D.; Leray, O.; Hursin, M.; Ferroukhi, H.; Vasiliev, A.; Aures, A.; Bostelmann, F.; Zwermann, W.; Cabellos, O.; Diez, C. J.; Dyrda, J.; Garcia-Herranz, N.; Castro, E.; van der Marck, S.; Sjöstrand, H.; Hernandez, A.; Fleming, M.; Sublet, J.-Ch.; Fiorito, L.

    2017-01-01

    The impact of covariances in current nuclear data libraries such as ENDF/B-VII.1, JEFF-3.2, JENDL-4.0, SCALE and TENDL on relevant current reactors is presented in this work. The uncertainties due to nuclear data are calculated for existing PWR and BWR fuel assemblies (with burn-up up to 40 GWd/tHM, followed by 10 years of cooling time) and for a simplified PWR full core model (without burn-up) for quantities such as k∞, macroscopic cross sections, pin power or isotope inventory. In this work, the method of propagation of uncertainties is based on random sampling of nuclear data, either from covariance files or directly from basic parameters. Additionally, possible biases on calculated quantities are investigated, such as the self-shielding treatment. Different calculation schemes are used, based on CASMO, SCALE, DRAGON, MCNP or FISPACT-II, thus simulating real-life assignments for technical-support organizations. The outcome of such a study is a comparison of uncertainties with two consequences. One: although this study is not expected to lead to similar results between the involved calculation schemes, it provides insight into what can happen when calculating uncertainties and gives some perspective on the range of validity of these uncertainties. Two: it allows drawing a picture of the current state of knowledge, using existing nuclear data library covariances and current methods.
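The random-sampling approach to uncertainty propagation can be sketched on a toy model: perturb cross sections within assumed relative uncertainties and observe the spread of a derived quantity, here a four-factor-style k-infinity = nu * Sigma_f / Sigma_a. All numbers below are illustrative and uncorrelated, unlike real covariance data.

```python
import random, math

# Toy Monte Carlo propagation of nuclear data uncertainty to k-infinity.
random.seed(42)
NU, SIG_F, SIG_A = 2.43, 0.05, 0.12   # nominal values (illustrative, in cm^-1)
REL_U_F, REL_U_A = 0.02, 0.015        # assumed 1-sigma relative uncertainties

samples = []
for _ in range(5000):
    sf = SIG_F * (1.0 + random.gauss(0.0, REL_U_F))   # perturbed fission XS
    sa = SIG_A * (1.0 + random.gauss(0.0, REL_U_A))   # perturbed absorption XS
    samples.append(NU * sf / sa)                      # derived quantity

mean = sum(samples) / len(samples)
std = math.sqrt(sum((k - mean) ** 2 for k in samples) / (len(samples) - 1))
print(round(mean, 3), round(100.0 * std / mean, 1))   # k-inf and % uncertainty
```

With uncorrelated 2% and 1.5% inputs the output spread is close to sqrt(2^2 + 1.5^2) = 2.5%; real covariance files introduce correlations that can enlarge or cancel this.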

  14. Commercial Building Energy Saver, Web App

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Piette, Mary; Lee, Sang Hoon

    The CBES App is a web-based toolkit for use by small businesses and by owners and operators of small and medium-sized commercial buildings to perform energy benchmarking and retrofit analysis for buildings. The CBES App analyzes the energy performance of the user's building, pre- and post-retrofit, in conjunction with the user's input data, to identify recommended retrofit measures, energy savings, and economic analysis for the selected measures. The CBES App provides energy benchmarking, including getting an EnergyStar score using the EnergyStar API and benchmarking against California peer buildings using the EnergyIQ API. The retrofit analysis includes a preliminary analysis that looks up retrofit measures from a pre-simulated database, DEEP, and a detailed analysis that creates and runs EnergyPlus models to calculate the energy savings of retrofit measures. The CBES App builds upon the LBNL CBES API.

  15. Benchmarking study of the MCNP code against cold critical experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, S.

    1991-01-01

    The purpose of this study was to benchmark the widely used Monte Carlo code MCNP against a set of cold critical experiments with a view to using the code as a means of independently verifying the performance of faster but less accurate Monte Carlo and deterministic codes. The experiments simulated consisted of both fast and thermal criticals as well as fuel in a variety of chemical forms. A standard set of benchmark cold critical experiments was modeled. These included the two fast experiments, GODIVA and JEZEBEL, the TRX metallic uranium thermal experiments, the Babcock and Wilcox oxide and mixed oxide experiments, and the Oak Ridge National Laboratory (ORNL) and Pacific Northwest Laboratory (PNL) nitrate solution experiments. The principal case studied was a small critical experiment that was performed with boiling water reactor bundles.

  16. Creation of problem-dependent Doppler-broadened cross sections in the KENO Monte Carlo code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Shane W. D.; Celik, Cihangir; Maldonado, G. Ivan

    2015-11-06

In this paper, we introduce a quick method for improving the accuracy of Monte Carlo simulations by generating one- and two-dimensional cross sections at a user-defined temperature before performing transport calculations. A finite difference method is used to Doppler-broaden cross sections to the desired temperature, and unit-base interpolation is used to generate the probability distributions for double-differential two-dimensional thermal moderator cross sections at any arbitrary user-defined temperature. The accuracy of these methods is tested using a variety of contrived problems. In addition, various benchmarks at elevated temperatures are modeled and compared with published benchmark results. The problem-dependent cross sections are observed to produce eigenvalue estimates that are closer to the benchmark results than those obtained without them.
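The finite-difference broadening step can be sketched as a first-order temperature estimate between two library temperatures. This is an illustration of the general idea under stated assumptions, not the KENO implementation; the function name and the cross-section values are hypothetical.

```python
# Illustrative sketch (not the KENO implementation): estimate a cross section
# at a user-defined temperature T from library values at two bracketing
# temperatures T0 and T1, using a first-order finite difference in T.
def broaden_xs(sigma_t0, sigma_t1, t0, t1, t):
    """First-order estimate: sigma(T) ~ sigma(T0) + dsigma/dT * (T - T0)."""
    dsigma_dt = (sigma_t1 - sigma_t0) / (t1 - t0)
    return sigma_t0 + dsigma_dt * (t - t0)

# Hypothetical capture cross sections (barns) at 293.6 K and 600 K,
# extrapolated to a user-defined 900 K:
sigma_900 = broaden_xs(10.0, 9.2, 293.6, 600.0, 900.0)
```

At the library temperatures themselves the estimate reproduces the library values exactly, which is the minimal sanity check for such a scheme.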

  17. Investigation of Natural Circulation Instability and Transients in Passively Safe Small Modular Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishii, Mamoru

The NEUP-funded project NEUP-3496 aims to experimentally investigate two-phase natural circulation flow instability that could occur in Small Modular Reactors (SMRs), especially natural circulation SMRs. The objective has been achieved by systematically performing tests to study general natural circulation instability characteristics and natural circulation behavior under start-up and design basis accident conditions. Experimental data sets highlighting the effect of void reactivity feedback as well as the effects of power ramp-up rate and system pressure have been used to develop a comprehensive stability map. The safety analysis code RELAP5 has been used to evaluate experimental results and models. Improvements to the constitutive relations for flashing have been made in order to develop a reliable analysis tool. This research has focused on two generic SMR designs: a small modular Simplified Boiling Water Reactor (SBWR)-like design and a small integral Pressurized Water Reactor (PWR)-like design. A BWR-type natural circulation test facility was first built based on a three-level scaling analysis of the Purdue Novel Modular Reactor (NMR) with an electric output of 50 MWe (NMR-50), which represents a BWR-type SMR with a significantly reduced reactor pressure vessel (RPV) height. The experimental facility was instrumented to measure thermal-hydraulic parameters such as pressure, temperature, mass flow rate, and void fraction. Characterization tests were performed before the startup transient tests and quasi-steady tests to determine the loop flow resistance. The control system and data acquisition system were programmed in LabVIEW for real-time control and data storage. Thermal-hydraulic and nuclear coupled startup transients were performed to investigate flow instabilities at low pressure and low power conditions for NMR-50.
Two different power ramps were chosen to study the effect of startup power density on the flow instability. The experimental startup transient results showed the existence of three different flow instability mechanisms: flashing instability, condensation-induced flow instability, and density wave oscillations. In addition, void-reactivity feedback did not have significant effects on the flow instability during the startup transients for NMR-50. Several initial startup procedures with different power ramp rates were experimentally investigated to eliminate the flow instabilities observed during the startup transients. In particular, very slow startup transient and pressurized startup transient tests were performed and compared. It was found that very slow startup transients, applying a very small power density, can eliminate the flashing oscillations in single-phase natural circulation and stabilize the flow oscillations in the phase of net vapor generation. The initially pressurized startup procedure was also tested as a means of eliminating the flashing instability during the startup transients. The pressurized startup procedure included initial pressurization, heat-up, and venting. The startup transient tests showed that the pressurized startup procedure could eliminate the flow instability during the transition from single-phase to two-phase flow at low pressure conditions. The experimental results indicated that both startup procedures were applicable to the initial startup of the NMR; however, the pressurized startup procedure might be preferred because it requires fewer operating hours. To gain a deeper understanding of natural circulation flow instability, quasi-steady tests were performed using the test facility fitted with a preheater and subcooler.
The effects of system pressure, core inlet subcooling, core power density, inlet flow resistance coefficient, and void reactivity feedback were investigated in the quasi-steady state tests. Experimental stability boundaries between unstable and stable flow conditions were determined in the dimensionless stability plane of inlet subcooling number and Zuber number. To predict the stability boundary theoretically, linear stability analysis in the frequency domain was performed for four sections of the natural circulation test loop. The flashing phenomenon in the chimney section was treated as an axially uniform heat source, and the dimensionless characteristic equation of the pressure drop perturbation was obtained by considering the void fraction effect and outlet flow resistance in the core section. The theoretical flashing boundary showed some discrepancies with previous experimental data from the quasi-steady state tests; incorporating thermal non-equilibrium is recommended to improve the accuracy of the flashing instability boundary in future work. As another part of the funded research, flow instabilities of a PWR-type SMR under low pressure and low power conditions were also investigated experimentally. The NuScale reactor design was selected as the prototype for the PWR-type SMR. In order to experimentally study the natural circulation behavior of the NuScale reactor during accident scenarios, detailed scaling analyses are necessary to ensure that the scaled phenomena can be reproduced in a laboratory test facility. The three-level scaling method was again used to obtain scaling ratios derived from various non-dimensional numbers. The design of the ideally scaled facility (ISF) was first established from these scaling ratios. The engineering scaled facility (ESF) was then designed and constructed from the ISF, taking into account engineering limitations such as laboratory space, pipe size, and pipe connections.
PWR-type SMR experiments were performed in this well-scaled test facility to investigate potential thermal-hydraulic flow instability during blowdown events, which might occur during a loss of coolant accident (LOCA) or loss of heat sink accident (LOHS) of the prototype PWR-type SMR. Two kinds of experiments, a normal blowdown event and a cold blowdown event, were investigated experimentally and compared with code predictions. The normal blowdown event was experimentally simulated from an initial condition where the pressure was lower than the design pressure of the experimental facility, while the code prediction of blowdown started from the normal operating condition. Important thermal-hydraulic parameters including reactor pressure vessel (RPV) pressure, containment pressure, local void fraction and temperature, pressure drop, and natural circulation flow rate were measured and analyzed during the blowdown event. The pressure and water level transients are similar to the experimental results published by NuScale [51], which demonstrates the capability of the current loop in simulating the thermal-hydraulic transients of a real PWR-type SMR. During the 20,000 s blowdown experiment, the water level in the core remained above the active fuel assembly throughout, supporting the safety of the natural circulation cooling and water recycling design of the PWR-type SMR. In addition, the pressure, temperature, and water level transients were accurately predicted by the RELAP5 code. However, oscillations of natural circulation flow rate, water level, and pressure drops were observed during the blowdown transients. These flow oscillations are related to the water level and the location of the upper plenum, which is a path for coolant flow from the chimney to the steam generator and downcomer. To investigate, both experimentally and numerically, the transients starting from the opening of the ADS valve, the cold blowdown experiment was conducted.
For the cold blowdown event, instead of setting both the reactor pressure vessel (RPV) and containment at high temperature and pressure, only the RPV was heated close to the highest design pressure before the ADS valve was opened; the same process was predicted using the RELAP5 code. With the cold blowdown experiment, the entire transient from the opening of the ADS can be investigated with the code and benchmarked against experimental data. Similar flow instability was observed in the cold blowdown experiment. The comparison between code predictions and experimental data showed that the RELAP5 code can successfully predict the pressure, void fraction, and temperature transients during the cold blowdown event with limited error, but numerical instability exists in predicting the natural circulation flow rate. In addition, the code lacks the capability to predict the water-level-related flow instability observed in the experiments.
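The dimensionless stability-plane coordinates used in the quasi-steady analysis above can be sketched as follows. The definitions follow the common forms of the subcooling number and Zuber (phase-change) number; the property values in the example are illustrative, not facility data.

```python
# Hedged sketch of the stability-plane coordinates (common definitions):
#   N_sub = (h_f - h_in)/h_fg * (rho_f - rho_g)/rho_g
#   N_Zu  = q/(mdot * h_fg) * (rho_f - rho_g)/rho_g
def subcooling_number(h_f, h_in, h_fg, rho_f, rho_g):
    """Dimensionless inlet subcooling (J/kg enthalpies, kg/m^3 densities)."""
    return (h_f - h_in) / h_fg * (rho_f - rho_g) / rho_g

def zuber_number(q, mdot, h_fg, rho_f, rho_g):
    """Dimensionless power-to-flow ratio (q in W, mdot in kg/s)."""
    return q / (mdot * h_fg) * (rho_f - rho_g) / rho_g

# Illustrative low-pressure water properties (not facility data):
n_sub = subcooling_number(h_f=420e3, h_in=340e3, h_fg=2.26e6,
                          rho_f=958.0, rho_g=0.6)
n_zu = zuber_number(q=50e3, mdot=0.5, h_fg=2.26e6, rho_f=958.0, rho_g=0.6)
```

An operating point (N_sub, N_Zu) is then placed on the stability map and compared against the experimentally determined boundary.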

  18. Report on the PWR-radiation protection/ALARA Committee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malone, D.J.

    1995-03-01

In 1992, representatives from several utilities with operational Pressurized Water Reactors (PWRs) formed the PWR-Radiation Protection/ALARA Committee. The mission of the Committee is to facilitate open communication between member utilities on radiation protection and ALARA issues so that cost-effective dose reduction and radiation protection measures may be instituted. While industry deregulation appears inevitable and inter-utility competition is on the rise, Committee members are fully committed to sharing both positive and negative experiences for the benefit of the health and safety of the radiation worker. Committee meetings provide current operational experience through member plant status reports, and information on programmatic improvements through member presentations and topic-specific workshops. The most recent Committee workshop was organized to provide members with documented experiences supporting cost-effective ALARA performance.

  19. Validation of numerical codes for impact and explosion cratering: Impacts on strengthless and metal targets

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.

    2008-12-01

    Over the last few decades, rapid improvement of computer capabilities has allowed impact cratering to be modeled with increasing complexity and realism, and has paved the way for a new era of numerical modeling of the impact process, including full, three-dimensional (3D) simulations. When properly benchmarked and validated against observation, computer models offer a powerful tool for understanding the mechanics of impact crater formation. This work presents results from the first phase of a project to benchmark and validate shock codes. A variety of 2D and 3D codes were used in this study, from commercial products like AUTODYN, to codes developed within the scientific community like SOVA, SPH, ZEUS-MP, iSALE, and codes developed at U.S. National Laboratories like CTH, SAGE/RAGE, and ALE3D. Benchmark calculations of shock wave propagation in aluminum-on-aluminum impacts were performed to examine the agreement between codes for simple idealized problems. The benchmark simulations show that variability in code results is to be expected due to differences in the underlying solution algorithm of each code, artificial stability parameters, spatial and temporal resolution, and material models. Overall, the inter-code variability in peak shock pressure as a function of distance is around 10 to 20%. In general, if the impactor is resolved by at least 20 cells across its radius, the underestimation of peak shock pressure due to spatial resolution is less than 10%. In addition to the benchmark tests, three validation tests were performed to examine the ability of the codes to reproduce the time evolution of crater radius and depth observed in vertical laboratory impacts in water and two well-characterized aluminum alloys. Results from these calculations are in good agreement with experiments. There appears to be a general tendency of shock physics codes to underestimate the radius of the forming crater. 
Overall, the discrepancy between the model and experiment results is between 10 and 20%, similar to the inter-code variability.

  20. Design and Application of a Community Land Benchmarking System for Earth System Models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Koven, C. D.; Kluzek, E. B.; Mao, J.; Randerson, J. T.

    2015-12-01

Benchmarking has been widely used to assess the ability of climate models to capture the spatial and temporal variability of observations during the historical era. For the carbon cycle and terrestrial ecosystems, the design and development of an open-source community platform has been an important goal of the International Land Model Benchmarking (ILAMB) project. Here we developed a new benchmarking software system that enables the user to specify the models, benchmarks, and scoring metrics, so that results can be tailored to specific model intercomparison projects. Evaluation data sets included soil and aboveground carbon stocks; fluxes of energy, carbon, and water; burned area; leaf area; and climate forcing and response variables. We used this system to evaluate simulations from the 5th phase of the Coupled Model Intercomparison Project (CMIP5) with prognostic atmospheric carbon dioxide levels over the period from 1850 to 2005 (i.e., the esmHistorical simulations archived on the Earth System Grid Federation). We found that the multi-model ensemble had a high bias in incoming solar radiation across Asia, likely as a consequence of incomplete representation of aerosol effects in this region, and in South America, primarily as a consequence of a low bias in mean annual precipitation. The reduced precipitation in South America had a larger influence on gross primary production than the high bias in incoming light, and as a consequence gross primary production had a low bias relative to the observations. Although model-to-model variations were large, the multi-model mean had a positive bias in atmospheric carbon dioxide that has been attributed in past work to weak ocean uptake of fossil emissions. In mid-latitudes of the northern hemisphere, most models overestimate latent heat fluxes in the early part of the growing season and underestimate these fluxes in mid-summer and early fall, whereas sensible heat fluxes show the opposite trend.
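A scoring metric of the kind such benchmarking systems compute can be sketched by mapping a normalized bias to a (0, 1] score. The exponential form below is one common choice, shown for illustration rather than as the exact ILAMB formula.

```python
import math

def bias_score(model_mean, obs_mean, obs_std):
    """Map a normalized bias to a (0, 1] score; 1 means perfect agreement.
    This exp(-|relative error|) form is one common choice, used here for
    illustration rather than as the exact ILAMB scoring formula."""
    eps = abs(model_mean - obs_mean) / obs_std
    return math.exp(-eps)
```

Scores of this shape can be averaged across variables and regions to produce the summary tables such systems report.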

  1. Development of Benchmark Examples for Delamination Onset and Fatigue Growth Prediction

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2011-01-01

An approach for assessing the delamination propagation and growth prediction capabilities of commercial finite element codes was developed and demonstrated for the Virtual Crack Closure Technique (VCCT) implementation in ABAQUS. The Double Cantilever Beam (DCB) specimen was chosen as an example. First, benchmark results for assessing delamination propagation capabilities under static loading were created using models simulating specimens with different delamination lengths. For each delamination length modeled, the load and displacement at the load point were monitored. The mixed-mode strain energy release rate components were calculated along the delamination front across the width of the specimen. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. The calculated critical loads and critical displacements for delamination onset at each delamination length modeled were used as a benchmark; the load/displacement relationship computed during automatic propagation should closely match the benchmark case. Second, starting from an initially straight front, the delamination was allowed to propagate based on the algorithms implemented in the commercial finite element software, and the load-displacement relationship obtained from the propagation analysis was compared with the benchmark results. Good agreement could be achieved by selecting appropriate input parameters, which were determined in an iterative procedure.
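The mixed-mode failure index described above can be sketched with the Benzeggagh-Kenane (B-K) criterion, a common mixed-mode failure law for graphite/epoxy; the toughness values and exponent below are hypothetical, not the paper's material data.

```python
# Sketch of a mixed-mode failure index using the Benzeggagh-Kenane (B-K)
# criterion as one common choice (hypothetical material constants).
def bk_toughness(g_ic, g_iic, g_ii, g_total, eta):
    """Mixed-mode critical energy release rate from the B-K law:
    Gc = G_Ic + (G_IIc - G_Ic) * (G_II / G_T)**eta."""
    return g_ic + (g_iic - g_ic) * (g_ii / g_total) ** eta

def failure_index(g_i, g_ii, g_ic, g_iic, eta):
    """Index >= 1 predicts delamination onset at this front location."""
    g_total = g_i + g_ii
    g_c = bk_toughness(g_ic, g_iic, g_ii, g_total, eta)
    return g_total / g_c

# Pure mode I loading at G_I = G_Ic should sit exactly at onset:
idx = failure_index(g_i=0.2, g_ii=0.0, g_ic=0.2, g_iic=1.0, eta=2.0)
```

Evaluating this index at each point along the delamination front is what produces the benchmark critical loads described in the abstract.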

  2. Quantification of uncertainties for application in detonation simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Miao; Ma, Zhibo

    2016-06-01

Numerical simulation has become an important means of designing detonation systems, and quantification of its uncertainty is necessary for reliability certification. In quantifying the uncertainty, the most important tasks are to analyze how the uncertainties arise and propagate, and how simulations evolve from benchmark models to new models. Based on the practical needs of engineering and the technology of verification and validation, a framework for QU (quantification of uncertainty) is put forward for the case in which simulation is used on a detonation system for scientific prediction. An example is offered to illustrate the general idea of quantifying simulation uncertainties.

  3. Verification and benchmark testing of the NUFT computer code

    NASA Astrophysics Data System (ADS)

    Lee, K. H.; Nitao, J. J.; Kulshrestha, A.

    1993-10-01

This interim report presents results of work completed in the ongoing verification and benchmark testing of the NUFT (Nonisothermal Unsaturated-saturated Flow and Transport) computer code. NUFT is a suite of multiphase, multicomponent models for numerical solution of thermal and isothermal flow and transport in porous media, with application to subsurface contaminant transport problems. The code simulates the coupled transport of heat, fluids, and chemical components, including volatile organic compounds. Grid systems may be Cartesian or cylindrical, with one-, two-, or fully three-dimensional configurations possible. In this initial phase of testing, the NUFT code was used to solve seven one-dimensional unsaturated flow and heat transfer problems: three verification problems and four benchmarking problems. In the verification testing, excellent agreement was observed between NUFT results and the analytical or quasi-analytical solutions. In the benchmark testing, the results of the code intercomparison were very satisfactory. From these results, it is concluded that the NUFT code is ready for application to field and laboratory problems similar to those addressed here. Multidimensional problems, including those dealing with chemical transport, will be addressed in a subsequent report.

  4. Optimization of Deep Drilling Performance - Development and Benchmark Testing of Advanced Diamond Product Drill Bits & HP/HT Fluids to Significantly Improve Rates of Penetration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alan Black; Arnis Judzis

    2005-09-30

This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2004 through September 2005. The industry cost-shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through team development of aggressive diamond product drill bit--fluid system technologies. Overall, the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary, and commercialize products. As of the report date, TerraTek has concluded all Phase 1 testing and is planning Phase 2 development.

  5. Primary water chemistry improvement for radiation exposure reduction at Japanese PWR Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishizawa, Eiichi

    1995-03-01

Radiation exposure during refueling outages at Japanese Pressurized Water Reactor (PWR) plants has been gradually decreased through continuous efforts to keep radiation dose rates at a relatively low level. The improvement of primary water chemistry with respect to reducing radiation sources appears to be one of the most important contributions to the achieved results, and can be classified by plant operating condition as follows

  6. Comparison of Measures of Vibration Affecting Occupants of Military Vehicles

    DTIC Science & Technology

    1986-12-01

WES equipment 27. The WES equipment consisted of a battery-operated absorbed power (ABS-PW) meter with signal conditioning...West Germany. These will be referred to as the ISO ride meter and the ABS-PWR ride meter, respectively. The first implemented the vibration measure...the ABS-PWR algorithms were used with each acceleration signal source (analog and digital) to provide a comprehensive basis for comparing the vibration

  7. The change of radial power factor distribution due to RCCA insertion at the first cycle core of AP1000

    NASA Astrophysics Data System (ADS)

    Susilo, J.; Suparlina, L.; Deswandri; Sunaryo, G. R.

    2018-02-01

The use of computer programs for PWR core neutronic design parameter analysis has been demonstrated in several previous studies, which included validating computer codes against neutronic parameter values obtained from measurements and benchmark calculations. In this study, the AP1000 first cycle core radial power peaking factor validation and analysis were performed using the CITATION module of the SRAC2006 computer code. The code has also been validated, with good results, against the criticality values of the VERA benchmark core. The AP1000 core power distribution calculation was done in two-dimensional X-Y geometry through quarter-core modeling. The purpose of this research is to determine the accuracy of the SRAC2006 code and the safety performance of the AP1000 core during first-cycle operation. The core calculations were carried out for several conditions: without Rod Cluster Control Assembly (RCCA) insertion, with insertion of a single RCCA bank (AO, M1, M2, MA, MB, MC, MD), and with multiple RCCA banks inserted (MA + MB, MA + MB + MC, MA + MB + MC + MD, and MA + MB + MC + MD + M1). The maximum power factor of the fuel rods in a fuel assembly was assumed to be approximately 1.406. The analysis showed that the two-dimensional CITATION module of the SRAC2006 code is accurate for the AP1000 power distribution calculation without RCCA and with MA + MB RCCA insertion. The power peaking factors for the first operating cycle of the AP1000 core without RCCA, as well as with single and multiple RCCA insertion, are still below the safety limit (less than about 1.798). In terms of the thermal power generated by the fuel assemblies, the AP1000 core at the first operating cycle can therefore be considered safe.
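The radial power peaking factor itself is simply the maximum rod (or assembly) power divided by the core-average power. A minimal sketch follows, with a hypothetical 4x4 power map; the 1.798 limit quoted above is used only for illustration.

```python
# Minimal sketch: peaking factor = max power / core-average power.
# The 4x4 power map below is hypothetical, not an AP1000 result.
def peaking_factor(power_map):
    flat = [p for row in power_map for p in row]
    return max(flat) / (sum(flat) / len(flat))

powers = [
    [0.92, 1.05, 1.05, 0.92],
    [1.05, 1.20, 1.20, 1.05],
    [1.05, 1.20, 1.20, 1.05],
    [0.92, 1.05, 1.05, 0.92],
]
fq = peaking_factor(powers)
assert fq < 1.798  # within the safety limit discussed above
```

A uniform power map gives a peaking factor of exactly 1, the theoretical minimum.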

  8. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    PubMed

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

Face recognition with still face images has been widely studied, while research on video-based face recognition is relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively taking video or still images as query or target. To the best of our knowledge, few datasets and evaluation protocols have been established for all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, to benchmark the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can serve as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more effort, and that COX Face DB is a good benchmark database for evaluation.

  9. Evaluation of the synoptic and mesoscale predictive capabilities of a mesoscale atmospheric simulation system

    NASA Technical Reports Server (NTRS)

    Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K.; Keyser, D. A.; Mccumber, M. C.

    1983-01-01

The overall performance characteristics of a limited-area, hydrostatic, fine-mesh (52 km), primitive equation numerical weather prediction model are determined in anticipation of satellite data assimilation with the model. The synoptic and mesoscale predictive capabilities of version 2.0 of this model, the Mesoscale Atmospheric Simulation System (MASS 2.0), were evaluated. The two-part study is based on a sample of approximately thirty 12 h and 24 h forecasts of atmospheric flow patterns during spring and early summer. The synoptic-scale evaluation benchmarks the performance of MASS 2.0 against that of an operational, synoptic-scale weather prediction model, the Limited-area Fine Mesh (LFM). The large sample allows the calculation of statistically significant measures of forecast accuracy and the determination of systematic model errors. The synoptic-scale benchmark is required before unsmoothed mesoscale forecast fields can be seriously considered.

  10. Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware

    PubMed Central

    Stöckel, Andreas; Jenzen, Christoph; Thies, Michael; Rückert, Ulrich

    2017-01-01

Large-scale neuromorphic hardware platforms, specialized computer systems for energy-efficient simulation of spiking neural networks, are being developed around the world, for example as part of the European Human Brain Project (HBP). Due to conceptual differences, a universal performance analysis of these systems in terms of runtime, accuracy, and energy efficiency is non-trivial, yet indispensable for further hardware and software development. In this paper we describe a scalable benchmark based on a spiking neural network implementation of the binary neural associative memory. We treat neuromorphic hardware and software simulators as black boxes and execute exactly the same network description across all devices. Experiments on the HBP platforms under varying configurations of the associative memory show that the presented method allows testing of the quality of the neuron model implementation and can explain significant deviations from the expected reference output. PMID:28878642
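The binary neural associative memory underlying the benchmark can be sketched as a Willshaw-style memory: pattern pairs are stored by OR-ing outer products into a binary weight matrix and recalled with a threshold equal to the number of active input bits. Sizes and patterns below are arbitrary illustrations, not the benchmark's actual network description.

```python
# Willshaw-style binary associative memory sketch (pure Python).
# Sizes and patterns are arbitrary illustrations.
N_IN, N_OUT = 16, 16

def store(pairs):
    # w[i][j] = 1 if any stored pair has y[i] = 1 and x[j] = 1
    w = [[0] * N_IN for _ in range(N_OUT)]
    for x, y in pairs:
        for i in range(N_OUT):
            if y[i]:
                for j in range(N_IN):
                    if x[j]:
                        w[i][j] = 1
    return w

def recall(w, x):
    theta = sum(x)  # Willshaw threshold: number of active input bits
    return [1 if sum(wij * xj for wij, xj in zip(row, x)) >= theta else 0
            for row in w]

x = [0] * N_IN
for j in (1, 5, 9):
    x[j] = 1
y = [0] * N_OUT
for i in (2, 7):
    y[i] = 1
w = store([(x, y)])
```

A spiking implementation, as in the benchmark, maps the thresholded sum onto neuron firing thresholds; the black-box comparison then checks that each platform reproduces this reference recall behavior.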

  11. A simple numerical model for membrane oxygenation of an artificial lung machine

    NASA Astrophysics Data System (ADS)

    Subraveti, Sai Nikhil; Sai, P. S. T.; Viswanathan Pillai, Vinod Kumar; Patnaik, B. S. V.

    2015-11-01

Optimal design of membrane oxygenators will have far-reaching ramifications for the development of artificial heart-lung systems. In the present CFD study, we simulate the gas exchange between venous blood and the air that passes through the hollow fiber membranes of a benchmark device. The gas exchange between the tube-side fluid and the shell-side venous liquid is modeled by solving the mass and momentum conservation equations. The fiber bundle was modeled as a porous block with a bundle porosity of 0.6, and the resistance offered by the fiber bundle was estimated using the standard Ergun correlation. The present numerical simulations are validated against available benchmark data. The effects of bundle porosity, bundle size, Reynolds number, non-Newtonian constitutive relation, upstream velocity distribution, etc. on the pressure drop and oxygen saturation levels are investigated. To emulate the features of gas transfer past the alveoli, the effect of pulsatility on membrane oxygenation is also investigated.
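The Ergun correlation used above to represent the fiber bundle's flow resistance can be sketched directly; the fluid properties and fiber diameter below are illustrative, not the benchmark device's values.

```python
# Ergun correlation for the pressure gradient through a packed bed,
# as used to model the fiber bundle as a porous block. The property
# values in the example call are illustrative only.
def ergun_dp_dl(u, eps, d, mu, rho):
    """Pressure gradient (Pa/m): u superficial velocity (m/s), eps porosity,
    d particle/fiber diameter (m), mu viscosity (Pa*s), rho density (kg/m^3)."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * d ** 2)
    inertial = 1.75 * rho * (1.0 - eps) * u ** 2 / (eps ** 3 * d)
    return viscous + inertial

# Superficial velocity of 1 cm/s through a bundle of porosity 0.6:
dp = ergun_dp_dl(u=0.01, eps=0.6, d=3.0e-4, mu=3.5e-3, rho=1060.0)
```

In a CFD porous-block model, this pressure gradient enters the momentum equation as a velocity-dependent sink term over the bundle region.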

  12. Integrated Prediction and Mitigation Methods of Materials Damage and Lifetime Assessment during Plasma Operation and Various Instabilities in Fusion Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hassanein, Ahmed

    2015-03-31

This report describes the implementation of comprehensive and integrated models to evaluate plasma-material interactions during normal and abnormal plasma operation. The models, in full 3D simulations, represent state-of-the-art worldwide development, with numerous benchmarks against various tokamak devices and plasma simulators. In addition, a significant amount of experimental work has been performed in our Center for Materials Under Extreme Environment (CMUXE) at Purdue to benchmark the effect of intense particle and heat fluxes on plasma-facing components. This represents one year's worth of work and resulted in more than 23 journal publications and numerous conference presentations. The funding has helped several students obtain their M.Sc. and Ph.D. degrees, and many of them are now faculty members in the US and around the world, teaching and conducting fusion research. Our work has also been recognized through many awards.

  13. Simulated annealing with probabilistic analysis for solving traveling salesman problems

    NASA Astrophysics Data System (ADS)

    Hong, Pei-Yee; Lim, Yai-Fung; Ramli, Razamin; Khalid, Ruzelan

    2013-09-01

Simulated Annealing (SA) is a widely used meta-heuristic inspired by the annealing process of recrystallization in metals; its efficiency is therefore highly affected by the annealing schedule. In this paper, we present an empirical study to identify a competitive annealing schedule for solving symmetric traveling salesman problems (TSP). A randomized complete block design is used in this study. The results show that different parameters do affect the efficiency of SA, and we propose the best-found annealing schedule based on a Post Hoc test. SA was tested on seven selected benchmark problems of symmetric TSP with the proposed annealing schedule. The performance of SA was evaluated empirically against benchmark solutions, with a simple analysis to validate the quality of solutions. Computational results show that the proposed annealing schedule provides good-quality solutions.
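A minimal simulated-annealing loop for symmetric TSP, with geometric cooling and a segment-reversal neighborhood, can be sketched as follows; the schedule parameters are illustrative defaults, not the schedule proposed in the paper.

```python
import math
import random

# Minimal SA sketch for symmetric TSP: geometric cooling (t *= alpha) and a
# 2-opt style segment-reversal move. Schedule parameters are illustrative.
def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def anneal(dist, t0=10.0, alpha=0.95, steps=2000, seed=0):
    rng = random.Random(seed)
    tour = list(range(len(dist)))
    best = tour[:]
    t = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(len(dist)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse segment
        delta = tour_length(cand, dist) - tour_length(tour, dist)
        # Always accept improvements; accept worsening moves with
        # probability exp(-delta / t), which shrinks as t cools.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            tour = cand
            if tour_length(tour, dist) < tour_length(best, dist):
                best = tour[:]
        t *= alpha
    return best

# Four cities on a unit square; the optimal tour has length 4.
pts = [(0, 0), (1, 1), (0, 1), (1, 0)]
dist = [[math.hypot(px - qx, py - qy) for qx, qy in pts] for px, py in pts]
best = anneal(dist)
```

The initial temperature t0, cooling rate alpha, and step count are exactly the schedule parameters whose settings the paper's block-design experiments compare.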

  14. The Equivalent Thermal Resistance of Tile Roofs with and without Batten Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, William A

    Clay and concrete tile roofs were installed on a fully instrumented attic test facility operating in East Tennessee's climate. Roof, attic, and deck temperatures and heat flows were recorded for each of the tile roofs and also on an adjacent attic cavity covered with a conventionally pigmented, direct-nailed asphalt shingle roof. The data were used to benchmark a computer tool for the simulation of roofs and attics, and the tool was then used to develop an approach for computing an equivalent seasonal R-value for sub-tile venting. The approach computed equal heat fluxes through the ceilings of roofs having different combinations of surface radiation properties and/or building constructions. A direct-nailed shingle roof served as a control for estimating the equivalent thermal resistance of the air space. Simulations were benchmarked against data in the ASHRAE Fundamentals for the thermal resistance of inclined and closed air spaces.

  15. Verification of gyrokinetic particle simulation of current-driven instability in fusion plasmas. I. Internal kink mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClenaghan, J.; Lin, Z.; Holod, I.

    The gyrokinetic toroidal code (GTC) capability has been extended for simulating internal kink instability with kinetic effects in toroidal geometry. The global simulation domain covers the magnetic axis, which is necessary for simulating current-driven instabilities. GTC simulation in the fluid limit of the kink modes in cylindrical geometry is verified by benchmarking with a magnetohydrodynamic eigenvalue code. Gyrokinetic simulations of the kink modes in the toroidal geometry find that ion kinetic effects significantly reduce the growth rate even when the banana orbit width is much smaller than the radial width of the perturbed current layer at the mode rational surface.

  16. Large eddy simulation of the FDA benchmark nozzle for a Reynolds number of 6500.

    PubMed

    Janiga, Gábor

    2014-04-01

    This work investigates the flow in a benchmark nozzle model of an idealized medical device proposed by the FDA, using computational fluid dynamics (CFD). Previous studies showed that proper modeling of the transitional flow features is particularly challenging, leading to large discrepancies and inaccurate predictions from the different research groups using Reynolds-averaged Navier-Stokes (RANS) modeling. In spite of the relatively simple, axisymmetric computational geometry, the resulting turbulent flow is fairly complex and non-axisymmetric, in particular due to the sudden expansion, and cannot be well predicted with simple modeling approaches. Owing to the varying diameters and flow velocities encountered in the nozzle, different typical flow regions and regimes can be distinguished, from laminar to transitional to weakly turbulent. The purpose of the present work is to re-examine the FDA-CFD benchmark nozzle model at a Reynolds number of 6500 using large eddy simulation (LES). The LES results are compared with published experimental data obtained by particle image velocimetry (PIV), and excellent agreement is observed for the temporally averaged flow velocities. The different flow regimes are characterized by computing the temporal energy spectra at different locations along the main axis. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Microbially Mediated Kinetic Sulfur Isotope Fractionation: Reactive Transport Modeling Benchmark

    NASA Astrophysics Data System (ADS)

    Wanner, C.; Druhan, J. L.; Cheng, Y.; Amos, R. T.; Steefel, C. I.; Ajo Franklin, J. B.

    2014-12-01

    Microbially mediated sulfate reduction is a ubiquitous process in many subsurface systems. Isotopic fractionation is characteristic of this anaerobic process, since sulfate-reducing bacteria (SRB) favor the reduction of the lighter sulfate isotopologue (³²SO₄²⁻) over the heavier isotopologue (³⁴SO₄²⁻). Detection of isotopic shifts has been utilized as a proxy for the onset of sulfate reduction in subsurface systems such as oil reservoirs and aquifers undergoing uranium bioremediation. Reactive transport modeling (RTM) of kinetic sulfur isotope fractionation has been applied to field and laboratory studies. These RTM approaches employ different mathematical formulations to represent kinetic sulfur isotope fractionation. In order to test the various formulations, we propose a benchmark problem set for the simulation of kinetic sulfur isotope fractionation during microbially mediated sulfate reduction. The benchmark problem set comprises four problem levels and is based on a recent laboratory column experimental study of sulfur isotope fractionation. Pertinent processes impacting sulfur isotopic composition, such as microbial sulfate reduction and dispersion, are included in the problem set. To date, the participating RTM codes are CRUNCHTOPE, TOUGHREACT, MIN3P, and THE GEOCHEMIST'S WORKBENCH. Preliminary results from the various codes show reasonable agreement for the problem levels simulating sulfur isotope fractionation in 1D.
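
    The kinetic fractionation that such formulations represent is commonly written as a Rayleigh distillation of the residual sulfate pool. A minimal sketch, with an illustrative fractionation factor rather than a value from the benchmark set:

```python
import math

def delta34S_residual(delta0, f, alpha):
    """δ34S of the remaining sulfate pool under Rayleigh distillation.

    delta0 : initial δ34S of sulfate (permil vs. a reference standard)
    f      : fraction of sulfate remaining (0 < f <= 1)
    alpha  : kinetic fractionation factor (alpha < 1 for SRB, which
             prefer the light 32S isotopologue)
    """
    r0 = delta0 / 1000.0 + 1.0          # isotope ratio relative to standard
    r = r0 * f ** (alpha - 1.0)          # Rayleigh: R/R0 = f^(alpha - 1)
    return (r - 1.0) * 1000.0

# Illustrative values: epsilon = -30 permil (alpha = 0.970), 50% sulfate consumed
d = delta34S_residual(delta0=0.0, f=0.5, alpha=0.970)
```

    As the light isotopologue is preferentially reduced, the residual sulfate grows isotopically heavier (d here is about +21 permil at 50% consumption), which is the shift the benchmark's column profiles track.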

  18. An Approach to Assess Delamination Propagation Simulation Capabilities in Commercial Finite Element Codes

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2008-01-01

    An approach for assessing the delamination propagation simulation capabilities of commercial finite element codes is presented and demonstrated. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. The load-displacement relationship and the total strain energy obtained from the propagation analysis were compared with the benchmark results, and good agreement was achieved by selecting appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the expected curved shape. However, the analysis of the SLB specimen yielded a curved front, as expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging, but further assessment at the structural level is required.

  19. Low energy electron transport in furfural

    NASA Astrophysics Data System (ADS)

    Lozano, Ana I.; Krupa, Kateryna; Ferreira da Silva, Filipe; Limão-Vieira, Paulo; Blanco, Francisco; Muñoz, Antonio; Jones, Darryl B.; Brunger, Michael J.; García, Gustavo

    2017-09-01

    We report on an initial investigation into the transport of electrons through a gas cell containing 1 mTorr of gaseous furfural. Results from our Monte Carlo simulation are implicitly checked against those from a corresponding electron transmission measurement. To enable this simulation a self-consistent cross section data base was constructed. This data base is benchmarked through new total cross section measurements which are also described here. In addition, again to facilitate the simulation, our preferred energy loss distribution function is presented and discussed.

  20. Neutron streaming studies along JET shielding penetrations

    NASA Astrophysics Data System (ADS)

    Stamatelatos, Ion E.; Vasilopoulou, Theodora; Batistoni, Paola; Obryk, Barbara; Popovichev, Sergey; Naish, Jonathan

    2017-09-01

    Neutronic benchmark experiments are carried out at JET with the aim of assessing the neutronic codes and data used in ITER analysis. Among other activities, experiments are performed to validate neutron streaming simulations along long penetrations in the JET shielding configuration. In this work, neutron streaming calculations along the JET personnel entrance maze are presented. Simulations were performed using the MCNP code for Deuterium-Deuterium and Deuterium-Tritium plasma sources. The results of the simulations were compared against experimental data obtained using thermoluminescence detectors and activation foils.

  1. Shuttle Main Propulsion System LH2 Feed Line and Inducer Simulations

    NASA Technical Reports Server (NTRS)

    Dorney, Daniel J.; Rothermel, Jeffry

    2002-01-01

    This viewgraph presentation includes simulations of the unsteady flow field in the LH2 feed line, flow line, flow liner, backing cavity and inducer of Shuttle engine #1. It also evaluates aerodynamic forcing functions which may contribute to the formation of the cracks observed on the flow liner slots. The presentation lists the numerical methods used, and profiles a benchmark test case.

  2. Validation of Tendril TrueHome Using Software-to-Software Comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maguire, Jeffrey B; Horowitz, Scott G; Moore, Nathan

    This study performed comparative evaluation of EnergyPlus version 8.6 and Tendril TrueHome, two physics-based home energy simulation models, to identify differences in energy consumption predictions between the two programs and resolve discrepancies between them. EnergyPlus is considered a benchmark, best-in-class software tool for building energy simulation. This exercise sought to improve both software tools through additional evaluation/scrutiny.

  3. A simulation study of thinning and fuel treatments on a wildland-urban interface in eastern Oregon, USA

    Treesearch

    Alan A. Ager; Andrew J. McMahan; James J. Barrett; Charles W. McHugh

    2007-01-01

    We simulated long-term forest management activities on a 16,000-ha wildland-urban interface in the Blue Mountains near La Grande, Oregon. The study area is targeted for thinning and fuels treatments on both private and Federally managed lands to address forest health and sustainability concerns and to reduce the risk of severe wildfire. We modeled a number of benchmark...

  4. Gaming in risk-adjusted mortality rates: effect of misclassification of risk factors in the benchmarking of cardiac surgery risk-adjusted mortality rates.

    PubMed

    Siregar, Sabrina; Groenwold, Rolf H H; Versteegh, Michel I M; Noyez, Luc; ter Burg, Willem Jan P P; Bots, Michiel L; van der Graaf, Yolanda; van Herwerden, Lex A

    2013-03-01

    Upcoding or undercoding of risk factors could affect the benchmarking of risk-adjusted mortality rates. The aim was to investigate the effect of misclassification of risk factors on the benchmarking of mortality rates after cardiac surgery. A prospective cohort was used comprising all adult cardiac surgery patients in all 16 cardiothoracic centers in The Netherlands from January 1, 2007, to December 31, 2009. A random effects model including the logistic European System for Cardiac Operative Risk Evaluation (EuroSCORE) was used to benchmark the in-hospital mortality rates. We simulated upcoding and undercoding of 5 selected variables in the patients from 1 center. These patients were selected randomly (nondifferential misclassification) or by the EuroSCORE (differential misclassification). In the randomly selected patients, substantial misclassification was required to affect benchmarking: a 1.8-fold increase in the prevalence of the 4 risk factors changed an underperforming center into an average-performing one. Upcoding of 1 variable required even more. When patients with the greatest EuroSCORE were upcoded (ie, differential misclassification), a 1.1-fold increase was sufficient: moderate left ventricular function from 14.2% to 15.7%, poor left ventricular function from 8.4% to 9.3%, recent myocardial infarction from 7.9% to 8.6%, and extracardiac arteriopathy from 9.0% to 9.8%. Benchmarking using risk-adjusted mortality rates can be manipulated by misclassification of the EuroSCORE risk factors. Misclassification of random patients or of single variables will have little effect. However, limited upcoding of multiple risk factors in high-risk patients can greatly influence benchmarking. To minimize "gaming," the prevalence of all risk factors should be carefully monitored. Copyright © 2013 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.
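
    The mechanism the study probes is that flagging an extra risk factor inflates a patient's predicted (expected) mortality under a logistic risk model, which flatters the observed-to-expected ratio. A sketch with illustrative coefficients (these are assumptions for demonstration, not the published EuroSCORE weights):

```python
import math

def predicted_mortality(intercept, coefs, factors):
    """Logistic-model predicted mortality for one patient.

    intercept and coefs are illustrative values, not the published
    EuroSCORE weights; factors maps risk-factor name -> present (bool).
    """
    z = intercept + sum(b for name, b in coefs.items() if factors.get(name))
    return 1.0 / (1.0 + math.exp(-z))

coefs = {"poor_LV": 1.0, "recent_MI": 0.5, "arteriopathy": 0.6}
honest = predicted_mortality(-4.0, coefs, {"recent_MI": True})
upcoded = predicted_mortality(-4.0, coefs, {"recent_MI": True, "poor_LV": True})
# Upcoding one factor raises the expected risk against which the same
# observed mortality is benchmarked, improving the apparent performance.
```
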

  5. Results of the 2013 UT modeling benchmark obtained with models implemented in CIVA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toullelan, Gwénaël; Raillon, Raphaële; Chatillon, Sylvain

    The 2013 Ultrasonic Testing (UT) modeling benchmark concerns direct echoes from side-drilled holes (SDH), flat-bottom holes (FBH), and corner echoes from backwall-breaking artificial notches inspected with a matrix phased array probe. This communication presents the results obtained with the models implemented in the CIVA software: the pencil model is used to compute the field radiated by the probe, the Kirchhoff approximation is applied to predict the response of FBHs and notches, and the SOV (Separation Of Variables) model is used for the SDH responses. The comparison between simulated and experimental results is presented and discussed.

  6. Benchmarking of relative permeability

    NASA Astrophysics Data System (ADS)

    DiCarlo, D. A.

    2017-12-01

    Relative permeability is the key relation in terms of multi-phase flow through porous media. There are hundreds of published relative permeability curves for various media, some classic (Oak 90 and 91), some contradictory. This can lead to a confusing situation if one is trying to benchmark simulation results to "experimental data". Coming from the experimental side, I have found that modelers have too much trust in relative permeability data sets. In this talk, I will discuss reasons for discrepancies within and between data sets, and give guidance on which portions of the data sets are most solid in terms of matching through models.
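
    As a concrete example of the kind of curve being benchmarked, a Corey-type parameterization is a common analytic form for relative permeability. The endpoint saturations, endpoint permeabilities, and exponents below are illustrative, not values from any particular data set:

```python
def corey_krw(sw, swr=0.2, sor=0.2, krw_max=0.4, nw=3.0):
    """Corey-type water relative permeability from water saturation sw."""
    se = (sw - swr) / (1.0 - swr - sor)  # effective (normalized) saturation
    se = min(max(se, 0.0), 1.0)
    return krw_max * se ** nw

def corey_kro(sw, swr=0.2, sor=0.2, kro_max=0.9, no=2.0):
    """Corey-type oil relative permeability from water saturation sw."""
    se = (sw - swr) / (1.0 - swr - sor)
    se = min(max(se, 0.0), 1.0)
    return kro_max * (1.0 - se) ** no

# Endpoints: only oil flows at connate water (sw = swr),
# only water flows at residual oil (sw = 1 - sor).
```

    Much of the scatter between published data sets lies in exactly these endpoints and exponents, which is why the talk cautions against treating any single curve as ground truth.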

  7. Pretest prediction of Semiscale Test S-07-10B [PWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobbe, C A

    A best estimate prediction of Semiscale Test S-07-10B was performed at INEL by EG&G Idaho as part of the RELAP4/MOD6 code assessment effort and as the Nuclear Regulatory Commission pretest calculation for the Small Break Experiment. The RELAP4/MOD6 Update 4 and RELAP4/MOD7 computer codes were used to analyze Semiscale Test S-07-10B, a 10% communicative cold leg break experiment. The Semiscale Mod-3 system utilized an electrically heated simulated core operating at a power level of 1.94 MW. The initial system pressure and temperature in the upper plenum were 2276 psia and 604°F, respectively.

  8. Field Test of a Hybrid Finite-Difference and Analytic Element Regional Model.

    PubMed

    Abrams, D B; Haitjema, H M; Feinstein, D T; Hunt, R J

    2016-01-01

    Regional finite-difference models often have cell sizes that are too large to sufficiently model well-stream interactions. Here, a steady-state hybrid model is applied whereby the upper layer or layers of a coarse MODFLOW model are replaced by the analytic element model GFLOW, which represents surface waters and wells as line and point sinks. The two models are coupled by transferring cell-by-cell leakage obtained from the original MODFLOW model to the bottom of the GFLOW model. A real-world test of the hybrid model approach is applied on a subdomain of an existing model of the Lake Michigan Basin. The original (coarse) MODFLOW model consists of six layers, the top four of which are aggregated into GFLOW as a single layer, while the bottom two layers remain part of MODFLOW in the hybrid model. The hybrid model and a refined "benchmark" MODFLOW model simulate similar baseflows. The hybrid and benchmark models also simulate similar baseflow reductions due to nearby pumping when the well is located within the layers represented by GFLOW. However, the benchmark model requires refinement of the model grid in the local area of interest, while the hybrid approach uses a gridless top layer and is thus unaffected by grid discretization errors. The hybrid approach is well suited to facilitate cost-effective retrofitting of existing coarse grid MODFLOW models commonly used for regional studies because it leverages the strengths of both finite-difference and analytic element methods for predictions in mildly heterogeneous systems that can be simulated with steady-state conditions. © 2015, National Ground Water Association.

  9. HRSSA - Efficient hybrid stochastic simulation for spatially homogeneous biochemical reaction networks

    NASA Astrophysics Data System (ADS)

    Marchetti, Luca; Priami, Corrado; Thanh, Vo Hong

    2016-07-01

    This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm that relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating the performance and accuracy of HRSSA against other state-of-the-art algorithms.
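
    For context, the exact baseline that RSSA and HRSSA accelerate is the classic Gillespie direct method. A minimal sketch of that baseline (not the HRSSA implementation itself, which adds propensity bounds and hybrid partitioning):

```python
import math
import random

def gillespie_direct(x0, reactions, t_end, seed=1):
    """Exact stochastic simulation (Gillespie direct method).

    x0        : dict of initial species counts
    reactions : list of (propensity_fn, stoichiometry-change dict)
    This is the classic exact algorithm; RSSA/HRSSA avoid recomputing
    all propensities at every step by using propensity bounds.
    """
    rng = random.Random(seed)
    x = dict(x0)
    t = 0.0
    while t < t_end:
        props = [rate(x) for rate, _ in reactions]
        a0 = sum(props)
        if a0 == 0.0:
            break                              # no reaction can fire
        t += -math.log(rng.random()) / a0      # exponential waiting time
        # choose a reaction with probability proportional to its propensity
        r, acc = rng.random() * a0, 0.0
        for prop, (_, change) in zip(props, reactions):
            acc += prop
            if r < acc:
                for species, d in change.items():
                    x[species] += d
                break
    return x

# Irreversible decay A -> 0 at propensity 0.5*A: all of A is eventually consumed
final = gillespie_direct({"A": 50}, [(lambda s: 0.5 * s["A"], {"A": -1})], t_end=1e9)
```

    The per-step cost of recomputing every propensity is exactly the bottleneck that the propensity-bound machinery in RSSA/HRSSA is designed to avoid.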

  10. SAS2H Generated Isotopic Concentrations For B&W 15X15 PWR Assembly (SCPB:N/A)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J.W. Davis

    This analysis is prepared by the Mined Geologic Disposal System (MGDS) Waste Package Development Department (WPDD) to provide pressurized water reactor (PWR) isotopic composition data as a function of time for use in criticality analyses. The objectives of this evaluation are to generate burnup- and decay-dependent isotopic inventories and to provide these inventories in a form that can easily be utilized in subsequent criticality calculations.

  11. Astronaut Robinson presents 2010 Silver Snoopy awards

    NASA Image and Video Library

    2010-06-23

    NASA's John C. Stennis Space Center Director Patrick Scheuermann and astronaut Steve Robinson stand with recipients of the 2010 Silver Snoopy awards following a June 23 ceremony. Sixteen Stennis employees received the astronauts' personal award, which is presented by a member of the astronaut corps representing its core principles for outstanding flight safety and mission success. This year's recipients and ceremony participants were: (front row, l to r): Cliff Arnold (NASA), Wendy Holladay (NASA), Kendra Moran (Pratt & Whitney Rocketdyne), Mary Johnson (Jacobs Technology Facility Operating Services Contract group), Cory Beckemeyer (PWR), Dean Bourlet (PWR), Cecile Saltzman (NASA), Marla Carpenter (Jacobs FOSC), David Alston (Jacobs FOSC); (back row, l to r) Scheuermann, Don Wilson (A2 Research), Tim White (NASA), Ira Lossett (Jacobs Technology NASA Test Operations Group), Kerry Gallagher (Jacobs NTOG); Rene LeFrere (PWR), Todd Ladner (ASRC Research and Technology Solutions) and Thomas Jacks (NASA).

  12. Experimental depth dose curves of a 67.5 MeV proton beam for benchmarking and validation of Monte Carlo simulation

    PubMed Central

    Faddegon, Bruce A.; Shin, Jungwook; Castenada, Carlos M.; Ramos-Méndez, José; Daftari, Inder K.

    2015-01-01

    Purpose: To measure depth dose curves for a 67.5 ± 0.1 MeV proton beam for benchmarking and validation of Monte Carlo simulation. Methods: Depth dose curves were measured in 2 beam lines. Protons in the raw beam line traversed a Ta scattering foil, 0.1016 or 0.381 mm thick, a secondary emission monitor comprised of thin Al foils, and a thin Kapton exit window. The beam energy and peak width and the composition and density of material traversed by the beam were known with sufficient accuracy to permit benchmark-quality measurements. Diodes for charged particle dosimetry from two different manufacturers were used to scan the depth dose curves with 0.003 mm depth reproducibility in a water tank placed 300 mm from the exit window. Depth in water was determined with an uncertainty of 0.15 mm, including the uncertainty in the water equivalent depth of the sensitive volume of the detector. Parallel-plate chambers were used to verify the accuracy of the shape of the Bragg peak and the peak-to-plateau ratio measured with the diodes. The uncertainty in the measured peak-to-plateau ratio was 4%. Depth dose curves were also measured with a diode for a Bragg curve and treatment beam spread out Bragg peak (SOBP) on the beam line used for eye treatment. The measurements were compared to Monte Carlo simulation done with Geant4 using TOPAS. Results: The 80% dose at the distal side of the Bragg peak for the thinner foil was at 37.47 ± 0.11 mm (average of measurement with diodes from two different manufacturers), compared to the simulated value of 37.20 mm. The 80% dose for the thicker foil was at 35.08 ± 0.15 mm, compared to the simulated value of 34.90 mm. The measured peak-to-plateau ratio was within one standard deviation experimental uncertainty of the simulated result for the thinnest foil and two standard deviations for the thickest foil. 
It was necessary to include the collimation in the simulation, which had a more pronounced effect on the peak-to-plateau ratio for the thicker foil. The treatment beam, being unfocussed, had a broader Bragg peak than the raw beam. A 1.3 ± 0.1 MeV FWHM peak width in the energy distribution was used in the simulation to match the Bragg peak width. An additional 1.3–2.24 mm of water in the water column was required over the nominal values to match the measured depth penetration. Conclusions: The proton Bragg curve measured for the 0.1016 mm thick Ta foil provided the most accurate benchmark, having a low contribution of proton scatter from upstream of the water tank. The accuracy was 0.15% in measured beam energy and 0.3% in measured depth penetration at the Bragg peak. The depth of the distal edge of the Bragg peak in the simulation fell short of measurement, suggesting that the mean ionization potential of water is 2–5 eV higher than the 78 eV used in the stopping power calculation for the simulation. The eye treatment beam line depth dose curves provide validation of Monte Carlo simulation of a Bragg curve and SOBP with 4%/2 mm accuracy. PMID:26133619

  13. A novel hybrid meta-heuristic technique applied to the well-known benchmark optimization problems

    NASA Astrophysics Data System (ADS)

    Abtahi, Amir-Reza; Bijari, Afsane

    2017-03-01

    In this paper, a hybrid meta-heuristic algorithm based on the imperialist competitive algorithm (ICA), harmony search (HS), and simulated annealing (SA) is presented. The body of the proposed hybrid algorithm is based on ICA. The proposed hybrid algorithm inherits the advantages of the harmony-creation process of HS to improve the exploitation phase of ICA. In addition, the proposed hybrid algorithm uses SA to balance the exploration and exploitation phases. The proposed hybrid algorithm is compared with several meta-heuristic methods, including the genetic algorithm (GA), HS, and ICA, on several well-known benchmark instances. Comprehensive experiments and statistical analysis on standard benchmark functions confirm the superiority of the proposed method over the other algorithms. The efficacy of the proposed hybrid algorithm is promising, and it can be applied to several real-life engineering and management problems.

  14. Anharmonic Vibrational Spectroscopy on Metal Transition Complexes

    NASA Astrophysics Data System (ADS)

    Latouche, Camille; Bloino, Julien; Barone, Vincenzo

    2014-06-01

    Advances in hardware performance and the availability of efficient and reliable computational models have made possible the application of computational spectroscopy to ever larger molecular systems, facilitating the systematic interpretation of experimental data and the full characterization of complex molecules. Focusing on vibrational spectroscopy, several approaches have been proposed to simulate spectra beyond the double-harmonic approximation, so that more details become available. However, routine use of such tools requires the preliminary definition of a valid protocol with the most appropriate combination of electronic structure and nuclear calculation models. Several benchmarks of anharmonic frequency calculations have been performed for organic molecules. Nevertheless, benchmarks at this level for organometallic or inorganic metal complexes are largely lacking, despite the interest in these systems owing to their strong emission and vibrational properties. Herein we report a benchmark study of anharmonic calculations on simple metal complexes, along with some pilot applications to systems of direct technological or biological interest.

  15. GLOFRIM v1.0 - A globally applicable computational framework for integrated hydrological-hydrodynamic modelling

    NASA Astrophysics Data System (ADS)

    Hoch, Jannis M.; Neal, Jeffrey C.; Baart, Fedor; van Beek, Rens; Winsemius, Hessel C.; Bates, Paul D.; Bierkens, Marc F. P.

    2017-10-01

    We here present GLOFRIM, a globally applicable computational framework for integrated hydrological-hydrodynamic modelling. GLOFRIM facilitates spatially explicit coupling of hydrodynamic and hydrologic models and caters for an ensemble of models to be coupled. It currently encompasses the global hydrological model PCR-GLOBWB as well as the hydrodynamic models Delft3D Flexible Mesh (DFM; solving the full shallow-water equations and allowing for spatially flexible meshing) and LISFLOOD-FP (LFP; solving the local inertia equations and running on regular grids). The main advantages of the framework are its open and free access, its global applicability, its versatility, and its extensibility with other hydrological or hydrodynamic models. Before applying GLOFRIM to an actual test case, we benchmarked both DFM and LFP for a synthetic test case. Results show that for sub-critical flow conditions, the discharge response to the same input signal is near-identical for both models, which agrees with previous studies. We subsequently applied the framework to the Amazon River basin not only to test the framework thoroughly, but also to perform a first-ever large-scale benchmark of flexible and regular grids. Both DFM and LFP produce comparable results in terms of simulated discharge, with LFP exhibiting slightly higher accuracy as expressed by a Kling-Gupta efficiency of 0.82 compared to 0.76 for DFM. However, benchmarking inundation extent between DFM and LFP over the entire study area, a critical success index of 0.46 was obtained, indicating that the models disagree as often as they agree. Differences between models in both simulated discharge and inundation extent are to a large extent attributable to the gridding techniques employed. In fact, the results show that both the numerical scheme of the inundation model and the gridding technique can contribute to deviations in simulated inundation extent as we control for model forcing and boundary conditions. 
This study shows that the presented computational framework is robust and widely applicable. GLOFRIM is designed to be open access and easily extendable, and we hope that other large-scale hydrological and hydrodynamic models will be added. Eventually, more locally relevant processes could then be captured, allowing for more robust model intercomparison, benchmarking, and ensemble simulations of flood hazard at a large scale.
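
    The two skill scores quoted in this benchmark, the Kling-Gupta efficiency (KGE) for discharge and the critical success index (CSI) for inundation extent, have standard definitions that can be sketched directly:

```python
import math

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (a-1)^2 + (b-1)^2),
    where r is the correlation, a the ratio of standard deviations,
    and b the ratio of means (sim over obs). KGE = 1 is a perfect fit."""
    n = len(sim)
    ms, mo = sum(sim) / n, sum(obs) / n
    ss = math.sqrt(sum((x - ms) ** 2 for x in sim) / n)
    so = math.sqrt(sum((x - mo) ** 2 for x in obs) / n)
    cov = sum((x - ms) * (y - mo) for x, y in zip(sim, obs)) / n
    r = cov / (ss * so)
    return 1.0 - math.sqrt((r - 1.0) ** 2 + (ss / so - 1.0) ** 2 + (ms / mo - 1.0) ** 2)

def csi(sim_flooded, obs_flooded):
    """Critical success index over boolean inundation maps:
    hits / (hits + misses + false alarms)."""
    hits = sum(1 for s, o in zip(sim_flooded, obs_flooded) if s and o)
    misses = sum(1 for s, o in zip(sim_flooded, obs_flooded) if not s and o)
    false_alarms = sum(1 for s, o in zip(sim_flooded, obs_flooded) if s and not o)
    return hits / (hits + misses + false_alarms)
```

    A CSI of 0.46, as reported between DFM and LFP, means the two maps agree on a flooded cell in slightly less than half of all cells where at least one model predicts flooding.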

  16. Benchmark levels for the consumptive water footprint of crop production for different environmental conditions: a case study for winter wheat in China

    NASA Astrophysics Data System (ADS)

    Zhuo, La; Mekonnen, Mesfin M.; Hoekstra, Arjen Y.

    2016-11-01

    Meeting growing food demands while simultaneously shrinking the water footprint (WF) of agricultural production is one of the greatest societal challenges. Benchmarks for the WF of crop production can serve as a reference and be helpful in setting WF reduction targets. The consumptive WF of crops, the consumption of rainwater stored in the soil (green WF), and the consumption of irrigation water (blue WF) over the crop growing period vary spatially and temporally depending on environmental factors like climate and soil. The study explores which environmental factors should be distinguished when determining benchmark levels for the consumptive WF of crops. To this end, we determine benchmark levels for the consumptive WF of winter wheat production in China for all separate years in the period 1961-2008, for rain-fed vs. irrigated croplands, for wet vs. dry years, for warm vs. cold years, for four different soil classes, and for two different climate zones. We simulate consumptive WFs of winter wheat production with the crop water productivity model AquaCrop at a 5 by 5 arcmin resolution, accounting for water stress only. The results show that (i) benchmark levels determined for individual years for the country as a whole remain within a range of ±20 % around long-term mean levels over 1961-2008, (ii) the WF benchmarks for irrigated winter wheat are 8-10 % larger than those for rain-fed winter wheat, (iii) WF benchmarks for wet years are 1-3 % smaller than for dry years, (iv) WF benchmarks for warm years are 7-8 % smaller than for cold years, (v) WF benchmarks differ by about 10-12 % across different soil texture classes, and (vi) WF benchmarks for the humid zone are 26-31 % smaller than for the arid zone, which has relatively higher reference evapotranspiration in general and lower yields in rain-fed fields. We conclude that when determining benchmark levels for the consumptive WF of a crop, it is useful to primarily distinguish between different climate zones. 
If actual consumptive WFs of winter wheat throughout China were reduced to the benchmark levels set by the best 25 % of Chinese winter wheat production (1224 m³ t⁻¹ for arid areas and 841 m³ t⁻¹ for humid areas), the water saving in an average year would be 53 % of the current water consumption at winter wheat fields in China. The majority of the yield increase and associated improvement in water productivity can be achieved in southern China.
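
    The consumptive WF per tonne follows from evapotranspiration over the growing period and crop yield. A minimal sketch, with the benchmark taken here as a simple quantile over equally weighted fields (a simplification: the study ranks by cumulative production volume):

```python
def consumptive_wf(et_mm, yield_t_per_ha):
    """Consumptive water footprint in m3 per tonne of crop.

    1 mm of evapotranspiration over 1 ha equals 10 m3 of water,
    hence WF [m3/t] = 10 * ET [mm] / Y [t/ha].
    """
    return 10.0 * et_mm / yield_t_per_ha

def wf_benchmark(wf_values, best_fraction=0.25):
    """WF level achieved by the best (lowest-WF) fraction of fields,
    approximated as a quantile over equally weighted fields."""
    ranked = sorted(wf_values)
    k = max(1, int(len(ranked) * best_fraction))
    return ranked[k - 1]

# Illustrative fields: same growing-period ET of 400 mm, yields of 3-6 t/ha
wfs = [consumptive_wf(400.0, y) for y in (3.0, 4.0, 5.0, 6.0)]
bench = wf_benchmark(wfs)
```

    With equal ET, higher-yielding fields set the benchmark, which is why the paper links WF reduction mainly to yield improvement.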

  17. Visco-Resistive MHD Modeling Benchmark of Forced Magnetic Reconnection

    NASA Astrophysics Data System (ADS)

    Beidler, M. T.; Hegna, C. C.; Sovinec, C. R.; Callen, J. D.; Ferraro, N. M.

    2016-10-01

    The presence of externally-applied 3D magnetic fields can affect important phenomena in tokamaks, including mode locking, disruptions, and edge localized modes. External fields penetrate into the plasma and can lead to forced magnetic reconnection (FMR), and hence magnetic islands, on resonant surfaces if the local plasma rotation relative to the external field is slow. Preliminary visco-resistive MHD simulations of FMR in a slab geometry are consistent with theory. Specifically, linear simulations exhibit the proper scaling of the penetrated field with resistivity, viscosity, and flow, and nonlinear simulations exhibit a bifurcation from a flow-screened state to a field-penetrated, magnetic island state as the external field is increased, due to the 3D electromagnetic force. These results will be compared to simulations of FMR in a circular cross-section, cylindrical geometry by way of a benchmark between the NIMROD and M3D-C1 extended-MHD codes. Because neither this geometry nor the MHD model includes the physics of poloidal flow damping, the theory will be expanded to include poloidal flow effects. The resulting theory will be tested with linear and nonlinear simulations that vary the resistivity, viscosity, flow, and external field. Supported by OFES DoE Grants DE-FG02-92ER54139, DE-FG02-86ER53218, DE-AC02-09CH11466, and the SciDAC Center for Extended MHD Modeling.

  18. Validating the performance of correlated fission multiplicity implementation in radiation transport codes with subcritical neutron multiplication benchmark experiments

    DOE PAGES

    Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson; ...

    2018-06-14

    Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of four Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for ²³⁹Pu and ²⁴⁰Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also clear that, because transport is handled by MCNP®6.2 in three of the four codes, with the fourth (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each code, rather than to the radiation transport.
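Two of the observables named above, the singles rate and the Feynman-histogram statistic, can be sketched from a list of detection times. The detection times and gate width below are made-up illustrative numbers, not benchmark data, and the doubles rate is omitted for brevity.

```python
# Singles rate and Feynman-Y (excess variance of time-gate counts).
# Feynman-Y is zero for ideal Poisson counting and positive when fission
# chains correlate the detections.

def gate_counts(times, gate_width, total_time):
    """Count detections in consecutive non-overlapping time gates."""
    n_gates = int(total_time / gate_width)
    counts = [0] * n_gates
    for t in times:
        g = int(t / gate_width)
        if g < n_gates:
            counts[g] += 1
    return counts

def feynman_y(counts):
    """Feynman-Y = variance/mean - 1 of the gate counts."""
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return var / mean - 1.0

times = [0.1, 1.1, 1.2, 1.5, 2.1, 3.1, 3.2, 3.5]   # detection times (s)
counts = gate_counts(times, gate_width=1.0, total_time=4.0)
singles_rate = len(times) / 4.0    # detections per second
y = feynman_y(counts)              # sub-Poisson for this toy data, so Y < 0
```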

  20. A Uranium Bioremediation Reactive Transport Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay among flow, acetate loading periods and rates, and microbially mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10-day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50-day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. The major ion geochemistry modeled includes mineral reactions as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity, and available surface complexation sites. The general difficulty of this benchmark lies in the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
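The Monod-type rate laws mentioned above have a simple saturating form. A minimal sketch of acetate-limited U(VI) bioreduction with a dual-Monod rate follows; the rate constants, half-saturation constants, and concentrations are illustrative placeholders, not the Rifle benchmark parameterization.

```python
# Dual-Monod U(VI) reduction: rate saturates in both the electron acceptor
# (U(VI)) and the electron donor (acetate), scaled by biomass.

def monod_term(conc, km):
    """Saturating Monod factor C/(Km + C), in [0, 1)."""
    return conc / (km + conc)

def simulate_u6(u6, acetate, biomass, vmax=0.5, km_u=1e-3, km_ac=0.1,
                dt=0.01, steps=1000):
    """Explicit Euler integration of dual-Monod U(VI) reduction."""
    history = [u6]
    for _ in range(steps):
        rate = vmax * biomass * monod_term(u6, km_u) * monod_term(acetate, km_ac)
        u6 = max(u6 - rate * dt, 0.0)   # clamp to keep concentration physical
        history.append(u6)
    return history

history = simulate_u6(u6=1e-2, acetate=3.0, biomass=0.1)
```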

  1. Model evaluation using a community benchmarking system for land surface models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Kluzek, E. B.; Koven, C. D.; Randerson, J. T.

    2014-12-01

    Evaluation of atmosphere, ocean, sea ice, and land surface models is an important step in identifying deficiencies in Earth system models and developing improved estimates of future change. For the land surface and carbon cycle, the design of an open-source system has been an important objective of the International Land Model Benchmarking (ILAMB) project. Here we evaluated CMIP5 and CLM models using a benchmarking system that enables users to specify models, data sets, and scoring systems so that results can be tailored to specific model intercomparison projects. Our scoring system used information from four different aspects of global datasets, including climatological mean spatial patterns, seasonal cycle dynamics, interannual variability, and long-term trends. Variable-to-variable comparisons enable investigation of the mechanistic underpinnings of model behavior, and allow for some control of biases in model drivers. Graphics modules allow users to evaluate model performance at local, regional, and global scales. Use of modular structures makes it relatively easy for users to add new variables, diagnostic metrics, benchmarking datasets, or model simulations. Diagnostic results are automatically organized into HTML files, so users can conveniently share results with colleagues. We used this system to evaluate atmospheric carbon dioxide, burned area, global biomass and soil carbon stocks, net ecosystem exchange, gross primary production, ecosystem respiration, terrestrial water storage, evapotranspiration, and surface radiation from CMIP5 historical and ESM historical simulations. We found that the multi-model mean often performed better than many of the individual models for most variables. We plan to publicly release a stable version of the software during fall of 2014 that has land surface, carbon cycle, hydrology, radiation and energy cycle components.
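The four-aspect scoring described above (climatological mean, seasonal cycle, interannual variability, long-term trend) can be sketched as per-aspect scores combined into one number per model. The score transform exp(-|relative error|) and all values below are hypothetical illustrations, not the exact ILAMB formulas or data.

```python
# Multi-aspect model scoring: one score per aspect, averaged into an
# overall score per model. 1.0 means a perfect match to observations.

import math

def aspect_score(model_value, obs_value):
    """Map a relative error to a (0, 1] score."""
    rel_err = abs(model_value - obs_value) / (abs(obs_value) + 1e-12)
    return math.exp(-rel_err)

def overall_score(model, obs):
    """Average the per-aspect scores over the aspects both dicts share."""
    aspects = sorted(set(model) & set(obs))
    return sum(aspect_score(model[a], obs[a]) for a in aspects) / len(aspects)

obs = {"mean": 120.0, "seasonal_amp": 30.0, "iav": 5.0, "trend": 0.4}
good_model = {"mean": 118.0, "seasonal_amp": 28.0, "iav": 6.0, "trend": 0.5}
poor_model = {"mean": 60.0, "seasonal_amp": 90.0, "iav": 20.0, "trend": -0.4}

s_good = overall_score(good_model, obs)
s_poor = overall_score(poor_model, obs)
```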

  2. Accelerating cardiac bidomain simulations using graphics processing units.

    PubMed

    Neic, A; Liebmann, M; Hoetzl, E; Mitchell, L; Vigmond, E J; Haase, G; Plank, G

    2012-08-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element methods (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility.
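The scaling claims above reduce to simple bookkeeping. The only figures taken from the abstract are that 476 CPU cores matched 20 GPUs; the timings in the strong-scaling example are hypothetical.

```python
# Strong-scaling bookkeeping for multi-device benchmarks like the
# multi-GPU bidomain runs.

def speedup(t_ref, t_new):
    """How many times faster the new configuration is."""
    return t_ref / t_new

def strong_scaling_efficiency(t_base, n_base, t_scaled, n_scaled):
    """Fraction of ideal speedup retained going from n_base to n_scaled devices."""
    ideal = n_scaled / n_base
    return speedup(t_base, t_scaled) / ideal

# One GPU in the fastest run was worth about 476/20 = 23.8 CPU cores:
cores_per_gpu = 476 / 20

# Hypothetical timings (seconds) for an illustrative 6 -> 20 device scale-up:
eff = strong_scaling_efficiency(t_base=100.0, n_base=6, t_scaled=35.0, n_scaled=20)
```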

  4. Entropic multirelaxation-time lattice Boltzmann method for moving and deforming geometries in three dimensions

    NASA Astrophysics Data System (ADS)

    Dorschner, B.; Chikatamarla, S. S.; Karlin, I. V.

    2017-06-01

    Entropic lattice Boltzmann methods have been developed to alleviate intrinsic stability issues of lattice Boltzmann models for under-resolved simulations. Their reliability in combination with moving objects was established for various laminar benchmark flows in two dimensions in our previous work [B. Dorschner, S. Chikatamarla, F. Bösch, and I. Karlin, J. Comput. Phys. 295, 340 (2015), 10.1016/j.jcp.2015.04.017], as well as for three-dimensional one-way coupled simulations of engine-type geometries with flat moving walls [B. Dorschner, F. Bösch, S. Chikatamarla, K. Boulouchos, and I. Karlin, J. Fluid Mech. 801, 623 (2016), 10.1017/jfm.2016.448]. The present contribution aims to fully exploit the advantages of entropic lattice Boltzmann models in terms of stability and accuracy and extends the methodology to three-dimensional cases, including two-way coupling between fluid and structure, turbulence, and deforming geometries. To cover this wide range of applications, the classical benchmark of a sedimenting sphere is chosen first to validate the general two-way coupling algorithm. Increasing the complexity, we subsequently consider the simulation of a plunging SD7003 airfoil in the transitional regime at a Reynolds number of Re = 40 000 and, finally, to assess the model's performance for deforming geometries, we conduct a two-way coupled simulation of a self-propelled anguilliform swimmer. These simulations confirm the viability of the new fluid-structure interaction lattice Boltzmann algorithm for simulating flows of engineering relevance.

  5. GAPD: a GPU-accelerated atom-based polychromatic diffraction simulation code.

    PubMed

    E, J C; Wang, L; Chen, S; Zhang, Y Y; Luo, S N

    2018-03-01

    GAPD, a graphics-processing-unit (GPU)-accelerated atom-based polychromatic diffraction simulation code for direct, kinematics-based, simulations of X-ray/electron diffraction of large-scale atomic systems with mono-/polychromatic beams and arbitrary plane detector geometries, is presented. This code implements GPU parallel computation via both real- and reciprocal-space decompositions. With GAPD, direct simulations are performed of the reciprocal lattice node of ultralarge systems (∼5 billion atoms) and diffraction patterns of single-crystal and polycrystalline configurations with mono- and polychromatic X-ray beams (including synchrotron undulator sources), and validation, benchmark and application cases are presented.
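The kinematic diffraction sum that codes of this kind evaluate is I(q) = |Σⱼ fⱼ exp(i q·rⱼ)|² over all atoms. A one-dimensional toy with identical atoms makes the Bragg condition easy to verify; GAPD itself handles billions of atoms on GPUs, and the chain below is purely illustrative.

```python
# Kinematic (single-scattering) diffracted intensity for a 1D atom chain.
# Constructive interference at q = 2*pi*m/a gives I = N^2; for an even-length
# chain, q = pi/a gives alternating phases that cancel exactly.

import cmath, math

def intensity(q, positions, form_factor=1.0):
    """Kinematic diffracted intensity at scattering vector q (1D)."""
    amplitude = sum(form_factor * cmath.exp(1j * q * x) for x in positions)
    return abs(amplitude) ** 2

a = 2.0                                   # lattice spacing
atoms = [a * n for n in range(8)]         # 8-atom chain
bragg = intensity(2 * math.pi / a, atoms)     # constructive: N^2 = 64
off_bragg = intensity(math.pi / a, atoms)     # destructive for even N: ~0
```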

  6. Commercial Building Energy Saver, API

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Piette, Mary; Lee, Sang Hoon

    2015-08-27

    The CBES API provides Application Programming Interface to a suite of functions to improve energy efficiency of buildings, including building energy benchmarking, preliminary retrofit analysis using a pre-simulation database DEEP, and detailed retrofit analysis using energy modeling with the EnergyPlus simulation engine. The CBES API is used to power the LBNL CBES Web App. It can be adopted by third party developers and vendors into their software tools and platforms.

  7. Multi-Constituent Simulation of Thrombus Deposition

    NASA Astrophysics Data System (ADS)

    Wu, Wei-Tao; Jamiolkowski, Megan A.; Wagner, William R.; Aubry, Nadine; Massoudi, Mehrdad; Antaki, James F.

    2017-02-01

    In this paper, we present a spatio-temporal mathematical model for simulating the formation and growth of a thrombus. Blood is treated as a multi-constituent mixture comprised of a linear fluid phase and a thrombus (solid) phase. The transport and reactions of 10 chemical and biological species are incorporated using a system of coupled convection-reaction-diffusion (CRD) equations to represent three processes in thrombus formation: initiation, propagation and stabilization. Computational fluid dynamic (CFD) simulations using the libraries of OpenFOAM were performed for two illustrative benchmark problems: in vivo thrombus growth in an injured blood vessel and in vitro thrombus deposition in micro-channels (1.5 mm × 1.6 mm × 0.1 mm) with small crevices (125 μm × 75 μm and 125 μm × 137 μm). For both problems, the simulated thrombus deposition agreed very well with experimental observations, both spatially and temporally. Based on the success with these two benchmark problems, which have very different flow conditions and biological environments, we believe that the current model will provide useful insight into the genesis of thrombosis in blood-wetted devices, and provide a tool for the design of less thrombogenic devices.
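The coupled CRD equations mentioned above share one building block: an update combining convection, diffusion, and a local reaction term. A single-species 1D sketch on a periodic grid follows; the grid size, velocity, and diffusivity are illustrative, not values from the thrombus model.

```python
# One explicit time step of c_t + u*c_x = D*c_xx + reaction(c) on a 1D
# periodic grid, with first-order upwind convection and central diffusion.

def crd_step(c, u, D, dx, dt, reaction=lambda ci: 0.0):
    """Advance the concentration field c by one explicit step."""
    n = len(c)
    new = [0.0] * n
    for i in range(n):
        upwind = (c[i] - c[i - 1]) / dx                      # periodic via index -1
        diffus = (c[(i + 1) % n] - 2 * c[i] + c[i - 1]) / dx ** 2
        new[i] = c[i] + dt * (-u * upwind + D * diffus + reaction(c[i]))
    return new

# Advect-diffuse a pulse; with no reaction, total mass is conserved.
c = [1.0 if 4 <= i <= 6 else 0.0 for i in range(32)]
mass0 = sum(c)
for _ in range(200):
    c = crd_step(c, u=0.5, D=0.05, dx=1.0, dt=0.5)
mass_end = sum(c)
```

The chosen time step respects the CFL and diffusion stability limits (u·dt/dx = 0.25, D·dt/dx² = 0.025), so the pulse stays non-negative.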

  10. Scalable Metropolis Monte Carlo for simulation of hard shapes

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Eric Irrgang, M.; Glotzer, Sharon C.

    2016-07-01

    We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.
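The elementary move that HPMC parallelizes can be sketched serially: propose a small random displacement of one particle and reject any move that creates an overlap. The minimal hard-disk version below uses an illustrative dilute system, not an HPMC benchmark configuration.

```python
# Hard-disk Metropolis Monte Carlo in a periodic box. Hard-particle MC has
# no energy scale: a trial move is accepted iff it creates no overlap.

import random

L = 10.0        # periodic box edge
SIGMA = 1.0     # disk diameter
random.seed(7)

def overlaps(p, q):
    """Minimum-image overlap test for two disks."""
    dx = (p[0] - q[0] + L / 2) % L - L / 2
    dy = (p[1] - q[1] + L / 2) % L - L / 2
    return dx * dx + dy * dy < SIGMA * SIGMA

# Start from a dilute square lattice (no overlaps).
disks = [(2.0 * i + 0.5, 2.0 * j + 0.5) for i in range(5) for j in range(5)]

accepted = 0
for _ in range(2000):
    k = random.randrange(len(disks))
    x, y = disks[k]
    trial = ((x + random.uniform(-0.3, 0.3)) % L,
             (y + random.uniform(-0.3, 0.3)) % L)
    if all(not overlaps(trial, disks[j]) for j in range(len(disks)) if j != k):
        disks[k] = trial
        accepted += 1

no_overlaps = all(not overlaps(disks[i], disks[j])
                  for i in range(len(disks)) for j in range(i + 1, len(disks)))
```

HPMC's GPU kernel proposes many such moves in parallel on a checkerboard so that simultaneous trials cannot interact.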

  11. Cherry-picking functionally relevant substates from long MD trajectories using a stratified sampling approach.

    PubMed

    Chandramouli, Balasubramanian; Mancini, Giordano

    2016-01-01

    Classical Molecular Dynamics (MD) simulations can provide insights into protein dynamics at the nanoscopic scale. Currently, simulations of large proteins and complexes can be routinely carried out in the ns-μs time regime. Clustering of MD trajectories is often performed to identify selective conformations and to compare simulation and experimental data coming from different sources on closely related systems. However, clustering techniques are usually applied without careful validation of the results; benchmark studies involving the application of different algorithms to MD data often deal with relatively small peptides instead of average or large proteins; and clustering is often applied both as a means to analyze refined data and as a way to simplify further analysis of trajectories. Herein, we propose a strategy to classify MD data while carefully benchmarking the performance of clustering algorithms and internal validation criteria for such methods. We demonstrate the method on two showcase systems with different features and compare the classification of trajectories in real and PCA space. We posit that the prototype procedure adopted here could be highly fruitful in clustering large trajectories of multiple systems, or those resulting from enhanced sampling techniques such as replica exchange simulations.
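The clustering-plus-internal-validation workflow advocated above can be sketched with a tiny k-means (k ≥ 2) on one scalar coordinate and the within-cluster sum of squares (WCSS) as a simple internal criterion. Real MD work would cluster high-dimensional frames (e.g. in PCA space); the data below are invented.

```python
# Plain k-means on scalars with deterministic spread-out initialization,
# plus WCSS as an internal validation score (lower = tighter clusters).

def kmeans_1d(points, k, iters=50):
    """k-means on a list of scalars; requires k >= 2."""
    pts = sorted(points)
    centers = [pts[int(i * (len(pts) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: (p - centers[c]) ** 2)
            clusters[j].append(p)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

def wcss(centers, clusters):
    """Within-cluster sum of squared deviations from each center."""
    return sum((p - m) ** 2 for m, cl in zip(centers, clusters) for p in cl)

# Two well-separated conformational "states" along one coordinate:
data = [1.0, 1.1, 0.9, 1.05, 5.0, 5.1, 4.9, 5.05]
centers, clusters = kmeans_1d(data, k=2)
score = wcss(centers, clusters)
```

Comparing `score` across several k (or across algorithms) is the kind of benchmarking the abstract argues should precede any interpretation of the clusters.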

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinilla, Maria Isabel

    This report seeks to study and benchmark code predictions against experimental data; determine parameters to match MCNP-simulated detector response functions to experimental stilbene measurements; add stilbene processing capabilities to DRiFT; and improve NEUANCE detector array modeling and analysis using new MCNP6 and DRiFT features.

  13. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    EPA Science Inventory

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  14. Summary of comparison and analysis of results from exercises 1 and 2 of the OECD PBMR coupled neutronics/thermal hydraulics transient benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkhabela, P.; Han, J.; Tyobeka, B.

    2006-07-01

    The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of their official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events through a set of multi-dimensional computational test problems. The benchmark includes three steady-state exercises and six transient exercises. This paper describes the first two steady-state exercises, their objectives, and the international participation in terms of organization, country, and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help to focus development efforts on the most critical areas. The first two exercises also allow for the removal of user-related modeling errors and prepare the core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark. (authors)
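The cross-code comparison described above boils down to collecting one result per participant code and reporting the mean and the relative spread, which indicates how sensitive the result is to the method employed. The code names and k-eff values below are invented placeholders, not benchmark submissions.

```python
# Summarize a set of participant results: mean and maximum relative
# deviation from the mean.

def compare(results):
    """Return (mean, max relative deviation from the mean)."""
    values = list(results.values())
    mean = sum(values) / len(values)
    max_rel_dev = max(abs(v - mean) / mean for v in values)
    return mean, max_rel_dev

# Hypothetical k-eff results from four participant codes:
keff = {"code_A": 1.00012, "code_B": 0.99980, "code_C": 1.00051, "code_D": 0.99997}
mean, spread = compare(keff)
```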

  15. Mathematical simulations of photon interactions using Monte Carlo analysis to evaluate the uncertainty associated with in vivo K X-ray fluorescence measurements of stable lead in bone

    NASA Astrophysics Data System (ADS)

    Lodwick, Camille J.

    This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescence (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and the shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV ¹⁰⁹Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models the photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in the calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio, by 4.5% (p < 0.0001).
Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate lead content of a human leg up to 4%. The patellar bone structure in which the fluorescent photons originate was found to vary dramatically with measurement angle. The relative contribution of lead signal from the patella declined from 65% to 27% when rotated 30°. However, rotation of the source-detector about the patella from 0 to 45° demonstrated no significant effect on the net K XRF response at the knee.
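The tissue-thickness effect above has a simple physical driver: both the incident 88 keV photons and the fluoresced Pb K X-rays are exponentially attenuated in the overlying soft tissue. The attenuation coefficient in the sketch below is an assumed round number for soft tissue at these energies, not a value from the study, and the quantity computed (raw signal fraction) is not the same as the paper's 1.15%-per-mm change in predicted concentration.

```python
# Exponential attenuation of the XRF signal by overlying tissue: the photon
# path traverses the tissue once inbound and once outbound.

import math

MU_TISSUE = 0.018   # assumed linear attenuation coefficient, 1/mm

def signal_fraction(tissue_mm, mu=MU_TISSUE):
    """Fraction of signal surviving passage in and back out of the tissue."""
    return math.exp(-2.0 * mu * tissue_mm)

# Signal drops monotonically with overlying tissue thickness (0-10 mm):
fractions = [signal_fraction(t) for t in range(0, 11)]
```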

  16. ZPR-6 Assembly 7 High ²⁴⁰Pu Core: A Cylindrical Assembly with Mixed (Pu,U)-Oxide Fuel and a Central High ²⁴⁰Pu Zone.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lell, R. M.; Schaefer, R. W.; McKnight, R. D.

    Over a period of 30 years, more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited to form the basis for criticality safety benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was ²³⁵U or ²³⁹Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. The term 'benchmark' in a ZPR program connotes a particularly simple loading aimed at gaining basic reactor physics insight, as opposed to studying a reactor design. In fact, the ZPR-6/7 Benchmark Assembly (Reference 1) had a very simple core unit cell assembled from plates of depleted uranium, sodium, iron oxide, U₃O₈, and plutonium. The ZPR-6/7 core cell-average composition is typical of the interior region of liquid-metal fast breeder reactors (LMFBRs) of the era.
    It was one part of the Demonstration Reactor Benchmark Program, which provided integral experiments characterizing the important features of demonstration-size LMFBRs. As a benchmark, ZPR-6/7 was devoid of many 'real' reactor features, such as simulated control rods and multiple enrichment zones, in its reference form. Those kinds of features were investigated experimentally in variants of the reference ZPR-6/7 or in other critical assemblies in the Demonstration Reactor Benchmark Program.

  17. Performance Evaluation and Benchmarking of Intelligent Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    del Pobil, Angel; Madhavan, Raj; Bonsignorio, Fabio

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of the state of the art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  18. Design and development of a community carbon cycle benchmarking system for CMIP5 models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Randerson, J. T.

    2013-12-01

    Benchmarking has been widely used to assess the ability of atmosphere, ocean, sea ice, and land surface models to capture the spatial and temporal variability of observations during the historical period. For the carbon cycle and terrestrial ecosystems, the design and development of an open-source community platform has been an important goal as part of the International Land Model Benchmarking (ILAMB) project. Here we designed and developed a software system that enables the user to specify the models, benchmarks, and scoring systems so that results can be tailored to specific model intercomparison projects. We used this system to evaluate the performance of CMIP5 Earth system models (ESMs). Our scoring system used information from four different aspects of climate, including the climatological mean spatial pattern of gridded surface variables, seasonal cycle dynamics, the amplitude of interannual variability, and long-term decadal trends. We used this system to evaluate burned area, global biomass stocks, net ecosystem exchange, gross primary production, and ecosystem respiration from CMIP5 historical simulations. Initial results indicated that the multi-model mean often performed better than many of the individual models for most of the observational constraints.
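A scoring system of the four-aspect kind described here (climatological mean, seasonal cycle, interannual variability, long-term trend) can be sketched as a small routine. The exponential scoring form and all function names below are illustrative assumptions, not the ILAMB implementation.

```python
import numpy as np

def relative_error_score(model, obs):
    """Map a relative error onto (0, 1]; 1 = perfect agreement
    (a hypothetical scoring form, chosen only for illustration)."""
    err = np.abs(model - obs) / (np.abs(obs) + 1e-12)
    return float(np.exp(-err))

def benchmark_score(model_series, obs_series):
    """Combine four aspects of skill for a monthly time series
    (length must be a multiple of 12) into one overall score."""
    # (i) climatological mean
    s_mean = relative_error_score(model_series.mean(), obs_series.mean())
    # (ii) seasonal cycle amplitude (range of the 12-month climatology)
    m_clim = model_series.reshape(-1, 12).mean(axis=0)
    o_clim = obs_series.reshape(-1, 12).mean(axis=0)
    s_seas = relative_error_score(np.ptp(m_clim), np.ptp(o_clim))
    # (iii) interannual variability (std of annual means)
    s_iav = relative_error_score(model_series.reshape(-1, 12).mean(axis=1).std(),
                                 obs_series.reshape(-1, 12).mean(axis=1).std())
    # (iv) long-term trend (fitted slope per time step)
    t = np.arange(model_series.size)
    s_trend = relative_error_score(np.polyfit(t, model_series, 1)[0],
                                   np.polyfit(t, obs_series, 1)[0])
    return float(np.mean([s_mean, s_seas, s_iav, s_trend]))
```

A perfect model scores 1.0; a model with a constant bias loses points only on the mean-state aspect, which is the kind of decomposition that makes multi-aspect benchmarking informative.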

  19. The PAC-MAN model: Benchmark case for linear acoustics in computational physics

    NASA Astrophysics Data System (ADS)

    Ziegelwanger, Harald; Reiter, Paul

    2017-10-01

    Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well known example for such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.

  20. Benchmarking homogenization algorithms for monthly data

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2012-01-01

    The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. 
Training the users on homogenization software was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can perform as well as manual ones.
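The two principal error metrics described above can be sketched in a few lines; the function names are illustrative, and this is not the HOME benchmark code.

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """Centered RMSE: RMSE after removing each series' own mean, so a
    constant offset does not count as an error -- only the shape does."""
    h = homogenized - np.mean(homogenized)
    t = truth - np.mean(truth)
    return float(np.sqrt(np.mean((h - t) ** 2)))

def trend_error(homogenized, truth):
    """Absolute difference in fitted linear trend (units per time step),
    the quantity most affected by unremoved break inhomogeneities."""
    x = np.arange(len(truth))
    slope_h = np.polyfit(x, homogenized, 1)[0]
    slope_t = np.polyfit(x, truth, 1)[0]
    return float(abs(slope_h - slope_t))
```

A series that differs from the truth only by a constant shift scores zero on both metrics, while an unremoved break at mid-series produces both a nonzero centered RMSE and a spurious trend.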

  1. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. 
Training was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can currently perform as well as manual ones.

  2. Comparative study on gene set and pathway topology-based enrichment methods.

    PubMed

    Bayerlová, Michaela; Jung, Klaus; Kramer, Frank; Klemm, Florian; Bleckmann, Annalen; Beißbarth, Tim

    2015-10-22

    Enrichment analysis is a popular approach to identify pathways or sets of genes which are significantly enriched in the context of differentially expressed genes. The traditional gene set enrichment approach considers a pathway as a simple gene list, disregarding any knowledge of gene or protein interactions. In contrast, the new group of so-called pathway topology-based methods integrates the topological structure of a pathway into the analysis. We comparatively investigated gene set and pathway topology-based enrichment approaches, considering three gene set and four topological methods. These methods were compared in two extensive simulation studies and on a benchmark of 36 real datasets, providing the same pathway input data for all methods. In the benchmark data analysis, both types of methods showed a comparable ability to detect enriched pathways. The first simulation study was conducted with KEGG pathways, which showed considerable gene overlaps between each other. In this study with original KEGG pathways, none of the topology-based methods outperformed the gene set approach. Therefore, a second simulation study was performed on non-overlapping pathways created by unique gene IDs. Here, methods accounting for pathway topology reached higher accuracy than the gene set methods; however, their sensitivity was lower. We conducted one of the first comprehensive comparative works on evaluating gene set against pathway topology-based enrichment methods. The topological methods showed better performance in the simulation scenarios with non-overlapping pathways; however, they were not conclusively better in the other scenarios. This suggests that the simple gene set approach might be sufficient to detect an enriched pathway under realistic circumstances. Nevertheless, more extensive studies and further benchmark data are needed to systematically evaluate these methods and to assess what gains and costs pathway topology information introduces into enrichment analysis. 
Both types of methods for enrichment analysis require further improvements in order to deal with the problem of pathway overlaps.
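The classic gene set approach that the study contrasts with topology-aware methods is, at its core, an over-representation test; a minimal hypergeometric sketch follows. Function and parameter names are hypothetical, not taken from any of the compared tools.

```python
from math import comb

def enrichment_pvalue(n_genome, n_pathway, n_de, n_overlap):
    """One-sided hypergeometric p-value for over-representation of a
    pathway's genes among the differentially expressed (DE) genes:
    P(X >= n_overlap) with X ~ Hypergeom(n_genome, n_pathway, n_de).
    Pathway topology is ignored entirely -- the pathway is a bare list."""
    total = comb(n_genome, n_de)
    p = 0.0
    for k in range(n_overlap, min(n_pathway, n_de) + 1):
        p += comb(n_pathway, k) * comb(n_genome - n_pathway, n_de - k) / total
    return p
```

With 100 DE genes out of 1000 and a 50-gene pathway, the expected overlap is 5, so an observed overlap of 10 yields a much smaller p-value than an overlap of 5.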

  3. HRSSA – Efficient hybrid stochastic simulation for spatially homogeneous biochemical reaction networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marchetti, Luca, E-mail: marchetti@cosbi.eu; Priami, Corrado, E-mail: priami@cosbi.eu; University of Trento, Department of Mathematics

    This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating the performance and accuracy of HRSSA against other state-of-the-art algorithms.
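The propensity-bound rejection idea that RSSA (and hence HRSSA) relies on can be illustrated on a toy two-reaction network. This is a simplified sketch under stated assumptions — the bounds are rebuilt on every step here, whereas the real algorithm reuses them until the state leaves its fluctuation interval — not the published implementation.

```python
import math
import random

def rssa_step(x, rates, bounds_width=0.1, rng=random):
    """One step of a rejection-based SSA on the toy network
    (1) A -> B with propensity k1*A, (2) B -> A with propensity k2*B.
    A candidate reaction is drawn from propensity *upper bounds* and
    accepted with probability a_true/a_upper, so exact propensities are
    only evaluated in acceptance tests, not after every state change."""
    k1, k2 = rates
    A, B = x
    # upper bounds over a fluctuation interval around the current state
    ub = [k1 * A * (1 + bounds_width), k2 * B * (1 + bounds_width)]
    ub_total = ub[0] + ub[1]
    if ub_total == 0.0:
        return x, math.inf            # no reaction can fire
    elapsed = 0.0
    while True:
        elapsed += rng.expovariate(ub_total)   # thinned Poisson clock
        r = rng.random() * ub_total
        j = 0 if r < ub[0] else 1
        a_true = k1 * A if j == 0 else k2 * B
        if rng.random() <= a_true / ub[j]:     # rejection test
            break
    new_state = (A - 1, B + 1) if j == 0 else (A + 1, B - 1)
    return new_state, elapsed
```

Because the state is unchanged during rejections, thinning the bounding Poisson process in this way reproduces exact SSA statistics while touching true propensities far less often.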

  4. Pretest analysis of Semiscale Mod-3 baseline test S-07-8 and S-07-9

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fineman, C.P.; Steiner, J.L.; Snider, D.M.

    This document contains a pretest analysis of the Semiscale Mod-3 system thermal-hydraulic response for the second and third integral tests in Test Series 7 (Tests S-07-8 and S-07-9). Test Series 7 is the first test series to be conducted with the Semiscale Mod-3 system. The design of the Mod-3 system includes an improved representation of certain portions of a pressurized water reactor (PWR) when compared to the previously operated Semiscale Mod-1 system. The improvements include a new vessel which contains a full length (3.66 m) core, a full length upper plenum and upper head, and an external downcomer. An active pump and active steam generator scaled to their PWR counterparts have been added to the broken loop. The upper head design includes the capability to simulate emergency core coolant (ECC) injection into this region. Test Series 7 is divided into three groups of tests that emphasize the evaluation of the Mod-3 system performance during different phases of the loss-of-coolant experiment (LOCE) transient. The last test group, which includes Tests S-07-8 and S-07-9, will be used to evaluate the integral behavior of the system. The previous two test groups were used to evaluate the blowdown behavior and the reflood behavior of the system. 3 refs., 35 figs., 12 tabs.

  5. Modeling of turbulent separated flows for aerodynamic applications

    NASA Technical Reports Server (NTRS)

    Marvin, J. G.

    1983-01-01

    Steady, high speed, compressible separated flows modeled through numerical simulations resulting from solutions of the mass-averaged Navier-Stokes equations are reviewed. Emphasis is placed on benchmark flows that represent simplified (but realistic) aerodynamic phenomena. These include impinging shock waves, compression corners, glancing shock waves, trailing edge regions, and supersonic high angle of attack flows. A critical assessment of modeling capabilities is provided by comparing the numerical simulations with experiment. The importance of combining experiment, numerical algorithm, grid, and turbulence model to effectively develop this potentially powerful simulation technique is stressed.

  6. Waterside corrosion of Zircaloy-clad fuel rods in a PWR environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzarolli, F.; Jorde, D.; Manzel, R.

    A database of Zircaloy corrosion behavior under PWR operating conditions has been established from previously published reports as well as from new Kraftwerk Union (KWU) fuel examinations. The data show that the reactor environment increases the corrosion. ZrO2 film thermal conductivity is another major factor that influences corrosion behavior. It was inferred from KWU film thickness data that the oxide film thermal conductivity may decrease once circumferential cracks develop in the layer. 57 refs.

  7. Chemical Agonists of the PML/Daxx Pathway for Prostate Cancer Therapy

    DTIC Science & Technology

    2011-04-01

    positive nuclei. These data suggest that the assay is highly specific and will not suffer from promiscuous reactivity with NIH library compounds...Figure 16B). Strikingly, when we compared Daxx levels in PCa cell lines to a nontumorigenic human prostatic epithelial line, PWR-1E, they were...Lysates from six different cell types (PWR-1E, ALVA-31 Daxx K/D, ALVA-31 WT, DU145, LNCaP, and PC3) were normalized for total protein content (60 μg

  8. Validation of the second-generation Olympus colonoscopy simulator for skills assessment.

    PubMed

    Haycock, A V; Bassett, P; Bladen, J; Thomas-Gibson, S

    2009-11-01

    Simulators have potential value in providing objective evidence of technical skill for procedures within medicine. The aim of this study was to determine face and construct validity for the Olympus colonoscopy simulator and to establish which assessment measures map to clinical benchmarks of expertise. Thirty-four participants were recruited: 10 novices with no prior colonoscopy experience, 13 intermediate (trainee) endoscopists with fewer than 1000 previous colonoscopies, and 11 experienced endoscopists with more than 1000 previous colonoscopies. All participants completed three standardized cases on the simulator and experts gave feedback regarding the realism of the simulator. Forty metrics recorded automatically by the simulator were analyzed for their ability to distinguish between the groups. The simulator discriminated participants by experience level for 22 different parameters. Completion rates were lower for novices than for trainees and experts (37% vs. 79% and 88% respectively, P < 0.001) and both novices and trainees took significantly longer to reach all major landmarks than the experts. Several technical aspects of competency were discriminatory: pushing with an embedded tip (P = 0.03), correct use of the variable stiffness function (P = 0.004), number of sigmoid N-loops (P = 0.02), size of sigmoid N-loops (P = 0.01), and time to remove alpha loops (P = 0.004). Out of 10, experts rated the realism of movement at 6.4, force feedback at 6.6, looping at 6.6, and loop resolution at 6.8. The Olympus colonoscopy simulator has good face validity and excellent construct validity. It provides an objective assessment of colonoscopic skill on multiple measures and benchmarks have been set to allow its use as both a formative and a summative assessment tool.

  9. Increasing the relevance of GCM simulations for Climate Services

    NASA Astrophysics Data System (ADS)

    Smith, L. A.; Suckling, E.

    2012-12-01

    The design and interpretation of model simulations for climate services differ significantly from experimental design for the advancement of the fundamental research on predictability that underpins it. Climate services consider the best sources of information available today; this calls for a frank evaluation of model skill in the face of statistical benchmarks defined by empirical models. The fact that physical simulation models are thought to provide the only reliable method for extrapolating into conditions not previously observed has no bearing on whether or not today's simulation models outperform empirical models. Evidence on the length scales on which today's simulation models fail to outperform empirical benchmarks is presented; it is illustrated that this occurs even on global scales in decadal prediction. At all timescales considered thus far (as of July 2012), predictions based on simulation models are improved by blending with the output of statistical models. Blending is shown to be more interesting in the climate context than it is in the weather context, where blending with a history-based climatology is straightforward. As GCMs improve and as the Earth's climate moves further from that of the last century, the skill from simulation models and their relevance to climate services is expected to increase. Examples from both seasonal and decadal forecasting will be used to discuss a third approach that may increase the role of current GCMs more quickly. Specifically, aspects of the experimental design in previous hindcast experiments are shown to hinder the use of GCM simulations for climate services. Alternative designs are proposed. The value of revisiting Thompson's classic approach to improving weather forecasting in the fifties in the context of climate services is discussed.

  10. Dimethyl methylphosphonate adsorption and decomposition on MoO2 as studied by ambient pressure x-ray photoelectron spectroscopy and DFT calculations

    NASA Astrophysics Data System (ADS)

    Head, Ashley R.; Tsyshevsky, Roman; Trotochaud, Lena; Yu, Yi; Karslıoǧlu, Osman; Eichhorn, Bryan; Kuklja, Maija M.; Bluhm, Hendrik

    2018-04-01

    Organophosphonates range in their toxicity and are used as pesticides, herbicides, and chemical warfare agents (CWAs). Few laboratories are equipped to handle the most toxic molecules; thus simulants, such as dimethyl methylphosphonate (DMMP), are used as a first step in studying adsorption and reactivity on materials. Benchmarked by combined experimental and theoretical studies of simulants, calculations offer an opportunity to understand how molecular interactions with a surface change upon using a CWA. However, most calculations of DMMP and CWAs on surfaces are limited to adsorption studies on clusters of atoms, which may differ markedly from the behavior on bulk solid-state materials with extended surfaces. We have benchmarked our solid-state periodic calculations of DMMP adsorption and reactivity on MoO2 with ambient pressure x-ray photoelectron spectroscopy (APXPS) studies. DMMP is found to interact strongly with a MoO2 film, a model system for the MoOx component in the ASZM-TEDA© gas filtration material. Density functional theory modeling of several adsorption and decomposition mechanisms assists the assignment of APXPS peaks. Our results show that some of the adsorbed DMMP decomposes, with all the products remaining on the surface. The rigorous calculations benchmarked with experiments pave a path to reliable and predictive theoretical studies of CWA interactions with surfaces.

  11. Benchmark of the local drift-kinetic models for neoclassical transport simulation in helical plasmas

    NASA Astrophysics Data System (ADS)

    Huang, B.; Satake, S.; Kanno, R.; Sugama, H.; Matsuoka, S.

    2017-02-01

    The benchmarks of the neoclassical transport codes based on several local drift-kinetic models are reported here. The drift-kinetic models are zero orbit width (ZOW), zero magnetic drift, DKES-like, and global, as classified in Matsuoka et al. [Phys. Plasmas 22, 072511 (2015)]. The magnetic geometries of the Helically Symmetric Experiment, the Large Helical Device (LHD), and Wendelstein 7-X are employed in the benchmarks. It is found that the assumption of E×B incompressibility causes discrepancies in neoclassical radial flux and parallel flow among the models when E×B is sufficiently large compared to the magnetic drift velocities, for example, Mp ≤ 0.4, where Mp is the poloidal Mach number. On the other hand, when E×B and the magnetic drift velocities are comparable, the tangential magnetic drift, which is included in both the global and ZOW models, fills the role of suppressing unphysical peaking of neoclassical radial fluxes found in the other local models at Er ≃ 0. In low collisionality plasmas, in particular, the tangential drift effect works well to suppress such unphysical behavior of the radial transport in the simulations. It is demonstrated that the ZOW model has the advantage of mitigating the unphysical behavior in the several magnetic geometries, and that it also implements the evaluation of bootstrap current in LHD at a low computational cost compared to the global model.

  12. FDA Benchmark Medical Device Flow Models for CFD Validation.

    PubMed

    Malinauskas, Richard A; Hariharan, Prasanna; Day, Steven W; Herbertson, Luke H; Buesen, Martin; Steinseifer, Ulrich; Aycock, Kenneth I; Good, Bryan C; Deutsch, Steven; Manning, Keefe B; Craven, Brent A

    Computational fluid dynamics (CFD) is increasingly being used to develop blood-contacting medical devices. However, the lack of standardized methods for validating CFD simulations and blood damage predictions limits its use in the safety evaluation of devices. Through a U.S. Food and Drug Administration (FDA) initiative, two benchmark models of typical device flow geometries (nozzle and centrifugal blood pump) were tested in multiple laboratories to provide experimental velocities, pressures, and hemolysis data to support CFD validation. In addition, computational simulations were performed by more than 20 independent groups to assess current CFD techniques. The primary goal of this article is to summarize the FDA initiative and to report recent findings from the benchmark blood pump model study. Discrepancies between CFD predicted velocities and those measured using particle image velocimetry most often occurred in regions of flow separation (e.g., downstream of the nozzle throat, and in the pump exit diffuser). For the six pump test conditions, 57% of the CFD predictions of pressure head were within one standard deviation of the mean measured values. Notably, only 37% of all CFD submissions contained hemolysis predictions. This project aided in the development of an FDA Guidance Document on factors to consider when reporting computational studies in medical device regulatory submissions. There is an accompanying podcast available for this article. Please visit the journal's Web site (www.asaiojournal.com) to listen.

  13. New Turbulent Multiphase Flow Facilities for Simulation Benchmarking

    NASA Astrophysics Data System (ADS)

    Teoh, Chee Hau; Salibindla, Ashwanth; Masuk, Ashik Ullah Mohammad; Ni, Rui

    2017-11-01

    The Fluid Transport Lab at Penn State has devoted the last few years to developing new experimental facilities to unveil the underlying physics of coupling between solid-gas and gas-liquid multiphase flow in a turbulent environment. In this poster, I will introduce one bubbly flow facility and one dusty flow facility for validating and verifying simulation results. Financial support for this project was provided by the National Science Foundation under Grant Numbers 1653389 and 1705246.

  14. An HLA-Based Approach to Quantify Achievable Performance for Tactical Edge Applications

    DTIC Science & Technology

    2011-05-01

    in: Proceedings of the 2002 Fall Simulation Interoperability Workshop, 02F-SIW-068, Nov 2002. [16] P. Knight, et al. "WBT RTI Independent...Benchmark Tests: Design, Implementation, and Updated Results", in: Proceedings of the 2002 Spring Simulation Interoperability Workshop, 02S-SIW-081, March...Interoperability Workshop, 98F-SIW-085, Nov 1998. [18] S. Ferenci and R. Fujimoto. "RTI Performance on Shared Memory and Message Passing Architectures", in

  15. MEqTrees Telescope and Radio-sky Simulations and CPU Benchmarking

    NASA Astrophysics Data System (ADS)

    Shanmugha Sundaram, G. A.

    2009-09-01

    MEqTrees is a Python-based implementation of the classical Measurement Equation, wherein the various 2×2 Jones matrices are parametrized representations in the spatial and sky domains for any generic radio telescope. Customized simulations of radio-source sky models and corrupt Jones terms are demonstrated based on a policy framework, with performance estimates derived for array configurations, "dirty"-map residuals and processing power requirements for such computations on conventional platforms.

  16. MCNP Simulation Benchmarks for a Portable Inspection System for Narcotics, Explosives, and Nuclear Material Detection

    NASA Astrophysics Data System (ADS)

    Alfonso, Krystal; Elsalim, Mashal; King, Michael; Strellis, Dan; Gozani, Tsahi

    2013-04-01

    MCNPX simulations have been used to guide the development of a portable inspection system for narcotics, explosives, and special nuclear material (SNM) detection. The system seeks to address these threats to national security by utilizing a high-yield, compact neutron source to actively interrogate the threats and produce characteristic signatures that can then be detected by radiation detectors. The portability of the system enables rapid deployment and proximity to threats concealed in small spaces. Both dD and dT electronic neutron generators (ENG) were used to interrogate ammonium nitrate fuel oil (ANFO) and cocaine hydrochloride, and the detector responses of NaI, CsI, and LaBr3 were compared. The effect of tungsten shielding on the neutron flux in the gamma ray detectors was investigated, while carbon, beryllium, and polyethylene ENG moderator materials were optimized by determining the reaction rate density in the threats. In order to benchmark the modeling results, experimental measurements are compared with MCNPX simulations. In addition, the efficiency and die-away time of a portable differential die-away analysis (DDAA) detector using 3He proportional counters for SNM detection have been determined.
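The die-away time mentioned at the end is conventionally extracted by fitting an exponential decay to the late-time detector counts. A minimal log-linear fit, sketched below, is one common reduction for such data; it is not the instrument's actual analysis chain, and all names are illustrative.

```python
import math

def die_away_time(times, counts):
    """Estimate tau from counts N(t) ~ N0 * exp(-t / tau) by an ordinary
    least-squares fit of log(N) against t; tau = -1 / slope."""
    n = len(times)
    ys = [math.log(c) for c in counts]
    xbar = sum(times) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(times, ys))
             / sum((x - xbar) ** 2 for x in times))
    return -1.0 / slope
```

On noise-free synthetic data with tau = 50 (in whatever time unit the counts are binned), the fit recovers 50 to machine precision; with Poisson noise one would typically weight the fit or restrict it to the late-time window.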

  17. Control strategies for nitrous oxide emissions reduction on wastewater treatment plants operation.

    PubMed

    Santín, I; Barbu, M; Pedret, C; Vilanova, R

    2017-11-15

    The present paper focuses on reducing greenhouse gas emissions in wastewater treatment plant operation by application of suitable control strategies. Specifically, the objective is to reduce nitrous oxide emissions during the nitrification process. Incomplete nitrification in the aerobic tanks can lead to an accumulation of nitrite that triggers the nitrous oxide emissions. In order to avoid peaks of nitrous oxide emissions, this paper proposes a cascade control configuration that manipulates the dissolved oxygen set-points in the aerobic tanks. This control strategy is combined with the ammonia cascade control already applied in the literature, with the objective of also taking into account effluent pollutants and operational costs. In addition, other greenhouse gas emission sources are also evaluated. Results have been obtained by simulation, using a modified version of Benchmark Simulation Model no. 2 that takes into account greenhouse gas emissions, called Benchmark Simulation Model no. 2 Gas. The results show that the proposed control strategies are able to reduce nitrous oxide emissions by 29.86% compared to the default control strategy, while maintaining a satisfactory trade-off between water quality and costs.
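The cascade arrangement described above (an outer ammonia loop that outputs a dissolved-oxygen set-point, which an inner loop tracks by manipulating aeration) can be sketched with two textbook PI controllers. All gains, limits, and units below are illustrative assumptions, not the BSM2-Gas tuning.

```python
class PI:
    """Minimal discrete PI controller with output clamping (sketch)."""
    def __init__(self, kp, ki, dt, out_min, out_max):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        return max(self.out_min, min(self.out_max, u))

# Outer loop: effluent ammonia -> dissolved-oxygen set-point.  The gains are
# negative because raising DO lowers ammonia.  Inner loop: DO -> aeration
# intensity (KLa).  Numbers are placeholders, not a tuned configuration.
outer = PI(kp=-0.5, ki=-0.05, dt=0.25, out_min=0.5, out_max=3.0)
inner = PI(kp=25.0, ki=5.0, dt=0.25, out_min=0.0, out_max=240.0)

do_setpoint = outer.step(setpoint=1.0, measurement=2.4)  # NH4 above target
kla = inner.step(setpoint=do_setpoint, measurement=1.2)  # DO above new set-point
```

The clamps on the outer loop play the same role as the DO set-point limits in the paper's strategy: they keep aeration within a band that avoids both nitrite accumulation (too little oxygen) and wasted energy (too much).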

  18. Experimental validation of the TOPAS Monte Carlo system for passive scattering proton therapy

    PubMed Central

    Testa, M.; Schümann, J.; Lu, H.-M.; Shin, J.; Faddegon, B.; Perl, J.; Paganetti, H.

    2013-01-01

    Purpose: TOPAS (TOol for PArticle Simulation) is a particle simulation code recently developed with the specific aim of making Monte Carlo simulations user-friendly for research and clinical physicists in the particle therapy community. The authors present a thorough and extensive experimental validation of Monte Carlo simulations performed with TOPAS in a variety of setups relevant for proton therapy applications. The set of validation measurements performed in this work represents an overall end-to-end testing strategy recommended for all clinical centers planning to rely on TOPAS for quality assurance or patient dose calculation and, more generally, for all the institutions using passive-scattering proton therapy systems. Methods: The authors systematically compared TOPAS simulations with measurements that are performed routinely within the quality assurance (QA) program in our institution as well as experiments specifically designed for this validation study. First, the authors compared TOPAS simulations with measurements of depth-dose curves for spread-out Bragg peak (SOBP) fields. Second, absolute dosimetry simulations were benchmarked against measured machine output factors (OFs). Third, the authors simulated and measured 2D dose profiles and analyzed the differences in terms of field flatness and symmetry and usable field size. Fourth, the authors designed a simple experiment using a half-beam shifter to assess the effects of multiple Coulomb scattering, beam divergence, and inverse square attenuation on lateral and longitudinal dose profiles measured and simulated in a water phantom. Fifth, TOPAS' capability to simulate time-dependent beam delivery was benchmarked against dose rate functions (i.e., dose per unit time vs time) measured at different depths inside an SOBP field. 
Sixth, simulations of the charge deposited by protons fully stopping in two different types of multilayer Faraday cups (MLFCs) were compared with measurements to benchmark the nuclear interaction models used in the simulations. Results: SOBP range and modulation width were reproduced, on average, with an accuracy of +1, −2 and ±3 mm, respectively. OF simulations reproduced measured data within ±3%. Simulated 2D dose profiles show field flatness and average field radius within ±3% of measured profiles. The field symmetry was, on average, in ±3% agreement with commissioned profiles. TOPAS accuracy in reproducing measured dose profiles downstream of the half-beam shifter is better than 2%. Dose rate function simulations reproduced the measurements within ∼2%, showing that the four-dimensional modeling of the passive modulation system was implemented correctly and that millimeter accuracy can be achieved in reproducing measured data. For MLFC simulations, 2% agreement was found between TOPAS and both sets of experimental measurements. The overall results show that TOPAS simulations are within the clinically accepted tolerances for all QA measurements performed at our institution. Conclusions: Our Monte Carlo simulations accurately reproduced the experimental data acquired through all the measurements performed in this study. Thus, TOPAS can reliably be applied to quality assurance for proton therapy and also as an input for commissioning of commercial treatment planning systems. This work also provides the basis for routine clinical dose calculations in patients for all passive-scattering proton therapy centers using TOPAS. PMID:24320505
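
    The percent-tolerance comparisons quoted above (OFs within ±3%, profiles within ∼2%) amount to a simple QA-style check between measured and simulated curves. A minimal Python sketch, assuming dose curves sampled at matching depths and differences normalized to the measured maximum (the function and data are illustrative, not part of TOPAS):

```python
import numpy as np

def within_tolerance(measured, simulated, tol_pct=3.0):
    """Return (passes, worst % difference), normalizing the
    point-by-point difference to the measured maximum dose."""
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    diff_pct = 100.0 * np.abs(simulated - measured) / measured.max()
    return bool(np.all(diff_pct <= tol_pct)), float(diff_pct.max())

# Hypothetical SOBP-like depth-dose samples (arbitrary units)
measured = [10.0, 50.0, 100.0, 100.0, 95.0, 20.0]
simulated = [10.2, 51.0, 99.0, 101.0, 94.0, 19.5]
ok, worst = within_tolerance(measured, simulated, tol_pct=3.0)
```

    A gamma-index comparison would be the more rigorous clinical choice; this percent-difference check is only the simplest variant.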

  19. Pretest and posttest calculations of Semiscale Test S-07-10D with the TRAC computer program. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duerre, K.H.; Cort, G.E.; Knight, T.D.

    The Transient Reactor Analysis Code (TRAC) developed at the Los Alamos National Laboratory was used to predict the behavior of the small-break experiment designated Semiscale S-07-10D. This test simulates a 10 percent communicative cold-leg break with delayed Emergency Core Coolant injection and blowdown of the broken-loop steam generator secondary. Both pretest calculations, which incorporated measured initial conditions, and posttest calculations, which incorporated measured initial conditions as well as measured transient boundary conditions, were completed. The posttest calculated parameters were generally between those obtained from pretest calculations and those from the test data. The results are strongly dependent on depressurization rate and, hence, on break flow.

  20. Stress corrosion crack initiation of alloy 600 in PWR primary water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Ziqing; Toloczko, Mychailo B.; Olszta, Matthew J.

    Stress corrosion crack (SCC) initiation of three mill-annealed (MA) alloy 600 heats in simulated pressurized water reactor primary water has been investigated using constant load tests equipped with in-situ direct current potential drop (DCPD) measurement capabilities. SCC initiation times were greatly reduced by a small amount of cold work. Shallow intergranular (IG) attack and/or cracks were found on most high-energy grain boundaries intersecting the surface with only a small fraction evolving into larger cracks and IGSCC growth. Crack depth profiles were measured and related to DCPD-detected initiation response. Processes controlling the SCC initiation in MA alloy 600 are discussed.

  1. Characterization and corrosion behavior of F6NM stainless steel treated in high temperature water

    NASA Astrophysics Data System (ADS)

    Li, Zheng-yang; Cai, Zhen-bing; Yang, Wen-jin; Shen, Xiao-yao; Xue, Guo-hong; Zhu, Min-hao

    2018-03-01

    F6NM martensitic stainless steel was exposed to 350 °C water for 500, 1500, and 2500 h to simulate pressurized water reactor (PWR) conditions. The characterization and corrosion behavior of the oxide film were investigated. Results indicate that the exposed steel surface formed a double-layer oxide film. The outer oxide film is Fe-rich and contains two types of oxide particles, whereas the inner oxide film is Cr-rich; the thicknesses of both oxide films increase with increasing exposure time. The oxide film provides only limited corrosion protection because the outer oxide film has many cracks and pores. Finally, the mechanism and factors affecting the formation of the oxide film were investigated.

  2. Industry Application ECCS / LOCA Integrated Cladding/Emergency Core Cooling System Performance: Demonstration of LOTUS-Baseline Coupled Analysis of the South Texas Plant Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hongbin; Szilard, Ronaldo; Epiney, Aaron

    Under the auspices of the DOE LWRS Program RISMC Industry Application ECCS/LOCA, INL has engaged staff from both South Texas Project (STP) and the Texas A&M University (TAMU) to produce a generic pressurized water reactor (PWR) model including reactor core, clad/fuel design and systems thermal-hydraulics based on the STP nuclear power plant, a 4-Loop Westinghouse PWR. A RISMC toolkit, named LOCA Toolkit for the U.S. (LOTUS), has been developed for use in this generic PWR plant model to assess safety margins for the proposed NRC 10 CFR 50.46c rule on Emergency Core Cooling System (ECCS) performance during LOCA. This demonstration includes coupled analysis of core design, fuel design, thermal-hydraulics and systems analysis, using advanced risk analysis tools and methods to investigate a wide range of results. Within this context, a multi-physics best estimate plus uncertainty (MPBEPU) methodology framework is proposed.

  3. High-temperature Gas Reactor (HTGR)

    NASA Astrophysics Data System (ADS)

    Abedi, Sajad

    2011-05-01

    General Atomics (GA) has over 35 years of experience in prismatic-block High-temperature Gas Reactor (HTGR) technology design. The design has recently evolved into a modular concept, and fuel cycle studies have been performed to demonstrate its versatility. This versatility is directly related to the refractory TRISO coated-particle fuel, which can contain any type of fuel. This paper summarizes GA's fuel cycle studies individually and compares each, based upon its cycle sustainability, proliferation-resistance capabilities, and other performance data, against pressurized water reactor (PWR) fuel cycle data. The fuel cycle studies cover commercial LEU-NV, commercial HEU-Th, commercial LEU-Th, weapons-grade plutonium consumption, and burning of LWR waste including plutonium and minor actinides in the MHR. Results show that all commercial MHR options, with the exception of HEU-Th, are more sustainable than a PWR fuel cycle, with LEU-NV being the most sustainable commercial option. In addition, all commercial MHR options outperform the PWR with regard to proliferation resistance, with the thorium fuel cycle having the best proliferation-resistance characteristics.

  4. Review of PWR fuel rod waterside corrosion behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzarolli, F.; Jorde, D.; Manzel, R.

    Waterside corrosion of Zircaloy has generally not been a problem under normal PWR operating conditions, although some instances of accelerated corrosion have been reported. However, an incentive exists to extend the average fuel rod discharge burnups to about 50,000 MWd/MTU. To minimize corrosion at these extended burnups, the factors which influence Zircaloy corrosion need to be better understood. A data base of Zircaloy corrosion behavior under PWR operating conditions has been established. The data are compiled from previously published reports as well as from new Kraftwerk Union examinations. A non-destructive eddy-current technique is used to measure the oxide layer thickness on fuel rods. Comparisons of measurements made using this eddy-current technique with those made by usual metallographic methods indicate good agreement. The data were evaluated by defining a fitting factor F which describes the increase in corrosion rate observed in-reactor over that observed from measurements of ex-reactor corrosion coupons.

  5. Manure nutrient management effects in the Leon River Watershed

    USDA-ARS?s Scientific Manuscript database

    The Leon River Watershed (LRW) in central Texas is a Benchmark and Special Emphasis watershed within the Conservation Effects Assessment Project (CEAP). Model simulations from 1977 through 2006 were used to evaluate six manure nutrient management scenarios that reflect reali...

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, Jay Prakash

    The objectives of this project are to calibrate the Advanced Experimental Fuel Counter (AEFC), benchmark MCNP simulations using experimental results, investigate the effects of change in fuel assembly geometry, and finally to show the boost in doubles count rates with 252Cf active sources due to the time correlated induced fission (TCIF) effect.

  7. Simulations of hypervelocity impacts for asteroid deflection studies

    NASA Astrophysics Data System (ADS)

    Heberling, T.; Ferguson, J. M.; Gisler, G. R.; Plesko, C. S.; Weaver, R.

    2016-12-01

    The possibility of kinetic-impact deflection of threatening near-Earth asteroids will be tested for the first time in the proposed AIDA (Asteroid Impact Deflection Assessment) mission, involving two independent spacecraft, NASA's DART (Double Asteroid Redirection Test) and ESA's AIM (Asteroid Impact Mission). The impact of the DART spacecraft onto the secondary of the binary asteroid 65803 Didymos, at a speed of 5 to 7 km/s, is expected to alter the mutual orbit by an observable amount. The velocity imparted to the secondary depends on the geometry and dynamics of the impact, and especially on the momentum enhancement factor, conventionally called beta. We use the Los Alamos hydrocodes Rage and Pagosa to estimate beta in laboratory-scale benchmark experiments and in the large-scale asteroid deflection test. Simulations are performed in two and three dimensions, using a variety of equations of state and strength models for both the lab-scale and large-scale cases. This work is being performed as part of a systematic benchmarking study for the AIDA mission that includes other hydrocodes.
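
    The role of the momentum enhancement factor can be made concrete with a one-line momentum balance: the along-track velocity change of the target scales linearly with beta. A hedged sketch (masses and speeds are illustrative placeholders, not mission values):

```python
def deflection_delta_v(m_impactor, v_impact, m_target, beta):
    """Velocity change imparted to the target along the impact direction.
    beta = 1 corresponds to a perfectly inelastic impact (no ejecta);
    beta > 1 adds the recoil momentum carried away by ejecta."""
    return beta * m_impactor * v_impact / m_target

# Illustrative numbers: a ~500 kg impactor at 6 km/s into a ~5e9 kg body
dv_no_ejecta = deflection_delta_v(500.0, 6000.0, 5.0e9, beta=1.0)
dv_enhanced = deflection_delta_v(500.0, 6000.0, 5.0e9, beta=3.0)
```

    Estimating beta itself is the hard part and is what the hydrocode simulations above are for; this sketch only shows how beta propagates into the orbit change.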

  8. Adaptive Grid Refinement for Atmospheric Boundary Layer Simulations

    NASA Astrophysics Data System (ADS)

    van Hooft, Antoon; van Heerwaarden, Chiel; Popinet, Stephane; van der linden, Steven; de Roode, Stephan; van de Wiel, Bas

    2017-04-01

    We validate and benchmark an adaptive mesh refinement (AMR) algorithm for numerical simulations of the atmospheric boundary layer (ABL). The AMR technique aims to distribute the computational resources efficiently over a domain by refining and coarsening the numerical grid locally and in time. This can be beneficial for studying cases in which length scales vary significantly in time and space. We present the results for a case describing the growth and decay of a convective boundary layer. The AMR results are benchmarked against two runs using a fixed, finely meshed grid: first, with the same numerical formulation as the AMR code and, second, with a code dedicated to ABL studies. Compared to the fixed and isotropic grid runs, the AMR algorithm can coarsen and refine the grid such that accurate results are obtained whilst using only a fraction of the grid cells. Performance-wise, the AMR run was cheaper than the fixed and isotropic grid run with similar numerical formulations. However, for this specific case, the dedicated code outperformed both aforementioned runs.
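
    The refine-where-needed idea behind AMR can be illustrated in one dimension: insert grid points only where the solution varies sharply. A toy sketch assuming a simple jump-based refinement criterion (production AMR codes use wavelet or error estimates, coarsen as well as refine, and work in three dimensions):

```python
import numpy as np

def refine_1d(x, fn, tol):
    """One refinement pass: insert a midpoint wherever the jump in fn
    between neighbouring nodes exceeds tol."""
    x = np.asarray(x, dtype=float)
    f = fn(x)
    new = [x[0]]
    for i in range(1, len(x)):
        if abs(f[i] - f[i - 1]) > tol:
            new.append(0.5 * (x[i - 1] + x[i]))
        new.append(x[i])
    return np.array(new)

# A sharp front near x = 0 attracts extra points; smooth regions do not.
coarse = np.linspace(-1.0, 1.0, 9)
fine = refine_1d(coarse, lambda x: np.tanh(10 * x), tol=0.5)
```

    Iterating such passes concentrates resolution on the front while the quiescent regions keep the coarse spacing, which is the source of the cell-count savings reported above.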

  9. Benchmarking electrophysiological models of human atrial myocytes

    PubMed Central

    Wilhelms, Mathias; Hettmann, Hanne; Maleckar, Mary M.; Koivumäki, Jussi T.; Dössel, Olaf; Seemann, Gunnar

    2013-01-01

    Mathematical modeling of cardiac electrophysiology is an insightful method to investigate the underlying mechanisms responsible for arrhythmias such as atrial fibrillation (AF). In past years, five models of human atrial electrophysiology with different formulations of ionic currents, and consequently diverging properties, have been published. The aim of this work is to give an overview of strengths and weaknesses of these models depending on the purpose and the general requirements of simulations. Therefore, these models were systematically benchmarked with respect to general mathematical properties and their ability to reproduce certain electrophysiological phenomena, such as action potential (AP) alternans. To assess the models' ability to replicate modified properties of human myocytes and tissue in cardiac disease, electrical remodeling in chronic atrial fibrillation (cAF) was chosen as test case. The healthy and remodeled model variants were compared with experimental results in single-cell, 1D and 2D tissue simulations to investigate AP and restitution properties, as well as the initiation of reentrant circuits. PMID:23316167

  10. Similarity indices of meteo-climatic gauging stations: definition and comparison.

    PubMed

    Barca, Emanuele; Bruno, Delia Evelina; Passarella, Giuseppe

    2016-07-01

    Space-time dependencies among monitoring network stations have been investigated to detect and quantify similarity relationships among gauging stations. In this work, besides the well-known rank correlation index, two new similarity indices have been defined and applied to compute the similarity matrix related to the Apulian meteo-climatic monitoring network. The similarity matrices can be applied to reliably address the issue of missing data in space-time series. In order to establish the effectiveness of the similarity indices, a simulation test was then designed and performed with the aim of estimating missing monthly rainfall rates in a suitably selected gauging station. The results of the simulation allowed us to evaluate the effectiveness of the proposed similarity indices. Finally, the multiple imputation by chained equations method was used as a benchmark to provide an absolute yardstick for comparing the outcomes of the test. In conclusion, the newly proposed multiplicative similarity index proved at least as reliable as the selected benchmark.
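
    The gap-filling strategy can be sketched generically: score donor stations by a similarity index over the jointly observed months, then take missing values from the most similar donor. The rank-correlation index below is a simple stand-in; the paper's two new indices (including the multiplicative one) are not reproduced here:

```python
import numpy as np

def rank_corr(x, y):
    """Spearman-style similarity: Pearson correlation of ranks."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

def impute_from_most_similar(series, donors):
    """Fill NaN gaps in `series` with values from the donor station
    that is most similar on the jointly observed months."""
    series = np.asarray(series, dtype=float)
    gaps = np.isnan(series)
    obs = ~gaps
    sims = [rank_corr(series[obs], d[obs]) for d in donors]
    best = donors[int(np.argmax(sims))]
    filled = series.copy()
    filled[gaps] = best[gaps]
    return filled

station = np.array([12.0, 30.0, np.nan, 55.0, 8.0])   # monthly rainfall, one gap
donor_a = np.array([11.0, 28.0, 40.0, 50.0, 9.0])     # similar regime
donor_b = np.array([50.0, 9.0, 30.0, 8.0, 60.0])      # dissimilar regime
filled = impute_from_most_similar(station, [donor_a, donor_b])
```

    A production scheme would rescale the donated values and pool several donors, as multiple-imputation benchmarks such as MICE do.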

  11. An experimental phylogeny to benchmark ancestral sequence reconstruction

    PubMed Central

    Randall, Ryan N.; Radford, Caelan E.; Roof, Kelsey A.; Natarajan, Divya K.; Gaucher, Eric A.

    2016-01-01

    Ancestral sequence reconstruction (ASR) is a still-burgeoning method that has revealed many key mechanisms of molecular evolution. One criticism of the approach is an inability to validate its algorithms within a biological context as opposed to a computer simulation. Here we build an experimental phylogeny using the gene of a single red fluorescent protein to address this criticism. The evolved phylogeny consists of 19 operational taxonomic units (leaves) and 17 ancestral bifurcations (nodes) that display a wide variety of fluorescent phenotypes. The 19 leaves then serve as ‘modern' sequences that we subject to ASR analyses using various algorithms and benchmark against the known ancestral genotypes and phenotypes. We confirm computer simulations that show all algorithms infer ancient sequences with high accuracy, yet we also reveal wide variation in the phenotypes encoded by incorrectly inferred sequences. Specifically, Bayesian methods incorporating rate variation significantly outperform the maximum parsimony criterion in phenotypic accuracy. Subsampling of extant sequences had a minor effect on the inference of ancestral sequences. PMID:27628687

  12. The MCUCN simulation code for ultracold neutron physics

    NASA Astrophysics Data System (ADS)

    Zsigmond, G.

    2018-02-01

    Ultracold neutrons (UCN) have very low kinetic energies (0-300 neV) and can therefore be stored in specific material or magnetic confinements for many hundreds of seconds. This makes them a very useful tool in probing fundamental symmetries of nature (for instance, charge-parity violation probed by neutron electric dipole moment experiments) and in contributing important parameters for Big Bang nucleosynthesis (neutron lifetime measurements). Improved precision experiments are under construction at new and planned UCN sources around the world. MC simulations play an important role in the optimization of such systems with a large number of parameters, but also in the estimation of systematic effects, in the benchmarking of analysis codes, and as part of the analysis. The MCUCN code written at PSI has been extensively used for the optimization of the UCN source optics and in the optimization and analysis of (test) experiments within the nEDM project based at PSI. In this paper we present the main features of MCUCN and interesting benchmark and application examples.

  13. Mathematical model and metaheuristics for simultaneous balancing and sequencing of a robotic mixed-model assembly line

    NASA Astrophysics Data System (ADS)

    Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter

    2018-05-01

    This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using the CPLEX solver, small-size problems are solved to optimality. Two metaheuristics, the restarted simulated annealing algorithm and the co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original versions and the benchmarked methods. The proposed algorithms are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.
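
    The restart mechanism can be sketched generically: anneal as usual, but re-heat the temperature after a run of non-improving steps. A minimal Python sketch on a toy objective (the paper's RMALB/S encoding, neighbourhood moves and parameters are far richer than this):

```python
import math
import random

def restarted_sa(cost, neighbor, x0, t0=1.0, alpha=0.95,
                 restart_after=50, iters=2000, seed=0):
    """Minimize `cost` by simulated annealing; after `restart_after`
    consecutive non-improving steps, reset the temperature to t0
    (a simple stand-in for the paper's restart strategy)."""
    rng = random.Random(seed)
    x, best = x0, x0
    t, stall = t0, 0
    for _ in range(iters):
        y = neighbor(x, rng)
        d = cost(y) - cost(x)
        if d <= 0 or rng.random() < math.exp(-d / max(t, 1e-12)):
            x = y
        if cost(x) < cost(best):
            best, stall = x, 0
        else:
            stall += 1
        if stall >= restart_after:
            t, stall = t0, 0      # restart: re-heat the search
        else:
            t *= alpha
    return best

# Toy 1-D objective with local minima
f = lambda x: (x - 3.0) ** 2 + math.sin(5 * x)
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x_best = restarted_sa(f, step, x0=-5.0)
```

    Re-heating lets the walk escape the basin it has stagnated in, which is what plain geometric cooling cannot do once the temperature has collapsed.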

  14. Hybrid stochastic simulation of reaction-diffusion systems with slow and fast dynamics.

    PubMed

    Strehl, Robert; Ilie, Silvana

    2015-12-21

    In this paper, we present a novel hybrid method to simulate discrete stochastic reaction-diffusion models arising in biochemical signaling pathways. We study moderately stiff systems, for which we can partition each reaction or diffusion channel into either a slow or fast subset, based on its propensity. Numerical approaches missing this distinction are often limited with respect to computational run time or approximation quality. We design an approximate scheme that remedies these pitfalls by using a new blending strategy of the well-established inhomogeneous stochastic simulation algorithm and the tau-leaping simulation method. The advantages of our hybrid simulation algorithm are demonstrated on three benchmarking systems, with special focus on approximation accuracy and efficiency.
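
    The propensity-based partitioning at the heart of such hybrid schemes can be sketched directly. Assuming a fixed threshold (real implementations re-partition adaptively as propensities evolve) and a plain tau-leap for the fast subset:

```python
import numpy as np

def partition_channels(propensities, threshold):
    """Split channels into slow (exact SSA treatment) and fast
    (tau-leaping treatment) subsets by propensity magnitude."""
    slow = [i for i, a in enumerate(propensities) if a < threshold]
    fast = [i for i, a in enumerate(propensities) if a >= threshold]
    return slow, fast

def tau_leap_step(state, stoich, propensity_fns, fast_set, tau, rng):
    """Advance only the fast channels over one leap of length tau:
    channel j fires Poisson(a_j(state) * tau) times."""
    x = state.astype(float).copy()
    for j in fast_set:
        k = rng.poisson(propensity_fns[j](state) * tau)
        x += k * stoich[j]
    return x

slow, fast = partition_channels([0.1, 5.0, 0.01, 20.0], threshold=1.0)

# Degenerate check: a channel with zero propensity never fires.
rng = np.random.default_rng(0)
state = np.array([100.0, 0.0])
stoich = [np.array([-1.0, 1.0])]          # A -> B
after = tau_leap_step(state, stoich, [lambda s: 0.0], [0], 0.1, rng)
```

    The slow subset would then be advanced with an exact SSA between leaps; blending the two update rules consistently is the substance of the hybrid method.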

  15. Two-Dimensional Self-Consistent Radio Frequency Plasma Simulations Relevant to the Gaseous Electronics Conference RF Reference Cell

    PubMed Central

    Lymberopoulos, Dimitris P.; Economou, Demetre J.

    1995-01-01

    Over the past few years multidimensional self-consistent plasma simulations including complex chemistry have been developed which are promising tools for furthering our understanding of reactive gas plasmas and for reactor design and optimization. These simulations must be benchmarked against experimental data obtained in well-characterized systems such as the Gaseous Electronics Conference (GEC) reference cell. Two-dimensional simulations relevant to the GEC Cell are reviewed in this paper with emphasis on fluid simulations. Important features observed experimentally, such as off-axis maxima in the charge density and hot spots of metastable species density near the electrode edges in capacitively-coupled GEC cells, have been captured by these simulations. PMID:29151756

  16. New Multi-group Transport Neutronics (PHISICS) Capabilities for RELAP5-3D and its Application to Phase I of the OECD/NEA MHTGR-350 MW Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Cristian Rabiti; Andrea Alfonsi

    2012-10-01

    PHISICS is a neutronics code system currently under development at the Idaho National Laboratory (INL). Its goal is to provide state-of-the-art simulation capability to reactor designers. The different modules for PHISICS currently under development are a nodal and semi-structured transport core solver (INSTANT), a depletion module (MRTAU) and a cross section interpolation (MIXER) module. The INSTANT module is the most developed of the modules mentioned above. Basic functionalities are ready to use, but the code is still in continuous development to extend its capabilities. This paper reports on the effort of coupling the nodal kinetics code package PHISICS (INSTANT/MRTAU/MIXER) to the thermal-hydraulics system code RELAP5-3D, to enable full core and system modeling. This will enable modeling of coupled (thermal-hydraulics and neutronics) problems with more options for 3D neutron kinetics, compared to the existing diffusion theory neutron kinetics module in RELAP5-3D (NESTLE). In the second part of the paper, an overview of the OECD/NEA MHTGR-350 MW benchmark is given. This benchmark has been approved by the OECD, and is based on the General Atomics 350 MW Modular High Temperature Gas Reactor (MHTGR) design. The benchmark includes coupled neutronics/thermal-hydraulics exercises that require more capabilities than RELAP5-3D with NESTLE offers. Therefore, the MHTGR benchmark makes extensive use of the new PHISICS/RELAP5-3D coupling capabilities. The paper presents the preliminary results of the three steady-state exercises specified in Phase I of the benchmark using PHISICS/RELAP5-3D.

  17. Computer simulation of multigrid body dynamics and control

    NASA Technical Reports Server (NTRS)

    Swaminadham, M.; Moon, Young I.; Venkayya, V. B.

    1990-01-01

    The objective is to set up and analyze benchmark problems on multibody dynamics and to verify the predictions of two multibody computer simulation codes. TREETOPS and DISCOS have been used to run three example problems - a one-degree-of-freedom spring-mass-dashpot system, an inverted pendulum system, and a triple pendulum. To study the dynamics and control interaction, an inverted planar pendulum with an external body force and a torsional control spring was modeled as a hinge-connected two-rigid-body system. TREETOPS and DISCOS performed the time-history simulation of this problem. System state-space variables and their time derivatives from the two simulation codes were compared.

  18. AGREEMENT AND COVERAGE OF INDICATORS OF RESPONSE TO INTERVENTION: A MULTI-METHOD COMPARISON AND SIMULATION

    PubMed Central

    Fletcher, Jack M.; Stuebing, Karla K.; Barth, Amy E.; Miciak, Jeremy; Francis, David J.; Denton, Carolyn A.

    2013-01-01

    Purpose Agreement across methods for identifying students as inadequate responders or as learning disabled is often poor. We report (1) an empirical examination of final status (post-intervention benchmarks) and dual-discrepancy growth methods based on growth during the intervention and final status for assessing response to intervention; and (2) a statistical simulation of psychometric issues that may explain low agreement. Methods After a Tier 2 intervention, final status benchmark criteria were used to identify 104 inadequate and 85 adequate responders to intervention, with comparisons of agreement and coverage for these methods and a dual-discrepancy method. Factors affecting agreement were investigated using computer simulation to manipulate reliability, the intercorrelation between measures, cut points, normative samples, and sample size. Results Identification of inadequate responders based on individual measures showed that single measures tended not to identify many members of the pool of 104 inadequate responders. Poor to fair levels of agreement for identifying inadequate responders were apparent between pairs of measures. In the simulation, comparisons across two simulated measures generated indices of agreement (kappa) that were generally low because of multiple psychometric issues inherent in any test. Conclusions Expecting excellent agreement between two correlated tests with even small amounts of unreliability may not be realistic. Assessing outcomes based on multiple measures, such as level of CBM performance and short norm-referenced assessments of fluency, may improve the reliability of diagnostic decisions. PMID:25364090
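
    The psychometric argument, that two correlated but imperfectly reliable tests agree only modestly at a cut point, is easy to reproduce in simulation. A sketch assuming a shared latent trait with classical-test-theory noise (parameter values are illustrative, not the paper's):

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two binary classifications."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                                   # observed agreement
    pe = np.mean(a) * np.mean(b) + (1 - np.mean(a)) * (1 - np.mean(b))
    return float((po - pe) / (1 - pe))

def simulate_agreement(n=100000, reliability=0.8, cut=-1.0, seed=0):
    """Two tests measuring the same latent trait; with test = true + e
    and var(true) = 1, reliability = 1 / (1 + var(e)). Both flag
    'inadequate responder' below the cut; returns kappa of the decisions."""
    rng = np.random.default_rng(seed)
    true = rng.standard_normal(n)
    noise = np.sqrt(1.0 / reliability - 1.0)
    t1 = true + noise * rng.standard_normal(n)
    t2 = true + noise * rng.standard_normal(n)
    return cohens_kappa(t1 < cut, t2 < cut)

kappa = simulate_agreement()
```

    Raising the reliability raises kappa, mirroring the conclusion that unreliability alone caps agreement between correlated measures.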

  19. ff14ipq: A Self-Consistent Force Field for Condensed-Phase Simulations of Proteins

    PubMed Central

    2015-01-01

    We present the ff14ipq force field, implementing the previously published IPolQ charge set for simulations of complete proteins. Minor modifications to the charge derivation scheme and van der Waals interactions between polar atoms are introduced. Torsion parameters are developed through a generational learning approach, based on gas-phase MP2/cc-pVTZ single-point energies computed for structures optimized by the force field itself rather than by the quantum benchmark. In this manner, we sacrifice information about the true quantum minima in order to ensure that the force field maintains optimal agreement with the MP2/cc-pVTZ benchmark for the ensembles it will actually produce in simulations. A means of making the gas-phase torsion parameters compatible with solution-phase IPolQ charges is presented. The ff14ipq model is an alternative to ff99SB and other Amber force fields for protein simulations in programs that accommodate pair-specific Lennard–Jones combining rules. The force field gives strong performance on α-helical and β-sheet oligopeptides as well as globular proteins over microsecond time scale simulations, although it has not yet been tested in conjunction with lipid and nucleic acid models. We show how our choices in parameter development influence the resulting force field and how other choices that may have appeared reasonable would actually have led to poorer results. The tools we developed may also aid in the development of future fixed-charge and even polarizable biomolecular force fields. PMID:25328495

  20. Benchmarking the evaluated proton differential cross sections suitable for the EBS analysis of natSi and 16O

    NASA Astrophysics Data System (ADS)

    Kokkoris, M.; Dede, S.; Kantre, K.; Lagoyannis, A.; Ntemou, E.; Paneta, V.; Preketes-Sigalas, K.; Provatas, G.; Vlastou, R.; Bogdanović-Radović, I.; Siketić, Z.; Obajdin, N.

    2017-08-01

    The evaluated proton differential cross sections suitable for the Elastic Backscattering Spectroscopy (EBS) analysis of natSi and 16O, as obtained from SigmaCalc 2.0, have been benchmarked over a wide energy and angular range at two different accelerator laboratories, namely at N.C.S.R. 'Demokritos', Athens, Greece and at Ruđer Bošković Institute (RBI), Zagreb, Croatia, using a variety of high-purity thick targets of known stoichiometry. The results are presented in graphical and tabular forms, while the observed discrepancies, as well as, the limits in accuracy of the benchmarking procedure, along with target related effects, are thoroughly discussed and analysed. In the case of oxygen the agreement between simulated and experimental spectra was generally good, while for silicon serious discrepancies were observed above Ep,lab = 2.5 MeV, suggesting that a further tuning of the appropriate nuclear model parameters in the evaluated differential cross-section datasets is required.

  1. OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS & HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alan Black; Arnis Judzis

    2004-10-01

    The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit-fluid concepts, modify as necessary and commercialize products. As of report date, TerraTek has concluded all major preparations for the high pressure drilling campaign. Baker Hughes encountered difficulties in providing additional pumping capacity before TerraTek's scheduled relocation to another facility, thus the program was delayed further to accommodate the full testing program.

  2. Consideration of Real World Factors Influencing Greenhouse Gas Emissions in ALPHA

    EPA Science Inventory

    Discuss a variety of factors that influence the simulated fuel economy and GHG emissions that are often overlooked and updates made to ALPHA based on actual benchmarking data observed across a range of vehicles and transmissions. ALPHA model calibration is also examined, focusin...

  3. NRL 1989 Beam Propagation Studies in Support of the ATA Multi-Pulse Propagation Experiment

    DTIC Science & Technology

    1990-08-31

    papers presented here were all written prior to the completion of the experiment. The first of these papers presents simulation results which modeled ...beam stability and channel evolution for an entire five pulse burst. The second paper describes a new air chemistry model used in the SARLAC...Experiment: A new air chemistry model for use in the propagation codes simulating the MPPE was developed by making analytic fits to benchmark runs with

  4. A partial entropic lattice Boltzmann MHD simulation of the Orszag-Tang vortex

    NASA Astrophysics Data System (ADS)

    Flint, Christopher; Vahala, George

    2018-02-01

    Karlin has introduced an analytically determined entropic lattice Boltzmann (LB) algorithm for Navier-Stokes turbulence. Here, this is partially extended to an LB model of magnetohydrodynamics, on using the vector distribution function approach of Dellar for the magnetic field (which is permitted to have field reversal). The partial entropic algorithm is benchmarked successfully against standard simulations of the Orszag-Tang vortex [Orszag, S.A.; Tang, C.M. J. Fluid Mech. 1979, 90 (1), 129-143].

  5. MCNP modelling of scintillation-detector gamma-ray spectra from natural radionuclides.

    PubMed

    Hendriks, P H G M; Maucec, M; de Meijer, R J

    2002-09-01

    Gamma-ray spectra of natural radionuclides are simulated for a BGO detector in a borehole geometry using the Monte Carlo code MCNP. All gamma-ray emissions of the decay of 40K and the series of 232Th and 238U are used to describe the source. A procedure is proposed which excludes the time-consuming electron tracking in less relevant areas of the geometry. The simulated gamma-ray spectra are benchmarked against laboratory data.

  6. Target Lagrangian kinematic simulation for particle-laden flows.

    PubMed

    Murray, S; Lightstone, M F; Tullis, S

    2016-09-01

    The target Lagrangian kinematic simulation method was motivated as a stochastic Lagrangian particle model that better synthesizes turbulence structure, relative to stochastic separated flow models. By this method, the trajectories of particles are constructed according to synthetic turbulent-like fields, which conform to a target Lagrangian integral timescale. In addition to recovering the expected Lagrangian properties of fluid tracers, this method is shown to reproduce the crossing trajectories and continuity effects, in agreement with an experimental benchmark.
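
    A common minimal model that conforms to a target Lagrangian integral timescale is the Ornstein-Uhlenbeck process, whose velocity autocorrelation decays as exp(-t/T_L). The sketch below illustrates only the target-timescale idea; it is not the authors' kinematic-simulation machinery, which synthesizes turbulent-like fields:

```python
import numpy as np

def ou_velocity(n_steps, dt, t_l, sigma, seed=0):
    """Discrete Ornstein-Uhlenbeck velocity series with stationary std
    `sigma` and Lagrangian integral timescale `t_l` (autocorrelation
    decays as exp(-t / t_l))."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / t_l)
    b = sigma * np.sqrt(1.0 - a * a)
    u = np.empty(n_steps)
    u[0] = sigma * rng.standard_normal()
    for i in range(1, n_steps):
        u[i] = a * u[i - 1] + b * rng.standard_normal()
    return u

u = ou_velocity(n_steps=200_000, dt=0.01, t_l=1.0, sigma=1.0)
# Autocorrelation at one integral timescale (lag = t_l / dt) should be ~exp(-1)
rho = float(np.corrcoef(u[:-100], u[100:])[0, 1])
```

    Because the OU process carries no spatial structure, it cannot reproduce crossing-trajectory or continuity effects, which is precisely the gap the target kinematic-simulation approach addresses.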

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faidy, C.

Practical applications of the leak-before-break concept are presently limited in French Pressurized Water Reactors (PWR) compared to Fast Breeder Reactors. Nevertheless, different fracture mechanics demonstrations have been performed on various primary, auxiliary and secondary PWR piping systems, based on requirements similar to the American NUREG 1061 specifications. The consequences of success in these demonstrations are still under discussion for inclusion in the global safety assessment of the plants, such as the consequences for in-service inspections, leak detection systems, support optimization, etc. A large research and development program, realized in different co-operative agreements, completes the general approach.

  8. Planar Monolithic Schottky Varactor Diode Millimeter-Wave Frequency Multipliers

    DTIC Science & Technology

    1992-06-01

Cited reference: "...wave applications," IEEE Trans. on Microwave Theory and Tech., vol. 39, no. 12, Dec. 1991, pp. 1964-1971. (The remainder of this extracted abstract is unrecoverable tabular text from the original report.)

  9. Effects of Lower Drying-Storage Temperature on the Ductility of High-Burnup PWR Cladding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Billone, M. C.; Burtseva, T. A.

    2016-08-30

    The purpose of this research effort is to determine the effects of canister and/or cask drying and storage on radial hydride precipitation in, and potential embrittlement of, high-burnup (HBU) pressurized water reactor (PWR) cladding alloys during cooling for a range of peak drying-storage temperatures (PCT) and hoop stresses. Extensive precipitation of radial hydrides could lower the failure hoop stresses and strains, relative to limits established for as-irradiated cladding from discharged fuel rods stored in pools, at temperatures below the ductile-to-brittle transition temperature (DBTT).

  10. Assessment of RELAP5/MOD2 against a pressurizer spray valve inadvertent fully-opening transient and recovery by natural circulation in Jose Cabrera Nuclear Station

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arroyo, R.; Rebollo, L.

    1993-06-01

This document presents a comparison between simulation results and plant measurements for a real event that took place at the JOSE CABRERA nuclear power plant on August 30, 1984. The event was caused by the total, continuous and inadvertent opening of the pressurizer spray valve PCV-400A. JOSE CABRERA is a single-loop Westinghouse PWR belonging to UNION ELECTRICA FENOSA, S.A. (UNION FENOSA), a Spanish utility which participates in the International Code Assessment and Applications Program (ICAP) as a member of UNIDAD ELECTRICA, S.A. (UNESA). This is the second of its two contributions to the Program: the first was an application case and this is an assessment case. The simulation was performed using the RELAP5/MOD2 cycle 36.04 code, running on a CDC CYBER 180/830 computer under the NOS 2.5 operating system. The main phenomena were calculated correctly, and some conclusions were obtained about the 3D characteristics of the condensation due to the spray and its simulation with a 1D tool.

  11. Investigating the Transonic Flutter Boundary of the Benchmark Supercritical Wing

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Chwalowski, Pawel

    2017-01-01

This paper builds on the computational aeroelastic results published previously and generated in support of the second Aeroelastic Prediction Workshop for the NASA Benchmark Supercritical Wing configuration. The computational results are obtained using FUN3D, an unstructured-grid Reynolds-Averaged Navier-Stokes solver developed at the NASA Langley Research Center. The analysis results focus on understanding the dip in the transonic flutter boundary at a single Mach number (0.74), exploring an angle-of-attack range of -1 to 8 degrees and dynamic pressures from wind-off to beyond flutter onset. The rigid analysis results are examined for insights into the behavior of the aeroelastic system. Both static and dynamic aeroelastic simulation results are also examined.

  12. Test and Verification of AES Used for Image Encryption

    NASA Astrophysics Data System (ADS)

    Zhang, Yong

    2018-03-01

In this paper, an image encryption program based on AES in cipher block chaining (CBC) mode was designed in C. The encryption/decryption speed and security performance of the AES-based image cryptosystem were tested and used to compare the proposed cryptosystem with some existing chaos-based image cryptosystems. Simulation results show that AES can be applied to image encryption, which refutes the widely accepted view that AES is not suitable for image encryption. This paper also suggests taking the speed of AES-based image encryption as the speed benchmark for image encryption algorithms; algorithms slower than this benchmark should be discarded in practical communications.
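The speed-benchmark idea from the abstract can be sketched as a simple throughput measurement. The snippet below is illustrative only: it times a trivial XOR keystream stand-in (not AES; the paper's C implementation of AES in CBC mode would occupy that slot) and reports megabytes per second.

```python
import os
import time

def xor_stream_cipher(key: bytes, data: bytes) -> bytes:
    # Stand-in keystream cipher (NOT AES): repeats the key across the data.
    # A real AES-CBC routine would be plugged in here instead.
    k = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, k))

def throughput_mb_s(cipher, key: bytes, n_bytes: int = 1 << 20) -> float:
    """Measure encryption speed in MB/s on a random 'image' buffer --
    the kind of figure the paper proposes as a speed benchmark."""
    data = os.urandom(n_bytes)
    t0 = time.perf_counter()
    ct = cipher(key, data)
    dt = time.perf_counter() - t0
    # Round-trip check; holds for this XOR stand-in because it is its own inverse.
    assert cipher(key, ct) == data
    return (n_bytes / 1e6) / dt
```

Any candidate image cipher would be timed the same way and its figure compared against the AES baseline.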

  13. The International Experimental Thermal Hydraulic Systems database – TIETHYS: A new NEA validation tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohatgi, Upendra S.

Nuclear reactor codes require validation with appropriate data representing the plant for specific scenarios. The thermal-hydraulic data is scattered across different locations and formats, and some of it is in danger of being lost. A relational database is being developed to organize the international thermal-hydraulic test data for various reactor concepts and different scenarios. At the reactor system level, the data is organized to include separate-effect tests and integral-effect tests for specific scenarios and corresponding phenomena. The database relies on the phenomena identification sections of expert-developed PIRTs. The database will provide a summary of appropriate data, a review of facility information, test descriptions, instrumentation, references for the experimental data, and some examples of application of the data for validation. The current database platform includes scenarios for PWR, BWR, VVER, and specific benchmarks for CFD modelling data, and is to be expanded to include references for molten salt reactors. There are placeholders for high-temperature gas-cooled reactors, CANDU and liquid metal reactors. This relational database is called The International Experimental Thermal Hydraulic Systems (TIETHYS) database; it currently resides at the Nuclear Energy Agency (NEA) of the OECD and is freely open to public access. Going forward, the database will be extended to include additional links and data as they become available. https://www.oecd-nea.org/tiethysweb/

  14. NMRNet: A deep learning approach to automated peak picking of protein NMR spectra.

    PubMed

    Klukowski, Piotr; Augoff, Michal; Zieba, Maciej; Drwal, Maciej; Gonczarek, Adam; Walczak, Michal J

    2018-03-14

Automated selection of signals in protein NMR spectra, known as peak picking, has been studied for over 20 years; nevertheless, existing peak picking methods are still largely deficient. Accurate and precise automated peak picking would accelerate structure calculation and the analysis of dynamics and interactions of macromolecules. Recent advances in handling big data, together with an outburst of machine learning techniques, offer an opportunity to tackle the peak picking problem substantially faster than manual picking and on par with human accuracy. In particular, deep learning has proven to systematically achieve human-level performance in various recognition tasks, and thus emerges as an ideal tool to address automated identification of NMR signals. We have applied a convolutional neural network for visual analysis of multidimensional NMR spectra. A comprehensive test on 31 manually-annotated spectra has demonstrated top-tier average precision (AP) of 0.9596, 0.9058 and 0.8271 for backbone, side-chain and NOESY spectra, respectively. Furthermore, a combination of extracted peak lists with the automated assignment routine FLYA outperformed other methods, including the manual one, and led to correct resonance assignment at levels of 90.40%, 89.90% and 90.20% for three benchmark proteins. The proposed model is part of the Dumpling software (a platform for protein NMR data analysis), and is available at https://dumpling.bio/. Contact: michaljerzywalczak@gmail.com, piotr.klukowski@pwr.edu.pl. Supplementary data are available at Bioinformatics online.

  15. The Eighth Industrial Fluids Properties Simulation Challenge

    PubMed Central

    Schultz, Nathan E.; Ahmad, Riaz; Brennan, John K.; Frankel, Kevin A.; Moore, Jonathan D.; Moore, Joshua D.; Mountain, Raymond D.; Ross, Richard B.; Thommes, Matthias; Shen, Vincent K.; Siderius, Daniel W.; Smith, Kenneth D.

    2016-01-01

    The goal of the eighth industrial fluid properties simulation challenge was to test the ability of molecular simulation methods to predict the adsorption of organic adsorbates in activated carbon materials. In particular, the eighth challenge focused on the adsorption of perfluorohexane in the activated carbon BAM-109. Entrants were challenged to predict the adsorption in the carbon at 273 K and relative pressures of 0.1, 0.3, and 0.6. The predictions were judged by comparison to a benchmark set of experimentally determined values. Overall good agreement and consistency were found between the predictions of most entrants. PMID:27840542

  16. Monte Carlo errors with less errors

    NASA Astrophysics Data System (ADS)

    Wolff, Ulli; Alpha Collaboration

    2004-01-01

We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable for benchmarking the efficiency of simulation algorithms with regard to specific observables of interest. A Matlab code implementing the method is offered for download. It can also combine independent runs (replicas), allowing one to judge their consistency.
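The core of the method (summing the normalized autocorrelation function up to a self-consistently chosen window) can be sketched in Python. This is a hedged reimplementation of the general idea, not the authors' Matlab code; the windowing constant `c` and function names are assumptions.

```python
import numpy as np

def integrated_autocorr_time(x, c=5.0):
    """Estimate the integrated autocorrelation time tau_int of a 1-D
    Monte Carlo time series: tau = 1/2 + sum_t rho(t), truncated at a
    self-consistent window W satisfying W >= c * tau."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    # Autocovariance via FFT with zero-padding to avoid circular wrap-around.
    f = np.fft.rfft(x, n=2 * n)
    acov = np.fft.irfft(f * np.conjugate(f))[:n] / n
    rho = acov / acov[0]              # normalized autocorrelation function
    tau = 0.5
    for w in range(1, n):
        tau += rho[w]
        if w >= c * tau:              # stop once the window is "large enough"
            break
    return max(tau, 0.5)

def mc_error(x):
    """Standard error of the mean, inflated by the autocorrelation:
    err^2 = 2 * tau_int * var(x) / N."""
    tau = integrated_autocorr_time(x)
    return np.sqrt(2.0 * tau * np.var(x) / len(x))
```

For uncorrelated data tau_int is close to 1/2 and the error reduces to the naive estimate; for correlated chains the error grows by sqrt(2 tau_int).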

  17. CAFE simulation of columnar-to-equiaxed transition in Al-7wt%Si alloys directionally solidified under microgravity

    NASA Astrophysics Data System (ADS)

    Liu, D. R.; Mangelinck-Noël, N.; Gandin, Ch-A.; Zimmermann, G.; Sturz, L.; Nguyen Thi, H.; Billia, B.

    2016-03-01

    A two-dimensional multi-scale cellular automaton - finite element (CAFE) model is used to simulate grain structure evolution and microsegregation formation during solidification of refined Al-7wt%Si alloys under microgravity. The CAFE simulations are first qualitatively compared with the benchmark experimental data under microgravity. Qualitative agreement is obtained for the position of columnar to equiaxed transition (CET) and the CET transition mode (sharp or progressive). Further comparisons of the distributions of grain elongation factor and equivalent diameter are conducted and reveal a fair quantitative agreement.

  18. Integrated Disposal Facility FY 2016: ILAW Verification and Validation of the eSTOMP Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freedman, Vicky L.; Bacon, Diana H.; Fang, Yilin

    2016-05-13

    This document describes two sets of simulations carried out to further verify and validate the eSTOMP simulator. In this report, a distinction is made between verification and validation, and the focus is on verifying eSTOMP through a series of published benchmarks on cementitious wastes, and validating eSTOMP based on a lysimeter experiment for the glassified waste. These activities are carried out within the context of a scientific view of validation that asserts that models can only be invalidated, and that model validation (and verification) is a subjective assessment.

  19. A conservative approach to parallelizing the Sharks World simulation

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Riffe, Scott E.

    1990-01-01

Parallelizing a benchmark problem for parallel simulation, the Sharks World, is described. The described solution is conservative, in the sense that no state information is saved and no 'rollbacks' occur. The approach illustrates both the principal advantage and the principal disadvantage of conservative parallel simulation. The advantage is that, by exploiting lookahead, an approach was found that dramatically improves the serial execution time and also achieves excellent speedups. The disadvantage is that if the model rules are changed in such a way that the lookahead is destroyed, it is difficult to modify the solution to accommodate the changes.

  20. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.
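As a minimal illustration of the fault detection half of the task posed by the benchmark, a residual-thresholding check on snap-shot measurements is sketched below. This is a hypothetical stand-in, not part of the TTCP MATLAB problem; the function names and the 3-sigma threshold are assumptions.

```python
def detect_fault(measured, nominal, sigma, threshold=3.0):
    """Flag a gas-path fault when any normalized measurement residual
    exceeds the detection threshold. `measured` and `nominal` are
    snap-shot sensor vectors; `sigma` holds per-sensor noise levels."""
    residuals = [(m - n) / s for m, n, s in zip(measured, nominal, sigma)]
    fault = max(abs(r) for r in residuals) > threshold
    return fault, residuals
```

A full solution to the benchmark would additionally isolate which sensor, actuator, or component fault best explains the residual pattern.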

  1. PPI4DOCK: large scale assessment of the use of homology models in free docking over more than 1000 realistic targets.

    PubMed

    Yu, Jinchao; Guerois, Raphaël

    2016-12-15

Protein-protein docking methods are of great importance for understanding interactomes at the structural level. It has become increasingly appealing to use not only experimental structures but also homology models of unbound subunits as input for docking simulations. So far we are missing a large scale assessment of the success of rigid-body free docking methods on homology models. We explored how we could benefit from comparative modelling of unbound subunits to expand docking benchmark datasets. Starting from a collection of 3157 non-redundant, high X-ray resolution heterodimers, we developed the PPI4DOCK benchmark containing 1417 docking targets based on unbound homology models. Rigid-body docking by Zdock showed that for 1208 cases (85.2%), at least one correct decoy was generated, emphasizing the efficiency of rigid-body docking in generating correct assemblies. Overall, the PPI4DOCK benchmark contains a large set of realistic cases and provides new ground for assessing docking and scoring methodologies. Benchmark sets can be downloaded from http://biodev.cea.fr/interevol/ppi4dock/. Contact: guerois@cea.fr. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system

    NASA Astrophysics Data System (ADS)

    Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2017-05-01

    We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulation and/or experimental measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the non-linear element in the problem with a priori knowledge about its position.

  3. Simulation of the pulse propagation by the interacting mode parabolic equation method

    NASA Astrophysics Data System (ADS)

    Trofimov, M. Yu.; Kozitskiy, S. B.; Zakharenko, A. D.

    2018-07-01

A broadband modeling of pulses has been performed by using the previously derived interacting mode parabolic equation through the Fourier synthesis. Test examples on the wedge with angle 2.86° (known as the ASA benchmark) show excellent agreement with the source images method.

  4. Benchmarking and Modeling of a Conventional Mid-Size Car Using ALPHA (SAE Paper 2015-01-1140)

    EPA Science Inventory

    The Advanced Light-Duty Powertrain and Hybrid Analysis (ALPHA) modeling tool was created by EPA to estimate greenhouse gas (GHG) emissions of light-duty vehicles. ALPHA is a physics-based, forward-looking, full vehicle computer simulation capable of analyzing various vehicle type...

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salko, Robert K; Sung, Yixing; Kucukboyaci, Vefa

The Virtual Environment for Reactor Applications core simulator (VERA-CS) being developed by the Consortium for Advanced Simulation of Light Water Reactors (CASL) includes coupled neutronics, thermal-hydraulics, and fuel temperature components with an isotopic depletion capability. The neutronics capability is based on MPACT, a three-dimensional (3-D) whole-core transport code. The thermal-hydraulics and fuel temperature models are provided by the COBRA-TF (CTF) subchannel code. As part of the CASL development program, the VERA-CS (MPACT/CTF) code system was applied to model and simulate reactor core response with respect to departure from nucleate boiling ratio (DNBR) at the limiting time step of a postulated pressurized water reactor (PWR) main steamline break (MSLB) event initiated at hot zero power (HZP), either with offsite power available and the reactor coolant pumps in operation (high-flow case) or without offsite power, where the reactor core is cooled through natural circulation (low-flow case). The VERA-CS simulation was based on core boundary conditions from RETRAN-02 system transient calculations and STAR-CCM+ computational fluid dynamics (CFD) core inlet distribution calculations. The evaluation indicated that the VERA-CS code system is capable of modeling and simulating quasi-steady-state reactor core response under the steamline break (SLB) accident condition, that the results are insensitive to uncertainties in the inlet flow distributions from the CFD simulations, and that the high-flow case is more DNB-limiting than the low-flow case.

  6. The rotating movement of three immiscible fluids - A benchmark problem

    USGS Publications Warehouse

    Bakker, M.; Oude, Essink G.H.P.; Langevin, C.D.

    2004-01-01

A benchmark problem involving the rotating movement of three immiscible fluids is proposed for verifying the density-dependent flow component of groundwater flow codes. The problem consists of a two-dimensional strip in the vertical plane filled with three fluids of different densities separated by interfaces. Initially, the interfaces between the fluids make a 45° angle with the horizontal. Over time, the fluids rotate to the stable position whereby the interfaces are horizontal; all flow is caused by density differences. Two cases of the problem are presented, one resulting in a symmetric flow field and one resulting in an asymmetric flow field. An exact analytical solution for the initial flow field is presented by application of the vortex theory and complex variables. Numerical results are obtained using three variable-density groundwater flow codes (SWI, MOCDENS3D, and SEAWAT). Initial horizontal velocities of the interfaces, as simulated by the three codes, compare well with the exact solution. The three codes are used to simulate the positions of the interfaces at two times; the three codes produce nearly identical results. The agreement between the results is evidence that the specific rotational behavior predicted by the models is correct. It also shows that the proposed problem may be used to benchmark variable-density codes. It is concluded that the three models can be used to model accurately the movement of interfaces between immiscible fluids, and have little or no numerical dispersion. © 2003 Elsevier B.V. All rights reserved.

  7. Dissolution experiments of commercial PWR (52 MWd/kgU) and BWR (53 MWd/kgU) spent nuclear fuel cladded segments in bicarbonate water under oxidizing conditions. Experimental determination of matrix and instant release fraction

    NASA Astrophysics Data System (ADS)

    González-Robles, E.; Serrano-Purroy, D.; Sureda, R.; Casas, I.; de Pablo, J.

    2015-10-01

The so-called instant release fraction (IRF) is considered in performance assessment (PA) exercises to govern the dose that could arise from the repository. A conservative definition of IRF comprises the total inventory of radionuclides located in the gap, fractures, and the grain boundaries and, if present, in the high burn-up structure (HBS). The values calculated from this theoretical approach correspond to an upper limit that likely does not correspond to what would be instantaneously released in the real system. To ascertain this IRF experimentally, static leaching experiments have been carried out with two commercial UO2 spent nuclear fuels (SNF): one from a pressurized water reactor (PWR), labelled PWR, with an average burn-up (BU) of 52 MWd/kgU and fission gas release (FGR) of 23.1%, and one from a boiling water reactor (BWR), labelled BWR, with an average BU of 53 MWd/kgU and FGR of 3.9%. One sample of each SNF, consisting of fuel and cladding, has been leached in bicarbonate water for one year under oxidizing conditions at room temperature (25 ± 5)°C. The behaviour of the concentration measured in solution can be divided into two stages according to the release rate. All radionuclides presented an initial release rate that, after some days, slows to a second, lower rate that remains constant until the end of the experiment. Cumulative fraction of inventory in aqueous phase (FIAPc) values have been calculated. Results show faster release in the case of the PWR SNF. In both cases Np, Pu, Am, Cm, Y, Tc, La and Nd dissolve congruently with U, while dissolution of Zr, Ru and Rh is slower. Rb, Sr, Cs and Mo dissolve faster than U. The IRF of Cs at 10 and 200 days has been calculated, being (3.10 ± 0.62) and (3.66 ± 0.73) for PWR fuel, and (0.35 ± 0.07) and (0.51 ± 0.10) for BWR fuel.

  8. Hybrid stochastic simulation of reaction-diffusion systems with slow and fast dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strehl, Robert; Ilie, Silvana, E-mail: silvana@ryerson.ca

    2015-12-21

In this paper, we present a novel hybrid method to simulate discrete stochastic reaction-diffusion models arising in biochemical signaling pathways. We study moderately stiff systems, for which we can partition each reaction or diffusion channel into either a slow or fast subset, based on its propensity. Numerical approaches missing this distinction are often limited with respect to computational run time or approximation quality. We design an approximate scheme that remedies these pitfalls by using a new blending strategy of the well-established inhomogeneous stochastic simulation algorithm and the tau-leaping simulation method. The advantages of our hybrid simulation algorithm are demonstrated on three benchmarking systems, with special focus on approximation accuracy and efficiency.
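The two ingredients named in the abstract, propensity-based slow/fast partitioning and an exact (Gillespie) SSA step for the slow subset, can be sketched generically as follows. This is an illustration under assumptions (threshold value, helper names), not the authors' scheme, and the tau-leaping half for the fast subset is omitted.

```python
import math
import random

def partition_channels(propensities, threshold):
    """Split reaction/diffusion channels into 'slow' and 'fast' subsets
    by comparing each propensity against a chosen threshold."""
    slow = [i for i, a in enumerate(propensities) if a <= threshold]
    fast = [i for i, a in enumerate(propensities) if a > threshold]
    return slow, fast

def ssa_step(state, stoich, rates, rng=random):
    """One exact Gillespie SSA step: sample an exponential waiting time
    from the total propensity, then pick the firing channel."""
    props = [r(state) for r in rates]
    a0 = sum(props)
    if a0 == 0.0:
        return state, float("inf")          # no reaction can fire
    tau = -math.log(rng.random()) / a0      # exponential waiting time
    u, acc, j = rng.random() * a0, 0.0, 0
    for j, a in enumerate(props):           # categorical channel choice
        acc += a
        if acc >= u:
            break
    new_state = [s + v for s, v in zip(state, stoich[j])]
    return new_state, tau
```

For a single decay channel A → ∅ with propensity proportional to the copy number, repeated SSA steps remove exactly one molecule per firing.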

  9. Progress in Unsteady Turbopump Flow Simulations Using Overset Grid Systems

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin C.; Chan, William; Kwak, Dochan

    2002-01-01

    This viewgraph presentation provides information on unsteady flow simulations for the Second Generation RLV (Reusable Launch Vehicle) baseline turbopump. Three impeller rotations were simulated by using a 34.3 million grid points model. MPI/OpenMP hybrid parallelism and MLP shared memory parallelism has been implemented and benchmarked in INS3D, an incompressible Navier-Stokes solver. For RLV turbopump simulations a speed up of more than 30 times has been obtained. Moving boundary capability is obtained by using the DCF module. Scripting capability from CAD geometry to solution is developed. Unsteady flow simulations for advanced consortium impeller/diffuser by using a 39 million grid points model are currently underway. 1.2 impeller rotations are completed. The fluid/structure coupling is initiated.

  10. Massively parallel quantum computer simulator

    NASA Astrophysics Data System (ADS)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
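The core kernel of such a state-vector simulator is applying a single-qubit gate to a vector of 2^n amplitudes. A minimal NumPy sketch of that kernel (illustrative only, not the paper's portable software) is:

```python
import numpy as np

# Hadamard gate, a standard single-qubit example
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def apply_gate(state, gate, target, n_qubits):
    """Apply a 2x2 single-qubit gate to qubit `target` of a full
    2**n_qubits state vector -- the inner loop of a state-vector
    quantum computer simulator."""
    psi = state.reshape([2] * n_qubits)    # one axis per qubit
    psi = np.moveaxis(psi, target, 0)      # bring target axis to the front
    psi = np.tensordot(gate, psi, axes=(1, 0))
    psi = np.moveaxis(psi, 0, target)      # restore axis order
    return psi.reshape(-1)
```

The 2^n memory footprint explains the paper's largest runs: 2^36 double-precision complex amplitudes occupy 2^36 × 16 bytes = 1 TiB, matching the reported 36-qubit, 1 TB simulations.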

  11. Preliminary Stratigraphic Basis for Geologic Mapping of Venus

    NASA Technical Reports Server (NTRS)

    Basilevsky, A. T.; Head, J. W.

    1993-01-01

The age relations between geologic formations have been studied at 36 1000x1000 km areas centered at the dark paraboloid craters. The geologic setting in all these sites could be characterized using only 16 types of features and terrains (units). These units form a basic stratigraphic sequence (from older to younger): (1) Tessera (Tt); (2-3) Densely fractured terrains associated with coronae (COdf) and in the form of remnants among plains (Pdf); (4) Fractured and ridged plains (Pfr); (5) Plains with wrinkle ridges (Pwr); (6-7) Smooth and lobate plains (Ps/Pl); and (8) Rift-associated fractures (Fra). The stratigraphic position of the other units is determined by their relation with the units of the basic sequence: (9) Ridge belts (RB), contemporary with Pfr; (10-11) Ridges of coronae and arachnoid annuli (COar/Aar), contemporary with wrinkle ridges of Pwr; (12) Fractures of coronae annuli (COaf) disrupt Pwr and Ps/Pl; (13) Fractures (F) disrupt Pwr or younger units; (14) Craters with associated dark paraboloids (Cdp), which are on top of all volcanic and tectonic units except the youngest episodes of rift-associated fracturing and volcanism; (15-16) Surficial streaks (Ss) and surficial patches (Sp) are approximately contemporary with Cdp. These units may be used as a tentative basis for the geologic mapping of Venus including VMAP. This mapping should test the stratigraphy and answer the question of whether this stratigraphic sequence corresponds to geologic events which were generally synchronous all around the planet or whether the sequence is simply a typical sequence of events which occurred in different places at different times.

  12. Multi-pack Disposal Concepts for Spent Fuel (Rev. 0)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadgu, Teklu; Hardin, Ernest; Matteo, Edward N.

    2015-12-01

At the initiation of the Used Fuel Disposition (UFD) R&D campaign, international geologic disposal programs and past work in the U.S. were surveyed to identify viable disposal concepts for crystalline, clay/shale, and salt host media (Hardin et al., 2012). Concepts for disposal of commercial spent nuclear fuel (SNF) and high-level waste (HLW) from reprocessing are relatively advanced in countries such as Finland, France, and Sweden. The UFD work quickly showed that these international concepts are all “enclosed,” whereby waste packages are emplaced in direct or close contact with natural or engineered materials. Alternative “open” modes (emplacement tunnels are kept open after emplacement for extended ventilation) have been limited to the Yucca Mountain License Application Design (CRWMS M&O, 1999). Thermal analysis showed that, if “enclosed” concepts are constrained by peak package/buffer temperature, waste package capacity is limited to 4 PWR assemblies (or 9-BWR) in all media except salt. This information motivated separate studies: 1) extend the peak temperature tolerance of backfill materials, which is ongoing; and 2) develop small canisters (up to 4-PWR size) that can be grouped in larger multi-pack units for convenience of storage, transportation, and possibly disposal (should the disposal concept permit larger packages). A recent result from the second line of investigation is the Task Order 18 report: Generic Design for Small Standardized Transportation, Aging and Disposal Canister Systems (EnergySolutions, 2015). This report identifies disposal concepts for the small canisters (4-PWR size), drawing heavily on previous work, and for the multi-pack (16-PWR or 36-BWR).

  13. Multi-Pack Disposal Concepts for Spent Fuel (Revision 1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardin, Ernest; Matteo, Edward N.; Hadgu, Teklu

    2016-01-01

At the initiation of the Used Fuel Disposition (UFD) R&D campaign, international geologic disposal programs and past work in the U.S. were surveyed to identify viable disposal concepts for crystalline, clay/shale, and salt host media. Concepts for disposal of commercial spent nuclear fuel (SNF) and high-level waste (HLW) from reprocessing are relatively advanced in countries such as Finland, France, and Sweden. The UFD work quickly showed that these international concepts are all “enclosed,” whereby waste packages are emplaced in direct or close contact with natural or engineered materials. Alternative “open” modes (emplacement tunnels are kept open after emplacement for extended ventilation) have been limited to the Yucca Mountain License Application Design. Thermal analysis showed that, if “enclosed” concepts are constrained by peak package/buffer temperature, waste package capacity is limited to 4 PWR assemblies (or 9 BWR) in all media except salt. This information motivated separate studies: 1) extend the peak temperature tolerance of backfill materials, which is ongoing; and 2) develop small canisters (up to 4-PWR size) that can be grouped in larger multi-pack units for convenience of storage, transportation, and possibly disposal (should the disposal concept permit larger packages). A recent result from the second line of investigation is the Task Order 18 report: Generic Design for Small Standardized Transportation, Aging and Disposal Canister Systems. This report identifies disposal concepts for the small canisters (4-PWR size), drawing heavily on previous work, and for the multi-pack (16-PWR or 36-BWR).

  14. Coexistence of insulin resistance and increased glucose tolerance in pregnant rats: a physiological mechanism for glucose maintenance.

    PubMed

    Carrara, Marcia Aparecida; Batista, Márcia Regina; Saruhashi, Tiago Ribeiro; Felisberto, Antonio Machado; Guilhermetti, Marcio; Bazotte, Roberto Barbosa

    2012-06-06

    The contribution of insulin resistance (IR) and glucose tolerance to the maintenance of blood glucose levels in non-diabetic pregnant Wistar rats (PWR) was investigated. PWR were submitted to a conventional insulin tolerance test (ITT) and glucose tolerance test (GTT) using blood samples collected 0, 10, and 60 min after intraperitoneal insulin (1 U/kg) or oral (gavage) glucose (1 g/kg) administration. Moreover, ITT, GTT, and the kinetics of glucose concentration changes in the fed and fasted states were evaluated with a real-time continuous glucose monitoring system (RT-CGMS) technique. Furthermore, the contribution of liver glucose production was investigated. Conventional ITT and GTT at 0, 7, 14, and 20 days of pregnancy revealed increased IR and glucose tolerance after 20 days of pregnancy. Thus, this period of pregnancy was used to investigate the kinetics of glucose changes with the RT-CGMS technique. PWR (day 20) exhibited a lower (p<0.05) glucose concentration in the fed state. In addition, we observed IR and increased glucose tolerance in the fed state (PWR day 20 vs. day 0). Furthermore, our data on glycogenolysis and gluconeogenesis suggested that liver glucose production did not contribute to these changes in insulin sensitivity and/or glucose tolerance during late pregnancy. In contrast to the general view that IR is a pathological process associated with gestational diabetes, a certain degree of IR may represent an important physiological mechanism for blood glucose maintenance during fasting.

  15. The pluralistic water research concept - a new human-water system research approach

    NASA Astrophysics Data System (ADS)

    Evers, Mariele; Höllermann, Britta; Almoradie, Adrian; Taft, Linda; Garcia-Santos, Glenda

    2017-04-01

    Sustainable water resources management has been, and still is, a main challenge for decision makers, even though integrative approaches and concepts (e.g. Integrated Water Resources Management, IWRM) have been developed over recent decades to address problems of floods, droughts, water quality, water quantity, environment, and ecology. Although these approaches aim to address water-related problems integratively and, to some extent, involve society in planning and management, they still lack some vital components for including the social dimensions and their interaction with water. Understanding these dynamics holistically, and how they are shaped by time and space, may address these shortcomings and provide more effective and sustainable management solutions with respect to a set of potential present social actions and values as well as possible futures. This paper discusses the challenges of coherently and comprehensively integrating the social dimensions of different human-water concepts such as IWRM, socio-hydrology, and the waterscape. Against this background, it develops criteria for an integrative approach and presents a newly developed concept termed the pluralistic water research (PWR) concept. PWR is not only a pluralistic but also an integrative and interdisciplinary approach that acknowledges the social and water dimensions and their interactions and dynamics by considering more than one perspective on a water-related issue, thereby providing a set of multiple (future) developments. The PWR concept is illustrated by a case study application on the Canary Island of La Gomera. Furthermore, an outlook on possible further developments of the PWR concept is presented and discussed.

  16. Tensile and Fatigue Testing and Material Hardening Model Development for 508 LAS Base Metal and 316 SS Similar Metal Weld under In-air and PWR Primary Loop Water Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanty, Subhasish; Soppet, William; Majumdar, Saurin

    This report provides an update on an assessment of environmentally assisted fatigue for light water reactor components under extended service conditions. This report is a deliverable in September 2015 under the work package for environmentally assisted fatigue under DOE’s Light Water Reactor Sustainability program. In an April 2015 report we presented a baseline mechanistic finite element model of a two-loop pressurized water reactor (PWR) for system-level heat transfer analysis and subsequent thermal-mechanical stress analysis and fatigue life estimation under reactor thermal-mechanical cycles. In the present report, we provide tensile and fatigue test data for 508 low-alloy steel (LAS) base metal, 508 LAS heat-affected zone metal in 508 LAS–316 stainless steel (SS) dissimilar metal welds, and 316 SS–316 SS similar metal welds. The tests were conducted under different conditions: in air at room temperature, in air at 300 °C, and under PWR primary loop water conditions. Data are provided on materials properties related to time-independent tensile tests and time-dependent cyclic tests, such as elastic modulus, elastic and offset strain yield limit stress, and linear and nonlinear kinematic hardening model parameters. The overall objective of this report is to provide guidance for estimating tensile/fatigue hardening parameters from test data. Also, the material models and parameters reported here can be used directly in commercially available finite element codes for fatigue and ratcheting evaluation of reactor components under in-air and PWR water conditions.
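To make the notion of "nonlinear kinematic hardening model parameters" concrete, here is a minimal one-dimensional sketch of an Armstrong-Frederick elastoplastic model of the kind such parameters (a hardening modulus C and a recall constant gamma) would calibrate. This is an illustration only, not the report's actual model or fitted values; all parameter numbers below are hypothetical.

```python
def af_uniaxial(strain_path, E, sigma_y, C, gamma, n_sub=200):
    """Explicit substepped integration of a 1-D elastoplastic model with
    Armstrong-Frederick nonlinear kinematic hardening.
    Returns the stress at each target strain in strain_path."""
    eps = eps_p = alpha = 0.0   # total strain, plastic strain, backstress
    stresses = []
    for target in strain_path:
        d_eps = (target - eps) / n_sub
        for _ in range(n_sub):
            eps += d_eps
            sigma = E * (eps - eps_p)           # elastic trial stress
            f = abs(sigma - alpha) - sigma_y    # yield function
            if f > 0.0:
                s = 1.0 if sigma - alpha > 0.0 else -1.0
                # local consistency condition: df = -(E + C - gamma*alpha*s) dp
                dp = f / (E + C - gamma * alpha * s)
                eps_p += dp * s
                # Armstrong-Frederick backstress evolution
                alpha += C * dp * s - gamma * alpha * dp
        stresses.append(E * (eps - eps_p))
    return stresses
```

The recall term gamma*alpha*dp makes the backstress saturate at C/gamma, so the monotonic stress approaches sigma_y + C/gamma; this saturation behavior is what distinguishes the nonlinear from the linear kinematic hardening parameters mentioned in the abstract.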

  17. Synapse-Centric Mapping of Cortical Models to the SpiNNaker Neuromorphic Architecture

    PubMed Central

    Knight, James C.; Furber, Steve B.

    2016-01-01

    While the adult human brain has approximately 8.8 × 10¹⁰ neurons, this number is dwarfed by its 1 × 10¹⁵ synapses. From the point of view of neuromorphic engineering and neural simulation in general, this makes the simulation of these synapses a particularly complex problem. SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Current solutions for simulating spiking neural networks on SpiNNaker are heavily inspired by work on distributed high-performance computing. However, while SpiNNaker shares many characteristics with such distributed systems, its component nodes have much more limited resources and, as the system lacks global synchronization, the computation performed on each node must complete within a fixed time step. We first analyze the performance of the current SpiNNaker neural simulation software and identify several problems that occur when it is used to simulate networks of the type often used to model the cortex which contain large numbers of sparsely connected synapses. We then present a new, more flexible approach for mapping the simulation of such networks to SpiNNaker which solves many of these problems. Finally we analyze the performance of our new approach using both benchmarks, designed to represent cortical connectivity, and larger, functional cortical models. In a benchmark network where neurons receive input from 8000 STDP synapses, our new approach allows 4× more neurons to be simulated on each SpiNNaker core than has been previously possible. We also demonstrate that the largest plastic neural network previously simulated on neuromorphic hardware can be run in real time using our new approach: double the speed that was previously achieved. Additionally this network contains two types of plastic synapse which previously had to be trained separately but, using our new approach, can be trained simultaneously. PMID:27683540

  18. Shadow: Running Tor in a Box for Accurate and Efficient Experimentation

    DTIC Science & Technology

    2011-09-23

    Modeling the speed of a target CPU is done by running an OpenSSL [31] speed test on a real CPU of that type. This provides us with the raw CPU processing ... rate, but we are also interested in the processing speed of an application. By running application benchmarks on the same CPU as the OpenSSL speed test ... simulation, saving CPU cycles on our simulation host machine. Shadow removes cryptographic processing by preloading the main OpenSSL [31] functions used

  19. New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Lung, Shun-Fat

    2017-01-01

    A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.
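The outer iteration described in this abstract (estimate the aeroelastic model, extrapolate to the critical dynamic pressure, repeat until convergence) can be sketched as follows. The damping function here is a toy stand-in: the paper identifies the unsteady aerodynamics from simulated time histories, whereas this sketch assumes a known linear damping-vs-pressure trend, and every name and number is an illustrative assumption.

```python
def modal_damping(q, q_flutter=75.0, scale=0.02):
    """Toy stand-in for 'run aeroelastic simulation + system identification':
    returns the damping ratio of the critical mode at dynamic pressure q.
    Damping crosses zero (flutter onset) at q_flutter."""
    return scale * (1.0 - q / q_flutter)

def find_flutter_q(q_start, tol=1e-6, max_iter=50):
    """Iterate on dynamic pressure until the extrapolated zero-damping
    (flutter) point stops moving, mirroring the abstract's outer loop."""
    q = q_start
    for _ in range(max_iter):
        zeta = modal_damping(q)
        dq = 1e-3 * max(q, 1.0)
        slope = (modal_damping(q + dq) - zeta) / dq  # finite-difference slope
        q_new = q - zeta / slope                     # extrapolate to zeta = 0
        if abs(q_new - q) < tol:
            return q_new
        q = q_new
    return q
```

Because the toy damping model is linear in q, the extrapolation lands on the flutter point in one step; with an identified model the damping trend changes between iterations, which is why the convergence check on the critical dynamic pressure is needed.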

  20. Gamma irradiator dose mapping simulation using the MCNP code and benchmarking with dosimetry.

    PubMed

    Sohrabpour, M; Hassanzadeh, M; Shahriari, M; Sharifzadeh, M

    2002-10-01

    The Monte Carlo transport code, MCNP, has been applied in simulating dose rate distribution in the IR-136 gamma irradiator system. Isodose curves, cumulative dose values, and system design data such as throughputs, over-dose ratios, and efficiencies have been simulated as functions of product density. Simulated isodose curves and cumulative dose values were compared with dosimetry values obtained using polymethyl methacrylate, Fricke, ethanol-chlorobenzene, and potassium dichromate dosimeters. The produced system design data were also found to agree quite favorably with the system manufacturer's data. MCNP has thus been found to be an effective transport code for handling various dose-mapping exercises for gamma irradiators.
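The benchmarking step this abstract describes, comparing simulated dose maps against dosimeter readings and deriving figures such as the over-dose ratio, amounts to a simple pointwise comparison. A minimal sketch, with entirely made-up dose values (not the paper's data):

```python
def compare_dose_maps(simulated, measured):
    """Pointwise percent deviation of simulated vs. measured dose, plus the
    over-dose ratio (Dmax/Dmin) of each map, a standard irradiator figure
    of merit for dose uniformity across the product."""
    if len(simulated) != len(measured):
        raise ValueError("dose maps must align point-for-point")
    deviations = [100.0 * (s - m) / m for s, m in zip(simulated, measured)]
    odr = lambda doses: max(doses) / min(doses)
    return deviations, odr(simulated), odr(measured)

# Hypothetical doses (kGy) at three product positions
sim = [10.2, 12.1, 9.8]
meas = [10.0, 12.0, 10.0]
devs, odr_sim, odr_meas = compare_dose_maps(sim, meas)
```

In practice the simulated values would come from MCNP tallies converted to absorbed dose, and the measured values from the dosimeters listed above; the over-dose ratio is what the abstract reports as a function of product density.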
