Sample records for benchmark numerical simulations

  1. Assessment of Static Delamination Propagation Capabilities in Commercial Finite Element Codes Using Benchmark Analysis

    NASA Technical Reports Server (NTRS)

    Orifici, Adrian C.; Krueger, Ronald

    2010-01-01

    With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc(TM) and MD Nastran(TM). Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus(TM). The results demonstrated that the VCCT implementations in Marc(TM) and MD Nastran(TM) were capable of accurately replicating the benchmark delamination growth results, and that the use of the numerical benchmarks offers advantages over benchmarking against experimental and analytical results.
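
    As a concrete illustration of the quantity at the heart of these VCCT benchmarks, the sketch below evaluates the standard mode I energy release rate formula from nodal results; all input values are illustrative, not taken from the paper.

```python
# Sketch: mode I energy release rate at a delamination front via the
# Virtual Crack Closure Technique (VCCT). Input values are illustrative.

def vcct_mode1(f_z, delta_w, b, delta_a):
    """Mode I energy release rate G_I = F * dw / (2 * b * da).

    f_z     : nodal force at the crack tip, normal to the crack plane [N]
    delta_w : relative opening displacement one element behind the tip [m]
    b       : element width along the delamination front [m]
    delta_a : element length ahead of the tip (virtual crack extension) [m]
    """
    return f_z * delta_w / (2.0 * b * delta_a)

g1 = vcct_mode1(f_z=12.0, delta_w=2.0e-5, b=1.0e-3, delta_a=5.0e-4)
# Growth is predicted where G_I reaches the fracture toughness G_Ic.
print(g1)  # J/m^2
```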

  2. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.

    1991-01-01

    A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification-all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
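
    The 'pencil and paper' character of the NAS benchmarks extends even to the pseudorandom numbers: the specification defines the sequence algorithmically rather than by reference code. A minimal sketch of that linear congruential recurrence (constants as given in the NAS specification) is:

```python
# NAS pseudorandom sequence: x_{k+1} = a * x_k (mod 2**46), a = 5**13,
# normalized to (0, 1) as r_k = x_k / 2**46. Constants per the NAS spec.

A = 5 ** 13
MOD = 2 ** 46

def nas_lcg(seed, n):
    """Return n successive normalized values of the NAS pseudorandom sequence."""
    x = seed
    out = []
    for _ in range(n):
        x = (A * x) % MOD
        out.append(x / MOD)
    return out

vals = nas_lcg(seed=271828183, n=5)  # the seed used by several NAS kernels
```

Because the recurrence is specified exactly, every implementation, on any machine, must reproduce the identical sequence, which is what makes cross-vendor verification possible.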

  3. Engine dynamic analysis with general nonlinear finite element codes. Part 2: Bearing element implementation, overall numerical characteristics and benchmarking

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Adams, M.; Fertis, J.; Zeid, I.; Lam, P.

    1982-01-01

    Finite element codes are used in modelling the rotor-bearing-stator structures common to the turbine industry. Engine dynamic simulation is enabled by developing strategies that make use of available finite element codes. The bearing elements developed are benchmarked by incorporation into a general purpose code (ADINA), and the numerical characteristics of finite element rotor-bearing-stator simulations are evaluated through the use of various types of explicit/implicit numerical integration operators. The overall numerical efficiency of the procedure is also improved.

  4. A suite of benchmark and challenge problems for enhanced geothermal systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark; Fu, Pengcheng; McClure, Mark

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study: benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research: stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery.
    Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners. We present the suite of benchmark and challenge problems developed for the GTO-CCS, providing problem descriptions and sample solutions.

  5. Benchmarking FEniCS for mantle convection simulations

    NASA Astrophysics Data System (ADS)

    Vynnytska, L.; Rognes, M. E.; Clark, S. R.

    2013-01-01

    This paper evaluates the usability of the FEniCS Project for mantle convection simulations by numerical comparison to three established benchmarks. The benchmark problems all concern convection processes in an incompressible fluid induced by temperature or composition variations, and cover three cases: (i) steady-state convection with depth- and temperature-dependent viscosity, (ii) time-dependent convection with constant viscosity and internal heating, and (iii) a Rayleigh-Taylor instability. These problems are modeled by the Stokes equations for the fluid and advection-diffusion equations for the temperature and composition. The FEniCS Project provides a novel platform for the automated solution of differential equations by finite element methods. In particular, it offers significant flexibility with regard to modeling and numerical discretization choices; here we have used a discontinuous Galerkin method for the numerical solution of the advection-diffusion equations. Our numerical results are in agreement with the benchmarks, and demonstrate the applicability of both the discontinuous Galerkin method and FEniCS for such applications.
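
    FEniCS itself is not reproduced here, but the transport half of the benchmark physics can be sketched compactly. The following illustrative 1D explicit finite-difference step for an advection-diffusion equation (upwind advection, periodic boundaries; all parameters are invented for the example and unrelated to the paper's DG discretization) shows the kind of equation solved for the temperature field:

```python
# One explicit step of dT/dt + u dT/dx = kappa d2T/dx2 on a periodic grid,
# with first-order upwind advection and centered diffusion.

import math

def step(T, u, kappa, dx, dt):
    n = len(T)
    c = u * dt / dx          # Courant number (needs c <= 1 for stability)
    d = kappa * dt / dx**2   # diffusion number (needs d <= 0.5)
    return [T[i] - c * (T[i] - T[i - 1])
                 + d * (T[(i + 1) % n] - 2 * T[i] + T[i - 1])
            for i in range(n)]

n = 64
T = [math.sin(2 * math.pi * i / n) for i in range(n)]  # initial profile
for _ in range(200):
    T = step(T, u=0.5, kappa=0.01, dx=1.0, dt=0.5)
# With periodic boundaries both operators are conservative, so sum(T)
# is preserved up to round-off, and the scheme is monotone at these numbers.
```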

  6. SeSBench - An initiative to benchmark reactive transport models for environmental subsurface processes

    NASA Astrophysics Data System (ADS)

    Jacques, Diederik

    2017-04-01

    As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models for these interacting processes are needed. Coupled reactive transport models are a typical example of such tools, mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). The mathematical and numerical complexity of both the tool itself and the specific conceptual model can increase rapidly. Numerical verification of such models is therefore a prerequisite for guaranteeing reliability and confidence, and for qualifying simulation tools and approaches for further model application. In 2011, a first SeSBench (Subsurface Environmental Simulation Benchmarking) workshop was held in Berkeley (USA), followed by four others. The objective is to benchmark subsurface environmental simulation models and methods, with a current focus on reactive transport processes. An outcome was a special issue in Computational Geosciences (2015, issue 3 - Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks, proposed by the participants of the workshops, should be relevant for environmental or geo-engineering applications; the latter were mostly related to radioactive waste disposal issues. Benchmarks defined for purely mathematical reasons were excluded. Another important feature is the tiered approach within a benchmark: a single principal problem is defined along with different subproblems, which typically benchmark individual or simplified processes (e.g. inert solute transport, a simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, three codes should be involved in a benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes.
    Furthermore, it illustrates the use of these types of models for different environmental and geo-engineering applications. SeSBench will organize new workshops to add new benchmarks in a new special issue. Steefel, C. I., et al. (2015). "Reactive transport codes for subsurface environmental simulation." Computational Geosciences 19: 445-478.

  7. Benchmark problems for numerical implementations of phase field models

    DOE PAGES

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...

    2016-10-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase, via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials engineering (ICME), an important goal of the Materials Genome Initiative.
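
    The physics behind the first benchmark problem can be sketched in a few lines. Below is an illustrative 1D explicit Cahn-Hilliard step of the usual form dc/dt = M * Lap(c^3 - c - kappa * Lap c), a common spinodal decomposition model; the parameters and discretization are invented for the example and are not the CHiMaD/NIST problem definitions.

```python
# Explicit 1D Cahn-Hilliard step on a periodic grid: small random noise
# about c = 0 decomposes into domains near c = +/-1, conserving the mean.

import random

def lap(a, dx):
    n = len(a)
    return [(a[(i + 1) % n] - 2 * a[i] + a[i - 1]) / dx**2 for i in range(n)]

def ch_step(c, M, kappa, dx, dt):
    # chemical potential mu = f'(c) - kappa * Lap(c), with f'(c) = c**3 - c
    mu = [ci**3 - ci - kappa * li for ci, li in zip(c, lap(c, dx))]
    return [ci + dt * M * li for ci, li in zip(c, lap(mu, dx))]

random.seed(0)
n = 64
c = [0.05 * (2 * random.random() - 1) for _ in range(n)]  # noise about c = 0
mean0 = sum(c) / n
for _ in range(500):
    c = ch_step(c, M=1.0, kappa=1.0, dx=1.0, dt=0.05)
# Cahn-Hilliard dynamics conserve the mean composition.
```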

  8. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  9. Benchmark Problems of the Geothermal Technologies Office Code Comparison Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark D.; Podgorney, Robert; Kelkar, Sharad M.

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulationmore » capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas some others were designed for different applications but can simulate processes similar to those in EGS. Solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. 
The problems involved two phases of research, stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems were designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners.« less

  10. Cyber-Based Turbulent Combustion Simulation

    DTIC Science & Technology

    2012-02-28

    flame thickness by comparing with the benchmark of AFRL/RZ (UNICORN), suppressing the oscillatory numerical behavior. These improvements in numerical...fraction with the benchmark results of AFRL/RZ. This validating base is generated by the UNICORN program on the finest mesh available and the local...shared kinematic and thermodynamic data from the UNICORN program. The most important and meaningful conclusion that can be drawn from this comparison is

  11. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE PAGES

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...

    2018-03-26

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.
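
    The role of the time integrator, which this benchmark set is designed to probe, can be illustrated generically. The sketch below compares forward Euler with classical RK4 on the textbook test problem dy/dt = -y; this is a generic illustration of why integrator choice matters, not the paper's dendritic-growth setup.

```python
# Forward Euler vs. classical RK4 on dy/dt = -y, y(0) = 1, exact y = exp(-t).

import math

def euler(f, y, t_end, n):
    h = t_end / n
    for _ in range(n):
        y = y + h * f(y)
    return y

def rk4(f, y, t_end, n):
    h = t_end / n
    for _ in range(n):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

f = lambda y: -y
exact = math.exp(-1.0)
err_euler = abs(euler(f, 1.0, 1.0, 20) - exact)
err_rk4 = abs(rk4(f, 1.0, 1.0, 20) - exact)
# At the same step count, RK4 (4th order) is far more accurate than Euler.
```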

  12. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  13. Importance of inlet boundary conditions for numerical simulation of combustor flows

    NASA Technical Reports Server (NTRS)

    Sturgess, G. J.; Syed, S. A.; Mcmanus, K. R.

    1983-01-01

    Fluid dynamic computer codes for the mathematical simulation of problems in gas turbine engine combustion systems are required as design and diagnostic tools. To eventually achieve more than qualitative accuracy with these codes, it is desirable to use benchmark experiments for validation studies. Typical of the fluid dynamic computer codes being developed for combustor simulations is the TEACH (Teaching Elliptic Axisymmetric Characteristics Heuristically) solution procedure. It is difficult, however, to find suitable experiments that satisfy the present definition of benchmark quality: for the majority of the available experiments there is a lack of information concerning the boundary conditions. A standard TEACH-type numerical technique is applied to a number of test-case experiments. It is found that numerical simulations of gas turbine combustor-relevant flows can be sensitive to the plane at which the calculations start and to the spatial distributions of inlet quantities for swirling flows.

  14. An Enriched Shell Element for Delamination Simulation in Composite Laminates

    NASA Technical Reports Server (NTRS)

    McElroy, Mark

    2015-01-01

    A formulation is presented for an enriched shell finite element capable of delamination simulation in composite laminates. The element uses an adaptive splitting approach for damage characterization that allows for straightforward low-fidelity model creation and a numerically efficient solution. The Floating Node Method is used in conjunction with the Virtual Crack Closure Technique to predict delamination growth and represent it discretely at an arbitrary ply interface. The enriched element is verified for Mode I delamination simulation using numerical benchmark data. After determining important mesh configuration guidelines for the vicinity of the delamination front in the model, a good correlation was found between the enriched shell element model results and the benchmark data set.

  15. A Level-set based framework for viscous simulation of particle-laden supersonic flows

    NASA Astrophysics Data System (ADS)

    Das, Pratik; Sen, Oishik; Jacobs, Gustaaf; Udaykumar, H. S.

    2017-06-01

    Particle-laden supersonic flows are important in natural and industrial processes such as volcanic eruptions, explosions, and pneumatic conveyance of particles in material processing. Numerical study of such high-speed particle-laden flows at the mesoscale calls for a numerical framework that allows simulation of supersonic flow around multiple moving solid objects. Only a few efforts have been made toward the development of numerical frameworks for viscous simulation of particle-fluid interaction in the supersonic flow regime. The current work presents a Cartesian grid based sharp-interface method for viscous simulations of the interaction between supersonic flows and moving rigid particles. The no-slip boundary condition is imposed at the solid-fluid interfaces using a modified ghost fluid method (GFM). The current method is validated against the similarity solution of the compressible boundary layer over a flat plate and a benchmark numerical solution for steady supersonic flow over a cylinder. Further validation is carried out against benchmark numerical results for shock-induced lift-off of a cylinder in a shock tube. A 3D simulation of steady supersonic flow over a sphere is performed to compare the numerically obtained drag coefficient with experimental results. A particle-resolved viscous simulation of shock interaction with a cloud of particles is performed to demonstrate that the current method is suitable for large-scale particle-resolved simulations of particle-laden supersonic flows.
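
    The sharp-interface bookkeeping underlying such Cartesian-grid methods can be sketched with a level-set function. The illustrative example below tags grid cells as solid, fluid, or interface ('cut') cells for a circular particle; the geometry and threshold are invented for the example.

```python
# Level-set classification of Cartesian cells around a circular particle:
# phi < 0 inside the solid, phi > 0 in the fluid, |phi| <= h near the interface.

import math

def phi_circle(x, y, xc, yc, r):
    """Signed distance to a circle of radius r centered at (xc, yc)."""
    return math.hypot(x - xc, y - yc) - r

def classify(nx, ny, h, xc, yc, r):
    """Tag each cell (by its center) as 'solid', 'fluid', or 'cut'."""
    tags = {}
    for i in range(nx):
        for j in range(ny):
            d = phi_circle((i + 0.5) * h, (j + 0.5) * h, xc, yc, r)
            tags[(i, j)] = "solid" if d < -h else ("fluid" if d > h else "cut")
    return tags

tags = classify(nx=20, ny=20, h=0.05, xc=0.5, yc=0.5, r=0.2)
# In a ghost fluid method, boundary conditions are imposed at the cut cells.
```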

  16. A new numerical benchmark for variably saturated variable-density flow and transport in porous media

    NASA Astrophysics Data System (ADS)

    Guevara, Carlos; Graf, Thomas

    2016-04-01

    In subsurface hydrological systems, spatial and temporal variations in solute concentration and/or temperature may affect fluid density and viscosity. These variations can lead to potentially unstable situations in which a dense fluid overlies a less dense fluid. Such situations can produce instabilities that appear as dense plume fingers migrating downwards, counteracted by vertical upward flow of freshwater (Simmons et al., Transp. Porous Media, 2002). As a result of unstable variable-density flow, solute transport rates are increased over large distances and times as compared to constant-density flow. The numerical simulation of variable-density flow in saturated and unsaturated media requires corresponding benchmark problems against which a computer model is validated (Diersch and Kolditz, Adv. Water Resour., 2002). Recorded data from a laboratory-scale experiment of variable-density flow and solute transport in saturated and unsaturated porous media (Simmons et al., Transp. Porous Media, 2002) are used to define a new numerical benchmark. The HydroGeoSphere code (Therrien et al., 2004) coupled with PEST (www.pesthomepage.org) is used to obtain an optimized parameter set capable of adequately representing the data set of Simmons et al. (2002). Fingering in the numerical model is triggered using random hydraulic conductivity fields. Due to the inherent randomness, a large number of simulations were conducted in this study. The optimized benchmark model adequately predicts the plume behavior and the fate of solutes. This benchmark is useful for model verification of variable-density flow problems in saturated and/or unsaturated media.
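
    A common way to generate the random hydraulic conductivity fields that trigger fingering is to draw a lognormally distributed K value per cell; this construction is assumed here for illustration, as the abstract does not specify how the fields were built.

```python
# Illustrative lognormal hydraulic conductivity field: ln K ~ Normal(mean, sigma),
# drawn independently cell by cell (no spatial correlation in this toy sketch).

import math
import random

def lognormal_k_field(nx, ny, ln_k_mean, ln_k_sigma, seed=42):
    rng = random.Random(seed)  # fixed seed: one realization of the ensemble
    return [[math.exp(rng.gauss(ln_k_mean, ln_k_sigma)) for _ in range(nx)]
            for _ in range(ny)]

# Geometric-mean K of 1e-5 m/s with moderate heterogeneity.
K = lognormal_k_field(nx=50, ny=20, ln_k_mean=math.log(1e-5), ln_k_sigma=0.5)
```

Varying the seed yields the ensemble of realizations over which such fingering studies average.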

  17. PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality

    NASA Astrophysics Data System (ADS)

    Hammond, G. E.; Frederick, J. M.

    2016-12-01

    In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification checks whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively-parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations given by Oberkampf and Trucano (2007), which describe four essential elements in high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information.
Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
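
    The pattern behind such a QA suite can be shown in miniature: run the numerical model on a problem with a closed-form solution and check the error against a tolerance. The example below is a generic sketch, not a PFLOTRAN test; it verifies an explicit finite-difference solver for the 1D heat equation against its analytical solution.

```python
# Verification sketch: dT/dt = d2T/dx2 on [0, 1], T = 0 at both ends,
# T(x, 0) = sin(pi x); exact solution T(x, t) = exp(-pi**2 t) * sin(pi x).

import math

def fd_heat(n, dt, steps):
    dx = 1.0 / n
    T = [math.sin(math.pi * i * dx) for i in range(n + 1)]
    r = dt / dx**2            # must satisfy r <= 0.5 for stability
    for _ in range(steps):
        T = [0.0] + [T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
                     for i in range(1, n)] + [0.0]
    return T

n, dt, steps = 20, 0.001, 100          # r = 0.4, final time t = 0.1
T = fd_heat(n, dt, steps)
t = steps * dt
err = max(abs(Ti - math.exp(-math.pi**2 * t) * math.sin(math.pi * i / n))
          for i, Ti in enumerate(T))
# A verification test asserts that err is below a stated tolerance.
```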

  18. Benchmarking a Visual-Basic based multi-component one-dimensional reactive transport modeling tool

    NASA Astrophysics Data System (ADS)

    Torlapati, Jagadish; Prabhakar Clement, T.

    2013-01-01

    We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does not require any additional software tools. The code can be easily adapted by others for simulating different types of laboratory-scale reactive transport experiments. We illustrate the capabilities of the tool by solving five benchmark problems with varying levels of reaction complexity. These literature-derived benchmarks are used to highlight the versatility of the code for solving a variety of practical reactive transport problems. The benchmarks are described in detail to provide a comprehensive database, which can be used by model developers to test other numerical codes. The VBA code presented in the study is a practical tool that can be used by laboratory researchers for analyzing both batch and column datasets within an EXCEL platform.

  19. Engine dynamic analysis with general nonlinear finite element codes. II - Bearing element implementation, overall numerical characteristics and benchmarking

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Adams, M.; Lam, P.; Fertis, D.; Zeid, I.

    1982-01-01

    Second-year efforts within a three-year study to develop and extend finite element (FE) methodology to efficiently handle the transient/steady state response of rotor-bearing-stator structures associated with gas turbine engines are outlined. The two main areas aim at (1) implementing the squeeze film damper element in a general purpose FE code for testing and evaluation; and (2) determining the numerical characteristics of the FE-generated rotor-bearing-stator simulation scheme. The governing FE field equations are set out and the solution methodology is presented. The choice of ADINA as the general-purpose FE code is explained, and the numerical operational characteristics of the direct integration approach for FE-generated rotor-bearing-stator simulations are determined, including benchmarking, comparison of explicit vs. implicit methodologies of direct integration, and demonstration problems.

  20. Modeling of turbulent separated flows for aerodynamic applications

    NASA Technical Reports Server (NTRS)

    Marvin, J. G.

    1983-01-01

    Steady, high speed, compressible separated flows modeled through numerical simulations resulting from solutions of the mass-averaged Navier-Stokes equations are reviewed. Emphasis is placed on benchmark flows that represent simplified (but realistic) aerodynamic phenomena. These include impinging shock waves, compression corners, glancing shock waves, trailing edge regions, and supersonic high angle of attack flows. A critical assessment of modeling capabilities is provided by comparing the numerical simulations with experiment. The importance of combining experiment, numerical algorithm, grid, and turbulence model to effectively develop this potentially powerful simulation technique is stressed.

  21. MoMaS reactive transport benchmark using PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Park, H.

    2017-12-01

    The MoMaS benchmark was developed to enhance numerical simulation capability for reactive transport modeling in porous media. The benchmark was published in late September 2009; it is not based on a real chemical system, but comprises realistic and numerically challenging tests. PFLOTRAN is a state-of-the-art massively parallel subsurface flow and reactive transport code that is being used in multiple nuclear waste repository projects at Sandia National Laboratories, including the Waste Isolation Pilot Plant and Used Fuel Disposition. The MoMaS benchmark has three independent tests with easy, medium, and hard chemical complexity. This paper demonstrates how PFLOTRAN is applied to this benchmark exercise and shows results for the easy benchmark test case, which includes mixing of aqueous components and surface complexation. The surface complexation consists of monodentate and bidentate reactions, which introduce difficulty in defining the selectivity coefficient if the reaction applies to a bulk reference volume: the selectivity coefficient becomes porosity-dependent for bidentate reactions in heterogeneous porous media. The benchmark is solved by PFLOTRAN with minimal modification to address this issue, and unit conversions were made to suit PFLOTRAN.

  22. Adaptive Grid Refinement for Atmospheric Boundary Layer Simulations

    NASA Astrophysics Data System (ADS)

    van Hooft, Antoon; van Heerwaarden, Chiel; Popinet, Stephane; van der linden, Steven; de Roode, Stephan; van de Wiel, Bas

    2017-04-01

    We validate and benchmark an adaptive mesh refinement (AMR) algorithm for numerical simulations of the atmospheric boundary layer (ABL). The AMR technique aims to distribute the computational resources efficiently over a domain by refining and coarsening the numerical grid locally and in time. This can be beneficial for studying cases in which length scales vary significantly in time and space. We present results for a case describing the growth and decay of a convective boundary layer. The AMR results are benchmarked against two runs using a fixed, finely meshed grid: first with the same numerical formulation as the AMR code, and second with a code dedicated to ABL studies. Compared to the fixed and isotropic grid runs, the AMR algorithm can coarsen and refine the grid such that accurate results are obtained whilst using only a fraction of the grid cells. Performance-wise, the AMR run was cheaper than the fixed and isotropic grid run with similar numerical formulations. However, for this specific case, the dedicated code outperformed both aforementioned runs.
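
    The core idea of adaptivity, concentrating resolution where the solution varies, can be sketched in one dimension. The illustrative example below repeatedly inserts midpoints into any interval across which the field jumps by more than a tolerance; this is a toy refinement criterion, unrelated to the authors' code.

```python
# Toy 1D adaptive refinement: refine intervals with large solution jumps.

import math

def refine_once(xs, f, tol):
    out = [xs[0]]
    for a, b in zip(xs, xs[1:]):
        if abs(f(b) - f(a)) > tol:   # large variation: insert the midpoint
            out.append(0.5 * (a + b))
        out.append(b)
    return out

f = lambda x: math.tanh(20.0 * (x - 0.5))   # sharp internal layer at x = 0.5
xs = [i / 10 for i in range(11)]            # coarse uniform grid
for _ in range(3):                          # three refinement sweeps
    xs = refine_once(xs, f, tol=0.2)
# Points cluster near x = 0.5 while the smooth regions keep the coarse spacing.
```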

  3. Three-dimensional benchmark for variable-density flow and transport simulation: matching semi-analytic stability modes for steady unstable convection in an inclined porous box

    USGS Publications Warehouse

    Voss, Clifford I.; Simmons, Craig T.; Robinson, Neville I.

    2010-01-01

    This benchmark for three-dimensional (3D) numerical simulators of variable-density groundwater flow and solute or energy transport consists of matching simulation results with the semi-analytical solution for the transition from one steady-state convective mode to another in a porous box. Previous experimental and analytical studies of natural convective flow in an inclined porous layer have shown that a variety of convective modes are possible depending on system parameters, geometry, and inclination. In particular, there is a well-defined transition from the helicoidal mode, consisting of downslope longitudinal rolls superimposed upon an upslope unicellular roll, to a mode consisting purely of an upslope unicellular roll. Three-dimensional benchmarks for variable-density simulators are currently (2009) lacking, and comparison of simulation results with this transition locus provides an unambiguous means to test the ability of such simulators to represent steady-state unstable 3D variable-density physics.

  4. Experimental benchmark of kinetic simulations of capacitively coupled plasmas in molecular gases

    NASA Astrophysics Data System (ADS)

    Donkó, Z.; Derzsi, A.; Korolov, I.; Hartmann, P.; Brandt, S.; Schulze, J.; Berger, B.; Koepke, M.; Bruneau, B.; Johnson, E.; Lafleur, T.; Booth, J.-P.; Gibson, A. R.; O'Connell, D.; Gans, T.

    2018-01-01

    We discuss the origin of uncertainties in the results of numerical simulations of low-temperature plasma sources, focusing on capacitively coupled plasmas. These sources can be operated in various gases/gas mixtures, over a wide domain of excitation frequency, voltage, and gas pressure. At low pressures, the non-equilibrium character of the charged particle transport prevails and particle-based simulations become the primary tools for their numerical description. The particle-in-cell method, complemented with Monte Carlo type description of collision processes, is a well-established approach for this purpose. Codes based on this technique have been developed by several authors/groups, and have been benchmarked with each other in some cases. Such benchmarking demonstrates the correctness of the codes, but the underlying physical model remains unvalidated. This is a key point, as this model should ideally account for all important plasma chemical reactions as well as for the plasma-surface interaction via including specific surface reaction coefficients (electron yields, sticking coefficients, etc). In order to test the models rigorously, comparison with experimental ‘benchmark data’ is necessary. Examples will be given regarding the studies of electron power absorption modes in O2, and CF4-Ar discharges, as well as on the effect of modifications of the parameters of certain elementary processes on the computed discharge characteristics in O2 capacitively coupled plasmas.
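The particle-based approach discussed above can be caricatured in a few lines. The sketch below is dimensionless and uses a constant collision frequency and a crude energy-losing backscatter model; all of these are assumptions of this illustration, not features of any benchmark code. It shows the Monte Carlo treatment of collisions interleaved with the field push:

```python
import math, random

random.seed(1)

# illustrative, dimensionless parameters (assumptions of this sketch):
# constant collision frequency nu, constant field acceleration a
nu, a, dt, steps = 0.2, 1.0, 0.05, 2000
p_coll = 1.0 - math.exp(-nu * dt)      # collision probability per time step

v = [0.0] * 500                        # one velocity component per particle
for _ in range(steps):
    for i in range(len(v)):
        v[i] += a * dt                 # field push
        if random.random() < p_coll:   # Monte Carlo collision:
            # crude inelastic backscatter, losing a random energy fraction
            v[i] = -v[i] * random.uniform(0.0, 1.0)

drift = sum(v) / len(v)                # collision-limited mean drift velocity
```

The mean drift settles at a collision-limited value, the same free-flight-plus-stochastic-collision balance that full PIC/MCC codes resolve with self-consistent fields and measured cross sections.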

  5. Numerical modeling of separated flows at moderate Reynolds numbers appropriate for turbine blades and unmanned aero vehicles

    NASA Astrophysics Data System (ADS)

    Castiglioni, Giacomo

    Flows over airfoils and blades in rotating machinery, for unmanned and micro-aerial vehicles, wind turbines, and propellers consist of a laminar boundary layer near the leading edge that is often followed by a laminar separation bubble and transition to turbulence further downstream. Typical Reynolds averaged Navier-Stokes turbulence models are inadequate for such flows. Direct numerical simulation is the most reliable, but is also the most computationally expensive alternative. This work assesses the capability of immersed boundary methods and large eddy simulations to reduce the computational requirements for such flows and still provide high quality results. Two-dimensional and three-dimensional simulations of a laminar separation bubble on a NACA-0012 airfoil at Re_c = 5×10^4 and at 5° of incidence have been performed with an immersed boundary code and a commercial code using body fitted grids. Several sub-grid scale models have been implemented in both codes and their performance evaluated. For the two-dimensional simulations with the immersed boundary method the results show good agreement with the direct numerical simulation benchmark data for the pressure coefficient Cp and the friction coefficient Cf, but only when using dissipative numerical schemes. There is evidence that this behavior can be attributed to the ability of dissipative schemes to damp numerical noise coming from the immersed boundary. For the three-dimensional simulations the results show a good prediction of the separation point, but an inaccurate prediction of the reattachment point unless full direct numerical simulation resolution is used.
The commercial code shows good agreement with the direct numerical simulation benchmark data in both two and three-dimensional simulations, but the presence of significant, unquantified numerical dissipation prevents a conclusive assessment of the actual prediction capabilities of very coarse large eddy simulations with low order schemes in general cases. Additionally, a two-dimensional sweep of angles of attack from 0° to 5° is performed showing a qualitative prediction of the jump in lift and drag coefficients due to the appearance of the laminar separation bubble. The numerical dissipation inhibits the predictive capabilities of large eddy simulations whenever it is of the same order of magnitude or larger than the sub-grid scale dissipation. The need to estimate the numerical dissipation is most pressing for low-order methods employed by commercial computational fluid dynamics codes. Following the recent work of Schranner et al., the equations and procedure for estimating the numerical dissipation rate and the numerical viscosity in a commercial code are presented. The method allows for the computation of the numerical dissipation rate and numerical viscosity in the physical space for arbitrary sub-domains in a self-consistent way, using only information provided by the code in question. The method is first tested for a three-dimensional Taylor-Green vortex flow in a simple cubic domain and compared with benchmark results obtained using an accurate, incompressible spectral solver. Afterwards the same procedure is applied for the first time to a realistic flow configuration, specifically to the above discussed laminar separation bubble flow over a NACA 0012 airfoil. 
The method appears to be quite robust and its application reveals that for the code and the flow in question the numerical dissipation can be significantly larger than the viscous dissipation or the dissipation of the classical Smagorinsky sub-grid scale model, confirming the previously qualitative finding.
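The budget-based estimation of numerical dissipation can be sketched in 1D: for inviscid Burgers flow on a periodic domain, the resolved kinetic energy is conserved by the continuous equations, so any discrete loss must be numerical, and dividing it by the integrated squared gradient gives an effective numerical viscosity. This is only a toy analogue of the cited procedure, with the scheme and parameters chosen for illustration:

```python
import math

n, L = 200, 1.0
dx = L / n
dt = 0.4 * dx                      # CFL about 0.6 for u in [0.5, 1.5]
u = [1.0 + 0.5 * math.sin(2.0 * math.pi * i * dx) for i in range(n)]

def step(u):
    # first-order upwind for u_t + u u_x = 0; valid since u > 0, periodic BC
    return [u[i] - dt / dx * u[i] * (u[i] - u[i - 1]) for i in range(len(u))]

ke = lambda v: sum(0.5 * x * x * dx for x in v)
grad2 = lambda v: sum(((v[i] - v[i - 1]) / dx) ** 2 * dx for i in range(len(v)))

eps_num = []
for _ in range(100):
    k0 = ke(u)
    u = step(u)
    # periodic and inviscid: any loss of resolved KE is numerical dissipation
    eps_num.append((k0 - ke(u)) / dt)

nu_num = eps_num[-1] / grad2(u)    # effective numerical viscosity
```

Here `nu_num` is of the order of the modified-equation viscosity of first-order upwinding; the cited procedure applies the same budget-residual idea to the full compressible equations in arbitrary sub-domains.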

  6. Numerically Simulating Collisions of Plastic and Foam Laser-Driven Foils

    NASA Astrophysics Data System (ADS)

    Zalesak, S. T.; Velikovich, A. L.; Schmitt, A. J.; Aglitskiy, Y.; Metzler, N.

    2007-11-01

    Interest in experiments on colliding planar foils has recently been stimulated by (a) the Impact Fast Ignition approach to laser fusion [1], and (b) the approach to a high-repetition rate ignition facility based on direct drive with the KrF laser [2]. Simulating the evolution of perturbations to such foils can be a numerical challenge, especially if the initial perturbation amplitudes are small. We discuss the numerical issues involved in such simulations, describe their benchmarking against recently-developed analytic results, and present simulations of such experiments on NRL's Nike laser. [1] M. Murakami et al., Nucl. Fusion 46, 99 (2006) [2] S. P. Obenschain et al., Phys. Plasmas 13, 056320 (2006).

  7. Delay Tolerant Networking - Bundle Protocol Simulation

    NASA Technical Reports Server (NTRS)

    SeGui, John; Jenning, Esther

    2006-01-01

    In this paper, we report on the addition of MACHETE models needed to support DTN, namely the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance. We demonstrate the use of the DTN protocol and discuss statistics gathered concerning the total time needed to simulate numerous bundle transmissions.

  8. A Modular Simulation Framework for Assessing Swarm Search Models

    DTIC Science & Technology

    2014-09-01

    Subtitle: A Modular Simulation Framework for Assessing Swarm Search Models. Author(s): Blake M. Wanier. Numerical studies demonstrate the ability to leverage the developed simulation and analysis framework to investigate three canonical swarm search models ... as benchmarks for future exploration of more sophisticated swarm search scenarios. Subject terms: Swarm Search, Search Theory, Modeling Framework.

  9. Solidification of a binary alloy: Finite-element, single-domain simulation and new benchmark solutions

    NASA Astrophysics Data System (ADS)

    Le Bars, Michael; Worster, M. Grae

    2006-07-01

    A finite-element simulation of binary alloy solidification based on a single-domain formulation is presented and tested. Resolution of phase change is first checked by comparison with the analytical results of Worster [M.G. Worster, Solidification of an alloy from a cooled boundary, J. Fluid Mech. 167 (1986) 481-501] for purely diffusive solidification. Fluid dynamical processes without phase change are then tested by comparison with previous numerical studies of thermal convection in a pure fluid [G. de Vahl Davis, Natural convection of air in a square cavity: a bench mark numerical solution, Int. J. Numer. Meth. Fluids 3 (1983) 249-264; D.A. Mayne, A.S. Usmani, M. Crapper, h-adaptive finite element solution of high Rayleigh number thermally driven cavity problem, Int. J. Numer. Meth. Heat Fluid Flow 10 (2000) 598-615; D.C. Wan, B.S.V. Patnaik, G.W. Wei, A new benchmark quality solution for the buoyancy driven cavity by discrete singular convolution, Numer. Heat Transf. 40 (2001) 199-228], in a porous medium with a constant porosity [G. Lauriat, V. Prasad, Non-darcian effects on natural convection in a vertical porous enclosure, Int. J. Heat Mass Transf. 32 (1989) 2135-2148; P. Nithiarasu, K.N. Seetharamu, T. Sundararajan, Natural convective heat transfer in an enclosure filled with fluid saturated variable porosity medium, Int. J. Heat Mass Transf. 40 (1997) 3955-3967] and in a mixed liquid-porous medium with a spatially variable porosity [P. Nithiarasu, K.N. Seetharamu, T. Sundararajan, Natural convective heat transfer in an enclosure filled with fluid saturated variable porosity medium, Int. J. Heat Mass Transf. 40 (1997) 3955-3967; N. Zabaras, D. Samanta, A stabilized volume-averaging finite element method for flow in porous media and binary alloy solidification processes, Int. J. Numer. Meth. Eng. 60 (2004) 1103-1138]. 
Finally, new benchmark solutions for simultaneous flow through both fluid and porous domains and for convective solidification processes are presented, based on the similarity solutions in corner-flow geometries recently obtained by Le Bars and Worster [M. Le Bars, M.G. Worster, Interfacial conditions between a pure fluid and a porous medium: implications for binary alloy solidification, J. Fluid Mech. (in press)]. Good agreement is found for all tests, hence validating our physical and numerical methods. More generally, the computations presented here could now be considered as standard and reliable analytical benchmarks for numerical simulations, specifically and independently testing the different processes underlying binary alloy solidification.

  10. Benchmarking computational fluid dynamics models of lava flow simulation for hazard assessment, forecasting, and risk management

    USGS Publications Warehouse

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.

    2017-01-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow, with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.
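One classical analytical target used in this kind of benchmarking is Huppert's similarity solution for an axisymmetric, constant-volume viscous gravity current, whose front radius grows as t^(1/8). A sketch follows; the released volume and viscosity are illustrative values, not parameters from the study:

```python
def huppert_front_radius(t, V, nu, g=9.81):
    """Front radius of a constant-volume, axisymmetric viscous gravity current
    (Huppert 1982 similarity solution): r_N = 0.894 * (g*V**3*t / (3*nu))**(1/8)."""
    return 0.894 * (g * V ** 3 * t / (3.0 * nu)) ** 0.125

# illustrative lava-analogue values: 1 m^3 released, kinematic viscosity 0.1 m^2/s
r1 = huppert_front_radius(t=100.0, V=1.0, nu=0.1)
r2 = huppert_front_radius(t=200.0, V=1.0, nu=0.1)
```

A simulated flow front that reproduces the t^(1/8) slope (doubling the time should multiply the radius by 2^(1/8), about 1.09) passes a basic check of viscous spreading on a horizontal plane.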

  11. A simple numerical model for membrane oxygenation of an artificial lung machine

    NASA Astrophysics Data System (ADS)

    Subraveti, Sai Nikhil; Sai, P. S. T.; Viswanathan Pillai, Vinod Kumar; Patnaik, B. S. V.

    2015-11-01

    Optimal design of membrane oxygenators will have far-reaching ramifications for the development of artificial heart-lung systems. In the present CFD study, we simulate the gas exchange between venous blood and air passing through the hollow fiber membranes of a benchmark device. The gas exchange between the tube-side fluid and the shell-side venous liquid is modeled by solving the mass and momentum conservation equations. The fiber bundle is modeled as a porous block with a bundle porosity of 0.6, and the resistance it offers is estimated with the standard Ergun correlation. The present numerical simulations are validated against available benchmark data. The effects of bundle porosity, bundle size, Reynolds number, non-Newtonian constitutive relation, and upstream velocity distribution on the pressure drop and oxygen saturation levels are investigated. To emulate the features of gas transfer past the alveoli, the effect of pulsatility on membrane oxygenation is also investigated.
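The Ergun correlation mentioned above combines a viscous (Blake-Kozeny) term and an inertial (Burke-Plummer) term. A sketch with illustrative blood-like parameter values (the fiber diameter and superficial velocity below are assumptions, not taken from the paper):

```python
def ergun_dp_per_length(u, eps, d, mu, rho):
    """Pressure gradient (Pa/m) across a packed or fiber bed, Ergun correlation:
    viscous term ~ u, inertial term ~ u**2."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 / (eps ** 3 * d ** 2) * u
    inertial = 1.75 * rho * (1.0 - eps) / (eps ** 3 * d) * u ** 2
    return viscous + inertial

# illustrative values: blood-like fluid through a 0.6-porosity fiber bundle,
# 300-micron effective fiber diameter, 1 cm/s superficial velocity
dp = ergun_dp_per_length(u=0.01, eps=0.6, d=300e-6, mu=3.5e-3, rho=1050.0)
```

At these low velocities the viscous term dominates, which is why the bed resistance in such oxygenator models behaves nearly linearly in the flow rate.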

  12. Integrated Prediction and Mitigation Methods of Materials Damage and Lifetime Assessment during Plasma Operation and Various Instabilities in Fusion Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hassanein, Ahmed

    2015-03-31

    This report describes the implementation of comprehensive and integrated models to evaluate plasma-material interactions during normal and abnormal plasma operation. The models, in full 3D simulations, represent state-of-the-art worldwide development, with numerous benchmarks against various tokamak devices and plasma simulators. In addition, a significant amount of experimental work has been performed in our Center for Materials Under Extreme Environment (CMUXE) at Purdue to benchmark the effect of intense particle and heat fluxes on plasma-facing components. This represents one year's worth of work and resulted in more than 23 journal publications and numerous conference presentations. The funding has helped several students to obtain their M.Sc. and Ph.D. degrees, and many of them are now faculty members in the US and around the world, teaching and conducting fusion research. Our work has also been recognized through many awards.

  13. A Benchmarking Initiative for Reactive Transport Modeling Applied to Subsurface Environmental Applications

    NASA Astrophysics Data System (ADS)

    Steefel, C. I.

    2015-12-01

    Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed form solutions are necessary and useful, but limited to situations that are far simpler than typical applications, which combine many physical and chemical processes, often in coupled form. In the absence of closed form and yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable for qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code (the reactive transport codes play a supporting role in this regard) but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally relevant benchmark problem that tests the conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include 1) microbially-mediated reactions, 2) isotopes, 3) multi-component diffusion, 4) uranium fate and transport, 5) metal mobility in mining affected systems, and 6) waste repositories and related aspects.

  14. CFD-Based Design of Turbopump Inlet Duct for Reduced Dynamic Loads

    NASA Technical Reports Server (NTRS)

    Rothermel, Jeffry; Dorney, Suzanne M.; Dorney, Daniel J.

    2003-01-01

    Numerical simulations have been completed for a variety of designs for a 90 deg elbow duct. The objective is to identify a design that minimizes the dynamic load entering a LOX turbopump located at the elbow exit. Designs simulated to date indicate that simpler duct geometries result in lower losses. Benchmark simulations have verified that the compressible flow codes used in this study are applicable to these incompressible flow simulations.

  15. CFD-based Design of LOX Pump Inlet Duct for Reduced Dynamic Loads

    NASA Technical Reports Server (NTRS)

    Rothermel, Jeffry; Dorney, Daniel J.; Dorney, Suzanne M.

    2003-01-01

    Numerical simulations have been completed for a variety of designs for a 90 deg elbow duct. The objective is to identify a design that minimizes the dynamic load entering a LOX turbopump located at the elbow exit. Designs simulated to date indicate that simpler duct geometries result in lower losses. Benchmark simulations have verified that the compressible flow code used in this study is applicable to these incompressible flow simulations.

  16. Numerical modeling of fluid and electrical currents through geometries based on synchrotron X-ray tomographic images of reservoir rocks using Avizo and COMSOL

    NASA Astrophysics Data System (ADS)

    Bird, M. B.; Butler, S. L.; Hawkes, C. D.; Kotzer, T.

    2014-12-01

    The use of numerical simulations to model physical processes occurring within subvolumes of rock samples that have been characterized using advanced 3D imaging techniques is becoming increasingly common. Not only do these simulations allow for the determination of macroscopic properties like hydraulic permeability and electrical formation factor, but they also allow the user to visualize processes taking place at the pore scale and allow multiple different processes to be simulated on the same geometry. Most efforts to date have used specialized research software for the simulations. In this contribution, we outline the steps taken to use the commercial software Avizo to transform a 3D synchrotron X-ray-derived tomographic image of a rock core sample into an STL (STereoLithography) file that can be imported into the commercial multiphysics modeling package COMSOL. We demonstrate the use of COMSOL to perform fluid and electrical current flow simulations through the pore spaces. The permeability and electrical formation factor of the sample are calculated and compared with laboratory-derived values and benchmark calculations. Although the simulation domains that we were able to model on a desktop computer were significantly smaller than representative elementary volumes, we were able to establish Kozeny-Carman and Archie's law trends on which laboratory measurements and previous benchmark solutions fall. The rock core samples include a Fontainebleau sandstone used for benchmarking and a marly dolostone sampled from a well in the Weyburn oil field of southeastern Saskatchewan, Canada. Such carbonates are known to have complicated pore structures compared with sandstones, yet we are able to calculate reasonable macroscopic properties. We discuss the computing resources required.
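The two trends mentioned, Kozeny-Carman for permeability and Archie's law for formation factor, can be written down directly. In the sketch below the grain size, cementation exponent, and porosity are illustrative assumptions, not values from the study:

```python
def kozeny_carman_k(phi, d_grain):
    """Permeability (m^2) from porosity via a common Kozeny-Carman form:
    k = phi**3 * d**2 / (180 * (1 - phi)**2)."""
    return phi ** 3 * d_grain ** 2 / (180.0 * (1.0 - phi) ** 2)

def archie_formation_factor(phi, m=2.0, a=1.0):
    """Electrical formation factor via Archie's law: F = a * phi**(-m)."""
    return a * phi ** (-m)

# illustrative Fontainebleau-like values: 15% porosity, 250-micron grains
k = kozeny_carman_k(phi=0.15, d_grain=250e-6)
F = archie_formation_factor(phi=0.15, m=2.0)
```

Pore-scale simulation results plotted against porosity should fall near such trend lines if the extracted geometry and solvers are behaving sensibly, even when the domain is smaller than a representative elementary volume.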

  17. Validation of Shielding Analysis Capability of SuperMC with SINBAD

    NASA Astrophysics Data System (ADS)

    Chen, Chaobin; Yang, Qi; Wu, Bin; Han, Yuncheng; Song, Jing

    2017-09-01

    The shielding analysis capability of SuperMC was validated with the Shielding Integral Benchmark Archive Database (SINBAD). SINBAD, compiled by RSICC and NEA, includes numerous benchmark experiments performed with D-T fusion neutron source facilities such as OKTAVIAN, FNS, and IPPE. The results from SuperMC simulations were compared with experimental data and MCNP results. Very good agreement, with deviations lower than 1%, was achieved, suggesting that SuperMC is reliable for shielding calculations.

  18. An Integrated Development Environment for Adiabatic Quantum Programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; McCaskey, Alex; Bennink, Ryan S

    2014-01-01

    Adiabatic quantum computing is a promising route to the computational power afforded by quantum information processing. The recent availability of adiabatic hardware raises the question of how well quantum programs perform. Benchmarking behavior is challenging since the multiple steps to synthesize an adiabatic quantum program are highly tunable. We present an adiabatic quantum programming environment called JADE that provides control over all the steps taken during program development. JADE captures the workflow needed to rigorously benchmark performance while also allowing a variety of problem types, programming techniques, and processor configurations. We have also integrated JADE with a quantum simulation engine that enables program profiling using numerical calculation. The computational engine supports plug-ins for simulation methodologies tailored to various metrics and computing resources. We present the design, integration, and deployment of JADE and discuss its use for benchmarking adiabatic quantum programs.

  19. A Matter of Timing: Identifying Significant Multi-Dose Radiotherapy Improvements by Numerical Simulation and Genetic Algorithm Search

    PubMed Central

    Angus, Simon D.; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for a relatively coarse search, well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) on tumor cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17–18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. 
Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile and highly cost-effective means of significantly improving clinical efficacy. PMID:25460164
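The genetic-algorithm search over inter-fraction gaps can be sketched with a toy objective. The fitness function below is a stand-in, not the paper's EMT6/Ro spheroid model; it is simply constructed so that the surviving cell count is minimized by a 17 h periodic schedule, mirroring the periodicity the study found:

```python
import random

random.seed(0)

# toy objective (an assumption of this sketch): surviving cell count as a
# function of the nine inter-fraction gaps, minimized by a 17 h schedule
def survivors(gaps):
    return 1.0 + sum((g - 17.0) ** 2 for g in gaps)

def mutate(gaps):
    g = list(gaps)
    i = random.randrange(len(g))
    g[i] = min(23.0, max(10.0, g[i] + random.gauss(0.0, 1.0)))  # clip to 10-23 h
    return g

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# evolve a population of candidate schedules with elitism
pop = [[random.uniform(10.0, 23.0) for _ in range(9)] for _ in range(40)]
for _ in range(200):
    pop.sort(key=survivors)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(30)]

best = min(pop, key=survivors)
```

Even this crude GA converges toward the 17 h optimum while evaluating only a few thousand of the vast number of possible schedules, which is the economy the paper exploits at far higher model fidelity.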

  20. A matter of timing: identifying significant multi-dose radiotherapy improvements by numerical simulation and genetic algorithm search.

    PubMed

    Angus, Simon D; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for a relatively coarse search, well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) on tumor cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17-18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. 
Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile and highly cost-effective means of significantly improving clinical efficacy.

  1. Benchmarking urban flood models of varying complexity and scale using high resolution terrestrial LiDAR data

    NASA Astrophysics Data System (ADS)

    Fewtrell, Timothy J.; Duncan, Alastair; Sampson, Christopher C.; Neal, Jeffrey C.; Bates, Paul D.

    2011-01-01

    This paper describes benchmark testing of a diffusive and an inertial formulation of the de St. Venant equations implemented within the LISFLOOD-FP hydraulic model using high resolution terrestrial LiDAR data. The models are applied to a hypothetical flooding scenario in a section of Alcester, UK, which experienced significant surface water flooding in the floods of June and July 2007. The sensitivity of water elevation and velocity simulations to model formulation and grid resolution is analyzed. The differences in depth and velocity estimates between the diffusive and inertial approximations are within 10% of the simulated value, but inertial effects persist at the wetting front in steep catchments. Both models portray a similar scale dependency between 50 cm and 5 m resolution, which reiterates previous findings that errors in coarse scale topographic data sets are significantly larger than differences between numerical approximations. In particular, these results confirm the need to distinctly represent the camber and curbs of roads in the numerical grid when simulating surface water flooding events. Furthermore, although water depth estimates at grid scales coarser than 1 m appear robust, velocity estimates at these scales seem to be inconsistent compared to the 50 cm benchmark. The inertial formulation is shown to reduce computational cost by up to three orders of magnitude at high resolutions, thus making simulations at this scale viable in practice compared to diffusive models. For the first time, this paper highlights the utility of high resolution terrestrial LiDAR data to inform small-scale flood risk management studies.
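Inertial formulations of this kind typically use an explicit momentum update with a semi-implicit Manning friction term; a one-cell sketch (the depth, slope, and roughness values are illustrative, and this is a generic form of the scheme, not code from the study) relaxes to the Manning steady state:

```python
g = 9.81

def inertial_flux_update(q, h, dzdx, n, dt):
    """One explicit update of unit-width discharge q (m^2/s): gravity/slope
    acceleration in the numerator, Manning friction treated semi-implicitly
    in the denominator for stability."""
    num = q - g * h * dt * dzdx                        # slope acceleration
    den = 1.0 + g * dt * n * n * abs(q) / h ** (7.0 / 3.0)
    return num / den

# illustrative cell: 0.5 m deep flow, 1% water-surface slope, n = 0.03
q = 0.0
for _ in range(500):
    q = inertial_flux_update(q, h=0.5, dzdx=-0.01, n=0.03, dt=1.0)
```

At steady state the update reduces to Manning's equation, q = h^(5/3) * S^(1/2) / n, a quick sanity check for any implementation of the scheme.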

  2. Shuttle Main Propulsion System LH2 Feed Line and Inducer Simulations

    NASA Technical Reports Server (NTRS)

    Dorney, Daniel J.; Rothermel, Jeffry

    2002-01-01

    This viewgraph presentation includes simulations of the unsteady flow field in the LH2 feed line, flow line, flow liner, backing cavity and inducer of Shuttle engine #1. It also evaluates aerodynamic forcing functions which may contribute to the formation of the cracks observed on the flow liner slots. The presentation lists the numerical methods used, and profiles a benchmark test case.

  3. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    PubMed

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.

  4. Computational methods for reactive transport modeling: A Gibbs energy minimization approach for multiphase equilibrium calculations

    NASA Astrophysics Data System (ADS)

    Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg

    2016-02-01

    We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
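The core idea of the record above, finding the stable equilibrium state by minimizing total Gibbs energy, can be illustrated on a toy problem. The sketch below (not the actual Reaktoro/GEMS3K algorithm, which uses far more sophisticated optimization) locates the equilibrium extent of an ideal A ⇌ B reaction by brute-force minimization and checks it against the analytical equilibrium constant; the value of `dmu0` is an arbitrary illustrative choice.

```python
import math

R, T = 8.314, 298.15  # gas constant (J/(mol*K)) and temperature (K)

def gibbs(xi, dmu0):
    """Total Gibbs energy (J) of an ideal A <-> B mixture, 1 mol total,
    at reaction extent xi (the mole fraction of B)."""
    g = 0.0
    for n, mu0 in ((1.0 - xi, 0.0), (xi, dmu0)):
        if n > 0.0:
            g += n * (mu0 + R * T * math.log(n))  # x_i = n_i since total is 1 mol
    return g

def equilibrium_extent(dmu0, steps=20000):
    """Locate the Gibbs minimum by a brute-force scan over xi."""
    best = min(range(1, steps), key=lambda i: gibbs(i / steps, dmu0))
    return best / steps

dmu0 = -2000.0                 # illustrative standard-state Delta mu, J/mol
xi_min = equilibrium_extent(dmu0)
K = math.exp(-dmu0 / (R * T))  # analytical equilibrium constant
xi_exact = K / (1.0 + K)       # analytical minimizer of G
```

At the minimum the chemical potentials of A and B are equal, which is exactly the condition the brute-force scan recovers; production codes replace the scan with constrained convex optimization over many species and phases.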

  5. Numerical simulation of three-component multiphase flows at high density and viscosity ratios using lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Haghani Hassan Abadi, Reza; Fakhari, Abbas; Rahimian, Mohammad Hassan

    2018-03-01

    In this paper, we propose a multiphase lattice Boltzmann model for the numerical simulation of ternary flows at high density and viscosity ratios, free from spurious velocities. The proposed scheme, which is based on phase-field modeling, employs the Cahn-Hilliard theory to track the interfaces among three different fluid components. Several benchmarks, such as the spreading of a liquid lens, binary droplets, and the head-on collision of two droplets in binary- and ternary-fluid systems, are conducted to assess the reliability and accuracy of the model. The proposed model can successfully simulate both partial and total spreading while reducing the parasitic currents to machine precision.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark D.; McPherson, Brian J.; Grigg, Reid B.

    Numerical simulation is an invaluable analytical tool for scientists and engineers making predictions about the fate of carbon dioxide injected into deep geologic formations for long-term storage. Current numerical simulators for assessing storage in deep saline formations have capabilities for modeling strongly coupled processes involving multifluid flow, heat transfer, chemistry, and rock mechanics in geologic media. Except for moderate pressure conditions, numerical simulators for deep saline formations only require the tracking of two immiscible phases and a limited number of phase components, beyond those comprising the geochemical reactive system. The requirements for numerically simulating the utilization and storage of carbon dioxide in partially depleted petroleum reservoirs are more numerous than those for deep saline formations. The minimum number of immiscible phases increases to three, the number of phase components may easily increase fourfold, and the coupled processes of heat transfer, geochemistry, and geomechanics remain. Public and scientific confidence in numerical simulators used for carbon dioxide sequestration in deep saline formations has advanced via a natural progression of the simulators being proven against benchmark problems, code comparisons, laboratory-scale experiments, pilot-scale injections, and commercial-scale injections. This paper describes a new numerical simulator for the scientific investigation of carbon dioxide utilization and storage in partially depleted petroleum reservoirs, with an emphasis on its unique features for scientific investigations. It documents the numerical simulation of the utilization of carbon dioxide for enhanced oil recovery in the western section of the Farnsworth Unit, representing an early stage in the progression of numerical simulators for carbon utilization and storage in depleted oil reservoirs.

  7. The PAC-MAN model: Benchmark case for linear acoustics in computational physics

    NASA Astrophysics Data System (ADS)

    Ziegelwanger, Harald; Reiter, Paul

    2017-10-01

    Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well known example of such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which analytically describes the three-dimensional sound field radiated from a sphere with a missing octant. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.

  8. Implementing ADM1 for plant-wide benchmark simulations in Matlab/Simulink.

    PubMed

    Rosen, C; Vrecko, D; Gernaey, K V; Pons, M N; Jeppsson, U

    2006-01-01

    The IWA Anaerobic Digestion Model No. 1 (ADM1) was presented in 2002 and is expected to represent the state-of-the-art model within this field in the future. Due to its complexity, the implementation of the model is not a simple task and several computational aspects need to be considered, in particular if the ADM1 is to be included in dynamic simulations of plant-wide or even integrated systems. In this paper, the experiences gained from a Matlab/Simulink implementation of ADM1 into the extended COST/IWA Benchmark Simulation Model (BSM2) are presented. Aspects related to system stiffness, model interfacing with the ASM family, mass balances, acid-base equilibrium and algebraic solvers for pH and other troublesome state variables, numerical solvers and simulation time are discussed. The main conclusion is that, if implemented properly, the ADM1 will also produce high-quality results in dynamic plant-wide simulations including noise, discrete sub-systems, etc., without imposing any major restrictions due to extensive computational efforts.
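The "algebraic solver for pH" mentioned above can be illustrated with a minimal sketch: a charge-balance equation solved for [H+] by bisection. This is a hypothetical one-acid toy (acetate-like, with assumed Ka), not ADM1's actual multi-component acid-base system, but it shows why pH is treated algebraically rather than integrated as a stiff state variable.

```python
import math

def ph_from_charge_balance(c_acid, c_base, ka=1.8e-5, kw=1.0e-14):
    """Solve the charge balance
        [H+] + c_base = Kw/[H+] + c_acid*Ka/(Ka + [H+])
    for [H+]. The residual f is monotone increasing in [H+],
    so log-space bisection on [1e-14, 1] always converges."""
    f = lambda h: h + c_base - kw / h - c_acid * ka / (ka + h)
    lo, hi = 1e-14, 1.0
    for _ in range(200):
        mid = math.sqrt(lo * hi)  # geometric midpoint: bisect in log space
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return -math.log10(math.sqrt(lo * hi))
```

For pure water this returns pH 7; for 0.1 M of the weak acid alone it returns roughly the textbook value near pH 2.9. In a plant-wide simulator, such a solve is performed at every time step instead of tracking [H+] as a stiff differential state.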

  9. Interaction between an elastic structure and free-surface flows: experimental versus numerical comparisons using the PFEM

    NASA Astrophysics Data System (ADS)

    Idelsohn, S. R.; Marti, J.; Souto-Iglesias, A.; Oñate, E.

    2008-12-01

    The paper introduces new fluid-structure interaction (FSI) tests that compare experimental results with numerical ones. The examples have been chosen for a particular case for which few experimental results have been reported: FSI involving free surface flows. The potential of the Particle Finite Element Method (PFEM) [1] for the simulation of free surface flows is also tested. The simulations are run at the same scale as the experiments in order to minimize errors due to scale effects. Different scenarios are simulated by changing the boundary conditions to reproduce flows with the desired characteristics. Details of the input data for all the examples studied are given. The aim is to identify benchmark problems for FSI including free surface flows for future comparisons between different numerical approaches.

  10. An MPI-based MoSST core dynamics model

    NASA Astrophysics Data System (ADS)

    Jiang, Weiyuan; Kuang, Weijia

    2008-09-01

    Distributed systems are among the main cost-effective and expandable platforms for high-end scientific computing. Therefore scalable numerical models are important for effective use of such systems. In this paper, we present an MPI-based numerical core dynamics model for simulation of geodynamo and planetary dynamos, and for simulation of core-mantle interactions. The model is developed based on MPI libraries. Two algorithms are used for node-node communication: a "master-slave" architecture and a "divide-and-conquer" architecture. The former is easy to implement but not scalable in communication. The latter is scalable in both computation and communication. The model scalability is tested on Linux PC clusters with up to 128 nodes. This model is also benchmarked with a published numerical dynamo model solution.
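The communication trade-off described above, a master-slave gather that serializes at the master versus a divide-and-conquer (tree) reduction, can be made concrete with a small scaling sketch. The functions below count communication rounds under the simplifying assumption of one message per round; they are an illustration of the scaling argument, not code from the MoSST model.

```python
import math

def master_slave_rounds(p):
    """Master receives from each of the p-1 workers in turn:
    communication cost grows linearly, O(p)."""
    return p - 1

def tree_rounds(p):
    """Divide-and-conquer (binary-tree) reduction: partial results are
    combined pairwise, so only ceil(log2 p) rounds are needed."""
    return math.ceil(math.log2(p))
```

On the 128-node cluster mentioned in the abstract, the master-slave scheme needs 127 sequential receives while a tree reduction finishes in 7 rounds, which is why the latter remains scalable in communication as well as computation.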

  11. An Enriched Shell Finite Element for Progressive Damage Simulation in Composite Laminates

    NASA Technical Reports Server (NTRS)

    McElroy, Mark W.

    2016-01-01

    A formulation is presented for an enriched shell finite element capable of progressive damage simulation in composite laminates. The element uses a discrete adaptive splitting approach for damage representation that allows for a straightforward model creation procedure based on an initially low-fidelity mesh. The enriched element is verified for Mode I, Mode II, and mixed Mode I/II delamination simulation using numerical benchmark data. Experimental validation is performed using test data from a delamination-migration experiment. Good correlation was found between the enriched shell element model results and the numerical and experimental data sets. The work presented in this paper is meant to serve as a first milestone in the enriched element's development, with an ultimate goal of simulating three-dimensional progressive damage processes in multidirectional laminates.

  12. Numerical Simulation and Analyses of the Loss of Feedwater Transient at the Unit 4 of Kola NPP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevanovic, Vladimir D.; Stosic, Zoran V.; Kiera, Michael

    2002-07-01

    A three-dimensional numerical simulation of the loss-of-feedwater transient at the horizontal steam generator of the Kola nuclear power plant is performed. The presented numerical results show the transient change of integral steam generator parameters, such as steam generation rate, water mass inventory, and outlet reactor coolant temperature, as well as detailed distributions of shell-side thermal-hydraulic parameters: swell and collapsed levels, void fraction distributions, mass flux vectors, etc. Numerical results are compared with measurements at the Kola NPP. The agreement is satisfactory, with differences close to or below the measurement uncertainties. The obtained numerical results are the first to give complete insight into the three-dimensional and transient thermal-hydraulics of a horizontal steam generator. The presented results also serve as benchmark tests for the assessment and further improvement of one-dimensional horizontal steam generator models built with safety codes. (authors)

  13. Thermal Hydraulic Computational Fluid Dynamics Simulations and Experimental Investigation of Deformed Fuel Assemblies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mays, Brian; Jackson, R. Brian

    2017-03-08

    The project, Toward a Longer Life Core: Thermal Hydraulic CFD Simulations and Experimental Investigation of Deformed Fuel Assemblies, DOE project code DE-NE0008321, was a verification and validation project for flow and heat transfer through wire-wrapped simulated liquid metal fuel assemblies that included both experiments and computational fluid dynamics simulations of those experiments. This project was a two-year collaboration between AREVA, TerraPower, Argonne National Laboratory and Texas A&M University. Experiments were performed by AREVA and Texas A&M University. Numerical simulations of these experiments were performed by TerraPower and Argonne National Lab. Project management was performed by AREVA Federal Services. The first-of-a-kind project resulted in the production of both local point temperature measurements and local flow mixing experiment data paired with numerical simulation benchmarking of the experiments. The project experiments included the largest wire-wrapped pin assembly Matched Index of Refraction (MIR) experiment in the world, the first known wire-wrapped assembly experiment with deformed duct geometries, and the largest numerical simulations ever produced for wire-wrapped bundles.

  14. High-order continuum kinetic method for modeling plasma dynamics in phase space

    DOE PAGES

    Vogman, G. V.; Colella, P.; Shumlak, U.

    2014-12-15

    Continuum methods offer a high-fidelity means of simulating plasma kinetics. While computationally intensive, these methods are advantageous because they can be cast in conservation-law form, are not susceptible to noise, and can be implemented using high-order numerical methods. Advances in continuum method capabilities for modeling kinetic phenomena in plasmas require the development of validation tools in higher-dimensional phase space and an ability to handle non-Cartesian geometries. To that end, a new benchmark for validating Vlasov-Poisson simulations in 3D (x, vx, vy) is presented. The benchmark is based on the Dory-Guest-Harris instability and is successfully used to validate a continuum finite volume algorithm. To address challenges associated with non-Cartesian geometries, unique features of cylindrical phase space coordinates are described. Preliminary results of continuum kinetic simulations in 4D (r, z, vr, vz) phase space are presented.

  15. High Frequency Bottom Interaction in Range Dependent Biot Media

    DTIC Science & Technology

    1999-09-30

    Acoust. Soc. Am. Stephen, R.A. Benchmark models for propagation and scattering in Biot media. Fall ASA, Norfolk, VA, October...1998, J. Acoust. Soc. Am., 104, 1808. X. Zhu and G. A. McMechan, "Numerical simulation of seismic responses of poroelastic reservoirs using Biot...reverberation from rough and heterogeneous seafloors. J. Acoust. Soc. Am. Stephen, R.A., in press. Optimum and standard beam widths for numerical modeling of interface scattering problems. J. Acoust. Soc. Am.

  16. Noiseless Vlasov-Poisson simulations with linearly transformed particles

    DOE PAGES

    Pinto, Martin C.; Sonnendrucker, Eric; Friedman, Alex; ...

    2014-06-25

    We introduce a deterministic discrete-particle simulation approach, the Linearly-Transformed Particle-In-Cell (LTPIC) method, that employs linear deformations of the particles to reduce the noise traditionally associated with particle schemes. Formally, transforming the particles is justified by local first order expansions of the characteristic flow in phase space. In practice the method amounts to using deformation matrices within the particle shape functions; these matrices are updated via local evaluations of the forward numerical flow. Because it is necessary to periodically remap the particles on a regular grid to avoid excessively deforming their shapes, the method can be seen as a development of Denavit's Forward Semi-Lagrangian (FSL) scheme (Denavit, 1972 [8]). However, it has recently been established (Campos Pinto, 2012 [20]) that the underlying Linearly-Transformed Particle scheme converges for abstract transport problems, with no need to remap the particles; deforming the particles can thus be seen as a way to significantly lower the remapping frequency needed in the FSL schemes, and hence the associated numerical diffusion. To couple the method with electrostatic field solvers, two specific charge deposition schemes are examined, and their performance compared with that of the standard deposition method. Finally, numerical 1d1v simulations involving benchmark test cases and halo formation in an initially mismatched thermal sheet beam demonstrate some advantages of our LTPIC scheme over the classical PIC and FSL methods. The benchmark test cases also indicate that, for numerical choices involving similar computational effort, the LTPIC method is capable of accuracy comparable to or exceeding that of state-of-the-art, high-resolution Vlasov schemes.

  17. Quantification of uncertainties for application in detonation simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Miao; Ma, Zhibo

    2016-06-01

    Numerical simulation has become an important means of designing detonation systems, and the quantification of its uncertainty is necessary for reliability certification. To quantify this uncertainty, it is most important to analyze how the uncertainties arise and develop, and how the simulations evolve from benchmark models to new models. Based on the practical needs of engineering and the technology of verification & validation, a framework for QU (quantification of uncertainty) is put forward for the case in which simulation is used on a detonation system for scientific prediction. An example is offered to describe the general idea of quantifying simulation uncertainties.

  18. Numerical simulation of air distribution in a room with a sidewall jet under benchmark test conditions

    NASA Astrophysics Data System (ADS)

    Zasimova, Marina; Ivanov, Nikolay

    2018-05-01

    The goal of the study is to validate Large Eddy Simulation (LES) data on mixing ventilation in an isothermal room at conditions of benchmark experiments by Hurnik et al. (2015). The focus is on the accuracy of the mean and rms velocity fields prediction in the quasi-free jet zone of the room with 3D jet supplied from a sidewall rectangular diffuser. Calculations were carried out using the ANSYS Fluent 16.2 software with an algebraic wall-modeled LES subgrid-scale model. CFD results on the mean velocity vector are compared with the Laser Doppler Anemometry data. The difference between the mean velocity vector and the mean air speed in the jet zone, both LES-computed, is presented and discussed.

  19. The challenges of numerically simulating analogue brittle thrust wedges

    NASA Astrophysics Data System (ADS)

    Buiter, Susanne; Ellis, Susan

    2017-04-01

    Fold-and-thrust belts and accretionary wedges form when sedimentary and crustal rocks are compressed into thrusts and folds in the foreland of an orogen or at a subduction trench. For over a century, analogue models have been used to investigate the deformation characteristics of such brittle wedges. These models predict wedge shapes that agree with analytical critical taper theory and internal deformation structures that closely resemble natural observations. In a series of comparison experiments for thrust wedges, called the GeoMod2004 (1,2) and GeoMod2008 (3,4) experiments, it was shown that different numerical solution methods successfully reproduce sandbox thrust wedges. However, the GeoMod2008 benchmark also pointed to the difficulties of representing frictional boundary conditions and sharp velocity discontinuities with continuum numerical methods, in addition to the well-known challenges of numerical plasticity. Here we show how details in the numerical implementation of boundary conditions can substantially impact numerical wedge deformation. We consider experiment 1 of the GeoMod2008 brittle thrust wedge benchmarks. This experiment examines a triangular thrust wedge in the stable field of critical taper theory that should remain stable, that is, without internal deformation, while sliding over a basal frictional surface. The thrust wedge is translated by lateral displacement of a rigid mobile wall. The corner between the mobile wall and the subsurface is a velocity discontinuity. Using our finite-element code SULEC, we show how different approaches to implementing boundary friction (boundary layer or contact elements) and the velocity discontinuity (various smoothing schemes) determine whether the wedge indeed translates in a stable manner or undergoes internal deformation (a failed benchmark). 
We recommend that numerical studies of sandbox setups not only report the details of their implementation of boundary conditions, but also document the modelling attempts that failed. References 1. Buiter and the GeoMod2004 Team, 2006. The numerical sandbox: comparison of model results for a shortening and an extension experiment. Geol. Soc. Lond. Spec. Publ. 253, 29-64 2. Schreurs and the GeoMod2004 Team, 2006. Analogue benchmarks of shortening and extension experiments. Geol. Soc. Lond. Spec. Publ. 253, 1-27 3. Buiter, Schreurs and the GeoMod2008 Team, 2016. Benchmarking numerical models of brittle thrust wedges, J. Struct. Geol. 92, 140-177 4. Schreurs, Buiter and the GeoMod2008 Team, 2016. Benchmarking analogue models of brittle thrust wedges, J. Struct. Geol. 92, 116-13

  20. A hybrid interface tracking - level set technique for multiphase flow with soluble surfactant

    NASA Astrophysics Data System (ADS)

    Shin, Seungwon; Chergui, Jalel; Juric, Damir; Kahouadji, Lyes; Matar, Omar K.; Craster, Richard V.

    2018-04-01

    A formulation for soluble surfactant transport in multiphase flows recently presented by Muradoglu and Tryggvason (JCP 274 (2014) 737-757) [17] is adapted to the context of the Level Contour Reconstruction Method, LCRM, (Shin et al. IJNMF 60 (2009) 753-778, [8]) which is a hybrid method that combines the advantages of the Front-tracking and Level Set methods. Particularly close attention is paid to the formulation and numerical implementation of the surface gradients of surfactant concentration and surface tension. Various benchmark tests are performed to demonstrate the accuracy of different elements of the algorithm. To verify surfactant mass conservation, values for surfactant diffusion along the interface are compared with the exact solution for the problem of uniform expansion of a sphere. The numerical implementation of the discontinuous boundary condition for the source term in the bulk concentration is compared with the approximate solution. Surface tension forces are tested for Marangoni drop translation. Our numerical results for drop deformation in simple shear are compared with experiments and results from previous simulations. All benchmarking tests compare well with existing data thus providing confidence that the adapted LCRM formulation for surfactant advection and diffusion is accurate and effective in three-dimensional multiphase flows with a structured mesh. We also demonstrate that this approach applies easily to massively parallel simulations.

  1. A chemical EOR benchmark study of different reservoir simulators

    NASA Astrophysics Data System (ADS)

    Goudarzi, Ali; Delshad, Mojdeh; Sepehrnoori, Kamy

    2016-09-01

    Interest in chemical EOR processes has intensified in recent years due to advancements in chemical formulations and injection techniques. Injecting polymer (P), surfactant/polymer (SP), and alkaline/surfactant/polymer (ASP) are techniques for improving sweep and displacement efficiencies with the aim of improving oil production in both secondary and tertiary floods. There has been great interest in chemical flooding recently for challenging situations, including high-temperature reservoirs, formations with extreme salinity and hardness, naturally fractured carbonates, and sandstone reservoirs with heavy and viscous crude oils. More oil reservoirs are reaching maturity, where secondary polymer floods and tertiary surfactant methods have become increasingly important. This significance has added to the industry's interest in using reservoir simulators as tools for reservoir evaluation and management to minimize costs and increase process efficiency. Reservoir simulators with special features are needed to represent the coupled chemical and physical processes present in chemical EOR. The simulators need to be validated against well-controlled lab- and pilot-scale experiments before they can reliably predict full-field implementations. The available laboratory-scale data include 1) phase behavior and rheological data, and 2) results of secondary and tertiary coreflood experiments for P, SP, and ASP floods under reservoir conditions, i.e. chemical retentions, pressure drop, and oil recovery. Data collected from corefloods are used as benchmark tests for comparing numerical reservoir simulators with chemical EOR modeling capabilities, such as CMG's STARS, Schlumberger's ECLIPSE-100, and Petroleum Experts' REVEAL. The research UTCHEM simulator from The University of Texas at Austin is also included, since it has been the benchmark for chemical flooding simulation for over 25 years. 
The results of this benchmark comparison will be utilized to improve chemical design for field-scale studies using commercial simulators. The benchmark tests illustrate the potential of commercial simulators for chemical flooding projects and provide a comprehensive table of strengths and limitations of each simulator for a given chemical EOR process. Mechanistic simulations of chemical EOR processes will provide predictive capability and can aid in optimization of the field injection projects. The objective of this paper is not to compare the computational efficiency and solution algorithms; it only focuses on the process modeling comparison.

  2. Accurate Mapping of Multilevel Rydberg Atoms on Interacting Spin-1 /2 Particles for the Quantum Simulation of Ising Models

    NASA Astrophysics Data System (ADS)

    de Léséleuc, Sylvain; Weber, Sebastian; Lienhard, Vincent; Barredo, Daniel; Büchler, Hans Peter; Lahaye, Thierry; Browaeys, Antoine

    2018-03-01

    We study a system of atoms that are laser driven to nD3/2 Rydberg states and assess how accurately they can be mapped onto spin-1/2 particles for the quantum simulation of anisotropic Ising magnets. Using nonperturbative calculations of the pair potentials between two atoms in the presence of electric and magnetic fields, we emphasize the importance of a careful selection of experimental parameters in order to maintain the Rydberg blockade and avoid excitation of unwanted Rydberg states. We benchmark these theoretical observations against experiments using two atoms. Finally, we show that in these conditions, the experimental dynamics observed after a quench is in good agreement with numerical simulations of spin-1/2 Ising models in systems with up to 49 spins, for which numerical simulations become intractable.

  3. Resolved-particle simulation by the Physalis method: Enhancements and new capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierakowski, Adam J., E-mail: sierakowski@jhu.edu; Prosperetti, Andrea; Faculty of Science and Technology and J.M. Burgers Centre for Fluid Dynamics, University of Twente, P.O. Box 217, 7500 AE Enschede

    2016-03-15

    We present enhancements and new capabilities of the Physalis method for simulating disperse multiphase flows using particle-resolved simulation. The current work enhances the previous method by incorporating a new type of pressure-Poisson solver that couples with a new Physalis particle pressure boundary condition scheme and a new particle interior treatment to significantly improve overall numerical efficiency. Further, we implement a more efficient method of calculating the Physalis scalar products and incorporate short-range particle interaction models. We provide validation and benchmarking for the Physalis method against experiments of a sedimenting particle and of normal wall collisions. We conclude with an illustrative simulation of 2048 particles sedimenting in a duct. In the appendix, we present a complete and self-consistent description of the analytical development and numerical methods.

  4. Validation of numerical codes for impact and explosion cratering: Impacts on strengthless and metal targets

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.

    2008-12-01

    Over the last few decades, rapid improvement of computer capabilities has allowed impact cratering to be modeled with increasing complexity and realism, and has paved the way for a new era of numerical modeling of the impact process, including full, three-dimensional (3D) simulations. When properly benchmarked and validated against observation, computer models offer a powerful tool for understanding the mechanics of impact crater formation. This work presents results from the first phase of a project to benchmark and validate shock codes. A variety of 2D and 3D codes were used in this study, from commercial products like AUTODYN, to codes developed within the scientific community like SOVA, SPH, ZEUS-MP, iSALE, and codes developed at U.S. National Laboratories like CTH, SAGE/RAGE, and ALE3D. Benchmark calculations of shock wave propagation in aluminum-on-aluminum impacts were performed to examine the agreement between codes for simple idealized problems. The benchmark simulations show that variability in code results is to be expected due to differences in the underlying solution algorithm of each code, artificial stability parameters, spatial and temporal resolution, and material models. Overall, the inter-code variability in peak shock pressure as a function of distance is around 10 to 20%. In general, if the impactor is resolved by at least 20 cells across its radius, the underestimation of peak shock pressure due to spatial resolution is less than 10%. In addition to the benchmark tests, three validation tests were performed to examine the ability of the codes to reproduce the time evolution of crater radius and depth observed in vertical laboratory impacts in water and two well-characterized aluminum alloys. Results from these calculations are in good agreement with experiments. There appears to be a general tendency of shock physics codes to underestimate the radius of the forming crater. 
Overall, the discrepancy between the model and experiment results is between 10 and 20%, similar to the inter-code variability.

  5. Higher representations on the lattice: Numerical simulations, SU(2) with adjoint fermions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Debbio, Luigi; Patella, Agostino; Pica, Claudio

    2010-05-01

    We discuss the lattice formulation of gauge theories with fermions in arbitrary representations of the color group and present in detail the implementation of the hybrid Monte Carlo (HMC)/rational HMC algorithm for simulating dynamical fermions. We discuss the validation of the implementation through an extensive set of tests and the stability of simulations by monitoring the distribution of the lowest eigenvalue of the Wilson-Dirac operator. Working with two flavors of Wilson fermions in the adjoint representation, benchmark results for realistic lattice simulations are presented. Runs are performed on different lattice sizes ranging from 4³×8 to 24³×64 sites. For the two smallest lattices we also report the measured values of benchmark mesonic observables. These results can be used as a baseline for rapid cross-checks of simulations in higher representations. The results presented here are the first steps toward more extensive investigations with controlled systematic errors, aiming at a detailed understanding of the phase structure of these theories, and of their viability as candidates for strong dynamics beyond the standard model.

  6. Groundwater flow with energy transport and water-ice phase change: Numerical simulations, benchmarks, and application to freezing in peat bogs

    USGS Publications Warehouse

    McKenzie, J.M.; Voss, C.I.; Siegel, D.I.

    2007-01-01

In northern peatlands, subsurface ice formation is an important process that can control heat transport, groundwater flow, and biological activity. Temperature was measured over one and a half years in a vertical profile in the Red Lake Bog, Minnesota. To successfully simulate the transport of heat within the peat profile, the U.S. Geological Survey's SUTRA computer code was modified. The modified code simulates fully saturated, coupled porewater-energy transport, with freezing and melting porewater, and includes proportional heat capacity and thermal conductivity of water and ice, decreasing matrix permeability due to ice formation, and latent heat. The model is verified by correctly simulating the Lunardini analytical solution for ice formation in a porous medium with a mixed ice-water zone. The modified SUTRA model correctly simulates the temperature and ice distributions in the peat bog. Two possible benchmark problems for groundwater and energy transport with ice formation and melting are proposed that may be used by other researchers for code comparison. © 2006 Elsevier Ltd. All rights reserved.
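The "proportional heat capacity and thermal conductivity" idea can be sketched as a volume-fraction weighting of water, ice and solid properties, together with an empirical permeability reduction as ice forms. The functional forms and constants below are illustrative assumptions, not the modified SUTRA code's actual ones:

```python
# Illustrative thermal properties (typical handbook values, assumed here).
K_WATER, K_ICE, K_SOLID = 0.6, 2.2, 3.5          # conductivities [W/m/K]
C_WATER, C_ICE, C_SOLID = 4.18e6, 1.9e6, 2.0e6   # vol. heat capacities [J/m^3/K]

def effective_properties(porosity, ice_sat):
    """Arithmetic-mean bulk conductivity and heat capacity for a fully
    saturated medium in which a fraction `ice_sat` of the pore water is frozen."""
    f_w = porosity * (1.0 - ice_sat)   # liquid water volume fraction
    f_i = porosity * ice_sat           # ice volume fraction
    f_s = 1.0 - porosity               # solid matrix fraction
    k = K_WATER * f_w + K_ICE * f_i + K_SOLID * f_s
    c = C_WATER * f_w + C_ICE * f_i + C_SOLID * f_s
    return k, c

def relative_permeability(ice_sat, omega=10.0):
    """Impedance-style reduction of matrix permeability as ice forms
    (a common empirical choice; the exponent is an assumption)."""
    return 10.0 ** (-omega * ice_sat)
```

As ice saturation rises, bulk conductivity increases (ice conducts better than water) while permeability collapses by orders of magnitude, which is the coupling that shapes frozen-ground flow.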

  7. Reactive transport codes for subsurface environmental simulation

    DOE PAGES

    Steefel, C. I.; Appelo, C. A. J.; Arora, B.; ...

    2014-09-26

A general description of the mathematical and numerical formulations used in modern numerical reactive transport codes relevant for subsurface environmental simulations is presented. The formulations are followed by short descriptions of commonly used and available subsurface simulators that consider continuum representations of flow, transport, and reactions in porous media. These formulations are applicable to most of the subsurface environmental benchmark problems included in this special issue. The list of codes described briefly here includes PHREEQC, HPx, PHT3D, OpenGeoSys (OGS), HYTEC, ORCHESTRA, TOUGHREACT, eSTOMP, HYDROGEOCHEM, CrunchFlow, MIN3P, and PFLOTRAN. The descriptions include a high-level list of capabilities for each of the codes, along with a selective list of applications that highlight their capabilities and historical development.

  8. Simulation of guided-wave ultrasound propagation in composite laminates: Benchmark comparisons of numerical codes and experiment.

    PubMed

    Leckey, Cara A C; Wheeler, Kevin R; Hafiychuk, Vasyl N; Hafiychuk, Halyna; Timuçin, Doğan A

    2018-03-01

Ultrasonic wave methods constitute the leading physical mechanism for nondestructive evaluation (NDE) and structural health monitoring (SHM) of solid composite materials, such as carbon fiber reinforced polymer (CFRP) laminates. Computational models of ultrasonic wave excitation, propagation, and scattering in CFRP composites can be extremely valuable in designing practicable NDE and SHM hardware, software, and methodologies that accomplish the desired accuracy, reliability, efficiency, and coverage. The development and application of ultrasonic simulation approaches for composite materials is an active area of research in the field of NDE. This paper presents comparisons of guided wave simulations for CFRP composites implemented using four different simulation codes: the commercial finite element modeling (FEM) packages ABAQUS, ANSYS, and COMSOL, and a custom code executing the Elastodynamic Finite Integration Technique (EFIT). Benchmark comparisons are made between the simulation tools and both experimental laser Doppler vibrometry data and theoretical dispersion curves. A pristine case and a delamination-type case (Teflon insert in the experimental specimen) are studied. A summary is given of the accuracy of simulation results and the respective computational performance of the four different simulation tools. Published by Elsevier B.V.

  9. Large-eddy simulation of a backward facing step flow using a least-squares spectral element method

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Mittal, Rajat

    1996-01-01

We report preliminary results obtained from the large eddy simulation of a backward facing step at a Reynolds number of 5100. The numerical platform is based on a high order Legendre spectral element spatial discretization and a least squares time integration scheme. A non-reflective outflow boundary condition is in place to minimize the effect of downstream influence. The Smagorinsky model with Van Driest near-wall damping is used for sub-grid scale modeling. Comparisons of mean velocity profiles and wall pressure show good agreement with benchmark data. More studies are needed to evaluate the sensitivity of this method to numerical parameters before it is applied to complex engineering problems.
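The subgrid closure named above has a compact form: the eddy viscosity is ν_t = (C_s D Δ)² |S|, where |S| = √(2 S_ij S_ij) and D = 1 − exp(−y⁺/A⁺) is the Van Driest damping factor. A minimal sketch using standard textbook constants (assumed, not taken from the paper):

```python
import numpy as np

CS = 0.17      # Smagorinsky constant (a common textbook value)
A_PLUS = 26.0  # Van Driest constant

def smagorinsky_nu_t(grad_u, delta, y_plus):
    """Eddy viscosity nu_t = (CS * D * delta)^2 * |S| for a velocity-gradient
    tensor grad_u, filter width delta, and wall distance y_plus."""
    S = 0.5 * (grad_u + grad_u.T)            # strain-rate tensor
    S_mag = np.sqrt(2.0 * np.sum(S * S))     # |S| = sqrt(2 S_ij S_ij)
    damping = 1.0 - np.exp(-y_plus / A_PLUS) # Van Driest near-wall damping
    return (CS * damping * delta) ** 2 * S_mag
```

The damping factor drives ν_t to zero at the wall, which is exactly the behavior the undamped Smagorinsky model fails to deliver in wall-bounded flows such as the backward facing step.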

  10. Numerical Simulations of Close and Contact Binary Systems Having Bipolytropic Equation of State

    NASA Astrophysics Data System (ADS)

    Kadam, Kundan; Clayton, Geoffrey C.; Motl, Patrick M.; Marcello, Dominic; Frank, Juhan

    2017-01-01

I present the results of the numerical simulations of the mass transfer in close and contact binary systems with both stars having a bipolytropic (composite polytropic) equation of state. The initial binary systems are obtained by modifying Hachisu’s self-consistent field technique. Both stars have fully resolved cores with a molecular weight jump at the core-envelope interface. The initial properties of these simulations are chosen such that they satisfy the mass-radius relation, composition and period of a late W-type contact binary system. The simulations are carried out using two different Eulerian hydrocodes, Flow-ER with a fixed cylindrical grid, and Octo-tiger with an AMR-capable Cartesian grid. The detailed comparison of the simulations suggests agreement between the results obtained from the two codes at different resolutions. The set of simulations can be treated as a benchmark, enabling us to reliably simulate mass transfer and merger scenarios of binary systems involving bipolytropic components.

  11. Using SPARK as a Solver for Modelica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Wetter, Michael; Haves, Philip

Modelica is an object-oriented acausal modeling language that is well positioned to become a de facto standard for expressing models of complex physical systems. To simulate a model expressed in Modelica, it needs to be translated into executable code. For generating run-time efficient code, such a translation needs to employ algebraic formula manipulations. As the SPARK solver has been shown to be competitive for generating such code but currently cannot be used with the Modelica language, we report in this paper how SPARK's symbolic and numerical algorithms can be implemented in OpenModelica, an open-source implementation of a Modelica modeling and simulation environment. We also report benchmark results that show that for our air flow network simulation benchmark, the SPARK solver is competitive with Dymola, which is believed to provide the best solver for Modelica.

  12. Study of blood flow in several benchmark micro-channels using a two-fluid approach.

    PubMed

    Wu, Wei-Tao; Yang, Fang; Antaki, James F; Aubry, Nadine; Massoudi, Mehrdad

    2015-10-01

It is known that in a vessel whose characteristic dimension (e.g., its diameter) is in the range of 20 to 500 microns, blood behaves as a non-Newtonian fluid, exhibiting complex phenomena, such as shear-thinning, stress relaxation, and also multi-component behaviors, such as the Fahraeus effect, plasma-skimming, etc. For describing these non-Newtonian and multi-component characteristics of blood, using the framework of mixture theory, a two-fluid model is applied, where the plasma is treated as a Newtonian fluid and the red blood cells (RBCs) are treated as a shear-thinning fluid. A computational fluid dynamic (CFD) simulation incorporating the constitutive model was implemented using OpenFOAM® in which benchmark problems including a sudden expansion and various driven slots and crevices were studied numerically. The numerical results exhibited good agreement with the experimental observations with respect to both the velocity field and the volume fraction distribution of RBCs.
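A common way to give the RBC phase the shear-thinning behavior described above is a Carreau-type viscosity law. The sketch below uses illustrative parameter values for whole blood from the rheology literature; they are assumptions, not the constitutive constants of the cited model:

```python
# Illustrative Carreau parameters for blood (assumed values):
MU0, MU_INF = 0.056, 0.0035  # zero- and infinite-shear viscosities [Pa s]
LAM, N = 3.313, 0.3568       # relaxation time [s] and power-law index

def rbc_viscosity(shear_rate):
    """Carreau viscosity: shear-thinning smoothly between MU0 and MU_INF."""
    return MU_INF + (MU0 - MU_INF) * (1.0 + (LAM * shear_rate) ** 2) ** ((N - 1.0) / 2.0)
```

At low shear the viscosity plateaus near MU0 and falls by an order of magnitude at physiological shear rates, which is the qualitative behavior responsible for the flow features seen in the micro-channel benchmarks.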

  13. Modeling of Compressible Flow with Friction and Heat Transfer Using the Generalized Fluid System Simulation Program (GFSSP)

    NASA Technical Reports Server (NTRS)

    Bandyopadhyay, Alak; Majumdar, Alok

    2007-01-01

The present paper describes the verification and validation of a quasi one-dimensional pressure based finite volume algorithm, implemented in the Generalized Fluid System Simulation Program (GFSSP), for predicting compressible flow with friction, heat transfer and area change. The numerical predictions were compared with two classical solutions of compressible flow, i.e. Fanno and Rayleigh flow. Fanno flow provides an analytical solution of compressible flow in a long slender pipe where incoming subsonic flow can be choked due to friction. On the other hand, Rayleigh flow provides an analytical solution of frictionless compressible flow with heat transfer where incoming subsonic flow can be choked at the outlet boundary with heat addition to the control volume. Nonuniform grid distribution improves the accuracy of numerical prediction. A benchmark numerical solution of compressible flow in a converging-diverging nozzle with friction and heat transfer has been developed to verify GFSSP's numerical predictions. The numerical predictions compare favorably in all cases.
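The Fanno-flow analytical solution referred to above reduces, for a perfect gas, to a closed-form expression for the nondimensional choking length. A worked sketch of that standard relation:

```python
import math

def fanno_4fL_over_D(M, gamma=1.4):
    """4 f L*/D: friction length from Mach number M to the choked state (M = 1),
    for adiabatic flow of a perfect gas in a constant-area duct."""
    t_inv = (1.0 - M * M) / (gamma * M * M)
    t_log = (gamma + 1.0) / (2.0 * gamma) * math.log(
        (gamma + 1.0) * M * M / (2.0 + (gamma - 1.0) * M * M))
    return t_inv + t_log

# At M = 0.5 in air this gives about 1.069, matching standard Fanno tables;
# the expression vanishes at M = 1, where the flow chokes.
```

Comparing a numerical duct solution against this relation at several inlet Mach numbers is exactly the kind of verification the abstract describes.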

  14. The rotating movement of three immiscible fluids - A benchmark problem

    USGS Publications Warehouse

    Bakker, M.; Oude, Essink G.H.P.; Langevin, C.D.

    2004-01-01

A benchmark problem involving the rotating movement of three immiscible fluids is proposed for verifying the density-dependent flow component of groundwater flow codes. The problem consists of a two-dimensional strip in the vertical plane filled with three fluids of different densities separated by interfaces. Initially, the interfaces between the fluids make a 45° angle with the horizontal. Over time, the fluids rotate to the stable position whereby the interfaces are horizontal; all flow is caused by density differences. Two cases of the problem are presented, one resulting in a symmetric flow field and one resulting in an asymmetric flow field. An exact analytical solution for the initial flow field is presented by application of the vortex theory and complex variables. Numerical results are obtained using three variable-density groundwater flow codes (SWI, MOCDENS3D, and SEAWAT). Initial horizontal velocities of the interfaces, as simulated by the three codes, compare well with the exact solution. The three codes are used to simulate the positions of the interfaces at two times; the three codes produce nearly identical results. The agreement between the results is evidence that the specific rotational behavior predicted by the models is correct. It also shows that the proposed problem may be used to benchmark variable-density codes. It is concluded that the three models can be used to model accurately the movement of interfaces between immiscible fluids, and have little or no numerical dispersion. © 2003 Elsevier B.V. All rights reserved.

  15. A Computational Framework for Efficient Low Temperature Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Verma, Abhishek Kumar; Venkattraman, Ayyaswamy

    2016-10-01

Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, metamaterials, etc. To explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework will allow us to enhance our understanding of multiscale plasma phenomena using high performance computing tools, mainly based on the OpenFOAM FVM distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems comprising LTPs. Salient introductory features include the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is tested on benchmark problems in microdischarge devices, assessing both accuracy and efficiency. Numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric-coated electrodes will be discussed, providing an overview of the applicability and future scope of this framework.

  16. Numerical simulation of tunneling through arbitrary potential barriers applied on MIM and MIIM rectenna diodes

    NASA Astrophysics Data System (ADS)

    Abdolkader, Tarek M.; Shaker, Ahmed; Alahmadi, A. N. M.

    2018-07-01

With the continuous miniaturization of electronic devices, quantum-mechanical effects such as tunneling become more effective in many device applications. In this paper, a numerical simulation tool is developed under a MATLAB environment to calculate the tunneling probability and current through an arbitrary potential barrier comparing three different numerical techniques: the finite difference method, transfer matrix method, and transmission line method. For benchmarking, the tool is applied to many case studies such as the rectangular single barrier, rectangular double barrier, and continuous bell-shaped potential barrier, each compared to analytical solutions and giving the dependence of the error on the number of mesh points. In addition, a thorough study of the J-V characteristics of MIM and MIIM diodes, used as rectifiers for rectenna solar cells, is presented and simulations are compared to experimental results showing satisfactory agreement. On the undergraduate level, the tool provides a deeper insight for students to compare numerical techniques used to solve various tunneling problems and helps students to choose a suitable technique for a certain application.
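The rectangular single barrier mentioned above is the standard analytic reference for such comparisons. A sketch of the closed-form transmission probability for E < V0, in electron-scale units (eV and nm) assumed here for convenience:

```python
import math

HBAR2_2M = 0.0380998  # hbar^2 / (2 m_e) in eV * nm^2

def rectangular_barrier_T(E, V0, a):
    """Transmission probability through a rectangular barrier of height V0 [eV]
    and width a [nm] at energy E < V0:

        T = [1 + V0^2 sinh^2(kappa a) / (4 E (V0 - E))]^(-1),
        kappa = sqrt((V0 - E) / (hbar^2/2m)).
    """
    kappa = math.sqrt((V0 - E) / HBAR2_2M)
    s = math.sinh(kappa * a)
    return 1.0 / (1.0 + (V0 * V0 * s * s) / (4.0 * E * (V0 - E)))

# A 0.5 eV electron and a 1 eV, 1 nm barrier transmit with T ~ 3e-3.
```

Any of the three numerical techniques compared in the paper should reproduce this value as the mesh is refined, which is how the error-versus-mesh-points curves are obtained.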

  17. A Fully Nonlinear, Dynamically Consistent Numerical Model for Solid-Body Ship Motion. I. Ship Motion with Fixed Heading

    NASA Technical Reports Server (NTRS)

    Lin, Ray-Quing; Kuang, Weijia

    2011-01-01

In this paper, we describe the details of our numerical model for simulating ship solid-body motion in a given environment. In this model, the fully nonlinear dynamical equations governing the time-varying solid-body ship motion under the forces arising from ship wave interactions are solved with given initial conditions. The net force and moment (torque) on the ship body are directly calculated via integration of the hydrodynamic pressure over the wetted surface and the buoyancy effect from the underwater volume of the actual ship hull with a hybrid finite-difference/finite-element method. Neither empirical nor free parametrization is introduced in this model, i.e. no a priori experimental data are needed for modelling. This model is benchmarked with many experiments of various ship hulls for heave, roll and pitch motion. In addition to the benchmark cases, numerical experiments are also carried out for strongly nonlinear ship motion with a fixed heading. These new cases demonstrate clearly the importance of nonlinearities in ship motion modelling.

  18. Hybrid stochastic simulation of reaction-diffusion systems with slow and fast dynamics.

    PubMed

    Strehl, Robert; Ilie, Silvana

    2015-12-21

    In this paper, we present a novel hybrid method to simulate discrete stochastic reaction-diffusion models arising in biochemical signaling pathways. We study moderately stiff systems, for which we can partition each reaction or diffusion channel into either a slow or fast subset, based on its propensity. Numerical approaches missing this distinction are often limited with respect to computational run time or approximation quality. We design an approximate scheme that remedies these pitfalls by using a new blending strategy of the well-established inhomogeneous stochastic simulation algorithm and the tau-leaping simulation method. The advantages of our hybrid simulation algorithm are demonstrated on three benchmarking systems, with special focus on approximation accuracy and efficiency.
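The tau-leaping ingredient of the hybrid scheme can be illustrated on the simplest possible system, a single decay channel A → ∅ with propensity c·A. The hybrid slow/fast partitioning itself is not reproduced in this sketch:

```python
import numpy as np

def tau_leap_decay(a0, c, tau, n_steps, rng):
    """Tau-leap the decay A -> 0: in each leap of length tau, the number of
    firings is Poisson-distributed with mean c * A * tau."""
    a = a0
    for _ in range(n_steps):
        k = rng.poisson(c * a * tau)  # firings during one leap
        a = max(a - k, 0)             # copy numbers cannot go negative
    return a

# After unit simulated time the expectation is close to a0 * exp(-1):
final = tau_leap_decay(10_000, c=1.0, tau=0.01, n_steps=100, rng=np.random.default_rng(0))
```

The appeal over the exact (Gillespie-type) algorithm is that one leap replaces thousands of individual reaction events; the cost is the leap-size error and the negativity clamp, which is why hybrid schemes reserve leaping for the fast channels.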

  19. Evaluation of the synoptic and mesoscale predictive capabilities of a mesoscale atmospheric simulation system

    NASA Technical Reports Server (NTRS)

    Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K.; Keyser, D. A.; Mccumber, M. C.

    1983-01-01

The overall performance characteristics of a limited area, hydrostatic, fine (52 km) mesh, primitive equation, numerical weather prediction model are determined in anticipation of satellite data assimilations with the model. The synoptic and mesoscale predictive capabilities of version 2.0 of this model, the Mesoscale Atmospheric Simulation System (MASS 2.0), were evaluated. The two-part study is based on a sample of approximately thirty 12h and 24h forecasts of atmospheric flow patterns during spring and early summer. The synoptic scale evaluation results benchmark the performance of MASS 2.0 against that of an operational, synoptic scale weather prediction model, the Limited area Fine Mesh (LFM). The large sample allows for the calculation of statistically significant measures of forecast accuracy and the determination of systematic model errors. The synoptic scale benchmark is required before unsmoothed mesoscale forecast fields can be seriously considered.

  20. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  1. Bio-inspired benchmark generator for extracellular multi-unit recordings

    PubMed Central

    Mondragón-González, Sirenia Lizbeth; Burguière, Eric

    2017-01-01

The analysis of multi-unit extracellular recordings of brain activity has led to the development of numerous tools, ranging from signal processing algorithms to electronic devices and applications. Currently, the evaluation and optimisation of these tools are hampered by the lack of ground-truth databases of neural signals. These databases must be parameterisable, easy to generate and bio-inspired, i.e. containing features encountered in real electrophysiological recording sessions. Towards that end, this article introduces an original computational approach to create fully annotated and parameterised benchmark datasets, generated from the summation of three components: neural signals from compartmental models and recorded extracellular spikes, non-stationary slow oscillations, and a variety of different types of artefacts. We present three application examples. (1) We reproduced in-vivo extracellular hippocampal multi-unit recordings from either tetrode or polytrode designs. (2) We simulated recordings in two different experimental conditions: anaesthetised and awake subjects. (3) Last, we also conducted a series of simulations to study the impact of different levels of artefacts on extracellular recordings and their influence in the frequency domain. Beyond the results presented here, such a benchmark dataset generator has many applications such as calibration, evaluation and development of both hardware and software architectures. PMID:28233819

  2. The accuracy of semi-numerical reionization models in comparison with radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Hutter, Anne

    2018-03-01

We have developed a modular semi-numerical code that computes the time and spatially dependent ionization of neutral hydrogen (H I), neutral (He I) and singly ionized helium (He II) in the intergalactic medium (IGM). The model accounts for recombinations and provides different descriptions for the photoionization rate that are used to calculate the residual H I fraction in ionized regions. We compare different semi-numerical reionization schemes to a radiative transfer (RT) simulation. We use the RT simulation as a benchmark, and find that the semi-numerical approaches produce similar H II and He II morphologies and power spectra of the H I 21cm signal throughout reionization. As we do not track partial ionization of He II, the extent of the double ionized helium (He III) regions is consistently smaller. In contrast to previous comparison projects, the ionizing emissivity in our semi-numerical scheme is not adjusted to reproduce the redshift evolution of the RT simulation, but directly derived from the RT simulation spectra. Among schemes that identify the ionized regions by the ratio of the number of ionization and absorption events on different spatial smoothing scales, we find that those that mark the entire sphere as ionized when the ionization criterion is fulfilled result in significantly accelerated reionization compared to the RT simulation. Conversely, those that flag only the central cell as ionized yield a very similar but slightly delayed redshift evolution of reionization, with up to 20% of ionizing photons lost. Despite the overall agreement with the RT simulation, our results suggest that emissivity-sensitive parameters constrained with semi-numerical galaxy formation-reionization models are subject to photon nonconservation.
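The central-cell ionization criterion discussed above can be caricatured in one dimension: a cell is flagged ionized if, on any smoothing scale, the mean number of ionizing photons around it matches or exceeds the mean number of absorbers. This is a hypothetical sketch; the actual schemes operate on 3D fields with spherical filters:

```python
import numpy as np

def ionized_flags(n_ion, n_abs, scales):
    """Flag cells as ionized where <n_ion>_R >= <n_abs>_R on ANY scale R
    (R given in cells; 1D top-hat smoothing stands in for a spherical filter)."""
    flags = np.zeros(n_ion.shape, dtype=bool)
    for R in scales:
        kernel = np.ones(2 * R + 1) / (2 * R + 1)   # top-hat of half-width R
        mean_ion = np.convolve(n_ion, kernel, mode="same")
        mean_abs = np.convolve(n_abs, kernel, mode="same")
        flags |= mean_ion >= mean_abs
    return flags
```

Flagging only the central cell, as here, conserves photons approximately but delays reionization; flagging the whole smoothing sphere accelerates it, which is the trade-off the comparison above quantifies.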

  3. Towards a suite of test cases and a pycomodo library to assess and improve numerical methods in ocean models

    NASA Astrophysics Data System (ADS)

    Garnier, Valérie; Honnorat, Marc; Benshila, Rachid; Boutet, Martial; Cambon, Gildas; Chanut, Jérome; Couvelard, Xavier; Debreu, Laurent; Ducousso, Nicolas; Duhaut, Thomas; Dumas, Franck; Flavoni, Simona; Gouillon, Flavien; Lathuilière, Cyril; Le Boyer, Arnaud; Le Sommer, Julien; Lyard, Florent; Marsaleix, Patrick; Marchesiello, Patrick; Soufflet, Yves

    2016-04-01

The COMODO group (http://www.comodo-ocean.fr) gathers developers of global and limited-area ocean models (NEMO, ROMS_AGRIF, S, MARS, HYCOM, S-TUGO) with the aim of addressing well-identified numerical issues. In order to evaluate existing models, to improve numerical approaches, methods, and concepts (such as effective resolution), to assess the behavior of numerical models in complex hydrodynamical regimes, and to propose guidelines for the development of future ocean models, a benchmark suite is proposed that covers both idealized test cases dedicated to targeted properties of numerical schemes and more complex test cases allowing evaluation of the coherence of the kernel. The benchmark suite is built to study separately, then together, the main components of an ocean model: the continuity and momentum equations, the advection-diffusion of tracers, the vertical coordinate design, and the time stepping algorithms. The test cases are chosen for their simplicity of implementation (analytic initial conditions), for their capacity to focus on one or a few schemes or parts of the kernel, for the availability of analytical solutions or accurate diagnoses, and lastly for their ability to simulate a key oceanic process in a controlled environment. Idealized test cases (advection-diffusion of tracers, upwelling, lock exchange, baroclinic vortex, adiabatic motion along bathymetry) allow properties of numerical schemes to be verified, and bring to light numerical issues that remain undetected in realistic configurations (trajectory of a barotropic vortex, current-topography interaction). When complexity in the simulated dynamics grows (internal wave, unstable baroclinic jet), sharing the same experimental designs across different existing models provides a measure of model sensitivity to numerical choices (Soufflet et al., 2016). Lastly, test cases help in understanding the submesoscale influence on the dynamics (Couvelard et al., 2015).
Such a benchmark suite is an interesting test bed for continued research in numerical approaches as well as an efficient tool for maintaining any oceanic code and assuring users of a validated model across a range of hydrodynamical regimes. Thanks to a common netCDF format, this suite is completed with a python library that encompasses all the tools and metrics used to assess the efficiency of the numerical methods. References - Couvelard X., F. Dumas, V. Garnier, A.L. Ponte, C. Talandier, A.M. Treguier (2015). Mixed layer formation and restratification in presence of mesoscale and submesoscale turbulence. Ocean Modelling, Vol 96-2, p 243-253. doi:10.1016/j.ocemod.2015.10.004. - Soufflet Y., P. Marchesiello, F. Lemarié, J. Jouanno, X. Capet, L. Debreu, R. Benshila (2016). On effective resolution in ocean models. Ocean Modelling, in press. doi:10.1016/j.ocemod.2015.12.004

  4. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    NASA Astrophysics Data System (ADS)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to reproduce both behaviour from established and widely used codes and results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
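The Taylor-Green vortex used in the benchmark has a fully analytic initial condition. The sketch below builds the standard 2D field on a periodic grid and verifies that it is discretely divergence-free, a minimal version of the kind of simple scalar diagnostic the paper advocates:

```python
import numpy as np

# Standard 2D Taylor-Green initial condition on [0, 2*pi)^2.
N = 64
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.cos(Y)
v = -np.cos(X) * np.sin(Y)

# Central-difference divergence on the periodic grid.
h = 2.0 * np.pi / N
div = ((np.roll(u, -1, 0) - np.roll(u, 1, 0))
       + (np.roll(v, -1, 1) - np.roll(v, 1, 1))) / (2.0 * h)
print(np.abs(div).max())  # machine-precision zero: discretely divergence-free
```

Tracking a scalar like the domain-mean kinetic energy of this field over time is exactly the sort of single-number diagnostic that makes cross-code comparison straightforward.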

  5. Study of blood flow in several benchmark micro-channels using a two-fluid approach

    PubMed Central

    Wu, Wei-Tao; Yang, Fang; Antaki, James F.; Aubry, Nadine; Massoudi, Mehrdad

    2015-01-01

It is known that in a vessel whose characteristic dimension (e.g., its diameter) is in the range of 20 to 500 microns, blood behaves as a non-Newtonian fluid, exhibiting complex phenomena, such as shear-thinning, stress relaxation, and also multi-component behaviors, such as the Fahraeus effect, plasma-skimming, etc. For describing these non-Newtonian and multi-component characteristics of blood, using the framework of mixture theory, a two-fluid model is applied, where the plasma is treated as a Newtonian fluid and the red blood cells (RBCs) are treated as a shear-thinning fluid. A computational fluid dynamic (CFD) simulation incorporating the constitutive model was implemented using OpenFOAM® in which benchmark problems including a sudden expansion and various driven slots and crevices were studied numerically. The numerical results exhibited good agreement with the experimental observations with respect to both the velocity field and the volume fraction distribution of RBCs. PMID:26240438

  6. Verification and benchmark testing of the NUFT computer code

    NASA Astrophysics Data System (ADS)

    Lee, K. H.; Nitao, J. J.; Kulshrestha, A.

    1993-10-01

This interim report presents results of work completed in the ongoing verification and benchmark testing of the NUFT (Nonisothermal Unsaturated-saturated Flow and Transport) computer code. NUFT is a suite of multiphase, multicomponent models for numerical solution of thermal and isothermal flow and transport in porous media, with application to subsurface contaminant transport problems. The code simulates the coupled transport of heat, fluids, and chemical components, including volatile organic compounds. Grid systems may be Cartesian or cylindrical, with one-, two-, or fully three-dimensional configurations possible. In this initial phase of testing, the NUFT code was used to solve seven one-dimensional unsaturated flow and heat transfer problems. Three verification and four benchmarking problems were solved. In the verification testing, excellent agreement was observed between NUFT results and the analytical or quasianalytical solutions. In the benchmark testing, results of code intercomparison were very satisfactory. From these testing results, it is concluded that the NUFT code is ready for application to field and laboratory problems similar to those addressed here. Multidimensional problems, including those dealing with chemical transport, will be addressed in a subsequent report.

  7. An iterative solver for the 3D Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Belonosov, Mikhail; Dmitriev, Maxim; Kostin, Victor; Neklyudov, Dmitry; Tcheverda, Vladimir

    2017-09-01

    We develop a frequency-domain iterative solver for numerical simulation of acoustic waves in 3D heterogeneous media. It is based on the application of a unique preconditioner to the Helmholtz equation that ensures convergence for Krylov subspace iteration methods. Effective inversion of the preconditioner involves the Fast Fourier Transform (FFT) and numerical solution of a series of boundary value problems for ordinary differential equations. Matrix-by-vector multiplication for iterative inversion of the preconditioned matrix involves inversion of the preconditioner and pointwise multiplication of grid functions. Our solver has been verified by benchmarking against exact solutions and a time-domain solver.

  8. Hybrid stochastic simulation of reaction-diffusion systems with slow and fast dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strehl, Robert; Ilie, Silvana, E-mail: silvana@ryerson.ca

    2015-12-21

In this paper, we present a novel hybrid method to simulate discrete stochastic reaction-diffusion models arising in biochemical signaling pathways. We study moderately stiff systems, for which we can partition each reaction or diffusion channel into either a slow or fast subset, based on its propensity. Numerical approaches missing this distinction are often limited with respect to computational run time or approximation quality. We design an approximate scheme that remedies these pitfalls by using a new blending strategy of the well-established inhomogeneous stochastic simulation algorithm and the tau-leaping simulation method. The advantages of our hybrid simulation algorithm are demonstrated on three benchmarking systems, with special focus on approximation accuracy and efficiency.

  9. Validation and Performance Comparison of Numerical Codes for Tsunami Inundation

    NASA Astrophysics Data System (ADS)

    Velioglu, D.; Kian, R.; Yalciner, A. C.; Zaytsev, A.

    2015-12-01

In inundation zones, tsunami motion turns from wave motion to flow of water. Modelling of this phenomenon is a complex problem since there are many parameters affecting the tsunami flow. In this respect, the performance of numerical codes that analyze tsunami inundation patterns becomes important. The computation of water surface elevation is not sufficient for proper analysis of tsunami behaviour in shallow water zones and on land and hence for the development of mitigation strategies. Velocity and velocity patterns are also crucial parameters and have to be computed at the highest accuracy. There are numerous numerical codes to be used for simulating tsunami inundation. In this study, FLOW 3D and NAMI DANCE codes are selected for validation and performance comparison. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving three-dimensional Navier-Stokes (3D-NS) equations, and is used specifically for flood problems. NAMI DANCE uses a finite difference method to solve the linear and nonlinear forms of the shallow water equations (NSWE) in long wave problems, specifically tsunamis. In this study, these codes are validated and their performances are compared using two benchmark problems which were discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) Annual Meeting in Portland, USA. One of the problems is an experiment of a single long-period wave propagating up a piecewise linear slope and onto a small-scale model of the town of Seaside, Oregon. The other benchmark problem is an experiment of a single solitary wave propagating up a triangular shaped shelf with an island feature located at the offshore point of the shelf. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. All results are presented with discussions and comparisons. 
The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement No 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe)

  10. A three-dimensional Dirichlet-to-Neumann operator for water waves over topography

    NASA Astrophysics Data System (ADS)

    Andrade, D.; Nachbin, A.

    2018-06-01

We consider surface water waves propagating over highly variable, non-smooth topographies. For this three-dimensional problem a Dirichlet-to-Neumann (DtN) operator is constructed, reducing the numerical modeling and evolution to the two-dimensional free surface. The corresponding Fourier-type operator is defined through a matrix decomposition. The topographic component of the decomposition requires special care and a Galerkin method is provided accordingly. One-dimensional numerical simulations, along the free surface, validate the DtN formulation in the presence of a large amplitude, rapidly varying topography. An alternative, conformal-mapping-based method is used for benchmarking. A two-dimensional simulation in the presence of a Luneburg lens (a particular submerged mound) illustrates the accurate performance of the three-dimensional DtN operator.
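
    For a flat bottom the DtN operator is diagonal in Fourier space, with symbol |k| tanh(|k| h); the paper's contribution is the topographic correction to this baseline. A minimal sketch of the flat-bottom part on a periodic grid:

```python
import numpy as np

# Flat-bottom DtN operator for linear water waves: psi -> F^{-1}[ |k| tanh(|k|*depth) F psi ].
N, depth = 128, 0.7            # grid size and (assumed) uniform water depth
h = 2 * np.pi / N              # grid spacing on the periodic domain [0, 2*pi)
x = np.arange(N) * h
k = 2 * np.pi * np.fft.fftfreq(N, d=h)   # angular wavenumbers

def dtn_flat(psi):
    """Apply the flat-bottom DtN operator via the FFT."""
    symbol = np.abs(k) * np.tanh(np.abs(k) * depth)
    return np.real(np.fft.ifft(symbol * np.fft.fft(psi)))

# On a single Fourier mode the action is exact: G0 cos(k0 x) = k0 tanh(k0*depth) cos(k0 x)
psi = np.cos(3 * x)
expected = 3 * np.tanh(3 * depth) * np.cos(3 * x)
```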

  11. Multi-scale simulations of droplets in generic time-dependent flows

    NASA Astrophysics Data System (ADS)

    Milan, Felix; Biferale, Luca; Sbragaglia, Mauro; Toschi, Federico

    2017-11-01

We study the deformation and dynamics of droplets in time-dependent flows using a diffuse interface model for two immiscible fluids. The numerical simulations are at first benchmarked against analytical results of steady droplet deformation, and further extended to the more interesting case of time-dependent flows. The results of these time-dependent numerical simulations are compared against analytical models available in the literature, which assume the droplet shape to be an ellipsoid at all times, with time-dependent major and minor axis. In particular we investigate the time-dependent deformation of a confined droplet in an oscillating Couette flow for the entire capillary range until droplet break-up. In this way these multi-component simulations prove to be a useful tool to establish from ``first principles'' the dynamics of droplets in complex flows involving multiple scales. European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No 642069. & European Research Council under the European Community's Seventh Framework Program, ERC Grant Agreement No 339032.
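
    A standard analytical result used for steady-deformation benchmarks of this kind is Taylor's small-deformation formula for a droplet in shear, D = (L - B)/(L + B) ≈ Ca (19λ + 16)/(16λ + 16), valid for small capillary number Ca and viscosity ratio λ. (The specific parameter values below are illustrative, not from the paper.)

```python
# Taylor's first-order deformation parameter for a sheared droplet.
def taylor_deformation(Ca, lam):
    """D = (L - B)/(L + B) ≈ Ca (19*lam + 16)/(16*lam + 16), Ca << 1."""
    return Ca * (19.0 * lam + 16.0) / (16.0 * lam + 16.0)

D = taylor_deformation(Ca=0.1, lam=1.0)   # equiviscous droplet (assumed values)
# Ratio of major to minor axis of the (assumed) ellipsoidal cross-section
L_over_B = (1.0 + D) / (1.0 - D)
```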

  12. Testing density-dependent groundwater models: Two-dimensional steady state unstable convection in infinite, finite and inclined porous layers

    USGS Publications Warehouse

    Weatherill, D.; Simmons, C.T.; Voss, C.I.; Robinson, N.I.

    2004-01-01

This study proposes the use of several problems of unstable steady state convection with variable fluid density in a porous layer of infinite horizontal extent as two-dimensional (2-D) test cases for density-dependent groundwater flow and solute transport simulators. Unlike existing density-dependent model benchmarks, these problems have well-defined stability criteria that are determined analytically. These analytical stability indicators can be compared with numerical model results to test the ability of a code to accurately simulate buoyancy-driven flow and diffusion. The basic analytical solution is for a horizontally infinite fluid-filled porous layer in which fluid density decreases with depth. The proposed test problems include unstable convection in an infinite horizontal box, in a finite horizontal box, and in an infinite inclined box. A dimensionless Rayleigh number incorporating properties of the fluid and the porous media determines the stability of the layer in each case. Testing the ability of numerical codes to match both the critical Rayleigh number at which convection occurs and the wavelength of convection cells is an addition to the benchmark problems currently in use. The proposed test problems are modelled in 2-D using the SUTRA [SUTRA-A model for saturated-unsaturated variable-density ground-water flow with solute or energy transport. US Geological Survey Water-Resources Investigations Report, 02-4231, 2002. 250 p] density-dependent groundwater flow and solute transport code. For the case of an infinite horizontal box, SUTRA results show a distinct change from stable to unstable behaviour around the theoretical critical Rayleigh number of 4π² and the simulated wavelength of unstable convection agrees with that predicted by the analytical solution. 
The effects of finite layer aspect ratio and inclination on stability indicators are also tested and numerical results are in excellent agreement with theoretical stability criteria and with numerical results previously reported in traditional fluid mechanics literature. © 2004 Elsevier Ltd. All rights reserved.
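
    The stability check described above amounts to comparing a computed Rayleigh number against the critical value 4π² for the infinite horizontal layer (the Horton-Rogers-Lapwood result). The property values below are illustrative, and the exact Rayleigh-number convention (e.g. where porosity enters) varies between authors.

```python
import math

# Critical Rayleigh number for onset of convection in an infinite horizontal
# porous layer.
Ra_crit = 4.0 * math.pi ** 2   # ≈ 39.48

def rayleigh_number(k, g, delta_rho, H, mu, D_eff):
    """Ra = k g Δρ H / (μ D_eff) -- one common solutal convention (assumed)."""
    return k * g * delta_rho * H / (mu * D_eff)

# Illustrative values: permeability, gravity, density contrast, layer
# thickness, viscosity, effective diffusivity (all assumed, SI units).
Ra = rayleigh_number(k=1e-12, g=9.81, delta_rho=20.0, H=100.0,
                     mu=1e-3, D_eff=1e-9)
unstable = Ra > Ra_crit        # convection expected when Ra exceeds 4*pi^2
```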

  13. Time-dependent spectral renormalization method

    NASA Astrophysics Data System (ADS)

    Cole, Justin T.; Musslimani, Ziad H.

    2017-11-01

The spectral renormalization method was introduced by Ablowitz and Musslimani (2005) as an effective way to numerically compute (time-independent) bound states for certain nonlinear boundary value problems. In this paper, we extend those ideas to the time domain and introduce a time-dependent spectral renormalization method as a numerical means to simulate linear and nonlinear evolution equations. The essence of the method is to convert the underlying evolution equation from its partial or ordinary differential form (using Duhamel's principle) into an integral equation. The solution sought is then viewed as a fixed point in both space and time. The resulting integral equation is then numerically solved using a simple renormalized fixed-point iteration method. Convergence is achieved by introducing a time-dependent renormalization factor which is numerically computed from the physical properties of the governing evolution equation. The proposed method has the ability to incorporate physics into the simulations in the form of conservation laws or dissipation rates. This novel scheme is implemented on benchmark evolution equations: the classical nonlinear Schrödinger (NLS), integrable PT-symmetric nonlocal NLS and the viscous Burgers' equations, each a prototypical example of a conservative or dissipative dynamical system. Numerical implementation and algorithm performance are also discussed.
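
    The integral-equation reformulation at the heart of the method can be illustrated on the simplest possible example: u' = -u, u(0) = 1 becomes u(t) = 1 - ∫₀ᵗ u(s) ds, whose solution is a fixed point reachable by iteration. This minimal sketch uses plain Picard iteration; the paper's time-dependent renormalization factor, which enforces conservation laws, is omitted.

```python
import numpy as np

# Fixed-point (Picard) iteration on the integral form of u' = -u, u(0) = 1.
N = 200
t = np.linspace(0.0, 1.0, N + 1)
dt = t[1] - t[0]

def cumtrapz(u):
    """Cumulative trapezoid integral of u on the uniform grid, starting at 0."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * dt)))

u = np.ones(N + 1)              # initial guess
for _ in range(50):             # iterate u <- 1 - integral of u
    u = 1.0 - cumtrapz(u)

exact = np.exp(-t)              # the fixed point approximates e^{-t}
```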

  14. Fourth-Order Conservative Vlasov-Maxwell Solver for Cartesian and Cylindrical Phase Space Coordinates

    NASA Astrophysics Data System (ADS)

    Vogman, Genia

    Plasmas are made up of charged particles whose short-range and long-range interactions give rise to complex behavior that can be difficult to fully characterize experimentally. One of the most complete theoretical descriptions of a plasma is that of kinetic theory, which treats each particle species as a probability distribution function in a six-dimensional position-velocity phase space. Drawing on statistical mechanics, these distribution functions mathematically represent a system of interacting particles without tracking individual ions and electrons. The evolution of the distribution function(s) is governed by the Boltzmann equation coupled to Maxwell's equations, which together describe the dynamics of the plasma and the associated electromagnetic fields. When collisions can be neglected, the Boltzmann equation is reduced to the Vlasov equation. High-fidelity simulation of the rich physics in even a subset of the full six-dimensional phase space calls for low-noise high-accuracy numerical methods. To that end, this dissertation investigates a fourth-order finite-volume discretization of the Vlasov-Maxwell equation system, and addresses some of the fundamental challenges associated with applying these types of computationally intensive enhanced-accuracy numerical methods to phase space simulations. The governing equations of kinetic theory are described in detail, and their conservation-law weak form is derived for Cartesian and cylindrical phase space coordinates. This formulation is well known when it comes to Cartesian geometries, as it is used in finite-volume and finite-element discretizations to guarantee local conservation for numerical solutions. By contrast, the conservation-law weak form of the Vlasov equation in cylindrical phase space coordinates is largely unexplored, and to the author's knowledge has never previously been solved numerically. 
The methods described in this dissertation for simulating plasmas in cylindrical phase space coordinates thus represent a new development in the field of computational plasma physics. A fourth-order finite-volume method for solving the Vlasov-Maxwell equation system is presented first for Cartesian and then for cylindrical phase space coordinates. Special attention is given to the treatment of the discrete primary variables and to the quadrature rule for evaluating the surface and line integrals that appear in the governing equations. The finite-volume treatment of conducting wall and axis boundaries is particularly nuanced when it comes to phase space coordinates, and is described in detail. In addition to the mechanics of each part of the finite-volume discretization in the two different coordinate systems, the complete algorithm is also presented. The Cartesian coordinate discretization is applied to several well-known test problems. Since even linear analysis of kinetic theory governing equations is complicated on account of velocity being an independent coordinate, few analytic or semi-analytic predictions exist. Benchmarks are particularly scarce for configurations that have magnetic fields and involve more than two phase space dimensions. Ensuring that simulations are true to the physics thus presents a difficulty in the development of robust numerical methods. The research described in this dissertation addresses this challenge through the development of more complete physics-based benchmarks based on the Dory-Guest-Harris instability. The instability is a special case of perpendicularly-propagating kinetic electrostatic waves in a warm uniformly magnetized plasma. A complete derivation of the closed-form linear theory dispersion relation for the instability is presented. The electric field growth rates and oscillation frequencies specified by the dispersion relation provide concrete measures against which simulation results can be quantitatively compared. 
Furthermore, a specialized form of perturbation is shown to strongly excite the fastest growing mode. The fourth-order finite-volume algorithm is benchmarked against the instability, and is demonstrated to have good convergence properties and close agreement with theoretical growth rate and oscillation frequency predictions. The Dory-Guest-Harris instability benchmark extends the scope of standard test problems by providing a substantive means of validating continuum kinetic simulations of warm magnetized plasmas in higher-dimensional 3D (x, vx, vy) phase space. The linear theory analysis, initial conditions, algorithm description, and comparisons between theoretical predictions and simulation results are presented. The cylindrical coordinate finite-volume discretization is applied to model axisymmetric systems. Since mitigating the prohibitive computational cost of simulating six dimensions is another challenge in phase space simulations, the development of a robust means of exploiting symmetry is a major advance when it comes to numerically solving the Vlasov-Maxwell equation system. The discretization is applied to a uniform distribution function to assess the nature of the singularity at the axis, and is demonstrated to converge at fourth-order accuracy. The numerical method is then applied to simulate electrostatic ion confinement in an axisymmetric Z-pinch configuration. To the author's knowledge this presents the first instance of a conservative finite-volume discretization of the cylindrical coordinate Vlasov equation. The computational framework for the Vlasov-Maxwell solver is described, and an outlook for future research is presented.

  15. Plasma Modeling with Speed-Limited Particle-in-Cell Techniques

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas G.; Werner, G. R.; Cary, J. R.; Stoltz, P. H.

    2017-10-01

    Speed-limited particle-in-cell (SLPIC) modeling is a new particle simulation technique for modeling systems wherein numerical constraints, e.g. limitations on timestep size required for numerical stability, are significantly more restrictive than is needed to model slower kinetic processes of interest. SLPIC imposes artificial speed-limiting behavior on fast particles whose kinetics do not play meaningful roles in the system dynamics, thus enabling larger simulation timesteps and more rapid modeling of such plasma discharges. The use of SLPIC methods to model plasma sheath formation and the free expansion of plasma into vacuum will be demonstrated. Wallclock times for these simulations, relative to conventional PIC, are reduced by a factor of 2.5 for the plasma expansion problem and by over 6 for the sheath formation problem; additional speedup is likely possible. Physical quantities of interest are shown to be correct for these benchmark problems. Additional SLPIC applications will also be discussed. Supported by US DoE SBIR Phase I/II Award DE-SC0015762.

  16. Metrics for comparing dynamic earthquake rupture simulations

    USGS Publications Warehouse

    Barall, Michael; Harris, Ruth A.

    2014-01-01

Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools that is available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near‐fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (Ⓔ see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation makes it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes’ results can be directly compared. One approach for checking if dynamic rupture computer codes are working satisfactorily is to compare each code’s results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.
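
    Two simple waveform-comparison metrics of the general kind used in such code comparisons are an RMS misfit and the time lag that maximizes the cross-correlation between two codes' time series. These definitions and the synthetic slip-rate pulses below are illustrative; the paper defines its own metric suite.

```python
import numpy as np

def rms_misfit(a, b):
    """Root-mean-square difference between two sampled time series."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def best_lag(a, b, dt):
    """Time lag of b relative to a that maximizes the cross-correlation."""
    xc = np.correlate(b - b.mean(), a - a.mean(), mode="full")
    return (np.argmax(xc) - (len(a) - 1)) * dt

# Synthetic slip-rate pulses: identical Gaussians, the second delayed 0.1 s.
dt = 0.01
tt = np.arange(0, 4, dt)
ref = np.exp(-(tt - 1.0) ** 2 / 0.05)
shifted = np.exp(-(tt - 1.1) ** 2 / 0.05)
```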

  17. An efficient multi-dimensional implementation of VSIAM3 and its applications to free surface flows

    NASA Astrophysics Data System (ADS)

    Yokoi, Kensuke; Furuichi, Mikito; Sakai, Mikio

    2017-12-01

We propose an efficient multidimensional implementation of VSIAM3 (volume/surface integrated average-based multi-moment method). Although VSIAM3 is a highly capable fluid solver based on a multi-moment concept and has been used for a wide variety of fluid problems, VSIAM3 could not simulate some simple benchmark problems well (for instance, lid-driven cavity flows) due to relatively high numerical viscosity. In this paper, we resolve the issue by using the efficient multidimensional approach. The proposed VSIAM3 is shown to capture lid-driven cavity flows at Reynolds numbers up to Re = 7500 on a Cartesian grid of 128 × 128, which the original VSIAM3 could not do. We also tested the proposed framework in free surface flow problems (droplet collision and separation of We = 40 and droplet splashing on a superhydrophobic substrate). The numerical results from the proposed VSIAM3 showed reasonable agreement with these experiments. The proposed VSIAM3 could capture droplet collision and separation of We = 40 with a low numerical resolution (8 mesh cells across the initial droplet diameter). We also simulated free surface flows including particles toward non-Newtonian flow applications. These results show that the proposed VSIAM3 can robustly simulate interactions among air, particles (solid), and liquid.

  18. The NAS Parallel Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David H.

The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. 
As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage over vector supercomputers, and, if so, which of the parallel offerings would be most useful in real-world scientific computation. In part to draw attention to some of the performance reporting abuses prevalent at the time, the present author wrote a humorous essay 'Twelve Ways to Fool the Masses,' which described in a light-hearted way a number of the questionable ways in which both vendor marketing people and scientists were inflating and distorting their performance results. All of this underscored the need for an objective and scientifically defensible measure to compare performance on these systems.

  19. Real-case benchmark for flow and tracer transport in the fractured rock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hokr, M.; Shao, H.; Gardner, W. P.

The paper is intended to define a benchmark problem related to groundwater flow and natural tracer transport using observations of discharge and isotopic tracers in fractured, crystalline rock. Three numerical simulators are compared: Flow123d, OpenGeoSys, and PFLOTRAN. The data utilized in the project were collected in a water-supply tunnel in granite of the Jizera Mountains, Bedrichov, Czech Republic. The problem configuration combines subdomains of different dimensions, a 3D continuum for hard-rock blocks or matrix and 2D features for fractures or fault zones, together with realistic boundary conditions for tunnel-controlled drainage. Steady-state and transient flow and a pulse-injection tracer transport problem are solved. The results confirm mostly consistent behavior of the codes. The codes Flow123d and OpenGeoSys, both of which implement 3D–2D coupling, differ by several percent in most cases, which is attributable to, e.g., the placement of discrete unknowns in the mesh. Some of the PFLOTRAN results differ more, which can be explained by effects of the dispersion tensor evaluation scheme and of numerical diffusion; this phenomenon can grow stronger with fracture/matrix coupling and with large contrasts in parameter magnitudes. Although the study was not aimed at inverse modelling, the models were fit to the measured data approximately, demonstrating the intended real-case relevance of the benchmark.
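
    The dispersion tensor whose discrete evaluation the abstract identifies as a source of inter-code spread is, in its standard (Scheidegger) form, D = (α_L - α_T) v vᵀ/|v| + (α_T |v| + D_m) I. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

# Standard hydrodynamic dispersion tensor for a given Darcy velocity vector.
def dispersion_tensor(v, alpha_L, alpha_T, D_m):
    """D = (alpha_L - alpha_T) v v^T / |v| + (alpha_T |v| + D_m) I."""
    v = np.asarray(v, dtype=float)
    speed = np.linalg.norm(v)
    I = np.eye(len(v))
    if speed == 0.0:
        return D_m * I            # pure molecular diffusion at stagnation
    return ((alpha_L - alpha_T) * np.outer(v, v) / speed
            + (alpha_T * speed + D_m) * I)

# With velocity along x: D_xx = alpha_L*|v| + D_m and D_yy = alpha_T*|v| + D_m
# (dispersivities and velocity below are assumed values, not from the paper).
D = dispersion_tensor([1e-5, 0.0], alpha_L=10.0, alpha_T=1.0, D_m=1e-9)
```

    Codes differ in how cell-face velocities enter this formula, which is one plausible mechanism for the several-percent discrepancies the benchmark reports.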

  20. Real-case benchmark for flow and tracer transport in the fractured rock

    DOE PAGES

    Hokr, M.; Shao, H.; Gardner, W. P.; ...

    2016-09-19

The paper is intended to define a benchmark problem related to groundwater flow and natural tracer transport using observations of discharge and isotopic tracers in fractured, crystalline rock. Three numerical simulators are compared: Flow123d, OpenGeoSys, and PFLOTRAN. The data utilized in the project were collected in a water-supply tunnel in granite of the Jizera Mountains, Bedrichov, Czech Republic. The problem configuration combines subdomains of different dimensions, a 3D continuum for hard-rock blocks or matrix and 2D features for fractures or fault zones, together with realistic boundary conditions for tunnel-controlled drainage. Steady-state and transient flow and a pulse-injection tracer transport problem are solved. The results confirm mostly consistent behavior of the codes. The codes Flow123d and OpenGeoSys, both of which implement 3D–2D coupling, differ by several percent in most cases, which is attributable to, e.g., the placement of discrete unknowns in the mesh. Some of the PFLOTRAN results differ more, which can be explained by effects of the dispersion tensor evaluation scheme and of numerical diffusion; this phenomenon can grow stronger with fracture/matrix coupling and with large contrasts in parameter magnitudes. Although the study was not aimed at inverse modelling, the models were fit to the measured data approximately, demonstrating the intended real-case relevance of the benchmark.

  1. A Flow Solver for Three-Dimensional DRAGON Grids

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Zheng, Yao

    2002-01-01

The DRAGONFLOW code has been developed to solve the three-dimensional Navier-Stokes equations over complex geometries whose flow domain is discretized with the DRAGON grid, a combination of a Chimera grid and a collection of unstructured grids. In the DRAGONFLOW suite, both OVERFLOW and USM3D are presented in the form of module libraries, and a master module controls the invocation of these individual modules. This report includes essential aspects, programming structures, benchmark tests and numerical simulations.

  2. Using GTO-Velo to Facilitate Communication and Sharing of Simulation Results in Support of the Geothermal Technologies Office Code Comparison Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Signe K.; Purohit, Sumit; Boyd, Lauren W.

The Geothermal Technologies Office Code Comparison Study (GTO-CCS) aims to support the DOE Geothermal Technologies Office in organizing and executing a model comparison activity. This project is directed at testing, diagnosing differences, and demonstrating modeling capabilities of a worldwide collection of numerical simulators for evaluating geothermal technologies. Teams of researchers are collaborating in this code comparison effort, and it is important to be able to share results in a forum where technical discussions can easily take place without requiring teams to travel to a common location. Pacific Northwest National Laboratory has developed an open-source, flexible framework called Velo that provides a knowledge management infrastructure and tools to support modeling and simulation for a variety of types of projects in a number of scientific domains. GTO-Velo is a customized version of the Velo Framework that is being used as the collaborative tool in support of the GTO-CCS project. Velo is designed around a novel integration of a collaborative Web-based environment and a scalable enterprise Content Management System (CMS). The underlying framework provides a flexible and unstructured data storage system that allows for easy upload of files that can be in any format. Data files are organized in hierarchical folders and each folder and each file has a corresponding wiki page for metadata. The user interacts with Velo through web browser-based wiki technology, providing the benefit of familiarity and ease of use. High-level folders have been defined in GTO-Velo for the benchmark problem descriptions, descriptions of simulator/code capabilities, a project notebook, and folders for participating teams. Each team has a subfolder with write access limited only to the team members, where they can upload their simulation results. 
The GTO-CCS participants are charged with defining the benchmark problems for the study, and as each GTO-CCS benchmark problem is defined, the problem creator can provide a description using a template on the metadata page corresponding to the benchmark problem folder. Project documents, references and videos of the weekly online meetings are shared via GTO-Velo. A results comparison tool allows users to plot their uploaded simulation results on the fly, along with those of other teams, to facilitate weekly discussions of the benchmark problem results being generated by the teams. GTO-Velo is an invaluable tool providing the project coordinators and team members with a framework for collaboration among geographically dispersed organizations.

  3. Assessing the capability of continuum and discrete particle methods to simulate gas-solids flow using DNS predictions as a benchmark

    DOE PAGES

    Lu, Liqiang; Liu, Xiaowen; Li, Tingwen; ...

    2017-08-12

For this study, gas–solids flow in a three-dimensional periodic domain was numerically investigated by direct numerical simulation (DNS), computational fluid dynamics-discrete element method (CFD-DEM) and two-fluid model (TFM). DNS data obtained by finely resolving the flow around every particle are used as a benchmark to assess the validity of coarser DEM and TFM approaches. The CFD-DEM predicts the correct cluster size distribution and under-predicts the macro-scale slip velocity even with a grid size as small as twice the particle diameter. The TFM approach predicts larger cluster size and lower slip velocity with a homogeneous drag correlation. Although the slip velocity can be matched by a simple modification to the drag model, the predicted voidage distribution is still different from DNS: Both CFD-DEM and TFM over-predict the fraction of particles in dense regions and under-predict the fraction of particles in regions of intermediate void fractions. Also, the cluster aspect ratio of DNS is smaller than in CFD-DEM and TFM. Since a simple correction to the drag model can predict a correct slip velocity, it is hoped that drag corrections based on more elaborate theories that consider voidage gradient and particle fluctuations may be able to improve the current predictions of cluster distribution.
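
    The macro-scale slip velocity compared across DNS, CFD-DEM and TFM is, in its simplest form, the difference between the phase-averaged gas and particle velocities. The sketch below uses synthetic samples (all values assumed, not from the paper) to illustrate the definition:

```python
import numpy as np

# Synthetic phase velocity samples standing in for simulation output.
rng = np.random.default_rng(1)
u_gas = rng.normal(1.0, 0.1, 1000)     # sampled gas velocities (assumed)
u_solid = rng.normal(0.2, 0.1, 1000)   # sampled particle velocities (assumed)

# Macro-scale slip velocity: difference of the phase-averaged velocities.
slip = u_gas.mean() - u_solid.mean()
```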

  4. Assessing the capability of continuum and discrete particle methods to simulate gas-solids flow using DNS predictions as a benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Liu, Xiaowen; Li, Tingwen

For this study, gas–solids flow in a three-dimensional periodic domain was numerically investigated by direct numerical simulation (DNS), computational fluid dynamics-discrete element method (CFD-DEM) and two-fluid model (TFM). DNS data obtained by finely resolving the flow around every particle are used as a benchmark to assess the validity of coarser DEM and TFM approaches. The CFD-DEM predicts the correct cluster size distribution and under-predicts the macro-scale slip velocity even with a grid size as small as twice the particle diameter. The TFM approach predicts larger cluster size and lower slip velocity with a homogeneous drag correlation. Although the slip velocity can be matched by a simple modification to the drag model, the predicted voidage distribution is still different from DNS: Both CFD-DEM and TFM over-predict the fraction of particles in dense regions and under-predict the fraction of particles in regions of intermediate void fractions. Also, the cluster aspect ratio of DNS is smaller than in CFD-DEM and TFM. Since a simple correction to the drag model can predict a correct slip velocity, it is hoped that drag corrections based on more elaborate theories that consider voidage gradient and particle fluctuations may be able to improve the current predictions of cluster distribution.

  5. Excore Modeling with VERAShift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.

It is important to be able to accurately predict the neutron flux outside the immediate reactor core for a variety of safety and material analyses. Monte Carlo radiation transport calculations are required to produce the high-fidelity excore responses. Under this milestone VERA (specifically the VERAShift package) has been extended to perform excore calculations by running radiation transport calculations with Shift. This package couples VERA-CS with Shift to perform excore tallies for multiple state points concurrently, with each component capable of parallel execution on independent domains. Specifically, this package performs fluence calculations in the core barrel and vessel, or performs the requested tallies in any user-defined excore regions. VERAShift takes advantage of the general geometry package in Shift. This gives VERAShift the flexibility to explicitly model features outside the core barrel, including detailed vessel models, detectors, and power plant details. A very limited set of experimental and numerical benchmarks is available for excore simulation comparison. The Consortium for the Advanced Simulation of Light Water Reactors (CASL) has developed a set of excore benchmark problems to include as part of the VERA-CS verification and validation (V&V) problems. The excore capability in VERAShift has been tested on small representative assembly problems, multiassembly problems, and quarter-core problems. VERAView has also been extended to visualize these vessel fluence results from VERAShift. Preliminary vessel fluence results for quarter-core multistate calculations look very promising. Further development is needed to determine the details relevant to excore simulations. Validation of VERA for fluence and excore detectors still needs to be performed against experimental and numerical results.

  6. Application of Jacobian-free Newton–Krylov method in implicitly solving two-fluid six-equation two-phase flow problems: Implementation, validation and benchmark

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-03-09

    This work represents a first-of-its-kind successful application of advanced numerical methods to solving realistic two-phase flow problems with the two-fluid six-equation two-phase flow model. These advanced numerical methods include a high-resolution spatial discretization scheme on staggered grids, high-order fully implicit time integration schemes, and the Jacobian-free Newton–Krylov (JFNK) method as the nonlinear solver. The computer code developed in this work has been extensively validated against existing experimental flow boiling data in vertical pipes and rod bundles, which cover wide ranges of experimental conditions, such as pressure, inlet mass flux, wall heat flux and exit void fraction. An additional code-to-code benchmark with the RELAP5-3D code further verifies the correct code implementation. The combined methods employed in this work exhibit strong robustness in solving two-phase flow problems even when phase appearance (boiling) and realistic discrete flow regimes are considered. Transitional flow regimes used in existing system analysis codes, normally introduced to overcome numerical difficulty, were completely removed in this work. This in turn opens the possibility of utilizing more sophisticated flow regime maps in the future to further improve simulation accuracy.

  7. Development of comprehensive numerical schemes for predicting evaporating gas-droplets flow processes of a liquid-fueled combustor

    NASA Technical Reports Server (NTRS)

    Chen, C. P.

    1990-01-01

    An existing Computational Fluid Dynamics code for simulating complex turbulent flows inside a liquid rocket combustion chamber was validated and further developed. The Advanced Rocket Injector/Combustor Code (ARICC) is simplified and validated against benchmark flow situations for laminar and turbulent flows. The numerical method used in ARICC Code is re-examined for incompressible flow calculations. For turbulent flows, both the subgrid and the two equation k-epsilon turbulence models are studied. Cases tested include idealized Burger's equation in complex geometries and boundaries, a laminar pipe flow, a high Reynolds number turbulent flow, and a confined coaxial jet with recirculations. The accuracy of the algorithm is examined by comparing the numerical results with the analytical solutions as well as experimented data with different grid sizes.

  8. Interface COMSOL-PHREEQC (iCP), an efficient numerical framework for the solution of coupled multiphysics and geochemistry

    NASA Astrophysics Data System (ADS)

    Nardi, Albert; Idiart, Andrés; Trinchero, Paolo; de Vries, Luis Manuel; Molinero, Jorge

    2014-08-01

    This paper presents the development, verification and application of an efficient interface, denoted iCP, which couples two standalone simulation programs: the general-purpose finite element framework COMSOL Multiphysics® and the geochemical simulator PHREEQC. The main goal of the interface is to maximize the synergies between the aforementioned codes, providing a numerical platform that can efficiently simulate a wide range of multiphysics problems coupled with geochemistry. iCP is written in Java and uses the IPhreeqc C++ dynamic library and the COMSOL Java API. Given the large computational requirements of such coupled models, special emphasis has been placed on numerical robustness and efficiency. To this end, the geochemical reactions are solved in parallel by balancing the computational load over multiple threads. First, a benchmark exercise is used to test the reliability of iCP regarding flow and reactive transport. Then, a large-scale thermo-hydro-chemical (THC) problem is solved to show the code's capabilities. The results of the verification exercise compare successfully with those obtained using PHREEQC, and the application case demonstrates the scalability of a large-scale model, at least up to 32 threads.
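
    The coupling pattern at the heart of iCP, sequential operator splitting between a transport solver and a chemistry solver, can be illustrated with toy stand-ins for both halves: explicit upwind advection for "transport" and exact first-order decay for "chemistry". In the real interface these two steps are delegated to COMSOL and PHREEQC; everything below is an illustrative sketch.

```python
import numpy as np

# 1-D column: advection at speed v, first-order "reaction" with rate k.
nx, v, k = 100, 1.0, 2.0
dx = 1.0 / nx
dt = 0.5 * dx / v                       # CFL number 0.5
c = np.zeros(nx)                        # initially solute-free column

def transport_step(c):
    cn = c.copy()
    cn[1:] -= v * dt / dx * (c[1:] - c[:-1])   # first-order upwind
    cn[0] = 1.0                                # fixed-concentration inlet
    return cn

def chemistry_step(c):
    return c * np.exp(-k * dt)                 # exact decay over one step

for _ in range(400):                    # two flow-through times: near steady state
    c = chemistry_step(transport_step(c))
# Steady profile approximates c(x) = exp(-k * x / v).
```

Alternating the two solvers each step is the non-iterative splitting; iterating the pair within a step would give the stronger (iterative) coupling.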

  9. A new numerical benchmark of a freshwater lens

    NASA Astrophysics Data System (ADS)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. The benchmark describes the formation and degradation of a freshwater lens over time, as found beneath real-world islands. An error analysis yielded appropriate spatial and temporal discretizations of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and includes realistic features of coastal aquifers or freshwater lenses had been lacking. This new benchmark was thus developed and is demonstrated to be suitable for testing variable-density groundwater models applied to saltwater intrusion investigations.

  10. Auction dynamics: A volume constrained MBO scheme

    NASA Astrophysics Data System (ADS)

    Jacobs, Matt; Merkurjev, Ekaterina; Esedoǧlu, Selim

    2018-02-01

    We show how auction algorithms, originally developed for the assignment problem, can be utilized in Merriman, Bence, and Osher's threshold dynamics scheme to simulate multi-phase motion by mean curvature in the presence of equality and inequality volume constraints on the individual phases. The resulting algorithms are highly efficient and robust, and can be used in simulations ranging from minimal partition problems in Euclidean space to semi-supervised machine learning via clustering on graphs. In the case of the latter application, numerous experimental results on benchmark machine learning datasets show that our approach exceeds the performance of current state-of-the-art methods, while requiring a fraction of the computation time.
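
    The underlying scheme is Merriman-Bence-Osher threshold dynamics: briefly diffuse the phase indicator, then re-threshold it. The sketch below is the plain unconstrained two-phase version, under which a disk shrinks by mean curvature; the paper's contribution is to replace the thresholding step with an auction-based assignment that enforces volume constraints, which is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Indicator function of a disk on a 128x128 grid.
n = 128
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
u = (X**2 + Y**2 < 0.4**2).astype(float)

def mbo_step(u, sigma=4.0):
    """One MBO step: Gaussian diffusion, then pointwise thresholding."""
    return (gaussian_filter(u, sigma) > 0.5).astype(float)

areas = [u.sum()]
for _ in range(20):
    u = mbo_step(u)
    areas.append(u.sum())
# Motion by mean curvature shrinks a disk, so its area decreases.
```

With a volume constraint, the threshold level would no longer be fixed at 1/2 but chosen (via the auction) so each phase keeps its prescribed area.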

  11. Comparison of deterministic and stochastic methods for time-dependent Wigner simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Sihong, E-mail: sihong@math.pku.edu.cn; Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg

    2015-11-01

    Recently, a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation was proposed. While it has been thoroughly validated against physical benchmarks, no technical study of its numerical accuracy has been performed. To this end, this paper presents a first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell-average spectral element method, a highly accurate deterministic method utilized to provide reference solutions. Several different numerical tests involving the time-dependent evolution of a quantum wave packet are performed and discussed in detail. In particular, this allows us to identify a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve satisfactory accuracy.

  12. Accuracy and Numerical Stability Analysis of Lattice Boltzmann Method with Multiple Relaxation Time for Incompressible Flows

    NASA Astrophysics Data System (ADS)

    Pradipto; Purqon, Acep

    2017-07-01

    The lattice Boltzmann method (LBM) is a novel method for simulating fluid dynamics. Nowadays, applications of LBM range from incompressible flow and flow in porous media to microflows. The common collision model of LBM is BGK with a constant single relaxation time τ. However, BGK suffers from numerical instabilities. These instabilities can be eliminated by implementing LBM with multiple relaxation times (MRT). Both schemes have been implemented for the incompressible 2-D lid-driven cavity. The stability analysis was done by finding the maximum Reynolds number and velocity for converged simulations. The accuracy analysis was done by comparing the velocity profile with the benchmark results of Ghia et al. and calculating the net velocity flux. The tests concluded that LBM with MRT is more stable than BGK, with similar accuracy. The maximum Reynolds number that converges is 3200 for BGK and 7500 for MRT.
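
    A minimal D2Q9 BGK collide-and-stream loop shows the structure of the method. This sketch uses a periodic shear wave rather than the lid-driven cavity (which additionally needs bounce-back walls and a moving lid), and the MRT variant would replace the scalar 1/τ relaxation with a matrix acting in moment space.

```python
import numpy as np

# D2Q9 lattice: weights and discrete velocities.
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
tau = 0.8                                    # single BGK relaxation time
nx = ny = 32

def feq(rho, u):
    """Second-order equilibrium distribution."""
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return w * rho[..., None] * (1.0 + 3.0*cu + 4.5*cu**2 - 1.5*usq[..., None])

rho = np.ones((nx, ny))
u = np.zeros((nx, ny, 2))
u[..., 0] = 0.05 * np.sin(2*np.pi*np.arange(ny)/ny)[None, :]   # shear wave
f = feq(rho, u)

for _ in range(100):
    rho = f.sum(axis=-1)                                 # density moment
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]  # momentum moment
    f += (feq(rho, u) - f) / tau                         # BGK collision
    for q in range(9):                                   # periodic streaming
        f[..., q] = np.roll(f[..., q], tuple(c[q]), axis=(0, 1))
# Mass is conserved exactly; the shear wave decays at the viscous rate
# set by nu = (tau - 1/2) / 3.
```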

  13. GPU-accelerated simulations of isolated black holes

    NASA Astrophysics Data System (ADS)

    Lewis, Adam G. M.; Pfeiffer, Harald P.

    2018-05-01

    We present a port of the numerical relativity code SpEC which is capable of running on NVIDIA GPUs. Since this code must be maintained in parallel with SpEC itself, a primary design consideration is to perform as few explicit code changes as possible. We therefore rely on a hierarchy of automated porting strategies. At the highest level we use TLoops, a C++ library of our design, to automatically emit CUDA code equivalent to tensorial expressions written into C++ source using a syntax similar to analytic calculation. Next, we trace out and cache explicit matrix representations of the numerous linear transformations in the SpEC code, which allows these to be performed on the GPU using pre-existing matrix-multiplication libraries. We port the few remaining important modules by hand. In this paper we detail the specifics of our port, and present benchmarks of it simulating isolated black hole spacetimes on several generations of NVIDIA GPU.

  14. Spreading of correlations in the Falicov-Kimball model

    NASA Astrophysics Data System (ADS)

    Herrmann, Andreas J.; Antipov, Andrey E.; Werner, Philipp

    2018-04-01

    We study dynamical properties of the one- and two-dimensional Falicov-Kimball model using lattice Monte Carlo simulations. In particular, we calculate the spreading of charge correlations in the equilibrium model and after an interaction quench. The results show a reduction of the light-cone velocity with interaction strength at low temperature, while the phase velocity increases. At higher temperature, the initial spreading is determined by the Fermi velocity of the noninteracting system and the maximum range of the correlations decreases with increasing interaction strength. Charge order correlations in the disorder potential enhance the range of the correlations. We also use the numerically exact lattice Monte Carlo results to benchmark the accuracy of equilibrium and nonequilibrium dynamical cluster approximation calculations. It is shown that the bias introduced by the mapping to a periodized cluster is substantial, and that from a numerical point of view, it is more efficient to simulate the lattice model directly.

  15. Summary of FY15 results of benchmark modeling activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arguello, J. Guadalupe

    2015-08-01

    Sandia is a contributing partner in the third phase of a U.S.-German "Joint Project" entitled "Comparison of current constitutive models and simulation procedures on the basis of model calculations of the thermo-mechanical behavior and healing of rock salt." The first goal of the project is to check the ability of numerical modeling tools to correctly describe the relevant deformation phenomena in rock salt under various influences. Achieving this goal will lead to increased confidence in the results of numerical simulations related to the secure storage of radioactive wastes in rock salt, thereby enhancing the acceptance of those results. These results may ultimately be used to make various assertions regarding both the stability analysis of an underground repository in salt, during the operating phase, and the long-term integrity of the geological barrier against the release of harmful substances into the biosphere, in the post-operating phase.

  16. Numerical Benchmark of 3D Ground Motion Simulation in the Alpine valley of Grenoble, France.

    NASA Astrophysics Data System (ADS)

    Tsuno, S.; Chaljub, E.; Cornou, C.; Bard, P.

    2006-12-01

    Thanks to the use of sophisticated numerical methods and access to increasing computational resources, our predictions of strong ground motion are becoming more and more realistic and need to be carefully compared. We report our effort of benchmarking numerical methods of ground motion simulation in the case of the valley of Grenoble in the French Alps. The Grenoble valley is typical of a moderate-seismicity area where strong site effects occur. The benchmark consisted in computing the seismic response of the `Y'-shaped Grenoble valley to (i) two local earthquakes (Ml<=3) for which recordings were available; and (ii) two local hypothetical events (Mw=6) occurring on the so-called Belledonne Border Fault (BBF) [1]. A free-style prediction was also proposed, in which participants were allowed to vary the source and/or the model parameters and were asked to provide the resulting uncertainty in their estimation of ground motion. We received a total of 18 contributions from 14 different groups; 7 of these use 3D methods, among which 3 could handle surface topography; the other half comprises predictions based upon 1D (2 contributions), 2D (4 contributions) and empirical Green's function (EGF) (3 contributions) methods. The maximal frequency analysed ranged between 2.5 Hz for 3D calculations and 40 Hz for EGF predictions. We present a detailed comparison of the different predictions using raw indicators (e.g. peak values of ground velocity and acceleration, Fourier spectra, site over reference spectral ratios, ...) as well as sophisticated misfit criteria based upon previous works [2,3]. We further discuss the variability in estimating the importance of particular effects such as non-linear rheology, or surface topography. References: [1] Thouvenot F. et al., The Belledonne Border Fault: identification of an active seismic strike-slip fault in the western Alps, Geophys. J. Int., 155 (1), p. 174-192, 2003. 
[2] Anderson J., Quantitative measure of the goodness-of-fit of synthetic seismograms, proceedings of the 13th World Conference on Earthquake Engineering, Vancouver, paper #243, 2004. [3] Kristekova M. et al., Misfit Criteria for Quantitative Comparison of Seismograms, Bull. Seism. Soc. Am., in press, 2006.
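
    Two of the "raw indicators" mentioned above can be computed in a few lines: a peak-value ratio and a normalized RMS waveform misfit between two predictions of the same trace. The synthetic seismograms below are illustrative stand-ins for two codes' outputs; the time-frequency misfit criteria of Kristekova et al. are considerably more sophisticated than this sketch.

```python
import numpy as np

# Two synthetic velocity traces standing in for two codes' predictions:
# the "test" trace is 10% stronger and shifted by 0.05 s.
t = np.linspace(0.0, 10.0, 1001)
ref = np.exp(-((t - 4.0) / 0.8)**2) * np.sin(2*np.pi*1.5*t)
test = 1.1 * np.exp(-((t - 4.05) / 0.8)**2) * np.sin(2*np.pi*1.5*t)

# Raw indicator 1: ratio of peak absolute amplitudes (a PGV-style check).
pgv_ratio = np.abs(test).max() / np.abs(ref).max()

# Raw indicator 2: RMS misfit normalized by the reference trace's norm;
# note how sensitive it is to even a small time shift.
rms_misfit = np.linalg.norm(test - ref) / np.linalg.norm(ref)
```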

  17. Upgrades for the CMS simulation

    DOE PAGES

    Lange, D. J.; Hildreth, M.; Ivantchenko, V. N.; ...

    2015-05-22

    Over the past several years, the CMS experiment has made significant changes to its detector simulation application. The geometry has been generalized to include modifications being made to the CMS detector for 2015 operations, as well as model improvements to the simulation geometry of the current CMS detector and the implementation of a number of approved and possible future detector configurations. These include both completely new tracker and calorimetry systems. We have completed the transition to Geant4 version 10, and we have made significant progress in reducing the CPU resources required to run our Geant4 simulation. These gains have been achieved through both technical improvements and numerical techniques. Substantial speed improvements have been achieved without changing the physics validation benchmarks that the experiment uses to validate our simulation application for use in production. We discuss the methods that we implemented and the corresponding performance improvements deployed for our 2015 simulation application.

  18. Numerical Simulations of Vortex Shedding in Hydraulic Turbines

    NASA Technical Reports Server (NTRS)

    Dorney, Daniel; Marcu, Bogdan

    2004-01-01

    Turbomachines for rocket propulsion applications operate with many different working fluids and flow conditions. Oxidizer boost turbines often operate in liquid oxygen, resulting in an incompressible flow field. Vortex shedding from airfoils in this flow environment can have adverse effects on both turbine performance and durability. In this study the effects of vortex shedding in a low-pressure oxidizer turbine are investigated. Benchmark results are also presented for vortex shedding behind a circular cylinder. The predicted results are compared with available experimental data.
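
    A standard way to quantify the cylinder benchmark mentioned above is to extract the shedding frequency from a force time series and report the Strouhal number St = f D / U (about 0.2 for a circular cylinder over a wide Reynolds-number range). The lift signal below is synthetic, standing in for simulation output.

```python
import numpy as np

D, U, dt = 1.0, 1.0, 0.01                    # cylinder diameter, flow speed, sample step
t = np.arange(0.0, 200.0, dt)
rng = np.random.default_rng(0)
cl = 0.3 * np.sin(2*np.pi*0.2*t) + 0.01 * rng.standard_normal(t.size)

spec = np.abs(np.fft.rfft(cl - cl.mean()))   # amplitude spectrum of the lift
freqs = np.fft.rfftfreq(t.size, dt)
f_shed = freqs[spec.argmax()]                # dominant shedding frequency
St = f_shed * D / U                          # Strouhal number
```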

  19. Compiler-directed cache management in multiprocessors

    NASA Technical Reports Server (NTRS)

    Cheong, Hoichi; Veidenbaum, Alexander V.

    1990-01-01

    The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.
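
    The flavor of the version control scheme can be conveyed with a toy model: each shared variable carries a version number that writers bump, and a cached copy is usable only while its recorded version is still current. All names and structure here are illustrative inventions, not taken from the paper.

```python
class ToyCache:
    """Cached copies are (value, version) pairs; a copy is valid only if its
    version matches the variable's current version."""
    def __init__(self):
        self.data = {}
        self.hits = 0
        self.misses = 0

    def read(self, var, memory, versions):
        entry = self.data.get(var)
        if entry is not None and entry[1] == versions[var]:
            self.hits += 1                     # cached copy still current
            return entry[0]
        self.misses += 1                       # cold or stale: refetch
        self.data[var] = (memory[var], versions[var])
        return memory[var]

memory = {'x': 1}
versions = {'x': 0}
cache = ToyCache()
cache.read('x', memory, versions)              # cold miss
cache.read('x', memory, versions)              # hit
memory['x'] = 2
versions['x'] += 1                             # another processor wrote 'x'
latest = cache.read('x', memory, versions)     # stale version: miss, refetch
```

No broadcast invalidation traffic is needed: staleness is detected locally by comparing version numbers, which is the property that makes such schemes attractive at large processor counts.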

  20. Evaluating the Effect of Labeled Benchmarks on Children’s Number Line Estimation Performance and Strategy Use

    PubMed Central

    Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen

    2017-01-01

    Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children’s strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line positively affects third and fifth graders’ NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values would have a positive effect on younger children’s NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders’ NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. 
These findings imply that children’s benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children’s age and familiarity with the number range, these additional external benchmarks might need to be labeled. PMID:28713302

  2. Accelerating cardiac bidomain simulations using graphics processing units.

    PubMed

    Neic, A; Liebmann, M; Hoetzl, E; Mitchell, L; Vigmond, E J; Haase, G; Plank, G

    2012-08-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element methods (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility.

  4. Benchmarking nitrogen removal suspended-carrier biofilm systems using dynamic simulation.

    PubMed

    Vanhooren, H; Yuan, Z; Vanrolleghem, P A

    2002-01-01

    We are witnessing enormous growth in biological nitrogen removal from wastewater. It presents specific challenges beyond traditional COD (carbon) removal. One possibility for optimised process design is the use of biomass-supporting media. In this paper, attached growth processes (AGP) are evaluated using dynamic simulations. The advantages of these systems, which were qualitatively described elsewhere, are validated quantitatively based on a simulation benchmark for activated sludge treatment systems. This simulation benchmark is extended with a biofilm model that allows for fast and accurate simulation of the conversion of different substrates in a biofilm. The economic feasibility of the system is evaluated using the data generated with the benchmark simulations. Capital savings due to volume reduction and reduced sludge production are weighed against increased aeration costs. Effluent quality is integrated into this evaluation as well.

  5. Introduction to the IWA task group on biofilm modeling.

    PubMed

    Noguera, D R; Morgenroth, E

    2004-01-01

    An International Water Association (IWA) Task Group on Biofilm Modeling was created with the purpose of comparatively evaluating different biofilm modeling approaches. The task group developed three benchmark problems for this comparison, and used a diversity of modeling techniques that included analytical, pseudo-analytical, and numerical solutions to the biofilm problems. Models in one, two, and three dimensional domains were also compared. The first benchmark problem (BM1) described a monospecies biofilm growing in a completely mixed reactor environment and had the purpose of comparing the ability of the models to predict substrate fluxes and concentrations for a biofilm system of fixed total biomass and fixed biomass density. The second problem (BM2) represented a situation in which substrate mass transport by convection was influenced by the hydrodynamic conditions of the liquid in contact with the biofilm. The third problem (BM3) was designed to compare the ability of the models to simulate multispecies and multisubstrate biofilms. These three benchmark problems allowed identification of the specific advantages and disadvantages of each modeling approach. A detailed presentation of the comparative analyses for each problem is provided elsewhere in these proceedings.
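
    A BM1-style flux check can be illustrated on a deliberately simpler problem than the benchmark's Monod kinetics: diffusion with first-order consumption in a flat biofilm, D S'' = k S, with fixed concentration Sb at the surface and no flux at the substratum. This case has the classical analytical flux J = sqrt(D k) Sb tanh(phi), phi = Lf sqrt(k/D), which a finite-difference solution should reproduce; all parameter values below are illustrative.

```python
import numpy as np

# Illustrative parameters: diffusivity (m^2/s), rate (1/s), thickness (m), bulk conc.
D, k, Lf, Sb = 2e-9, 1e-3, 5e-4, 1.0
phi = Lf * np.sqrt(k / D)                    # Thiele modulus
J_exact = np.sqrt(D * k) * Sb * np.tanh(phi) # analytical flux into the film

# Finite-difference solve of D S'' - k S = 0 on z in [0, Lf].
n = 200
z = np.linspace(0.0, Lf, n)
h = z[1] - z[0]
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(1, n - 1):                    # interior nodes
    A[i, i-1:i+2] = [D/h**2, -2.0*D/h**2 - k, D/h**2]
A[0, 0], A[0, 1] = -1.0, 1.0                 # no flux at the substratum
A[-1, -1], b[-1] = 1.0, Sb                   # fixed concentration at the surface
S = np.linalg.solve(A, b)
J_num = D * (S[-1] - S[-2]) / h              # flux entering through the surface
```

Agreement of `J_num` with `J_exact` is exactly the kind of substrate-flux comparison BM1 used to rank the modeling approaches, here on an analytically tractable surrogate.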

  6. Benchmarking on Tsunami Currents with ComMIT

    NASA Astrophysics Data System (ADS)

    Sharghi vand, N.; Kanoglu, U.

    2015-12-01

    There were no standards for the validation and verification of tsunami numerical models before the 2004 Indian Ocean tsunami. Even so, a number of numerical models had been used for inundation mapping efforts, evaluation of critical structures, etc., without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which are used in the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental and field benchmark problems aimed at estimating maximum runup, which are widely accepted by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents, held February 9-10, 2015 in Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems concentrate on the validation and verification of tsunami numerical models for tsunami currents. Three of the benchmark problems were: current measurements of the Japan 2011 tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), a user-friendly interface to the validated and verified Method of Splitting Tsunami (MOST) model (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316) developed by NCTR. The modeling results are compared with the required benchmark data, showing good agreement; the results are discussed. 
Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe)

  7. Enhanced Verification Test Suite for Physics Simulation Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, J R; Brock, J S; Brandon, S T

    2008-10-10

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest.more » This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) Hydrodynamics; (b) Transport processes; and (c) Dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code be evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes. 
More complicated test cases, including physics models of greater sophistication or other physics regimes (e.g., energetic material response, magneto-hydrodynamics), would represent a scientifically desirable complement to the fundamental test cases discussed in this report. The authors believe that this document can be used to enhance the verification analyses undertaken at the DOE WP Laboratories and, thus, to improve the quality, credibility, and usefulness of the simulation codes that are analyzed with these problems.

  8. Direct Numerical Simulation of Turbulent Flow Over Complex Bathymetry

    NASA Astrophysics Data System (ADS)

    Yue, L.; Hsu, T. J.

    2017-12-01

Direct numerical simulation (DNS) is regarded as a powerful tool for investigating turbulent flow featuring a wide range of temporal and spatial scales. Applying a coordinate transformation within a pseudo-spectral scheme, a parallelized numerical modeling system was created to simulate flow over complex bathymetry with high numerical accuracy and efficiency. The transformed governing equations were integrated in time using a third-order low-storage Runge-Kutta method. For spatial discretization, a discrete Fourier expansion was adopted in the streamwise and spanwise directions, enforcing periodic boundary conditions in both. A Chebyshev expansion on Chebyshev-Gauss-Lobatto points was used in the wall-normal direction, with no-slip conditions enforced on the top and bottom walls. The diffusion terms were discretized with a Crank-Nicolson scheme, while the advection terms, dealiased with the 2/3 rule, were discretized with an Adams-Bashforth scheme. In the prediction step, the velocity was calculated in the physical domain by solving the resulting linear equation directly. However, the extra terms introduced by the coordinate transformation impose a strict limitation on the time step, so an iteration method was applied in the pressure-correction step, which solves the Helmholtz equation, to overcome this restriction. The numerical solver is written in object-oriented C++ and uses the Armadillo linear algebra library for matrix computation. Several benchmark cases in laminar and turbulent flow were carried out to verify and validate the numerical model, with very good agreement achieved. Ongoing work focuses on implementing sediment transport capability for multiple sediment classes and parameterizations for flocculation processes.
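The 2/3-rule dealiasing step mentioned above can be illustrated compactly. The following is a minimal 1-D sketch (function names are ours, not from the solver described in the abstract): the upper third of Fourier modes is zeroed before and after forming a quadratic product in physical space, so aliased contributions from the product cannot contaminate the retained modes.

```python
import numpy as np

def dealias_23(u_hat):
    """Zero the upper third of Fourier modes (the 2/3 rule)."""
    n = u_hat.size
    k = np.fft.fftfreq(n, d=1.0 / n)   # signed integer wavenumbers
    return u_hat * (np.abs(k) < n // 3)

def convolve_dealiased(u, v):
    """Pseudo-spectral product u*v with 2/3-rule dealiasing applied
    to both inputs and to the result."""
    u_d = np.real(np.fft.ifft(dealias_23(np.fft.fft(u))))
    v_d = np.real(np.fft.ifft(dealias_23(np.fft.fft(v))))
    return dealias_23(np.fft.fft(u_d * v_d))
```

For example, squaring cos(7x) on a 24-point grid produces a mode-14 contribution that aliases onto mode 10; the final truncation removes it, leaving only the mean.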

  9. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

With the wide availability of affordable multi-core parallel supercomputers, next-generation numerical simulations of flow physics are focusing on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the development effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) a high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. This approach relieves the stringent time-step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  10. Reactivity impact of {sup 16}O thermal elastic-scattering nuclear data for some numerical and critical benchmark systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozier, K. S.; Roubtsov, D.; Plompen, A. J. M.

    2012-07-01

The thermal neutron-elastic-scattering cross-section data for {sup 16}O used in various modern evaluated-nuclear-data libraries were reviewed and found to be generally too high compared with the best available experimental measurements. Some of the proposed revisions to the ENDF/B-VII.0 {sup 16}O data library and recent results from the TENDL system increase this discrepancy further. The reactivity impact of revising the {sup 16}O data downward to be consistent with the best measurements was tested using the JENDL-3.3 {sup 16}O cross-section values and was found to be very small in MCNP5 simulations of the UO{sub 2} and reactor-recycle MOX-fuel cases of the ANS Doppler-defect numerical benchmark. However, large reactivity differences of up to about 14 mk (1400 pcm) were observed using {sup 16}O data files from several evaluated-nuclear-data libraries in MCNP5 simulations of the Los Alamos National Laboratory HEU heavy-water solution thermal critical experiments, which were performed in the 1950s. The latter result suggests that new measurements using HEU in a heavy-water-moderated critical facility, such as the ZED-2 zero-power reactor at the Chalk River Laboratories, might help to resolve the discrepancy between the {sup 16}O thermal elastic-scattering cross-section values and thereby reduce or better define their uncertainty, although additional assessment work would be needed to confirm this.
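The reactivity units quoted above (mk and pcm) are simple rescalings of the dimensionless static reactivity rho = (k_eff - 1)/k_eff. A minimal sketch of the conversions, with helper names of our own choosing:

```python
def reactivity_from_k(k_eff):
    """Static reactivity rho = (k_eff - 1) / k_eff, dimensionless."""
    return (k_eff - 1.0) / k_eff

def mk_to_pcm(rho_mk):
    """1 mk = 1e-3 in reactivity; 1 pcm = 1e-5, so 1 mk = 100 pcm."""
    return rho_mk * 100.0
```

This reproduces the abstract's equivalence of 14 mk and 1400 pcm.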

  11. Performance of Landslide-HySEA tsunami model for NTHMP benchmarking validation process

    NASA Astrophysics Data System (ADS)

    Macias, Jorge

    2017-04-01

In its FY2009 Strategic Plan, the NTHMP required that all numerical tsunami inundation models be verified as accurate and consistent through a model benchmarking process. This was completed in 2011, but only for seismic tsunami sources and in a limited manner for idealized solid underwater landslides. Recent work by various NTHMP states, however, has shown that landslide tsunami hazard may be dominant along significant parts of the US coastline, as compared to hazards from other tsunamigenic sources. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks. The Landslide-HySEA model participated in the workshop organized at Texas A&M University - Galveston, on January 9-11, 2017. The aim of this presentation is to show some of the numerical results obtained with Landslide-HySEA in the framework of this benchmarking validation/verification effort. Acknowledgements. This research has been partially supported by the Junta de Andalucía research project TESELA (P11-RNM7069), the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  12. A New Code SORD for Simulation of Polarized Light Scattering in the Earth Atmosphere

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey; Lyapustin, Alexei; Sinyuk, Aliaksandr; Holben, Brent

    2016-01-01

We report a new publicly available radiative transfer (RT) code for numerical simulation of polarized light scattering in the plane-parallel atmosphere of the Earth. Using 44 benchmark tests, we demonstrate the high accuracy of the new RT code, SORD (Successive ORDers of scattering). We describe the capabilities of SORD and show the run time for each test on two different machines. At present, SORD is intended to work as part of the Aerosol Robotic NETwork (AERONET) inversion algorithm. For natural integration with the AERONET software, SORD is coded in Fortran 90/95. The code is available by email request from the corresponding (first) author or from ftp://climate1.gsfc.nasa.gov/skorkin/SORD/.

  13. Investigation of the Dynamic Contact Angle Using a Direct Numerical Simulation Method.

    PubMed

    Zhu, Guangpu; Yao, Jun; Zhang, Lei; Sun, Hai; Li, Aifen; Shams, Bilal

    2016-11-15

A large amount of residual oil, which exists as isolated oil slugs, remains trapped in reservoirs after water flooding. Numerous numerical studies have been performed to investigate the fundamental flow mechanisms of oil slugs and improve flooding efficiency. Dynamic contact angle models are usually introduced to simulate an accurate contact angle and meniscus displacement of oil slugs under a high capillary number. Nevertheless, when the capillary number is small, introducing a dynamic contact angle model into an oil-slug flow simulation is unnecessary, because the resulting change in the meniscus displacement is negligible. Therefore, a critical capillary number should be introduced to judge whether the dynamic contact angle model should be incorporated into simulations. In this study, a direct numerical simulation method is employed to simulate oil-slug flow in a capillary tube at the pore scale. The position of the interface between water and the oil slug is determined using the phase-field method. The capacity and accuracy of the model are validated using a classical benchmark: a dynamic capillary filling process. Then, different dynamic contact angle models and the factors that affect the dynamic contact angle are analyzed. The meniscus displacements of oil slugs with a dynamic contact angle and a static contact angle (SCA) are obtained during simulations, and the relative error between them is calculated automatically. The relative error limit has been defined as 5%, beyond which the dynamic contact angle model needs to be incorporated into the simulation to approach the realistic displacement. Thus, the desired critical capillary number can be determined. A three-dimensional universal chart of the critical capillary number, as a function of static contact angle and viscosity ratio, is given to provide a guideline for oil-slug simulation. A fitting formula is also presented for ease of use.
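The screening criterion described above reduces to comparing the simulated capillary number against the critical value read off the chart. A minimal sketch (the 5% error limit is from the abstract; the numerical values in the usage line are illustrative, not from the study):

```python
def capillary_number(mu, velocity, sigma):
    """Capillary number Ca = mu * U / sigma (viscous vs. capillary forces)."""
    return mu * velocity / sigma

def needs_dynamic_contact_angle(ca, ca_critical):
    """Incorporate a dynamic contact angle model only above the critical
    capillary number, below which the static-contact-angle (SCA) result
    differs from the dynamic one by less than the 5% error limit."""
    return ca > ca_critical

# Illustrative values: water-like viscosity, slow meniscus, oil/water tension.
ca = capillary_number(mu=1.0e-3, velocity=0.01, sigma=0.072)
```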

  14. 3D numerical simulations of negative hydrogen ion extraction using realistic plasma parameters, geometry of the extraction aperture and full 3D magnetic field map

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Franzen, P.; Fantz, U.; Minea, T.

    2014-02-01

Decreasing the co-extracted electron current while simultaneously keeping the negative ion (NI) current sufficiently high is a crucial issue in the development of the plasma source system for the ITER Neutral Beam Injector. To support the search for the best extraction conditions, the 3D Particle-in-Cell Monte Carlo Collision electrostatic code ONIX (Orsay Negative Ion eXtraction) has been developed. Close collaboration with experiments and other numerical models allows realistic simulations with relevant input parameters: plasma properties, geometry of the extraction aperture, a full 3D magnetic field map, etc. For the first time, ONIX has been benchmarked against the commercial positive-ion tracing code KOBRA3D. Very good agreement in terms of the meniscus position and depth has been found. Simulations of NI extraction with different e/NI ratios in the bulk plasma show that direct extraction of surface-produced NI is highly relevant for obtaining extracted NI currents comparable to the experimental results from the BATMAN testbed.

  15. GPUs, a New Tool of Acceleration in CFD: Efficiency and Reliability on Smoothed Particle Hydrodynamics Methods

    PubMed Central

    Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.

    2011-01-01

    Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185

  16. Numerical Prediction of Signal for Magnetic Flux Leakage Benchmark Task

    NASA Astrophysics Data System (ADS)

    Lunin, V.; Alexeevsky, D.

    2003-03-01

Numerical results predicted by a finite-element-based code are presented. The nonlinear magnetic time-dependent benchmark problem, proposed by the World Federation of Nondestructive Evaluation Centers, involves numerical prediction of the normal (radial) component of the leaked field in the vicinity of two practically rectangular notches machined on a rotating steel pipe (with a known nonlinear magnetic characteristic). One notch is located on the external surface of the pipe and the other on the internal surface; both are oriented axially.

  17. Anelastic and Compressible Simulation of Moist Dynamics at Planetary Scales

    NASA Astrophysics Data System (ADS)

    Kurowski, M.; Smolarkiewicz, P. K.; Grabowski, W.

    2015-12-01

Moist anelastic and compressible numerical solutions to the planetary baroclinic instability and climate benchmarks are compared. The solutions are obtained applying a consistent numerical framework for discrete integrations of the various nonhydrostatic flow equations. The moist extension of the baroclinic instability benchmark is formulated as an analog of the dry case. Flow patterns, surface vertical vorticity and pressure, total kinetic energy, power spectra, and total amount of condensed water are analyzed. The climate benchmark extends the baroclinic instability study by addressing long-term statistics of an idealized planetary equilibrium and associated meridional transports. Short-term deterministic anelastic and compressible solutions differ significantly. In particular, anelastic baroclinic eddies propagate faster and develop slower owing to, respectively, the modified dispersion relation and abbreviated baroclinic vorticity production. These eddies also carry less kinetic energy, and the onset of their rapid growth occurs later than for the compressible solutions. The observed differences between the two solutions are sensitive to initial conditions, as they diminish for large-amplitude excitations of the instability. In particular, on climatic time scales, the anelastic and compressible solutions evince similar zonally averaged flow patterns with matching meridional transports of entropy, momentum, and moisture.

  18. Constant pressure and temperature discrete-time Langevin molecular dynamics

    NASA Astrophysics Data System (ADS)

    Grønbech-Jensen, Niels; Farago, Oded

    2014-11-01

    We present a new and improved method for simultaneous control of temperature and pressure in molecular dynamics simulations with periodic boundary conditions. The thermostat-barostat equations are built on our previously developed stochastic thermostat, which has been shown to provide correct statistical configurational sampling for any time step that yields stable trajectories. Here, we extend the method and develop a set of discrete-time equations of motion for both particle dynamics and system volume in order to seek pressure control that is insensitive to the choice of the numerical time step. The resulting method is simple, practical, and efficient. The method is demonstrated through direct numerical simulations of two characteristic model systems—a one-dimensional particle chain for which exact statistical results can be obtained and used as benchmarks, and a three-dimensional system of Lennard-Jones interacting particles simulated in both solid and liquid phases. The results, which are compared against the method of Kolb and Dünweg [J. Chem. Phys. 111, 4453 (1999)], show that the new method behaves according to the objective, namely that acquired statistical averages and fluctuations of configurational measures are accurate and robust against the chosen time step applied to the simulation.
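The stochastic thermostat referenced above (Grønbech-Jensen and Farago) admits a compact one-particle sketch. The update below follows the published discrete-time equations, but the 1-D harness is ours and purely illustrative; with the friction set to zero the scheme reduces to velocity Verlet.

```python
import numpy as np

def gjf_step(x, v, force, dt, m, alpha, kT, rng):
    """One Gronbech-Jensen/Farago Langevin step for a single 1-D particle.
    alpha is the friction coefficient; beta is Gaussian thermal noise
    with variance 2 * alpha * kT * dt."""
    b = 1.0 / (1.0 + alpha * dt / (2.0 * m))
    a = (1.0 - alpha * dt / (2.0 * m)) * b
    beta = rng.normal(0.0, np.sqrt(2.0 * alpha * kT * dt))
    f_old = force(x)
    x_new = (x + b * dt * v + b * dt * dt / (2.0 * m) * f_old
             + b * dt / (2.0 * m) * beta)
    v_new = a * v + dt / (2.0 * m) * (a * f_old + force(x_new)) + b / m * beta
    return x_new, v_new
```

For alpha = 0 the noise vanishes and the scheme conserves the harmonic-oscillator energy to the usual velocity-Verlet accuracy, which makes a convenient sanity check.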

  19. Coupled Thermo-Hydro-Mechanical Numerical Framework for Simulating Unconventional Formations

    NASA Astrophysics Data System (ADS)

    Garipov, T. T.; White, J. A.; Lapene, A.; Tchelepi, H.

    2016-12-01

Unconventional deposits are found in all of the world's oil provinces. Modeling these systems is challenging, however, due to the complex thermo-hydro-mechanical processes that govern their behavior. As a motivating example, we consider in situ thermal processing of oil shale deposits. When oil shale is heated to sufficient temperatures, kerogen can be converted to oil and gas products over a relatively short timespan. This phase change dramatically impacts both the mechanical and hydrologic properties of the rock, leading to strongly coupled THMC interactions. Here, we present a numerical framework for simulating tightly coupled chemistry, geomechanics, and multiphase flow within a reservoir simulator (the AD-GPRS General Purpose Research Simulator). We model changes in the constitutive behavior of the rock using a thermoplasticity model that accounts for microstructural evolution. The multi-component, multiphase flow and transport processes of both mass and heat are modeled at the macroscopic (e.g., Darcy) scale. The phase compositions and properties are described by a cubic equation of state; Arrhenius-type chemical reactions are used to represent kerogen conversion. The system of partial differential equations is discretized using finite volumes for the flow problem and finite elements for the mechanics problem. Fully implicit and sequentially implicit methods are used to solve the resulting nonlinear problem. The proposed framework is verified against available analytical and numerical benchmark cases. We demonstrate the efficiency, performance, and capabilities of the proposed simulation framework by analyzing near-well deformation in an oil shale formation.
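The Arrhenius-type kinetics mentioned above for kerogen conversion have the standard form k = A exp(-Ea / (R T)). A minimal sketch (parameter values used for checking are illustrative, not calibrated to oil shale):

```python
import math

def arrhenius_rate(prefactor, activation_energy_j_per_mol, temperature_k,
                   gas_constant=8.314):
    """Arrhenius rate constant k = A * exp(-Ea / (R * T))."""
    return prefactor * math.exp(
        -activation_energy_j_per_mol / (gas_constant * temperature_k))
```

The strong temperature sensitivity of this form is why in situ heating converts kerogen "over a relatively short timespan" once sufficient temperatures are reached.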

  20. A 3-D Finite-Volume Non-hydrostatic Icosahedral Model (NIM)

    NASA Astrophysics Data System (ADS)

    Lee, Jin

    2014-05-01

The Nonhydrostatic Icosahedral Model (NIM) formulates the latest numerical innovations of the three-dimensional finite-volume control volume on a quasi-uniform icosahedral grid suitable for ultra-high-resolution simulations. NIM's modeling goal is to improve numerical accuracy for weather and climate simulations and to utilize state-of-the-art computing architectures, such as massively parallel CPUs and GPUs, to deliver routine high-resolution forecasts in a timely manner. NIM dynamical core innovations include: * a local coordinate system remapped from the spherical surface to a plane for numerical accuracy (Lee and MacDonald, 2009); * grid points in a table-driven horizontal loop that allow any horizontal point sequence (A. E. MacDonald et al., 2010); * Flux-Corrected Transport formulated on finite-volume operators to maintain conservative, positive-definite transport (J.-L. Lee et al., 2010); * icosahedral grid optimization (Wang and Lee, 2011); * all differentials evaluated as three-dimensional finite-volume integrals around the control volume. The three-dimensional finite-volume solver in NIM is designed to improve the pressure gradient calculation and orographic precipitation over complex terrain. The NIM dynamical core has been successfully verified with various non-hydrostatic benchmark test cases, such as internal gravity waves and mountain waves, in the Dynamical Core Model Intercomparison Project (DCMIP). Physical parameterizations suitable for NWP have been incorporated into the NIM dynamical core and successfully tested with multi-month aqua-planet simulations. Recently, NIM has started real-data simulations using GFS initial conditions. Results from the idealized tests as well as the real-data simulations will be shown at the conference.

  1. Benchmarking with the BLASST Sessional Staff Standards Framework

    ERIC Educational Resources Information Center

    Luzia, Karina; Harvey, Marina; Parker, Nicola; McCormack, Coralie; Brown, Natalie R.

    2013-01-01

    Benchmarking as a type of knowledge-sharing around good practice within and between institutions is increasingly common in the higher education sector. More recently, benchmarking as a process that can contribute to quality enhancement has been deployed across numerous institutions with a view to systematising frameworks to assure and enhance the…

  2. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison results

    NASA Astrophysics Data System (ADS)

    Grenier, Christophe; Roux, Nicolas; Anbergen, Hauke; Collier, Nathaniel; Costard, Francois; Ferrry, Michel; Frampton, Andrew; Frederick, Jennifer; Holmen, Johan; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Orgogozo, Laurent; Rivière, Agnès; Rühaak, Wolfram; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik

    2015-04-01

The impacts of climate change in boreal regions have received considerable attention recently due to the warming trends experienced in recent decades, which are expected to intensify in the future. Large portions of these regions, corresponding to permafrost areas, are covered by water bodies (lakes, rivers) that interact with the surrounding permafrost. For example, the thermal state of the surrounding soil influences the energy and water budget of the surface water bodies. These water bodies also generate taliks (unfrozen zones below them) that disturb the thermal regime of the permafrost and may play a key role in the context of climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D thermo-hydraulic (TH) approach is required to understand and model the past and future evolution of landscapes, rivers, lakes and associated groundwater systems in a changing climate. However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, which can be partly attributed to the difficulty of verifying multi-dimensional results produced by numerical models. Numerical approaches can only be validated against analytical solutions for a purely thermal 1D equation with phase change (e.g. Neumann, Lunardini). For the coupled TH system (two highly non-linear coupled equations), the only possible approach is to compare the results from different codes on common test cases and/or against controlled experiments. Such inter-code comparisons can drive discussions that improve code performance. A benchmark exercise was initiated in 2014 with a kick-off meeting in Paris in November. Participants from the USA, Canada, Germany, Sweden and France convened, representing altogether 13 simulation codes. The benchmark exercises consist of several test cases inspired by the existing literature (e.g. McKenzie et al., 2007) as well as new ones. 
They range from simpler, purely thermal cases (benchmark T1) to more complex, coupled 2D TH cases (benchmarks TH1, TH2, and TH3). Some experimental cases conducted in a cold room complement the validation approach. A web site hosted by LSCE (Laboratoire des Sciences du Climat et de l'Environnement) serves as an interaction platform for the participants and hosts the test-case database at the following address: https://wiki.lsce.ipsl.fr/interfrost. The results of the first stage of the benchmark exercise will be presented, focusing mainly on the inter-comparison of participant results for the coupled cases (TH1, TH2 & TH3). Further perspectives of the exercise will also be presented. Extensions to more complex physical conditions (e.g. unsaturated conditions and geometrical deformations) are contemplated. In addition, 1D vertical cases of interest to the climate modeling community will be proposed. Keywords: Permafrost; Numerical modeling; River-soil interaction; Arctic systems; Soil freeze-thaw

  3. A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrie, Michael; Shadwick, B. A.

    2016-01-04

Here, we present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two-dimensional phase-space grid. The time-splitting algorithm we use allows the work presented here to be generalized to higher dimensions while keeping the resulting discrete set of equations linear. The implicit method is benchmarked against linear theory results for relativistic Landau damping, for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently of the shape of the distribution function, the relativistic treatment features collective behaviors that do not exist in the non-relativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.

  4. A benchmark study of the sea-level equation in GIA modelling

    NASA Astrophysics Data System (ADS)

    Martinec, Zdenek; Klemann, Volker; van der Wal, Wouter; Riva, Riccardo; Spada, Giorgio; Simon, Karen; Blank, Bas; Sun, Yu; Melini, Daniele; James, Tom; Bradley, Sarah

    2017-04-01

The sea-level load in glacial isostatic adjustment (GIA) is described by the so-called sea-level equation (SLE), which represents the mass redistribution between ice sheets and oceans on a deforming earth. Various levels of complexity of the SLE have been proposed in the past, ranging from a simple mean global sea level (the so-called eustatic sea level) to the load with a deforming ocean bottom, migrating coastlines and a changing shape of the geoid. Several approaches to solving the SLE have been derived, from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, there has been no systematic intercomparison among the solvers through which the methods may be validated. The goal of this paper is to present a series of benchmark experiments designed for testing and comparing numerical implementations of the SLE. Our approach starts with simple load cases, even though the benchmark will not result in GIA predictions for a realistic loading scenario. In the longer term we aim for a benchmark with a realistic loading scenario, and also for benchmark solutions with rotational feedback. The current benchmark uses an earth model for which Love numbers have been computed and benchmarked in Spada et al. (2011). In spite of the significant differences in the numerical methods employed, the test computations performed so far show satisfactory agreement between the results provided by the participants. The differences found can often be attributed to the different approximations inherent in the various algorithms. Reference: Spada, G., Barletta, V. R., Klemann, V., Riva, R. E. M., Martinec, Z., Gasperini, P., Lund, B., Wolf, D., Vermeersen, L. L. A., and King, M. A., 2011. A benchmark study for glacial isostatic adjustment codes. Geophys. J. Int. 185: 106-132, doi:10.1111/j.1365-
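The simplest rung of the SLE hierarchy mentioned above, the eustatic sea level, is just a uniform redistribution of ice mass over a fixed ocean area. A minimal sketch (the round-number ocean area and water density are our assumptions, not benchmark values):

```python
def eustatic_sea_level_change(delta_ice_mass_kg,
                              ocean_area_m2=3.61e14,
                              rho_water_kg_m3=1000.0):
    """Uniform (eustatic) sea-level change in metres: ice mass lost
    (negative delta) spreads over a fixed ocean area, with no deforming
    ocean bottom, migrating coastlines, or geoid change."""
    return -delta_ice_mass_kg / (rho_water_kg_m3 * ocean_area_m2)
```

The benchmark's more complex load cases replace exactly these fixed-geometry assumptions.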

  5. A Simple Graphical Method for Quantification of Disaster Management Surge Capacity Using Computer Simulation and Process-control Tools.

    PubMed

    Franc, Jeffrey Michael; Ingrassia, Pier Luigi; Verde, Manuela; Colombo, Davide; Della Corte, Francesco

    2015-02-01

Surge capacity, or the ability to manage an extraordinary volume of patients, is fundamental for hospital management of mass-casualty incidents. However, quantification of surge capacity is difficult and no universal standard for its measurement has emerged, nor has a standardized statistical method been advocated. As mass-casualty incidents are rare, simulation may represent a viable alternative to measure surge capacity. Hypothesis/Problem: The objective of the current study was to develop a statistical method for the quantification of surge capacity using a combination of computer simulation and simple process-control statistical tools. Length-of-stay (LOS) and patient volume (PV) were used as metrics. The use of this method was then demonstrated on a subsequent computer simulation of an emergency department (ED) response to a mass-casualty incident. In the derivation phase, 357 participants in five countries performed 62 computer simulations of an ED response to a mass-casualty incident. Benchmarks for ED response were derived from these simulations, including LOS and PV metrics for triage, bed assignment, physician assessment, and disposition. In the application phase, 13 students of the European Master in Disaster Medicine (EMDM) program completed the same simulation scenario, and the results were compared to the standards obtained in the derivation phase. Patient-volume metrics included number of patients to be triaged, assigned to rooms, assessed by a physician, and disposed. Length-of-stay metrics included median time to triage, room assignment, physician assessment, and disposition. Simple graphical methods were used to compare the application phase group to the derived benchmarks using process-control statistical tools. The group in the application phase failed to meet the indicated standard for LOS from admission to disposition decision. 
This study demonstrates how simulation software can be used to derive values for objective benchmarks of ED surge capacity using PV and LOS metrics. These objective metrics can then be applied to other simulation groups using simple graphical process-control tools to provide a numeric measure of surge capacity. Repeated use in simulations of actual EDs may represent a potential means of objectively quantifying disaster management surge capacity. It is hoped that the described statistical method, which is simple and reusable, will be useful for investigators in this field to apply to their own research.

  6. Coupling Hydraulic Fracturing Propagation and Gas Well Performance for Simulation of Production in Unconventional Shale Gas Reservoirs

    NASA Astrophysics Data System (ADS)

    Wang, C.; Winterfeld, P. H.; Wu, Y. S.; Wang, Y.; Chen, D.; Yin, C.; Pan, Z.

    2014-12-01

    Hydraulic fracturing combined with horizontal drilling has made it possible to economically produce natural gas from unconventional shale gas reservoirs. An efficient methodology for evaluating hydraulic fracturing operation parameters, such as fluid and proppant properties, injection rates, and wellhead pressure, is essential for the evaluation and efficient design of these processes. Traditional numerical evaluation and optimization approaches are usually based on simulated fracture properties such as the fracture area. In our opinion, a methodology based on simulated production data is better, because production is the goal of hydraulic fracturing and we can calibrate this approach with production data that is already known. This numerical methodology requires a fully-coupled hydraulic fracture propagation and multi-phase flow model. In this paper, we present a general fully-coupled numerical framework to simulate hydraulic fracturing and post-fracture gas well performance. This three-dimensional, multi-phase simulator focuses on: (1) fracture width increase and fracture propagation that occurs as slurry is injected into the fracture, (2) erosion caused by fracture fluids and leakoff, (3) proppant subsidence and flowback, and (4) multi-phase fluid flow through various-scaled anisotropic natural and man-made fractures. Mathematical and numerical details on how to fully couple the fracture propagation and fluid flow parts are discussed. Hydraulic fracturing and production operation parameters, and properties of the reservoir, fluids, and proppants, are taken into account. The well may be horizontal, vertical, or deviated, as well as open-hole or cemented. The simulator is verified based on benchmarks from the literature and we show its application by simulating fracture network (hydraulic and natural fractures) propagation and production data history matching of a field in China. 
We also conduct a series of real-data modeling studies with different combinations of hydraulic fracturing parameters and present the methodology to design these operations with feedback of simulated production data. The unified model aids in the optimization of hydraulic fracturing design, operations, and production.
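
The leakoff term mentioned above can be made concrete with Carter's classical leakoff model, a common representation of fluid loss to the formation in fracture simulators. The paper does not specify its leakoff formulation, so this is a generic sketch; the coefficient `c_l` and exposure time `tau` are illustrative inputs:

```python
import math

def carter_leakoff_rate(c_l, t, tau):
    """Carter leakoff velocity u_L = C_L / sqrt(t - tau).

    c_l : leakoff coefficient [m/s^0.5] (illustrative)
    t   : current time [s]
    tau : time at which this fracture face was first exposed [s]
    """
    return c_l / math.sqrt(t - tau)

def cumulative_leakoff_volume(c_l, area, t, tau):
    """Integrated two-sided leakoff volume: V = 2 * C_L * A * sqrt(t - tau)."""
    return 2.0 * c_l * area * math.sqrt(t - tau)
```

In a coupled simulator this rate enters the fracture-flow mass balance as a sink, which is one of the couplings between the propagation and flow parts discussed above.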

  7. Numerical simulation of freshwater/seawater interaction in a dual-permeability karst system with conduits: the development of discrete-continuum VDFST-CFP model

    NASA Astrophysics Data System (ADS)

    Xu, Zexuan; Hu, Bill

    2016-04-01

Dual-permeability karst aquifers, consisting of porous media and conduit networks with significantly different hydrological characteristics, are widely distributed around the world. Discrete-continuum numerical models, such as MODFLOW-CFP and CFPv2, have been verified as appropriate approaches for simulating groundwater flow and solute transport in karst hydrogeology. On the other hand, seawater intrusion associated with contamination of fresh groundwater resources has been observed and investigated in a number of coastal aquifers, especially under conditions of sea level rise. Density-dependent numerical models such as SEAWAT are able to quantitatively evaluate seawater/freshwater interaction processes. A numerical model of variable-density flow and solute transport - conduit flow process (VDFST-CFP) is developed to provide a better description of seawater intrusion and submarine groundwater discharge in a coastal karst aquifer with conduits. The coupled discrete-continuum VDFST-CFP model applies the Darcy-Weisbach equation to simulate non-laminar groundwater flow in the conduit system, which is conceptualized and discretized as pipes, while the Darcy equation is retained for the continuum porous media. Density-dependent groundwater flow and solute transport equations with appropriate density terms in both the conduit and porous media systems are derived and numerically solved using a standard finite difference method with an implicit iteration procedure. Synthetic horizontal and vertical benchmarks are created to validate the newly developed VDFST-CFP model by comparison with other numerical models, such as the variable-density SEAWAT, the coupled constant-density groundwater flow and solute transport MODFLOW/MT3DMS, and the discrete-continuum CFPv2/UMT3D models. The VDFST-CFP model improves the simulation of density-dependent seawater/freshwater mixing processes and exchanges between conduit and matrix.
Continuum numerical models greatly overestimate the flow rate under turbulent flow conditions, whereas discrete-continuum models provide more accurate results. Parameter sensitivity analysis indicates that conduit diameter, friction factor, matrix hydraulic conductivity, and porosity are important parameters that significantly affect the variable-density flow and solute transport simulation. The pros and cons of the model assumptions, conceptual simplifications, and numerical techniques in VDFST-CFP are discussed. In general, the development of the VDFST-CFP model is a methodological innovation in numerical modeling and could be applied to quantitatively evaluate seawater/freshwater interaction in coastal karst aquifers. Keywords: Discrete-continuum numerical model; Variable density flow and transport; Coastal karst aquifer; Non-laminar flow
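
The conduit/matrix split described above hinges on the Darcy-Weisbach relation applied in the pipe network. The following is a minimal sketch of that standard head-loss formula for a circular conduit (not code from VDFST-CFP itself; the flow rate, diameter, and friction factor are illustrative):

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def darcy_weisbach_head_loss(q, d, length, f):
    """Head loss h_f = f * (L/D) * v^2 / (2g) along a circular conduit.

    q      : volumetric flow rate [m^3/s]
    d      : conduit diameter [m]
    length : conduit length [m]
    f      : Darcy friction factor [-]
    """
    area = math.pi * d ** 2 / 4.0
    v = q / area  # mean velocity in the pipe
    return f * (length / d) * v ** 2 / (2.0 * G)
```

Because head loss grows with the square of the velocity, a continuum Darcy model (linear in velocity) overestimates the flow rate through a conduit once flow becomes turbulent, which is the behavior the abstract reports.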

  8. Analysis of a benchmark suite to evaluate mixed numeric and symbolic processing

    NASA Technical Reports Server (NTRS)

    Ragharan, Bharathi; Galant, David

    1992-01-01

    The suite of programs that formed the benchmark for a proposed advanced computer is described and analyzed. The features of the processor and its operating system that are tested by the benchmark are discussed. The computer codes and the supporting data for the analysis are given as appendices.

  9. Mixed Arlequin method for multiscale poromechanics problems: Mixed Arlequin method for multiscale poromechanics problems

    DOE PAGES

    Sun, WaiChing; Cai, Zhijun; Choo, Jinhyun

    2016-11-18

An Arlequin poromechanics model is introduced to simulate the hydro-mechanical coupling effects of fluid-infiltrated porous media across different spatial scales within a concurrent computational framework. A two-field poromechanics problem is first recast as the twofold saddle point of an incremental energy functional. We then introduce Lagrange multipliers and compatibility energy functionals to enforce the weak compatibility of hydro-mechanical responses in the overlapped domain. Here, to examine the numerical stability of this hydro-mechanical Arlequin model, we derive a necessary condition for stability, the twofold inf–sup condition for multi-field problems, and establish a modified inf–sup test formulated in the product space of the solution field. We verify the implementation of the Arlequin poromechanics model through benchmark problems covering the entire range of drainage conditions. Finally, through these numerical examples, we demonstrate the performance, robustness, and numerical stability of the Arlequin poromechanics model.

  10. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison phase results

    NASA Astrophysics Data System (ADS)

    Grenier, Christophe; Rühaak, Wolfram

    2016-04-01

Climate change impacts in permafrost regions have received considerable attention recently due to the pronounced warming trends experienced in recent decades and projected into the future. Large portions of these permafrost regions are characterized by surface water bodies (lakes, rivers) that interact with the surrounding permafrost, often generating taliks (unfrozen zones) within the permafrost that allow for hydrologic interactions between the surface water bodies and underlying aquifers, and thus influence the hydrologic response of a landscape to climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of such units (Kurylyk et al. 2014). However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, which can be partly attributed to the difficulty of verifying multi-dimensional results produced by numerical models. A benchmark exercise was initiated at the end of 2014. Participants from the USA, Canada, and Europe convened, representing 13 simulation codes. The benchmark exercises consist of several test cases inspired by existing literature (e.g. McKenzie et al., 2007) as well as new ones (Kurylyk et al. 2014; Grenier et al. in prep.; Rühaak et al. 2015). They range from simpler, purely thermal 1D cases to more complex, coupled 2D TH cases (benchmarks TH1, TH2, and TH3). Some experimental cases conducted in a cold room complement the validation approach. A web site hosted by LSCE (Laboratoire des Sciences du Climat et de l'Environnement) serves as an interaction platform for the participants and hosts the test case databases at the following address: https://wiki.lsce.ipsl.fr/interfrost. The results of the first stage of the benchmark exercise will be presented. We will mainly focus on the inter-comparison of participant results for the coupled cases TH2 & TH3.
Both cases are essentially theoretical but include the full complexity of the coupled non-linear set of equations (heat transfer with conduction, advection, phase change and Darcian flow). The complete set of inter-comparison results shows that the participating codes all produce simulations which are quantitatively similar and correspond to physical intuition. From a quantitative perspective, they agree well over the whole set of performance measures. The differences among the simulation results will be discussed in more depth throughout the test cases especially for the identification of the threshold times for each system as these exhibited the least agreement. However, the results suggest that in spite of the difficulties associated with the resolution of the set of TH equations (coupled and non-linear structure with phase change providing steep slopes), the developed codes provide robust results with a qualitatively reasonable representation of the processes and offer a quantitatively realistic basis. Further perspectives of the exercise will also be presented.
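
The coupled non-linear behavior described above (conduction plus latent heat of phase change) is often handled in TH codes with an apparent-heat-capacity formulation, where the latent heat is smeared over a small freezing interval. The sketch below is a minimal 1D illustration of that idea with illustrative material parameters; it is not taken from any InterFrost test case:

```python
import numpy as np

# Apparent-heat-capacity treatment of freezing in 1D heat conduction.
# All parameter values are illustrative, not from the benchmark suite.
LAT = 3.34e8   # volumetric latent heat of fusion [J/m^3]
C = 2.0e6      # volumetric heat capacity [J/m^3/K]
k = 2.0        # thermal conductivity [W/m/K]
dT = 0.5       # half-width of the smeared freezing interval [K]

def apparent_capacity(T):
    # Latent heat released over the interval [-dT, dT] around 0 degC
    return C + np.where(np.abs(T) < dT, LAT / (2 * dT), 0.0)

def step(T, dx, dt):
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
    return T + dt * k * lap / apparent_capacity(T)

n, dx, dt = 51, 0.02, 50.0      # explicit scheme; dt is well below dx^2*C/(2k)
T = np.full(n, 5.0)             # initially unfrozen soil at +5 degC
for _ in range(2000):
    T[0] = -5.0                 # cold boundary drives a freezing front inward
    T[-1] = 5.0
    T = step(T, dx, dt)
```

The steep apparent capacity inside the freezing interval is exactly the stiff, non-linear feature that makes these benchmarks demanding for the participating codes.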

  11. Numerical modelling of the Madison Dynamo Experiment.

    NASA Astrophysics Data System (ADS)

    Bayliss, R. A.; Wright, J. C.; Forest, C. B.; O'Connell, R.; Truitt, J. L.

    2000-10-01

The growth, saturation, and turbulent evolution of the Madison dynamo experiment are investigated numerically using a newly developed 3-D pseudo-spectral simulation of the MHD equations; results of the simulations will be compared to the experimental results obtained from the experiment. The code, Dynamo, is written in Fortran 90 and allows for full evolution of the magnetic and velocity fields. The induction equation governing B and the Navier-Stokes equation governing V are solved. The code uses a spectral representation of the vector fields via spherical harmonic basis functions in longitude and latitude, and finite differences in the radial direction. The magnetic field evolution has been benchmarked against the laminar kinematic dynamo predicted by M.L. Dudley and R.W. James (M.L. Dudley and R.W. James, Time-dependent kinematic dynamos with stationary flows, Proc. R. Soc. Lond. A 425, p. 407 (1989)). Initial results on magnetic field saturation, generated by the simultaneous evolution of the magnetic and velocity fields, will be presented using a variety of mechanical forcing terms.

  12. Shift Verification and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G

    2016-09-07

This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and to results from other Monte Carlo radiation transport codes, and found very good agreement across a variety of comparison measures. These include prediction of the critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation, we are confident in Shift's ability to provide reference results for CASL benchmarking.
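
Eigenvalue agreement in such V&V studies is conventionally reported as a reactivity difference in pcm (10⁻⁵). The helpers below are a generic sketch of that comparison measure, not part of Shift or its validation suite:

```python
def reactivity_pcm(k_eff):
    """Reactivity rho = (k - 1) / k, expressed in pcm (1e-5)."""
    return (k_eff - 1.0) / k_eff * 1e5

def eigenvalue_difference_pcm(k_code, k_reference):
    """Difference in reactivity between a code result and a reference,
    Delta-rho = (1/k_ref - 1/k_code) * 1e5 pcm."""
    return (1.0 / k_reference - 1.0 / k_code) * 1e5
```

A code result of k = 1.00100 against a reference of k = 1.00000 corresponds to a bias of roughly 100 pcm, which gives a feel for the scale of "very good agreement" in criticality comparisons.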

  13. ogs6 - a new concept for porous-fractured media simulations

    NASA Astrophysics Data System (ADS)

    Naumov, Dmitri; Bilke, Lars; Fischer, Thomas; Rink, Karsten; Wang, Wenqing; Watanabe, Norihiro; Kolditz, Olaf

    2015-04-01

OpenGeoSys (OGS) is a scientific open-source initiative for the numerical simulation of thermo-hydro-mechanical/chemical (THMC) processes in porous and fractured media, continuously developed since the mid-eighties. The basic concept is to provide a flexible numerical framework for solving coupled multi-field problems. OGS targets mainly applications in environmental geoscience, e.g. in the fields of contaminant hydrology, water resources management, waste deposits, or geothermal energy systems, but it has recently also been applied successfully to new topics in energy storage. OGS actively participates in several international benchmarking initiatives, e.g. DECOVALEX (waste management), CO2BENCH (CO2 storage and sequestration), SeSBENCH (reactive transport processes) and HM-Intercomp (coupled hydrosystems). Despite the broad applicability of OGS in geo-, hydro- and energy-sciences, several shortcomings became obvious: computational efficiency was limited, and the code structure had become too complex for efficient further development. OGS-5 was designed for object-oriented FEM applications. However, in many multi-field problems a certain flexibility in tailored numerical schemes is essential. Therefore, a new concept was designed to overcome the existing bottlenecks. The paradigms for ogs6 are: - Flexibility of numerical schemes (FEM/FVM/FDM), - Computational efficiency (PetaScale ready), - Developer- and user-friendliness. ogs6 has a module-oriented architecture based on thematic libraries (e.g. MeshLib, NumLib) on the large scale and uses an object-oriented approach for the small-scale interfaces. Use of a linear algebra library (Eigen3) for the mathematical operations, together with the ISO C++11 standard, increases the expressiveness of the code and makes it more developer-friendly. The new C++ standard also makes the template meta-programming code used for compile-time optimizations more compact.
We have transitioned the main code development to the GitHub code hosting system (https://github.com/ufz/ogs). The very flexible revision control system Git, in combination with issue tracking, developer feedback and code review options, improves code quality and the development process in general. The continuous testing procedure for the benchmarks, as established for OGS-5, is maintained. Additionally, unit testing, automatically triggered by any code change, is executed by two continuous integration frameworks (Jenkins CI, Travis CI), which build and test the code on different operating systems (Windows, Linux, Mac OS), in multiple configurations and with different compilers (GCC, Clang, Visual Studio). To further improve testing, XML-based file input formats are introduced, helping with automatic validation of user-contributed benchmarks. The first ogs6 prototype, version 6.0.1, has been implemented for solving generic elliptic problems. The next steps extend it to transient, non-linear and coupled problems. Literature: [1] Kolditz O, Shao H, Wang W, Bauer S (eds) (2014): Thermo-Hydro-Mechanical-Chemical Processes in Fractured Porous Media: Modelling and Benchmarking - Closed Form Solutions. In: Terrestrial Environmental Sciences, Vol. 1, Springer, Heidelberg, ISBN 978-3-319-11893-2, 315pp. http://www.springer.com/earth+sciences+and+geography/geology/book/978-3-319-11893-2 [2] Naumov D (2015): Computational Fluid Dynamics in Unconsolidated Sediments: Model Generation and Discrete Flow Simulations, PhD thesis, Technische Universität Dresden.

  14. GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collin, Blaise Paul

This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps involve the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all input data necessary to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other.
The participants should read this document thoroughly to make sure all the data needed for their calculations are provided. Missing data will be added to a revision of the document if necessary. 09/2016: Tables 6 and 8 updated; AGR-2 input data added.

  15. Generation IV benchmarking of TRISO fuel performance models under accident conditions: Modeling input data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collin, Blaise P.

    2014-09-01

This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps involve the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all input data necessary to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other.
The participants should read this document thoroughly to make sure all the data needed for their calculations are provided. Missing data will be added to a revision of the document if necessary.

  16. Dynamic earthquake rupture simulation on nonplanar faults embedded in 3D geometrically complex, heterogeneous Earth models

    NASA Astrophysics Data System (ADS)

    Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.

    2014-12-01

Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require numerical methods that are both provably stable and high-order accurate. We present a numerical method for: (a) enforcing nonlinear friction laws in a consistent and provably stable manner, suitable for efficient explicit time integration; (b) dynamic propagation of earthquake ruptures along rough faults; (c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first-order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th-order accurate in the interior and 3rd-order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates, we prove numerical stability. Time stepping is performed with a 4th-order accurate explicit low-storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults, revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
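
The summation-by-parts (SBP) structure that underpins the energy-estimate stability proof above can be shown with a small example. The sketch below builds the standard 2nd-order SBP first-derivative operator D = H⁻¹Q (the paper's operators are 6th-order in the interior, but the algebraic property is the same) and checks that Q + Qᵀ reduces to boundary terms, which is the discrete analogue of integration by parts:

```python
import numpy as np

def sbp_first_derivative(n, dx):
    """2nd-order summation-by-parts first-derivative operator D = H^-1 Q."""
    h = np.ones(n)
    h[0] = h[-1] = 0.5
    H = np.diag(h * dx)                 # SBP norm (quadrature) matrix
    Q = np.zeros((n, n))
    for i in range(n - 1):              # central differencing in the interior
        Q[i, i + 1] = 0.5
        Q[i + 1, i] = -0.5
    Q[0, 0] = -0.5                      # one-sided closures at the boundaries
    Q[-1, -1] = 0.5
    D = np.linalg.inv(H) @ Q
    return H, Q, D

n, dx = 21, 0.05
H, Q, D = sbp_first_derivative(n, dx)
B = Q + Q.T                             # should equal diag(-1, 0, ..., 0, 1)
x = np.linspace(0.0, dx * (n - 1), n)
```

Because uᵀHDu = ½uᵀ(Q + Qᵀ)u = ½(u_N² − u_0²), all growth in the discrete energy comes from the boundaries, which is exactly what allows penalty terms to enforce friction and interface conditions provably stably.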

  17. Efficient Modeling of Laser-Plasma Accelerators with INF&RNO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, C.; Schroeder, C. B.; Esarey, E.

    2010-06-01

The numerical modeling code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde, pronounced "inferno") is presented. INF&RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser, while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations, and here a set of validation tests is presented together with a discussion of performance.

  18. Efficient Modeling of Laser-Plasma Accelerators with INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Esarey, E.; Geddes, C. G. R.; Leemans, W. P.

    2010-11-01

The numerical modeling code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde, pronounced "inferno") is presented. INF&RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser, while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations, and here a set of validation tests is presented together with a discussion of performance.

  19. An efficient hybrid method for stochastic reaction-diffusion biochemical systems with delay

    NASA Astrophysics Data System (ADS)

    Sayyidmousavi, Alireza; Ilie, Silvana

    2017-12-01

    Many chemical reactions, such as gene transcription and translation in living cells, need a certain time to finish once they are initiated. Simulating stochastic models of reaction-diffusion systems with delay can be computationally expensive. In the present paper, a novel hybrid algorithm is proposed to accelerate the stochastic simulation of delayed reaction-diffusion systems. The delayed reactions may be of consuming or non-consuming delay type. The algorithm is designed for moderately stiff systems in which the events can be partitioned into slow and fast subsets according to their propensities. The proposed algorithm is applied to three benchmark problems and the results are compared with those of the delayed Inhomogeneous Stochastic Simulation Algorithm. The numerical results show that the new hybrid algorithm achieves considerable speed-up in the run time and very good accuracy.
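
A minimal version of the kind of delayed stochastic simulation the paper accelerates can be sketched with the direct-method SSA plus a queue of pending delayed completions. This is a simplified illustrative variant (the model, rates, and delay are invented for the example, and queued completions are applied without re-drawing the waiting time), not the authors' hybrid algorithm:

```python
import heapq
import random

def delayed_ssa(t_end, seed=1):
    """Direct-method SSA with one delayed (non-consuming) production channel.

    Illustrative model, not from the paper:
      R1: gene -> gene + mRNA, propensity c1; product appears after delay tau
      R2: mRNA -> 0,           propensity c2 * mRNA
    """
    random.seed(seed)
    c1, c2, tau = 2.0, 0.1, 5.0
    t, mrna = 0.0, 0
    pending = []                              # completion times of delayed productions
    while t < t_end:
        a1, a2 = c1, c2 * mrna
        a0 = a1 + a2
        dt = random.expovariate(a0)
        # apply queued delayed completions that fall before the next firing
        while pending and pending[0] <= min(t + dt, t_end):
            heapq.heappop(pending)
            mrna += 1
        t += dt
        if t >= t_end:
            break
        if random.random() * a0 < a1:
            heapq.heappush(pending, t + tau)  # initiation; product is delayed
        else:
            mrna -= 1                         # degradation fires immediately
    return mrna

final = delayed_ssa(100.0)
```

Partitioning the channels into slow and fast subsets by propensity, as the abstract describes, would replace the uniform direct-method draw above with an exact treatment of the slow set and an approximate (e.g. deterministic or tau-leaping) treatment of the fast set.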

  20. ELECTROMAGNETISM, OPTICS, ACOUSTICS, HEAT TRANSFER, CLASSICAL MECHANICS, AND FLUID DYNAMICS: Highly Efficient Lattice Boltzmann Model for Compressible Fluids: Two-Dimensional Case

    NASA Astrophysics Data System (ADS)

    Chen, Feng; Xu, Ai-Guo; Zhang, Guang-Cai; Gan, Yan-Biao; Cheng, Tao; Li, Ying-Jun

    2009-10-01

We present a highly efficient lattice Boltzmann model for simulating compressible flows. This model is based on the combination of an appropriate finite difference scheme, a 16-discrete-velocity model [Kataoka and Tsutahara, Phys. Rev. E 69 (2004) 035701(R)] and reasonable dispersion and dissipation terms. The dispersion term effectively reduces oscillation at discontinuities and enhances numerical precision. The dissipation term makes the new model satisfy the von Neumann stability condition more easily. This model works for both high-speed and low-speed flows with arbitrary specific-heat ratio. Simulation results with the new model for well-known benchmark problems show high accuracy compared with analytic or experimental ones. The benchmark tests used include (i) shock tubes such as the Sod, Lax, Sjogreen, and Colella explosion-wave problems and the collision of two strong shocks, (ii) regular and Mach shock reflections, and (iii) shock wave interaction with cylindrical bubble problems. With a more realistic equation of state or free-energy functional, the new model has the potential to study the complex process of shock wave interaction with porous materials.
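
To make the shock-tube benchmarks cited above concrete, the sketch below sets up the classic Sod problem and evolves it with a first-order Lax-Friedrichs finite-difference scheme. This is only a simple reference solver for the benchmark's initial condition, not the lattice Boltzmann model of the paper:

```python
import numpy as np

def sod_lax_friedrichs(n=200, t_end=0.15, gamma=1.4):
    """Sod shock tube: rho, p = (1, 1) on the left, (0.125, 0.1) on the right."""
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    rho = np.where(x < 0.5, 1.0, 0.125)
    u = np.zeros(n)
    p = np.where(x < 0.5, 1.0, 0.1)
    E = p / (gamma - 1) + 0.5 * rho * u ** 2
    U = np.stack([rho, rho * u, E])           # conserved variables

    def flux(U):
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1) * (E - 0.5 * rho * u ** 2)
        return np.stack([mom, mom * u + p, (E + p) * u])

    t = 0.0
    while t < t_end:
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1) * (E - 0.5 * rho * u ** 2)
        c = np.sqrt(gamma * p / rho)
        dt = 0.4 * dx / np.max(np.abs(u) + c)  # CFL-limited time step
        F = flux(U)
        Un = U.copy()
        Un[:, 1:-1] = 0.5 * (U[:, 2:] + U[:, :-2]) \
            - dt / (2 * dx) * (F[:, 2:] - F[:, :-2])
        U = Un
        t += dt
    return x, U

x, U = sod_lax_friedrichs()
```

Lax-Friedrichs is heavily diffusive at discontinuities, which is precisely the behavior the paper's tuned dispersion and dissipation terms are designed to improve on.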

  1. Validation and Comparison of 2D and 3D Codes for Nearshore Motion of Long Waves Using Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey

    2016-04-01

Tsunamis are huge waves with long wave periods and wavelengths that can cause great devastation and loss of life when they strike a coast. Interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW-3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW-3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite difference computational method to solve the 2D depth-averaged linear and nonlinear forms of the shallow water equations (NSWE) for long wave problems, specifically tsunamis. To validate these two codes and analyze the differences between the 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach; the experimental setup is a 1:400 scale model of Monai Valley, located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) annual meeting in Portland, USA; it is a field dataset recording the 2011 Japan tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons show that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons.
Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT-Japan Joint Call and Istanbul Metropolitan Municipality are all acknowledged.

  2. Competency based training in robotic surgery: benchmark scores for virtual reality robotic simulation.

    PubMed

    Raison, Nicholas; Ahmed, Kamran; Fossati, Nicola; Buffi, Nicolò; Mottrie, Alexandre; Dasgupta, Prokar; Van Der Poel, Henk

    2017-05-01

To develop benchmark scores of competency for use within a competency-based virtual reality (VR) robotic training curriculum. This longitudinal, observational study analysed results from nine European Association of Urology hands-on-training courses in VR simulation. In all, 223 participants ranging from novice to expert robotic surgeons completed 1565 exercises. Competency was set at 75% of the mean expert score. Benchmark scores for all general performance metrics generated by the simulator were calculated. Assessment exercises were selected by expert consensus and through learning-curve analysis. Three basic skill and two advanced skill exercises were identified. Benchmark scores based on expert performance offered viable targets for novice and intermediate trainees in robotic surgery. Novice participants met the competency standards for most basic skill exercises; however, advanced exercises were significantly more challenging. Intermediate participants performed better across the seven metrics but still did not achieve the benchmark standard in the more difficult exercises. Benchmark scores derived from expert performances offer relevant and challenging targets for trainees to achieve during VR simulation training. Objective feedback allows both participants and trainers to monitor educational progress and ensures that training remains effective. Furthermore, the well-defined goals set through benchmarking offer clear targets for trainees and enable training to move to a more efficient competency-based curriculum. © 2016 The Authors. BJU International © 2016 BJU International. Published by John Wiley & Sons Ltd.
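
The study's competency rule (75% of the mean expert score, with higher scores assumed better) reduces to a short computation. This generic sketch uses invented scores for illustration:

```python
def benchmark_score(expert_scores, fraction=0.75):
    """Competency benchmark = fraction of the mean expert score (75% here)."""
    return fraction * sum(expert_scores) / len(expert_scores)

def is_competent(trainee_score, expert_scores, fraction=0.75):
    """A trainee meets competency when their score reaches the benchmark."""
    return trainee_score >= benchmark_score(expert_scores, fraction)
```

For simulator metrics where lower is better (e.g. completion time or error counts), the comparison direction would need to be inverted; the abstract does not specify how each metric is oriented.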

  3. E × B electron drift instability in Hall thrusters: Particle-in-cell simulations vs. theory

    NASA Astrophysics Data System (ADS)

    Boeuf, J. P.; Garrigues, L.

    2018-06-01

The E × B Electron Drift Instability (E × B EDI), also called the Electron Cyclotron Drift Instability, has been observed in recent particle simulations of Hall thrusters and is a possible candidate to explain anomalous electron transport across the magnetic field in these devices. This instability is characterized by the development of an azimuthal wave with wavelength in the mm range and velocity on the order of the ion acoustic velocity, which enhances electron transport across the magnetic field. In this paper, we study the development and convection of the E × B EDI in the acceleration and near-plume regions of a Hall thruster using a simplified 2D axial-azimuthal Particle-In-Cell simulation. The simulation is collisionless, and the ionization profile is not self-consistent but rather is given as an input parameter of the model. The aim is to study the development and properties of the instability for different values of the ionization rate (i.e., of the total ion production rate or current) and to compare the results with theory. An important result is that the wavelength of the simulated azimuthal wave scales as the electron Debye length and that its frequency is on the order of the ion plasma frequency. This is consistent with the theory predicting destruction of electron cyclotron resonance of the E × B EDI in the non-linear regime, resulting in the transition to an ion acoustic instability. The simulations also show that for plasma densities smaller than under nominal conditions of Hall thrusters, the field fluctuations induced by the E × B EDI are no longer sufficient to significantly enhance electron transport across the magnetic field, and transit time instabilities develop in the axial direction. The conditions and results of the simulations are described in detail in this paper and can serve as benchmarks for comparisons between different simulation codes.
Such benchmarks would be very useful to study the role of numerical noise (numerical noise can also be responsible to the destruction of electron cyclotron resonance) or the influence of the period of the azimuthal domain, as well as to reach a better and consensual understanding of the physics.
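As a quick sanity check on the scalings reported above, the electron Debye length and ion plasma frequency can be evaluated for order-of-magnitude Hall thruster parameters. The density and temperature below are assumed illustrative values, not taken from the paper:

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
QE = 1.602e-19        # elementary charge, C
M_XE = 2.18e-25       # xenon ion mass, kg

def debye_length(n_e, t_e_ev):
    """Electron Debye length (m) for density n_e (m^-3) and temperature in eV."""
    return math.sqrt(EPS0 * t_e_ev * QE / (n_e * QE ** 2))

def ion_plasma_freq(n_i):
    """Ion plasma angular frequency (rad/s) for singly charged xenon."""
    return math.sqrt(n_i * QE ** 2 / (EPS0 * M_XE))

n = 1e17    # m^-3, assumed plasma density
te = 20.0   # eV, assumed electron temperature
print(f"lambda_D ~ {debye_length(n, te) * 1e3:.3f} mm")
print(f"f_pi ~ {ion_plasma_freq(n) / (2 * math.pi) / 1e6:.2f} MHz")
```

For these assumed values the Debye length is about a tenth of a millimetre, so a wave whose wavelength scales with a modest multiple of it would indeed fall in the mm range quoted above.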

  4. Performance Comparison of NAMI DANCE and FLOW-3D® Models in Tsunami Propagation, Inundation and Currents using NTHMP Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Velioglu Sogut, Deniz; Yalciner, Ahmet Cevdet

    2018-06-01

Field observations provide valuable data regarding nearshore tsunami impact, yet only from areas that tsunami waves have already flooded. Therefore, tsunami modeling is essential for understanding tsunami behavior and preparing for tsunami inundation. It is necessary that all numerical models used in tsunami emergency planning be subjected to benchmark tests for validation and verification. This study focuses on two numerical codes, NAMI DANCE and FLOW-3D®, for validation and performance comparison. NAMI DANCE is an in-house tsunami numerical model developed by the Ocean Engineering Research Center of Middle East Technical University, Turkey and the Laboratory of the Special Research Bureau for Automation of Marine Research, Russia. FLOW-3D® is a general-purpose computational fluid dynamics software package developed by scientists who pioneered the design of the Volume-of-Fluid technique. The codes are validated and their performances compared via analytical, experimental and field benchmark problems documented in the ``Proceedings and Results of the 2011 National Tsunami Hazard Mitigation Program (NTHMP) Model Benchmarking Workshop'' and the ``Proceedings and Results of the NTHMP 2015 Tsunami Current Modeling Workshop''. The variations between the numerical solutions of the two models are evaluated through statistical error analysis.
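The statistical error analysis is not specified in the abstract; one common choice in such benchmark comparisons is a root-mean-square error normalized by the reference amplitude range. A minimal sketch with invented gauge data (the metric choice and all numbers are illustrative assumptions, not taken from the NTHMP reports):

```python
import math

def nrmse(model, reference):
    """RMS error between two series, normalized by the reference range."""
    n = len(model)
    rmse = math.sqrt(sum((m - r) ** 2 for m, r in zip(model, reference)) / n)
    return rmse / (max(reference) - min(reference))

gauge_obs  = [0.00, 0.12, 0.45, 0.30, -0.10]  # observed surface elevation, m
model_a    = [0.00, 0.10, 0.42, 0.33, -0.08]  # first model's output, m
model_b    = [0.01, 0.14, 0.47, 0.28, -0.12]  # second model's output, m
print(f"model A NRMSE: {nrmse(model_a, gauge_obs):.3f}")
print(f"model B NRMSE: {nrmse(model_b, gauge_obs):.3f}")
```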

  5. StirMark Benchmark: audio watermarking attacks based on lossy compression

    NASA Astrophysics Data System (ADS)

    Steinebach, Martin; Lang, Andreas; Dittmann, Jana

    2002-04-01

StirMark Benchmark is a well-known evaluation tool for watermarking robustness, to which additional attacks are added continuously. To enable application-based evaluation, in this paper we address attacks against audio watermarks based on lossy audio compression algorithms, to be included in the test environment. We discuss the effect of different lossy compression algorithms such as MPEG-2 Audio Layer 3, Ogg or VQF on a selection of audio test data. Our focus is on changes to the basic characteristics of the audio data, such as spectrum or average power, and on the removal of embedded watermarks. Furthermore, we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms, and (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psychoacoustic model has to be applied. Our model can be used for audio watermarking evaluation in numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.

  6. A new code SORD for simulation of polarized light scattering in the Earth atmosphere

    NASA Astrophysics Data System (ADS)

    Korkin, Sergey; Lyapustin, Alexei; Sinyuk, Aliaksandr; Holben, Brent

    2016-05-01

We report a new publicly available radiative transfer (RT) code for numerical simulation of polarized light scattering in a plane-parallel Earth atmosphere. Using 44 benchmark tests, we demonstrate the high accuracy of the new RT code, SORD (Successive ORDers of scattering). We describe the capabilities of SORD and show the run time for each test on two different machines. At present, SORD is intended to work as part of the Aerosol Robotic NETwork (AERONET) inversion algorithm. For natural integration with the AERONET software, SORD is coded in Fortran 90/95. The code is available by email request from the corresponding (first) author or from ftp://climate1.gsfc.nasa.gov/skorkin/SORD/ or ftp://maiac.gsfc.nasa.gov/pub/SORD.zip

  7. Atomization simulations using an Eulerian-VOF-Lagrangian method

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Shang, Huan-Min; Liaw, Paul; Chen, C. P.

    1994-01-01

This paper summarizes the technical development and validation of a multiphase computational fluid dynamics (CFD) numerical method using the volume-of-fluid (VOF) model and a Lagrangian tracking model, which can be employed to analyze general multiphase flow problems with free-surface mechanisms. The gas-liquid interface mass, momentum and energy conservation relationships are modeled by continuum surface mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed flow regimes. The objectives of the present study are to develop and verify the fractional volume-of-fluid cell partitioning approach within a predictor-corrector algorithm and to demonstrate the effectiveness of this approach by simulating benchmark problems including coaxial jet atomization.

  8. Benchmarking in a differentially heated rotating annulus experiment: Multiple equilibria in the light of laboratory experiments and simulations

    NASA Astrophysics Data System (ADS)

    Vincze, Miklos; Harlander, Uwe; Borchert, Sebastian; Achatz, Ulrich; Baumann, Martin; Egbers, Christoph; Fröhlich, Jochen; Hertel, Claudia; Heuveline, Vincent; Hickel, Stefan; von Larcher, Thomas; Remmler, Sebastian

    2014-05-01

In the framework of the German Science Foundation's (DFG) priority program 'MetStröm', various laboratory experiments have been carried out in a differentially heated rotating annulus configuration in order to test, validate and tune numerical methods to be used for modeling large-scale atmospheric processes. This classic experimental set-up has been well known since the late 1940s and is a widely studied minimal model of the general mid-latitude atmospheric circulation. The two most relevant factors of cyclogenesis, namely rotation and the meridional temperature gradient, are quite well captured in this simple arrangement. The tabletop-size rotating tank is divided into three sections by coaxial cylindrical sidewalls. The innermost section is cooled whereas the outermost annular cavity is heated; the working fluid (de-ionized water) in the middle annular section therefore experiences a differential heat flow, which imposes thermal (density) stratification on the fluid. At high enough rotation rates the isothermal surfaces tilt, leading to baroclinic instability. The extra potential energy stored in this unstable configuration is then converted into kinetic energy, exciting drifting wave patterns of temperature and momentum anomalies. The signatures of these baroclinic waves at the free water surface have been analysed via infrared thermography over a wide range of rotation rates (keeping the radial temperature difference constant) and under different initial conditions (namely, initial spin-up and spin-down). In parallel with the laboratory experiments at BTU Cottbus-Senftenberg, five other groups from the MetStröm collaboration have conducted simulations in the same parameter regime using different numerical approaches and solvers, and applying different initial conditions and perturbations for stability analysis.
The obtained baroclinic wave patterns have been evaluated by determining and comparing their Empirical Orthogonal Functions (EOFs), drift rates and dominant wave modes. Thus certain "benchmarks" have been created that can later be used as test cases for atmospheric numerical model validation. Both in the experiments and in the numerics, multiple equilibrium states have been observed in the form of hysteretic behavior depending on the initial conditions. The precise quantification of these state and wave mode transitions may shed light on some aspects of the basic underlying dynamics of the baroclinic annulus configuration, which are still to be understood.

  9. Joint numerical study of the 2011 Tohoku-Oki tsunami: comparative propagation simulations and high resolution coastal models

    NASA Astrophysics Data System (ADS)

    Loevenbruck, Anne; Arpaia, Luca; Ata, Riadh; Gailler, Audrey; Hayashi, Yutaka; Hébert, Hélène; Heinrich, Philippe; Le Gal, Marine; Lemoine, Anne; Le Roy, Sylvestre; Marcer, Richard; Pedreros, Rodrigo; Pons, Kevin; Ricchiuto, Mario; Violeau, Damien

    2017-04-01

This study is part of the joint actions carried out within TANDEM (Tsunamis in northern AtlaNtic: Definition of Effects by Modeling). This French project, mainly dedicated to the appraisal of coastal effects due to tsunami waves on the French coastlines, was initiated after the catastrophic 2011 Tohoku-Oki tsunami. This event, which tragically struck Japan, drew attention to the importance of tsunami risk assessment, in particular when nuclear facilities are involved. As a contribution to this challenging task, the TANDEM partners intend to provide guidance for the French Atlantic area based on numerical simulation. One of the identified objectives consists in designing, adapting and validating simulation codes for tsunami hazard assessment. Besides an integral benchmarking work package, the outstanding database of the 2011 event offers the TANDEM partners the opportunity to test their numerical tools on a real case. As a prerequisite, among the numerous published seismic source models arising from the inversion of the various available records, a couple of coseismic slip distributions have been selected to provide common initial input parameters for the tsunami computations. After possible adaptations or specific developments, the different codes are employed to simulate the Tohoku-Oki tsunami from its source to the northeast Japanese coastline. The results are tested against the numerous tsunami measurements and, when relevant, comparisons of the different codes are carried out. First, the results related to the oceanic propagation phase are compared with the offshore records. Then, the modeled coastal impacts are tested against the onshore data. Flooding at a regional scale is considered, but high resolution simulations are also performed with some of the codes. They allow a detailed examination of the runup amplitudes and timing, as well as the complexity of the tsunami interaction with the coastal structures.
The work is supported by the Tandem project in the frame of French PIA grant ANR-11-RSNR-00023.

  10. Numerical optimization of actuator trajectories for ITER hybrid scenario profile evolution

    NASA Astrophysics Data System (ADS)

    van Dongen, J.; Felici, F.; Hogeweij, G. M. D.; Geelen, P.; Maljaars, E.

    2014-12-01

Optimal actuator trajectories for an ITER hybrid scenario ramp-up are computed using a numerical optimization method. For both L-mode and H-mode scenarios, the time trajectories of the plasma current and the EC heating and current drive distribution are determined that minimize a chosen cost function while satisfying constraints. The cost function is formulated to reflect two desired properties of the plasma q profile at the end of the ramp-up. The first objective is to maximize the ITG turbulence threshold by maximizing the volume-averaged s/q ratio. The second objective is to achieve a stationary q profile by having a flat loop voltage profile. Actuator and physics-derived constraints are included, imposing limits on plasma current, ramp rates, internal inductance and the q profile. This numerical method uses the fast control-oriented plasma profile evolution code RAPTOR, which is successfully benchmarked against more complete CRONOS simulations for L-mode and H-mode ITER hybrid scenarios. It is shown that the optimized trajectories computed using RAPTOR also result in an improved ramp-up scenario for CRONOS simulations using the same input trajectories. Furthermore, the optimal trajectories are shown to vary depending on the precise timing of the L-H transition.
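The first optimization objective can be made concrete: the magnetic shear is s = (r/q) dq/dr, and the cost term is the volume average of s/q over the plasma. A minimal sketch on an assumed, purely illustrative q profile (not an ITER scenario):

```python
def volume_averaged_s_over_q(q, r):
    """Volume-averaged s/q with s = (r/q) dq/dr, cylindrical weight ~ r dr."""
    n = len(r)
    num = den = 0.0
    for i in range(1, n - 1):
        dqdr = (q[i + 1] - q[i - 1]) / (r[i + 1] - r[i - 1])
        s = r[i] / q[i] * dqdr
        dv = r[i] * (r[i + 1] - r[i - 1])   # volume element, up to a constant
        num += (s / q[i]) * dv
        den += dv
    return num / den

n = 101
r = [i / (n - 1) for i in range(n)]           # normalized minor radius
q = [1.0 + 2.5 * ri ** 2 for ri in r]         # assumed monotonic q profile
print(f"<s/q> = {volume_averaged_s_over_q(q, r):.3f}")
```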

  11. A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok

    1998-01-01

An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable for a flow network consisting of pipes and various fittings where flow is assumed to be one dimensional. It can also be used to simulate flow in a component by modeling a multi-dimensional flow using the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from the neighboring control volumes. They also include the sources of each conserved variable and time-dependent terms. The source term of the entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method which is a combination of simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with the analytical and numerical solutions of several benchmark problems.
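The hybrid strategy described above can be illustrated on a single branch: a minimal sketch, assuming an invented quadratic pressure-drop relation, where Newton-Raphson drives the momentum residual to zero while a flow-dependent resistance is updated by successive substitution between passes. The equation form and coefficients are illustrative, not the paper's procedure:

```python
def solve_branch_flow(dp, k0, tol=1e-10, max_iter=50):
    """Solve dp = k(Q) * Q * |Q| for Q, updating k by successive substitution."""
    q, k = 1.0, k0
    for _ in range(max_iter):
        # Newton-Raphson on f(Q) = dp - k*Q*|Q|, with df/dQ = -2*k*|Q|
        f = dp - k * q * abs(q)
        if abs(f) < tol:
            break
        q -= f / (-2.0 * k * abs(q))
        # successive substitution: assumed flow-dependent resistance law
        k = k0 * (1.0 + 0.05 / (1.0 + abs(q)))
    return q

print(f"branch flow: {solve_branch_flow(dp=100.0, k0=2.0):.4f}")
```

The two loops are deliberately interleaved, mirroring the abstract's point that strongly coupled balances are solved simultaneously by Newton-Raphson while weakly coupled quantities lag one iteration behind.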

  12. Impact of velocity space distribution on hybrid kinetic-magnetohydrodynamic simulation of the (1,1) mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Charlson C.

    2008-07-15

Numeric studies of the impact of the velocity space distribution on the stabilization of the (1,1) internal kink mode and excitation of the fishbone mode are performed with a hybrid kinetic-magnetohydrodynamic model. These simulations demonstrate an extension of the physics capabilities of NIMROD [C. R. Sovinec et al., J. Comput. Phys. 195, 355 (2004)], a three-dimensional extended magnetohydrodynamic (MHD) code, to include the kinetic effects of an energetic minority ion species. Kinetic effects are captured by a modification of the usual MHD momentum equation to include a pressure tensor calculated from the δf particle-in-cell method [S. E. Parker and W. W. Lee, Phys. Fluids B 5, 77 (1993)]. The particles are advanced in the self-consistent NIMROD fields. We outline the implementation and present simulation results of energetic minority ion stabilization of the (1,1) internal kink mode and excitation of the fishbone mode. A benchmark of the linear growth rate and real frequency is shown to agree well with another code. The impact of the details of the velocity space distribution is examined, particularly extending the velocity space cutoff of the simulation particles. Modestly increasing the cutoff strongly impacts the (1,1) mode. Numeric experiments are performed to study the impact of passing versus trapped particles. Observations of these numeric experiments suggest that assumptions of energetic particle effects should be re-examined.

  13. Implementation of Finite Volume based Navier Stokes Algorithm Within General Purpose Flow Network Code

    NASA Technical Reports Server (NTRS)

    Schallhorn, Paul; Majumdar, Alok

    2012-01-01

This paper describes a finite volume based numerical algorithm that allows multi-dimensional computation of fluid flow within a system-level network flow analysis. There are several thermo-fluid engineering problems where higher-fidelity solutions are needed that are not within the capacity of system-level codes. The proposed algorithm will allow NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculations within the framework of GFSSP's typical system-level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with the analytical and benchmark solutions of Poiseuille flow, Couette flow and flow in a driven cavity.

  14. Higher-Order Compact Schemes for Numerical Simulation of Incompressible Flows

    NASA Technical Reports Server (NTRS)

    Wilson, Robert V.; Demuren, Ayodeji O.; Carpenter, Mark

    1998-01-01

    A higher order accurate numerical procedure has been developed for solving incompressible Navier-Stokes equations for 2D or 3D fluid flow problems. It is based on low-storage Runge-Kutta schemes for temporal discretization and fourth and sixth order compact finite-difference schemes for spatial discretization. The particular difficulty of satisfying the divergence-free velocity field required in incompressible fluid flow is resolved by solving a Poisson equation for pressure. It is demonstrated that for consistent global accuracy, it is necessary to employ the same order of accuracy in the discretization of the Poisson equation. Special care is also required to achieve the formal temporal accuracy of the Runge-Kutta schemes. The accuracy of the present procedure is demonstrated by application to several pertinent benchmark problems.
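As an illustration of the compact-scheme family mentioned above, the classical fourth-order Padé scheme for the first derivative reads (1/4)f'_{i-1} + f'_i + (1/4)f'_{i+1} = 3(f_{i+1} - f_{i-1})/(4h). A minimal sketch on a periodic grid; the Jacobi solver and grid size are illustrative choices (a production code would use a cyclic Thomas algorithm), not the paper's implementation:

```python
import math

def compact_derivative(f, h, sweeps=200):
    """Fourth-order Pade first derivative on a periodic grid of spacing h."""
    n = len(f)
    rhs = [3.0 / (4.0 * h) * (f[(i + 1) % n] - f[(i - 1) % n]) for i in range(n)]
    d = rhs[:]  # initial guess: the explicit right-hand side
    for _ in range(sweeps):
        # Jacobi iteration on the diagonally dominant cyclic tridiagonal system
        d = [rhs[i] - 0.25 * (d[(i - 1) % n] + d[(i + 1) % n]) for i in range(n)]
    return d

n = 32
h = 2 * math.pi / n
f = [math.sin(i * h) for i in range(n)]
df = compact_derivative(f, h)
err = max(abs(df[i] - math.cos(i * h)) for i in range(n))
print(f"max error vs exact cos(x): {err:.2e}")
```

On this 32-point grid the error is orders of magnitude below that of a second-order central difference, which is the motivation for compact schemes in the abstract.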

  15. Numerical modeling of spray combustion with an advanced VOF method

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Shang, Huan-Min; Shih, Ming-Hsin; Liaw, Paul

    1995-01-01

This paper summarizes the technical development and validation of a multiphase computational fluid dynamics (CFD) numerical method using the volume-of-fluid (VOF) model and a Lagrangian tracking model, which can be employed to analyze general multiphase flow problems with free-surface mechanisms. The gas-liquid interface mass, momentum and energy conservation relationships are modeled by continuum surface mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed flow regimes. The objectives of the present study are to develop and verify the fractional volume-of-fluid cell partitioning approach within a predictor-corrector algorithm and to demonstrate the effectiveness of this approach by simulating benchmark problems including laminar impinging jets, shear coaxial jet atomization and shear coaxial spray combustion flows.

  16. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
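As a toy illustration of the kind of error-driven learning rule evaluated in such closed-loop benchmarks, the sketch below lets an adaptive feedforward weight (a delta rule) learn to cancel a constant unknown force on a one-dimensional point mass under PD control. The plant, gains and update rule are invented for illustration and are unrelated to the paper's neuromorphic implementation:

```python
def run(steps=10000, dt=0.01, target=1.0, unknown_force=-3.0, lr=0.5):
    x, v, w = 0.0, 0.0, 0.0   # position, velocity, adaptive weight
    for _ in range(steps):
        err = target - x
        u = 10.0 * err - 2.0 * v + w   # PD control plus learned feedforward
        w += lr * err * dt             # error-driven weight update
        a = u + unknown_force          # plant: unit mass, unknown disturbance
        v += a * dt
        x += v * dt
    return x, w

x, w = run()
print(f"final position: {x:.3f}, learned compensation: {w:.3f}")
```

The closed-loop character is the point: the weight update sees only the tracking error produced by the environment, yet converges toward the value that cancels the disturbance.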

  17. Analogue experiments as benchmarks for models of lava flow emplacement

    NASA Astrophysics Data System (ADS)

    Garel, F.; Kaminski, E. C.; Tait, S.; Limare, A.

    2013-12-01

During an effusive volcanic eruption, crisis management is based mainly on predicting the advance and velocity of lava flows. The spreading of a lava flow, seen as a gravity current, depends on its "effective rheology" and on the effusion rate. Fast-computing models have arisen in the past decade in order to predict lava flow paths and rates of advance in near real time. This type of model, crucial to mitigating volcanic hazards and organizing potential evacuations, has mainly been compared a posteriori to real cases of emplaced lava flows. The input parameters of such simulations applied to natural eruptions, especially effusion rate and topography, are often not known precisely and are difficult to evaluate after the eruption. It is therefore not straightforward to identify the causes of discrepancies between model outputs and observed lava emplacement, whereas the comparison of models with controlled laboratory experiments is easier. The challenge for numerical simulations of lava flow emplacement is to model the simultaneous advance and thermal structure of viscous lava flows. To provide original constraints later to be used in benchmark numerical simulations, we have performed lab-scale experiments investigating the cooling of isoviscous gravity currents. The simplest experimental set-up is as follows: silicone oil, whose viscosity, around 5 Pa.s, varies by less than a factor of 2 in the temperature range studied, is injected from a point source onto a horizontal plate and spreads axisymmetrically. The oil is injected hot and progressively cools down to ambient temperature away from the source. Once the flow is developed, it presents a stationary radial thermal structure whose characteristics depend on the input flow rate. In addition to the experimental observations, we have developed in Garel et al., JGR, 2012 a theoretical model confirming the relationship between supply rate, flow advance and stationary surface thermal structure.
We also provide experimental observations of the effect of wind on the surface thermal structure of a viscous flow, which could be used to benchmark a thermal heat loss model. We will also briefly present more complex analogue experiments using wax material. These experiments present discontinuous advance behavior and a dual surface thermal structure, with low (solidified) versus high (hot liquid exposed at the surface) surface temperature regions. Emplacement models should aim to reproduce these two features, also observed on lava flows, to better predict the hazard of lava inundation.

  18. Numerical modelling of gravel unconstrained flow experiments with the DAN3D and RASH3D codes

    NASA Astrophysics Data System (ADS)

    Sauthier, Claire; Pirulli, Marina; Pisani, Gabriele; Scavia, Claudio; Labiouse, Vincent

    2015-12-01

Landslide continuum dynamic models have improved considerably in recent years, but a consensus on the best method of calibrating the input resistance parameter values for predictive analyses has not yet emerged. In the present paper, numerical simulations of a series of laboratory experiments performed at the Laboratory for Rock Mechanics of the EPF Lausanne were undertaken with the RASH3D and DAN3D numerical codes. They aimed at analysing the possibility of using calibrated ranges of parameters (1) in a code different from the one in which they were obtained and (2) to simulate potential events made of a material with the same characteristics as back-analysed past events, but involving a different volume and propagation path. For this purpose, one of the four benchmark laboratory tests was used as the past event to calibrate the dynamic basal friction angle, assuming a Coulomb-type behaviour of the sliding mass, and this back-analysed value was then used to simulate the three other experiments, treated as potential events. The computational findings show good correspondence with the experimental results in terms of the characteristics of the final deposits (i.e., runout, length and width). Furthermore, the obtained best-fit values of the dynamic basal friction angle for the two codes turn out to be close to each other and within the range of values measured with pseudo-dynamic tilting tests.
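A point-mass sketch of the Coulomb-type rheology assumed in the back-analysis: with a constant dynamic basal friction angle phi, a mass released from drop height H on a slope of angle theta comes to rest where the energy line of slope tan(phi) intersects the path. The geometry and numbers are illustrative, not the EPFL laboratory configuration:

```python
import math

def runout_length(h_drop, theta_deg, phi_deg):
    """Horizontal runout beyond the slope toe for a point-mass Coulomb model."""
    theta, phi = math.radians(theta_deg), math.radians(phi_deg)
    slope_horiz = h_drop / math.tan(theta)   # horizontal extent of the slope
    total_horiz = h_drop / math.tan(phi)     # energy-line intersection point
    return max(total_horiz - slope_horiz, 0.0)

print(f"runout beyond toe: {runout_length(1.0, 45.0, 30.0):.2f} m")
```

Continuum codes such as DAN3D and RASH3D spread the mass and track internal stresses, but the same friction angle controls the overall mobility, which is why a single back-analysed value can transfer between events.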

  19. GLOFRIM v1.0 - A globally applicable computational framework for integrated hydrological-hydrodynamic modelling

    NASA Astrophysics Data System (ADS)

    Hoch, Jannis M.; Neal, Jeffrey C.; Baart, Fedor; van Beek, Rens; Winsemius, Hessel C.; Bates, Paul D.; Bierkens, Marc F. P.

    2017-10-01

We here present GLOFRIM, a globally applicable computational framework for integrated hydrological-hydrodynamic modelling. GLOFRIM facilitates spatially explicit coupling of hydrodynamic and hydrologic models and caters for an ensemble of models to be coupled. It currently encompasses the global hydrological model PCR-GLOBWB as well as the hydrodynamic models Delft3D Flexible Mesh (DFM; solving the full shallow-water equations and allowing for spatially flexible meshing) and LISFLOOD-FP (LFP; solving the local inertia equations and running on regular grids). The main advantages of the framework are its open and free access, its global applicability, its versatility, and its extensibility with other hydrological or hydrodynamic models. Before applying GLOFRIM to an actual test case, we benchmarked both DFM and LFP on a synthetic test case. Results show that for sub-critical flow conditions, the discharge response to the same input signal is near-identical for both models, which agrees with previous studies. We subsequently applied the framework to the Amazon River basin, both to test the framework thoroughly and to perform a first-ever large-scale benchmark of flexible versus regular grids. Both DFM and LFP produce comparable results in terms of simulated discharge, with LFP exhibiting slightly higher accuracy as expressed by a Kling-Gupta efficiency of 0.82 compared to 0.76 for DFM. However, benchmarking inundation extent between DFM and LFP over the entire study area yields a critical success index of 0.46, indicating that the models disagree as often as they agree. Differences between the models in both simulated discharge and inundation extent are to a large extent attributable to the gridding techniques employed. In fact, the results show that both the numerical scheme of the inundation model and the gridding technique can contribute to deviations in simulated inundation extent, as we control for model forcing and boundary conditions.
This study shows that the presented computational framework is robust and widely applicable. GLOFRIM is designed as open access and is easily extendable, and thus we hope that other large-scale hydrological and hydrodynamic models will be added. Eventually, more locally relevant processes would be captured, allowing for more robust model inter-comparison, benchmarking, and ensemble simulations of flood hazard at large scales.
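The two skill scores quoted above can be computed as follows. The formulas are the standard definitions (the Kling-Gupta efficiency in its original 2009 form, and the critical success index on binary wet/dry maps); the sample data are invented:

```python
import math

def kling_gupta(sim, obs):
    """KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)."""
    n = len(sim)
    ms, mo = sum(sim) / n, sum(obs) / n
    ss = math.sqrt(sum((x - ms) ** 2 for x in sim) / n)
    so = math.sqrt(sum((x - mo) ** 2 for x in obs) / n)
    r = sum((s - ms) * (o - mo) for s, o in zip(sim, obs)) / (n * ss * so)
    alpha, beta = ss / so, ms / mo   # variability ratio, bias ratio
    return 1 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def critical_success_index(sim_wet, obs_wet):
    """CSI = hits / (hits + misses + false alarms) on boolean wet/dry maps."""
    hits = sum(s and o for s, o in zip(sim_wet, obs_wet))
    misses = sum((not s) and o for s, o in zip(sim_wet, obs_wet))
    false_alarms = sum(s and (not o) for s, o in zip(sim_wet, obs_wet))
    return hits / (hits + misses + false_alarms)

print(kling_gupta([1.1, 2.0, 2.9], [1.0, 2.0, 3.0]))
print(critical_success_index([True, True, False], [True, False, False]))
```

A KGE of 1 is a perfect match, and a CSI of 0.46 means fewer than half of the predicted-or-observed wet cells are agreed on by both maps, which is the sense of "the models disagree as often as they agree" above.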

  20. Numerical simulation of heat transfer to separation TiO2/water nanofluid flow in an asymmetric abrupt expansion

    NASA Astrophysics Data System (ADS)

    Oon, Cheen Sean; Nee Yew, Sin; Chew, Bee Teng; Salim Newaz, Kazi Md; Al-Shamma'a, Ahmed; Shaw, Andy; Amiri, Ahmad

    2015-05-01

Flow separation and reattachment of a 0.2% TiO2 nanofluid in an asymmetric abrupt expansion is studied in this paper. Such flows occur in various engineering and heat transfer applications. The computational fluid dynamics package FLUENT is used to investigate turbulent nanofluid flow in the horizontal double-tube heat exchanger. The mesh of this model consists of 43383 nodes and 74891 elements. Only a quarter of the annular pipe is developed and simulated as it has symmetrical geometry. The standard k-epsilon, second-order implicit, pressure-based solver is applied. Reynolds numbers between 17050 and 44545, step height ratios of 1 and 1.82 and a constant heat flux of 49050 W/m2 were utilized in the simulation. Water was used as the working fluid to benchmark the study of heat transfer enhancement in this case. Numerical simulation results show that an increase in the Reynolds number increases the heat transfer coefficient and Nusselt number of the flowing fluid. Moreover, the surface temperature drops to its lowest value after the expansion and then gradually increases along the pipe. Finally, the chaotic movement and higher thermal conductivity of the TiO2 nanoparticles contribute to the overall heat transfer enhancement of the nanofluid compared to water.
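The reported trend of heat transfer coefficient rising with Reynolds number can be cross-checked against the classical Dittus-Boelter correlation, Nu = 0.023 Re^0.8 Pr^0.4, for turbulent pipe flow under heating. This correlation and the fluid properties below are textbook assumptions, not the FLUENT set-up of the study:

```python
def nusselt_dittus_boelter(re, pr):
    """Dittus-Boelter Nusselt number for turbulent pipe flow being heated."""
    return 0.023 * re ** 0.8 * pr ** 0.4

PR_WATER = 6.1   # Prandtl number of water near room temperature (assumed)
K_WATER = 0.6    # thermal conductivity of water, W/(m K) (assumed)
D_H = 0.02       # hydraulic diameter, m (assumed)

for re in (17050, 30000, 44545):   # Reynolds range from the study
    nu = nusselt_dittus_boelter(re, PR_WATER)
    h = nu * K_WATER / D_H         # heat transfer coefficient, W/(m^2 K)
    print(f"Re={re:6d}  Nu={nu:7.1f}  h={h:8.1f} W/(m^2 K)")
```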

  1. LES models for incompressible magnetohydrodynamics derived from the variational multiscale formulation

    NASA Astrophysics Data System (ADS)

    Sondak, David; Oberai, Assad

    2012-10-01

Novel large eddy simulation (LES) models are developed for incompressible magnetohydrodynamics (MHD). These models include the application of the variational multiscale (VMS) formulation of LES to the equations of incompressible MHD, a new residual-based eddy viscosity model (RBEVM), and a mixed LES model that combines the strengths of both of these models. The new models result in a consistent numerical method that is relatively simple to implement; a dynamic procedure for determining model coefficients is no longer required. The new LES models are tested on a decaying Taylor-Green vortex generalized to MHD and benchmarked against classical and state-of-the-art LES turbulence models as well as direct numerical simulations (DNS). These new models are able to account for the essential MHD physics, which is demonstrated via comparisons of energy spectra. We also compare the performance of our models to a DNS simulation by A. Pouquet et al., for which the ratio of DNS modes to LES modes is 262,144. Additionally, we extend these models to a finite element setting in which boundary conditions play a role. A classic problem on which we test these models is turbulent channel flow, which, in the case of MHD, is called Hartmann flow.

  2. Low-Dissipation Advection Schemes Designed for Large Eddy Simulations of Hypersonic Propulsion Systems

    NASA Technical Reports Server (NTRS)

    White, Jeffrey A.; Baurle, Robert A.; Fisher, Travis C.; Quinlan, Jesse R.; Black, William S.

    2012-01-01

The 2nd-order upwind inviscid flux scheme implemented in the multi-block, structured-grid, cell-centered, finite-volume, high-speed reacting flow code VULCAN has been modified to reduce numerical dissipation. This modification was motivated by the desire to improve the code's ability to perform large eddy simulations. The reduction in dissipation was accomplished through a hybridization of non-dissipative and dissipative discontinuity-capturing advection schemes that reduces numerical dissipation while maintaining the ability to capture shocks. A methodology was developed and implemented for constructing hybrid advection schemes that blend non-dissipative fluxes, consisting of linear combinations of divergence and product-rule forms discretized using 4th-order symmetric operators, with dissipative, 3rd- or 4th-order reconstruction-based upwind flux schemes. A series of benchmark problems with increasing spatial and fluid-dynamical complexity were utilized to examine the ability of the candidate schemes to resolve and propagate structures typical of turbulent flow, their discontinuity-capturing capability and their robustness. A realistic geometry typical of a high-speed propulsion system flowpath was computed using the most promising of the examined schemes and was compared with available experimental data to demonstrate simulation fidelity.

  3. Impact of uncertainties in free stream conditions on the aerodynamics of a rectangular cylinder

    NASA Astrophysics Data System (ADS)

    Mariotti, Alessandro; Shoeibi Omrani, Pejman; Witteveen, Jeroen; Salvetti, Maria Vittoria

    2015-11-01

    The BARC benchmark deals with the flow around a rectangular cylinder with a chord-to-depth ratio equal to 5. This flow configuration is of practical interest for civil and industrial structures and is characterized by massively separated flow and unsteadiness. In a recent review of BARC results, significant dispersion was observed in both experimental and numerical predictions of some flow quantities, which are extremely sensitive to uncertainties that may be present in experiments and simulations. Besides modeling and numerical errors, in simulations it is difficult to exactly reproduce the experimental conditions due to uncertainties in the set-up parameters, which sometimes cannot be exactly controlled or characterized. Probabilistic methods and URANS simulations are used to investigate the impact of uncertainties in the following set-up parameters: the angle of incidence and the free-stream longitudinal turbulence intensity and length scale. Stochastic collocation is employed to perform the probabilistic propagation of the uncertainty. The discretization and modeling errors are estimated by repeating the same analysis for different grids and turbulence models. The results obtained for different assumed PDFs of the set-up parameters are also compared.
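    Stochastic collocation replaces random sampling with deterministic quadrature nodes in the uncertain parameter. A minimal sketch for a single Gaussian-distributed parameter such as the angle of incidence (the response function, mean, and standard deviation here are hypothetical; in practice each node value would come from one URANS run):

```python
import numpy as np

def collocation_stats(response, mean=0.0, std=0.5, n_nodes=5):
    """Propagate a Gaussian input uncertainty through `response` using
    Gauss-Hermite collocation nodes (probabilists' convention)."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    w = weights / weights.sum()   # normalize to a probability measure
    q = np.array([response(mean + std * x) for x in nodes])
    q_mean = np.sum(w * q)
    q_var = np.sum(w * (q - q_mean) ** 2)
    return q_mean, q_var
```

    With n_nodes solver runs, the quadrature is exact for polynomial responses up to degree 2 n_nodes - 1, which is why collocation is far cheaper than Monte Carlo for smooth responses.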

  4. Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set

    NASA Astrophysics Data System (ADS)

    Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.

    2017-05-01

    A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.
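    The wet-dry moving-boundary idea mentioned above can be illustrated with a minimal thresholding step (the threshold value and treatment are assumptions for illustration, not TUNA-RP's actual algorithm):

```python
import numpy as np

H_DRY = 1e-6  # wet/dry threshold depth in metres (assumed value)

def apply_wet_dry(h, hu):
    """Flag cells shallower than H_DRY as dry: clip their depth to zero and
    zero their momentum so no spurious velocities arise at the moving shoreline."""
    h = np.where(h < H_DRY, 0.0, h)
    hu = np.where(h < H_DRY, 0.0, hu)  # h already clipped: dry cells have h = 0 < H_DRY
    return h, hu
```

    Without such a step, dividing momentum by a near-zero depth to recover velocity produces unbounded values at the shoreline, which is the core difficulty inundation schemes must handle.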

  5. Numerical simulation of the submarine landslides and tsunami occurred at Port Valdez, AK during 1964 Alaska Earthquake with Landslide-HySEA model

    NASA Astrophysics Data System (ADS)

    González-Vida, Jose M.; Ortega, Sergio; Macías, Jorge; Castro, Manuel J.; Escalante, Cipriano

    2017-04-01

    This is a benchmark problem recently proposed in the framework of the Landslide Tsunami Model Benchmarking Workshop organized by the NTHMP (National Tsunami Hazard Mitigation Program, USA) at Galveston (USA). The benchmark is based on the historical event that occurred at Port Valdez, AK during the Alaska Earthquake of March 27, 1964. The great disaster during the Mw 9.2 Alaska Earthquake happened in the dock and harbour area of Port Valdez, where a massive submarine landslide generated a tsunami, inundating the waterfront up to two blocks inland. A second wave crossed the waterfront 10-15 minutes after the first, carrying a large amount of debris; it has been described as a violent surging wave only slightly smaller than the first. It is believed that this second wave originated on the other side of Port Valdez, near the Shoup Bay moraine. The benchmark consists of simulating, with the (GPU-based) Landslide-HySEA model, the extent of inundation for two slide events, based on before-and-after bathymetry data, eyewitness observations of the event, and the observed runup distribution. First, both landslides were simulated separately, studying time series of the water waves at selected locations, runups at different areas, and the extent of inundation around the first two blocks inland of Port Valdez. Then, the two landslides were triggered at the same time and the joint effect was studied. The results obtained are satisfactory and agree with the existing observations. References: Castro, M. J., Fernández-Nieto, E. D., González-Vida, J. M., Parés, C. (2011). Numerical Treatment of the Loss of Hyperbolicity of the Two-Layer Shallow-Water System. Journal of Scientific Computing, 48(1):16-40. Fernández-Nieto, E.D., Bouchut, F., Bresch, D., Castro, M.J., Mangeney, A. (2008). A new Savage-Hutter type model for submarine avalanches and generated tsunami. J. Comp. Phys., 227:7720-7754. Fernández-Nieto, E.D., Castro, M.J., Parés, C. (2011). On an Intermediate Field Capturing Riemann Solver Based on a Parabolic Viscosity Matrix for the Two-Layer Shallow Water System. J. Sci. Comp., 48:117-140. Macías, J., Vázquez, J.T., Fernández-Salas, L.M., González-Vida, J.M., Bárcenas, P., Castro, M.J., Díaz-del-Río, V., Alonso, B. (2015). The Al-Boraní submarine landslide and associated tsunami. A modelling approach. Marine Geology, 361:79-95.

  6. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  7. Benchmarking Data Sets for the Evaluation of Virtual Ligand Screening Methods: Review and Perspectives.

    PubMed

    Lagarde, Nathalie; Zagury, Jean-François; Montes, Matthieu

    2015-07-27

    Virtual screening methods are commonly used nowadays in drug discovery processes. However, to ensure their reliability, they have to be carefully evaluated. The evaluation of these methods is often realized in a retrospective way, notably by studying the enrichment of benchmarking data sets. To this purpose, numerous benchmarking data sets were developed over the years, and the resulting improvements led to the availability of high quality benchmarking data sets. However, some points still have to be considered in the selection of the active compounds, decoys, and protein structures to obtain optimal benchmarking data sets.

  8. LES-based filter-matrix lattice Boltzmann model for simulating fully developed turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Zhuo, Congshan; Zhong, Chengwen

    2016-11-01

    In this paper, a three-dimensional filter-matrix lattice Boltzmann (FMLB) model based on large eddy simulation (LES) was verified for simulating wall-bounded turbulent flows. The Vreman subgrid-scale model, which has been shown to predict the turbulent near-wall region accurately, was employed in the present FMLB-LES framework. Fully developed turbulent channel flows were simulated at a friction Reynolds number Reτ of 180. The turbulence statistics computed from the present FMLB-LES simulations, including the mean streamwise velocity profile, Reynolds stress profile, and root-mean-square velocity fluctuations, agreed well with the LES results of the multiple-relaxation-time (MRT) LB model; some discrepancies with the direct numerical simulation (DNS) data of Kim et al. were also observed, due to the relatively low grid resolution. Moreover, to investigate the influence of grid resolution on the present LES simulation, a DNS on a finer grid was also carried out with the present FMLB-D3Q19 model. Comparisons of the various computed turbulence statistics with available DNS benchmark data showed quite good agreement.
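    The Vreman subgrid-scale model referenced above computes the eddy viscosity from first-order velocity derivatives alone and vanishes in laminar shear, which is why it behaves well near walls without damping functions. A sketch for a uniform filter width (the model constant is commonly quoted as c ≈ 2.5 Cs²; the value used here is an assumed typical one, not necessarily the paper's):

```python
import numpy as np

def vreman_nu_t(grad_u, delta, c=0.07):
    """Vreman (2004) eddy viscosity from the velocity-gradient tensor
    grad_u[i, j] = du_j/dx_i, for a uniform filter width delta."""
    aa = np.sum(grad_u * grad_u)
    if aa < 1e-30:
        return 0.0                       # quiescent flow: no SGS viscosity
    b = delta**2 * (grad_u.T @ grad_u)   # beta_ij = Delta^2 * alpha_mi alpha_mj
    b_beta = (b[0, 0] * b[1, 1] - b[0, 1]**2
              + b[0, 0] * b[2, 2] - b[0, 2]**2
              + b[1, 1] * b[2, 2] - b[1, 2]**2)
    return c * np.sqrt(max(b_beta, 0.0) / aa)
```

    Note that for a pure shear gradient the invariant b_beta is zero, so the model correctly returns zero eddy viscosity in the laminar sublayer.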

  9. WWTP dynamic disturbance modelling--an essential module for long-term benchmarking development.

    PubMed

    Gernaey, K V; Rosen, C; Jeppsson, U

    2006-01-01

    Intensive use of the benchmark simulation model No. 1 (BSM1), a protocol for objective comparison of the effectiveness of control strategies in biological nitrogen removal activated sludge plants, has also revealed a number of limitations. Preliminary definitions of the long-term benchmark simulation model No. 1 (BSM1_LT) and the benchmark simulation model No. 2 (BSM2) have been made to extend BSM1 for evaluation of process monitoring methods and plant-wide control strategies, respectively. Influent-related disturbances for BSM1_LT/BSM2 are to be generated with a model, and this paper provides a general overview of the modelling methods used. Typical influent dynamic phenomena generated with the BSM1_LT/BSM2 influent disturbance model, including diurnal, weekend, seasonal and holiday effects, as well as rainfall, are illustrated with simulation results. As a result of the work described in this paper, a proposed influent model/file has been released to the benchmark developers for evaluation purposes. Pending this evaluation, a final BSM1_LT/BSM2 influent disturbance model definition is foreseen. Preliminary simulations with dynamic influent data generated by the influent disturbance model indicate that default BSM1 activated sludge plant control strategies will need extensions for BSM1_LT/BSM2 to efficiently handle 1 year of influent dynamics.
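    The kind of influent dynamics described (diurnal and weekend effects) can be mimicked with a harmonic plus a weekday factor. A toy sketch with illustrative shapes and magnitudes only, not the released BSM1_LT/BSM2 influent model:

```python
import numpy as np

def influent_flow(t_days, q_avg=20000.0):
    """Toy dry-weather influent flow rate (m3/d) at time t (days):
    a 24 h harmonic plus a ~10% weekend reduction."""
    t = np.asarray(t_days, dtype=float)
    diurnal = 1.0 + 0.2 * np.sin(2.0 * np.pi * t - np.pi / 2.0)  # daily cycle
    weekend = np.where((t % 7).astype(int) >= 5, 0.9, 1.0)       # days 5-6 of each week
    return q_avg * diurnal * weekend
```

    Seasonal, holiday, and rainfall effects would add further multiplicative terms on longer time scales, which is essentially how a long-term disturbance generator is layered together.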

  10. Vector radiative transfer code SORD: Performance analysis and quick start guide

    NASA Astrophysics Data System (ADS)

    Korkin, Sergey; Lyapustin, Alexei; Sinyuk, Alexander; Holben, Brent; Kokhanovsky, Alexander

    2017-10-01

    We present a new open source polarized radiative transfer code SORD written in Fortran 90/95. SORD numerically simulates propagation of monochromatic solar radiation in a plane-parallel atmosphere over a reflecting surface using the method of successive orders of scattering (hence the name). Thermal emission is ignored. We did not improve the method in any way, but report the accuracy and runtime in 52 benchmark scenarios. This paper also serves as a quick start user's guide for the code available from ftp://maiac.gsfc.nasa.gov/pub/skorkin, from the JQSRT website, or from the corresponding (first) author.
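    In the method of successive orders of scattering, the radiation field is built up order by order, with each additional scattering event attenuated roughly by the single-scattering albedo, so the series converges geometrically for albedo below one. A schematic scalar sketch of that series structure (the real SORD code computes angular integrals per order):

```python
def successive_orders(first_order, omega, tol=1e-12, max_orders=200):
    """Sum contributions order by order: each extra scattering event is
    damped by the single-scattering albedo omega (schematic scalar version)."""
    total, term = 0.0, first_order
    for _ in range(max_orders):
        total += term
        term *= omega
        if term < tol:
            break
    return total
```

    This also shows why runtime grows with omega: strongly scattering atmospheres need many more orders before the series converges.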

  11. Towards a physical interpretation of the entropic lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Malaspinas, Orestis; Deville, Michel; Chopard, Bastien

    2008-12-01

    The entropic lattice Boltzmann method (ELBM) is one among several different versions of the lattice Boltzmann method for the simulation of hydrodynamics. The collision term of the ELBM is characterized by a nonincreasing H function, guaranteed by a variable relaxation time. We propose here an analysis of the ELBM using the Chapman-Enskog expansion. We show that it can be interpreted as some kind of subgrid model, where viscosity correction scales like the strain rate tensor. We confirm our analytical results by the numerical computations of the relaxation time modifications on the two-dimensional dipole-wall interaction benchmark.

  12. Benchmark Results Of Active Tracer Particles In The Open Source Code ASPECT For Modelling Convection In The Earth's Mantle

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Kaloti, A. P.; Levinson, H. R.; Nguyen, N.; Puckett, E. G.; Lokavarapu, H. V.

    2016-12-01

    We present the results of three standard benchmarks for the new active tracer particle algorithm in ASPECT. The three benchmarks are SolKz, SolCx, and SolVI (also known as the 'inclusion benchmark'), first proposed by Duretz, May, Gerya, and Tackley (G Cubed, 2011) and in subsequent work by Thielmann, May, and Kaus (Pure and Applied Geophysics, 2014). Each of the three benchmarks compares the accuracy of the numerical solution to a steady (time-independent) solution of the incompressible Stokes equations with a known exact solution. These benchmarks are specifically designed to test the accuracy and effectiveness of the numerical method when the viscosity varies by up to six orders of magnitude. ASPECT has been shown to converge to the exact solution of each of these benchmarks at the correct design rate when all of the flow variables, including the density and viscosity, are discretized on the underlying finite element grid (Kronbichler, Heister, and Bangerth, GJI, 2012). In our work we discretize the density and viscosity by initially placing their true values at the initial particle positions. At each time step, including the initialization step, the density and viscosity are interpolated from the particles onto the finite element grid. The resulting Stokes system is solved for the velocity and pressure, and the particle positions are advanced in time according to this new, numerical, velocity field. Note that this procedure effectively changes a steady solution of the Stokes equation (i.e., one that is independent of time) into a solution of the Stokes equations that is time dependent. Furthermore, the accuracy of the active tracer particle algorithm now also depends on the accuracy of the interpolation algorithm and of the numerical method used to advance the particle positions in time. Finally, we will present new interpolation algorithms designed to increase the overall accuracy of the active tracer algorithms in ASPECT, as well as interpolation algorithms designed to conserve properties, such as mass density, that are carried by the particles.
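    A simplest-possible example of particle-to-grid interpolation of the kind discussed above, in 1D with a cell-wise arithmetic mean (a stand-in for the interpolation algorithms studied for ASPECT's active particles, not their actual implementation):

```python
import numpy as np

def particles_to_cells(x_p, val_p, edges):
    """Cell-wise arithmetic-mean interpolation: each cell takes the mean
    property of the particles it contains; empty cells get NaN."""
    idx = np.digitize(x_p, edges) - 1
    out = np.full(len(edges) - 1, np.nan)
    for c in range(len(out)):
        in_cell = idx == c
        if in_cell.any():
            out[c] = val_p[in_cell].mean()
    return out
```

    Higher-order or conservative schemes replace the plain mean with weighted fits, at the cost of extra bookkeeping; empty cells are the failure mode that motivates controlling the particle population per cell.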

  13. Nonlinear plasma wave models in 3D fluid simulations of laser-plasma interaction

    NASA Astrophysics Data System (ADS)

    Chapman, Thomas; Berger, Richard; Arrighi, Bill; Langer, Steve; Banks, Jeffrey; Brunner, Stephan

    2017-10-01

    Simulations of laser-plasma interaction (LPI) in inertial confinement fusion (ICF) conditions require multi-mm spatial scales due to the typical laser beam size and durations of order 100 ps in order for numerical laser reflectivities to converge. To be computationally achievable, these scales necessitate a fluid-like treatment of light and plasma waves with a spatial grid size on the order of the light wave length. Plasma waves experience many nonlinear phenomena not naturally described by a fluid treatment, such as frequency shifts induced by trapping, a nonlinear (typically suppressed) Landau damping, and mode couplings leading to instabilities that can cause the plasma wave to decay rapidly. These processes affect the onset and saturation of stimulated Raman and Brillouin scattering, and are of direct interest to the modeling and prediction of deleterious LPI in ICF. It is not currently computationally feasible to simulate these Debye length-scale phenomena in 3D across experimental scales. Analytically-derived and/or numerically benchmarked models of processes occurring at scales finer than the fluid simulation grid offer a path forward. We demonstrate the impact of a range of kinetic processes on plasma reflectivity via models included in the LPI simulation code pF3D. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  14. Slat Noise Simulations: Status and Challenges

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan M.; Lockard, David P.; Khorrami, Mehdi R.; Mineck, Raymond E.

    2011-01-01

    Noise radiation from the leading edge slat of a high-lift system is known to be an important component of aircraft noise during approach. NASA's Langley Research Center is engaged in a coordinated series of investigations combining high-fidelity numerical simulations and detailed wind tunnel measurements of a generic, unswept, 3-element, high-lift configuration. The goal of this effort is to provide a validated predictive capability that would enable identification of the dominant noise source mechanisms and, ultimately, help develop physics inspired concepts for reducing the far-field acoustic intensity. This paper provides a brief overview of the current status of the computational effort and describes new findings pertaining to the effects of the angle of attack on the aeroacoustics of the slat cove region. Finally, the interplay of the simulation campaign with the concurrently evolving development of a benchmark dataset for an international workshop on airframe noise is outlined.

  15. Efficient Modeling of Laser-Plasma Accelerators with INF&RNO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, C.; Schroeder, C. B.; Esarey, E.

    2010-11-04

    The numerical modeling code INF&RNO (INtegrated Fluid and paRticle simulatioN cOde, pronounced 'inferno') is presented. INF&RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser, while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations, and here a set of validation tests together with a discussion of the performance are presented.

  16. Development of a Hybrid RANS/LES Method for Compressible Mixing Layer Simulations

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Alexander, J. Iwan D.; Reshotko, Eli

    2001-01-01

    A hybrid method has been developed for simulations of compressible turbulent mixing layers. Such mixing layers dominate the flows in exhaust systems of modern-day aircraft and also those of hypersonic vehicles currently under development. The hybrid method uses a Reynolds-averaged Navier-Stokes (RANS) procedure to calculate wall-bounded regions entering a mixing section, and a Large Eddy Simulation (LES) procedure to calculate the mixing-dominated regions. A numerical technique was developed to enable the use of the hybrid RANS/LES method on stretched, non-Cartesian grids. The hybrid RANS/LES method is applied to a benchmark compressible mixing layer experiment. Preliminary two-dimensional calculations are used to investigate the effects of axial grid density and boundary conditions. The actual LES calculations, performed in three spatial directions, indicated initial vortex shedding followed by rapid transition to turbulence, in agreement with experimental observations.

  17. Benchmarking sheath subgrid boundary conditions for macroscopic-scale simulations

    NASA Astrophysics Data System (ADS)

    Jenkins, T. G.; Smithe, D. N.

    2015-02-01

    The formation of sheaths near metallic or dielectric-coated wall materials in contact with a plasma is ubiquitous, often giving rise to physical phenomena (sputtering, secondary electron emission, etc.) which influence plasma properties and dynamics both near and far from the material interface. In this paper, we use first-principles PIC simulations of such interfaces to formulate a subgrid sheath boundary condition which encapsulates fundamental aspects of the sheath behavior at the interface. Such a boundary condition, based on the capacitive behavior of the sheath, is shown to be useful in fluid simulations wherein sheath scale lengths are substantially smaller than the scale lengths of other relevant physical processes (e.g. radiofrequency wavelengths), in that it enables kinetic processes associated with the presence of the sheath to be modeled numerically without explicit resolution of spatial and temporal sheath scales such as the electron Debye length or plasma frequency.
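    The capacitive picture of the sheath can be made concrete with a parallel-plate estimate, treating the sheath as a vacuum gap a few electron Debye lengths wide (the width factor is an assumption for illustration; the paper's subgrid condition is derived from PIC simulations, not from this formula):

```python
from math import sqrt

EPS0 = 8.854e-12  # vacuum permittivity, F/m
QE = 1.602e-19    # elementary charge, C

def debye_length(te_ev, ne):
    """Electron Debye length (m); te_ev in eV, ne in m^-3.
    lambda_D = sqrt(eps0 * k_B * T_e / (n_e * e^2)), with k_B*T_e = te_ev * e."""
    return sqrt(EPS0 * te_ev / (ne * QE))

def sheath_capacitance_per_area(te_ev, ne, width_in_debye=3.0):
    """Areal capacitance (F/m^2) of a sheath modelled as a vacuum gap
    `width_in_debye` Debye lengths wide (parallel-plate estimate)."""
    return EPS0 / (width_in_debye * debye_length(te_ev, ne))
```

    Because the Debye length is micrometres in typical laboratory plasmas, resolving it explicitly in a device-scale fluid mesh is impractical, which is the motivation for the subgrid boundary condition.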

  18. Simulation of Benchmark Cases with the Terminal Area Simulation System (TASS)

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.; Proctor, Fred H.

    2011-01-01

    The hydrodynamic core of the Terminal Area Simulation System (TASS) is evaluated against different benchmark cases. In the absence of closed form solutions for the equations governing atmospheric flows, the models are usually evaluated against idealized test cases. Over the years, various authors have suggested a suite of these idealized cases which have become standards for testing and evaluating the dynamics and thermodynamics of atmospheric flow models. In this paper, simulations of three such cases are described. In addition, the TASS model is evaluated against a test case that uses an exact solution of the Navier-Stokes equations. The TASS results are compared against previously reported simulations of these benchmark cases in the literature. It is demonstrated that the TASS model is highly accurate, stable and robust.
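    A classic exact solution of the Navier-Stokes equations used for such solver evaluations is the decaying two-dimensional Taylor-Green vortex; whether this is the specific exact solution used for TASS is not stated in the abstract. A minimal sketch:

```python
import numpy as np

def taylor_green(x, y, t, nu):
    """Decaying 2D Taylor-Green vortex, an exact incompressible
    Navier-Stokes solution on a periodic domain (viscosity nu)."""
    decay = np.exp(-2.0 * nu * t)     # viscous amplitude decay
    u = np.cos(x) * np.sin(y) * decay
    v = -np.sin(x) * np.cos(y) * decay
    return u, v
```

    Comparing a solver's velocity field against this closed form at several times gives a direct measure of numerical dissipation and convergence order.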

  19. The ADER-DG method for seismic wave propagation and earthquake rupture dynamics

    NASA Astrophysics Data System (ADS)

    Pelties, Christian; Gabriel, Alice; Ampuero, Jean-Paul; de la Puente, Josep; Käser, Martin

    2013-04-01

    We will present the Arbitrary high-order DERivatives Discontinuous Galerkin (ADER-DG) method for solving the combined elastodynamic wave propagation and dynamic rupture problem. The ADER-DG method enables high-order accuracy in space and time while being implemented on unstructured tetrahedral meshes. A tetrahedral element discretization provides rapid and automated mesh generation as well as geometrical flexibility. Features such as mesh coarsening and local time-stepping schemes can be applied to reduce computational effort without introducing numerical artifacts. The method is well suited for parallelization and large-scale high-performance computing since only directly neighboring elements exchange information via numerical fluxes. The concept of fluxes is a key ingredient of the numerical scheme, as it governs the numerical dispersion and diffusion properties and allows the scheme to accommodate boundary conditions, empirical friction laws for dynamic rupture processes, and the combination of different element types and non-conforming mesh transitions. After introducing fault dynamics into the ADER-DG framework, we will demonstrate its specific advantages in benchmarking test scenarios provided by the SCEC/USGS Spontaneous Rupture Code Verification Exercise. An important result of the benchmark is that the ADER-DG method avoids spurious high-frequency contributions in the slip rate spectra and therefore does not require artificial Kelvin-Voigt damping, filtering, or other modifications of the produced synthetic seismograms. To demonstrate the capabilities of the proposed scheme we simulate an earthquake scenario, inspired by the 1992 Landers earthquake, that includes branching and curved fault segments. Furthermore, topography is respected in the discretized model to capture the surface waves correctly. The advanced geometrical flexibility, combined with enhanced accuracy, will make the ADER-DG method a useful tool for studying earthquake dynamics on complex fault systems in realistic rheologies.

  20. Benchmarking Multilayer-HySEA model for landslide generated tsunami. NTHMP validation process.

    NASA Astrophysics Data System (ADS)

    Macias, J.; Escalante, C.; Castro, M. J.

    2017-12-01

    Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated the NTHMP to benchmark models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models when the source is seismic. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid-slide and deformable-slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks, for a total of seven benchmarks. The Multilayer-HySEA model, which includes non-hydrostatic effects, has been used to perform all of the benchmark problems dealing with laboratory experiments proposed at the workshop organized by the NTHMP at Texas A&M University - Galveston on January 9-11, 2017. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government research project SIMURISK (MTM2015-70490-C02-01-R) and the University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  1. Tetrahedral-Mesh Simulation of Turbulent Flows with the Space-Time Conservative Schemes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji; Cheng, Gary C.

    2015-01-01

    Direct numerical simulations of turbulent flows are predominantly carried out using structured, hexahedral meshes despite decades of development in unstructured mesh methods. Tetrahedral meshes offer ease of mesh generation around complex geometries and the potential of an orientation-free grid that would provide unbiased small-scale dissipation and more accurate intermediate-scale solutions. However, due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for triangular and tetrahedral meshes at the cell interfaces, numerical issues exist when flow discontinuities or stagnation regions are present. The space-time conservative conservation element solution element (CESE) method - due to its Riemann-solver-free shock-capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space - has the potential to simulate turbulent flows more accurately using unstructured tetrahedral meshes. To pave the way towards accurate simulation of shock/turbulent boundary-layer interaction, a series of wave and shock interaction benchmark problems of increasing complexity is computed in this paper with triangular/tetrahedral meshes. Preliminary computations for the normal shock/turbulence interactions are carried out with a relatively coarse mesh, by direct numerical simulation standards, in order to assess other effects such as boundary conditions and the necessity of a buffer domain. The results indicate that qualitative agreement with previous studies can be obtained for flows where strong shocks coexist with unsteady waves spanning a broad range of scales, using a relatively compact computational domain and less stringent requirements for grid clustering near the shock. With the space-time conservation properties, stable solutions without any spurious wave reflections can be obtained without the need for buffer domains near the outflow/far-field boundaries. Computational results for isotropic turbulent flow decay, at a relatively high turbulent Mach number, show a well-behaved spectral decay rate for medium to high wave numbers. The high-order CESE schemes offer very robust solutions even in the presence of strong shocks or widespread shocklets. The explicit formulation, in conjunction with a theoretical upper Courant number bound close to unity, has the potential to offer an efficient numerical framework for general compressible turbulent flow simulations with unstructured meshes.

  2. Geometrically motivated coordinate system for exploring spacetime dynamics in numerical-relativity simulations using a quasi-Kinnersley tetrad

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Brink, Jeandrew; Szilágyi, Béla; Lovelace, Geoffrey

    2012-10-01

    We investigate the suitability and properties of a quasi-Kinnersley tetrad and a geometrically motivated coordinate system as tools for quantifying both strong-field and wave-zone effects in numerical relativity (NR) simulations. We fix two of the coordinate degrees of freedom of the metric, namely, the radial and latitudinal coordinates, using the Coulomb potential associated with the quasi-Kinnersley transverse frame. These coordinates are invariants of the spacetime and can be used to unambiguously fix the outstanding spin-boost freedom associated with the quasi-Kinnersley frame (and thus can be used to choose a preferred quasi-Kinnersley tetrad). In the limit of small perturbations about a Kerr spacetime, these geometrically motivated coordinates and quasi-Kinnersley tetrad reduce to Boyer-Lindquist coordinates and the Kinnersley tetrad, irrespective of the simulation gauge choice. We explore the properties of this construction both analytically and numerically, and we gain insights regarding the propagation of radiation described by a super-Poynting vector, further motivating the use of this construction in NR simulations. We also quantify in detail the peeling properties of the chosen tetrad and gauge. We argue that these choices are particularly well-suited for a rapidly converging wave-extraction algorithm as the extraction location approaches infinity, and we explore numerically the extent to which this property remains applicable on the interior of a computational domain. Using a number of additional tests, we verify numerically that the prescription behaves as required in the appropriate limits regardless of simulation gauge; these tests could also serve to benchmark other wave extraction methods. 
We explore the behavior of the geometrically motivated coordinate system in dynamical binary-black-hole NR mergers; while we obtain no unexpected results, we do find that these coordinates turn out to be useful for visualizing NR simulations (for example, for vividly illustrating effects such as the initial burst of spurious junk radiation passing through the computational domain). Finally, we carefully scrutinize the head-on collision of two black holes and, for example, the way in which the extracted waveform changes as it moves through the computational domain.

  3. Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks

    NASA Astrophysics Data System (ADS)

    Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.

    2015-12-01

    A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real-world topography can be compared to recent real-world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a cellular automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the algorithm that distributes material from cells to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES in matching the early stages of the 2012-13 Tolbachik flow, Kamchatka, Russia, to 80%. We can also evaluate model performance given uncertain input parameters using a Monte Carlo setup, which illuminates sensitivity to model uncertainty.
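    The two posterior metrics can be computed directly from Boolean inundation maps. A minimal sketch, with A = observed inundation and B = simulated inundation:

```python
import numpy as np

def predictive_values(simulated, observed):
    """P(A|B): fraction of cells simulated as inundated that really were;
    P(notA|notB): fraction of cells simulated as dry that really stayed dry."""
    sim = np.asarray(simulated, dtype=bool)
    obs = np.asarray(observed, dtype=bool)
    ppv = (sim & obs).sum() / max(sim.sum(), 1)
    npv = (~sim & ~obs).sum() / max((~sim).sum(), 1)
    return ppv, npv
```

    Unlike the Jaccard coefficient, these two numbers separate over-prediction from under-prediction, which is what makes them useful for hazard forecasting.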

  4. An Optimization Study of Hot Stamping Operation

    NASA Astrophysics Data System (ADS)

    Ghoo, Bonyoung; Umezu, Yasuyoshi; Watanabe, Yuko; Ma, Ninshu; Averill, Ron

    2010-06-01

    In the present study, 3-dimensional finite element analyses of hot-stamping processes for an Audi B-pillar product are conducted using JSTAMP/NV and HEEDS. Special attention is paid to optimization of the simulation technology coupled with thermal-mechanical formulations. Numerical simulation based on FEM technology and optimization design using the hybrid adaptive SHERPA algorithm are applied to the hot stamping operation to improve productivity. The robustness of the SHERPA algorithm is demonstrated by the results of the benchmark example. The SHERPA algorithm is shown to be far superior to the GA (Genetic Algorithm) in terms of efficiency, with a calculation time about 7 times shorter than that of the GA, and shows high performance on a large-scale problem with a complicated design space and long calculation time.

  5. Real-time simulation of biological soft tissues: a PGD approach.

    PubMed

    Niroomandi, S; González, D; Alfaro, I; Bordeu, F; Leygue, A; Cueto, E; Chinesta, F

    2013-05-01

    We introduce here a novel approach for the numerical simulation of nonlinear, hyperelastic soft tissues at the kilohertz feedback rates necessary for haptic rendering. This approach is based upon the use of proper generalized decomposition (PGD) techniques, a generalization of proper orthogonal decomposition (POD). PGD techniques can be considered a means of a priori model order reduction and provide a physics-based meta-model without the need for prior computer experiments. The suggested strategy is thus composed of an offline phase, in which a general meta-model is computed, and an online evaluation phase, in which results are obtained in real time. Results are provided that show the potential of the proposed technique, together with benchmark tests that show the accuracy of the method. Copyright © 2013 John Wiley & Sons, Ltd.

  6. Five-equation and robust three-equation methods for solution verification of large eddy simulation

    NASA Astrophysics Data System (ADS)

    Dutta, Rabijit; Xing, Tao

    2018-02-01

    This study evaluates the recently developed general framework of solution verification methods for large eddy simulation (LES) using implicitly filtered LES of periodic channel flow at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and can therefore be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows monotonic convergence of the predicted numerical benchmark (S_C) and provides realistic error estimates without the need to fix the orders of accuracy for either numerical or modeling errors. Based on the results from the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. It was found that the new three-equation method is robust, as it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests that error cancellation plays an essential role in LES. When a Reynolds-averaged Navier-Stokes (RANS) based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes; however, it predicts reasonable S_C when the grids resolve at least 80% of the total turbulent kinetic energy.
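The abstract does not spell out the five- or three-equation systems themselves, but the underlying idea of solution verification on systematically refined grids can be illustrated with the classic single-term Richardson estimate. This sketch is a generic grid-triplet procedure, not the paper's coupled numerical/modeling-error method:

```python
import math

def richardson_estimate(f_fine, f_med, f_coarse, r):
    """Estimate the observed order of accuracy p, the fine-grid numerical
    error, and an extrapolated benchmark value from solutions on three
    systematically refined grids with constant refinement ratio r."""
    eps21 = f_med - f_fine
    eps32 = f_coarse - f_med
    p = math.log(eps32 / eps21) / math.log(r)   # observed order of accuracy
    error_fine = eps21 / (r**p - 1.0)           # Richardson error estimate
    benchmark = f_fine - error_fine             # extrapolated "exact" value
    return p, error_fine, benchmark
```

For a solution converging as C*h^p, the estimate recovers both p and the grid-free value exactly; in practice (as the paper notes for LES) monotonic convergence is required for such estimates to be meaningful.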

  7. The UBO-TSUFD tsunami inundation model: validation and application to a tsunami case study focused on the city of Catania, Italy

    NASA Astrophysics Data System (ADS)

    Tinti, S.; Tonini, R.

    2013-07-01

    Nowadays numerical models are a powerful tool in tsunami research since they can be used (i) to reconstruct modern and historical events, (ii) to cast new light on tsunami sources by inverting tsunami data and observations, (iii) to build scenarios in the frame of tsunami mitigation plans, and (iv) to produce forecasts of tsunami impact and inundation in early warning systems. In parallel with the general recognition of the importance of numerical tsunami simulations, the demand has grown for reliable tsunami codes, validated through tests agreed upon by the tsunami community. This paper presents the tsunami code UBO-TSUFD, developed at the University of Bologna, Italy, which solves the non-linear shallow water (NSW) equations in a Cartesian frame, including bottom friction and excluding the Coriolis force, by means of a leapfrog (LF) finite-difference scheme on a staggered grid, and which accounts for moving boundaries to compute sea inundation and withdrawal at the coast. Results of UBO-TSUFD applied to four classical benchmark problems are shown: two benchmarks are based on analytical solutions, one on a plane wave propagating in a flat channel with a constant-slope beach, and one on a laboratory experiment. The code is proven to perform very satisfactorily, since it reproduces quite well the benchmark theoretical and experimental data. Further, the code is applied to a realistic tsunami case: a scenario of a tsunami threatening the coasts of eastern Sicily, Italy, is defined and discussed based on the historical tsunami of 11 January 1693, one of the most severe events in Italian history.
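The core of such a leapfrog scheme on a staggered grid can be sketched for the linearized 1-D case. This toy omits the nonlinear advection, bottom friction, and moving-boundary logic that UBO-TSUFD implements; the reflective-wall closure and all names are illustrative:

```python
import numpy as np

def leapfrog_sw(eta0, depth, dx, dt, nsteps, g=9.81):
    """Linearized shallow-water step on a staggered grid: surface
    elevation eta lives at cell centres, velocity u at interior faces.
    Reflective walls (u = 0) are assumed at both ends."""
    eta = eta0.copy()
    u = np.zeros(len(eta) - 1)                 # faces between centres
    for _ in range(nsteps):
        # momentum: du/dt = -g * d(eta)/dx
        u -= g * dt / dx * np.diff(eta)
        # continuity: d(eta)/dt = -depth * du/dx
        flux = depth * u
        eta[1:-1] -= dt / dx * np.diff(flux)
        eta[0] -= dt / dx * flux[0]            # wall: no flux from the left
        eta[-1] += dt / dx * flux[-1]          # wall: no flux to the right
    return eta
```

The staggered flux-difference form conserves total water volume exactly, which is one reason this scheme family is a common basis for inundation codes; stability requires the usual CFL condition dt < dx / sqrt(g * depth).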

  8. A 1D radiative transfer benchmark with polarization via doubling and adding

    NASA Astrophysics Data System (ADS)

    Ganapol, B. D.

    2017-11-01

    Highly precise numerical solutions to the radiative transfer equation with polarization present a special challenge. Here, we establish a precise numerical solution to the radiative transfer equation with combined Rayleigh and isotropic scattering in a 1D-slab medium with simple polarization. The 2-Stokes vector solution for the fully discretized radiative transfer equation in space and direction derives from the method of doubling and adding, enhanced through convergence acceleration. Updates to benchmark solutions found in the literature follow, to seven places for reflectance and transmittance as well as for angular flux. Finally, we conclude with the numerical solution in a partially randomly absorbing heterogeneous medium.

  9. A spectral, quasi-cylindrical and dispersion-free Particle-In-Cell algorithm

    DOE PAGES

    Lehe, Remi; Kirchen, Manuel; Andriyash, Igor A.; ...

    2016-02-17

    We propose a spectral Particle-In-Cell (PIC) algorithm based on the combination of a Hankel transform and a Fourier transform. For physical problems that have close-to-cylindrical symmetry, this algorithm can be much faster than full 3D PIC algorithms. In addition, unlike standard finite-difference PIC codes, the proposed algorithm is free of spurious numerical dispersion in vacuum. The algorithm is benchmarked in several situations of interest for laser-plasma interactions. These benchmarks show that it avoids a number of numerical artifacts that would otherwise affect the physics in a standard PIC algorithm, including the zero-order numerical Cherenkov effect.

  10. Simulated Stochastic Approximation Annealing for Global Optimization with a Square-Root Cooling Schedule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Faming; Cheng, Yichen; Lin, Guang

    2014-06-13

    Simulated annealing has been widely used in the solution of optimization problems. As known by many researchers, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used; however, the logarithmic cooling schedule is so slow that it is impractical in terms of CPU time. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while guaranteeing that the global optima are reached as the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
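As a rough illustration of the cooling schedules being compared, a toy Metropolis optimizer with the square-root schedule T_k = t0/sqrt(k+1) can be sketched as below. The actual SAA algorithm additionally embeds a stochastic approximation update of the sampling distribution, which is omitted here; all names and constants are illustrative:

```python
import math
import random

def anneal_sqrt_cooling(f, x0, t0=10.0, steps=20000, step_size=0.5, seed=1):
    """Minimize a 1-D function with Metropolis moves under a
    square-root cooling schedule, tracking the best point seen."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(steps):
        t = t0 / math.sqrt(k + 1)              # square-root cooling
        cand = x + rng.uniform(-step_size, step_size)
        fc = f(cand)
        # Metropolis acceptance: always take improvements, sometimes worse
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

A logarithmic schedule would replace the temperature line with t = t0 / math.log(k + 2), which decays far more slowly; the paper's contribution is proving that the faster square-root schedule still reaches the global optima within the SAA framework.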

  11. Simulator for SUPO, a Benchmark Aqueous Homogeneous Reactor (AHR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Steven Karl; Determan, John C.

    2015-10-14

    A simulator has been developed for SUPO (Super Power) an aqueous homogeneous reactor (AHR) that operated at Los Alamos National Laboratory (LANL) from 1951 to 1974. During that period SUPO accumulated approximately 600,000 kWh of operation. It is considered the benchmark for steady-state operation of an AHR. The SUPO simulator was developed using the process that resulted in a simulator for an accelerator-driven subcritical system, which has been previously reported.

  12. Adaptive MPC based on MIMO ARX-Laguerre model.

    PubMed

    Ben Abdelwahed, Imen; Mbarek, Abdelkader; Bouzrara, Kais

    2017-03-01

    This paper proposes a method for synthesizing an adaptive predictive controller using a reduced-complexity model. The latter is given by the projection of the ARX model onto Laguerre bases. The resulting model, entitled MIMO ARX-Laguerre, is characterized by an easy recursive representation. The adaptive predictive control law is computed based on multi-step-ahead finite-element predictors, identified directly from experimental input/output data. The model is tuned in each iteration by an online identification algorithm for both the model parameters and the Laguerre poles. The proposed approach avoids the time-consuming numerical optimization algorithms associated with most common linear predictive control strategies, which makes it suitable for real-time implementation. The method is used to synthesize and test, in numerical simulations, adaptive predictive controllers for the CSTR process benchmark. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Modified Backtracking Search Optimization Algorithm Inspired by Simulated Annealing for Constrained Engineering Optimization Problems

    PubMed Central

    Wang, Hailong; Sun, Yuqiu; Su, Qinghua; Xia, Xuewen

    2018-01-01

    The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity while its local exploitation capability is relatively poor. This affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome the deficiency of BSA. In the BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion in simulated annealing. The redesigned F could be adaptively decreased as the number of iterations increases and it does not introduce extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms when solving thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrated that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed. PMID:29666635

  14. Towards a new multiscale air quality transport model using the fully unstructured anisotropic adaptive mesh technology of Fluidity (version 4.1.9)

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.

    2015-10-01

    An integrated method of advanced anisotropic hr-adaptive mesh and discretization numerical techniques has been applied, for the first time, to the modelling of multiscale advection-diffusion problems, based on a discontinuous Galerkin/control volume discretization on unstructured meshes. Compared with existing air quality models, which are typically based on static structured grids with a local nesting technique, the anisotropic hr-adaptive model has the advantage of being able to adapt the mesh according to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and effectively. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) advection phenomena. Comparisons have been made between the results obtained using uniform-resolution meshes and anisotropic adaptive-resolution meshes. Performance achieved in 3-D simulation of power plant plumes indicates that this new adaptive multiscale model has the potential to provide accurate air quality modelling solutions effectively.

  15. Simulation of Quantum Many-Body Dynamics for Generic Strongly-Interacting Systems

    NASA Astrophysics Data System (ADS)

    Meyer, Gregory; Machado, Francisco; Yao, Norman

    2017-04-01

    Recent experimental advances have enabled the bottom-up assembly of complex, strongly interacting quantum many-body systems from individual atoms, ions, molecules and photons. These advances open the door to studying dynamics in isolated quantum systems as well as the possibility of realizing novel out-of-equilibrium phases of matter. Numerical studies provide insight into these systems; however, computational time and memory usage limit common numerical methods such as exact diagonalization to relatively small Hilbert spaces of dimension 2^15. Here we present progress toward a new software package for dynamical time evolution of large generic quantum systems on massively parallel computing architectures. By projecting large sparse Hamiltonians into a much smaller Krylov subspace, we are able to compute the evolution of strongly interacting systems with Hilbert space dimension nearing 2^30. We discuss and benchmark different design implementations, such as matrix-free methods and GPU based calculations, using both pre-thermal time crystals and the Sachdev-Ye-Kitaev model as examples. We also include a simple symbolic language to describe generic Hamiltonians, allowing simulation of diverse quantum systems without any modification of the underlying C and Fortran code.
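The Krylov-subspace idea can be illustrated at small scale with SciPy, whose `expm_multiply` applies exp(-iHt) to a state vector without ever forming the dense matrix exponential. This is a stand-in sketch at a toy dimension, not the package described in the abstract:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import expm_multiply

dim = 2**10                                    # small stand-in dimension
# Build a random sparse Hermitian "Hamiltonian" (real symmetric here)
h = sparse_random(dim, dim, density=1e-3, random_state=0, format="csr")
h = 0.5 * (h + h.T)

psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0                                  # start in a basis state

# Evolve |psi0> to time t = 0.1 under exp(-i H t), matrix-free
psi_t = expm_multiply(-1j * 0.1 * h.tocsc(), psi0)
print(abs(np.vdot(psi_t, psi_t)))              # unitary evolution: norm ~ 1.0
```

Because only sparse matrix-vector products are needed, the memory cost scales with the number of nonzeros rather than dim^2, which is the property that makes dimensions near 2^30 reachable on large parallel machines.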

  16. Nonlinear Lamb waves for fatigue damage identification in FRP-reinforced steel plates.

    PubMed

    Wang, Yikuan; Guan, Ruiqi; Lu, Ye

    2017-09-01

    A nonlinear Lamb-wave-based method for fatigue crack detection in steel plates with and without carbon fibre reinforced polymer (CFRP) reinforcement is presented in this study. Both numerical simulation and experimental evaluation were performed for Lamb wave propagation and its interaction with a fatigue crack on these two steel plate types. With the generation of the second harmonic, the damage-induced wave nonlinearities were identified by surface-bonded piezoelectric sensors. Numerical simulation revealed that the damage-induced wave component at the second harmonic was only slightly affected by the existence of CFRP laminate, although the total wave energy was decreased because of wave leakage into the CFRP laminate. Due to unavoidable nonlinearity from the experimental environment, it was impractical to directly extract the time-of-flight of the second harmonic for locating the crack. To this end, the correlation coefficient between the benchmark signal and the damaged-state signal at the double frequency was calculated in the time domain, based on which an imaging method was introduced to locate the fatigue crack in steel plates with and without CFRP laminates. Copyright © 2017 Elsevier B.V. All rights reserved.
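A minimal version of the double-frequency correlation metric can be sketched as follows; the FFT band-pass mask and all parameter names are illustrative assumptions, not the study's actual processing chain:

```python
import numpy as np

def second_harmonic_correlation(benchmark, damaged, fs, f0, bandwidth=10e3):
    """Band-pass both signals around the second harmonic 2*f0 with a
    simple FFT mask, then return their time-domain correlation
    coefficient (near 1.0 for matching second-harmonic content)."""
    def bandpass(x):
        spec = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        spec[np.abs(freqs - 2.0 * f0) > bandwidth] = 0.0   # keep 2*f0 band
        return np.fft.irfft(spec, n=len(x))
    a, b = bandpass(benchmark), bandpass(damaged)
    return np.corrcoef(a, b)[0, 1]
```

In an imaging scheme, this coefficient would be evaluated over a grid of candidate crack locations (via sensor-pair path delays) to build the damage map described in the abstract.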

  17. Numerical modeling of the Madison Dynamo Experiment.

    NASA Astrophysics Data System (ADS)

    Bayliss, R. A.; Wright, J. C.; Forest, C. B.; O'Connell, R.

    2002-11-01

    The growth, saturation and turbulent evolution of the Madison dynamo experiment are investigated numerically using a 3-D pseudo-spectral simulation of the MHD equations; results of the simulations will be compared to results obtained from the experiment. The code, Dynamo (Fortran90), allows for full evolution of the magnetic and velocity fields. The induction equation governing B and the curl of the momentum equation governing V are solved separately or simultaneously. The code uses a spectral representation of the vector fields via spherical harmonic basis functions in longitude and latitude, and fourth-order finite differences in the radial direction. The magnetic field evolution has been benchmarked against the laminar kinematic dynamo predicted by M.L. Dudley and R.W. James (Time-dependent kinematic dynamos with stationary flows, Proc. R. Soc. Lond. A 425, p. 407 (1989)). Power balance in the system has been verified in mechanically driven and perturbed hydrodynamic, kinematic, and dynamic cases. Evolution of the vacuum magnetic field has been added to facilitate comparison with the experiment. Modeling of the Madison Dynamo eXperiment will be presented.

  18. Revisiting Yasinsky and Henry's benchmark using modern nodal codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feltus, M.A.; Becker, M.W.

    1995-12-31

    The numerical experiments analyzed by Yasinsky and Henry are quite trivial by today's standards because they used the finite difference code WIGLE for their benchmark. Also, this problem is a simple slab (one-dimensional) case with no feedback mechanisms. This research attempts to obtain STAR (Ref. 2) and NEM (Ref. 3) code results in order to produce a more modern kinetics benchmark with results comparable to WIGLE.

  19. Evaluation of the ACEC Benchmark Suite for Real-Time Applications

    DTIC Science & Technology

    1990-07-23

    1.0 benchmark suite was analyzed with respect to its measuring of Ada real-time features such as tasking, memory management, input/output, scheduling...and delay statement, Chapter 13 features, pragmas, interrupt handling, subprogram overhead, numeric computations etc. For most of the features that...meant for programming real-time systems. The ACEC benchmarks have been analyzed extensively with respect to their measuring of Ada real-time features

  20. Simulation of granular and gas-solid flows using discrete element method

    NASA Astrophysics Data System (ADS)

    Boyalakuntla, Dhanunjay S.

    2003-10-01

    In recent years there has been increased research activity in the experimental and numerical study of gas-solid flows. Flows of this type have numerous applications in the energy, pharmaceuticals, and chemicals process industries. Typical applications include pulverized coal combustion, flow and heat transfer in bubbling and circulating fluidized beds, hopper and chute flows, pneumatic transport of pharmaceutical powders and pellets, and many more. The present work addresses the study of gas-solid flows using computational fluid dynamics (CFD) techniques and discrete element simulation methods (DES) combined. Many previous studies of coupled gas-solid flows have been performed assuming the solid phase as a continuum with averaged properties and treating the gas-solid flow as constituting of interpenetrating continua. Instead, in the present work, the gas phase flow is simulated using continuum theory and the solid phase flow is simulated using DES. DES treats each solid particle individually, thus accounting for its dynamics due to particle-particle interactions, particle-wall interactions as well as fluid drag and buoyancy. The present work involves developing efficient DES methods for dense granular flow and coupling this simulation to continuum simulations of the gas phase flow. Simulations have been performed to observe pure granular behavior in vibrating beds. Benchmark cases have been simulated and the results obtained match the published literature. The dimensionless acceleration amplitude and the bed height are the parameters governing bed behavior. Various interesting behaviors such as heaping, round and cusp surface standing waves, as well as kinks, have been observed for different values of the acceleration amplitude for a given bed height. Furthermore, binary granular mixtures (granular mixtures with two particle sizes) in a vibrated bed have also been studied. Gas-solid flow simulations have been performed to study fluidized beds. 
Benchmark 2D fluidized bed simulations have been performed and the results have been shown to compare satisfactorily with those published in the literature. A comprehensive study of the effect of drag correlations on the simulation of fluidized beds has been performed. It has been found that nearly all the drag correlations studied make similar predictions of global quantities such as the time-dependent pressure drop, bubbling frequency and growth. In conclusion, discrete element simulation has been successfully coupled to the continuum gas-phase simulation. Though all the results presented in the thesis are two-dimensional, the present implementation is completely three-dimensional and can be used to study 3D fluidized beds to aid in better design and understanding. Other industrially important phenomena like particle coating, coal gasification etc., and applications in emerging areas such as nano-particle/fluid mixtures, can also be studied through this type of simulation. (Abstract shortened by UMI.)
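At the heart of any such DES/DEM code is a per-contact force law evaluated for every touching particle pair. A minimal soft-sphere spring-dashpot sketch for the normal force (coefficients and names illustrative, not from the thesis):

```python
import numpy as np

def contact_force(x1, x2, v1, v2, r1, r2, kn=1.0e4, cn=5.0):
    """Linear spring-dashpot normal contact force on particle 1 from
    particle 2: repulsion proportional to overlap, damping proportional
    to the normal relative velocity. Zero when not in contact."""
    d = x1 - x2
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0:
        return np.zeros_like(d)                # particles not touching
    n = d / dist                               # unit normal, from 2 to 1
    vn = np.dot(v1 - v2, n)                    # normal relative velocity
    return (kn * overlap - cn * vn) * n        # spring minus dashpot
```

Summing such pair forces (plus wall contacts, fluid drag and buoyancy from the continuum gas solver) gives the acceleration of each particle, which is then integrated in time; tangential friction and rolling resistance are handled analogously in full DEM codes.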

  1. Numerical simulation of magmatic hydrothermal systems

    USGS Publications Warehouse

    Ingebritsen, S.E.; Geiger, S.; Hurwitz, S.; Driesner, T.

    2010-01-01

    The dynamic behavior of magmatic hydrothermal systems entails coupled and nonlinear multiphase flow, heat and solute transport, and deformation in highly heterogeneous media. Thus, quantitative analysis of these systems depends mainly on numerical solution of coupled partial differential equations and complementary equations of state (EOS). The past 2 decades have seen steady growth of computational power and the development of numerical models that have eliminated or minimized the need for various simplifying assumptions. Considerable heuristic insight has been gained from process-oriented numerical modeling. Recent modeling efforts employing relatively complete EOS and accurate transport calculations have revealed dynamic behavior that was damped by linearized, less accurate models, including fluid property control of hydrothermal plume temperatures and three-dimensional geometries. Other recent modeling results have further elucidated the controlling role of permeability structure and revealed the potential for significant hydrothermally driven deformation. Key areas for future research include incorporation of accurate EOS for the complete H2O-NaCl-CO2 system, more realistic treatment of material heterogeneity in space and time, realistic description of large-scale relative permeability behavior, and intercode benchmarking comparisons. Copyright 2010 by the American Geophysical Union.

  2. A Fast MHD Code for Gravitationally Stratified Media using Graphical Processing Units: SMAUG

    NASA Astrophysics Data System (ADS)

    Griffiths, M. K.; Fedun, V.; Erdélyi, R.

    2015-03-01

    Parallelization techniques have been exploited most successfully by the gaming/graphics industry with the adoption of graphical processing units (GPUs), possessing hundreds of processor cores. The opportunity has been recognized by the computational sciences and engineering communities, who have recently successfully harnessed the numerical performance of GPUs. For example, parallel magnetohydrodynamic (MHD) algorithms are important for numerical modelling of highly inhomogeneous solar, astrophysical and geophysical plasmas. Here, we describe the implementation of SMAUG, the Sheffield Magnetohydrodynamics Algorithm Using GPUs. SMAUG is a 1-3D MHD code capable of modelling magnetized and gravitationally stratified plasma. The objective of this paper is to present the numerical methods and techniques used for porting the code to this novel and highly parallel compute architecture. The methods employed are justified by the performance benchmarks and validation results, demonstrating that the code successfully simulates the physics for a range of test scenarios including a full 3D realistic model of wave propagation in the solar atmosphere.

  3. Benchmarking the mesoscale variability in global ocean eddy-permitting numerical systems

    NASA Astrophysics Data System (ADS)

    Cipollone, Andrea; Masina, Simona; Storto, Andrea; Iovino, Doroteaciro

    2017-10-01

    The role of data assimilation procedures in representing ocean mesoscale variability is assessed by applying eddy statistics to a state-of-the-art global ocean reanalysis (C-GLORS), a free global ocean simulation (performed with the NEMO system) and an observation-based dataset (ARMOR3D) used as an independent benchmark. Numerical results are computed on a 1/4° horizontal grid (ORCA025) and share the same resolution with the ARMOR3D dataset. This "eddy-permitting" resolution is sufficient to allow ocean eddies to form. Further to assessing the eddy statistics from the three different datasets, a global three-dimensional eddy detection system is implemented in order to bypass the need for region-dependent threshold definitions, typical of commonly adopted eddy detection algorithms. It thus provides full three-dimensional eddy statistics, segmenting vertical profiles from local rotational velocities. This criterion is crucial for discerning real eddies from the transient surface noise that inevitably affects any two-dimensional algorithm. Data assimilation enhances and corrects mesoscale variability on a wide range of features that cannot be well reproduced otherwise. The free simulation fairly reproduces eddies emerging from western boundary currents and deep baroclinic instabilities, while it underestimates the shallower vortices that populate the full basin. The ocean reanalysis recovers most of the missing turbulence, shown by satellite products, that is not generated by the model itself, and consistently projects surface variability deep into the water column. The comparison with the statistically reconstructed vertical profiles from ARMOR3D shows that ocean data assimilation is able to embed variability into the model dynamics, constraining eddies with in situ and altimetry observations and generating them consistently with the local environment.

  4. Effect of Variable Manning Coefficients on Tsunami Inundation

    NASA Astrophysics Data System (ADS)

    Barberopoulou, A.; Rees, D.

    2017-12-01

    Numerical simulations are commonly used to help estimate tsunami hazard, improve evacuation plans, issue or cancel tsunami warnings, and inform forecasting and hazard assessments, and they have therefore become an integral part of hazard mitigation in the tsunami community. Many numerical codes exist for simulating tsunamis, most of which have undergone extensive benchmarking and testing. Tsunami hazard or risk assessments employ these codes following a deterministic or probabilistic approach. Depending on the scope, these studies may or may not consider uncertainty in the numerical simulations, the effects of tides, or variable friction, or estimate financial losses, none of which are necessarily trivial. Distributed Manning coefficients, the roughness coefficients used in hydraulic modeling, are commonly used in simulating both riverine and pluvial flood events; however, their use in tsunami hazard assessments is primarily limited to small-scope studies and is, for the most part, not standard practice. For this work, we investigate variations in Manning coefficients and their effects on tsunami inundation extent, pattern and financial loss. To assign Manning coefficients we use land use maps that come from the New Zealand Land Cover Database (LCDB) and more recent data from the Ministry of the Environment. More than 40 classes covering different types of land use are combined into major classes such as cropland, grassland and wetland representing common types of land use in New Zealand, each of which is assigned a unique Manning coefficient. By utilizing different data sources for variable Manning coefficients, we examine the impact of data sources and classification methodology on the accuracy of model outputs.
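Mapping aggregated land-cover classes to roughness values is essentially a lookup over the class grid. A minimal sketch with typical literature values for Manning's n (the class names and coefficients here are illustrative, not the values used in this study):

```python
import numpy as np

# Illustrative mapping from aggregated LCDB-style land-cover classes to
# Manning roughness n; real studies calibrate or cite these per class.
MANNING_N = {
    "water": 0.025,
    "cropland": 0.035,
    "grassland": 0.030,
    "wetland": 0.060,
    "forest": 0.100,
    "urban": 0.080,
}

def manning_grid(landcover, table=MANNING_N, default=0.025):
    """Turn a 2-D grid of land-cover class names into a grid of Manning
    n values, ready to feed the friction term of an inundation model.
    Unrecognized classes fall back to the (assumed) open-water default."""
    lookup = np.vectorize(lambda c: table.get(c, default))
    return lookup(np.asarray(landcover))
```

Swapping the table (e.g. LCDB-derived versus a newer ministry dataset) while holding the rest of the model fixed is exactly the kind of sensitivity experiment the abstract describes.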

  5. Characterizing Wheel-Soil Interaction Loads Using Meshfree Finite Element Methods: A Sensitivity Analysis for Design Trade Studies

    NASA Technical Reports Server (NTRS)

    Contreras, Michael T.; Trease, Brian P.; Bojanowski, Cezary; Kulakx, Ronald F.

    2013-01-01

    A wheel experiencing sinkage and slippage events poses a high risk to planetary rover missions, as evidenced by the mobility challenges endured by the Mars Exploration Rover (MER) project. Current wheel design practice utilizes loads derived from a series of events in the life cycle of the rover which do not include (1) failure metrics related to wheel sinkage and slippage and (2) performance trade-offs based on grouser placement/orientation. Wheel designs are rigorously tested experimentally through a variety of drive scenarios and simulated soil environments; however, a robust simulation capability is still in development due to the myriad complex interaction phenomena that contribute to wheel sinkage and slippage conditions, such as soil composition, large-deformation soil behavior, wheel geometry, nonlinear contact forces, terrain irregularity, etc. For the purposes of modeling wheel sinkage and slippage at an engineering scale, meshfree finite element approaches enable simulations that capture sufficient detail of wheel-soil interaction while remaining computationally feasible. This study implements the JPL wheel-soil benchmark problem in a commercial code environment utilizing the large-deformation modeling capability of Smoothed Particle Hydrodynamics (SPH) meshfree methods. The nominal, benchmark wheel-soil interaction model that produces numerically stable and physically realistic results is presented, and simulations are shown for both wheel traverse and wheel sinkage cases. A sensitivity analysis developing the capability and framework for future flight applications is conducted to illustrate the importance of perturbations to critical material properties and parameters. Implementation of the proposed soil-wheel interaction simulation capability and associated sensitivity framework has the potential to reduce experimentation cost and improve the early-stage wheel design process.

  6. Data assimilation and prognostic whole ice sheet modelling with the variationally derived, higher order, open source, and fully parallel ice sheet model VarGlaS

    NASA Astrophysics Data System (ADS)

    Brinkerhoff, D. J.; Johnson, J. V.

    2013-07-01

    We introduce a novel, higher order, finite element ice sheet model called VarGlaS (Variational Glacier Simulator), which is built on the finite element framework FEniCS. Contrary to standard procedure in ice sheet modelling, VarGlaS formulates ice sheet motion as the minimization of an energy functional, conferring advantages such as a consistent platform for making numerical approximations, a coherent relationship between motion and heat generation, and implicit boundary treatment. VarGlaS also solves the equations of enthalpy rather than temperature, avoiding the solution of a contact problem. Rather than include a lengthy model spin-up procedure, VarGlaS possesses an automated framework for model inversion. These capabilities are brought to bear on several benchmark problems in ice sheet modelling, as well as a 500 yr simulation of the Greenland ice sheet at high resolution. VarGlaS performs well in benchmarking experiments and, given a constant climate and a 100 yr relaxation period, predicts a mass evolution of the Greenland ice sheet that matches present-day observations of mass loss. VarGlaS predicts a thinning in the interior and thickening of the margins of the ice sheet.

  7. Inter-model analysis of tsunami-induced coastal currents

    NASA Astrophysics Data System (ADS)

    Lynett, Patrick J.; Gately, Kara; Wilson, Rick; Montoya, Luis; Arcas, Diego; Aytore, Betul; Bai, Yefei; Bricker, Jeremy D.; Castro, Manuel J.; Cheung, Kwok Fai; David, C. Gabriel; Dogan, Gozde Guney; Escalante, Cipriano; González-Vida, José Manuel; Grilli, Stephan T.; Heitmann, Troy W.; Horrillo, Juan; Kânoğlu, Utku; Kian, Rozita; Kirby, James T.; Li, Wenwen; Macías, Jorge; Nicolsky, Dmitry J.; Ortega, Sergio; Pampell-Manis, Alyssa; Park, Yong Sung; Roeber, Volker; Sharghivand, Naeimeh; Shelby, Michael; Shi, Fengyan; Tehranirad, Babak; Tolkova, Elena; Thio, Hong Kie; Velioğlu, Deniz; Yalçıner, Ahmet Cevdet; Yamazaki, Yoshiki; Zaytsev, Andrey; Zhang, Y. J.

    2017-06-01

    To help produce accurate and consistent maritime hazard products, the National Tsunami Hazard Mitigation Program organized a benchmarking workshop to evaluate the numerical modeling of tsunami currents. Thirteen teams of international researchers, using a set of tsunami models currently utilized for hazard mitigation studies, presented results for a series of benchmarking problems; these results are summarized in this paper. Comparisons focus on physical situations where the currents are shear and separation driven, and are thus de-coupled from the incident tsunami waveform. In general, we find that models of increasing physical complexity provide better accuracy, and that low-order three-dimensional models are superior to high-order two-dimensional models. Inside separation zones and in areas strongly affected by eddies, the magnitude of both model-data errors and inter-model differences can be the same as the magnitude of the mean flow. Thus, we make arguments for the need of an ensemble modeling approach for areas affected by large-scale turbulent eddies, where deterministic simulation may be misleading. As a result of the analyses presented herein, we expect that tsunami modelers now have a better awareness of their ability to accurately capture the physics of tsunami currents, and therefore a better understanding of how to use these simulation tools for hazard assessment and mitigation efforts.
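    An ensemble treatment of the kind argued for above can be sketched in a few lines: given current predictions from several models at one gauge, the ensemble mean serves as the hazard estimate while the inter-model spread flags times where a deterministic result may be misleading. The model names and values below are hypothetical illustration, not workshop data.

```python
# Sketch: ensemble statistics for tsunami-current predictions at one gauge.
# Model names and time series are hypothetical, not the workshop's data.
speeds = {
    "modelA": [0.8, 1.6, 2.3, 1.1],   # current speed time series (m/s)
    "modelB": [1.0, 1.9, 2.0, 0.9],
    "modelC": [0.7, 1.4, 2.6, 1.3],
}

n_t = len(next(iter(speeds.values())))
ensemble_mean = [sum(s[t] for s in speeds.values()) / len(speeds) for t in range(n_t)]
# Spread (std dev across models) flags times where a single model may mislead.
spread = [
    (sum((s[t] - ensemble_mean[t]) ** 2 for s in speeds.values()) / len(speeds)) ** 0.5
    for t in range(n_t)
]
print(ensemble_mean)
print(spread)
```

A large spread relative to the mean, as the paper reports inside separation zones, is exactly the regime where the ensemble view is preferable to any single deterministic run.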

  8. Results of the GABLS3 diurnal-cycle benchmark for wind energy applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodrigo, J. Sanz; Allaerts, D.; Avila, M.

    We present results of the GABLS3 model intercomparison benchmark revisited for wind energy applications. The case consists of a diurnal cycle, measured at the 200-m tall Cabauw tower in the Netherlands, including a nocturnal low-level jet. The benchmark includes a sensitivity analysis of WRF simulations using two input meteorological databases and five planetary boundary-layer schemes. A reference set of mesoscale tendencies is used to drive microscale simulations using RANS k-ϵ and LES turbulence models. The validation is based on rotor-based quantities of interest. Cycle-integrated mean absolute errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Altogether, all the microscale simulations produce a consistent coupling with mesoscale forcings.
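    The cycle-integrated mean absolute error used to quantify model performance can be sketched as follows; the observed and modelled wind speeds are hypothetical, not Cabauw data.

```python
# Sketch of a cycle-integrated mean absolute error:
#   MAE = (1/T) * sum_t |model(t) - obs(t)|, accumulated over the diurnal cycle.
# The series below are hypothetical hub-height wind speeds (m/s).
obs   = [6.0, 7.5, 9.0, 8.0, 5.5]
model = [6.4, 7.1, 9.6, 7.7, 5.9]

mae = sum(abs(m - o) for m, o in zip(model, obs)) / len(obs)
print(round(mae, 3))
```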

  9. Results of the GABLS3 diurnal-cycle benchmark for wind energy applications

    DOE PAGES

    Rodrigo, J. Sanz; Allaerts, D.; Avila, M.; ...

    2017-06-13

    We present results of the GABLS3 model intercomparison benchmark revisited for wind energy applications. The case consists of a diurnal cycle, measured at the 200-m tall Cabauw tower in the Netherlands, including a nocturnal low-level jet. The benchmark includes a sensitivity analysis of WRF simulations using two input meteorological databases and five planetary boundary-layer schemes. A reference set of mesoscale tendencies is used to drive microscale simulations using RANS k-ϵ and LES turbulence models. The validation is based on rotor-based quantities of interest. Cycle-integrated mean absolute errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Altogether, all the microscale simulations produce a consistent coupling with mesoscale forcings.

  10. Benchmark Simulation Model No 2: finalisation of plant layout and default control strategy.

    PubMed

    Nopens, I; Benedetti, L; Jeppsson, U; Pons, M-N; Alex, J; Copp, J B; Gernaey, K V; Rosen, C; Steyer, J-P; Vanrolleghem, P A

    2010-01-01

    The COST/IWA Benchmark Simulation Model No 1 (BSM1) has been available for almost a decade. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the research work related to the benchmark simulation models has resulted in more than 300 publications worldwide demonstrates the interest in and need for such tools within the research community. Recent efforts within the IWA Task Group on "Benchmarking of control strategies for WWTPs" have focused on an extension of the benchmark simulation model. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently, includes both pretreatment of wastewater as well as the processes describing sludge treatment. The motivation for the extension is the increasing interest and need to operate and control wastewater treatment systems not only at an individual process level but also on a plant-wide basis. To facilitate the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows for long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In this paper, the finalised plant layout is summarised and, as was done for BSM1, a default control strategy is proposed. A demonstration of how BSM2 can be used to evaluate control strategies is also given.
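    As a minimal illustration of what a "control strategy" means in the BSM context, the sketch below closes a discrete PI loop on a toy first-order oxygen response; the plant model and gains are hypothetical and far simpler than the BSM2 default strategy.

```python
# Minimal sketch of the kind of control loop benchmarked on BSM platforms:
# a discrete PI controller tracking a dissolved-oxygen setpoint on a toy
# first-order "aeration" plant. Gains and plant dynamics are hypothetical.
def simulate(kp=2.0, ki=0.5, setpoint=2.0, dt=0.01, steps=2000):
    do, integ = 0.0, 0.0
    for _ in range(steps):
        err = setpoint - do
        integ += err * dt
        u = kp * err + ki * integ          # aeration intensity (control handle)
        do += dt * (-do + u)               # toy first-order oxygen response
    return do

final_do = simulate()
print(round(final_do, 2))
```

In a benchmark run, such a controller would be scored over the full evaluation period with criteria like effluent quality and aeration energy, which is what the extended one-year horizon of BSM2 enables.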

  11. A penalty-based nodal discontinuous Galerkin method for spontaneous rupture dynamics

    NASA Astrophysics Data System (ADS)

    Ye, R.; De Hoop, M. V.; Kumar, K.

    2017-12-01

    Numerical simulation of dynamic rupture processes with slip is critical to understanding the earthquake source process and the generation of ground motions. However, it can be challenging due to the nonlinear friction laws interacting with seismicity, coupled with the discontinuous boundary conditions across the rupture plane. In practice, inhomogeneities in topography, fault geometry, elastic parameters and permeability add extra complexity. We develop a nodal discontinuous Galerkin method to simulate seismic wave phenomena with slipping boundary conditions, including fluid-solid boundaries and ruptures. By introducing a novel penalty flux, we avoid solving Riemann problems on interfaces, which makes our method applicable to general anisotropic and poro-elastic materials. Based on unstructured tetrahedral meshes in 3D, the code can capture various geometries in the geological model, and uses polynomial expansion to achieve high-order accuracy. We consider the rate and state friction law, in the spontaneous rupture dynamics, as part of a nonlinear transmitting boundary condition, which is weakly enforced across the fault surface as a numerical flux. An iterative coupling scheme is developed based on implicit time stepping, containing a constrained optimization process that accounts for the nonlinear part. To validate the method, we prove the convergence of the coupled system with error estimates. We test our algorithm on a well-established numerical example (TPV102) of the SCEC/USGS Spontaneous Rupture Code Verification Project, and benchmark against simulations with PyLith and SPECFEM3D, finding good agreement.
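    The rate-and-state friction law entering the nonlinear boundary condition can be sketched as follows. The Dieterich/Ruina form below is the standard one, but the parameter values are illustrative rather than those of TPV102.

```python
import math

# Sketch of the rate-and-state friction coefficient used as the nonlinear
# fault boundary condition (Dieterich/Ruina form). Parameter values are
# illustrative, in the typical range for such benchmarks, not TPV102's.
def friction(V, theta, mu0=0.6, a=0.01, b=0.014, V0=1e-6, Dc=0.02):
    """mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)."""
    return mu0 + a * math.log(V / V0) + b * math.log(V0 * theta / Dc)

# At steady state theta_ss = Dc/V, so mu depends only on slip rate; with
# b > a the fault is velocity-weakening, which permits spontaneous rupture.
V = 1e-6
mu_ss = friction(V, theta=0.02 / V)
print(round(mu_ss, 3))
```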

  12. A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrié, Michael, E-mail: mcarrie2@unl.edu; Shadwick, B. A., E-mail: shadwick@mailaps.org

    2016-01-15

    We present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two-dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions while keeping the resulting discrete set of equations linear. The implicit method is benchmarked against linear theory results for relativistic Landau damping, for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently of the shape of the distribution function, the relativistic treatment features collective behaviours that do not exist in the nonrelativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.
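    The virtue of a time-implicit scheme that keeps the discrete equations linear can be illustrated on a far smaller problem: a Crank-Nicolson step for the harmonic oscillator, solved as a 2x2 linear system per step. This is a sketch of the idea only, not the paper's Vlasov-Ampere discretization.

```python
# Sketch of a time-implicit update that retains linearity of the discrete
# system: a Crank-Nicolson step for u' = v, v' = -u, solved exactly as a
# 2x2 linear solve per step. For this skew-symmetric system the scheme
# conserves u^2 + v^2 to round-off, illustrating the stability benefit.
def cn_step(u, v, dt):
    # Solve (I - dt/2 * A) x_new = (I + dt/2 * A) x_old with A = [[0,1],[-1,0]].
    h = dt / 2.0
    ru = u + h * v                        # right-hand side components
    rv = v - h * u
    det = 1.0 + h * h                     # det of [[1, -h], [h, 1]]
    return (ru + h * rv) / det, (rv - h * ru) / det

u, v = 1.0, 0.0
for _ in range(1000):
    u, v = cn_step(u, v, 0.01)
energy = u * u + v * v
print(round(energy, 12))
```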

  13. Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Tam, C. K. W. (Editor); Hardin, J. C. (Editor)

    1997-01-01

    The proceedings of the Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems held at Florida State University are the subject of this report. For this workshop, problems arising in typical industrial applications of CAA were chosen. Comparisons between numerical solutions and exact solutions are presented where possible.

  14. Simulation Studies for Inspection of the Benchmark Test with PATRASH

    NASA Astrophysics Data System (ADS)

    Shimosaki, Y.; Igarashi, S.; Machida, S.; Shirakata, M.; Takayama, K.; Noda, F.; Shigaki, K.

    2002-12-01

    In order to delineate the halo-formation mechanisms in a typical FODO lattice, a 2-D simulation code PATRASH (PArticle TRAcking in a Synchrotron for Halo analysis) has been developed. The electric field originating from the space charge is calculated by the Hybrid Tree code method. Benchmark tests utilizing the three simulation codes ACCSIM, PATRASH and SIMPSONS were carried out. The results were confirmed to be in fair agreement with each other. The details of the PATRASH simulation are discussed with some examples.

  15. Physics-based multiscale coupling for full core nuclear reactor simulation

    DOE PAGES

    Gaston, Derek R.; Permann, Cody J.; Peterson, John W.; ...

    2015-10-01

    Numerical simulation of nuclear reactors is a key technology in the quest for improvements in efficiency, safety, and reliability of both existing and future reactor designs. Historically, simulation of an entire reactor was accomplished by linking together multiple existing codes that each simulated a subset of the relevant multiphysics phenomena. Recent advances in the MOOSE (Multiphysics Object Oriented Simulation Environment) framework have enabled a new approach: multiple domain-specific applications, all built on the same software framework, are efficiently linked to create a cohesive application. This is accomplished with a flexible coupling capability that allows for a variety of different data exchanges to occur simultaneously on high-performance parallel computational hardware. Examples based on the KAIST-3A benchmark core, as well as a simplified Westinghouse AP-1000 configuration, demonstrate the power of this new framework for tackling, in a coupled, multiscale manner, crucial reactor phenomena such as CRUD-induced power shift and fuel shuffle.
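    The coupling idea, multiple single-physics solvers exchanging fields until consistent, can be sketched with two toy scalar "applications" iterated to a fixed point. The feedback coefficients below are hypothetical and not tied to the KAIST-3A benchmark.

```python
# Sketch of fixed-point (Picard) coupling between two single-physics "apps",
# mimicking a MOOSE-style data exchange: each solver updates its field from
# the other's latest solution until the exchange converges. Toy scalar
# models with hypothetical feedback coefficients.
def solve_neutronics(fuel_temp):
    # Power falls as fuel temperature rises (Doppler-like feedback).
    return 100.0 / (1.0 + 0.001 * fuel_temp)

def solve_heat(power):
    # Fuel temperature rises with power.
    return 300.0 + 5.0 * power

power, temp = 100.0, 300.0
for _ in range(50):                 # Picard iteration until self-consistent
    power = solve_neutronics(temp)
    temp = solve_heat(power)

print(round(power, 3), round(temp, 3))
```

The converged pair satisfies both models simultaneously, which is the coupled solution a segregated multiphysics scheme seeks.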

  16. Simulation for analysis and control of superplastic forming. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zacharia, T.; Aramayo, G.A.; Simunovic, S.

    1996-08-01

    A joint study was conducted by Oak Ridge National Laboratory (ORNL) and the Pacific Northwest Laboratory (PNL) for the U.S. Department of Energy-Lightweight Materials (DOE-LWM) Program. The purpose of the study was to assess and benchmark the current modeling capabilities with respect to accuracy of predictions and simulation time. Two simulation platforms were considered in this study: the LS-DYNA3D code installed on ORNL's high-performance computers and the finite element code MARC used at PNL. Both ORNL and PNL performed superplastic forming (SPF) analysis on a standard butter-tray geometry, which was defined by PNL, to better understand the capabilities of the respective models. The specific geometry was selected and formed at PNL, and the experimental results, such as forming time and thickness at specific locations, were provided for comparisons with numerical predictions. Furthermore, comparisons between the ORNL simulation results, using elasto-plastic analysis, and PNL's results, using rigid-plastic flow analysis, were performed.

  17. Flame-Vortex Interactions in Microgravity to Improve Models of Turbulent Combustion

    NASA Technical Reports Server (NTRS)

    Driscoll, James F.

    1999-01-01

    A unique flame-vortex interaction experiment is being operated in microgravity in order to obtain fundamental data to assess the Theory of Flame Stretch which will be used to improve models of turbulent combustion. The experiment provides visual images of the physical process by which an individual eddy in a turbulent flow increases the flame surface area, changes the local flame propagation speed, and can extinguish the reaction. The high quality microgravity images provide benchmark data that are free from buoyancy effects. Results are used to assess Direct Numerical Simulations of Dr. K. Kailasanath at NRL, which were run for the same conditions.

  18. Analytical theory of coherent synchrotron radiation wakefield of short bunches shielded by conducting parallel plates

    NASA Astrophysics Data System (ADS)

    Stupakov, Gennady; Zhou, Demin

    2016-04-01

    We develop a general model of coherent synchrotron radiation (CSR) impedance with shielding provided by two parallel conducting plates. This model allows us to easily reproduce all previously known analytical CSR wakes and to expand the analysis to situations not explored before. It reduces calculations of the impedance to taking integrals along the trajectory of the beam. New analytical results are derived for the radiation impedance with shielding for the following orbits: a kink, a bending magnet, a wiggler of finite length, and an infinitely long wiggler. All our formulas are benchmarked against numerical simulations with the CSRZ computer code.

  19. Time Hierarchies and Model Reduction in Canonical Non-linear Models

    PubMed Central

    Löwe, Hannes; Kremling, Andreas; Marin-Sanguino, Alberto

    2016-01-01

    The time-scale hierarchies of a very general class of models in differential equations are analyzed. Classical methods for model reduction and time-scale analysis have been adapted to this formalism and a complementary method is proposed. A unified theoretical treatment shows how the structure of the system can be much better understood by inspection of two sets of singular values: one related to the stoichiometric structure of the system and another to its kinetics. The methods are exemplified first through a toy model, then a large synthetic network and finally with numerical simulations of three classical benchmark models of real biological systems. PMID:27708665
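    A minimal example of the time-scale hierarchy being exploited: a two-variable fast-slow system whose quasi-steady-state reduction collapses it to one equation. The system and epsilon below are illustrative, not one of the paper's benchmark models.

```python
import math

# Sketch of time-scale-based model reduction on a toy fast-slow system:
#   ds/dt = -f,   df/dt = (s - f)/eps,   eps << 1 (f relaxes fast onto s).
# The quasi-steady-state reduction f ~= s gives the one-variable model
# ds/dt = -s; the full and reduced trajectories should nearly coincide.
eps, dt, t_end = 1e-3, 1e-5, 1.0
s, f = 1.0, 0.0
for _ in range(int(t_end / dt)):        # explicit Euler, dt resolves eps
    s, f = s + dt * (-f), f + dt * (s - f) / eps

s_reduced = math.exp(-t_end)            # reduced model solved analytically
print(round(s, 4), round(s_reduced, 4))
```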

  20. Analytical theory of coherent synchrotron radiation wakefield of short bunches shielded by conducting parallel plates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stupakov, Gennady; Zhou, Demin

    2016-04-21

    We develop a general model of coherent synchrotron radiation (CSR) impedance with shielding provided by two parallel conducting plates. This model allows us to easily reproduce all previously known analytical CSR wakes and to expand the analysis to situations not explored before. It reduces calculations of the impedance to taking integrals along the trajectory of the beam. New analytical results are derived for the radiation impedance with shielding for the following orbits: a kink, a bending magnet, a wiggler of finite length, and an infinitely long wiggler. All our formulas are benchmarked against numerical simulations with the CSRZ computer code.

  1. Great interactions: How binding incorrect partners can teach us about protein recognition and function.

    PubMed

    Vamparys, Lydie; Laurent, Benoist; Carbone, Alessandra; Sacquin-Mora, Sophie

    2016-10-01

    Protein-protein interactions play a key part in most biological processes and understanding their mechanism is a fundamental problem leading to numerous practical applications. The prediction of protein binding sites in particular is of paramount importance since proteins now represent a major class of therapeutic targets. Amongst other methods, docking simulations between two proteins known to interact can be a useful tool for the prediction of likely binding patches on a protein surface. From the analysis of the protein interfaces generated by a massive cross-docking experiment using the 168 proteins of the Docking Benchmark 2.0, where all possible protein pairs, and not only experimental ones, have been docked together, we show that it is also possible to predict a protein's binding residues without having any prior knowledge regarding its potential interaction partners. Evaluating the performance of cross-docking predictions using the area under the specificity-sensitivity ROC curve (AUC) leads to an AUC value of 0.77 for the complete benchmark (compared to the 0.5 AUC value obtained for random predictions). Furthermore, a new clustering analysis performed on the binding patches that are scattered on the protein surface shows that their distribution and growth depend on the protein's functional group. Finally, in several cases, the binding-site predictions resulting from the cross-docking simulations lead to the identification of an alternate interface, which corresponds to the interaction with a biomolecular partner that is not included in the original benchmark. Proteins 2016; 84:1408-1421. © 2016 The Authors Proteins: Structure, Function, and Bioinformatics Published by Wiley Periodicals, Inc.
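    The rank-based AUC used to evaluate such predictions can be computed directly as the probability that a true interface residue outscores a non-interface one; the scores and labels below are hypothetical.

```python
# Sketch of the ROC-AUC evaluation used to score binding-site predictions:
# rank-based AUC, i.e. the probability that a randomly chosen interface
# residue receives a higher score than a non-interface one. Hypothetical data.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # per-residue interface propensity
labels = [1,   1,   0,   1,   0,   0]     # 1 = true interface residue
print(auc(scores, labels))
```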

  2. Great interactions: How binding incorrect partners can teach us about protein recognition and function

    PubMed Central

    Vamparys, Lydie; Laurent, Benoist; Carbone, Alessandra

    2016-01-01

    Protein–protein interactions play a key part in most biological processes and understanding their mechanism is a fundamental problem leading to numerous practical applications. The prediction of protein binding sites in particular is of paramount importance since proteins now represent a major class of therapeutic targets. Amongst other methods, docking simulations between two proteins known to interact can be a useful tool for the prediction of likely binding patches on a protein surface. From the analysis of the protein interfaces generated by a massive cross-docking experiment using the 168 proteins of the Docking Benchmark 2.0, where all possible protein pairs, and not only experimental ones, have been docked together, we show that it is also possible to predict a protein's binding residues without having any prior knowledge regarding its potential interaction partners. Evaluating the performance of cross-docking predictions using the area under the specificity-sensitivity ROC curve (AUC) leads to an AUC value of 0.77 for the complete benchmark (compared to the 0.5 AUC value obtained for random predictions). Furthermore, a new clustering analysis performed on the binding patches that are scattered on the protein surface shows that their distribution and growth depend on the protein's functional group. Finally, in several cases, the binding-site predictions resulting from the cross-docking simulations lead to the identification of an alternate interface, which corresponds to the interaction with a biomolecular partner that is not included in the original benchmark. Proteins 2016; 84:1408–1421. © 2016 The Authors Proteins: Structure, Function, and Bioinformatics Published by Wiley Periodicals, Inc. PMID:27287388

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balsa Terzic, Gabriele Bassi

    In this paper we discuss representations of charged particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for its removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform (TFCT); and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code, and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.
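    The thresholded-wavelet idea can be sketched with a one-level Haar transform: small detail coefficients are treated as noise and zeroed before inverting. This is a pedagogical sketch, not the paper's TWT implementation or its wavelet basis.

```python
# Sketch of threshold-based wavelet denoising in the spirit of the TWT method:
# one-level Haar transform, hard thresholding of small detail coefficients,
# then inverse transform. Signal and threshold are illustrative, not the
# paper's beam densities.
def haar_forward(x):
    avgs = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    dets = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avgs, dets

def haar_inverse(avgs, dets):
    out = []
    for a, d in zip(avgs, dets):
        out += [a + d, a - d]
    return out

signal = [4.0, 4.02, 8.0, 7.98, 2.0, 2.01, 6.0, 5.97]  # smooth + small noise
avgs, dets = haar_forward(signal)
dets = [d if abs(d) > 0.05 else 0.0 for d in dets]      # hard threshold
denoised = haar_inverse(avgs, dets)
print(denoised)
```

Zeroing sub-threshold details removes the fine-scale sampling noise while leaving the coarse structure of the density intact, which is the mechanism behind the accuracy gain reported above.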

  4. MHD code using multi graphical processing units: SMAUG+

    NASA Astrophysics Data System (ADS)

    Gyenge, N.; Griffiths, M. K.; Erdélyi, R.

    2018-01-01

    This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.
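    Multi-GPU decomposition amounts to splitting the grid among devices and exchanging one-cell halos each step; the sketch below simulates the "devices" in-process with a periodic 3-point stencil. The sizes and stencil are illustrative, not SMAUG+'s MHD update.

```python
# Sketch of 1D domain decomposition with ghost-cell (halo) exchange, the
# pattern underlying multi-GPU codes; here the "devices" are plain lists
# updated in-process. Grid size and stencil are illustrative.
def step_with_halo(subdomains):
    # Each subdomain receives one ghost cell from each neighbour (periodic),
    # then applies a conservative 3-point averaging stencil to its own cells.
    n = len(subdomains)
    new = []
    for i, sub in enumerate(subdomains):
        left_ghost = subdomains[(i - 1) % n][-1]
        right_ghost = subdomains[(i + 1) % n][0]
        padded = [left_ghost] + sub + [right_ghost]
        new.append([(padded[j - 1] + padded[j] + padded[j + 1]) / 3
                    for j in range(1, len(padded) - 1)])
    return new

grid = [[0.0, 0.0], [0.0, 3.0], [0.0, 0.0], [0.0, 0.0]]  # 4 "GPUs", 2 cells each
grid = step_with_halo(grid)
flat = [c for sub in grid for c in sub]
print(flat)   # total mass is conserved across the device boundaries
```

The communication overhead mentioned above is exactly the cost of moving those ghost cells between devices each step, which is why granularity (cells per GPU) governs whether adding GPUs speeds the run up or slows it down.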

  5. Verification of cardiac mechanics software: benchmark problems and solutions for testing active and passive material behaviour.

    PubMed

    Land, Sander; Gurev, Viatcheslav; Arens, Sander; Augustin, Christoph M; Baron, Lukas; Blake, Robert; Bradley, Chris; Castro, Sebastian; Crozier, Andrew; Favino, Marco; Fastl, Thomas E; Fritz, Thomas; Gao, Hao; Gizzi, Alessio; Griffith, Boyce E; Hurtado, Daniel E; Krause, Rolf; Luo, Xiaoyu; Nash, Martyn P; Pezzuto, Simone; Plank, Gernot; Rossi, Simone; Ruprecht, Daniel; Seemann, Gunnar; Smith, Nicolas P; Sundnes, Joakim; Rice, J Jeremy; Trayanova, Natalia; Wang, Dafang; Jenny Wang, Zhinuo; Niederer, Steven A

    2015-12-08

    Models of cardiac mechanics are increasingly used to investigate cardiac physiology. These models are characterized by a high level of complexity, including the particular anisotropic material properties of biological tissue and the actively contracting material. A large number of independent simulation codes have been developed, but a consistent way of verifying the accuracy and replicability of simulations is lacking. To aid in the verification of current and future cardiac mechanics solvers, this study provides three benchmark problems for cardiac mechanics. These benchmark problems test the ability to accurately simulate pressure-type forces that depend on the deformed object's geometry, anisotropic and spatially varying material properties similar to those seen in the left ventricle, and active contractile forces. The benchmark was solved by 11 different groups to generate consensus solutions, with typical differences in higher-resolution solutions at approximately 0.5%, and consistent results between linear, quadratic and cubic finite elements as well as different approaches to simulating incompressible materials. Online tools and solutions are made available to allow these tests to be effectively used in verification of future cardiac mechanics software.

  6. Simulation-based comprehensive benchmarking of RNA-seq aligners

    PubMed Central

    Baruzzo, Giacomo; Hayer, Katharina E; Kim, Eun Ji; Di Camillo, Barbara; FitzGerald, Garret A; Grant, Gregory R

    2018-01-01

    Alignment is the first step in most RNA-seq analysis pipelines, and the accuracy of downstream analyses depends heavily on it. Unlike most steps in the pipeline, alignment is particularly amenable to benchmarking with simulated data. We performed a comprehensive benchmarking of 14 common splice-aware aligners for base, read, and exon junction-level accuracy and compared default with optimized parameters. We found that performance varied by genome complexity, and accuracy and popularity were poorly correlated. The most widely cited tool underperforms for most metrics, particularly when using default settings. PMID:27941783
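    Because the reads are simulated, the true origin of each read is known and read-level accuracy reduces to a bookkeeping exercise, sketched below on hypothetical data.

```python
# Sketch of read-level accuracy scoring for simulation-based aligner
# benchmarks: with simulated reads, the true locus of every read is known,
# so each reported alignment is simply right or wrong. Hypothetical data.
truth   = {"r1": ("chr1", 100), "r2": ("chr1", 250),
           "r3": ("chr2", 40),  "r4": ("chr2", 90)}
aligned = {"r1": ("chr1", 100), "r2": ("chr1", 251),
           "r4": ("chr2", 90)}                       # r3 left unaligned

def read_accuracy(truth, aligned, tol=0):
    correct = sum(
        1 for r, (chrom, pos) in aligned.items()
        if truth[r][0] == chrom and abs(truth[r][1] - pos) <= tol
    )
    recall = correct / len(truth)        # fraction of all reads placed correctly
    precision = correct / len(aligned)   # fraction of reported alignments correct
    return precision, recall

print(read_accuracy(truth, aligned))
```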

  7. Alternative Modal Basis Selection Procedures For Reduced-Order Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2012-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of a computationally taxing full-order analysis in physical degrees of freedom are taken as the benchmark for comparison with the results from the three reduced-order analyses. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
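    A minimal sketch of proper orthogonal decomposition, the common ingredient of the basis-selection procedures above: the dominant POD mode is the top eigenvector of the snapshot covariance, found here by power iteration on hypothetical snapshot data.

```python
# Sketch of proper orthogonal decomposition (POD): the dominant mode is the
# top eigenvector of the snapshot covariance C = (1/m) sum_k x_k x_k^T,
# found by matrix-free power iteration. Snapshot data are hypothetical,
# not results from the benchmark structures.
def dominant_pod_mode(snapshots, iters=200):
    n = len(snapshots[0])

    def apply_c(v):                      # apply C without forming it
        out = [0.0] * n
        for x in snapshots:
            coef = sum(xi * vi for xi, vi in zip(x, v)) / len(snapshots)
            out = [o + coef * xi for o, xi in zip(out, x)]
        return out

    v = [1.0] * n
    for _ in range(iters):               # power iteration with renormalization
        w = apply_c(v)
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    return v

# Snapshots dominated by the pattern (1, 1, 0) with small contamination:
snaps = [[1.0, 1.0, 0.0], [2.0, 2.0, 0.1], [1.5, 1.5, -0.1], [0.9, 1.1, 0.0]]
mode = dominant_pod_mode(snaps)
print([round(m, 3) for m in mode])
```

Retaining the few modes that capture most of the response energy is what makes the reduced-order simulations above so much cheaper than the full-order analysis in physical degrees of freedom.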

  8. Isochoric heating and strong blast wave formation driven by fast electrons in solid-density targets

    NASA Astrophysics Data System (ADS)

    Santos, J. J.; Vauzour, B.; Touati, M.; Gremillet, L.; Feugeas, J.-L.; Ceccotti, T.; Bouillaud, R.; Deneuville, F.; Floquet, V.; Fourment, C.; Hadj-Bachir, M.; Hulin, S.; Morace, A.; Nicolaï, Ph; d'Oliveira, P.; Reau, F.; Samaké, A.; Tcherbakoff, O.; Tikhonchuk, V. T.; Veltcheva, M.; Batani, D.

    2017-10-01

    We experimentally investigate the fast (<1 ps) isochoric heating of multi-layer metallic foils and subsequent high-pressure hydrodynamics induced by energetic electrons driven by high-intensity, high-contrast laser pulses. The early-time temperature profile inside the target is measured from the streaked optical pyrometry of the target rear side. This is further characterized from benchmarked simulations of the laser-target interaction and the fast electron transport. Despite a modest laser energy (<1 J), the early-time high pressures and associated gradients launch inwards a strong compression wave developing over ≳10 ps into a ≈140 Mbar blast wave, according to hydrodynamic simulations, consistent with our measurements. These experimental and numerical findings pave the way to a short-pulse-laser-based platform dedicated to high-energy-density physics studies.

  9. Optimization of the cooling profile to achieve crack-free Yb:S-FAP crystals

    NASA Astrophysics Data System (ADS)

    Fang, H. S.; Qiu, S. R.; Zheng, L. L.; Schaffers, K. I.; Tassano, J. B.; Caird, J. A.; Zhang, H.

    2008-08-01

    Yb:S-FAP [Yb³⁺:Sr₅(PO₄)₃F] crystals are an important gain medium for diode-pumped laser applications. Growth of 7.0 cm diameter Yb:S-FAP crystals utilizing the Czochralski (CZ) method from SrF₂-rich melts often encounters cracks during the post-growth cool-down stage. To suppress cracking during cool-down, a numerical simulation of the growth system was used to understand the correlation between the furnace power during cool-down and the radial temperature differences within the crystal. The critical radial temperature difference, above which the crystal cracks, has been determined by benchmarking the simulation results against experimental observations. Based on this comparison, an optimal three-stage ramp-down profile was implemented, which produced high-quality, crack-free Yb:S-FAP crystals.

  10. Copper Tube Compression in Z-Current Geometry, Numerical Simulations and Comparison with Cyclope Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lefrancois, A.; L'Eplattenier, P.; Burger, M.

    2006-02-13

    Metallic tube compressions in Z-current geometry were performed at the Cyclope facility of the Gramat Research Center in order to study the behavior of metals under large strain at high strain rate. 3D configurations of cylinder compressions have been calculated here to benchmark the new beta version of the electromagnetism package coupled with the dynamics in LS-DYNA, and compared with the Cyclope experiments. The electromagnetism module is being developed in the general-purpose explicit and implicit finite element program LS-DYNA® in order to perform coupled mechanical/thermal/electromagnetic simulations. The Maxwell equations are solved using a Finite Element Method (FEM) for the solid conductors coupled with a Boundary Element Method (BEM) for the surrounding air (or vacuum). More details can be read in the references.

  11. Towards Adaptive Grids for Atmospheric Boundary-Layer Simulations

    NASA Astrophysics Data System (ADS)

    van Hooft, J. Antoon; Popinet, Stéphane; van Heerwaarden, Chiel C.; van der Linden, Steven J. A.; de Roode, Stephan R.; van de Wiel, Bas J. H.

    2018-02-01

    We present a proof-of-concept for the adaptive mesh refinement method applied to atmospheric boundary-layer simulations. Such a method may form an attractive alternative to static grids for studies on atmospheric flows that have a high degree of scale separation in space and/or time. Examples include the diurnal cycle and a convective boundary layer capped by a strong inversion. For such cases, large-eddy simulations using regular grids often have to rely on a subgrid-scale closure for the most challenging regions in the spatial and/or temporal domain. Here we analyze a flow configuration that describes the growth and subsequent decay of a convective boundary layer using direct numerical simulation (DNS). We validate the obtained results and benchmark the performance of the adaptive solver against two runs using fixed regular grids. It appears that the adaptive-mesh algorithm is able to coarsen and refine the grid dynamically whilst maintaining an accurate solution. In particular, during the initial growth of the convective boundary layer a high resolution is required compared to the subsequent stage of decaying turbulence. More specifically, the number of grid cells varies by two orders of magnitude over the course of the simulation. For this specific DNS case, the adaptive solver was not yet more efficient than the more traditional solver that is dedicated to these types of flows. However, the overall analysis shows that the method has a clear potential for numerical investigations of the most challenging atmospheric cases.
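    The refine-where-needed logic can be sketched in 1D: cells whose solution jump exceeds a tolerance are split. The field, tolerance, and minimum cell size below are illustrative; production adaptive solvers also coarsen and nest refinement levels, which this sketch omits.

```python
# Sketch of a gradient-based refinement pass like the one an adaptive-mesh
# method applies each step: split 1D cells where the field varies rapidly.
# Field values, tolerance, and minimum cell size are illustrative.
def adapt(cells, values, tol=0.5, min_size=0.1):
    """cells: list of (left, right) intervals; values: cell-centred samples."""
    new_cells = []
    prevs = [values[0]] + values[:-1]          # neighbour values (clamped ends)
    nexts = values[1:] + [values[-1]]
    for (l, r), v_prev, v, v_next in zip(cells, prevs, values, nexts):
        jump = max(abs(v - v_prev), abs(v_next - v))
        if jump > tol and (r - l) > min_size:  # refine: split the cell in two
            mid = 0.5 * (l + r)
            new_cells += [(l, mid), (mid, r)]
        else:
            new_cells.append((l, r))
    return new_cells

cells = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0), (3.0, 4.0)]
values = [0.0, 0.1, 2.0, 2.1]                  # sharp jump between cells 2 and 3
print(adapt(cells, values))
```

Applied every step, such a criterion concentrates resolution at the inversion or the growing boundary layer and releases it during decay, which is how the cell count can vary by orders of magnitude over a run.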

  12. Towards Adaptive Grids for Atmospheric Boundary-Layer Simulations

    NASA Astrophysics Data System (ADS)

    van Hooft, J. Antoon; Popinet, Stéphane; van Heerwaarden, Chiel C.; van der Linden, Steven J. A.; de Roode, Stephan R.; van de Wiel, Bas J. H.

    2018-06-01

    We present a proof-of-concept for the adaptive mesh refinement method applied to atmospheric boundary-layer simulations. Such a method may form an attractive alternative to static grids for studies on atmospheric flows that have a high degree of scale separation in space and/or time. Examples include the diurnal cycle and a convective boundary layer capped by a strong inversion. For such cases, large-eddy simulations using regular grids often have to rely on a subgrid-scale closure for the most challenging regions in the spatial and/or temporal domain. Here we analyze a flow configuration that describes the growth and subsequent decay of a convective boundary layer using direct numerical simulation (DNS). We validate the obtained results and benchmark the performance of the adaptive solver against two runs using fixed regular grids. It appears that the adaptive-mesh algorithm is able to coarsen and refine the grid dynamically whilst maintaining an accurate solution. In particular, during the initial growth of the convective boundary layer a high resolution is required compared to the subsequent stage of decaying turbulence. More specifically, the number of grid cells varies by two orders of magnitude over the course of the simulation. For this specific DNS case, the adaptive solver was not yet more efficient than the more traditional solver that is dedicated to these types of flows. However, the overall analysis shows that the method has a clear potential for numerical investigations of the most challenging atmospheric cases.

  13. A Simulation Environment for Benchmarking Sensor Fusion-Based Pose Estimators.

    PubMed

    Ligorio, Gabriele; Sabatini, Angelo Maria

    2015-12-19

    In-depth analysis and performance evaluation of sensor fusion-based estimators may be critical when performed using real-world sensor data. For this reason, simulation is widely recognized as one of the most powerful tools for algorithm benchmarking. In this paper, we present a simulation framework suitable for assessing the performance of sensor fusion-based pose estimators. The systems used for implementing the framework were magnetic/inertial measurement units (MIMUs) and a camera, although the addition of further sensing modalities is straightforward. Typical nuisance factors were also included for each sensor. The proposed simulation environment was validated using real-life sensor data employed for motion tracking. The largest mismatch between real and simulated sensors was about 5% of the measured quantity (for the camera simulation), whereas the lowest correlation was found for one axis of the gyroscope (0.90). In addition, a real benchmarking example of an extended Kalman filter for pose estimation from MIMU and camera data is presented.
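
The "nuisance factors" added per sensor typically include a constant bias and white measurement noise. A minimal sketch of injecting such factors into an ideal angular-rate signal (the bias and noise magnitudes below are illustrative assumptions, not the paper's parameters):

```python
import random

def simulate_gyro_axis(true_rate, bias=0.01, noise_std=0.005, seed=0):
    """Corrupt an ideal angular-rate sequence (rad/s) with a constant bias and
    zero-mean Gaussian noise, two typical MIMU nuisance factors.
    Values are illustrative; real simulators also model drift, misalignment,
    scale-factor errors, etc."""
    rng = random.Random(seed)
    return [w + bias + rng.gauss(0.0, noise_std) for w in true_rate]

ideal = [0.0] * 1000
measured = simulate_gyro_axis(ideal)
# The mean error of the simulated axis recovers the injected bias.
mean_error = sum(m - w for m, w in zip(measured, ideal)) / len(ideal)
```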

  14. Determining the sample size required to establish whether a medical device is non-inferior to an external benchmark.

    PubMed

    Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W

    2017-08-28

    The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is a widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that the implants used are safe and effective. However, it is not currently clear how, or with how many implants, an implant should be statistically compared with a benchmark to assess whether it is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking, using a simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and confidence-interval width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess whether a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk than under current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining whether an implant meets the required performance defined by an external benchmark. Current contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. When benchmarking implant performance, net failure estimated using 1-Kaplan-Meier is preferable to crude failure estimated by competing-risk models.
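
To make the sample-size point concrete, the power of a one-sample non-inferiority test can be estimated by Monte Carlo. The sketch below uses a crude binomial failure proportion and a one-sided z-test; it is a simplified stand-in for the paper's preferred 1-Kaplan-Meier analysis (no censoring or competing risks), and the benchmark and margin values are illustrative assumptions:

```python
import math
import random

def ni_power(n, true_fail=0.05, benchmark=0.05, margin=0.02,
             reps=2000, seed=0):
    """Monte-Carlo power of a one-sided z-test of H0: p >= benchmark + margin,
    where rejecting H0 declares the device non-inferior. Uses crude failure
    proportions only (a simplification of the paper's 1-KM net failure)."""
    rng = random.Random(seed)
    z_crit = 1.6449  # one-sided 5% critical value
    rejections = 0
    for _ in range(reps):
        fails = sum(rng.random() < true_fail for _ in range(n))
        p_hat = fails / n
        se = math.sqrt(max(p_hat * (1.0 - p_hat), 1e-12) / n)
        if (p_hat - (benchmark + margin)) / se < -z_crit:
            rejections += 1
    return rejections / reps

# Power grows markedly with the number of procedures at risk:
# small cohorts have little chance of demonstrating non-inferiority.
low_n, high_n = ni_power(200), ni_power(3200)
```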

  15. Towards modelling of water inflow into the mantle

    NASA Astrophysics Data System (ADS)

    Thielmann, M.; Eichheimer, P.; Golabek, G.

    2017-12-01

    The transport and storage of water in the mantle significantly affect various material properties of mantle rocks, and thus water plays a key role in a variety of geodynamical processes (tectonics, magmatism, etc.). Geological and seismological observations suggest different inflow mechanisms of water via the subducting slab, such as slab bending, thermal cracking and serpentinization (Faccenda et al., 2009; Korenaga, 2017). Most previous numerical models do not take different dip angles of the subducting slab and subduction velocities into account, while nature provides two different subduction regimes, i.e. shallow and deep subduction (Li et al., 2011). To what extent both parameters influence the inflow and outflow of water in the mantle still remains unclear. For the investigation of the inflow and outflow of fluids, e.g. water, in the mantle, we use high-resolution 2D finite element simulations, which allow us to resolve subducted sediments and crustal layers. For this purpose the finite element code MVEP2 (Kaus, 2010) is tested against benchmark results (van Keken et al., 2008). In a first step we reproduced the analytical corner-flow model (Batchelor, 1967) used in the benchmark of van Keken et al. (2008), as well as the steady-state temperature field. Further steps consist of successively increasing model complexity, such as the incorporation of hydrogen diffusion, water transport and dehydration reactions. Systematic simulations are performed to assess the influence of different model parameters on various target parameters such as dehydration depth and volcanic line position, the ultimate goal being the derivation of scaling laws for water transport in the mantle.
    References: Batchelor, G. K. An Introduction to Fluid Dynamics. Cambridge University Press, Cambridge, UK (1967). van Keken, P. E., et al. A community benchmark for subduction zone modeling. Phys. Earth Planet. Int. 171, 187-197 (2008). Faccenda, M., T. V. Gerya, and L. Burlini. Deep slab hydration induced by bending-related variations in tectonic pressure. Nat. Geosci. 2, 790-793 (2009). Korenaga, J. On the extent of mantle hydration caused by plate bending. Earth Planet. Sci. Lett. 457, 1-9 (2017). Li, Z. H., Xu, Z. Q., and T. V. Gerya. Flat versus steep subduction: Contrasting modes for the formation and exhumation of high- to ultrahigh-pressure rocks in continental collision zones. Earth Planet. Sci. Lett. 301, 65-77 (2011). Kaus, B. J. P. Factors that control the angle of shear bands in geodynamic numerical models of brittle deformation. Tectonophys. 484, 36-47 (2010).

  16. Magnetic islands and singular currents at rational surfaces in three-dimensional magnetohydrodynamic equilibria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loizu, J., E-mail: joaquim.loizu@ipp.mpg.de; Princeton Plasma Physics Laboratory, P.O. Box 451, Princeton New Jersey 08543; Hudson, S.

    2015-02-15

    Using the recently developed multiregion, relaxed MHD (MRxMHD) theory, which bridges the gap between Taylor's relaxation theory and ideal MHD, we provide a thorough analytical and numerical proof of the formation of singular currents at rational surfaces in non-axisymmetric ideal MHD equilibria. These include the force-free singular current density represented by a Dirac δ-function, which presumably prevents the formation of islands, and the Pfirsch-Schlüter 1/x singular current, which arises as a result of finite pressure gradient. An analytical model based on linearized MRxMHD is derived that can accurately (1) describe the formation of magnetic islands at resonant rational surfaces, (2) retrieve the ideal MHD limit where magnetic islands are shielded, and (3) compute the subsequent formation of singular currents. The analytical results are benchmarked against numerical simulations carried out with a fully nonlinear implementation of MRxMHD.

  17. A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries.

    PubMed

    Ge, Liang; Sotiropoulos, Fotis

    2007-08-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [1]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus.

  18. A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries

    PubMed Central

    Ge, Liang; Sotiropoulos, Fotis

    2008-01-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [1]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus. PMID:19194533

  19. Accurate modelling of unsteady flows in collapsible tubes.

    PubMed

    Marchandise, Emilie; Flaud, Patrice

    2010-01-01

    The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers in understanding of physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended for the flow in collapsible tubes such as veins. The main difference with cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear with the numerical consequence that scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method equipped with an approximate Roe's Riemann solver and a slope-limiting procedure allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is more accurately modelled than with traditional methods such as finite difference methods or finite volumes. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and pathological subject. We compare our results with experimental simulations and discuss the sensitivity to parameters of our model.

  20. Computations of Complex Three-Dimensional Turbulent Free Jets

    NASA Technical Reports Server (NTRS)

    Wilson, Robert V.; Demuren, Ayodeji O.

    1997-01-01

    Three-dimensional, incompressible turbulent jets with rectangular and elliptical cross-sections are simulated with a finite-difference numerical method. The full Navier-Stokes equations are solved at low Reynolds numbers, whereas at high Reynolds numbers filtered forms of the equations are solved along with a sub-grid scale model to approximate the effects of the unresolved scales. A 2-N storage, third-order Runge-Kutta scheme is used for temporal discretization and a fourth-order compact scheme is used for spatial discretization. Although such methods are widely used in the simulation of compressible flows, the lack of an evolution equation for pressure or density presents particular difficulty in incompressible flows. The pressure-velocity coupling must be established indirectly; it is achieved, in this study, through a Poisson equation which is solved by a compact scheme of the same order of accuracy. The numerical formulation is validated and the dispersion and dissipation errors are documented by the solution of a wide range of benchmark problems. Three-dimensional computations are performed for different inlet conditions which model naturally developing and forced jets. The experimentally observed phenomenon of axis-switching is captured in the numerical simulation, and it is confirmed through flow visualization that it is based on self-induction of the vorticity field. Statistical quantities such as mean velocity, mean pressure, two-point velocity spatial correlations and Reynolds stresses are presented. Detailed budgets of the mean momentum and Reynolds stress equations are presented to aid in the turbulence modeling of complex jets. Simulations of circular jets are used to quantify the effect of the non-uniform curvature of the non-circular jets.

  1. Backward-stochastic-differential-equation approach to modeling of gene expression

    NASA Astrophysics Data System (ADS)

    Shamarova, Evelina; Chertovskih, Roman; Ramos, Alexandre F.; Aguiar, Paulo

    2017-03-01

    In this article, we introduce a backward method to model stochastic gene expression and protein-level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of end-point ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein-level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time-reversed simulations, allowing, for example, the assessment of the biological conditions (e.g., protein concentrations) that preceded an experimentally measured event of interest (e.g., mitosis, apoptosis, etc.).
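
For context, Gillespie's SSA generates exact sample paths of a reaction network by drawing exponential waiting times from the total propensity. A minimal sketch for a birth-death protein model (the rates and the model itself are illustrative assumptions, not the gene networks used in the paper):

```python
import random

def gillespie_birth_death(k_prod, k_deg, x0, t_end, seed=0):
    """Exact SSA trajectory for production (propensity k_prod) and first-order
    degradation (propensity k_deg * x) of a single protein species."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    traj = [(t, x)]
    while t < t_end:
        a_prod, a_deg = k_prod, k_deg * x   # reaction propensities
        a_total = a_prod + a_deg
        if a_total == 0.0:
            break
        t += rng.expovariate(a_total)       # exponential waiting time
        if t > t_end:
            break
        if rng.random() < a_prod / a_total:
            x += 1                          # production event
        else:
            x -= 1                          # degradation event
        traj.append((t, x))
    return traj

# Forward benchmark data: the copy number relaxes towards k_prod / k_deg = 100.
traj = gillespie_birth_death(k_prod=10.0, k_deg=0.1, x0=0, t_end=200.0)
```

The end point of such a forward trajectory is what the BSDE method would take as its "final" condition when inferring the preceding protein-level distribution.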

  2. Cookbook Recipe to Simulate Seawater Intrusion with Standard MODFLOW

    NASA Astrophysics Data System (ADS)

    Schaars, F.; Bakker, M.

    2012-12-01

    We developed a cookbook recipe to simulate steady interface flow in multi-layer coastal aquifers with regular groundwater codes such as standard MODFLOW. The main step in the recipe is a simple transformation of the hydraulic conductivities and thicknesses of the aquifers. Standard groundwater codes may be applied to compute the head distribution in the aquifer using the transformed parameters. For example, for flow in a single unconfined aquifer, the hydraulic conductivity needs to be multiplied by 41 and the base of the aquifer needs to be set to mean sea level (for a relative seawater density of 1.025). Once the head distribution is obtained, the Ghijben-Herzberg relationship is applied to compute the depth of the interface. The recipe may be applied to quite general settings, including spatially variable aquifer properties. Any standard groundwater code may be used, as long as it can simulate unconfined flow where the transmissivity is a linear function of the head. The proposed recipe is benchmarked successfully against a number of analytic and numerical solutions.
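
The transformation and the interface step can be stated in a few lines. A hedged sketch of the single-unconfined-aquifer case (the head field itself would come from a standard MODFLOW run; here it is simply an input):

```python
RHO_F = 1.000  # relative density of fresh water
RHO_S = 1.025  # relative density of seawater
ALPHA = RHO_F / (RHO_S - RHO_F)  # = 40 for the densities above

def transformed_conductivity(k):
    """Recipe step: scale the hydraulic conductivity before the standard
    unconfined-flow run (with the aquifer base set to mean sea level)."""
    return (1.0 + ALPHA) * k  # the factor 41 quoted in the abstract

def interface_depth(head):
    """Ghijben-Herzberg: depth of the fresh/salt interface below mean sea
    level, given the computed freshwater head above mean sea level."""
    return ALPHA * head
```

So a computed head of 0.5 m above mean sea level implies an interface roughly 20 m below it, consistent with the classical 1:40 Ghijben-Herzberg ratio.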

  3. Backward-stochastic-differential-equation approach to modeling of gene expression.

    PubMed

    Shamarova, Evelina; Chertovskih, Roman; Ramos, Alexandre F; Aguiar, Paulo

    2017-03-01

    In this article, we introduce a backward method to model stochastic gene expression and protein-level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of end-point ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein-level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time-reversed simulations, allowing, for example, the assessment of the biological conditions (e.g., protein concentrations) that preceded an experimentally measured event of interest (e.g., mitosis, apoptosis, etc.).

  4. Convection Effects During Bulk Transparent Alloy Solidification in DECLIC-DSI and Phase-Field Simulations in Diffusive Conditions

    NASA Astrophysics Data System (ADS)

    Mota, F. L.; Song, Y.; Pereda, J.; Billia, B.; Tourret, D.; Debierre, J.-M.; Trivedi, R.; Karma, A.; Bergeon, N.

    2017-08-01

    To study the dynamical formation and evolution of cellular and dendritic arrays under diffusive growth conditions, three-dimensional (3D) directional solidification experiments were conducted in microgravity on a model transparent alloy onboard the International Space Station using the Directional Solidification Insert in the DEvice for the study of Critical LIquids and Crystallization. Selected experiments were repeated on Earth under gravity-driven fluid flow to evidence convection effects. Both radial and axial macrosegregation resulting from convection are observed in ground experiments, and primary spacings measured on Earth and microgravity experiments are noticeably different. The microgravity experiments provide unique benchmark data for numerical simulations of spatially extended pattern formation under diffusive growth conditions. The results of 3D phase-field simulations highlight the importance of accurately modeling thermal conditions that strongly influence the front recoil of the interface and the selection of the primary spacing. The modeling predictions are in good quantitative agreements with the microgravity experiments.

  5. Tunneling ionization and Wigner transform diagnostics in OSIRIS

    NASA Astrophysics Data System (ADS)

    Martins, S.; Fonseca, R. A.; Silva, L. O.; Deng, S.; Katsouleas, T.; Tsung, F.; Mori, W. B.

    2004-11-01

    We describe the ionization module implemented in the PIC code OSIRIS [1]. Benchmarks against previously published tunnel-ionization results were made. Our ionization module works in 1D, 2D and 3D simulations with barrier-suppression ionization or the ADK ionization model, and allows for moving ions. Several illustrative 3D numerical simulations were performed, namely of the propagation of a SLAC beam in a Li gas cell, for the parameters of [2]. We compare the performance of OSIRIS with and without the ionization module, concluding that much less simulation time is usually required when using the ionization module. A novel diagnostic of the electric field, the Wigner transform, is implemented; it provides information on the local spectral content of the field. This diagnostic is applied to the analysis of the chirp induced in an ionizing laser pulse. [1] R. A. Fonseca et al., LNCS 2331, 342-351 (Springer, Heidelberg, 2002). [2] S. Deng et al., Phys. Rev. E 68, 047401 (2003).
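
The Wigner transform reveals a chirp because its peak tracks the instantaneous frequency of the field at each time. A toy discrete pseudo-Wigner sketch (illustrative only, not the OSIRIS diagnostic's implementation) showing the peak migrating along a linear chirp:

```python
import cmath

def wigner_peak_bin(sig, n, nfreq=64):
    """Return the frequency bin (of nfreq bins spanning [0, pi)) at which the
    discrete pseudo-Wigner distribution of the complex signal sig peaks at
    time index n. W(n, w) = sum_tau sig[n+tau] * conj(sig[n-tau]) * e^{-2iw*tau}."""
    half = min(n, len(sig) - 1 - n)  # symmetric lag window around n
    best_k, best_mag = 0, -1.0
    for k in range(nfreq):
        w = cmath.pi * k / nfreq
        acc = 0j
        for tau in range(-half, half + 1):
            acc += sig[n + tau] * sig[n - tau].conjugate() * cmath.exp(-2j * w * tau)
        if abs(acc) > best_mag:
            best_k, best_mag = k, abs(acc)
    return best_k

# A linear chirp: instantaneous frequency 0.2 + 0.01*n rises along the pulse,
# so the Wigner peak sits in a higher bin at later times.
chirp = [cmath.exp(1j * (0.2 * n + 0.005 * n * n)) for n in range(101)]
early, late = wigner_peak_bin(chirp, 20), wigner_peak_bin(chirp, 80)
```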

  6. 3D-MHD Simulations of the Madison Dynamo Experiment

    NASA Astrophysics Data System (ADS)

    Bayliss, R. A.; Forest, C. B.; Wright, J. C.; O'Connell, R.

    2003-10-01

    The growth, saturation and turbulent evolution of the Madison dynamo experiment are investigated numerically using a 3-D pseudo-spectral simulation of the MHD equations; results of the simulations are used to predict the behavior of the experiment. The code solves the self-consistent full evolution of the magnetic and velocity fields. It uses a spectral representation of the vector fields via spherical harmonic basis functions in longitude and latitude, and fourth-order finite differences in the radial direction. The magnetic field evolution has been benchmarked against the laminar kinematic dynamo predicted by M. L. Dudley and R. W. James [Proc. R. Soc. Lond. A 425, 407-429 (1989)]. Initial results indicate that the magnetic field saturates when the backreaction of the induced field modifies the velocity field so that it is no longer linearly unstable, suggesting that nonlinear terms are necessary to explain the resulting state. Saturation and self-excitation depend in detail upon the magnetic Prandtl number.

  7. Benchmarking NNWSI flow and transport codes: COVE 1 results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayden, N.K.

    1985-06-01

    The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs.

  8. Working Group 1 "Advanced GNSS Processing Techniques" of the COST Action GNSS4SWEC: Overview of main achievements

    NASA Astrophysics Data System (ADS)

    Douša, Jan; Dick, Galina; Kačmařík, Michal; Václavovic, Pavel; Pottiaux, Eric; Zus, Florian; Brenot, Hugues; Moeller, Gregor; Hinterberger, Fabian; Pacione, Rosa; Stuerze, Andrea; Eben, Kryštof; Teferle, Norman; Ding, Wenwu; Morel, Laurent; Kaplon, Jan; Hordyniec, Pavel; Rohm, Witold

    2017-04-01

    The COST Action ES1206 GNSS4SWEC addresses new exploitations of the synergy between developments in the GNSS and meteorological communities. Working Group 1 (Advanced GNSS Processing Techniques) deals with implementing and assessing new methods for GNSS tropospheric monitoring and precise positioning, exploiting all modern GNSS constellations, signals, products, etc. Besides other goals, WG1 coordinates development of advanced tropospheric products in support of numerical and non-numerical weather nowcasting. These are ultra-fast and high-resolution tropospheric products available in real time or in a sub-hourly fashion, as well as parameters for monitoring the anisotropy of the troposphere, e.g. horizontal gradients and tropospheric slant path delays. This talk gives an overview of WG1 activities and, particularly, achievements in two of its activities, the Benchmark and Real-time demonstration campaigns. For the Benchmark campaign, a complex data set of GNSS observations and various meteorological data was collected for a two-month period in 2013 (May-June) which included severe weather events in central Europe. An initial processing of data sets from GNSS and numerical weather models (NWM) provided independently estimated reference parameters - ZTDs and tropospheric horizontal gradients. The comparison of horizontal tropospheric gradients from GNSS and NWM data demonstrated a very good agreement among independent solutions, with negligible biases and an accuracy of about 0.5 mm. Visual comparisons of maps of zenith wet delays and tropospheric horizontal gradients showed very promising results for future exploitations of advanced GNSS tropospheric products in meteorological applications such as severe weather event monitoring and weather nowcasting. The Benchmark data set is also used for an extensive validation of line-of-sight tropospheric Slant Total Delays (STD) from GNSS, NWM ray-tracing and Water Vapour Radiometer (WVR) solutions.
Seven institutions delivered their STDs estimated from GNSS observations processed using different software packages and strategies. STDs from NWM ray-tracing came from three institutions using four different NWM models. The results generally show a very good mutual agreement among all solutions from all techniques. The influence of adding uncleaned GNSS post-fit residuals, i.e. residuals that still contain non-tropospheric systematic effects such as multipath, to the estimated STDs will be presented. The Real-time demonstration campaign aims at enhancing and assessing ultra-fast GNSS tropospheric products for severe weather and NWM nowcasting. Results are shown from real-time demonstrations as well as from offline production simulating real time using the Benchmark campaign data set.

  9. A numerical method for solving the 3D unsteady incompressible Navier Stokes equations in curvilinear domains with complex immersed boundaries

    NASA Astrophysics Data System (ADS)

    Ge, Liang; Sotiropoulos, Fotis

    2007-08-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp-interface immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [A. Gilmanov, F. Sotiropoulos, A hybrid cartesian/immersed boundary method for simulating flows with 3d, geometrically complex, moving bodies, Journal of Computational Physics 207 (2005) 457-492.]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved pipe bend.
To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries, we apply it to calculate pulsatile, physiological flow through a mechanical bileaflet heart valve mounted in a model straight aorta with an anatomically shaped triple sinus.
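    The pressure-projection idea at the heart of a fractional step method can be sketched in a few lines. The toy below is not the paper's curvilinear staggered-grid scheme: it corrects an intermediate velocity on a small periodic grid with a spectral Poisson solve, where the paper uses a multigrid-preconditioned GMRES solver, but the correction logic is the same.

```python
import numpy as np

def divergence(u, v, dx):
    """Spectral divergence on a periodic grid (used to check the projection)."""
    n = u.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    return np.real(np.fft.ifft2(1j * kx * np.fft.fft2(u) + 1j * ky * np.fft.fft2(v)))

def project(u, v, dx, dt):
    """Projection step: solve lap(phi) = div(u*)/dt, then subtract dt*grad(phi)
    so that the corrected velocity field is discretely divergence-free."""
    n = u.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                  # guard the mean (k=0) mode
    phi_hat = -np.fft.fft2(divergence(u, v, dx)) / (k2 * dt)
    phi_hat[0, 0] = 0.0                             # pressure fixed up to a constant
    u_new = u - dt * np.real(np.fft.ifft2(1j * kx * phi_hat))
    v_new = v - dt * np.real(np.fft.ifft2(1j * ky * phi_hat))
    return u_new, v_new

n, dx, dt = 64, 1.0 / 64, 1e-3
xg = np.arange(n) * dx
X, Y = np.meshgrid(xg, xg, indexing="ij")
u = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)         # intermediate velocity u*
v = 0.3 * np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)   # deliberately non-solenoidal
u2, v2 = project(u, v, dx, dt)
print(np.abs(divergence(u2, v2, dx)).max() < 1e-10)   # -> True
```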

  10. MITHRA 1.0: A full-wave simulation tool for free electron lasers

    NASA Astrophysics Data System (ADS)

    Fallahi, Arya; Yahaghi, Alireza; Kärtner, Franz X.

    2018-07-01

    Free Electron Lasers (FELs) are a solution for providing intense, coherent and bright radiation in the hard X-ray regime. Owing to the low wall-plug efficiency of FEL facilities, it is crucial to develop complete and accurate simulation tools for better optimization of the FEL interaction. The highly sophisticated dynamics involved in the FEL process has been the main obstacle hindering the development of general simulation tools for this problem. We present a numerical algorithm, based on the finite-difference time-domain/particle-in-cell (FDTD/PIC) method in a Lorentz-boosted coordinate system, that is able to carry out a full-wave simulation of the FEL process. The developed software offers a suitable tool for the analysis of FEL interactions without any of the usual approximations. A coordinate transformation to the bunch rest frame brings the very different length scales of the bunch size, the optical wavelength and the undulator period to the same order of magnitude. Consequently, FDTD/PIC simulations, in conjunction with efficient parallelization techniques, make full-wave simulation feasible with the available computational resources. Several free electron laser examples are analyzed using the developed software; the results are benchmarked against standard FEL codes and discussed in detail.
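    The benefit of the Lorentz-boosted frame can be checked with back-of-the-envelope numbers (the values below are assumed for illustration, not taken from the paper):

```python
# Illustrative numbers: a 1 GeV electron bunch in a 3 cm undulator with K = 1.
gamma = 1.0e9 / 0.511e6              # Lorentz factor of the bunch
K = 1.0                              # undulator parameter (assumed)
lambda_u = 3.0e-2                    # undulator period [m]

# Lab frame: the FEL resonance condition gives a radiation wavelength
# lambda_r = lambda_u/(2*gamma^2) * (1 + K^2/2), about six orders of
# magnitude below the undulator period -- the gridding problem for FDTD/PIC.
lambda_r_lab = lambda_u / (2 * gamma**2) * (1 + K**2 / 2)
lab_ratio = lambda_u / lambda_r_lab

# Bunch rest frame: the undulator period Lorentz-contracts to lambda_u/gamma,
# while the radiation wavelength is longer than its lab value by ~2*gamma,
# so the two scales end up within the same order of magnitude.
lambda_u_bunch = lambda_u / gamma
lambda_r_bunch = 2 * gamma * lambda_r_lab
bunch_ratio = lambda_u_bunch / lambda_r_bunch

print(f"lab-frame scale ratio ~ {lab_ratio:.1e}, bunch-frame ratio ~ {bunch_ratio:.2f}")
```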

  11. TRIPOLI-4® - MCNP5 ITER A-lite neutronic model benchmarking

    NASA Astrophysics Data System (ADS)

    Jaboulay, J.-C.; Cayla, P.-Y.; Fausser, C.; Lee, Y.-K.; Trama, J.-C.; Li-Puma, A.

    2014-06-01

    The aim of this paper is to present the capability of TRIPOLI-4®, the CEA Monte Carlo code, to model a large-scale fusion reactor with a complex neutron source and geometry. In the past, numerous benchmarks were conducted for the assessment of TRIPOLI-4® in fusion applications. Analyses of experiments (KANT, OKTAVIAN, FNG) and numerical benchmarks (between TRIPOLI-4® and MCNP5) on the HCLL DEMO2007 and ITER models were carried out successively. In the previous ITER benchmark, however, only the neutron wall loading was analyzed; its main purpose was to present the extension of MCAM (the FDS Team CAD import tool) for TRIPOLI-4®. Starting from this work, a more extensive benchmark has been performed on the estimation of the neutron flux, the nuclear heating in the shielding blankets and the tritium production rate in the European TBMs (HCLL and HCPB), and it is presented in this paper. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model (version 4.1). Simplified TBMs (from KIT) have been integrated in the equatorial port. Comparisons of neutron wall loading, flux, nuclear heating and tritium production rate show good agreement between the two codes. Discrepancies lie mainly within the statistical errors of the Monte Carlo codes.
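    A statement like "discrepancies lie mainly within the statistical errors" is typically checked by comparing the tally difference against the combined standard deviations reported by the two codes. A hedged sketch with hypothetical tally values (not data from the paper):

```python
import math

def tallies_agree(t1, s1, t2, s2, k=3.0):
    """Check whether two Monte Carlo tallies agree within their combined
    statistical errors, scaled by coverage factor k (3-sigma by default).
    Returns (ratio, combined_sigma, agree). Illustrative helper only.
    """
    combined = math.hypot(s1, s2)        # sqrt(s1^2 + s2^2)
    agree = abs(t1 - t2) <= k * combined
    return t1 / t2, combined, agree

# Hypothetical neutron-flux tallies [n/cm^2 per source neutron] with 1-sigma errors:
ratio, sigma, ok = tallies_agree(4.82e-7, 0.05e-7, 4.75e-7, 0.06e-7)
print(round(ratio, 3), ok)   # -> 1.015 True
```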

  12. Implementation and benchmark of a long-range corrected functional in the density functional based tight-binding method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lutsker, V.; Niehaus, T. A., E-mail: thomas.niehaus@physik.uni-regensburg.de; Aradi, B.

    2015-11-14

    Bridging the gap between first principles methods and empirical schemes, the density functional based tight-binding method (DFTB) has become a versatile tool in predictive atomistic simulations over the past years. One of the major restrictions of this method is the limitation to local or gradient corrected exchange-correlation functionals. This excludes the important class of hybrid or long-range corrected functionals, which are advantageous in thermochemistry, as well as in the computation of vibrational, photoelectron, and optical spectra. The present work provides a detailed account of the implementation of DFTB for a long-range corrected functional in generalized Kohn-Sham theory. We apply the method to a set of organic molecules and compare ionization potentials and electron affinities with the original DFTB method and higher level theory. The new scheme cures the significant overpolarization in electric fields found for local DFTB, which parallels the functional dependence in first principles density functional theory (DFT). At the same time, the computational savings with respect to full DFT calculations are not compromised as evidenced by numerical benchmark data.
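    Long-range correction rests on splitting the Coulomb operator with the error function, 1/r = erfc(mu*r)/r + erf(mu*r)/r: the short-range part is treated semi-locally while the long-range part carries exact exchange. The split is exact for any value of the range-separation parameter mu, as a quick numerical check shows:

```python
import math

def coulomb_split(r, mu):
    """Range separation of the Coulomb kernel used by long-range corrected
    functionals: returns the short-range and long-range pieces, which sum
    exactly to 1/r. mu is the range-separation parameter (1/length units).
    """
    sr = math.erfc(mu * r) / r    # short-range piece (semi-local treatment)
    lr = math.erf(mu * r) / r     # long-range piece (exact exchange)
    return sr, lr

sr, lr = coulomb_split(2.0, 0.3)
print(abs(sr + lr - 1.0 / 2.0) < 1e-12)   # the split is exact: -> True
```

At large r the short-range piece dies off and the full 1/r tail is carried by the long-range term, which is what fixes the asymptotics that local functionals get wrong.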

  13. Center for Extended Magnetohydrodynamic Modeling Cooperative Agreement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carl R. Sovinec

    The Center for Extended Magnetohydrodynamic Modeling (CEMM) is developing computer simulation models for predicting the behavior of magnetically confined plasmas. Over the first phase of support from the Department of Energy’s Scientific Discovery through Advanced Computing (SciDAC) initiative, the focus has been on macroscopic dynamics that alter the confinement properties of magnetic field configurations. The ultimate objective is to provide computational capabilities to predict plasma behavior—not unlike computational weather prediction—to optimize performance and to increase the reliability of magnetic confinement for fusion energy. Numerical modeling aids theoretical research by solving complicated mathematical models of plasma behavior including strong nonlinear effects and the influences of geometrical shaping of actual experiments. The numerical modeling itself remains an area of active research, due to challenges associated with simulating multiple temporal and spatial scales. The research summarized in this report spans computational and physical topics associated with state of the art simulation of magnetized plasmas. The tasks performed for this grant are categorized according to whether they are primarily computational, algorithmic, or application-oriented in nature. All involve the development and use of the Non-Ideal Magnetohydrodynamics with Rotation, Open Discussion (NIMROD) code, which is described at http://nimrodteam.org. With respect to computation, we have tested and refined methods for solving the large algebraic systems of equations that result from our numerical approximations of the physical model. Collaboration with the Terascale Optimal PDE Solvers (TOPS) SciDAC center led us to the SuperLU_DIST software library [http://crd.lbl.gov/~xiaoye/SuperLU/] for solving large sparse matrices using direct methods on parallel computers.
Switching to this solver library boosted NIMROD’s performance by a factor of five in typical large nonlinear simulations, which has been publicized as a success story of SciDAC-fostered collaboration. Furthermore, the SuperLU software does not assume any mathematical symmetry, and its generality provides an important capability for extending the physical model beyond magnetohydrodynamics (MHD). With respect to algorithmic and model development, our most significant accomplishment is the development of a new method for solving plasma models that treat electrons as an independent plasma component. These ‘two-fluid’ models encompass MHD and add temporal and spatial scales that are beyond the response of the ion species. Implementation and testing of a previously published algorithm did not prove successful for NIMROD, and the new algorithm has since been devised, analyzed, and implemented. Two-fluid modeling, an important objective of the original NIMROD project, is now routine in 2D applications. Algorithmic components for 3D modeling are in place and tested; though, further computational work is still needed for efficiency. Other algorithmic work extends the ion-fluid stress tensor to include models for parallel and gyroviscous stresses. In addition, our hot-particle simulation capability received important refinements that permitted completion of a benchmark with the M3D code. A highlight of our applications work is the edge-localized mode (ELM) modeling, which was part of the first-ever computational Performance Target for the DOE Office of Fusion Energy Science, see http://www.science.doe.gov/ofes/performancetargets.shtml. Our efforts allowed MHD simulations to progress late into the nonlinear stage, where energy is conducted to the wall location. They also produced a two-fluid ELM simulation starting from experimental information and demonstrating critical drift effects that are characteristic of two-fluid physics. 
Another important application is the internal kink mode in a tokamak. Here, the primary purpose of the study has been to benchmark the two main code development lines of CEMM, NIMROD and M3D, on a relevant nonlinear problem. Results from the two codes show repeating nonlinear relaxation events driven by the kink mode over quantitatively comparable timescales. The work has launched a more comprehensive nonlinear benchmarking exercise, where realistic transport effects have an important role.
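    The SuperLU_DIST switch pays off because a direct solver factors the general (nonsymmetric) sparse matrix once and then reuses the factorization for many right-hand sides. A dense toy analogue of the underlying algorithm, LU with partial pivoting, sketches the idea; SuperLU adds sparsity-preserving ordering and parallel distribution on top:

```python
import numpy as np

def lu_factor(a):
    """LU factorization with partial pivoting, P A = L U (dense toy version).
    L (unit diagonal) and U are stored in-place in the returned array."""
    a = a.astype(float).copy()
    n = a.shape[0]
    piv = np.arange(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(a[k:, k]))     # pivot row for stability
        if p != k:
            a[[k, p]] = a[[p, k]]
            piv[[k, p]] = piv[[p, k]]
        a[k + 1:, k] /= a[k, k]                 # multipliers (column of L)
        a[k + 1:, k + 1:] -= np.outer(a[k + 1:, k], a[k, k + 1:])
    return a, piv

def lu_solve(lu, piv, b):
    """Forward/back substitution; the factorization is reused for each b."""
    x = b[piv].astype(float)
    n = lu.shape[0]
    for i in range(1, n):                       # L y = P b (unit diagonal)
        x[i] -= lu[i, :i] @ x[:i]
    for i in range(n - 1, -1, -1):              # U x = y
        x[i] = (x[i] - lu[i, i + 1:] @ x[i + 1:]) / lu[i, i]
    return x

rng = np.random.default_rng(0)
a = rng.standard_normal((50, 50)) + 50 * np.eye(50)   # nonsymmetric, well-conditioned
b = rng.standard_normal(50)
lu, piv = lu_factor(a)
x = lu_solve(lu, piv, b)
print(np.allclose(a @ x, b))   # -> True
```

No symmetry is assumed anywhere above, which mirrors the point made in the report: a general LU-based solver keeps the door open for non-self-adjoint extensions beyond MHD.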

  14. Multiscale computations with a wavelet-adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Rastigejev, Yevgenii Anatolyevich

    A wavelet-based adaptive multiresolution algorithm for the numerical solution of multiscale problems governed by partial differential equations is introduced. The main features of the method include fast algorithms for the calculation of wavelet coefficients and the approximation of derivatives on nonuniform stencils. The connection between the wavelet order and the size of the stencil is established. The algorithm is based on mathematically well-established wavelet theory, which allows us to provide error estimates of the solution; these are used in conjunction with an appropriate threshold criterion to adapt the collocation grid. Efficient data structures for grid representation, as well as related computational algorithms to support the grid-rearrangement procedure, are developed. The algorithm is applied to the simulation of phenomena described by the Navier-Stokes equations. First, we undertake the study of the ignition and subsequent viscous detonation of an H2:O2:Ar mixture in a one-dimensional shock tube. Subsequently, we apply the algorithm to solve the two- and three-dimensional benchmark problem of incompressible flow in a lid-driven cavity at large Reynolds numbers. For these cases we show that solutions of accuracy comparable to the benchmarks are obtained with more than an order of magnitude reduction in degrees of freedom. The simulations show the striking ability of the algorithm to adapt to a solution having different scales at different spatial locations so as to produce accurate results at a relatively low computational cost.
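    The grid-adaptation principle, keeping collocation points only where wavelet detail coefficients exceed a threshold, can be illustrated with a plain Haar transform (a sketch only; the thesis uses higher-order wavelets and nonuniform stencils):

```python
import numpy as np

def haar_decompose(f, levels):
    """Orthonormal Haar transform: repeatedly split into averages and details."""
    details = []
    for _ in range(levels):
        avg = (f[0::2] + f[1::2]) / np.sqrt(2)
        det = (f[0::2] - f[1::2]) / np.sqrt(2)
        details.append(det)
        f = avg
    return f, details

def haar_reconstruct(avg, details):
    """Exact inverse of haar_decompose."""
    for det in reversed(details):
        out = np.empty(2 * avg.size)
        out[0::2] = (avg + det) / np.sqrt(2)
        out[1::2] = (avg - det) / np.sqrt(2)
        avg = out
    return avg

# A signal with one sharp internal layer: detail coefficients are negligible
# away from the layer, so thresholding them shows how an adaptive grid keeps
# points only where the solution has fine-scale structure.
x = np.linspace(0, 1, 1024)
f = np.tanh((x - 0.5) / 0.01)
avg, details = haar_decompose(f, 5)
eps = 1e-3
kept = sum(int(np.sum(np.abs(d) > eps)) for d in details)
total = sum(d.size for d in details)
thresholded = [np.where(np.abs(d) > eps, d, 0.0) for d in details]
err = np.max(np.abs(haar_reconstruct(avg, thresholded) - f))
print(f"kept {kept}/{total} detail coefficients, max reconstruction error {err:.2e}")
```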

  15. GeNN: a code generation framework for accelerated brain simulations

    NASA Astrophysics Data System (ADS)

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.
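    GeNN's central idea is code generation: model equations supplied by the user are turned into specialised simulation code (CUDA/C++ in GeNN itself). A toy stand-in in pure Python, generating and compiling a leaky integrate-and-fire update from a template; the snippet and parameter names here are hypothetical and are not GeNN's actual API:

```python
# Template for a generated neuron-update function; the placeholders are baked
# in at generation time, as a code generator would bake constants into kernels.
LIF_UPDATE = """
def step(v, i_in, dt, tau={tau}, v_rest={v_rest}, v_thresh={v_thresh}, v_reset={v_reset}):
    spikes = []
    for n, vn in enumerate(v):
        vn += dt / tau * ((v_rest - vn) + i_in[n])   # leaky integration
        if vn >= v_thresh:                           # threshold crossing
            vn = v_reset
            spikes.append(n)
        v[n] = vn
    return spikes
"""

def generate_update(tau=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """'Compile' the model: emit source, compile it, return the update function."""
    src = LIF_UPDATE.format(tau=tau, v_rest=v_rest, v_thresh=v_thresh, v_reset=v_reset)
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["step"]

step = generate_update()
v = [-65.0] * 4
spiked = set()
for _ in range(200):                     # 200 steps of dt = 1 ms, constant drive
    spiked.update(step(v, [20.0, 20.0, 0.0, 0.0], dt=1.0))
print(sorted(spiked))   # -> [0, 1]  (only the driven neurons spike)
```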

  16. GeNN: a code generation framework for accelerated brain simulations.

    PubMed

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-07

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.

  17. GeNN: a code generation framework for accelerated brain simulations

    PubMed Central

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/. PMID:26740369

  18. Analysis of impact of general-purpose graphics processor units in supersonic flow modeling

    NASA Astrophysics Data System (ADS)

    Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.

    2017-06-01

    Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPUs) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities for the use of GPUs in the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve the three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the programming implementation of the parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. The speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is measured. Performance measurements show that the numerical schemes developed achieve a 20-50× speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.

  19. Numerical investigation of nonlinear fluid-structure interaction dynamic behaviors under a general Immersed Boundary-Lattice Boltzmann-Finite Element method

    NASA Astrophysics Data System (ADS)

    Gong, Chun-Lin; Fang, Zhe; Chen, Gang

    A numerical approach based on the immersed boundary (IB), lattice Boltzmann and nonlinear finite element methods (FEM) is proposed to simulate hydrodynamic interactions of very flexible objects. In the present simulation framework, the motion of the fluid is obtained by solving the discrete lattice Boltzmann equations on an Eulerian grid, the behavior of the flexible objects is calculated through a nonlinear dynamic finite element method, and the interactive forces between them are implicitly obtained using a velocity-correction IB method that satisfies the no-slip condition well at the boundary points. The efficiency and accuracy of the proposed Immersed Boundary-Lattice Boltzmann-Finite Element method are first validated by a fluid-structure interaction (FSI) benchmark case, in which a flexible filament flaps behind a cylinder in channel flow; then the nonlinear vibration mechanism of the cylinder-filament system is investigated by altering the Reynolds number of the flow and the material properties of the filament. The interactions between two tandem and two side-by-side identical objects in a uniform flow are also investigated, and the in-phase and out-of-phase flapping behaviors are captured by the proposed method.
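    The fluid side of such a framework, the lattice Boltzmann collide-and-stream update, is compact enough to sketch. A minimal D2Q9 BGK step on a periodic grid follows; the IB coupling forces and the FEM structural solver of the paper are omitted:

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Standard D2Q9 equilibrium distribution (lattice sound speed c_s^2 = 1/3)."""
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

def bgk_step(f, tau):
    """One collide-and-stream step with periodic boundaries. In an IB-LBM
    coupling, the interaction force would enter as an extra source term."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau          # BGK collision
    for i, (cx, cy) in enumerate(c):                      # streaming
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

rng = np.random.default_rng(1)
f = equilibrium(1.0 + 0.01 * rng.standard_normal((32, 32)),
                np.zeros((32, 32)), np.zeros((32, 32)))
mass0 = f.sum()
for _ in range(10):
    f = bgk_step(f, tau=0.8)
print(np.isclose(f.sum(), mass0))   # collision and streaming conserve mass -> True
```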

  20. An interface capturing scheme for modeling atomization in compressible flows

    NASA Astrophysics Data System (ADS)

    Garrick, Daniel P.; Hagen, Wyatt A.; Regele, Jonathan D.

    2017-09-01

    The study of atomization in supersonic flow is critical to ensuring reliable ignition of scramjet combustors under startup conditions. Numerical methods incorporating surface tension effects have largely focused on the incompressible regime as most atomization applications occur at low Mach numbers. Simulating surface tension effects in compressible flow requires robust numerical methods that can handle discontinuities caused by both shocks and material interfaces with high density ratios. In this work, a shock and interface capturing scheme is developed that uses the Harten-Lax-van Leer-Contact (HLLC) Riemann solver while a Tangent of Hyperbola for INterface Capturing (THINC) interface reconstruction scheme retains the fluid immiscibility condition in the volume fraction and phasic densities in the context of the five equation model. The approach includes the effects of compressibility, surface tension, and molecular viscosity. One and two-dimensional benchmark problems demonstrate the desirable interface sharpening and conservation properties of the approach. Simulations of secondary atomization of a cylindrical water column after its interaction with a shockwave show good qualitative agreement with experimentally observed behavior. Three-dimensional examples of primary atomization of a liquid jet in a Mach 2 crossflow demonstrate the robustness of the method.

  1. Effect of the forcing term in the pseudopotential lattice Boltzmann modeling of thermal flows

    NASA Astrophysics Data System (ADS)

    Li, Qing; Luo, K. H.

    2014-05-01

    The pseudopotential lattice Boltzmann (LB) model is a popular model in the LB community for simulating multiphase flows. Recently, several thermal LB models, which are based on the pseudopotential LB model and constructed within the framework of the double-distribution-function LB method, were proposed to simulate thermal multiphase flows [G. Házi and A. Márkus, Phys. Rev. E 77, 026305 (2008), 10.1103/PhysRevE.77.026305; L. Biferale, P. Perlekar, M. Sbragaglia, and F. Toschi, Phys. Rev. Lett. 108, 104502 (2012), 10.1103/PhysRevLett.108.104502; S. Gong and P. Cheng, Int. J. Heat Mass Transfer 55, 4923 (2012), 10.1016/j.ijheatmasstransfer.2012.04.037; M. R. Kamali et al., Phys. Rev. E 88, 033302 (2013), 10.1103/PhysRevE.88.033302]. The objective of the present paper is to show that the effect of the forcing term on the temperature equation must be eliminated in the pseudopotential LB modeling of thermal flows. First, the effect of the forcing term on the temperature equation is shown via the Chapman-Enskog analysis. For comparison, alternative treatments that are free from the forcing-term effect are provided. Subsequently, numerical investigations are performed for two benchmark tests. The numerical results clearly show that the existence of the forcing-term effect will lead to significant numerical errors in the pseudopotential LB modeling of thermal flows.

  2. Establishing objective benchmarks in robotic virtual reality simulation at the level of a competent surgeon using the RobotiX Mentor simulator.

    PubMed

    Watkinson, William; Raison, Nicholas; Abe, Takashige; Harrison, Patrick; Khan, Shamim; Van der Poel, Henk; Dasgupta, Prokar; Ahmed, Kamran

    2018-05-01

    To establish objective benchmarks at the level of a competent robotic surgeon across different exercises and metrics for the RobotiX Mentor virtual reality (VR) simulator, suitable for use within a robotic surgical training curriculum. This retrospective observational study analysed results from multiple data sources, all of which used the RobotiX Mentor VR simulator. 123 participants with experience ranging from novice to expert completed the exercises. Competency was established as the 25th centile of the mean advanced-intermediate score. Three basic skill exercises and two advanced skill exercises were used. King's College London. 84 novices, 26 beginner intermediates, 9 advanced intermediates and 4 experts took part in this retrospective observational study. Objective benchmarks derived from the 25th centile of the mean scores of the advanced intermediates provided suitably challenging yet achievable targets for training surgeons. The disparity in scores was greatest for the advanced exercises. Novice surgeons are able to achieve the benchmarks across all exercises in the majority of metrics. We have successfully created this proof-of-concept study, which requires validation in a larger cohort. Objective benchmarks obtained from the 25th centile of the mean scores of advanced intermediates provide clinically relevant benchmarks at the standard of a competent robotic surgeon that are challenging yet attainable. They can be used within a VR training curriculum, allowing participants to track and monitor their progress in a structured, progressive manner through five exercises, and providing clearly defined targets to ensure a universal training standard across training surgeons. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
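    Deriving a benchmark as the 25th centile of a reference group's mean scores is straightforward to reproduce. A sketch with hypothetical simulator scores (the study's raw data are not given in the abstract):

```python
import statistics

def benchmark_from_reference_group(scores_by_trainee, percentile=25):
    """Objective pass mark: the given percentile of the reference group's
    per-trainee mean scores, mirroring the study design. Illustrative only."""
    means = [statistics.mean(s) for s in scores_by_trainee]
    # quantiles(..., n=100) returns the 1st..99th percentile cut points
    return statistics.quantiles(means, n=100)[percentile - 1]

# Hypothetical scores for 9 advanced intermediates (three repetitions each):
advanced_intermediates = [
    [78, 82, 80], [71, 69, 74], [88, 85, 90], [65, 70, 68], [81, 79, 83],
    [74, 77, 73], [69, 72, 75], [84, 86, 82], [77, 75, 79],
]
cutoff = benchmark_from_reference_group(advanced_intermediates)
print(round(cutoff, 1))   # -> 71.7; a trainee whose mean meets this passes
```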

  3. Access to a simulator is not enough: the benefits of virtual reality training based on peer-group-derived benchmarks--a randomized controlled trial.

    PubMed

    von Websky, Martin W; Raptis, Dimitri A; Vitz, Martina; Rosenthal, Rachel; Clavien, P A; Hahnloser, Dieter

    2013-11-01

    Virtual reality (VR) simulators are widely used to familiarize surgical novices with laparoscopy, but VR training methods differ in efficacy. In the present trial, self-controlled basic VR training (SC-training) was tested against training based on peer-group-derived benchmarks (PGD-training). First, novice laparoscopic residents were randomized into a SC group (n = 34), and a group using PGD-benchmarks (n = 34) for basic laparoscopic training. After completing basic training, both groups performed 60 VR laparoscopic cholecystectomies for performance analysis. Primary endpoints were simulator metrics; secondary endpoints were program adherence, trainee motivation, and training efficacy. Altogether, 66 residents completed basic training, and 3,837 of 3,960 (96.8 %) cholecystectomies were available for analysis. Course adherence was good, with only two dropouts, both in the SC-group. The PGD-group spent more time and repetitions in basic training until the benchmarks were reached and subsequently showed better performance in the readout cholecystectomies: Median time (gallbladder extraction) showed significant differences of 520 s (IQR 354-738 s) in SC-training versus 390 s (IQR 278-536 s) in the PGD-group (p < 0.001) and 215 s (IQR 175-276 s) in experts, respectively. Path length of the right instrument also showed significant differences, again with the PGD-training group being more efficient. Basic VR laparoscopic training based on PGD benchmarks with external assessment is superior to SC training, resulting in higher trainee motivation and better performance in simulated laparoscopic cholecystectomies. We recommend such a basic course based on PGD benchmarks before advancing to more elaborate VR training.

  4. A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator

    DOE PAGES

    Engelmann, Christian; Naughton, III, Thomas J.

    2016-03-22

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim offers a highly accurate simulation mode for better tracking of injected MPI process failures. Furthermore, with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.
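    Simulated MPI message matching, the target of the second improvement, pairs posted receives with incoming messages under MPI's (source, tag) wildcard rules. A toy queue pair illustrating the matching logic (a sketch of the general mechanism, not xSim's actual implementation):

```python
from collections import deque

ANY_SOURCE = -1
ANY_TAG = -1

class MatchQueues:
    """Toy MPI-style message matching: posted receives and unexpected
    messages are searched in arrival order with wildcard (source, tag)
    rules. A simulator oversubscribing cores repeats this for every
    simulated rank, which is where the management overhead comes from."""

    def __init__(self):
        self.posted = deque()        # receives waiting for a message
        self.unexpected = deque()    # messages that arrived before their receive

    @staticmethod
    def _matches(recv, msg):
        src_ok = recv[0] in (ANY_SOURCE, msg[0])
        tag_ok = recv[1] in (ANY_TAG, msg[1])
        return src_ok and tag_ok

    def post_recv(self, source, tag):
        """Return the matched message, or None if the receive is queued."""
        for i, msg in enumerate(self.unexpected):
            if self._matches((source, tag), msg):
                del self.unexpected[i]
                return msg
        self.posted.append((source, tag))
        return None

    def deliver(self, source, tag, payload):
        """Return the matched receive, or None if the message is queued."""
        msg = (source, tag, payload)
        for i, recv in enumerate(self.posted):
            if self._matches(recv, msg):
                del self.posted[i]
                return recv
        self.unexpected.append(msg)
        return None

q = MatchQueues()
q.deliver(3, 7, "early")                 # arrives before any receive is posted
print(q.post_recv(ANY_SOURCE, 7))        # wildcard source matches -> (3, 7, 'early')
```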

  5. An efficient numerical method for solving the Boltzmann equation in multidimensions

    NASA Astrophysics Data System (ADS)

    Dimarco, Giacomo; Loubère, Raphaël; Narski, Jacek; Rey, Thomas

    2018-01-01

    In this paper we deal with the extension of the Fast Kinetic Scheme (FKS) (Dimarco and Loubère, 2013 [26]), originally constructed for solving the BGK equation, to the more challenging case of the Boltzmann equation. The scheme combines a robust and fast method for treating the transport part, based on an innovative Lagrangian technique, with conservative fast spectral schemes that treat the collision operator by means of an operator splitting approach. This approach, along with several implementation features related to the parallelization of the algorithm, permits the construction of an efficient simulation tool, which is numerically tested against exact and reference solutions on classical problems arising in rarefied gas dynamics. We present results up to the 3D × 3D case for unsteady flows for the Variable Hard Sphere model, which may serve as a benchmark for future comparisons between different numerical methods for solving the multidimensional Boltzmann equation. For this reason, we also provide, for each problem studied, details on the computational cost and memory consumption, as well as comparisons with the BGK model or the limit model of the compressible Euler equations.

  6. Turbulent dissipation challenge: a community-driven effort

    NASA Astrophysics Data System (ADS)

    Parashar, Tulasi N.; Salem, Chadi; Wicks, Robert T.; Karimabadi, H.; Gary, S. Peter; Matthaeus, William H.

    2015-10-01

    Many naturally occurring and man-made plasmas are collisionless and turbulent. It is not yet well understood how the energy in fields and fluid motions is transferred into the thermal degrees of freedom of constituent particles in such systems. The debate at present primarily concerns proton heating. Multiple possible heating mechanisms have been proposed over the past few decades, including cyclotron damping, Landau damping, heating at intermittent structures and stochastic heating. Recently, a community-driven effort was proposed (Parashar & Salem, 2013, arXiv:1303.0204) to bring the community together and understand the relative contributions of these processes under given conditions. In this paper, we propose the first step of this challenge: a set of problems and diagnostics for benchmarking and comparing different types of 2.5D simulations. These comparisons will provide insights into the strengths and limitations of different types of numerical simulations and will help guide subsequent stages of the challenge.

  7. Effect of randomness on multi-frequency aeroelastic responses resolved by Unsteady Adaptive Stochastic Finite Elements

    NASA Astrophysics Data System (ADS)

    Witteveen, Jeroen A. S.; Bijl, Hester

    2009-10-01

    The Unsteady Adaptive Stochastic Finite Elements (UASFE) method resolves the effect of randomness in numerical simulations of single-mode aeroelastic responses with a constant accuracy in time for a constant number of samples. In this paper, the UASFE framework is extended to multi-frequency responses and continuous structures by employing a wavelet decomposition pre-processing step to decompose the sampled multi-frequency signals into single-frequency components. The effect of the randomness on the multi-frequency response is then obtained by summing the results of the UASFE interpolation at constant phase for the different frequency components. Results for multi-frequency responses and continuous structures show a three orders of magnitude reduction of computational costs compared to crude Monte Carlo simulations in a harmonically forced oscillator, a flutter panel problem, and the three-dimensional transonic AGARD 445.6 wing aeroelastic benchmark subject to random fields and random parameters with various probability distributions.
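    The pre-processing step described above (splitting a sampled multi-frequency signal into single-frequency components, then recombining per-component results) can be sketched with a simple stand-in, where an FFT band filter plays the role of the wavelet decomposition; the signal, frequencies, and band edges are invented for illustration:

```python
import numpy as np

# A two-frequency "response" sampled over 10 s (illustrative stand-in for
# a sampled multi-frequency aeroelastic signal).
t = np.linspace(0.0, 10.0, 2048, endpoint=False)
signal = 1.0 * np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 3.5 * t)

spec = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])

def band(spec, freqs, lo, hi):
    """Zero out all spectral content outside lo <= f < hi, transform back."""
    kept = np.where((freqs >= lo) & (freqs < hi), spec, 0.0)
    return np.fft.irfft(kept, n=t.size)

comp_low = band(spec, freqs, 0.5, 2.0)    # isolates the 1.0 Hz component
comp_high = band(spec, freqs, 2.0, 5.0)   # isolates the 3.5 Hz component

# The single-frequency components sum back to the original signal,
# mirroring the method's "sum the per-component results" step.
err = np.max(np.abs(comp_low + comp_high - signal))
```

Each isolated component can then be processed independently (in UASFE, interpolated at constant phase) before the contributions are summed.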

  8. Milestone Deliverable: FY18-Q1: Deploy production sliding mesh capability with linear solver benchmarking.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Domino, Stefan P.

    2017-12-01

    This milestone was focused on deploying and verifying a “sliding-mesh interface,” and establishing baseline timings for blade-resolved simulations of a sub-MW-scale turbine. In the ExaWind project, we are developing both sliding-mesh and overset-mesh approaches for handling the rotating blades in an operating wind turbine. In the sliding-mesh approach, the turbine rotor and its immediate surrounding fluid are captured in a “disk” that is embedded in the larger fluid domain. The embedded fluid is simulated in a coordinate system that rotates with the rotor. It is important that the coupling algorithm (and its implementation) between the rotating and inertial discrete models maintains the accuracy of the numerical methods on either side of the interface, i.e., the interface is “design order.”

  9. Optimization of the cooling profile to achieve crack-free Yb:S-FAP crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, H; Qiu, S; Kheng, L

    Yb:S-FAP [Yb³⁺:Sr₅(PO₄)₃F] crystals are an important gain medium for diode-pumped laser applications. Growth of 7.0 cm diameter Yb:S-FAP crystals utilizing the Czochralski (CZ) method from SrF₂-rich melts often encounters cracks during the post-growth cool-down stage. To suppress cracking during cool down, a numerical simulation of the growth system was used to understand the correlation between the furnace power during cool down and the radial temperature differences within the crystal. The critical radial temperature difference, above which the crystal cracks, has been determined by benchmarking the simulation results against experimental observations. Based on this comparison, an optimal three-stage ramp-down profile was implemented and produced high-quality, crack-free Yb:S-FAP crystals.

  10. Isentropic Compression with a Rectangular Configuration for Tungsten and Tantalum, Computations and Comparison with Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lefrancois, A.; Reisman, D. B.; Bastea, M.

    2006-02-13

    Isentropic compression experiments and numerical simulations on metals are performed at the Z accelerator facility at Sandia National Laboratories and at Lawrence Livermore National Laboratory in order to study the isentrope, associated Hugoniot, and phase changes of these metals. 3D configurations have been calculated here to benchmark the new beta version of the electromagnetism package coupled with the dynamics in LS-DYNA and compared with the ICE Z shots 1511 and 1555. The electromagnetism module is being developed in the general-purpose explicit and implicit finite element program LS-DYNA® in order to perform coupled mechanical/thermal/electromagnetic simulations. The Maxwell equations are solved using a Finite Element Method (FEM) for the solid conductors coupled with a Boundary Element Method (BEM) for the surrounding air (or vacuum). More details can be found in the references.

  11. Efficient LBM visual simulation on face-centered cubic lattices.

    PubMed

    Petkov, Kaloian; Qiu, Feng; Fan, Zhe; Kaufman, Arie E; Mueller, Klaus

    2009-01-01

    The Lattice Boltzmann method (LBM) for visual simulation of fluid flow generally employs cubic Cartesian (CC) lattices such as the D3Q13 and D3Q19 lattices for the particle transport. However, the CC lattices lead to suboptimal representation of the simulation space. We introduce the face-centered cubic (FCC) lattice, fD3Q13, for LBM simulations. Compared to the CC lattices, the fD3Q13 lattice creates a more isotropic sampling of the simulation domain and its single lattice speed (i.e., link length) simplifies the computations and data storage. Furthermore, the fD3Q13 lattice can be decomposed into two independent interleaved lattices, one of which can be discarded, which doubles the simulation speed. The resulting LBM simulation can be efficiently mapped to the GPU, further increasing the computational performance. We show the numerical advantages of the FCC lattice on channeled flow in 2D and the flow-past-a-sphere benchmark in 3D. In both cases, the comparison is against the corresponding CC lattices using the analytical solutions for the systems as well as velocity field visualizations. We also demonstrate the performance advantages of the fD3Q13 lattice for interactive simulation and rendering of hot smoke in an urban environment using thermal LBM.
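    The structural claims made for the fD3Q13 lattice (12 FCC links of a single length, plus a rest particle, with isotropic second moments) are easy to check numerically; the snippet below is an illustrative sketch, not code from the paper:

```python
import itertools, math

# The 12 face-centered-cubic links are the permutations of (+-1, +-1, 0),
# i.e. the velocity vectors with exactly two nonzero components.
links = [c for c in itertools.product((-1, 0, 1), repeat=3)
         if sum(abs(u) for u in c) == 2]

# Single lattice speed: every link has the same length sqrt(2).
lengths = {math.sqrt(sum(u * u for u in c)) for c in links}

# Second moment sum_i c_ia * c_ib is proportional to the identity --
# the kind of lattice isotropy an LBM velocity set must provide.
second = [[sum(c[a] * c[b] for c in links) for b in range(3)] for a in range(3)]
```

With the rest particle added, this gives the 13 discrete velocities of the fD3Q13 set; the single link length is what simplifies the computations and data storage noted in the abstract.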

  12. Analytical theory of coherent synchrotron radiation wakefield of short bunches shielded by conducting parallel plates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stupakov, Gennady; Zhou, Demin

    2016-04-21

    We develop a general model of coherent synchrotron radiation (CSR) impedance with shielding provided by two parallel conducting plates. This model allows us to easily reproduce all previously known analytical CSR wakes and to expand the analysis to situations not explored before. It reduces calculations of the impedance to taking integrals along the trajectory of the beam. New analytical results are derived for the radiation impedance with shielding for the following orbits: a kink, a bending magnet, a wiggler of finite length, and an infinitely long wiggler. Furthermore, all our formulas are benchmarked against numerical simulations with the CSRZ computer code.

  13. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.D.; Kornreich, D.E.

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation as modified in the second year renewal application includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) single medium searchlight problem (SLP) and (b) two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) point source in arbitrary geometry, (b) single medium SLP, and (c) two-adjacent half-space SLP. Task 3 on code verification includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.
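    For a sense of what such point-source transport benchmarks build on, the simplest closed-form ingredient is the one-group uncollided scalar flux from an isotropic point source in an infinite medium, φ(r) = S e^(−Σ_t r)/(4πr²); the benchmarks in this report solve the much harder problem with scattering, so the sketch below covers only this well-known uncollided term:

```python
import math

def uncollided_flux(r, source=1.0, sigma_t=1.0):
    """Uncollided scalar flux at distance r from an isotropic point source
    of strength `source` in an infinite medium with total cross section
    sigma_t: phi(r) = S * exp(-sigma_t * r) / (4 * pi * r**2)."""
    return source * math.exp(-sigma_t * r) / (4.0 * math.pi * r * r)

# Pure 1/r^2 geometric spreading (vacuum) versus exponential attenuation
# after two mean free paths:
phi_vacuum = uncollided_flux(2.0, sigma_t=0.0)   # = 1 / (16 pi)
phi_medium = uncollided_flux(2.0, sigma_t=1.0)
```

Analytical benchmarks of the kind described add the scattered component on top of this term, which is what makes them valuable standards for verifying production codes.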

  14. Development of a two-fluid drag law for clustered particles using direct numerical simulation and validation through experiments

    NASA Astrophysics Data System (ADS)

    Abbasi Baharanchi, Ahmadreza

    This dissertation focused on the development and utilization of numerical and experimental approaches to improve the CFD modeling of the fluidization flow of cohesive micron-size particles. The specific objectives of this research were: (1) developing a cluster prediction mechanism applicable to Two-Fluid Modeling (TFM) of gas-solid systems; (2) developing more accurate drag models for TFM of gas-solid fluidization flow in the presence of cohesive interparticle forces; (3) using the developed model to explore the improvement in accuracy of TFM in the simulation of fluidization flow of cohesive powders; (4) understanding the causes and influential factors behind the improvements and quantifying them; and (5) gathering data from a fast fluidization flow and using these data for benchmark validations. Simulation results with the two developed cluster-aware drag models showed that cluster prediction could effectively influence the results in both the first and second cluster-aware models. The improvement in accuracy of TFM modeling using three versions of the first hybrid model was significant, and the best improvements were obtained by using the smallest values of the switch parameter, which led to capturing the smallest chances of cluster prediction. In the case of the second hybrid model, the dependence of the critical model parameter on the Reynolds number alone meant that the improvement in accuracy was significant only in the dense section of the fluidized bed. This finding suggests that a more sophisticated particle-resolved DNS model, which can span a wide range of solid volume fractions, could be used in the formulation of the cluster-aware drag model. The results of experiments using high-speed imaging indicated the presence of particle clusters in the fluidization flow of FCC inside the riser of the FIU-CFB facility. In addition, pressure data were successfully captured along the fluidization column of the facility and used as benchmark validation data for the second hybrid model developed in the present dissertation. It was shown that the second hybrid model could predict the pressure data in the dense section of the fluidization column with better accuracy.

  15. BACT Simulation User Guide (Version 7.0)

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1997-01-01

    This report documents the structure and operation of a simulation model of the Benchmark Active Control Technology (BACT) Wind-Tunnel Model. The BACT system was designed, built, and tested at NASA Langley Research Center as part of the Benchmark Models Program and was developed to perform wind-tunnel experiments to obtain benchmark quality data to validate computational fluid dynamics and computational aeroelasticity codes, to verify the accuracy of current aeroservoelasticity design and analysis tools, and to provide an active controls testbed for evaluating new and innovative control algorithms for flutter suppression and gust load alleviation. The BACT system has been especially valuable as a control system testbed.

  16. An overview of the ENEA activities in the field of coupled codes NPP simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parisi, C.; Negrenti, E.; Sepielli, M.

    2012-07-01

    In the framework of the nuclear research activities in the fields of safety, training and education, ENEA (the Italian National Agency for New Technologies, Energy and Sustainable Development) is in charge of defining and pursuing all the necessary steps for the development of a NPP engineering simulator at the 'Casaccia' Research Center near Rome. A summary of the activities in the field of nuclear power plant simulation by coupled codes is presented here, together with the long-term strategy for the engineering simulator development. Specifically, results from the participation in international benchmarking activities like the OECD/NEA 'Kalinin-3' benchmark and the 'AER-DYN-002' benchmark, together with simulations of relevant events like the Fukushima accident, are reported here. The ultimate goal of such activities, performed using state-of-the-art technology, is the re-establishment of top-level competencies in the NPP simulation field in order to facilitate the development of Enhanced Engineering Simulators and to upgrade competencies for supporting national energy strategy decisions, the national nuclear safety authority, and the R&D activities on NPP designs. (authors)

  17. Dynamic Simulation of VEGA SRM Bench Firing By Using Propellant Complex Characterization

    NASA Astrophysics Data System (ADS)

    Di Trapani, C. D.; Mastrella, E.; Bartoccini, D.; Squeo, E. A.; Mastroddi, F.; Coppotelli, G.; Linari, M.

    2012-07-01

    During the VEGA launcher development, from 2004 up to now, 8 firing tests have been performed at Salto di Quirra (Sardinia, Italy) and Kourou (French Guiana) with the objective of characterizing and qualifying the Zefiro and P80 Solid Rocket Motors (SRM). In fact, the VEGA launcher configuration foresees 3 solid stages based on the P80, Z23 and Z9 Solid Rocket Motors, respectively. One of the primary objectives of the firing tests is to correctly characterize the dynamic response of the SRM in order to apply such a characterization to the predictions and simulations of the VEGA launch dynamic environment. Considering that the solid propellant makes up around 90% of the SRM mass, it is very important to characterize it dynamically, and to increase confidence in the simulation of the dynamic levels transmitted to the LV upper part from the SRMs. The activity is articulated in three parts: • consolidation of an experimental method for the characterization of the complex dynamic elasticity modulus of visco-elastic materials applicable to the SRM propellant operative conditions • introduction of the complex dynamic elasticity modulus in a numerical FEM benchmark based on the MSC NASTRAN solver • analysis of the effect of the introduction of the complex dynamic elasticity modulus in the Zefiro FEM, focusing on reproducing experimental firing test data with the numerical approach.

  18. Adaptive temporal refinement in injection molding

    NASA Astrophysics Data System (ADS)

    Karyofylli, Violeta; Schmitz, Mauritius; Hopmann, Christian; Behr, Marek

    2018-05-01

    Mold filling is an injection molding stage of great significance, because many defects of the plastic components (e.g. weld lines, burrs or insufficient filling) can occur during this process step. Therefore, it plays an important role in determining the quality of the produced parts. Our goal is temporal refinement in the vicinity of the evolving melt front, in the context of 4D simplex-type space-time grids [1, 2]. This novel discretization method has an inherent flexibility to employ completely unstructured meshes with varying levels of resolution both in the spatial dimensions and in the time dimension, thus allowing the use of local time-stepping during the simulations. This can lead to higher simulation precision, while preserving calculation efficiency. A 3D benchmark case, which concerns the filling of a plate-shaped geometry, is used for verifying our numerical approach [3]. The simulation results obtained with the fully unstructured space-time discretization are compared to those obtained with the standard space-time method and to Moldflow simulation results. This example also serves to provide reliable timing measurements and to assess the efficiency of the filling simulation of complex 3D molds while applying adaptive temporal refinement.

  19. A method for the direct numerical simulation of hypersonic boundary-layer instability with finite-rate chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marxen, Olaf, E-mail: olaf.marxen@vki.ac.be; Aeronautics and Aerospace Department, von Karman Institute for Fluid Dynamics, Chaussée de Waterloo, 72, 1640 Rhode-St-Genèse; Magin, Thierry E.

    2013-12-15

    A new numerical method is presented here that allows chemically reacting gases to be considered during the direct numerical simulation of a hypersonic fluid flow. The method comprises the direct coupling of a solver for the fluid mechanical model and a library providing the physico-chemical model. The numerical method for the fluid mechanical model integrates the compressible Navier–Stokes equations using an explicit time advancement scheme and high-order finite differences. This Navier–Stokes code can be applied to the investigation of laminar-turbulent transition and boundary-layer instability. The numerical method for the physico-chemical model provides thermodynamic and transport properties for different gases as well as chemical production rates, while here we exclusively consider a five-species air mixture. The new method is verified for a number of test cases at Mach 10, including the one-dimensional high-temperature flow downstream of a normal shock, a hypersonic chemically reacting boundary layer in local thermodynamic equilibrium and a hypersonic reacting boundary layer with finite-rate chemistry. We are able to confirm that the diffusion flux plays an important role for a high-temperature boundary layer in local thermodynamic equilibrium. Moreover, we demonstrate that the flow for a case previously considered as a benchmark for the investigation of non-equilibrium chemistry can be regarded as frozen. Finally, the new method is applied to investigate the effect of finite-rate chemistry on boundary-layer instability by considering the downstream evolution of a small-amplitude wave and comparing results with those obtained for a frozen gas as well as a gas in local thermodynamic equilibrium.

  20. A review of laboratory and numerical modelling in volcanology

    NASA Astrophysics Data System (ADS)

    Kavanagh, Janine L.; Engwell, Samantha L.; Martin, Simon A.

    2018-04-01

    Modelling has been used in the study of volcanic systems for more than 100 years, building upon the approach first applied by Sir James Hall in 1815. Informed by observations of volcanological phenomena in nature, including eye-witness accounts of eruptions, geophysical or geodetic monitoring of active volcanoes, and geological analysis of ancient deposits, laboratory and numerical models have been used to describe and quantify volcanic and magmatic processes that span orders of magnitude in time and space. We review the use of laboratory and numerical modelling in volcanological research, focussing on sub-surface and eruptive processes including the accretion and evolution of magma chambers, the propagation of sheet intrusions, the development of volcanic flows (lava flows, pyroclastic density currents, and lahars), volcanic plume formation, and ash dispersal. When first introduced into volcanology, laboratory experiments and numerical simulations marked a transition in approach from broadly qualitative to increasingly quantitative research. These methods are now widely used in volcanology to describe the physical and chemical behaviours that govern volcanic and magmatic systems. Creating simplified models of highly dynamical systems enables volcanologists to simulate and potentially predict the nature and impact of future eruptions. These tools have provided significant insights into many aspects of the volcanic plumbing system and eruptive processes. The largest scientific advances in volcanology have come from a multidisciplinary approach, applying developments in diverse fields such as engineering and computer science to study magmatic and volcanic phenomena. A global effort in the integration of laboratory and numerical volcano modelling is now required to tackle key problems in volcanology and points towards the importance of benchmarking exercises and the need for protocols to be developed so that models are routinely tested against real world data.

  1. [Benchmark experiment to verify radiation transport calculations for dosimetry in radiation therapy].

    PubMed

    Renner, Franziska

    2016-09-01

    Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide. Copyright © 2015. Published by Elsevier GmbH.
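    The agreement criterion implied by the closing sentence (results "coincide" when their difference lies within the combined uncertainty) can be made explicit; the dose values below are invented for illustration, and only the relative uncertainties of about 0.7 % (experiment) and 1.0 % (simulation) come from the abstract:

```python
import math

def agrees(val_a, u_rel_a, val_b, u_rel_b, k=2.0):
    """True if |a - b| <= k * sqrt(u_a^2 + u_b^2), i.e. the difference lies
    within the combined standard uncertainty expanded by coverage factor k."""
    u_combined = math.hypot(u_rel_a * val_a, u_rel_b * val_b)
    return abs(val_a - val_b) <= k * u_combined

dose_experiment = 1.000   # arbitrary normalized dose (illustrative)
dose_simulation = 1.012   # 1.2 % higher, still inside k=2 combined uncertainty
result = agrees(dose_experiment, 0.007, dose_simulation, 0.010)
```

This is the standard way such benchmark comparisons are judged: not by requiring identical values, but by checking the discrepancy against the stated measurement and simulation uncertainties.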

  2. Generalizable open source urban water portfolio simulation framework demonstrated using a multi-objective risk-based planning benchmark problem.

    NASA Astrophysics Data System (ADS)

    Trindade, B. C.; Reed, P. M.

    2017-12-01

    The growing access and reduced cost for computing power in recent years has promoted rapid development and application of multi-objective water supply portfolio planning. As this trend continues there is a pressing need for flexible risk-based simulation frameworks and improved algorithm benchmarking for emerging classes of water supply planning and management problems. This work contributes the Water Utilities Management and Planning (WUMP) model: a generalizable and open source simulation framework designed to capture how water utilities can minimize operational and financial risks by regionally coordinating planning and management choices, i.e. making more efficient and coordinated use of restrictions, water transfers and financial hedging combined with possible construction of new infrastructure. We introduce the WUMP simulation framework as part of a new multi-objective benchmark problem for planning and management of regionally integrated water utility companies. In this problem, a group of fictitious water utilities seek to balance the use of the mentioned reliability-driven actions (e.g., restrictions, water transfers and infrastructure pathways) and their inherent financial risks. Several traits of this problem make it ideal for a benchmark problem, namely the presence of (1) strong non-linearities and discontinuities in the Pareto front caused by the step-wise nature of the decision making formulation and by the abrupt addition of storage through infrastructure construction, (2) noise due to the stochastic nature of the streamflows and water demands, and (3) non-separability resulting from the cooperative formulation of the problem, in which decisions made by one stakeholder may substantially impact others. Both the open source WUMP simulation framework and its demonstration in a challenging benchmarking example hold value for promoting broader advances in urban water supply portfolio planning for regions confronting change.

  3. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approx. 10³ cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
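    A weak-scaling analysis like the one mentioned above holds the work per core fixed and reports efficiency as t(1 core)/t(N cores); the sketch below uses invented timings, with only the "reasonable scalability up to approximately 10³ cores" observation taken from the abstract:

```python
# Weak scaling: the problem size grows in proportion to the core count,
# so ideal runtime is flat and efficiency is t(smallest run) / t(N cores).
def weak_scaling_efficiency(timings):
    """timings: {cores: runtime_seconds} with work per core held fixed.
    Returns {cores: efficiency}, where 1.0 is ideal."""
    t_base = timings[min(timings)]
    return {n: t_base / t for n, t in sorted(timings.items())}

# Invented runtimes for illustration: near-flat to ~512 cores, degrading after.
timings = {1: 100.0, 8: 104.0, 64: 112.0, 512: 131.0, 4096: 208.0}
eff = weak_scaling_efficiency(timings)
for n, e in eff.items():
    print(f"{n:5d} cores: efficiency {e:.2f}")
```

Plotting such efficiencies against core count is the usual way to locate the point (here around 10³ cores, per the abstract) where communication and load-imbalance costs begin to dominate.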

  4. A suite of exercises for verifying dynamic earthquake rupture codes

    USGS Publications Warehouse

    Harris, Ruth A.; Barall, Michael; Aagaard, Brad T.; Ma, Shuo; Roten, Daniel; Olsen, Kim B.; Duan, Benchun; Liu, Dunyu; Luo, Bin; Bai, Kangchen; Ampuero, Jean-Paul; Kaneko, Yoshihiro; Gabriel, Alice-Agnes; Duru, Kenneth; Ulrich, Thomas; Wollherr, Stephanie; Shi, Zheqiang; Dunham, Eric; Bydlon, Sam; Zhang, Zhenguo; Chen, Xiaofei; Somala, Surendra N.; Pelties, Christian; Tago, Josue; Cruz-Atienza, Victor Manuel; Kozdon, Jeremy; Daub, Eric; Aslam, Khurram; Kase, Yuko; Withers, Kyle; Dalguer, Luis

    2018-01-01

    We describe a set of benchmark exercises that are designed to test if computer codes that simulate dynamic earthquake rupture are working as intended. These types of computer codes are often used to understand how earthquakes operate, and they produce simulation results that include earthquake size, amounts of fault slip, and the patterns of ground shaking and crustal deformation. The benchmark exercises examine a range of features that scientists incorporate in their dynamic earthquake rupture simulations. These include implementations of simple or complex fault geometry, off‐fault rock response to an earthquake, stress conditions, and a variety of formulations for fault friction. Many of the benchmarks were designed to investigate scientific problems at the forefronts of earthquake physics and strong ground motions research. The exercises are freely available on our website for use by the scientific community.

  5. Numerical simulation of the modulation transfer function (MTF) in infrared focal plane arrays: simulation methodology and MTF optimization

    NASA Astrophysics Data System (ADS)

    Schuster, J.

    2018-02-01

    Military requirements demand both single- and dual-color infrared (IR) imaging systems with both high resolution and sharp contrast. To quantify the performance of these imaging systems, a key measure of performance, the modulation transfer function (MTF), describes how well an optical system reproduces an object's contrast in the image plane at different spatial frequencies. At the center of an IR imaging system is the focal plane array (FPA). IR FPAs are hybrid structures consisting of a semiconductor detector pixel array, typically fabricated from HgCdTe, InGaAs or III-V superlattice materials, hybridized with heat/pressure to a silicon read-out integrated circuit (ROIC), with indium bumps on each pixel providing the mechanical and electrical connection. Due to the growing sophistication of the pixel arrays in these FPAs, sophisticated modeling techniques are required to predict, understand, and benchmark the pixel array MTF that contributes to the total imaging system MTF. To model the pixel array MTF, computationally exhaustive 2D and 3D numerical simulation approaches are required to correctly account for complex architectures and effects such as lateral diffusion from the pixel corners. It is paramount to accurately model the lateral diffusion (pixel crosstalk), as it can become the dominant mechanism limiting the detector MTF if not properly mitigated. Once the detector MTF has been simulated, it is directly decomposed into its constituent contributions to reveal exactly what is limiting the total detector MTF, providing a path for optimization. An overview of the MTF will be given and the simulation approach will be discussed in detail, along with how different simulation parameters affect the MTF calculation. Finally, MTF optimization strategies (crosstalk mitigation) will be discussed.
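    The MTF definition used above, and the effect of crosstalk on it, can be sketched in a few lines: the detector MTF is the normalized magnitude of the Fourier transform of the line spread function (LSF), and broadening the LSF with a diffusion kernel depresses the MTF at high spatial frequencies. The pixel pitch and diffusion length below are assumed illustrative values, not parameters from the paper:

```python
import numpy as np

pitch = 10.0                        # assumed pixel pitch, microns
x = np.linspace(-100, 100, 4001)    # spatial axis, microns
dx = x[1] - x[0]

def mtf(lsf):
    """MTF as the magnitude of the Fourier transform of the LSF,
    normalized so MTF(0) = 1."""
    m = np.abs(np.fft.rfft(lsf))
    return m / m[0]

# Ideal pixel: rectangular LSF one pixel wide. With crosstalk: the same
# pixel convolved with an exponential lateral-diffusion kernel (assumed
# 5 um diffusion length).
lsf_ideal = (np.abs(x) <= pitch / 2).astype(float)
kernel = np.exp(-np.abs(x) / 5.0)
lsf_diffused = np.convolve(lsf_ideal, kernel, mode="same")

freqs = np.fft.rfftfreq(x.size, d=dx)   # cycles per micron
nyquist = 1.0 / (2 * pitch)             # pixel Nyquist frequency
i_nyq = np.argmin(np.abs(freqs - nyquist))
mtf_ideal_nyq = mtf(lsf_ideal)[i_nyq]       # ~2/pi for a rect LSF
mtf_diffused_nyq = mtf(lsf_diffused)[i_nyq]
```

The drop from `mtf_ideal_nyq` to `mtf_diffused_nyq` at the Nyquist frequency is the kind of crosstalk-driven degradation the paper's 2D/3D simulations are built to quantify and mitigate.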

  6. Reproducibility of haemodynamical simulations in a subject-specific stented aneurysm model--a report on the Virtual Intracranial Stenting Challenge 2007.

    PubMed

    Radaelli, A G; Augsburger, L; Cebral, J R; Ohta, M; Rüfenacht, D A; Balossino, R; Benndorf, G; Hose, D R; Marzo, A; Metcalfe, R; Mortier, P; Mut, F; Reymond, P; Socci, L; Verhegghe, B; Frangi, A F

    2008-07-19

    This paper presents the results of the Virtual Intracranial Stenting Challenge (VISC) 2007, an international initiative whose aim was to establish the reproducibility of state-of-the-art haemodynamical simulation techniques in subject-specific stented models of intracranial aneurysms (IAs). IAs are pathological dilatations of the cerebral artery walls, which are associated with high mortality and morbidity rates due to subarachnoid haemorrhage following rupture. The deployment of a stent as flow diverter has recently been indicated as a promising treatment option, which has the potential to protect the aneurysm by reducing the action of haemodynamical forces and facilitating aneurysm thrombosis. The direct assessment of changes in aneurysm haemodynamics after stent deployment is hampered by limitations in existing imaging techniques and currently requires resorting to numerical simulations. Numerical simulations also have the potential to assist in the personalized selection of an optimal stent design prior to intervention. However, from the current literature it is difficult to assess the level of technological advancement and the reproducibility of haemodynamical predictions in stented patient-specific models. The VISC 2007 initiative engaged in the development of a multicentre-controlled benchmark to analyse differences induced by diverse grid generation and computational fluid dynamics (CFD) technologies. The challenge also represented an opportunity to provide a survey of available technologies currently adopted by international teams from both academic and industrial institutions for constructing computational models of stented aneurysms. 
    The results demonstrate the ability of current strategies to consistently quantify the performance of three commercial intracranial stents, and they help reinforce confidence in haemodynamical simulation, a step forward towards the introduction of simulation tools to support diagnostics and interventional planning.

  7. Benchmark of Ab Initio Bethe-Salpeter Equation Approach with Numeric Atom-Centered Orbitals

    NASA Astrophysics Data System (ADS)

    Liu, Chi; Kloppenburg, Jan; Kanai, Yosuke; Blum, Volker

    The Bethe-Salpeter equation (BSE) approach based on the GW approximation has been shown to be successful for predicting the optical spectra of solids and, recently, also of small molecules. We here present an all-electron implementation of the BSE using numeric atom-centered orbital (NAO) basis sets. In this work, we present a benchmark of the BSE implementation in FHI-aims for the low-lying excitation energies of a set of small organic molecules, the well-known Thiel's set. The difference between our implementation (using an analytic continuation of the GW self-energy on the real axis) and the results generated by a fully frequency-dependent GW treatment on the real axis is on the order of 0.07 eV for the benchmark molecular set. We study the convergence behavior towards the complete basis set limit for excitation spectra, using a group of valence correlation consistent NAO basis sets (NAO-VCC-nZ), as well as standard NAO basis sets for ground-state DFT with extended augmentation functions (NAO+aug). The BSE results and convergence behavior are compared to linear-response time-dependent DFT, where excellent numerical convergence is shown for the NAO+aug basis sets.

  8. Theory comparison and numerical benchmarking on neoclassical toroidal viscosity torque

    NASA Astrophysics Data System (ADS)

    Wang, Zhirui; Park, Jong-Kyu; Liu, Yueqiang; Logan, Nikolas; Kim, Kimin; Menard, Jonathan E.

    2014-04-01

    Systematic comparison and numerical benchmarking have been successfully carried out among three different approaches to neoclassical toroidal viscosity (NTV) theory and the corresponding codes: IPEC-PENT is developed based on the combined NTV theory without geometric simplifications [Park et al., Phys. Rev. Lett. 102, 065002 (2009)]; MARS-Q includes the smoothly connected NTV formula [Shaing et al., Nucl. Fusion 50, 025022 (2010)] based on Shaing's analytic formulation in various collisionality regimes; MARS-K, originally computing the drift kinetic energy, is upgraded to compute the NTV torque based on the equivalence between drift kinetic energy and NTV torque [J.-K. Park, Phys. Plasmas 18, 110702 (2011)]. The derivation and numerical results both indicate that the imaginary part of the drift kinetic energy computed by MARS-K is equivalent to the NTV torque in IPEC-PENT. In the benchmark of precession resonance between MARS-Q and MARS-K/IPEC-PENT, the agreement and correlation between the connected NTV formula and the combined NTV theory in different collisionality regimes are shown for the first time. Additionally, both IPEC-PENT and MARS-K indicate the importance of the bounce harmonic resonance, which can greatly enhance the NTV torque when the E × B drift frequency reaches the bounce resonance condition.

  9. Benchmarking of dynamic simulation predictions in two software platforms using an upper limb musculoskeletal model

    PubMed Central

    Saul, Katherine R.; Hu, Xiao; Goehler, Craig M.; Vidt, Meghan E.; Daly, Melissa; Velisar, Anca; Murray, Wendy M.

    2014-01-01

    Several open-source or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated to what extent differences in technical implementations in popular simulation software environments result in differences in kinematic predictions for single and multijoint movements using EMG- and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms. PMID:24995410

  10. Benchmarking of dynamic simulation predictions in two software platforms using an upper limb musculoskeletal model.

    PubMed

    Saul, Katherine R; Hu, Xiao; Goehler, Craig M; Vidt, Meghan E; Daly, Melissa; Velisar, Anca; Murray, Wendy M

    2015-01-01

    Several open-source or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated to what extent differences in technical implementations in popular simulation software environments result in differences in kinematic predictions for single and multijoint movements using EMG- and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms.

  11. Computers for real time flight simulation: A market survey

    NASA Technical Reports Server (NTRS)

    Bekey, G. A.; Karplus, W. J.

    1977-01-01

    An extensive computer market survey was made to determine those available systems suitable for current and future flight simulation studies at Ames Research Center. The primary requirement is for the computation of relatively high frequency content (5 Hz) math models representing powered lift flight vehicles. The Rotor Systems Research Aircraft (RSRA) was used as a benchmark vehicle for computation comparison studies. The general nature of helicopter simulations and a description of the benchmark model are presented, and some of the sources of simulation difficulties are examined. A description of various applicable computer architectures is presented, along with detailed discussions of leading candidate systems and comparisons between them.

  12. Performance of Multi-chaotic PSO on a shifted benchmark functions set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan

    2015-03-10

    In this paper the performance of the Multi-chaotic PSO algorithm is investigated using two shifted benchmark functions. The purpose of shifted benchmark functions is to simulate time-variant real-world problems. The results of the chaotic PSO are compared with the canonical version of the algorithm. It is concluded that the multi-chaotic approach can lead to better results in the optimization of shifted functions.
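
    As an illustration of the shifted-function idea, here is a minimal sketch of a canonical PSO minimizing a shifted sphere function. All parameter values are illustrative, not those used in the paper; the multi-chaotic variant would replace the pseudo-random draws with sequences generated by chaotic maps.

```python
import numpy as np

def shifted_sphere(x, shift):
    # Shifted sphere benchmark: the minimum moves from the origin to `shift`.
    return np.sum((x - shift) ** 2)

def pso(f, dim=2, n_particles=20, iters=200, seed=0):
    # Canonical PSO with inertia weight (illustrative parameter choices).
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([f(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

shift = np.array([1.0, -2.0])
best, best_val = pso(lambda x: shifted_sphere(x, shift))
```

    On this easy landscape the swarm converges close to the shifted optimum; the benchmark difficulty comes from shifting the optimum between runs to mimic time-variant problems.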

  13. Benchmarking of Neutron Production of Heavy-Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  14. Benchmarking of Heavy Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  15. Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems

    NASA Astrophysics Data System (ADS)

    Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald

    A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no way to fairly measure the quality of such optimizations. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today's benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction, access patterns are constantly shifting. We present a benchmark that simulates a web information system with interaction of large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark considers the temporal dependency of user interaction. Its main focus is to measure the adaptability of a database management system under shifting workloads. We give details of our design approach, which uses sophisticated pattern analysis and data mining techniques.

  16. Physical modeling of the effects of climate change on freshwater lenses

    NASA Astrophysics Data System (ADS)

    Stoeckl, L.; Houben, G.

    2012-04-01

    The investigation of the fragile equilibrium between fresh and saline water on oceanic islands is of major importance for the sustainable management and protection of freshwater lenses. Overexploitation leads to salt water intrusion (up-coning), in turn causing damage to or even destruction of a lens in the long term. We have performed a series of experiments at the laboratory scale to investigate and visualize processes in freshwater lenses under different boundary conditions. In addition, these scenarios were numerically simulated using the finite-element model FEFLOW. Results were also compared to analytical solutions for problems regarding, e.g., mean travel times of flow paths within a freshwater lens. At the laboratory scale, a cross section of an island was simulated by setting up a sand-box model (200 cm x 50 cm x 5 cm). Lens dynamics are driven by the density contrast between saline and fresh water, the recharge rate, and the Kf-values of the medium. We used a time-dependent, sequential application of the tracers uranine, eosine and indigotine to represent different recharge events. With a stepwise increase of freshwater recharge, we could show that the maximum thickness of the lens increased non-linearly. Moreover, we measured that the degradation of a freshwater lens after turning off the precipitation does not follow the same function as its development does. This means that a steady-state freshwater lens does not degrade as fast as it develops under constant recharge. On the other hand, we could show that this does not hold for a partial degradation of the lens caused by transient forcings such as anthropogenic pumping or climate change. This is because the recovery to equilibrium is always a quasi-asymptotic process. Thus, re-equilibration to steady state after, e.g., a drought will take longer than the degradation during the drought itself. This behavior could also be verified with the numerical finite-element model FEFLOW. 
In addition, numerical simulations will be used to close the gap between laboratory results and future field investigations. For example, impacts of sea level rise induced by climate change can be up-scaled and compared to the results of the physical experiments. Analytical models (e.g. Fetter 1972, Vacher et al. 1990, Chesnaux & Allen 2007) were used as benchmarks in our investigations. Models in general are simplifications of a real situation that aim to capture the relevant processes. In further investigations it is planned to compare different models and to generate new benchmark experiments that improve the accuracy of existing models.
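
    The analytical island-lens models cited here (e.g. Fetter 1972, Vacher et al. 1990) build on the classical Ghyben-Herzberg relation for the depth of the fresh/salt interface. A minimal sketch, using standard textbook densities rather than values from this study:

```python
def ghyben_herzberg_depth(h, rho_f=1000.0, rho_s=1025.0):
    """Depth of the fresh/salt interface below sea level (hydrostatic
    approximation) for a water table standing h above sea level.
    With typical densities this is about 40 * h."""
    return rho_f / (rho_s - rho_f) * h
```

    For example, a water table 1 m above sea level implies a lens extending roughly 40 m below sea level, which is why small changes in recharge or pumping translate into large changes in lens thickness.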

  17. Two dimensional model for coherent synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Huang, Chengkun; Kwan, Thomas J. T.; Carlsten, Bruce E.

    2013-01-01

    Understanding coherent synchrotron radiation (CSR) effects in a bunch compressor requires an accurate model accounting for the realistic beam shape and parameters. We extend the well-known 1D CSR analytic model into two dimensions and develop a simple numerical model based on the Liénard-Wiechert formula for the CSR field of a coasting beam. This CSR numerical model includes the 2D spatial dependence of the field in the bending plane and is accurate for arbitrary beam energy. It also removes the singularity in the space charge field calculation present in a 1D model. Good agreement is obtained with 1D CSR analytic result for free electron laser (FEL) related beam parameters but it can also give a more accurate result for low-energy/large spot size beams and off-axis/transient fields. This 2D CSR model can be used for understanding the limitation of various 1D models and for benchmarking fully electromagnetic multidimensional particle-in-cell simulations for self-consistent CSR modeling.

  18. A Numerical Simulator for Three-Dimensional Flows Through Vibrating Blade Rows

    NASA Technical Reports Server (NTRS)

    Chuang, H. Andrew; Verdon, Joseph M.

    1998-01-01

    The three-dimensional, multi-stage, unsteady, turbomachinery analysis, TURBO, has been extended to predict the aeroelastic and aeroacoustic response behaviors of a single blade row operating within a cylindrical annular duct. In particular, a blade vibration capability has been incorporated so that the TURBO analysis can be applied over a solution domain that deforms with a vibratory blade motion. Also, unsteady far-field conditions have been implemented to render the computational boundaries at inlet and exit transparent to outgoing unsteady disturbances. The modified TURBO analysis is applied herein to predict unsteady subsonic and transonic flows. The intent is to partially validate this nonlinear analysis for blade flutter applications, via numerical results for benchmark unsteady flows, and to demonstrate the analysis for a realistic fan rotor. For these purposes, we have considered unsteady subsonic flows through a 3D version of the 10th Standard Cascade, and unsteady transonic flows through the first stage rotor of the NASA Lewis, Rotor 67, two-stage fan.

  19. PyVCI: A flexible open-source code for calculating accurate molecular infrared spectra

    NASA Astrophysics Data System (ADS)

    Sibaev, Marat; Crittenden, Deborah L.

    2016-06-01

    The PyVCI program package is a general purpose open-source code for simulating accurate molecular spectra, based upon force field expansions of the potential energy surface in normal mode coordinates. It includes harmonic normal coordinate analysis and vibrational configuration interaction (VCI) algorithms, implemented primarily in Python for accessibility but with time-consuming routines written in C. Coriolis coupling terms may be optionally included in the vibrational Hamiltonian. Non-negligible VCI matrix elements are stored in sparse matrix format to alleviate the diagonalization problem. CPU and memory requirements may be further controlled by algorithmic choices and/or numerical screening procedures, and recommended values are established by benchmarking using a test set of 44 molecules for which accurate analytical potential energy surfaces are available. Force fields in normal mode coordinates are obtained from the PyPES library of high quality analytical potential energy surfaces (to 6th order) or by numerical differentiation of analytic second derivatives generated using the GAMESS quantum chemical program package (to 4th order).
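
    The screening-plus-sparse-storage strategy described above can be sketched generically. This is a SciPy illustration, not PyVCI's actual code; the threshold value and toy matrix are arbitrary:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigsh

def screen_and_diagonalize(H, threshold=1e-10, n_states=3):
    # Drop negligible matrix elements, store the Hamiltonian in sparse
    # (CSR) format, then extract the lowest eigenvalues with Lanczos.
    H_screened = np.where(np.abs(H) >= threshold, H, 0.0)
    H_sparse = csr_matrix(H_screened)
    vals = eigsh(H_sparse, k=n_states, which='SA', return_eigenvectors=False)
    return np.sort(vals)

# Toy symmetric "VCI-like" matrix: dominant diagonal, one tiny coupling
# that falls below the screening threshold and is discarded.
H = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
H[0, 4] = H[4, 0] = 1e-12
lowest = screen_and_diagonalize(H)
```

    Screening before storage keeps memory proportional to the number of non-negligible elements rather than the full matrix dimension squared, which is the point of the sparse-matrix format mentioned in the abstract.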

  20. On the development of OpenFOAM solvers based on explicit and implicit high-order Runge-Kutta schemes for incompressible flows with heat transfer

    NASA Astrophysics Data System (ADS)

    D'Alessandro, Valerio; Binci, Lorenzo; Montelpare, Sergio; Ricci, Renato

    2018-01-01

    Open-source CFD codes provide suitable environments for implementing and testing low-dissipative algorithms typically used to simulate turbulence. In this research work we developed CFD solvers for incompressible flows based on high-order explicit and diagonally implicit Runge-Kutta (RK) schemes for time integration. In particular, an iterated PISO-like procedure based on Rhie-Chow correction was used to handle pressure-velocity coupling within each implicit RK stage. For the explicit approach, a projected scheme was used to avoid the "checker-board" effect. The above-mentioned approaches were also extended to flow problems involving heat transfer. It is worth noting that the numerical technology available in the OpenFOAM library was used for space discretization. In this work, we additionally explore the reliability and effectiveness of the proposed implementations by computing several unsteady flow benchmarks; we also show that the numerical diffusion due to the time integration approach is completely canceled using the solution techniques proposed here.

  1. Detecting many-body-localization lengths with cold atoms

    NASA Astrophysics Data System (ADS)

    Guo, Xuefei; Li, Xiaopeng

    2018-03-01

    Considering ultracold atoms in optical lattices, we propose experimental protocols to study the many-body-localization (MBL) length and criticality in quench dynamics. Through numerical simulations with exact diagonalization, we show that in the MBL phase the perturbed density profile following a local quench remains exponentially localized in postquench dynamics. The size of this density profile after long-time dynamics defines a localization length, which tends to diverge at the MBL-to-ergodic transition as we increase the system size. The determined localization transition point agrees with previous exact diagonalization calculations using other diagnostics. Our numerical results provide evidence for violation of the Harris-Chayes bound for the MBL criticality. The critical exponent ν can be extracted from our proposed dynamical procedure, which can then be used directly in experiments to determine whether the Harris-Chayes bound holds for the MBL transition. These proposed protocols to detect localization criticality are justified by benchmarking against the well-established results for noninteracting three-dimensional Anderson localization.
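
    Extracting a localization length from an exponentially localized profile, as described above, amounts to a log-linear fit. A minimal sketch on synthetic data (not the paper's protocol; the profile here is an idealized pure exponential):

```python
import numpy as np

def localization_length(x, density):
    # Fit density ~ exp(-|x|/xi) on a log scale; the slope gives -1/xi.
    mask = density > 0
    slope, _ = np.polyfit(np.abs(x[mask]), np.log(density[mask]), 1)
    return -1.0 / slope

# Hypothetical post-quench density profile with known decay length.
x = np.arange(-20, 21, dtype=float)
xi_true = 3.0
profile = np.exp(-np.abs(x) / xi_true)
xi_est = localization_length(x, profile)
```

    In practice the fit would be restricted to the exponential tail and averaged over disorder realizations; the divergence of the fitted xi with system size is what signals the transition.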

  2. A SECOND-ORDER DIVERGENCE-CONSTRAINED MULTIDIMENSIONAL NUMERICAL SCHEME FOR RELATIVISTIC TWO-FLUID ELECTRODYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amano, Takanobu, E-mail: amano@eps.s.u-tokyo.ac.jp

    A new multidimensional simulation code for relativistic two-fluid electrodynamics (RTFED) is described. The basic equations consist of the full set of Maxwell’s equations coupled with relativistic hydrodynamic equations for two separate charged fluids, representing the dynamics of either an electron–positron or an electron–proton plasma. It can be recognized as an extension of conventional relativistic magnetohydrodynamics (RMHD). Finite resistivity may be introduced as a friction between the two species, which reduces to resistive RMHD in the long-wavelength limit without suffering from a singularity at infinite conductivity. A numerical scheme based on the HLL (Harten–Lax–van Leer) Riemann solver is proposed that exactly preserves the two divergence constraints of Maxwell’s equations simultaneously. Several benchmark problems demonstrate that it is capable of describing RMHD shocks/discontinuities in the long-wavelength limit, as well as dispersive characteristics due to the two-fluid effect appearing at small scales. This shows that the RTFED model is a promising tool for high-energy astrophysics applications.
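
    The HLL flux at a cell interface has a standard closed form; a minimal sketch for a scalar conservation law (here Burgers' equation as a stand-in, since the paper applies the solver to the full Maxwell plus two-fluid system):

```python
def hll_flux(uL, uR, f, sL, sR):
    # HLL approximate Riemann flux between left/right states uL, uR,
    # given flux function f and wave-speed bounds sL <= sR.
    if sL >= 0.0:
        return f(uL)          # all waves move right: pure upwind left
    if sR <= 0.0:
        return f(uR)          # all waves move left: pure upwind right
    return (sR * f(uL) - sL * f(uR) + sL * sR * (uR - uL)) / (sR - sL)

# Burgers' equation f(u) = u^2/2 with wave-speed bounds from the states.
f = lambda u: 0.5 * u * u
uL, uR = 1.0, -1.0
flux = hll_flux(uL, uR, f, min(uL, uR), max(uL, uR))
```

    The single intermediate state makes HLL robust but diffusive; the divergence-preserving property claimed in the abstract comes from how the fluxes are arranged on the grid, not from the flux formula itself.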

  3. Photosynthetic productivity and its efficiencies in ISIMIP2a biome models: benchmarking for impact assessment studies

    NASA Astrophysics Data System (ADS)

    Ito, Akihiko; Nishina, Kazuya; Reyer, Christopher P. O.; François, Louis; Henrot, Alexandra-Jane; Munhoven, Guy; Jacquemin, Ingrid; Tian, Hanqin; Yang, Jia; Pan, Shufen; Morfopoulos, Catherine; Betts, Richard; Hickler, Thomas; Steinkamp, Jörg; Ostberg, Sebastian; Schaphoff, Sibyll; Ciais, Philippe; Chang, Jinfeng; Rafique, Rashid; Zeng, Ning; Zhao, Fang

    2017-08-01

    Simulating vegetation photosynthetic productivity (or gross primary production, GPP) is a critical feature of the biome models used for impact assessments of climate change. We conducted a benchmarking of global GPP simulated by eight biome models participating in the second phase of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP2a) with four meteorological forcing datasets (30 simulations), using independent GPP estimates and recent satellite data of solar-induced chlorophyll fluorescence as a proxy of GPP. The simulated global terrestrial GPP ranged from 98 to 141 Pg C yr-1 (1981-2000 mean); considerable inter-model and inter-data differences were found. Major features of spatial distribution and seasonal change of GPP were captured by each model, showing good agreement with the benchmarking data. All simulations showed incremental trends of annual GPP, seasonal-cycle amplitude, radiation-use efficiency, and water-use efficiency, mainly caused by the CO2 fertilization effect. The incremental slopes were higher than those obtained by remote sensing studies, but comparable with those by recent atmospheric observation. Apparent differences were found in the relationship between GPP and incoming solar radiation, for which forcing data differed considerably. The simulated GPP trends co-varied with a vegetation structural parameter, leaf area index, at model-dependent strengths, implying the importance of constraining canopy properties. In terms of extreme events, GPP anomalies associated with a historical El Niño event and large volcanic eruption were not consistently simulated in the model experiments due to deficiencies in both forcing data and parameterized environmental responsiveness. Although the benchmarking demonstrated the overall advancement of contemporary biome models, further refinements are required, for example, for solar radiation data and vegetation canopy schemes.

  4. Two-fluid dusty shocks: simple benchmarking problems and applications to protoplanetary discs

    NASA Astrophysics Data System (ADS)

    Lehmann, Andrew; Wardle, Mark

    2018-05-01

    The key role that dust plays in the interstellar medium has motivated the development of numerical codes designed to study the coupled evolution of dust and gas in systems such as turbulent molecular clouds and protoplanetary discs. Drift between dust and gas has proven to be important as well as numerically challenging. We provide simple benchmarking problems for dusty gas codes by numerically solving the two-fluid dust-gas equations for steady, plane-parallel shock waves. The two distinct shock solutions to these equations allow a numerical code to test different forms of drag between the two fluids, the strength of that drag and the dust to gas ratio. We also provide an astrophysical application of J-type dust-gas shocks to studying the structure of accretion shocks on to protoplanetary discs. We find that two-fluid effects are most important for grains larger than 1 μm, and that the peak dust temperature within an accretion shock provides a signature of the dust-to-gas ratio of the infalling material.

  5. Numerical simulation on hydromechanical coupling in porous media adopting three-dimensional pore-scale model.

    PubMed

    Liu, Jianjun; Song, Rui; Cui, Mengmeng

    2014-01-01

    A novel approach to simulating hydromechanical coupling in pore-scale models of porous media is presented in this paper. Parameters of the sandstone samples, such as the stress-strain curve, Poisson's ratio, and permeability under different pore and confining pressures, are measured at laboratory scale. A micro-CT scanner is employed to scan the samples for three-dimensional images used as input to construct the model. Accordingly, four physical models possessing the same pore and rock-matrix characteristics as the natural sandstones are developed. Based on the micro-CT images, three-dimensional finite element models of both the rock matrix and the pore space are established on the MIMICS and ICEM software platforms. The Navier-Stokes equation and an elastic constitutive equation serve as the mathematical model for the simulation. A hydromechanical coupling analysis in the pore-scale finite element model of the porous media is carried out with the ANSYS and CFX software. Thereby, the permeability of the sandstone samples under different pore and confining pressures is predicted. The simulation results agree well with the benchmark data. By reproducing the in situ stress state, the accuracy of pore-scale permeability prediction for porous rock is improved. Consequently, the effects of pore pressure and confining pressure on permeability are revealed from the microscopic view.

  6. Numerical Simulation on Hydromechanical Coupling in Porous Media Adopting Three-Dimensional Pore-Scale Model

    PubMed Central

    Liu, Jianjun; Song, Rui; Cui, Mengmeng

    2014-01-01

    A novel approach to simulating hydromechanical coupling in pore-scale models of porous media is presented in this paper. Parameters of the sandstone samples, such as the stress-strain curve, Poisson's ratio, and permeability under different pore and confining pressures, are measured at laboratory scale. A micro-CT scanner is employed to scan the samples for three-dimensional images used as input to construct the model. Accordingly, four physical models possessing the same pore and rock-matrix characteristics as the natural sandstones are developed. Based on the micro-CT images, three-dimensional finite element models of both the rock matrix and the pore space are established on the MIMICS and ICEM software platforms. The Navier-Stokes equation and an elastic constitutive equation serve as the mathematical model for the simulation. A hydromechanical coupling analysis in the pore-scale finite element model of the porous media is carried out with the ANSYS and CFX software. Thereby, the permeability of the sandstone samples under different pore and confining pressures is predicted. The simulation results agree well with the benchmark data. By reproducing the in situ stress state, the accuracy of pore-scale permeability prediction for porous rock is improved. Consequently, the effects of pore pressure and confining pressure on permeability are revealed from the microscopic view. PMID:24955384

  7. CHORUS code for solar and planetary convection

    NASA Astrophysics Data System (ADS)

    Wang, Junfeng

    Turbulent, density stratified convection is ubiquitous in stars and planets. Numerical simulation has become an indispensable tool for understanding it. A primary contribution of this dissertation work is the creation of the Compressible High-ORder Unstructured Spectral-difference (CHORUS) code for simulating the convection and related fluid dynamics in the interiors of stars and planets. In this work, the CHORUS code is verified by using two newly defined benchmark cases and demonstrates excellent parallel performance. It has unique potential to simulate challenging physical phenomena such as multi-scale solar convection, core convection, and convection in oblate, rapidly-rotating stars. In order to exploit its unique capabilities, the CHORUS code has been extended to perform the first 3D simulations of convection in oblate, rapidly rotating solar-type stars. New insights are obtained with respect to the influence of oblateness on the convective structure and heat flux transport. With the presence of oblateness resulting from the centrifugal force effect, the convective structure in the polar regions decouples from the main convective modes in the equatorial regions. Our convection simulations predict that heat flux peaks in both the polar and equatorial regions, contrary to previous theoretical results that predict darker equators. High latitudinal zonal jets are also observed in the simulations.

  8. Simulating single-phase and two-phase non-Newtonian fluid flow of a digital rock scanned at high resolution

    NASA Astrophysics Data System (ADS)

    Tembely, Moussa; Alsumaiti, Ali M.; Jouini, Mohamed S.; Rahimov, Khurshed; Dolatabadi, Ali

    2017-11-01

    Most digital rock physics (DRP) simulations focus on Newtonian fluids and overlook the detailed description of rock-fluid interaction. A better understanding of multiphase non-Newtonian fluid flow at the pore scale is crucial for optimizing enhanced oil recovery (EOR). The Darcy-scale properties of reservoir rocks, such as the capillary pressure curves and the relative permeability, are controlled by the pore-scale behavior of the multiphase flow. In the present work, a volume of fluid (VOF) method coupled with an adaptive meshing technique is used to perform pore-scale simulation on 3D X-ray micro-tomography (CT) images of rock samples. The numerical model is based on the resolution of the Navier-Stokes equations along with a phase fraction equation incorporating a dynamic contact model. The simulations of single-phase flow for the absolute permeability showed good agreement with the literature benchmark. Subsequently, the code is used to simulate a two-phase flow consisting of a polymer solution displaying a shear-thinning power-law viscosity. The simulations enable assessment of the impact of the consistency factor (K) and the behavior index (n), along with the two contact angles (advancing and receding), on the relative permeability.
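
    The shear-thinning power-law viscosity mentioned above has a standard form, mu = K * gamma_dot**(n - 1). A minimal sketch with illustrative clamping bounds (not the paper's implementation; the bound values are arbitrary):

```python
def power_law_viscosity(gamma_dot, K, n, mu_min=1e-6, mu_max=10.0):
    # Apparent viscosity of a power-law fluid: mu = K * gamma_dot**(n - 1).
    # n < 1 gives shear thinning; the bounds avoid the singularity that
    # the raw formula would produce as the shear rate approaches zero.
    mu = K * gamma_dot ** (n - 1.0)
    return min(max(mu, mu_min), mu_max)
```

    For K = 1 and n = 0.5, doubling the shear rate from 4 to 16 halves the apparent viscosity from 0.5 to 0.25, which is the thinning behavior polymer floods exploit.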

  9. An analytical benchmark and a Mathematica program for MD codes: Testing LAMMPS on the 2nd generation Brenner potential

    NASA Astrophysics Data System (ADS)

    Favata, Antonino; Micheletti, Andrea; Ryu, Seunghwa; Pugno, Nicola M.

    2016-10-01

    An analytical benchmark and a simple consistent Mathematica program are proposed for graphene and carbon nanotubes, which may serve to test any molecular dynamics code implemented with REBO potentials. By exploiting the benchmark, we checked results produced by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) when adopting the second-generation Brenner potential. We found that this code, in its current implementation, produces results which are offset from those of the benchmark by a significant amount, and we provide evidence of the reason.

  10. Simplified Numerical Analysis of ECT Probe - Eddy Current Benchmark Problem 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sikora, R.; Chady, T.; Gratkowski, S.

    2005-04-09

    In this paper the third eddy current benchmark problem is considered. The objective of the benchmark is to determine the optimal operating frequency and size of a pancake coil designed for testing tubes made of Inconel. This can be achieved by maximizing the change in impedance of the coil due to a flaw. Approximation functions of the probe (coil) characteristics were developed and used in order to reduce the number of required calculations, resulting in a significant speed-up of the optimization process. An optimal testing frequency and probe size were obtained as the final result of the calculation.
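    A one-dimensional frequency optimization of this kind can be illustrated with a golden-section search over a cheap surrogate. The objective below is a hypothetical stand-in for the coil's impedance-change characteristic (a smooth single-peaked function), not the approximation function developed in the paper:

```python
import math

def golden_section_max(f, a, b, tol=1e-6):
    """Locate the maximum of a unimodal function f on [a, b]."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi ~ 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) > f(d):          # maximum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                    # maximum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0

# Hypothetical surrogate for |delta Z(f)|, peaked near 250 kHz.
def surrogate_delta_z(freq_khz):
    return math.exp(-((freq_khz - 250.0) / 80.0) ** 2)

best_freq_khz = golden_section_max(surrogate_delta_z, 10.0, 1000.0)
```

    Each iteration shrinks the bracket by the golden ratio, so the optimum is located with far fewer objective evaluations than a uniform frequency sweep would need.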

  11. Experimental benchmarking of a Monte Carlo dose simulation code for pediatric CT

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Samei, Ehsan; Yoshizumi, Terry; Colsher, James G.; Jones, Robert P.; Frush, Donald P.

    2007-03-01

    In recent years, there has been a desire to reduce CT radiation dose to children because of their susceptibility and prolonged risk for cancer induction. Concerns arise, however, as to the impact of dose reduction on image quality and thus potentially on diagnostic accuracy. To study the dose and image quality relationship, we are developing a simulation code to calculate organ dose in pediatric CT patients. To benchmark this code, a cylindrical phantom was built to represent a pediatric torso, which allows measurements of dose distributions from its center to its periphery. Dose distributions for axial CT scans were measured on a 64-slice multidetector CT (MDCT) scanner (GE Healthcare, Chalfont St. Giles, UK). The same measurements were simulated using a Monte Carlo code (PENELOPE, Universitat de Barcelona) with the applicable CT geometry including bowtie filter. The deviations between simulated and measured dose values were generally within 5%. To our knowledge, this work is one of the first attempts to compare measured radial dose distributions on a cylindrical phantom with Monte Carlo simulated results. It provides a simple and effective method for benchmarking organ dose simulation codes and demonstrates the potential of Monte Carlo simulation for investigating the relationship between dose and image quality for pediatric CT patients.

  12. Direct numerical simulations of magmatic differentiation at the microscopic scale

    NASA Astrophysics Data System (ADS)

    Sethian, J.; Suckale, J.; Elkins-Tanton, L. T.

    2010-12-01

    A key question in the context of magmatic differentiation and fractional crystallization is the ability of crystals to decouple from the ambient fluid and sink or rise. Field data indicate a complex spectrum of behavior ranging from rapid sedimentation to continued entrainment. Theoretical and laboratory studies paint a similarly rich picture. The goal of this study is to provide a detailed numerical assessment of the competing effects of sedimentation and entrainment at the scale of individual crystals. The decision to simulate magmatic differentiation at the grain scale comes at the price of not being able to simultaneously solve for the convective velocity field at the macroscopic scale, but has the crucial advantage of enabling us to fully resolve the dynamics of the system from first principles without requiring any simplifying assumptions. The numerical approach used in this study is a customized computational methodology developed specifically for simulations of solid-fluid coupling in geophysical systems. The algorithm relies on a two-step projection scheme: in the first step, we solve the multiple-phase Navier-Stokes or Stokes equations in both domains. In the second step, we project the velocity field in the solid domain onto a rigid-body motion by enforcing that the deformation tensor in the respective domain is zero. This procedure is also used to enforce the no-slip boundary condition on the solid-fluid interface. We have extensively validated and benchmarked the method. Our preliminary results indicate that, not unexpectedly, the competing effects of sedimentation and entrainment depend sensitively on the size distribution of the crystals, the aspect ratio of individual crystals, and the vigor of the ambient flow field. We provide a detailed scaling analysis and quantify our results in terms of the relevant non-dimensional numbers.
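    The second projection step, removing deformation from the solid domain, can be sketched in 2D: given velocities at solid grid points, the least-squares rigid-body motion (translation plus rotation about the centroid) is extracted and reimposed. This is a generic illustration of the idea, not the authors' implementation:

```python
def project_to_rigid_body(points, velocities):
    """Project a 2D velocity field onto the nearest rigid-body motion.

    points, velocities: lists of (x, y) tuples. Returns the projected
    velocities u = U_cm + omega x r, with r measured from the centroid.
    In least squares, U_cm is the mean velocity and omega the angular
    momentum divided by the moment of inertia (unit point masses).
    """
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    # Translational part: mean velocity of the solid domain.
    ux = sum(v[0] for v in velocities) / n
    uy = sum(v[1] for v in velocities) / n
    # Angular part: least-squares angular velocity about the centroid.
    num = sum((p[0] - cx) * (v[1] - uy) - (p[1] - cy) * (v[0] - ux)
              for p, v in zip(points, velocities))
    den = sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in points)
    omega = num / den if den else 0.0
    return [(ux - omega * (p[1] - cy), uy + omega * (p[0] - cx))
            for p in points]

# A purely rigid field (rotation at omega = 2 about the origin) is a
# fixed point of the projection: no deformation is removed.
pts = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
vel = [(-2.0 * y, 2.0 * x) for x, y in pts]
projected = project_to_rigid_body(pts, vel)
```

    Any deformational component of the input field is orthogonal to the rigid-motion subspace and is discarded by this projection, which is exactly the zero-deformation constraint the abstract describes.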

  13. Modelling of a Solar Thermal Power Plant for Benchmarking Blackbox Optimization Solvers

    NASA Astrophysics Data System (ADS)

    Lemyre Garneau, Mathieu

    A new family of problems is provided to serve as a benchmark for blackbox optimization solvers. The problems are single- or bi-objective and vary in complexity in terms of the number of variables (from 5 to 29), the type of variables (integer, real, categorical), the number of constraints (from 5 to 17) and their types (binary or continuous). In order to provide problems exhibiting dynamics that reflect real engineering challenges, they are extracted from an original numerical model of a concentrated solar power (CSP) plant with molten salt thermal storage. The model simulates the performance of the power plant using a high-level model of each of its main components, namely a heliostat field, a central cavity receiver, a molten salt heat storage, a steam generator and an idealized power block. The heliostat field layout is determined through a simple automatic strategy that finds the best individual positions on the field by considering their respective cosine efficiency, atmospheric scattering and spillage losses as a function of the design parameters. A Monte Carlo integration method is used to evaluate the heliostat field's optical performance throughout the day so that shadowing effects between heliostats are considered, and the results of this evaluation provide the inputs to simulate the levels and temperatures of the thermal storage. The molten salt storage inventory is used to transfer thermal energy to the power block, which simulates a simple Rankine cycle with a single steam turbine. Auxiliary models are used to provide additional optimization constraints on the investment cost, parasitic losses or component failures. The results of preliminary optimizations performed with the NOMAD software using default settings are provided to show the validity of the problems.

  14. Analytical solutions for benchmarking cold regions subsurface water flow and energy transport models: one-dimensional soil thaw with conduction and advection

    USGS Publications Warehouse

    Kurylyk, Barret L.; McKenzie, Jeffrey M; MacQuarrie, Kerry T. B.; Voss, Clifford I.

    2014-01-01

    Numerous cold regions water flow and energy transport models have emerged in recent years. Dissimilarities often exist in their mathematical formulations and/or numerical solution techniques, but few analytical solutions exist for benchmarking flow and energy transport models that include pore water phase change. This paper presents a detailed derivation of the Lunardini solution, an approximate analytical solution for predicting soil thawing subject to conduction, advection, and phase change. Fifteen thawing scenarios are examined by considering differences in porosity, surface temperature, Darcy velocity, and initial temperature. The accuracy of the Lunardini solution is shown to be proportional to the Stefan number. The analytical solution results obtained for soil thawing scenarios with water flow and advection are compared to those obtained from the finite element model SUTRA. Three problems, two involving the Lunardini solution and one involving the classic Neumann solution, are recommended as standard benchmarks for future model development and testing.
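    The conduction-only limit underlying such benchmarks is the classic Neumann solution, whose thaw front advances as X(t) = 2*lam*sqrt(alpha*t) once a transcendental equation is solved for lam. The sketch below solves the simplified one-phase Stefan problem by bisection; the material values are illustrative, not those of the paper's fifteen scenarios:

```python
import math

def stefan_lambda(stefan_number, tol=1e-12):
    """Solve lam * exp(lam**2) * erf(lam) = St / sqrt(pi) by bisection.

    This is the transcendental equation of the one-phase Stefan
    (Neumann) problem; the thaw front then advances as
    X(t) = 2 * lam * sqrt(alpha * t).
    """
    target = stefan_number / math.sqrt(math.pi)
    lo, hi = 0.0, 5.0  # the left-hand side is monotone increasing here
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mid * math.exp(mid * mid) * math.erf(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Illustrative values: c = 4182 J/(kg K), dT = 5 K, L = 334e3 J/kg,
# alpha ~ 1.4e-7 m^2/s. St = c * dT / L is the Stefan number.
St = 4182.0 * 5.0 / 334e3
lam = stefan_lambda(St)
front_after_1day = 2.0 * lam * math.sqrt(1.4e-7 * 86400.0)  # metres
```

    For small Stefan numbers lam is small and the quasi-steady approximations (such as Lunardini's) are accurate, which is consistent with the accuracy trend reported in the abstract.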

  15. Shallow water models as tool for tsunami current predictions in ports and harbors. Validation with Tohoku 2011 field data

    NASA Astrophysics Data System (ADS)

    Gonzalez Vida, J. M., Sr.; Macias Sanchez, J.; Castro, M. J.; Ortega, S.

    2015-12-01

    The ability of a model to compute and predict tsunami flow velocities is important for risk assessment and hazard mitigation. Substantial damage can be produced by high-velocity flows, particularly in harbors and bays, even when the wave height is small. Moreover, accurate simulation of tsunami flow velocities and accelerations is fundamental for advancing the study of tsunami sediment transport. These considerations led the National Tsunami Hazard Mitigation Program (NTHMP) to propose a benchmark exercise focused on modeling and simulating tsunami currents. Until recently, few direct measurements of tsunami velocities were available for comparison and model validation. After the Tohoku 2011 event, many current-meter measurements were made, mainly in harbors and channels. In this work we present part of the contribution made by the EDANYA group of the University of Malaga to the NTHMP workshop organized at Portland (USA), 9-10 February 2015. We have selected three of the five proposed benchmark problems. Two of them consist of real observed data from the Tohoku 2011 event, one at Hilo Harbour (Hawaii) and the other at Tauranga Bay (New Zealand). The third consists of laboratory experimental data for the inundation of Seaside City in Oregon. For this model validation the Tsunami-HySEA model, developed by the EDANYA group, was used. The overall conclusion of this validation exercise is that the Tsunami-HySEA model performed well in all the benchmark problems proposed. The greater spatial variability of tsunami velocity compared with wave height makes its precise numerical representation more difficult. The larger variability in velocities is likely a result of the behaviour of the flow as it is channelized and as it flows around bathymetric highs and structures. Wave height, on the other hand, does not respond as strongly to channelized flow as current velocity does.

  16. Computational ecology as an emerging science

    PubMed Central

    Petrovskii, Sergei; Petrovskaya, Natalia

    2012-01-01

    It has long been recognized that numerical modelling and computer simulations can be used as a powerful research tool to understand, and sometimes to predict, the tendencies and peculiarities in the dynamics of populations and ecosystems. It has been, however, much less appreciated that the context of modelling and simulations in ecology is essentially different from those that normally exist in other natural sciences. In our paper, we review the computational challenges arising in modern ecology in the spirit of computational mathematics, i.e. with our main focus on the choice and use of adequate numerical methods. Somewhat paradoxically, the complexity of ecological problems does not always require the use of complex computational methods. This paradox, however, can be easily resolved if we recall that application of sophisticated computational methods usually requires clear and unambiguous mathematical problem statement as well as clearly defined benchmark information for model validation. At the same time, many ecological problems still do not have mathematically accurate and unambiguous description, and available field data are often very noisy, and hence it can be hard to understand how the results of computations should be interpreted from the ecological viewpoint. In this scientific context, computational ecology has to deal with a new paradigm: conventional issues of numerical modelling such as convergence and stability become less important than the qualitative analysis that can be provided with the help of computational techniques. We discuss this paradigm by considering computational challenges arising in several specific ecological applications. PMID:23565336

  17. Benchmarking aerodynamic prediction of unsteady rotor aerodynamics of active flaps on wind turbine blades using ranging fidelity tools

    NASA Astrophysics Data System (ADS)

    Barlas, Thanasis; Jost, Eva; Pirrung, Georg; Tsiantas, Theofanis; Riziotis, Vasilis; Navalkar, Sachin T.; Lutz, Thorsten; van Wingerden, Jan-Willem

    2016-09-01

    Simulations of a stiff rotor configuration of the DTU 10 MW Reference Wind Turbine are performed in order to assess the impact of prescribed flap motion on the aerodynamic loads at the blade sectional and rotor integral levels. Results of the engineering models used by DTU (HAWC2), TUDelft (Bladed) and NTUA (hGAST) are compared to the CFD predictions of USTUTT-IAG (FLOWer). Results show fairly good agreement in terms of axial loading, while the alignment of tangential and drag-related forces across the numerical codes needs to be improved, together with the unsteady corrections associated with rotor wake dynamics. The use of a new wake model in HAWC2 shows considerable accuracy improvements.

  18. A fictitious domain finite element method for simulations of fluid-structure interactions: The Navier-Stokes equations coupled with a moving solid

    NASA Astrophysics Data System (ADS)

    Court, Sébastien; Fournié, Michel

    2015-05-01

    The paper extends a stabilized fictitious domain finite element method, initially developed for the Stokes problem, to the incompressible Navier-Stokes equations coupled with a moving solid. This method has the advantage of predicting an optimal approximation of the normal stress tensor at the interface. The dynamics of the solid is governed by Newton's laws, and the interface between the fluid and the structure is represented by a level-set function that cuts the elements of the mesh. An algorithm is proposed to treat the time evolution of the geometry, and numerical results are presented for a classical benchmark: the motion of a disk falling in a channel.

  19. Coupling of Multiple Coulomb Scattering with Energy Loss and Straggling in HZETRN

    NASA Technical Reports Server (NTRS)

    Mertens, Christopher J.; Wilson, John W.; Walker, Steven A.; Tweed, John

    2007-01-01

    The new version of the HZETRN deterministic transport code, based on Green's function methods and incorporating ground-based laboratory boundary conditions, has led to the development of analytical and numerical procedures that include off-axis dispersion of primary ion beams due to small-angle multiple Coulomb scattering. In this paper we present the theoretical formulation and computational procedures to compute ion beam broadening, and a methodology for achieving a self-consistent approach to coupling multiple scattering interactions with ionization energy loss and straggling. Our initial benchmark case is a 60 MeV proton beam on muscle tissue, for which we can compare various attributes of beam broadening with Monte Carlo simulations reported in the open literature.

  20. Scalability of surrogate-assisted multi-objective optimization of antenna structures exploiting variable-fidelity electromagnetic simulation models

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2016-10-01

    Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.

  1. Prediction of Gas Injection Performance for Heterogeneous Reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blunt, Martin J.; Orr, Franklin M.

    This report describes research carried out in the Department of Petroleum Engineering at Stanford University from September 1997 to September 1998 under the second year of a three-year grant from the Department of Energy on the "Prediction of Gas Injection Performance for Heterogeneous Reservoirs." The research effort is an integrated study of the factors affecting gas injection, from the pore scale to the field scale, and involves theoretical analysis, laboratory experiments, and numerical simulation. The original proposal described research in four areas: (1) pore-scale modeling of three-phase flow in porous media; (2) laboratory experiments and analysis of factors influencing gas injection performance at the core scale, with an emphasis on the fundamentals of three-phase flow; (3) benchmark simulations of gas injection at the field scale; and (4) development of a streamline-based reservoir simulator. Each stage of the research is planned to provide input and insight into the next stage, such that at the end we should have an integrated understanding of the key factors affecting field-scale displacements.

  2. UNSAT-H Version 2. 0: Unsaturated soil water and heat flow model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fayer, M.J.; Jones, T.L.

    1990-04-01

    This report documents UNSAT-H Version 2.0, a model for calculating water and heat flow in unsaturated media. The documentation includes the bases for the conceptual model and its numerical implementation, benchmark test cases, example simulations involving layered soils and plant transpiration, and the code listing. Waste management practices at the Hanford Site have included disposal of low-level wastes by near-surface burial. Predicting the future long-term performance of any such burial site in terms of migration of contaminants requires a model capable of simulating water flow in the unsaturated soils above the buried waste. The model currently used to meet this need is UNSAT-H. This model was developed at Pacific Northwest Laboratory to assess water dynamics of near-surface waste-disposal sites at the Hanford Site. The code is primarily used to predict deep drainage as a function of such environmental conditions as climate, soil type, and vegetation. UNSAT-H is also used to simulate the effects of various practices to enhance isolation of wastes. 66 refs., 29 figs., 7 tabs.

  3. Curvilinear Immersed Boundary Method for Simulating Fluid Structure Interaction with Complex 3D Rigid Bodies

    PubMed Central

    Borazjani, Iman; Ge, Liang; Sotiropoulos, Fotis

    2010-01-01

    The sharp-interface CURVIB approach of Ge and Sotiropoulos [L. Ge, F. Sotiropoulos, A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries, Journal of Computational Physics 225 (2007) 1782–1809] is extended to simulate fluid structure interaction (FSI) problems involving complex 3D rigid bodies undergoing large structural displacements. The FSI solver adopts the partitioned FSI solution approach and both loose and strong coupling strategies are implemented. The interfaces between immersed bodies and the fluid are discretized with a Lagrangian grid and tracked with an explicit front-tracking approach. An efficient ray-tracing algorithm is developed to quickly identify the relationship between the background grid and the moving bodies. Numerical experiments are carried out for two FSI problems: vortex induced vibration of elastically mounted cylinders and flow through a bileaflet mechanical heart valve at physiologic conditions. For both cases the computed results are in excellent agreement with benchmark simulations and experimental measurements. The numerical experiments suggest that both the properties of the structure (mass, geometry) and the local flow conditions can play an important role in determining the stability of the FSI algorithm. Under certain conditions unconditionally unstable iteration schemes result even when strong coupling FSI is employed. For such cases, however, combining the strong-coupling iteration with under-relaxation in conjunction with the Aitken’s acceleration technique is shown to effectively resolve the stability problems. A theoretical analysis is presented to explain the findings of the numerical experiments. 
It is shown that the ratio of the added mass to the mass of the structure, as well as the sign of the local time rate of change of the force or moment imparted on the structure by the fluid, determines the stability and convergence of the FSI algorithm. The stabilizing role of under-relaxation is also clarified, and an upper bound of the under-relaxation coefficient required for stability is derived. PMID:20981246

  4. Curvilinear immersed boundary method for simulating fluid structure interaction with complex 3D rigid bodies

    NASA Astrophysics Data System (ADS)

    Borazjani, Iman; Ge, Liang; Sotiropoulos, Fotis

    2008-08-01

    The sharp-interface CURVIB approach of Ge and Sotiropoulos [L. Ge, F. Sotiropoulos, A numerical method for solving the 3D unsteady incompressible Navier-Stokes equations in curvilinear domains with complex immersed boundaries, Journal of Computational Physics 225 (2007) 1782-1809] is extended to simulate fluid structure interaction (FSI) problems involving complex 3D rigid bodies undergoing large structural displacements. The FSI solver adopts the partitioned FSI solution approach and both loose and strong coupling strategies are implemented. The interfaces between immersed bodies and the fluid are discretized with a Lagrangian grid and tracked with an explicit front-tracking approach. An efficient ray-tracing algorithm is developed to quickly identify the relationship between the background grid and the moving bodies. Numerical experiments are carried out for two FSI problems: vortex induced vibration of elastically mounted cylinders and flow through a bileaflet mechanical heart valve at physiologic conditions. For both cases the computed results are in excellent agreement with benchmark simulations and experimental measurements. The numerical experiments suggest that both the properties of the structure (mass, geometry) and the local flow conditions can play an important role in determining the stability of the FSI algorithm. Under certain conditions the FSI algorithm is unconditionally unstable even when strong coupling FSI is employed. For such cases, however, combining the strong coupling iteration with under-relaxation in conjunction with the Aitken's acceleration technique is shown to effectively resolve the stability problems. A theoretical analysis is presented to explain the findings of the numerical experiments. 
It is shown that the ratio of the added mass to the mass of the structure, as well as the sign of the local time rate of change of the force or moment imparted on the structure by the fluid, determines the stability and convergence of the FSI algorithm. The stabilizing role of under-relaxation is also clarified, and the upper bound of the under-relaxation coefficient required for stability is derived.
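    The Aitken acceleration used in the strong-coupling loop of both papers adapts the under-relaxation factor from successive interface residuals. Below is a generic scalar sketch, with a fixed-point map standing in for the fluid-to-structure interface update (the map and its coefficients are illustrative only, not the CURVIB solver):

```python
def aitken_fixed_point(g, x0, omega0=0.5, tol=1e-10, max_iter=100):
    """Relaxed fixed-point iteration x <- x + omega * r, r = g(x) - x,
    with Aitken's dynamic relaxation: omega is rescaled each step from
    the change in the residual. This stabilizes coupling loops that
    diverge with plain (omega = 1) iteration, e.g. when the added mass
    is large relative to the structural mass.
    """
    x, omega, r_prev = x0, omega0, None
    for _ in range(max_iter):
        r = g(x) - x
        if abs(r) < tol:
            return x
        if r_prev is not None:
            dr = r - r_prev
            if dr != 0.0:
                omega = -omega * r_prev / dr  # scalar Aitken update
        x += omega * r
        r_prev = r
    return x

# Stand-in coupling map with slope -1.5: plain iteration diverges,
# but the Aitken-relaxed iteration converges to the fixed point x* = 1.
g = lambda x: -1.5 * x + 2.5
root = aitken_fixed_point(g, x0=0.0)
```

    For a linear map the Aitken update is equivalent to a secant step, so convergence is reached in a couple of iterations regardless of the map's slope; in the FSI setting the same mechanism keeps the strong-coupling iteration stable when the added-mass ratio is unfavorable.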

  5. A novel left heart simulator for the multi-modality characterization of native mitral valve geometry and fluid mechanics.

    PubMed

    Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Yoganathan, Ajit P

    2013-02-01

    Numerical models of the mitral valve have been used to elucidate mitral valve function and mechanics. These models have evolved from simple two-dimensional approximations to complex three-dimensional fully coupled fluid structure interaction models. However, to date these models lack direct one-to-one experimental validation. As computational solvers vary considerably, experimental benchmark data are critically important to ensure model accuracy. In this study, a novel left heart simulator was designed specifically for the validation of numerical mitral valve models. Several distinct experimental techniques were collectively performed to resolve mitral valve geometry and hemodynamics. In particular, micro-computed tomography was used to obtain accurate and high-resolution (39 μm voxel) native valvular anatomy, which included the mitral leaflets, chordae tendinae, and papillary muscles. Three-dimensional echocardiography was used to obtain systolic leaflet geometry. Stereoscopic digital particle image velocimetry provided all three components of fluid velocity through the mitral valve, resolved every 25 ms in the cardiac cycle. A strong central filling jet (V ~ 0.6 m/s) was observed during peak systole with minimal out-of-plane velocities. In addition, physiologic hemodynamic boundary conditions were defined and all data were synchronously acquired through a central trigger. Finally, the simulator is a precisely controlled environment, in which flow conditions and geometry can be systematically prescribed and resultant valvular function and hemodynamics assessed. Thus, this work represents the first comprehensive database of high fidelity experimental data, critical for extensive validation of mitral valve fluid structure interaction simulations.

  6. A Novel Left Heart Simulator for the Multi-modality Characterization of Native Mitral Valve Geometry and Fluid Mechanics

    PubMed Central

    Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Yoganathan, Ajit P.

    2012-01-01

    Numerical models of the mitral valve have been used to elucidate mitral valve function and mechanics. These models have evolved from simple two-dimensional approximations to complex three-dimensional fully coupled fluid structure interaction models. However, to date these models lack direct one-to-one experimental validation. As computational solvers vary considerably, experimental benchmark data are critically important to ensure model accuracy. In this study, a novel left heart simulator was designed specifically for the validation of numerical mitral valve models. Several distinct experimental techniques were collectively performed to resolve mitral valve geometry and hemodynamics. In particular, micro-computed tomography was used to obtain accurate and high-resolution (39 µm voxel) native valvular anatomy, which included the mitral leaflets, chordae tendinae, and papillary muscles. Three-dimensional echocardiography was used to obtain systolic leaflet geometry for direct comparison of resultant leaflet kinematics. Stereoscopic digital particle image velocimetry provided all three components of fluid velocity through the mitral valve, resolved every 25 ms in the cardiac cycle. A strong central filling jet (V ~ 0.6 m/s) was observed during peak systole, with minimal out-of-plane velocities. In addition, physiologic hemodynamic boundary conditions were defined and all data were synchronously acquired through a central trigger. Finally, the simulator is a precisely controlled environment, in which flow conditions and geometry can be systematically prescribed and resultant valvular function and hemodynamics assessed. Thus, these data represent the first comprehensive database of high-fidelity experimental data, critical for extensive validation of mitral valve fluid structure interaction simulations. PMID:22965640

  7. Global dynamic modeling of a transmission system

    NASA Technical Reports Server (NTRS)

    Choy, F. K.; Qian, W.

    1993-01-01

    The work performed on global dynamic simulation and noise correlation of gear transmission systems at the University of Akron is outlined. The objective is to develop a comprehensive procedure to simulate the dynamics of the gear transmission system coupled with the effects of gear box vibrations. The developed numerical model is benchmarked with results from experimental tests at NASA Lewis Research Center. The modal synthesis approach is used to develop the global transient vibration analysis procedure used in the model. Modal dynamic characteristics of the rotor-gear-bearing system are calculated by the matrix transfer method while those of the gear box are evaluated by the finite element method (NASTRAN). A three-dimensional, axial-lateral coupled bearing model is used to couple the rotor vibrations with the gear box motion. The vibrations between the individual rotor systems are coupled through the nonlinear gear mesh interactions. The global equations of motion are solved in modal coordinates and the transient vibration of the system is evaluated by a variable time-stepping integration scheme. The relationship between housing vibration and resulting noise of the gear transmission system is generated by linear transfer functions using experimental data. A nonlinear relationship of the noise components to the fundamental mesh frequency is developed using the hypercoherence function. The numerically simulated vibrations and predicted noise of the gear transmission system are compared with the experimental results from the gear noise test rig at NASA Lewis Research Center. Results of the comparison indicate that the global dynamic model developed can accurately simulate the dynamics of a gear transmission system.

  8. Comparing, optimizing, and benchmarking quantum-control algorithms in a unifying programming framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Machnes, S.; Institute for Theoretical Physics, University of Ulm, D-89069 Ulm; Sander, U.

    2011-08-15

    For paving the way to novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. It is a typical task amenable to numerical optimal control to turn the time course of pulses, i.e., piecewise-constant control amplitudes, iteratively into an optimized shape. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods, which update all controls concurrently, and Krotov-type methods, which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient MATLAB-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods, as well as subspace choices. Open-source code including examples is made available at http://qlib.info.
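    The concurrent-update idea behind GRAPE can be illustrated on the smallest possible example: a single qubit driven by piecewise-constant controls on sigma_x, optimized toward an X gate. For brevity the exact GRAPE gradient is replaced by finite differences, so this is a toy sketch of the scheme, not the DYNAMO implementation:

```python
import math

# Single qubit, H(t) = u_k * sigma_x on slice k, target gate X.
N, DT = 10, 0.1  # number of time slices and slice duration (arb. units)

def propagator(u):
    """Total propagator prod_k exp(-i * u_k * DT * sigma_x).

    Since all slices commute here, exp(-i*th*sx) = cos(th)*I - i*sin(th)*sx
    with th = sum(u) * DT; U is returned as coefficients (a, b) of I, sx.
    """
    th = sum(u) * DT
    return math.cos(th), -1j * math.sin(th)

def fidelity(u):
    """Phase-insensitive gate fidelity |tr(X^dag U)|^2 / 4."""
    a, b = propagator(u)            # U = a*I + b*sigma_x
    return abs(2.0 * b) ** 2 / 4.0  # tr(X U) = 2*b

def grape_ascent(u, steps=200, lr=0.5, eps=1e-6):
    """Gradient ascent updating all pulse amplitudes concurrently."""
    for _ in range(steps):
        grad = []
        for k in range(len(u)):     # finite-difference gradient per slice
            up = list(u)
            up[k] += eps
            grad.append((fidelity(up) - fidelity(u)) / eps)
        u = [uk + lr * gk for uk, gk in zip(u, grad)]
    return u

u_opt = grape_ascent([0.3] * N)  # start from a flat, low-fidelity pulse
```

    A Krotov-type method would instead sweep through the slices one at a time, updating each control before recomputing the propagators; the mix-and-match framework described in the abstract lets such update rules and gradient types be swapped independently.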

  9. Validation of mechanical models for reinforced concrete structures: Presentation of the French project ``Benchmark des Poutres de la Rance''

    NASA Astrophysics Data System (ADS)

    L'Hostis, V.; Brunet, C.; Poupard, O.; Petre-Lazar, I.

    2006-11-01

    Several ageing models are available for predicting the mechanical consequences of rebar corrosion. They are used for service life prediction of reinforced concrete structures. Concerning corrosion diagnosis of reinforced concrete, some Non-Destructive Testing (NDT) tools have been developed and have been in use for some years. However, these developments require validation on existing concrete structures. The French project "Benchmark des Poutres de la Rance" contributes to this aspect. It has two main objectives: (i) validation of mechanical models to estimate the influence of rebar corrosion on the load-bearing capacity of a structure, and (ii) qualification of the use of NDT results to collect information on steel corrosion within reinforced concrete structures. Ten French and European institutions, from both academic research laboratories and industrial companies, contributed during 2004 and 2005. This paper presents the project, which was divided into several work packages: (i) the reinforced concrete beams were characterized with non-destructive testing tools, (ii) the mechanical behaviour of the beams was tested experimentally, (iii) complementary laboratory analyses were performed, and (iv) finally, numerical simulation results were compared to the experimental results obtained from the mechanical tests.

  10. Transonic Flutter Suppression Control Law Design, Analysis and Wind-Tunnel Results

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1999-01-01

    The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective of investigating the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. This paper presents the flutter suppression control law design process, numerical nonlinear simulation, and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using classical and minimax techniques are described. A unified general formulation and solution for the minimax approach, based on steady-state differential game theory, is presented. Design considerations for improving control law robustness and digital implementation are outlined. It was shown that simple control laws, when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and in a heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified under highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. An off-design plunge instability condition was also successfully suppressed.

  11. Nonlinear 3D visco-resistive MHD modeling of fusion plasmas: a comparison between numerical codes

    NASA Astrophysics Data System (ADS)

    Bonfiglio, D.; Chacon, L.; Cappello, S.

    2008-11-01

    Fluid plasma models (and, in particular, the MHD model) are extensively used in the theoretical description of laboratory and astrophysical plasmas. We present here a successful benchmark between two nonlinear, three-dimensional, compressible visco-resistive MHD codes. One is the fully implicit, finite-volume code PIXIE3D [1,2], which is characterized by many attractive features, notably a generalized curvilinear formulation (which makes the code applicable to different geometries) and the possibility of including the energy transport equation and the extended-MHD version of Ohm's law in the computation. In addition, the parallel version of the code features excellent scalability. Results from this code, obtained in cylindrical geometry, are compared with those produced by the semi-implicit cylindrical code SpeCyl, which uses finite differences radially and a spectral formulation in the other coordinates [3]. Both single- and multi-mode simulations are benchmarked, for both reversed-field pinch (RFP) and ohmic tokamak magnetic configurations. [1] L. Chacon, Computer Physics Communications 163, 143 (2004). [2] L. Chacon, Phys. Plasmas 15, 056103 (2008). [3] S. Cappello, Plasma Phys. Control. Fusion 46, B313 (2004) and references therein.

  12. Design and Optimization of Composite Automotive Hatchback Using Integrated Material-Structure-Process-Performance Method

    NASA Astrophysics Data System (ADS)

    Yang, Xudong; Sun, Lingyu; Zhang, Cheng; Li, Lijun; Dai, Zongmiao; Xiong, Zhenkai

    2018-03-01

    The application of polymer composites as a substitute for metal is an effective approach to reducing vehicle weight. However, the final performance of composite structures is determined not only by the material types, structural designs, and manufacturing process, but also by their mutual constraints. Hence, an integrated "material-structure-process-performance" method is proposed for the conceptual and detailed design of composite components. The material selection is based on principles of composite mechanics such as the rule of mixtures for laminates. The design of component geometry, dimensions, and stacking sequence is determined by parametric modeling and size optimization. The selection of process parameters is based on multi-physics field simulation. The stiffness and modal constraint conditions were obtained from numerical analysis of the metal benchmark under typical load conditions. The optimal design was found by multi-disciplinary optimization. Finally, the proposed method was validated by an application case of an automotive hatchback using carbon-fiber-reinforced polymer. Compared with the metal benchmark, the weight of the composite hatchback is reduced by 38.8%, while its torsion and bending stiffness increase by 3.75% and 33.23%, respectively, and the first natural frequency increases by 44.78%.
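    The rule-of-mixtures selection step mentioned above amounts to simple volume-fraction averaging of the constituent properties. A minimal sketch, with illustrative carbon/epoxy values that are not taken from the paper:

```python
# Rule-of-mixtures estimates for a unidirectional carbon/epoxy ply
# (illustrative property values, not from the paper).
Ef, Em = 230.0, 3.5   # fiber and matrix Young's moduli, GPa
Vf = 0.6              # fiber volume fraction

E1 = Vf * Ef + (1 - Vf) * Em          # longitudinal modulus (Voigt, upper bound)
E2 = 1.0 / (Vf / Ef + (1 - Vf) / Em)  # transverse modulus (Reuss, lower bound)

print(round(E1, 1), round(E2, 2))     # -> 139.4 8.55
```

The large gap between the two bounds is what makes stacking-sequence optimization worthwhile: the laminate response depends strongly on how many plies carry load in each direction.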

  13. An integrity measure to benchmark quantum error correcting memories

    NASA Astrophysics Data System (ADS)

    Xu, Xiaosi; de Beaudrap, Niel; O'Gorman, Joe; Benjamin, Simon C.

    2018-02-01

    Rapidly developing experiments across multiple platforms now aim to realise small quantum codes, and so demonstrate a memory within which a logical qubit can be protected from noise. There is a need to benchmark the achievements in these diverse systems, and to compare the inherent power of the codes they rely upon. We describe a recently introduced performance measure called integrity, which relates to the probability that an ideal agent will successfully ‘guess’ the state of a logical qubit after a period of storage in the memory. Integrity is straightforward to evaluate experimentally without state tomography and it can be related to various established metrics such as the logical fidelity and the pseudo-threshold. We offer a set of experimental milestones that are steps towards demonstrating unconditionally superior encoded memories. Using intensive numerical simulations we compare memories based on the five-qubit code, the seven-qubit Steane code, and a nine-qubit code which is the smallest instance of a surface code; we assess both the simple and fault-tolerant implementations of each. While the ‘best’ code upon which to base a memory does vary according to the nature and severity of the noise, nevertheless certain trends emerge.

  14. Simulations of Bingham plastic flows with the multiple-relaxation-time lattice Boltzmann model

    NASA Astrophysics Data System (ADS)

    Chen, SongGui; Sun, QiCheng; Jin, Feng; Liu, JianGuo

    2014-03-01

    Fresh cement mortar is a type of workable paste, which can be well approximated as a Bingham plastic and whose flow behavior is of major concern in engineering. In this paper, Papanastasiou's model for Bingham fluids is solved by using the multiple-relaxation-time lattice Boltzmann model (MRT-LB). Analysis of the stress growth exponent m in Bingham fluid flow simulations shows that Papanastasiou's model provides a good approximation of realistic Bingham plastics for values of m > 10^8. For lower values of m, Papanastasiou's model describes fluids between Bingham and Newtonian fluids. The MRT-LB model is validated by two benchmark problems: 2D steady Poiseuille flows and lid-driven cavity flows. Comparing the numerical results of the velocity distributions with the corresponding analytical solutions shows that the MRT-LB model is appropriate for studying Bingham fluids while also providing better numerical stability. We further apply the MRT-LB model to simulate flow through a sudden-expansion channel and the flow surrounding a round particle. Besides the rich flow structures obtained in this work, the dynamic fluid force on the round particle is calculated. Results show that both the Reynolds number Re and the Bingham number Bn affect the drag coefficient C_D, and a drag-coefficient correlation that accounts for both Re and Bn is proposed. The relationship between Bn and the ratio of unyielded-zone thickness to particle diameter is also analyzed. Finally, the Bingham fluid flowing around a set of randomly dispersed particles is simulated to obtain the apparent viscosity and velocity fields. These results support the simulation of fresh concrete flowing in porous media.
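    Papanastasiou's regularization replaces the unbounded Bingham viscosity with an effective viscosity mu_p + tau_y*(1 - exp(-m*g))/g, which stays finite as the shear rate g tends to zero. A small sketch of the m-dependence discussed above (parameter values are illustrative only):

```python
import math

def mu_eff(gamma_dot, mu_p=1.0, tau_y=10.0, m=1e8):
    """Papanastasiou-regularized Bingham viscosity (illustrative parameters)."""
    return mu_p + tau_y * (1.0 - math.exp(-m * gamma_dot)) / gamma_dot

gd = 0.5
ideal = 1.0 + 10.0 / gd                 # ideal Bingham viscosity mu_p + tau_y/gamma_dot
print(abs(mu_eff(gd, m=1e8) - ideal))   # large m: recovers the Bingham limit
print(mu_eff(gd, m=1.0))                # small m: between Bingham and Newtonian
```

This is exactly the trend reported above: for m of order 10^8 the regularized law is indistinguishable from an ideal Bingham plastic, while smaller m yields intermediate behavior.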

  15. IgSimulator: a versatile immunosequencing simulator.

    PubMed

    Safonova, Yana; Lapidus, Alla; Lill, Jennie

    2015-10-01

    The recent introduction of next-generation sequencing technologies to antibody studies has resulted in a growing number of immunoinformatics tools for antibody repertoire analysis. However, benchmarking these newly emerging tools remains problematic, since the gold-standard datasets needed to validate them are typically not available. Since simulating antibody repertoires is often the only feasible way to benchmark new immunoinformatics tools, we developed the IgSimulator tool, which addresses various complications in generating realistic antibody repertoires. IgSimulator's code has a modular structure and can be easily adapted to new simulation requirements. IgSimulator is open source and freely available as a C++ and Python program running on all Unix-compatible platforms. The source code is available from yana-safonova.github.io/ig_simulator. Contact: safonova.yana@gmail.com. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved.

  16. Benchmarks for target tracking

    NASA Astrophysics Data System (ADS)

    Dunham, Darin T.; West, Philip D.

    2011-09-01

    The term benchmark originates from the chiseled horizontal marks that surveyors made, into which an angle-iron could be placed to bracket ("bench") a leveling rod, thus ensuring that the leveling rod could be repositioned in exactly the same place in the future. A benchmark in computing terms is the result of running a computer program, or a set of programs, in order to assess the relative performance of an object by running a number of standard tests and trials against it. This paper discusses the history of simulation benchmarks that are being used by multiple branches of the military and agencies of the US government. These benchmarks range from missile defense applications to chemical-biological scenarios. Typically, a benchmark is used with Monte Carlo runs in order to tease out how algorithms deal with variability and the range of possible inputs. We also describe problems that can be solved by a benchmark.
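    The Monte Carlo usage described above can be sketched with a toy tracking benchmark: run the same estimator over many independent noise realizations and summarize the spread of the resulting errors. A hypothetical 1-D example (the smoother, noise levels, and gain are invented for illustration):

```python
import random
import statistics

def run_trial(seed, q=0.5, r=2.0, n=50, alpha=0.3):
    """One Monte Carlo trial: track a noisy 1-D constant-velocity target
    with a simple exponential smoother; return the RMS position error."""
    rng = random.Random(seed)
    x, v, est = 0.0, 1.0, 0.0
    errs = []
    for _ in range(n):
        x += v + rng.gauss(0, q)                     # true motion with process noise
        z = x + rng.gauss(0, r)                      # noisy measurement
        pred = est + v                               # predict (velocity assumed known)
        est = pred + alpha * (z - pred)              # correct toward the measurement
        errs.append((est - x) ** 2)
    return (sum(errs) / n) ** 0.5

# Monte Carlo over 200 independent noise realizations
rmse = [run_trial(s) for s in range(200)]
print(round(statistics.mean(rmse), 2), round(statistics.stdev(rmse), 2))
```

The mean RMSE measures average performance, while the spread across seeds is precisely the run-to-run variability that a single deterministic test would hide.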

  17. Benchmarking of neutron production of heavy-ion transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, I.; Ronningen, R. M.; Heilbronn, L.

    Document available in abstract form only; full text of document follows: Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in the design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required. (authors)

  18. Theory verification and numerical benchmarking on neoclassical toroidal viscosity

    NASA Astrophysics Data System (ADS)

    Wang, Z. R.; Park, J.-K.; Liu, Y. Q.; Logan, N. C.; Menard, J. E.

    2013-10-01

    Systematic verification and numerical benchmarking have been successfully carried out among three different approaches to neoclassical toroidal viscosity (NTV) theory and the corresponding codes: IPEC-PENT is developed based on the combined NTV theory but without geometric simplifications; MARS-K, originally calculating the kinetic energy, is upgraded to calculate the NTV torque based on the equivalence between kinetic energy and NTV torque; MARS-Q includes a smoothly connected NTV formula. The derivation and numerical results both indicate that the imaginary part of the kinetic energy calculated by MARS-K is equivalent to the NTV torque in IPEC-PENT. In the benchmark of precession resonance between MARS-Q and MARS-K/IPEC-PENT, the agreement and correlation between the connected NTV formula and the combined NTV theory in different collisionality regimes are shown for the first time. Additionally, both IPEC-PENT and MARS-K indicate the importance of the bounce-harmonic resonance, which can greatly enhance the NTV torque when the E×B drift frequency reaches the bounce resonance condition. Since MARS-K also has the capability to calculate the plasma response including the kinetic effect self-consistently, self-consistent NTV torque calculations have also been tested. This work is supported by DOE Contract No. DE-AC02-09CH11466.

  19. Theory comparison and numerical benchmarking on neoclassical toroidal viscosity torque

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhirui; Park, Jong-Kyu; Logan, Nikolas

    Systematic comparison and numerical benchmarking have been successfully carried out among three different approaches to neoclassical toroidal viscosity (NTV) theory and the corresponding codes: IPEC-PENT is developed based on the combined NTV theory but without geometric simplifications [Park et al., Phys. Rev. Lett. 102, 065002 (2009)]; MARS-Q includes a smoothly connected NTV formula [Shaing et al., Nucl. Fusion 50, 025022 (2010)] based on Shaing's analytic formulation in various collisionality regimes; MARS-K, originally computing the drift kinetic energy, is upgraded to compute the NTV torque based on the equivalence between drift kinetic energy and NTV torque [J.-K. Park, Phys. Plasmas 18, 110702 (2011)]. The derivation and numerical results both indicate that the imaginary part of the drift kinetic energy computed by MARS-K is equivalent to the NTV torque in IPEC-PENT. In the benchmark of precession resonance between MARS-Q and MARS-K/IPEC-PENT, the agreement and correlation between the connected NTV formula and the combined NTV theory in different collisionality regimes are shown for the first time. Additionally, both IPEC-PENT and MARS-K indicate the importance of the bounce-harmonic resonance, which can greatly enhance the NTV torque when the E×B drift frequency reaches the bounce resonance condition.

  20. Modelling low Reynolds number vortex-induced vibration problems with a fixed mesh fluid-solid interaction formulation

    NASA Astrophysics Data System (ADS)

    González Cornejo, Felipe A.; Cruchaga, Marcela A.; Celentano, Diego J.

    2017-11-01

    The present work reports a fluid-rigid solid interaction formulation described within the framework of a fixed-mesh technique. The numerical analysis is focussed on the study of vortex-induced vibration (VIV) of a circular cylinder at low Reynolds number. The proposed numerical scheme encompasses the fluid dynamics computation in an Eulerian domain in which the body is embedded, using a collection of markers to describe its shape, while the rigid solid's motion is obtained from Newton's second law. The body's velocity is imposed on the fluid domain through a penalty technique on the embedded fluid-solid interface. The fluid tractions acting on the solid are computed from the fluid dynamic solution of the flow around the body, and the resulting forces are used to solve the solid motion. The numerical code is validated by contrasting the obtained results with those reported in the literature using different approaches for simulating the flow past a fixed circular cylinder as a benchmark problem. Moreover, a mesh convergence analysis is also performed, showing satisfactory behavior. In particular, a VIV problem is analyzed, emphasizing the description of the synchronization phenomenon.
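    The rigid-body half of such a fluid-solid coupling reduces to integrating Newton's law for the cylinder, m*x'' + c*x' + k*x = F(t). A minimal sketch with a prescribed sinusoidal lift force standing in for the fluid solver (hypothetical parameters; the paper computes the force from the flow field):

```python
import math

# One-degree-of-freedom cylinder model m*x'' + c*x' + k*x = F(t),
# with a sinusoidal lift force mimicking vortex shedding at lock-in.
m, c, k = 1.0, 0.1, 4.0
wn = math.sqrt(k / m)          # natural frequency of the mounted cylinder
F0, wf = 0.2, wn               # force at resonance (lock-in-like condition)

x, v, dt = 0.0, 0.0, 1e-3
amp = 0.0
for i in range(200000):        # semi-implicit Euler time stepping
    t = i * dt
    a = (F0 * math.sin(wf * t) - c * v - k * x) / m
    v += a * dt
    x += v * dt
    amp = max(amp, abs(x))

# At resonance the steady amplitude approaches the analytic value F0/(c*wn):
print(round(amp, 3), round(F0 / (c * wn), 3))
```

In the actual coupled problem the force amplitude and frequency respond to the cylinder motion, which is what produces the synchronization (lock-in) behavior emphasized above.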

  1. Wavenumber-extended high-order oscillation control finite volume schemes for multi-dimensional aeroacoustic computations

    NASA Astrophysics Data System (ADS)

    Kim, Sungtae; Lee, Soogab; Kim, Kyu Hong

    2008-04-01

    A new numerical method for accurate and efficient aeroacoustic computations of multi-dimensional compressible flows has been developed. The core idea of the developed scheme is to unite the advantages of the wavenumber-extended optimized scheme and the M-AUSMPW+/MLP schemes by predicting the physical distribution of flow variables more accurately in multiple space dimensions. A wavenumber-extended optimization procedure for the finite volume approach, based on the conservative requirement, is newly proposed for accuracy enhancement, which is required to capture the acoustic portion of the solution in smooth regions. Furthermore, a new mechanism for distinguishing between continuous and discontinuous regions, based on the Gibbs phenomenon at discontinuities, is introduced to eliminate excessive numerical dissipation in continuous regions by restricting the application of MLP according to the decision of the distinguishing function. To investigate the effectiveness of the developed method, a sequence of benchmark simulations such as spherical wave propagation, nonlinear wave propagation, the shock tube problem, and a vortex preservation test problem is executed. Also, through more realistic shock-vortex interaction and muzzle blast flow problems, the utility of the new method for aeroacoustic applications is verified by comparison with previous numerical and experimental results.
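    The wavenumber-extended optimization above rests on the notion of a modified wavenumber: an antisymmetric central stencil f'(x) ~ (1/h)*sum_j a_j f(x + j*h) represents the Fourier mode exp(ikx) with an effective wavenumber k' satisfying k'h = sum_j a_j sin(j*k*h), and optimized coefficients keep k' close to k over a wide band. A sketch for the standard (non-optimized) central stencils, which the paper's tuned coefficients improve upon:

```python
import math

def modified_wavenumber(kh, coeffs):
    """k'h for an antisymmetric central stencil; coeffs maps offset j -> a_j."""
    return sum(a * math.sin(j * kh) for j, a in coeffs.items())

second = {1: 0.5, -1: -0.5}                        # (f_{i+1} - f_{i-1}) / (2h)
fourth = {1: 8/12, -1: -8/12, 2: -1/12, -2: 1/12}  # 4th-order central stencil

# Exact differentiation would give k'h = kh; the schemes fall away at high kh.
for kh in (0.5, 1.0, 2.0):
    print(kh,
          round(modified_wavenumber(kh, second), 3),
          round(modified_wavenumber(kh, fourth), 3))
```

Wavenumber optimization sacrifices a little formal order of accuracy to flatten the error of k' over the well-resolved band, which is what lets the scheme carry acoustic waves with few points per wavelength.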

  2. Spacecraft charging analysis with the implicit particle-in-cell code iPic3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deca, J.; Lapenta, G.; Marchand, R.

    2013-10-15

    We present the first results on the analysis of spacecraft charging with the implicit particle-in-cell code iPic3D, designed for running on massively parallel supercomputers. The numerical algorithm is presented, highlighting the implementation of the electrostatic solver and the immersed boundary algorithm, the latter of which makes it possible to handle complex spacecraft geometries. As a first step in the verification process, a comparison is made between the floating potential obtained with iPic3D and with Orbital Motion Limited theory for a spherical particle in a uniform stationary plasma. Second, the numerical model is verified for a CubeSat benchmark by comparing simulation results with those of PTetra for space environment conditions of increasing complexity. In particular, we consider spacecraft charging from plasma particle collection, photoelectron emission, and secondary electron emission. The influence of a background magnetic field on the floating potential profile near the spacecraft is also considered. Although the numerical approaches in iPic3D and PTetra are rather different, good agreement is found between the two models, raising the level of confidence in both codes to predict and evaluate the complex plasma environment around spacecraft.
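    The first verification step above, the comparison against Orbital-Motion-Limited (OML) theory, amounts to finding the potential at which electron and ion currents to the sphere balance. A normalized sketch of that current balance for a hydrogen plasma with Te = Ti (illustrative only, not the iPic3D computation):

```python
import math

# OML floating potential of a small sphere in a stationary hydrogen plasma.
me_over_mi = 1.0 / 1836.0      # electron-to-ion mass ratio (hydrogen)
tau = 1.0                      # Ti / Te

def net_current(x):
    """Normalized electron minus ion current; x = e*phi / (k*Te), phi < 0."""
    electron = math.exp(x)                                # retarded electrons
    ion = math.sqrt(me_over_mi * tau) * (1.0 - x / tau)   # attracted ions, OML sphere
    return electron - ion

lo, hi = -10.0, 0.0            # bracket the root and bisect
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if net_current(mid) > 0:   # excess electron current: float more negative
        hi = mid
    else:
        lo = mid
phi_f = 0.5 * (lo + hi)
print(round(phi_f, 2))         # roughly -2.5 thermal units for hydrogen, Te = Ti
```

A particle-in-cell code should recover this value for an isolated sphere once the sheath is resolved, which is why it makes a convenient first verification target.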

  3. Generalized three-dimensional lattice Boltzmann color-gradient method for immiscible two-phase pore-scale imbibition and drainage in porous media

    NASA Astrophysics Data System (ADS)

    Leclaire, Sébastien; Parmigiani, Andrea; Malaspinas, Orestis; Chopard, Bastien; Latt, Jonas

    2017-03-01

    This article presents a three-dimensional numerical framework for the simulation of fluid-fluid immiscible compounds in complex geometries, based on the multiple-relaxation-time lattice Boltzmann method to model the fluid dynamics and the color-gradient approach to model multicomponent flow interaction. New lattice weights for the lattices D3Q15, D3Q19, and D3Q27 that improve the Galilean invariance of the color-gradient model as well as for modeling the interfacial tension are derived and provided in the Appendix. The presented method proposes in particular an approach to model the interaction between the fluid compound and the solid, and to maintain a precise contact angle between the two-component interface and the wall. Contrarily to previous approaches proposed in the literature, this method yields accurate solutions even in complex geometries and does not suffer from numerical artifacts like nonphysical mass transfer along the solid wall, which is crucial for modeling imbibition-type problems. The article also proposes an approach to model inflow and outflow boundaries with the color-gradient method by generalizing the regularized boundary conditions. The numerical framework is first validated for three-dimensional (3D) stationary state (Jurin's law) and time-dependent (Washburn's law and capillary waves) problems. Then, the usefulness of the method for practical problems of pore-scale flow imbibition and drainage in porous media is demonstrated. Through the simulation of nonwetting displacement in two-dimensional random porous media networks, we show that the model properly reproduces three main invasion regimes (stable displacement, capillary fingering, and viscous fingering) as well as the saturating zone transition between these regimes. 
Finally, the ability to simulate immiscible two-component flow imbibition and drainage is validated, with excellent results, by numerical simulations in a Berea sandstone, a benchmark case frequently used in this field, using a complex geometry that originates from a 3D scan of a porous sandstone. The methods presented in this article were implemented in the open-source PALABOS library, a general C++ matrix-based library well adapted to massively parallel fluid flow computation.
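    The stationary Jurin's-law validation mentioned above has a closed-form target: the capillary rise height h = 2*sigma*cos(theta)/(rho*g*r). For example, water in a hypothetical 1 mm-radius tube:

```python
import math

# Jurin's law for capillary rise: h = 2*sigma*cos(theta) / (rho*g*r)
sigma = 0.0728   # surface tension of water, N/m
theta = 0.0      # contact angle, rad (perfectly wetting wall)
rho, g, r = 1000.0, 9.81, 1.0e-3   # density, gravity, tube radius (SI)

h = 2.0 * sigma * math.cos(theta) / (rho * g * r)
print(round(h * 1000.0, 1))   # rise height in mm -> 14.8
```

The simulated meniscus height and the contact angle recovered at the wall are then compared against this analytic value, which also exercises the wetting boundary condition discussed above.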

  4. Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.

    2011-12-01

    Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts pollutant mixing and transport dynamics at typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will be exacerbated over the next decade by the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for the numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding finer levels of resolution where fine scales develop and removing them where the solution behaves smoothly. The algorithm is based on mathematically well-established wavelet theory, which allows us to provide error estimates of the solution that are used in conjunction with an appropriate threshold criterion to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid.
The method has been tested on a variety of benchmark problems, including numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted by turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present global chemical transport models (CTMs) implemented on quasi-uniform grids are incapable of reproducing these layered structures because of the strong numerical plume dilution caused by numerical diffusion combined with the non-uniformity of the atmospheric flow. It is shown that WAMR solutions of accuracy comparable to conventional numerical techniques are obtained with more than an order-of-magnitude reduction in the number of grid points; the adaptive algorithm is therefore capable of producing accurate results at relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm, applied to the traveling plume problem, accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform numerical grids.
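    The refinement criterion described above can be illustrated with Haar-like detail coefficients: they are large only where the solution varies sharply, so thresholding them flags the few cells that need a finer grid level. A toy 1-D sketch (not the WAMR code; the profile and threshold are invented):

```python
import math

# Wavelet-style refinement flagging on a profile with one sharp front.
N = 256
x = [i / N for i in range(N)]
u = [math.tanh((xi - 0.5) / 0.01) for xi in x]   # smooth away from x = 0.5

# Haar-like detail coefficients on pairs of neighboring cells:
# small where u is smooth, large across the front.
details = [abs(u[2*i + 1] - u[2*i]) / math.sqrt(2) for i in range(N // 2)]

eps = 1e-3                                        # refinement threshold
flagged = [i for i, d in enumerate(details) if d > eps]
print(len(flagged), "of", N // 2, "coarse cells flagged for refinement")
```

Only a small fraction of the cells exceeds the threshold, which is the source of the order-of-magnitude savings in degrees of freedom quoted above.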

  5. RETRAN03 benchmarks for Beaver Valley plant transients and FSAR analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaumont, E.T.; Feltus, M.A.

    1993-01-01

    Any best-estimate code (e.g., RETRAN03) must have its results validated against plant data and final safety analysis report (FSAR) predictions. Two independent means of benchmarking are needed to ensure that the results are not biased toward a particular data set and to achieve a certain degree of accuracy. The code results need to be compared with previous results and show improvements over previous code results. Ideally, the two best means of benchmarking a thermal-hydraulics code are comparing results from previous versions of the same code along with actual plant data. This paper describes RETRAN03 benchmarks against RETRAN02 results, actual plant data, and FSAR predictions. RETRAN03, the Electric Power Research Institute's latest version of the RETRAN thermal-hydraulic analysis codes, offers several upgrades over its predecessor, RETRAN02 Mod5. RETRAN03 can use either implicit or semi-implicit numerics, whereas RETRAN02 Mod5 uses only semi-implicit numerics. Another major upgrade deals with slip model options: RETRAN03 added several new models, including a five-equation model for more accurate modeling of two-phase flow. RETRAN02 Mod5 should give similar but slightly more conservative results than RETRAN03 when executed with RETRAN02 Mod5 options.

  6. Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ye; Ma, Xiaosong; Liu, Qing Gary

    2015-01-01

    Parallel application benchmarks are indispensable for evaluating and optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, and reconfigure, and are often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks: they retain the original applications' performance characteristics, in particular the relative performance across platforms.

  7. Benchmarking Heavy Ion Transport Codes FLUKA, HETC-HEDS, MARS15, MCNPX, and PHITS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronningen, Reginald Martin; Remec, Igor; Heilbronn, Lawrence H.

    Powerful accelerators such as spallation neutron sources, muon-collider/neutrino facilities, and rare isotope beam facilities must be designed with the consideration that they handle the beam power reliably and safely, and they must be optimized to yield maximum performance relative to their design requirements. The simulation codes used for design purposes must produce reliable results. If not, component and facility designs can become costly, have limited lifetime and usefulness, and could even be unsafe. The objective of this proposal is to assess the performance of the currently available codes PHITS, FLUKA, MARS15, MCNPX, and HETC-HEDS that could be used for design simulations involving heavy ion transport. We plan to assess their performance by performing simulations and comparing the results against experimental data of benchmark quality. Quantitative knowledge of the biases and uncertainties of the simulations is essential, as this potentially impacts the safe, reliable, and cost-effective design of any future radioactive ion beam facility. Further benchmarking of heavy-ion transport codes was one of the actions recommended in the Report of the 2003 RIA R&D Workshop.

  8. Humidification of Blow-By Oxygen During Recovery of Postoperative Pediatric Patients: One Unit's Journey.

    PubMed

    Donahue, Suzanne; DiBlasi, Robert M; Thomas, Karen

    2018-02-02

    To examine the practice of nebulizer cool mist blow-by oxygen administered to spontaneously breathing postanesthesia care unit (PACU) pediatric patients during Phase one recovery. Existing evidence was evaluated. Informal benchmarking documented practices in peer organizations. An in vitro study was then conducted to simulate clinical practice and determine depth and amount of airway humidity delivery with blow-by oxygen. Informal benchmarking information was obtained by telephone interview. Using a three-dimensional printed simulation model of the head connected to a breathing lung simulator, depth and amount of moisture delivery in the respiratory tree were measured. Evidence specific to PACU administration of cool mist blow-by oxygen was limited. Informal benchmarking revealed that routine cool mist oxygenated blow-by administration was not widely practiced. The laboratory experiment revealed minimal moisture reaching the mid-tracheal area of the simulated airway model. Routine use of oxygenated cool mist in spontaneously breathing pediatric PACU patients is not supported. Copyright © 2017 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.

  9. Benchmarking MARS (accident management software) with the Browns Ferry fire

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, S.M.; Liu, L.Y.; Raines, J.C.

    1992-01-01

    The MAAP Accident Response System (MARS) is user-friendly computer software developed to provide management and engineering staff with the most needed insights, during actual or simulated accidents, into the current and future conditions of the plant based on current plant data and its trends. To demonstrate the reliability of the MARS code in simulating a plant transient, MARS is being benchmarked with the available reactor pressure vessel (RPV) pressure and level data from the Browns Ferry fire. The MARS software uses the Modular Accident Analysis Program (MAAP) code as its basis to calculate plant response under accident conditions. MARS uses a limited set of plant data to initialize and track the accident progression. To perform this benchmark, a simulated set of plant data was constructed based on actual report data containing the information necessary to initialize MARS and keep track of plant system status throughout the accident progression. The initial Browns Ferry fire data were produced by performing a MAAP run to simulate the accident. The remaining accident simulation used actual plant data.

  10. Use of the Fracture Continuum Model for Numerical Modeling of Flow and Transport of Deep Geologic Disposal of Nuclear Waste in Crystalline Rock

    NASA Astrophysics Data System (ADS)

    Hadgu, T.; Kalinina, E.; Klise, K. A.; Wang, Y.

    2015-12-01

    Numerical modeling of disposal of nuclear waste in a deep geologic repository in fractured crystalline rock requires robust characterization of fractures. Various methods for fracture representation in granitic rocks exist. In this study we used the fracture continuum model (FCM) to characterize fractured rock for use in the simulation of flow and transport in the far field of a generic nuclear waste repository located at 500 m depth. The FCM approach is a stochastic method that maps the permeability of discrete fractures onto a regular grid. The method generates permeability fields using field observations of fracture sets. The original method described in McKenna and Reeves (2005) was designed for vertical fractures. The method has since been extended to incorporate fully three-dimensional representations of anisotropic permeability, multiple independent fracture sets, arbitrary fracture dips and orientations, and spatial correlation (Kalinina et al., 2012, 2014). For this study the numerical code PFLOTRAN (Lichtner et al., 2015) has been used to model flow and transport. PFLOTRAN solves a system of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport in porous materials. The code is designed to run on massively parallel computing architectures as well as workstations and laptops (e.g. Hammond et al., 2011). Benchmark tests were conducted to simulate flow and transport in a specified model domain. Distributions of fracture parameters were used to generate a selected number of realizations. For each realization, the FCM method was used to generate a permeability field of the fractured rock. The PFLOTRAN code was then used to simulate flow and transport in the domain. Simulation results and analysis are presented. The results indicate that the FCM approach is a viable method for modeling fractured crystalline rocks. The FCM is a computationally efficient way to generate realistic representations of complex fracture systems. This approach is of interest for nuclear waste disposal models applied over large domains.
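    The core FCM step described above, mapping stochastically generated discrete fractures onto a regular permeability grid, can be sketched in two dimensions as follows. All parameter values and sampling distributions here are illustrative assumptions, not those of the Kalinina et al. implementation:

```python
import numpy as np

def fcm_permeability(nx, ny, dx, n_fractures, k_frac, k_matrix, rng):
    """2-D sketch of the FCM idea: assign the fracture permeability to every
    grid cell that a randomly placed fracture trace crosses; all other cells
    keep the matrix permeability."""
    k = np.full((ny, nx), k_matrix)
    for _ in range(n_fractures):
        # Random fracture center, orientation, and trace length (illustrative)
        x0, y0 = rng.uniform(0, nx * dx), rng.uniform(0, ny * dx)
        theta = rng.uniform(0, np.pi)
        length = rng.uniform(2, 10) * dx
        # Sample points along the fracture trace and mark the cells they hit
        for s in np.linspace(-length / 2, length / 2, 50):
            i = int(np.floor((y0 + s * np.sin(theta)) / dx))
            j = int(np.floor((x0 + s * np.cos(theta)) / dx))
            if 0 <= i < ny and 0 <= j < nx:
                k[i, j] = k_frac
    return k

rng = np.random.default_rng(42)
k_field = fcm_permeability(nx=50, ny=50, dx=10.0, n_fractures=30,
                           k_frac=1e-12, k_matrix=1e-18, rng=rng)
```

    A field generated this way can then be handed to a continuum flow solver such as PFLOTRAN, which is the step the paper benchmarks.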

  11. ‘Survival’: a simulation toolkit introducing a modular approach for radiobiological evaluations in ion beam therapy

    NASA Astrophysics Data System (ADS)

    Manganaro, L.; Russo, G.; Bourhaleb, F.; Fausti, F.; Giordanengo, S.; Monaco, V.; Sacchi, R.; Vignati, A.; Cirio, R.; Attili, A.

    2018-04-01

    One major rationale for the application of heavy ion beams in tumour therapy is their increased relative biological effectiveness (RBE). The complex dependencies of the RBE on dose, biological endpoint, position in the field, etc., require the use of biophysical models in treatment planning and clinical analysis. This study aims to introduce new software, named ‘Survival’, to facilitate the radiobiological computations needed in ion therapy. The simulation toolkit was written in C++ and was developed with a modular architecture in order to easily incorporate different radiobiological models. The following models were successfully implemented: the local effect model (LEM, versions I, II and III) and variants of the microdosimetric-kinetic model (MKM). Different numerical evaluation approaches were also implemented: Monte Carlo (MC) numerical methods and a set of faster analytical approximations. Among the possible applications, the toolkit was used to reproduce the RBE versus LET for different ions (proton, He, C, O, Ne) and different cell lines (CHO, HSG). Intercomparisons between different models (LEM and MKM) and computational approaches (MC and fast approximations) were performed. The developed software could represent an important tool for the evaluation of the biological effectiveness of charged particles in ion beam therapy, in particular when coupled with treatment simulations. Its modular architecture facilitates benchmarking and inter-comparison between different models and evaluation approaches. The code is open source (GPL2 license) and available at https://github.com/batuff/Survival.
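    As an illustration of the kind of radiobiological computation such a toolkit performs, the sketch below evaluates RBE at an iso-survival level using the linear-quadratic model S = exp(-αD - βD²). The LQ parameter values are hypothetical, and this is far simpler than the LEM or MKM implementations in ‘Survival’:

```python
import math

def lq_dose_for_survival(alpha, beta, s):
    """Dose giving survival fraction s under the linear-quadratic model
    S = exp(-alpha*D - beta*D^2): positive root of beta*D^2 + alpha*D + ln(s) = 0."""
    return (-alpha + math.sqrt(alpha**2 - 4.0 * beta * math.log(s))) / (2.0 * beta)

def rbe(alpha_ref, beta_ref, alpha_ion, beta_ion, s=0.1):
    """RBE at iso-survival s: reference (photon) dose divided by ion dose."""
    return (lq_dose_for_survival(alpha_ref, beta_ref, s)
            / lq_dose_for_survival(alpha_ion, beta_ion, s))

# Hypothetical LQ parameters (Gy^-1, Gy^-2); not fitted to any real cell line
r = rbe(alpha_ref=0.15, beta_ref=0.05, alpha_ion=0.55, beta_ion=0.05)
```

    In a real workflow the ion's α and β would themselves come from a model such as LEM or MKM as a function of LET, which is where the toolkit's modular model implementations enter.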

  12. References and benchmarks for pore-scale flow simulated using micro-CT images of porous media and digital rocks

    NASA Astrophysics Data System (ADS)

    Saxena, Nishank; Hofmann, Ronny; Alpak, Faruk O.; Berg, Steffen; Dietderich, Jesse; Agarwal, Umang; Tandon, Kunj; Hunter, Sander; Freeman, Justin; Wilson, Ove Bjorn

    2017-11-01

    We generate a novel reference dataset to quantify the impact of numerical solvers, boundary conditions, and simulation platforms. We consider a variety of microstructures ranging from idealized pipes to digital rocks. Pore throats of the digital rocks considered are large enough to be well resolved with state-of-the-art micro-computerized tomography technology. Permeability is computed using multiple numerical engines, 12 in total, including Lattice-Boltzmann, computational fluid dynamics, voxel-based, fast semi-analytical, and known empirical models. Thus, we provide a measure of uncertainty associated with flow computations of digital media. Moreover, the reference and standards dataset generated is the first of its kind and can be used to test and improve new fluid flow algorithms. We find that there is an overall good agreement between solvers for idealized cross-section shape pipes. As expected, the disagreement increases with the complexity of the pore space. Numerical solutions for pipes with sinusoidal variation of cross section show larger variability compared to pipes of constant cross-section shape. We notice relatively larger variability in the computed permeability of digital rocks, with a coefficient of variation of up to 25% in computed values between various solvers. Still, these differences are small given other subsurface uncertainties. The observed differences between solvers can be attributed to several causes, including differences in boundary conditions, numerical convergence criteria, and parameterization of the fundamental physics equations. Solvers that perform additional meshing of irregular pore shapes require an additional step in practical workflows which involves skill and can introduce further uncertainty. Computation times for digital rocks vary from minutes to several days depending on the algorithm and available computational resources. We find that more stringent convergence criteria can improve solver accuracy but at the expense of longer computation time.
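    For the constant cross-section pipes an analytical reference exists, and the solver-to-solver spread can be quantified with a coefficient of variation, as in the abstract. A minimal sketch, with made-up solver outputs standing in for the 12 engines:

```python
import statistics

def pipe_permeability(radius):
    """Darcy permeability of a circular pipe from Hagen-Poiseuille: k = R^2 / 8."""
    return radius**2 / 8.0

def coefficient_of_variation(values):
    """Sample standard deviation over the mean, as used to compare solvers."""
    return statistics.stdev(values) / statistics.fmean(values)

k_exact = pipe_permeability(1e-6)   # 1 micron pipe radius; k in m^2
# Made-up permeabilities standing in for outputs of different numerical engines
solver_results = [1.00 * k_exact, 0.97 * k_exact, 1.05 * k_exact, 0.92 * k_exact]
cv = coefficient_of_variation(solver_results)
```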

  13. On the simulation and mitigation of anisoplanatic optical turbulence for long range imaging

    NASA Astrophysics Data System (ADS)

    Hardie, Russell C.; LeMaster, Daniel A.

    2017-05-01

    We describe a numerical wave propagation method for simulating long range imaging of an extended scene under anisoplanatic conditions. Our approach computes an array of point spread functions (PSFs) for a 2D grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. To validate the simulation we compare simulated outputs with the theoretical anisoplanatic tilt correlation and differential tilt variance. This is in addition to comparing the long- and short-exposure PSFs, and isoplanatic angle. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. The simulation tool is also used here to quantitatively evaluate a recently proposed block-matching and Wiener filtering (BMWF) method for turbulence mitigation. In this method, a block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged and processed with a Wiener filter for restoration. A novel aspect of the proposed BMWF method is that the PSF model used for restoration takes into account the level of geometric correction achieved during image registration. This way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. The BMWF method is relatively simple computationally and yet has excellent performance in comparison to state-of-the-art benchmark methods.
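    The spatially varying weighted-sum operation can be sketched as below. The toy PSFs and the inverse-distance weighting are assumptions made for illustration, not the authors' exact scheme:

```python
import numpy as np

def convolve2d_same(img, psf):
    """Direct 'same'-size 2-D convolution with zero padding (small kernels only)."""
    kh, kw = psf.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    flipped = psf[::-1, ::-1]
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

def spatially_varying_blur(ideal, psfs, centers):
    """Weighted sum of per-PSF convolutions; each pixel's weights fall off with
    distance to the PSF anchor points (inverse-distance weighting is an
    assumption of this sketch)."""
    blurred = [convolve2d_same(ideal, p) for p in psfs]
    ys, xs = np.indices(ideal.shape)
    w = np.array([1.0 / (np.hypot(ys - cy, xs - cx) + 1.0) for cy, cx in centers])
    w /= w.sum(axis=0)                      # weights sum to 1 at every pixel
    return sum(wk * bk for wk, bk in zip(w, blurred))

ideal = np.zeros((16, 16))
ideal[8, 8] = 1.0                           # a point source in the ideal image
psf_wide = np.ones((3, 3)) / 9.0            # blurred PSF on one side of the field
psf_sharp = np.zeros((3, 3)); psf_sharp[1, 1] = 1.0   # sharp PSF on the other
out = spatially_varying_blur(ideal, [psf_wide, psf_sharp], [(8, 0), (8, 15)])
```

    In the paper's simulation the PSF grid comes from wave propagation through turbulence phase screens rather than from hand-built kernels.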

  14. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations.

    PubMed

    Hahne, Jan; Helias, Moritz; Kunkel, Susanne; Igarashi, Jun; Bolten, Matthias; Frommer, Andreas; Diesmann, Markus

    2015-01-01

    Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy in the presence of gap junctions, we present benchmarks for workstations, clusters, and supercomputers. Finally, we discuss limitations of the novel technology.
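    The waveform-relaxation idea, repeatedly integrating each unit over a whole communication interval using the other units' waveforms from the previous sweep, can be illustrated on two linear units coupled by a single gap junction. This is a toy model with forward-Euler integration, not the NEST implementation:

```python
import numpy as np

def waveform_relaxation(v1_0, v2_0, g, t_end, dt, sweeps):
    """Jacobi waveform relaxation for two leaky units coupled by a gap junction:
        dV1/dt = -V1 + g*(V2 - V1),   dV2/dt = -V2 + g*(V1 - V2)
    Each sweep integrates every unit over the whole interval using the other
    unit's waveform from the previous sweep, then the waveforms are exchanged,
    which mimics communication at long intervals."""
    n = int(t_end / dt) + 1
    v1 = np.full(n, v1_0)                   # initial guess: constant waveforms
    v2 = np.full(n, v2_0)
    for _ in range(sweeps):
        new1 = np.empty(n); new2 = np.empty(n)
        new1[0], new2[0] = v1_0, v2_0
        for k in range(n - 1):              # forward Euler with frozen partner waveform
            new1[k + 1] = new1[k] + dt * (-new1[k] + g * (v2[k] - new1[k]))
            new2[k + 1] = new2[k] + dt * (-new2[k] + g * (v1[k] - new2[k]))
        v1, v2 = new1, new2
    return v1, v2

v1, v2 = waveform_relaxation(v1_0=1.0, v2_0=0.0, g=0.5, t_end=2.0, dt=0.01, sweeps=30)
```

    For this linear system the sum V1+V2 decays as exp(-t) and the difference as exp(-(1+2g)t), which gives a convenient convergence check.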

  15. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations

    PubMed Central

    Hahne, Jan; Helias, Moritz; Kunkel, Susanne; Igarashi, Jun; Bolten, Matthias; Frommer, Andreas; Diesmann, Markus

    2015-01-01

    Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy in the presence of gap junctions, we present benchmarks for workstations, clusters, and supercomputers. Finally, we discuss limitations of the novel technology. PMID:26441628

  16. Opto-Electronic and Interconnects Hierarchical Design Automation System (OE-IDEAS)

    DTIC Science & Technology

    2004-05-01

    Excerpts: simulation of the critical path from the Mayo “10G” system MCM board; benchmarks from the DaVinci Netbook website. In May 2002, CFDRC downloaded all the materials from the DaVinci Netbook website containing the benchmarks.

  17. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Technical Reports Server (NTRS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.
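    The recovery of a scene from a relatively small number of spatial frequency components can be illustrated with a toy zero-filled inverse FFT. This stands in for, and is much simpler than, the synthetic-aperture reconstruction algorithms described above:

```python
import numpy as np

def reconstruct_from_samples(scene, keep_fraction, rng):
    """Toy aperture-synthesis sketch: keep a random subset of the scene's
    spatial-frequency components, zero-fill the rest, and invert the FFT."""
    spec = np.fft.fft2(scene)
    mask = rng.random(spec.shape) < keep_fraction
    mask[0, 0] = True                       # always keep the mean (zero frequency)
    recon = np.fft.ifft2(spec * mask).real  # zero-filled inverse transform
    return recon, mask

rng = np.random.default_rng(0)
scene = np.zeros((32, 32))
scene[10:20, 12:22] = 1.0                   # a simple extended "target"
recon, mask = reconstruct_from_samples(scene, keep_fraction=0.3, rng=rng)
err = np.linalg.norm(recon - scene) / np.linalg.norm(scene)
```

    Practical interferometric reconstruction replaces the zero-filling with regularized algorithms (e.g. CLEAN-style deconvolution from radio astronomy) to suppress the sidelobes that a sparsely filled aperture introduces.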

  18. Overflow Simulations using MPAS-Ocean in Idealized and Realistic Domains

    NASA Astrophysics Data System (ADS)

    Reckinger, S.; Petersen, M. R.; Reckinger, S. J.

    2016-02-01

    MPAS-Ocean is used to simulate an idealized, density-driven overflow using the dynamics of overflow mixing and entrainment (DOME) setup. Numerical simulations are benchmarked against other models, including the MITgcm's z-coordinate model and HIM's isopycnal coordinate model. A full parameter study is presented that examines how sensitive overflow simulations are to vertical grid type, resolution, and viscosity. Horizontal resolutions with 50 km grid cells are under-resolved and produce poor results, regardless of other parameter settings. Vertical grids ranging in thickness from 15 m to 120 m were tested. A horizontal resolution of 10 km and a vertical resolution of 60 m are sufficient to resolve the mesoscale dynamics of the DOME configuration, which mimics real-world overflow parameters. Mixing and final buoyancy are least sensitive to horizontal viscosity, but strongly sensitive to vertical viscosity. This suggests that vertical viscosity could be adjusted in overflow water formation regions to influence mixing and product water characteristics. Also, the study shows that sigma coordinates produce much less mixing than z-type coordinates, resulting in heavier plumes that travel farther down the slope. Sigma coordinates are less sensitive to changes in resolution but are as sensitive to vertical viscosity as z-coordinates. Additionally, preliminary measurements of overflow diagnostics on global simulations using a realistic oceanic domain are presented.

  19. Evaluation of Neutron Radiography Reactor LEU-Core Start-Up Measurements

    DOE PAGES

    Bess, John D.; Maddock, Thomas L.; Smolinski, Andrew T.; ...

    2014-11-04

    Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. Experiments include criticality, control-rod worth measurements, shutdown margin, and excess reactivity for four core loadings with 56, 60, 62, and 64 fuel elements. The worth of four graphite reflector block assemblies and an empty dry tube used for experiment irradiations were also measured and evaluated for the 60-fuel-element core configuration. Dominant uncertainties in the experimental keff come from uncertainties in the manganese content and impurities in the stainless steel fuel cladding as well as the 236U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 neutron nuclear data are approximately 1.4% (9σ) greater than the benchmark model eigenvalues, which is commonly seen in Monte Carlo simulations of other TRIGA reactors. Simulations of the worth measurements are within the 2σ uncertainty for most of the benchmark experiment worth values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  20. Evaluation of Neutron Radiography Reactor LEU-Core Start-Up Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Maddock, Thomas L.; Smolinski, Andrew T.

    Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. Experiments include criticality, control-rod worth measurements, shutdown margin, and excess reactivity for four core loadings with 56, 60, 62, and 64 fuel elements. The worth of four graphite reflector block assemblies and an empty dry tube used for experiment irradiations were also measured and evaluated for the 60-fuel-element core configuration. Dominant uncertainties in the experimental keff come from uncertainties in the manganese content and impurities in the stainless steel fuel cladding as well as the 236U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 neutron nuclear data are approximately 1.4% (9σ) greater than the benchmark model eigenvalues, which is commonly seen in Monte Carlo simulations of other TRIGA reactors. Simulations of the worth measurements are within the 2σ uncertainty for most of the benchmark experiment worth values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  1. Simulation of Benchmark Cases with the Terminal Area Simulation System (TASS)

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at; Proctor, Fred

    2011-01-01

    The hydrodynamic core of the Terminal Area Simulation System (TASS) is evaluated against different benchmark cases. In the absence of closed form solutions for the equations governing atmospheric flows, the models are usually evaluated against idealized test cases. Over the years, various authors have suggested a suite of these idealized cases which have become standards for testing and evaluating the dynamics and thermodynamics of atmospheric flow models. In this paper, simulations of three such cases are described. In addition, the TASS model is evaluated against a test case that uses an exact solution of the Navier-Stokes equations. The TASS results are compared against previously reported simulations of these benchmark cases in the literature. It is demonstrated that the TASS model is highly accurate, stable and robust.
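    One exact solution of the incompressible Navier-Stokes equations commonly used for this kind of verification is the two-dimensional Taylor-Green vortex; the abstract does not specify which exact solution was used, so the sketch below is only representative of the approach:

```python
import numpy as np

def taylor_green(x, y, t, nu):
    """2-D Taylor-Green vortex: an exact, exponentially decaying solution of
    the incompressible Navier-Stokes equations on a periodic domain."""
    decay = np.exp(-2.0 * nu * t)
    u = np.sin(x) * np.cos(y) * decay
    v = -np.cos(x) * np.sin(y) * decay
    return u, v

# Verify the velocity field is divergence-free using central differences
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = taylor_green(X, Y, t=0.5, nu=0.01)
dx = x[1] - x[0]
div = ((np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * dx)
       + (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / (2 * dx))
max_div = np.abs(div).max()
```

    A model under test is run from this initial condition and its computed decay rate and velocity field are compared against the analytical expressions.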

  2. Compton scattering collision module for OSIRIS

    NASA Astrophysics Data System (ADS)

    Del Gaudio, Fabrizio; Grismayer, Thomas; Fonseca, Ricardo; Silva, Luís

    2017-10-01

    Compton scattering plays a fundamental role in a variety of different astrophysical environments, such as at the gaps of pulsars and the stagnation surface of black holes. In these scenarios, Compton scattering is coupled with self-consistent mechanisms such as pair cascades. We present the implementation of a novel module, embedded in the self-consistent framework of the PIC code OSIRIS 4.0, capable of simulating Compton scattering from first principles and fully integrated with the self-consistent plasma dynamics. The algorithm accounts for the stochastic nature of Compton scattering, reproducing without approximations the exchange of energy between photons and unbound charged species. We present benchmarks of the code against the analytical results of Blumenthal et al. and the numerical solution of the linear Kompaneets equation, and good agreement is found between the simulations and the theoretical models. This work is supported by the European Research Council Grant (ERC-2015-AdG 695088) and the Fundação para a Ciência e a Tecnologia (Bolsa de Investigação PD/BD/114323/2016).
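    The single-scattering kinematics underlying such a module follow the Compton formula for an electron initially at rest; a minimal sketch in units of the electron rest energy:

```python
def compton_scattered_energy(eps, cos_theta):
    """Scattered photon energy for Compton scattering off an electron at rest,
    with photon energies in units of the electron rest energy (m_e c^2):
        eps' = eps / (1 + eps * (1 - cos(theta)))
    """
    return eps / (1.0 + eps * (1.0 - cos_theta))

# A 511 keV photon (eps = 1) backscattered through theta = pi
eps_back = compton_scattered_energy(1.0, -1.0)
```

    A full Monte Carlo treatment, as in the module described above, additionally samples the scattering angle from the Klein-Nishina cross section and boosts in and out of the electron rest frame for moving electrons.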

  3. In-plane crashworthiness of bio-inspired hierarchical honeycombs

    DOE PAGES

    Yin, Hanfeng; Huang, Xiaofei; Scarpa, Fabrizio; ...

    2018-03-13

    Biological tissues like bone, wood, and sponge possess hierarchical cellular topologies, which are lightweight and feature an excellent energy absorption capability. Here we present a system of bio-inspired hierarchical honeycomb structures based on hexagonal, Kagome, and triangular tessellations. The hierarchical designs and a reference regular honeycomb configuration are subjected to simulated in-plane impact using the nonlinear finite element code LS-DYNA. The numerical simulation results show that the triangular hierarchical honeycomb provides the best performance compared to the other two hierarchical honeycombs, and features more than twice the energy absorbed by the regular honeycomb under similar loading conditions. We also propose a parametric study correlating the microstructure parameters (hierarchical length ratio r and the number of sub-cells N) to the energy absorption capacity of these hierarchical honeycombs. The triangular hierarchical honeycomb with N = 2 and r = 1/8 shows the highest energy absorption capacity among all the investigated cases, and this configuration could be employed as a benchmark for the design of future safety protective systems.

  4. Parallel replica dynamics method for bistable stochastic reaction networks: Simulation and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Wang, Ting; Plecháč, Petr

    2017-12-01

    Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
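    A baseline (non-accelerated) stochastic simulation of the Schlögl model, the first of the benchmark networks named above, can be sketched with Gillespie's algorithm; the rate constants below are illustrative values often quoted for the bistable regime, not necessarily those used in the paper:

```python
import random

def gillespie_schlogl(x0, t_end, rng, a=1e5, b=2e5, c=(3e-7, 1e-4, 1e-3, 3.5)):
    """Gillespie stochastic simulation of the bistable Schlogl model:
        A + 2X -> 3X,   3X -> A + 2X,   B -> X,   X -> B
    Species A and B are held at fixed copy numbers a and b."""
    c1, c2, c3, c4 = c
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        props = [c1 * a * x * (x - 1) / 2.0,        # A + 2X -> 3X
                 c2 * x * (x - 1) * (x - 2) / 6.0,  # 3X -> A + 2X
                 c3 * b,                            # B -> X
                 c4 * x]                            # X -> B
        total = sum(props)
        if total <= 0.0:
            break
        t += rng.expovariate(total)                 # waiting time to next event
        r = rng.random() * total                    # pick which reaction fires
        if r < props[0]:
            x += 1
        elif r < props[0] + props[1]:
            x -= 1
        elif r < props[0] + props[1] + props[2]:
            x += 1
        else:
            x -= 1
        times.append(t)
        states.append(x)
    return times, states

rng = random.Random(1)
times, states = gillespie_schlogl(x0=250, t_end=2.0, rng=rng)
```

    The sampling bottleneck motivating the parallel replica method is visible here: transitions between the two metastable branches of X are rare, so many long runs like this are needed to characterize the stationary distribution.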

  5. In-plane crashworthiness of bio-inspired hierarchical honeycombs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yin, Hanfeng; Huang, Xiaofei; Scarpa, Fabrizio

    Biological tissues like bone, wood, and sponge possess hierarchical cellular topologies, which are lightweight and feature an excellent energy absorption capability. Here we present a system of bio-inspired hierarchical honeycomb structures based on hexagonal, Kagome, and triangular tessellations. The hierarchical designs and a reference regular honeycomb configuration are subjected to simulated in-plane impact using the nonlinear finite element code LS-DYNA. The numerical simulation results show that the triangular hierarchical honeycomb provides the best performance compared to the other two hierarchical honeycombs, and features more than twice the energy absorbed by the regular honeycomb under similar loading conditions. We also propose a parametric study correlating the microstructure parameters (hierarchical length ratio r and the number of sub-cells N) to the energy absorption capacity of these hierarchical honeycombs. The triangular hierarchical honeycomb with N = 2 and r = 1/8 shows the highest energy absorption capacity among all the investigated cases, and this configuration could be employed as a benchmark for the design of future safety protective systems.

  6. Optimization of High-Dimensional Functions through Hypercube Evaluation

    PubMed Central

    Abiyev, Rahib H.; Tunay, Mustafa

    2015-01-01

    A novel learning algorithm for solving global numerical optimization problems is proposed. The proposed learning algorithm is an intense stochastic search method based on evaluation and optimization of a hypercube, called the hypercube optimization (HO) algorithm. The HO algorithm comprises an initialization and evaluation process, a displacement-shrink process, and a searching space process. The initialization and evaluation process initializes an initial solution and evaluates the solutions in a given hypercube. The displacement-shrink process determines displacement and evaluates objective functions using new points, and the search area process determines the next hypercube using certain rules and evaluates the new solutions. The algorithms for these processes are designed and presented in the paper. The designed HO algorithm is tested on specific benchmark functions. Simulations of the HO algorithm have been performed for optimization of functions of 1000, 5000, or even 10,000 dimensions. The comparative simulation results with other approaches demonstrate that the proposed algorithm is a potential candidate for optimization of both low- and high-dimensional functions. PMID:26339237
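    The sample-recenter-shrink structure of such an algorithm can be caricatured as follows; this is a simplified sketch under assumed update rules, not the authors' exact displacement-shrink and search-area processes:

```python
import random

def hypercube_optimize(f, dim, lo, hi, n_points=64, iters=60, shrink=0.9, rng=None):
    """Simplified hypercube-search sketch: sample points uniformly inside a
    hypercube, move its center to the best point found so far (displacement),
    then shrink the edge length (shrink), and repeat."""
    rng = rng or random.Random(0)
    center = [(lo + hi) / 2.0] * dim
    half = (hi - lo) / 2.0
    best_x, best_f = list(center), f(center)
    for _ in range(iters):
        for _ in range(n_points):
            x = [c + rng.uniform(-half, half) for c in center]
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        center = list(best_x)       # displacement: recenter on the best point
        half *= shrink              # shrink the search hypercube
    return best_x, best_f

sphere = lambda x: sum(v * v for v in x)    # a standard benchmark function
x_best, f_best = hypercube_optimize(sphere, dim=3, lo=-5.0, hi=3.0)
```

    The asymmetric bounds are chosen so the optimum does not coincide with the initial hypercube center; for the high-dimensional runs reported in the paper, the sampling and update rules matter far more than in this toy setting.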

  7. A multi-state trajectory method for non-adiabatic dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Guohua, E-mail: taogh@pkusz.edu.cn

    2016-03-07

    A multi-state trajectory approach is proposed to describe nuclear-electron coupled dynamics in nonadiabatic simulations. In this approach, each electronic state is associated with an individual trajectory, among which electronic transition occurs. The set of these individual trajectories constitutes a multi-state trajectory, and nuclear dynamics is described by one of these individual trajectories as the system is on the corresponding state. The total nuclear-electron coupled dynamics is obtained from the ensemble average of the multi-state trajectories. A variety of benchmark systems such as the spin-boson system have been tested and the results generated using the quasi-classical version of the method show reasonably good agreement with the exact quantum calculations. Featured in a clear multi-state picture, high efficiency, and excellent numerical stability, the proposed method may have advantages when implemented for realistic complex molecular systems, and it could be straightforwardly applied to general nonadiabatic dynamics involving multiple states.

  8. Upscaling of dilution and mixing using a trajectory based Spatial Markov random walk model in a periodic flow domain

    NASA Astrophysics Data System (ADS)

    Sund, Nicole L.; Porta, Giovanni M.; Bolster, Diogo

    2017-05-01

    The Spatial Markov Model (SMM) is an upscaled model that has been used successfully to predict effective mean transport across a broad range of hydrologic settings. Here we propose a novel variant of the SMM, applicable to spatially periodic systems. This SMM is built using particle trajectories, rather than travel times. By applying the proposed SMM to a simple benchmark problem we demonstrate that it can predict mean effective transport, when compared to data from fully resolved direct numerical simulations. Next we propose a methodology for using this SMM framework to predict measures of mixing and dilution that do not depend solely on mean concentrations but are strongly impacted by pore-scale concentration fluctuations. We use information from trajectories of particles to downscale and reconstruct pore-scale approximate concentration fields from which mixing and dilution measures are then calculated. The comparison between measurements from fully resolved simulations and predictions with the SMM agrees very favorably.
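    The core SMM construction, classifying successive per-cell transport increments and estimating a transition matrix between classes, can be sketched as follows. The AR(1) series is a synthetic stand-in for resolved trajectory data, chosen only because it has the successive-step correlation the SMM is designed to capture:

```python
import random

def quantile_classes(values, n_bins):
    """Assign each value to a quantile class 0..n_bins-1 (equal-count bins)."""
    srt = sorted(values)
    thresholds = [srt[int(len(srt) * (k + 1) / n_bins) - 1] for k in range(n_bins)]
    def cls(v):
        for k, th in enumerate(thresholds):
            if v <= th:
                return k
        return n_bins - 1
    return [cls(v) for v in values]

def transition_matrix(classes, n_bins):
    """Count transitions between successive classes and row-normalize."""
    counts = [[0.0] * n_bins for _ in range(n_bins)]
    for a, b in zip(classes, classes[1:]):
        counts[a][b] += 1.0
    for row in counts:
        s = sum(row)
        if s > 0:
            for j in range(n_bins):
                row[j] /= s
    return counts

# Synthetic correlated per-cell series standing in for resolved trajectories:
# an AR(1) process, so successive values are persistent as in pore-scale data
rng = random.Random(3)
series = [0.5]
for _ in range(5000):
    series.append(0.8 * series[-1] + 0.2 * rng.random())
classes = quantile_classes(series, n_bins=3)
T = transition_matrix(classes, n_bins=3)
```

    Upscaled transport is then simulated by drawing each particle's next increment from the class selected by this Markov chain, rather than independently, which is what preserves the observed step-to-step correlation.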

  9. Benchmark Modeling of the Near-Field and Far-Field Wave Effects of Wave Energy Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhinefrank, Kenneth E; Haller, Merrick C; Ozkan-Haller, H Tuba

    2013-01-26

    This project is an industry-led partnership between Columbia Power Technologies and Oregon State University that will perform benchmark laboratory experiments and numerical modeling of the near-field and far-field impacts of wave scattering from an array of wave energy devices. These benchmark experimental observations will help to fill a gaping hole in our present knowledge of the near-field effects of multiple, floating wave energy converters and are a critical requirement for estimating the potential far-field environmental effects of wave energy arrays. The experiments will be performed at the Hinsdale Wave Research Laboratory (Oregon State University) and will utilize an array of newly developed buoys that are realistic, lab-scale floating power converters. The array of buoys will be subjected to realistic, directional wave forcing (1:33 scale) that will approximate the expected conditions (waves and water depths) to be found off the Central Oregon Coast. Experimental observations will include comprehensive in-situ wave and current measurements as well as a suite of novel optical measurements. These new optical capabilities will include imaging of the 3D wave scattering using a binocular stereo camera system, as well as 3D device motion tracking using a newly acquired LED system. These observing systems will capture the 3D motion history of individual buoys as well as resolve the 3D scattered wave field, thus resolving the constructive and destructive wave interference patterns produced by the array at high resolution. These data combined with the device motion tracking will provide necessary information for array design in order to balance array performance with the mitigation of far-field impacts. As a benchmark data set, these data will be an important resource for testing of models for wave/buoy interactions, buoy performance, and far-field effects on wave and current patterns due to the presence of arrays. Under the proposed project we will initiate high-resolution (fine scale, very near-field) fluid/structure interaction simulations of buoy motions, as well as array-scale, phase-resolving wave scattering simulations. These modeling efforts will utilize state-of-the-art research-quality models, which have not yet been brought to bear on this complex large-array wave/structure interaction problem.

  10. Quantum simulations of nuclei and nuclear pasta with the multiresolution adaptive numerical environment for scientific simulations

    NASA Astrophysics Data System (ADS)

    Sagert, I.; Fann, G. I.; Fattoyev, F. J.; Postnikov, S.; Horowitz, C. J.

    2016-05-01

    Background: Neutron star and supernova matter at densities just below the nuclear matter saturation density is expected to form a lattice of exotic shapes. These so-called nuclear pasta phases are caused by Coulomb frustration. Their elastic and transport properties are believed to play an important role for the thermal and magnetic field evolution, rotation, and oscillation of neutron stars. Furthermore, they can impact neutrino opacities in core-collapse supernovae. Purpose: In this work, we present proof-of-principle three-dimensional (3D) Skyrme Hartree-Fock (SHF) simulations of nuclear pasta with the Multi-resolution ADaptive Numerical Environment for Scientific Simulations (MADNESS). Methods: We perform benchmark studies of 16O, 208Pb, and 238U nuclear ground states and calculate binding energies via 3D SHF simulations. Results are compared with experimentally measured binding energies as well as with theoretically predicted values from an established SHF code. The nuclear pasta simulation is initialized in the so-called waffle geometry as obtained by the Indiana University Molecular Dynamics (IUMD) code. The size of the unit cell is 24 fm, with an average density of about ρ = 0.05 fm^-3, a proton fraction of Yp = 0.3, and a temperature of T = 0 MeV. Results: Our calculations reproduce the binding energies and shapes of light and heavy nuclei with different geometries. For the pasta simulation, we find that the final geometry is very similar to the initial waffle state. We compare calculations with and without spin-orbit forces. We find that while subtle differences are present, the pasta phase remains in the waffle geometry. Conclusions: Within the MADNESS framework, we can successfully perform calculations of inhomogeneous nuclear matter. By using pasta configurations from IUMD it is possible to explore different geometries and test the impact of self-consistent calculations on the latter.
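
The comparison against measured binding energies can be sanity-checked without any SHF machinery. As a rough, illustrative stand-in (not the paper's method), the semi-empirical Bethe-Weizsäcker mass formula with common textbook coefficients reproduces the per-nucleon binding energies of the three benchmark nuclei (experimentally about 7.98, 7.87, and 7.57 MeV for 16O, 208Pb, and 238U) to within a few percent:

```python
def binding_energy(A, Z):
    """Semi-empirical (Bethe-Weizsacker) binding energy in MeV.
    Textbook coefficients; an order-of-magnitude check, not an SHF result."""
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z
    B = (aV * A - aS * A ** (2 / 3)
         - aC * Z * (Z - 1) / A ** (1 / 3)
         - aA * (A - 2 * Z) ** 2 / A)
    if Z % 2 == 0 and N % 2 == 0:      # even-even nuclei get a pairing bonus
        B += aP / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:    # odd-odd nuclei get a pairing penalty
        B -= aP / A ** 0.5
    return B

# per-nucleon binding energies for the three benchmark nuclei
b_O16 = binding_energy(16, 8) / 16
b_Pb208 = binding_energy(208, 82) / 208
b_U238 = binding_energy(238, 92) / 238
```

The liquid-drop estimate lands within roughly 0.1 MeV per nucleon of experiment here, which is why self-consistent codes are benchmarked against measured masses rather than against each other alone.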

  11. Information processing using a single dynamical node as complex system

    PubMed Central

    Appeltant, L.; Soriano, M.C.; Van der Sande, G.; Danckaert, J.; Massar, S.; Dambre, J.; Schrauwen, B.; Mirasso, C.R.; Fischer, I.

    2011-01-01

    Novel methods for information processing are highly desired in our information-driven society. Inspired by the brain's ability to process information, the recently introduced paradigm known as 'reservoir computing' shows that complex networks can efficiently perform computation. Here we introduce a novel architecture that reduces the usually required large number of elements to a single nonlinear node with delayed feedback. Through an electronic implementation, we experimentally and numerically demonstrate excellent performance in a speech recognition benchmark. Complementary numerical studies also show excellent performance for a time series prediction benchmark. These results prove that delay-dynamical systems, even in their simplest manifestation, can perform efficient information processing. This finding paves the way to feasible and resource-efficient technological implementations of reservoir computing. PMID:21915110
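
The single-node-with-delay architecture is easy to sketch. Below is a minimal, illustrative Python toy, not the paper's electronic implementation: one tanh node whose delay line is time-multiplexed into "virtual nodes" by a random input mask, with a ridge-regression readout trained on a one-step-ahead prediction task. All parameter values and the serial-coupling term are arbitrary choices for the demo.

```python
import numpy as np

def reservoir_states(u, n_virtual=50, alpha=0.5, beta=0.5, gamma=0.3, seed=0):
    """States of one tanh node with delayed feedback, time-multiplexed into
    n_virtual 'virtual nodes' by a random input mask (toy discrete emulation)."""
    rng = np.random.default_rng(seed)
    mask = rng.uniform(-1.0, 1.0, size=n_virtual)
    x = np.zeros(n_virtual)
    states = np.empty((len(u), n_virtual))
    for t, ut in enumerate(u):
        for i in range(n_virtual):
            # x[i-1] (already updated this sweep; ring-coupled for i == 0)
            # mimics the coupling of neighboring virtual nodes along the delay line
            x[i] = np.tanh(alpha * x[i] + gamma * x[i - 1] + beta * mask[i] * ut)
        states[t] = x
    return states

def train_readout(states, target, ridge=1e-6):
    """Ridge-regression linear readout -- the only trained part of the system."""
    X = np.hstack([states, np.ones((len(states), 1))])  # append a bias column
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ target)

def predict(states, w):
    return np.hstack([states, np.ones((len(states), 1))]) @ w

# toy benchmark: one-step-ahead prediction of a noisy sine wave
t = np.arange(2000)
u = np.sin(2 * np.pi * t / 40) + 0.01 * np.random.default_rng(1).normal(size=t.size)
S = reservoir_states(u[:-1])
w = train_readout(S[:1500], u[1:1501])
nrmse = np.sqrt(np.mean((predict(S[1500:], w) - u[1501:]) ** 2)) / np.std(u)
```

In a physical delay system the virtual nodes arise from the node's finite response time rather than an explicit coupling term; the readout remains the only trained component either way, which is what makes the hardware implementation attractive.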

  12. Benchmark Simulation Model No 2 in Matlab-Simulink: towards plant-wide WWTP control strategy evaluation.

    PubMed

    Vreck, D; Gernaey, K V; Rosen, C; Jeppsson, U

    2006-01-01

    In this paper, the implementation of the Benchmark Simulation Model No 2 (BSM2) within Matlab-Simulink is presented. The BSM2 is developed for plant-wide WWTP control strategy evaluation on a long-term basis. It consists of a pre-treatment process, an activated sludge process and sludge treatment processes. Extended evaluation criteria are proposed for plant-wide control strategy assessment. Default open-loop and closed-loop strategies are also proposed to be used as references with which to compare other control strategies. Simulations indicate that the BSM2 is an appropriate tool for plant-wide control strategy evaluation.

  13. Numerical Boundary Conditions for Computational Aeroacoustics Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Kurbatskii, Konstantin A.; Fang, Jun

    1997-01-01

    Category 1, Problems 1 and 2, Category 2, Problem 2, and Category 3, Problem 2 are solved computationally using the Dispersion-Relation-Preserving (DRP) scheme. All these problems are governed by the linearized Euler equations. The resolution requirements of the DRP scheme for maintaining low numerical dispersion and dissipation as well as accurate wave speeds in solving the linearized Euler equations are now well understood. As long as 8 or more mesh points per wavelength are employed in the numerical computation, high-quality results are assured. For the first three categories of benchmark problems, therefore, the real challenge is to develop high-quality numerical boundary conditions. For Category 1, Problems 1 and 2, it is the curved wall boundary conditions. For Category 2, Problem 2, it is the internal radiation boundary conditions inside the duct. For Category 3, Problem 2, they are the inflow and outflow boundary conditions upstream and downstream of the blade row. These are the foci of the present investigation. Special nonhomogeneous radiation boundary conditions that generate the incoming disturbances and at the same time allow the outgoing reflected or scattered acoustic disturbances to leave the computation domain without significant reflection are developed. Numerical results based on these boundary conditions are provided.
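
The "8 mesh points per wavelength" resolution rule can be illustrated by computing a stencil's modified wavenumber. The sketch below uses the standard 6th-order central 7-point coefficients as a stand-in; the optimized Tam-Webb DRP coefficients have the same antisymmetric form but slightly different values.

```python
import numpy as np

# 7-point central first-derivative stencil: standard 6th-order coefficients
# a_j for offsets j = 1, 2, 3 (the stencil is antisymmetric, so a_0 = 0)
a = [3 / 4, -3 / 20, 1 / 60]

def modified_wavenumber(k_dx):
    """Effective k*dx the stencil 'sees' when differentiating exp(i k x)."""
    return 2.0 * sum(aj * np.sin((j + 1) * k_dx) for j, aj in enumerate(a))

def dispersion_error(points_per_wavelength):
    """Relative error of the numerical wavenumber at a given resolution."""
    k_dx = 2 * np.pi / points_per_wavelength
    return abs(modified_wavenumber(k_dx) - k_dx) / k_dx

# resolution check: relative dispersion error at 8 points per wavelength
err8 = dispersion_error(8.0)
```

At 8 points per wavelength this stencil's wavenumber error is already a small fraction of a percent, consistent with the rule quoted above; DRP optimization trades formal order for an even wider well-resolved band.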

  14. A study of workstation computational performance for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey M.; Cleveland, Jeff I., II

    1995-01-01

    With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.
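
The benchmark idea, timing a representative simulation kernel and comparing its wall-clock cost against the real-time frame budget, can be sketched as follows. This toy uses a hypothetical kernel (a classical RK4 integrator on a two-state oscillator), not the Langley flight-simulation code.

```python
import time

def rk4_step(f, y, h):
    """One classical Runge-Kutta step, typical of flight-simulation integrators."""
    k1 = f(y)
    k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def harmonic(y):
    """Toy dynamics: undamped oscillator x'' = -x as state [x, v]."""
    x, v = y
    return [v, -x]

def benchmark(steps=20000, h=1e-3):
    """Return (wall-clock seconds per step, final state) for the kernel."""
    y = [1.0, 0.0]
    t0 = time.perf_counter()
    for _ in range(steps):
        y = rk4_step(harmonic, y, h)
    return (time.perf_counter() - t0) / steps, y

seconds_per_step, y_final = benchmark()
```

A machine can sustain real time if seconds_per_step stays below the simulation frame interval; checking the final state against the analytic solution covers the "computational accuracy" axis of the comparison, alongside raw speed and porting effort.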

  15. Physics-Based Broadband Ground Motion Simulations in Near Fault Conditions: the L'Aquila (Italy) and the Upper Rhine Graben (France-Germany) Case Studies

    NASA Astrophysics Data System (ADS)

    Del Gaudio, S.; Lancieri, M.; Hok, S.; Satriano, C.; Chartier, T.; Scotti, O.; Bernard, P.

    2016-12-01

    Predicting realistic ground motion for potential future earthquakes is a key task for seismologists and the main objective of seismic hazard assessment. On one hand, numerical simulations have become increasingly accurate and several different techniques have been developed; on the other hand, ground motion prediction equations (GMPEs) have become a powerful instrument, thanks to the great improvement of seismic strong motion networks providing a large amount of data. Nevertheless, GMPEs do not represent the whole variety of source processes, and this can lead to incorrect estimates, especially in near-fault conditions, because of the lack of records of large earthquakes at short distances. In such cases, physics-based ground motion simulations can be a valid tool to complement prediction equations for scenario studies, provided that both source and propagation are accurately described. We present here a comparison between numerical simulations performed in near-fault conditions using two different kinematic source models, which are based on different assumptions and parameterizations: the "k-2 model" and the "fractal model". Wave propagation is taken into account using hybrid Green's functions (HGF), which consist in coupling numerical Green's functions with an empirical Green's function (EGF) approach. The advantage of this technique is that it does not require a very detailed knowledge of the propagation medium, but it does require high-quality records of small earthquakes in the target area. The first application we show is to the L'Aquila 2009 M 6.3 earthquake, where the main event records provide a benchmark for the synthetic waveforms. Here we can clearly observe the limitations of these techniques and identify the physical parameters that effectively control the ground motion level.
The second application is a blind test on the Upper Rhine Graben (URG), where active faults producing micro-seismic activity are very close to sites of interest, requiring a careful investigation of seismic hazard. Finally, we will perform a probabilistic seismic hazard analysis (PSHA) for the URG, using numerical simulations to define input ground motion for different scenarios, and compare them with a classical probabilistic study based on GMPEs.

  16. Modeling of the TSDE Heater Test to Investigate Crushed Salt Reconsolidation and Rock Salt Creep for the Underground Disposal of High-Level Nuclear Waste

    NASA Astrophysics Data System (ADS)

    Blanco Martin, L.; Rutqvist, J.; Birkholzer, J. T.; Wolters, R.; Lux, K. H.

    2014-12-01

    Rock salt is a potential medium for the underground disposal of nuclear waste because it has several assets, in particular its water and gas tightness in the undisturbed state, its ability to heal induced fractures and its high thermal conductivity as compared to other shallow-crustal rocks. In addition, the run-of-mine granular salt may be used to backfill the mined open spaces. We present simulation results associated with coupled thermal, hydraulic and mechanical processes in the TSDE (Thermal Simulation for Drift Emplacement) experiment, conducted in the Asse salt mine in Germany [1]. During this unique test, conceived to simulate reference repository conditions for spent nuclear fuel, a significant amount of data (temperature, stress changes and displacements, among others) was measured at 20 cross-sections, distributed in two drifts in which a total of six electrical heaters were emplaced. The drifts were subsequently backfilled with crushed salt. This test has been modeled in three dimensions, using two sequential simulators for flow (mass and heat) and geomechanics, TOUGH-FLAC and FLAC-TOUGH [2]. These simulators have recently been updated to accommodate large strains and time-dependent rheology. The numerical predictions obtained by the two simulators are compared within the framework of an international benchmark exercise, and also with experimental data. Subsequently, a re-calibration of some parameters has been performed. Modeling coupled processes in saliniferous media for nuclear waste disposal is a novel approach, and in this study it has led to the determination of some creep parameters that are very difficult to assess at the laboratory scale because they require extremely low strain rates.
Moreover, the results from the benchmark are very satisfactory and validate the capabilities of the two simulators used to study coupled thermal, mechanical and hydraulic (multi-component, multi-phase) processes relative to the underground disposal of high-level nuclear waste in rock salt. References: [1] Bechthold et al., 1999. BAMBUS-I Project. Euratom, Report EUR19124-EN. [2] Blanco Martín et al., 2014. Comparison of two sequential simulators to investigate thermal-hydraulic-mechanical processes related to nuclear waste isolation in saliniferous formations. In preparation.

  17. Proficiency performance benchmarks for removal of simulated brain tumors using a virtual reality simulator NeuroTouch.

    PubMed

    AlZhrani, Gmaan; Alotaibi, Fahad; Azarnoush, Hamed; Winkler-Schwartz, Alexander; Sabbagh, Abdulrahman; Bajunaid, Khalid; Lajoie, Susanne P; Del Maestro, Rolando F

    2015-01-01

    Assessment of neurosurgical technical skills involved in the resection of cerebral tumors in operative environments is complex. Educators emphasize the need to develop and use objective and meaningful assessment tools that are reliable and valid for assessing trainees' progress in acquiring surgical skills. The purpose of this study was to develop proficiency performance benchmarks for a newly proposed set of objective measures (metrics) of neurosurgical technical skills performance during simulated brain tumor resection using a new virtual reality simulator (NeuroTouch). Each participant performed the resection of 18 simulated brain tumors of different complexity using the NeuroTouch platform. Surgical performance was computed using Tier 1 and Tier 2 metrics derived from NeuroTouch simulator data consisting of (1) safety metrics, including (a) volume of surrounding simulated normal brain tissue removed, (b) sum of forces utilized, and (c) maximum force applied during tumor resection; (2) a quality-of-operation metric, the percentage of tumor removed; and (3) efficiency metrics, including (a) instrument total tip path lengths and (b) frequency of pedal activation. All studies were conducted in the Neurosurgical Simulation Research Centre, Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada. A total of 33 participants were recruited, including 17 experts (board-certified neurosurgeons) and 16 novices (7 senior and 9 junior neurosurgery residents). The results demonstrated that "expert" neurosurgeons resected less surrounding simulated normal brain tissue and less tumor tissue than residents. These data are consistent with the concept that "experts" focused more on the safety of the surgical procedure than novices did. By analyzing experts' neurosurgical technical skills performance on these different metrics, we were able to establish benchmarks for proficiency performance training of neurosurgery residents.
This study furthers our understanding of expert neurosurgical performance during the resection of simulated virtual reality tumors and provides neurosurgical trainees with predefined proficiency performance benchmarks designed to maximize the learning of specific surgical technical skills. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
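
Efficiency and safety metrics of the kind listed above are straightforward to compute from tracked instrument data. A minimal sketch, with a hypothetical data layout since the simulator's logging format is not given in the abstract:

```python
import math

def tip_path_length(samples):
    """Total instrument-tip path length from sampled (x, y, z) positions,
    one of the efficiency metrics described above."""
    return sum(math.dist(p, q) for p, q in zip(samples, samples[1:]))

def max_force(forces):
    """Peak applied force over a resection, one of the safety metrics."""
    return max(forces)

# hypothetical tracking data: a tip moving along the x axis in 1 mm steps
track = [(i * 1.0, 0.0, 0.0) for i in range(11)]
```

Benchmarking then reduces to comparing a trainee's metric values against the distribution of expert values collected on the same simulated tumors.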

  18. Solutions of the benchmark problems by the dispersion-relation-preserving scheme

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Shen, H.; Kurbatskii, K. A.; Auriault, L.

    1995-01-01

    The 7-point stencil Dispersion-Relation-Preserving scheme of Tam and Webb is used to solve all six categories of the CAA benchmark problems. The purpose is to show that the scheme is capable of solving linear as well as nonlinear aeroacoustics problems accurately. Nonlinearities inevitably lead to the generation of spurious short-wavelength numerical waves. Often, these spurious waves would overwhelm the entire numerical solution. In this work, the spurious waves are removed by the addition of artificial selective damping terms to the discretized equations. Category 3 problems are for testing radiation and outflow boundary conditions. In solving these problems, the radiation and outflow boundary conditions of Tam and Webb are used. These conditions are derived from the asymptotic solutions of the linearized Euler equations. Category 4 problems involve solid walls. Here, the wall boundary conditions for high-order schemes of Tam and Dong are employed. These conditions require the use of one ghost value per boundary point per physical boundary condition. In the second problem of this category, the governing equations, when written in cylindrical coordinates, are singular along the axis of the radial coordinate. The proper boundary conditions at the axis are derived by applying the limiting process as r approaches 0 to the governing equations. The Category 5 problem deals with the numerical noise issue. In the present approach, the time-independent mean flow solution is computed first. Once the residual drops to the machine noise level, the incident sound wave is turned on gradually. The solution is marched in time until a time-periodic state is reached. No exact solution is known for the Category 6 problem. Because of this, the problem is formulated in two totally different ways, first as a scattering problem and then as a direct simulation problem. There is good agreement between the two numerical solutions. This offers confidence in the computed results.
Both formulations are solved as initial value problems. As such, no Kutta condition is required at the trailing edge of the airfoil.

  19. Buckling and Damage Resistance of Transversely-Loaded Composite Shells

    NASA Technical Reports Server (NTRS)

    Wardle, Brian L.

    1998-01-01

    Experimental and numerical work was conducted to better understand composite shell response to transverse loadings which simulate damage-causing impact events. The quasi-static, centered, transverse loading response of laminated graphite/epoxy shells in a [±45_n/0_n]_s layup having geometric characteristics of a commercial fuselage is studied. The singly-curved composite shell structures are hinged along the straight circumferential edges and are either free or simply supported along the curved axial edges. Key components of the shell response are response instabilities due to limit-point and/or bifurcation buckling. Experimentally, deflection-controlled shell response is characterized via load-deflection data, deformation-shape evolutions, and the resulting damage state. Finite element models are used to study the kinematically nonlinear shell response, including bifurcation, limit points, and postbuckling. A novel technique is developed for evaluating bifurcation from nonlinear prebuckling states, utilizing asymmetric spatial discretization to introduce numerical perturbations. Advantages of the asymmetric meshing technique (AMT) over traditional techniques include efficiency, robustness, ease of application, and solution of the actual (not modified) problems. The AMT is validated by comparison to traditional numerical analysis of a benchmark problem and verified by comparison to experimental data. Applying the technique, bifurcation in a benchmark shell-buckling problem is correctly identified. Excellent agreement between the numerical and experimental results is obtained for a number of composite shells, although predictive capability decreases for stiffer (thicker) specimens, which is attributed to compliance of the test fixture. Restraining the axial edge (simple support) has the effect of creating a more complex response which involves unstable bifurcation, limit-point buckling, and dynamic collapse.
Such shells were noted to bifurcate into asymmetric deformation modes but were undamaged during testing. Shells in this study which were damaged were not observed to bifurcate. Thus, a direct link between bifurcation and atypical damage could not be established although the mechanism (bifurcation) was identified. Recommendations for further work in these related areas are provided and include extensions of the AMT to other shell geometries and structural problems.

  20. Development and Experimental Benchmark of Simulations to Predict Used Nuclear Fuel Cladding Temperatures during Drying and Transfer Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greiner, Miles

    Radial hydride formation in high-burnup used fuel cladding has the potential to radically reduce its ductility and suitability for long-term storage and eventual transport. To avoid this formation, the maximum post-reactor temperature must remain sufficiently low to limit the cladding hoop stress, so that hydrogen from the existing circumferential hydrides will not dissolve and become available to re-precipitate into radial hydrides under the slow cooling conditions during drying, transfer and early dry-cask storage. The objective of this research is to develop and experimentally benchmark computational fluid dynamics simulations of heat transfer in post-pool-storage drying operations, when high-burnup fuel cladding is likely to experience its highest temperature. These benchmarked tools can play a key role in evaluating dry cask storage systems for extended storage of high-burnup fuels and post-storage transportation, including fuel retrievability. The benchmarked tools will be used to aid the design of efficient drying processes, as well as to estimate variations of surface temperatures as a means of inferring helium integrity inside the canister or cask. This work will be conducted effectively because the principal investigator has experience developing these types of simulations, and has constructed a test facility that can be used to benchmark them.

  1. Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator

    NASA Astrophysics Data System (ADS)

    Chitarin, G.; Agostinetti, P.; Gallo, A.; Marconato, N.; Nakano, H.; Serianni, G.; Takeiri, Y.; Tsumori, K.

    2011-09-01

    For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.

  2. Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chitarin, G.; University of Padova, Dept. of Management and Engineering, strad. S. Nicola, 36100 Vicenza; Agostinetti, P.

    2011-09-26

    For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.

  3. Evaluation of control strategies using an oxidation ditch benchmark.

    PubMed

    Abusam, A; Keesman, K J; Spanjers, H; van Straten, G; Meinema, K

    2002-01-01

    This paper presents validation and implementation results of a benchmark developed for a specific full-scale oxidation ditch wastewater treatment plant. A benchmark is a standard simulation procedure that can be used as a tool in evaluating various control strategies proposed for wastewater treatment plants. It is based on model and performance criteria development. Testing of this benchmark, by comparing benchmark predictions to real measurements of the electrical energy consumption and the amount of disposed sludge for a specific oxidation ditch WWTP, has shown that it can reasonably be used for evaluating the performance of this WWTP. Subsequently, the validated benchmark was used in evaluating some basic and advanced control strategies. Some of the interesting results obtained are the following: (i) the influent flow splitting ratio, between the first and the fourth aerated compartments of the ditch, has no significant effect on the TN concentrations in the effluent; and (ii) for evaluation of long-term control strategies, future benchmarks need to be able to assess settlers' performance.

  4. A domain decomposition approach to implementing fault slip in finite-element models of quasi-static and dynamic crustal deformation

    USGS Publications Warehouse

    Aagaard, Brad T.; Knepley, M.G.; Williams, C.A.

    2013-01-01

    We employ a domain decomposition approach with Lagrange multipliers to implement fault slip in a finite-element code, PyLith, for use in both quasi-static and dynamic crustal deformation applications. This integrated approach to solving both quasi-static and dynamic simulations leverages common finite-element data structures and implementations of various boundary conditions, discretization schemes, and bulk and fault rheologies. We have developed a custom preconditioner for the Lagrange multiplier portion of the system of equations that provides excellent scalability with problem size compared to conventional additive Schwarz methods. We demonstrate application of this approach using benchmarks for both quasi-static viscoelastic deformation and dynamic spontaneous rupture propagation that verify the numerical implementation in PyLith.
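
The core idea, imposing fault slip as a kinematic constraint enforced by Lagrange multipliers, can be shown on a toy 1D elastic system. This is a generic saddle-point sketch under assumed spring stiffnesses, not PyLith's data structures:

```python
import numpy as np

# toy 1D "elastic" system: two springs on each side of a fault, with the outer
# ends fixed (those dofs eliminated); free dofs are u2 (left of fault), u3 (right)
k = 1.0
K = np.array([[2 * k, 0.0],
              [0.0, 2 * k]])   # stiffness matrix for the free dofs
f = np.zeros(2)                # no external loads: slip alone drives deformation

# the constraint C u = d prescribes the fault slip u3 - u2 = d
C = np.array([[-1.0, 1.0]])
d = np.array([0.5])

# saddle-point (KKT) system  [[K, C^T], [C, 0]] [u; lam] = [f; d]
A = np.block([[K, C.T],
              [C, np.zeros((1, 1))]])
sol = np.linalg.solve(A, np.concatenate([f, d]))
u, lam = sol[:2], sol[2:]
```

The multiplier lam plays the role of the fault traction needed to sustain the prescribed slip; the zero block in the lower-right corner is what makes such systems indefinite and motivates the custom preconditioning discussed in the abstract.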

  5. Surface tension models for a multi-material ALE code with AMR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wangyi; Koniges, Alice; Gott, Kevin

    A number of surface tension models have been implemented in a 3D multi-physics multi-material code, ALE–AMR, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR). ALE–AMR is unique in its ability to model hot radiating plasmas, cold fragmenting solids, and most recently, the deformation of molten material. The surface tension models implemented include a diffuse interface approach with special numerical techniques to remove parasitic flow, and a height function approach in conjunction with a volume-fraction interface reconstruction package. These surface tension models are benchmarked with a variety of test problems. Based on the results, the height function approach using volume fractions was chosen to simulate droplet dynamics associated with extreme ultraviolet (EUV) lithography.
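
The height-function approach that was selected can be illustrated in 2D: volume fractions are summed down each column to give interface heights, and curvature follows from finite differences of those heights. A self-contained sketch on a synthetic interface; the grid sizes and test interface are arbitrary choices, and in a real VOF code the fractions would come from the interface reconstruction package:

```python
import numpy as np

def column_heights(frac, dy):
    """Interface height per column from a volume-fraction field (axis 0 = y)."""
    return frac.sum(axis=0) * dy

def curvature(h, dx):
    """Height-function curvature at interior columns: h'' / (1 + h'^2)^(3/2)."""
    hp = (h[2:] - h[:-2]) / (2 * dx)            # central first derivative
    hpp = (h[2:] - 2 * h[1:-1] + h[:-2]) / dx**2  # central second derivative
    return hpp / (1 + hp**2) ** 1.5

# synthetic check: exact volume fractions for the interface y = 0.5 + 0.1 * x^2
nx, ny, dx, dy = 64, 64, 1 / 64, 1 / 64
x = (np.arange(nx) + 0.5) * dx                  # cell-center coordinates
y = (np.arange(ny) + 0.5) * dy
H = 0.5 + 0.1 * x**2
# fraction of each cell lying below the interface (cells span [y - dy/2, y + dy/2])
frac = np.clip((H[None, :] - (y[:, None] - dy / 2)) / dy, 0.0, 1.0)
kappa = curvature(column_heights(frac, dy), dx)
```

For this quadratic interface the recovered curvature matches the analytic value 0.2 / (1 + (0.2 x)^2)^(3/2) to machine precision, which is the kind of consistency check such models are benchmarked with before being applied to droplet dynamics.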

  6. Orbiter entry aerothermodynamics

    NASA Technical Reports Server (NTRS)

    Ried, R. C.

    1985-01-01

    The challenge in the definition of the entry aerothermodynamic environment arising from the requirement of a reliable and reusable Orbiter is reviewed in light of the existing technology. Select problems pertinent to the Orbiter development are discussed with reference to comprehensive treatments. These problems include boundary layer transition, leeward-side heating, shock/shock interaction scaling, tile gap heating, and nonequilibrium effects such as surface catalysis. Sample measurements obtained from test flights of the Orbiter are presented with comparison to preflight expectations. Numerical and wind tunnel simulations provided efficient information for defining the entry environment and an adequate level of preflight confidence. The high-quality flight data provide an opportunity to refine the operational capability of the Orbiter and serve as a benchmark both for the development of aerothermodynamic technology and for use in meeting future entry heating challenges.

  7. Energy resolved actinometry for simultaneous measurement of atomic oxygen densities and local mean electron energies in radio-frequency driven plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greb, Arthur, E-mail: ag941@york.ac.uk; Niemi, Kari; O'Connell, Deborah

    2014-12-08

    A diagnostic method for the simultaneous determination of atomic oxygen densities and mean electron energies is demonstrated for an atmospheric pressure radio-frequency plasma jet. The proposed method is based on phase-resolved optical emission measurements of the direct and dissociative electron-impact excitation dynamics of three distinct emission lines, namely, Ar 750.4 nm, O 777.4 nm, and O 844.6 nm. The energy dependence of these lines serves as the basis for analysis by taking into account two line ratios. Within this framework, the method is highly adaptable with regard to pressure and gas composition. Results are benchmarked against independent numerical simulations and two-photon absorption laser-induced fluorescence experiments.

  8. Surface tension models for a multi-material ALE code with AMR

    DOE PAGES

    Liu, Wangyi; Koniges, Alice; Gott, Kevin; ...

    2017-06-01

    A number of surface tension models have been implemented in a 3D multi-physics multi-material code, ALE–AMR, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR). ALE–AMR is unique in its ability to model hot radiating plasmas, cold fragmenting solids, and most recently, the deformation of molten material. The surface tension models implemented include a diffuse interface approach with special numerical techniques to remove parasitic flow, and a height function approach in conjunction with a volume-fraction interface reconstruction package. These surface tension models are benchmarked with a variety of test problems. Based on the results, the height function approach using volume fractions was chosen to simulate droplet dynamics associated with extreme ultraviolet (EUV) lithography.

  9. Hazards of benchmarking complications with the National Trauma Data Bank: numerators in search of denominators.

    PubMed

    Kardooni, Shahrzad; Haut, Elliott R; Chang, David C; Pierce, Charles A; Efron, David T; Haider, Adil H; Pronovost, Peter J; Cornwell, Edward E

    2008-02-01

    Complication rates after trauma may serve as important indicators of quality of care. Meaningful performance benchmarks for complication rates require reference standards from valid and reliable data. Selection of appropriate numerators and denominators is a major consideration for data validity in performance improvement and benchmarking. We examined the suitability of the National Trauma Data Bank (NTDB) as a reference for benchmarking trauma center complication rates. We selected the five most commonly reported complications in the NTDB v. 6.1 (pneumonia, urinary tract infection, acute respiratory distress syndrome, deep vein thrombosis, myocardial infarction). We compared rates for each complication using three different denominators defined by different populations at risk: (A) all patients from all 700 reporting facilities (n = 1,466,887); (B) only patients from the 441 hospitals reporting at least one complication (n = 1,307,729); and (C) patients from hospitals reporting at least one occurrence of each specific complication, giving a unique denominator for each complication (n range = 869,675-1,167,384). We also looked at differences in hospital characteristics between complication reporters and nonreporters. There was a 12.2% increase in the rate of each complication when patients from facilities not reporting any complications were excluded from the denominator. When rates were calculated using a unique denominator for each complication, rates increased 25% to 70%. The change from rate A to rate C produced a new rank order for the top five complications. When compared directly, rates B and C were also significantly different for all complications (all p < 0.01). Hospitals that reported complication information had significantly higher annual admissions and were more likely to be designated level I or II trauma centers and be university teaching hospitals.
There is great variability in the complication data reported to the NTDB, which may introduce bias and significantly influence the complication rates reported. This potential for bias creates a challenge for appropriately interpreting complication rates for hospital performance benchmarking. We recognize that large aggregated registries such as the NTDB are valuable tools for benchmarking and performance improvement. However, we strongly advocate conscientious selection of the numerators and denominators that serve as the basic foundation for such research.
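The denominator-selection effect described above can be sketched with toy numbers (hypothetical hospital counts, not NTDB data): rate A divides by all patients, rate B by patients at hospitals reporting at least one complication, and rate C by patients at hospitals reporting that specific complication.

```python
# Toy illustration (hypothetical numbers, not NTDB data) of how the
# choice of denominator changes a complication rate.

# hospital -> (patients, {complication: count}); empty dict = non-reporter
hospitals = {
    "A": (1000, {"pneumonia": 30, "dvt": 10}),
    "B": (800,  {"pneumonia": 16}),
    "C": (1200, {}),               # reports no complications at all
}

def rate(comp, denominator_rule):
    num = sum(c.get(comp, 0) for _, c in hospitals.values())
    if denominator_rule == "A":    # all patients, all hospitals
        den = sum(n for n, _ in hospitals.values())
    elif denominator_rule == "B":  # hospitals reporting >= 1 complication
        den = sum(n for n, c in hospitals.values() if c)
    else:                          # "C": hospitals reporting this complication
        den = sum(n for n, c in hospitals.values() if comp in c)
    return num / den

print(round(rate("pneumonia", "A"), 4))  # 46/3000
print(round(rate("pneumonia", "B"), 4))  # 46/1800 -- rate rises
print(round(rate("dvt", "C"), 4))        # 10/1000
```

Excluding non-reporters shrinks the denominator while the numerator is unchanged, so the apparent rate can only increase, which is the bias mechanism the abstract describes.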

  10. Comparison of mapping algorithms used in high-throughput sequencing: application to Ion Torrent data

    PubMed Central

    2014-01-01

Background: The rapid evolution of high-throughput sequencing (HTS) technologies has opened up new perspectives in several research fields and led to the production of large volumes of sequence data. A fundamental step in HTS data analysis is the mapping of reads onto reference sequences. Choosing a suitable mapper for a given technology and a given application is a subtle task because of the difficulty of evaluating mapping algorithms. Results: In this paper, we present a benchmark procedure to compare mapping algorithms used in HTS using both real and simulated datasets and considering four evaluation criteria: computational resource and time requirements, robustness of mapping, ability to report positions for reads in repetitive regions, and ability to retrieve true genetic variation positions. To measure robustness, we introduced a new definition of a correctly mapped read that takes into account not only the expected start position of the read but also the end position and the number of indels and substitutions. We developed CuReSim, a new read simulator that is able to generate customized benchmark data for any kind of HTS technology by adjusting parameters to the error types. CuReSim and CuReSimEval, a tool to evaluate the mapping quality of CuReSim-simulated reads, are freely available. We applied our benchmark procedure to evaluate 14 mappers in the context of whole-genome sequencing of small genomes with Ion Torrent data, for which such a comparison has not yet been established. Conclusions: A benchmark procedure to compare HTS data mappers is introduced with a new definition of mapping correctness, as well as tools to generate simulated reads and evaluate mapping quality.
The application of this procedure to Ion Torrent data from the whole genome sequencing of small genomes has allowed us to validate our benchmark procedure and demonstrate that it is helpful for selecting a mapper based on the intended application, questions to be addressed, and the technology used. This benchmark procedure can be used to evaluate existing or in-development mappers as well as to optimize parameters of a chosen mapper for any application and any sequencing platform. PMID:24708189
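The paper's notion of a correctly mapped read — start position, end position, and edit counts all within tolerance, rather than start position alone — can be sketched as follows; the field names and tolerances here are illustrative assumptions, not CuReSimEval's actual interface.

```python
def correctly_mapped(read, aln, pos_tol=0, edit_tol=0):
    """Correctness criterion in the spirit of the paper's definition:
    compare not only the expected start position but also the end
    position and the numbers of indels/substitutions.
    (Sketch with assumed field names, not CuReSimEval's real API.)"""
    return (abs(aln["start"] - read["true_start"]) <= pos_tol
            and abs(aln["end"] - read["true_end"]) <= pos_tol
            and abs(aln["indels"] - read["true_indels"]) <= edit_tol
            and abs(aln["subs"] - read["true_subs"]) <= edit_tol)

read = {"true_start": 100, "true_end": 149, "true_indels": 1, "true_subs": 2}
good = {"start": 100, "end": 149, "indels": 1, "subs": 2}
off  = {"start": 100, "end": 152, "indels": 4, "subs": 2}
print(correctly_mapped(read, good))  # True
print(correctly_mapped(read, off))   # False: right start, wrong end/edits
```

The second alignment would count as correct under a start-position-only criterion, which is exactly the ambiguity the stricter definition removes.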

  11. Benchmarking of measurement and simulation of transverse rms-emittance growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeon, Dong-O

    2008-01-01

Transverse emittance growth along the Alvarez DTL section is a major concern with respect to the preservation of beam quality of high-current beams at the GSI UNILAC. In order to define measures to reduce this growth, appropriate tools to simulate the beam dynamics are indispensable. This paper describes the benchmarking of three beam dynamics simulation codes, i.e. DYNAMION, PARMILA, and PARTRAN, against systematic measurements of beam emittances for different machine settings. Experimental set-ups, data reduction, the preparation of the simulations, and the evaluation of the simulations are described. It was found that the measured 100%-rms-emittances behind the DTL exceed the simulated values. Comparing measured 90%-rms-emittances to the simulated 95%-rms-emittances instead gives fair to good agreement. The sum of horizontal and vertical emittances is even described well by the codes as long as experimental 90%-rms-emittances are compared to simulated 95%-rms-emittances. Finally, the successful reduction of transverse emittance growth by systematic beam matching is reported.
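The quantity being benchmarked here, the statistical (rms) emittance, is computed from the second central moments of the trace-space distribution. A minimal sketch, using illustrative Gaussian sample data rather than UNILAC measurements:

```python
import math
import random

def rms_emittance(x, xp):
    """Statistical (rms) emittance of a 2D trace-space distribution:
    eps = sqrt(<x^2><x'^2> - <x x'>^2), using centered moments."""
    n = len(x)
    mx, mxp = sum(x) / n, sum(xp) / n
    x2 = sum((u - mx) ** 2 for u in x) / n
    xp2 = sum((v - mxp) ** 2 for v in xp) / n
    xxp = sum((u - mx) * (v - mxp) for u, v in zip(x, xp)) / n
    return math.sqrt(max(x2 * xp2 - xxp ** 2, 0.0))

random.seed(0)
x = [random.gauss(0, 1.0) for _ in range(20000)]
xp = [random.gauss(0, 0.5) for _ in range(20000)]  # uncorrelated beam
print(rms_emittance(x, xp))  # close to sigma_x * sigma_x' = 0.5
```

A percentage-qualified emittance (the 90%- or 95%-rms-emittance compared in the paper) would apply the same formula to the innermost fraction of particles, which is why the choice of fraction matters when comparing measurement and simulation.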

  12. Numerical Simulation Applications in the Design of EGS Collab Experiment 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, Henry; White, Mark D.; Fu, Pengcheng

The United States Department of Energy, Geothermal Technologies Office (GTO) is funding a collaborative investigation of enhanced geothermal systems (EGS) processes at the meso-scale. This study, referred to as the EGS Collab project, is a unique opportunity for scientists and engineers to investigate the creation of fracture networks and the circulation of fluids across those networks under in-situ stress conditions. The EGS Collab project is envisioned to comprise three experiments; the site for the first experiment is on the 4850 Level (4,850 feet below ground surface) in phyllite of the Precambrian Poorman formation, at the Sanford Underground Research Facility, located at the former Homestake Gold Mine in Lead, South Dakota. Principal objectives of the project are to develop a number of intermediate-scale field sites and to conduct well-controlled in situ experiments focused on rock fracture behavior and permeability enhancement. Data generated during these experiments will be compared against predictions of a suite of computer codes specifically designed to solve problems involving coupled thermal, hydrological, geomechanical, and geochemical processes. Comparisons between experimental and numerical simulation results will provide code developers with direction for improvements and verification of process models, build confidence in the suite of available numerical tools, and ultimately identify critical future development needs for the geothermal modeling community. Moreover, conducting thorough comparisons of models, modelling approaches, measurement approaches and measured data, via the EGS Collab project, will serve to identify techniques that are most likely to succeed at the Frontier Observatory for Research in Geothermal Energy (FORGE), the GTO's flagship EGS research effort.
As noted, outcomes from the EGS Collab project experiments will serve as benchmarks for computer code verification, but numerical simulation additionally plays an essential role in designing these meso-scale experiments. This paper describes specific numerical simulations supporting the design of Experiment 1, a field test involving hydraulic stimulation of two fractures from notched sections of the injection borehole and fluid circulation between sub-horizontal injection and production boreholes in each fracture individually and collectively, including the circulation of chilled water. Whereas the mine drift allows for accurate and close placement of monitoring instrumentation to the developed fractures, active ventilation in the drift cooled the rock mass within the experimental volume. Numerical simulations were executed to predict seismic events and magnitudes during stimulation, initial fracture orientations for smooth horizontal wellbores, pressure requirements for fracture initiation from notched wellbores, fracture propagation during stimulation between the injection and production boreholes, tracer travel times between the injection and production boreholes, produced fluid temperatures with chilled water injections, pressure limits on fluid circulation to avoid fracture growth, temperature environment surrounding the 4850 Level drift, and fracture propagation within a stress field altered by drift excavation, ventilation cooling, and dewatering.

  13. Efficient numerical schemes for viscoplastic avalanches. Part 2: The 2D case

    NASA Astrophysics Data System (ADS)

    Fernández-Nieto, Enrique D.; Gallardo, José M.; Vigneaux, Paul

    2018-01-01

    This paper deals with the numerical resolution of a shallow water viscoplastic flow model. Viscoplastic materials are characterized by the existence of a yield stress: below a certain critical threshold in the imposed stress, there is no deformation and the material behaves like a rigid solid, but when that yield value is exceeded, the material flows like a fluid. In the context of avalanches, it means that after going down a slope, the material can stop and its free surface has a non-trivial shape, as opposed to the case of water (Newtonian fluid). The model involves variational inequalities associated with the yield threshold: finite volume schemes are used together with duality methods (namely Augmented Lagrangian and Bermúdez-Moreno) to discretize the problem. To be able to accurately simulate the stopping behavior of the avalanche, new schemes need to be designed, involving the classical notion of well-balancing. In the present context, it needs to be extended to take into account the viscoplastic nature of the material as well as general bottoms with wet/dry fronts which are encountered in geophysical geometries. Here we derive such schemes in 2D as the follow up of the companion paper treating the 1D case. Numerical tests include in particular a generalized 2D benchmark for Bingham codes (the Bingham-Couette flow with two non-zero boundary conditions on the velocity) and a simulation of the avalanche path of Taconnaz in Chamonix-Mont-Blanc to show the usability of these schemes on real topographies from digital elevation models (DEM).

  14. Image analysis method for the measurement of water saturation in a two-dimensional experimental flow tank

    NASA Astrophysics Data System (ADS)

    Belfort, Benjamin; Weill, Sylvain; Lehmann, François

    2017-07-01

A novel, non-invasive imaging technique is proposed that determines 2D maps of water content in unsaturated porous media. The method directly relates digitally measured intensities to the water content of the porous medium, and requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling, and calibration. The main advantages of this approach are that no separate calibration experiment is needed, because the calibration curve relating water content to reflected light intensity is established during the main monitoring phase of each experiment, and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm × 14 cm × 6 cm (L × W × D) was carried out to validate the methodology. The accuracy of the proposed approach is assessed using a statistical framework to perform an error analysis and numerical simulations with a state-of-the-art computational code that solves the Richards equation. Comparison of the cumulative mass leaving and entering the flow tank and of the water content maps produced by the photographic measurement technique and the numerical simulations demonstrates the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Finally, the photometric procedure has been developed expressly with a view to its extension to heterogeneous media. Other processes may be investigated through different laboratory experiments, which will serve as benchmarks for the validation of numerical codes.
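A minimal sketch of the intensity-to-water-content mapping, assuming an affine calibration between dry and saturated reference intensities; the paper builds its calibration curve during the monitoring phase itself, and the values here are hypothetical:

```python
def water_content(I, I_dry, I_sat, theta_r, theta_s):
    """Map a measured pixel intensity to water content by linear
    interpolation between dry and saturated reference intensities.
    A simplification: the paper derives the calibration curve from
    the experiment itself; an affine relation is assumed here."""
    s = (I - I_dry) / (I_sat - I_dry)  # normalized intensity in [0, 1]
    s = min(max(s, 0.0), 1.0)          # clip measurement noise
    return theta_r + s * (theta_s - theta_r)

# Hypothetical values: dry-image intensity 40, saturated 200,
# residual and saturated water contents 0.05 and 0.40.
print(round(water_content(120, 40, 200, 0.05, 0.40), 3))  # 0.225
```

Applied pixel-by-pixel to a normalized, background-subtracted image sequence, this yields the 2D water content maps at each photograph's time stamp.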

  15. Electron-beam-ion-source (EBIS) modeling progress at FAR-TECH, Inc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J. S., E-mail: kim@far-tech.com; Zhao, L., E-mail: kim@far-tech.com; Spencer, J. A., E-mail: kim@far-tech.com

FAR-TECH, Inc. has been developing a numerical modeling tool for electron-beam ion sources (EBISs). The tool consists of two codes: the Particle-Beam-Gun-Simulation (PBGUNS) code, which simulates a steady-state electron beam, and the EBIS Particle-In-Cell (EBIS-PIC) code, which simulates ion charge breeding with the electron beam. PBGUNS, a 2D (r,z) electron gun and ion source simulation code, has been extended for efficient modeling of EBISs, and that work was presented previously. EBIS-PIC is a space-charge self-consistent PIC code written to simulate charge breeding in an axisymmetric 2D (r,z) device, allowing for full three-dimensional ion dynamics. This 2D code has been successfully benchmarked against Test-EBIS measurements at Brookhaven National Laboratory. For long-timescale (tens of ms) ion charge breeding, the 2D EBIS-PIC simulations take a long computational time, making the simulation less practical. Most of the EBIS charge breeding, however, may be modeled in 1D (r), as the axial dependence of the ion dynamics may be ignored in the trap. Where 1D approximations are valid, simulations of charge breeding in an EBIS over long time scales become possible, using EBIS-PIC together with PBGUNS. Initial 1D results are presented, including the significance of the magnetic field to ion dynamics, ion cooling effects due to collisions with neutral gas, and the role of Coulomb collisions.

  16. A locally conservative non-negative finite element formulation for anisotropic advective-diffusive-reactive systems

    NASA Astrophysics Data System (ADS)

    Mudunuru, M. K.; Shabouei, M.; Nakshatrala, K.

    2015-12-01

Advection-diffusion-reaction (ADR) equations appear in various areas of the life sciences, hydrogeological systems, and contaminant transport. Obtaining stable and accurate numerical solutions can be challenging, as the underlying equations are coupled, nonlinear, and non-self-adjoint. Currently, there is neither a robust computational framework nor a reliable commercial package that can handle the various complex situations that arise. Herein, we present a novel locally conservative, non-negative finite element formulation that preserves the underlying physical and mathematical properties of a general linear transient anisotropic ADR equation. In the continuous setting, the governing equations for ADR systems possess various important properties, which in general are not inherited under finite difference, finite volume, or finite element discretizations. The objective of this poster presentation is twofold. First, we analyze whether existing numerical formulations (such as SUPG and GLS) and commercial packages provide physically meaningful values for the concentration of chemical species for various realistic benchmark problems. We also quantify the errors incurred in satisfying the local and global species balance for two popular chemical kinetics schemes: CDIMA (chlorine dioxide-iodine-malonic acid) and BZ (Belousov-Zhabotinsky). Based on these numerical simulations, we show that SUPG and GLS produce unphysical values for the concentration of chemical species due to violation of the non-negativity constraint, contain spurious node-to-node oscillations, and have large errors in local and global species balance. Second, we propose a novel finite element formulation to overcome these difficulties.
The proposed locally conservative, non-negative computational framework, based on low-order least-squares finite elements, is able to preserve these underlying physical and mathematical properties. Several representative numerical examples are discussed to illustrate the importance of the proposed formulations in accurately describing various aspects of the mixing process in chaotic flows and in simulating transport in highly heterogeneous anisotropic media.
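The non-negativity violations mentioned above can be reproduced in miniature: central differencing of a 1D advection-dominated problem at mesh Peclet number above one produces node-to-node oscillations with undershoots below zero, analogous to the unphysical concentrations reported for SUPG/GLS. This is a toy model, not the poster's formulations.

```python
# Toy illustration of non-negativity violation in a discretization:
# -eps*u'' + u' = 0 on (0,1), u(0)=0, u(1)=1, central differences.
# For mesh Peclet number h/(2*eps) > 1 the discrete solution oscillates
# and takes negative (unphysical) values, though the exact solution
# satisfies 0 <= u <= 1 everywhere.
eps, n = 0.01, 20
h = 1.0 / n
a = -eps / h**2 - 1.0 / (2 * h)   # coefficient of u[i-1]
b = 2 * eps / h**2                # coefficient of u[i]
c = -eps / h**2 + 1.0 / (2 * h)   # coefficient of u[i+1]

# Thomas algorithm for the interior unknowns u[1..n-1]
m = n - 1
rhs = [0.0] * m
rhs[-1] = -c * 1.0                # boundary value u(1) = 1
cp, dp = [0.0] * m, [0.0] * m
cp[0], dp[0] = c / b, rhs[0] / b
for i in range(1, m):
    denom = b - a * cp[i - 1]
    cp[i] = c / denom
    dp[i] = (rhs[i] - a * dp[i - 1]) / denom
u = [0.0] * m
u[-1] = dp[-1]
for i in range(m - 2, -1, -1):
    u[i] = dp[i] - cp[i] * u[i + 1]

print(min(u) < 0.0)  # True: the scheme violates u >= 0
```

Here the mesh Peclet number is h/(2·eps) = 2.5, well past the stability bound of one, and the undershoots near the outflow boundary layer are large, not merely round-off-sized.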

  17. Notes on numerical reliability of several statistical analysis programs

    USGS Publications Warehouse

    Landwehr, J.M.; Tasker, Gary D.

    1999-01-01

    This report presents a benchmark analysis of several statistical analysis programs currently in use in the USGS. The benchmark consists of a comparison between the values provided by a statistical analysis program for variables in the reference data set ANASTY and their known or calculated theoretical values. The ANASTY data set is an amendment of the Wilkinson NASTY data set that has been used in the statistical literature to assess the reliability (computational correctness) of calculated analytical results.
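The kind of computational-correctness failure probed by NASTY-style data can be sketched as follows: the one-pass "textbook" standard deviation formula loses essentially all accuracy on data with a tiny spread around a huge offset, while a two-pass algorithm does not. The values below are illustrative, not the ANASTY reference set.

```python
import math
import statistics

def naive_std(xs):
    """One-pass 'textbook' formula sqrt((sum x^2 - n*mean^2)/(n-1)).
    Suffers catastrophic cancellation for large-magnitude,
    low-variance data, the failure mode NASTY-style sets expose."""
    n = len(xs)
    m = sum(xs) / n
    ss = sum(x * x for x in xs) - n * m * m
    return math.sqrt(max(ss, 0.0) / (n - 1))

# In the spirit of Wilkinson's NASTY data: tiny spread, huge offset.
data = [1e9 + i for i in (1.0, 2.0, 3.0, 4.0, 5.0)]
print(naive_std(data))         # wildly wrong (cancellation)
print(statistics.stdev(data))  # correct two-pass result, ~1.5811
```

The known theoretical value here is sqrt(2.5) ≈ 1.5811; a benchmark in the spirit of this report compares each package's output against such exact values.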

  18. Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.

    2015-12-01

Seismic wave propagation codes are essential tools to investigate a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients for improving the resolution of tomographic images, to answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited by high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted by additional hardware accelerators, e.g. AMD graphics cards, ARM-based processors, and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST) which allows us to use meta-programming for all computational kernels for forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both the CUDA and OpenCL languages within the source code package. Seismic wave simulations are thus now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware configurations.

  19. A validated non-linear Kelvin-Helmholtz benchmark for numerical hydrodynamics

    NASA Astrophysics Data System (ADS)

    Lecoanet, D.; McCourt, M.; Quataert, E.; Burns, K. J.; Vasil, G. M.; Oishi, J. S.; Brown, B. P.; Stone, J. M.; O'Leary, R. M.

    2016-02-01

    The non-linear evolution of the Kelvin-Helmholtz instability is a popular test for code verification. To date, most Kelvin-Helmholtz problems discussed in the literature are ill-posed: they do not converge to any single solution with increasing resolution. This precludes comparisons among different codes and severely limits the utility of the Kelvin-Helmholtz instability as a test problem. The lack of a reference solution has led various authors to assert the accuracy of their simulations based on ad hoc proxies, e.g. the existence of small-scale structures. This paper proposes well-posed two-dimensional Kelvin-Helmholtz problems with smooth initial conditions and explicit diffusion. We show that in many cases numerical errors/noise can seed spurious small-scale structure in Kelvin-Helmholtz problems. We demonstrate convergence to a reference solution using both ATHENA, a Godunov code, and DEDALUS, a pseudo-spectral code. Problems with constant initial density throughout the domain are relatively straightforward for both codes. However, problems with an initial density jump (which are the norm in astrophysical systems) exhibit rich behaviour and are more computationally challenging. In the latter case, ATHENA simulations are prone to an instability of the inner rolled-up vortex; this instability is seeded by grid-scale errors introduced by the algorithm, and disappears as resolution increases. Both ATHENA and DEDALUS exhibit late-time chaos. Inviscid simulations are riddled with extremely vigorous secondary instabilities which induce more mixing than simulations with explicit diffusion. Our results highlight the importance of running well-posed test problems with demonstrated convergence to a reference solution. To facilitate future comparisons, we include as supplementary material the resolved, converged solutions to the Kelvin-Helmholtz problems in this paper in machine-readable form.
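Initial conditions of the general form used for such well-posed KH problems, tanh shear layers plus a small, explicitly specified perturbation, can be sketched as follows; the parameter values are illustrative, not the paper's exact configuration.

```python
import math

# Smooth, well-posed KH initial condition: two tanh shear layers in a
# doubly periodic box plus a small explicit perturbation, so that the
# seeds of the instability are specified rather than left to grid noise.
# Parameter values below are illustrative assumptions.
a, sigma, A, uflow = 0.05, 0.2, 0.01, 1.0
z1, z2 = 0.5, 1.5  # shear-layer positions

def vx(z):
    """Horizontal flow: +uflow between the layers, -uflow outside."""
    return uflow * (math.tanh((z - z1) / a) - math.tanh((z - z2) / a) - 1.0)

def vz(x, z):
    """Small vertical perturbation localized at the two shear layers."""
    return A * math.sin(2.0 * math.pi * x) * (
        math.exp(-((z - z1) / sigma) ** 2)
        + math.exp(-((z - z2) / sigma) ** 2))

print(round(vx(1.0), 6))  # mid-channel, close to +uflow
print(round(vx(0.0), 6))  # outside the layers, close to -uflow
```

Because the shear profile is smooth and the perturbation amplitude is explicit, refining the grid converges to one reference solution, unlike discontinuous setups where grid-scale noise seeds the instability.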

  20. 3D thermal modeling of TRISO fuel coupled with neutronic simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Jianwei; Uddin, Rizwan

    2010-01-01

The Very High Temperature Gas Reactor (VHTR) is widely considered one of the top candidates identified in the Next Generation Nuclear Plant (NGNP) Technology Roadmap under the U.S. Department of Energy's Generation IV program. The TRISO particle is a common element among different VHTR designs, and its performance is critical to the safety and reliability of the whole reactor. A TRISO particle experiences complex thermo-mechanical changes during reactor operation under high-temperature and high-burnup conditions. TRISO fuel performance analysis requires evaluation of these changes at the micro scale. Since most of these changes are temperature dependent, 3D thermal modeling of TRISO fuel is a crucial step of the whole analysis package. In this paper, a 3D numerical thermal model was developed to calculate the temperature distribution inside a TRISO particle and a pebble under different scenarios. 3D simulation is required because pebbles and TRISO particles are always subjected to asymmetric thermal conditions, since they are randomly packed together. The numerical model was developed using the finite difference method and was benchmarked against 1D analytical results and results reported in the literature. Monte Carlo models were set up to calculate the radial power density profile. A complex convective boundary condition was applied on the pebble outer surface. Three reactors were simulated using this model to calculate temperature distributions under different power levels. Two asymmetric boundary conditions were applied to the pebble to test the 3D capabilities. A gas bubble was hypothesized inside the TRISO kernel, and a 3D simulation was also carried out for this scenario. Results consistent with physical intuition were obtained and are reported in this paper.
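A typical 1D analytical benchmark for such a thermal model is steady conduction in a sphere with uniform heat generation, for which the temperature profile and the surface energy balance are known in closed form. The kernel-scale values below are illustrative, not the paper's cases.

```python
import math

# 1D analytical benchmark: steady conduction in a sphere with uniform
# heat generation q, conductivity k, and fixed surface temperature Ts:
#     T(r) = Ts + q*(R**2 - r**2) / (6*k)
# Illustrative values (W/m^3, W/(m K), m, K), not the paper's cases.
q, k, R, Ts = 5.0e8, 3.5, 2.5e-4, 1100.0

def T(r):
    return Ts + q * (R**2 - r**2) / (6.0 * k)

# Energy balance check: conduction out of the surface must equal the
# total heat generated inside the sphere.
surface_flux = q * R / 3.0                        # -k dT/dr at r = R
power_out = surface_flux * 4.0 * math.pi * R**2
power_gen = q * (4.0 / 3.0) * math.pi * R**3

print(round(T(0.0) - Ts, 3))                # centre-to-surface rise, K
print(math.isclose(power_out, power_gen))   # True
```

A finite-difference solver of the kind described in the abstract can be verified by checking that its computed center-to-surface temperature rise converges to q·R²/(6k) as the radial grid is refined.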

  1. Benchmarking of vertically-integrated CO2 flow simulations at the Sleipner Field, North Sea

    NASA Astrophysics Data System (ADS)

    Cowton, L. R.; Neufeld, J. A.; White, N. J.; Bickle, M. J.; Williams, G. A.; White, J. C.; Chadwick, R. A.

    2018-06-01

    Numerical modeling plays an essential role in both identifying and assessing sub-surface reservoirs that might be suitable for future carbon capture and storage projects. Accuracy of flow simulations is tested by benchmarking against historic observations from on-going CO2 injection sites. At the Sleipner project located in the North Sea, a suite of time-lapse seismic reflection surveys enables the three-dimensional distribution of CO2 at the top of the reservoir to be determined as a function of time. Previous attempts have used Darcy flow simulators to model CO2 migration throughout this layer, given the volume of injection with time and the location of the injection point. Due primarily to computational limitations preventing adequate exploration of model parameter space, these simulations usually fail to match the observed distribution of CO2 as a function of space and time. To circumvent these limitations, we develop a vertically-integrated fluid flow simulator that is based upon the theory of topographically controlled, porous gravity currents. This computationally efficient scheme can be used to invert for the spatial distribution of reservoir permeability required to minimize differences between the observed and calculated CO2 distributions. When a uniform reservoir permeability is assumed, inverse modeling is unable to adequately match the migration of CO2 at the top of the reservoir. If, however, the width and permeability of a mapped channel deposit are allowed to independently vary, a satisfactory match between the observed and calculated CO2 distributions is obtained. Finally, the ability of this algorithm to forecast the flow of CO2 at the top of the reservoir is assessed. By dividing the complete set of seismic reflection surveys into training and validation subsets, we find that the spatial pattern of permeability required to match the training subset can successfully predict CO2 migration for the validation subset. 
This ability suggests that it might be feasible to forecast migration patterns into the future with a degree of confidence. Nevertheless, our analysis highlights the difficulty in estimating reservoir parameters away from the region swept by CO2 without additional observational constraints.
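The vertically-integrated approach can be sketched in one dimension: the current thickness obeys a nonlinear diffusion equation with mobility proportional to thickness, and a conservative flux-form update preserves the injected volume exactly. This is a toy explicit scheme, not the paper's inversion-capable simulator.

```python
# Minimal 1D porous gravity-current sketch on a flat caprock with
# uniform permeability: dh/dt = D * d/dx(h * dh/dx), where the constant
# D collects k*drho*g/(mu*phi). A toy explicit flux-form scheme, not
# the paper's inversion-capable simulator; all values illustrative.
D, dx, dt, nx, steps = 1.0e-3, 1.0, 20.0, 101, 200
h = [0.0] * nx
h[nx // 2] = 10.0            # injected CO2 mound at the centre
vol0 = sum(h) * dx
for _ in range(steps):
    flux = [0.0] * (nx + 1)  # F = h*dh/dx at cell faces; ends stay 0
    for i in range(1, nx):
        flux[i] = 0.5 * (h[i] + h[i - 1]) * (h[i] - h[i - 1]) / dx
    h = [h[i] + dt * D * (flux[i + 1] - flux[i]) / dx for i in range(nx)]

print(abs(sum(h) * dx - vol0) < 1e-9)  # True: flux form conserves volume
print(max(h) < 10.0)                   # True: the mound spreads and thins
```

In the paper's setting the diffusive mobility varies with the spatial permeability field, which is exactly the quantity the inversion adjusts to match the seismically observed CO2 distribution.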

  2. On the modelling of complex kinematic hardening and nonquadratic anisotropic yield criteria at finite strains: application to sheet metal forming

    NASA Astrophysics Data System (ADS)

    Grilo, Tiago J.; Vladimirov, Ivaylo N.; Valente, Robertt A. F.; Reese, Stefanie

    2016-06-01

In the present paper, a finite strain model for complex combined isotropic-kinematic hardening is presented. It accounts for finite elastic and finite plastic strains and is suitable for any anisotropic yield criterion. In order to model complex cyclic hardening phenomena, the kinematic hardening is described by several back stress components. To that end, a new procedure is proposed in which several multiplicative decompositions of the plastic part of the deformation gradient are considered. The formulation incorporates a completely general format of the yield function, which means that any yield function can be employed by following a procedure that ensures the principle of material frame indifference. The constitutive equations are derived in a thermodynamically consistent way and numerically integrated by means of a backward-Euler algorithm based on the exponential map. The performance of the constitutive model is assessed via numerical simulations of industry-relevant sheet metal forming processes (U-channel forming and draw/re-draw of a panel benchmarks), the results of which are compared to experimental data. The comparison between numerical and experimental results shows that the use of multiple back stress components is very advantageous in the description of springback, in particular when compared with the results obtained using only one component. Moreover, the numerically obtained results are in excellent agreement with the experimental data.
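The flavor of the backward-Euler integration can be illustrated with a one-dimensional, small-strain return mapping with a single back stress. The paper's actual model is finite-strain, uses several back stresses, and admits general anisotropic yield functions; this is only a sketch of the underlying predictor-corrector idea.

```python
def return_map(eps, state, E=200e3, H_iso=1e3, H_kin=5e3, sy0=250.0):
    """One backward-Euler step of radial return for 1D plasticity with
    linear isotropic and linear kinematic hardening (one back stress).
    A small-strain sketch only; the paper's model is finite-strain with
    several back stresses and general anisotropic yield functions.
    Units: MPa for stresses and moduli (illustrative values)."""
    eps_p, alpha, back = state
    sig_tr = E * (eps - eps_p)            # elastic trial stress
    f_tr = abs(sig_tr - back) - (sy0 + H_iso * alpha)
    if f_tr <= 0.0:                       # elastic step
        return sig_tr, (eps_p, alpha, back)
    dgam = f_tr / (E + H_iso + H_kin)     # plastic consistency parameter
    n = 1.0 if sig_tr - back > 0 else -1.0
    eps_p += dgam * n                     # plastic strain update
    alpha += dgam                         # isotropic hardening variable
    back += H_kin * dgam * n              # back stress update
    return E * (eps - eps_p), (eps_p, alpha, back)

sig, st = return_map(0.002, (0.0, 0.0, 0.0))  # 0.2% strain: yields
print(round(sig, 1))  # stress returned to just above initial yield
```

Under cyclic loading, the evolving back stress shifts the center of the elastic range, which is the mechanism that multiple back stress components refine to capture springback accurately.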

  3. Pore-scale and continuum simulations of solute transport micromodel benchmark experiments

    DOE PAGES

    Oostrom, M.; Mehmani, Y.; Romero-Gomez, P.; ...

    2014-06-18

Four sets of nonreactive solute transport experiments were conducted with micromodels. Each set comprised three experiments and varied a single parameter: flow velocity, grain diameter, pore-aspect ratio, or flow-focusing heterogeneity. The data sets were offered to pore-scale modeling groups to test their numerical simulators. Each set consisted of two learning experiments, for which our results were made available, and one challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the transverse dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two were based on a lattice Boltzmann (LB) approach, and one used a computational fluid dynamics (CFD) technique. The PN models used the learning experiments to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used the learning experiments to appropriately discretize the spatial grid representations. For the continuum modeling, the required dispersivity input values were estimated based on published nonlinear relations between transverse dispersion coefficients and the Peclet number. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in reduced dispersion.
The PN models were able to complete the simulations in a few minutes, whereas the direct models, which account for the micromodel geometry and underlying flow and transport physics, needed up to several days on supercomputers to resolve the more complex problems.

  4. Intercomparison of Multiscale Modeling Approaches in Simulating Subsurface Flow and Transport

    NASA Astrophysics Data System (ADS)

    Yang, X.; Mehmani, Y.; Barajas-Solano, D. A.; Song, H. S.; Balhoff, M.; Tartakovsky, A. M.; Scheibe, T. D.

    2016-12-01

Hybrid multiscale simulations that couple models across scales are critical to advance predictions of larger-system behavior using an understanding of fundamental processes. In the current study, three hybrid multiscale methods are intercompared: the multiscale loose-coupling method, the multiscale finite volume (MsFV) method, and the multiscale mortar method. The loose-coupling method enables a parallel workflow structure, based on the Swift scripting environment, that manages the complex process of executing coupled micro- and macro-scale models without being intrusive to the at-scale simulators. The MsFV method applies microscale and macroscale models over overlapping subdomains of the modeling domain and enforces continuity of concentration and transport fluxes between models via restriction and prolongation operators. The mortar method is a non-overlapping domain decomposition approach capable of coupling all permutations of pore- and continuum-scale models with each other; Lagrange multipliers are used at interfaces shared between the subdomains to establish continuity of species/fluid mass flux. Subdomain computations can be performed either concurrently or non-concurrently, depending on the algorithm used. All of the above methods have been proven to be accurate and efficient in studying flow and transport in porous media. However, there have been no field-scale applications or benchmarking among the various hybrid multiscale approaches. To address this challenge, we apply all three hybrid multiscale methods to simulate water flow and transport in a conceptualized 2D modeling domain of the hyporheic zone, where strong interactions between groundwater and surface water exist across multiple scales. In all three multiscale methods, fine-scale simulations are applied to a thin layer of riverbed alluvial sediments while macroscopic simulations are used for the larger subsurface aquifer domain.
Different numerical coupling methods are then applied between scales and inter-compared. Comparisons are drawn in terms of velocity distributions, solute transport behavior, algorithm-induced numerical error and computing cost. The intercomparison work provides support for confidence in a variety of hybrid multiscale methods and motivates further development and applications.

  5. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeHart, Mark D.; Mausolff, Zander; Weems, Zach

    2016-08-01

One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data has shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validating the transient solution of Rattlesnake against other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organisation for Economic Co-operation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considers both two-dimensional (2D) and 3D configurations, for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.

  6. Predicting debris-flow initiation and run-out with a depth-averaged two-phase model and adaptive numerical methods

    NASA Astrophysics Data System (ADS)

    George, D. L.; Iverson, R. M.

    2012-12-01

    Numerically simulating debris-flow motion presents many challenges due to the complicated physics of flowing granular-fluid mixtures, the diversity of spatial scales (ranging from a characteristic particle size to the extent of the debris flow deposit), and the unpredictability of the flow domain prior to a simulation. Accurately predicting debris flows requires models that are complex enough to represent the dominant effects of granular-fluid interaction, while remaining mathematically and computationally tractable. We have developed a two-phase depth-averaged mathematical model for debris-flow initiation and subsequent motion. Additionally, we have developed software that numerically solves the model equations efficiently on large domains. A unique feature of the mathematical model is that it includes the feedback between pore-fluid pressure and the evolution of the solid grain volume fraction, a process that regulates flow resistance. This feature endows the model with the ability to represent the transition from a stationary mass to a dynamic flow. With traditional approaches, slope stability analysis and flow simulation are treated separately, and the latter models are often initialized with force balances that are unrealistically far from equilibrium. Additionally, our new model relies on relatively few dimensionless parameters that are functions of well-known material properties constrained by physical data (e.g., hydraulic permeability, pore-fluid viscosity, debris compressibility, Coulomb friction coefficient). We have developed numerical methods and software for accurately solving the model equations. By employing adaptive mesh refinement (AMR), the software can efficiently resolve an evolving debris flow as it advances through irregular topography, without needing terrain-fit computational meshes.
    The AMR algorithms utilize multiple levels of grid resolution, so that computationally inexpensive coarse grids can be used where the flow is absent, and much higher resolution grids evolve with the flow. The reduction in computational cost due to AMR makes very large-scale problems tractable on personal computers. Model accuracy can be tested by comparison of numerical predictions and empirical data. These comparisons utilize controlled experiments conducted at the USGS debris-flow flume, which provide detailed data about flow mobilization and dynamics. Additionally, we have simulated historical large-scale debris flows, such as the (≈50 million m^3) debris flow that originated on Mt. Meager, British Columbia in 2010. This flow took a very complex route through highly variable topography and provides a valuable benchmark for testing. Maps of the debris flow deposit and data from seismic stations provide evidence regarding flow initiation, transit times and deposition. Our simulations reproduce many of the complex patterns of the event, such as run-out geometry and extent; the large scale of the flow and the complexity of the topography demonstrate the utility of AMR in flow simulations.
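    The level-selection idea described above (coarse grids where the flow is absent, fine grids tracking the flow) can be caricatured in a few lines; the flagging rule, dry tolerance, and level numbers below are illustrative assumptions, not the actual refinement criteria of the authors' software:

    ```python
    def refinement_level(depth, dry_tol=1e-3, max_level=3):
        """Toy stand-in for AMR flagging: assign each cell the coarse
        background level where it is dry (flow absent) and the finest
        level where debris is present, so resolution follows the flow."""
        return [1 if h <= dry_tol else max_level for h in depth]

    # Cells with depth below dry_tol stay on the cheap coarse grid.
    levels = refinement_level([0.0, 0.5, 1e-4])
    ```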

  7. Targeting the affordability of cigarettes: a new benchmark for taxation policy in low-income and middle-income countries.

    PubMed

    Blecher, Evan

    2010-08-01

    To investigate the appropriateness of tax incidence (the percentage of the retail price occupied by taxes) benchmarking in low-income and middle-income countries (LMICs) with rapidly growing economies and to explore the viability of an alternative tax policy rule based on the affordability of cigarettes. The paper outlines criticisms of tax incidence benchmarking, particularly in the context of LMICs. It then considers an affordability-based benchmark using relative income price (RIP) as a measure of affordability. The RIP measures the percentage of annual per capita GDP required to purchase 100 packs of cigarettes. Using South Africa as a case study of an LMIC, future consumption is simulated using both tax incidence benchmarks and affordability benchmarks. I show that a tax incidence benchmark is not an optimal policy tool in South Africa and that an affordability benchmark could be a more effective means of reducing tobacco consumption in the future. Although a tax incidence benchmark was successful in increasing prices and reducing tobacco consumption in South Africa in the past, this approach has drawbacks, particularly in the context of a rapidly growing LMIC economy. An affordability benchmark represents an appropriate alternative that would be more effective in reducing future cigarette consumption.
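    The RIP measure defined above admits a direct computation; the function name and the sample figures are illustrative, not taken from the paper:

    ```python
    def relative_income_price(pack_price, gdp_per_capita, packs=100):
        """RIP: percentage of annual per-capita GDP required to
        purchase `packs` packs of cigarettes at `pack_price` each."""
        return 100.0 * packs * pack_price / gdp_per_capita

    # Hypothetical figures: 100 packs at 2.50 against a per-capita GDP
    # of 6000 cost 250, i.e. a RIP of about 4.17%. A rising RIP means
    # cigarettes are becoming less affordable.
    rip = relative_income_price(2.50, 6000.0)
    ```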

  8. First benchmark of the Unstructured Grid Adaptation Working Group

    NASA Technical Reports Server (NTRS)

    Ibanez, Daniel; Barral, Nicolas; Krakos, Joshua; Loseille, Adrien; Michal, Todd; Park, Mike

    2017-01-01

    Unstructured grid adaptation is a technology that holds the potential to improve the automation and accuracy of computational fluid dynamics and other computational disciplines. Difficulty producing the highly anisotropic elements necessary for simulation on complex curved geometries that satisfy a resolution request has limited this technology's widespread adoption. The Unstructured Grid Adaptation Working Group is an open gathering of researchers working on adapting simplicial meshes to conform to a metric field. Current members span a wide range of institutions including academia, industry, and national laboratories. The purpose of this group is to create a common basis for understanding and improving mesh adaptation. We present our first major contribution: a common set of benchmark cases, including input meshes and analytic metric specifications, that are publicly available to be used for evaluating any mesh adaptation code. We also present the results of several existing codes on these benchmark cases, to illustrate their utility in identifying key challenges common to all codes and important differences between available codes. Future directions are defined to expand this benchmark to mature the technology necessary to impact practical simulation workflows.
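    Metric-based adaptation measures edge lengths in a user-supplied Riemannian metric and aims for unit-length edges everywhere; a minimal 2D sketch of that length computation (the helper name and sample metrics are assumptions for illustration):

    ```python
    import math

    def metric_edge_length(edge, metric):
        """Length of edge vector e under an SPD metric M: sqrt(e^T M e).
        With the identity metric this is the usual Euclidean length;
        anisotropic metrics request finer spacing along chosen directions."""
        ex, ey = edge
        (mxx, mxy), (_, myy) = metric
        return math.sqrt(mxx * ex * ex + 2.0 * mxy * ex * ey + myy * ey * ey)

    identity = ((1.0, 0.0), (0.0, 1.0))
    stretched = ((100.0, 0.0), (0.0, 1.0))  # asks for 10x finer spacing in x
    ```

    An adaptation code would split edges much longer than 1 in the metric and collapse edges much shorter than 1, which is how a smooth analytic metric specification drives anisotropic refinement.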

  9. An Improved Lattice Boltzmann Model for Non-Newtonian Flows with Applications to Solid-Fluid Interactions in External Flows

    NASA Astrophysics Data System (ADS)

    Adam, Saad; Premnath, Kannan

    2016-11-01

    Fluid mechanics of non-Newtonian fluids, which arise in numerous settings, are characterized by non-linear constitutive models that pose certain unique challenges for computational methods. Here, we consider the lattice Boltzmann method (LBM), which offers some computational advantages due to its kinetic basis and its simpler stream-and-collide procedure enabling efficient simulations. However, further improvements are necessary to improve its numerical stability and accuracy for computations involving broader parameter ranges. Hence, in this study, we extend the cascaded LBM formulation by modifying its moment equilibria and relaxation parameters to handle a variety of non-Newtonian constitutive equations, including power-law and Bingham fluids, with improved stability. In addition, we include corrections to the moment equilibria to obtain an inertial frame invariant scheme without cubic-velocity defects. After performing a validation study for various benchmark flows, we study the physics of non-Newtonian flow over pairs of circular and square cylinders in a tandem arrangement, especially the wake structure interactions and their effects on the resulting forces on each cylinder, and elucidate the effect of the various characteristic parameters.

  10. Polarization chaos and random bit generation in nonlinear fiber optics induced by a time-delayed counter-propagating feedback loop.

    PubMed

    Morosi, J; Berti, N; Akrout, A; Picozzi, A; Guasoni, M; Fatome, J

    2018-01-22

    In this manuscript, we experimentally and numerically investigate the chaotic dynamics of the state-of-polarization in a nonlinear optical fiber due to the cross-interaction between an incident signal and its intense backward replica generated at the fiber-end through an amplified reflective delayed loop. Thanks to the cross-polarization interaction between the two delayed counter-propagating waves, the output polarization exhibits fast temporal chaotic dynamics, which enables a powerful scrambling process with moving speeds up to 600 krad/s. The performance of this all-optical scrambler was then evaluated on a 10-Gbit/s On/Off Keying telecom signal, achieving an error-free transmission. We also describe how these temporal chaotic polarization fluctuations can be exploited as an all-optical random number generator. To this aim, a billion-bit sequence was experimentally generated and successfully tested against the dieharder statistical benchmarking suite. Our experimental analyses are supported by numerical simulations based on the resolution of counter-propagating coupled nonlinear propagation equations that confirm the observed behaviors.
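    As a rough illustration of how a fluctuating analog signal becomes a bitstream suitable for statistical testing, consider thresholding with a simple XOR decorrelation step; this is a generic recipe sketched under assumption, not the authors' actual extraction scheme:

    ```python
    def signal_to_bits(samples, threshold):
        """Binarize a fluctuating signal by comparing each sample to a
        threshold, then XOR each bit with its predecessor to reduce bias
        and short-range correlation (a common generic post-processing
        step before suites such as dieharder are applied)."""
        raw = [1 if s > threshold else 0 for s in samples]
        return [raw[0]] + [a ^ b for a, b in zip(raw, raw[1:])]
    ```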

  11. Laser Lightcraft Performance

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Liu, Jiwen; Wei, Hong

    2000-01-01

    The purpose of this study is to establish the technical ground for modeling the physics of the laser-powered pulse detonation phenomenon. The principle of laser-powered propulsion is that when a high-powered laser is focused on a small area near the surface of a thruster, the intense energy causes electrical breakdown of the working fluid (e.g., air), forming a high-speed plasma (via the inverse Bremsstrahlung, IB, effect). The intense heat and high pressure created in the plasma consequently cause the surrounding gas to heat up and expand until thrust-producing shock waves are formed. This complex process of gas ionization, increased radiation absorption, and the formation of plasma and shock waves will be investigated in the development of the present numerical model. In the first phase of this study, laser light focusing, radiation absorption and shock wave propagation over the entire pulsed cycle are modeled. The model geometry and test conditions of known benchmark experiments, such as those in Myrabo's experiment, will be employed in the numerical model validation simulations. The calculated performance data will be compared to the test data.

  12. Fully implicit Particle-in-cell algorithms for multiscale plasma simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis

    The outline of the paper is as follows: Particle-in-cell (PIC) methods for fully ionized collisionless plasmas, explicit vs. implicit PIC, 1D ES implicit PIC (charge and energy conservation, moment-based acceleration), and generalization to multi-D EM PIC: the Vlasov-Darwin model (review and motivation for the Darwin model; conservation properties for energy, charge, and canonical momenta; and numerical benchmarks). The author demonstrates a fully implicit, fully nonlinear, multidimensional PIC formulation that features exact local charge conservation (via a novel particle mover strategy), exact global energy conservation (no particle self-heating or self-cooling), an adaptive particle orbit integrator to control errors in momentum conservation, and canonical momenta conservation (EM-PIC only, reduced dimensionality). The approach is free of numerical instabilities even when ω_pe Δt ≫ 1 and Δx ≫ λ_D. It requires many fewer degrees of freedom (vs. explicit PIC) for comparable accuracy in challenging problems. Significant CPU gains (vs. explicit PIC) have been demonstrated. The method has much potential for efficiency gains vs. explicit PIC in long-time-scale applications. Moment-based acceleration is effective in minimizing N_FE, leading to an optimal algorithm.
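    The stability condition ω_pe Δt ≫ 1 is stated relative to the electron plasma frequency; a quick computation with the standard SI formula shows the time-step restriction an explicit scheme would face (the explicit-stability factor of ~0.2 is a typical rule of thumb, assumed here for illustration):

    ```python
    import math

    # Physical constants (SI)
    E_CHARGE = 1.602176634e-19   # elementary charge, C
    E_MASS = 9.1093837015e-31    # electron mass, kg
    EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

    def plasma_frequency(n_e):
        """Electron plasma frequency omega_pe = sqrt(n_e e^2 / (eps0 m_e)), rad/s."""
        return math.sqrt(n_e * E_CHARGE**2 / (EPS0 * E_MASS))

    # For n_e = 1e18 m^-3, omega_pe is roughly 5.6e10 rad/s, so an explicit
    # scheme held to omega_pe*dt < ~0.2 needs picosecond-scale steps, while
    # the implicit formulation tolerates omega_pe*dt >> 1.
    w_pe = plasma_frequency(1e18)
    ```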

  13. Comparing Hospital Processes and Outcomes in California Medicare Beneficiaries: Simulation Prompts Reconsideration.

    PubMed

    Escobar, Gabriel J; Baker, Jennifer M; Turk, Benjamin J; Draper, David; Liu, Vincent; Kipnis, Patricia

    2017-01-01

    This article is not a traditional research report. It describes how conducting a specific set of benchmarking analyses led us to broader reflections on hospital benchmarking. We reexamined an issue that has received far less attention from researchers than in the past: How variations in the hospital admission threshold might affect hospital rankings. Considering this threshold made us reconsider what benchmarking is and what future benchmarking studies might be like. Although we recognize that some of our assertions are speculative, they are based on our reading of the literature and previous and ongoing data analyses being conducted in our research unit. We describe the benchmarking analyses that led to these reflections. The Centers for Medicare and Medicaid Services' Hospital Compare Web site includes data on fee-for-service Medicare beneficiaries but does not control for severity of illness, which requires physiologic data now available in most electronic medical records. To address this limitation, we compared hospital processes and outcomes among Kaiser Permanente Northern California's (KPNC) Medicare Advantage beneficiaries and non-KPNC California Medicare beneficiaries between 2009 and 2010. We assigned a simulated severity of illness measure to each record and explored the effect of having the additional information on outcomes. We found that if the admission severity of illness in non-KPNC hospitals increased, KPNC hospitals' mortality performance would appear worse; conversely, if admission severity at non-KPNC hospitals decreased, KPNC hospitals' performance would appear better. Future hospital benchmarking should consider the impact of variation in admission thresholds.

  14. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Javier Ortensi; Sonat Sen

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of simulation problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes.
A final OECD/NEA comparison report will compare the Phase I and III results of all other international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.

  15. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    DOE PAGES

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9 % and 2.7 % greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reedlunn, Benjamin

    Room D was an in-situ, isothermal, underground experiment conducted at the Waste Isolation Pilot Plant between 1984 and 1991. The room was carefully instrumented to measure the horizontal and vertical closure immediately upon excavation and for several years thereafter. Early finite element simulations of salt creep around Room D under-predicted the vertical closure by 4.5×, causing investigators to explore a series of changes to the way Room D was modeled. Discrepancies between simulations and measurements were resolved through a series of adjustments to model parameters, which were openly acknowledged in published reports. Interest in Room D has been rekindled recently by the U.S./German Joint Project III and Project WEIMOS, which seek to improve the predictions of rock salt constitutive models. Joint Project participants calibrate their models solely against laboratory tests, and benchmark the models against underground experiments, such as Room D. This report describes updating legacy Room D simulations to today's computational standards by rectifying several numerical issues. Subsequently, the constitutive model used in previous modeling is recalibrated two different ways against a suite of new laboratory creep experiments on salt extracted from the repository horizon of the Waste Isolation Pilot Plant. Simulations with the new, laboratory-based calibrations under-predict Room D vertical closure by 3.1×. A list of potential improvements is discussed.

  18. Three-dimensional fluid-structure interaction case study on cubical fluid cavity with flexible bottom

    NASA Astrophysics Data System (ADS)

    Ghelardi, Stefano; Rizzo, Cesare; Villa, Diego

    2017-12-01

    In this paper, we report our study on a numerical fluid-structure interaction problem originally presented by Mok et al. (2001) in two dimensions and later studied in three dimensions by Valdés Vazquez (2007), Lombardi (2012), and Trimarchi (2012). We focus on a 3D test case in which we evaluated the sensitivity of several input parameters on the fluid and structural results. In particular, this analysis provides a starting point from which we can look deeper into specific aspects of these simulations and analyze more realistic cases, e.g., in sail design. In this study, using the commercial software ADINA™, we addressed a well-known unsteady benchmark problem comprising a square box representing the fluid domain with a flexible bottom modeled with structural shell elements. We compared data from previously published work whose authors used the same numerical approach, i.e., a partitioned approach coupling a finite volume solver (for the fluid domain) and a finite element solver (for the solid domain). Specifically, we established several benchmarks and made comparisons with respect to fluid and solid meshes, structural element types, and structural damping, as well as solution algorithms. Moreover, we compared our method with a monolithic finite element solution method. Our comparisons of new and old results provide an outline of best practices for such simulations.
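    The partitioned approach mentioned above, separate fluid and solid solvers exchanging interface data until they agree, can be sketched as an under-relaxed fixed-point iteration on a scalar interface displacement; the toy solver maps and the relaxation factor are assumptions for illustration, not the coupling scheme of any particular code:

    ```python
    def partitioned_coupling(fluid, solid, d0=0.0, omega=0.5, tol=1e-10, max_iter=200):
        """Toy partitioned FSI loop: `fluid` maps an interface displacement
        to a load, `solid` maps that load back to a displacement; iterate
        with under-relaxation factor `omega` until the interface converges."""
        d = d0
        for _ in range(max_iter):
            load = fluid(d)            # fluid solve at current interface shape
            d_new = solid(load)        # solid solve under the resulting load
            if abs(d_new - d) < tol:
                return d_new
            d = d + omega * (d_new - d)  # under-relax to stabilize the exchange
        return d

    # Toy linear "solvers": the coupled fixed point is d = 1 - 0.5*d, i.e. d = 2/3.
    d_star = partitioned_coupling(lambda d: 1.0 - 0.5 * d, lambda f: f)
    ```

    A monolithic method would instead solve both fields in one system, trading this outer iteration for a larger coupled solve.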

  19. Site investigation and modelling at "La Maina" landslide (Carnian Alps, Italy)

    NASA Astrophysics Data System (ADS)

    Marcato, G.; Mantovani, M.; Pasuto, A.; Silvano, S.; Tagliavini, F.; Zabuski, L.; Zannoni, A.

    2006-01-01

    The Sauris reservoir is a hydroelectric basin closed downstream by a 136 m high, double arc concrete dam. The dam is firmly anchored to a consistent rock (Dolomia dello Schlern), but the Lower Triassic clayey formations, cropping out especially in the lower part of the slopes, have made the whole catchment basin increasingly prone to landslides. In recent years, the "La Maina landslide" has opened up several joints over a surface of about 100 000 m2, displacing about 1 500 000 m3 of material. Particular attention is now being given to the evolution of the instability area, as the reservoir is located at the foot of the landslide. Under the commission of the Regional Authority for Civil Protection, a numerical modelling simulation of the slope in a pseudo-time condition was developed in order to understand the risk to transport infrastructure, houses, and the reservoir, and to take urgent measures to stabilize the slope. A monitoring system consisting of four inclinometers, three wire extensometers and ten GPS benchmark pillars was immediately set up to check surface and deep displacements. The data collected and the geological and geomorphological evidence were used to carry out a numerical simulation. The reliability of the results was checked by comparing the model with the morphological evidence of the movement. The mitigation measures were designed and realised following the indications provided by the model.

  20. Modeling mass transfer and reaction of dilute solutes in a ternary phase system by the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Fu, Yu-Hang; Bai, Lin; Luo, Kai-Hong; Jin, Yong; Cheng, Yi

    2017-04-01

    In this work, we propose a general approach for modeling mass transfer and reaction of dilute solute(s) in incompressible three-phase flows by introducing a collision operator in the lattice Boltzmann (LB) method. An LB equation was used to simulate the solute dynamics among three different fluids, in which the newly expanded collision operator was used to depict the interface behavior of dilute solute(s). The multiscale analysis showed that the presented model can recover the macroscopic transport equations derived from the Maxwell-Stefan equation for dilute solutes in three-phase systems. Comparisons with the analytical equation of state of the solute and with its dynamic behavior show that these results constitute a generalized framework for simulating solute distributions in three-phase flows, including a compound soluble in one phase, a compound adsorbed on a single interface, a compound in two phases, and a solute soluble in all three phases. Moreover, numerical simulations of benchmark cases, such as phase decomposition, multilayered planar interfaces, and liquid lenses, were performed to test the stability and efficiency of the model. Finally, the multiphase mass transfer and reaction during Janus droplet transport in a straight microchannel were well reproduced.

  1. A sharp interface Cartesian grid method for viscous simulation of shocked particle-laden flows

    NASA Astrophysics Data System (ADS)

    Das, Pratik; Sen, Oishik; Jacobs, Gustaaf; Udaykumar, H. S.

    2017-09-01

    A Cartesian grid-based sharp interface method is presented for viscous simulations of shocked particle-laden flows. The moving solid-fluid interfaces are represented using level sets. A moving least-squares reconstruction is developed to apply the no-slip boundary condition at solid-fluid interfaces and to supply viscous stresses to the fluid. The algorithms developed in this paper are benchmarked against similarity solutions for the boundary layer over a fixed flat plate and against numerical solutions for moving interface problems such as shock-induced lift-off of a cylinder in a channel. The framework is extended to 3D and applied to calculate low Reynolds number steady supersonic flow over a sphere. Viscous simulation of the interaction of a particle cloud with an incident planar shock is demonstrated; the average drag on the particles and the vorticity field in the cloud are compared to the inviscid case to elucidate the effects of viscosity on momentum transfer between the particle and fluid phases. The methods developed will be useful for obtaining accurate momentum and heat transfer closure models for macro-scale shocked particulate flow applications such as blast waves and dust explosions.
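    Representing a moving solid with a level set, as the abstract describes, means storing a signed distance whose zero contour is the solid-fluid interface; a minimal sketch for a single spherical particle (the helper name is illustrative, and production codes evolve this field rather than recomputing it analytically):

    ```python
    import math

    def sphere_level_set(x, y, z, center, radius):
        """Signed distance to a sphere: negative inside the solid particle,
        zero on the interface, positive in the surrounding fluid. On a
        Cartesian grid, each cell's sign tells the solver which phase it
        is in without a body-fitted mesh."""
        cx, cy, cz = center
        return math.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2) - radius
    ```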

  2. Stochastic Rotation Dynamics simulations of wetting multi-phase flows

    NASA Astrophysics Data System (ADS)

    Hiller, Thomas; Sanchez de La Lama, Marta; Brinkmann, Martin

    2016-06-01

    Multi-color Stochastic Rotation Dynamics (SRDmc) has been introduced by Inoue et al. [1,2] as a particle based simulation method to study the flow of emulsion droplets in non-wetting microchannels. In this work, we extend the multi-color method to also account for different wetting conditions. This is achieved by assigning the color information not only to fluid particles but also to virtual wall particles that are required to enforce proper no-slip boundary conditions. To extend the scope of the original SRDmc algorithm to, e.g., immiscible two-phase flow with viscosity contrast, we implement an angular momentum conserving scheme (SRD+mc). We perform extensive benchmark simulations to show that a mono-phase SRDmc fluid exhibits bulk properties identical to a standard SRD fluid and that SRDmc fluids are applicable to a wide range of immiscible two-phase flows. To quantify the adhesion of a SRD+mc fluid in contact with the walls we measure the apparent contact angle from sessile droplets in mechanical equilibrium. For a further verification of our wettability implementation we compare the dewetting of a liquid film from a wetting stripe to experimental and numerical studies of interfacial morphologies on chemically structured surfaces.

  3. Ultracool dwarf benchmarks with Gaia primaries

    NASA Astrophysics Data System (ADS)

    Marocco, F.; Pinfield, D. J.; Cook, N. J.; Zapatero Osorio, M. R.; Montes, D.; Caballero, J. A.; Gálvez-Ortiz, M. C.; Gromadzki, M.; Jones, H. R. A.; Kurtev, R.; Smart, R. L.; Zhang, Z.; Cabrera Lavers, A. L.; García Álvarez, D.; Qi, Z. X.; Rickard, M. J.; Dover, L.

    2017-10-01

    We explore the potential of Gaia for the field of benchmark ultracool/brown dwarf companions, and present the results of an initial search for metal-rich/metal-poor systems. A simulated population of resolved ultracool dwarf companions to Gaia primary stars is generated and assessed. Of the order of ~24 000 companions should be identifiable outside of the Galactic plane (|b| > 10 deg) with large-scale ground- and space-based surveys including late M, L, T and Y types. Our simulated companion parameter space covers 0.02 ≤ M/M⊙ ≤ 0.1, 0.1 ≤ age/Gyr ≤ 14 and -2.5 ≤ [Fe/H] ≤ 0.5, with systems required to have a false alarm probability < 10^-4, based on projected separation and expected constraints on common distance, common proper motion and/or common radial velocity. Within this bulk population, we identify smaller target subsets of rarer systems whose collective properties still span the full parameter space of the population, as well as systems containing primary stars that are good age calibrators. Our simulation analysis leads to a series of recommendations for candidate selection and observational follow-up that could identify ~500 diverse Gaia benchmarks. As a test of the veracity of our methodology and simulations, our initial search uses UKIRT Infrared Deep Sky Survey and Sloan Digital Sky Survey to select secondaries, with the parameters of primaries taken from Tycho-2, Radial Velocity Experiment, Large sky Area Multi-Object fibre Spectroscopic Telescope and Tycho-Gaia Astrometric Solution. We identify and follow up 13 new benchmarks. These include M8-L2 companions, with metallicity constraints ranging in quality, but robust in the range -0.39 ≤ [Fe/H] ≤ +0.36, and with projected physical separation in the range 0.6 < s/kau < 76. Going forward, Gaia offers a very high yield of benchmark systems, from which diverse subsamples may be able to calibrate a range of foundational ultracool/sub-stellar theory and observation.

  4. The Edge-Disjoint Path Problem on Random Graphs by Message-Passing.

    PubMed

    Altarelli, Fabrizio; Braunstein, Alfredo; Dall'Asta, Luca; De Bacco, Caterina; Franz, Silvio

    2015-01-01

    We present a message-passing algorithm to solve a series of edge-disjoint path problems on graphs based on the zero-temperature cavity equations. Edge-disjoint path problems are important in the general context of routing, which can be defined by incorporating under a unique framework both traffic optimization and total path length minimization. The computation of the cavity equations can be performed efficiently by exploiting a mapping of a generalized edge-disjoint path problem on a star graph onto a weighted maximum matching problem. We perform extensive numerical simulations on random graphs of various types to test the performance both in terms of path length minimization and maximization of the number of accommodated paths. In addition, we test the performance on benchmark instances on various graphs by comparison with state-of-the-art algorithms and results found in the literature. Our message-passing algorithm always outperforms the others in terms of the number of accommodated paths when considering nontrivial instances (otherwise it gives the same trivial results). Remarkably, the largest improvement in performance with respect to the other methods employed is found in the case of benchmarks with meshes, where the validity hypothesis behind message-passing is expected to worsen. In these cases, even though the exact message-passing equations do not converge, by introducing a reinforcement parameter to force convergence towards a suboptimal solution, we were able to always outperform the other algorithms with a peak of 27% performance improvement in terms of accommodated paths. On random graphs, we numerically observe two separated regimes: one in which all paths can be accommodated and one in which this is not possible. We also investigate the behavior of both the number of paths to be accommodated and their minimum total length.
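    The feasibility constraint at the heart of the problem, that no two routed paths may share an edge, is easy to check directly; a minimal verifier for undirected graphs (the helper name is a hypothetical choice, and paths are given as node sequences):

    ```python
    def edge_disjoint(paths):
        """Return True if no two paths (lists of nodes) share an edge.
        Edges are undirected, so (u, v) and (v, u) count as the same edge."""
        seen = set()
        for path in paths:
            for u, v in zip(path, path[1:]):
                edge = frozenset((u, v))
                if edge in seen:
                    return False
                seen.add(edge)
        return True
    ```

    Paths may still share nodes under this constraint; only edge reuse is forbidden, which is what distinguishes the edge-disjoint variant from vertex-disjoint routing.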

  5. The Edge-Disjoint Path Problem on Random Graphs by Message-Passing

    PubMed Central

    2015-01-01

We present a message-passing algorithm to solve a series of edge-disjoint path problems on graphs, based on the zero-temperature cavity equations. Edge-disjoint path problems are important in the general context of routing, which can be defined within a single framework incorporating both traffic optimization and total path length minimization. The cavity equations can be computed efficiently by exploiting a mapping of a generalized edge-disjoint path problem on a star graph onto a weighted maximum matching problem. We perform extensive numerical simulations on random graphs of various types to test the performance, both in terms of path length minimization and of maximization of the number of accommodated paths. In addition, we test the performance on benchmark instances on various graphs by comparison with state-of-the-art algorithms and results found in the literature. Our message-passing algorithm always outperforms the others in terms of the number of accommodated paths when considering nontrivial instances (otherwise it gives the same trivial results). Remarkably, the largest improvement in performance with respect to the other methods employed is found in the case of benchmarks with meshes, where the validity hypothesis behind message-passing is expected to worsen. In these cases, even though the exact message-passing equations do not converge, by introducing a reinforcement parameter to force convergence towards a suboptimal solution, we were able to always outperform the other algorithms, with a peak of 27% performance improvement in terms of accommodated paths. On random graphs, we numerically observe two separate regimes: one in which all paths can be accommodated and one in which this is not possible. We also investigate the behavior of both the number of paths to be accommodated and their minimum total length. PMID:26710102

  6. Ernst Julius Öpik's (1916) note on the theory of explosion cratering on the Moon's surface—The complex case of a long-overlooked benchmark paper

    NASA Astrophysics Data System (ADS)

    Racki, Grzegorz; Koeberl, Christian; Viik, Tõnu; Jagt-Yazykova, Elena A.; Jagt, John W. M.

    2014-10-01

    High-velocity impact as a common phenomenon in planetary evolution was ignored until well into the twentieth century, mostly because of inadequate understanding of cratering processes. An eight-page note, published in Russian by the young Ernst Julius Öpik, a great Estonian astronomer, was among the key selenological papers, but due to the language barrier, it was barely known and mostly incorrectly cited. This particular paper is here intended to serve as an explanatory supplement to an English translation of Öpik's article, but also to document an early stage in our understanding of cratering. First, we outline the historical-biographical background of this benchmark paper, and second, a comprehensive discussion of its merits is presented, from past and present perspectives alike. In his theoretical research, Öpik analyzed the explosive formation of craters numerically, albeit in a very simple way. For the first time, he approximated relationships among minimal meteorite size, impact energy, and crater diameter; this scaling focused solely on the gravitational energy of excavating the crater (a "useful" working approach). This initial physical model, with a rational mechanical basis, was developed in a series of papers up to 1961. Öpik should certainly be viewed as the founder of the numerical simulation approach in planetary sciences. In addition, the present note also briefly describes Nikolai A. Morozov as a remarkable man, a forgotten Russian scientist and, surprisingly, the true initiator of Öpik's explosive impact theory. In fact, already between 1909 and 1911, Morozov probably was the first to consider conclusively that explosion craters would be circular, bowl-shaped depressions even when formed under different impact angles.

  7. Transonic Flutter Suppression Control Law Design, Analysis and Wind Tunnel Results

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1999-01-01

The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on steady-state differential game theory, is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws, when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.
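
    The LQG and minimax designs described here reduce, in their state-feedback core, to solving an algebraic Riccati equation for a gain matrix. As a hedged sketch (the two-state model and all matrix values below are invented for illustration and are not the paper's wing model), an LQR gain for a hypothetical unstable aeroelastic system:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical unstable 2-state linear model (illustrative numbers only),
# loosely in the spirit of a pitch-plunge aeroelastic section with one
# control-surface input.
A = np.array([[0.0, 1.0],
              [2.0, 0.1]])     # positive "stiffness" term -> open-loop unstable
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                  # state weighting
R = np.array([[1.0]])          # control-effort weighting

# Solve the continuous-time algebraic Riccati equation
# A'P + PA - P B R^-1 B' P + Q = 0, then form the gain for u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

closed_loop = A - B @ K
print(np.linalg.eigvals(closed_loop).real)   # all negative -> stabilized
```

    With an observer (Kalman filter) added, the same gain structure yields the LQG compensator discussed in the abstract.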

  8. Transonic Flutter Suppression Control Law Design, Analysis and Wind-Tunnel Results

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1999-01-01

The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on steady-state differential game theory, is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws, when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.

  9. Transonic Flutter Suppression Control Law Design Using Classical and Optimal Techniques with Wind-Tunnel Results

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1999-01-01

The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on steady-state differential game theory, is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws, when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.

  10. Characterizing health risks associated with recreational swimming at Taiwanese beaches by using quantitative microbial risk assessment.

    PubMed

    Jang, Cheng-Shin; Liang, Ching-Ping

    2018-01-01

Taiwan is surrounded by oceans, and numerous pleasure beaches therefore attract millions of tourists annually to participate in recreational swimming activities. However, impaired water quality caused by fecal pollution poses a potential threat to the tourists' health. This study probabilistically characterized the health risks associated with recreational swimming engendered by waterborne enterococci at 13 Taiwanese beaches by using quantitative microbial risk assessment. First, data on enterococci concentrations at coastal beaches monitored by the Taiwan Environmental Protection Administration were reproduced using nonparametric Monte Carlo simulation (MCS). The ingestion volumes for recreational swimming, based on uniform and gamma distributions, were subsequently determined using MCS. Finally, after combining the distributions of the two parameters, the beta-Poisson dose-response function was employed to quantitatively estimate health risks to recreational swimmers. Moreover, various levels of risk to recreational swimmers were classified and spatially mapped to explore feasible recreational and environmental management strategies at the beaches. The study results revealed that although the health risks associated with recreational swimming did not exceed an acceptable benchmark of 0.019 illnesses daily at any beach, they approached this benchmark at certain beaches. Beaches with relatively high risks are located in northwestern Taiwan owing to ocean current movements.
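
    The QMRA chain described here (sampled concentration × sampled ingestion volume → dose → beta-Poisson response) can be sketched as follows. All distributions and parameter values (alpha, N50, the lognormal and uniform parameters) are illustrative assumptions, not the study's fitted values:

```python
import random

def beta_poisson(dose, alpha, n50):
    """Approximate beta-Poisson dose-response: probability of illness per exposure."""
    return 1.0 - (1.0 + dose * (2.0 ** (1.0 / alpha) - 1.0) / n50) ** (-alpha)

def simulate_risk(n_trials=100_000, alpha=0.4, n50=500.0, seed=1):
    """Monte Carlo over enterococci concentration (CFU per 100 mL) and
    ingestion volume (mL). Distributions here are placeholders."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        conc = rng.lognormvariate(2.0, 1.0)    # concentration per 100 mL
        volume = rng.uniform(10.0, 50.0)       # ingested volume per swim, mL
        dose = conc * volume / 100.0           # organisms ingested
        total += beta_poisson(dose, alpha, n50)
    return total / n_trials                    # mean per-swim illness risk

print(simulate_risk())
```

    The resulting mean risk would then be compared against a benchmark such as the 0.019 illnesses-per-day level cited in the abstract.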

  11. Tensor network simulation of QED on infinite lattices: Learning from (1+1)d, and prospects for (2+1)d

    NASA Astrophysics Data System (ADS)

    Zapp, Kai; Orús, Román

    2017-06-01

The simulation of lattice gauge theories with tensor network (TN) methods is becoming increasingly fruitful. The vision is that such methods will, eventually, be used to simulate theories in (3+1) dimensions in regimes difficult for other methods. So far, however, TN methods have mostly simulated lattice gauge theories in (1+1) dimensions. The aim of this paper is to explore the simulation of quantum electrodynamics (QED) on infinite lattices with TNs, i.e., fermionic matter fields coupled to a U(1) gauge field, directly in the thermodynamic limit. With this idea in mind we first consider a gauge-invariant infinite density matrix renormalization group simulation of the Schwinger model, i.e., QED in (1+1)d. After giving a precise description of the numerical method, we benchmark our simulations by computing the subtracted chiral condensate in the continuum, in good agreement with other approaches. Our simulations of the Schwinger model allow us to build intuition about how a simulation should proceed in (2+1) dimensions. Based on this, we propose a variational ansatz using infinite projected entangled pair states (PEPS) to describe the ground state of (2+1)d QED. The ansatz includes U(1) gauge symmetry at the level of the tensors, as well as fermionic (matter) and bosonic (gauge) degrees of freedom both at the physical and virtual levels. We argue that all the necessary ingredients for the simulation of (2+1)d QED are, a priori, already in place, paving the way for future upcoming results.

  12. Dynamics of Active Separation Control at High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Pack, LaTunia G.; Seifert, Avi

    2000-01-01

A series of active flow control experiments were recently conducted at high Reynolds numbers on a generic separated configuration. The model simulates the upper surface of a 20% thick Glauert-Goldschmied type airfoil at zero angle of attack. The flow is fully turbulent since the tunnel sidewall boundary layer flows over the model. The main motivation for the experiments is to generate a comprehensive data base for validation of unsteady numerical simulation as a first step in the development of a CFD design tool, without which it would not be possible to effectively utilize the great potential of unsteady flow control. This paper focuses on the dynamics of several key features of the baseline as well as the controlled flow. It was found that the thickness of the upstream boundary layer has a negligible effect on the flow dynamics. It is speculated that separation is caused mainly by the highly convex surface while viscous effects are less important. The two-dimensional separated flow contains unsteady waves centered on a reduced frequency of 0.9, while in the three-dimensional separated flow, reduced frequencies around 0.3 and 1 are active. Several scenarios of resonant wave interaction take place at the separated shear-layer and in the pressure recovery region. The unstable reduced frequency bands for periodic excitation are centered on 1.5 and 5, but these reduced frequencies are based on the length of the baseline bubble that shortens due to the excitation. The conventional analysis works well for the coherent wave features. Reproduction of these dynamic effects by a numerical simulation would provide benchmark validation.
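
    The reduced frequencies quoted in this abstract are non-dimensionalized as F+ = f·L/U, with L a characteristic length (here the separation-bubble length) and U the freestream velocity. A minimal helper, with purely illustrative numbers that are not the experiment's conditions:

```python
def reduced_frequency(f_hz, length_m, velocity_m_s):
    """Dimensionless reduced frequency F+ = f * L / U, with L a characteristic
    length (e.g. the separation-bubble length) and U the freestream velocity."""
    return f_hz * length_m / velocity_m_s

# Illustrative numbers only: a 60 Hz excitation over a 0.25 m bubble at 25 m/s.
print(reduced_frequency(f_hz=60.0, length_m=0.25, velocity_m_s=25.0))   # -> 0.6
```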

  13. Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.

    PubMed

    Salis, Howard; Kaznessis, Yiannis

    2005-02-01

    The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
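
    The exact stochastic simulation algorithm that the hybrid method builds on can be sketched in its simplest (direct-method) form for a single decay reaction. This is a generic Gillespie illustration, not the paper's Next Reaction or hybrid implementation; the rate constant and population are invented:

```python
import random

def gillespie_decay(n_a=100, k=1.0, t_end=5.0, seed=7):
    """Direct-method stochastic simulation algorithm (Gillespie) for the
    single reaction A -> B with stochastic rate constant k."""
    rng = random.Random(seed)
    t = 0.0
    while n_a > 0:
        propensity = k * n_a
        t += rng.expovariate(propensity)   # exponentially distributed waiting time
        if t > t_end:
            break
        n_a -= 1                           # fire one A -> B event
    return n_a

# The ensemble mean should track the deterministic solution n0 * exp(-k * t).
runs = [gillespie_decay(seed=s) for s in range(200)]
print(sum(runs) / len(runs))
```

    The hybrid method of the paper replaces the "fast" reactions' discrete firings with a chemical Langevin approximation, retaining exact discrete handling only for the slow reactions.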

  14. Issues in benchmarking human reliability analysis methods : a literature review.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lois, Erasmia; Forester, John Alan; Tran, Tuan Q.

There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  15. Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester

There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  16. The philosophy of benchmark testing a standards-based picture archiving and communications system.

    PubMed

    Richardson, N E; Thomas, J A; Lyche, D K; Romlein, J; Norton, G S; Dolecek, Q E

    1999-05-01

    The Department of Defense issued its requirements for a Digital Imaging Network-Picture Archiving and Communications System (DIN-PACS) in a Request for Proposals (RFP) to industry in January 1997, with subsequent contracts being awarded in November 1997 to the Agfa Division of Bayer and IBM Global Government Industry. The Government's technical evaluation process consisted of evaluating a written technical proposal as well as conducting a benchmark test of each proposed system at the vendor's test facility. The purpose of benchmark testing was to evaluate the performance of the fully integrated system in a simulated operational environment. The benchmark test procedures and test equipment were developed through a joint effort between the Government, academic institutions, and private consultants. Herein the authors discuss the resources required and the methods used to benchmark test a standards-based PACS.

  17. Numerical simulation for flow and heat transfer to Carreau fluid with magnetic field effect: Dual nature study

    NASA Astrophysics Data System (ADS)

    Hashim; Khan, Masood; Alshomrani, Ali Saleh

    2017-12-01

This article considers a realistic approach to examining the magnetohydrodynamic (MHD) flow of a Carreau fluid induced by a shrinking sheet subject to a stagnation point. This study also explores the impact of non-linear thermal radiation on the heat transfer process. The governing equations of the physical model are expressed as a system of partial differential equations and are transformed into non-linear ordinary differential equations by introducing local similarity variables. The reduced equations of the problem are numerically integrated using the Runge-Kutta-Fehlberg integration scheme. In this study, we explore the conditions for existence, non-existence, uniqueness and duality of numerical solutions. It is found that the solutions may possess a dual nature, an upper and a lower branch, for a specific range of the shrinking parameter. Results indicate that as the magnetic parameter increases, the range of the shrinking parameter over which a dual solution exists widens. Further, a strong magnetic field enhances the thickness of the momentum boundary layer for the second solution, while for the first solution it reduces it. We further note that fluid suction diminishes the fluid velocity, and the thickness of the hydrodynamic boundary layer therefore decreases as well. A critical comparison with existing works is performed, which shows that our outcomes agree with these benchmark results.
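
    Boundary-layer similarity equations of this kind are typically solved by a shooting method: guess the missing wall condition, integrate with a Runge-Kutta scheme, and adjust the guess until the far-field condition is met. The Carreau-fluid equations are not reproduced in the abstract, so as a hedged stand-in the sketch below applies the same procedure to the classical Blasius equation f''' + ½ f f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1, bisecting on the unknown wall shear f''(0):

```python
def rk4_step(f, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(y)."""
    k1 = f(y)
    k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h * (a + 2 * b + 2 * c + d) / 6
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def blasius_fp_at_inf(fpp0, eta_max=10.0, h=0.01):
    """Integrate the Blasius system with guessed f''(0); return f'(eta_max)."""
    rhs = lambda y: [y[1], y[2], -0.5 * y[0] * y[2]]   # (f, f', f'')
    y = [0.0, 0.0, fpp0]
    for _ in range(int(eta_max / h)):
        y = rk4_step(rhs, y, h)
    return y[1]

# Shooting: bisect on f''(0) so that f'(inf) -> 1 (f'(inf) grows with f''(0)).
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if blasius_fp_at_inf(mid) < 1.0:
        lo = mid
    else:
        hi = mid
print(0.5 * (lo + hi))   # classical Blasius wall-shear value, approximately 0.332
```

    For the dual-solution problem of the paper, two distinct wall-condition guesses converge for the same shrinking parameter, giving the upper and lower branches.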

  18. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr; CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex; Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr

    2014-12-15

In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists in numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with a finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model features moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constant) are taken into account by interpolations with respect to the velocity of the control rods. Parallelism across time is achieved by an adequate application of the parareal in time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rod model, while the fine propagator is assumed to be a high-order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch-Maurer-Werner benchmark.
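
    The parareal predictor-corrector update is U(n+1, k+1) = C(U(n, k+1)) + F(U(n, k)) - C(U(n, k)), where C is the cheap coarse propagator and F the expensive fine one; the F evaluations are independent across time slices and hence parallelizable. A minimal sketch on the scalar decay equation dy/dt = -λy (a toy stand-in for the neutron diffusion transient, with both propagators reduced to explicit Euler), run here sequentially:

```python
import math

lam, T, N = 1.0, 1.0, 10     # decay rate, time window, number of coarse slices
dt = T / N

def coarse(y, dt):
    """Coarse propagator: a single explicit Euler step over a whole slice."""
    return y * (1.0 - lam * dt)

def fine(y, dt, substeps=100):
    """Fine propagator: many small Euler steps (stand-in for a high-order solver)."""
    h = dt / substeps
    for _ in range(substeps):
        y = y * (1.0 - lam * h)
    return y

# Predictor: initial coarse sweep over the whole window.
U = [1.0]
for n in range(N):
    U.append(coarse(U[n], dt))

# Corrector iterations; the F_vals loop is the parallel stage.
for k in range(5):
    F_vals = [fine(U[n], dt) for n in range(N)]     # independent per slice
    C_vals = [coarse(U[n], dt) for n in range(N)]
    new_U = [U[0]]
    for n in range(N):
        new_U.append(coarse(new_U[n], dt) + F_vals[n] - C_vals[n])
    U = new_U

print(abs(U[-1] - math.exp(-lam * T)))   # error vs. the exact solution
```

    After enough iterations the parareal solution matches the sequential fine solver, so the remaining error is just the fine solver's own discretization error.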

  19. Intercomparison of Monte Carlo radiation transport codes to model TEPC response in low-energy neutron and gamma-ray fields.

    PubMed

    Ali, F; Waker, A J; Waller, E J

    2014-10-01

Tissue-equivalent proportional counters (TEPC) can potentially be used as a portable and personal dosemeter in mixed neutron and gamma-ray fields, but what hinders this use is their typically large physical size. To formulate compact TEPC designs, the use of a Monte Carlo transport code is necessary to predict the performance of compact designs in these fields. To perform this modelling, three candidate codes were assessed: MCNPX 2.7.E, FLUKA 2011.2 and PHITS 2.24. In each code, benchmark simulations were performed involving the irradiation of a 5-in. TEPC with monoenergetic neutron fields and a 4-in. wall-less TEPC with monoenergetic gamma-ray fields. The frequency and dose mean lineal energies and dose distributions calculated from each code were compared with experimentally determined data. For the neutron benchmark simulations, PHITS produces data closest to the experimental values and for the gamma-ray benchmark simulations, FLUKA yields data closest to the experimentally determined quantities.
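
    The compared quantities are the frequency-mean and dose-mean lineal energies, yF = Σ f(y)·y / Σ f(y) and yD = Σ f(y)·y² / Σ f(y)·y. A small helper computing both from a binned event spectrum; the spectrum below is invented for illustration, not measured or simulated TEPC data:

```python
def lineal_energy_means(y_values, frequencies):
    """Frequency-mean (yF) and dose-mean (yD) lineal energy from a binned
    event-size spectrum f(y)."""
    total_f = sum(frequencies)
    first = sum(f * y for y, f in zip(y_values, frequencies))       # sum f*y
    second = sum(f * y * y for y, f in zip(y_values, frequencies))  # sum f*y^2
    y_f = first / total_f
    y_d = second / first
    return y_f, y_d

# Illustrative spectrum: bin centers in keV/um and event counts per bin.
y_bins = [0.5, 1.0, 2.0, 5.0, 10.0]
counts = [100, 80, 40, 10, 2]
print(lineal_energy_means(y_bins, counts))
```

    By construction yD ≥ yF, since the dose-mean weights large events more heavily.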

  20. Technical Report: Benchmarking for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLoughlin, K.

    2016-01-22

    The software application “MetaQuant” was developed by our group at Lawrence Livermore National Laboratory (LLNL). It is designed to profile microbial populations in a sample using data from whole-genome shotgun (WGS) metagenomic DNA sequencing. Several other metagenomic profiling applications have been described in the literature. We ran a series of benchmark tests to compare the performance of MetaQuant against that of a few existing profiling tools, using real and simulated sequence datasets. This report describes our benchmarking procedure and results.
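
    A profiling benchmark of this kind typically compares an estimated abundance profile against the known truth of a simulated community, with a confidence interval on the error. A hedged sketch of such a comparison (the metric choice, community, and read assignments below are all hypothetical, not MetaQuant's actual procedure):

```python
import random

def l1_error(truth, estimate):
    """Total absolute deviation between two abundance profiles (taxon -> fraction)."""
    taxa = set(truth) | set(estimate)
    return sum(abs(truth.get(t, 0.0) - estimate.get(t, 0.0)) for t in taxa)

def bootstrap_ci(read_assignments, truth, n_boot=1000, seed=3):
    """Bootstrap-resample read-level taxon assignments to get a 95% CI
    on the L1 profiling error."""
    rng = random.Random(seed)
    n = len(read_assignments)
    errors = []
    for _ in range(n_boot):
        sample = [read_assignments[rng.randrange(n)] for _ in range(n)]
        est = {t: sample.count(t) / n for t in set(sample)}
        errors.append(l1_error(truth, est))
    errors.sort()
    return errors[int(0.025 * n_boot)], errors[int(0.975 * n_boot)]

# Hypothetical two-species community and simulated read assignments:
truth = {"A": 0.7, "B": 0.3}
reads = ["A"] * 68 + ["B"] * 32
print(l1_error(truth, {"A": 0.68, "B": 0.32}), bootstrap_ci(reads, truth))
```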

  1. Integral Full Core Multi-Physics PWR Benchmark with Measured Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forget, Benoit; Smith, Kord; Kumar, Shikhar

In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio, with concrete examples in nuclear engineering in the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. Like all analysis tools, verification and validation is essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g. critical experiments, flow loops, etc.), and there is a lack of relevant multi-physics benchmark measurements that are necessary to validate the high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.

  2. Comparison of two-dimensional and three-dimensional simulations of dense nonaqueous phase liquids (DNAPLs): Migration and entrapment in a nonuniform permeability field

    NASA Astrophysics Data System (ADS)

    Christ, John A.; Lemke, Lawrence D.; Abriola, Linda M.

    2005-01-01

    The influence of reduced dimensionality (two-dimensional (2-D) versus 3-D) on predictions of dense nonaqueous phase liquid (DNAPL) infiltration and entrapment in statistically homogeneous, nonuniform permeability fields was investigated using the University of Texas Chemical Compositional Simulator (UTCHEM), a 3-D numerical multiphase simulator. Hysteretic capillary pressure-saturation and relative permeability relationships implemented in UTCHEM were benchmarked against those of another lab-tested simulator, the Michigan-Vertical and Lateral Organic Redistribution (M-VALOR). Simulation of a tetrachloroethene spill in 16 field-scale aquifer realizations generated DNAPL saturation distributions with approximately equivalent distribution metrics in two and three dimensions, with 2-D simulations generally resulting in slightly higher maximum saturations and increased vertical spreading. Variability in 2-D and 3-D distribution metrics across the set of realizations was shown to be correlated at a significance level of 95-99%. Neither spill volume nor release rate appeared to affect these conclusions. Variability in the permeability field did affect spreading metrics by increasing the horizontal spreading in 3-D more than in 2-D in more heterogeneous media simulations. The assumption of isotropic horizontal spatial statistics resulted, on average, in symmetric 3-D saturation distribution metrics in the horizontal directions. The practical implication of this study is that for statistically homogeneous, nonuniform aquifers, 2-D simulations of saturation distributions are good approximations to those obtained in 3-D. However, additional work will be needed to explore the influence of dimensionality on simulated DNAPL dissolution.
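
    The correlation between paired 2-D and 3-D distribution metrics across realizations can be tested with a Pearson coefficient and its t-statistic, t = r·sqrt(n-2)/sqrt(1-r²), compared against a t-distribution with n-2 degrees of freedom. A sketch with hypothetical paired metrics (invented numbers, not the study's 16 realizations):

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of paired observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    """t = r * sqrt(n-2) / sqrt(1-r^2); significance is judged against a
    t-distribution with n-2 degrees of freedom."""
    return r * math.sqrt((n - 2) / (1.0 - r * r))

# Hypothetical paired 2-D vs 3-D spreading metrics over 16 realizations:
two_d   = [1.0, 1.2, 0.9, 1.5, 1.1, 1.3, 0.8, 1.4,
           1.0, 1.2, 1.6, 0.7, 1.1, 1.3, 0.9, 1.5]
three_d = [0.9, 1.1, 1.0, 1.4, 1.0, 1.2, 0.9, 1.3,
           1.1, 1.1, 1.5, 0.8, 1.0, 1.2, 1.0, 1.4]
r = pearson_r(two_d, three_d)
print(r, t_statistic(r, len(two_d)))
```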

  3. Performance of exchange-correlation functionals in density functional theory calculations for liquid metal: A benchmark test for sodium.

    PubMed

    Han, Jeong-Hwan; Oda, Takuji

    2018-04-14

The performance of exchange-correlation functionals in density-functional theory (DFT) calculations for liquid metal has not been sufficiently examined. In the present study, benchmark tests of Perdew-Burke-Ernzerhof (PBE), Armiento-Mattsson 2005 (AM05), PBE re-parameterized for solids, and local density approximation (LDA) functionals are conducted for liquid sodium. The pair correlation function, equilibrium atomic volume, bulk modulus, and relative enthalpy are evaluated at 600 K and 1000 K. Compared with the available experimental data, the errors range from -11.2% to 0.0% for the atomic volume, from -5.2% to 22.0% for the bulk modulus, and from -3.5% to 2.5% for the relative enthalpy, depending on the DFT functional. The generalized gradient approximation functionals are superior to the LDA functional, and the PBE and AM05 functionals exhibit the best performance. In addition, we assess whether the error tendency in liquid simulations is comparable to that in solid simulations; the results suggest that the atomic volume and relative enthalpy performances are comparable between solid and liquid states, but that the bulk modulus performance is not. These benchmark test results indicate that the results of liquid simulations depend significantly on the exchange-correlation functional and that the DFT functional performance in solid simulations can be used to roughly estimate the performance in liquid simulations.
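
    The reported percentage errors are signed deviations of the computed property from experiment. A trivial helper, with purely illustrative numbers (not the paper's data):

```python
def percent_error(computed, experimental):
    """Signed percent deviation of a computed property from the experimental value."""
    return 100.0 * (computed - experimental) / experimental

# Illustrative values only: a hypothetical atomic volume for liquid sodium
# at 600 K from a hypothetical functional vs. a hypothetical reference.
print(round(percent_error(computed=40.0, experimental=41.8), 1))   # -> -4.3
```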

  4. Performance of exchange-correlation functionals in density functional theory calculations for liquid metal: A benchmark test for sodium

    NASA Astrophysics Data System (ADS)

    Han, Jeong-Hwan; Oda, Takuji

    2018-04-01

The performance of exchange-correlation functionals in density-functional theory (DFT) calculations for liquid metal has not been sufficiently examined. In the present study, benchmark tests of Perdew-Burke-Ernzerhof (PBE), Armiento-Mattsson 2005 (AM05), PBE re-parameterized for solids, and local density approximation (LDA) functionals are conducted for liquid sodium. The pair correlation function, equilibrium atomic volume, bulk modulus, and relative enthalpy are evaluated at 600 K and 1000 K. Compared with the available experimental data, the errors range from -11.2% to 0.0% for the atomic volume, from -5.2% to 22.0% for the bulk modulus, and from -3.5% to 2.5% for the relative enthalpy, depending on the DFT functional. The generalized gradient approximation functionals are superior to the LDA functional, and the PBE and AM05 functionals exhibit the best performance. In addition, we assess whether the error tendency in liquid simulations is comparable to that in solid simulations; the results suggest that the atomic volume and relative enthalpy performances are comparable between solid and liquid states, but that the bulk modulus performance is not. These benchmark test results indicate that the results of liquid simulations depend significantly on the exchange-correlation functional and that the DFT functional performance in solid simulations can be used to roughly estimate the performance in liquid simulations.

  5. Thermal Performance Benchmarking: Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Xuhui

    In FY16, the thermal performance of the 2014 Honda Accord Hybrid power electronics thermal management system was benchmarked. Both experiments and numerical simulation were utilized to thoroughly study the thermal resistances and temperature distribution in the power module. Experimental results obtained from the water-ethylene glycol tests provided the junction-to-liquid thermal resistance. The finite element analysis (FEA) and computational fluid dynamics (CFD) models were found to yield a good match with experimental results. Both experimental and modeling results demonstrate that the passive stack is the dominant thermal resistance for both the motor and power electronics systems. The 2014 Accord power electronics system yields steady-state thermal resistance values of around 42-50 mm²K/W, depending on the flow rate. At a typical flow rate of 10 liters per minute, the thermal resistance of the Accord system was found to be about 44 percent lower than that of the 2012 Nissan LEAF system that was benchmarked in FY15. The main reason for the difference is that the Accord power module used a metalized-ceramic substrate and eliminated the thermal interface material layers. FEA models were developed to study the transient performance of the 2012 Nissan LEAF, the 2014 Accord, and two other systems that feature conventional power module designs. The simulation results indicate that the 2012 LEAF power module has the lowest thermal impedance at time scales of less than one second. This is probably due to moving materials with low thermal conductivity farther from the heat source and enhancing the heat-spreading effect of the copper-molybdenum plate close to the insulated gate bipolar transistors. When approaching steady state, the Honda system shows lower thermal impedance.
    Measurement results for the thermal resistance of the 2015 BMW i3 power electronic system indicate that the i3 insulated gate bipolar transistor module has significantly lower junction-to-liquid thermal resistance than the other systems. At a flow rate of 12 liters per minute, the thermal resistance of the i3 system is only 30 percent of that of the Accord system and 15 percent of that of the LEAF system.
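The area-normalized resistance values quoted in mm²K/W are the junction-to-liquid temperature rise divided by the heat flux through the module footprint. A sketch of that calculation, with all numbers illustrative assumptions rather than measured values from the benchmark:

```python
# Sketch of the area-normalized junction-to-liquid thermal resistance
# metric (mm^2*K/W): temperature rise per unit heat flux. The device
# power, area, and temperatures below are illustrative assumptions.

def unit_thermal_resistance(t_junction_c, t_liquid_c, power_w, area_mm2):
    """Area-normalized junction-to-liquid resistance in mm^2*K/W."""
    delta_t = t_junction_c - t_liquid_c   # temperature rise, K
    heat_flux = power_w / area_mm2        # W/mm^2
    return delta_t / heat_flux            # mm^2*K/W

# Illustrative: a 200 W module over a 250 mm^2 footprint, 35 K rise
r_unit = unit_thermal_resistance(100.0, 65.0, 200.0, 250.0)
print(f"{r_unit:.2f} mm^2*K/W")  # lands inside the 42-50 range cited above
```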

  6. The PPP Simulator: User’s Manual and Report

    DTIC Science & Technology

    1986-11-01

    [Excerpt of a PPP simulator session transcript (script started Thu Aug 28 09:16:15 1986) showing a parallel benchmark (Benchmarks/Par/ccon6.w) being loaded, followed by C source fragments; the remainder of the excerpt is garbled and unrecoverable.]

  7. Summary of the Tandem Cylinder Solutions from the Benchmark Problems for Airframe Noise Computations-I Workshop

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2011-01-01

    Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark Problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and against experimental data from two facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details that would be necessary to compute the noise remains challenging. In particular, how best to simulate the effects of the experimental transition strip, and the associated high Reynolds number effects, was unclear. Furthermore, capturing the spanwise variation proved difficult.

  8. Performance modeling & simulation of complex systems (A systems engineering design & analysis approach)

    NASA Technical Reports Server (NTRS)

    Hall, Laverne

    1995-01-01

    Modeling of the Multi-mission Image Processing System (MIPS) will be described as an example of the use of a modeling tool to design a distributed system that supports multiple application scenarios. This paper examines: (a) modeling tool selection, capabilities, and operation (namely NETWORK 2.5 by CACI), (b) pointers for building or constructing a model and how the MIPS model was developed, (c) the importance of benchmarking or testing the performance of equipment/subsystems being considered for incorporation into the design/architecture, (d) the essential step of model validation and/or calibration using the benchmark results, (e) sample simulation results from the MIPS model, and (f) how modeling and simulation analysis affected the MIPS design process by having a supportive and informative impact.

  9. Direct Numerical Simulation of Turbulent Multi-Stage Autoignition Relevant to Engine Conditions

    NASA Astrophysics Data System (ADS)

    Chen, Jacqueline

    2017-11-01

    Due to the unrivaled energy density of liquid hydrocarbon fuels, combustion will continue to provide over 80% of the world's energy for at least the next fifty years. Hence, combustion needs to be understood and controlled to optimize combustion systems for efficiency, to prevent further climate change, to reduce emissions and to ensure U.S. energy security. In this talk I will discuss recent progress in direct numerical simulations of turbulent combustion focused on providing fundamental insights into key `turbulence-chemistry' interactions that underpin the development of next-generation fuel-efficient, fuel-flexible engines for transportation and power generation. Petascale direct numerical simulations (DNS) of multi-stage mixed-mode turbulent combustion in canonical configurations have elucidated key physics that govern autoignition and flame stabilization in engines, and they provide benchmark data for combustion model development under the conditions of advanced engines, which operate near combustion limits to maximize efficiency and minimize emissions. Mixed-mode combustion refers to premixed or partially premixed flames propagating into stratified autoignitive mixtures. Multi-stage ignition refers to hydrocarbon fuels with negative temperature coefficient behavior that undergo sequential low- and high-temperature autoignition. Key issues that will be discussed include: 1) the role of mixing in shear-driven turbulence on the dynamics of multi-stage autoignition and cool-flame propagation in diesel environments, 2) the role of thermal and composition stratification on the evolution of the balance of mixed combustion modes, flame propagation versus spontaneous ignition, which determines the overall combustion rate in autoignition processes, and 3) the role of cool flames in lifted-flame stabilization. Finally, prospects for DNS of turbulent combustion at the exascale will be discussed in the context of anticipated heterogeneous machine architectures.
    This work was sponsored by the DOE Office of Basic Energy Sciences, with computing resources provided by the Oak Ridge Leadership Computing Facility through the DOE INCITE Program.

  10. Chrystal and Proudman resonances simulated with three numerical models

    NASA Astrophysics Data System (ADS)

    Bubalo, Maja; Janeković, Ivica; Orlić, Mirko

    2018-05-01

    The aim of this work was to study Chrystal and Proudman resonances in a simple closed basin and to explore and compare how well the two resonant mechanisms are reproduced with different, nowadays widely used, numerical ocean models. The test case was based on air pressure disturbances of two commonly used shapes (a sinusoidal and a boxcar), having various wavelengths, and propagating at different speeds. Our test domain was a closed rectangular basin, 300 km long with a uniform depth of 50 m, with the theoretical analytical solution available for benchmark. In total, 2250 simulations were performed for each of the three different numerical models: ADCIRC, SCHISM and ROMS. During each of the simulations, we recorded water level anomalies and computed the integral of the energy density spectrum for a number of points distributed along the basin. We have successfully documented the transition from Proudman to Chrystal resonance that occurs for a sinusoidal air pressure disturbance having a wavelength between one and two basin lengths. An inter-model comparison of the results shows that the different models represent the two resonant phenomena in slightly different ways. For Chrystal resonance, all the models showed similar behavior; however, the ADCIRC model provided slightly higher values of the mean resonant period than the other two models. In the case of Proudman resonance, the most consistent results, closest to the analytical solution, were obtained using the ROMS model, which reproduced a mean resonant speed of 22.00 m/s, close to the theoretical value of 22.15 m/s. The ADCIRC and SCHISM models showed small deviations from that value, with the mean speeds being slightly lower: 21.97 m/s (ADCIRC) and 21.93 m/s (SCHISM). These differences may seem small, but they could play an important role when resonance is a crucial process, enhancing effects by two orders of magnitude (e.g., in meteotsunamis).
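The theoretical value that the models are compared against follows directly from the shallow-water long-wave speed: Proudman resonance occurs when the pressure disturbance travels at c = sqrt(g h). For the 50 m deep basin of this study:

```python
import math

# Proudman resonance: the air-pressure disturbance travels at the
# shallow-water long-wave speed c = sqrt(g * h). For the basin in this
# study (uniform depth h = 50 m) this reproduces the theoretical
# 22.15 m/s quoted in the abstract.

g = 9.81   # gravitational acceleration, m/s^2
h = 50.0   # basin depth, m

c_theory = math.sqrt(g * h)
print(f"theoretical resonant speed: {c_theory:.2f} m/s")  # -> 22.15 m/s

# Deviations of the modeled mean resonant speeds from theory
for model, c_model in [("ROMS", 22.00), ("ADCIRC", 21.97), ("SCHISM", 21.93)]:
    print(f"{model}: {(c_model - c_theory) / c_theory * 100:+.2f}%")
```

All three models sit within about one percent of the analytical value, consistent with the abstract's conclusion.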

  11. Assessment of Efficiency and Performance in Tsunami Numerical Modeling with GPU

    NASA Astrophysics Data System (ADS)

    Yalciner, Bora; Zaytsev, Andrey

    2017-04-01

    Non-linear shallow water equations (NSWE) are used to solve the propagation and coastal amplification of long waves and tsunamis. The leap-frog scheme of the finite difference technique is one of the satisfactory numerical methods widely used for these problems. Tsunami numerical models are necessary for not only academic but also operational purposes, which need fast and accurate solutions. Recent developments in information technology provide considerably faster numerical solutions and are becoming one of the crucial requirements in this respect. The tsunami numerical code NAMI DANCE uses the finite difference numerical method to solve linear and non-linear forms of the shallow water equations for long wave problems, specifically for tsunamis. In this study, the new code is structured for the Graphical Processing Unit (GPU) using the CUDA API. The new code is applied to different (analytical, experimental and field) tsunami benchmark problems for testing. One of those applications is the 2011 Great East Japan tsunami, which was instrumentally recorded on various types of gauges, including tide and wave gauges, offshore GPS buoys, cabled Ocean Bottom Pressure (OBP) gauges and DART buoys. The accuracy of the results is compared with the measurements, and fairly good agreement is obtained. The efficiency and performance of the code are also compared with the version using a multi-core Central Processing Unit (CPU). The dependence of simulation speed on GPU use for linear or non-linear solutions is also investigated. One of the results is that the simulation speed is increased by up to 75 times compared to the processing time on a computer using a single 4/8-thread multi-core CPU. The results are presented with comparisons and discussions. Furthermore, how multi-dimensional finite difference problems map onto the GPU architecture is also discussed.
    The research leading to this study has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 603839 (Project ASTARTE: Assessment, Strategy and Risk Reduction for Tsunamis in Europe). PARI, Japan and NOAA, USA are acknowledged for the data of the measurements. Prof. Ahmet C. Yalciner is also acknowledged for his long-term and sustained support of the authors.

  12. OpenMP performance for benchmark 2D shallow water equations using LBM

    NASA Astrophysics Data System (ADS)

    Sabri, Khairul; Rabbani, Hasbi; Gunawan, Putu Harry

    2018-03-01

    Shallow water equations, commonly referred to as Saint-Venant equations, are used to model fluid phenomena. These equations can be solved numerically using several methods, such as the Lattice Boltzmann method (LBM), SIMPLE-like methods, the finite difference method, Godunov-type methods, and the finite volume method. In this paper, the shallow water equations are approximated using the LBM (an approach known as LABSWE) and the performance of the resulting parallel program is studied using OpenMP. To evaluate the performance of the 2- and 4-thread parallel algorithms, ten different grid sizes Lx and Ly are elaborated. The results show that, using the OpenMP platform, the computational time for solving LABSWE can be decreased. For instance, with a grid size of 1000 × 500, the computation times with 2 and 4 threads are observed to be 93.54 s and 333.243 s, respectively.
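Scaling studies like this one are usually summarized with two standard metrics: speedup S_p = T_serial / T_p and parallel efficiency E_p = S_p / p for p threads. A minimal sketch, with illustrative timings rather than the paper's data:

```python
# Standard metrics for an OpenMP thread-scaling study:
# speedup S_p = T_serial / T_p, efficiency E_p = S_p / p.
# The timings below are illustrative assumptions, not the paper's data.

def speedup(t_serial: float, t_parallel: float) -> float:
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, threads: int) -> float:
    return speedup(t_serial, t_parallel) / threads

t1 = 180.0                      # hypothetical serial time, seconds
timings = {2: 95.0, 4: 52.0}    # hypothetical parallel times per thread count

for p, tp in timings.items():
    print(f"{p} threads: speedup {speedup(t1, tp):.2f}, "
          f"efficiency {efficiency(t1, tp, p):.0%}")
```

Efficiency below 100% reflects the usual overheads (thread management, memory bandwidth contention) that limit LBM scaling on shared-memory machines.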

  13. Partially-Averaged Navier Stokes Model for Turbulence: Implementation and Validation

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Abdol-Hamid, Khaled S.

    2005-01-01

    Partially-averaged Navier-Stokes (PANS) is a suite of turbulence closure models with various modeled-to-resolved scale ratios, ranging from Reynolds-averaged Navier-Stokes (RANS) to direct numerical simulation of the Navier-Stokes equations. The objective of PANS, like that of hybrid models, is to resolve large-scale structures at reasonable computational expense. The modeled-to-resolved scale ratio, or the level of physical resolution in PANS, is quantified by two parameters: the unresolved-to-total ratios of kinetic energy (f(sub k)) and dissipation (f(sub epsilon)). The unresolved-scale stress is modeled with the Boussinesq approximation, and modeled transport equations are solved for the unresolved kinetic energy and dissipation. In this paper, we first present a brief discussion of the PANS philosophy, followed by a description of the implementation procedure, and finally perform a preliminary evaluation on benchmark problems.
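A sketch of how the two resolution parameters enter a k-epsilon-type closure: with unresolved kinetic energy k_u = f_k k and unresolved dissipation eps_u = f_eps eps, the unresolved eddy viscosity nu_u = C_mu k_u^2 / eps_u is reduced relative to the RANS value by the factor f_k^2 / f_eps. The values below are illustrative, not from the paper:

```python
# Illustrative sketch of the PANS resolution parameters in a
# k-epsilon-type eddy viscosity: nu_u = C_mu * k_u^2 / eps_u with
# k_u = f_k * k and eps_u = f_eps * eps, so nu_u is the RANS value
# scaled by f_k^2 / f_eps. All numbers are illustrative.

C_MU = 0.09  # standard k-epsilon model constant

def pans_eddy_viscosity(k, eps, f_k, f_eps):
    """Unresolved-scale eddy viscosity for given PANS resolution parameters."""
    k_u = f_k * k
    eps_u = f_eps * eps
    return C_MU * k_u**2 / eps_u

k, eps = 1.0, 0.5                                  # illustrative turbulence scales
nu_rans = pans_eddy_viscosity(k, eps, 1.0, 1.0)    # f_k = f_eps = 1 recovers RANS
nu_pans = pans_eddy_viscosity(k, eps, 0.5, 1.0)    # half the energy modeled

print(nu_pans / nu_rans)  # -> 0.25, i.e. f_k^2 / f_eps
```

Lowering f_k thus reduces the modeled eddy viscosity, allowing more of the turbulent motion to be resolved on the grid.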

  14. A new multiscale air quality transport model (Fluidity, 4.1.9) using fully unstructured anisotropic adaptive mesh technology

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.

    2015-06-01

    A new anisotropic hr-adaptive mesh technique, based on a discontinuous Galerkin/control volume discretization on unstructured meshes, has been applied to the modelling of multiscale transport phenomena. Compared with existing air quality models, which are typically based on static structured grids with a local nesting technique, the anisotropic hr-adaptive model has the advantage of being able to adapt the mesh according to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and effectively. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) transport phenomena. Comparisons have been made between the results obtained using uniform resolution meshes and anisotropic adaptive resolution meshes.

  15. Propagation of an ultra-short, intense laser in a relativistic fluid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritchie, A.B.; Decker, C.D.

    1997-12-31

    A Maxwell-relativistic fluid model is developed to describe the propagation of an ultrashort, intense laser pulse through an underdense plasma. The model makes use of numerically stabilizing fast Fourier transform (FFT) computational methods for both the Maxwell and fluid equations, and it is benchmarked against particle-in-cell (PIC) simulations. Strong fields generated in the wake of the laser are calculated, and the authors observe coherent wake-field radiation generated at harmonics of the plasma frequency due to nonlinearities in the laser-plasma interaction. For a plasma whose density is 10% of critical, the highest members of the plasma harmonic series begin to overlap with the first laser harmonic, suggesting that the widely used multiple-scales theory, by which the laser and plasma frequencies are assumed to be separable, ceases to be a useful approximation.

  16. Microwave beam broadening due to turbulent plasma density fluctuations within the limit of the Born approximation and beyond

    NASA Astrophysics Data System (ADS)

    Köhn, A.; Guidi, L.; Holzhauer, E.; Maj, O.; Poli, E.; Snicker, A.; Weber, H.

    2018-07-01

    Plasma turbulence, and edge density fluctuations in particular, can under certain conditions broaden the cross-section of injected microwave beams significantly. This can be a severe problem for applications relying on well-localized deposition of the microwave power, like the control of MHD instabilities. Here we investigate this broadening mechanism as a function of fluctuation level, background density and propagation length in a fusion-relevant scenario using two numerical codes, the full-wave code IPF-FDMC and the novel wave kinetic equation solver WKBeam. The latter treats the effects of fluctuations using a statistical approach, based on an iterative solution of the scattering problem (Born approximation). The full-wave simulations are used to benchmark this approach. The Born approximation is shown to be valid over a large parameter range, including ITER-relevant scenarios.

  17. Nuclear spin noise in the central spin model

    NASA Astrophysics Data System (ADS)

    Fröhling, Nina; Anders, Frithjof B.; Glazov, Mikhail

    2018-05-01

    We study theoretically the fluctuations of the nuclear spins in quantum dots employing the central spin model which accounts for the hyperfine interaction of the nuclei with the electron spin. These fluctuations are calculated both with an analytical approach using homogeneous hyperfine couplings (box model) and with a numerical simulation using a distribution of hyperfine coupling constants. The approaches are in good agreement. The box model serves as a benchmark with low computational cost that explains the basic features of the nuclear spin noise well. We also demonstrate that the nuclear spin noise spectra comprise a two-peak structure centered at the nuclear Zeeman frequency in high magnetic fields with the shape of the spectrum controlled by the distribution of the hyperfine constants. This allows for direct access to this distribution function through nuclear spin noise spectroscopy.

  18. Capturing nonlocal interaction effects in the Hubbard model: Optimal mappings and limits of applicability

    NASA Astrophysics Data System (ADS)

    van Loon, E. G. C. P.; Schüler, M.; Katsnelson, M. I.; Wehling, T. O.

    2016-10-01

    We investigate the Peierls-Feynman-Bogoliubov variational principle to map Hubbard models with nonlocal interactions to effective models with only local interactions. We study the renormalization of the local interaction induced by nearest-neighbor interaction and assess the quality of the effective Hubbard models in reproducing observables of the corresponding extended Hubbard models. We compare the renormalization of the local interactions as obtained from numerically exact determinant quantum Monte Carlo to approximate but more generally applicable calculations using dual boson, dynamical mean field theory, and the random phase approximation. These more approximate approaches are crucial for any application with real materials in mind. Furthermore, we use the dual boson method to calculate observables of the extended Hubbard models directly and benchmark these against determinant quantum Monte Carlo simulations of the effective Hubbard model.

  19. Least-Squares Spectral Element Solutions to the CAA Workshop Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Lin, Wen H.; Chan, Daniel C.

    1997-01-01

    This paper presents computed results for some of the CAA benchmark problems via the acoustic solver developed at the Rocketdyne CFD Technology Center under the corporate agreement between Boeing North American, Inc. and NASA for the Aerospace Industry Technology Program. The calculations are considered as benchmark testing of the functionality, accuracy, and performance of the solver. Results of these computations demonstrate that the solver is capable of solving the propagation of aeroacoustic signals. Testing on sound generation and on more realistic problems is now being pursued for industrial applications of this solver. Numerical calculations were performed for the second problem of Category 1 of the current workshop problems, an acoustic pulse scattered from a rigid circular cylinder, and for two of the first CAA workshop problems, i.e., the first problem of Category 1, the propagation of a linear wave, and the first problem of Category 4, an acoustic pulse reflected from a rigid wall in a uniform flow of Mach 0.5. The aim of including the last two problems in this workshop is to test the effectiveness of some boundary conditions set up in the solver. Numerical results for the last two benchmark problems have been compared with their corresponding exact solutions, and the comparisons are excellent. This demonstrates the high fidelity of the solver in handling wave propagation problems. This feature makes the method quite attractive for developing a computational acoustic solver for calculating the aero/hydrodynamic noise in a violent flow environment.

  20. Comparing Hospital Processes and Outcomes in California Medicare Beneficiaries: Simulation Prompts Reconsideration

    PubMed Central

    Escobar, Gabriel J; Baker, Jennifer M; Turk, Benjamin J; Draper, David; Liu, Vincent; Kipnis, Patricia

    2017-01-01

    Introduction: This article is not a traditional research report. It describes how conducting a specific set of benchmarking analyses led us to broader reflections on hospital benchmarking. We reexamined an issue that has received far less attention from researchers than in the past: how variations in the hospital admission threshold might affect hospital rankings. Considering this threshold made us reconsider what benchmarking is and what future benchmarking studies might be like. Although we recognize that some of our assertions are speculative, they are based on our reading of the literature and previous and ongoing data analyses being conducted in our research unit. We describe the benchmarking analyses that led to these reflections. Objectives: The Centers for Medicare and Medicaid Services' Hospital Compare Web site includes data on fee-for-service Medicare beneficiaries but does not control for severity of illness, which requires physiologic data now available in most electronic medical records. To address this limitation, we compared hospital processes and outcomes among Kaiser Permanente Northern California's (KPNC) Medicare Advantage beneficiaries and non-KPNC California Medicare beneficiaries between 2009 and 2010. Methods: We assigned a simulated severity of illness measure to each record and explored the effect of having the additional information on outcomes. Results: We found that if the admission severity of illness in non-KPNC hospitals increased, KPNC hospitals' mortality performance would appear worse; conversely, if admission severity at non-KPNC hospitals decreased, KPNC hospitals' performance would appear better. Conclusion: Future hospital benchmarking should consider the impact of variation in admission thresholds. PMID:29035176

  1. Computational Hemodynamic Simulation of Human Circulatory System under Altered Gravity

    NASA Technical Reports Server (NTRS)

    Kim, Chang Sung; Kiris, Cetin; Kwak, Dochan

    2003-01-01

    A computational hemodynamics approach is presented to simulate blood flow through the human circulatory system under altered gravity conditions. Numerical techniques relevant to hemodynamics issues are introduced, including non-Newtonian modeling for flow characteristics governed by red blood cells, distensible wall motion due to the heart pulse, and capillary bed modeling for outflow boundary conditions. Gravitational body force terms are added to the Navier-Stokes equations to study the effects of gravity on internal flows. Six types of gravity benchmark problems are presented to provide a fundamental understanding of gravitational effects on the human circulatory system. For code validation, computed results are compared with steady and unsteady experimental data for non-Newtonian flows in a carotid bifurcation model and a curved circular tube, respectively. This computational approach is then applied to the blood circulation in the human brain as a target problem. A three-dimensional, idealized Circle of Willis configuration is developed with minor arteries truncated based on anatomical data. Demonstrated are not only the mechanism of the collateral circulation but also the effects of gravity on the distensible wall motion and resultant flow patterns.
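The abstract does not specify which non-Newtonian model was used, so purely as an illustration, here is one model commonly applied to blood, the Carreau model, with parameter values frequently quoted for blood in the literature:

```python
# Illustrative sketch of a common non-Newtonian blood viscosity model
# (the Carreau model). The abstract does not say which model was used;
# parameter values below are ones frequently quoted for blood.

def carreau_viscosity(shear_rate,
                      mu0=0.056,       # zero-shear viscosity, Pa*s
                      mu_inf=0.00345,  # infinite-shear viscosity, Pa*s
                      lam=3.313,       # relaxation time, s
                      n=0.3568):       # power-law index
    """Apparent viscosity (Pa*s) of blood at a given shear rate (1/s)."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

for gdot in (0.1, 1.0, 10.0, 100.0):
    print(f"shear rate {gdot:7.1f} 1/s -> {carreau_viscosity(gdot):.5f} Pa*s")
```

The shear-thinning behavior (viscosity falling with shear rate) is what distinguishes red-blood-cell-laden flow from a Newtonian fluid in simulations like this one.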

  2. EVA Suit R and D for Performance Optimization

    NASA Technical Reports Server (NTRS)

    Cowley, Matthew S.; Harvill, Lauren; Benson, Elizabeth; Rajulu, Sudhakar

    2014-01-01

    Designing a planetary suit is very complex and often requires difficult trade-offs between performance, cost, mass, and system complexity. To verify that new suit designs meet requirements, full prototypes must be built and tested with human subjects. However, numerous design iterations will occur before the hardware meets those requirements. Traditional draw-prototype-test paradigms for R&D are prohibitively expensive with today's shrinking Government budgets. Personnel at NASA are developing modern simulation techniques which focus on human-centric designs by creating virtual prototype simulations and fully adjustable physical prototypes of suit hardware. During the R&D design phase, these easily modifiable representations of an EVA suit's hard components will allow designers to think creatively and exhaust design possibilities before they build and test working prototypes with human subjects. It allows scientists to comprehensively benchmark current suit capabilities and limitations for existing suit sizes and sizes that do not exist. This is extremely advantageous and enables comprehensive design down-selections to be made early in the design process, enables the use of human performance as design criteria, and enables designs to target specific populations

  3. Out-of-equilibrium protocol for Rényi entropies via the Jarzynski equality.

    PubMed

    Alba, Vincenzo

    2017-06-01

    In recent years entanglement measures, such as the von Neumann and the Rényi entropies, provided a unique opportunity to access elusive features of quantum many-body systems. However, extracting entanglement properties analytically, experimentally, or in numerical simulations can be a formidable task. Here, by combining the replica trick and the Jarzynski equality we devise an alternative effective out-of-equilibrium protocol for measuring the equilibrium Rényi entropies. The key idea is to perform a quench in the geometry of the replicas. The Rényi entropies are obtained as the exponential average of the work performed during the quench. We illustrate an application of the method in classical Monte Carlo simulations, although it could be useful in different contexts, such as in quantum Monte Carlo, or experimentally in cold-atom systems. The method is most effective in the quasistatic regime, i.e., for a slow quench. As a benchmark, we compute the Rényi entropies in the Ising universality class in 1+1 dimensions. We find perfect agreement with the well-known conformal field theory predictions.
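The Jarzynski equality underlying the protocol states that the exponential average of the work performed during a non-equilibrium process recovers the equilibrium free-energy difference: <exp(-W)> = exp(-DF) in units with beta = 1. A generic toy check with Gaussian-distributed work (not the paper's replica-quench setup), for which DF = mu - sigma^2/2 analytically:

```python
import math
import random

# Toy check of the Jarzynski equality: <exp(-W)> = exp(-DF) (beta = 1).
# For Gaussian work W ~ N(mu, sigma^2), DF = mu - sigma^2 / 2, so the
# exponential work average recovers DF even though <W> = mu > DF.
# This is a generic illustration, not the paper's protocol.

random.seed(42)
mu, sigma, n = 2.0, 1.0, 200_000

works = [random.gauss(mu, sigma) for _ in range(n)]
df_estimate = -math.log(sum(math.exp(-w) for w in works) / n)

print(f"exact DF = {mu - sigma**2 / 2:.3f}, estimate = {df_estimate:.3f}")
```

The gap between the mean work and DF is the dissipated work; the exponential average is dominated by rare low-work trajectories, which is why the method works best in the quasistatic (slow-quench) regime the abstract recommends.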

  4. Seismic sounding of convection in the Sun

    NASA Astrophysics Data System (ADS)

    Sreenivasan, Katepalli R.

    2015-11-01

    Thermal convection is the dominant mechanism of energy transport in the outer envelope of the Sun (one-third by radius). It drives global fluid circulations and magnetic fields observed on the solar surface. Convection excites a broadband spectrum of acoustic waves that propagate within the interior and set up modal resonances. These acoustic waves, also called seismic waves, are observed at the surface of the Sun by space- and ground-based telescopes. Seismic sounding, the study of these seismic waves to infer the internal properties of the Sun, constitutes helioseismology. Here we review our knowledge of solar convection, especially that obtained through seismic inference. Several characteristics of solar convection, such as differential rotation, anisotropic Reynolds stresses, the influence of rotation on convection, and supergranulation, are considered. On larger scales, several inferences suggest that convective velocities are substantially smaller than those predicted by theory and simulations. This discrepancy challenges the models of internal differential rotation that rely on convective stresses as a driving mechanism and provides an important benchmark for numerical simulations. In collaboration with Shravan Hanasoge, Tata Institute of Fundamental Research, Mumbai, and Laurent Gizon, Max-Planck-Institut fuer Sonnensystemforschung, Goettingen.

  5. Intercomparison of terrain-following coordinate transformation and immersed boundary methods in large-eddy simulation of wind fields over complex terrain

    NASA Astrophysics Data System (ADS)

    Fang, Jiannong; Porté-Agel, Fernando

    2016-09-01

    Accurate modeling of complex terrain, especially steep terrain, in the simulation of wind fields remains a challenge. It is well known that the terrain-following coordinate transformation method (TFCT) generally used in atmospheric flow simulations is restricted to non-steep terrain with slope angles less than 45 degrees. Due to the advantage of keeping the basic computational grids and numerical schemes unchanged, the immersed boundary method (IBM) has been widely implemented in various numerical codes to handle arbitrary domain geometry including steep terrain. However, IBM could introduce considerable implementation errors in wall modeling through various interpolations because an immersed boundary is generally not co-located with a grid line. In this paper, we perform an intercomparison of TFCT and IBM in large-eddy simulation of a turbulent wind field over a three-dimensional (3D) hill for the purpose of evaluating the implementation errors in IBM. The slopes of the three-dimensional hill are not steep and, therefore, TFCT can be applied. Since TFCT is free from interpolation-induced implementation errors in wall modeling, its results can serve as a reference for the evaluation so that the influence of errors from wall models themselves can be excluded. For TFCT, a new algorithm for solving the pressure Poisson equation in the transformed coordinate system is proposed and first validated for a laminar flow over periodic two-dimensional hills by comparing with a benchmark solution. For the turbulent flow over the 3D hill, the wind-tunnel measurements used for validation contain both vertical and horizontal profiles of mean velocities and variances, thus allowing an in-depth comparison of the numerical models. In this case, TFCT is expected to be preferable to IBM. This is confirmed by the presented results of comparison. 
It is shown that the implementation errors in IBM lead to large discrepancies between the results obtained by TFCT and IBM near the surface. The effects of different schemes used to implement wall boundary conditions in IBM are studied. The source of errors and possible ways to improve the IBM implementation are discussed.

  6. Computational Studies of Strongly Correlated Quantum Matter

    NASA Astrophysics Data System (ADS)

    Shi, Hao

    The study of strongly correlated quantum many-body systems is an outstanding challenge. Highly accurate results are needed for the understanding of practical and fundamental problems in condensed-matter physics, high energy physics, material science, quantum chemistry and so on. Our familiar mean-field or perturbative methods tend to be ineffective. Numerical simulations provide a promising approach for studying such systems. The fundamental difficulty of numerical simulation is that the dimension of the Hilbert space needed to describe interacting systems increases exponentially with the system size. Quantum Monte Carlo (QMC) methods are one of the best approaches to tackle the problem of enormous Hilbert space. They have been highly successful for boson systems and unfrustrated spin models. For systems with fermions, the exchange symmetry in general causes the infamous sign problem, making the statistical noise in the computed results grow exponentially with the system size. This hinders our understanding of interesting physics such as high-temperature superconductivity, metal-insulator phase transition. In this thesis, we present a variety of new developments in the auxiliary-field quantum Monte Carlo (AFQMC) methods, including the incorporation of symmetry in both the trial wave function and the projector, developing the constraint release method, using the force-bias to drastically improve the efficiency in Metropolis framework, identifying and solving the infinite variance problem, and sampling Hartree-Fock-Bogoliubov wave function. With these developments, some of the most challenging many-electron problems are now under control. 
We obtain an exact numerical solution of the two-dimensional strongly interacting Fermi atomic gas, determine the ground-state properties of the 2D Fermi gas with Rashba spin-orbit coupling, provide benchmark results for the ground state of the two-dimensional Hubbard model, and establish that the Hubbard model has stripe order in the underdoped region.
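The exponential growth of the Hilbert space that the abstract describes is easy to make concrete for the Hubbard model, where each lattice site can be empty, occupied by either spin, or doubly occupied. The sketch below is purely illustrative and is not taken from the thesis:

```python
def hubbard_hilbert_dim(n_sites):
    """Dimension of the full many-body Hilbert space of the Hubbard
    model: each site is empty, spin-up, spin-down, or doubly occupied."""
    return 4 ** n_sites

# A modest 4 x 4 lattice already has ~4.3 billion basis states; storing a
# single complex wave function at 16 bytes per amplitude would need ~64 GiB.
dim = hubbard_hilbert_dim(16)
bytes_needed = 16 * dim
```

Exact diagonalization is therefore limited to very small clusters, which is why stochastic methods such as AFQMC are needed for benchmark-quality results on larger lattices.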

  7. Grid-converged solution and analysis of the unsteady viscous flow in a two-dimensional shock tube

    NASA Astrophysics Data System (ADS)

    Zhou, Guangzhao; Xu, Kun; Liu, Feng

    2018-01-01

The flow in a shock tube is extremely complex, with dynamic multi-scale structures of sharp fronts, flow separation, and vortices due to the interaction of the shock wave, the contact surface, and the boundary layer over the side wall of the tube. Prediction and understanding of the complex fluid dynamics are of theoretical and practical importance. It is also an extremely challenging problem for numerical simulation, especially at relatively high Reynolds numbers. Daru and Tenaud ["Evaluation of TVD high resolution schemes for unsteady viscous shocked flows," Comput. Fluids 30, 89-113 (2001)] proposed a two-dimensional model problem as a numerical test case for high-resolution schemes to simulate the flow field in a square closed shock tube. Though many researchers have attempted this problem using a variety of computational methods, there is not yet an agreed-upon grid-converged solution at a Reynolds number of 1000. This paper presents a rigorous grid-convergence study and the resulting grid-converged solutions for this problem using a newly developed, efficient, and high-order gas-kinetic scheme. Critical data extracted from the converged solutions are documented as benchmark data. The complex fluid dynamics of the flow at Re = 1000 are discussed and analyzed in detail. Major phenomena revealed by the numerical computations include the downward concentration of the fluid through the curved shock, the formation of the vortices, the mechanism of the shock wave bifurcation, the structure of the jet along the bottom wall, and the Kelvin-Helmholtz instability near the contact surface. Presentation and analysis of these flow processes provide important physical insight into the complex flow physics occurring in a shock tube.
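A grid-convergence study of the kind reported here typically compares solutions on successively refined meshes to estimate the observed order of accuracy, then Richardson-extrapolates a "grid-converged" value. A minimal sketch of those standard formulas (the function names and sample values are illustrative, not taken from the paper):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy p from solutions on three grids with a
    constant refinement ratio r (standard grid-convergence formula)."""
    return math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, r, p):
    """Estimate of the grid-converged value from the two finest grids."""
    return f_fine + (f_fine - f_medium) / (r ** p - 1.0)

# Manufactured example: error = 0.01 * h**2 on grids h = 1, 0.5, 0.25,
# so p should come out as 2 and extrapolation should recover exactly 1.0.
p = observed_order(1.01, 1.0025, 1.000625, 2.0)
f_exact = richardson_extrapolate(1.0025, 1.000625, 2.0, p)
```

In practice the procedure is applied to critical point values of the solution (wall heat flux, vortex positions, and so on), which is what allows such data to be documented as benchmarks.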

  8. Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4.

    PubMed

    Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A

    2004-02-07

The expanding clinical use of low-energy photon-emitting 125I and 103Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers have pointed out that higher accuracy can be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but had not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either the DLC-146 or DLC-200 cross-section library, assuming a point source located at the centre of a cylinder 30 cm in diameter and 20 cm in length. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst +/- 5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) from PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately +/- 2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV.
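The radial-dose calculation benchmarked here can be caricatured by a very small Monte Carlo sketch: photons from a point source take exponentially distributed free paths, scatter isotropically, deposit part of their energy at each collision, and the deposits are tallied in spherical shells. All parameter values below are invented for illustration; a real calculation, as in PENELOPE or MCNP, uses detailed energy-dependent cross-section libraries:

```python
import math
import random

def radial_dose(n_photons=20000, mu=0.2, albedo=0.6,
                r_max=10.0, n_bins=20, seed=1):
    """Toy Monte Carlo for a point isotropic photon source in a uniform
    medium. mu is an assumed attenuation coefficient (1/cm); at each
    collision a fraction (1 - albedo) of the remaining energy is deposited."""
    rng = random.Random(seed)
    dr = r_max / n_bins
    tally = [0.0] * n_bins
    for _ in range(n_photons):
        x = y = z = 0.0
        w = 1.0                                     # photon energy weight
        while w > 1e-4:
            cos_t = 2.0 * rng.random() - 1.0        # isotropic direction
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            phi = 2.0 * math.pi * rng.random()
            s = -math.log(1.0 - rng.random()) / mu  # exponential free path
            x += s * sin_t * math.cos(phi)
            y += s * sin_t * math.sin(phi)
            z += s * cos_t
            r = math.sqrt(x * x + y * y + z * z)
            if r >= r_max:
                break                               # photon leaves tally region
            deposit = w * (1.0 - albedo)
            tally[int(r / dr)] += deposit
            w -= deposit
    # Normalize by shell volume and source strength -> dose-like quantity
    def shell_volume(i):
        return 4.0 / 3.0 * math.pi * (((i + 1) * dr) ** 3 - (i * dr) ** 3)
    return [t / (shell_volume(i) * n_photons) for i, t in enumerate(tally)]
```

The resulting profile falls off steeply with radius, dominated by geometric 1/r² spreading near the source and by attenuation farther out, which is why the dose at 1 cm is such a sensitive point of comparison between codes.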

  9. Performance of MODIS satellite and mesoscale model based land surface temperature for soil moisture deficit estimation using Neural Network

    NASA Astrophysics Data System (ADS)

    Srivastava, Prashant K.; Petropoulos, George P.; Gupta, Manika; Islam, Tanvir

    2015-04-01

Soil Moisture Deficit (SMD) is a key variable in the water and energy exchanges that occur at the land-surface/atmosphere interface. Monitoring SMD supports irrigation scheduling by indicating the appropriate quantity of water to apply at the proper time. Past studies have found that Land Surface Temperature (LST) has a strong relation to SMD, and LST can be estimated from MODIS or from a numerical weather prediction model such as WRF (Weather Research and Forecasting model). Given the importance of SMD, this work focuses on the application of an Artificial Neural Network (ANN), evaluating its capabilities for SMD estimation using LST data derived from MODIS and from the WRF mesoscale model. The benchmark SMD, estimated from the Probability Distributed Model (PDM) over the Brue catchment, Southwest England, U.K., is used for all the calibration and validation experiments. The performances between observed and simulated SMD are assessed in terms of the Nash-Sutcliffe Efficiency (NSE), the Root Mean Square Error (RMSE) and the percentage bias (%Bias). The application of the ANN confirmed a high capability of WRF and MODIS LST for prediction of SMD. Performance during the ANN calibration and validation showed good agreement between benchmark and estimated SMD using MODIS LST information, with significantly higher performance than WRF-simulated LST. The work presented is the first comprehensive application of LST from MODIS and the WRF mesoscale model for hydrological SMD estimation, particularly for a maritime climate. More studies in this direction are recommended to the hydro-meteorological community, so that useful information can be accumulated in the technical literature for different geographical locations and climatic conditions. Keywords: WRF, Land Surface Temperature, MODIS satellite, Soil Moisture Deficit, Neural Network
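The three goodness-of-fit statistics named in the abstract (NSE, RMSE, %Bias) have compact standard definitions. A minimal sketch; note that the sign convention for %Bias varies across the literature, and in this version positive means overestimation:

```python
def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect match, 0 means the
    simulation is no better than the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def rmse(obs, sim):
    """Root Mean Square Error between observed and simulated series."""
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)) ** 0.5

def pbias(obs, sim):
    """Percentage bias; positive values indicate overestimation here."""
    return 100.0 * sum(s - o for o, s in zip(obs, sim)) / sum(obs)
```

For example, a simulation offset by +1 everywhere relative to observations [1, 2, 3] has an RMSE of exactly 1 and a %Bias of 50.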

  10. Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Leng, W.; Zhong, S.

    2008-12-01

In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. Adaptive mesh refinement (AMR) techniques allow local mesh refinement wherever high resolution is needed, while leaving other regions at relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement AMR techniques in 2-D mantle convection models. For purely thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (e.g. 7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (e.g. 14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method in the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results of van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (e.g. 7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace their evolution. Our AMR code is thus well suited to thermal-chemical convection problems that need high resolution to resolve the evolution of chemical boundaries, such as entrainment problems [Sleep, 1988].
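The core idea of AMR can be sketched in 2-D as a recursive quadtree: a cell is split into four children wherever an error indicator exceeds a tolerance, so elements concentrate near sharp features such as thermal or chemical boundaries. The indicator below is a made-up stand-in for the gradient-based criteria a real mantle-convection code would use:

```python
def refine(x, y, size, depth, indicator, tol, max_depth, leaves):
    """Recursively split a square cell into four children wherever the
    error indicator exceeds tol (a 2-D analogue of octree-based AMR)."""
    if depth < max_depth and indicator(x, y, size) > tol:
        h = size / 2.0
        for dx in (0.0, h):
            for dy in (0.0, h):
                refine(x + dx, y + dy, h, depth + 1,
                       indicator, tol, max_depth, leaves)
    else:
        leaves.append((x, y, size))  # (corner, corner, edge length)

# Hypothetical indicator: flag cells whose center lies near a thin
# "chemical boundary" at y = 0.5, within one cell width of it.
def indicator(x, y, size):
    return 1.0 if abs(y + size / 2.0 - 0.5) < size else 0.0

leaves = []
refine(0.0, 0.0, 1.0, 0, indicator, 0.5, 6, leaves)
```

At a maximum depth of 6, a uniform grid would need 4**6 = 4096 cells; the refined mesh covers the same unit square with far fewer cells, the smallest of them clustered along the feature.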

  11. Tsunami-HySEA model validation for tsunami current predictions

    NASA Astrophysics Data System (ADS)

    Macías, Jorge; Castro, Manuel J.; González-Vida, José Manuel; Ortega, Sergio

    2016-04-01

Model ability to compute and predict tsunami flow velocities is of importance in risk assessment and hazard mitigation. Substantial damage can be produced by high-velocity flows, particularly in harbors and bays, even when the wave height is small. Moreover, an accurate simulation of tsunami flow velocities and accelerations is fundamental for advancing the study of tsunami sediment transport. These considerations led the National Tsunami Hazard Mitigation Program (NTHMP) to propose a benchmark exercise focused on modeling and simulating tsunami currents. Until recently, few direct measurements of tsunami velocities were available against which to compare and validate model results. After the Tohoku 2011 event, many current-meter measurements were made, mainly in harbors and channels. In this work we present part of the contribution made by the EDANYA group of the University of Malaga to the NTHMP workshop organized at Portland (USA), 9-10 February 2015. We have selected three of the five proposed benchmark problems. Two of them consist of real observed data from the Tohoku 2011 event, one at Hilo Harbor (Hawaii) and the other at Tauranga Bay (New Zealand). The third consists of laboratory experimental data for the inundation of Seaside, Oregon. Acknowledgements: This research has been partially supported by the Junta de Andalucía research project TESELA (P11-RNM7069), the Spanish Government research project DAIFLUID (MTM2012-38383-C02-01) and Universidad de Málaga, Campus de Excelencia Andalucía TECH. The GPU and multi-GPU computations were performed at the Unit of Numerical Methods (UNM) of the Research Support Central Services (SCAI) of the University of Malaga.

  12. The future of simulation technologies for complex cardiovascular procedures.

    PubMed

    Cates, Christopher U; Gallagher, Anthony G

    2012-09-01

Changing work practices and the evolution of more complex interventions in cardiovascular medicine are forcing a paradigm shift in the way doctors are trained. Implantable cardioverter defibrillator (ICD), transcatheter aortic valve implantation (TAVI), carotid artery stenting (CAS), and acute stroke intervention procedures are forcing these changes at a faster pace than in other disciplines. As a consequence, cardiovascular medicine has had to develop a sophisticated understanding of precisely what is meant by 'training' and 'skill'. An evolving conclusion is that procedure training on a virtual reality (VR) simulator presents a viable current solution. These simulations should capture the important performance characteristics of procedural skill, with metrics derived from and benchmarked against experienced operators (i.e. a level of proficiency). Simulation training is optimal with metric-based feedback, particularly formative trainee error assessments, proximate to the trainee's performance. In prospective, randomized studies, learners who trained to a benchmarked proficiency level on the simulator performed significantly better than learners who were traditionally trained. In addition, cardiovascular medicine now has available the most sophisticated virtual reality simulators in medicine, and these have been used for the roll-out of interventions such as CAS in the USA and globally, with cardiovascular society and industry-partnered training programmes. The Food and Drug Administration has advocated the use of VR simulation as part of the approval of new devices, and the American Board of Internal Medicine has adopted simulation as part of its maintenance of certification. Simulation is rapidly becoming a mainstay of cardiovascular education, training, certification, and the safe adoption of new technology.
If cardiovascular medicine is to continue to lead in the adoption and integration of simulation, it must take a proactive position in the development of metric-based simulation curricula, adopt proficiency-benchmarking definitions, and commit the resources needed to continue to lead this revolution in physician training.

  13. Towards Systematic Benchmarking of Climate Model Performance

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2014-12-01

The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (the CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research.
Making the results from routine performance tests readily accessible will help advance a more transparent model evaluation process.

  14. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    DOE PAGES

    Bess, John D.; Montierth, Leland; Köberl, Oliver; ...

    2014-10-09

Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ), except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
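Statements such as "within 1% and within the 3σ uncertainty" reduce to expressing the difference between a calculated and a benchmark eigenvalue in units of the benchmark's standard deviation. A trivial sketch with invented numbers (not the handbook values):

```python
def sigma_deviation(k_calc, k_bench, sigma_bench):
    """Difference between a calculated and a benchmark k-eff, expressed
    in units of the benchmark's 1-sigma experimental uncertainty."""
    return (k_calc - k_bench) / sigma_bench

# Hypothetical case: benchmark 1.0000 +/- 0.0030, calculation 1.0065.
d = sigma_deviation(1.0065, 1.0000, 0.0030)
within_3_sigma = abs(d) <= 3.0
```

Here the calculation sits about 2.2σ above the benchmark, so it would count as agreeing within the 3σ band despite a 0.65% absolute difference.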

  15. Benchmark solutions for the galactic ion transport equations: Energy and spatially dependent problems

    NASA Technical Reports Server (NTRS)

    Ganapol, Barry D.; Townsend, Lawrence W.; Wilson, John W.

    1989-01-01

    Nontrivial benchmark solutions are developed for the galactic ion transport (GIT) equations in the straight-ahead approximation. These equations are used to predict potential radiation hazards in the upper atmosphere and in space. Two levels of difficulty are considered: (1) energy independent, and (2) spatially independent. The analysis emphasizes analytical methods never before applied to the GIT equations. Most of the representations derived have been numerically implemented and compared to more approximate calculations. Accurate ion fluxes are obtained (3 to 5 digits) for nontrivial sources. For monoenergetic beams, both accurate doses and fluxes are found. The benchmarks presented are useful in assessing the accuracy of transport algorithms designed to accommodate more complex radiation protection problems. In addition, these solutions can provide fast and accurate assessments of relatively simple shield configurations.

  16. Performance benchmark of LHCb code on state-of-the-art x86 architectures

    NASA Astrophysics Data System (ADS)

    Campora Perez, D. H.; Neufeld, N.; Schwemmer, R.

    2015-12-01

    For Run 2 of the LHC, LHCb is replacing a significant part of its event filter farm with new compute nodes. For the evaluation of the best performing solution, we have developed a method to convert our high level trigger application into a stand-alone, bootable benchmark image. With additional instrumentation we turned it into a self-optimising benchmark which explores techniques such as late forking, NUMA balancing and optimal number of threads, i.e. it automatically optimises box-level performance. We have run this procedure on a wide range of Haswell-E CPUs and numerous other architectures from both Intel and AMD, including also the latest Intel micro-blade servers. We present results in terms of performance, power consumption, overheads and relative cost.
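The self-optimising benchmark described here scans configurations (worker counts, NUMA placement, and so on) and keeps the best-performing one. Below is a toy sketch of such a scan harness; the kernel, task counts, and use of threads are placeholders, and because of Python's GIL a pure-Python kernel will not actually scale across threads, whereas the real bootable image runs the LHCb trigger application in multiple processes:

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

def kernel(n=50_000):
    """Placeholder workload standing in for processing one event."""
    return sum(i * i for i in range(n))

def events_per_second(n_workers, n_tasks=32):
    """Throughput of n_tasks placeholder events with n_workers workers."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        list(pool.map(lambda _: kernel(), range(n_tasks)))
    return n_tasks / (time.perf_counter() - start)

# Scan candidate worker counts and keep the best one, analogous to how
# the benchmark image optimises box-level performance automatically.
candidates = [1, 2, 4, min(8, os.cpu_count() or 1)]
best = max(candidates, key=events_per_second)
```

A production harness would additionally vary NUMA binding and fork timing, and would report power draw alongside throughput, as the abstract describes.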

  17. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  18. Finite Element Modeling of the World Federation's Second MFL Benchmark Problem

    NASA Astrophysics Data System (ADS)

    Zeng, Zhiwei; Tian, Yong; Udpa, Satish; Udpa, Lalita

    2004-02-01

    This paper presents results obtained by simulating the second magnetic flux leakage benchmark problem proposed by the World Federation of NDE Centers. The geometry consists of notches machined on the internal and external surfaces of a rotating steel pipe that is placed between two yokes that are part of a magnetic circuit energized by an electromagnet. The model calculates the radial component of the leaked field at specific positions. The nonlinear material property of the ferromagnetic pipe is taken into account in simulating the problem. The velocity effect caused by the rotation of the pipe is, however, ignored for reasons of simplicity.

  19. Simulations of Turbulent Flows with Strong Shocks and Density Variations: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanjiva Lele

    2012-10-01

The target of this SciDAC Science Application was to develop a new capability based on high-order and high-resolution schemes to simulate shock-turbulence interactions and multi-material mixing in planar and spherical geometries, and to study Rayleigh-Taylor and Richtmyer-Meshkov turbulent mixing. These fundamental problems have direct application in high-speed engineering flows, such as inertial confinement fusion (ICF) capsule implosions and scramjet combustion, and also in the natural occurrence of supernovae explosions. Another component of this project was the development of subgrid-scale (SGS) models for large-eddy simulations of flows involving shock-turbulence interaction and multi-material mixing, which were to be validated with the DNS databases generated during the program. The numerical codes developed are designed for massively-parallel computer architectures, ensuring good scaling performance. Their algorithms were validated by means of a sequence of benchmark problems. The original multi-stage plan for this five-year project included the following milestones: 1) refinement of numerical algorithms for application to the shock-turbulence interaction problem and multi-material mixing (years 1-2); 2) direct numerical simulations (DNS) of canonical shock-turbulence interaction (years 2-3), targeted at improving our understanding of the physics behind the combined two phenomena and also at guiding the development of SGS models; 3) large-eddy simulations (LES) of shock-turbulence interaction (years 3-5), improving SGS models based on the DNS obtained in the previous phase; 4) DNS of planar/spherical RM multi-material mixing (years 3-5), also with the two-fold objective of gaining insight into the relevant physics of this instability and aiding in devising new modeling strategies for multi-material mixing; 5) LES of planar/spherical RM mixing (years 4-5), integrating the improved SGS and multi-material models developed in stages 3 and 4.
This final report is outlined as follows. Section 2 presents an assessment of the numerical algorithms best suited for the simulation of compressible flows involving turbulence and shock phenomena. Sections 3 and 4 deal with the canonical shock-turbulence interaction problem from the DNS and LES perspectives, respectively. Section 5 considers the shock-turbulence interaction in spherical geometry, in particular the interaction of a converging shock with isotropic turbulence as well as the problem of the blast wave. Section 6 describes the study of shock-accelerated mixing through planar and spherical Richtmyer-Meshkov mixing as well as the shock-curtain interaction problem. In Section 7 we acknowledge the interactions between Stanford and the other institutions participating in this SciDAC project, as well as several external collaborations made possible through it. Section 8 presents a list of publications and presentations generated during the course of this SciDAC project. Finally, Section 9 concludes this report with the list of personnel at Stanford University funded by this SciDAC project.

  20. An investigation of the convective region of numerically simulated squall lines

    NASA Astrophysics Data System (ADS)

    Bryan, George Howard

High resolution numerical simulations are utilized to investigate the thermodynamic and kinematic structure of the convective region of squall lines. A new numerical modeling system was developed for this purpose. The model incorporates several new and/or recent advances in numerical modeling, including: a mass- and energy-conserving equation set, based on the compressible system of equations; third-order Runge-Kutta time integration, with high (third to sixth) order spatial discretization; and a new method for conserved-variable mixing in saturated environments, utilizing an exact definition for ice-liquid water potential temperature. A benchmark simulation for moist environments was designed to evaluate the new model. It was found that the mass- and energy-conserving equation set was necessary to produce acceptable results, and that traditional equation sets have a cool bias that leads to systematic underprediction of vertical velocity. The model was developed to run on massively-parallel distributed memory computing systems. This allows for simulations with very high resolution. In this study, squall lines were simulated with grid spacing of 125 m over a 300 km x 60 km x 18 km domain. Results show that the 125 m simulations contain sub-cloud-scale turbulent eddies that stretch and distort plumes of high equivalent potential temperature (θe) that rise from the pre-squall-line boundary layer. In contrast, with 1 km grid spacing the high θe plumes rise in a laminar manner, and require parameterized subgrid terms to diffuse the high θe air. The high resolution output is used to refine the conceptual model of the structure and lifecycle of moist absolutely unstable layers (MAULs). Moist absolute instability forms in the inflow region of the squall line and is subsequently removed by turbulent processes of varying scales.
Three general MAUL regimes (MRs) are identified: a laminar MR, characterized by deep (~2 km) MAULs that extend continuously in both the cross-line and along-line directions; a convective MR, containing deep (~10 km) cellular pulses and plumes; and a turbulent MR, characterized by numerous moist turbulent eddies that are a few km (or smaller) in scale. The character of the laminar MR is of particular interest. Parcels in this region experience moist absolute instability for 11-17 minutes before beginning to overturn. Conventional theory suggests that overturning would ensue immediately in these conditions. Two explanations are offered to elucidate why this layer persists without overturning. First, it is found that buoyancy forcing (defined as the sum of buoyancy and the vertical pressure gradient due to the buoyancy field) is reduced in the laminar MR as compared to that of an isolated parcel. The geometry of the laminar MR is directly responsible for this reduction in buoyancy forcing; specifically, the MAUL extends continuously in the along-line direction and for 10 km in the cross-line direction, which inhibits the development of vertical motions due to mass continuity considerations. (Abstract shortened by UMI.)
