Sample records for simulating large numbers

  1. Wall Modeled Large Eddy Simulation of Airfoil Trailing Edge Noise

    NASA Astrophysics Data System (ADS)

    Kocheemoolayil, Joseph; Lele, Sanjiva

    2014-11-01

    Large eddy simulation (LES) of airfoil trailing edge noise has largely been restricted to low Reynolds numbers due to prohibitive computational cost. Wall modeled LES (WMLES) is a computationally cheaper alternative that makes full-scale Reynolds numbers relevant to large wind turbines accessible. A systematic investigation of trailing edge noise prediction using WMLES is conducted. Detailed comparisons are made with experimental data. The stress boundary condition from a wall model does not constrain the fluctuating velocity to vanish at the wall. This limitation has profound implications for trailing edge noise prediction. The simulation over-predicts the intensity of fluctuating wall pressure and far-field noise. An improved wall model formulation that minimizes the over-prediction of fluctuating wall pressure is proposed and carefully validated. The flow configurations chosen for the study are from the workshop on benchmark problems for airframe noise computations. The large eddy simulation database is used to examine the adequacy of scaling laws that quantify the dependence of trailing edge noise on Mach number, Reynolds number and angle of attack. Simplifying assumptions invoked in engineering approaches towards predicting trailing edge noise are critically evaluated. We gratefully acknowledge financial support from GE Global Research and thank Cascade Technologies Inc. for providing access to their massively-parallel large eddy simulation framework.

  2. Optimal Run Strategies in Monte Carlo Iterated Fission Source Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Lund, Amanda L.; Siegel, Andrew R.

    2017-06-19

    The method of successive generations used in Monte Carlo simulations of nuclear reactor models is known to suffer from intergenerational correlation between the spatial locations of fission sites. One consequence of the spatial correlation is that the convergence rate of the variance of the mean for a tally becomes worse than O(N⁻¹). In this work, we consider how the true variance can be minimized given a total amount of work available as a function of the number of source particles per generation, the number of active/discarded generations, and the number of independent simulations. We demonstrate through both analysis and simulation that under certain conditions the solution time for highly correlated reactor problems may be significantly reduced either by running an ensemble of multiple independent simulations or simply by increasing the generation size to the extent that it is practical. However, if too many simulations or too large a generation size is used, the large fraction of source particles discarded can result in an increase in variance. We also show that there is a strong incentive to reduce the number of generations discarded through some source convergence acceleration technique. Furthermore, we discuss the efficient execution of large simulations on a parallel computer; we argue that several practical considerations favor using an ensemble of independent simulations over a single simulation with very large generation size.
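
    To make the correlation penalty concrete, the toy sketch below (an illustration for this write-up, not code from the paper) models per-generation tally estimates as an AR(1) sequence with correlation rho and compares the empirical variance of the mean against the naive sigma²/N estimate that assumes independent generations:

      import numpy as np

      rng = np.random.default_rng(1)
      rho, n_gen, n_rep = 0.9, 1000, 400   # correlation, generations, replicas

      def run_mean():
          # AR(1) sequence standing in for per-generation tally estimates
          x = np.empty(n_gen)
          x[0] = rng.standard_normal()
          for i in range(1, n_gen):
              x[i] = rho * x[i - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()
          return x.mean()

      means = np.array([run_mean() for _ in range(n_rep)])
      print(f"empirical var of mean: {means.var():.2e}")
      print(f"naive sigma^2/N      : {1.0 / n_gen:.2e}")
      print(f"expected inflation (1+rho)/(1-rho): {(1 + rho) / (1 - rho):.1f}")

    For rho = 0.9 the variance of the mean is inflated by roughly (1 + rho)/(1 − rho) ≈ 19 over the uncorrelated estimate, which is the kind of penalty the run-strategy optimization in the paper targets.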

  3. Efficient Control Law Simulation for Multiple Mobile Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Driessen, B.J.; Feddema, J.T.; Kotulski, J.D.

    1998-10-06

    In this paper we consider the problem of simulating simple control laws involving large numbers of mobile robots. Such simulation can be computationally prohibitive if the number of robots is large enough, say 1 million, due to the O(N²) cost of each time step. This work therefore uses hierarchical tree-based methods for calculating the control law. These tree-based approaches have O(N log N) cost per time step, thus allowing for efficient simulation involving a large number of robots. For concreteness, a decentralized control law which involves only the distance and bearing to the closest neighbor robot will be considered. The time to calculate the control law for each robot at each time step is demonstrated to be O(log N).
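
    A minimal sketch of the tree-based idea, here using SciPy's k-d tree rather than the paper's own hierarchical code; the repulsive control law, gain, and setup are illustrative assumptions:

      import numpy as np
      from scipy.spatial import cKDTree

      def control_step(pos, gain=0.1):
          # Each robot reacts to the distance and bearing of its nearest
          # neighbour; the k-d tree makes the whole step O(N log N).
          tree = cKDTree(pos)                  # O(N log N) build
          dist, idx = tree.query(pos, k=2)     # k=2: first hit is the robot itself
          nearest = pos[idx[:, 1]]
          bearing = np.arctan2(nearest[:, 1] - pos[:, 1],
                               nearest[:, 0] - pos[:, 0])
          # Illustrative law: back away from the closest neighbour
          return pos - gain * np.stack([np.cos(bearing), np.sin(bearing)], axis=1)

      rng = np.random.default_rng(0)
      robots = rng.uniform(0.0, 100.0, size=(1_000_000, 2))
      robots = control_step(robots)            # one O(N log N) time step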

  4. Wall-Resolved Large-Eddy Simulation of Flow Separation Over NASA Wall-Mounted Hump

    NASA Technical Reports Server (NTRS)

    Uzun, Ali; Malik, Mujeeb R.

    2017-01-01

    This paper reports the findings from a study that applies wall-resolved large-eddy simulation to investigate flow separation over the NASA wall-mounted hump geometry. Despite its conceptually simple flow configuration, this benchmark problem has proven to be a challenging test case for various turbulence simulation methods that have attempted to predict flow separation arising from the adverse pressure gradient on the aft region of the hump. The momentum-thickness Reynolds number of the incoming boundary layer has a value that is near the upper limit achieved by recent direct numerical simulation and large-eddy simulation of incompressible turbulent boundary layers. The high Reynolds number of the problem necessitates a significant number of grid points for wall-resolved calculations. The present simulations show a significant improvement in the separation-bubble length prediction compared to Reynolds-Averaged Navier-Stokes calculations. The current simulations also provide good overall prediction of the skin-friction distribution, including the relaminarization observed over the front portion of the hump due to the strong favorable pressure gradient. We discuss a number of problems that were encountered during the course of this work and present possible solutions. A systematic study regarding the effect of domain span, subgrid-scale model, tunnel back pressure, upstream boundary layer conditions and grid refinement is performed. The predicted separation-bubble length is found to be sensitive to the span of the domain. Despite the large number of grid points used in the simulations, some differences between the predictions and experimental observations still exist (particularly for Reynolds stresses) in the case of the wide-span simulation, suggesting that additional grid resolution may be required.

  5. Building occupancy simulation and data assimilation using a graph-based agent-oriented model

    NASA Astrophysics Data System (ADS)

    Rai, Sanish; Hu, Xiaolin

    2018-07-01

    Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. Agent-based models suffer from high computation costs when simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo Methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimations of building occupancy from sensor data.
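
    The data-assimilation step can be illustrated with a bootstrap particle filter, one common Sequential Monte Carlo variant; the toy occupancy dynamics, sensor model, and noise levels below are assumptions for illustration, not the paper's model:

      import numpy as np

      rng = np.random.default_rng(0)
      n_filter, n_occupants, n_zones = 1000, 20, 10

      def advance(state):
          # Toy dynamics: each occupant stays or hops to an adjacent zone
          return np.clip(state + rng.integers(-1, 2, size=state.shape), 0, n_zones - 1)

      truth = rng.integers(0, n_zones, size=n_occupants)
      particles = rng.integers(0, n_zones, size=(n_filter, n_occupants))

      for step in range(50):
          truth = advance(truth)
          particles = advance(particles)
          obs = (truth == 0).sum() + rng.normal(0.0, 1.0)   # noisy zone-0 sensor
          counts = (particles == 0).sum(axis=1)
          w = np.exp(-0.5 * (counts - obs) ** 2)            # Gaussian likelihood
          w /= w.sum()
          particles = particles[rng.choice(n_filter, n_filter, p=w)]  # resample

      print("estimated zone-0 occupancy:", (particles == 0).sum(axis=1).mean())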

  6. A Large number of fast cosmological simulations

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Kazin, E.; Blake, C.

    2014-01-01

    Mock galaxy catalogs are essential tools to analyze large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain the new improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement from the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes with 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.

  7. Conformational analysis of oligosaccharides and polysaccharides using molecular dynamics simulations.

    PubMed

    Frank, Martin

    2015-01-01

    Complex carbohydrates usually have a large number of rotatable bonds and consequently a large number of theoretically possible conformations can be generated (combinatorial explosion). The application of systematic search methods for conformational analysis of carbohydrates is therefore limited to disaccharides and trisaccharides in a routine analysis. An alternative approach is to use Monte-Carlo methods or (high-temperature) molecular dynamics (MD) simulations to explore the conformational space of complex carbohydrates. This chapter describes how to use MD simulation data to perform a conformational analysis (conformational maps, hydrogen bonds) of oligosaccharides and how to build realistic 3D structures of large polysaccharides using Conformational Analysis Tools (CAT).
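
    As a rough illustration of building a conformational map from MD data (independent of the CAT software itself), the sketch below bins a pair of torsion-angle time series into a two-dimensional free-energy surface; the synthetic phi/psi samples are placeholders for real trajectory output:

      import numpy as np

      # Toy phi/psi time series (degrees) standing in for glycosidic torsions
      rng = np.random.default_rng(0)
      phi = np.concatenate([rng.normal(-60, 15, 8000), rng.normal(60, 20, 2000)])
      psi = np.concatenate([rng.normal(120, 20, 8000), rng.normal(-100, 25, 2000)])

      # Conformational (free-energy) map: G = -kT ln P(phi, psi)
      H, xedges, yedges = np.histogram2d(phi, psi, bins=72,
                                         range=[[-180, 180], [-180, 180]])
      P = H / H.sum()
      kT = 2.479  # kJ/mol at 298 K
      with np.errstate(divide="ignore"):
          G = -kT * np.log(P)
      G -= G[np.isfinite(G)].min()   # zero the global minimum
      print("lowest-energy bin:", np.unravel_index(np.nanargmin(G), G.shape))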

  8. The Influence of the Number of Different Stocks on the Levy-Levy-Solomon Model

    NASA Astrophysics Data System (ADS)

    Kohl, R.

    The stock market model of Levy, Levy and Solomon is simulated for more than one stock to analyze the behavior for a large number of investors. Small markets can lead to realistic-looking prices for one and more stocks. With a large number of investors, the simulation of a single stock behaves in a semi-regular fashion. For many stocks, three of the stocks are semi-regular and dominant, while the rest are chaotic. Aside from that, we changed the utility function and checked the results.

  9. Large scale simulation of liquid water transport in a gas diffusion layer of polymer electrolyte membrane fuel cells using the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Sakaida, Satoshi; Tabe, Yutaka; Chikahisa, Takemi

    2017-09-01

    A method for large-scale simulation with the lattice Boltzmann method (LBM) is proposed for liquid water movement in a gas diffusion layer (GDL) of polymer electrolyte membrane fuel cells. The LBM is able to analyze two-phase flows in complex structures; however, the simulation domain is limited by heavy computational loads. This study investigates a variety of means to reduce computational loads and increase the simulation area. One is applying an LBM that treats the two phases as having the same density, while keeping numerical stability with large time steps. The applicability of this approach is confirmed by comparing the results with rigorous simulations using the actual density. The second is establishing the maximum limit of the capillary number that maintains flow patterns similar to the precise simulation; this is attempted because the computational load is inversely proportional to the capillary number. The results show that the capillary number can be increased to 3.0 × 10⁻³, where the actual operation corresponds to Ca = 10⁻⁵∼10⁻⁸. The limit is also investigated experimentally using an enlarged-scale model satisfying similarity conditions for the flow. Finally, a demonstration is made of the effects of pore uniformity in the GDL as an example of a large-scale simulation covering a channel.
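
    For reference, the capillary number used here is Ca = μu/σ; a minimal helper (with illustrative water/air-like values, not the paper's lattice units) shows the check against the reported validity limit:

      def capillary_number(mu, u, sigma):
          # Ca = mu * u / sigma: viscous forces over surface tension
          return mu * u / sigma

      # Illustrative values only: Pa*s, m/s, N/m
      ca = capillary_number(mu=1.0e-3, u=1.0e-4, sigma=7.2e-2)
      print(f"Ca = {ca:.1e} (reported validity limit: ~3.0e-3)")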

  10. Sensitivity of a Riparian Large Woody Debris Recruitment Model to the Number of Contributing Banks and Tree Fall Pattern

    Treesearch

    Don C. Bragg; Jeffrey L. Kershner

    2004-01-01

    Riparian large woody debris (LWD) recruitment simulations have traditionally applied a random angle of tree fall from two well-forested stream banks. We used a riparian LWD recruitment model (CWD, version 1.4) to test the validity of these assumptions. Both the number of contributing forest banks and predominant tree fall direction significantly influenced simulated...

  11. Large-Eddy Simulation of Wind-Plant Aerodynamics: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.

    In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done wind plant large-eddy simulations with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We have used the OpenFOAM CFD toolbox to create our solver.

  12. Dynamical simulation of E-ELT segmented primary mirror

    NASA Astrophysics Data System (ADS)

    Sedghi, B.; Muller, M.; Bauvir, B.

    2011-09-01

    The dynamical behavior of the primary mirror (M1) has an important impact on the control of the segments and the performance of the telescope. Control of large segmented mirrors with a large number of actuators and sensors and multiple control loops in real life is a challenging problem. In virtual life, modeling, simulation and analysis of the M1 bears similar difficulties and challenges. In order to capture the dynamics of the segment subunits (high-frequency modes) and the telescope back structure (low-frequency modes), high-order dynamical models with a very large number of inputs and outputs need to be simulated. In this paper, different approaches for dynamical modeling and simulation of the M1 segmented mirror subject to various perturbations, e.g., sensor noise, wind load, vibrations, and earthquakes, are presented.

  13. Large-eddy simulation of the passage of a shock wave through homogeneous turbulence

    NASA Astrophysics Data System (ADS)

    Braun, N. O.; Pullin, D. I.; Meiron, D. I.

    2017-11-01

    The passage of a nominally plane shockwave through homogeneous, compressible turbulence is a canonical problem representative of flows seen in supernovae, supersonic combustion engines, and inertial confinement fusion. The interaction of isotropic turbulence with a stationary normal shockwave is considered at inertial-range Taylor Reynolds numbers, Re_λ = 100-2500, using Large Eddy Simulation (LES). The unresolved, subgrid terms are approximated by the stretched-vortex model (Kosovic et al., 2002), which allows self-consistent reconstruction of the subgrid contributions to the turbulent statistics of interest. The mesh is adaptively refined in the vicinity of the shock to resolve small-amplitude shock oscillations, and the implications of mesh refinement on the subgrid modeling are considered. Simulations are performed at a range of shock Mach numbers, M_s = 1.2-3.0, and turbulent Mach numbers, M_t = 0.06-0.18, to explore the parameter space of the interaction at high Reynolds number. The LES shows reasonable agreement with linear analysis and lower Reynolds number direct numerical simulations. LANL Subcontract 305963.

  14. On the influences of key modelling constants of large eddy simulations for large-scale compartment fires predictions

    NASA Astrophysics Data System (ADS)

    Yuen, Anthony C. Y.; Yeoh, Guan H.; Timchenko, Victoria; Cheung, Sherman C. P.; Chan, Qing N.; Chen, Timothy

    2017-09-01

    An in-house large eddy simulation (LES) based fire field model has been developed for large-scale compartment fire simulations. The model incorporates four major components: subgrid-scale turbulence, combustion, soot and radiation models, which are fully coupled. It is designed to simulate the temporal and fluid-dynamical effects of turbulent reacting flow for non-premixed diffusion flames. Parametric studies were performed based on a large-scale fire experiment carried out in a 39-m-long test hall facility. Several turbulent Prandtl and Schmidt numbers ranging from 0.2 to 0.5, and Smagorinsky constants ranging from 0.18 to 0.23, were investigated. It was found that the temperature and flow field predictions were most accurate with the turbulent Prandtl and Schmidt numbers both set to 0.3 and a Smagorinsky constant of 0.2. In addition, by utilising a set of numerically verified key modelling parameters, the smoke filling process was successfully captured by the present LES model.

  15. Hybrid Solution-Adaptive Unstructured Cartesian Method for Large-Eddy Simulation of Detonation in Multi-Phase Turbulent Reactive Mixtures

    DTIC Science & Technology

    2012-03-27

    CCL Report TR-2012-03-03: a hybrid solution-adaptive unstructured Cartesian method for large-eddy simulation of detonation in multi-phase turbulent reactive mixtures, with applications including pulse-detonation engines (PDE), stage separation, supersonic cavity oscillations, hypersonic aerodynamics, and detonation-induced structural response. (Grant number FA9550...)

  16. Implementation of Shifted Periodic Boundary Conditions in the Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) Software

    DTIC Science & Technology

    2015-08-01

    Technical report by N Scott Weingarten (Weapons and Materials Research Directorate, ARL) and James P Larentzos (Engility), describing the implementation of shifted periodic boundary conditions in the Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) software. Approved for public release.

  17. Massive data compression for parameter-dependent covariance matrices

    NASA Astrophysics Data System (ADS)

    Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise

    2017-12-01

    We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets which are required to estimate the covariance matrix required for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10⁴, due to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov chain Monte Carlo analysis, this may require an unfeasible 10⁹ simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10⁶ if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10³, making an otherwise intractable analysis feasible.
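
    A compact sketch of the MOPED construction as commonly described in the literature: one weighting vector per parameter, built from the data covariance and the derivatives of the mean, Gram-Schmidt orthogonalized so the compressed statistics are uncorrelated. The toy covariance and derivatives below are stand-ins, not survey data:

      import numpy as np

      def moped_vectors(dmu, C):
          # One weighting vector per parameter; Gram-Schmidt in the C metric
          # so the compressed numbers have unit variance and are uncorrelated.
          Cinv = np.linalg.inv(C)
          bs = []
          for mu_a in dmu:
              b = Cinv @ mu_a
              for prev in bs:
                  b = b - (mu_a @ prev) * prev
              b = b / np.sqrt(b @ C @ b)
              bs.append(b)
          return np.array(bs)

      rng = np.random.default_rng(0)
      n_data = 500                                 # e.g. band-power summaries
      C = np.diag(rng.uniform(0.5, 2.0, n_data))   # fiducial covariance
      dmu = rng.normal(size=(3, n_data))           # d<mean>/d(parameter), 3 params
      B = moped_vectors(dmu, C)
      x = rng.normal(size=n_data)                  # one mock data vector
      y = B @ x                                    # 500 numbers -> 3 numbers
      print(y)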

  18. Computer-generated forces in distributed interactive simulation

    NASA Astrophysics Data System (ADS)

    Petty, Mikel D.

    1995-04-01

    Distributed Interactive Simulation (DIS) is an architecture for building large-scale simulation models from a set of independent simulator nodes communicating via a common network protocol. DIS is most often used to create a simulated battlefield for military training. Computer Generated Forces (CGF) systems control large numbers of autonomous battlefield entities in a DIS simulation using computer equipment and software rather than humans in simulators. CGF entities serve as both enemy forces and supplemental friendly forces in a DIS exercise. Research into various aspects of CGF systems is ongoing. Several CGF systems have been implemented.

  19. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    NASA Astrophysics Data System (ADS)

    Ritsch, E.; Atlas Collaboration

    2014-06-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production currently consumes the largest share of the computing resources in use by ATLAS. In this document we describe the plans to overcome the computing resource limitations for large-scale Monte Carlo production in the ATLAS Experiment for Run 2 and beyond. A number of fast detector simulation, digitization and reconstruction techniques are discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  20. Numerical Schemes for Dynamically Orthogonal Equations of Stochastic Fluid and Ocean Flows

    DTIC Science & Technology

    2011-11-03

    ...stages of the simulation (see §5.1). Also, because the pdf is discrete, we calculate the moments using the biased estimator C_{Y_iY_j} ≈ (1/q) Σ_r Y_{r,i}Y_{r,j} ... independent random variables. For problems that require large p (e.g., non-Gaussian) and large s (e.g., large ocean or fluid simulations), the number of ... Sc = ν̂/K̂ is the Schmidt number, which is the ratio of kinematic viscosity ν̂ to molecular diffusivity K̂ for the density field; ĝ′ = ĝ(ρ̂_max − ρ̂_min ...

  1. Numerical study of dynamo action at low magnetic Prandtl numbers.

    PubMed

    Ponty, Y; Mininni, P D; Montgomery, D C; Pinton, J-F; Politano, H; Pouquet, A

    2005-04-29

    We present a three-pronged numerical approach to the dynamo problem at low magnetic Prandtl numbers P_M. The difficulty of resolving a large range of scales is circumvented by combining direct numerical simulations, a Lagrangian-averaged model and large-eddy simulations. The flow is generated by the Taylor-Green forcing; it combines a well defined structure at large scales and turbulent fluctuations at small scales. Our main findings are (i) dynamos are observed from P_M = 1 down to P_M = 10⁻², (ii) the critical magnetic Reynolds number increases sharply with P_M⁻¹ as turbulence sets in and then it saturates, and (iii) in the linear growth phase, unstable magnetic modes move to smaller scales as P_M is decreased. Then the dynamo grows at large scales and modifies the turbulent velocity fluctuations.

  2. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate arrays (FPGA) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGA presents both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
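
    For orientation, a software sketch of an additive lagged Fibonacci generator; the lag pair (418, 1279) is one standard choice, and seeding each stream's lag table independently is just one simple parallelization strategy, not necessarily the scheme evaluated in the paper:

      import numpy as np

      class ALFG:
          # Additive lagged Fibonacci generator: x[n] = x[n-j] + x[n-k] mod 2^32
          def __init__(self, seed, j=418, k=1279):
              self.j, self.k = j, k
              self.state = np.random.default_rng(seed).integers(
                  0, 2**32, size=k, dtype=np.uint64)
              self.pos = 0

          def next(self):
              j, k, p = self.j, self.k, self.pos
              val = (self.state[(p - j) % k] + self.state[(p - k) % k]) % (2**32)
              self.state[p % k] = val
              self.pos += 1
              return int(val)

      streams = [ALFG(seed) for seed in range(4)]   # four parallel streams
      print([s.next() for s in streams])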

  3. Mapping Ad Hoc Communications Network of a Large Number Fixed-Wing UAV Swarm

    DTIC Science & Technology

    2017-03-01

    ...partitioned sub-swarms. The work covered in this thesis is to build a model of the NPS swarm's communication network in ns-3 simulation software and use... (Naval Postgraduate School thesis, Monterey, California, by Alexis ...)

  4. Three-Dimensional Simulation of Liquid Drop Dynamics Within Unsaturated Vertical Hele-Shaw Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hai Huang; Paul Meakin

    A three-dimensional, multiphase fluid flow model with volume of fluid-interface tracking was developed and applied to study the multiphase dynamics of moving liquid drops of different sizes within vertical Hele-Shaw cells. The simulated moving velocities are significantly different from those obtained from a first-order analytical approximation, based on simple force-balance concepts. The simulation results also indicate that the moving drops can exhibit a variety of shapes and that the transition among these different shapes is largely determined by the moving velocities. More important, there is a transition from a linear moving regime at small capillary numbers, in which the capillary number scales linearly with the Bond number, to a nonlinear moving regime at large capillary numbers, in which the moving drop releases a train of droplets from its trailing edge. The train of droplets forms a variety of patterns at different moving velocities.

  5. Nonlinear relaxation algorithms for circuit simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saleh, R.A.

    Circuit simulation is an important Computer-Aided Design (CAD) tool in the design of Integrated Circuits (IC). However, the standard techniques used in programs such as SPICE result in very long computer run times when applied to large problems. In order to reduce the overall run time, a number of new approaches to circuit simulation were developed and are described. These methods are based on nonlinear relaxation techniques and exploit the relative inactivity of large circuits. Simple waveform-processing techniques are described to determine the maximum possible speed improvement that can be obtained by exploiting this property of large circuits. Three simulation algorithms are described, two of which are based on the Iterated Timing Analysis (ITA) method and a third based on the Waveform-Relaxation Newton (WRN) method. New programs that incorporate these techniques were developed and used to simulate a variety of industrial circuits. The results from these simulations are provided. The techniques are shown to be much faster than the standard approach. In addition, a number of parallel aspects of these algorithms are described, and a general space-time model of parallel-task scheduling is developed.
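
    The relaxation idea underlying ITA and WRN can be sketched on a toy two-node RC circuit: each node's waveform is integrated over the whole window while the other node's waveform is held at its previous iterate, and the sweeps repeat until the waveforms stop changing. This is a generic waveform-relaxation illustration under assumed circuit values, not the dissertation's algorithms:

      import numpy as np

      # Two RC nodes coupled through Rc, driven by a step input at node 1
      R, Rc, C, dt, T = 1.0, 5.0, 1.0, 0.01, 10.0
      n = int(T / dt)
      u = np.ones(n)                       # step input
      v1, v2 = np.zeros(n), np.zeros(n)    # initial waveform guesses

      for it in range(50):                 # waveform-relaxation sweeps
          v1_old = v1.copy()
          # Integrate node 1 over the whole window with node 2's waveform frozen
          for i in range(1, n):
              dv = ((u[i-1] - v1[i-1]) / R + (v2[i-1] - v1[i-1]) / Rc) / C
              v1[i] = v1[i-1] + dt * dv
          # Gauss-Seidel flavor: node 2 immediately uses the updated node 1
          for i in range(1, n):
              dv = (-v2[i-1] / R + (v1[i-1] - v2[i-1]) / Rc) / C
              v2[i] = v2[i-1] + dt * dv
          if np.max(np.abs(v1 - v1_old)) < 1e-9:
              break
      print(f"converged after {it+1} sweeps; v1(T)={v1[-1]:.3f}, v2(T)={v2[-1]:.3f}")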

  6. High-Performance Reactive Particle Tracking with Adaptive Representation

    NASA Astrophysics Data System (ADS)

    Schmidt, M.; Benson, D. A.; Pankavich, S.

    2017-12-01

    Lagrangian particle tracking algorithms have been shown to be effective tools for modeling chemical reactions in imperfectly-mixed media. One disadvantage of these algorithms is the possible need to employ large numbers of particles in simulations, depending on the concentration covariance structure, and these large particle numbers can lead to long computation times. Two distinct approaches have recently arisen to overcome this. One method employs spatial kernels that are related to a specified, reduced particle number; however, over-wide kernels, dictated by a very low particle number, lead to an excess of reaction calculations and cause a reduction in performance. Another formulation involves hybrid particles that carry multiple species of reactant, wherein each particle is treated as its own well-mixed volume, obviating the need for large numbers of particles for each species but still requiring a fixed number of hybrid particles. Here, we combine these two approaches and demonstrate an improved method for simulating a given system in a computationally efficient manner. Additionally, the independent nature of transport and reaction calculations in this approach allows for significant gains via parallelization in an MPI or OpenMP context. For benchmarking, we choose a CO2 injection simulation with dissolution and precipitation of calcite and dolomite, allowing us to derive the proper treatment of interaction between solid and aqueous phases.

  7. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 · 10⁴, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Ro_t = −0.0909 to Ro_t = 0.3 are simulated. First, the LES of TC is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase for increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, “over-damped” LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.

  8. A Method for Large Eddy Simulation of Acoustic Combustion Instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles; Moin, Parviz

    2002-11-01

    A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number, and the acoustic frequencies of interest are usually low. Both of these characteristics suggest the use of larger time steps than those allowed by an acoustic CFL condition. The turbulent combustion model used is the Combined Conserved Scalar/Level Set Flamelet model of Duchamp de Lageneste and Pitsch for partially premixed combustion. Comparison of LES results to the experiments of Besson et al. will be presented.
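
    A one-dimensional sketch of the key step: with implicit time advancement the pressure update satisfies a Helmholtz equation (d²/dx² − 1/(c²Δt²))p = rhs rather than a Poisson equation, and the time step can far exceed the acoustic CFL limit. The grid size, source term, and periodic boundaries below are illustrative assumptions, not the paper's setup:

      import numpy as np

      nx, L, c, dt = 200, 1.0, 340.0, 1.0e-3   # dt far above the acoustic CFL
      dx = L / nx
      lam = 1.0 / (c * dt) ** 2                # Helmholtz shift from implicit step
      x = np.linspace(0.0, L, nx, endpoint=False)
      rhs = np.sin(2.0 * np.pi * x)            # placeholder source term

      # (d^2/dx^2 - lam) p = rhs with periodic boundaries
      A = np.zeros((nx, nx))
      for i in range(nx):
          A[i, i] = -2.0 / dx**2 - lam
          A[i, (i - 1) % nx] = 1.0 / dx**2
          A[i, (i + 1) % nx] = 1.0 / dx**2
      p = np.linalg.solve(A, rhs)
      print("max |p| =", float(np.abs(p).max()))

    The shift −1/(c²Δt²) makes the operator nonsingular even with periodic boundaries, which is why the Helmholtz form is well posed where the pure Poisson problem needs an extra compatibility condition.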

  9. Particle Number Dependence of the N-body Simulations of Moon Formation

    NASA Astrophysics Data System (ADS)

    Sasaki, Takanori; Hosono, Natsuki

    2018-04-01

    The formation of the Moon from the circumterrestrial disk has been investigated by using N-body simulations with the number N of particles limited from 10⁴ to 10⁵. We develop an N-body simulation code on multiple Pezy-SC processors and deploy the Framework for Developing Particle Simulators to deal with large numbers of particles. We execute several high- and extra-high-resolution N-body simulations of lunar accretion from a circumterrestrial disk of debris generated by a giant impact on Earth. The number of particles is up to 10⁷, in which one particle corresponds to a 10-km-sized satellitesimal. We find that the spiral structures inside the Roche limit radius differ between low-resolution simulations (N ≤ 10⁵) and high-resolution simulations (N ≥ 10⁶). Owing to this difference, the angular momentum fluxes, which determine the accretion timescale of the Moon, also depend on the numerical resolution.

  10. Spiking neural network simulation: memory-optimal synaptic event scheduling.

    PubMed

    Stewart, Robert D; Gurney, Kevin N

    2011-06-01

    Spiking neural network simulations incorporating variable transmission delays require synaptic events to be scheduled prior to delivery. Conventional methods have memory requirements that scale with the total number of synapses in a network. We introduce novel scheduling algorithms for both discrete and continuous event delivery, where the memory requirement scales instead with the number of neurons. Superior algorithmic performance is demonstrated using large-scale, benchmarking network simulations.
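
    The memory trade-off can be sketched with per-neuron ring buffers: scheduled input occupies n_neurons × max_delay slots, independent of how many synapses exist. This is a generic discrete-delivery illustration with assumed sizes, not the authors' exact algorithms:

      import numpy as np

      rng = np.random.default_rng(0)
      n_neurons, max_delay, n_syn = 1000, 16, 50_000   # delays in time steps

      # Static synapse table: presynaptic, postsynaptic, weight, delay
      pre = rng.integers(0, n_neurons, n_syn)
      post = rng.integers(0, n_neurons, n_syn)
      weight = rng.normal(0.1, 0.02, n_syn)
      delay = rng.integers(1, max_delay, n_syn)

      # One ring buffer per neuron: memory scales with neurons, not synapses
      buffers = np.zeros((n_neurons, max_delay))

      def deliver_spikes(spiking, t):
          # Schedule each outgoing spike into the target neuron's future slot
          for s in np.flatnonzero(spiking):
              m = pre == s
              np.add.at(buffers, (post[m], (t + delay[m]) % max_delay), weight[m])

      def read_input(t):
          # Collect and clear the synaptic input due at time step t
          slot = buffers[:, t % max_delay].copy()
          buffers[:, t % max_delay] = 0.0
          return slot

      deliver_spikes(rng.random(n_neurons) < 0.02, t=0)
      print("total input arriving at t=3:", read_input(3).sum())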

  11. Sub-domain decomposition methods and computational controls for multibody dynamical systems. [of spacecraft structures

    NASA Technical Reports Server (NTRS)

    Menon, R. G.; Kurdila, A. J.

    1992-01-01

    This paper presents a concurrent methodology to simulate the dynamics of flexible multibody systems with a large number of degrees of freedom. A general class of open-loop structures is treated and a redundant coordinate formulation is adopted. A range-space method is used in which the constraint forces are calculated using a preconditioned conjugate gradient method. By using a preconditioner motivated by the regular ordering of the directed graph of the structures, it is shown that the method is of order N in the total number of coordinates of the system. The overall formulation has the advantage that it permits fine parallelization and does not rely on system topology to induce concurrency. It can be efficiently implemented on the present generation of parallel computers with a large number of processors. Validation of the method is presented via numerical simulations of space structures incorporating a large number of flexible degrees of freedom.
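
    A minimal preconditioned conjugate gradient routine of the kind referred to, here with a simple Jacobi (diagonal) preconditioner on a toy symmetric positive definite system; the paper's graph-motivated preconditioner is not reproduced:

      import numpy as np

      def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
          # Preconditioned conjugate gradient; A must be SPD
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv_diag * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv_diag * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      # Toy SPD system standing in for the constraint-force equations
      rng = np.random.default_rng(0)
      G = rng.normal(size=(50, 50))
      A = G @ G.T + 50 * np.eye(50)
      b = rng.normal(size=50)
      x = pcg(A, b, 1.0 / np.diag(A))
      print("residual norm:", np.linalg.norm(A @ x - b))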

  12. RACORO continental boundary layer cloud investigations. 2. Large-eddy simulations of cumulus clouds and evaluation with in-situ and ground-based observations

    DOE PAGES

    Endo, Satoshi; Fridlind, Ann M.; Lin, Wuyin; ...

    2015-06-19

    A 60-hour case study of continental boundary layer cumulus clouds is examined using two large-eddy simulation (LES) models. The case is based on observations obtained during the RACORO Campaign (Routine Atmospheric Radiation Measurement [ARM] Aerial Facility [AAF] Clouds with Low Optical Water Depths [CLOWD] Optical Radiative Observations) at the ARM Climate Research Facility's Southern Great Plains site. The LES models are driven by continuous large-scale and surface forcings, and are constrained by multi-modal and temporally varying aerosol number size distribution profiles derived from aircraft observations. We compare simulated cloud macrophysical and microphysical properties with ground-based remote sensing and aircraft observations. The LES simulations capture the observed transitions of the evolving cumulus-topped boundary layers during the three daytime periods, and generally reproduce variations of droplet number concentration with liquid water content (LWC), corresponding to the gradient between the cloud centers and cloud edges at given heights. The observed LWC values fall within the range of simulated values; the observed droplet number concentrations are commonly higher than simulated, but differences remain on par with potential estimation errors in the aircraft measurements. Sensitivity studies examine the influences of bin microphysics versus bulk microphysics, aerosol advection, supersaturation treatment, and aerosol hygroscopicity. Simulated macrophysical cloud properties are found to be insensitive in this non-precipitating case, but microphysical properties are especially sensitive to bulk microphysics supersaturation treatment and aerosol hygroscopicity.

  13. It's a Girl! Random Numbers, Simulations, and the Law of Large Numbers

    ERIC Educational Resources Information Center

    Goodwin, Chris; Ortiz, Enrique

    2015-01-01

    Modeling using mathematics and making inferences about mathematical situations are becoming more prevalent in most fields of study. Descriptive statistics cannot be used to generalize about a population or make predictions of what can occur. Instead, inference must be used. Simulation and sampling are essential in building a foundation for…

  14. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall noise or perceived noise level (PNL) of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 × 10⁶ based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  15. Study report on interfacing major physiological subsystem models: An approach for developing a whole-body algorithm

    NASA Technical Reports Server (NTRS)

    Fitzjerrell, D. G.; Grounds, D. J.; Leonard, J. I.

    1975-01-01

    Using a whole-body algorithm simulation model, a wide variety and large number of stresses, as well as different stress levels, were simulated, including environmental disturbances, metabolic changes, and special experimental situations. Simulation of short-term stresses resulted in simultaneous and integrated responses from the cardiovascular, respiratory, and thermoregulatory subsystems, and the accuracy of a large number of responding variables was verified. The capability of simulating significantly longer responses was demonstrated by validating a four-week bed rest study. In this case, the long-term subsystem model was found to reproduce many experimentally observed changes in circulatory dynamics, body fluid-electrolyte regulation, and renal function. The value of systems analysis and the selected design approach for developing a whole-body algorithm was demonstrated.

  16. A comparison of three approaches to compute the effective Reynolds number of the implicit large-eddy simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ye; Thornber, Ben

    2016-04-12

    Here, implicit large-eddy simulation (ILES) has been utilized as an effective approach for calculating many complex flows at high Reynolds numbers. Richtmyer–Meshkov instability (RMI) induced flow can be viewed as homogeneous decaying turbulence (HDT) after the passage of the shock. In this article, a critical evaluation of three methods for estimating the effective Reynolds number and the effective kinematic viscosity is undertaken utilizing high-resolution ILES data. Effective Reynolds numbers based on the vorticity and dissipation rate, or the integral and inner-viscous length scales, are found to be the most self-consistent when compared to the expected phenomenology and wind tunnel experiments.
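
    The vorticity/dissipation-based estimate can be written down compactly: in homogeneous turbulence ε = ν⟨ω_iω_i⟩, so an ILES field implies ν_eff = ε/⟨ω²⟩ and Re_eff = u′L/ν_eff. The sketch below states just that relation; the numerical values are placeholders, not data from the paper:

      def effective_reynolds(eps, enstrophy, u_rms, l_int):
          # eps = nu * <omega_i omega_i> in homogeneous turbulence, so the
          # resolved fields imply an effective viscosity and Reynolds number
          nu_eff = eps / enstrophy
          return u_rms * l_int / nu_eff, nu_eff

      # Illustrative values only
      re_eff, nu_eff = effective_reynolds(eps=1.2e-3, enstrophy=4.0e2,
                                          u_rms=0.1, l_int=0.5)
      print(f"nu_eff = {nu_eff:.2e}, Re_eff = {re_eff:.0f}")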

  17. Features of MCNP6 Relevant to Medical Radiation Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, H. Grady III; Goorley, John T.

    2012-08-29

    MCNP (Monte Carlo N-Particle) is a general-purpose Monte Carlo code for simulating the transport of neutrons, photons, electrons, positrons, and more recently other fundamental particles and heavy ions. Over many years MCNP has found a wide range of applications in many different fields, including medical radiation physics. In this presentation we will describe and illustrate a number of significant recently-developed features in the current version of the code, MCNP6, having particular utility for medical physics. Among these are major extensions of the ability to simulate large, complex geometries, improvement in memory requirements and speed for large lattices, introduction of mesh-based isotopic reaction tallies, advances in radiography simulation, expanded variance-reduction capabilities, especially for pulse-height tallies, and a large number of enhancements in photon/electron transport.

  18. Topology of Large-Scale Structure by Galaxy Type: Hydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    Gott, J. Richard, III; Cen, Renyue; Ostriker, Jeremiah P.

    1996-07-01

    The topology of large-scale structure is studied as a function of galaxy type using the genus statistic. In hydrodynamical cosmological cold dark matter simulations, galaxies form on caustic surfaces (Zeldovich pancakes) and then slowly drain onto filaments and clusters. The earliest forming galaxies in the simulations (defined as "ellipticals") are thus seen at the present epoch preferentially in clusters (tending toward a meatball topology), while the latest forming galaxies (defined as "spirals") are seen currently in a spongelike topology. The topology is measured by the genus (number of "doughnut" holes minus number of isolated regions) of the smoothed density-contour surfaces. The measured genus curve for all galaxies as a function of density obeys approximately the theoretical curve expected for random-phase initial conditions, but the early-forming elliptical galaxies show a shift toward a meatball topology relative to the late-forming spirals. Simulations using standard biasing schemes fail to show such an effect. Large observational samples separated by galaxy type could be used to test for this effect.

  19. Large-scale expensive black-box function optimization

    NASA Astrophysics Data System (ADS)

    Rashid, Kashif; Bailey, William; Couët, Benoît

    2012-09-01

    This paper presents the application of an adaptive radial basis function method to a computationally expensive black-box reservoir simulation model of many variables. An iterative proxy-based scheme is used to tune the control variables, distributed for finer control over a varying number of intervals covering the total simulation period, to maximize asset NPV. The method shows that large-scale simulation-based function optimization of several hundred variables is practical and effective.

  1. Numerical simulation of a plane turbulent mixing layer, with applications to isothermal, rapid reactions

    NASA Technical Reports Server (NTRS)

    Lin, P.; Pratt, D. T.

    1987-01-01

    A hybrid method has been developed for the numerical prediction of turbulent mixing in a spatially-developing, free shear layer. Most significantly, the computation incorporates the effects of large-scale structures, Schmidt number and Reynolds number on mixing, which have been overlooked in the past. In flow field prediction, large-eddy simulation was conducted by a modified 2-D vortex method with subgrid-scale modeling. The predicted mean velocities, shear layer growth rates, Reynolds stresses, and the RMS of longitudinal velocity fluctuations were found to be in good agreement with experiments, although the lateral velocity fluctuations were overpredicted. In scalar transport, the Monte Carlo method was extended to the simulation of the time-dependent pdf transport equation. For the first time, the mixing frequency in Curl's coalescence/dispersion model was estimated by using Broadwell and Breidenthal's theory of micromixing, which involves Schmidt number, Reynolds number and the local vorticity. Numerical tests were performed for a gaseous case and an aqueous case. Evidence that pure freestream fluids are entrained into the layer by large-scale motions was found in the predicted pdf. Mean concentration profiles were found to be insensitive to Schmidt number, while the unmixedness was higher for higher Schmidt number. Applications were made to mixing layers with isothermal, fast reactions. The predicted difference in product thickness of the two cases was in reasonable quantitative agreement with experimental measurements.
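
    Curl's coalescence/dispersion model itself is simple to state: at a rate set by the mixing frequency, randomly chosen particle pairs relax toward their pair mean. The sketch below (with an illustrative modified-Curl parameter beta; beta = 1 recovers complete pair mixing) shows the decay of unmixedness for two initially segregated streams. The frequency closure from Broadwell and Breidenthal's theory is not included; all parameter values are assumptions:

      import numpy as np

      def curl_mixing_step(phi, beta, omega_mix, dt, rng):
          # One step of Curl's coalescence/dispersion model: a number of
          # particle pairs set by the mixing frequency relax toward the mean
          n = len(phi)
          n_pairs = rng.poisson(omega_mix * dt * n / 2)
          for _ in range(n_pairs):
              i, j = rng.choice(n, size=2, replace=False)
              mean = 0.5 * (phi[i] + phi[j])
              phi[i] += beta * (mean - phi[i])
              phi[j] += beta * (mean - phi[j])
          return phi

      rng = np.random.default_rng(0)
      phi = np.concatenate([np.zeros(500), np.ones(500)])   # two unmixed streams
      for _ in range(100):
          phi = curl_mixing_step(phi, beta=1.0, omega_mix=0.5, dt=0.05, rng=rng)
      print("unmixedness (scalar variance):", phi.var())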

  2. Reaction factoring and bipartite update graphs accelerate the Gillespie Algorithm for large-scale biochemical systems.

    PubMed

    Indurkhya, Sagar; Beal, Jacob

    2010-01-06

    ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires storage that grows only with the numbers of reactions and species, rather than the potentially quadratic storage required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models.
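
    The dependency-graph idea can be illustrated on a toy two-reaction system: after each firing, propensities are recomputed only for reactions whose reactant species actually changed, found through a species-to-reactions (bipartite) map. This is a generic direct-method SSA sketch with assumed rates and species, not the LOLCAT implementation:

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.array([300, 10, 0])              # counts of species A, B, C
      rates = np.array([0.01, 0.5])           # A+B -> C, C -> B
      reactants = [(0, 1), (2,)]              # species each reaction reads
      stoich = np.array([[-1, -1, +1],
                         [ 0, +1, -1]])
      depends = {0: [0], 1: [0], 2: [1]}      # species -> dependent reactions

      def propensity(r):
          p = rates[r]
          for s in reactants[r]:
              p *= x[s]
          return p

      a = np.array([propensity(r) for r in range(len(rates))], dtype=float)
      t = 0.0
      while t < 10.0 and a.sum() > 0.0:
          a0 = a.sum()
          t += rng.exponential(1.0 / a0)
          r = rng.choice(len(rates), p=a / a0)   # which reaction fires
          x += stoich[r]
          # Recompute only the propensities touched by the changed species
          for s in np.flatnonzero(stoich[r]):
              for rr in depends[s]:
                  a[rr] = propensity(rr)
      print(f"t = {t:.3f}, state = {x}")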

  3. Reaction Factoring and Bipartite Update Graphs Accelerate the Gillespie Algorithm for Large-Scale Biochemical Systems

    PubMed Central

    Indurkhya, Sagar; Beal, Jacob

    2010-01-01

    ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires storage that grows only with the numbers of reactions and species, rather than the potentially quadratic storage required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models. PMID:20066048

  4. Does Training Learners on Simulators Benefit Real Patients?

    ERIC Educational Resources Information Center

    Teteris, Elise; Fraser, Kristin; Wright, Bruce; McLaughlin, Kevin

    2012-01-01

    Despite limited data on patient outcomes, simulation training has already been adopted and embraced by a large number of medical schools. Yet widespread acceptance of simulation should not relieve us of the duty to demonstrate if, and under which circumstances, training learners on simulation benefits real patients. Here we review the data on…

  5. Characteristics of Tornado-Like Vortices Simulated in a Large-Scale Ward-Type Simulator

    NASA Astrophysics Data System (ADS)

    Tang, Zhuo; Feng, Changda; Wu, Liang; Zuo, Delong; James, Darryl L.

    2018-02-01

    Tornado-like vortices are simulated in a large-scale Ward-type simulator to further advance the understanding of such flows, and to facilitate future studies of tornado wind loading on structures. Measurements of the velocity fields near the simulator floor and the resulting floor surface pressures are interpreted to reveal the mean and fluctuating characteristics of the flow as well as the characteristics of the static-pressure deficit. We focus on the manner in which the swirl ratio and the radial Reynolds number affect these characteristics. The transition of the tornado-like flow from a single-celled vortex to a dual-celled vortex with increasing swirl ratio and the impact of this transition on the flow field and the surface-pressure deficit are closely examined. The mean characteristics of the surface-pressure deficit caused by tornado-like vortices simulated at a number of swirl ratios compare well with the corresponding characteristics recorded during full-scale tornadoes.

  6. An efficient and reliable predictive method for fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-13

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  7. An efficient and reliable predictive method for fluidized bed simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-29

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  8. A large eddy lattice Boltzmann simulation of magnetohydrodynamic turbulence

    NASA Astrophysics Data System (ADS)

    Flint, Christopher; Vahala, George

    2018-02-01

    Large eddy simulations (LES) of a lattice Boltzmann magnetohydrodynamic (LB-MHD) model are performed for the unstable magnetized Kelvin-Helmholtz jet instability. This algorithm is an extension of Ansumali et al. [1] to MHD, in which one first performs an expansion in the filter width on the kinetic equations, followed by the usual low-Knudsen-number expansion. These two perturbation operations do not commute. Closure is achieved by invoking the physical constraint that subgrid effects occur at transport time scales. The simulations are in very good agreement with direct numerical simulations.

  9. A web-based repository of surgical simulator projects.

    PubMed

    Leskovský, Peter; Harders, Matthias; Székely, Gábor

    2006-01-01

    The use of computer-based surgical simulators for training of prospective surgeons has been a topic of research for more than a decade. As a result, a large number of academic projects have been carried out, and a growing number of commercial products are available on the market. Keeping track of all these endeavors for established groups as well as for newly started projects can be quite arduous. Gathering information on existing methods, already traveled research paths, and problems encountered is a time-consuming task. To alleviate this situation, we have established a modifiable online repository of existing projects. It contains detailed information about a large number of simulator projects gathered from web pages, papers and personal communication. The database is modifiable (with password-protected sections) and also allows for a simple statistical analysis of the collected data. For further information, the surgical repository web page can be found at www.virtualsurgery.vision.ee.ethz.ch.

  10. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Co-Axial Supersonic Free-Jet Experiment

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Edwards, J. R.

    2009-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The baseline value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was noted when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid simulation results showed the same trends as the baseline Reynolds-averaged predictions. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions are suggested as a remedy to this dilemma. Comparisons between resolved second-order turbulence statistics and their modeled Reynolds-averaged counterparts were also performed.

  11. Tracking of large-scale structures in turbulent channel with direct numerical simulation of low Prandtl number passive scalar

    NASA Astrophysics Data System (ADS)

    Tiselj, Iztok

    2014-12-01

    Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ˜2300 wall units long and ˜750 wall units wide; the size was taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flows, was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) that agree to within 1%-2%. Similar agreement is observed for Pr = 1 temperature fields and also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations of the standard and large computational domains at Pr = 0.01, show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play a visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large scales. These large thermal structures represent a kind of echo of the large-scale velocity structures: the highest temperature-velocity correlations are observed not between the instantaneous temperatures and instantaneous streamwise velocities, but between the instantaneous temperatures and velocities averaged over a certain time interval.

  12. A Method for Large Eddy Simulation of Acoustic Combustion Instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Moin, Parviz

    2003-11-01

    A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number, and the acoustic frequencies of interest are usually low. Additionally, new boundary conditions based on the work of Poinsot and Lele have been developed to model the acoustic effect of a long channel upstream of the computational inlet, thus avoiding the need to include such a channel in the computational domain. The turbulent combustion model used is the Level Set model of Duchamp de Lageneste and Pitsch for premixed combustion. Comparison of LES results to the reacting experiments of Besson et al. will be presented.
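
    The substitution described above can be made concrete with a schematic, constant-coefficient sketch (an assumption for illustration; the authors' actual variable-density discretization will differ):

    ```latex
    % Standard low Mach number projection step: pressure obeys a Poisson equation
    \nabla^{2} p^{\,n+1} = \frac{\rho}{\Delta t}\,\nabla\cdot\mathbf{u}^{*}
    % Retaining compressibility through \partial_t\rho = c^{-2}\,\partial_t p and
    % discretizing implicitly in time turns this into a Helmholtz equation,
    % which is what removes the acoustic CFL restriction:
    \nabla^{2} p^{\,n+1} - \frac{1}{c^{2}\Delta t^{2}}\, p^{\,n+1}
      = \frac{\rho}{\Delta t}\,\nabla\cdot\mathbf{u}^{*} - \frac{p^{\,n}}{c^{2}\Delta t^{2}}
    ```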

  13. Computation of large-scale statistics in decaying isotropic turbulence

    NASA Technical Reports Server (NTRS)

    Chasnov, Jeffrey R.

    1993-01-01

    We have performed large-eddy simulations of decaying isotropic turbulence to test the prediction of self-similar decay of the energy spectrum and to compute the decay exponents of the kinetic energy. In general, good agreement between the simulation results and the assumption of self-similarity was obtained. However, the statistics of the simulations were insufficient to compute the value of gamma, which corrects the decay exponent when the spectrum follows a k^4 wave number behavior near k = 0. To obtain good statistics, it was found necessary to average over a large ensemble of turbulent flows.
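
    The classical scaling in question follows from a short dimensional argument (a hedged sketch under one common convention; the paper's precise definition of gamma may differ in detail):

    ```latex
    % Assume E(k,t) \simeq C(t)\,k^{4} as k \to 0, with C \sim u^{2}L^{5} and L \sim u\,t.
    % If C is time-invariant, the kinetic energy K \sim u^{2} decays as
    K(t) \propto t^{-10/7}
    % while a slow drift C(t) \propto t^{\gamma} corrects the exponent to
    K(t) \propto t^{-(10-2\gamma)/7}
    ```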

  14. Astrophysical N-body Simulations Using Hierarchical Tree Data Structures

    NASA Astrophysics Data System (ADS)

    Warren, M. S.; Salmon, J. K.

    The authors report on recent large astrophysical N-body simulations executed on the Intel Touchstone Delta system. They review the astrophysical motivation and the numerical techniques and discuss steps taken to parallelize these simulations. The methods scale as O(N log N), for large values of N, and also scale linearly with the number of processors. The performance sustained for a duration of 67 h was between 5.1 and 5.4 Gflop/s on a 512-processor system.
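
    The O(N log N) scaling comes from replacing distant groups of bodies by their monopole moments during a tree walk. The sketch below is a generic serial Barnes-Hut-style example (illustrative only; it is not the authors' parallel code, and coincident bodies would require a depth cap in practice):

    ```python
    # Generic Barnes-Hut-style octree sketch: build a tree, then evaluate the
    # acceleration on one body with an opening-angle criterion (G = 1).
    import numpy as np

    class Node:
        def __init__(self, center, half):
            self.center = center        # cell center (3-vector)
            self.half = half            # half-width of the cubic cell
            self.mass = 0.0
            self.mw = np.zeros(3)       # mass-weighted position accumulator
            self.children = None        # list of 8 subcells once split
            self.point = None           # (pos, mass) while the cell is a leaf

    def _octant(node, p):
        return int(p[0] > node.center[0]) + 2 * int(p[1] > node.center[1]) + 4 * int(p[2] > node.center[2])

    def _route(node, p, m):
        i = _octant(node, p)
        if node.children[i] is None:
            off = (np.array([i & 1, (i >> 1) & 1, (i >> 2) & 1]) - 0.5) * node.half
            node.children[i] = Node(node.center + off, node.half / 2)
        insert(node.children[i], p, m)

    def insert(node, p, m):
        if node.mass == 0.0:            # empty leaf: store the body here
            node.point = (p, m); node.mass = m; node.mw = m * p
            return
        if node.children is None:       # occupied leaf: split, push old body down
            node.children = [None] * 8
            q, mq = node.point; node.point = None
            _route(node, q, mq)
        _route(node, p, m)
        node.mass += m; node.mw = node.mw + m * p

    def accel(node, p, theta=0.5, eps=1e-3):
        if node is None or node.mass == 0.0:
            return np.zeros(3)
        d = node.mw / node.mass - p     # vector to the cell's center of mass
        r = np.linalg.norm(d)
        # Leaf, or cell far enough away: approximate the cell by its monopole.
        if node.children is None or 2 * node.half < theta * r:
            if r < 1e-12:
                return np.zeros(3)      # skip self-interaction
            return node.mass * d / (r * r + eps * eps) ** 1.5
        return sum((accel(c, p, theta, eps) for c in node.children if c is not None), np.zeros(3))

    # Build a tree over N random bodies in the unit cube and query one body.
    rng = np.random.default_rng(0)
    pts = rng.random((1000, 3))
    root = Node(center=np.full(3, 0.5), half=0.5)
    for q in pts:
        insert(root, q, 1.0 / len(pts))
    print(accel(root, pts[0]))
    ```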

  15. Large-scale magnetic fields at high Reynolds numbers in magnetohydrodynamic simulations.

    PubMed

    Hotta, H; Rempel, M; Yokoyama, T

    2016-03-25

    The 11-year solar magnetic cycle shows a high degree of coherence in spite of the turbulent nature of the solar convection zone. It has been found in recent high-resolution magnetohydrodynamics simulations that the maintenance of a large-scale coherent magnetic field is difficult with small viscosity and magnetic diffusivity (≲10^12 square centimeters per second). We reproduced previous findings that indicate a reduction of the energy in the large-scale magnetic field for lower diffusivities and demonstrate the recovery of the global-scale magnetic field using unprecedentedly high resolution. We found an efficient small-scale dynamo that suppresses small-scale flows, which mimics the properties of large diffusivity. As a result, the global-scale magnetic field is maintained even in the regime of small diffusivities, that is, large Reynolds numbers.

  16. Spreadsheet Simulation of the Law of Large Numbers

    ERIC Educational Resources Information Center

    Boger, George

    2005-01-01

    If larger and larger samples are successively drawn from a population and a running average calculated after each sample has been drawn, the sequence of averages will converge to the mean, μ, of the population. This remarkable fact, known as the law of large numbers, holds true if samples are drawn from a population of discrete or continuous…
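
    The spreadsheet experiment is straightforward to reproduce programmatically; a minimal sketch with draws from a fair six-sided die (so μ = 3.5):

    ```python
    # Running-average demonstration of the law of large numbers.
    import numpy as np

    rng = np.random.default_rng(42)
    mu = 3.5                                   # population mean of a fair die
    samples = rng.integers(1, 7, size=10_000)  # successive draws
    running_avg = np.cumsum(samples) / np.arange(1, samples.size + 1)

    for n in (10, 100, 1_000, 10_000):
        print(f"n={n:>6}: running average = {running_avg[n - 1]:.4f} (mu = {mu})")
    ```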

  17. An immersed boundary method for direct and large eddy simulation of stratified flows in complex geometry

    NASA Astrophysics Data System (ADS)

    Rapaka, Narsimha R.; Sarkar, Sutanu

    2016-10-01

    A sharp-interface Immersed Boundary Method (IBM) is developed to simulate density-stratified turbulent flows in complex geometry using a Cartesian grid. The basic numerical scheme corresponds to a central second-order finite difference method, third-order Runge-Kutta integration in time for the advective terms and an alternating direction implicit (ADI) scheme for the viscous and diffusive terms. The solver developed here allows for both direct numerical simulation (DNS) and large eddy simulation (LES) approaches. Methods to enhance the mass conservation and numerical stability of the solver to simulate high Reynolds number flows are discussed. Convergence with second-order accuracy is demonstrated in flow past a cylinder. The solver is validated against past laboratory and numerical results in flow past a sphere, and in channel flow with and without stratification. Since topographically generated internal waves are believed to result in a substantial fraction of turbulent mixing in the ocean, we are motivated to examine oscillating tidal flow over a triangular obstacle to assess the ability of this computational model to represent nonlinear internal waves and turbulence. Results in laboratory-scale (order of a few meters) simulations show that the wave energy flux, mean flow properties and turbulent kinetic energy agree well with our previous results obtained using a body-fitted grid (BFG). The deviation of IBM results from BFG results is found to increase with increasing nonlinearity in the wave field, associated either with increasing steepness of the topography relative to the internal-wave propagation angle or with increasing amplitude of the oscillatory forcing. LES is performed on a large-scale ridge, of the order of a few kilometers in length, that has the same geometrical shape and the same non-dimensional values for the governing flow and environmental parameters as the laboratory-scale topography, but a significantly larger Reynolds number. A non-linear drag law is utilized in the large-scale application to parameterize turbulent losses due to bottom friction at high Reynolds number. The large-scale problem exhibits qualitatively similar behavior to the laboratory-scale problem with some differences: slightly larger intensification of the boundary flow and somewhat higher non-dimensional values for the energy fluxed away by the internal wave field. The phasing of wave breaking and turbulence exhibits little difference between small-scale and large-scale obstacles as long as the important non-dimensional parameters are kept the same. We conclude that IBM is a viable approach to the simulation of internal waves and turbulence in high Reynolds number stratified flows over topography.

  18. Handling Qualities of Large Rotorcraft in Hover and Low Speed

    NASA Technical Reports Server (NTRS)

    Malpica, Carlos; Theodore, Colin R.; Lawrence, Ben; Blanken, Chris L.

    2015-01-01

    According to a number of system studies, large capacity advanced rotorcraft with a capability of high cruise speeds (approx. 350 mph) as well as vertical and/or short take-off and landing (V/STOL) flight could alleviate anticipated air transportation capacity issues by making use of non-primary runways, taxiways, and aprons. These advanced aircraft pose a number of design challenges, as well as unknown issues in the flight control and handling qualities domains. A series of piloted simulation experiments have been conducted on the NASA Ames Research Center Vertical Motion Simulator (VMS) in recent years to systematically investigate the fundamental flight control and handling qualities issues associated with the characteristics of large rotorcraft, including tiltrotors, in hover and low-speed maneuvering.

  19. Large-eddy simulation of flow around an airfoil on a structured mesh

    NASA Technical Reports Server (NTRS)

    Kaltenbach, Hans-Jakob; Choi, Haecheon

    1995-01-01

    The diversity of flow characteristics encountered in a flow over an airfoil near maximum lift taxes the presently available statistical turbulence models. This work describes our first attempt to apply the technique of large-eddy simulation to a flow of aeronautical interest. The challenge for this simulation comes from the high Reynolds number of the flow as well as the variety of flow regimes encountered, including a thin laminar boundary layer at the nose, transition, boundary layer growth under adverse pressure gradient, incipient separation near the trailing edge, and merging of two shear layers at the trailing edge. The flow configuration chosen is a NACA 4412 airfoil near maximum lift. The corresponding angle of attack was determined independently by Wadcock (1987) and Hastings & Williams (1984, 1987) to be close to 12 deg. The simulation matches the chord Reynolds number U∞c/ν = 1.64 x 10^6 of Wadcock's experiment.

  20. An analytical method to simulate the H I 21-cm visibility signal for intensity mapping experiments

    NASA Astrophysics Data System (ADS)

    Sarkar, Anjan Kumar; Bharadwaj, Somnath; Marthi, Visweshwar Ram

    2018-01-01

    Simulations play a vital role in testing and validating H I 21-cm power spectrum estimation techniques. Conventional methods use techniques like N-body simulations to simulate the sky signal which is then passed through a model of the instrument. This makes it necessary to simulate the H I distribution in a large cosmological volume, and incorporate both the light-cone effect and the telescope's chromatic response. The computational requirements may be particularly large if one wishes to simulate many realizations of the signal. In this paper, we present an analytical method to simulate the H I visibility signal. This is particularly efficient if one wishes to simulate a large number of realizations of the signal. Our method is based on theoretical predictions of the visibility correlation which incorporate both the light-cone effect and the telescope's chromatic response. We have demonstrated this method by applying it to simulate the H I visibility signal for the upcoming Ooty Wide Field Array Phase I.
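
    The core efficiency trick, drawing Gaussian realizations of visibilities directly from a model correlation matrix rather than re-simulating the sky, can be sketched as follows (the toy Gaussian covariance below is a hypothetical stand-in for the theoretical visibility correlations of the paper, and real visibilities would be complex-valued):

    ```python
    # Hedged sketch: many cheap Gaussian realizations with a prescribed covariance.
    import numpy as np

    def simulate_visibilities(cov, n_real, rng):
        """cov: (m, m) model visibility covariance; returns (n_real, m) draws."""
        L = np.linalg.cholesky(cov + 1e-12 * np.eye(cov.shape[0]))  # jitter for stability
        z = rng.standard_normal((n_real, cov.shape[0]))
        return z @ L.T                      # each row has covariance `cov`

    rng = np.random.default_rng(1)
    m = 64                                  # e.g. frequency channels of one baseline
    x = np.arange(m)
    cov = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 8.0) ** 2)  # toy correlation model
    vis = simulate_visibilities(cov, n_real=1000, rng=rng)
    print(np.allclose(np.cov(vis.T), cov, atol=0.2))             # sample cov ~ model
    ```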

  1. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Coaxial Supersonic Free-Jet Experiment

    NASA Technical Reports Server (NTRS)

    Baurle, Robert A.; Edwards, Jack R.

    2010-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment was designed to study compressible mixing flow phenomenon under conditions that are representative of those encountered in scramjet combustors. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The initial value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was observed when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid Reynolds-averaged/large-eddy simulations also over-predicted the mixing layer spreading rate for the helium case, while under-predicting the rate of mixing when argon was used as the injectant. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions were suggested as a remedy to this dilemma. Second-order turbulence statistics were also compared to their modeled Reynolds-averaged counterparts to evaluate the effectiveness of common turbulence closure assumptions.

  2. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only 25 to 200 times slower than real time.

  3. Simulation studies using multibody dynamics code DART

    NASA Technical Reports Server (NTRS)

    Keat, James E.

    1989-01-01

    DART is a multibody dynamics code developed by Photon Research Associates for the Air Force Astronautics Laboratory (AFAL). The code is intended primarily to simulate the dynamics of large space structures, particularly during the deployment phase of their missions. DART integrates nonlinear equations of motion numerically. The number of bodies in the system being simulated is arbitrary. The bodies' interconnection joints can have an arbitrary number of degrees of freedom between 0 and 6. Motions across the joints can be large. Provision for simulating on-board control systems is provided. Conservation of energy and momentum, when applicable, is used to evaluate DART's performance. After a brief description of DART, studies made to test the program prior to its delivery to AFAL are described. The first is a large-angle reorientation of a flexible spacecraft consisting of a rigid central hub and four flexible booms. Reorientation was accomplished by a single-cycle sine-wave torque input. In the second study, an appendage mounted on a spacecraft was slewed through a large angle. Four closed-loop control systems provided control of this appendage and of the spacecraft's attitude. The third study simulated the deployment of the rim of a bicycle-wheel-configuration large space structure. This system contained 18 bodies. An interesting and unexpected feature of the dynamics was a pulsing phenomenon experienced by the stays whose payout was used to control the deployment. A short description of the current status of DART is given.

  4. PyNEST: A Convenient Interface to the NEST Simulator.

    PubMed

    Eppler, Jochen Martin; Helias, Moritz; Muller, Eilif; Diesmann, Markus; Gewaltig, Marc-Oliver

    2008-01-01

    The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10^4 neurons and 10^7 to 10^9 synapses. NEST is implemented in C++ and can be used on a large range of architectures from single-core laptops over multi-core desktop computers to super-computers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in Computational Neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used.
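
    A minimal PyNEST session has the following shape (a sketch only; model and device names such as "iaf_psc_alpha" and "voltmeter" belong to NEST's standard model set, but exact names and connection syntax vary across NEST versions):

    ```python
    # Minimal PyNEST sketch: one noise-driven point neuron, recorded for 1 s.
    import nest

    nest.ResetKernel()
    neuron = nest.Create("iaf_psc_alpha")                     # point neuron
    noise = nest.Create("poisson_generator", params={"rate": 8000.0})
    vm = nest.Create("voltmeter")

    nest.Connect(noise, neuron, syn_spec={"weight": 10.0})    # drive the neuron
    nest.Connect(vm, neuron)                                  # record membrane potential
    nest.Simulate(1000.0)                                     # simulated milliseconds
    ```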

  5. PyNEST: A Convenient Interface to the NEST Simulator

    PubMed Central

    Eppler, Jochen Martin; Helias, Moritz; Muller, Eilif; Diesmann, Markus; Gewaltig, Marc-Oliver

    2008-01-01

    The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10^4 neurons and 10^7 to 10^9 synapses. NEST is implemented in C++ and can be used on a large range of architectures from single-core laptops over multi-core desktop computers to super-computers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in Computational Neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used. PMID:19198667

  6. FDTD method for laser absorption in metals for large scale problems.

    PubMed

    Deng, Chun; Ki, Hyungson

    2013-10-21

    The FDTD method has been successfully used for many electromagnetic problems, but its application to laser material processing has been limited because even a several-millimeter domain requires a prohibitively large number of grids. In this article, we present a novel FDTD method for simulating large-scale laser beam absorption problems, especially for metals, by enlarging laser wavelength while maintaining the material's reflection characteristics. For validation purposes, the proposed method has been tested with in-house FDTD codes to simulate p-, s-, and circularly polarized 1.06 μm irradiation on Fe and Sn targets, and the simulation results are in good agreement with theoretical predictions.
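
    A bare-bones 1-D Yee update loop (vacuum, normalized field units, hard source) shows why the grid count is prohibitive at a 1.06 μm wavelength, which is the problem the wavelength-enlarging method addresses; the sketch below does not implement the authors' reflection-preserving scaling itself:

    ```python
    # Bare-bones 1-D vacuum FDTD (Yee) loop. dx must resolve the wavelength
    # (~20 cells/lambda), so millimetre-scale domains need millions of cells.
    import numpy as np

    c0 = 3e8
    lam = 1.06e-6                 # laser wavelength from the abstract
    dx = lam / 20                 # ~20 cells per wavelength
    dt = 0.5 * dx / c0            # Courant-stable time step
    n = 2000                      # only ~0.1 mm of domain at this resolution
    ez = np.zeros(n); hy = np.zeros(n)

    for step in range(4000):
        hy[:-1] += (c0 * dt / dx) * (ez[1:] - ez[:-1])    # update H from curl E
        ez[1:] += (c0 * dt / dx) * (hy[1:] - hy[:-1])     # update E from curl H
        ez[0] = np.sin(2 * np.pi * c0 * dt * step / lam)  # hard sinusoidal source
    ```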

  7. Population genetics and molecular evolution of DNA sequences in transposable elements. I. A simulation framework.

    PubMed

    Kijima, T E; Innan, Hideki

    2013-11-01

    A population genetic simulation framework is developed to understand the behavior and molecular evolution of DNA sequences of transposable elements. Our model incorporates random transposition and excision of transposable element (TE) copies, two modes of selection against TEs, and degeneration of transpositional activity by point mutations. We first investigated the relationships between the behavior of the copy number of TEs and these parameters. Our results show that when selection is weak, the genome can maintain a relatively large number of TEs, but most of them are less active. In contrast, with strong selection, the genome can maintain only a limited number of TEs but the proportion of active copies is large. In such a case, there could be substantial fluctuations of the copy number over generations. We also explored how DNA sequences of TEs evolve through the simulations. In general, active copies form clusters around the original sequence, while less active copies have long branches specific to themselves, exhibiting a star-shaped phylogeny. It is demonstrated that the phylogeny of TE sequences could be informative to understand the dynamics of TE evolution.
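
    A minimal sketch of such copy-number dynamics for a single lineage is given below; the rates (u, v, mu, s) and the reseeding rule are illustrative assumptions, not the paper's parameterization:

    ```python
    # Toy TE copy-number dynamics: transposition by active copies, excision,
    # inactivation by point mutation, and selection against total copy number.
    import numpy as np

    rng = np.random.default_rng(0)
    u, v, mu, s = 0.05, 0.01, 0.02, 0.002   # transpose, excise, inactivate, selection
    active, inactive = 5, 0

    for gen in range(2000):
        new = rng.poisson(u * active)                  # new (active) insertions
        act_loss = rng.binomial(active, v)             # excision of active copies
        inact_loss = rng.binomial(inactive, v)         # excision of inactive copies
        decayed = rng.binomial(active - act_loss, mu)  # mutations deactivate copies
        active = active - act_loss - decayed + new
        inactive = inactive - inact_loss + decayed
        if rng.random() > (1 - s) ** (active + inactive):
            active, inactive = 5, 0                    # lineage purged; reseed

    print("active:", active, "inactive:", inactive)
    ```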

  8. Schnek: A C++ library for the development of parallel simulation codes on regular grids

    NASA Astrophysics Data System (ADS)

    Schmitz, Holger

    2018-05-01

    A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.

  9. Large Eddy Simulation study of the development of finite-channel lock-release currents at high Grashof numbers

    NASA Astrophysics Data System (ADS)

    Ooi, Seng-Keat

    2005-11-01

    Lock-exchange gravity current flows produced by the instantaneous release of a heavy fluid are investigated using well-resolved 3-D large eddy simulations at Grashof numbers up to 8*10^9. It is found that the 3-D simulations correctly predict a constant front velocity over the initial slumping phase and a front speed decrease proportional to t^(-1/3) (the time t is measured from the release) over the inviscid phase, in agreement with theory. The evolution of the current in the simulations is found to be similar to that observed experimentally by Hacker et al. (1996). The effect of the dynamic LES model on the solutions is discussed. The energy budget of the current is discussed and the contribution of the turbulent dissipation to the total dissipation is analyzed. The limitations of less expensive 2D simulations are discussed, in particular their failure to correctly predict the spatio-temporal distribution of the bed shear stresses, which is important in determining the amount of sediment the gravity current can entrain when it advances over a loose bed.

  10. Monte Carlo simulation of induction time and metastable zone width; stochastic or deterministic?

    NASA Astrophysics Data System (ADS)

    Kubota, Noriaki

    2018-03-01

    The induction time and metastable zone width (MSZW) measured for small samples (say 1 mL or less) both scatter widely; they are observed as stochastic quantities. For large samples (say 1000 mL or more), in contrast, the induction time and MSZW are observed as deterministic quantities. The reason for this experimental difference is investigated with Monte Carlo simulation. In the simulation, the time (under isothermal conditions) and supercooling (under polythermal conditions) at which a first single crystal is detected are defined as the induction time t and the MSZW ΔT for small samples, respectively. The number of crystals just at the moment of t and ΔT is unity. A first crystal emerges at random due to the intrinsic nature of nucleation; accordingly, t and ΔT become stochastic. For large samples, the time and supercooling at which the number density of crystals N/V reaches a detector sensitivity (N/V)det are defined as t and ΔT for isothermal and polythermal conditions, respectively. The points of t and ΔT are those at which a large number of crystals have accumulated. Consequently, t and ΔT become deterministic according to the law of large numbers. Whether t and ΔT are stochastic or deterministic in actual experiments should therefore not be attributed to a change in the nucleation mechanism at the molecular level; it may simply be a consequence of differences in the experimental definitions of t and ΔT.
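
    The small-sample/large-sample contrast can be reproduced with a constant-rate Poisson nucleation model (a sketch; the rate J and the detector sensitivity are illustrative assumptions). The coefficient of variation collapses roughly as 1/sqrt(n) once many crystals are needed for detection:

    ```python
    # Poisson nucleation toy model: induction time is stochastic for 1 mL
    # samples (first-nucleus time) but nearly deterministic for 1000 mL.
    import numpy as np

    rng = np.random.default_rng(0)
    J = 0.01                                   # nuclei per mL per second (toy value)

    def induction_time(volume_ml, detect_density=1.0):
        """Time until N/V reaches detect_density crystals per mL.
        For 1 mL and detect_density 1, this is the first-nucleus time."""
        n_detect = max(int(np.ceil(detect_density * volume_ml)), 1)
        # inter-event waiting times are Exponential with mean 1/(J*V)
        return rng.exponential(1.0 / (J * volume_ml), size=n_detect).sum()

    for V in (1.0, 1000.0):
        t = [induction_time(V) for _ in range(200)]
        print(f"V={V:>6} mL: mean t = {np.mean(t):8.1f} s, CV = {np.std(t)/np.mean(t):.2f}")
    ```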

  11. Effects of forcing time scale on the simulated turbulent flows and turbulent collision statistics of inertial particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosa, B., E-mail: bogdan.rosa@imgw.pl; Parishani, H.; Department of Earth System Science, University of California, Irvine, California 92697-3100

    2015-01-15

    In this paper, we study systematically the effects of the forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope ["An examination of forcing in direct numerical simulations of turbulence," Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large to yield a Taylor microscale flow Reynolds number of 30 and less. We then study the effects of the forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of the altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of the turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, on turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and the dynamics of inertial particles, a conditional analysis has been performed, showing that the regions of higher collision rate of inertial particles are well correlated with regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with regions of high energy dissipation rate.

  12. Scalable nuclear density functional theory with Sky3D

    NASA Astrophysics Data System (ADS)

    Afibuzzaman, Md; Schuetrumpf, Bastian; Aktulga, Hasan Metin

    2018-02-01

    In nuclear astrophysics, quantum simulations of large inhomogeneous dense systems, such as those appearing in the crusts of neutron stars, pose big challenges. The number of particles in a simulation with periodic boundary conditions is strongly limited due to the immense computational cost of the quantum methods. In this paper, we describe techniques for an efficient and scalable parallel implementation of Sky3D, a nuclear density functional theory solver that operates on an equidistant grid. The presented techniques allow Sky3D to achieve good scaling and high performance on a large number of cores, as demonstrated through detailed performance analysis on a Cray XC40 supercomputer.

  13. Large eddy simulation of turbulent cavitating flows

    NASA Astrophysics Data System (ADS)

    Gnanaskandan, A.; Mahesh, K.

    2015-12-01

    Large Eddy Simulation is employed to study two turbulent cavitating flows: over a cylinder and over a wedge. A homogeneous mixture model is used to treat the mixture of water and water vapor as a compressible fluid. The governing equations are solved using a novel predictor-corrector method. The subgrid terms are modeled using the dynamic Smagorinsky model. Cavitating flow over a cylinder at Reynolds number (Re) = 3900 and cavitation number (σ) = 1.0 is simulated and the wake characteristics are compared to the single-phase results at the same Reynolds number. It is observed that cavitation suppresses turbulence in the near wake and delays three-dimensional breakdown of the vortices. Next, cavitating flow over a wedge at Re = 200,000 and σ = 2.0 is presented. The mean void fraction profiles obtained are compared to experiment and good agreement is obtained. Cavity auto-oscillation is observed, where the sheet cavity breaks up into a cloud cavity periodically. The results suggest LES as an attractive approach for predicting turbulent cavitating flows.

  14. Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines

    NASA Astrophysics Data System (ADS)

    Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.

    2016-12-01

    Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model while satisfying the requirements of flexibility, accuracy and computational feasibility. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. It also allows us to perform global sensitivity analysis with ease. We also present an approach to calibration of simulators using field data.
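
    The workflow can be illustrated with a generic surrogate; the sketch below plainly swaps in a Gaussian-process regressor for the Bayesian adaptive splines of the paper, and the two-input "simulator" is a toy stand-in:

    ```python
    # Generic surrogate-modeling sketch: fit a cheap statistical approximation
    # to a few expensive runs, then do Monte Carlo on the surrogate.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def simulator(x):                      # toy stand-in for a dispersion model
        return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

    rng = np.random.default_rng(0)
    X = rng.random((80, 2))                # 80 "expensive" runs over 2 inputs
    y = simulator(X)

    gp = GaussianProcessRegressor().fit(X, y)   # surrogate of the parent model

    Xmc = rng.random((100_000, 2))         # Monte Carlo on the surrogate is cheap
    ymc = gp.predict(Xmc)
    print("output mean/sd:", ymc.mean(), ymc.std())
    ```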

  15. Implicit Large Eddy Simulation of a wingtip vortex at Re_c = 1.2 x 10^6

    NASA Astrophysics Data System (ADS)

    Lombard, Jean-Eloi; Moxey, Dave; Sherwin, Spencer; SherwinLab Team

    2015-11-01

    We present recent developments in numerical methods for performing a Large Eddy Simulation (LES) of the formation and evolution of a wingtip vortex. The development of these vortices in the near wake, in combination with the large Reynolds numbers present in these cases, makes these types of test cases particularly challenging to investigate numerically. To demonstrate the method's viability, we present results from numerical simulations of flow over a NACA 0012 profile wingtip at Re_c = 1.2 x 10^6 and compare them against experimental data; this is to date the highest Reynolds number achieved for an LES that has been correlated with experiments for this test case. Our model correlates favorably with experiment, both for the characteristic jetting in the primary vortex and the pressure distribution on the wing surface. The proposed method is of general interest for the modeling of transitioning vortex-dominated flows over complex geometries. McLaren Racing/Royal Academy of Engineering Research Chair.

  16. FASTPM: a new scheme for fast simulations of dark matter and haloes

    NASA Astrophysics Data System (ADS)

    Feng, Yu; Chu, Man-Yat; Seljak, Uroš; McDonald, Patrick

    2016-12-01

    We introduce FASTPM, a highly scalable approximated particle mesh (PM) N-body solver, which implements the PM scheme enforcing correct linear displacement (1LPT) evolution via modified kick and drift factors. Employing a two-dimensional domain decomposition scheme, FASTPM scales extremely well with a very large number of CPUs. In contrast to the COmoving Lagrangian Acceleration (COLA) approach, we do not need to split the force or separately track the 2LPT solution, reducing the code complexity and memory requirements. We compare FASTPM with different numbers of steps (Ns) and force resolution factors (B) against three benchmarks: the halo mass function from a friends-of-friends halo finder; the halo and dark matter power spectra; and the cross-correlation coefficient (or stochasticity), relative to a high-resolution TREEPM simulation. We show that the modified time-stepping scheme reduces the halo stochasticity when compared to COLA with the same number of steps and force resolution. While increasing Ns and B improves the transfer function and cross-correlation coefficient, for many applications FASTPM achieves sufficient accuracy at low Ns and B. For example, the Ns = 10, B = 2 simulation provides a substantial saving (a factor of 10) of computing time relative to the Ns = 40, B = 3 simulation, yet the halo benchmarks are very similar at z = 0. We find that for abundance-matched haloes the stochasticity remains low even for Ns = 5. FASTPM compares well against less expensive schemes, being only 7 (4) times more expensive than a 2LPT initial condition generator for Ns = 10 (Ns = 5). Some of the applications where FASTPM can be useful are generating a large number of mocks, producing non-linear statistics where one varies a large number of nuisance or cosmological parameters, or serving as part of an initial conditions solver.

  17. LES-ODT Simulations of Turbulent Reacting Shear Layers

    NASA Astrophysics Data System (ADS)

    Hoffie, Andreas; Echekki, Tarek

    2012-11-01

    Large-eddy simulations (LES) combined with one-dimensional turbulence (ODT) simulations of a spatially developing turbulent reacting shear layer with heat release at high Reynolds numbers were conducted and compared to results from direct numerical simulations (DNS) of the same configuration. The LES-ODT approach is based on LES solutions for momentum on a coarse grid and solutions for momentum and reactive scalars on a fine ODT grid, which is embedded in the LES computational domain. The shear layer is simulated with a single-step, second-order reaction with an Arrhenius reaction rate. The transport equations are solved using a low Mach number approximation. The LES-ODT simulations yield reasonably accurate predictions of turbulence and passive/reactive scalar statistics compared to DNS results.

  18. Investigating the Randomness of Numbers

    ERIC Educational Resources Information Center

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  19. Assessment of the Partially Resolved Numerical Simulation (PRNS) Approach in the National Combustion Code (NCC) for Turbulent Nonreacting and Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2008-01-01

    This paper describes an approach which aims at bridging the gap between the traditional Reynolds-averaged Navier-Stokes (RANS) approach and the traditional large eddy simulation (LES) approach. It has the characteristics of the very large eddy simulation (VLES) and we call this approach the partially-resolved numerical simulation (PRNS). Systematic simulations using the National Combustion Code (NCC) have been carried out for fully developed turbulent pipe flows at different Reynolds numbers to evaluate the PRNS approach. Also presented are the sample results of two demonstration cases: nonreacting flow in a single injector flame tube and reacting flow in a Lean Direct Injection (LDI) hydrogen combustor.

  20. Numerical simulation of nonstationary dissipative structures in 3D double-diffusive convection at large Rayleigh numbers

    NASA Astrophysics Data System (ADS)

    Kozitskiy, Sergey

    2018-06-01

    Numerical simulation of nonstationary dissipative structures in 3D double-diffusive convection has been performed by using the previously derived system of complex Ginzburg-Landau type amplitude equations, valid in a neighborhood of Hopf bifurcation points. Simulation has shown that the state of spatiotemporal chaos develops in the system. It has the form of nonstationary structures that depend on the parameters of the system. The shape of structures does not depend on the initial conditions, and a limited number of spectral components participate in their formation.

  1. Numerical simulation of nonstationary dissipative structures in 3D double-diffusive convection at large Rayleigh numbers

    NASA Astrophysics Data System (ADS)

    Kozitskiy, Sergey

    2018-05-01

    Numerical simulation of nonstationary dissipative structures in 3D double-diffusive convection has been performed by using the previously derived system of complex Ginzburg-Landau type amplitude equations, valid in a neighborhood of Hopf bifurcation points. Simulation has shown that the state of spatiotemporal chaos develops in the system. It has the form of nonstationary structures that depend on the parameters of the system. The shape of structures does not depend on the initial conditions, and a limited number of spectral components participate in their formation.

  2. The statistical significance of error probability as determined from decoding simulations for long codes

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
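
    The point is easy to make quantitative with the standard exact Poisson interval (a sketch, not necessarily the paper's exact construction; it assumes N is large and the error probability p is small, so the error count is approximately Poisson with mean Np):

    ```python
    # Exact (Poisson-based) confidence bounds on error probability when k
    # errors are observed in N decoding trials.
    from scipy.stats import chi2

    def poisson_ci(k, conf=0.95):
        a = 1.0 - conf
        lo = 0.5 * chi2.ppf(a / 2, 2 * k) if k > 0 else 0.0
        hi = 0.5 * chi2.ppf(1 - a / 2, 2 * (k + 1))
        return lo, hi

    k, N = 2, 10_000_000
    lo, hi = poisson_ci(k)
    print(f"95% CI on error probability: [{lo/N:.2e}, {hi/N:.2e}]")
    # Even k = 2 pins p to within roughly a factor of 30: the "surprisingly
    # great significance" of a couple of errors in a very large number of trials.
    ```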

  3. Lattice Boltzmann simulation of nonequilibrium effects in oscillatory gas flow.

    PubMed

    Tang, G H; Gu, X J; Barber, R W; Emerson, D R; Zhang, Y H

    2008-08-01

    Accurate evaluation of damping in laterally oscillating microstructures is challenging due to the complex flow behavior. In addition, device fabrication techniques and surface properties will have an important effect on the flow characteristics. Although kinetic approaches such as the direct simulation Monte Carlo (DSMC) method and directly solving the Boltzmann equation can address these challenges, they are beyond the reach of current computer technology for large scale simulation. As the continuum Navier-Stokes equations become invalid for nonequilibrium flows, we take advantage of the computationally efficient lattice Boltzmann method to investigate nonequilibrium oscillating flows. We have analyzed the effects of the Stokes number, Knudsen number, and tangential momentum accommodation coefficient for oscillating Couette flow and Stokes' second problem. Our results are in excellent agreement with DSMC data for Knudsen numbers up to Kn=O(1) and show good agreement for Knudsen numbers as large as 2.5. In addition to increasing the Stokes number, we demonstrate that increasing the Knudsen number or decreasing the accommodation coefficient can also expedite the breakdown of symmetry for oscillating Couette flow. This results in an earlier transition from quasisteady to unsteady flow. Our paper also highlights the deviation in velocity slip between Stokes' second problem and the confined Couette case.

  4. Real-time fast physical random number generator with a photonic integrated circuit.

    PubMed

    Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu

    2017-03-20

    Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.

  5. Large-eddy simulation of a boundary layer with concave streamwise curvature

    NASA Technical Reports Server (NTRS)

    Lund, Thomas S.

    1994-01-01

    Turbulence modeling continues to be one of the most difficult problems in fluid mechanics. Existing prediction methods are well developed for certain classes of simple equilibrium flows, but are still not entirely satisfactory for a large category of complex non-equilibrium flows found in engineering practice. Direct and large-eddy simulation (LES) approaches have long been believed to have great potential for the accurate prediction of difficult turbulent flows, but the associated computational cost has been prohibitive for practical problems. This remains true for direct simulation but is no longer clear for large-eddy simulation. Advances in computer hardware, numerical methods, and subgrid-scale modeling have made it possible to conduct LES for flows of practical interest at Reynolds numbers in the range of laboratory experiments. The objective of this work is to apply LES and the dynamic subgrid-scale model to the flow of a boundary layer over a concave surface.

  6. An analysis of the number of parking bays and checkout counters for a supermarket using SAS simulation studio

    NASA Astrophysics Data System (ADS)

    Kar, Leow Soo

    2014-07-01

    Two important factors that influence customer satisfaction in large supermarkets or hypermarkets are adequate parking facilities and short waiting times at the checkout counters. This paper describes the simulation analysis of a large supermarket to determine the optimal levels of these two factors. SAS Simulation Studio is used to model a large supermarket in a shopping mall with a car park facility. In order to make the simulation model more realistic, a number of complexities are introduced into the model. For example, arrival patterns of customers vary with the time of day (morning, afternoon and evening) and with the day of the week (weekdays or weekends); other modeled factors include the transport mode of arriving customers (by car or other means), the mode of payment (cash or credit card), the customer shopping pattern (leisurely, normal, exact) and the choice of checkout counter (normal or express). In this study, we focus on two important components of the simulation model, namely the parking area and the normal and express checkout counters. The parking area is modeled using a Resource Pool block where one resource unit represents one parking bay. A customer arriving by car seizes a unit of the resource from the Pool block (parks the car) and only releases it when he exits the system. Cars arriving when the Resource Pool is empty (no more parking bays) leave without entering the system. The normal and express checkouts are represented by Server blocks with appropriate service time distributions. As a case study, a supermarket in a shopping mall in Bangsar with a limited number of parking bays was chosen for this research. Empirical data on arrival patterns, arrival modes, payment modes, shopping patterns, and service times of the checkout counters were collected and analyzed to validate the model. Sensitivity analysis was also performed with different simulation scenarios to identify the optimal number of parking bays and checkout counters.
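
    A minimal discrete-event analogue of this model can be written with Python's SimPy (the paper itself uses SAS Simulation Studio; all parameters below are illustrative, not the collected Bangsar data):

    ```python
    # SimPy analogue: parking bays as a Resource (balking when full), normal
    # and express checkouts as Resources with their own service times.
    import random
    import simpy

    PARKING_BAYS, NORMAL, EXPRESS = 50, 4, 2

    def customer(env, parking, normal, express, by_car, few_items):
        if by_car:
            if parking.count == parking.capacity:
                return                                  # no bay free: customer balks
            bay = parking.request(); yield bay          # seize a parking bay
        yield env.timeout(random.uniform(10, 40))       # shopping time (minutes)
        counter = express if few_items else normal
        with counter.request() as turn:
            yield turn
            yield env.timeout(random.expovariate(1 / 3.0))  # checkout service
        if by_car:
            parking.release(bay)                        # free the bay on exit

    def arrivals(env, parking, normal, express):
        while True:
            yield env.timeout(random.expovariate(1 / 0.5))  # ~2 customers/minute
            env.process(customer(env, parking, normal, express,
                                 by_car=random.random() < 0.7,
                                 few_items=random.random() < 0.3))

    env = simpy.Environment()
    parking = simpy.Resource(env, capacity=PARKING_BAYS)
    normal = simpy.Resource(env, capacity=NORMAL)
    express = simpy.Resource(env, capacity=EXPRESS)
    env.process(arrivals(env, parking, normal, express))
    env.run(until=12 * 60)                              # one 12-hour trading day
    ```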

  7. Assessing the convergence of LHS Monte Carlo simulations of wastewater treatment models.

    PubMed

    Benedetti, Lorenzo; Claeys, Filip; Nopens, Ingmar; Vanrolleghem, Peter A

    2011-01-01

    Monte Carlo (MC) simulation appears to be the only currently adopted tool to estimate global sensitivities and uncertainties in wastewater treatment modelling. Such models are highly complex, dynamic and non-linear, requiring long computation times, especially in the scope of MC simulation, due to the large number of simulations usually required. However, no stopping rule to decide on the number of simulations required to achieve a given confidence in the MC simulation results has been adopted so far in the field. In this work, a pragmatic method is proposed to minimize the computation time by using a combination of several criteria. It makes no use of prior knowledge about the model, is very simple, intuitive and can be automated: all convenient features in engineering applications. A case study is used to show an application of the method, and the results indicate that the required number of simulations strongly depends on the model output(s) selected, and on the type and desired accuracy of the analysis conducted. Hence, no prior indication is available regarding the necessary number of MC simulations, but the proposed method is capable of dealing with these variations and stopping the calculations after convergence is reached.
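
    One of the simplest criteria of this kind, extending the run in batches until the confidence interval on an output mean is tight enough, can be sketched as follows (the toy model, batch size, and tolerance are illustrative; the paper combines several criteria):

    ```python
    # Batchwise stopping rule: add Monte Carlo samples until the 95% CI
    # half-width of the output mean drops below a relative tolerance.
    import numpy as np

    def run_model(params):                   # toy stand-in for one WWTP simulation
        return 10 + params @ np.array([1.0, -2.0, 0.5]) + np.random.normal()

    def mc_until_converged(batch=50, rel_tol=0.01, max_runs=10_000):
        outputs = []
        while len(outputs) < max_runs:
            for _ in range(batch):           # one more batch of sampled runs
                outputs.append(run_model(np.random.random(3)))
            y = np.asarray(outputs)
            half = 1.96 * y.std(ddof=1) / np.sqrt(y.size)
            if half < rel_tol * abs(y.mean()):
                break                        # CI tight enough: stop sampling
        return y.mean(), half, y.size

    print(mc_until_converged())
    ```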

  8. Structure and modeling of turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novikov, E.A.

    The "vortex strings" scale l_s ~ L Re^(-3/10) (L: external scale, Re: Reynolds number) is suggested as a grid scale for large-eddy simulation. Various aspects of the structure of turbulence and subgrid modeling are described in terms of conditional averaging, Markov processes with dependent increments, and infinitely divisible distributions. The major request from the energy, naval, aerospace and environmental engineering communities to the theory of turbulence is to reduce the enormous number of degrees of freedom in turbulent flows to a level manageable by computer simulations. The vast majority of these degrees of freedom is in the small-scale motion. The study of the structure of turbulence provides a basis for subgrid-scale (SGS) models, which are necessary for large-eddy simulations (LES).

  9. Large eddy simulation for predicting turbulent heat transfer in gas turbines

    PubMed Central

    Tafti, Danesh K.; He, Long; Nagendra, K.

    2014-01-01

    Blade cooling technology will play a critical role in the next generation of propulsion and power generation gas turbines. Accurate prediction of blade metal temperature can avoid the use of excessive compressed bypass air and allow higher turbine inlet temperature, increasing fuel efficiency and decreasing emissions. Large eddy simulation (LES) has been established to predict heat transfer coefficients with good accuracy under various non-canonical flows, but is still limited to relatively simple geometries and low Reynolds numbers. It is envisioned that the projected increase in computational power combined with a drop in price-to-performance ratio will make system-level simulations using LES in complex blade geometries at engine conditions accessible to the design process in the coming one to two decades. In making this possible, two key challenges are addressed in this paper: working with complex intricate blade geometries and simulating high-Reynolds-number (Re) flows. It is proposed to use the immersed boundary method (IBM) combined with LES wall functions. A ribbed duct at Re=20 000 is simulated using the IBM, and a two-pass ribbed duct is simulated at Re=100 000 with and without rotation (rotation number Ro=0.2) using LES with wall functions. The results validate that the IBM is a viable alternative to body-conforming grids and that LES with wall functions reproduces experimental results at a much lower computational cost. PMID:25024418

  10. Large-Eddy Simulation of Wind-Plant Aerodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.

    In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation, and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done large-eddy simulations of wind plants with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We used the OpenFOAM CFD toolbox to create our solver. The simulated time-averaged power production of the turbines in the plant agrees well with field observations, except with the sixth turbine and beyond in each wind-aligned row. The power produced by each of those turbines is overpredicted by 25-40%. A direct comparison between simulated and field data is difficult because we simulate one wind direction with a speed and turbulence intensity characteristic of Lillgrund, but the field observations were taken over a year of varying conditions. The simulation shows the significant 60-70% decrease in the performance of the turbines behind the front row in this plant, which has a spacing of 4.3 rotor diameters in this direction. The overall plant efficiency is well predicted. This work shows the importance of using local grid refinement to simultaneously capture the meter-scale details of the turbine wake and the kilometer-scale turbulent atmospheric structures. Although this work illustrates the power of large-eddy simulation in producing a time-accurate solution, it required about one million processor-hours, showing the significant cost of large-eddy simulation.

  11. Large Eddy Simulations of Transverse Combustion Instability in a Multi-Element Injector

    DTIC Science & Technology

    2016-07-27

    Only fragments of the report documentation page and presentation slides survive in this record: the authors (Matthew Harvazinski, Yogin..., surname truncated), a history slide on engine injector faceplate damage caused by combustion instability, and single-element CVRC studies in which injector post length governs stability (short post marginally stable; intermediate post unstable; long post stable or unstable).

  12. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark with a fixed number of iterations (iterative) and the operator-splitting (non-iterative) method, is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced force is used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.

  13. The impact of clustering and angular resolution on far-infrared and millimeter continuum observations

    NASA Astrophysics Data System (ADS)

    Béthermin, Matthieu; Wu, Hao-Yi; Lagache, Guilaine; Davidzon, Iary; Ponthieu, Nicolas; Cousin, Morgane; Wang, Lingyu; Doré, Olivier; Daddi, Emanuele; Lapi, Andrea

    2017-11-01

    Follow-up observations at high angular resolution of bright submillimeter galaxies selected from deep extragalactic surveys have shown that the single-dish sources are blends of several galaxies. Consequently, number counts derived from low- and high-angular-resolution observations are in tension. This demonstrates the importance of resolution effects at these wavelengths and the need for realistic simulations to explore them. We built a new 2 deg2 simulation of the extragalactic sky from the far-infrared to the submillimeter. It is based on an updated version of the 2SFM (two star-formation modes) galaxy evolution model. Using global galaxy properties generated by this model, we used an abundance-matching technique to populate a dark-matter lightcone and thus simulate the clustering. We produced maps from this simulation and extracted the sources, and we show that the limited angular resolution of single-dish instruments has a strong impact on (sub)millimeter continuum observations. Taking these resolution effects into account, we reproduce a large set of observables, such as number counts, their evolution with redshift, and cosmic infrared background power spectra. Our simulation consistently describes the number counts from single-dish telescopes and interferometers. In particular, at 350 and 500 μm, we find that the number counts measured by Herschel between 5 and 50 mJy are biased towards high values by a factor of 2, and that the redshift distributions are biased towards low redshifts. We also show that the clustering has an important impact on the Herschel pixel histogram used to derive number counts from P(D) analysis. We find that the brightest galaxy in the beam of a 500 μm Herschel source contributes on average only 60% of the Herschel flux density, but that this number will rise to 95% for future millimeter surveys on 30 m-class telescopes (e.g., NIKA2 at IRAM). Finally, we show that the large number density of red Herschel sources found in observations but not in models might be an observational artifact caused by the combination of noise, resolution effects, and the steepness of the color and flux density distributions. Our simulation, called the Simulated Infrared Dusty Extragalactic Sky (SIDES), is publicly available at http://cesam.lam.fr/sides.

  14. Three-dimensional hair model by means of particles using Blender

    NASA Astrophysics Data System (ADS)

    Alvarez-Cedillo, Jesús Antonio; Almanza-Nieto, Roberto; Herrera-Lozada, Juan Carlos

    2010-09-01

    The simulation and modeling of human hair is a process of very large computational complexity, owing to the large number of factors that must be calculated to give a realistic appearance. Generally, the method used in the film industry to simulate hair is based on particle-system graphics. In this paper we present a simple approximation of how to model human hair using particles in Blender.
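
    As a concrete starting point, hair in Blender can be set up from its Python API by attaching a particle system to a mesh and switching it to hair mode. The snippet below is a minimal sketch against the bpy API; property names follow recent Blender releases and may differ between versions, and the counts and lengths are illustrative, not values from the paper.

        import bpy

        # Assumes an active mesh object (e.g. a head or scalp mesh) is
        # selected in the scene.
        obj = bpy.context.object
        obj.modifiers.new(name="HairSim", type='PARTICLE_SYSTEM')
        settings = obj.particle_systems[-1].settings

        settings.type = 'HAIR'                 # grow strands, don't emit
        settings.count = 5000                  # number of guide hairs
        settings.hair_length = 0.25            # strand length, scene units
        settings.child_type = 'INTERPOLATED'   # children between guides
        settings.rendered_child_count = 50     # density at render time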

  15. Direct numerical simulation of broadband trailing edge noise from a NACA 0012 airfoil

    NASA Astrophysics Data System (ADS)

    Mehrabadi, Mohammad; Bodony, Daniel

    2016-11-01

    Commercial jet-powered aircraft produce unwanted noise at takeoff and landing when they are close to near-airport communities. Modern high-bypass-ratio turbofan engines have reduced jet exhaust noise sufficiently that noise from the main fan is now significant. In preparation for a large-eddy simulation of the NASA/GE Source Diagnostic Test Fan, we study the broadband noise due to the turbulent flow on a NACA 0012 airfoil at a zero-degree angle of attack, a chord-based Reynolds number of 408,000, and a Mach number of 0.115 using direct numerical simulation (DNS) and wall-modeled large-eddy simulation (WMLES). The flow conditions correspond to existing experimental data. We investigate the roughness-induced transition to turbulence and sound generation from a DNS perspective, and examine how these two features are captured by a wall model. Comparisons between the DNS- and WMLES-predicted noise are made and provide guidance on the use of WMLES for broadband fan noise prediction. AeroAcoustics Research Consortium.

  16. An Agent-Based Epidemic Simulation of Social Behaviors Affecting HIV Transmission among Taiwanese Homosexuals

    PubMed Central

    2015-01-01

    Computational simulations are currently used to identify epidemic dynamics, to test potential prevention and intervention strategies, and to study the effects of social behaviors on HIV transmission. The author describes an agent-based epidemic simulation model of a network of individuals who participate in high-risk sexual practices, using number of partners, condom usage, and relationship length to distinguish between high- and low-risk populations. Two new concepts—free links and fixed links—are used to indicate tendencies among individuals who either have large numbers of short-term partners or stay in long-term monogamous relationships. An attempt was made to reproduce epidemic curves of reported HIV cases among male homosexuals in Taiwan prior to using the agent-based model to determine the effects of various policies on epidemic dynamics. Results suggest that when suitable adjustments are made based on available social survey statistics, the model accurately simulates real-world behaviors on a large scale. PMID:25815047
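
    To make the free-link/fixed-link distinction concrete, the sketch below shows one transmission step of a toy agent-based model in the same spirit: fixed links are stable long-term pairs, free links are re-drawn short-term pairings each step. It is an illustration only; the transmission probabilities, condom effectiveness, and partner counts are hypothetical placeholders, not the calibrated values from the paper.

        import random

        def make_agent(high_risk):
            # Hypothetical attributes; rates and counts are placeholders.
            return {"infected": False, "condom": random.random() < 0.5,
                    "n_free": 5 if high_risk else 0, "fixed": []}

        def maybe_transmit(a, b, beta, eff=0.9):
            if a["infected"] != b["infected"]:
                if a["condom"] or b["condom"]:
                    beta *= 1.0 - eff              # condom reduces risk
                if random.random() < beta:
                    a["infected"] = b["infected"] = True

        def transmission_step(agents, beta_free=0.05, beta_fixed=0.01):
            pool = [a for a in agents if a["n_free"] > 0]
            for a in pool:                         # free links: re-drawn
                for b in random.sample(pool, k=min(a["n_free"], len(pool))):
                    maybe_transmit(a, b, beta_free)
            for a in agents:                       # fixed links: stable
                for b in a["fixed"]:
                    maybe_transmit(a, b, beta_fixed)

        agents = [make_agent(high_risk=(i % 10 == 0)) for i in range(1000)]
        agents[0]["infected"] = True
        for _ in range(52):                        # one year of weekly steps
            transmission_step(agents)
        print(sum(a["infected"] for a in agents))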

  17. The use of Tecnomatix software to simulate the manufacturing flows in an industrial enterprise producing hydrostatic components

    NASA Astrophysics Data System (ADS)

    Petrila, S.; Brabie, G.; Chirita, B.

    2016-08-01

    The analysis performed on manufacturing flows within industrial enterprises producing hydrostatic components was based on a number of factors that influence the smooth running of production, such as: the distance between pieces; the waiting time from one operation to another; the time needed to set up CNC machines; and tool changing in the case of a large number of operations and the manufacturing complexity of large files [2]. To optimize the manufacturing flow, the Tecnomatix software was used. This software is a complete portfolio of digital manufacturing solutions produced by Siemens. It supports innovation by linking all stages of production of a product, from process design and process simulation through validation to the manufacturing process itself. Among its many capabilities for creating a wide range of simulations, the program offers various demonstrations of the behavior of manufacturing cycles. It allows the simulation and optimization of production systems and processes in several areas, such as: automotive suppliers, production of industrial equipment, electronics manufacturing, and the design and production of aerospace and defense parts.

  18. Velocity Resolved - Scalar Modeled Simulations of High Schmidt Number Turbulent Transport

    NASA Astrophysics Data System (ADS)

    Verma, Siddhartha

    The objective of this thesis is to develop a framework to conduct velocity resolved - scalar modeled (VR-SM) simulations, which will enable accurate simulations at higher Reynolds and Schmidt (Sc) numbers than are currently feasible. The framework established will serve as a first step to enable future simulation studies for practical applications. To achieve this goal, in-depth analyses of the physical, numerical, and modeling aspects related to Sc >> 1 are presented, specifically when modeling in the viscous-convective subrange. Transport characteristics are scrutinized by examining scalar-velocity Fourier mode interactions in Direct Numerical Simulation (DNS) datasets; the results suggest that scalar modes in the viscous-convective subrange do not directly affect large-scale transport for high Sc. Further observations confirm that discretization errors inherent in numerical schemes can be sufficiently large to wipe out any meaningful contribution from subfilter models. This provides strong incentive to develop more effective numerical schemes to support high-Sc simulations. To lower numerical dissipation while maintaining physically and mathematically appropriate scalar bounds during the convection step, a novel method of enforcing bounds is formulated, specifically for use with cubic Hermite polynomials. Boundedness of the scalar being transported is effected by applying derivative limiting techniques, and physically plausible single sub-cell extrema are allowed to exist to help minimize numerical dissipation. The proposed bounding algorithm results in significant performance gains in DNS of turbulent mixing layers and of homogeneous isotropic turbulence. Next, the combined physical/mathematical behavior of the subfilter scalar-flux vector is analyzed in homogeneous isotropic turbulence by examining vector orientation in the strain-rate eigenframe. The results indicate no discernible dependence on the modeled scalar field and lead to the identification of the tensor-diffusivity model as a good representation of the subfilter flux. Velocity resolved - scalar modeled simulations of homogeneous isotropic turbulence are conducted to confirm the behavior theorized in these a priori analyses, and suggest that the tensor-diffusivity model is ideal for use in the viscous-convective subrange. Simulations of a turbulent mixing layer are also discussed, with the partial objective of analyzing the Schmidt number dependence of a variety of scalar statistics. Large-scale statistics are confirmed to be relatively independent of the Schmidt number for Sc >> 1, which is explained by the dominance of subfilter dissipation over resolved molecular dissipation in the simulations. Overall, the VR-SM framework presented is quite effective in predicting large-scale transport characteristics of high Schmidt number scalars; however, prediction of subfilter quantities would entail additional modeling intended specifically for this purpose. The VR-SM simulations presented in this thesis provide the opportunity to overlap with experimental studies, while at the same time creating an assortment of baseline datasets for future validation of LES models, thereby satisfying the objectives outlined for this work.
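
    The bounding idea lends itself to a compact sketch. Below is a minimal derivative-limited cubic Hermite interpolant in the classical Fritsch-Carlson style, in which nodal derivatives are clamped so the interpolant cannot overshoot its nodal values; the thesis's algorithm goes further by admitting physically plausible single sub-cell extrema, which this sketch omits.

        import numpy as np

        def limited_hermite(x, y, xq):
            """Cubic Hermite interpolation with derivative limiting so the
            interpolant stays bounded by neighbouring nodal values
            (classical clamp; single sub-cell extrema are not allowed)."""
            h = np.diff(x)
            delta = np.diff(y) / h                    # secant slopes
            d = np.zeros_like(y)
            d[1:-1] = 0.5 * (delta[:-1] + delta[1:])  # centered estimate
            d[0], d[-1] = delta[0], delta[-1]
            for i in range(len(delta)):               # limit derivatives
                if delta[i] == 0.0:
                    d[i] = d[i + 1] = 0.0
                else:
                    for j in (i, i + 1):
                        r = d[j] / delta[i]
                        if r < 0.0:
                            d[j] = 0.0
                        elif r > 3.0:
                            d[j] = 3.0 * delta[i]
            k = np.clip(np.searchsorted(x, xq) - 1, 0, len(h) - 1)
            t = (xq - x[k]) / h[k]
            h00 = 2*t**3 - 3*t**2 + 1; h10 = t**3 - 2*t**2 + t
            h01 = -2*t**3 + 3*t**2;    h11 = t**3 - t**2
            return h00*y[k] + h10*h[k]*d[k] + h01*y[k+1] + h11*h[k]*d[k+1]

        x = np.array([0.0, 1.0, 2.0, 3.0])
        y = np.array([0.0, 1.0, 1.0, 0.0])
        print(limited_hermite(x, y, np.linspace(0.0, 3.0, 7)))  # in [0, 1]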

  19. Fast Simulation of Dynamic Ultrasound Images Using the GPU.

    PubMed

    Storve, Sigurd; Torp, Hans

    2017-10-01

    Simulated ultrasound data is a valuable tool for development and validation of quantitative image analysis methods in echocardiography. Unfortunately, simulation time can become prohibitive for phantoms consisting of a large number of point scatterers. The COLE algorithm by Gao et al. is a fast convolution-based simulator that trades simulation accuracy for improved speed. We present highly efficient parallelized CPU and GPU implementations of the COLE algorithm with an emphasis on dynamic simulations involving moving point scatterers. We argue that it is crucial to minimize the amount of data transfers from the CPU to achieve good performance on the GPU. We achieve this by storing the complete trajectories of the dynamic point scatterers as spline curves in the GPU memory. This leads to good efficiency when simulating sequences consisting of a large number of frames, such as B-mode and tissue Doppler data for a full cardiac cycle. In addition, we propose a phase-based subsample delay technique that efficiently eliminates flickering artifacts seen in B-mode sequences when COLE is used without enough temporal oversampling. To assess the performance, we used a laptop computer and a desktop computer, each equipped with a multicore Intel CPU and an NVIDIA GPU. Running the simulator on a high-end TITAN X GPU, we observed two orders of magnitude speedup compared to the parallel CPU version, three orders of magnitude speedup compared to simulation times reported by Gao et al. in their paper on COLE, and a speedup of 27000 times compared to the multithreaded version of Field II, using numbers reported in a paper by Jensen. We hope that by releasing the simulator as an open-source project we will encourage its use and further development.
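
    The core of a COLE-style simulator is a scatter-then-convolve step: scatterer amplitudes are deposited on the RF sample grid at their round-trip delays, and the whole line is convolved once with the pulse. The sketch below illustrates that structure under assumed, illustrative parameters (it is not the authors' GPU code); note that the nearest-sample binning used here is exactly the delay quantization that the paper's phase-based subsample delay technique is designed to remove.

        import numpy as np

        def cole_line(depths_m, amplitudes, fs=100e6, f0=5e6, c=1540.0,
                      n_samples=8192):
            """Convolution-based RF line in the spirit of COLE: scatterers
            are binned onto the sample grid at their round-trip delays, then
            the whole line is convolved once with the pulse. All parameter
            values are illustrative, not those of the paper."""
            rf = np.zeros(n_samples)
            idx = np.round(2.0 * depths_m / c * fs).astype(int)
            ok = (idx >= 0) & (idx < n_samples)
            np.add.at(rf, idx[ok], amplitudes[ok])     # scatter to grid
            t = np.arange(-2e-6, 2e-6, 1.0 / fs)       # pulse support
            pulse = np.exp(-(t * f0) ** 2 * 4.0) * np.cos(2 * np.pi * f0 * t)
            return np.convolve(rf, pulse, mode="same")

        # Example: 10,000 random scatterers between 1 and 5 cm depth.
        rng = np.random.default_rng(0)
        line = cole_line(rng.uniform(0.01, 0.05, 10000),
                         rng.normal(size=10000))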

  20. The Influence of Investor Number on a Microscopic Market Model

    NASA Astrophysics Data System (ADS)

    Hellthaler, T.

    The stock market model of Levy, Persky, Solomon is simulated for much larger numbers of investors. While small markets can lead to realistically looking prices, the resulting prices of large markets oscillate smoothly in a semi-regular fashion.

  1. Predicting supramolecular self-assembly on reconstructed metal surfaces

    NASA Astrophysics Data System (ADS)

    Roussel, Thomas J.; Barrena, Esther; Ocal, Carmen; Faraudo, Jordi

    2014-06-01

    The prediction of supramolecular self-assembly onto solid surfaces is still challenging in many situations of interest for nanoscience. In particular, no previous simulation approach has been capable of simulating large self-assembly patterns of organic molecules over reconstructed surfaces (which have periodicities over large distances) due to the large number of surface atoms and adsorbing molecules involved. Using a novel simulation technique, we report here large scale simulations of the self-assembly patterns of an organic molecule (DIP) over different reconstructions of the Au(111) surface. We show that on particular reconstructions, the molecule-molecule interactions are enhanced in a way that long-range order is promoted. Also, the presence of a distortion in a reconstructed surface pattern not only induces the presence of long-range order but also is able to drive the organization of DIP into two coexisting homochiral domains, in quantitative agreement with STM experiments. On the other hand, only short range order is obtained in other reconstructions of the Au(111) surface. The simulation strategy opens interesting perspectives to tune the supramolecular structure by simulation design and surface engineering if choosing the right molecular building blocks and stabilising the chosen reconstruction pattern.

  2. High performance computing in biology: multimillion atom simulations of nanoscale systems

    PubMed Central

    Sanbonmatsu, K. Y.; Tung, C.-S.

    2007-01-01

    Computational methods have been used in biology for sequence analysis (bioinformatics), all-atom simulation (molecular dynamics and quantum calculations), and more recently for modeling biological networks (systems biology). Of these three techniques, all-atom simulation is currently the most computationally demanding, in terms of compute load, communication speed, and memory load. Breakthroughs in electrostatic force calculation and dynamic load balancing have enabled molecular dynamics simulations of large biomolecular complexes. Here, we report simulation results for the ribosome, using approximately 2.64 million atoms, the largest all-atom biomolecular simulation published to date. Several other nanoscale systems with different numbers of atoms were studied to measure the performance of the NAMD molecular dynamics simulation program on the Los Alamos National Laboratory Q Machine. We demonstrate that multimillion atom systems represent a 'sweet spot' for the NAMD code on large supercomputers. NAMD displays an unprecedented 85% parallel scaling efficiency for the ribosome system on 1024 CPUs. We also review recent targeted molecular dynamics simulations of the ribosome that prove useful for studying conformational changes of this large biomolecular complex in atomic detail. PMID:17187988

  3. Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.

    PubMed

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2017-01-01

    Exact stochastic simulation is an indispensable tool for a quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire and updating the system state according to a probability that is proportional to the reaction propensity. Two computationally expensive tasks in simulating large biochemical networks are the selection of next reaction firings and the update of reaction propensities due to state changes. We present in this work a new exact algorithm to optimize both of these simulation bottlenecks. Our algorithm employs composition-rejection on the propensity bounds of reactions to select the next reaction firing. The selection of next reaction firings is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. It therefore provides favorable scaling of the computational complexity in simulating large reaction networks. We benchmark our new algorithm against the state-of-the-art algorithms available in the literature to demonstrate its applicability and efficiency.
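
    The selection step can be sketched compactly. In the snippet below, reactions are grouped by the power-of-two range of their propensity upper bounds; a group is drawn in proportion to its total bound, a candidate is drawn within the group by rejection against the group cap, and a final rejection test against the exact propensity makes the selection exact. This is an illustrative reading of the composition-rejection idea, not the authors' implementation (which also maintains the groups incrementally as bounds change).

        import math
        import random

        def select_reaction(bounds, propensity):
            """Composition-rejection selection on propensity *bounds*, with
            a final rejection test against the exact propensity. `bounds`
            maps reaction index -> propensity upper bound; `propensity(i)`
            returns the exact value."""
            groups = {}                             # exponent -> members
            for i, b in bounds.items():
                groups.setdefault(math.frexp(b)[1], []).append(i)
            totals = {e: sum(bounds[i] for i in m)
                      for e, m in groups.items()}
            grand = sum(totals.values())
            while True:
                r = random.uniform(0.0, grand)      # composition step
                for e, tot in totals.items():
                    if r < tot:
                        break
                    r -= tot
                members, cap = groups[e], 2.0 ** e  # bounds in [cap/2, cap)
                while True:                         # candidate ~ its bound
                    i = random.choice(members)
                    if random.random() * cap < bounds[i]:
                        break
                if random.random() * bounds[i] < propensity(i):
                    return i                        # exact-propensity accept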

  4. Two-step simulation of velocity and passive scalar mixing at high Schmidt number in turbulent jets

    NASA Astrophysics Data System (ADS)

    Rah, K. Jeff; Blanquart, Guillaume

    2016-11-01

    Simulation of a passive scalar in a high-Schmidt-number turbulent mixing process requires a higher computational cost than that of the velocity fields, because the scalar is associated with smaller length scales than the velocity. Thus, full simulation of both velocity and passive scalar at high Sc for a practical configuration is difficult to perform. In this work, a new approach to simulating velocity and passive scalar mixing at high Sc is suggested to reduce the computational cost. First, the velocity fields are resolved by Large Eddy Simulation (LES). Then, using the velocity information extracted from the LES, the scalar inside a moving fluid blob is simulated by Direct Numerical Simulation (DNS). This two-step simulation method is applied to a turbulent jet and provides a new way to examine a scalar mixing process in a practical application at smaller computational cost. NSF, Samsung Scholarship.
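
    The essence of the two-step idea is that the scalar grid can be finer than the velocity grid, with the LES velocity interpolated onto it. A minimal 2D periodic sketch of such a step is shown below; it is illustrative only (the paper follows a moving fluid blob rather than a fixed periodic box, and all numerical choices here are assumptions).

        import numpy as np
        from scipy.ndimage import zoom

        def scalar_step(phi, u_coarse, v_coarse, dx, dt, D):
            """One explicit advection-diffusion step of the scalar on a grid
            finer than the velocity grid: the LES velocity is interpolated
            onto the scalar grid first. Periodic 2D box for illustration."""
            ratio = phi.shape[0] // u_coarse.shape[0]
            u = zoom(u_coarse, ratio, order=1, mode="wrap")  # LES -> fine
            v = zoom(v_coarse, ratio, order=1, mode="wrap")
            ddx = lambda f: (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
            ddy = lambda f: (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * dx)
            lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                   np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi) / dx**2
            return phi + dt * (-u * ddx(phi) - v * ddy(phi) + D * lap)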

  5. Simulating anomalous transport and multiphase segregation in porous media with the Lattice Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Matin, Rastin; Hernandez, Anier; Misztal, Marek; Mathiesen, Joachim

    2015-04-01

    Many hydrodynamic phenomena, ranging from micron-scale flows in porous media to large-Reynolds-number, non-Newtonian, and multiphase flows, have been simulated on computers using the lattice Boltzmann (LB) method. By solving the lattice Boltzmann equation on unstructured meshes in three dimensions, we have developed methods to efficiently model fluid flow in real rock samples. We use this model to study the spatio-temporal statistics of the velocity field inside three-dimensional real geometries and investigate its relation to the, in general, anomalous transport of passive tracers for a wide range of Peclet and Reynolds numbers. We extend this model with a free-energy-based method, which allows us to simulate binary systems with large density ratios in a thermodynamically consistent way and to track the interface explicitly. In this presentation we will present our recent results on both anomalous transport and multiphase segregation.

  6. Towards Petascale DNS of High Reynolds-Number Turbulent Boundary Layer

    NASA Astrophysics Data System (ADS)

    Webster, Keegan R.

    In flight vehicles, a large portion of fuel consumption is due to skin-friction drag. Reduction of this drag will significantly reduce the fuel consumption of flight vehicles and help our nation to reduce CO2 emissions. In order to reduce skin-friction drag, an increased understanding of wall turbulence is needed. Direct numerical simulation (DNS) of spatially developing turbulent boundary layers (SDTBL) can provide the fundamental understanding of wall turbulence needed to produce models for Reynolds-averaged Navier-Stokes (RANS) and large-eddy simulations (LES). DNS of a SDTBL over a flat plate at Re_theta = 1430-2900 was performed. Improvements were made to the DNS code allowing for higher Reynolds number simulations towards petascale DNS of turbulent boundary layers. Mesh refinement and improvements to the inflow and outflow boundary conditions have resulted in turbulence statistics that match more closely to experimental results. The Reynolds stresses and the terms of their evolution equations are reported.

  7. Implicit and explicit subgrid-scale modeling in discontinuous Galerkin methods for large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Fernandez, Pablo; Nguyen, Ngoc-Cuong; Peraire, Jaime

    2017-11-01

    Over the past few years, high-order discontinuous Galerkin (DG) methods for Large-Eddy Simulation (LES) have emerged as a promising approach to solve complex turbulent flows. Despite the significant research investment, the relation between the discretization scheme, the Riemann flux, the subgrid-scale (SGS) model and the accuracy of the resulting LES solver remains unclear. In this talk, we investigate the role of the Riemann solver and the SGS model in the ability to predict a variety of flow regimes, including transition to turbulence, wall-free turbulence, wall-bounded turbulence, and turbulence decay. The Taylor-Green vortex problem and the turbulent channel flow at various Reynolds numbers are considered. Numerical results show that DG methods implicitly introduce numerical dissipation in under-resolved turbulence simulations and, even in the high Reynolds number limit, this implicit dissipation provides a more accurate representation of the actual subgrid-scale dissipation than that by explicit models.

  8. Hot air impingement on a flat plate using Large Eddy Simulation (LES) technique

    NASA Astrophysics Data System (ADS)

    Plengsa-ard, C.; Kaewbumrung, M.

    2018-01-01

    Hot gas jets impinging on a flat plate generate very high heat transfer coefficients in the impingement zone. The magnitude of the heat transfer prediction near the stagnation point is important, and accurate heat flux distributions are needed. This research studies the heat transfer and flow field resulting from a single hot air jet impinging on a wall. The simulation is carried out using the commercial computational fluid dynamics (CFD) code FLUENT. A Large Eddy Simulation (LES) approach with a subgrid-scale Smagorinsky-Lilly model is presented. The classical Werner-Wengle wall model is used to compute the predicted results of velocity and temperature near walls. The Smagorinsky constant in the turbulence model is set to 0.1 and is kept constant throughout the investigation. The hot gas jet impinging on the flat plate with a constant surface temperature is chosen to validate the predicted heat flux results against experimental data. The jet Reynolds number is equal to 20,000, with a fixed jet-to-plate spacing of H/D = 2.0. The Nusselt number on the impingement surface is calculated. As predicted by the wall model, the instantaneous computed Nusselt number agrees fairly well with experimental data. The largest values of the calculated Nusselt number are near the stagnation point, decreasing monotonically in the wall jet region. Also, contour plots of instantaneous wall heat flux on the flat plate are captured by the LES simulation.
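
    The Nusselt number reported above follows from the computed wall heat flux in the usual way, Nu = h D / k with h = q_w / (T_w - T_jet). A small sketch (all numbers illustrative, not values from the paper):

        import numpy as np

        def nusselt(q_wall, T_wall, T_jet, D, k_air):
            """Local Nusselt number from the computed wall heat flux:
            Nu = h*D/k with h = q_w / (T_w - T_jet)."""
            h = q_wall / (T_wall - T_jet)   # convective coeff. [W/m^2 K]
            return h * D / k_air

        # Example: 15 kW/m^2 near the stagnation point, 60 K wall-jet
        # temperature difference, D = 0.02 m nozzle, k_air ~ 0.03 W/m K.
        print(nusselt(15e3, 393.0, 333.0, 0.02, 0.03))   # ~167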

  9. Cancellation exponent and multifractal structure in two-dimensional magnetohydrodynamics: direct numerical simulations and Lagrangian averaged modeling.

    PubMed

    Graham, Jonathan Pietarila; Mininni, Pablo D; Pouquet, Annick

    2005-10-01

    We present direct numerical simulations and Lagrangian averaged (also known as alpha model) simulations of forced and free decaying magnetohydrodynamic turbulence in two dimensions. The statistics of sign cancellations of the current at small scales is studied using both the cancellation exponent and the fractal dimension of the structures. The alpha model is found to have the same scaling behavior between positive and negative contributions as the direct numerical simulations. The alpha model is also able to reproduce the time evolution of these quantities in free decaying turbulence. At large Reynolds numbers, an independence of the cancellation exponent with the Reynolds numbers is observed.

  10. Ascertaining Validity in the Abstract Realm of PMESII Simulation Models: An Analysis of the Peace Support Operations Model (PSOM)

    DTIC Science & Technology

    2009-06-01

    The simulation analyzed is the campaign-level Peace Support Operations Model (PSOM). This thesis provides a quantitative analysis of PSOM. Because the model produces multiple potential outcomes, further development and analysis is required before it is used for large-scale analysis.

  11. Power-law scaling in Bénard-Marangoni convection at large Prandtl numbers

    NASA Astrophysics Data System (ADS)

    Boeck, Thomas; Thess, André

    2001-08-01

    Bénard-Marangoni convection at large Prandtl numbers is found to exhibit steady (nonturbulent) behavior in numerical experiments over a very wide range of Marangoni numbers Ma far away from the primary instability threshold. A phenomenological theory, taking into account the different character of the thermal boundary layers at the bottom and at the free surface, is developed. It predicts power-law scalings for the nondimensional velocity (Peclet number) and heat flux (Nusselt number) of the form Pe ~ Ma^(2/3) and Nu ~ Ma^(2/9). This prediction is in good agreement with two-dimensional direct numerical simulations up to Ma = 3.2×10^5.

  12. A Virtual Mixture Approach to the Study of Multistate Equilibrium: Application to Constant pH Simulation in Explicit Water

    PubMed Central

    Wu, Xiongwu; Brooks, Bernard R.

    2015-01-01

    Chemical and thermodynamic equilibrium of multiple states is a fundamental phenomenon in biological systems and has been the focus of many experimental and computational studies. This work presents a simulation method to directly study the equilibrium of multiple states. This method constructs a virtual mixture of multiple states (VMMS) to sample the conformational space of all chemical states simultaneously. The VMMS system consists of multiple subsystems, one for each state. The subsystem contains a solute and a solvent environment. The solute molecules in all subsystems share the same conformation but have their own solvent environments. Transition between states is implicated by the change of their molar fractions. Simulation of a VMMS system allows efficient calculation of relative free energies of all states, which in turn determine their equilibrium molar fractions. For systems with a large number of state transition sites, an implicit site approximation is introduced to minimize the cost of simulation. A direct application of the VMMS method is for constant pH simulation to study protonation equilibrium. Applying the VMMS method to a heptapeptide of 3 ionizable residues, we calculated the pKas of those residues both with all explicit states and with implicit sites and obtained consistent results. For mouse epidermal growth factor of 9 ionizable groups, our VMMS simulations with implicit sites produced pKas of all 9 ionizable groups and the results agree qualitatively with NMR measurement. This example demonstrates the VMMS method can be applied to systems of a large number of ionizable groups and the computational cost scales linearly with the number of ionizable groups. For one of the most challenging systems in constant pH calculation, SNase Δ+PHS/V66K, our VMMS simulation shows that it is the state-dependent water penetration that causes the large deviation in lysine66’s pKa. PMID:26506245
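
    The link from relative free energies to equilibrium molar fractions is a Boltzmann weighting, which the sketch below makes explicit. The values are illustrative; the substance of the VMMS method lies in estimating the free energies from the coupled subsystem simulations, which this sketch takes as given.

        import numpy as np

        def molar_fractions(free_energies_kcal, T=300.0):
            """Equilibrium molar fractions of the states in a VMMS-style
            mixture from their relative free energies (inputs here; the
            method estimates them from the simulation)."""
            kB = 0.0019872041                  # kcal/(mol K)
            w = np.exp(-(free_energies_kcal - np.min(free_energies_kcal))
                       / (kB * T))
            return w / w.sum()

        # Example: two protonation states ~1.36 kcal/mol apart, which is
        # roughly one pK unit at room temperature.
        print(molar_fractions(np.array([0.0, 1.36])))    # ~[0.91, 0.09]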

  13. A Virtual Mixture Approach to the Study of Multistate Equilibrium: Application to Constant pH Simulation in Explicit Water.

    PubMed

    Wu, Xiongwu; Brooks, Bernard R

    2015-10-01

    Chemical and thermodynamic equilibrium of multiple states is a fundamental phenomenon in biological systems and has been the focus of many experimental and computational studies. This work presents a simulation method to directly study the equilibrium of multiple states. This method constructs a virtual mixture of multiple states (VMMS) to sample the conformational space of all chemical states simultaneously. The VMMS system consists of multiple subsystems, one for each state. The subsystem contains a solute and a solvent environment. The solute molecules in all subsystems share the same conformation but have their own solvent environments. Transition between states is implicated by the change of their molar fractions. Simulation of a VMMS system allows efficient calculation of relative free energies of all states, which in turn determine their equilibrium molar fractions. For systems with a large number of state transition sites, an implicit site approximation is introduced to minimize the cost of simulation. A direct application of the VMMS method is for constant pH simulation to study protonation equilibrium. Applying the VMMS method to a heptapeptide of 3 ionizable residues, we calculated the pKas of those residues both with all explicit states and with implicit sites and obtained consistent results. For mouse epidermal growth factor of 9 ionizable groups, our VMMS simulations with implicit sites produced pKas of all 9 ionizable groups and the results agree qualitatively with NMR measurement. This example demonstrates the VMMS method can be applied to systems of a large number of ionizable groups and the computational cost scales linearly with the number of ionizable groups. For one of the most challenging systems in constant pH calculation, SNase Δ+PHS/V66K, our VMMS simulation shows that it is the state-dependent water penetration that causes the large deviation in lysine66's pKa.

  14. Particle physics and polyhedra proximity calculation for hazard simulations in large-scale industrial plants

    NASA Astrophysics Data System (ADS)

    Plebe, Alice; Grasso, Giorgio

    2016-12-01

    This paper describes a system developed for the simulation of flames inside an open-source 3D computer graphics package, Blender, with the aim of analyzing virtual-reality scenarios of hazards in large-scale industrial plants. The advantages of Blender are its high-resolution rendering of the very complex structures of large industrial plants and its embedded physics engine based on smoothed particle hydrodynamics. This particle system is used to evolve a simulated fire. The interaction of this fire with the components of the plant is computed using polyhedron separation distances, adopting a Voronoi-based strategy that optimizes the number of feature distance computations. Results for a real oil and gas refinery are presented.

  15. Accounting for Parameter Uncertainty in Complex Atmospheric Models, With an Application to Greenhouse Gas Emissions Evaluation

    NASA Astrophysics Data System (ADS)

    Swallow, B.; Rigby, M. L.; Rougier, J.; Manning, A.; Thomson, D.; Webster, H. N.; Lunt, M. F.; O'Doherty, S.

    2016-12-01

    In order to understand the underlying processes governing environmental and physical phenomena, a complex mathematical model is usually required. However, there is an inherent uncertainty related to the parameterisation of unresolved processes in these simulators. Here, we focus on the specific problem of accounting for uncertainty in parameter values in an atmospheric chemical transport model. Systematic errors introduced by failing to account for these uncertainties have the potential to have a large effect on the resulting estimates of unknown quantities of interest. One approach that is being increasingly used to address this issue is known as emulation, in which a large number of forward runs of the simulator are carried out in order to approximate the response of the output to changes in parameters. However, due to the complexity of some models, it is often infeasible to carry out the large number of training runs usually required for full statistical emulators of the environmental processes. We therefore present a simplified model reduction method for approximating uncertainties in complex environmental simulators without the need for very large numbers of training runs. We illustrate the method through an application to the Met Office's atmospheric transport model NAME. We show how our parameter estimation framework can be incorporated into a hierarchical Bayesian inversion, and demonstrate the impact on estimates of UK methane emissions, using atmospheric mole fraction data. We conclude that accounting for uncertainties in the parameterisation of complex atmospheric models is vital if systematic errors are to be minimized and all relevant uncertainties accounted for. We also note that investigations of this nature can prove extremely useful in highlighting deficiencies in the simulator that might otherwise be missed.
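
    For readers unfamiliar with emulation, the pattern is: run the expensive simulator at a modest design of parameter settings, fit a statistical surrogate, and query the surrogate (with uncertainty) elsewhere. A minimal Gaussian-process sketch is below; the stand-in simulator function, design size, and kernel choices are assumptions for illustration, not the paper's reduced-form method.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        def simulator(theta):
            # Stand-in for an expensive forward run (e.g. of NAME);
            # purely illustrative, not the paper's model.
            return np.sin(3 * theta[0]) + 0.5 * theta[1] ** 2

        rng = np.random.default_rng(1)
        X_train = rng.uniform(0, 1, size=(40, 2))   # 40 training runs
        y_train = np.array([simulator(t) for t in X_train])

        gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.2, 0.2]),
                                      normalize_y=True)
        gp.fit(X_train, y_train)

        # Predict output and uncertainty at untried parameter settings.
        mean, std = gp.predict(rng.uniform(0, 1, (5, 2)), return_std=True)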

  16. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    NASA Astrophysics Data System (ADS)

    Figueroa, Aldo; Meunier, Patrice; Cuevas, Sergio; Villermaux, Emmanuel; Ramos, Eduardo

    2014-01-01

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octupoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques with theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the PDFs of scalar concentration to be predicted in agreement with numerical and experimental results. This model also indicates that the PDFs of scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.

  17. Teaching Technology-Structure Contingencies by "Harnessing the Wind"

    ERIC Educational Resources Information Center

    Miller, Lynn E.

    2007-01-01

    This article describes a role-playing simulation that demonstrates how organizational structure is influenced by organizational and departmental technologies. Students act as employees of firms that must manufacture either a range of innovative products or a large number of standardized products. The simulation can be used in organizational…

  18. Large-Eddy/Lattice Boltzmann Simulations of Micro-blowing Strategies for Subsonic and Supersonic Drag Control

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    2003-01-01

    This report summarizes the progress made in the first 8 to 9 months of this research. The Lattice Boltzmann Equation (LBE) methodology for Large-eddy Simulations (LES) of microblowing has been validated using a jet-in-crossflow test configuration. In this study, the flow intake is also simulated to allow the interaction to occur naturally. The Lattice Boltzmann Equation Large-eddy Simulation (LBELES) approach captures not only the flow features associated with the flow, such as hairpin vortices and recirculation behind the jet, but also shows better agreement with experiments when compared to previous RANS predictions. The LBELES is shown to be computationally very efficient and therefore a viable method for simulating the injection process. Two strategies have been developed to simulate the multi-hole injection process as in the experiment. In order to allow natural interaction between the injected fluid and the primary stream, the flow intakes for all the holes have to be simulated. The LBE method is computationally efficient but is still 3D in nature, and therefore there may be some computational penalty. In order to study a large number of holes, a new 1D subgrid model has been developed that simulates a reduced form of the Navier-Stokes equation in these holes.

  19. Turbulence and entrainment length scales in large wind farms.

    PubMed

    Andersen, Søren J; Sørensen, Jens N; Mikkelsen, Robert F

    2017-04-13

    A number of large wind farms are modelled using large eddy simulations to elucidate the entrainment process. A reference simulation without turbines and three farm simulations with different degrees of imposed atmospheric turbulence are presented. The entrainment process is assessed using proper orthogonal decomposition, which is employed to detect the largest and most energetic coherent turbulent structures. The dominant length scales responsible for the entrainment process are shown to grow further into the wind farm, but to be limited in extent by the streamwise turbine spacing, which could be taken into account when developing farm layouts. The self-organized motion of large coherent structures also yields high correlations between the power productions of consecutive turbines, which can be exploited through dynamic farm control. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).

  20. Turbulence and entrainment length scales in large wind farms

    PubMed Central

    2017-01-01

    A number of large wind farms are modelled using large eddy simulations to elucidate the entrainment process. A reference simulation without turbines and three farm simulations with different degrees of imposed atmospheric turbulence are presented. The entrainment process is assessed using proper orthogonal decomposition, which is employed to detect the largest and most energetic coherent turbulent structures. The dominant length scales responsible for the entrainment process are shown to grow further into the wind farm, but to be limited in extent by the streamwise turbine spacing, which could be taken into account when developing farm layouts. The self-organized motion of large coherent structures also yields high correlations between the power productions of consecutive turbines, which can be exploited through dynamic farm control. This article is part of the themed issue ‘Wind energy in complex terrains’. PMID:28265028

  1. Large eddy simulation for predicting turbulent heat transfer in gas turbines.

    PubMed

    Tafti, Danesh K; He, Long; Nagendra, K

    2014-08-13

    Blade cooling technology will play a critical role in the next generation of propulsion and power generation gas turbines. Accurate prediction of blade metal temperature can avoid the use of excessive compressed bypass air and allow higher turbine inlet temperature, increasing fuel efficiency and decreasing emissions. Large eddy simulation (LES) has been established to predict heat transfer coefficients with good accuracy under various non-canonical flows, but is still limited to relatively simple geometries and low Reynolds numbers. It is envisioned that the projected increase in computational power combined with a drop in price-to-performance ratio will make system-level simulations using LES in complex blade geometries at engine conditions accessible to the design process in the coming one to two decades. In making this possible, two key challenges are addressed in this paper: working with complex intricate blade geometries and simulating high-Reynolds-number (Re) flows. It is proposed to use the immersed boundary method (IBM) combined with LES wall functions. A ribbed duct at Re=20 000 is simulated using the IBM, and a two-pass ribbed duct is simulated at Re=100 000 with and without rotation (rotation number Ro=0.2) using LES with wall functions. The results validate that the IBM is a viable alternative to body-conforming grids and that LES with wall functions reproduces experimental results at a much lower computational cost. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  2. Large-eddy simulation of wind turbine wake interactions on locally refined Cartesian grids

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2014-11-01

    Performing high-fidelity numerical simulations of turbulent flow in wind farms remains a challenging issue mainly because of the large computational resources required to accurately simulate the turbine wakes and turbine/turbine interactions. The discretization of the governing equations on structured grids for mesoscale calculations may not be the most efficient approach for resolving the large disparity of spatial scales. A 3D Cartesian grid refinement method enabling the efficient coupling of the Actuator Line Model (ALM) with locally refined unstructured Cartesian grids adapted to accurately resolve tip vortices and multi-turbine interactions, is presented. Second order schemes are employed for the discretization of the incompressible Navier-Stokes equations in a hybrid staggered/non-staggered formulation coupled with a fractional step method that ensures the satisfaction of local mass conservation to machine zero. The current approach enables multi-resolution LES of turbulent flow in multi-turbine wind farms. The numerical simulations are in good agreement with experimental measurements and are able to resolve the rich dynamics of turbine wakes on grids containing only a small fraction of the grid nodes that would be required in simulations without local mesh refinement. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the National Science Foundation under Award number NSF PFI:BIC 1318201.

  3. Resolution requirements for aero-optical simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mani, Ali; Wang Meng; Moin, Parviz

    2008-11-10

    Analytical criteria are developed to estimate the error of aero-optical computations due to inadequate spatial resolution of refractive index fields in high Reynolds number flow simulations. The unresolved turbulence structures are assumed to be locally isotropic and at low turbulent Mach number. Based on the Kolmogorov spectrum for the unresolved structures, the computational error of the optical path length is estimated and linked to the resulting error in the computed far-field optical irradiance. It is shown that in the high Reynolds number limit, for a given geometry and Mach number, the spatial resolution required to capture aero-optics within a pre-specified error margin does not scale with Reynolds number. In typical aero-optical applications this resolution requirement is much lower than the resolution required for direct numerical simulation, and therefore a typical large-eddy simulation can capture the aero-optical effects. The analysis is extended to complex turbulent flow simulations in which non-uniform grid spacings are used to better resolve the local turbulence structures. As a demonstration, the analysis is used to estimate the error of aero-optical computation for an optical beam passing through the turbulent wake of flow over a cylinder.
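
    One common way to connect a residual optical-path error to far-field irradiance is the large-aperture (Maréchal-type) approximation, Strehl ≈ exp(-(2π·OPD_rms/λ)²). The sketch below only illustrates that mapping under assumed numbers; the paper derives its resolution criterion analytically from the Kolmogorov spectrum of the unresolved scales.

        import numpy as np

        def strehl_from_opd_rms(opd_rms_m, wavelength_m):
            """Large-aperture approximation linking residual OPD error to
            the loss in far-field peak irradiance (Strehl ratio)."""
            sigma_phi = 2.0 * np.pi * opd_rms_m / wavelength_m  # rms phase
            return np.exp(-sigma_phi ** 2)

        # Example: 30 nm of unresolved OPD at a 1 micron wavelength.
        print(strehl_from_opd_rms(30e-9, 1e-6))   # ~0.965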

  4. Study of 3-D Dynamic Roughness Effects on Flow Over a NACA 0012 Airfoil Using Large Eddy Simulations at Low Reynolds Numbers

    NASA Astrophysics Data System (ADS)

    Guda, Venkata Subba Sai Satish

    There have been several advancements in the aerospace industry in areas such as aerodynamics, design, controls, and propulsion, all aimed at one common goal: increasing efficiency, i.e., greater range and scope of operation with lower fuel consumption. Several methods of flow control have been tried. Some were successful, some failed, and many were termed impractical. The low Reynolds number regime of 10^4 - 10^5 is a very interesting range. Flow physics in this range are quite different from those at higher Reynolds numbers. Mid- and high-altitude UAVs, MAVs, sailplanes, jet engine fan blades, inboard helicopter rotor blades, and wind turbine rotors are some of the aerodynamic applications that fall in this range. The current study deals with using dynamic roughness as a means of flow control over a NACA 0012 airfoil at low Reynolds numbers. Dynamic 3-D surface roughness elements placed on an airfoil near the leading edge aim at increasing efficiency by suppressing the effects of leading edge separation, such as leading edge stall, by delaying or totally eliminating flow separation. A numerical study of the above method has been carried out by means of Large Eddy Simulation, a mathematical model for turbulence in Computational Fluid Dynamics, owing to the highly unsteady nature of the flow. A user-defined function has been developed for the 3-D dynamic roughness element motion. Results from the simulations have been compared to experimental PIV data. The large eddy simulations captured the leading edge stall relatively well. For the clean cases, i.e. with the DR not actuated, the LES was able to reproduce experimental results in a reasonable fashion. However, the DR simulation results show that it fails to reattach the flow and suppress flow separation compared to experiments. Several novel techniques of grid design and hump creation are introduced through this study.

  5. Transferability of optimally-selected climate models in the quantification of climate change impacts on hydrology

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe

    2016-11-01

    Given the ever increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally-selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally-selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than those that are needed to simply cover variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this was not possible, the two optimal methods were found to perform adequately.

  6. A General Simulator Using State Estimation for a Space Tug Navigation System. [computerized simulation, orbital position estimation and flight mechanics

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1975-01-01

    A general simulation program (GSP) is presented involving nonlinear state estimation for space vehicle flight navigation systems. A complete explanation of the iterative guidance mode guidance law and derivations of the dynamics, coordinate frames, and state estimation routines are given so as to fully clarify the assumptions and approximations involved, so that simulation results can be placed in their proper perspective. A complete set of computer acronyms and their definitions, as well as explanations of the subroutines used in the GSP simulator, are included. To facilitate input/output, a complete set of compatible numbers, with units, is included to aid in data development. Format specifications, output data phrase meanings and purposes, and computer card data input are clearly spelled out. A large number of simulation and analytical studies were used to determine the validity of the simulator itself as well as various data runs.

  7. Large-scale parentage inference with SNPs: an efficient algorithm for statistical confidence of parent pair allocations.

    PubMed

    Anderson, Eric C

    2012-11-08

    Advances in genotyping that allow tens of thousands of individuals to be genotyped at a moderate number of single nucleotide polymorphisms (SNPs) permit parentage inference to be pursued on a very large scale. The intergenerational tagging this capacity allows is revolutionizing the management of cultured organisms (cows, salmon, etc.) and is poised to do the same for scientific studies of natural populations. Currently, however, there are no likelihood-based methods of parentage inference which are implemented in a manner that allows them to quickly handle a very large number of potential parents or parent pairs. Here we introduce an efficient likelihood-based method applicable to the specialized case of cultured organisms in which both parents can be reliably sampled. We develop a Markov chain representation for the cumulative number of Mendelian incompatibilities between an offspring and its putative parents and we exploit it to develop a fast algorithm for simulation-based estimates of statistical confidence in SNP-based assignments of offspring to pairs of parents. The method is implemented in the freely available software SNPPIT. We describe the method in detail, then assess its performance in a large simulation study using known allele frequencies at 96 SNPs from ten hatchery salmon populations. The simulations verify that the method is fast and accurate and that 96 well-chosen SNPs can provide sufficient power to identify the correct pair of parents from amongst millions of candidate pairs.
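
    The Markov chain in question tracks the cumulative count of Mendelian incompatibilities across loci. The compatibility test itself is simple, as the sketch below shows; SNPPIT's contribution is modeling genotyping error and computing assignment confidence on top of this count, which the sketch does not attempt.

        def mendelian_incompatibilities(child, father, mother):
            """Count loci at which the trio is Mendelian-incompatible.
            Genotypes are 0/1/2 copies of one allele; a locus is compatible
            if the child can receive one allele from each parent."""
            def gametes(g):                 # alleles a parent can transmit
                return {0: {0}, 1: {0, 1}, 2: {1}}[g]
            n_bad = 0
            for c, f, m in zip(child, father, mother):
                ok = any(a + b == c
                         for a in gametes(f) for b in gametes(m))
                n_bad += not ok
            return n_bad

        # Third locus is incompatible: both parents are 0, the child is 2.
        print(mendelian_incompatibilities([1, 2, 2], [0, 1, 0], [2, 2, 0]))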

  8. Automatic Selection of Order Parameters in the Analysis of Large Scale Molecular Dynamics Simulations.

    PubMed

    Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S

    2014-12-09

    Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.

  9. Enstrophy Cascade in Decaying Two-Dimensional Quantum Turbulence

    NASA Astrophysics Data System (ADS)

    Reeves, Matthew T.; Billam, Thomas P.; Yu, Xiaoquan; Bradley, Ashton S.

    2017-11-01

    We report evidence for an enstrophy cascade in large-scale point-vortex simulations of decaying two-dimensional quantum turbulence. Devising a method to generate quantum vortex configurations with kinetic energy narrowly localized near a single length scale, the dynamics are found to be well characterized by a superfluid Reynolds number Re_s that depends only on the number of vortices and the initial kinetic energy scale. Under free evolution the vortices exhibit features of a classical enstrophy cascade, including a k^-3 power-law kinetic energy spectrum and a constant enstrophy flux associated with inertial transport to small scales. Clear signatures of the cascade emerge for N ≳ 500 vortices. Simulating up to very large Reynolds numbers (N = 32,768 vortices), additional features of the classical theory are observed: the Kraichnan-Batchelor constant is found to converge to C' ≈ 1.6, and the width of the k^-3 range scales as Re_s^(1/2).
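
    The dynamical core of such point-vortex simulations is the 2D Biot-Savart sum giving each vortex's velocity from all the others. A minimal vectorized sketch follows (illustrative; the paper's large-N runs would require a fast summation method such as a tree code rather than this O(N^2) form):

        import numpy as np

        def vortex_velocities(pos, gamma):
            """Velocity of each point vortex induced by all others:
            u_i = sum_j Gamma_j/(2 pi) * z_hat x (r_i - r_j) / |r_i - r_j|^2.
            pos: (N, 2) positions; gamma: (N,) signed circulations."""
            dx = pos[:, None, :] - pos[None, :, :]   # pairwise separations
            r2 = (dx ** 2).sum(-1)
            np.fill_diagonal(r2, np.inf)             # no self-induction
            perp = np.stack([-dx[..., 1], dx[..., 0]], axis=-1)
            return (gamma[None, :, None] / (2 * np.pi)
                    * perp / r2[..., None]).sum(1)

        # A vortex-antivortex dipole translates with equal velocities.
        pos = np.array([[0.0, 0.0], [1.0, 0.0]])
        print(vortex_velocities(pos, np.array([1.0, -1.0])))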

  10. On distributed wavefront reconstruction for large-scale adaptive optics systems.

    PubMed

    de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel

    2016-05-01

    The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach with a speedup that scales quadratically with the number of partitions. The D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, John Russell

    This grant funded the development and dissemination of the Photon Simulator (PhoSim) for the purpose of studying dark energy at high precision with the upcoming Large Synoptic Survey Telescope (LSST) astronomical survey. The work was in collaboration with the LSST Dark Energy Science Collaboration (DESC). Several detailed physics improvements were made in the optics, atmosphere, and sensor, a number of validation studies were performed, and a significant number of usability features were implemented. Future work in DESC will use PhoSim as the image simulation tool for data challenges used by the analysis groups.

  12. FLAME: A platform for high performance computing of complex systems, applied for three case studies

    DOE PAGES

    Kiran, Mariam; Bicak, Mesude; Maleki-Dizaji, Saeedeh; ...

    2011-01-01

    FLAME allows complex models to be automatically parallelised on High Performance Computing (HPC) grids, enabling large numbers of agents to be simulated over short periods of time. Modellers are otherwise hindered by the complexity of porting models to parallel platforms and by the time taken to run large simulations on a single machine, both of which FLAME overcomes. Three case studies from different disciplines were modelled using FLAME and are presented along with their performance results on a grid.

  13. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, M.; Wieseman, C. D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.

  14. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay; Wieseman, Carol D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.

  15. Architecture for an integrated real-time air combat and sensor network simulation

    NASA Astrophysics Data System (ADS)

    Criswell, Evans A.; Rushing, John; Lin, Hong; Graves, Sara

    2007-04-01

    An architecture for an integrated air combat and sensor network simulation is presented. The architecture integrates two components: a parallel real-time sensor fusion and target tracking simulation, and an air combat simulation. By integrating these two simulations, it becomes possible to experiment with scenarios in which one or both sides in a battle have very large numbers of primitive passive sensors, and to assess the likely effects of those sensors on the outcome of the battle. Modern Air Power is a real-time theater-level air combat simulation that is currently being used as a part of the USAF Air and Space Basic Course (ASBC). The simulation includes a variety of scenarios from the Vietnam war to the present day, and also includes several hypothetical future scenarios. Modern Air Power includes a scenario editor, an order of battle editor, and full AI customization features that make it possible to quickly construct scenarios for any conflict of interest. The scenario editor makes it possible to place a wide variety of sensors including both high fidelity sensors such as radars, and primitive passive sensors that provide only very limited information. The parallel real-time sensor network simulation is capable of handling very large numbers of sensors on a computing cluster of modest size. It can fuse information provided by disparate sensors to detect and track targets, and produce target tracks.

  16. Multiscale numerical simulations of magnetoconvection at low magnetic Prandtl and Rossby numbers.

    NASA Astrophysics Data System (ADS)

    Maffei, S.; Calkins, M. A.; Julien, K. A.; Marti, P.

    2017-12-01

    The dynamics of the Earth's outer core is characterized by low values of the Rossby (Ro), Ekman, and magnetic Prandtl numbers. These values imply a wide range of temporal and spatial scales that need to be accounted for in realistic numerical simulations of the system. Current direct numerical simulations are not capable of reaching this extreme regime, suggesting that a new class of models is required to account for the rich dynamics expected in the natural system. Here we present results from a quasi-geostrophic, multiscale model based on the scale separation implied by the low Ro typical of rapidly rotating systems. We investigate a plane layer geometry where convection is driven by an imposed temperature gradient and the hydrodynamic equations are modified by a large-scale magnetic field. Analytical investigation shows that at values of the thermal and magnetic Prandtl numbers relevant for liquid metals, the energetic requirements for the onset of convection are not significantly altered even in the presence of strong magnetic fields. Results from strongly forced nonlinear numerical simulations show the presence of an inverse cascade, typical of 2-D turbulence, when no magnetic field or only a weak one is applied. For higher values of the magnetic field the inverse cascade is quenched.

  17. Numerical prediction of rail roughness growth on tangent railway tracks

    NASA Astrophysics Data System (ADS)

    Nielsen, J. C. O.

    2003-10-01

    Growth of railhead roughness (irregularities, waviness) is predicted through numerical simulation of dynamic train-track interaction on tangent track. The hypothesis is that wear is caused by longitudinal slip due to driven wheelsets, and that wear is proportional to the longitudinal frictional power in the contact patch. Starting from an initial roughness spectrum corresponding to a new or a recently ground rail, an initial roughness profile is determined. Wheel-rail contact forces, creepages, and wear for one wheelset passage are calculated in relation to location along a discretely supported track model. The calculated wear is scaled by a chosen number of wheelset passages and is then added to the initial roughness profile. Field observations of rail corrugation on a Dutch track are used to validate the simulation model. Results from the simulations predict a large roughness growth rate for wavelengths around 30-40 mm. The large growth in this wavelength interval is explained by a low track receptance near the sleepers around the pinned-pinned resonance frequency, in combination with a large number of driven passenger wheelset passages at uniform speed. The agreement between simulations and field measurements is good with respect to the dominant roughness wavelength and the annual wear rate. Remedies for reducing roughness growth are discussed.

  18. Full Quantum Dynamics Simulation of a Realistic Molecular System Using the Adaptive Time-Dependent Density Matrix Renormalization Group Method.

    PubMed

    Yao, Yao; Sun, Ke-Wei; Luo, Zhen; Ma, Haibo

    2018-01-18

    The accurate theoretical interpretation of ultrafast time-resolved spectroscopy experiments relies on full quantum dynamics simulations for the investigated system, which are nevertheless computationally prohibitive for realistic molecular systems with a large number of electronic and/or vibrational degrees of freedom. In this work, we propose a unitary transformation approach for realistic vibronic Hamiltonians, which can then be treated with the adaptive time-dependent density matrix renormalization group (t-DMRG) method to efficiently evolve the nonadiabatic dynamics of a large molecular system. We demonstrate the accuracy and efficiency of this approach with an example of simulating the exciton dissociation process within an oligothiophene/fullerene heterojunction, indicating that t-DMRG can be a promising method for full quantum dynamics simulation in large chemical systems. Moreover, it is also shown that the proper vibronic features in the ultrafast electronic process can be obtained by simulating the two-dimensional (2D) electronic spectrum by virtue of the high computational efficiency of the t-DMRG method.

  19. Large eddy simulation on Rayleigh–Bénard convection of cold water in the neighborhood of the maximum density

    NASA Astrophysics Data System (ADS)

    Huang, Xiao-Jie; Zhang, Li; Hu, Yu-Peng; Li, You-Rong

    2018-06-01

    In order to understand the effect of the Rayleigh number, the density inversion phenomenon and the aspect ratio on the flow patterns and the heat transfer characteristics of Rayleigh–Bénard convection of cold water in the neighborhood of the maximum density, a series of large eddy simulations are conducted by using the finite volume method. The Rayleigh number ranges between 10^6 and 10^9; the density inversion parameter and the aspect ratio are varied from 0 to 0.9 and from 0.4 to 2.5, respectively. The results indicate that the reversal of the large scale circulation (LSC) occurs with the increase of the Rayleigh number. When there exists a density inversion phenomenon, the key driver for the LSC is hot plumes. When the density inversion parameter is large enough, a stagnant region is found near the top of the container as the hot plumes cannot move to the top wall. The flow pattern structures depend mainly on the aspect ratio. When the aspect ratio is small, the rolls are vertically stacked and the flow keeps on switching among different flow states. For a moderate aspect ratio, different long-lived roll states coexist at a fixed aspect ratio. For a larger aspect ratio, the flow state is everlasting. The number of rolls increases with the increase of the aspect ratio. Furthermore, the aspect ratio has only slight influence on the time averaged Nusselt number for all density inversion parameters.

  20. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
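    The cost argument above — one extra adjoint solve yields the sensitivity of an output with respect to all inputs — can be demonstrated on a toy linear system. This sketch is generic, not the NASA Langley implementation; the system, objective, and parameter count are invented.

```python
# Adjoint sensitivity for J = c^T u with A(p) u = b, A(p) = A0 + sum_i p_i B_i.
# One forward solve plus one adjoint solve gives dJ/dp for ALL parameters,
# whereas finite differences would need one extra solve per parameter.
import numpy as np

rng = np.random.default_rng(1)
n, n_params = 50, 200

A0 = 4.0 * np.eye(n) + 0.1 * rng.normal(size=(n, n))
B = 0.01 * rng.normal(size=(n_params, n, n))   # dA/dp_i for each parameter
b = rng.normal(size=n)
c = rng.normal(size=n)

def assemble(p):
    return A0 + np.tensordot(p, B, axes=1)     # A(p)

p = np.zeros(n_params)
u = np.linalg.solve(assemble(p), b)            # one forward solve
lam = np.linalg.solve(assemble(p).T, c)        # one adjoint solve
# Differentiating A u = b gives du/dp_i = -A^{-1} (dA/dp_i) u, hence
# dJ/dp_i = -lam^T (dA/dp_i) u, with A^T lam = c.
grad = -np.einsum('i,kij,j->k', lam, B, u)

# Spot-check one component against a finite difference
eps, i = 1e-6, 7
dp = p.copy()
dp[i] += eps
fd = (c @ np.linalg.solve(assemble(dp), b) - c @ u) / eps
print(abs(grad[i] - fd))                       # tiny: adjoint matches FD
```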

  1. Performance of a proportion-based approach to meta-analytic moderator estimation: results from Monte Carlo simulations.

    PubMed

    Aguirre-Urreta, Miguel I; Ellis, Michael E; Sun, Wenying

    2012-03-01

    This research investigates the performance of a proportion-based approach to meta-analytic moderator estimation through a series of Monte Carlo simulations. This approach is most useful when the moderating potential of a categorical variable has not been recognized in primary research and thus heterogeneous groups have been pooled together as a single sample. Alternative scenarios representing different distributions of group proportions are examined along with varying numbers of studies, subjects per study, and correlation combinations. Our results suggest that the approach is largely unbiased in its estimation of the magnitude of between-group differences and performs well with regard to statistical power and Type I error. In particular, the average percentage bias of the estimated correlation for the reference group is positive and largely negligible, in the 0.5% to 1.8% range; the average percentage bias of the difference between correlations is also minimal, in the −0.1% to 1.2% range. Further analysis suggests that both biases decrease as the magnitude of the underlying difference increases, as the number of subjects in each simulated primary study increases, and as the number of simulated studies in each meta-analysis increases. The bias was most evident when the number of subjects and the number of studies were smallest (80 and 36, respectively). A sensitivity analysis that examines performance in scenarios down to 12 studies and 40 primary subjects is also included. This research is the first that thoroughly examines the adequacy of the proportion-based approach. Copyright © 2012 John Wiley & Sons, Ltd.

  2. NASA Integrated Vehicle Health Management (NIVHM) A New Simulation Architecture. Part I; An Investigation

    NASA Technical Reports Server (NTRS)

    Sheppard, Gene

    2005-01-01

    The overall objective of this research is to explore the development of a new architecture for simulating a vehicle health monitoring system in support of NASA's ongoing Integrated Vehicle Health Monitoring (IVHM) initiative. As discussed in NASA MSFC's IVHM workshop on June 29-July 1, 2004, a large number of sensors will be required for a robust IVHM system. The current simulation architecture is incapable of simulating the large number of sensors required for IVHM. Processing the data from the sensors into a format that a human operator can understand and assimilate in a timely manner will require a paradigm shift. Data from a single sensor is, at best, suspect, and in order to overcome this deficiency, redundancy will be required for tomorrow's sensors. The sensor technology of tomorrow will allow for the placement of thousands of sensors per square inch. The major obstacle to overcome will then be how to reduce the torrent of raw sensor data into useful information for computer-assisted decision-making.

  3. LES tests on airfoil trailing edge serration

    NASA Astrophysics Data System (ADS)

    Zhu, Wei Jun; Shen, Wen Zhong

    2016-09-01

    In the present study, a large number of acoustic simulations are carried out for a low-noise airfoil with different Trailing Edge Serrations (TES). The Ffowcs Williams-Hawkings (FWH) acoustic analogy is used for noise prediction at the trailing edge. The acoustic solver runs on the platform of our in-house incompressible flow solver EllipSys3D. The flow solution is first obtained from the Large Eddy Simulation (LES); the acoustic computation is then carried out based on the instantaneous hydrodynamic pressure and velocity field. To obtain the time history of the sound pressure, the flow quantities are integrated around the airfoil surface through the FWH approach. For all the simulations, the chord-based Reynolds number is around 1.5×10^6. In the test matrix, the effects of the angle of attack, the TE flap angle, and the length/width of the TES are investigated. Even though the airfoil under investigation is already optimized for low noise emission, most numerical simulations and wind tunnel experiments show that the noise level is further decreased by adding the TES device.

  4. Large-eddy simulations of a forced homogeneous isotropic turbulence with polymer additives

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Cai, Wei-Hua; Li, Feng-Chen

    2014-03-01

    Large-eddy simulations (LES) based on the temporal approximate deconvolution model were performed for a forced homogeneous isotropic turbulence (FHIT) with polymer additives at moderate Taylor Reynolds number. The finitely extensible nonlinear elastic model with the Peterlin approximation (FENE-P) was adopted as the constitutive equation for the filtered conformation tensor of the polymer molecules. The LES results were verified through comparisons with direct numerical simulation results. Using the LES database of the FHIT in the Newtonian fluid and polymer solution flows, the polymer effects on some important parameters such as strain, vorticity, and drag reduction were studied. By extracting the vortex structures and examining the flatness factor through a high-order correlation function of the velocity derivative and wavelet analysis, it is found that the small-scale vortex structures and small-scale intermittency in the FHIT are all inhibited by the presence of the polymers. The extended self-similarity scaling law in the polymer solution flow shows no apparent difference from that in the Newtonian fluid flow at the currently simulated ranges of Reynolds and Weissenberg numbers.

  5. Simulations of the Formation and Evolution of X-ray Clusters

    NASA Astrophysics Data System (ADS)

    Bryan, G. L.; Klypin, A.; Norman, M. L.

    1994-05-01

    We describe results from a set of Omega = 1 Cold plus Hot Dark Matter (CHDM) and Cold Dark Matter (CDM) simulations. We examine the formation and evolution of X-ray clusters in a cosmological setting with sufficient numbers to perform statistical analysis. We find that CDM, normalized to COBE, seems to produce too many large clusters, both in terms of the luminosity (dn/dL) and temperature (dn/dT) functions. The CHDM simulation produces fewer clusters, and the temperature distribution (our numerically most secure result) matches observations where they overlap. The computed cluster luminosity function drops below observations, but we are almost surely underestimating the X-ray luminosity. Because of the lower fluctuations in CHDM, there are only a small number of bright clusters in our simulation volume; however, we can use the simulated clusters to fix the relation between temperature and velocity dispersion, allowing us to use collisionless N-body codes to probe larger length scales with correspondingly brighter clusters. The hydrodynamic simulations have been performed with a hybrid particle-mesh scheme for the dark matter and a high-resolution grid-based piecewise parabolic method for the adiabatic gas dynamics. This combination has been implemented for massively parallel computers, allowing us to achieve grids as large as 512^3.

  6. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    NASA Astrophysics Data System (ADS)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.

  7. Computational aspects in high intensity ultrasonic surgery planning.

    PubMed

    Pulkkinen, A; Hynynen, K

    2010-01-01

    Therapeutic ultrasound treatment planning is discussed and computational aspects regarding it are reviewed. Nonlinear ultrasound simulations were solved with a combined frequency-domain Rayleigh and KZK model. Ultrasonic simulations were combined with thermal simulations and were used to compute heating of muscle tissue in vivo for four different focused ultrasound transducers. The simulations were compared with measurements, and good agreement was found for large F-number transducers. However, at F# 1.9 the simulated rate of temperature rise was approximately a factor of 2 higher than the measured rate. The power levels used with the F# 1 transducer were too low to show any nonlinearity. The simulations were used to investigate the importance of nonlinearities generated in the coupling water, and also the importance of including skin in the simulations. Ignoring either of these in the model would lead to larger errors. Most notably, the nonlinearities generated in the water can enhance the focal temperature by more than 100%. The simulations also demonstrated that pulsed high-power sonications may provide an opportunity to significantly (up to a factor of 3) reduce the treatment time. In conclusion, nonlinear propagation can play an important role in shaping the energy distribution during a focused ultrasound treatment and it should not be ignored in planning. However, the current simulation methods are accurate only with relatively large F-numbers, and better models need to be developed for sharply focused transducers. Copyright 2009 Elsevier Ltd. All rights reserved.

  8. Simulation of a large size inductively coupled plasma generator and comparison with experimental data

    NASA Astrophysics Data System (ADS)

    Lei, Fan; Li, Xiaoping; Liu, Yanming; Liu, Donglin; Yang, Min; Yu, Yuanyuan

    2018-01-01

    A two-dimensional axisymmetric inductively coupled plasma (ICP) model, implemented in the COMSOL multiphysics simulation platform, is described. Specifically, a large-size ICP generator filled with argon is simulated in this study. Distributions of the number density and temperature of electrons are obtained and compared for various input power and pressure settings. In addition, the electron trajectory distribution is obtained in simulation. Finally, the simulation results are compared against experimental data to assess the veracity of the two-dimensional fluid model. Approximate agreement was found (the variation tendencies are the same). The main reasons for the discrepancies in numerical magnitude are the assumption of Maxwellian and Druyvesteyn distributions for the electron energy and the lack of collision cross-section and reaction-rate data for the argon plasma.

  9. VHDL simulation with access to transistor models

    NASA Technical Reports Server (NTRS)

    Gibson, J.

    1991-01-01

    Hardware description languages such as VHDL have evolved to aid in the design of systems with large numbers of elements and a wide range of electronic and logical abstractions. For high performance circuits, behavioral models may not be able to efficiently include enough detail to give designers confidence in a simulation's accuracy. One option is to provide a link between the VHDL environment and a transistor level simulation environment. The coupling of the Vantage Analysis Systems VHDL simulator and the NOVA simulator provides the combination of VHDL modeling and transistor modeling.

  10. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    NASA Astrophysics Data System (ADS)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

    Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any computational method in particular, but is valid for simulations of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
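    The paper's analytical error expression is not reproduced in the abstract. The sketch below only verifies numerically the generic Poisson property such expressions build on: a rate estimated from a run containing N counted events carries a relative statistical error of roughly 1/√N. The rate and run length are made up.

```python
# Monte Carlo check of the 1/sqrt(N) statistical error of a Poisson rate
# estimate (illustrative parameters, not the paper's Gd-doped ceria setup).
import numpy as np

rng = np.random.default_rng(2)
true_rate, t_sim, n_runs = 5.0, 10.0, 20000  # events/time, run length, runs

counts = rng.poisson(true_rate * t_sim, size=n_runs)  # events seen per run
rate_estimates = counts / t_sim

rel_err_measured = rate_estimates.std() / rate_estimates.mean()
rel_err_predicted = 1.0 / np.sqrt(true_rate * t_sim)  # 1/sqrt(expected events)
print(rel_err_measured, rel_err_predicted)            # agree closely
```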

  11. Structural mechanics simulations

    NASA Technical Reports Server (NTRS)

    Biffle, Johnny H.

    1992-01-01

    Sandia National Laboratories has a very broad structural mechanics capability. Work has been performed in support of reentry vehicles, nuclear reactor safety, weapons systems and components, nuclear waste transport, the strategic petroleum reserve, nuclear waste storage, wind and solar energy, drilling technology, and submarine programs. The analysis environment contains both commercial and internally developed software. Included are mesh generation capabilities, structural simulation codes, and visualization codes for examining simulation results. To effectively simulate a wide variety of physical phenomena, a large number of constitutive models have been developed.

  12. Modeling number of bacteria per food unit in comparison to bacterial concentration in quantitative risk assessment: impact on risk estimates.

    PubMed

    Pouillot, Régis; Chen, Yuhuan; Hoelzer, Karin

    2015-02-01

    When developing quantitative risk assessment models, a fundamental consideration for risk assessors is to decide whether to evaluate changes in bacterial levels in terms of concentrations or in terms of bacterial numbers. Although modeling bacteria in terms of integer numbers may be regarded as a more intuitive and rigorous choice, modeling bacterial concentrations is more popular as it is generally less mathematically complex. We tested three different modeling approaches in a simulation study. The first approach considered bacterial concentrations; the second considered the number of bacteria in contaminated units, and the third considered the expected number of bacteria in contaminated units. Simulation results indicate that modeling concentrations tends to overestimate risk compared to modeling the number of bacteria. A sensitivity analysis using a regression tree suggests that processes which include drastic scenarios consisting of combinations of large bacterial inactivation followed by large bacterial growth frequently lead to a >10-fold overestimation of the average risk when modeling concentrations as opposed to bacterial numbers. Alternatively, the approach of modeling the expected number of bacteria in positive units generates results similar to the second method and is easier to use, thus potentially representing a promising compromise. Published by Elsevier Ltd.
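    A toy calculation, with invented inactivation and growth values, of why the two modeling choices diverge in the drastic inactivation-then-growth scenarios mentioned above: integer cell counts let contamination go extinct, whereas a fractional concentration always "survives" and regrows toward the stationary-phase ceiling in every serving.

```python
# Concentration model vs integer-number model through a 5-log inactivation and
# a capped 6-log growth step (all parameter values are illustrative).
import numpy as np

rng = np.random.default_rng(3)
n_units, n0 = 200_000, 20      # servings, initial cells per serving
kill = 1e-5                    # 5-log inactivation (per-cell survival prob.)
growth, n_max = 1e6, 1e4       # 6-log growth potential, stationary-phase cap

# Number model: cells survive individually; most servings become sterile,
# and sterile servings cannot regrow.
survivors = rng.binomial(n0, kill, size=n_units)
dose_numbers = np.minimum(survivors * growth, n_max)

# Concentration model: the same arithmetic applied to a fractional average,
# so every serving "regrows" from 2e-4 cells.
dose_conc = min(n0 * kill * growth, n_max)

print("mean dose, number model:       ", dose_numbers.mean())  # ~2
print("mean dose, concentration model:", dose_conc)            # 200, ~100x
```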

  13. Computational Analyses of the LIMX TBCC Inlet High-Speed Flowpath

    NASA Technical Reports Server (NTRS)

    Dippold, Vance F., III

    2012-01-01

    Reynolds-Averaged Navier-Stokes (RANS) simulations were performed for the high-speed flowpath and isolator of a dual-flowpath Turbine-Based Combined-Cycle (TBCC) inlet using the Wind-US code. The RANS simulations were performed in preparation for the Large-scale Inlet for Mode Transition (LIMX) model tests in the NASA Glenn Research Center (GRC) 10- by 10-ft Supersonic Wind Tunnel. The LIMX inlet has a low-speed flowpath that is coupled to a turbine engine and a high-speed flowpath designed to be coupled to a Dual-Mode Scramjet (DMSJ) combustor. These RANS simulations were conducted at a simulated freestream Mach number of 4.0, the nominal Mach number for the planned wind tunnel testing with the LIMX model. For the simulation results presented in this paper, the back pressure, cowl angles, and freestream Mach number were each varied to assess the performance and robustness of the high-speed inlet and isolator. Under simulated wind tunnel conditions at maximum inlet mass flow rates, the high-speed flowpath pressure rise was found to be greater than a factor of four. Furthermore, the high-speed flowpath and isolator remained stable for freestream Mach numbers as much as 0.1 below the design point. The RANS simulations indicate that the as-yet-untested high-speed inlet and isolator flowpath should operate as designed. The RANS simulation results also provided important insight to researchers as they developed test plans for the LIMX experiment in GRC's 10- by 10-ft Supersonic Wind Tunnel.

  14. Large-eddy simulations of compressible convection on massively parallel computers. [stellar physics

    NASA Technical Reports Server (NTRS)

    Xie, Xin; Toomre, Juri

    1993-01-01

    We report preliminary implementation of the large-eddy simulation (LES) technique in 2D simulations of compressible convection carried out on the CM-2 massively parallel computer. The convective flow fields in our simulations possess structures similar to those found in a number of direct simulations, with roll-like flows coherent across the entire depth of the layer that spans several density scale heights. Our detailed assessment of the effects of various subgrid scale (SGS) terms reveals that they may affect the gross character of convection. Yet, somewhat surprisingly, we find that our LES solutions, and another in which the SGS terms are turned off, only show modest differences. The resulting 2D flows realized here are rather laminar in character, and achieving substantial turbulence may require stronger forcing and less dissipation.

  15. Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing

    NASA Astrophysics Data System (ADS)

    Watanabe, T.; Nagata, K.

    2016-08-01

    We report on a numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on the multi-particle interaction in a finite volume (mixing volume). An a priori test of the MVM, based on direct numerical simulations of planar jets, is conducted in the turbulent region and in the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts the mean effects of molecular diffusion well under various numerical and flow parameters. The number of mixing particles should be large for the predicted molecular diffusion term to be positively correlated with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important to the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with a small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES-LPS) of the planar jet with a characteristic mixing-volume length of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to that of the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and to the computational grid size of the LES. Both in the turbulent core region and in the intermittent region, the LPS predicts a scalar field well correlated with the LES.

  16. Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, T., E-mail: watanabe.tomoaki@c.nagoya-u.jp; Nagata, K.

    We report on a numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on the multi-particle interaction in a finite volume (mixing volume). An a priori test of the MVM, based on direct numerical simulations of planar jets, is conducted in the turbulent region and in the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts the mean effects of molecular diffusion well under various numerical and flow parameters. The number of mixing particles should be large for the predicted molecular diffusion term to be positively correlated with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important to the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with a small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES–LPS) of the planar jet with a characteristic mixing-volume length of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to that of the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and to the computational grid size of the LES. Both in the turbulent core region and in the intermittent region, the LPS predicts a scalar field well correlated with the LES.
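    As a rough illustration of the multi-particle interaction idea (a generic per-volume relaxation, not necessarily the exact MVM closure), the sketch below moves the scalar carried by every particle in a mixing volume toward the in-volume mean, which conserves the scalar mean while decaying its variance.

```python
# Multi-particle mixing sketch: particles sharing a mixing volume relax toward
# their common mean scalar value (illustrative rate and particle counts).
import numpy as np

rng = np.random.default_rng(4)

def mix_step(phi, cell_id, omega, dt):
    """Relax each particle's scalar toward its mixing-volume mean."""
    phi = phi.copy()
    for c in np.unique(cell_id):
        members = cell_id == c
        phi[members] -= omega * dt * (phi[members] - phi[members].mean())
    return phi

# 10000 particles scattered over 100 mixing volumes, binary scalar field
n_particles, n_cells = 10_000, 100
phi = (rng.random(n_particles) < 0.5).astype(float)
cell_id = rng.integers(n_cells, size=n_particles)

for _ in range(50):
    phi = mix_step(phi, cell_id, omega=1.0, dt=0.05)

print("scalar mean (conserved):  ", phi.mean())  # stays near 0.5
print("scalar variance (decayed):", phi.var())
```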

  17. Rarefaction-driven Rayleigh–Taylor instability. Part 2. Experiments and simulations in the nonlinear regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, R. V.; Cabot, W. H.; Greenough, J. A.

    Experiments and large eddy simulation (LES) were performed to study the development of the Rayleigh–Taylor instability into the saturated, nonlinear regime, produced between two gases accelerated by a rarefaction wave. Single-mode two-dimensional and single-mode three-dimensional initial perturbations were introduced on the diffuse interface between the two gases prior to acceleration. The rarefaction wave imparts a non-constant acceleration and a time-decreasing Atwood number, A = (ρ_2 − ρ_1)/(ρ_2 + ρ_1), where ρ_2 and ρ_1 are the densities of the heavy and light gas, respectively. Experiments and simulations are presented for initial Atwood numbers of A = 0.49, A = 0.63, A = 0.82, and A = 0.94. Nominally two-dimensional (2-D) experiments (initiated with nearly 2-D perturbations) and 2-D simulations are observed to approach an intermediate-time velocity plateau that is in disagreement with the late-time velocity obtained from the incompressible model of Goncharov (Phys. Rev. Lett., vol. 88, 2002, 134502). Reacceleration from an intermediate velocity is observed for 2-D bubbles in large-wavenumber, k = 2π/λ = 0.247 mm^{-1}, experiments and simulations, where λ is the wavelength of the initial perturbation. At moderate Atwood numbers, the bubble and spike velocities approach larger values than those predicted by Goncharov's model. These late-time velocity trends are predicted well by numerical simulations using the LLNL Miranda code, and by the 2009 model of Mikaelian (Phys. Fluids, vol. 21, 2009, 024103) that extends Layzer-type models to variable acceleration and density. Large Atwood number experiments show a delayed roll-up and exhibit a free-fall-like behaviour. Finally, experiments initiated with three-dimensional perturbations tend to agree better with models and with a simulation using the LLNL Ares code initiated with an axisymmetric rather than Cartesian symmetry.

  18. Rarefaction-driven Rayleigh–Taylor instability. Part 2. Experiments and simulations in the nonlinear regime

    DOE PAGES

    Morgan, R. V.; Cabot, W. H.; Greenough, J. A.; ...

    2018-01-12

    Experiments and large eddy simulation (LES) were performed to study the development of the Rayleigh–Taylor instability into the saturated, nonlinear regime, produced between two gases accelerated by a rarefaction wave. Single-mode two-dimensional and single-mode three-dimensional initial perturbations were introduced on the diffuse interface between the two gases prior to acceleration. The rarefaction wave imparts a non-constant acceleration and a time-decreasing Atwood number, A = (ρ_2 − ρ_1)/(ρ_2 + ρ_1), where ρ_2 and ρ_1 are the densities of the heavy and light gas, respectively. Experiments and simulations are presented for initial Atwood numbers of A = 0.49, A = 0.63, A = 0.82, and A = 0.94. Nominally two-dimensional (2-D) experiments (initiated with nearly 2-D perturbations) and 2-D simulations are observed to approach an intermediate-time velocity plateau that is in disagreement with the late-time velocity obtained from the incompressible model of Goncharov (Phys. Rev. Lett., vol. 88, 2002, 134502). Reacceleration from an intermediate velocity is observed for 2-D bubbles in large-wavenumber, k = 2π/λ = 0.247 mm^{-1}, experiments and simulations, where λ is the wavelength of the initial perturbation. At moderate Atwood numbers, the bubble and spike velocities approach larger values than those predicted by Goncharov's model. These late-time velocity trends are predicted well by numerical simulations using the LLNL Miranda code, and by the 2009 model of Mikaelian (Phys. Fluids, vol. 21, 2009, 024103) that extends Layzer-type models to variable acceleration and density. Large Atwood number experiments show a delayed roll-up and exhibit a free-fall-like behaviour. Finally, experiments initiated with three-dimensional perturbations tend to agree better with models and with a simulation using the LLNL Ares code initiated with an axisymmetric rather than Cartesian symmetry.

  19. Stochastic simulation of biological reactions, and its applications for studying actin polymerization.

    PubMed

    Ichikawa, Kazuhisa; Suzuki, Takashi; Murata, Noboru

    2010-11-30

    Molecular events in biological cells occur in local subregions, where the molecules tend to be small in number. The cytoskeleton, which is important for both the structural changes of cells and their functions, is also a countable entity because of its long fibrous shape. To simulate the local environment using a computer, stochastic simulations should be run. We herein report a new method of stochastic simulation based on random walk and reaction by the collision of all molecules. The microscopic reaction rate P(r) is calculated from the macroscopic rate constant k. The formula involves only local parameters embedded for each molecule. The results of the stochastic simulations of simple second-order, polymerization, Michaelis-Menten-type and other reactions agreed quite well with those of deterministic simulations when the number of molecules was sufficiently large. An analysis of the theory indicated a relationship between variance and the number of molecules in the system, and results of multiple stochastic simulation runs confirmed this relationship. We simulated Ca^{2+} dynamics in a cell by inward flow from a point on the cell surface and the polymerization of G-actin forming F-actin. Our results showed that this theory and method can be used to simulate spatially inhomogeneous events.
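    The collision-based microscopic rate P(r) is not given in the abstract, so the sketch below instead uses a standard Gillespie simulation of the second-order reaction A + B → C to illustrate the generic point made above: stochastic trajectories approach the deterministic solution as the number of molecules grows. All parameters are illustrative.

```python
# Exact stochastic simulation (Gillespie) of the bimolecular reaction A+B->C.
import numpy as np

rng = np.random.default_rng(5)

def gillespie(n_a, n_b, k, volume, t_end):
    """Simulate A + B -> C in a well-mixed volume; returns times and A counts."""
    t, times, counts = 0.0, [0.0], [n_a]
    while n_a > 0 and n_b > 0:
        propensity = k * n_a * n_b / volume  # stochastic rate of the next event
        t += rng.exponential(1.0 / propensity)
        if t > t_end:
            break
        n_a -= 1
        n_b -= 1
        times.append(t)
        counts.append(n_a)
    return np.array(times), np.array(counts)

times, n_a = gillespie(n_a=1000, n_b=800, k=0.1, volume=1.0, t_end=5.0)
print("A molecules remaining at t =", round(times[-1], 3), ":", n_a[-1])
```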

  20. The Linear Bias in the Zeldovich Approximation and a Relation between the Number Density and the Linear Bias of Dark Halos

    NASA Astrophysics Data System (ADS)

    Fan, Zuhui

    2000-01-01

    The linear bias of the dark halos from a model under the Zeldovich approximation is derived and compared with the fitting formula of simulation results. While qualitatively similar to the Press-Schechter formula, this model gives a better description for the linear bias around the turnaround point. This advantage, however, may be compromised by the large uncertainty of the actual behavior of the linear bias near the turnaround point. For a broad class of structure formation models in the cold dark matter framework, a general relation exists between the number density and the linear bias of dark halos. This relation can be readily tested by numerical simulations. Thus, instead of laboriously checking these models one by one, numerical simulation studies can falsify a whole category of models. The general validity of this relation is important in identifying key physical processes responsible for the large-scale structure formation in the universe.

  1. LES of Temporally Evolving Mixing Layers by an Eighth-Order Filter Scheme

    NASA Technical Reports Server (NTRS)

    Hadjadj, A; Yee, H. C.; Sjogreen, B.

    2011-01-01

    An eighth-order filter method for a wide range of compressible flow speeds (H.C. Yee and B. Sjogreen, Proceedings of ICOSAHOM09, June 22-26, 2009, Trondheim, Norway) is employed for large eddy simulations (LES) of temporally evolving mixing layers (TML) at different convective Mach numbers (Mc) and Reynolds numbers. The high-order filter method is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets, and turbulence with strong shocks, with minimum tuning of scheme parameters. The values of Mc considered span the TML range from the quasi-incompressible regime to the highly compressible supersonic regime. The three main characteristics of compressible TML (the self-similarity property, compressibility effects, and the presence of large-scale structures with shocklets at high Mc) are considered for the LES study. The LES results, using the same scheme parameters for all studied cases, agree well with the experimental results of Barone et al. (2006) and with the published direct numerical simulation (DNS) work of Rogers & Moser (1994) and Pantano & Sarkar (2002).

  2. Vorticity, backscatter and counter-gradient transport predictions using two-level simulation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Ranjan, R.; Menon, S.

    2018-04-01

    The two-level simulation (TLS) method evolves both the large- and the small-scale fields in a two-scale approach and has shown good predictive capabilities in both isotropic and wall-bounded high-Reynolds-number (Re) turbulent flows in the past. The sensitivity and ability of this modelling approach to predict fundamental features (such as backscatter, counter-gradient turbulent transport, small-scale vorticity, etc.) seen in high-Re turbulent flows are assessed here by using two direct numerical simulation (DNS) datasets corresponding to a forced isotropic turbulence at a Taylor-microscale Reynolds number Reλ ≈ 433 and a fully developed turbulent flow in a periodic channel at a friction Reynolds number Reτ ≈ 1000. It is shown that TLS captures the dynamics of local co-/counter-gradient transport and backscatter at the requisite scales of interest. These observations are further confirmed through an a posteriori investigation of the flow in a periodic channel at Reτ = 2000. The results reveal that the TLS method can capture both the large- and the small-scale flow physics in a consistent manner, and at a reduced overall cost when compared to the estimated DNS or wall-resolved LES cost.

  3. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel

    2014-01-15

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octupoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, “The diffusive strip method for scalar mixing in two-dimensions,” J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows prediction of the PDFs of the scalar in agreement with numerical and experimental results. This model also indicates that the PDFs of the scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.

  4. Wildland fire probabilities estimated from weather model-deduced monthly mean fire danger indices

    Treesearch

    Haiganoush K. Preisler; Shyh-Chin Chen; Francis Fujioka; John W. Benoit; Anthony L. Westerling

    2008-01-01

    The National Fire Danger Rating System indices deduced from a regional simulation weather model were used to estimate probabilities and numbers of large fire events on monthly and 1-degree grid scales. The weather model simulations and forecasts are ongoing experimental products from the Experimental Climate Prediction Center at the Scripps Institution of Oceanography...

  5. Important parameters for smoke plume rise simulation with Daysmoke

    Treesearch

    L. Liu; G.L. Achtemeier; S.L. Goodrick; W. Jackson

    2010-01-01

    Daysmoke is a local smoke transport model and has been used to provide smoke plume rise information. It includes a large number of parameters describing the dynamic and stochastic processes of particle upward movement, fallout, fluctuation, and burn emissions. This study identifies the important parameters for Daysmoke simulations of plume rise and seeks to understand...

  6. Planning for Retrospective Conversion: A Simulation of the OCLC TAPECON Service.

    ERIC Educational Resources Information Center

    Hanson, Heidi; Pronevitz, Gregory

    1989-01-01

    Describes a simulation of OCLC's TAPECON retrospective conversion service and its impact on an online catalog in a large university research library. The analysis includes results of Library of Congress Card Numbers, author/title, and title searches, and hit rates based on an analysis of OCLC and locally generated reports. (three references)…

  7. Equilibration and analysis of first-principles molecular dynamics simulations of water

    NASA Astrophysics Data System (ADS)

    Dawson, William; Gygi, François

    2018-03-01

    First-principles molecular dynamics (FPMD) simulations based on density functional theory are becoming increasingly popular for the description of liquids. In view of the high computational cost of these simulations, the choice of an appropriate equilibration protocol is critical. We assess two methods of estimation of equilibration times using a large dataset of first-principles molecular dynamics simulations of water. The Gelman-Rubin potential scale reduction factor [A. Gelman and D. B. Rubin, Stat. Sci. 7, 457 (1992)] and the marginal standard error rule heuristic proposed by White [Simulation 69, 323 (1997)] are evaluated on a set of 32 independent 64-molecule simulations of 58 ps each, amounting to a combined cumulative time of 1.85 ns. The availability of multiple independent simulations also allows for an estimation of the variance of averaged quantities, both within MD runs and between runs. We analyze atomic trajectories, focusing on correlations of the Kohn-Sham energy, pair correlation functions, number of hydrogen bonds, and diffusion coefficient. The observed variability across samples provides a measure of the uncertainty associated with these quantities, thus facilitating meaningful comparisons of different approximations used in the simulations. We find that the computed diffusion coefficient and average number of hydrogen bonds are affected by a significant uncertainty in spite of the large size of the dataset used. A comparison with classical simulations using the TIP4P/2005 model confirms that the variability of the diffusivity is also observed after long equilibration times. Complete atomic trajectories and simulation output files are available online for further analysis.

  8. Equilibration and analysis of first-principles molecular dynamics simulations of water.

    PubMed

    Dawson, William; Gygi, François

    2018-03-28

    First-principles molecular dynamics (FPMD) simulations based on density functional theory are becoming increasingly popular for the description of liquids. In view of the high computational cost of these simulations, the choice of an appropriate equilibration protocol is critical. We assess two methods of estimation of equilibration times using a large dataset of first-principles molecular dynamics simulations of water. The Gelman-Rubin potential scale reduction factor [A. Gelman and D. B. Rubin, Stat. Sci. 7, 457 (1992)] and the marginal standard error rule heuristic proposed by White [Simulation 69, 323 (1997)] are evaluated on a set of 32 independent 64-molecule simulations of 58 ps each, amounting to a combined cumulative time of 1.85 ns. The availability of multiple independent simulations also allows for an estimation of the variance of averaged quantities, both within MD runs and between runs. We analyze atomic trajectories, focusing on correlations of the Kohn-Sham energy, pair correlation functions, number of hydrogen bonds, and diffusion coefficient. The observed variability across samples provides a measure of the uncertainty associated with these quantities, thus facilitating meaningful comparisons of different approximations used in the simulations. We find that the computed diffusion coefficient and average number of hydrogen bonds are affected by a significant uncertainty in spite of the large size of the dataset used. A comparison with classical simulations using the TIP4P/2005 model confirms that the variability of the diffusivity is also observed after long equilibration times. Complete atomic trajectories and simulation output files are available online for further analysis.
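    A compact sketch of the Gelman-Rubin potential scale reduction factor applied to a set of independent runs, in the spirit of the 32-run analysis described above. The data here are synthetic; values near 1 indicate that the monitored quantity has equilibrated across runs.

```python
# Gelman-Rubin potential scale reduction factor (R-hat) across parallel runs.
import numpy as np

def psrf(chains):
    """chains: (m runs, n samples). Returns the Gelman-Rubin R-hat."""
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)  # between-run variance
    W = chains.var(axis=1, ddof=1).mean()    # mean within-run variance
    var_plus = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(6)
equilibrated = rng.normal(size=(32, 500))                   # common mean
stuck = equilibrated + rng.normal(scale=2.0, size=(32, 1))  # per-run offsets
print("R-hat, equilibrated runs:    ", psrf(equilibrated))  # ~1.0
print("R-hat, non-equilibrated runs:", psrf(stuck))         # >> 1
```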

  9. Effectiveness of online simulation training: Measuring faculty knowledge, perceptions, and intention to adopt.

    PubMed

    Kim, Sujeong; Park, Chang; O'Rourke, Jennifer

    2017-04-01

    Best practice standards of simulation recommend standardized simulation training for nursing faculty. Online training may offer an effective and more widely available alternative to in-person training. Using the Theory of Planned Behavior, this study evaluated the effectiveness of an online simulation training program, examining faculty's foundational knowledge of simulation as well as perceptions and intention to adopt. One-group pretest-posttest design. A large school of nursing with a main campus and five regional campuses in the Midwestern United States. Convenience sample of 52 faculty participants. Knowledge of foundational simulation principles was measured by pre/post-training module quizzes. Perceptions and the intention to adopt simulation were measured using the Faculty Attitudes and Intent to Use Related to the Human Patient Simulator questionnaire. There was a significant improvement in faculty knowledge after training and observable improvements in attitudes. Attitudes significantly influenced the intention to adopt simulation (B=2.54, p<0.001). Online simulation training provides an effective alternative for training large numbers of nursing faculty who seek to implement best practice standards within their institutions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Development and operation of a real-time simulation at the NASA Ames Vertical Motion Simulator

    NASA Technical Reports Server (NTRS)

    Sweeney, Christopher; Sheppard, Shirin; Chetelat, Monique

    1993-01-01

    The Vertical Motion Simulator (VMS) facility at the NASA Ames Research Center combines the largest vertical motion capability in the world with a flexible real-time operating system allowing research to be conducted quickly and effectively. Due to the diverse nature of the aircraft simulated and the large number of simulations conducted annually, the challenge for the simulation engineer is to develop an accurate real-time simulation in a timely, efficient manner. The SimLab facility and the software tools necessary for an operating simulation will be discussed. Subsequent sections will describe the development process through operation of the simulation; this includes acceptance of the model, validation, integration and production phases.

  11. Investigation of Rossby-number similarity in the neutral boundary layer using large-eddy simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohmstede, W.D.; Cederwall, R.T.; Meyers, R.E.

    One special case of particular interest, especially to theoreticians, is the steady-state, horizontally homogeneous, autobarotropic planetary boundary layer (PBL), hereafter referred to as the neutral boundary layer (NBL). The NBL is in fact a 'rare' atmospheric phenomenon, generally associated with high-wind situations. Nevertheless, there is a disproportionate interest in this problem because Rossby-number similarity theory provides a sound approach for addressing this issue. Rossby-number similarity theory has rather wide acceptance, but because of the rarity of the 'true' NBL state, there remains an inadequate experimental database for quantifying the constants associated with the Rossby-number similarity concept. Although it remains a controversial issue, it has been proposed that large-eddy simulation (LES) is an alternative to physical experimentation for obtaining basic atmospheric 'data'. The objective of the study reported here is to investigate Rossby-number similarity in the NBL using LES. Previous studies have not addressed Rossby-number similarity explicitly, although they made use of it in the interpretation of their results. The intent is to calculate several sets of NBL solutions that differ in their respective Rossby numbers and compare the results for similarity, or the lack of it. 14 refs., 1 fig.

  12. Multi-fidelity methods for uncertainty quantification in transport problems

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.

    2016-12-01

    We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, the re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods, and discuss the advantages of each approach.
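    A generic two-level estimator (the telescoping identity that underlies MLMC, not the rMLMC rescaling itself), with invented low- and high-fidelity models: many cheap samples estimate the bulk of the mean, and a few coupled expensive samples estimate only the low-variance correction.

```python
# Two-level Monte Carlo: E[HF] = E[LF] + E[HF - LF], estimated with many cheap
# LF samples and few coupled HF samples (both models are made up for the demo).
import numpy as np

rng = np.random.default_rng(7)

def low_fidelity(x):    # coarse model: cheap but biased
    return np.sin(x) + 0.1 * x

def high_fidelity(x):   # fine model: accurate but expensive
    return np.sin(x) + 0.1 * x + 0.05 * np.cos(5 * x)

# Level 0: large batch through the cheap model
x0 = rng.uniform(0.0, np.pi, size=100_000)
mean_low = low_fidelity(x0).mean()

# Level 1: small coupled batch estimates only the model discrepancy
x1 = rng.uniform(0.0, np.pi, size=200)
correction = (high_fidelity(x1) - low_fidelity(x1)).mean()

print("two-level estimate of E[high fidelity]:", mean_low + correction)
```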

  13. Numerical simulation of small-scale thermal convection in the atmosphere

    NASA Technical Reports Server (NTRS)

    Somerville, R. C. J.

    1973-01-01

    A Boussinesq system is integrated numerically in three dimensions and time in a study of nonhydrostatic convection in the atmosphere. Simulation of cloud convection is achieved by the inclusion of parametrized effects of latent heat and small-scale turbulence. The results are compared with the cell structure observed in Rayleigh-Benard laboratory convection experiments in air. At a Rayleigh number of 4000, the numerical model adequately simulates the experimentally observed evolution, including some prominent transients, of a flow from a randomly perturbed initial conductive state into the final state of steady large-amplitude two-dimensional rolls. At Rayleigh number 9000, the model reproduces the experimentally observed unsteady equilibrium of vertically coherent oscillatory waves superimposed on rolls.

  14. Large scale Direct Numerical Simulation of premixed turbulent jet flames at high Reynolds number

    NASA Astrophysics Data System (ADS)

    Attili, Antonio; Luca, Stefano; Lo Schiavo, Ermanno; Bisetti, Fabrizio; Creta, Francesco

    2016-11-01

    A set of direct numerical simulations of turbulent premixed jet flames at different Reynolds and Karlovitz numbers is presented. The simulations feature finite-rate chemistry with 16 species and 73 reactions and up to 22 billion grid points. The jet consists of a methane/air mixture with equivalence ratio ϕ = 0.7 and temperature varying between 500 and 800 K. The temperature and species concentrations in the coflow correspond to the equilibrium state of the burnt mixture. All the simulations are performed at 4 atm. The flame length, normalized by the jet width, decreases significantly as the Reynolds number increases. This is consistent with an increase of the turbulent flame speed due to the increased integral scale of turbulence. This behavior is typical of flames in the thin-reaction-zone regime, which are affected by turbulent transport in the preheat layer. Fractal dimension and topology of the flame surface, statistics of temperature gradients, and flame structure are investigated and the dependence of these quantities on the Reynolds number is assessed.

  15. DEM Based Modeling: Grid or TIN? The Answer Depends

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.; Moreno, H. A.

    2015-12-01

    The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course, with increasing watershed scale come corresponding increases in watershed complexity, including wide-ranging water management infrastructure and objectives, and ever-increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grid cells, or a coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase the required effort in model setup, parameter estimation, and coupling with forcing data, which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high-performance-computing watershed simulator.

  16. Physical properties of the HIV-1 capsid from all-atom molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Perilla, Juan R.; Schulten, Klaus

    2017-07-01

    Human immunodeficiency virus type 1 (HIV-1) infection is highly dependent on its capsid. The capsid is a large container, made of ~1,300 proteins with altogether 4 million atoms. Although the capsid proteins are all identical, they nevertheless arrange themselves into a largely asymmetric structure made of hexamers and pentamers. The large number of degrees of freedom and the lack of symmetry pose a challenge to studying the chemical details of the HIV capsid. Simulations of over 64 million atoms for over 1 μs allow us to conduct a comprehensive study of the chemical-physical properties of an empty HIV-1 capsid, including its electrostatics, vibrational and acoustic properties, and the effects of solvent (ions and water) on the capsid. The simulations reveal critical details about the capsid, with implications for biological function.

  17. Piloted simulation study of a balloon-assisted deployment of an aircraft at high altitude

    NASA Technical Reports Server (NTRS)

    Murray, James; Moes, Timothy; Norlin, Ken; Bauer, Jeffrey; Geenen, Robert; Moulton, Bryan; Hoang, Stephen

    1992-01-01

    A piloted simulation was used to study the feasibility of a balloon-assisted deployment of a research aircraft at high altitude. In the simulation study, an unmanned, modified sailplane was carried to 110,000 ft with a high-altitude balloon and released in a nose-down attitude. A remote pilot controlled the aircraft through a pullout and then executed a zoom climb to a trimmed, 1 g flight condition. A small parachute was used to limit the Mach number during the pullout to avoid adverse transonic effects. The use of a small rocket motor was studied for increasing the maximum attainable altitude. Aerodynamic modifications to the basic sailplane included applying supercritical airfoil gloves over the existing wing and tail surfaces. The aerodynamic model of the simulated aircraft was based on low Reynolds number wind tunnel tests and computational techniques, and included large Mach number and Reynolds number effects at high altitude. Parametric variations were performed to study the effects of launch altitude, gross weight, Mach number limit, and parachute size on the maximum attainable stabilized altitude. A test altitude of approximately 95,000 ft was attained, and altitudes in excess of 100,000 ft were attained.

  18. Giant Impacts on Earth-Like Worlds

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-05-01

    Earth has experienced a large number of impacts, from the cratering events that may have caused mass extinctions to the enormous impact believed to have formed the Moon. A new study examines whether our planet's impact history is typical for Earth-like worlds.

    N-Body Challenges

    [Figure: timeline placing the authors' simulations in the context of the history of our solar system. Quintana et al. 2016]

    The final stages of terrestrial planet formation are thought to be dominated by giant impacts of bodies in the protoplanetary disk. During this stage, protoplanets smash into one another and accrete, greatly influencing the growth, composition, and habitability of the final planets. There are two major challenges when simulating this N-body planet formation. The first is fragmentation: since computational time scales as N^2, simulating lots of bodies that split into many more bodies is very computationally intensive. For this reason, fragmentation is usually ignored; simulations instead assume perfect accretion during collisions.

    [Figure: total number of bodies remaining within the authors' simulations over time, with fragmentation included (grey) and ignored (red). Both simulations result in the same final number of bodies, but the ones that include fragmentation take more time to reach that final number. Quintana et al. 2016]

    The second challenge is that many-body systems are chaotic, which means it is necessary to run a large number of simulations to make statistical statements about outcomes.

    Adding Fragmentation

    A team of scientists led by Elisa Quintana (NASA NPP Senior Fellow at the Ames Research Center) has recently pushed at these challenges by modeling inner-planet formation using a code that does include fragmentation. The team ran 140 simulations with and 140 without the effects of fragmentation, using similar initial conditions, to understand how including fragmentation affects the outcome. Quintana and collaborators then used the fragmentation-inclusive simulations to examine the collisional histories of the Earth-like planets that form. Their goal is to understand whether our solar system's formation and evolution is typical or unique.

    How Common Are Giant Impacts?

    [Figure: histogram of the total number of giant impacts received by the 164 Earth-like worlds produced in the authors' fragmentation-inclusive simulations. Quintana et al. 2016]

    The authors find that including fragmentation does not affect the final number of planets formed in the simulation (an average of 3-4 in each system, consistent with our solar system's terrestrial planet count). But when fragmentation is included, fewer collisions end in merger, which roughly doubles the typical accretion timescales. So the effects of fragmentation influence the collisional history of the system and the length of time needed for the final system to form. Examining the 164 Earth analogs produced in the fragmentation-inclusive simulations, Quintana and collaborators find that impacts large enough to completely strip a planet's atmosphere are rare; fewer than 1% of the Earth-like worlds experienced one. But giant impacts able to strip ~50% of an Earth analog's atmosphere (roughly the energy of the giant impact thought to have formed our Moon) are more common. Almost all of the authors' Earth analogs experienced at least one giant impact of this size in the 2-Gyr simulation, and the average Earth-like world experienced ~3 such impacts. These results suggest that our planet's impact history, with the Moon-forming impact likely being the last giant impact Earth experienced, is fairly typical for Earth-like worlds. The outcomes also indicate that smaller impacts that are still potentially life-threatening are much more common than bulk atmospheric removal. Higher-resolution simulations could be used to examine such smaller impacts.

    Citation: Elisa V. Quintana et al 2016 ApJ 821 126. doi:10.3847/0004-637X/821/2/126

  19. The large scale microelectronics Computer-Aided Design and Test (CADAT) system

    NASA Technical Reports Server (NTRS)

    Gould, J. M.

    1978-01-01

    The CADAT system consists of a number of computer programs written in FORTRAN that provide the capability to simulate, lay out, analyze, and create the artwork for large-scale microelectronics. The function of each software component of the system is described, with references to its specific documentation.

  20. Myopic aberrations: Simulation based comparison of curvature and Hartmann Shack wavefront sensors

    NASA Astrophysics Data System (ADS)

    Basavaraju, Roopashree M.; Akondi, Vyas; Weddell, Stephen J.; Budihal, Raghavendra Prasad

    2014-02-01

    In comparison with a Hartmann Shack wavefront sensor, the curvature wavefront sensor is known for its higher sensitivity and greater dynamic range. The aim of this study is to numerically investigate the merits of using a curvature wavefront sensor, in comparison with a Hartmann Shack (HS) wavefront sensor, to analyze aberrations of the myopic eye. Aberrations were statistically generated using Zernike coefficient data of 41 myopic subjects obtained from the literature. The curvature sensor is relatively simple to implement, and the processing of extra- and intra-focal images was linearly resolved using the Radon transform to provide Zernike modes corresponding to statistically generated aberrations. Simulations of the HS wavefront sensor involve the evaluation of the focal spot pattern from simulated aberrations. Optical wavefronts were reconstructed using the slope geometry of Southwell. Monte Carlo simulation was used to find critical parameters for accurate wavefront sensing and to investigate the performance of HS and curvature sensors. The performance of the HS sensor is highly dependent on the number of subapertures, and the curvature sensor is largely dependent on the number of Zernike modes used to represent the aberration and the effective propagation distance. It is shown that in order to achieve high wavefront sensing accuracy while measuring aberrations of the myopic eye, a simpler and cost-effective curvature wavefront sensor is a reliable alternative to a high-resolution HS wavefront sensor with a large number of subapertures.

  1. Machine learning from computer simulations with applications in rail vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Taheri, Mehdi; Ahmadian, Mehdi

    2016-05-01

    The application of stochastic modelling for learning the behaviour of multibody dynamics (MBD) models is investigated. Post-processing data from a simulation run are used to train the stochastic model that estimates the relationship between model inputs (suspension relative displacement and velocity) and the output (sum of suspension forces). The stochastic model can be used to reduce the computational burden of the MBD model by replacing a computationally expensive subsystem in the model (the suspension subsystem). With minor changes, the stochastic modelling technique is able to learn the behaviour of a physical system and integrate its behaviour within MBD models. The technique is highly advantageous for MBD models where real-time simulations are necessary, or with models that have a large number of repeated substructures, e.g. modelling a train with a large number of railcars. The fact that the training data are acquired prior to the development of the stochastic model precludes conventional sampling plan strategies like Latin hypercube sampling, where simulations are performed using the inputs dictated by the sampling plan. Since the sampling plan greatly influences the overall accuracy and efficiency of the stochastic predictions, a sampling plan suitable for the process is developed, in which the most space-filling subset of the acquired data with ? number of sample points that best describes the dynamic behaviour of the system under study is selected as the training data, as sketched below.
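
    The space-filling subset selection described above can be implemented as a greedy maximin (farthest-point) design: repeatedly add the candidate sample farthest from everything already selected. A minimal version, assuming the post-processed simulation inputs are rows of a NumPy array (an illustration of the idea, not the authors' exact algorithm):

        import numpy as np

        def greedy_maximin_subset(X, k):
            """Pick k rows of X that greedily maximize the minimum distance
            to the already-selected rows (a space-filling training subset)."""
            selected = [0]                              # arbitrary starting sample
            d = np.linalg.norm(X - X[0], axis=1)        # distance to selected set
            for _ in range(k - 1):
                nxt = int(np.argmax(d))                 # farthest remaining sample
                selected.append(nxt)
                d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
            return X[selected]

        rng = np.random.default_rng(1)
        X = rng.uniform(size=(5000, 2))   # e.g. (relative displacement, velocity) pairs
        training = greedy_maximin_subset(X, k=200)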

  2. Factors Controlling the Properties of Multi-Phase Arctic Stratocumulus Clouds

    NASA Technical Reports Server (NTRS)

    Fridlind, Ann; Ackerman, Andrew; Menon, Surabi

    2005-01-01

    The 2004 Multi-Phase Arctic Cloud Experiment (M-PACE) IOP at the ARM NSA site focused on measuring the properties of autumn transition-season arctic stratus and the environmental conditions controlling them, including concentrations of heterogeneous ice nuclei. Our work aims to use a large-eddy simulation (LES) code with embedded size-resolved aerosol and cloud microphysics to identify factors controlling multi-phase arctic stratus. Our preliminary simulations of autumn transition-season clouds observed during the 1994 Beaufort and Arctic Seas Experiment (BASE) indicated that low concentrations of ice nuclei, which were not measured, may have significantly lowered liquid water content and thereby stabilized cloud evolution. However, cloud drop concentrations appeared to be virtually immune to changes in liquid water content, indicating an active Bergeron process with little effect of collection on drop number concentration. We will compare these results with preliminary simulations from October 8-13 during M-PACE. The sensitivity of cloud properties to uncertainty in other factors, such as large-scale forcings and aerosol profiles, will also be investigated. Based on the LES simulations with M-PACE data, preliminary results from the NASA GISS single-column model (SCM) will be used to examine the sensitivity of predicted cloud properties to changing cloud drop number concentrations for multi-phase arctic clouds. Present parametrizations assume fixed cloud droplet number concentrations; these will be modified using M-PACE data.

  3. Genomic prediction in animals and plants: simulation of data, validation, reporting, and benchmarking.

    PubMed

    Daetwyler, Hans D; Calus, Mario P L; Pong-Wong, Ricardo; de Los Campos, Gustavo; Hickey, John M

    2013-02-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits with fewer QTL variable selection did have some advantages. In the real data sets examined here all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing accuracy and bias of new methods to results from genomic best linear prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to application of genomic prediction in plants and animals.
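
    As a concrete illustration of the benchmarking recommended here, the sketch below compares a ridge-regression model (a stand-in for genomic best linear unbiased prediction, to which ridge regression on the marker matrix is closely related) against a sparsity-inducing lasso by cross-validated accuracy on simulated genotypes. The marker count, number of QTL, and heritability are illustrative assumptions, not the paper's settings:

        import numpy as np
        from sklearn.linear_model import Ridge, Lasso
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(42)
        n, p, n_qtl = 500, 5000, 50   # individuals, markers, causal loci (assumed)
        X = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # SNPs coded 0/1/2
        beta = np.zeros(p)
        beta[rng.choice(p, n_qtl, replace=False)] = rng.normal(size=n_qtl)
        g = X @ beta
        y = g + rng.normal(scale=g.std(), size=n)             # ~50% heritability

        for name, model in [("ridge (GBLUP-like)", Ridge(alpha=100.0)),
                            ("lasso (variable selection)", Lasso(alpha=0.1, max_iter=10000))]:
            r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
            print(f"{name}: mean cross-validated R^2 = {r2.mean():.3f}")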

  4. Genomic Prediction in Animals and Plants: Simulation of Data, Validation, Reporting, and Benchmarking

    PubMed Central

    Daetwyler, Hans D.; Calus, Mario P. L.; Pong-Wong, Ricardo; de los Campos, Gustavo; Hickey, John M.

    2013-01-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits with fewer QTL variable selection did have some advantages. In the real data sets examined here all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing accuracy and bias of new methods to results from genomic best linear prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to application of genomic prediction in plants and animals. PMID:23222650

  5. Large eddy simulation of pollutant gas dispersion with buoyancy ejected from building into an urban street canyon.

    PubMed

    Hu, L H; Xu, Y; Zhu, W; Wu, L; Tang, F; Lu, K H

    2011-09-15

    The dispersion of buoyancy-driven smoke soot and carbon monoxide (CO) gas ejected from a side building into an urban street canyon with an aspect ratio of 1 was investigated by large eddy simulation (LES) under a perpendicular wind flow. A strong buoyancy effect on such pollutant dispersion in the street canyon, which has not been revealed before, was studied. The buoyancy release rate was 5 MW. The wind speeds concerned ranged from 1 to 7.5 m/s. The characteristics of the flow pattern, the distribution of smoke soot and temperature, and the CO concentration were revealed by the LES simulation. A dimensionless Froude number (Fr) was first introduced here to characterize the pollutant dispersion with the buoyancy effect counteracting the wind. It was found that the flow pattern can be well categorized into three regimes. A regular characteristic large vortex was shown in the CO concentration contour when the wind velocity was higher than the critical re-entrainment value. A new formula was developed theoretically to show quantitatively that the critical re-entrainment wind velocities, u_c, for buoyancy sources at different floors were proportional to the -1/3 power of the characteristic height. The LES simulation results agreed well with the theoretical analysis. The critical Froude number was found to be a constant of 0.7.

  6. Optimizing patient flow in a large hospital surgical centre by means of discrete-event computer simulation models.

    PubMed

    Ferreira, Rodrigo B; Coelli, Fernando C; Pereira, Wagner C A; Almeida, Renan M V R

    2008-12-01

    This study used the discrete-event computer simulation methodology to model a large hospital surgical centre (SC), in order to analyse the impact of increases in the number of post-anaesthetic beds (PABs), of changes in surgical room scheduling strategies, and of increases in surgery numbers. The inputs used were: number of surgeries per day, type of surgical room scheduling, anaesthesia and surgery duration, surgical teams' specialty and number of PABs, and the main outputs were: number of surgeries per day, surgical rooms' use rate and blocking rate, surgical teams' use rate, patients' blocking rate, surgery delays (minutes) and the occurrence of postponed surgeries. Two basic strategies were implemented: in the first strategy, the number of PABs was increased under two assumptions: (a) following the scheduling plan actually used by the hospital (the 'rigid' scheduling - surgical rooms were previously assigned and assignments could not be changed) and (b) following a 'flexible' scheduling (surgical rooms, when available, could be freely used by any surgical team). In the second, the same analysis was performed, increasing the number of patients (up to the system 'feasible maximum') but fixing the number of PABs, in order to evaluate the impact of the number of patients on surgery delays. It was observed that the introduction of flexible scheduling or an increase in PABs would lead to a significant improvement in SC productivity.
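
    The core of such a model is a priority queue of timestamped events. The sketch below is a deliberately minimal discrete-event loop in which finished surgeries compete for a fixed pool of post-anaesthetic beds (PABs) and block when none is free; all counts and durations are illustrative assumptions, not the hospital's data:

        import heapq, random

        random.seed(0)
        N_PABS, N_SURGERIES = 4, 60
        # Event list: (time in minutes, event type); surgery end times are random here.
        events = [(random.uniform(0, 480), "end_surgery") for _ in range(N_SURGERIES)]
        heapq.heapify(events)

        free_pabs, blocked = N_PABS, 0
        while events:
            t, kind = heapq.heappop(events)
            if kind == "end_surgery":
                if free_pabs > 0:          # patient moves to a recovery bed
                    free_pabs -= 1
                    heapq.heappush(events, (t + random.uniform(30, 90), "leave_pab"))
                else:                      # no bed free: patient blocks the theatre
                    blocked += 1
            else:                          # "leave_pab": bed becomes free again
                free_pabs += 1

        print(f"{blocked} of {N_SURGERIES} patients blocked waiting for a PAB")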

  7. Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Duffy, C.

    2006-05-01

    Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface and subsurface properties, and meteorological forcings. The computational cost and complexity associated with these models increase with their tendency to accurately simulate the large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes imposes a smaller computational load, but this negatively affects the accuracy of model results and restricts physical realization of the problem. It is therefore imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy and predictive uncertainty in relation to various approximations of physical processes, (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables, and (c) which is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraints. An efficient implementation of this strategy becomes all the more important for the Great Salt Lake river basin, which is relatively large (~89,000 sq. km) and complex in terms of hydrologic and geomorphic conditions. The types and time scales of hydrologic processes that are dominant in different parts of the basin also differ: part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradients along the Wasatch front. Here we present the aforesaid modeling strategy along with an associated hydrologic modeling framework which facilitates a seamless, computationally efficient and accurate integration of the process model with the data model. The flexibility of this framework leads to implementation of multiscale, multiresolution, adaptive refinement/de-refinement and nested modeling simulations with the least computational burden. However, performing these simulations and the related calibration of these models over a large basin at higher spatio-temporal resolutions is computationally intensive and requires increasing computing power. With the advent of parallel processing architectures, high computing performance can be achieved by parallelization of the existing serial integrated-hydrologic-model code; the same model simulation then runs on a network of a large number of processors, reducing the time needed to obtain a solution. The paper also discusses the implementation of the integrated model on parallel processors, the mapping of the problem onto a multiprocessor environment, methods to incorporate coupling between hydrologic processes using interprocessor communication models, model data structures, and parallel numerical algorithms to obtain high performance.

  8. Turbulent thermal superstructures in Rayleigh-Bénard convection

    NASA Astrophysics Data System (ADS)

    Stevens, Richard J. A. M.; Blass, Alexander; Zhu, Xiaojue; Verzicco, Roberto; Lohse, Detlef

    2018-04-01

    We report the observation of superstructures, i.e., very large-scale and long-lived coherent structures in highly turbulent Rayleigh-Bénard convection up to Rayleigh number Ra = 10^9. We perform direct numerical simulations in horizontally periodic domains with aspect ratios up to Γ = 128. In the considered Ra regime the thermal superstructures have a horizontal extent of six to seven times the height of the domain and their size is independent of Ra. Many laboratory experiments and numerical simulations have focused on small-aspect-ratio cells in order to achieve the highest possible Ra. However, here we show that for very high Ra integral quantities such as the Nusselt number and volume-averaged Reynolds number only converge to the large-aspect-ratio limit around Γ ≈ 4, while horizontally averaged statistics such as standard deviation and kurtosis converge around Γ ≈ 8, the integral scale converges around Γ ≈ 32, and the peak position of the temperature variance and turbulent kinetic energy spectra only converge around Γ ≈ 64.

  9. Large-scale Individual-based Models of Pandemic Influenza Mitigation Strategies

    NASA Astrophysics Data System (ADS)

    Kadau, Kai; Germann, Timothy; Longini, Ira; Macken, Catherine

    2007-03-01

    We have developed a large-scale stochastic simulation model to investigate the spread of a pandemic strain of influenza virus through the U.S. population of 281 million people, to assess the likely effectiveness of various potential intervention strategies including antiviral agents, vaccines, and modified social mobility (including school closure and travel restrictions) [1]. The heterogeneous population structure and mobility are based on Census and Department of Transportation data where available. Our simulations demonstrate that, in a highly mobile population, restricting travel after an outbreak is detected is likely to delay slightly the time course of the outbreak without impacting the eventual number ill. For large basic reproductive numbers R0, we predict that multiple strategies in combination (involving both social and medical interventions) will be required to achieve a substantial reduction in illness rates. [1] T. C. Germann, K. Kadau, I. M. Longini, and C. A. Macken, Proc. Natl. Acad. Sci. (USA) 103, 5935-5940 (2006).

  10. Digital-analog quantum simulation of generalized Dicke models with superconducting circuits

    NASA Astrophysics Data System (ADS)

    Lamata, Lucas

    2017-03-01

    We propose a digital-analog quantum simulation of generalized Dicke models with superconducting circuits, including Fermi-Bose condensates, biased and pulsed Dicke models, for all regimes of light-matter coupling. We encode these classes of problems in a set of superconducting qubits coupled with a bosonic mode implemented by a transmission line resonator. Via digital-analog techniques, an efficient quantum simulation can be performed in state-of-the-art circuit quantum electrodynamics platforms, by suitable decomposition into analog qubit-bosonic blocks and collective single-qubit pulses through digital steps. Moreover, just a single global analog block would be needed during the whole protocol in most of the cases, superimposed with fast periodic pulses to rotate and detune the qubits. Therefore, a large number of digital steps may be attained with this approach, providing a reduced digital error. Additionally, the number of gates per digital step does not grow with the number of qubits, rendering the simulation efficient. This strategy paves the way for the scalable digital-analog quantum simulation of many-body dynamics involving bosonic modes and spin degrees of freedom with superconducting circuits.

  11. Digital-analog quantum simulation of generalized Dicke models with superconducting circuits

    PubMed Central

    Lamata, Lucas

    2017-01-01

    We propose a digital-analog quantum simulation of generalized Dicke models with superconducting circuits, including Fermi-Bose condensates, biased and pulsed Dicke models, for all regimes of light-matter coupling. We encode these classes of problems in a set of superconducting qubits coupled with a bosonic mode implemented by a transmission line resonator. Via digital-analog techniques, an efficient quantum simulation can be performed in state-of-the-art circuit quantum electrodynamics platforms, by suitable decomposition into analog qubit-bosonic blocks and collective single-qubit pulses through digital steps. Moreover, just a single global analog block would be needed during the whole protocol in most of the cases, superimposed with fast periodic pulses to rotate and detune the qubits. Therefore, a large number of digital steps may be attained with this approach, providing a reduced digital error. Additionally, the number of gates per digital step does not grow with the number of qubits, rendering the simulation efficient. This strategy paves the way for the scalable digital-analog quantum simulation of many-body dynamics involving bosonic modes and spin degrees of freedom with superconducting circuits. PMID:28256559

  12. Numerical methods for large eddy simulation of acoustic combustion instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton T.

    Acoustic combustion instabilities occur when interaction between the combustion process and acoustic modes in a combustor results in periodic oscillations in pressure, velocity, and heat release. If sufficiently large in amplitude, these instabilities can cause operational difficulties or the failure of combustor hardware. In many situations, the dominant instability is the result of the interaction between a low frequency acoustic mode of the combustor and the large scale hydrodynamics. Large eddy simulation (LES), therefore, is a promising tool for the prediction of these instabilities, since both the low frequency acoustic modes and the large scale hydrodynamics are well resolved in LES. Problems with the tractability of such simulations arise, however, due to the difficulty of solving the compressible Navier-Stokes equations efficiently at low Mach number and due to the large number of acoustic periods that are often required for such instabilities to reach limit cycles. An implicit numerical method for the solution of the compressible Navier-Stokes equations has been developed which avoids the acoustic CFL restriction, allowing for significant efficiency gains at low Mach number, while still resolving the low frequency acoustic modes of interest. In the limit of a uniform grid the numerical method causes no artificial damping of acoustic waves. New, non-reflecting boundary conditions have also been developed for use with the characteristic-based approach of Poinsot and Lele (1992). The new boundary conditions are implemented in a manner which allows for significant reduction of the computational domain of an LES by eliminating the need to perform LES in regions where one-dimensional acoustics significantly affect the instability but details of the hydrodynamics do not. These new numerical techniques have been demonstrated in an LES of an experimental combustor. The new techniques are shown to be an efficient means of performing LES of acoustic combustion instabilities and are shown to accurately predict the occurrence and frequency of the dominant mode of the instability observed in the experiment.

  13. On the theory and simulation of multiple Coulomb scattering of heavy-charged particles.

    PubMed

    Striganov, S I

    2005-01-01

    The Molière theory of multiple Coulomb scattering is modified to take into account the difference between processes of scattering off atomic nuclei and off electrons. A simple analytical expression for the angular distribution of charged particles passing through a thick absorber is found. It does not assume any special form for the differential scattering cross section and has a wider range of applicability than a Gaussian approximation. A well-known method to simulate multiple Coulomb scattering is based on treating 'soft' and 'hard' collisions differently. The angular deflection in a large number of 'soft' collisions is sampled using the proposed distribution function, while a small number of 'hard' collisions are simulated directly. A boundary between 'hard' and 'soft' collisions is defined, providing precise sampling of the scattering angle (at the 1% level) with a small number of 'hard' collisions. A corresponding simulation module takes into account the projectile and nucleus charge distributions and the exact kinematics of projectile-electron interactions.
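
    For comparison, the Gaussian approximation that the modified theory improves on is usually parametrized by the Highland/PDG width of the projected scattering angle, theta_0 = (13.6 MeV / (beta c p)) z sqrt(x/X0) [1 + 0.038 ln(x/X0)]. Below is a sampling sketch of that 'soft' Gaussian core (the standard textbook formula, not the paper's refined distribution):

        import numpy as np

        def highland_theta0(p_mev, beta, z, x_over_X0):
            """Width (rad) of the projected multiple-scattering angle in the
            Gaussian approximation (Highland/PDG parametrization)."""
            return (13.6 / (beta * p_mev)) * abs(z) * np.sqrt(x_over_X0) \
                   * (1.0 + 0.038 * np.log(x_over_X0))

        rng = np.random.default_rng(7)
        theta0 = highland_theta0(p_mev=200.0, beta=0.95, z=1, x_over_X0=0.05)
        angles = rng.normal(0.0, theta0, size=100_000)  # 'soft' collisions only; the rare
                                                        # 'hard' collisions need explicit sampling
        print(f"theta0 = {theta0 * 1e3:.2f} mrad")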

  14. Annual Research Briefs: 1995

    NASA Technical Reports Server (NTRS)

    1995-01-01

    This report contains the 1995 annual progress reports of the Research Fellows and students of the Center for Turbulence Research (CTR). In 1995 CTR continued its concentration on the development and application of large-eddy simulation to complex flows, development of novel modeling concepts for engineering computations in the Reynolds averaged framework, and turbulent combustion. In large-eddy simulation, a number of numerical and experimental issues have surfaced which are being addressed. The first group of reports in this volume are on large-eddy simulation. A key finding in this area was the revelation of possibly significant numerical errors that may overwhelm the effects of the subgrid-scale model. We also commissioned a new experiment to support the LES validation studies. The remaining articles in this report are concerned with Reynolds averaged modeling, studies of turbulence physics and flow generated sound, combustion, and simulation techniques. Fundamental studies of turbulent combustion using direct numerical simulations which started at CTR will continue to be emphasized. These studies and their counterparts carried out during the summer programs have had a noticeable impact on combustion research world wide.

  15. Computer Simulation of Protein-Protein and Protein-Peptide Interactions

    DTIC Science & Technology

    1983-12-08

    A full molecular dynamics simulation is performed, with resulting dipolar relaxation. However, this is prohibitive when a large number of... [The remainder of this record is unrecoverable scanning residue from the report's cover letter and distribution statement; only the fragment above survives.]

  16. A scalable moment-closure approximation for large-scale biochemical reaction networks

    PubMed Central

    Kazeroonian, Atefeh; Theis, Fabian J.; Hasenauer, Jan

    2017-01-01

    Motivation: Stochastic molecular processes are a leading cause of cell-to-cell variability. Their dynamics are often described by continuous-time discrete-state Markov chains and simulated using stochastic simulation algorithms. As these stochastic simulations are computationally demanding, ordinary differential equation models for the dynamics of the statistical moments have been developed. The number of state variables of these approximating models, however, grows at least quadratically with the number of biochemical species. This limits their application to small- and medium-sized processes. Results: In this article, we present a scalable moment-closure approximation (sMA) for the simulation of statistical moments of large-scale stochastic processes. The sMA exploits the structure of the biochemical reaction network to reduce the covariance matrix. We prove that sMA yields approximating models whose number of state variables depends predominantly on local properties, i.e. the average node degree of the reaction network, instead of the overall network size. The resulting complexity reduction is assessed by studying a range of medium- and large-scale biochemical reaction networks. To evaluate the approximation accuracy and the improvement in computational efficiency, we study models for JAK2/STAT5 signalling and NFκB signalling. Our method is applicable to generic biochemical reaction networks and we provide an implementation, including an SBML interface, which renders the sMA easily accessible. Availability and implementation: The sMA is implemented in the open-source MATLAB toolbox CERENA and is available from https://github.com/CERENADevelopers/CERENA. Contact: jan.hasenauer@helmholtz-muenchen.de or atefeh.kazeroonian@tum.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881983
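
    To make the moment-equation idea concrete: for a birth-death process with constant production k and first-order degradation gamma*x, the mean and variance satisfy the closed system dmu/dt = k - gamma*mu and dvar/dt = k + gamma*mu - 2*gamma*var. These equations are exact here because the propensities are linear; closures such as the sMA become necessary once propensities are nonlinear. A minimal integration sketch, with illustrative rate values:

        import numpy as np
        from scipy.integrate import solve_ivp

        k, gamma = 10.0, 0.5   # production and degradation rates (illustrative)

        def moment_odes(t, y):
            mu, var = y
            return [k - gamma * mu,                      # d(mean)/dt
                    k + gamma * mu - 2 * gamma * var]    # d(variance)/dt

        sol = solve_ivp(moment_odes, (0.0, 20.0), y0=[0.0, 0.0])
        mu_ss, var_ss = sol.y[:, -1]
        print(f"steady state: mean = {mu_ss:.2f}, variance = {var_ss:.2f}")
        # Both approach k/gamma = 20, the Poisson steady state.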

  17. Python-based geometry preparation and simulation visualization toolkits for STEPS

    PubMed Central

    Chen, Weiliang; De Schutter, Erik

    2014-01-01

    STEPS is a stochastic reaction-diffusion simulation engine that implements a spatial extension of Gillespie's Stochastic Simulation Algorithm (SSA) in complex tetrahedral geometries. An extensive Python-based interface is provided to STEPS so that it can interact with the large number of scientific packages in Python. However, a gap existed between the interfaces of these packages and the STEPS user interface, where supporting toolkits could reduce the amount of scripting required for research projects. This paper introduces two new supporting toolkits that support geometry preparation and visualization for STEPS simulations. PMID:24782754

  18. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; Werth, Charles J.; Valocchi, Albert J.

    2016-07-01

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydrogeophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with "big data" processing and numerous large-scale numerical simulations. To tackle such difficulties, the principal component geostatistical approach (PCGA) has been proposed as a "Jacobian-free" inversion method that requires far fewer forward simulation runs per iteration than the numbers of unknown parameters and measurements needed in traditional inversion methods. PCGA can be conveniently linked to any multiphysics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data were compressed by the zeroth temporal moment of the breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Only about 2000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.
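
    The compression step is easy to reproduce: the zeroth temporal moment of a breakthrough curve is m0 = integral of c(t) dt, and the mean travel time is the normalized first moment m1/m0. A sketch on a synthetic curve (the MRI voxel data themselves are not reproduced here):

        import numpy as np

        t = np.linspace(0.0, 100.0, 1001)             # time, minutes
        c = np.exp(-(t - 30.0) ** 2 / (2 * 8.0**2))   # synthetic breakthrough curve

        m0 = np.trapz(c, t)                           # zeroth temporal moment
        mean_tt = np.trapz(t * c, t) / m0             # normalized first moment
        print(f"m0 = {m0:.2f}, mean travel time = {mean_tt:.1f} min")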

  19. Ion-kinetic simulations of D-3He gas-filled inertial confinement fusion target implosions with moderate to large Knudsen number

    DOE PAGES

    Larroche, O.; Rinderknecht, H. G.; Rosenberg, M. J.; ...

    2016-01-06

    Experiments designed to investigate the transition to non-collisional behavior in D-3He-gas inertial confinement fusion target implosions display increasingly large discrepancies with respect to simulations by standard hydrodynamics codes as the expected ion mean-free-paths λ_c increase with respect to the target radius R (i.e., when the Knudsen number N_K = λ_c/R grows). To take large N_K's properly into account, multi-ion-species Vlasov-Fokker-Planck computations of the inner gas in the capsules have been performed for two different values of N_K, one moderate and one large. The results, including nuclear yield, reactivity-weighted ion temperatures, nuclear emissivities, and surface brightness, have been compared with the experimental data and with the results of hydrodynamical simulations, some of which include an ad hoc modeling of kinetic effects. The experimental results are quite accurately rendered by the kinetic calculations in the smaller-N_K case, much better than by the hydrodynamical calculations. The kinetic effects at play in this case are thus correctly understood. However, in the higher-N_K case, the agreement is much worse, and the remaining discrepancies are shown to arise from kinetic phenomena (e.g., inter-species diffusion) occurring at the gas-pusher interface, which should be investigated in future work.

  20. An Integrated Management Support and Production Control System for Hardwood Forest Products

    Treesearch

    Guillermo A. Mendoza; Roger J. Meimban; William Sprouse; William G. Luppold; Philip A. Araman

    1991-01-01

    Spreadsheet and simulation models are tools which enable users to analyze a large number of variables affecting hardwood material utilization and profit in a systematic fashion. This paper describes two spreadsheet models, SEASaw and SEAIn, and a hardwood sawmill simulator. SEASaw is designed to estimate the amount of conversion from timber to lumber, while SEAIn is a...

  1. The Graphical Display of Simulation Results, with Applications to the Comparison of Robust IRT Estimators of Ability.

    ERIC Educational Resources Information Center

    Thissen, David; Wainer, Howard

    Simulation studies of the performance of (potentially) robust statistical estimation produce large quantities of numbers in the form of performance indices of the various estimators under various conditions. This report presents a multivariate graphical display used to aid in the digestion of the plentiful results in a current study of Item…

  2. Grain growth simulation of [001] textured YBCO films grown on (001) substrates with large lattice misfit: Prediction of misorientations of the remaining boundaries

    NASA Astrophysics Data System (ADS)

    Tsai, Jack W. H.; Ling, Shiun; Rodriguez, Julio C.; Mustapha, Zarina; Chan, Siu-Wai

    2001-04-01

    We study the effects of (1) the variation of grain boundary energy with misorientation and (2) the large lattice misfit (>3%) between the films and substrates on grain growth in films by the method of Monte Carlo simulation. The results from the grain growth simulation in YBa2Cu3O7-x (YBCO) films were found to concur with previous experimental observations of preferred grain orientations for YBCO films deposited on various substrates such as (001) magnesium oxide (MgO) and (001) yttria-stabilized zirconia (YSZ). The simulation has helped us to identify three factors influencing the competition of these [001] tilt boundaries: (1) the relative depths of local minima in the boundary energy vs. misorientation curve, (2) the number of combinations of coincidence epitaxy (CE) orientations contributing to the exact misorientation for each of the high-angle-but-low-energy (HABLE) boundaries, and (3) the number of combinations of CE orientations within the angular ranges bracketing each of the exact HABLE boundaries. Hence, these factors can be applied to clarify the origin of special misorientations observed experimentally.
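
    The engine behind such studies is typically a Q-state Potts model: every lattice site carries a grain-orientation label, and a site adopts a neighbour's label when doing so does not raise the boundary energy. The generic sketch below uses a single isotropic boundary energy; the study itself weights boundaries by misorientation, which this toy version omits:

        import numpy as np

        rng = np.random.default_rng(3)
        L, Q, STEPS = 64, 32, 200_000
        spins = rng.integers(Q, size=(L, L))   # grain-orientation labels

        def boundary_energy(s, i, j, q):
            """Count of unlike nearest neighbours if site (i, j) had label q."""
            nbrs = (s[(i+1) % L, j], s[(i-1) % L, j], s[i, (j+1) % L], s[i, (j-1) % L])
            return sum(q != n for n in nbrs)

        for _ in range(STEPS):   # zero-temperature Potts (grain-growth) dynamics
            i, j = rng.integers(L, size=2)
            di, dj = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
            new = spins[(i + di) % L, (j + dj) % L]          # a neighbour's label
            if boundary_energy(spins, i, j, new) <= boundary_energy(spins, i, j, spins[i, j]):
                spins[i, j] = new

        print(f"{len(np.unique(spins))} grain orientations remain")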

  3. Bayesian modelling of uncertainties of Monte Carlo radiative-transfer simulations

    NASA Astrophysics Data System (ADS)

    Beaujean, Frederik; Eggers, Hans C.; Kerzendorf, Wolfgang E.

    2018-07-01

    One of the big challenges in astrophysics is the comparison of complex simulations to observations. As many codes do not directly generate observables (e.g. hydrodynamic simulations), the last step in the modelling process is often a radiative-transfer treatment. For this step, the community relies increasingly on Monte Carlo radiative transfer due to the ease of implementation and scalability with computing power. We consider simulations in which the number of photon packets is Poisson distributed, while the weight assigned to a single photon packet follows any distribution of choice. We show how to estimate the statistical uncertainty of the sum of weights in each bin from the output of a single radiative-transfer simulation. Our Bayesian approach produces a posterior distribution that is valid for any number of packets in a bin, even zero packets, and is easy to implement in practice. Our analytic results for a large number of packets show that we generalize existing methods that are valid only in limiting cases. The statistical problem considered here appears in identical form in a wide range of Monte Carlo simulations including particle physics and importance sampling. It is particularly powerful in extracting information when the available data are sparse or quantities are small.
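
    The simplest frequentist baseline for the problem posed here follows from compound-Poisson statistics: if the packet count in a bin is Poisson and the weights w_i are independent, the variance of the bin total is E[N] E[w^2], which a single simulation estimates by the sum of squared weights. A sketch of that baseline (the paper's Bayesian posterior additionally remains valid for bins with few or zero packets):

        import numpy as np

        rng = np.random.default_rng(5)

        def bin_estimate(weights):
            """Sum of packet weights in one bin and its one-run error bar."""
            total = weights.sum()
            sigma = np.sqrt(np.sum(weights ** 2))   # compound-Poisson variance estimate
            return total, sigma

        n_packets = rng.poisson(200)                                 # Poisson packet count
        weights = rng.lognormal(mean=0.0, sigma=1.0, size=n_packets)
        total, sigma = bin_estimate(weights)
        print(f"bin flux = {total:.1f} +/- {sigma:.1f}")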

  4. Principal Component Relaxation Mode Analysis of an All-Atom Molecular Dynamics Simulation of Human Lysozyme

    NASA Astrophysics Data System (ADS)

    Nagai, Toshiki; Mitsutake, Ayori; Takano, Hiroshi

    2013-02-01

    A new relaxation mode analysis method, which is referred to as the principal component relaxation mode analysis method, has been proposed to handle a large number of degrees of freedom of protein systems. In this method, principal component analysis is carried out first and then relaxation mode analysis is applied to a small number of principal components with large fluctuations. To reduce the contribution of fast relaxation modes in these principal components efficiently, we have also proposed a relaxation mode analysis method using multiple evolution times. The principal component relaxation mode analysis method using two evolution times has been applied to an all-atom molecular dynamics simulation of human lysozyme in aqueous solution. Slow relaxation modes and corresponding relaxation times have been appropriately estimated, demonstrating that the method is applicable to protein systems.

  5. KCTF evolution of trans-neptunian binaries: Connecting formation to observation

    NASA Astrophysics Data System (ADS)

    Porter, Simon B.; Grundy, William M.

    2012-08-01

    Recent observational surveys of trans-neptunian binary (TNB) systems have dramatically increased the number of known mutual orbits. Our Kozai Cycle Tidal Friction (KCTF) simulations of synthetic trans-neptunian binaries show that tidal dissipation in these systems can completely reshape their original orbits. Specifically, solar torques should have dramatically accelerated the semimajor axis decay and circularization timescales of primordial (or recently excited) TNBs. As a result, our initially random distribution of TNBs in our simulations evolved to have a large population of tight circular orbits. This tight circular population appears for a range of TNO physical properties, though a strong gravitational quadrupole can prevent some from fully circularizing. We introduce a stability parameter to predict the effectiveness of KCTF on a TNB orbit, and show that a number of known TNBs must have a large gravitational quadrupole to be stable.

  6. Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.

    2017-12-01

    Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte-Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data. Performance of the statistical model is illustrated through comparisons of generated realizations with the `true' numerical simulations. Finally, we demonstrate how these realizations can be used to determine statistically optimal locations for further interrogation of the subsurface.

  7. Low-resolution simulations of vesicle suspensions in 2D

    NASA Astrophysics Data System (ADS)

    Kabacaoğlu, Gökberk; Quaife, Bryan; Biros, George

    2018-03-01

    Vesicle suspensions appear in many biological and industrial applications. These suspensions are characterized by rich and complex dynamics of vesicles due to their interaction with the bulk fluid, and their large deformations and nonlinear elastic properties. Many existing state-of-the-art numerical schemes can resolve such complex vesicle flows. However, even when using provably optimal algorithms, these simulations can be computationally expensive, especially for suspensions with a large number of vesicles. These high computational costs can limit the use of simulations for parameter exploration, optimization, or uncertainty quantification. One way to reduce the cost is to use low-resolution discretizations in space and time. However, it is well known that simply reducing the resolution results in vesicle collisions, numerical instabilities, and often in erroneous results. In this paper, we investigate the effect of a number of algorithmic empirical fixes (which are commonly used by many groups) in an attempt to make low-resolution simulations more stable and more predictive. Based on our empirical studies for a number of flow configurations, we propose a scheme that attempts to integrate these fixes in a systematic way. This low-resolution scheme is an extension of our previous work [51,53]. Our low-resolution correction algorithms (LRCA) include anti-aliasing and membrane reparametrization to avoid spurious oscillations in the vesicles' membranes, adaptive time stepping and a repulsion force to handle vesicle collisions, and correction of the vesicles' area and arc-length to maintain physical vesicle shapes. We perform a systematic error analysis by comparing the low-resolution simulations of dilute and dense suspensions with their high-fidelity, fully resolved counterparts. We observe that the LRCA enables both efficient and statistically accurate low-resolution simulations of vesicle suspensions, while being 10× to 100× faster.

  8. Simulation Studies as Designed Experiments: The Comparison of Penalized Regression Models in the “Large p, Small n” Setting

    PubMed Central

    Chaibub Neto, Elias; Bare, J. Christopher; Margolin, Adam A.

    2014-01-01

    New algorithms are continuously proposed in computational biology. Performance evaluation of novel methods is important in practice. Nonetheless, the field experiences a lack of rigorous methodology aimed to systematically and objectively evaluate competing approaches. Simulation studies are frequently used to show that a particular method outperforms another. Often times, however, simulation studies are not well designed, and it is hard to characterize the particular conditions under which different methods perform better. In this paper we propose the adoption of well established techniques in the design of computer and physical experiments for developing effective simulation studies. By following best practices in planning of experiments we are better able to understand the strengths and weaknesses of competing algorithms leading to more informed decisions about which method to use for a particular task. We illustrate the application of our proposed simulation framework with a detailed comparison of the ridge-regression, lasso and elastic-net algorithms in a large scale study investigating the effects on predictive performance of sample size, number of features, true model sparsity, signal-to-noise ratio, and feature correlation, in situations where the number of covariates is usually much larger than sample size. Analysis of data sets containing tens of thousands of features but only a few hundred samples is nowadays routine in computational biology, where “omics” features such as gene expression, copy number variation and sequence data are frequently used in the predictive modeling of complex phenotypes such as anticancer drug response. The penalized regression approaches investigated in this study are popular choices in this setting and our simulations corroborate well established results concerning the conditions under which each one of these methods is expected to perform best while providing several novel insights. PMID:25289666
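
    In the spirit of treating the simulation study as a designed experiment, the sketch below crosses two factors (sample size and true model sparsity) in a small full factorial and records elastic-net test accuracy in each cell. The factor levels are illustrative; the study itself varies more factors (signal-to-noise ratio, feature correlation) and uses replicates:

        import itertools
        import numpy as np
        from sklearn.linear_model import ElasticNet
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        p = 2000   # number of features, much larger than any sample size below

        for n, n_true in itertools.product([100, 300], [10, 200]):   # 2x2 design
            X = rng.normal(size=(n, p))
            beta = np.zeros(p)
            beta[:n_true] = 1.0                      # true model sparsity level
            y = X @ beta + rng.normal(scale=2.0, size=n)
            Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
            r2 = ElasticNet(alpha=0.5, max_iter=10000).fit(Xtr, ytr).score(Xte, yte)
            print(f"n = {n:3d}, true nonzeros = {n_true:3d}: test R^2 = {r2:.2f}")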

  9. Validating the simulation of large-scale parallel applications using statistical characteristics

    DOE PAGES

    Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...

    2016-03-01

    Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
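
    The contrast between coarse and fine-grained validation can be sketched in a few lines. Below, two synthetic per-event latency samples (placeholders for parsed real and simulated traces; the distributions are invented for illustration) have nearly equal totals, so a percent-error check passes, while a distributional test still detects the mismatch:

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical per-event latencies from a real trace and a simulated trace
# of the same benchmark (stand-ins for real trace parsers).
real = np.random.default_rng(1).gamma(2.0, 1.00, 10_000)
sim = np.random.default_rng(2).gamma(2.4, 0.85, 10_000)

# Coarse-grained validation: percent error of total execution time.
pct_err = abs(sim.sum() - real.sum()) / real.sum() * 100

# Fine-grained validation: compare the full latency distributions.
res = ks_2samp(real, sim)
print(f"total-time error {pct_err:.1f}%, KS statistic {res.statistic:.3f}")
```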

  10. An investigation of small scales of turbulence in a boundary layer at high Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Wallace, James M.; Ong, L.; Balint, J.-L.

    1993-01-01

    The assumption that turbulence at large wave-numbers is isotropic and has universal spectral characteristics which are independent of the flow geometry, at least for high Reynolds numbers, has been a cornerstone of closure theories as well as of the most promising recent development in the effort to predict turbulent flows, viz. large eddy simulations. This hypothesis was first advanced by Kolmogorov based on the supposition that turbulent kinetic energy cascades down the scales (up the wave-numbers) of turbulence and that, if the number of these cascade steps is sufficiently large (i.e. the wave-number range is large), then the effects of anisotropies at the large scales are lost in the energy transfer process. Experimental attempts were repeatedly made to verify this fundamental assumption. However, Van Atta has recently suggested that an examination of the scalar and velocity gradient fields is necessary to definitively verify this hypothesis or prove it to be unfounded. Of course, this must be carried out in a flow with a sufficiently high Reynolds number, so that the separation of scales needed for local isotropy at large wave-numbers is unambiguously possible. An opportunity to use our 12-sensor hot-wire probe to address this issue directly was made available at the 80'x120' wind tunnel at the NASA Ames Research Center, which is normally used for full-scale aircraft tests. An initial report on this high Reynolds number experiment and progress toward its evaluation is presented.
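
    For reference, the universal small-scale form at issue is Kolmogorov's inertial-range spectrum; in standard notation (stated from textbook theory, not quoted from the report):

```latex
E(k) = C_K \, \varepsilon^{2/3} k^{-5/3},
\qquad \eta = \left( \nu^{3} / \varepsilon \right)^{1/4},
```

    where ε is the mean dissipation rate, ν the kinematic viscosity, C_K the Kolmogorov constant, and η the Kolmogorov (dissipation) scale; local isotropy is expected only when the inertial range 1/L ≪ k ≪ 1/η spans a wide band, which is why a very-high-Reynolds-number facility was needed.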

  11. Fast and Precise Emulation of Stochastic Biochemical Reaction Networks With Amplified Thermal Noise in Silicon Chips.

    PubMed

    Kim, Jaewook; Woo, Sung Sik; Sarpeshkar, Rahul

    2018-04-01

    The analysis and simulation of complex interacting biochemical reaction pathways in cells is important in all of systems biology and medicine. Yet, the dynamics of even a modest number of noisy or stochastic coupled biochemical reactions is extremely time-consuming to simulate. In large part, this is because of the high cost of random-number and Poisson-process generation and the presence of stiff, coupled, nonlinear differential equations. Here, we demonstrate that we can amplify inherent thermal noise in chips to emulate randomness physically, thus alleviating these costs significantly. Concurrently, molecular flux in thermodynamic biochemical reactions maps to thermodynamic electronic current in a transistor such that stiff nonlinear biochemical differential equations are emulated exactly in compact, digitally programmable, highly parallel analog "cytomorphic" transistor circuits. For even small-scale systems involving just 80 stochastic reactions, our 0.35-μm BiCMOS chips yield a 311× speedup in the simulation time of Gillespie's stochastic algorithm over COPASI, a fast biochemical-reaction software simulator that is widely used in computational biology; they yield a 15,500× speedup over equivalent MATLAB stochastic simulations. The chip emulation results are consistent with these software simulations over a large range of signal-to-noise ratios. Most importantly, our physical emulation of Poisson chemical dynamics does not involve any inherently sequential processes and updates such that, unlike prior exact simulation approaches, they are parallelizable, asynchronous, and enable even more speedup for larger-size networks.
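
    For context, the algorithm being accelerated is Gillespie's stochastic simulation algorithm (SSA), whose serial structure (draw a waiting time, pick a reaction, update counts) is what the chips replace with physical noise. A minimal textbook implementation on a toy two-reaction network (not the chip's or COPASI's code):

```python
import numpy as np

def gillespie_ssa(x0, stoich, rates, t_end, rng=np.random.default_rng()):
    """Exact SSA: stoich is the (n_reactions, n_species) update matrix,
    rates is a list of propensity functions of the state vector."""
    t, x = 0.0, np.array(x0, dtype=float)
    path = [(t, x.copy())]
    while t < t_end:
        a = np.array([r(x) for r in rates])   # propensities
        a0 = a.sum()
        if a0 == 0.0:
            break                             # no reaction can fire
        t += rng.exponential(1.0 / a0)        # waiting time to next event
        j = rng.choice(len(a), p=a / a0)      # which reaction fires
        x += stoich[j]
        path.append((t, x.copy()))
    return path

# Toy network: A -> B with k1 = 1.0, B -> A with k2 = 0.5
stoich = np.array([[-1, 1], [1, -1]])
rates = [lambda x: 1.0 * x[0], lambda x: 0.5 * x[1]]
trace = gillespie_ssa([100, 0], stoich, rates, t_end=5.0)
```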

  12. Effects of thermal fluctuations and fluid compressibility on hydrodynamic synchronization of microrotors at finite oscillatory Reynolds number: a multiparticle collision dynamics simulation study.

    PubMed

    Theers, Mario; Winkler, Roland G

    2014-08-28

    We investigate the emergent dynamical behavior of hydrodynamically coupled microrotors by means of multiparticle collision dynamics (MPC) simulations. The two rotors are confined in a plane and move along circles driven by active forces. Comparing simulations to theoretical results based on linearized hydrodynamics, we demonstrate that time-dependent hydrodynamic interactions lead to synchronization of the rotational motion. Thermal noise implies large fluctuations of the phase-angle difference between the rotors, but synchronization prevails and the ensemble-averaged time dependence of the phase-angle difference agrees well with analytical predictions. Moreover, we demonstrate that compressibility effects lead to longer synchronization times. In addition, the relevance of the inertia terms of the Navier-Stokes equation is discussed, specifically the linear unsteady acceleration term characterized by the oscillatory Reynolds number ReT. We illustrate the continuous breakdown of synchronization with increasing ReT, in analogy to the continuous breakdown of the scallop theorem with decreasing Reynolds number.

  13. A numerical approach for simulating fluid structure interaction of flexible thin shells undergoing arbitrarily large deformations in complex domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilmanov, Anvar, E-mail: agilmano@umn.edu; Le, Trung Bao, E-mail: lebao002@umn.edu; Sotiropoulos, Fotis, E-mail: fotis@umn.edu

    We present a new numerical methodology for simulating fluid–structure interaction (FSI) problems involving thin flexible bodies in an incompressible fluid. The FSI algorithm uses the Dirichlet–Neumann partitioning technique. The curvilinear immersed boundary method (CURVIB) is coupled with a rotation-free finite element (FE) model for thin shells, enabling the efficient simulation of FSI problems with arbitrarily large deformation. Turbulent flow problems are handled using large-eddy simulation with the dynamic Smagorinsky model in conjunction with a wall model to reconstruct boundary conditions near immersed boundaries. The CURVIB and FE solvers are coupled together on the flexible solid–fluid interfaces, where the structural nodal positions, displacements, velocities and loads are calculated and exchanged between the two solvers. Loose and strong coupling FSI schemes are employed, enhanced by the Aitken acceleration technique to ensure robust coupling and fast convergence, especially for low mass ratio problems. The coupled CURVIB-FE-FSI method is validated by applying it to simulate two FSI problems involving thin flexible structures: 1) vortex-induced vibrations of a cantilever mounted in the wake of a square cylinder at different mass ratios and at low Reynolds number; and 2) the more challenging high Reynolds number problem involving the oscillation of an inverted elastic flag. For both cases the computed results are in excellent agreement with previous numerical simulations and/or experimental measurements. Grid convergence studies are carried out for both the cantilever and inverted-flag problems, which demonstrate the convergence of the CURVIB-FE-FSI method. Finally, the capability of the new methodology in simulations of complex cardiovascular flows is demonstrated by applying it to simulate the FSI of a tri-leaflet, prosthetic heart valve in an anatomic aorta and under physiologic pulsatile conditions.
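
    The Aitken acceleration mentioned above is a dynamic under-relaxation of the partitioned fixed-point iteration. A generic sketch follows; the solver calls are placeholders for illustration, not the CURVIB-FE API:

```python
import numpy as np

def aitken_fsi_step(fluid_solve, solid_solve, d, omega0=0.5, tol=1e-8, max_it=50):
    """Strongly coupled FSI sub-iteration with Aitken dynamic relaxation.
    fluid_solve(d): interface loads given displacements d (placeholder).
    solid_solve(f): interface displacements given loads f (placeholder)."""
    omega, r_old = omega0, None
    for _ in range(max_it):
        d_tilde = solid_solve(fluid_solve(d))   # one fixed-point sweep
        r = d_tilde - d                         # interface residual
        if np.linalg.norm(r) < tol:
            break
        if r_old is not None:                   # Aitken update of the factor
            dr = r - r_old
            omega = -omega * np.dot(r_old, dr) / np.dot(dr, dr)
        d, r_old = d + omega * r, r
    return d

# toy demo with linear placeholder "solvers" on a 3-dof interface
A = np.array([[0.5, 0.1, 0.0], [0.0, 0.4, 0.1], [0.1, 0.0, 0.3]])
d_star = aitken_fsi_step(lambda d: A @ d + 1.0, lambda f: 0.8 * f, np.zeros(3))
```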

  14. Large-Scale Flows and Magnetic Fields Produced by Rotating Convection in a Quasi-Geostrophic Model of Planetary Cores

    NASA Astrophysics Data System (ADS)

    Guervilly, C.; Cardin, P.

    2017-12-01

    Convection is the main heat transport process in the liquid cores of planets. The convective flows are thought to be turbulent and constrained by rotation (corresponding to high Reynolds numbers Re and low Rossby numbers Ro). Under these conditions, and in the absence of magnetic fields, the convective flows can produce coherent Reynolds stresses that drive persistent large-scale zonal flows. The formation of large-scale flows has crucial implications for the thermal evolution of planets and the generation of large-scale magnetic fields. In this work, we explore this problem with numerical simulations using a quasi-geostrophic approximation to model convective and zonal flows at Re ∼ 10^4 and Ro ∼ 10^-4 for Prandtl numbers relevant for liquid metals (Pr ∼ 0.1). The formation of intense multiple zonal jets strongly affects the convective heat transport, leading to the formation of a mean temperature staircase. We also study the generation of magnetic fields by the quasi-geostrophic flows at low magnetic Prandtl numbers.

  15. THE SPECTRAL AMPLITUDE OF STELLAR CONVECTION AND ITS SCALING IN THE HIGH-RAYLEIGH-NUMBER REGIME

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Featherstone, Nicholas A.; Hindman, Bradley W., E-mail: feathern@colorado.edu

    2016-02-10

    Convection plays a central role in the dynamics of any stellar interior, and yet its operation remains largely hidden from direct observation. As a result, much of our understanding concerning stellar convection necessarily derives from theoretical and computational models. The Sun is, however, exceptional in that regard. The wealth of observational data afforded by its proximity provides a unique test bed for comparing convection models against observations. When such comparisons are carried out, surprising inconsistencies between those models and observations become apparent. Both photospheric and helioseismic measurements suggest that convection simulations may overestimate convective flow speeds on large spatial scales. Moreover, many solar convection simulations have difficulty reproducing the observed solar differential rotation owing to this apparent overestimation. We present a series of three-dimensional stellar convection simulations designed to examine how the amplitude and spectral distribution of convective flows are established within a star’s interior. While these simulations are nonmagnetic and nonrotating in nature, they demonstrate two robust phenomena. When run with sufficiently high Rayleigh number, the integrated kinetic energy of the convection becomes effectively independent of thermal diffusion, but the spectral distribution of that kinetic energy remains sensitive to both of these quantities. A simulation that has converged to a diffusion-independent value of kinetic energy will divide that energy between spatial scales such that low-wavenumber power is overestimated and high-wavenumber power is underestimated relative to a comparable system possessing higher Rayleigh number. We discuss the implications of these results in light of the current inconsistencies between models and observations.

  16. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.

  17. DNS of Flows over Periodic Hills using a Discontinuous-Galerkin Spectral-Element Method

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo T.; Murman, Scott M.

    2014-01-01

    Direct numerical simulation (DNS) of turbulent compressible flows is performed using a higher-order space-time discontinuous-Galerkin finite-element method. The numerical scheme is validated by performing DNS of the evolution of the Taylor-Green vortex and turbulent flow in a channel. The higher-order method is shown to provide increased accuracy relative to low-order methods at a given number of degrees of freedom. The turbulent flow over a periodic array of hills in a channel is simulated at Reynolds number 10,595 using an 8th-order scheme in space and a 4th-order scheme in time. These results are validated against previous large eddy simulation (LES) results. A preliminary analysis provides insight into how these detailed simulations can be used to improve Reynolds-averaged Navier-Stokes (RANS) modeling.

  18. Methodologies for extracting kinetic constants for multiphase reacting flow simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, S.L.; Lottes, S.A.; Golchert, B.

    1997-03-01

    Flows in industrial reactors often involve complex reactions of many species. A computational fluid dynamics (CFD) computer code, ICRKFLO, was developed to simulate multiphase, multi-species reacting flows. ICRKFLO uses a hybrid technique to calculate species concentration and reaction for a large number of species in a reacting flow. This technique includes a hydrodynamic and reacting flow simulation with a small but sufficient number of lumped reactions to compute flow-field properties, followed by a calculation of local reaction kinetics and transport of many subspecies (on the order of 10 to 100). Kinetic rate constants of the numerous subspecies chemical reactions are difficult to determine. A methodology has been developed to extract kinetic constants from experimental data efficiently. A flow simulation of a fluid catalytic cracking (FCC) riser was successfully used to demonstrate this methodology.

  19. Large Eddy Simulation in the Computation of Jet Noise

    NASA Technical Reports Server (NTRS)

    Mankbadi, R. R.; Goldstein, M. E.; Povinelli, L. A.; Hayder, M. E.; Turkel, E.

    1999-01-01

    Noise can be predicted by solving the full (time-dependent) compressible Navier-Stokes equations (FCNSE) with the computational domain extended to the far field. The fluctuating near field of the jet produces propagating pressure waves that generate far-field sound, so the fluctuating flow field as a function of time is needed in order to calculate sound from first principles. Extending the computational domain to the far field, however, is not computationally feasible. At the high Reynolds numbers of technological interest, turbulence has a large range of scales, and direct numerical simulation (DNS) cannot capture the small scales of turbulence. Since the large scales are more efficient than the small scales in radiating sound, the emphasis is thus on calculating the sound radiated by the large scales.

  20. Physical properties of the HIV-1 capsid from all-atom molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perilla, Juan R.; Schulten, Klaus

    Human immunodeficiency virus type 1 (HIV-1) infection is highly dependent on its capsid. The capsid is a large container, made of about 1,300 proteins with altogether 4 million atoms. Though the capsid proteins are all identical, they nevertheless arrange themselves into a largely asymmetric structure made of hexamers and pentamers. The large number of degrees of freedom and lack of symmetry pose a challenge to studying the chemical details of the HIV capsid. Simulations of over 64 million atoms for over 1 μs allow us to conduct a comprehensive study of the chemical–physical properties of an empty HIV-1 capsid, including its electrostatics, vibrational and acoustic properties, and the effects of solvent (ions and water) on the capsid. Furthermore, the simulations reveal critical details about the capsid with implications to biological function.

  1. Physical properties of the HIV-1 capsid from all-atom molecular dynamics simulations

    DOE PAGES

    Perilla, Juan R.; Schulten, Klaus

    2017-07-19

    Human immunodeficiency virus type 1 (HIV-1) infection is highly dependent on its capsid. The capsid is a large container, made of about 1,300 proteins with altogether 4 million atoms. Though the capsid proteins are all identical, they nevertheless arrange themselves into a largely asymmetric structure made of hexamers and pentamers. The large number of degrees of freedom and lack of symmetry pose a challenge to studying the chemical details of the HIV capsid. Simulations of over 64 million atoms for over 1 μs allow us to conduct a comprehensive study of the chemical–physical properties of an empty HIV-1 capsid, including its electrostatics, vibrational and acoustic properties, and the effects of solvent (ions and water) on the capsid. Furthermore, the simulations reveal critical details about the capsid with implications to biological function.

  2. GENASIS Mathematics : Object-oriented manifolds, operations, and solvers for large-scale physics simulations

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2018-01-01

    The large-scale computer simulation of a system of physical fields governed by partial differential equations requires some means of approximating the mathematical limit of continuity. For example, conservation laws are often treated with a 'finite-volume' approach in which space is partitioned into a large number of small 'cells,' with fluxes through cell faces providing an intuitive discretization modeled on the mathematical definition of the divergence operator. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of simple meshes and the evolution of generic conserved currents thereon, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes inaugurate the Mathematics division of our developing astrophysics simulation code GENASIS (General Astrophysical Simulation System), which will be expanded over time to include additional meshing options, mathematical operations, solver types, and solver variations appropriate for many multiphysics applications.
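
    In its simplest 1-D form, the finite-volume idea described above reduces to updating each cell average by the net flux through its two faces. A minimal sketch in plain Python/NumPy (an illustration of the discretization, not the GENASIS Fortran classes):

```python
import numpy as np

def advect_fv(u, a, dx, dt, steps):
    """First-order upwind finite-volume update for u_t + a u_x = 0 (a > 0):
    each cell average changes by the net flux through its two faces,
    mirroring the discrete divergence. Periodic domain."""
    for _ in range(steps):
        flux = a * u                                   # face flux F_{i+1/2} = a u_i
        u = u - dt / dx * (flux - np.roll(flux, 1))    # net flux in/out of cell i
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)                   # Gaussian pulse
u1 = advect_fv(u0, a=1.0, dx=x[1] - x[0], dt=0.4 * (x[1] - x[0]), steps=250)
```

    Because the same face flux is subtracted from one cell and added to its neighbor, the scheme conserves the total of u exactly, which is the point of the finite-volume construction.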

  3. Microscale simulations of shock interaction with large assembly of particles for developing point-particle models

    NASA Astrophysics Data System (ADS)

    Thakur, Siddharth; Neal, Chris; Mehta, Yash; Sridharan, Prasanth; Jackson, Thomas; Balachandar, S.

    2017-01-01

    Microscale simulations are being conducted for developing point-particle and other related models that are needed for the mesoscale and macroscale simulations of explosive dispersal of particles. These particle models are required to compute (a) the instantaneous aerodynamic force on the particle and (b) the instantaneous net heat transfer between the particle and its surroundings. A strategy for a sequence of microscale simulations has been devised that allows systematic development of hybrid surrogate models applicable at conditions representative of the explosive dispersal application. The ongoing microscale simulations examine the dependence of particle force on (a) Mach number, (b) Reynolds number, and (c) volume fraction (for different particle arrangements such as cubic, face-centered cubic (FCC), body-centered cubic (BCC) and random). Future plans include sequences of fully resolved microscale simulations of an array of particles subjected to more realistic time-dependent flows that progressively better approximate the actual problem of explosive dispersal. Additionally, the effects of particle shape, size, and number in the simulation are being investigated, as is the dependence of transient particle deformation on various parameters, including (a) particle material, (b) medium material, (c) multiple particles, (d) incoming shock pressure and speed, (e) medium-to-particle impedance ratio, and (f) particle shape and orientation relative to the shock.

  4. Monte Carlo simulation of steady state shock structure including cosmic ray mediation and particle escape

    NASA Technical Reports Server (NTRS)

    Ellison, D. C.; Jones, F. C.; Eichler, D.

    1983-01-01

    Both hydrodynamic calculations (Drury and Volk, 1981, and Axford et al., 1982) and kinetic simulations imply the existence of thermal subshocks in high-Mach-number cosmic-ray-mediated shocks. The injection efficiency of particles from the thermal background into the diffusive shock-acceleration process is determined in part by the sharpness and compression ratio of these subshocks. Results are reported for a Monte Carlo simulation that includes both the back reaction of accelerated particles on the inflowing plasma, producing a smoothing of the shock transition, and the free escape of particles allowing arbitrarily large overall compression ratios in high-Mach-number steady-state shocks. Energy spectra and estimates of the proportion of thermal ions accelerated to high energy are obtained.

  5. Monte Carlo simulation of steady state shock structure including cosmic ray mediation and particle escape

    NASA Astrophysics Data System (ADS)

    Ellison, D. C.; Jones, F. C.; Eichler, D.

    1983-08-01

    Both hydrodynamic calculations (Drury and Volk, 1981, and Axford et al., 1982) and kinetic simulations imply the existence of thermal subshocks in high-Mach-number cosmic-ray-mediated shocks. The injection efficiency of particles from the thermal background into the diffusive shock-acceleration process is determined in part by the sharpness and compression ratio of these subshocks. Results are reported for a Monte Carlo simulation that includes both the back reaction of accelerated particles on the inflowing plasma, producing a smoothing of the shock transition, and the free escape of particles allowing arbitrarily large overall compression ratios in high-Mach-number steady-state shocks. Energy spectra and estimates of the proportion of thermal ions accelerated to high energy are obtained.

  6. Software Engineering for Scientific Computer Simulations

    NASA Astrophysics Data System (ADS)

    Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.

    2004-11-01

    Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.

  7. Morphological changes in polycrystalline Fe after compression and release

    NASA Astrophysics Data System (ADS)

    Gunkelmann, Nina; Tramontina, Diego R.; Bringa, Eduardo M.; Urbassek, Herbert M.

    2015-02-01

    Despite a number of large-scale molecular dynamics simulations of shock-compressed iron, the morphological properties of simulated recovered samples are still unexplored. Key questions remain open in this area, including the role of dislocation motion and deformation twinning in shear stress release. In this study, we present simulations of homogeneous uniaxial compression and recovery of large polycrystalline iron samples. Our results reveal significant recovery of the body-centered cubic grains with some deformation twinning driven by shear stress, in agreement with experimental results by Wang et al. [Sci. Rep. 3, 1086 (2013)]. The twin fraction agrees reasonably well with a semi-analytical model which assumes a critical shear stress for twinning. On reloading, twins disappear and the material reaches a very low strength value.

  8. Simulating spin models on GPU

    NASA Astrophysics Data System (ADS)

    Weigel, Martin

    2011-09-01

    Over the last couple of years it has been realized that the vast computational power of graphics processing units (GPUs) could be harnessed for purposes other than the video game industry. This power, which at least nominally exceeds that of current CPUs by large factors, results from the relative simplicity of the GPU architectures as compared to CPUs, combined with a large number of parallel processing units on a single chip. To benefit from this setup for general computing purposes, the problems at hand need to be structured to profit from the inherent parallelism and hierarchical structure of memory accesses. In this contribution I discuss the performance potential for simulating spin models, such as the Ising model, on GPU as compared to conventional simulations on CPU.
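
    To make the parallelism concrete, the sketch below runs a checkerboard Metropolis sweep of the 2-D Ising model: all neighbors of a site of one color belong to the other color, so an entire sublattice can be updated at once. This is exactly the pattern a GPU exploits; it is written here in vectorized NumPy as a stand-in for GPU kernels:

```python
import numpy as np

def checkerboard_sweep(s, beta, rng):
    """One Metropolis sweep of the 2-D Ising model (J = 1, periodic BCs),
    updating the two checkerboard sublattices in turn so every update
    within a color is independent of the others."""
    ii, jj = np.indices(s.shape)
    for color in (0, 1):
        mask = (ii + jj) % 2 == color
        nb = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
              np.roll(s, 1, 1) + np.roll(s, -1, 1))
        dE = 2.0 * s * nb                      # energy change if spin flips
        flip = mask & (rng.random(s.shape) < np.exp(-beta * dE))
        s = np.where(flip, -s, s)
    return s

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(128, 128))
for _ in range(100):
    spins = checkerboard_sweep(spins, beta=0.44, rng=rng)  # near the critical point
```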

  9. Perturbations of the Richardson number field by gravity waves

    NASA Technical Reports Server (NTRS)

    Wurtele, M. G.; Sharman, R. D.

    1985-01-01

    An analytic solution is presented for a stratified fluid of arbitrary constant Richardson number. By computer-aided analysis, the perturbation fields, including that of the Richardson number, can be calculated. The results of the linear analytic model were compared with nonlinear simulations, leading to the following conclusions: (1) the perturbations in the Richardson number field, when small, are produced primarily by the perturbations of the shear; (2) perturbations in the Richardson number field, even when small, are not symmetric, the increase being significantly larger than the decrease (the linear analytic solution and the nonlinear simulations both confirm this result); (3) as the perturbations grow, this asymmetry increases, but more so in the nonlinear simulations than in the linear analysis; (4) for large perturbations of the shear flow, the static stability, as represented by N^2, is the dominating mechanism, becoming zero or negative and producing convective overturning; and (5) the conventional measure of linearity in lee-wave theory, NH/U, is no longer the critical parameter (it is suggested that (H/u_0)(du_0/dz) takes on this role in a shearing flow).

  10. Statistical properties of the Jukes-Holmquist method of estimating the number of nucleotide substitutions: reply to Holmquist and Conroy's criticism.

    PubMed

    Nei, M; Tateno, Y

    1981-01-01

    Conducting computer simulations, Nei and Tateno (1978) have shown that Jukes and Holmquist's (1972) method of estimating the number of nucleotide substitutions tends to give an overestimate, and the estimate obtained has a large variance. Holmquist and Conroy (1980) repeated some parts of our simulation and claim that the overestimation of nucleotide substitutions in our paper occurred mainly because we used selected data. Examination of Holmquist and Conroy's simulation indicates that their results are essentially the same as ours when the Jukes-Holmquist method is used, but since they used a different method of computation their estimates of nucleotide substitutions differed substantially from ours. Another problem in Holmquist and Conroy's Letter is that they confused the expected number of nucleotide substitutions with the number in a sample. This confusion has resulted in a number of unnecessary arguments. They also criticized our χ² measure, but this criticism is apparently due to a misunderstanding of the assumptions of our method and a failure to use our method in the way we described. We believe that our earlier conclusions remain unchanged.

  11. The Prominent Role of the Upstream Conditions on the Large-scale Motions of a Turbulent Channel Flow

    NASA Astrophysics Data System (ADS)

    Castillo, Luciano; Dharmarathne, Suranga; Tutkun, Murat; Hutchins, Nicholas

    2017-11-01

    In this study we investigate how upstream perturbations in a turbulent channel flow impact the downstream flow evolution, especially the large-scale motions. Direct numerical simulations were carried out at a friction Reynolds number Reτ = 394. Spanwise-varying inlet blowing perturbations were imposed at 1 πh from the inlet. The flow field is decomposed into its constituent scales using proper orthogonal decomposition. The large-scale motions and the small-scale motions of the flow field are separated at a cut-off mode number, Mc, defined as the mode number at which the fraction of energy recovered is 55%. It is found that the Reynolds stresses are increased by the blowing perturbations and that large-scale motions are responsible for more than 70% of the increase of the streamwise component of the Reynolds normal stress. Surprisingly, 90% of the Reynolds shear stress is due to the energy augmentation of large-scale motions. It is shown that inlet perturbations impact the downstream flow by means of the large-scale motions.

  12. Three-dimensional flow over a conical afterbody containing a centered propulsive jet: A numerical simulation

    NASA Technical Reports Server (NTRS)

    Deiwert, G. S.; Rothmund, H.

    1984-01-01

    The supersonic flow field over a body of revolution incident to the free stream is simulated numerically on a large array processor (the CDC CYBER 205). The configuration is composed of a cone-cylinder forebody followed by a conical afterbody from which emanates a centered, supersonic propulsive jet. The free-stream Mach number is 2, the jet-exit Mach number is 2.5, and the jet-to-free-stream static pressure ratio is 3. Both the external flow and the exhaust are ideal air at a common total temperature.

  13. [Research on adaptive quasi-linear viscoelastic model for nonlinear viscoelastic properties of in vivo soft tissues].

    PubMed

    Wang, Heng; Sang, Yuanjun

    2017-10-01

    The mechanical behavior modeling of human soft biological tissues is a key issue for a large number of medical applications, such as surgery simulation, surgery planning, and diagnosis. To develop a biomechanical model of human soft tissues under large deformation for surgery simulation, the adaptive quasi-linear viscoelastic (AQLV) model was proposed and applied to human forearm soft tissues through indentation tests. An incremental ramp-and-hold test was carried out to calibrate the model parameters. To verify the predictive ability of the AQLV model, the incremental ramp-and-hold test, a single large-amplitude ramp-and-hold test, and a sinusoidal cyclic test at large strain amplitude were adopted in this study. Results showed that the AQLV model could predict the test results under all three loading conditions. It is concluded that the AQLV model is feasible for describing the nonlinear viscoelastic properties of in vivo soft tissues under large deformation, and it is a promising candidate soft-tissue model for surgery-simulation or diagnosis software.
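
    For orientation, the quasi-linear viscoelastic family that AQLV extends is Fung's QLV model, which convolves a reduced relaxation function with the instantaneous elastic response (standard textbook form, stated here as background rather than quoted from the paper):

```latex
\sigma(t) = \int_{0}^{t} G(t-\tau)\,
\frac{\partial \sigma^{e}(\varepsilon)}{\partial \varepsilon}\,
\frac{\partial \varepsilon}{\partial \tau}\, d\tau ,
\qquad G(0) = 1,
```

    where σ^e(ε) is the instantaneous elastic response and G(t) the reduced relaxation function; in an adaptive variant the fitted parameters are allowed to vary with strain level, which is presumably how the model tracks the large-deformation behavior reported above.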

  14. Contrail Formation in Aircraft Wakes Using Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Paoli, R.; Helie, J.; Poinsot, T. J.; Ghosal, S.

    2002-01-01

    In this work we analyze the issue of the formation of condensation trails ("contrails") in the near-field of an aircraft wake. The basic configuration consists of an exhaust engine jet interacting with a wing-tip trailing vortex. The procedure adopted relies on a mixed Eulerian/Lagrangian two-phase flow approach; a simple micro-physics model for ice growth has been used to couple the ice and vapor phases. Large-eddy simulations have been carried out at a realistic flight Reynolds number to evaluate the effects of turbulent mixing and wake vortex dynamics on ice-growth characteristics and vapor thermodynamic properties.

  15. Statistical simulation of the magnetorotational dynamo.

    PubMed

    Squire, J; Bhattacharjee, A

    2015-02-27

    Turbulence and dynamo induced by the magnetorotational instability (MRI) are analyzed using quasilinear statistical simulation methods. It is found that homogeneous turbulence is unstable to a large-scale dynamo instability, which saturates to an inhomogeneous equilibrium with a strong dependence on the magnetic Prandtl number (Pm). Despite its enormously reduced nonlinearity, the dependence of the angular momentum transport on Pm in the quasilinear model is qualitatively similar to that of nonlinear MRI turbulence. This demonstrates the importance of the large-scale dynamo and suggests how dramatically simplified models may be used to gain insight into the astrophysically relevant regimes of very low or high Pm.

  16. Effects of operator splitting and low Mach-number correction in turbulent mixing transition simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grinstein, F. F.; Saenz, J. A.; Dolence, J. C.

    In this paper, transition and turbulence decay with the Taylor–Green vortex have been effectively used to demonstrate emulation of high-Reynolds-number (Re) physical dissipation through the numerical convective effects of various non-oscillatory finite-volume algorithms for implicit large eddy simulation (ILES), e.g., using the Godunov-based Eulerian adaptive-mesh-refinement code xRAGE. Inverse-chevron shock tube experiment simulations have also been used to assess xRAGE-based ILES for shock-driven turbulent mixing, compared with available simulation and laboratory data. The previous assessments are extended to evaluate new directionally-unsplit high-order algorithms in xRAGE, including a correction to address the well-known issue of excessive numerical diffusion of shock-capturing (e.g., Godunov-type) schemes at low Mach numbers. The unsplit options for hydrodynamics in xRAGE are discussed in detail, followed by fundamental tests with representative shock problems. Basic issues of transition to turbulence and turbulent mixing are discussed, and results of simulations of high-Re turbulent flow and mixing in canonical test cases are reported. Finally, compared to the directional-split cases, and for each grid resolution considered, the unsplit results exhibit transition to turbulence with much higher effective Re, and significantly more so with the low Mach number correction.

  17. DNS/LES Simulations of Separated Flows at High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Balakumar, P.

    2015-01-01

    Direct numerical simulation (DNS) and large-eddy simulation (LES) of flow through a periodic channel with a constriction are performed using the dynamic Smagorinsky model at two Reynolds numbers of 2800 and 10,595. The LES equations are solved using higher-order compact schemes. DNS is performed for the lower Reynolds number case using a fine grid, and the data are used to validate the LES results obtained with a coarse and a medium-size grid. LES is also performed for the higher Reynolds number case using a coarse and a medium-size grid. The results are compared with an existing reference data set. The DNS and LES results agreed well with the reference data. Reynolds stresses, sub-grid eddy viscosity, and the budgets for the turbulent kinetic energy are also presented. It is found that the turbulent fluctuations in the normal and spanwise directions have the same magnitude. The turbulent kinetic energy budget shows that the production peaks near the separation-point region, and the production-to-dissipation ratio is very high, on the order of five, in this region. It is also observed that the production is balanced by the advection, diffusion, and dissipation in the shear-layer region. The dominant term is the turbulent diffusion, which is about two times the molecular dissipation.

  18. Effects of operator splitting and low Mach-number correction in turbulent mixing transition simulations

    DOE PAGES

    Grinstein, F. F.; Saenz, J. A.; Dolence, J. C.; ...

    2018-06-07

    In this paper, transition and turbulence decay with the Taylor–Green vortex have been effectively used to demonstrate emulation of high-Reynolds-number (Re) physical dissipation through the numerical convective effects of various non-oscillatory finite-volume algorithms for implicit large eddy simulation (ILES), e.g., using the Godunov-based Eulerian adaptive-mesh-refinement code xRAGE. Inverse-chevron shock tube experiment simulations have also been used to assess xRAGE-based ILES for shock-driven turbulent mixing, compared with available simulation and laboratory data. The previous assessments are extended to evaluate new directionally-unsplit high-order algorithms in xRAGE, including a correction to address the well-known issue of excessive numerical diffusion of shock-capturing (e.g., Godunov-type) schemes at low Mach numbers. The unsplit options for hydrodynamics in xRAGE are discussed in detail, followed by fundamental tests with representative shock problems. Basic issues of transition to turbulence and turbulent mixing are discussed, and results of simulations of high-Re turbulent flow and mixing in canonical test cases are reported. Finally, compared to the directional-split cases, and for each grid resolution considered, the unsplit results exhibit transition to turbulence with much higher effective Re, and significantly more so with the low Mach number correction.

  19. Aerodynamic Simulation of Ice Accretion on Airfoils

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Addy, Harold E., Jr.; Bragg, Michael B.; Busch, Greg T.; Montreuil, Emmanuel

    2011-01-01

    This report describes recent improvements in aerodynamic scaling and simulation of ice accretion on airfoils. Ice accretions were classified into four types on the basis of aerodynamic effects: roughness, horn, streamwise, and spanwise ridge. The NASA Icing Research Tunnel (IRT) was used to generate ice accretions within these four types using both subscale and full-scale models. Large-scale, pressurized wind-tunnel testing was performed using a 72-in.- (1.83-m-) chord, NACA 23012 airfoil model with high-fidelity, three-dimensional castings of the IRT ice accretions. Performance data were recorded over Reynolds numbers from 4.5×10^6 to 15.9×10^6 and Mach numbers from 0.10 to 0.28. Lower-fidelity ice-accretion simulation methods were developed and tested on an 18-in.- (0.46-m-) chord NACA 23012 airfoil model in a small-scale wind tunnel at a lower Reynolds number. The aerodynamic accuracy of the lower-fidelity, subscale ice simulations was validated against the full-scale results for a factor of 4 reduction in model scale and a factor of 8 reduction in Reynolds number. This research has defined the level of geometric fidelity required for artificial ice shapes to yield aerodynamic performance results to within a known level of uncertainty and has culminated in a proposed methodology for subscale iced-airfoil aerodynamic simulation.

  20. Effective Integration of Earth Observation Data and Flood Modeling for Rapid Disaster Response: The Texas 2015 Case

    NASA Astrophysics Data System (ADS)

    Schumann, G.

    2016-12-01

    Routinely obtaining real-time 2-D inundation patterns of a flood event at a meaningful spatial resolution and over large scales is at the moment only feasible with either operational aircraft flights or satellite imagery. Of course, having model simulations of floodplain inundation available to complement the remote sensing data is highly desirable, both for event re-analysis and for forecasting event inundation. Using the Texas 2015 flood disaster, we demonstrate the value of multi-scale EO data for large-scale 2-D floodplain inundation modeling and forecasting. A dynamic re-analysis of the Texas 2015 flood disaster was run using a 2-D flood model developed for accurate large-scale simulations. We simulated the major rivers entering the Gulf of Mexico and used flood maps produced from both optical and SAR satellite imagery to examine regional model sensitivities and assess associated performance. It was demonstrated that satellite flood maps can complement model simulations and add value, although this is largely dependent on a number of important factors, such as image availability, regional landscape topology, and model uncertainty. In the preferred case where model uncertainty is high, landscape topology is complex (i.e., an urbanized coastal area), and satellite flood maps are available (in the case of SAR, for instance), satellite data can significantly reduce model uncertainty by identifying the "best possible" model parameter set. More often, however, model uncertainty is low and spatially contiguous flooding can be mapped from satellites easily enough, such as in rural large inland river floodplains; in those cases the satellites add little value. Nevertheless, where a large number of flood maps are available, model credibility can be increased substantially. In the case presented here this was true for at least 60% of the many thousands of kilometers of simulated river flow length for which satellite flood maps existed. The next step of this project is to employ a technique termed the "targeted observation" approach, an assimilation-based procedure that quantifies the impact observations have on model predictions at the local scale and along the entire river system when assimilated with the model at specific "overpass" locations.

  1. Cross-flow turbines: progress report on physical and numerical model studies at large laboratory scale

    NASA Astrophysics Data System (ADS)

    Wosnik, Martin; Bachant, Peter

    2016-11-01

    Cross-flow turbines show potential in marine hydrokinetic (MHK) applications. A research focus is on accurately predicting device performance and wake evolution to improve turbine array layouts for maximizing overall power output, i.e., minimizing wake interference or taking advantage of constructive wake interaction. Experiments were carried out with large laboratory-scale cross-flow turbines, with diameters D of O(1 m), using a turbine test bed in a large cross-section tow tank designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds-number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. Several turbines of varying solidity were employed, including the UNH Reference Vertical Axis Turbine (RVAT) and a 1:6 scale model of the DOE-Sandia Reference Model 2 (RM2) turbine. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. Results are presented for the simulation of performance and wake dynamics of cross-flow turbines and compared with experiments and body-fitted-mesh, blade-resolving CFD. Supported by NSF-CBET Grant 1150797, Sandia National Laboratories.

  2. Joint resonant CMB power spectrum and bispectrum estimation

    NASA Astrophysics Data System (ADS)

    Meerburg, P. Daniel; Münchmeyer, Moritz; Wandelt, Benjamin

    2016-02-01

    We develop the tools necessary to assess the statistical significance of resonant features in the CMB correlation functions, combining power spectrum and bispectrum measurements. This significance is typically addressed by running a large number of simulations to derive the probability density function (PDF) of the feature-amplitude in the Gaussian case. Although these simulations are tractable for the power spectrum, for the bispectrum they require significant computational resources. We show that, by assuming that the PDF is given by a multivariate Gaussian where the covariance is determined by the Fisher matrix of the sine and cosine terms, we can efficiently produce spectra that are statistically close to those derived from full simulations. By drawing a large number of spectra from this PDF, both for the power spectrum and the bispectrum, we can quickly determine the statistical significance of candidate signatures in the CMB, considering both single frequency and multifrequency estimators. We show that for resonance models, cosmology and foreground parameters have little influence on the estimated amplitude, which allows us to simplify the analysis considerably. A more precise likelihood treatment can then be applied to candidate signatures only. We also discuss a modal expansion approach for the power spectrum, aimed at quickly scanning through large families of oscillating models.
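
    The shortcut described above is straightforward to prototype. The sketch below (dimensions, Fisher matrix, and thresholds are all illustrative placeholders) draws sine/cosine feature amplitudes from a zero-mean multivariate Gaussian whose covariance is taken as the inverse Fisher matrix, then estimates a look-elsewhere-corrected significance threshold for the maximum amplitude:

```python
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_draws = 50, 100_000            # candidate oscillation frequencies, draws

# Hypothetical Fisher matrix for the (sin, cos) amplitudes at each frequency;
# built here as a symmetric positive-definite stand-in.
A = 0.05 * rng.standard_normal((2 * n_freq, 2 * n_freq))
F = np.eye(2 * n_freq) + A @ A.T
cov = np.linalg.inv(F)                   # Gaussian covariance of the amplitudes

draws = rng.multivariate_normal(np.zeros(2 * n_freq), cov, size=n_draws)
amps = np.hypot(draws[:, :n_freq], draws[:, n_freq:])  # amplitude per frequency
max_amp = amps.max(axis=1)               # look-elsewhere: maximum over frequencies
threshold = np.quantile(max_amp, 0.997)  # ~3-sigma-equivalent significance level
```

    A candidate feature amplitude measured in the data can then be compared against `threshold` without running full CMB simulations for every trial frequency.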

  3. Direct simulation of flat-plate boundary layer with mild free-stream turbulence

    NASA Astrophysics Data System (ADS)

    Wu, Xiaohua; Moin, Parviz

    2014-11-01

    Spatially evolving direct numerical simulation of the flat-plate boundary layer has been performed. The momentum thickness Reynolds number develops from 80 to 3000 with a free-stream turbulence intensity decaying from 3 percent to 0.8 percent. Predicted skin-friction is in agreement with the Blasius solution prior to breakdown, follows the well-known T3A bypass transition data during transition, and agrees with the Erm and Joubert Melbourne wind-tunnel data after the completion of transition. We introduce the concept of bypass transition in the narrow sense. Streaks, although present, do not appear to be dynamically important during the present bypass transition as they occur downstream of infant turbulent spots. For the turbulent boundary layer, viscous scaling collapses the rate of dissipation profiles in the logarithmic region at different Reynolds numbers. The ratio of Taylor microscale and the Kolmogorov length scale is nearly constant over a large portion of the outer layer. The ratio of large-eddy characteristic length and the boundary layer thickness scales very well with Reynolds number. The turbulent boundary layer is also statistically analyzed using frequency spectra, conditional-sampling, and two-point correlations. Near momentum thickness Reynolds number of 2900, three layers of coherent vortices are observed: the upper and lower layers are distinct hairpin forests of large and small sizes respectively; the middle layer consists of mostly fragmented hairpin elements.

  4. Quantum chemical calculations of interatomic potentials for computer simulation of solids

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A comprehensive mathematical model by which the collective behavior of a very large number of atoms within a metal or alloy can accurately be simulated was developed. Work was done in order to predict and modify the strength of materials to suit our technological needs. The method developed is useful in studying atomic interactions related to dislocation motion and crack extension.

  5. Stochastic Reconnection for Large Magnetic Prandtl Numbers

    NASA Astrophysics Data System (ADS)

    Jafari, Amir; Vishniac, Ethan T.; Kowal, Grzegorz; Lazarian, Alex

    2018-06-01

    We consider stochastic magnetic reconnection in high-β plasmas with large magnetic Prandtl numbers, Pr_m > 1. For large Pr_m, field-line stochasticity is suppressed at very small scales, impeding diffusion. In addition, viscosity suppresses very small-scale differential motions and therefore also the local reconnection. Here we consider the effect of high magnetic Prandtl numbers on the global reconnection rate in a turbulent medium and provide a diffusion equation for the magnetic field lines considering both resistive and viscous dissipation. We find that the width of the outflow region is unaffected unless Pr_m is exponentially larger than the Reynolds number Re. The ejection velocity of matter from the reconnection region is also unaffected by viscosity unless Re ∼ 1. By these criteria the reconnection rate in typical astrophysical systems is almost independent of viscosity. This remains true for reconnection in quiet environments where current sheet instabilities drive reconnection. However, if Pr_m > 1, viscosity can suppress small-scale reconnection events near and below the Kolmogorov or viscous damping scale. This will produce a threshold for the suppression of large-scale reconnection by viscosity when Pr_m > √Re. In any case, for Pr_m > 1 this leads to a flattening of the magnetic fluctuation power spectrum, so that its spectral index is ∼ -4/3 for length scales between the viscous dissipation scale and eddies larger by roughly Pr_m^{3/2}. Current numerical simulations are insensitive to this effect. We suggest that the dependence of reconnection on viscosity in these simulations may be due to insufficient resolution for the turbulent inertial range rather than a guide to the large-Re limit.

  6. SU-E-I-20: Comprehensive Quality Assurance Test of Second Generation Toshiba Aquilion Large Bore CT Simulator Based On AAPM TG-66 Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, D

    2015-06-15

    Purpose: AAPM radiation therapy committee task group No. 66 (TG-66) published a report which described a general approach to CT simulator QA. The report outlines the testing procedures and specifications for the evaluation of patient dose, radiation safety, electromechanical components, and image quality for a CT simulator. The purpose of this study is to thoroughly evaluate the performance of a second-generation Toshiba Aquilion Large Bore CT simulator with 90 cm bore size (Toshiba, Nasu, JP) based on the TG-66 criteria. The testing procedures and results from this study provide baselines for a routine QA program. Methods: Different measurements and analyses were performed, including CTDIvol measurements, alignment and orientation of gantry lasers, orientation of the tabletop with respect to the imaging plane, table movement and indexing accuracy, scanogram location accuracy, high-contrast spatial resolution, low-contrast resolution, field uniformity, CT number accuracy, mA linearity, and mA reproducibility, using a number of different phantoms and measuring devices, such as a CTDI phantom, ACR image quality phantom, TG-66 laser QA phantom, pencil ion chamber (Fluke Victoreen), and electrometer (RTI Solidose 400). Results: The CTDI measurements were within 20% of the console-displayed values. The alignment and orientation of both the gantry laser and tabletop, as well as the table movement and indexing and scanogram location accuracy, were within 2 mm as specified in TG-66. The spatial resolution, low-contrast resolution, field uniformity, and CT number accuracy were all within ACR's recommended limits. The mA linearity and reproducibility were both well below the TG-66 threshold. Conclusion: The 90 cm bore size second-generation Toshiba Aquilion Large Bore CT simulator, which comes with a 70 cm true FOV, can consistently meet various clinical needs. The results demonstrated that this simulator complies with the TG-66 protocol in all aspects, including the electromechanical, radiation safety, and image quality components.

  7. A Study of the Efficiency of Spatial Indexing Methods Applied to Large Astronomical Databases

    NASA Astrophysics Data System (ADS)

    Donaldson, Tom; Berriman, G. Bruce; Good, John; Shiao, Bernie

    2018-01-01

    Spatial indexing of astronomical databases generally uses quadrature methods, which partition the sky into cells used to create an index (usually a B-tree) written as a database column. We report the results of a study to compare the performance of two common indexing methods, HTM and HEALPix, on Solaris and Windows database servers installed with a PostgreSQL database, and a Windows Server installed with MS SQL Server. The indexing was applied to the 2MASS All-Sky Catalog and to the Hubble Source Catalog. On each server, the study compared indexing performance by submitting 1 million queries at each index level with random sky positions and random cone-search radii, computed on a logarithmic scale between 1 arcsec and 1 degree, and measuring the time to complete the query and write the output. These simulated queries, intended to model realistic use patterns, were run in a uniform way on many combinations of indexing method and indexing level. The query times in all simulations are strongly I/O-bound and are linear with the number of records returned for large numbers of sources. There are, however, considerable differences between simulations, which reveal that hardware I/O throughput is a more important factor in managing the performance of a DBMS than the choice of indexing scheme. The choice of index itself is relatively unimportant: for comparable index levels, the performance is consistent within the scatter of the timings. At small index levels (large cells; e.g., level 4, cell size 3.7 deg), there is large scatter in the timings because of wide variations in the number of sources found in the cells. At larger index levels, performance improves and scatter decreases, but the improvement at level 8 (cell size 14 arcmin) and higher is masked to some extent in the timing scatter caused by the range of query sizes. At very high levels (level 20; 0.0004 arcsec), the granularity of the cells becomes so high that a large number of extraneous empty cells begin to degrade performance. Thus, for the use patterns studied here the database performance is not critically dependent on the exact choices of index or level.
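
    The query pattern being benchmarked can be sketched with a HEALPix library. Below, a cone search is translated into a set of pixel ids that a database can match against a precomputed index column (this assumes the `healpy` package; the function and column names are illustrative, not from the study):

```python
import numpy as np
import healpy as hp  # HEALPix utilities (assumed dependency)

def cone_search_pixels(ra_deg, dec_deg, radius_deg, nside=1024):
    """HEALPix pixel ids touched by a cone search; a DBMS can then restrict
    its scan to rows whose indexed pixel id is in this set."""
    theta = np.radians(90.0 - dec_deg)       # colatitude
    phi = np.radians(ra_deg)
    vec = hp.ang2vec(theta, phi)
    return hp.query_disc(nside, vec, np.radians(radius_deg), inclusive=True)

pix = cone_search_pixels(ra_deg=180.0, dec_deg=30.0, radius_deg=0.05)
# e.g. SELECT ... FROM catalog WHERE healpix_1024 IN (<pix list>)
```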

  8. A Study of the Unstable Modes in High Mach Number Gaseous Jets and Shear Layers

    NASA Astrophysics Data System (ADS)

    Bassett, Gene Marcel

    1993-01-01

    Instabilities affecting the propagation of supersonic gaseous jets have been studied using high resolution computer simulations with the Piecewise-Parabolic-Method (PPM). These results are discussed in relation to jets from galactic nuclei. These studies involve a detailed treatment of a single section of a very long jet, approximating the dynamics by using periodic boundary conditions. Shear layer simulations have explored the effects of shear layers on the growth of nonlinear instabilities. Convergence of the numerical approximations has been tested by comparing jet simulations with different grid resolutions. The effects of initial conditions and geometry on the dominant disruptive instabilities have also been explored. Simulations of shear layers with a variety of thicknesses, Mach numbers and densities perturbed by incident sound waves imply that the time for the excited kink modes to grow large in amplitude and disrupt the shear layer is tau_g = (546 +/- 24) (M/4)^{1.7} (A_pert/0.02)^{-0.4} delta/c, where M is the jet Mach number, delta is the half-width of the shear layer, and A_pert is the perturbation amplitude. For simulations of periodic jets, the initial velocity perturbations set up zig-zag shock patterns inside the jet. In each case a single zig-zag shock pattern (an odd mode) or a double zig-zag shock pattern (an even mode) grows to dominate the flow. The dominant kink instability responsible for these shock patterns moves approximately at the linear resonance velocity, v_mode = c_ext v_relative/(c_jet + c_ext). For high resolution simulations (those with 150 or more computational zones across the jet width), the even mode dominates if the even perturbation is initially higher in amplitude than the odd perturbation. For low resolution simulations, the odd mode dominates even for a stronger even mode perturbation. In high resolution simulations the jet boundary rolls up and large amounts of external gas are entrained into the jet. In low resolution simulations this entrainment process is impeded by numerical viscosity. The three-dimensional jet simulations behave similarly to two-dimensional jet runs with the same grid resolutions.
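
    The fitted growth-time relation lends itself to a one-line calculation; a minimal sketch, with the coefficient and exponents taken directly from the abstract:

      def shear_layer_growth_time(mach, a_pert, delta, c):
          """Time for excited kink modes to disrupt the shear layer:
          tau_g = 546 (M/4)^1.7 (A_pert/0.02)^-0.4 * delta/c
          (the fitted coefficient carries a +/- 24 uncertainty)."""
          return 546.0 * (mach / 4.0) ** 1.7 * (a_pert / 0.02) ** -0.4 * delta / c

      # Example: a Mach 4 jet with perturbation amplitude 0.02, in units where
      # delta = c = 1, disrupts after ~546 sound-crossing times of the half-width.
      print(shear_layer_growth_time(mach=4.0, a_pert=0.02, delta=1.0, c=1.0))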

  9. Choosing the best partition of the output from a large-scale simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Challacombe, Chelsea Jordan; Casleton, Emily Michele

    Data partitioning becomes necessary when a large-scale simulation produces more data than can be feasibly stored. The goal is to partition the data, typically so that every element belongs to one and only one partition, and store summary information about each partition, either a representative value plus an estimate of the error or a distribution. Once the partitions are determined and the summary information stored, the raw data are discarded. This process can be performed in situ, meaning while the simulation is running. When creating the partitions there are many decisions that researchers must make: for instance, how to determine once an adequate number of partitions has been created, how the partitions are created with respect to dividing the data, or how many variables should be considered simultaneously. In addition, decisions must be made on how to summarize the information within each partition. Because of the combinatorial number of possible ways to partition and summarize the data, a method of comparing the different possibilities will help guide researchers in choosing a good partitioning and summarization scheme for their application.
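
    A minimal sketch of the summarize-and-discard idea, assuming a one-dimensional output array, equal-size partitions, and a representative-value-plus-error summary; the in-situ plumbing and the partition-selection criteria discussed above are out of scope.

      import numpy as np

      def summarize_partitions(data, n_partitions):
          """Split data into equal-size partitions; keep only (mean, standard error)
          per partition, after which the raw data can be discarded."""
          summaries = []
          for chunk in np.array_split(data, n_partitions):
              summaries.append((chunk.mean(), chunk.std(ddof=1) / np.sqrt(len(chunk))))
          return summaries

      data = np.random.default_rng(0).normal(size=1_000_000)  # stand-in for simulation output
      print(summarize_partitions(data, n_partitions=8)[:2])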

  10. Community Petascale Project for Accelerator Science And Simulation: Advancing Computational Science for Future Accelerators And Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, Panagiotis; /Fermilab; Cary, John

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single-physics-process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  11. A Projection of Changes in Landfalling Atmospheric River Frequency and Extreme Precipitation over Western North America from the Large Ensemble CESM Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagos, Samson M.; Leung, Lai-Yung R.; Yoon, Jin-Ho

    Simulations from the Community Earth System Model Large Ensemble project are analyzed to investigate the impact of global warming on atmospheric rivers (ARs). The model has notable biases in simulating the subtropical jet position and the relationship between extreme precipitation and moisture transport. After accounting for these biases, the model projects an ensemble mean increase of 35% in the number of landfalling AR days between the last twenty years of the 20th and 21st centuries. However, the number of AR-associated extreme precipitation days increases only by 28%, because the moisture transport required to produce extreme precipitation also increases with warming. Internal variability introduces an uncertainty of ±8% and ±7% in the projected changes in AR days and associated extreme precipitation days, respectively. In contrast, accounting for model biases changes the projections by only about 1%. The significantly larger mean changes compared to internal variability and to the effects of model biases highlight the robustness of AR responses to global warming.

  12. Generation of Comprehensive Surrogate Kinetic Models and Validation Databases for Simulating Large Molecular Weight Hydrocarbon Fuels

    DTIC Science & Technology

    2012-10-25

    Surrogate fuels are formulated to match the "real fuel combustion property targets" of hydrogen/carbon molar ratio (H/C), derived cetane number (DCN), threshold sooting index (TSI), and average mean molecular weight (MWave), including behavior in diffusive soot extinction configurations.

  13. Simulation-based power calculation for designing interrupted time series analyses of health policy interventions.

    PubMed

    Zhang, Fang; Wagner, Anita K; Ross-Degnan, Dennis

    2011-11-01

    Interrupted time series is a strong quasi-experimental research design to evaluate the impacts of health policy interventions. Using simulation methods, we estimated the power requirements for interrupted time series studies under various scenarios. Simulations were conducted to estimate the power of segmented autoregressive (AR) error models when autocorrelation ranged from -0.9 to 0.9 and effect size was 0.5, 1.0, and 2.0, investigating balanced and unbalanced numbers of time periods before and after an intervention. Simple scenarios of autoregressive conditional heteroskedasticity (ARCH) models were also explored. For AR models, power increased when sample size or effect size increased, and tended to decrease when autocorrelation increased. Compared with a balanced number of study periods before and after an intervention, designs with unbalanced numbers of periods had less power, although that was not the case for ARCH models. The power to detect effect size 1.0 appeared to be reasonable for many practical applications with a moderate or large number of time points in the study equally divided around the intervention. Investigators should be cautious when the expected effect size is small or the number of time points is small. We recommend conducting various simulations before investigation. Copyright © 2011 Elsevier Inc. All rights reserved.
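
    A minimal sketch of the simulation-based power estimate for detecting a level change in a segmented time-series model with AR(1) errors; the model form, the effect-size convention (units of innovation standard deviation), alpha = 0.05, and the use of plain OLS in place of the paper's segmented AR/ARCH estimators are all illustrative assumptions.

      import numpy as np
      import statsmodels.api as sm

      def power_level_change(n_pre=24, n_post=24, effect=1.0, rho=0.3,
                             n_sims=500, alpha=0.05, seed=1):
          """Fraction of simulated series in which the post-intervention level
          change is detected (two-sided test on the segmented-regression term)."""
          rng = np.random.default_rng(seed)
          n = n_pre + n_post
          t = np.arange(n)
          post = (t >= n_pre).astype(float)
          # Design: intercept, baseline trend, level change, trend change.
          X = sm.add_constant(np.column_stack([t, post, post * (t - n_pre)]))
          hits = 0
          for _ in range(n_sims):
              e = np.empty(n)            # AR(1) errors, unit innovation variance
              e[0] = rng.normal()
              for i in range(1, n):
                  e[i] = rho * e[i - 1] + rng.normal()
              y = effect * post + e
              res = sm.OLS(y, X).fit()
              hits += res.pvalues[2] < alpha   # column 2 is the level-change term
          return hits / n_sims

      print(power_level_change())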

  14. Conditional flood frequency and catchment state: a simulation approach

    NASA Astrophysics Data System (ADS)

    Brettschneider, Marco; Bourgin, François; Merz, Bruno; Andreassian, Vazken; Blaquiere, Simon

    2017-04-01

    Catchments have memory, and the conditional flood frequency distribution for a time period ahead can be seen as non-stationary: it varies with the catchment state and climatic factors. From a risk management perspective, understanding the link of conditional flood frequency to catchment state is key to anticipating potential periods of higher flood risk. Here, we adopt a simulation approach to explore the link between flood frequency obtained by continuous rainfall-runoff simulation and the initial state of the catchment. The simulation chain is based on i) a three-state rainfall generator applied at the catchment scale, whose parameters are estimated for each month, and ii) the GR4J lumped rainfall-runoff model, whose parameters are calibrated with all available data. For each month, a large number of stochastic realizations of the continuous rainfall generator for the next 12 months are used as inputs to the GR4J model in order to obtain a large number of stochastic streamflow realizations for the next 12 months. This process is then repeated for 50 different initial states of the soil moisture reservoir of the GR4J model and for all the catchments. Thus, 50 different conditional flood frequency curves are obtained for the 50 different initial catchment states. We will present an analysis of the link between the catchment states, the period of the year and the strength of the conditioning of the flood frequency compared to the unconditional flood frequency. A large sample of diverse catchments in France will be used.
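
    A minimal sketch of the simulation chain's outer loops. The rainfall generator and GR4J are stood in for by hypothetical placeholder functions (generate_rainfall and run_gr4j are not real library calls, and their bodies are not real hydrology):

      import numpy as np

      def generate_rainfall(months=12, rng=None):
          # Placeholder for the three-state rainfall generator (hypothetical).
          return (rng or np.random.default_rng()).gamma(shape=0.5, scale=8.0, size=months * 30)

      def run_gr4j(rain, initial_soil_moisture):
          # Placeholder for the GR4J rainfall-runoff model (hypothetical).
          return np.maximum(0.0, rain * initial_soil_moisture - 1.0)

      rng = np.random.default_rng(7)
      initial_states = np.linspace(0.1, 0.9, 50)   # 50 initial soil-moisture states
      n_realizations = 1000                        # stochastic realizations per state

      conditional_max_flows = {
          s: [run_gr4j(generate_rainfall(rng=rng), s).max() for _ in range(n_realizations)]
          for s in initial_states
      }
      # Each list yields one conditional flood frequency curve, e.g. via empirical quantiles.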

  15. Two-dimensional dynamics of elasto-inertial turbulence and its role in polymer drag reduction

    NASA Astrophysics Data System (ADS)

    Sid, S.; Terrapon, V. E.; Dubief, Y.

    2018-02-01

    The goal of the present study is threefold: (i) to demonstrate the two-dimensional nature of the elasto-inertial instability in elasto-inertial turbulence (EIT), (ii) to identify the role of the bidimensional instability in three-dimensional EIT flows, and (iii) to establish the role of the small elastic scales in the mechanism of self-sustained EIT. Direct numerical simulations of viscoelastic fluid flows are performed in both two- and three-dimensional straight periodic channels using the Peterlin finitely extensible nonlinear elastic model (FENE-P). The Reynolds number is set to Reτ=85 , which is subcritical for two-dimensional flows but beyond the transition for three-dimensional ones. The polymer properties selected correspond to those of typical dilute polymer solutions, and two moderate Weissenberg numbers, Wiτ=40 ,100 , are considered. The simulation results show that sustained turbulence can be observed in two-dimensional subcritical flows, confirming the existence of a bidimensional elasto-inertial instability. The same type of instability is also observed in three-dimensional simulations where both Newtonian and elasto-inertial turbulent structures coexist. Depending on the Wi number, one type of structure can dominate and drive the flow. For large Wi values, the elasto-inertial instability tends to prevail over the Newtonian turbulence. This statement is supported by (i) the absence of typical Newtonian near-wall vortices and (ii) strong similarities between two- and three-dimensional flows when considering larger Wi numbers. The role of small elastic scales is investigated by introducing global artificial diffusion (GAD) in the hyperbolic transport equation for polymers. The aim is to measure how the flow reacts when the smallest elastic scales are progressively filtered out. The study results show that the introduction of large polymer diffusion in the system strongly damps a significant part of the elastic scales that are necessary to feed turbulence, eventually leading to flow laminarization. A sufficiently high Schmidt number (weakly diffusive polymers) is necessary to allow self-sustained turbulence to settle. Although EIT can withstand a low amount of diffusion and remains in a nonlaminar chaotic state, adding a finite amount of GAD in the system can have an impact on the dynamics and lead to important quantitative changes, even for Schmidt numbers as large as 102. The use of GAD should therefore be avoided in viscoelastic flow simulations.

  16. Direct simulation of a self-similar plane wake

    NASA Technical Reports Server (NTRS)

    Moser, Robert D.; Rogers, Michael M.

    1994-01-01

    Direct simulations of two time-developing turbulent wakes have been performed. Initial conditions for the simulations were obtained from two realizations of a direct simulation of a turbulent boundary layer at momentum thickness Reynolds number 670. In addition, extra two-dimensional disturbances were added in one of the cases to mimic two-dimensional forcing. The unforced wake is allowed to evolve long enough to attain self-similarity. The mass-flux Reynolds number (equivalent to the momentum thickness Reynolds number in spatially developing wakes) is 2000, which is high enough for a short k^(-5/3) range to be evident in the streamwise one-dimensional velocity spectrum. Several turbulence statistics have been computed by averaging in space and over the self-similar period in time. The growth rate in the unforced flow is low compared to experiments, but when this growth-rate difference is accounted for, the statistics of the unforced case are in reasonable agreement with experiments. However, the forced case is significantly different. The growth rate, turbulence Reynolds number, and turbulence intensities are as much as ten times larger in the forced case. In addition, the forced flow exhibits large-scale structures similar to those observed in transitional wakes, while the unforced flow does not.

  17. Shear thinning effects on blood flow in straight and curved tubes

    NASA Astrophysics Data System (ADS)

    Cherry, Erica M.; Eaton, John K.

    2013-07-01

    Simulations were performed to determine the magnitude and types of errors one can expect when approximating blood in large arteries as a Newtonian fluid, particularly in the presence of secondary flows. This was accomplished by running steady simulations of blood flow in straight and curved tubes using both Newtonian and shear-thinning viscosity models. In the shear-thinning simulations, the viscosity was modeled as a shear rate-dependent function fit to experimental data. Simulations in straight tubes were modeled after physiologically relevant arterial flows, and flow parameters for the curved tube simulations were chosen to examine a variety of secondary flow strengths. The diameters ranged from 1 mm to 10 mm and the Reynolds numbers from 24 to 1500. Pressure and velocity data are reported for all simulations. In the straight tube simulations, the shear-thinning flows had flattened velocity profiles and higher pressure gradients compared to the Newtonian simulations. In the curved tube flows, the shear-thinning simulations tended to have blunted axial velocity profiles, decreased secondary flow strengths, and decreased axial vorticity compared to the Newtonian simulations. The cross-sectionally averaged pressure drops in the curved tubes were higher in the shear-thinning flows at low Reynolds number but lower at high Reynolds number. The maximum deviation in secondary flow magnitude averaged over the cross sectional area was 19% of the maximum secondary flow and the maximum deviation in axial vorticity was 25% of the maximum vorticity.
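
    The abstract does not name the shear-thinning fit; a common choice for blood is the Carreau model, shown here as a hedged sketch with commonly quoted literature parameter values (an assumption, not the fit used in the study above).

      import numpy as np

      def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
          """Carreau model for blood viscosity in Pa*s; parameters are widely
          quoted literature values, not those fitted in the study above."""
          return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

      for gamma_dot in (1.0, 10.0, 100.0, 1000.0):   # shear rates, 1/s
          print(gamma_dot, carreau_viscosity(gamma_dot))

    The viscosity plateaus at mu0 at low shear and approaches the Newtonian value mu_inf at high shear, which is the behavior that flattens the velocity profiles described above.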

  18. Supermassive Black Hole Binaries in High Performance Massively Parallel Direct N-body Simulations on Large GPU Clusters

    NASA Astrophysics Data System (ADS)

    Spurzem, R.; Berczik, P.; Zhong, S.; Nitadori, K.; Hamada, T.; Berentzen, I.; Veles, A.

    2012-07-01

    Astrophysical Computer Simulations of Dense Star Clusters in Galactic Nuclei with Supermassive Black Holes are presented using new cost-efficient supercomputers in China accelerated by graphical processing cards (GPU). We use large high-accuracy direct N-body simulations with Hermite scheme and block-time steps, parallelised across a large number of nodes on the large scale and across many GPU thread processors on each node on the small scale. A sustained performance of more than 350 Tflop/s is reached for a science run using 1600 Fermi C2050 GPUs simultaneously; a detailed performance model is presented, along with studies for the largest GPU clusters in China, with up to Petaflop/s performance and 7000 Fermi GPU cards. In our case study we look at two supermassive black holes with equal and unequal masses embedded in a dense stellar cluster in a galactic nucleus. The hardening processes due to interactions between black holes and stars, effects of rotation in the stellar system and relativistic forces between the black holes are simultaneously taken into account. The simulation stops at the complete relativistic merger of the black holes.

  19. Statistical effects related to low numbers of reacting molecules analyzed for a reversible association reaction A + B = C in ideally dispersed systems: An apparent violation of the law of mass action

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szymanski, R., E-mail: rszymans@cbmm.lodz.pl; Sosnowski, S.; Maślanka, Ł.

    2016-03-28

    Theoretical analysis and computer simulations (Monte Carlo and numerical integration of differential equations) show that the statistical effect of a small number of reacting molecules depends on the way the molecules are distributed among the small-volume nano-reactors (droplets in this study). A simple reversible association A + B = C was chosen as a model reaction, enabling observation of both thermodynamic (apparent equilibrium constant) and kinetic effects of a small number of reactant molecules. When substrates are distributed uniformly among droplets, all containing the same equal number of substrate molecules, the apparent equilibrium constant of the association is higher than the chemical one (observed in a macroscopic, large volume system). The average rate of the association, being initially independent of the numbers of molecules, becomes (at higher conversions) higher than that in a macroscopic system: the lower the number of substrate molecules in a droplet, the higher is the rate. This results in the correspondingly higher apparent equilibrium constant. A quite opposite behavior is observed when reactant molecules are distributed randomly among droplets: the apparent association rate and equilibrium constants are lower than those observed in large volume systems, being the lower, the lower is the average number of reacting molecules in a droplet. The random distribution of reactant molecules corresponds to ideal (equal sizes of droplets) dispersing of a reaction mixture. Our simulations have shown that when the equilibrated large volume system is dispersed, the resulting droplet system is already at equilibrium and no changes of proportions of droplets differing in reactant compositions can be observed upon prolongation of the reaction time.
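
    A minimal Gillespie-type stochastic simulation of A + B = C in a single droplet, illustrating the small-copy-number statistics discussed above; the rate constants and molecule counts are arbitrary illustrative values, not those of the study.

      import numpy as np

      def gillespie_association(nA=10, nB=10, nC=0, kf=1.0, kr=0.5, t_end=50.0, seed=3):
          """Stochastic A + B <-> C in one droplet; returns (time, nA, nB, nC) trajectory."""
          rng = np.random.default_rng(seed)
          t, traj = 0.0, [(0.0, nA, nB, nC)]
          while t < t_end:
              a_f, a_r = kf * nA * nB, kr * nC      # forward/reverse propensities
              a_tot = a_f + a_r
              if a_tot == 0:
                  break
              t += rng.exponential(1.0 / a_tot)     # time to next reaction event
              if rng.random() < a_f / a_tot:
                  nA, nB, nC = nA - 1, nB - 1, nC + 1   # association
              else:
                  nA, nB, nC = nA + 1, nB + 1, nC - 1   # dissociation
              traj.append((t, nA, nB, nC))
          return traj

      # Averaging nC/(nA*nB) at long times over many droplets gives the apparent
      # equilibrium constant, which deviates from kf/kr at small copy numbers.
      print(gillespie_association()[-1])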

  20. Generation of dense granular deposits for porosity analysis: assessment and application of large-scale non-smooth granular dynamics

    NASA Astrophysics Data System (ADS)

    Schruff, T.; Liang, R.; Rüde, U.; Schüttrumpf, H.; Frings, R. M.

    2018-01-01

    The knowledge of structural properties of granular materials such as porosity is highly important in many application-oriented and scientific fields. In this paper we present new results of computer-based packing simulations where we use the non-smooth granular dynamics (NSGD) method to simulate gravitational random dense packing of spherical particles with various particle size distributions and two types of depositional conditions. A bin packing scenario was used to compare simulation results to laboratory porosity measurements and to quantify the sensitivity of the NSGD regarding critical simulation parameters such as time step size. The results of the bin packing simulations agree well with laboratory measurements across all particle size distributions with all absolute errors below 1%. A large-scale packing scenario with periodic side walls was used to simulate the packing of up to 855,600 spherical particles with various particle size distributions (PSD). Simulation outcomes are used to quantify the effect of particle-domain-size ratio on the packing compaction. A simple correction model, based on the coordination number, is employed to compensate for this effect on the porosity and to determine the relationship between PSD and porosity. Promising accuracy and stability results paired with excellent computational performance recommend the application of NSGD for large-scale packing simulations, e.g. to further enhance the generation of representative granular deposits.

  1. A New Approach to Modeling Jupiter's Magnetosphere

    NASA Astrophysics Data System (ADS)

    Fukazawa, K.; Katoh, Y.; Walker, R. J.; Kimura, T.; Tsuchiya, F.; Murakami, G.; Kita, H.; Tao, C.; Murata, K. T.

    2017-12-01

    The scales in planetary magnetospheres range from tens of planetary radii to kilometers. For a number of years we have studied the magnetospheres of Jupiter and Saturn by using 3-dimensional magnetohydrodynamic (MHD) simulations. However, we have not been able to reach even the limits of the MHD approximation because of the large amount of computer resources required. Recently, thanks to progress in supercomputer systems, we have obtained the capability to simulate Jupiter's magnetosphere with 1000 times the number of grid points used in our previous simulations. This has allowed us to combine the high resolution global simulation with a micro-scale simulation of the Jovian magnetosphere. In particular we can combine a hybrid (kinetic ions and fluid electrons) simulation with the MHD simulation. In addition, the new capability enables us to run multi-parameter survey simulations of the Jupiter-solar wind system. In this study we performed a high-resolution simulation of the Jovian magnetosphere to connect with the hybrid simulation, and lower resolution simulations under various solar wind conditions to compare with Hisaki and Juno observations. In the high-resolution simulation we used a regular Cartesian grid with 0.15 RJ grid spacing and placed the inner boundary at 7 RJ. From these simulation settings, we provide the magnetic field out to around 20 RJ from Jupiter as a background field for the hybrid simulation. For the first time we have been able to resolve Kelvin-Helmholtz waves on the magnetopause. We have investigated solar wind dynamic pressures between 0.01 and 0.09 nPa for a number of IMF values. The raw data from these simulations are available for download by registered users. We have compared the results of these simulations with Hisaki auroral observations.

  2. Hybrid Large-Eddy/Reynolds-Averaged Simulation of a Supersonic Cavity Using VULCAN

    NASA Technical Reports Server (NTRS)

    Quinlan, Jesse; McDaniel, James; Baurle, Robert A.

    2013-01-01

    Simulations of a supersonic recessed-cavity flow are performed using a hybrid large-eddy/Reynolds-averaged simulation approach utilizing an inflow turbulence recycling procedure and hybridized inviscid flux scheme. Calorically perfect air enters a three-dimensional domain at a free stream Mach number of 2.92. Simulations are performed to assess grid sensitivity of the solution, efficacy of the turbulence recycling, and the effect of the shock sensor used with the hybridized inviscid flux scheme. Analysis of the turbulent boundary layer upstream of the rearward-facing step for each case indicates excellent agreement with theoretical predictions. Mean velocity and pressure results are compared to Reynolds-averaged simulations and experimental data for each case and indicate good agreement on the finest grid. Simulations are repeated on a coarsened grid, and results indicate strong grid density sensitivity. Simulations are performed with and without inflow turbulence recycling on the coarse grid to isolate the effect of the recycling procedure, which is demonstrably critical to capturing the relevant shear layer dynamics. Shock sensor formulations of Ducros and Larsson are found to predict mean flow statistics equally well.

  3. DNS of High Pressure Supercritical Combustion

    NASA Astrophysics Data System (ADS)

    Chong, Shao Teng; Raman, Venkatramanan

    2016-11-01

    Supercritical flows have always been important to rocket motors, and more recently to aircraft engines and stationary gas turbines. The purpose of the present study is to understand the effects of differential diffusion on reacting scalars using supercritical isotropic turbulence. The focus is on fuel and oxidant reacting in the transcritical region, where density, heat capacity and transport properties are highly sensitive to variations in temperature and pressure. The Reynolds and Damköhler numbers vary as a result, and although it is common to neglect differential diffusion effects if Re is sufficiently large, the large variation in temperature with heat release can accentuate molecular transport differences. Direct numerical simulations (DNS) of a one-step chemistry reaction between fuel and oxidizer are used to examine the differential diffusion effects. A key issue investigated in this paper is whether the flamelet progress variable approach, where the Lewis number is usually assumed to be unity and constant for all species, can be accurately applied to simulate supercritical combustion.

  4. Numerical study of axial turbulent flow over long cylinders

    NASA Technical Reports Server (NTRS)

    Neves, J. C.; Moin, P.; Moser, R. D.

    1991-01-01

    The effects of transverse curvature are investigated by means of direct numerical simulations of turbulent axial flow over cylinders. Two cases of Reynolds number of about 3400 and layer-thickness-to-cylinder-radius ratios of 5 and 11 were simulated. All essential turbulence scales were resolved in both calculations, and a large number of turbulence statistics were computed. The results are compared with the plane channel results of Kim et al. (1987) and with experiments. With transverse curvature the skin friction coefficient increases and the turbulence statistics, when scaled with wall units, are lower than in the plane channel. The momentum equation provides a scaling that collapses the cylinder statistics, and allows the results to be interpreted in light of the plane channel flow. The azimuthal and radial length scales of the structures in the flow are of the order of the cylinder diameter. Boomerang-shaped structures with large spanwise length scales were observed in the flow.

  5. Effect of wing mass in free flight by a butterfly-like 3D flapping wing-body model

    NASA Astrophysics Data System (ADS)

    Suzuki, Kosuke; Okada, Iori; Yoshino, Masato

    2016-11-01

    The effect of wing mass on the free flight of a flapping wing is investigated by numerical simulations based on an immersed boundary-lattice Boltzmann method. We consider a butterfly-like 3D flapping wing-body model consisting of two square wings with uniform mass density connected by a rod-shaped body. We simulate free flights of the wing-body model with various mass ratios of the wings to the whole model. As a result, it is found that the lift and thrust forces decrease as the mass ratio increases, since a body with a large mass ratio experiences large vertical and horizontal oscillations within one period and consequently the wing tip speed relatively decreases. In addition, we find the critical mass ratio between upward flight and downward flight for various Reynolds numbers. This work was supported by JSPS KAKENHI Grant Number JP16K18012.

  6. THE LARGE HIGH PRESSURE ARC PLASMA GENERATOR: A FACILITY FOR SIMULATING MISSILE AND SATELLITE RE-ENTRY. Research Report 56

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rose, P.; Powers, W.; Hritzay, D.

    1959-06-01

    The development of an arc wind tunnel capable of stagnation pressures in excess of twenty atmospheres and using as much as fifteen megawatts of electrical power is described. The calibration of this facility shows that it is capable of reproducing the aerodynamic environment encountered by vehicles flying at velocities as great as satellite velocity. Its use as a missile re-entry material test facility is described. The large power capacity of this facility allows one to make material tests on specimens of a size sufficient to be useful for material development, yet at realistic energy and Reynolds number values. By the addition of a high-capacity vacuum system, this facility can be used to produce the low density, high Mach number environment needed for simulating satellite re-entry, as well as hypersonic flight at extreme altitudes. (auth)

  7. Precision matrix expansion - efficient use of numerical simulations in estimating errors on cosmological parameters

    NASA Astrophysics Data System (ADS)

    Friedrich, Oliver; Eifler, Tim

    2018-01-01

    Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A+B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >10^5 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case finding a similar performance.
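
    The expansion of the precision matrix around a covariance model can be illustrated with a first-order Neumann series; a minimal sketch assuming C = A + B with A known analytically and B estimated from mock simulations (the paper's actual estimators for the expansion terms are not reproduced here).

      import numpy as np

      rng = np.random.default_rng(5)
      p = 20                                   # data-vector dimension (illustrative)
      A = np.diag(rng.uniform(1.0, 2.0, p))    # analytic part (e.g. shape noise)
      B_true = 0.05 * np.cov(rng.normal(size=(p, 4 * p)))   # small correction part

      # Estimate B from mock simulations with the A contribution "turned off".
      sims = rng.multivariate_normal(np.zeros(p), B_true, size=200)
      B_hat = np.cov(sims, rowvar=False)

      A_inv = np.linalg.inv(A)
      # First-order precision matrix expansion: (A + B)^-1 ~ A^-1 - A^-1 B A^-1.
      precision_1st = A_inv - A_inv @ B_hat @ A_inv

      # Residual combines truncation error and noise in B_hat; it is small
      # when B is a small perturbation to A.
      exact = np.linalg.inv(A + B_true)
      print(np.max(np.abs(precision_1st - exact)))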

  8. Turbulent diffusion of chemically reacting flows: Theory and numerical simulations

    NASA Astrophysics Data System (ADS)

    Elperin, T.; Kleeorin, N.; Liberman, M.; Lipatnikov, A. N.; Rogachevskii, I.; Yu, R.

    2017-11-01

    The theory of turbulent diffusion of chemically reacting gaseous admixtures developed previously [T. Elperin et al., Phys. Rev. E 90, 053001 (2014), 10.1103/PhysRevE.90.053001] is generalized for large yet finite Reynolds numbers and the dependence of turbulent diffusion coefficient on two parameters, the Reynolds number and Damköhler number (which characterizes a ratio of turbulent and reaction time scales), is obtained. Three-dimensional direct numerical simulations (DNSs) of a finite-thickness reaction wave for the first-order chemical reactions propagating in forced, homogeneous, isotropic, and incompressible turbulence are performed to validate the theoretically predicted effect of chemical reactions on turbulent diffusion. It is shown that the obtained DNS results are in good agreement with the developed theory.

  9. Turbulent diffusion of chemically reacting flows: Theory and numerical simulations.

    PubMed

    Elperin, T; Kleeorin, N; Liberman, M; Lipatnikov, A N; Rogachevskii, I; Yu, R

    2017-11-01

    The theory of turbulent diffusion of chemically reacting gaseous admixtures developed previously [T. Elperin et al., Phys. Rev. E 90, 053001 (2014)PLEEE81539-375510.1103/PhysRevE.90.053001] is generalized for large yet finite Reynolds numbers and the dependence of turbulent diffusion coefficient on two parameters, the Reynolds number and Damköhler number (which characterizes a ratio of turbulent and reaction time scales), is obtained. Three-dimensional direct numerical simulations (DNSs) of a finite-thickness reaction wave for the first-order chemical reactions propagating in forced, homogeneous, isotropic, and incompressible turbulence are performed to validate the theoretically predicted effect of chemical reactions on turbulent diffusion. It is shown that the obtained DNS results are in good agreement with the developed theory.

  10. Numerical Simulations of Subscale Wind Turbine Rotor Inboard Airfoils at Low Reynolds Number

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaylock, Myra L.; Maniaci, David Charles; Resor, Brian R.

    2015-04-01

    New blade designs are planned to support future research campaigns at the SWiFT facility in Lubbock, Texas. The sub-scale blades will reproduce specific aerodynamic characteristics of utility-scale rotors. Reynolds numbers for megawatt-, utility-scale rotors are generally above 2-8 million. The thickness of inboard airfoils for these large rotors is typically as high as 35-40%. The thickness and the proximity to three-dimensional flow of these airfoils present design and analysis challenges, even at full scale. However, more than a decade of experience with the airfoils in numerical simulation, in the wind tunnel, and in the field has generated confidence in their performance. Reynolds number regimes for the sub-scale rotor are significantly lower for the inboard blade, ranging from 0.7 to 1 million. Performance of the thick airfoils in this regime is uncertain because of the lack of wind tunnel data and the inherent challenge associated with numerical simulations. This report documents efforts to determine the most capable analysis tools to support these simulations in an effort to improve understanding of the aerodynamic properties of thick airfoils in this Reynolds number regime. Numerical results from various codes for four airfoils are verified against previously published wind tunnel results where data at those Reynolds numbers are available. Results are then computed for other Reynolds numbers of interest.

  11. An algorithm for deciding the number of clusters and validating using simulated data with application to exploring crop population structure

    USDA-ARS?s Scientific Manuscript database

    A first step in exploring population structure in crop plants and other organisms is to define the number of subpopulations that exist for a given data set. The genetic marker data sets being generated have become increasingly large over time and commonly are the high-dimension, low sample size (HDL...

  12. Cascaded lattice Boltzmann method with improved forcing scheme for large-density-ratio multiphase flow at high Reynolds and Weber numbers.

    PubMed

    Lycett-Brown, Daniel; Luo, Kai H

    2016-11-01

    A recently developed forcing scheme has allowed the pseudopotential multiphase lattice Boltzmann method to correctly reproduce coexistence curves, while expanding its range to lower surface tensions and arbitrarily high density ratios [Lycett-Brown and Luo, Phys. Rev. E 91, 023305 (2015)PLEEE81539-375510.1103/PhysRevE.91.023305]. Here, a third-order Chapman-Enskog analysis is used to extend this result from the single-relaxation-time collision operator, to a multiple-relaxation-time cascaded collision operator, whose additional relaxation rates allow a significant increase in stability. Numerical results confirm that the proposed scheme enables almost independent control of density ratio, surface tension, interface width, viscosity, and the additional relaxation rates of the cascaded collision operator. This allows simulation of large density ratio flows at simultaneously high Reynolds and Weber numbers, which is demonstrated through binary collisions of water droplets in air (with density ratio up to 1000, Reynolds number 6200 and Weber number 440). This model represents a significant improvement in multiphase flow simulation by the pseudopotential lattice Boltzmann method in which real-world parameters are finally achievable.

  13. Revision of the Rawls et al. (1982) pedotransfer functions for their applicability to US croplands

    USDA-ARS?s Scientific Manuscript database

    Large scale environmental impact studies typically involve the use of simulation models and require a variety of inputs, some of which may need to be estimated in absence of adequate measured data. As an example, soil water retention needs to be estimated for a large number of soils that are to be u...

  14. New pathway of stratocumulus to cumulus transition via aerosol-cloud-precipitation interaction

    NASA Astrophysics Data System (ADS)

    Yamaguchi, T.; Feingold, G.; Kazil, J.

    2017-12-01

    The stratocumulus to cumulus transition (SCT) is typically considered to be a slow, multi-day process, caused primarily by dry air entrainment associated with overshooting cumulus rising under stratocumulus, with minor influence of precipitation. In this presentation, we show rapid SCT induced by a strong precipitation-induced modulation with Lagrangian SCT large eddy simulations. A large eddy model is coupled with a two-moment bulk microphysics scheme that predicts aerosol and droplet number concentrations. Moderate aerosol concentrations (100-250 cm-3) produce little to no drizzle from the stratocumulus deck. Large amounts of rain eventually form and wash out stratocumulus and much of the aerosol, and a cumulus state appears for approximately 10 hours. Initiation of strong rain formation is identified in penetrative cumulus clouds which are much deeper than stratocumulus, and they are able to condense large amounts of water. We show that prediction of cloud droplet number is necessary for this fast SCT since it is a result of a positive feedback of collision-coalescence induced aerosol depletion enhancing drizzle formation. Simulations with fixed droplet concentrations that bracket the time varying aerosol/drop concentrations are therefore not representative of the role of drizzle in the SCT.

  15. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    DOE PAGES

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; ...

    2016-06-09

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydro-geophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with "big data" processing and numerous large-scale numerical simulations. To tackle such difficulties, the Principal Component Geostatistical Approach (PCGA) has been proposed as a "Jacobian-free" inversion method that requires a number of forward simulation runs per iteration much smaller than the number of unknown parameters and measurements needed in traditional inversion methods. PCGA can be conveniently linked to any multi-physics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g. 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data were compressed by the zero-th temporal moment of breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Moreover, only about 2,000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method. This article is protected by copyright. All rights reserved.
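
    A minimal sketch of the zero-th temporal moment compression described above, assuming concentration time series on a common time grid and using a plain trapezoidal rule; the MRI handling and the PCGA inversion itself are out of scope.

      import numpy as np

      def zeroth_moment(times, concentrations):
          """Compress a breakthrough curve c(t) to m0 = integral of c dt
          (trapezoidal rule). Per the abstract, m0 is equivalent to the mean
          travel time under the experimental setting."""
          return np.sum(0.5 * (concentrations[1:] + concentrations[:-1]) * np.diff(times))

      t = np.linspace(0.0, 100.0, 600)               # s, illustrative grid
      c = np.exp(-0.5 * ((t - 40.0) / 8.0) ** 2)     # stand-in breakthrough curve
      print(zeroth_moment(t, c))   # one number replaces 600 raw samples per voxel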

  16. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydro-geophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with "big data" processing and numerous large-scale numerical simulations. To tackle such difficulties, the Principal Component Geostatistical Approach (PCGA) has been proposed as a "Jacobian-free" inversion method that requires a number of forward simulation runs per iteration much smaller than the number of unknown parameters and measurements needed in traditional inversion methods. PCGA can be conveniently linked to any multi-physics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g. 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data were compressed by the zero-th temporal moment of breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Moreover, only about 2,000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method. This article is protected by copyright. All rights reserved.

  17. Efficient Maintenance and Update of Nonbonded Lists in Macromolecular Simulations.

    PubMed

    Chowdhury, Rezaul; Beglov, Dmitri; Moghadasi, Mohammad; Paschalidis, Ioannis Ch; Vakili, Pirooz; Vajda, Sandor; Bajaj, Chandrajit; Kozakov, Dima

    2014-10-14

    Molecular mechanics and dynamics simulations use distance based cutoff approximations for faster computation of pairwise van der Waals and electrostatic energy terms. These approximations traditionally use a precalculated and periodically updated list of interacting atom pairs, known as the "nonbonded neighborhood lists" or nblists, in order to reduce the overhead of finding atom pairs that are within distance cutoff. The size of nblists grows linearly with the number of atoms in the system and superlinearly with the distance cutoff, and as a result, they require significant amount of memory for large molecular systems. The high space usage leads to poor cache performance, which slows computation for large distance cutoffs. Also, the high cost of updates means that one cannot afford to keep the data structure always synchronized with the configuration of the molecules when efficiency is at stake. We propose a dynamic octree data structure for implicit maintenance of nblists using space linear in the number of atoms but independent of the distance cutoff. The list can be updated very efficiently as the coordinates of atoms change during the simulation. Unlike explicit nblists, a single octree works for all distance cutoffs. In addition, octree is a cache-friendly data structure, and hence, it is less prone to cache miss slowdowns on modern memory hierarchies than nblists. Octrees use almost 2 orders of magnitude less memory, which is crucial for simulation of large systems, and while they are comparable in performance to nblists when the distance cutoff is small, they outperform nblists for larger systems and large cutoffs. Our tests show that octree implementation is approximately 1.5 times faster in practical use case scenarios as compared to nblists.
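
    For contrast with explicit nblists, cutoff neighbor pairs can be generated on demand from a spatial tree. The sketch below uses scipy's cKDTree (a k-d tree, not the paper's octree) to make the memory/recompute trade-off concrete; coordinates and cutoff are illustrative.

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(11)
      positions = rng.uniform(0.0, 50.0, size=(10_000, 3))  # atom coordinates, Angstrom
      cutoff = 12.0                                          # nonbonded cutoff, Angstrom

      # Tree construction uses memory linear in the number of atoms and is
      # independent of the cutoff -- the property emphasized above.
      tree = cKDTree(positions)
      pairs = tree.query_pairs(r=cutoff, output_type='ndarray')
      print(len(pairs), "interacting pairs within cutoff")

      # After atoms move, rebuild the tree instead of maintaining an explicit
      # pair list whose size grows superlinearly with the cutoff.
      tree = cKDTree(positions + rng.normal(scale=0.05, size=positions.shape))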

  18. Large-Eddy Simulation of the Flat-plate Turbulent Boundary Layer at High Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Inoue, Michio

    The near-wall, subgrid-scale (SGS) model [Chung and Pullin, "Large-eddy simulation and wall-modeling of turbulent channel flow", J. Fluid Mech. 631, 281-309 (2009)] is used to perform large-eddy simulations (LES) of the incompressible developing, smooth-wall, flat-plate turbulent boundary layer. In this model, the stretched-vortex SGS closure is utilized in conjunction with a tailored, near-wall model designed to incorporate anisotropic vorticity scales in the presence of the wall. The composite SGS-wall model is presently incorporated into a computer code suitable for the LES of developing flat-plate boundary layers. This is then used to study several aspects of zero- and adverse-pressure-gradient turbulent boundary layers. First, LES of the zero-pressure-gradient turbulent boundary layer are performed at Reynolds numbers Re_theta, based on the free-stream velocity and the momentum thickness, in the range Re_theta = 10^3-10^12. Results include the inverse skin friction coefficient 2/C_f, velocity profiles, the shape factor H, the Karman "constant", and the Coles wake factor as functions of Re_theta. Comparisons with some direct numerical simulation (DNS) and experiment are made, including turbulent intensity data from atmospheric-layer measurements at Re_theta = O(10^6). At extremely large Re_theta, the empirical Coles-Fernholz relation for the skin-friction coefficient provides a reasonable representation of the LES predictions. While the present LES methodology cannot of itself probe the structure of the near-wall region, the present results show turbulence intensities that scale on the wall-friction velocity and on the Clauser length scale over almost all of the outer boundary layer. It is argued that the LES is suggestive of the asymptotic, infinite-Reynolds-number limit for the smooth-wall turbulent boundary layer, and different ways in which this limit can be approached are discussed. The maximum Re_theta of the present simulations appears to be limited by machine precision and it is speculated, but not demonstrated, that even larger Re_theta could be achieved with quad- or higher-precision arithmetic. Second, the time-series velocity signals obtained from LES within the logarithmic region of the zero-pressure-gradient turbulent boundary layer are used in combination with an empirical, predictive inner-outer wall model [Marusic et al., "Predictive model for wall-bounded turbulent flow", Science 329, 193 (2010)] to calculate the statistics of the fluctuating streamwise velocity in the inner region of the zero-pressure-gradient turbulent boundary layer. Results, including spectra and moments up to fourth order, are compared with equivalent predictions using experimental time series, as well as with direct experimental measurements at Reynolds numbers Re_tau, based on the friction velocity and the boundary layer thickness, of Re_tau = 7,300, 13,600 and 19,000. LES combined with the wall model are then used to extend the inner-layer predictions to Reynolds numbers Re_tau = 62,000, 100,000 and 200,000, which lie within a gap in log(Re_tau) space between laboratory measurements and surface-layer atmospheric experiments. The present results support a log-like increase in the near-wall peak of the streamwise turbulence intensities with Re_tau and also provide a means of extending LES results at large Reynolds numbers to the near-wall region of wall-bounded turbulent flows. Finally, we apply the wall model to LES of a turbulent boundary layer subject to an adverse pressure gradient. Computed statistics are found to be consistent with recent experiments, and some Reynolds number similarity is observed over a range of two orders of magnitude.
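
    The Coles-Fernholz relation mentioned above is straightforward to evaluate; a minimal sketch, taking kappa = 0.384 and C = 4.127 as commonly quoted constants (an assumption; the constants used in the work above may differ).

      import numpy as np

      def cf_coles_fernholz(re_theta, kappa=0.384, C=4.127):
          """Skin-friction coefficient C_f = 2 [ (1/kappa) ln(Re_theta) + C ]^-2."""
          return 2.0 * ((1.0 / kappa) * np.log(re_theta) + C) ** -2.0

      for re_theta in (1e3, 1e6, 1e9, 1e12):   # spanning the range simulated above
          print(f"Re_theta = {re_theta:.0e}:  C_f = {cf_coles_fernholz(re_theta):.2e}")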

  19. Uncertainty Quantification in Alchemical Free Energy Methods.

    PubMed

    Bhati, Agastya P; Wan, Shunzhou; Hu, Yuan; Sherborne, Brad; Coveney, Peter V

    2018-06-12

    Alchemical free energy methods have gained much importance recently from several reports of improved ligand-protein binding affinity predictions based on their implementation using molecular dynamics simulations. A large number of variants of such methods implementing different accelerated sampling techniques and free energy estimators are available, each claimed to be better than the others in its own way. However, the key features of reproducibility and quantification of associated uncertainties in such methods have barely been discussed. Here, we apply a systematic protocol for uncertainty quantification to a number of popular alchemical free energy methods, covering both absolute and relative free energy predictions. We show that a reliable measure of error estimation is provided by ensemble simulation, that is, an ensemble of independent MD simulations, which applies irrespective of the free energy method. The need to use ensemble methods is fundamental and holds regardless of the duration of the molecular dynamics simulations performed.
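
    A minimal sketch of the ensemble error measure, assuming each replica's free energy estimate has already been computed; the spread across independent replicas, not the length of any one trajectory, sets the reported uncertainty. The numbers are purely illustrative.

      import numpy as np

      # Hypothetical binding free energy estimates (kcal/mol) from N independent
      # MD replicas of the same alchemical calculation (illustrative values).
      replica_dG = np.array([-7.2, -6.8, -7.5, -7.1, -6.9, -7.4, -7.0, -7.3])

      mean_dG = replica_dG.mean()
      # Standard error of the ensemble mean as the uncertainty estimate.
      sem_dG = replica_dG.std(ddof=1) / np.sqrt(len(replica_dG))
      print(f"dG = {mean_dG:.2f} +/- {sem_dG:.2f} kcal/mol (n = {len(replica_dG)})")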

  20. Finite element analysis (FEA) analysis of the preflex beam

    NASA Astrophysics Data System (ADS)

    Wan, Lijuan; Gao, Qilang

    2017-10-01

    The development of finite element analysis (FEA) is relatively mature, and it is one of the most important means of structural analysis. It removes the need, common in earlier studies of complex structures, to rely on large numbers of physical experiments. Through the finite element method, numerical simulation of a structure can provide a variety of static and dynamic analyses of its mechanical behavior, and it is also convenient for parametric studies of the structure. Combined with a limited number of experiments to validate the simulation model, it can meet needs that formerly required extensive experimental research. Here, the nonlinear finite element method is used to simulate the flexural behavior of prestressed composite beams with corrugated steel webs, and the finite element analysis is used to understand the mechanical properties of the structure under bending load.

  1. A parabolic model of drag coefficient for storm surge simulation in the South China Sea

    PubMed Central

    Peng, Shiqiu; Li, Yineng

    2015-01-01

    Drag coefficient (Cd) is an essential metric in the calculation of momentum exchange over the air-sea interface and thus has large impacts on the simulation or forecast of the upper ocean state associated with sea surface winds, such as storm surges. Generally, Cd is a function of wind speed. However, the exact relationship between Cd and wind speed is still in dispute, and the widely-used formula that is a linear function of wind speed in an ocean model could lead to large bias at high wind speed. Here we establish a parabolic model of Cd based on storm surge observations and simulation in the South China Sea (SCS) through a number of tropical cyclone cases. Simulation of storm surges for independent tropical cyclone (TC) cases indicates that the new parabolic model of Cd outperforms traditional linear models. PMID:26499262
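
    The functional form can be sketched as a quadratic in wind speed; the coefficients below are purely hypothetical placeholders chosen to give plausible Cd magnitudes, not the values fitted in the study.

      import numpy as np

      def drag_coefficient(u10, a=-1.5e-6, b=1.2e-4, c=1.0e-3):
          """Parabolic drag coefficient Cd(U10) = a*U10^2 + b*U10 + c.
          Coefficients are illustrative placeholders, not the fitted SCS values."""
          return a * u10 ** 2 + b * u10 + c

      wind_speeds = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # m/s
      print(drag_coefficient(wind_speeds))

    Unlike a linear Cd(U10), a parabola can level off and decrease at high wind speed, which is the behavior motivating the model above.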

  2. Analysis of the coherent and turbulent stresses of a numerically simulated rough wall pipe

    NASA Astrophysics Data System (ADS)

    Chan, L.; MacDonald, M.; Chung, D.; Hutchins, N.; Ooi, A.

    2017-04-01

    A turbulent rough wall flow in a pipe is simulated using direct numerical simulation (DNS) where the roughness elements consist of explicitly gridded three-dimensional sinusoids. Two groups of simulations were conducted where the roughness semi-amplitude h+ and the roughness wavelength λ+ are systematically varied. The triple decomposition is applied to the velocity to separate the coherent and turbulent components. The coherent or dispersive component arises due to the roughness and depends on the topological features of the surface. The turbulent stress on the other hand, scales with the friction Reynolds number. For the case with the largest roughness wavelength, large secondary flows are observed which are similar to that of duct flows. The occurrence of these large secondary flows is due to the spanwise heterogeneity of the roughness which has a spacing approximately equal to the boundary layer thickness δ.

  3. A parabolic model of drag coefficient for storm surge simulation in the South China Sea.

    PubMed

    Peng, Shiqiu; Li, Yineng

    2015-10-26

    Drag coefficient (Cd) is an essential metric in the calculation of momentum exchange over the air-sea interface and thus has large impacts on the simulation or forecast of the upper ocean state associated with sea surface winds, such as storm surges. Generally, Cd is a function of wind speed. However, the exact relationship between Cd and wind speed is still in dispute, and the widely-used formula that is a linear function of wind speed in an ocean model could lead to large bias at high wind speed. Here we establish a parabolic model of Cd based on storm surge observations and simulation in the South China Sea (SCS) through a number of tropical cyclone cases. Simulation of storm surges for independent tropical cyclone (TC) cases indicates that the new parabolic model of Cd outperforms traditional linear models.

  4. A parabolic model of drag coefficient for storm surge simulation in the South China Sea

    NASA Astrophysics Data System (ADS)

    Peng, Shiqiu; Li, Yineng

    2015-10-01

    Drag coefficient (Cd) is an essential metric in the calculation of momentum exchange over the air-sea interface and thus has large impacts on the simulation or forecast of the upper ocean state associated with sea surface winds, such as storm surges. Generally, Cd is a function of wind speed. However, the exact relationship between Cd and wind speed is still in dispute, and the widely-used formula that is a linear function of wind speed in an ocean model could lead to large bias at high wind speed. Here we establish a parabolic model of Cd based on storm surge observations and simulation in the South China Sea (SCS) through a number of tropical cyclone cases. Simulation of storm surges for independent tropical cyclone (TC) cases indicates that the new parabolic model of Cd outperforms traditional linear models.

  5. Bayesian modelling of uncertainties of Monte Carlo radiative-transfer simulations

    NASA Astrophysics Data System (ADS)

    Beaujean, Frederik; Eggers, Hans C.; Kerzendorf, Wolfgang E.

    2018-04-01

    One of the big challenges in astrophysics is the comparison of complex simulations to observations. As many codes do not directly generate observables (e.g. hydrodynamic simulations), the last step in the modelling process is often a radiative-transfer treatment. For this step, the community relies increasingly on Monte Carlo radiative transfer due to the ease of implementation and scalability with computing power. We show how to estimate the statistical uncertainty given the output of just a single radiative-transfer simulation in which the number of photon packets follows a Poisson distribution and the weight (e.g. energy or luminosity) of a single packet may follow an arbitrary distribution. Our Bayesian approach produces a posterior distribution that is valid for any number of packets in a bin, even zero packets, and is easy to implement in practice. Our analytic results for large number of packets show that we generalise existing methods that are valid only in limiting cases. The statistical problem considered here appears in identical form in a wide range of Monte Carlo simulations including particle physics and importance sampling. It is particularly powerful in extracting information when the available data are sparse or quantities are small.
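
    For the simple case of unit-weight packets, the per-bin inference reduces to standard Bayesian Poisson-rate estimation; a minimal sketch with a flat Gamma prior (the choice of prior, and the paper's generalization to arbitrary packet-weight distributions, are glossed over here).

      import numpy as np
      from scipy import stats

      def luminosity_posterior(k_packets, packet_weight, prior_shape=1.0, prior_rate=0.0):
          """Posterior for the expected luminosity in a bin that collected k packets.
          With a Gamma(shape, rate) prior on the Poisson rate, the posterior is
          Gamma(shape + k, rate + 1); it is well defined even for k = 0."""
          post = stats.gamma(a=prior_shape + k_packets, scale=1.0 / (prior_rate + 1.0))
          lo, mid, hi = post.ppf([0.16, 0.5, 0.84])   # 68% credible interval + median
          return packet_weight * np.array([lo, mid, hi])

      print(luminosity_posterior(k_packets=9, packet_weight=1e38))   # erg/s, illustrative
      print(luminosity_posterior(k_packets=0, packet_weight=1e38))   # empty bin, still valid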

  6. Statistical study of defects caused by primary knock-on atoms in fcc Cu and bcc W using molecular dynamics

    NASA Astrophysics Data System (ADS)

    Warrier, M.; Bhardwaj, U.; Hemani, H.; Schneider, R.; Mutzke, A.; Valsakumar, M. C.

    2015-12-01

    We report on molecular dynamics (MD) simulations carried out in fcc Cu and bcc W using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code to study (i) the statistical variations in the number of interstitials and vacancies produced by energetic primary knock-on atoms (PKA) (0.1-5 keV) directed in random directions and (ii) the in-cascade cluster size distributions. It is seen that around 60-80 random directions have to be explored for the average number of displaced atoms to become steady in the case of fcc Cu, whereas for bcc W around 50-60 random directions need to be explored. The number of Frenkel pairs produced in the MD simulations is compared with that from the Binary Collision Approximation Monte Carlo (BCA-MC) code SDTRIM-SP and with the results from the NRT model. It is seen that a proper choice of the damage energy, i.e. the energy required to create a stable interstitial, is essential for the BCA-MC results to match the MD results. On the computational front, it is seen that in-situ processing avoids the need to input/output (I/O) several terabytes of atomic position data when exploring a large number of random directions, with no difference in run-time because the extra run-time spent processing data is offset by the time saved in I/O.
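
    A toy Python sketch of the direction-averaging convergence described above; the defect-count distribution is invented (a real study would run a LAMMPS cascade per direction), so only the running-mean bookkeeping is meant literally.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def n_defects(direction):
        # Stand-in for an MD cascade along `direction`: in practice this
        # would be a LAMMPS run; here we draw from an invented distribution.
        return rng.poisson(25)

    # Uniform random directions on the unit sphere.
    dirs = rng.normal(size=(100, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

    counts = np.array([n_defects(d) for d in dirs])
    running_mean = np.cumsum(counts) / np.arange(1, len(counts) + 1)

    # Watch the running mean settle, mirroring the 50-80 directions
    # reported as necessary in the record above.
    for k in (10, 25, 50, 80, 100):
        print(f"after {k:3d} directions: mean defects = {running_mean[k - 1]:.2f}")
    ```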

  7. 3D hydrodynamic simulations of carbon burning in massive stars

    NASA Astrophysics Data System (ADS)

    Cristini, A.; Meakin, C.; Hirschi, R.; Arnett, D.; Georgy, C.; Viallet, M.; Walkington, I.

    2017-10-01

    We present the first detailed 3D hydrodynamic implicit large eddy simulations of turbulent convection of carbon burning in massive stars. Simulations begin with radial profiles mapped from a carbon-burning shell within a 15 M⊙ 1D stellar evolution model. We consider models with 128³, 256³, 512³, and 1024³ zones. The turbulent flow properties of these carbon-burning simulations are very similar to the oxygen-burning case. We performed a mean field analysis of the kinetic energy budgets within the Reynolds-averaged Navier-Stokes framework. For the upper convective boundary region, we find that the numerical dissipation is insensitive to resolution for linear mesh resolutions above 512 grid points. For the stiffer, more stratified lower boundary, our highest resolution model still shows signs of decreasing sub-grid dissipation, suggesting it is not yet numerically converged. We find that the widths of the upper and lower boundaries are roughly 30 per cent and 10 per cent of the local pressure scale heights, respectively. The shape of the boundaries is significantly different from those used in stellar evolution models. As in past oxygen-shell-burning simulations, we observe entrainment at both boundaries in our carbon-shell-burning simulations. In the large Péclet number regime found in the advanced phases, the entrainment rate is roughly inversely proportional to the bulk Richardson number, Ri_B (∝ Ri_B^(-α), 0.5 ≲ α ≲ 1.0). We thus suggest the use of Ri_B as a means to take into account the results of 3D hydrodynamics simulations in new 1D prescriptions of convective boundary mixing.

  8. Entropic multiple-relaxation-time multirange pseudopotential lattice Boltzmann model for two-phase flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun

    Here, an entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace’s law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square root of time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.
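
    A small Python sketch of two relations referenced above: the square-root-of-time spreading law and a splashing threshold based on the Sommerfeld parameter. One common form, K = We^0.5 · Re^0.25 with onset near K ≈ 57.7, is assumed here, and the spreading prefactor c is illustrative; neither value is taken from this record.

    ```python
    import numpy as np

    def spreading_ratio(t_star, c=1.1):
        # Square-root-of-time spreading on a liquid film, D(t)/D0 = c*sqrt(t*);
        # the prefactor c is illustrative.
        return c * np.sqrt(t_star)

    def sommerfeld(we, re):
        # One common form of the Sommerfeld parameter, K = We^0.5 * Re^0.25,
        # with splashing typically reported above K ~ 57.7.
        return np.sqrt(we) * re ** 0.25

    for we, re in ((50.0, 100.0), (800.0, 5000.0)):
        k = sommerfeld(we, re)
        regime = "splashing" if k > 57.7 else "deposition"
        print(f"We = {we:5.0f}, Re = {re:5.0f} -> K = {k:6.1f}: {regime}")

    print("D/D0 at t* = 1:", float(spreading_ratio(1.0)))
    ```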

  9. Entropic multiple-relaxation-time multirange pseudopotential lattice Boltzmann model for two-phase flow

    NASA Astrophysics Data System (ADS)

    Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun; Derome, Dominique; Carmeliet, Jan

    2018-03-01

    An entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace's law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square root of time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.

  10. Entropic multiple-relaxation-time multirange pseudopotential lattice Boltzmann model for two-phase flow

    DOE PAGES

    Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun; ...

    2018-03-22

    Here, an entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace’s law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square root of time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.

  11. Simulation of Rutherford backscattering spectrometry from arbitrary atom structures.

    PubMed

    Zhang, S; Nordlund, K; Djurabekova, F; Zhang, Y; Velisa, G; Wang, T S

    2016-10-01

    Rutherford backscattering spectrometry in a channeling direction (RBS/C) is a powerful tool for analysis of the fraction of atoms displaced from their lattice positions. However, it is in many cases not straightforward to determine the actual defect structure underlying the RBS/C signal. To gain insight into RBS/C signals from arbitrarily complex defective atomic structures, we develop here a method for simulating the RBS/C spectrum from a set of arbitrary read-in atom coordinates (obtained, e.g., from molecular dynamics simulations). We apply the developed method to simulate the RBS/C signals from Ni crystal structures containing randomly displaced atoms, Frenkel point defects, and extended defects, respectively. The RBS/C simulations show that, even for the same number of atoms in defects, the RBS/C signal is much stronger for the extended defects. Comparison with experimental results shows that the disorder profile obtained from RBS/C signals in ion-irradiated Ni is due to a small fraction of extended defects rather than a large number of individual random atoms.

  12. Simulation of Rutherford backscattering spectrometry from arbitrary atom structures

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Nordlund, K.; Djurabekova, F.; Zhang, Y.; Velisa, G.; Wang, T. S.

    2016-10-01

    Rutherford backscattering spectrometry in a channeling direction (RBS/C) is a powerful tool for analysis of the fraction of atoms displaced from their lattice positions. However, it is in many cases not straightforward to determine the actual defect structure underlying the RBS/C signal. To gain insight into RBS/C signals from arbitrarily complex defective atomic structures, we develop here a method for simulating the RBS/C spectrum from a set of arbitrary read-in atom coordinates (obtained, e.g., from molecular dynamics simulations). We apply the developed method to simulate the RBS/C signals from Ni crystal structures containing randomly displaced atoms, Frenkel point defects, and extended defects, respectively. The RBS/C simulations show that, even for the same number of atoms in defects, the RBS/C signal is much stronger for the extended defects. Comparison with experimental results shows that the disorder profile obtained from RBS/C signals in ion-irradiated Ni is due to a small fraction of extended defects rather than a large number of individual random atoms.

  13. Numerical study of wind over breaking waves and generation of spume droplets

    NASA Astrophysics Data System (ADS)

    Yang, Zixuan; Tang, Shuai; Dong, Yu-Hong; Shen, Lian

    2017-11-01

    We present direct numerical simulation (DNS) results on wind over breaking waves. The air and water are simulated as a coherent system. The air-water interface is captured using a coupled level-set and volume-of-fluid method. The initial condition for the simulation is fully-developed wind turbulence over strongly-forced steep waves. Because wave breaking is an unsteady process, we use ensemble averaging of a large number of runs to obtain turbulence statistics. The generation and transport of spume droplets during wave breaking are also simulated. The trajectories of sea spray droplets are tracked using a Lagrangian particle tracking method. The generation of droplets is captured using a kinematic criterion based on the relative velocity of fluid particles of water with respect to the wave phase speed. From the simulation, we observe that the wave plunging generates a large vortex in the air, which makes an important contribution to the suspension of sea spray droplets.

  14. Large-eddy and unsteady RANS simulations of a shock-accelerated heavy gas cylinder

    DOE PAGES

    Morgan, B. E.; Greenough, J. A.

    2015-04-08

    Two-dimensional numerical simulations of the Richtmyer–Meshkov unstable “shock-jet” problem are conducted using both large-eddy simulation (LES) and unsteady Reynolds-averaged Navier–Stokes (URANS) approaches in an arbitrary Lagrangian–Eulerian hydrodynamics code. Turbulence statistics are extracted from LES by running an ensemble of simulations with multimode perturbations to the initial conditions. Detailed grid convergence studies are conducted, and LES results are found to agree well with both experiment and high-order simulations conducted by Shankar et al. (Phys Fluids 23, 024102, 2011). URANS results using a k–L approach are found to be highly sensitive to initialization of the turbulence lengthscale L and to the time at which L becomes resolved on the computational mesh. As a result, it is observed that a gradient diffusion closure for turbulent species flux is a poor approximation at early times, and a new closure based on the mass-flux velocity is proposed for low-Reynolds-number mixing.

  15. EpiPOD: community vaccination and dispensing model user's guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, M.; Samsa, M.; Walsh, D.

    EpiPOD is a modeling system that enables local, regional, and county health departments to evaluate and refine their plans for mass distribution of antiviral and antibiotic medications and vaccines. An intuitive interface requires users to input as few or as many plan specifics as are available in order to simulate a mass treatment campaign. Behind the input interface, a system dynamics model simulates pharmaceutical supply logistics, hospital and first-responder personnel treatment, population arrival dynamics and treatment, and disease spread. When the simulation is complete, users have estimates of the number of illnesses in the population at large, the number of ill persons seeking treatment, and queuing and delays within the mass treatment system--all metrics by which the plan can be judged.
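
    The flavor of such a system-dynamics treatment model can be sketched in a few lines of Python; the single-queue structure, rates, and parameter values below are illustrative assumptions, not EpiPOD's actual model.

    ```python
    def dispensing_queue(arrival_rate, stations, per_station_rate, hours, dt=0.05):
        # Minimal sketch of a mass-dispensing site: one queue fed at
        # `arrival_rate` (people/hour), served by `stations` dispensing
        # points at `per_station_rate` (people/hour each). Illustrative only.
        capacity = stations * per_station_rate
        queue, treated, t = 0.0, 0.0, 0.0
        while t < hours:
            served = min(capacity, queue / dt)  # cannot serve more than are waiting
            queue += (arrival_rate - served) * dt
            treated += served * dt
            t += dt
        return queue, treated

    backlog, done = dispensing_queue(arrival_rate=1200, stations=10,
                                     per_station_rate=80, hours=12)
    print(f"treated: {done:.0f}, still queued after 12 h: {backlog:.0f}")
    ```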

  16. Large eddy simulation of rotating turbulent flows and heat transfer by the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Liou, Tong-Miin; Wang, Chun-Sheng

    2018-01-01

    Due to its advantages in parallel efficiency and wall treatment over conventional Navier-Stokes equation-based methods, the lattice Boltzmann method (LBM) has emerged as an efficient tool for simulating turbulent heat and fluid flows. To properly simulate rotating turbulent flow and heat transfer, which play a pivotal role in numerous engineering devices such as gas turbines, wind turbines, centrifugal compressors, and rotary machines, the lattice Boltzmann equations must be reformulated in a rotating coordinate system. In this study, a single-rotating reference frame (SRF) formulation of the Boltzmann equations is newly proposed, combined with a subgrid-scale model for the large eddy simulation of rotating turbulent flows and heat transfer. The subgrid-scale closure is modeled by a shear-improved Smagorinsky model. Since the strain rates are also locally determined by the non-equilibrium part of the distribution function, the calculation process is entirely local. The pressure-driven turbulent channel flow with spanwise rotation and heat transfer is used for validating the approach. The Reynolds number characterized by the friction velocity and channel half height is fixed at 194, whereas the rotation number in terms of the friction velocity and channel height ranges from 0 to 3.0. A working fluid of air is chosen, which corresponds to a Prandtl number of 0.71. Calculated results are presented in terms of mean velocity, Reynolds stress, root mean square (RMS) velocity fluctuations, mean temperature, RMS temperature fluctuations, and turbulent heat flux. Good agreement is found between the present LBM predictions and previous direct numerical simulation data obtained by solving the conventional Navier-Stokes equations, which confirms the capability of the proposed SRF LBM and subgrid-scale relaxation time formulation for the computation of rotating turbulent flows and heat transfer.

  17. Measurement of contact-angle hysteresis for droplets on nanopillared surface and in the Cassie and Wenzel states: a molecular dynamics simulation study.

    PubMed

    Koishi, Takahiro; Yasuoka, Kenji; Fujikawa, Shigenori; Zeng, Xiao Cheng

    2011-09-27

    We perform large-scale molecular dynamics simulations to measure the contact-angle hysteresis for a nanodroplet of water placed on a nanopillared surface. The water droplet can be in either the Cassie state (droplet being on top of the nanopillared surface) or the Wenzel state (droplet being in contact with the bottom of the nanopillar grooves). To measure the contact-angle hysteresis in a quantitative fashion, the molecular dynamics simulation is designed such that the number of water molecules in the droplets can be systematically varied while the number of base nanopillars that are in direct contact with the droplets is fixed. We find that the contact-angle hysteresis for the droplet in the Cassie state is weaker than that in the Wenzel state. This conclusion is consistent with the experimental observation. We also test a different definition of the contact-angle hysteresis, which can be extended to estimate hysteresis between the Cassie and Wenzel states. The idea is motivated by the appearance of the hysteresis loop typically seen in computer simulations of first-order phase transitions, which stems from the metastability of a system in different thermodynamic states. Since the initial shape of the droplet can be controlled arbitrarily in the computer simulation, the number of base nanopillars that are in contact with the droplet can be controlled as well. We show that the measured contact-angle hysteresis according to the second definition is indeed very sensitive to the initial shape of the droplet. Nevertheless, the contact-angle hystereses measured according to the conventional and new definitions appear to converge in the large-droplet limit.

  18. Guess Again (and Again and Again): Measuring Password Strength by Simulating Password-Cracking Algorithms

    DTIC Science & Technology

    2011-08-31

    ... large numbers of hashed passwords (Booz Allen Hamilton, HBGary, Gawker, Sony Playstation, etc.), coupled with the availability of botnets that offer ... when evaluating the strength of different password-composition policies. ... We investigate the effectiveness of entropy as a measure of password strength.

  19. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Application of the stochastic parallel gradient descent algorithm for numerical simulation and analysis of the coherent summation of radiation from fibre amplifiers

    NASA Astrophysics Data System (ADS)

    Zhou, Pu; Wang, Xiaolin; Li, Xiao; Chen, Zilum; Xu, Xiaojun; Liu, Zejin

    2009-10-01

    Coherent summation of fibre laser beams, which can be scaled to a relatively large number of elements, is simulated by using the stochastic parallel gradient descent (SPGD) algorithm. The applicability of this algorithm for coherent summation is analysed and its optimisation parameters and bandwidth limitations are studied.
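
    A minimal Python sketch of the SPGD idea for phase-locking an array of beams: maximize the on-axis combined power by applying random perturbations to all phases in parallel and stepping along the measured metric change. The gain, perturbation size, iteration count, and the simple far-field metric are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def combined_power(phases):
        # On-axis far-field power of coherently summed unit-amplitude beams.
        return abs(np.exp(1j * phases).sum()) ** 2

    def spgd(n_beams=16, gain=5.0, delta=0.15, iters=3000):
        # Minimal SPGD loop: perturb all phases in parallel by +/- delta,
        # measure the change in the metric, and step along the estimated
        # gradient. Gain, delta, and iteration count are illustrative.
        phases = rng.uniform(0.0, 2.0 * np.pi, n_beams)
        j_max = float(n_beams) ** 2
        for _ in range(iters):
            pert = delta * rng.choice([-1.0, 1.0], size=n_beams)
            dj = (combined_power(phases + pert)
                  - combined_power(phases - pert)) / j_max
            phases += gain * dj * pert
        return combined_power(phases) / j_max

    print(f"normalized combined power after SPGD: {spgd():.3f} (1.0 = ideal)")
    ```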

  20. In Silico Design of Smart Binders to Anthrax PA

    DTIC Science & Technology

    2012-09-01

    ... nanosecond (ns) molecular dynamics simulation in the NPT ensemble (constant particle number, pressure, and temperature) at 300 K, with the CHARMM force field ... protective antigen (PA). Before the docking runs, the DS23 peptide was simulated using molecular dynamics to generate an ensemble of structures ... we do not see a large amount of structural change when using molecular dynamics after Rosetta docking. We note that this RMSD does not take ...

  1. Multi-phase SPH modelling of violent hydrodynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Mokos, Athanasios; Rogers, Benedict D.; Stansby, Peter K.; Domínguez, José M.

    2015-11-01

    This paper presents the acceleration of multi-phase smoothed particle hydrodynamics (SPH) using a graphics processing unit (GPU) enabling large numbers of particles (10-20 million) to be simulated on just a single GPU card. With novel hardware architectures such as a GPU, the optimum approach to implement a multi-phase scheme presents some new challenges. Many more particles must be included in the calculation and there are very different speeds of sound in each phase with the largest speed of sound determining the time step. This requires efficient computation. To take full advantage of the hardware acceleration provided by a single GPU for a multi-phase simulation, four different algorithms are investigated: conditional statements, binary operators, separate particle lists and an intermediate global function. Runtime results show that the optimum approach needs to employ separate cell and neighbour lists for each phase. The profiler shows that this approach leads to a reduction in both memory transactions and arithmetic operations giving significant runtime gains. The four different algorithms are compared to the efficiency of the optimised single-phase GPU code, DualSPHysics, for 2-D and 3-D simulations which indicate that the multi-phase functionality has a significant computational overhead. A comparison with an optimised CPU code shows a speed up of an order of magnitude over an OpenMP simulation with 8 threads and two orders of magnitude over a single thread simulation. A demonstration of the multi-phase SPH GPU code is provided by a 3-D dam break case impacting an obstacle. This shows better agreement with experimental results than an equivalent single-phase code. The multi-phase GPU code enables a convergence study to be undertaken on a single GPU with a large number of particles that otherwise would have required large high performance computing resources.
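
    The winning strategy identified above, separate cell and neighbour lists per phase, can be sketched in Python; the real implementation is GPU code inside DualSPHysics, so the data layout below is only an illustrative stand-in for the concept.

    ```python
    import numpy as np

    def build_cell_lists(positions, phase_ids, cell_size):
        # One cell->particle map per phase, so phase-specific loops avoid
        # branching on phase inside the interaction kernel.
        per_phase = {}
        for pid in np.unique(phase_ids):
            cells = {}
            for i in np.where(phase_ids == pid)[0]:
                key = tuple((positions[i] // cell_size).astype(int))
                cells.setdefault(key, []).append(i)
            per_phase[pid] = cells
        return per_phase

    rng = np.random.default_rng(0)
    pos = rng.random((1000, 3))
    phase = (rng.random(1000) < 0.2).astype(int)   # 0 = liquid, 1 = gas
    lists = build_cell_lists(pos, phase, cell_size=0.1)
    print({p: len(c) for p, c in lists.items()}, "occupied cells per phase")
    ```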

  2. Unsteady flow simulations around complex geometries using stationary or rotating unstructured grids

    NASA Astrophysics Data System (ADS)

    Sezer-Uzol, Nilay

    In this research, the computational analysis of three-dimensional, unsteady, separated, vortical flows around complex geometries is studied by using stationary or moving unstructured grids. Two main engineering problems are investigated. The first problem is the unsteady simulation of a ship airwake, where helicopter operations become even more challenging, by using stationary unstructured grids. The second problem is the unsteady simulation of wind turbine rotor flow fields by using moving unstructured grids which are rotating with the whole three-dimensional rigid rotor geometry. The three dimensional, unsteady, parallel, unstructured, finite volume flow solver, PUMA2, is used for the computational fluid dynamics (CFD) simulations considered in this research. The code is modified to have a moving grid capability to perform three-dimensional, time-dependent rotor simulations. An instantaneous log-law wall model for Large Eddy Simulations is also implemented in PUMA2 to investigate the very large Reynolds number flow fields of rotating blades. To verify the code modifications, several sample test cases are also considered. In addition, interdisciplinary studies, which are aiming to provide new tools and insights to the aerospace and wind energy scientific communities, are done during this research by focusing on the coupling of ship airwake CFD simulations with the helicopter flight dynamics and control analysis, the coupling of wind turbine rotor CFD simulations with the aeroacoustic analysis, and the analysis of these time-dependent and large-scale CFD simulations with the help of a computational monitoring, steering and visualization tool, POSSE.

  3. Large-Eddy Simulation of Turbulent Wall-Pressure Fluctuations

    NASA Technical Reports Server (NTRS)

    Singer, Bart A.

    1996-01-01

    Large-eddy simulations of a turbulent boundary layer with Reynolds number based on displacement thickness equal to 3500 were performed with two grid resolutions. The computations were continued for sufficient time to obtain frequency spectra with resolved frequencies that correspond to the most important structural frequencies on an aircraft fuselage. The turbulent stresses were adequately resolved with both resolutions. Detailed quantitative analysis of a variety of statistical quantities associated with the wall-pressure fluctuations revealed similar behavior for both simulations. The primary differences were associated with the lack of resolution of the high-frequency data in the coarse-grid calculation and the increased jitter (due to the lack of multiple realizations for averaging purposes) in the fine-grid calculation. A new curve fit was introduced to represent the spanwise coherence of the cross-spectral density.

  4. Characterization of Sound Radiation by Unresolved Scales of Motion in Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert; Zhou, Ye

    1999-01-01

    Evaluation of the sound sources in a high Reynolds number turbulent flow requires time-accurate resolution of an extremely large number of scales of motion. Direct numerical simulations will therefore remain infeasible for the foreseeable future: although current large eddy simulation methods can resolve the largest scales of motion accurately, they must leave some scales of motion unresolved. A priori studies show that acoustic power can be underestimated significantly if the contribution of these unresolved scales is simply neglected. In this paper, the problem of evaluating the sound radiation properties of the unresolved, subgrid-scale motions is approached in the spirit of the simplest subgrid stress models: the unresolved velocity field is treated as isotropic turbulence with statistical descriptors evaluated from the resolved field. The theory of isotropic turbulence is applied to derive formulas for the total power and the power spectral density of the sound radiated by a filtered velocity field. These quantities are compared with the corresponding quantities for the unfiltered field for a range of filter widths and Reynolds numbers.
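
    As a crude numerical illustration of why the unresolved scales matter, the sketch below integrates a model inertial-range spectrum above an LES cutoff to estimate the fraction of fluctuation energy the filter discards; the spectrum shape and cutoffs are illustrative, and this is kinetic energy, not the radiated acoustic power derived in the record above.

    ```python
    import numpy as np

    def unresolved_energy_fraction(k, e_k, k_cut):
        # Fraction of fluctuation energy carried by scales above the LES
        # cutoff wavenumber k_cut, for a given model spectrum e_k(k).
        resolved = np.trapz(np.where(k <= k_cut, e_k, 0.0), k)
        return 1.0 - resolved / np.trapz(e_k, k)

    k = np.linspace(1.0, 1000.0, 5000)
    e_k = k ** (-5.0 / 3.0)            # inertial-range model spectrum
    for k_cut in (50.0, 100.0, 200.0):
        frac = unresolved_energy_fraction(k, e_k, k_cut)
        print(f"k_cut = {k_cut:5.0f}: unresolved energy fraction = {frac:.3f}")
    ```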

  5. Formation of large-scale structures with sharp density gradient through Rayleigh-Taylor growth in a two-dimensional slab under the two-fluid and finite Larmor radius effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goto, R.; Hatori, T.; Miura, H., E-mail: miura.hideaki@nifs.ac.jp

    Two-fluid and finite Larmor radius effects on the linear and nonlinear growth of the Rayleigh-Taylor instability in a two-dimensional slab are studied numerically with special attention to high-wave-number dynamics and nonlinear structure formation at a low β-value. The two effects stabilize the unstable high wave number modes for a certain range of the β-value. In nonlinear simulations, the absence of the high wave number modes in the linear stage leads to the formation of density field structures much larger than those in the single-fluid magnetohydrodynamic simulation, together with a sharp density gradient as well as a large velocity difference. The formation of the sharp velocity difference leads to a subsequent Kelvin-Helmholtz-type instability only when both the two-fluid and finite Larmor radius terms are incorporated, whereas it is not observed otherwise. It is shown that the emergence of the secondary instability can modify the outline of the turbulent structures associated with the primary Rayleigh-Taylor instability.

  6. Evaluation of a strain-sensitive transport model in LES of turbulent nonpremixed sooting flames

    NASA Astrophysics Data System (ADS)

    Lew, Jeffry K.; Yang, Suo; Mueller, Michael E.

    2017-11-01

    Direct Numerical Simulations (DNS) of turbulent nonpremixed jet flames have revealed that Polycyclic Aromatic Hydrocarbons (PAH) are confined to spatially intermittent regions of low scalar dissipation rate due to their slow formation chemistry. The length scales of these regions are on the order of the Kolmogorov scale or smaller, where molecular diffusion effects dominate over turbulent transport effects irrespective of the large-scale turbulent Reynolds number. A strain-sensitive transport model has been developed to identify such species whose slow chemistry, relative to local mixing rates, confines them to these small length scales. In a conventional nonpremixed "flamelet" approach, these species are then modeled with their molecular Lewis numbers, while remaining species are modeled with an effective unity Lewis number. A priori analysis indicates that this strain-sensitive transport model significantly affects PAH yield in nonpremixed flames with essentially no impact on temperature and major species. The model is applied with Large Eddy Simulation (LES) to a series of turbulent nonpremixed sooting jet flames and validated via comparisons with experimental measurements of soot volume fraction.

  7. Large-eddy simulations of the restricted nonlinear system

    NASA Astrophysics Data System (ADS)

    Bretheim, Joel; Gayme, Dennice; Meneveau, Charles

    2014-11-01

    Wall-bounded shear flows often exhibit elongated flow structures with streamwise coherence (e.g. rolls/streaks), prompting the exploration of a streamwise-constant modeling framework to investigate wall turbulence. Simulations of a streamwise-constant (2D/3C) model have been shown to produce the roll/streak structures and accurately reproduce the blunted turbulent mean velocity profile in plane Couette flow. The related restricted nonlinear (RNL) model captures these same features but also exhibits self-sustaining turbulent behavior. Direct numerical simulation (DNS) of the RNL system results in similar statistics for a number of flow quantities and a flow field that is consistent with DNS of the Navier-Stokes equations. Aiming to develop reduced-order models of wall-bounded turbulence at very high Reynolds numbers in which viscous near-wall dynamics cannot be resolved, this work presents the development of an RNL formulation of the filtered Navier-Stokes equations solved for in large-eddy simulations (LES). The proposed LES-RNL system is a computationally affordable reduced-order modeling tool that is of interest for studying the underlying dynamics of high-Reynolds-number wall turbulence and for engineering applications where the flow field is dominated by streamwise-coherent motions. This work is supported by NSF (IGERT, SEP-1230788 and IIA-1243482).

  8. Dynamical role of Ekman pumping in rapidly rotating convection

    NASA Astrophysics Data System (ADS)

    Stellmach, Stephan; Julien, Keith; Cheng, Jonathan; Aurnou, Jonathan

    2015-04-01

    The exact nature of the mechanical boundary conditions (i.e. no-slip versus stress-free) is usually considered to be of secondary importance in the rapidly rotating parameter regime characterizing planetary cores. While they have considerable influence for the Ekman numbers achievable in today's global simulations, for planetary values both the viscous Ekman layers and the associated secondary flows are generally expected to become negligibly small. In fact, usually the main purpose of using stress-free boundary conditions in numerical dynamo simulations is to suppress unrealistically large friction and pumping effects. In this study, we investigate the influence of the mechanical boundary conditions on core convection systematically. By restricting ourselves to the idealized case of rapidly rotating Rayleigh-Bénard convection, we are able to combine results from direct numerical simulations (DNS), laboratory experiments and asymptotic theory into a coherent picture. Contrary to the general expectation, we show that the dynamical effects of Ekman pumping increase with decreasing Ekman number over the investigated parameter range. While stress-free DNS results converge to the asymptotic predictions, both no-slip simulations and laboratory experiments consistently reveal increasingly large deviations from the existing asymptotic theory based on dynamically passive Ekman layers. The implications of these results for core dynamics are discussed briefly.

  9. Large eddy simulation of turbine wakes using higher-order methods

    NASA Astrophysics Data System (ADS)

    Deskos, Georgios; Laizet, Sylvain; Piggott, Matthew D.; Sherwin, Spencer

    2017-11-01

    Large eddy simulations (LES) of a horizontal-axis turbine wake are presented using the well-known actuator line (AL) model. The fluid flow is resolved by employing higher-order numerical schemes on a 3D Cartesian mesh combined with a 2D domain decomposition strategy for efficient use of supercomputers. In order to simulate flows at relatively high Reynolds numbers for a reasonable computational cost, a novel strategy is used to introduce controlled numerical dissipation to a selected range of small scales. The idea is to mimic the contribution of the unresolved small scales by imposing a targeted numerical dissipation at small scales when evaluating the viscous term of the Navier-Stokes equations. The numerical technique is shown to behave similarly to traditional eddy viscosity sub-filter scale models such as the classic or the dynamic Smagorinsky models. The results from the simulations are compared to experimental data for a diameter-based Reynolds number of Re_D = 1,000,000, and both the time-averaged streamwise velocity and turbulent kinetic energy (TKE) show good overall agreement. Finally, suggestions for the amount of numerical dissipation required by our approach are made for the particular case of horizontal-axis turbine wakes.

  10. MHC variability supports dog domestication from a large number of wolves: high diversity in Asia.

    PubMed

    Niskanen, A K; Hagström, E; Lohi, H; Ruokonen, M; Esparza-Salas, R; Aspi, J; Savolainen, P

    2013-01-01

    The process of dog domestication is still somewhat unresolved. Earlier studies indicate that domestic dogs from all over the world have a common origin in Asia. So far, major histocompatibility complex (MHC) diversity has not been studied in detail in Asian dogs, although high levels of genetic diversity are expected at the domestication locality. We sequenced the second exon of the canine MHC gene DLA-DRB1 from 128 Asian dogs and compared our data with a previously published large data set of MHC alleles, mostly from European dogs. Our results show that Asian dogs have a higher MHC diversity than European dogs. We also estimated that there is only a small probability that new alleles have arisen by mutation since domestication. Based on the assumption that all of the currently known 102 DLA-DRB1 alleles come from the founding wolf population, we simulated the number of founding wolf individuals. Our simulations indicate an effective population size of at least 500 founding wolves, suggesting that the founding wolf population was large or that backcrossing has taken place.
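
    A toy Python version of that style of founder simulation: draw 2N allele copies from a hypothetical ancestral wolf allele pool and count how many distinct alleles survive. The pool size, frequency skew, and replicate count are invented for illustration and are not the study's simulation parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def alleles_captured(n_founders, pool_size=300, skew=0.3):
        # Draw 2*n_founders allele copies from a hypothetical ancestral
        # pool with skewed (Dirichlet) frequencies; count distinct alleles.
        freqs = rng.dirichlet(np.full(pool_size, skew))
        draws = rng.choice(pool_size, size=2 * n_founders, p=freqs)
        return np.unique(draws).size

    for ne in (50, 200, 500, 1000):
        mean_kept = np.mean([alleles_captured(ne) for _ in range(20)])
        print(f"founders = {ne:4d}: ~{mean_kept:.0f} distinct alleles captured")
    ```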

  11. MHC variability supports dog domestication from a large number of wolves: high diversity in Asia

    PubMed Central

    Niskanen, A K; Hagström, E; Lohi, H; Ruokonen, M; Esparza-Salas, R; Aspi, J; Savolainen, P

    2013-01-01

    The process of dog domestication is still somewhat unresolved. Earlier studies indicate that domestic dogs from all over the world have a common origin in Asia. So far, major histocompatibility complex (MHC) diversity has not been studied in detail in Asian dogs, although high levels of genetic diversity are expected at the domestication locality. We sequenced the second exon of the canine MHC gene DLA–DRB1 from 128 Asian dogs and compared our data with a previously published large data set of MHC alleles, mostly from European dogs. Our results show that Asian dogs have a higher MHC diversity than European dogs. We also estimated that there is only a small probability that new alleles have arisen by mutation since domestication. Based on the assumption that all of the currently known 102 DLA–DRB1 alleles come from the founding wolf population, we simulated the number of founding wolf individuals. Our simulations indicate an effective population size of at least 500 founding wolves, suggesting that the founding wolf population was large or that backcrossing has taken place. PMID:23073392

  12. A trial of e-simulation of sudden patient deterioration (FIRST2ACT WEB) on student learning.

    PubMed

    Bogossian, Fiona E; Cooper, Simon J; Cant, Robyn; Porter, Joanne; Forbes, Helen

    2015-10-01

    High-fidelity simulation pedagogy is of increasing importance in health professional education; however, face-to-face simulation programs are resource intensive and impractical to implement across large numbers of students. To investigate undergraduate nursing students' theoretical and applied learning in response to the e-simulation program FIRST2ACT WEB™, and explore predictors of virtual clinical performance. Multi-center trial of FIRST2ACT WEB™, accessible to students in five Australian universities and colleges across 8 campuses. A population of 489 final-year nursing students in programs of study leading to license to practice. Participants proceeded through three phases: (i) pre-simulation: briefing and assessment of clinical knowledge and experience; (ii) e-simulation: three interactive e-simulation clinical scenarios which included video recordings of patients with deteriorating conditions, interactive clinical tasks, pop-up responses to tasks, and timed performance; and (iii) post-simulation feedback and evaluation. Descriptive statistics were followed by bivariate analysis to detect any associations, which were further tested using standard regression analysis. Of 409 students who commenced the program (83% response rate), 367 undergraduate nursing students completed the web-based program in its entirety, yielding a completion rate of 89.7%; 38.1% of students achieved passing clinical performance across three scenarios, and the proportion achieving passing clinical knowledge increased from 78.15% pre-simulation to 91.6% post-simulation. Knowledge was the main independent predictor of clinical performance in responding to a virtual deteriorating patient (R² = 0.090, F(7, 352) = 4.962, p < 0.001). The use of web-based technology allows simulation activities to be accessible to a large number of participants, and completion rates indicate that 'Net Generation' nursing students were highly engaged with this mode of learning. The web-based e-simulation program FIRST2ACT WEB™ effectively enhanced knowledge, virtual clinical performance, and self-assessed knowledge, skills, confidence, and competence in final-year nursing students.

  13. Constraining Gamma-Ray Pulsar Gap Models with a Simulated Pulsar Population

    NASA Technical Reports Server (NTRS)

    Pierbattista, Marco; Grenier, I. A.; Harding, A. K.; Gonthier, P. L.

    2012-01-01

    With the large sample of young gamma-ray pulsars discovered by the Fermi Large Area Telescope (LAT), population synthesis has become a powerful tool for comparing their collective properties with model predictions. We synthesised a pulsar population based on a radio emission model and four gamma-ray gap models (Polar Cap, Slot Gap, Outer Gap, and One Pole Caustic). Applying gamma-ray and radio visibility criteria, we normalise the simulation to the number of detected radio pulsars by a select group of ten radio surveys. The luminosity and the wide beams from the outer gaps can easily account for the number of Fermi detections in 2 years of observations. The wide slot-gap beam requires an increase by a factor of 10 of the predicted luminosity to produce a reasonable number of gamma-ray pulsars. Such large increases in the luminosity may be accommodated by implementing offset polar caps. The narrow polar-cap beams contribute at most only a handful of LAT pulsars. Using standard distributions in birth location and pulsar spin-down power (Ė), we skew the initial magnetic field and period distributions in an attempt to account for the high-Ė Fermi pulsars. While we compromise the agreement between simulated and detected distributions of radio pulsars, the simulations fail to reproduce the LAT findings: all models under-predict the number of LAT pulsars with high Ė, and they cannot explain the high probability of detecting both the radio and gamma-ray beams at high Ė. The beaming factor remains close to 1.0 over 4 decades of Ė evolution for the slot gap, whereas it significantly decreases with increasing age for the outer gaps. The evolution of the enhanced slot-gap luminosity with Ė is compatible with the large dispersion of gamma-ray luminosity seen in the LAT data. The stronger evolution predicted for the outer gap, which is linked to the polar cap heating by the return current, is apparently not supported by the LAT data. The LAT sample of gamma-ray pulsars therefore provides a fresh perspective on the early evolution of the luminosity and beam width of the gamma-ray emission from young pulsars, calling for thin and more luminous gaps.

  14. Direct Computation of Sound Radiation by Jet Flow Using Large-scale Equations

    NASA Technical Reports Server (NTRS)

    Mankbadi, R. R.; Shih, S. H.; Hixon, D. R.; Povinelli, L. A.

    1995-01-01

    Jet noise is directly predicted using large-scale equations. The computational domain is extended in order to directly capture the radiated field. As in conventional large-eddy simulations, the effect of the unresolved scales on the resolved ones is accounted for. Special attention is given to boundary treatment to avoid spurious modes that can render the computed fluctuations totally unacceptable. Results are presented for a supersonic jet at Mach number 2.1.

  15. Simulation of Combustion Systems with Realistic g-jitter

    NASA Technical Reports Server (NTRS)

    Mell, William E.; McGrattan, Kevin B.; Baum, Howard R.

    2003-01-01

    In this project a transient, fully three-dimensional computer simulation code was developed to simulate the effects of realistic g-jitter on a number of combustion systems. The simulation code is capable of simulating flame spread on a solid and nonpremixed or premixed gaseous combustion in nonturbulent flow with simple combustion models. Simple combustion models were used to preserve computational efficiency, since this is meant to be an engineering code. Also, the use of sophisticated turbulence models was not pursued (a simple Smagorinsky-type model can be implemented if deemed appropriate) because, if flow velocities are large enough for turbulence to develop in a reduced-gravity combustion scenario, it is unlikely that g-jitter disturbances (in NASA's reduced-gravity facilities) will play an important role in the flame dynamics. Acceleration disturbances of realistic orientation, magnitude, and time dependence can be easily included in the simulation. The simulation algorithm was based on techniques used in an existing large eddy simulation code which has successfully simulated fire dynamics in complex domains. A series of simulations with measured and predicted acceleration disturbances on the International Space Station (ISS) is presented. The results of this series of simulations suggested a passive isolation system and appropriate scheduling of crew activity would provide a sufficiently "quiet" acceleration environment for spherical diffusion flames.

  16. Experimental and simulation studies on the behavior of signal harmonics in magnetic particle imaging.

    PubMed

    Murase, Kenya; Konishi, Takashi; Takeuchi, Yuki; Takata, Hiroshige; Saito, Shigeyoshi

    2013-07-01

    Our purpose in this study was to investigate the behavior of signal harmonics in magnetic particle imaging (MPI) by experimental and simulation studies. In the experimental studies, we made an apparatus for MPI in which both a drive magnetic field (DMF) and a selection magnetic field (SMF) were generated with a Maxwell coil pair. The MPI signals from magnetic nanoparticles (MNPs) were detected with a solenoid coil. The odd- and even-numbered harmonics were calculated by Fourier transformation with or without background subtraction. The particle size of the MNPs was measured by transmission electron microscopy (TEM), dynamic light-scattering, and X-ray diffraction methods. In the simulation studies, the magnetization and particle size distribution of MNPs were assumed to obey the Langevin theory of paramagnetism and a log-normal distribution, respectively. The odd- and even-numbered harmonics were calculated by Fourier transformation under various conditions of DMF and SMF and for three different particle sizes. The behavior of the harmonics largely depended on the size of the MNPs. When we used the particle size obtained from the TEM image, the simulation results were most similar to the experimental results. The similarity between the experimental and simulation results for the even-numbered harmonics was better than that for the odd-numbered harmonics. This was considered to be due to the fact that the odd-numbered harmonics were more sensitive to background subtraction than were the even-numbered harmonics. This study will be useful for a better understanding, optimization, and development of MPI and for designing MNPs appropriate for MPI.

  17. Marker-based reconstruction of the kinematics of a chain of segments: a new method that incorporates joint kinematic constraints.

    PubMed

    Klous, Miriam; Klous, Sander

    2010-07-01

    The aim of skin-marker-based motion analysis is to reconstruct the motion of a kinematical model from noisy measured motion of skin markers. Existing kinematic models for reconstruction of chains of segments can be divided into two categories: analytical methods that do not take joint constraints into account, and numerical global optimization methods that do take joint constraints into account but require numerical optimization of a large number of degrees of freedom, especially when the number of segments increases. In this study, a new and largely analytical method for a chain of rigid bodies interconnected in spherical joints (chain-method) is presented. In this method, the number of generalized coordinates to be determined through numerical optimization is three, irrespective of the number of segments. This new method is compared with the analytical method of Veldpaus et al. [1988, "A Least-Squares Algorithm for the Equiform Transformation From Spatial Marker Co-Ordinates," J. Biomech., 21, pp. 45-54] (Veldpaus-method, a method of the first category) and the numerical global optimization method of Lu and O'Connor [1999, "Bone Position Estimation From Skin-Marker Co-Ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129-134] (Lu-method, a method of the second category) regarding the effects of continuous noise simulating skin movement artifacts and regarding systematic errors in joint constraints. The study is based on simulated data to allow a comparison of the results of the different algorithms with true (noise- and error-free) marker locations. Results indicate a clear trend that the accuracy of the chain-method is higher than that of the Veldpaus-method and similar to that of the Lu-method. Because large parts of the equations in the chain-method can be solved analytically, the speed of convergence in this method is substantially higher than in the Lu-method. With only three segments, the average number of required iterations with the chain-method is 3.0 ± 0.2 times lower than with the Lu-method when skin movement artifacts are simulated by applying a continuous noise model. When simulating systematic errors in joint constraints, the number of iterations for the chain-method was almost a factor of 5 lower than the number of iterations for the Lu-method. However, the Lu-method performs slightly better than the chain-method. The RMSD value between the reconstructed and actual marker positions is approximately 57% of the systematic error on the joint center positions for the Lu-method, compared with 59% for the chain-method.

  18. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    PubMed Central

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is therefore an excellent tool for multi-scale simulations. PMID:23894367

  19. Investigation of the Dynamic Contact Angle Using a Direct Numerical Simulation Method.

    PubMed

    Zhu, Guangpu; Yao, Jun; Zhang, Lei; Sun, Hai; Li, Aifen; Shams, Bilal

    2016-11-15

    A large amount of residual oil, which exists as isolated oil slugs, remains trapped in reservoirs after water flooding. Numerous numerical studies have been performed to investigate the fundamental flow mechanisms of oil slugs to improve flooding efficiency. Dynamic contact angle models are usually introduced to simulate an accurate contact angle and meniscus displacement of oil slugs under a high capillary number. Nevertheless, in the oil slug flow simulation process, it is unnecessary to introduce the dynamic contact angle model when the capillary number is small, because the change in the meniscus displacement after using the dynamic contact angle model is then negligible. Therefore, a critical capillary number should be introduced to judge whether the dynamic contact angle model should be incorporated into simulations. In this study, a direct numerical simulation method is employed to simulate the oil slug flow in a capillary tube at the pore scale. The position of the interface between water and the oil slug is determined using the phase-field method. The capacity and accuracy of the model are validated using a classical benchmark: a dynamic capillary filling process. Then, different dynamic contact angle models and the factors that affect the dynamic contact angle are analyzed. The meniscus displacements of oil slugs with a dynamic contact angle and a static contact angle (SCA) are obtained during simulations, and the relative error between them is calculated automatically. The relative error limit has been defined to be 5%, beyond which the dynamic contact angle model needs to be incorporated into the simulation to approach the realistic displacement. Thus, the desired critical capillary number can be determined. A three-dimensional universal chart of the critical capillary number, as a function of static contact angle and viscosity ratio, is given to provide a guideline for oil slug simulation. A fitting formula is also presented for ease of use.
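
    The decision rule proposed above reduces to a one-line comparison once the critical capillary number is known. In this Python sketch the fluid properties and the critical value are illustrative placeholders for numbers one would read off such a chart.

    ```python
    def capillary_number(viscosity, velocity, surface_tension):
        # Ca = mu * U / sigma for the displacing phase (SI units).
        return viscosity * velocity / surface_tension

    def needs_dca_model(ca, ca_critical):
        # Incorporate the dynamic contact angle model only above the
        # critical capillary number (placeholder value below).
        return ca > ca_critical

    ca = capillary_number(viscosity=1.0e-3, velocity=5.0e-3, surface_tension=0.03)
    print(f"Ca = {ca:.2e}; dynamic contact angle model needed: "
          f"{needs_dca_model(ca, ca_critical=1.0e-4)}")
    ```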

  20. Measuring discharge with ADCPs: Inferences from synthetic velocity profiles

    USGS Publications Warehouse

    Rehmann, C.R.; Mueller, D.S.; Oberg, K.A.

    2009-01-01

    Synthetic velocity profiles are used to determine guidelines for sampling discharge with acoustic Doppler current profilers (ADCPs). The analysis allows the effects of instrument characteristics, sampling parameters, and properties of the flow to be studied systematically. For mid-section measurements, the averaging time required for a single profile measurement always exceeded the 40 s usually recommended for velocity measurements, and it increased with increasing sample interval and increasing time scale of the large eddies. Similarly, simulations of transect measurements show that discharge error decreases as the number of large eddies sampled increases. The simulations allow sampling criteria that account for the physics of the flow to be developed.
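
    The synthetic-profile idea can be mimicked with a correlated random series. The Python sketch below uses an Ornstein-Uhlenbeck process as a stand-in for velocity with large eddies of a given time scale and shows how the scatter of the measured mean falls as the averaging window captures more eddies; all values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def synthetic_velocity(n, dt, t_eddy, u_mean=1.0, u_rms=0.2):
        # Ornstein-Uhlenbeck series: correlated fluctuations with
        # eddy time scale t_eddy (values illustrative).
        a = np.exp(-dt / t_eddy)
        u = np.empty(n)
        u[0] = u_mean
        for i in range(1, n):
            u[i] = (u_mean + a * (u[i - 1] - u_mean)
                    + u_rms * np.sqrt(1.0 - a * a) * rng.normal())
        return u

    dt, t_eddy = 1.0, 20.0            # seconds
    u = synthetic_velocity(20000, dt, t_eddy)
    for t_avg in (40, 100, 400):
        n = int(t_avg / dt)
        means = u[: (len(u) // n) * n].reshape(-1, n).mean(axis=1)
        print(f"T_avg = {t_avg:4d} s: std of measured mean = {means.std():.4f}")
    ```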

  1. Decay of homogeneous two-dimensional quantum turbulence

    NASA Astrophysics Data System (ADS)

    Baggaley, Andrew W.; Barenghi, Carlo F.

    2018-03-01

    We numerically simulate the free decay of two-dimensional quantum turbulence in a large, homogeneous Bose-Einstein condensate. The large number of vortices, the uniformity of the density profile, and the absence of boundaries (where vortices can drift out of the condensate) isolate the annihilation of vortex-antivortex pairs as the only mechanism which reduces the number of vortices, N_v, during the turbulence decay. The results clearly reveal that vortex annihilation is a four-vortex process, confirming the decay law N_v ~ t^(-1/3), where t is time, which was inferred from experiments with relatively few vortices in small harmonically trapped condensates.
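
    Recovering the exponent from vortex counts is a two-line fit in log-log space; the counts below are invented to follow t^(-1/3) purely for illustration.

    ```python
    import numpy as np

    # Vortex counts at increasing times, invented to follow t^(-1/3).
    t = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
    n_v = np.array([400.0, 317.0, 252.0, 200.0, 159.0, 126.0])

    # Fit N_v ~ t^alpha in log-log space.
    alpha, _ = np.polyfit(np.log(t), np.log(n_v), 1)
    print(f"fitted exponent: {alpha:.3f} (four-vortex annihilation predicts -1/3)")
    ```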

  2. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
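
    The scaling step described above, weighting a zero-absorption curve by per-layer path fractions, can be sketched directly in Python; the layer absorption coefficients, path fractions, tissue light speed, and the stand-in reflectance curve are all illustrative assumptions.

    ```python
    import numpy as np

    def scale_reflectance(r0, t, mu_a, fractions, v=2.2e10):
        # Scale a zero-absorption time-resolved reflectance curve r0(t) by a
        # path-weighted Beer-Lambert factor: if a photon spends fraction f_i
        # of the total path v*t in layer i, then
        #   R(t) = R0(t) * exp(-sum_i mu_a_i * f_i * v * t).
        # v is the speed of light in tissue (cm/s); values illustrative.
        mu_eff = np.dot(mu_a, fractions)          # weighted absorption, 1/cm
        return r0 * np.exp(-mu_eff * v * t)

    t = np.linspace(1e-11, 2e-9, 5)               # seconds
    r0 = np.exp(-t / 5e-10)                       # stand-in zero-absorption curve
    r = scale_reflectance(r0, t, mu_a=np.array([0.1, 0.02]),   # 1/cm per layer
                          fractions=np.array([0.3, 0.7]))
    print(np.round(r / r0, 4))                    # attenuation factor per time bin
    ```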

  3. Forecasting European cold waves based on subsampling strategies of CMIP5 and Euro-CORDEX ensembles

    NASA Astrophysics Data System (ADS)

    Cordero-Llana, Laura; Braconnot, Pascale; Vautard, Robert; Vrac, Mathieu; Jezequel, Aglae

    2016-04-01

    Forecasting future extreme events under the present changing climate represents a difficult task. Currently there is a large number of ensembles of simulations for climate projections that take into account different models and scenarios. However, there is a need to reduce the size of the ensemble to make the interpretation of these simulations more manageable for impact studies or climate risk assessment. This can be achieved by developing subsampling strategies to identify a limited number of simulations that best represent the ensemble. In this study, cold waves are chosen to test different approaches for subsampling available simulations. The definition of cold waves depends on the criteria used, but they are generally defined using a minimum temperature threshold, the duration of the cold spell, and their geographical extent. These climate indicators are not universal, highlighting the difficulty of directly comparing different studies. As part of the CLIPC European project, we use daily surface temperature data obtained from CMIP5 outputs as well as Euro-CORDEX simulations to predict future cold wave events in Europe. From these simulations a clustering method is applied to minimise the number of ensemble members required. Furthermore, we analyse the different uncertainties that arise from the different model characteristics and definitions of climate indicators. Finally, we will test whether the same subsampling strategy can be used for different climate indicators. This will facilitate the use of the subsampling results for a wide range of impact assessment studies.
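
    One common way to realize such a subsampling strategy is to cluster the ensemble members in a climate-indicator space and keep one representative per cluster. A generic sketch using scikit-learn k-means; the indicator choice and array sizes are illustrative, not the CLIPC project's actual method:

      import numpy as np
      from sklearn.cluster import KMeans

      def subsample_ensemble(indicators, n_select, seed=0):
          """Pick a representative subset of ensemble members by clustering their
          climate-indicator vectors (e.g., cold-wave frequency, duration, intensity
          per member) and keeping the member closest to each cluster centroid."""
          km = KMeans(n_clusters=n_select, n_init=10, random_state=seed).fit(indicators)
          selected = []
          for center in km.cluster_centers_:
              selected.append(int(np.argmin(np.linalg.norm(indicators - center, axis=1))))
          return sorted(set(selected))

      # e.g., 30 members described by 3 indicators each -> keep 5 representatives
      members = np.random.default_rng(1).normal(size=(30, 3))
      print(subsample_ensemble(members, 5))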

  4. Cretin Memory Flow on Sierra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langer, S. H.; Scott, H. A.

    2016-08-05

    The Cretin iCOE project has a goal of enabling the efficient generation of Non-LTE opacities for use in radiation-hydrodynamic simulation codes using the Nvidia boards on LLNL’s upcoming Sierra system. Achieving the desired level of accuracy for some simulations requires the use of a very large number of atomic configurations (a configuration includes the atomic level for all electrons and how they are coupled together). The NLTE rate matrix needs to be solved separately in each zone. Calculating NLTE opacities can consume more time than all other physics packages used in a simulation.

  5. Training effectiveness assessment: Methodological problems and issues

    NASA Technical Reports Server (NTRS)

    Cross, Kenneth D.

    1992-01-01

    The U.S. military uses a large number of simulators to train and sustain the flying skills of helicopter pilots. Despite the enormous resources required to purchase, maintain, and use those simulators, little effort has been expended in assessing their training effectiveness. One reason for this is the lack of an evaluation methodology that yields comprehensive and valid data at a practical cost. Some of the methodological problems and issues that arise in assessing simulator training effectiveness, as well as problems with the classical transfer-of-learning paradigm, are discussed.

  6. Direct Numerical Simulation of turbulent heat transfer up to Reτ = 2000

    NASA Astrophysics Data System (ADS)

    Hoyas, Sergio; Pérez-Quiles, Jezabel; Lluesma-Rodríguez, Federico

    2017-11-01

    We present a new set of direct numerical simulations of turbulent heat transfer in a channel flow for a Prandtl number of 0.71 and a friction Reynolds number of 2000. Mixed boundary conditions, i.e., a wall temperature that is time independent and varies linearly in the streamwise direction, have been used for the thermal field. The effect of the box size on the one-point statistics of the thermal field and on the kinetic energy, dissipation, and turbulence budgets has been studied, showing that a domain with streamwise and spanwise sizes of 4πh and 2πh, where h is the channel half-height, is large enough to reproduce the one-point statistics of larger boxes. The scaling of these quantities with respect to the Reynolds number has also been studied using a new dataset of simulations at smaller Reynolds numbers, finding two different scales for the inner and outer layers of the flow. Funded by project ENE2015-71333-R of the Spanish Ministerio de Economía y Competitividad.

  7. How many neurons can we see with current spike sorting algorithms?

    PubMed Central

    Pedreira, Carlos; Martinez, Juan; Ison, Matias J.; Quian Quiroga, Rodrigo

    2012-01-01

    Recent studies highlighted the disagreement between the typical number of neurons observed with extracellular recordings and the one expected based on anatomical and physiological considerations. This disagreement has been mainly attributed to the presence of sparsely firing neurons. However, it is also possible that it is due to limitations of the spike sorting algorithms used to process the data. To address this issue, we used realistic simulations of extracellular recordings and found a relatively poor spike sorting performance for simulations containing a large number of neurons. In fact, the number of correctly identified neurons for single-channel recordings showed an asymptotic behavior, saturating at about 8–10 units when up to 20 units were present in the data. This performance was significantly poorer for neurons with low firing rates, as these units were twice as likely to be missed as those with high firing rates in simulations containing many neurons. These results uncover one of the main reasons for the relatively low number of neurons found in extracellular recordings and also stress the importance of further development of spike sorting algorithms. PMID:22841630

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Som, Sibendu; Wang, Zihan; Pei, Yuanjiang

    A state-of-the-art spray modeling methodology, recently presented by Senecal et al., is applied to Large Eddy Simulations (LES) of vaporizing gasoline sprays. Simulations of non-combusting Spray G (gasoline fuel) from the Engine Combustion Network are performed. Adaptive mesh refinement (AMR) with cell sizes from 0.09 mm to 0.5 mm is utilized to further demonstrate grid convergence of the dynamic structure LES model for the gasoline sprays. Grid settings are recommended to optimize the accuracy/runtime tradeoff for LES-based spray simulations at different injection pressure conditions typically encountered in gasoline direct injection (GDI) applications. The influence of LES sub-grid-scale (SGS) models is explored by comparing the results from dynamic structure and Smagorinsky based models against simulations without any SGS model. Twenty different realizations are simulated by changing the random number seed used in the spray sub-models. It is shown that for global quantities such as spray penetration, comparing a single LES simulation to experimental data is reasonable. Through a detailed analysis using the relevance index (RI) criteria, recommendations are made regarding the minimum number of LES realizations required for accurate prediction of the gasoline sprays.
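
    The relevance index used for the realization count is commonly defined as the normalized inner product between one realization's field and a reference field (e.g., the ensemble average). A short sketch under that assumed definition; the abstract does not spell out the exact formula:

      import numpy as np

      def relevance_index(u, u_ref):
          """Relevance index RI = <u, u_ref> / (|u| |u_ref|) between one LES
          realization's field and a reference field; RI -> 1 as the realization
          approaches the reference."""
          u, u_ref = np.ravel(u), np.ravel(u_ref)
          return float(u @ u_ref / (np.linalg.norm(u) * np.linalg.norm(u_ref)))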

  9. Numerical Simulation of a High Mach Number Jet Flow

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Turkel, Eli; Mankbadi, Reda R.

    1993-01-01

    The recent efforts to develop accurate numerical schemes for transition and turbulent flows are motivated, among other factors, by the need for accurate prediction of flow noise. The success of developing a high-speed civil transport (HSCT) plane is contingent upon our understanding and suppression of the jet exhaust noise. The radiated sound can be directly obtained by solving the full (time-dependent) compressible Navier-Stokes equations. However, this requires computational storage that is beyond currently available machines. This difficulty can be overcome by limiting the solution domain to the near field, where the jet is nonlinear, and then using an acoustic analogy (e.g., Lighthill) to relate the far-field noise to the near-field sources. The latter requires obtaining the time-dependent flow field. The other difficulty in aeroacoustics computations is that at high Reynolds numbers the turbulent flow has a large range of scales. Direct numerical simulations (DNS) cannot obtain all the scales of motion at the high Reynolds numbers of technological interest. However, it is believed that the large-scale structure is more efficient than the small-scale structure in radiating noise. Thus, one can model the small scales and calculate the acoustically active scales. Because the large-scale structure in the noise-producing initial region of the jet is wavelike in nature, the net radiated sound is the net cancellation after integration over space. As such, aeroacoustics computations are highly sensitive to errors in computing the sound sources. It is therefore essential to use a high-order numerical scheme to predict the flow field. The present paper presents the first step in an ongoing effort to predict jet noise. The emphasis here is on accurate prediction of the unsteady flow field. We solve the full time-dependent Navier-Stokes equations by a high-order finite difference method. Time-accurate spatial simulations of both plane and axisymmetric jets are presented. Jet Mach numbers of 1.5 and 2.1 are considered. The Reynolds number in the simulations was about a million. Our numerical model is based on the 2-4 scheme of Gottlieb & Turkel. Bayliss et al. applied the 2-4 scheme in boundary layer computations. This scheme was also used by Ragab and Sheen to study the nonlinear development of supersonic instability waves in a mixing layer. In this study, we present two-dimensional direct simulation results for both plane and axisymmetric jets. These results are compared with linear theory predictions. The computations were made for the near-nozzle exit region, and the velocity in the spanwise/azimuthal direction was assumed to be zero.

  10. An interior penalty stabilised incompressible discontinuous Galerkin-Fourier solver for implicit large eddy simulations

    NASA Astrophysics Data System (ADS)

    Ferrer, Esteban

    2017-11-01

    We present an implicit Large Eddy Simulation (iLES) h / p high order (≥2) unstructured Discontinuous Galerkin-Fourier solver with sliding meshes. The solver extends the laminar version of Ferrer and Willden, 2012 [34], to enable the simulation of turbulent flows at moderately high Reynolds numbers in the incompressible regime. This solver allows accurate flow solutions of the laminar and turbulent 3D incompressible Navier-Stokes equations on moving and static regions coupled through a high order sliding interface. The spatial discretisation is provided by the Symmetric Interior Penalty Discontinuous Galerkin (IP-DG) method in the x-y plane coupled with a purely spectral method that uses Fourier series and allows efficient computation of spanwise periodic three-dimensional flows. Since high order methods (e.g. discontinuous Galerkin and Fourier) are unable to provide enough numerical dissipation to enable under-resolved high Reynolds computations (i.e. as necessary in the iLES approach), we adapt the laminar version of the solver to increase (controllably) the dissipation and enhance the stability in under-resolved simulations. The novel stabilisation relies on increasing the penalty parameter included in the DG interior penalty (IP) formulation. The latter penalty term is included when discretising the linear viscous terms in the incompressible Navier-Stokes equations. These viscous penalty fluxes substitute the stabilising effect of non-linear fluxes, which has been the main trend in implicit LES discontinuous Galerkin approaches. The IP-DG penalty term provides energy dissipation, which is controlled by the numerical jumps at element interfaces (e.g. large in under-resolved regions) such as to stabilise under-resolved high Reynolds number flows. This dissipative term has minimal impact in well resolved regions and its implicit treatment does not restrict the use of large time steps, thus providing an efficient stabilization mechanism for iLES. The IP-DG stabilisation is complemented with a Spectral Vanishing Viscosity (SVV) method, in the z-direction, to enhance stability in the continuous Fourier space. The coupling between the numerical viscosity in the DG plane and the SVV damping, provides an efficient approach to stabilise high order methods at moderately high Reynolds numbers. We validate the formulation for three turbulent flow cases: a circular cylinder at Re = 3900, a static and pitch oscillating NACA 0012 airfoil at Re = 10000 and finally a rotating vertical-axis turbine at Re = 40000, with Reynolds based on the circular diameter, airfoil chord and turbine diameter, respectively. All our results compare favourably with published direct numerical simulations, large eddy simulations or experimental data. We conclude that the DG-Fourier high order solver, with IP-SVV stabilisation, proves to be a valuable tool to predict turbulent flows and associated statistics for both static and rotating machinery.

  11. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models at sufficient spatial and temporal resolutions. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational times than those in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with more than thousands of processors have become available to the scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolutions within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles so that large numbers of processors can be utilized effectively by general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector and scalar) of supercomputers with a thousand to tens of thousands of processors. After completing the implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million-cell grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolutions to simulate the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurements confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to over ten thousand cores. Generally this allows us to perform coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million-cell grids in a practical time (e.g., less than a second per time step).

  12. Scalable Domain Decomposed Monte Carlo Particle Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  13. Particle Filter Based Tracking in a Detection Sparse Discrete Event Simulation Environment

    DTIC Science & Technology

    2007-03-01

    obtained by disqualifying a large number of particles (Figure 31: Particle Disqualification via Sanitization).

  14. Cross-flow turbines: physical and numerical model studies towards improved array simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2015-12-01

    Cross-flow, or vertical-axis turbines, show potential in marine hydrokinetic (MHK) and wind energy applications. As turbine designs mature, the research focus is shifting from individual devices towards improving turbine array layouts for maximizing overall power output, i.e., minimizing wake interference for axial-flow turbines, or taking advantage of constructive wake interaction for cross-flow turbines. Numerical simulations are generally better suited to explore the turbine array design parameter space, as physical model studies of large arrays at large model scale would be expensive. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries, the turbines' interaction with the energy resource needs to be parameterized, or modeled. Most models in use today, e.g. actuator disk, are not able to predict the unique wake structure generated by cross-flow turbines. Experiments were carried out using a high-resolution turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier--Stokes models. The ALM predicts turbine loading with the blade element method combined with sub-models for dynamic stall and flow curvature. The open-source software is written as an extension library for the OpenFOAM CFD package, which allows the ALM body force to be applied to their standard RANS and LES solvers. Turbine forcing is also applied to volume of fluid (VOF) models, e.g., for predicting free surface effects on submerged MHK devices. An additional sub-model is considered for injecting turbulence model scalar quantities based on actuator line element loading. Results are presented for the simulation of performance and wake dynamics of axial- and cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET grant 1150797.
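
    At the heart of an actuator line model is the blade-element force that is projected onto the flow solver's grid as a body force. A simplified 2D sketch; the dynamic-stall and flow-curvature sub-models mentioned above are omitted, and all names and defaults are illustrative:

      import numpy as np

      def actuator_line_force(u_rel, chord, dr, cl, cd, rho=1000.0):
          """Blade-element force for one actuator-line element.
          u_rel: local relative velocity vector (2D, assumed nonzero);
          cl, cd: lift/drag coefficients at the local angle of attack;
          chord, dr: element chord and spanwise width; rho: fluid density."""
          u_mag = np.linalg.norm(u_rel)
          q = 0.5 * rho * u_mag**2 * chord * dr          # dynamic pressure * element area
          drag_dir = u_rel / u_mag                        # along the relative velocity
          lift_dir = np.array([-drag_dir[1], drag_dir[0]])  # perpendicular; sign depends
          # on the blade orientation convention
          return q * (cl * lift_dir + cd * drag_dir)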

  15. Numerical simulation of the geodynamo reaches Earth's core dynamical regime

    NASA Astrophysics Data System (ADS)

    Aubert, J.; Gastine, T.; Fournier, A.

    2016-12-01

    Numerical simulations of the geodynamo have been successful at reproducing a number of static (field morphology) and kinematic (secular variation patterns, core surface flows and westward drift) features of Earth's magnetic field, making them a tool of choice for the analysis and retrieval of geophysical information on Earth's core. However, classical numerical models have been run in a parameter regime far from that of the real system, prompting the question of whether we do get "the right answers for the wrong reasons", i.e. whether the agreement between models and nature simply occurs by chance and without physical relevance in the dynamics. In this presentation, we show that classical models succeed in describing the geodynamo because their large-scale spatial structure is essentially invariant as one progresses along a well-chosen path in parameter space to Earth's core conditions. This path is constrained by the need to enforce the relevant force balance (MAC or Magneto-Archimedes-Coriolis) and preserve the ratio of the convective overturn and magnetic diffusion times. Numerical simulations performed along this path are shown to be spatially invariant at scales larger than that where the magnetic energy is ohmically dissipated. This property enables the definition of large-eddy simulations that show good agreement with direct numerical simulations in the range where both are feasible, and that can be computed at unprecedented values of the control parameters, such as an Ekman number E=10-8. Combining direct and large-eddy simulations, large-scale invariance is observed over half the logarithmic distance in parameter space between classical models and Earth. The conditions reached at this mid-point of the path are furthermore shown to be representative of the rapidly-rotating, asymptotic dynamical regime in which Earth's core resides, with a MAC force balance undisturbed by viscosity or inertia, the enforcement of a Taylor state and strong-field dynamo action. We conclude that numerical modelling has advanced to a stage where it is possible to use models correctly representing the statics, kinematics and now the dynamics of the geodynamo. This opens the way to a better analysis of the geomagnetic field in the time and space domains.

  16. Reduction of artifacts in computer simulation of breast Cooper's ligaments

    NASA Astrophysics Data System (ADS)

    Pokrajac, David D.; Kuperavage, Adam; Maidment, Andrew D. A.; Bakic, Predrag R.

    2016-03-01

    Anthropomorphic software breast phantoms have been introduced as a tool for quantitative validation of breast imaging systems. The efficacy of the validation results depends on the realism of phantom images. The recursive partitioning algorithm based upon octree simulation has been demonstrated to be versatile and capable of efficiently generating large numbers of phantoms to support virtual clinical trials of breast imaging. Previously, we have observed specific artifacts (here labeled "dents") on the boundaries of simulated Cooper's ligaments. In this work, we have demonstrated that these dents result from the approximate determination of the closest simulated ligament to an examined subvolume (i.e., octree node) of the phantom. We propose a modification of the algorithm that determines the closest ligament by considering a pre-specified number of neighboring ligaments, selected based upon the functions that govern the shape of the ligaments simulated in the subvolume. We have qualitatively and quantitatively demonstrated that the modified algorithm can eliminate or reduce dent artifacts in software phantoms. In a proof-of-concept example, we simulated a 450 ml phantom with 333 compartments at 100 micrometer resolution. After the proposed modification, we corrected 148,105 dents, with an average size of 5.27 voxels (5.27 nl). We have also qualitatively analyzed the corresponding improvement in the appearance of simulated mammographic images. The proposed algorithm leads to a reduction of linear and star-like artifacts in simulated phantom projections, which can be attributed to dents. Analysis of a larger number of phantoms is ongoing.

  17. A gaussian model for simulated geomagnetic field reversals

    NASA Astrophysics Data System (ADS)

    Wicht, Johannes; Meduri, Domenico G.

    2016-10-01

    Field reversals are the most spectacular events in the geomagnetic history but remain little understood. Here we explore the dipole behaviour in particularly long numerical dynamo simulations to reveal statistically significant conditions required for reversals and excursions to happen. We find that changes in the axial dipole moment behaviour are crucial while the equatorial dipole moment plays a negligible role. For small Rayleigh numbers, the axial dipole always remains strong and stable and obeys a clearly Gaussian probability distribution. Only when the Rayleigh number is increased sufficiently the axial dipole can reverse and its distribution becomes decisively non-Gaussian. Increased likelihoods around zero indicate a pronounced lingering in a new low dipole moment state. Reversals and excursions can only happen when axial dipole fluctuations are large enough to drive the system from the high dipole moment state assumed during stable polarity epochs into the low dipole moment state. Since it is just a matter of chance which polarity is amplified during dipole recovery, reversals and grand excursions, i.e. excursions during which the dipole assumes reverse polarity, are equally likely. While the overall reversal behaviour seems Earth-like, a closer comparison to palaeomagnetic findings suggests that the simulated events last too long and that grand excursions are too rare. For a particularly large Ekman number we find a second but less Earth-like type of reversals where the total field decays and recovers after a certain time.

  18. Adhesion of single- and multi-walled carbon nanotubes to silicon substrate: atomistic simulations and continuum analysis

    NASA Astrophysics Data System (ADS)

    Yuan, Xuebo; Wang, Youshan

    2017-10-01

    The radial deformation of carbon nanotubes (CNTs) adhering to a substrate may prominently affect their mechanical and physical properties. In this study, both classical atomistic simulations and continuum analysis are carried out, to investigate the lateral adhesion of single-walled CNTs (SWCNTs) and multi-walled CNTs (MWCNTs) to a silicon substrate. A linear elastic model for analyzing the adhesion of 2D shells to a rigid semi-infinite substrate is constructed in the framework of continuum mechanics. Good agreement is achieved between the cross-section profiles of adhesive CNTs obtained by the continuum model and by the atomistic simulation approach. It is found that the adhesion of a CNT to the silicon substrate is significantly influenced by its initial diameter and the number of walls. CNTs with radius larger than a certain critical radius are deformed radially on the silicon substrate with flat contact regions. With increasing number of walls, the extent of radial deformation of a MWCNT on the substrate decreases dramatically, and the flat contact area reduces—and eventually vanishes—due to increasing equivalent bending stiffness. It is analytically predicted that large-diameter MWCNTs with a large number of walls are likely to ‘stand’ on the silicon substrate. The present work can be useful for understanding the radial deformation of CNTs adhering to a solid planar substrate.

  19. Low Mass-Damping Vortex-Induced Vibrations of a Single Cylinder at Moderate Reynolds Number.

    PubMed

    Jus, Y; Longatte, E; Chassaing, J-C; Sagaut, P

    2014-10-01

    The feasibility and accuracy of large eddy simulation is investigated for the case of three-dimensional unsteady flows past an elastically mounted cylinder at moderate Reynolds number. Although these flow problems are unconfined, complex wake flow patterns may be observed depending on the elastic properties of the structure. An iterative procedure is used to solve the structural dynamic equation to be coupled with the Navier-Stokes system formulated in a pseudo-Eulerian way. A moving mesh method is involved to deform the computational domain according to the motion of the fluid structure interface. Numerical simulations of vortex-induced vibrations are performed for a freely vibrating cylinder at Reynolds number 3900 in the subcritical regime under two low mass-damping conditions. A detailed physical analysis is provided for a wide range of reduced velocities, and the typical three-branch response of the amplitude behavior usually reported in the experiments is exhibited and reproduced by numerical simulation.

  20. Developing a discrete event simulation model for university student shuttle buses

    NASA Astrophysics Data System (ADS)

    Zulkepli, Jafri; Khalid, Ruzelan; Nawawi, Mohd Kamal Mohd; Hamid, Muhammad Hafizan

    2017-11-01

    Providing shuttle buses for university students to attend their classes is crucial, especially when their number is large and the distances between their classes and residential halls are long. These factors, in addition to the non-optimal current bus services, typically require the students to wait longer, which eventually leads to complaints. To considerably reduce the waiting time, it is thus important to provide the optimal number of buses to transport students from location to location, and effective route schedules that fulfil the students' demand at the relevant time ranges. The optimal bus number and schedules are to be determined and tested using a flexible decision platform. This paper thus models the current student shuttle bus services at a university using a Discrete Event Simulation approach. The model can flexibly simulate whatever changes are configured to the current system and report their effects on the performance measures. How the model was conceptualized and formulated for future system configurations is the main interest of this paper.
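
    To make the modelling idea concrete, a toy event-driven version of a single stop can be written in a few lines. A minimal sketch with Poisson student arrivals and fixed-headway buses; all parameters are invented, and the authors' model is far richer:

      import heapq, random

      def simulate_shuttle(n_buses, headway, capacity, arrival_rate, horizon=3600):
          """Toy discrete-event model of one stop: students arrive as a Poisson
          process (rate per second), buses arrive every headway/n_buses seconds and
          board up to `capacity` waiting students. Returns the mean waiting time."""
          rng = random.Random(42)
          events, t, queue, waits = [], 0.0, [], []
          while t < horizon:                          # schedule student arrivals
              t += rng.expovariate(arrival_rate)
              heapq.heappush(events, (t, "student"))
          for k in range(int(horizon // headway) * n_buses):   # schedule buses
              heapq.heappush(events, ((k + 1) * headway / n_buses, "bus"))
          while events:                               # process events in time order
              t, kind = heapq.heappop(events)
              if kind == "student":
                  queue.append(t)
              else:                                   # bus boards waiting students
                  for arrival in queue[:capacity]:
                      waits.append(t - arrival)
                  queue = queue[capacity:]
          return sum(waits) / len(waits) if waits else 0.0

      print(simulate_shuttle(n_buses=2, headway=600, capacity=40, arrival_rate=0.05))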

  1. A dynamic wall model for Large-Eddy simulations of wind turbine dedicated airfoils

    NASA Astrophysics Data System (ADS)

    Calafell, J.; Lehmkuhl, O.; Carmona, A.; Pérez-Segarra, C. D.; Oliva, A.

    2014-06-01

    This work aims at modelling the flow behavior past a wind turbine dedicated airfoil at high Reynolds number and large angle of attack (AoA). The DU-93-W-210 airfoil has been selected. To do this, Large Eddy Simulations (LES) have been performed. Momentum equations have been solved with a parallel unstructured symmetry-preserving formulation, while the wall-adapting local-eddy viscosity model within a variational multi-scale framework (VMS-WALE) is used as the subgrid-scale model. Since LES calculations are still very expensive at high Reynolds numbers, especially in the near-wall region, a dynamic wall model has been implemented in order to overcome this limitation. The model has been validated with a severely under-resolved channel flow case at Reτ = 2000. Afterwards, the model is also tested on the Ahmed car case, which, from the flow-physics point of view, is more similar to a stalled airfoil than the channel flow is, including flow features such as boundary layer detachment and recirculation. This case has been selected because experimental results for mean velocity profiles are available. Finally, the flow around a DU-93-W-210 airfoil is computed at Re = 3 × 10^6 and an AoA of 15°. Numerical results are presented in comparison with Direct Numerical Simulation (DNS) or experimental data for all cases.

  2. The Parallel System for Integrating Impact Models and Sectors (pSIMS)

    NASA Technical Reports Server (NTRS)

    Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian

    2014-01-01

    We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.

  3. A normal stress subgrid-scale eddy viscosity model in large eddy simulation

    NASA Technical Reports Server (NTRS)

    Horiuti, K.; Mansour, N. N.; Kim, John J.

    1993-01-01

    The Smagorinsky subgrid-scale eddy viscosity model (SGS-EVM) is commonly used in large eddy simulations (LES) to represent the effects of the unresolved scales on the resolved scales. This model is known to be limited because its constant must be optimized in different flows, and it must be modified with a damping function to account for near-wall effects. The recent dynamic model is designed to overcome these limitations but is computationally intensive compared to the traditional SGS-EVM. In a recent study using direct numerical simulation data, Horiuti has shown that these drawbacks are due mainly to the use of an improper velocity scale in the SGS-EVM. He also proposed the use of the subgrid-scale normal stress as a new velocity scale, inspired by a high-order anisotropic representation model. The testing of Horiuti, however, was conducted using DNS data from a low Reynolds number channel flow simulation. It was felt that further testing at higher Reynolds numbers, and with different flows (other than wall-bounded shear flows), was a necessary step to establish the validity of the new model. This is the primary motivation of the present study. The objective is to test the new model using DNS databases of high Reynolds number channel and fully developed turbulent mixing layer flows. The use of both channel (wall-bounded) and mixing layer flows is important for the development of accurate LES models because these two flows encompass many characteristic features of complex turbulent flows.

  4. Large Eddy Simulation of turbulence induced secondary flows in stationary and rotating straight square ducts

    NASA Astrophysics Data System (ADS)

    Sudjai, W.; Juntasaro, V.; Juttijudata, V.

    2018-01-01

    The accuracy of predicting turbulence-induced secondary flows is crucially important in many industrial applications, such as turbine blade internal cooling passages in a gas turbine and fuel rod bundles in a nuclear reactor. A straight square duct is popularly used to reveal the characteristics of turbulence-induced secondary flows, which consist of two counter-rotating vortices in each corner of the duct. For a rotating duct, the flow can be divided into the pressure side and the suction side. The turbulence-induced secondary flows are transformed into two large Coriolis-force-driven circulations, with a pair of additional vortices on the pressure wall due to the rotational effect. In this paper, Large Eddy Simulation (LES) of turbulence-induced secondary flows in a straight square duct is performed using the ANSYS FLUENT CFD software. A dynamic kinetic energy subgrid-scale model is used to describe the three-dimensional incompressible turbulent flows in the stationary and rotating straight square ducts. The Reynolds number based on the friction velocity and the hydraulic diameter is 300, with various rotation numbers for the rotating cases. The flow is assumed fully developed by imposing a constant pressure gradient in the streamwise direction. For the rotating cases, the rotational axis is placed perpendicular to the streamwise direction. The simulation results for the secondary flows and the turbulent statistics are found to be in good agreement with the available Direct Numerical Simulation (DNS) data. Finally, the details of the Coriolis effects are discussed.

  5. Numerical simulation of sloshing with large deforming free surface by MPS-LES method

    NASA Astrophysics Data System (ADS)

    Pan, Xu-jie; Zhang, Huai-xin; Sun, Xue-yao

    2012-12-01

    Moving particle semi-implicit (MPS) method is a fully Lagrangian particle method which can easily solve problems with violent free surfaces. Although it has demonstrated its advantages in ocean engineering applications, it still has some defects to be improved. In this paper, the MPS method is extended to large eddy simulation (LES) by coupling it with a sub-particle-scale (SPS) turbulence model. The SPS turbulence model appears as the Reynolds stress terms in the filtered momentum equation, and the Smagorinsky model is introduced to describe these terms. Although the MPS method has an advantage in the simulation of free surface flow, many non-free-surface particles are treated as free surface particles in the original MPS model. In this paper, we use a new free surface tracing method whose key concept is the "neighbor particle": the zone around each particle is divided into eight parts, and the particle is treated as a free surface particle as long as there are no neighbor particles in any two parts of the zone. As the number density parameter judging method traces free surface particles very efficiently, we combine it with the neighbor detection method. First, we use the number density parameter judging method to select the particles most likely to be misclassified, and then we examine these particles with the neighbor detection method. In this way, the new mixed free surface tracing method reduces misclassification efficiently. Severe pressure fluctuation is an obvious defect of the MPS method, and therefore an area-time average technique is used in this paper to remove the pressure fluctuation, with good results. With these improvements, the modified MPS-LES method is applied to simulate liquid sloshing problems with largely deforming free surfaces. Results show that the modified MPS-LES method can simulate large free-surface deformation easily. It can not only capture the large impact pressures on the rolling tank wall accurately but also reproduce all physical phenomena successfully. The good agreement between numerical and experimental results proves that the modified MPS-LES method is a good CFD methodology for free surface flow simulations.
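
    The eight-part neighbor test lends itself to a compact implementation. A 2D sketch; reading the criterion as "flag the particle when at least two of the eight sectors contain no neighbor" is our interpretation of the abstract:

      import numpy as np

      def is_free_surface(p, neighbors):
          """Divide the plane around particle p into eight 45-degree sectors and
          count the sectors containing no neighbor; flag p as a free-surface
          particle when at least two sectors are empty.
          p: (x, y); neighbors: iterable of (x, y) within the search radius."""
          occupied = set()
          for q in neighbors:
              dx, dy = q[0] - p[0], q[1] - p[1]
              sector = int(((np.arctan2(dy, dx) + np.pi) / (2 * np.pi)) * 8) % 8
              occupied.add(sector)
          return (8 - len(occupied)) >= 2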

  6. Completing the mechanical energy pathways in turbulent Rayleigh-Bénard convection.

    PubMed

    Gayen, Bishakhdatta; Hughes, Graham O; Griffiths, Ross W

    2013-09-20

    A new, more complete view of the mechanical energy budget for Rayleigh-Bénard convection is developed and examined using three-dimensional numerical simulations at large Rayleigh numbers and Prandtl number of 1. The driving role of available potential energy is highlighted. The relative magnitudes of different energy conversions or pathways change significantly over the range of Rayleigh numbers Ra ~ 10^7-10^13. At Ra < 10^7 small-scale turbulent motions are energized directly from available potential energy via turbulent buoyancy flux and kinetic energy is dissipated at comparable rates by both the large- and small-scale motions. In contrast, at Ra ≥ 10^10 most of the available potential energy goes into kinetic energy of the large-scale flow, which undergoes shear instabilities that sustain small-scale turbulence. The irreversible mixing is largely confined to the unstable boundary layer, its rate exactly equal to the generation of available potential energy by the boundary fluxes, and mixing efficiency is 50%.

  7. SUSTAINED TURBULENCE IN DIFFERENTIALLY ROTATING MAGNETIZED FLUIDS AT A LOW MAGNETIC PRANDTL NUMBER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nauman, Farrukh; Pessah, Martin E., E-mail: nauman@nbi.ku.dk

    2016-12-20

    We show for the first time that sustained turbulence is possible at a low magnetic Prandtl number in local simulations of Keplerian flows with no mean magnetic flux. Our results indicate that increasing the vertical domain size is equivalent to increasing the dynamical range between the energy injection scale and the dissipative scale. This has important implications for a large variety of differentially rotating systems with low magnetic Prandtl number such as protostellar disks and laboratory experiments.

  8. Motion of deformable drops through granular media and other confined geometries.

    PubMed

    Davis, Robert H; Zinchenko, Alexander Z

    2009-06-15

    This article features recent simulation studies of the flow of emulsions containing deformable drops through pores, constrictions, and granular media. The flow is assumed to be at low Reynolds number, so that viscous forces dominate, and boundary-integral methods are used to determine interfacial velocities and, hence, track the drop motion and shapes. A single drop in a flat channel migrates to the channel centerplane due to deformation-induced drift, which increases its steady-state velocity along the channel. A drop moving towards a smaller interparticle constriction squeezes through the constriction if the capillary number (ratio of viscous deforming forces and interfacial tension forces) is large enough, but it becomes trapped when the capillary number is below a critical value. These concepts then influence the flow of an emulsion through a granular medium, for which the drop phase moves faster than the suspending liquid at large capillary numbers but slower than the suspending liquid at smaller capillary numbers. The permeabilities of the granular medium to both phases increase with increasing capillary number, due to the reduced resistance to squeezing of easily deformed drops, though drop breakup must also be considered at large capillary numbers.
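
    In symbols, the capillary number defined above and the squeeze-through criterion can be written as follows (Ca_c denotes the geometry-dependent critical value; the notation is ours, not the article's):

      \mathrm{Ca} = \frac{\mu U}{\sigma},
      \qquad
      \mathrm{Ca} > \mathrm{Ca}_c \;\Rightarrow\; \text{squeeze-through},
      \qquad
      \mathrm{Ca} < \mathrm{Ca}_c \;\Rightarrow\; \text{trapped}.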

  9. Monte Carlo simulation of depth-dose distributions in TLD-100 under 90Sr-90Y irradiation.

    PubMed

    Rodríguez-Villafuerte, M; Gamboa-deBuen, I; Brandan, M E

    1997-04-01

    In this work the depth-dose distribution in TLD-100 dosimeters under beta irradiation from a 90Sr-90Y source was investigated using the Monte Carlo method. Comparisons between the simulated data and experimental results showed that the depth-dose distribution is strongly affected by the different components of both the source and dosimeter holders due to the large number of electron scattering events.

  10. Boundary layers in turbulent convection for air, liquid gallium and liquid sodium

    NASA Astrophysics Data System (ADS)

    Scheel, Janet; Schumacher, Joerg

    2017-11-01

    The scaling of physical quantities that characterize the shape and dynamics of the viscous and thermal boundary layers with respect to the Rayleigh number will be presented for three series of three-dimensional high-resolution direct numerical simulations of Rayleigh-Benard convection (RBC) in a closed cylindrical cell of aspect ratio one. The simulations have been conducted for convection in air at a Prandtl number Pr = 0.7, in liquid gallium at Pr = 0.021 and in liquid sodium at Pr = 0.005. Then we discuss three statistical analysis methods which have been developed to predict the transition of turbulent RBC into the ultimate regime. The methods are based on the large-scale properties of the velocity profile. All three methods indicate that the range of critical Rayleigh numbers is shifted to smaller magnitudes as the Prandtl number becomes smaller. This work is supported by the Priority Programme SPP 1881 of the Deutsche Forschungsgemeinschaft.

  11. Simulation of fruit-set and trophic competition and optimization of yield advantages in six Capsicum cultivars using functional-structural plant modelling.

    PubMed

    Ma, Y T; Wubs, A M; Mathieu, A; Heuvelink, E; Zhu, J Y; Hu, B G; Cournède, P H; de Reffye, P

    2011-04-01

    Many indeterminate plants can have wide fluctuations in the pattern of fruit-set and harvest. Fruit-set in these types of plants depends largely on the balance between source (assimilate supply) and sink strength (assimilate demand) within the plant. This study aims to evaluate the ability of functional-structural plant models to simulate different fruit-set patterns among Capsicum cultivars through source-sink relationships. A greenhouse experiment with six Capsicum cultivars characterized by different fruit weights and fruit-set patterns was conducted. Fruit-set patterns and potential fruit sink strength were determined through measurement. Source and sink strength of other organs were determined via the GREENLAB model, with a description of plant organ weight and dimensions according to the plant topological structure established from the measured data as inputs. Parameter optimization was performed using a generalized least squares method for the entire growth cycle. Fruit sink strength differed among cultivars. Vegetative sink strength was generally lower for large-fruited cultivars than for small-fruited ones. The larger the size of the fruit, the larger the variation in fruit-set and fruit yield. Large-fruited cultivars need a higher source-sink ratio for fruit-set, which means a higher demand for assimilates. Temporal heterogeneity of fruit-set affected both the number and the yield of fruit. The simulation study showed that heterogeneity of fruit-set could be reduced by different approaches: for example, increasing source strength; decreasing vegetative sink strength, the source-sink ratio for fruit-set, or the flower appearance rate; and harvesting individual fruits earlier, before full ripeness. Simulation results showed that, when source strength was increased or vegetative sink strength decreased, fruit-set and fruit weight increased. However, no significant differences were found between large-fruited and small-fruited groups of cultivars regarding the effects of source and vegetative sink strength on fruit-set and fruit weight. When the source-sink ratio at fruit-set decreased, the larger number of fruit retained on the plant increased competition for assimilates with the vegetative organs. Therefore, total plant and vegetative dry weights decreased, especially for large-fruited cultivars. The optimization study showed that temporal heterogeneity of fruit-set and ripening was predicted to be reduced when fruits were harvested earlier. Furthermore, there was a 20% increase in the number of extra fruit set.

  12. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.
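
    The key step, replacing spatial averaging of the Germano-identity contraction by an average over the ensemble of realizations, can be sketched as follows. The tensor shapes and the clipping of the denominator are assumptions for illustration, not the paper's exact formulation:

      import numpy as np

      def ensemble_dynamic_coefficient(L, M):
          """Ensemble-averaged dynamic model coefficient: contract the
          Germano-identity tensors L_ij and M_ij over the tensor indices and
          average over the realizations (axis 0) instead of over homogeneous
          directions, so the coefficient stays local in space.
          L, M: arrays of shape (n_realizations, 3, 3, nx, ny, nz)."""
          n = L.shape[0]
          num = np.einsum('rij...,rij...->...', L, M) / n
          den = np.einsum('rij...,rij...->...', M, M) / n
          return num / np.maximum(den, 1e-30)   # clip to avoid division by zero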

  13. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    PubMed

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
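
    A minimal version of such an extrapolation is a joint least-squares fit in the leading finite-time and finite-size corrections. A sketch assuming 1/τ and 1/N leading behavior (the paper derives the actual scaling forms; this code only illustrates the extrapolation step):

      import numpy as np

      def extrapolate_ldf(tau, N, psi):
          """Fit psi(tau, N) ~ psi_inf + a/tau + b/N to population-dynamics
          estimates of a large deviation function and return the extrapolated
          infinite-time, infinite-size value psi_inf."""
          A = np.column_stack([np.ones_like(psi), 1.0 / tau, 1.0 / N])
          coef, *_ = np.linalg.lstsq(A, psi, rcond=None)
          return coef[0]

      # Synthetic check: data built with psi_inf = 1 is recovered exactly.
      tau = np.array([100., 200., 400., 100., 200., 400.])
      N   = np.array([100., 100., 100., 1000., 1000., 1000.])
      psi = 1.0 + 5.0 / tau + 20.0 / N
      print(extrapolate_ldf(tau, N, psi))   # ~1.0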

  14. Simulations of Micropumps Based on Tilted Flexible Fibers

    NASA Astrophysics Data System (ADS)

    Hancock, Matthew; Elabbasi, Nagi; Demirel, Melik

    2015-11-01

    Pumping liquids at low Reynolds numbers is challenging because of the principle of reversibility. We report here a class of microfluidic pump designs based on tilted flexible structures that combines the concepts of cilia (flexible elastic elements) and rectifiers (e.g., Tesla valves, check valves). We demonstrate proof-of-concept with 2D and 3D fluid-structure interaction (FSI) simulations in COMSOL Multiphysics® of micropumps consisting of a source for oscillatory fluidic motion, e.g. a piston, and a channel lined with tilted flexible rods or sheets to provide rectification. When flow is against the rod tilt direction, the rods bend backward, narrowing the channel and increasing flow resistance; when flow is in the direction of rod tilt, the rods bend forward, widening the channel and decreasing flow resistance. The 2D and 3D simulations involve moving meshes whose quality is maintained by prescribing the mesh displacement on guide surfaces positioned on either side of each flexible structure. The prescribed displacement depends on structure bending and maintains mesh quality even for large deformations. Simulations demonstrate effective pumping even at Reynolds numbers as low as 0.001. Because rod rigidity may be specified independently of Reynolds number, in principle, rod rigidity may be reduced to enable pumping at arbitrarily low Reynolds numbers.

  15. An implicit turbulence model for low-Mach Roe scheme using truncated Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Li, Chung-Gang; Tsubokura, Makoto

    2017-09-01

    The original Roe scheme is well known to be unsuitable in simulations of turbulence because the dissipation that develops is unsatisfactory. Simulations of turbulent channel flow for Reτ = 180 show that, with the 'low-Mach-fix for Roe' (LMRoe) proposed by Rieper [J. Comput. Phys. 230 (2011) 5263-5287], the Roe dissipation term potentially equates the simulation to an implicit large eddy simulation (ILES) at low Mach number. Thus inspired, a new implicit turbulence model for low Mach numbers is proposed that controls the Roe dissipation term appropriately. Referred to as the automatic dissipation adjustment (ADA) model, the method of solution follows procedures developed previously for the truncated Navier-Stokes (TNS) equations and, without tuning of parameters, uses the energy ratio as a criterion to automatically adjust the upwind dissipation. Simulations of turbulent channel flow at two different Reynolds numbers and of the Taylor-Green vortex were performed to validate the ADA model. In simulations of turbulent channel flow for Reτ = 180 at a Mach number of 0.05 using the ADA model, the mean velocity and turbulence intensities are in excellent agreement with DNS results. With Reτ = 950 at a Mach number of 0.1, the results are also consistent with DNS results, indicating that the ADA model is also reliable at higher Reynolds numbers. In simulations of the Taylor-Green vortex at Re = 3000, the kinetic energy is consistent with the power law of decaying turbulence with a -1.2 exponent for LMRoe both with and without the ADA model. However, with the ADA model, the dissipation rate is significantly improved near the dissipation peak, and the peak duration is also more accurately captured. With a firm basis in TNS theory, applicability at higher Reynolds numbers, and ease of implementation since no extra terms are needed, the ADA model promises to become a useful tool for turbulence modeling.

  16. Effect of grid resolution on large eddy simulation of wall-bounded turbulence

    NASA Astrophysics Data System (ADS)

    Rezaeiravesh, S.; Liefvendahl, M.

    2018-05-01

    The effect of grid resolution on a large eddy simulation (LES) of a wall-bounded turbulent flow is investigated. A channel flow simulation campaign involving a systematic variation of the streamwise (Δx) and spanwise (Δz) grid resolution is used for this purpose. The main friction-velocity-based Reynolds number investigated is 300. Near the walls, the grid cell size is determined by the frictional scaling, Δx+ and Δz+, with strongly anisotropic cells and a first Δy+ ~ 1, thus aiming for wall-resolving LES. Results are compared to direct numerical simulations, and several quality measures are investigated, including the error in the predicted mean friction velocity and the error in cross-channel profiles of flow statistics. To reduce the total number of channel flow simulations, techniques from the framework of uncertainty quantification are employed. In particular, a generalized polynomial chaos expansion (gPCE) is used to create metamodels for the errors over the allowed parameter ranges. The differing behavior of the different quality measures is demonstrated and analyzed. It is shown that the friction velocity and the profiles of the velocity and Reynolds stress tensor are most sensitive to Δz+, while the error in the turbulent kinetic energy is mostly influenced by Δx+. Recommendations for grid resolution requirements are given, together with the quantification of the resulting predictive accuracy. The sensitivity of the results to the subgrid-scale (SGS) model and varying Reynolds number is also investigated. All simulations are carried out with the second-order accurate finite-volume-based solver OpenFOAM. It is shown that the choice of numerical scheme for the convective term significantly influences the error portraits. It is emphasized that the proposed methodology, involving the gPCE, can be applied to other modeling approaches, i.e., other numerical methods and the choice of SGS model.
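
    A gPCE-style metamodel of an error measure over the (Δx+, Δz+) range can be built by least squares on a tensorized Legendre basis. A generic sketch; the basis degree and the rescaling are illustrative, not the authors' setup:

      import numpy as np
      from numpy.polynomial.legendre import legvander2d

      def fit_error_metamodel(dx_plus, dz_plus, err, deg=(3, 3)):
          """Least-squares surrogate err ~ f(dx+, dz+) on a tensorized Legendre
          basis, with inputs rescaled to [-1, 1]. Returns the coefficient vector."""
          sx = 2 * (dx_plus - dx_plus.min()) / np.ptp(dx_plus) - 1
          sz = 2 * (dz_plus - dz_plus.min()) / np.ptp(dz_plus) - 1
          V = legvander2d(sx, sz, deg)                 # design matrix
          coef, *_ = np.linalg.lstsq(V, err, rcond=None)
          return coef

      def eval_metamodel(coef, sx, sz, deg=(3, 3)):
          """Evaluate the surrogate at rescaled points sx, sz in [-1, 1]."""
          return legvander2d(sx, sz, deg) @ coef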

  17. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.

    1985-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.

  18. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1987-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
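
    The Monte Carlo side of such an analysis fits in a few lines: draw a stream of bank accesses, hold each accessed bank reserved for a fixed number of ticks, and count the stalls. A toy sketch; uniform random bank choice is an assumption, and the actual reports also treat structured vector strides:

      import random

      def bank_contention(n_banks, reservation, n_access=100_000, seed=1):
          """Monte Carlo estimate of memory-bank contention: each access picks a
          bank uniformly at random; a bank stays reserved for `reservation` CPU
          ticks after an access, and the stream stalls until its bank is free.
          Returns average ticks per access (1.0 means no contention)."""
          rng = random.Random(seed)
          free_at = [0] * n_banks
          t = 0
          for _ in range(n_access):
              bank = rng.randrange(n_banks)
              t = max(t + 1, free_at[bank])   # stall while the bank is reserved
              free_at[bank] = t + reservation
          return t / n_access

      # Slower chips (longer reservation) or fewer banks yield more contention:
      print(bank_contention(n_banks=16, reservation=8))
      print(bank_contention(n_banks=64, reservation=8))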

  19. Stratified turbulence diagnostics for high-Reynolds-number momentum wakes

    NASA Astrophysics Data System (ADS)

    Diamessis, Peter; Zhou, Qi

    2017-11-01

    We analyze a large-eddy simulation (LES) dataset of the turbulent wake behind a sphere of diameter D translating at speed U in a linearly stratified Boussinesq fluid with buoyancy frequency N. These simulations are performed at Reynolds numbers Re ≡ UD/ν ∈ {5×10³, 10⁵, 4×10⁵} and various Froude numbers Fr ≡ 2U/(ND). The recently obtained data at Re = 4×10⁵, the highest Re attained so far in either simulation or laboratory, and Fr ∈ {4, 16} enable us to systematically investigate the effects of Reynolds number on this prototypical localized stratified turbulent shear flow. Our analysis focuses on the time evolution of various diagnostics of stratified turbulence, such as the horizontal and vertical integral length scales, turbulent kinetic energy and its dissipation rate ε, and the local rate of shear between the spontaneously formed layers of vorticity within the larger-scale quasi-horizontal flow structures. This leads to a discussion of the transitions between distinct stratified flow regimes (Brethouwer et al. 2007) in the appropriately defined phase diagram, and we highlight the dynamical role of the Gibson number Gi = ε/(νN²) and its dependence on the body-based Reynolds number Re. ONR Grants N00014-13-1-0665 and N00014-15-1-2513.
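
    For reference, the diagnostics named above are straightforward to evaluate; the snippet below computes the body-based Reynolds number, the Froude number Fr = 2U/(ND), and the Gibson number Gi = ε/(νN²) from placeholder flow values (the numbers are illustrative, not taken from the dataset).

        # Worked example of the nondimensional diagnostics (placeholder values).
        nu = 1.0e-6               # kinematic viscosity of water [m^2/s]
        D, U, N = 0.1, 0.4, 1.0   # sphere diameter [m], speed [m/s], buoyancy frequency [1/s]
        eps = 2.0e-5              # turbulent kinetic energy dissipation rate [m^2/s^3]

        Re = U * D / nu           # body-based Reynolds number: 4.0e4
        Fr = 2.0 * U / (N * D)    # Froude number Fr = 2U/(ND): 8.0
        Gi = eps / (nu * N**2)    # Gibson number; Gi >> 1 marks active stratified turbulence

        print(f"Re = {Re:.2e}, Fr = {Fr:.1f}, Gi = {Gi:.0f}")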

  20. Consequences of high effective Prandtl number on solar differential rotation and convective velocity

    NASA Astrophysics Data System (ADS)

    Karak, Bidya Binay; Miesch, Mark; Bekki, Yuto

    2018-04-01

    Observations suggest that the large-scale convective velocities obtained by solar convection simulations might be over-estimated (convective conundrum). One plausible solution to this could be the small-scale dynamo which cannot be fully resolved by global simulations. The small-scale Lorentz force suppresses the convective motions and also the turbulent mixing of entropy between upflows and downflows, leading to a large effective Prandtl number (Pr). We explore this idea in three-dimensional global rotating convection simulations at different thermal conductivity (κ), i.e., at different Pr. In agreement with previous non-rotating simulations, the convective velocity is reduced with the increase of Pr as long as the thermal conductive flux is negligible. A subadiabatic layer is formed near the base of the convection zone due to continuous deposition of low entropy plumes in low-κ simulations. The most interesting result of our low-κ simulations is that the convective motions are accompanied by a change in the convection structure that is increasingly influenced by small-scale plumes. These plumes tend to transport angular momentum radially inward and thus establish an anti-solar differential rotation, in striking contrast to the solar rotation profile. If such low diffusive plumes, driven by the radiative-surface cooling, are present in the Sun, then our results cast doubt on the idea that a high effective Pr may be a viable solution to the solar convective conundrum. Our study also emphasizes that any resolution of the conundrum that relies on the downward plumes must take into account the angular momentum transport and heat transport.

  1. Simulation of turbulent separated flows using a novel, evolution-based, eddy-viscosity formulation

    NASA Astrophysics Data System (ADS)

    Castellucci, Paul

    Currently, there exists a lack of confidence in the computational simulation of turbulent separated flows at large Reynolds numbers. The most accurate methods available are too computationally costly to use in engineering applications. Thus, inexpensive models, developed using the Reynolds-averaged Navier-Stokes (RANS) equations, are often extended beyond their applicability. Although these methods will often reproduce integrated quantities within engineering tolerances, such metrics are often insensitive to details within a separated wake, and therefore, poor indicators of simulation fidelity. Using concepts borrowed from large-eddy simulation (LES), a two-equation RANS model is modified to simulate the turbulent wake behind a circular cylinder. This modification involves the computation of one additional scalar field, adding very little to the overall computational cost. When properly inserted into the baseline RANS model, this modification mimics LES in the separated wake, yet reverts to the unmodified form at the cylinder surface. In this manner, superior predictive capability may be achieved without the additional cost of fine spatial resolution associated with LES near solid boundaries. Simulations using modified and baseline RANS models are benchmarked against both LES and experimental data for a circular cylinder wake at Reynolds number 3900. In addition, the computational tool used in this investigation is subject to verification via the Method of Manufactured Solutions. Post-processing of the resultant flow fields includes both mean value and triple-decomposition analysis. These results reveal substantial improvements using the modified system and appear to drive the baseline wake solution toward that of LES, as intended.

  2. Number-squeezed and fragmented states of strongly interacting bosons in a double well

    NASA Astrophysics Data System (ADS)

    Corbo, Joel C.; DuBois, Jonathan L.; Whaley, K. Birgitta

    2017-11-01

    We present a systematic study of the phenomena of number squeezing and fragmentation for a repulsive Bose-Einstein condensate (BEC) in a three-dimensional double-well potential over a range of interaction strengths and barrier heights, including geometries that exhibit appreciable overlap in the one-body wave functions localized in the left and right wells. We compute the properties of the condensate with numerically exact, full-dimensional path-integral ground-state (PIGS) quantum Monte Carlo simulations and compare with results obtained from using two- and eight-mode truncated basis models. The truncated basis models are found to agree with the numerically exact PIGS simulations for weak interactions, but fail to correctly predict the amount of number squeezing and fragmentation exhibited by the PIGS simulations for strong interactions. We find that both number squeezing and fragmentation of the BEC show nonmonotonic behavior at large values of interaction strength a . The number squeezing shows a universal scaling with the product of number of particles and interaction strength (N a ), but no such universal behavior is found for fragmentation. Detailed analysis shows that the introduction of repulsive interactions not only suppresses number fluctuations to enhance number squeezing, but can also enhance delocalization across wells and tunneling between wells, each of which may suppress number squeezing. This results in a dynamical competition whose resolution shows a complex dependence on all three physical parameters defining the system: interaction strength, number of particles, and barrier height.

  3. A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.

    1999-01-01

    A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.
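
    The remedy reported above, retaining only a few percent of the standard Roe dissipation, can be illustrated on a toy problem. The sketch below advances 1D linear advection with a MacCormack step plus a Roe-type interface dissipation scaled by a factor alpha; alpha = 1 recovers the standard amount, while alpha ≈ 0.03-0.05 corresponds to the level the study found adequate. This is a minimal stand-in under those assumptions, not the paper's compressible finite-volume solver.

        import numpy as np

        def maccormack_step(u, a=1.0, lam=0.5, alpha=0.04):
            """One MacCormack step of 1D linear advection (lam = dt/dx) with an
            added Roe-type upwind dissipation scaled by `alpha`."""
            # Predictor (forward difference) and corrector (backward difference).
            up = u - lam * a * (np.roll(u, -1) - u)
            uc = 0.5 * (u + up - lam * a * (up - np.roll(up, 1)))
            # Scaled Roe flux-difference-splitting dissipation at interfaces i+1/2.
            diss = 0.5 * alpha * abs(a) * (np.roll(u, -1) - u)
            return uc + lam * (diss - np.roll(diss, 1))

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u = np.exp(-200.0 * (x - 0.5) ** 2)   # smooth pulse on a periodic domain
        for _ in range(200):
            u = maccormack_step(u)            # pulse advects with little smearing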

  4. BFS Simulation and Experimental Analysis of the Effect of Ti Additions on the Structure of NiAl

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Noebe, Ronald D.; Ferrante, John; Garg, Anita; Honecy, Frank S.; Amador, Carlos

    1999-01-01

    The Bozzolo-Ferrante-Smith (BFS) method for alloy energetics is applied to the study of ternary additions to NiAl. A description of the method and its application to alloy design is given. Two different approaches are used in the analysis of the effect of Ti additions to NiAl. First, a thorough analytical study is performed, where the energy of formation, lattice parameter and bulk modulus are calculated for a large number of possible atomic distributions of Ni, Al and Ti. Substitutional site preference schemes and formation of precipitates are thus predicted and analyzed. The second approach used consists of the determination of temperature effects on the final results, as obtained by performing a number of large scale numerical simulations using the Monte Carlo-Metropolis procedure and BFS for the calculation of the energy at every step in the simulation. The results indicate a sharp preference of Ti for Al sites in Ni-rich NiAl alloys and the formation of ternary Heusler precipitates beyond the predicted solubility limit of 5 at. % Ti. Experimental analysis of three Ni-Al-Ti alloys confirms the theoretical predictions.
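
    A minimal sketch of the Monte Carlo-Metropolis procedure described above, with a toy nearest-neighbour pair energy standing in for the BFS energetics; the composition, energy function, and temperature are illustrative assumptions.

        import math, random

        def toy_energy(lat):
            """Toy pair energy on a 1D ring penalizing like neighbours
            (a stand-in for the BFS energy of an atomic configuration)."""
            return sum(1.0 if lat[k] == lat[(k + 1) % len(lat)] else -1.0
                       for k in range(len(lat)))

        def metropolis(lattice, energy_of, kT=0.1, steps=20000, seed=0):
            """Metropolis site-swap Monte Carlo: propose swapping the species on
            two sites, accept with probability min(1, exp(-dE/kT))."""
            rng = random.Random(seed)
            E = energy_of(lattice)
            n = len(lattice)
            for _ in range(steps):
                i, j = rng.randrange(n), rng.randrange(n)
                if lattice[i] == lattice[j]:
                    continue
                lattice[i], lattice[j] = lattice[j], lattice[i]
                dE = energy_of(lattice) - E
                if dE <= 0.0 or rng.random() < math.exp(-dE / kT):
                    E += dE                                          # accept
                else:
                    lattice[i], lattice[j] = lattice[j], lattice[i]  # reject: undo
            return lattice, E

        # Ni-rich Ni-Al alloy with a 5 at.% Ti addition (composition illustrative).
        lat = ['Ni'] * 50 + ['Al'] * 45 + ['Ti'] * 5
        random.Random(1).shuffle(lat)
        lat, E = metropolis(lat, toy_energy)
        print(E)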

  5. Precision of EM Simulation Based Wireless Location Estimation in Multi-Sensor Capsule Endoscopy

    PubMed Central

    Ye, Yunxing; Aisha, Ain-Ul; Swar, Pranay; Pahlavan, Kaveh

    2018-01-01

    In this paper, we compute and examine two-way localization limits for an RF endoscopy pill as it passes through an individual's gastrointestinal (GI) tract. We obtain finite-difference time-domain and finite element method-based simulation results for position assessment employing time of arrival (TOA). By means of a 3-D human body representation from a full-wave simulation software and lognormal models for TOA propagation from implant organs to the body surface, we calculate bounds on location estimators in three digestive organs: the stomach, small intestine, and large intestine. We present an investigation of the causes influencing localization precision, consisting of a range of organ properties, peripheral sensor array arrangements, the number of pills in cooperation, and the random variations in transmit power of sensor nodes. We also perform a localization precision investigation for the situation where the transmission signal of the antenna is arbitrary with a known probability distribution. The computational solver outcome shows that the number of receiver antennas on the exterior of the body has a higher impact on the precision of the location than the number of capsules in collaboration within the GI region. The large intestine is influenced the most by the transmitter power probability distribution. PMID:29651364
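
    The TOA-based position estimation underlying such bounds can be sketched as a nonlinear least-squares problem: each sensor's arrival time gives a range, and the capsule position minimizes the range residuals. The snippet below assumes scipy and uses placeholder sensor positions, propagation speed, and timing noise rather than the paper's lognormal propagation model.

        import numpy as np
        from scipy.optimize import least_squares

        c = 3e8 / 8.0   # illustrative in-body propagation speed [m/s] (placeholder)

        # Hypothetical body-surface sensor positions [m] and true capsule position.
        sensors = np.array([[0.15, 0.0, 0.0], [-0.15, 0.05, 0.1],
                            [0.0, 0.2, 0.05], [0.05, -0.15, 0.15]])
        p_true = np.array([0.02, 0.03, 0.08])

        rng = np.random.default_rng(0)
        toa = np.linalg.norm(sensors - p_true, axis=1) / c
        toa += rng.normal(0.0, 2e-11, size=toa.shape)   # timing noise (placeholder)

        def residuals(p):
            """Difference between predicted and TOA-derived ranges."""
            return np.linalg.norm(sensors - p, axis=1) - c * toa

        est = least_squares(residuals, x0=np.zeros(3)).x
        print(est, np.linalg.norm(est - p_true))        # estimate and its error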

  6. Simulation of interaction between ground water in an alluvial aquifer and surface water in a large braided river

    USGS Publications Warehouse

    Leake, S.A.; Lilly, M.R.

    1995-01-01

    The Fairbanks, Alaska, area has many contaminated sites in a shallow alluvial aquifer. A ground-water flow model is being developed using the MODFLOW finite-difference ground-water flow model program with the River Package. The modeled area is discretized in the horizontal dimensions into 118 rows and 158 columns of approximately 150-meter square cells. The fine grid spacing has the advantage of providing needed detail at the contaminated sites and surface-water features that bound the aquifer. However, the fine spacing of cells adds difficulty to simulating interaction between the aquifer and the large, braided Tanana River. In particular, the assignment of a river head is difficult if cells are much smaller than the river width. This was solved by developing a procedure for interpolating and extrapolating river head using a river distance function. Another problem is that future transient simulations would require excessive numbers of input records using the current version of the River Package. The proposed solution to this problem is to modify the River Package to linearly interpolate river head for time steps within each stress period, thereby reducing the number of stress periods required.
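
    The head-assignment step described above can be sketched as one-dimensional interpolation along a river-distance coordinate. In the sketch below, np.interp holds the end values constant beyond the surveyed reach as a crude form of extrapolation; all station distances and stages are illustrative placeholders.

        import numpy as np

        # Measured stage [m] at a few stations, indexed by distance along the river [m].
        station_dist = np.array([0.0, 2500.0, 7100.0, 12400.0])
        station_head = np.array([135.2, 133.8, 131.6, 129.9])

        # River distance assigned to each model cell crossed by the braided channel
        # (in practice computed from the cell's position along the channel centerline).
        cell_dist = np.array([150.0, 300.0, 450.0, 3200.0, 9800.0, 12550.0])

        # Linear interpolation within the reach; end values held constant beyond it.
        cell_head = np.interp(cell_dist, station_dist, station_head)
        print(cell_head)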

  7. Precision of EM Simulation Based Wireless Location Estimation in Multi-Sensor Capsule Endoscopy.

    PubMed

    Khan, Umair; Ye, Yunxing; Aisha, Ain-Ul; Swar, Pranay; Pahlavan, Kaveh

    2018-01-01

    In this paper, we compute and examine two-way localization limits for an RF endoscopy pill as it passes through an individual's gastrointestinal (GI) tract. We obtain finite-difference time-domain and finite element method-based simulation results for position assessment employing time of arrival (TOA). By means of a 3-D human body representation from a full-wave simulation software and lognormal models for TOA propagation from implant organs to the body surface, we calculate bounds on location estimators in three digestive organs: the stomach, small intestine, and large intestine. We present an investigation of the causes influencing localization precision, consisting of a range of organ properties, peripheral sensor array arrangements, the number of pills in cooperation, and the random variations in transmit power of sensor nodes. We also perform a localization precision investigation for the situation where the transmission signal of the antenna is arbitrary with a known probability distribution. The computational solver outcome shows that the number of receiver antennas on the exterior of the body has a higher impact on the precision of the location than the number of capsules in collaboration within the GI region. The large intestine is influenced the most by the transmitter power probability distribution.

  8. Numerical investigation of supersonic turbulent boundary layers with high wall temperature

    NASA Technical Reports Server (NTRS)

    Guo, Y.; Adams, N. A.

    1994-01-01

    A direct numerical approach has been developed to simulate supersonic turbulent boundary layers. The mean flow quantities are obtained by solving the parabolized Reynolds-averaged Navier-Stokes equations (globally). Fluctuating quantities are computed locally with a temporal direct numerical simulation approach, in which nonparallel effects of boundary layers are partially modeled. Preliminary numerical results obtained at the free-stream Mach numbers 3, 4.5, and 6 with hot-wall conditions are presented. Approximately 5 million grid points are used in all three cases. The numerical results indicate that compressibility effects on turbulent kinetic energy, in terms of dilatational dissipation and pressure-dilatation correlation, are small. Due to the hot-wall conditions the results show significant low Reynolds number effects and large streamwise streaks. Further simulations with a bigger computational box or a cold-wall condition are desirable.

  9. Analysis of Numerical Simulation Database for Pressure Fluctuations Induced by High-Speed Turbulent Boundary Layers

    NASA Technical Reports Server (NTRS)

    Duan, Lian; Choudhari, Meelan M.

    2014-01-01

    Direct numerical simulations (DNS) of a turbulent boundary layer with nominal freestream Mach number 6 and Reynolds number Re_τ ≈ 460 are conducted at two wall temperatures (Tw/Tr = 0.25, 0.76) to investigate the generated pressure fluctuations and their dependence on wall temperature. Simulations indicate that the influence of wall temperature on pressure fluctuations is largely limited to the near-wall region, with the characteristics of wall-pressure fluctuations showing a strong temperature dependence. Wall temperature has little influence on the propagation speed of the freestream pressure signal. The freestream radiation intensity compares well between wall-temperature cases when normalized by the local wall shear; the propagation speed of the freestream pressure signal and the orientation of the radiation wave front show little dependence on the wall temperature.

  10. Full numerical simulation of coflowing, axisymmetric jet diffusion flames

    NASA Technical Reports Server (NTRS)

    Mahalingam, S.; Cantwell, B. J.; Ferziger, J. H.

    1990-01-01

    The near field of a non-premixed flame in a low speed, coflowing axisymmetric jet is investigated numerically using full simulation. The time-dependent governing equations are solved by a second-order, explicit finite difference scheme and a single-step, finite rate model is used to represent the chemistry. Steady laminar flame results show the correct dependence of flame height on Peclet number and reaction zone thickness on Damkoehler number. Forced simulations reveal a large difference in the instantaneous structure of scalar dissipation fields between nonbuoyant and buoyant cases. In the former, the scalar dissipation marks intense reaction zones, supporting the flamelet concept; however, results suggest that flamelet modeling assumptions need to be reexamined. In the latter, this correspondence breaks down, suggesting that modifications to the flamelet modeling approach are needed in buoyant turbulent diffusion flames.

  11. Synthetic drought event sets: thousands of meteorological drought events for risk-based management under present and future conditions

    NASA Astrophysics Data System (ADS)

    Guillod, Benoit P.; Massey, Neil; Otto, Friederike E. L.; Allen, Myles R.; Jones, Richard; Hall, Jim W.

    2016-04-01

    Droughts and related water scarcity can have large impacts on societies and consist of interactions between a number of natural and human factors. Meteorological conditions are usually the first natural trigger of droughts, and climate change is expected to impact these and thereby the frequency and intensity of the events. However, extreme events such as droughts are, by definition, rare, and accurately quantifying the risk related to such events is therefore difficult. The MaRIUS project (Managing the Risks, Impacts and Uncertainties of drought and water Scarcity) aims at quantifying the risks associated with droughts in the UK under present and future conditions. To do so, a large number of drought events, from climate model simulations downscaled at 25km over Europe, are being fed into hydrological models of various complexity and used for the estimation of drought risk associated with human and natural systems, including impacts on the economy, industry, agriculture, terrestrial and aquatic ecosystems, and socio-cultural aspects. Here, we present the hydro-meteorological drought event set that has been produced by weather@home [1] for MaRIUS. Using idle processor time on volunteers' computers around the world, we have run a very large number (10'000s) of Global Climate Model (GCM) simulations, downscaled at 25km over Europe by a nested Regional Climate Model (RCM). Simulations include the past 100 years as well as two future horizons (2030s and 2080s), and provide a large number of sequences of spatio-temporally consistent weather, which are consistent with the boundary forcing such as the ocean, greenhouse gases and solar forcing. The drought event set for use in impact studies is constructed by extracting sequences of dry conditions from these model runs, leading to several thousand drought events. In addition to describing methodological and validation aspects of the synthetic drought event sets, we provide insights into drought risk in the UK, its meteorological drivers, and how it can be expected to change in the future. Finally, we assess the applicability of this methodology to other regions. [1] Massey, N. et al., 2014, Q. J. R. Meteorol. Soc.

  12. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solutes, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solute, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results equivalent to those of the more costly radial distribution function method.
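
    The two estimators being compared can be sketched compactly: a running integral G(R) = 4π∫₀ᴿ (g(r) − 1) r² dr over the radial distribution function, versus the same-species fluctuation formula for an open subvolume V, G = V(⟨N²⟩ − ⟨N⟩²)/⟨N⟩² − V/⟨N⟩. The sketch below uses a synthetic g(r) for the running-integral route and ideal-gas-like Poisson counts for the fluctuation route (for which G ≈ 0); it is illustrative only.

        import numpy as np

        # Route 1: running Kirkwood-Buff integral over a synthetic g(r).
        r = np.linspace(0.01, 3.0, 600)                     # pair separation [nm]
        g = 1.0 + 0.8 * np.exp(-(r - 0.45) ** 2 / 0.01) - np.exp(-r / 0.25)
        integrand = 4.0 * np.pi * (g - 1.0) * r ** 2
        G_running = np.cumsum(integrand) * (r[1] - r[0])    # G(R) vs upper limit R

        # Route 2: KB integral from particle-number fluctuations in an open subvolume.
        def kbi_from_fluctuations(counts, volume):
            """Same-species case: G = V*(<N^2> - <N>^2)/<N>^2 - V/<N>."""
            N = np.asarray(counts, dtype=float)
            return volume * N.var() / N.mean() ** 2 - volume / N.mean()

        counts = np.random.default_rng(1).poisson(50.0, size=20000)  # ideal-gas-like
        print(G_running[-1], kbi_from_fluctuations(counts, volume=1.0))  # latter ~ 0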

  13. Auralization of concert hall acoustics using finite difference time domain methods and wave field synthesis

    NASA Astrophysics Data System (ADS)

    Hochgraf, Kelsey

    Auralization methods have been used for a long time to simulate the acoustics of a concert hall for different seat positions. The goal of this thesis was to apply the concept of auralization to a larger audience area that the listener could walk through to compare differences in acoustics for a wide range of seat positions. For this purpose, the acoustics of Rensselaer's Experimental Media and Performing Arts Center (EMPAC) Concert Hall were simulated to create signals for a 136 channel wave field synthesis (WFS) system located at Rensselaer's Collaborative Research Augmented Immersive Virtual Environment (CRAIVE) Laboratory. By allowing multiple people to dynamically experience the concert hall's acoustics at the same time, this research gained perspective on what is important for achieving objective accuracy and subjective plausibility in an auralization. A finite difference time domain (FDTD) simulation on a three-dimensional face-centered cubic grid, combined at a crossover frequency of 800 Hz with a CATT-Acoustic(TM) simulation, was found to have a reverberation time, direct to reverberant sound energy ratio, and early reflection pattern that more closely matched measured data from the hall compared to a CATT-Acoustic(TM) simulation and other hybrid simulations. In the CRAIVE lab, nine experienced listeners found all hybrid auralizations (with varying source location, grid resolution, crossover frequency, and number of loudspeakers) to be more perceptually plausible than the CATT-Acoustic(TM) auralization. The FDTD simulation required two days to compute, while the CATT-Acoustic(TM) simulation required three separate TUCT(TM) computations, each taking four hours, to accommodate the large number of receivers. Given the perceptual advantages realized with WFS for auralization of a large, inhomogeneous sound field, it is recommended that hybrid simulations be used in the future to achieve more accurate and plausible auralizations. Predictions are made for a parallelized version of the simulation code that could achieve such auralizations in less than one hour, making the tool practical for everyday application.

  14. Large eddy simulation on buoyant gas diffusion near building

    NASA Astrophysics Data System (ADS)

    Tominaga, Yoshihide; Murakami, Shuzo; Mochida, Akashi

    1992-12-01

    Large eddy simulations of the turbulent diffusion of buoyant gases near a building model are carried out for three cases in which the densimetric Froude number (Frd) is specified as −8.6, zero, and 8.6, respectively. The accuracy of these simulations is examined by comparing the numerically predicted results with results from wind tunnel experiments. Two types of sub-grid scale models, the standard Smagorinsky model (type 1) and the modified Smagorinsky model (type 2), are compared. The former does not take account of the production of subgrid energy by the buoyancy force, whereas the latter incorporates this effect. The modified model (type 2) gives more accurate results than the standard Smagorinsky model (type 1) in terms of the distributions of the mean concentration ⟨C⟩ and the concentration variance ⟨C′²⟩.

  15. Coupling vibration research on Vehicle-bridge system

    NASA Astrophysics Data System (ADS)

    Zhou, Jiguo; Wang, Guihua

    2018-01-01

    A vehicle-bridge coupling system forms when a vehicle runs on a bridge, and strong vehicle vibration degrades both driving comfort and driving safety. In this paper, a three-dimensional vehicle-bridge system with a biaxial, seven-degree-of-freedom vehicle model is established on the basis of finite element numerical simulation. Transient finite element analysis is used to simulate the coupled vibration of the vehicle-bridge system. The dynamic responses of the vehicle and the bridge are then analyzed for different numbers of vehicles running on the bridge, yielding the variation of the vertical vibration of the car body and the bridge, as well as of the contact force between the wheels and the bridge deck. The results provide a reference for analyzing vehicles running on large-span cable-supported bridges.

  16. Scale disparity and spectral transfer in anisotropic numerical turbulence

    NASA Technical Reports Server (NTRS)

    Zhou, YE; Yeung, P. K.; Brasseur, James G.

    1994-01-01

    To study the effect of cancellations within long-range interactions on local isotropy at the small scales, we calculate explicitly the degree of cancellation in distant interactions in the simulations of Yeung & Brasseur and Yeung, Brasseur & Wang using the single scale-disparity parameter 's' developed by Zhou. In the simulations, initially isotropic simulated turbulence was subjected to coherent anisotropic forcing at the large scales, and the smallest scales were found to become anisotropic as a consequence of direct large-small scale couplings. We find that the marginally distant interactions in the simulation do not cancel out under summation and that the development of small-scale anisotropy is indeed a direct consequence of the distant triadic group, as argued by Yeung et al. A reduction of anisotropy at later times occurs as a result of the isotropizing influences of more local energy-cascading triadic interactions. Nevertheless, the local-to-nonlocal triadic group persists as an isotropizing influence at later times. We find that, whereas long-range interactions, in general, contribute little to net energy transfer into or out of a high wavenumber shell k, the anisotropic transfer of component energy within the shell increases with increasing scale separation. These results are consistent with results by Zhou, and Brasseur & Wei, and suggest that the anisotropizing influences of long-range interactions should persist to higher Reynolds numbers. The residual effect of the forced distant group in this low-Reynolds-number simulation is found to be forward-cascading, on average.

  17. cuTauLeaping: A GPU-Powered Tau-Leaping Stochastic Simulator for Massive Parallel Analyses of Biological Systems

    PubMed Central

    Besozzi, Daniela; Pescini, Dario; Mauri, Giancarlo

    2014-01-01

    Tau-leaping is a stochastic simulation algorithm that efficiently reconstructs the temporal evolution of biological systems, modeled according to the stochastic formulation of chemical kinetics. The analysis of dynamical properties of these systems in physiological and perturbed conditions usually requires the execution of a large number of simulations, leading to high computational costs. Since each simulation can be executed independently from the others, a massive parallelization of tau-leaping can bring to relevant reductions of the overall running time. The emerging field of General Purpose Graphic Processing Units (GPGPU) provides power-efficient high-performance computing at a relatively low cost. In this work we introduce cuTauLeaping, a stochastic simulator of biological systems that makes use of GPGPU computing to execute multiple parallel tau-leaping simulations, by fully exploiting the Nvidia's Fermi GPU architecture. We show how a considerable computational speedup is achieved on GPU by partitioning the execution of tau-leaping into multiple separated phases, and we describe how to avoid some implementation pitfalls related to the scarcity of memory resources on the GPU streaming multiprocessors. Our results show that cuTauLeaping largely outperforms the CPU-based tau-leaping implementation when the number of parallel simulations increases, with a break-even directly depending on the size of the biological system and on the complexity of its emergent dynamics. In particular, cuTauLeaping is exploited to investigate the probability distribution of bistable states in the Schlögl model, and to carry out a bidimensional parameter sweep analysis to study the oscillatory regimes in the Ras/cAMP/PKA pathway in S. cerevisiae. PMID:24663957
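
    A minimal sketch of the tau-leaping update at the core of such a simulator: over a leap of length τ, each reaction channel fires a Poisson-distributed number of times with mean a_j(x)τ. Vectorizing the state over thousands of independent replicates mimics, in spirit, the massive parallelism exploited on the GPU; the reaction system and rate constants below are toy assumptions, not the Schlögl or Ras/cAMP/PKA models.

        import numpy as np

        # Toy system: A + A <-> B, with stoichiometry matrix V (reactions x species).
        V = np.array([[-2, +1],    # dimerization  A + A -> B
                      [+2, -1]])   # dissociation  B -> A + A
        c = np.array([5e-4, 0.2])  # stochastic rate constants (illustrative)

        def propensities(x):
            A, B = x[..., 0], x[..., 1]
            return np.stack([c[0] * A * (A - 1) / 2.0, c[1] * B], axis=-1)

        def tau_leap(x, tau, rng):
            """One tau-leaping step for a batch of states x (n_sims x 2): each
            channel fires k_j ~ Poisson(a_j * tau) times during the leap."""
            k = rng.poisson(propensities(x) * tau)   # firings per channel
            # Clamping negatives is a crude guard; real implementations reject
            # the leap or adapt tau instead.
            return np.maximum(x + k @ V, 0)

        rng = np.random.default_rng(0)
        x = np.tile([1000, 0], (4096, 1))            # 4096 parallel simulations
        for _ in range(200):
            x = tau_leap(x, tau=0.01, rng=rng)
        print(x.mean(axis=0))                        # ensemble means of A and B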

  18. Reduced order models for assessing CO2 impacts in shallow unconfined aquifers

    DOE PAGES

    Keating, Elizabeth H.; Harp, Dylan H.; Dai, Zhenxue; ...

    2016-01-28

    Risk assessment studies of potential CO2 sequestration projects consider many factors, including the possibility of brine and/or CO2 leakage from the storage reservoir. Detailed multiphase reactive transport simulations have been developed to predict the impact of such leaks on shallow groundwater quality; however, these simulations are computationally expensive and thus difficult to directly embed in a probabilistic risk assessment analysis. Here we present a process for developing computationally fast reduced-order models which emulate key features of the more detailed reactive transport simulations. A large ensemble of simulations that take into account uncertainty in aquifer characteristics and CO2/brine leakage scenarios was performed. Twelve simulation outputs of interest were used to develop response surfaces (RSs) using a MARS (multivariate adaptive regression splines) algorithm (Milborrow, 2015). A key part of this study is to compare different measures of ROM accuracy. We then show that for some computed outputs, MARS performs very well in matching the simulation data. The capability of the RS to predict simulation outputs for parameter combinations not used in RS development was tested using cross-validation. Again, for some outputs, these results were quite good. For other outputs, however, the method performs relatively poorly. Performance was best for predicting the volume of depressed-pH plumes, and was relatively poor for predicting organic and trace metal plume volumes. We believe several factors, including the non-linearity of the problem, the complexity of the geochemistry, and granularity in the simulation results, contribute to this varied performance. The reduced-order models were developed principally to be used in probabilistic performance analysis where a large range of scenarios are considered and ensemble performance is calculated. We demonstrate that they effectively predict the ensemble behavior. However, the performance of the RSs is much less accurate when used to predict time-varying outputs from a single simulation. If an analysis requires only a small number of scenarios to be investigated, computationally expensive physics-based simulations would likely provide more reliable results. Finally, if the aggregate behavior of a large number of realizations is the focus, as will be the case in probabilistic quantitative risk assessment, the methodology presented here is relatively robust.
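
    The fit-then-cross-validate workflow described above can be sketched generically. Since MARS is not part of scikit-learn, the snippet below substitutes a degree-2 polynomial response surface as a stand-in surrogate; the inputs, output function, and scores are illustrative placeholders, not the study's ensemble.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        # Hypothetical ensemble: inputs scaled to [0, 1] (e.g. permeability, leak
        # rate, ...); output stands in for a simulated plume-volume metric.
        rng = np.random.default_rng(0)
        X = rng.uniform(size=(500, 4))
        y = 10.0 * X[:, 0] * X[:, 1] + np.sin(3.0 * X[:, 2]) + 0.1 * rng.normal(size=500)

        # Degree-2 polynomial response surface as a stand-in for the MARS algorithm.
        rs = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())

        # Cross-validation measures how well the RS predicts held-out simulations.
        scores = cross_val_score(rs, X, y, cv=5, scoring="r2")
        print("cross-validated R^2:", scores.mean().round(3))

        rs.fit(X, y)               # final surrogate for use in risk assessment
        print(rs.predict(X[:3]))   # fast emulation of three "new" scenarios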

  19. Experimental observation of a large low-frequency band gap in a polymer waveguide

    NASA Astrophysics Data System (ADS)

    Miniaci, Marco; Mazzotti, Matteo; Radzieński, Maciej; Kherraz, Nesrine; Kudela, Pawel; Ostachowicz, Wieslaw; Morvan, Bruno; Bosia, Federico; Pugno, Nicola M.

    2018-02-01

    The quest for large and low-frequency band gaps is one of the principal objectives pursued in a number of engineering applications, ranging from noise absorption to vibration control to seismic wave abatement. For this purpose, a plethora of complex architectures (including multi-phase materials) and multi-physics approaches have been proposed in the past, often involving difficulties in their practical realization. To address this issue, in this work we propose an easy-to-manufacture design able to open large, low-frequency complete Lamb band gaps exploiting a suitable arrangement of masses and stiffnesses produced by cavities in a monolithic material. The performance of the designed structure is evaluated by numerical simulations and confirmed by Scanning Laser Doppler Vibrometer (SLDV) measurements on an isotropic polyvinyl chloride plate in which a square ring region of cross-like cavities is fabricated. The full wave-field reconstruction clearly confirms the ability of even a limited number of unit cell rows of the proposed design to efficiently attenuate Lamb waves. In addition, numerical simulations show that the structure allows the central frequency of the band gap (BG) to be shifted through geometrical modifications. The design may be of interest for applications in which large BGs at low frequencies are required.

  20. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE PAGES

    Li, Weixuan; Lin, Guang

    2015-03-21

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.
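
    A minimal sketch of the adaptive loop with the first ingredient only (the GM proposal; the PC surrogate is omitted): sample from the current mixture, weight by target density over proposal density, resample by weight, and refit the mixture. The bimodal toy target below stands in for a posterior that would normally require forward-model evaluations, and sklearn's GaussianMixture is assumed for the mixture fitting.

        import numpy as np
        from scipy.stats import multivariate_normal
        from sklearn.mixture import GaussianMixture

        def log_target(x):
            """Toy bimodal 'posterior' standing in for an expensive forward model."""
            p1 = multivariate_normal.logpdf(x, mean=[-2.0, 0.0], cov=0.3 * np.eye(2))
            p2 = multivariate_normal.logpdf(x, mean=[+2.0, 0.0], cov=0.3 * np.eye(2))
            return np.logaddexp(p1, p2) - np.log(2.0)

        rng = np.random.default_rng(0)
        gm = GaussianMixture(n_components=2).fit(rng.normal(scale=3.0, size=(200, 2)))

        for it in range(5):                                  # adaptation loop
            x, _ = gm.sample(2000)                           # draw from current proposal
            logw = log_target(x) - gm.score_samples(x)       # importance log-weights
            w = np.exp(logw - logw.max()); w /= w.sum()
            idx = rng.choice(len(x), size=len(x), p=w)       # resample by weight...
            gm = GaussianMixture(n_components=2).fit(x[idx]) # ...and refit the proposal

        # Posterior mean estimate via self-normalized importance sampling.
        x, _ = gm.sample(5000)
        logw = log_target(x) - gm.score_samples(x)
        w = np.exp(logw - logw.max()); w /= w.sum()
        print((w[:, None] * x).sum(axis=0))                  # ~ [0, 0] by symmetry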

  1. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Lin, Guang, E-mail: guanglin@purdue.edu

    2015-08-01

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.

  2. Hydro-meteorological drought event sets in the UK based on a large ensemble of global-regional climate simulations: climatology, drivers and changes in the future

    NASA Astrophysics Data System (ADS)

    Guillod, B. P.; Massey, N.; Otto, F. E. L.; Allen, M. R.; Jones, R.; Hall, J. W.

    2016-12-01

    Extreme events being rare by definition, accurately quantifying the probabilities associated with a given event is difficult. This is particularly true for droughts, for which only a few events are available in the observational record owing to their long-lasting characteristics. The MaRIUS project (Managing the Risks, Impacts and Uncertainties of drought and water Scarcity) aims at quantifying present and future risks associated with droughts in the UK. To do so, a large number of modelled weather time series for "synthetic" drought events are being fed into hydrological and impact models to assess their impacts on various sectors (social sciences, economy, industry, agriculture, and ecosystems). Here, we present and analyse the hydro-meteorological drought event sets that have been produced with a new version of weather@home [1] for MaRIUS. Using idle processor time on volunteers' computers around the world, we have run a very large number (10'000s) of Global Climate Model simulations, downscaled at 25km over Europe by a nested Regional Climate Model. Simulations include the past 100 years as well as two future time slices (2030s and 2080s), and provide a large number of sequences of spatio-temporally coherent weather, which are consistent with the boundary forcing such as the ocean, greenhouse gases and solar forcing. Besides presenting the methodology and validation of the event sets, we provide insights into drought risk in the UK and the drivers of drought. In particular, we examine their sensitivity to sea surface temperature and sea ice patterns, both in the recent past and for future projections. How drought risk in the UK can be expected to change in the future will also be discussed. Finally, we assess the applicability of this methodology to other regions. Reference: [1] Massey, N. et al., 2014, Q. J. R. Meteorol. Soc.

  3. Fast parametric relationships for the large-scale reservoir simulation of mixed CH4-CO2 gas hydrate systems

    DOE PAGES

    Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.

    2017-03-27

    A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed hydrates is not yet a fully solved problem. Limited quantitative laboratory data leads to the use of various ab initio, statistical mechanical, or other mathematical representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with a large number of grid elements, 3D systems, or systems with complex geometric configurations. In this paper, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. Finally, the mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.

  4. Fast parametric relationships for the large-scale reservoir simulation of mixed CH4-CO2 gas hydrate systems

    NASA Astrophysics Data System (ADS)

    Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.

    2017-06-01

    A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed hydrates is not yet a fully solved problem. Limited quantitative laboratory data leads to the use of various ab initio, statistical mechanical, or other mathematical representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with a large number of grid elements, 3D systems, or systems with complex geometric configurations. In this work, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. The mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.

  5. Revealing the global map of protein folding space by large-scale simulations

    NASA Astrophysics Data System (ADS)

    Sinner, Claude; Lutz, Benjamin; Verma, Abhinav; Schug, Alexander

    2015-12-01

    The full characterization of protein folding is a remarkable long-standing challenge both for experiment and simulation. Working towards a complete understanding of this process, one needs to cover the full diversity of existing folds and identify the general principles driving the process. Here, we want to understand and quantify the diversity in folding routes for a large and representative set of protein topologies covering the full range from all-alpha-helical topologies to beta barrels, guided by the key question: do the majority of the observed routes contribute to the folding process, or only a particular route? We identified a set of two-state folders among non-homologous proteins with a sequence length of 40-120 residues. For each of these proteins, we ran native-structure-based simulations both with homogeneous and heterogeneous contact potentials. For each protein, we simulated dozens of folding transitions in continuous uninterrupted simulations and constructed a large database of kinetic parameters. We investigate folding routes by tracking the formation of tertiary structure interfaces and discuss whether a single specific route exists for a topology or if all routes are equiprobable. These results permit us to characterize the complete folding space for small proteins in terms of folding barrier ΔG‡, number of routes, and route specificity RT.

  6. Large-eddy simulation of flow past a circular cylinder

    NASA Technical Reports Server (NTRS)

    Mittal, R.

    1995-01-01

    Some of the most challenging applications of large-eddy simulation are those in complex geometries where spectral methods are of limited use. For such applications more conventional methods such as finite difference or finite element have to be used. However, it has become clear in recent years that dissipative numerical schemes which are routinely used in viscous flow simulations are not good candidates for use in LES of turbulent flows. Except in cases where the flow is extremely well resolved, it has been found that upwind schemes tend to damp out a significant portion of the small scales that can be resolved on the grid. Furthermore, it has been found that even specially designed higher-order upwind schemes that have been used successfully in the direct numerical simulation of turbulent flows produce too much dissipation when used in conjunction with large-eddy simulation. The objective of the current study is to perform a LES of incompressible flow past a circular cylinder at a Reynolds number of 3900 using a solver which employs an energy-conservative second-order central difference scheme for spatial discretization and compare the results obtained with those of Beaudan & Moin (1994) and with the experiments in order to assess the performance of the central scheme for this relatively complex geometry.

  7. Accelerating large-scale simulation of seismic wave propagation by multi-GPUs and three-dimensional domain decomposition

    NASA Astrophysics Data System (ADS)

    Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Aoki, Takayuki

    2010-12-01

    We adopted the GPU (graphics processing unit) to accelerate the large-scale finite-difference simulation of seismic wave propagation. The simulation benefits from the high memory bandwidth of the GPU because it is a memory-intensive problem. In a single-GPU case we achieved a performance of about 56 GFlops, about 45-fold faster than that achieved by a single core of the host central processing unit (CPU). We confirmed that optimized use of fast shared memory and registers was essential for performance. In the multi-GPU case with three-dimensional domain decomposition, the non-contiguous memory alignment of the ghost zones was found to add considerable time to data transfer between the GPU and the host node. This problem was solved by packing the ghost zones into contiguous memory buffers. We achieved a performance of about 2.2 TFlops by using 120 GPUs and 330 GB of total memory: nearly (or more than) 2200 cores of host CPUs would be required to achieve the same performance. The weak scaling was nearly proportional to the number of GPUs. We therefore conclude that GPU computing for large-scale simulation of seismic wave propagation is a promising approach, as a faster simulation is possible with reduced computational resources compared to CPUs.
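
    The ghost-zone packing fix described above is easy to illustrate: a face slice of a 3D array normal to the fastest-varying axis is strided in memory, so transferring it directly requires many small copies, whereas packing it into a contiguous buffer first enables one bulk transfer. The numpy sketch below stands in for the corresponding device-memory operations.

        import numpy as np

        # 3D subdomain with a one-cell ghost layer in each direction.
        field = np.zeros((128, 128, 128), dtype=np.float32)

        # A face slice normal to the fastest-varying axis is non-contiguous:
        ghost = field[:, :, -1]
        print(ghost.flags["C_CONTIGUOUS"])     # False: strided, slow to transfer

        # Packing the ghost zone into a contiguous buffer before the
        # device-to-host copy turns many small strided transfers into one bulk
        # transfer.
        send_buf = np.ascontiguousarray(ghost)
        print(send_buf.flags["C_CONTIGUOUS"])  # True: single contiguous block

        # On the receiving side, the buffer is unpacked into the neighbour's
        # ghost layer.
        field[:, :, 0] = send_buf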

  8. Fast parametric relationships for the large-scale reservoir simulation of mixed CH4-CO2 gas hydrate systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.

    A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed hydrates is not yet a fully solved problem. Limited quantitative laboratory data leads to the use of various ab initio, statistical mechanical, or other mathematical representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with a large number of grid elements, 3D systems, or systems with complex geometric configurations. In this paper, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. Finally, the mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.

  9. Resolving Low-Density Lipoprotein (LDL) on the Human Aortic Surface Using Large Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Lantz, Jonas; Karlsson, Matts

    2011-11-01

    The prediction and understanding of the genesis of vascular diseases is one of the grand challenges in biofluid engineering. The progression of atherosclerosis is correlated with the build-up of LDL on the arterial surface, which is affected by the blood flow. A multi-physics simulation of LDL mass transport in the blood and through the arterial wall of a subject-specific human aorta was performed, employing an LES turbulence model to resolve the turbulent flow. Geometry and velocity measurements from magnetic resonance imaging (MRI) were incorporated to assure physiological relevance of the simulation. Due to the turbulent nature of the flow, consecutive cardiac cycles are not identical, neither in vivo nor in the simulations. A phase average based on a large number of cardiac cycles is therefore computed, which is the proper way to obtain reliable statistical results from an LES simulation. In total, 50 cardiac cycles were simulated, yielding over 2.5 billion data points to be post-processed. An inverse relation between LDL and WSS was found; LDL accumulated at locations where WSS was low, and vice versa. Large temporal differences were present, with the concentration level decreasing during systolic acceleration and increasing during the deceleration phase. This method makes it possible to resolve the localization of LDL accumulation in the normal human aorta with its complex transitional flow.
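
    For uniformly sampled cycles of equal length, phase averaging of this kind reduces to reshaping the time series into a (cycles × phase) array and averaging across cycles. The sketch below applies this to a synthetic pulsatile signal; the signal and cycle count are illustrative placeholders.

        import numpy as np

        def phase_average(signal, n_cycles):
            """Phase-average a signal sampled uniformly over `n_cycles` cycles of
            identical length: reshape to (cycles, phase), average across cycles."""
            s = np.asarray(signal)
            per_cycle = s.size // n_cycles
            cycles = s[:n_cycles * per_cycle].reshape(n_cycles, per_cycle)
            return cycles.mean(axis=0), cycles.std(axis=0)  # mean and spread

        # Synthetic example: 50 cycles of a pulsatile signal plus noise that
        # stands in for cycle-to-cycle turbulent fluctuations.
        t = np.linspace(0.0, 50.0, 50 * 200, endpoint=False)
        wss = (1.0 + 0.5 * np.sin(2 * np.pi * t)
               + 0.2 * np.random.default_rng(0).normal(size=t.size))
        mean_phase, spread = phase_average(wss, n_cycles=50)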

  10. A Double-Moment Multiple-Phase Four-Class Bulk Ice Scheme. Part II: Simulations of Convective Storms in Different Large-Scale Environments and Comparisons with other Bulk Parameterizations.

    NASA Astrophysics Data System (ADS)

    Schoenberg Ferrier, Brad; Tao, Wei-Kuo; Simpson, Joanne

    1995-04-01

    Part I of this study described a detailed four-class bulk ice scheme (4ICE) developed to simulate the hydrometeor profiles of convective and stratiform precipitation associated with mesoscale convective systems. In Part II, the 4ICE scheme is incorporated into the Goddard Cumulus Ensemble (GCE) model and applied without any `tuning' to two squall lines occurring in widely different environments, namely, one over the tropical ocean in the Global Atmospheric Research Program's (GARP) Atlantic Tropical Experiment (GATE) and the other over a midlatitude continent in the Cooperative Huntsville Meteorological Experiment (COHMEX). Comparisons were made both with earlier three-class ice formulations and with observations. In both cases, the 4ICE scheme interacted with the dynamics so as to resemble the observations much more closely than did the model runs with either of the three-class ice parameterizations. The following features were well simulated in the COHMEX case: a lack of stratiform rain at the surface ahead of the storm, reflectivity maxima near 60 dBZ in the vicinity of the melting level, and intense radar echoes up to near the tropopause. These features were in strong contrast with the GATE simulation, which showed extensive trailing stratiform precipitation containing a horizontally oriented radar bright band. Peak reflectivities were below the melting level, rarely exceeding 50 dBZ, with a steady decrease in reflectivity with height above. With the other bulk formulations, the large stratiform rain areas were not reproduced in the GATE conditions. The microphysical structure of the model clouds in both environments was more realistic than that of earlier modeling efforts. Number concentrations of ice of O(100 L⁻¹) occurred above 6 km in the GATE model clouds as a result of ice enhancement and rime splintering in the 4ICE runs. These processes were more effective in the GATE simulation because, near the freezing level, the weaker updrafts were comparable in magnitude to the fall speeds of newly frozen drops. Many of the ice crystals initiated at relatively warm temperatures (above −15°C) grew rapidly by deposition into sizes large enough to be converted to snow. In contrast, in the more intense COHMEX updrafts, very large numbers of small ice crystals were initiated at colder temperatures (below −15°C) by nucleation and stochastic freezing of droplets, such that relatively few ice crystals grew by deposition to sizes large enough to be converted to snow. In addition, the large numbers of frozen drops of O(5 L⁻¹) in the 4ICE run are consistent with airborne microphysical data in intense COHMEX updrafts. Numerous sensitivity experiments were made with the four-class and three-class ice schemes, varying fall speed relationships, particle characteristics, and ice collection efficiencies. These tests provide strong support for the conclusion that the 4ICE scheme gives improved resemblance to observations despite present uncertainties in a number of important microphysical parameters.

  11. Oblique-wing research airplane motion simulation with decoupling control laws

    NASA Technical Reports Server (NTRS)

    Kempel, Robert W.; Mc Neill, Walter E.; Maine, Trindel A.

    1988-01-01

    A large piloted vertical motion simulator was used to assess the performance of a preliminary decoupling control law for an early version of the F-8 oblique wing research demonstrator airplane. Evaluations were performed for five discrete flight conditions, ranging from low-altitude subsonic Mach numbers to moderate-altitude supersonic Mach numbers. Asymmetric sideforce as a function of angle of attack was found to be the primary cause of both the lateral acceleration noted in pitch and the tendency to roll into left turns and out of right turns. The flight control system was shown to be effective in generally decoupling the airplane and reducing the lateral acceleration in pitch maneuvers.

  12. Large-eddy simulation of a backward facing step flow using a least-squares spectral element method

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Mittal, Rajat

    1996-01-01

    We report preliminary results obtained from the large eddy simulation of a backward facing step at a Reynolds number of 5100. The numerical platform is based on a high order Legendre spectral element spatial discretization and a least squares time integration scheme. A non-reflective outflow boundary condition is in place to minimize the effect of downstream influence. Smagorinsky model with Van Driest near wall damping is used for sub-grid scale modeling. Comparisons of mean velocity profiles and wall pressure show good agreement with benchmark data. More studies are needed to evaluate the sensitivity of this method on numerical parameters before it is applied to complex engineering problems.

  13. The effect of clustering of galaxies on the statistics of gravitational lenses

    NASA Technical Reports Server (NTRS)

    Anderson, N.; Alcock, C.

    1986-01-01

    It is examined whether clustering of galaxies can significantly alter the statistical properties of gravitational lenses. Only models of clustering that resemble the observed distribution of galaxies in the properties of the two-point correlation function are considered. Monte Carlo simulations of the imaging process are described. It is found that the effect of clustering is too small to be significant, unless the mass of the deflectors is so large that gravitational lenses become common occurrences. A special model is described which was concocted to optimize the effect of clustering on gravitational lensing while still resembling the observed distribution of galaxies; even this simulation did not satisfactorily produce large numbers of wide-angle lenses.

  14. Universal nonlinear small-scale dynamo.

    PubMed

    Beresnyak, A

    2012-01-20

    We consider the astrophysically relevant nonlinear MHD dynamo at large Reynolds numbers (Re). We argue that it is universal in the sense that magnetic energy grows at a rate which is a constant fraction C(E) of the total turbulent dissipation rate. On the basis of locality bounds we claim that this "efficiency of the small-scale dynamo", C(E), is a true constant for large Re and is determined only by strongly nonlinear dynamics at the equipartition scale. We measured C(E) in numerical simulations and observed a value around 0.05 in the highest resolution simulations. We address the issue of C(E) being small, unlike the Kolmogorov constant, which is of order unity.
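
    Stated compactly as an equation (our notation, writing the record's C(E) as C_E, with E_B the magnetic energy and ε_tot the total turbulent dissipation rate; the record itself gives only the prose statement):

        \frac{dE_B}{dt} \;=\; C_E\,\varepsilon_{\rm tot}, \qquad C_E \approx 0.05 \ \ \text{at large } Re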

  15. Economics of residue harvest: Regional partnership evaluation

    USDA-ARS?s Scientific Manuscript database

    Economic analyses on the viability of corn (Zea mays, L.) stover harvest for bioenergy production have largely been based on simulation modeling. While some studies have utilized field research data, most field-based analyses have included a limited number of sites and a narrow geographic distributi...

  16. USING 14 C METHODOLOGY IN SMOG CHAMBER RESEARCH

    EPA Science Inventory

    Smog chambers are large enclosures (~ 10-200 m3) that are used to perform laboratory simulations of atmospheric reactions. By dealing with simple systems in which the number of reactants is limited and the conditions are strictly controlled, insights on how reactions ...

  17. State-of-the-Art Report: Roundabouts Design, Modeling and Simulation

    DOT National Transportation Integrated Search

    2001-03-01

    With the increased success of roundabout use in Europe and Australia, there is renewed interest in their use in the US. Several States, including Florida, are now considering the use of roundabouts to solve traffic problems. A large number of diver...

  18. RAPTR-SV: a hybrid method for the detection of structural variants

    USDA-ARS?s Scientific Manuscript database

    Motivation: Identification of Structural Variants (SV) in sequence data results in a large number of false positive calls using existing software, which overburdens subsequent validation. Results: Simulations using RAPTR-SV and another software package that uses a similar algorithm for SV detection...

  19. Numerical simulation of flow around the NREL S826 airfoil at moderate Reynolds number using delayed detached Eddy simulation (DDES)

    NASA Astrophysics Data System (ADS)

    Prytz, Erik R.; Huuse, Øyvind; Müller, Bernhard; Bartl, Jan; Sætran, Lars Roar

    2017-07-01

    Turbulent flow at Reynolds numbers 5×10⁴ to 10⁶ around the NREL S826 airfoil used for wind turbine blades is simulated using delayed detached eddy simulation (DDES). The 3D domain is built as a replica of the low speed wind tunnel at the Norwegian University of Science and Technology (NTNU) with the wind tunnel walls considered as slip walls. The subgrid turbulent kinetic energy is used to model the sub-grid scale in the large eddy simulation (LES) part of DDES. Different Reynolds-averaged Navier-Stokes (RANS) models are tested in ANSYS Fluent. The realizable k-ε model as the RANS model in DDES is found to yield the best agreement of simulated pressure distributions with the experimental data both from NTNU and the Technical University of Denmark (DTU), the latter for a shorter spanwise domain. The present DDES results are in excellent agreement with LES results from DTU. Since DDES requires much fewer cells in the RANS region near the wing surface than LES, DDES is computationally much more efficient than LES. Whereas DDES is able to predict lift and drag in close agreement with experiment up to stall, pure 2D RANS simulations fail near stall. After testing different numerical settings, time step sizes and grids for DDES, a Reynolds number study is conducted. Near stall, separated flow structures, so-called stall cells, are observed in the DDES results.

  20. Fluid nonlinear frequency shift of nonlinear ion acoustic waves in multi-ion species plasmas in the small wave number region

    NASA Astrophysics Data System (ADS)

    Feng, Q. S.; Xiao, C. Z.; Wang, Q.; Zheng, C. Y.; Liu, Z. J.; Cao, L. H.; He, X. T.

    2016-08-01

    The properties of the nonlinear frequency shift (NFS), especially the fluid NFS from the harmonic generation of the ion-acoustic wave (IAW) in multi-ion species plasmas, have been researched by Vlasov simulation. Pictures of the nonlinear frequency shift from harmonic generation and particle trapping are shown to explain the mechanism of NFS qualitatively. The theoretical model of the fluid NFS from harmonic generation in multi-ion species plasmas is given, and the results of Vlasov simulation are consistent with the theoretical result for multi-ion species plasmas. When the wave number kλ_{De} is small, such as kλ_{De} = 0.1, the fluid NFS dominates the total NFS and can reach nearly 15% when the wave amplitude |eϕ/T_{e}| ∼ 0.1, which indicates that under the condition of small kλ_{De}, the fluid NFS dominates the saturation of stimulated Brillouin scattering, especially when the nonlinear IAW amplitude is large.

  1. Fluid nonlinear frequency shift of nonlinear ion acoustic waves in multi-ion species plasmas in the small wave number region.

    PubMed

    Feng, Q S; Xiao, C Z; Wang, Q; Zheng, C Y; Liu, Z J; Cao, L H; He, X T

    2016-08-01

    The properties of the nonlinear frequency shift (NFS), especially the fluid NFS from the harmonic generation of the ion-acoustic wave (IAW) in multi-ion species plasmas, have been researched by Vlasov simulation. Pictures of the nonlinear frequency shift from harmonic generation and particle trapping are shown to explain the mechanism of NFS qualitatively. The theoretical model of the fluid NFS from harmonic generation in multi-ion species plasmas is given, and the results of Vlasov simulation are consistent with the theoretical result for multi-ion species plasmas. When the wave number kλ_{De} is small, such as kλ_{De} = 0.1, the fluid NFS dominates the total NFS and can reach nearly 15% when the wave amplitude |eϕ/T_{e}| ∼ 0.1, which indicates that under the condition of small kλ_{De}, the fluid NFS dominates the saturation of stimulated Brillouin scattering, especially when the nonlinear IAW amplitude is large.

  2. Equivalent circuit-based analysis of CMUT cell dynamics in arrays.

    PubMed

    Oguz, H K; Atalar, Abdullah; Köymen, Hayrettin

    2013-05-01

    Capacitive micromachined ultrasonic transducers (CMUTs) are usually composed of large arrays of closely packed cells. In this work, we use an equivalent circuit model to analyze CMUT arrays with multiple cells. We study the effects of mutual acoustic interactions through the immersion medium caused by the pressure field generated by each cell acting upon the others. To do this, all the cells in the array are coupled through a radiation impedance matrix at their acoustic terminals. An accurate approximation for the mutual radiation impedance is defined between two circular cells, which can be used in large arrays to reduce computational complexity. Hence, a performance analysis of CMUT arrays can be accurately done with a circuit simulator. By using the proposed model, one can very rapidly obtain the linear frequency and nonlinear transient responses of arrays with an arbitrary number of CMUT cells. We performed several finite element method (FEM) simulations for arrays with small numbers of cells and showed that the results are very similar to those obtained by the equivalent circuit model.
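
    To make the coupling idea concrete, here is a minimal Python sketch (our illustration, not the authors' model: the impedance formulas and element values are hypothetical placeholders). The diagonal of the matrix holds each cell's own mechanical impedance, the off-diagonal entries hold an assumed mutual radiation impedance, and one linear solve yields all cell velocities:

        import numpy as np

        def self_impedance(omega, m=2e-9, k=5e3, b=1e-4):
            # lumped mass-spring-damper mechanical impedance of one cell (placeholder)
            return b + 1j * (omega * m - k / omega)

        def mutual_impedance(omega, d, rho=1000.0, c=1500.0, a=20e-6):
            # crude far-field style approximation between two circular cells a
            # distance d apart in water (placeholder, not the paper's expression)
            k_ac = omega / c
            z0 = rho * c * np.pi * a**2
            return z0 * np.exp(-1j * k_ac * d) / (k_ac * d)

        omega = 2 * np.pi * 5e6                                 # 5 MHz drive
        pos = np.array([[i * 50e-6, 0.0] for i in range(16)])   # 16 cells in a row
        n = len(pos)
        Z = np.empty((n, n), dtype=complex)
        for i in range(n):
            for j in range(n):
                Z[i, j] = (self_impedance(omega) if i == j
                           else mutual_impedance(omega, np.linalg.norm(pos[i] - pos[j])))
        F = np.ones(n, dtype=complex)    # identical force on every cell
        v = np.linalg.solve(Z, F)        # cell velocities including mutual coupling
        print(np.abs(v))                 # edge cells respond differently from center ones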

  3. Experimental methods for studying microbial survival in extraterrestrial environments.

    PubMed

    Olsson-Francis, Karen; Cockell, Charles S

    2010-01-01

    Microorganisms can be used as model systems for studying biological responses to extraterrestrial conditions; however, the methods for studying their response are extremely challenging. Since the first high-altitude microbiological experiment in 1935, a large number of facilities have been developed for short- and long-term microbial exposure experiments. Examples are the BIOPAN facility, used for short-term exposure, and the EXPOSE facility aboard the International Space Station, used for long-term exposure. Furthermore, simulation facilities have been developed to conduct microbiological experiments in the laboratory environment. A large number of microorganisms have been used for exposure experiments; these include pure cultures and microbial communities. Analyses of these experiments have involved both culture-dependent and independent methods. This review highlights and discusses the facilities available for microbiology experiments, both in space and in simulation environments. A description of the microorganisms and the techniques used to analyse survival is included. Finally, we discuss the implications of microbiological studies for future missions and for space applications.

  4. Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data

    NASA Technical Reports Server (NTRS)

    Johnson, Marty E.; Lalime, Aimee L.; Grosveld, Ferdinand W.; Rizzi, Stephen A.; Sullivan, Brenda M.

    2003-01-01

    Applying binaural simulation techniques to structural acoustic data can be very computationally intensive as the number of discrete noise sources can be very large. Typically, Head Related Transfer Functions (HRTFs) are used to individually filter the signals from each of the sources in the acoustic field. Therefore, creating a binaural simulation implies the use of potentially hundreds of real time filters. This paper details two methods of reducing the number of real-time computations required by: (i) using the singular value decomposition (SVD) to reduce the complexity of the HRTFs by breaking them into dominant singular values and vectors and (ii) by using equivalent source reduction (ESR) to reduce the number of sources to be analyzed in real-time by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. The ESR and SVD reduction methods can be combined to provide an estimated computation time reduction of 99.4% for the structural acoustic data tested. In addition, preliminary tests have shown that there is a 97% correlation between the results of the combined reduction methods and the results found with the current binaural simulation techniques.
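
    The SVD reduction can be sketched in a few lines of Python (our illustration: the filter bank below is random stand-in data, and real HRTF sets compress far better than this). The r dominant right singular vectors act as shared real-time filters, and each source is mixed in with scalar weights:

        import numpy as np

        rng = np.random.default_rng(0)
        n_sources, n_taps = 200, 128
        H = rng.standard_normal((n_sources, n_taps))   # rows = per-source filters

        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.99)) + 1     # rank keeping 99% of energy
        W = U[:, :r] * s[:r]                           # per-source mixing weights
        basis = Vt[:r]                                 # r shared real-time filters

        H_approx = W @ basis                           # rank-r approximation of the bank
        err = np.linalg.norm(H - H_approx) / np.linalg.norm(H)
        print(r, err)   # run r filters instead of n_sources, then mix with W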

  5. An adaptive response surface method for crashworthiness optimization

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Yang, Ren-Jye; Zhu, Ping

    2013-11-01

    Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built by a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.

  6. Aircraft-type dependency of contrail evolution

    NASA Astrophysics Data System (ADS)

    Unterstrasser, S.; Görsch, N.

    2014-12-01

    The impact of aircraft type on contrail evolution is assessed using a large eddy simulation model with Lagrangian ice microphysics. Six different aircraft ranging from the small regional airliner Bombardier CRJ to the largest aircraft Airbus A380 are taken into account. Differences in wake vortex properties and fuel flow lead to considerable variations in the early contrail geometric depth and ice crystal number. Larger aircraft produce contrails with more ice crystals (assuming that the number of initially generated ice crystals per kilogram fuel is constant). These initial differences are reduced in the first minutes, as the ice crystal loss during the vortex phase is stronger for larger aircraft. In supersaturated air, contrails of large aircraft are much deeper after 5 min than those of small aircraft. A parameterization for the final vertical displacement of the wake vortex system is provided, depending only on the initial vortex circulation and stratification. Cloud resolving simulations are used to examine whether the aircraft-induced initial differences have a long-lasting mark. These simulations suggest that the synoptic scenario controls the contrail cirrus evolution qualitatively. However, quantitative differences between the contrail cirrus properties of the various aircraft remain over the total simulation period of 6 h. The total extinctions of A380-produced contrails are about 1.5 to 2.5 times higher than those from contrails of a Bombardier CRJ.
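
    The record does not give the parameterization's functional form, but dimensional analysis suggests its shape (our sketch, not the paper's stated result): from an initial circulation Γ₀ (units m²/s) and a Brunt-Väisälä frequency N (units 1/s), the only length scale that can be formed is

        \Delta z_{\rm final} \;=\; C\,\sqrt{\Gamma_0 / N}

    with C a dimensionless constant to be fitted.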

  7. Unsteady adjoint for large eddy simulation of a coupled turbine stator-rotor system

    NASA Astrophysics Data System (ADS)

    Talnikar, Chaitanya; Wang, Qiqi; Laskowski, Gregory

    2016-11-01

    Unsteady fluid flow simulations like large eddy simulation are crucial in capturing key physics in turbomachinery applications like separation and wake formation in flow over a turbine vane with a downstream blade. To determine how sensitive the design objectives of the coupled system are to control parameters, an unsteady adjoint is needed. It enables the computation of the gradient of an objective with respect to a large number of inputs in a computationally efficient manner. In this paper we present unsteady adjoint solutions for a coupled turbine stator-rotor system. As the transonic fluid flows over the stator vane, the boundary layer transitions to turbulence. The turbulent wake then impinges on the rotor blades, causing early separation. This coupled system exhibits chaotic dynamics which causes conventional adjoint solutions to diverge exponentially, resulting in the corruption of the sensitivities obtained from the adjoint solutions for long-time simulations. In this presentation, adjoint solutions for aerothermal objectives are obtained through a localized adjoint viscosity injection method which aims to stabilize the adjoint solution and maintain accurate sensitivities. Preliminary results obtained from the supercomputer Mira will be shown in the presentation.

  8. Streaming parallel GPU acceleration of large-scale filter-based spiking neural networks.

    PubMed

    Slażyński, Leszek; Bohte, Sander

    2012-01-01

    The arrival of graphics processing unit (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms to fully exploit the particulars of GPUs. Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given inputs, the internal state for each neuron can be updated in parallel. We show that for filter-based spiking neurons, like the Spike Response Model, the additive nature of membrane potential dynamics enables additional update parallelism. This also reduces the accumulation of numerical errors when using single precision computation, the native precision of GPUs. We further show that optimizing simulation algorithms and data structures to the GPU's architecture has a large pay-off: for example, matching iterative neural updating to the memory architecture of the GPU speeds up this simulation step by a factor of three to five. With such optimizations, we can simulate in better than real time plausible spiking neural networks of up to 50 000 neurons, processing over 35 million spiking events per second.
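
    A minimal NumPy sketch of the additive, per-neuron update pattern described above (our illustration with arbitrary constants; a GPU implementation would assign one thread per neuron, but the elementwise structure is identical):

        import numpy as np

        # Filter-based (Spike Response Model style) neurons: the membrane state is
        # an exponentially decaying trace, so each neuron updates independently.
        n_neurons, dt, tau = 1024, 1e-3, 20e-3
        decay = np.float32(np.exp(-dt / tau))
        trace = np.zeros(n_neurons, dtype=np.float32)  # single precision, as on GPUs

        def step(trace, spikes_in, w=0.5):
            trace = trace * decay        # elementwise decay: one thread per neuron
            trace += w * spikes_in       # additive synaptic contribution
            fired = trace > 1.0          # threshold crossing
            trace[fired] = 0.0           # reset after a spike
            return trace, fired

        rng = np.random.default_rng(1)
        for _ in range(100):
            trace, fired = step(trace, rng.random(n_neurons) < 0.05)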

  9. Error simulation of paired-comparison-based scaling methods

    NASA Astrophysics Data System (ADS)

    Cui, Chengwu

    2000-12-01

    Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with randomly introduced proportions of choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
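
    The simulation loop is easy to reproduce in outline. The following Python sketch (our illustration, with made-up stimulus values; not the paper's code) draws binomially distributed choice proportions for each pair and recovers scale values with a simple Thurstone Case V solution, so the spread of the recovered values across runs estimates the scaling error:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        true = np.linspace(0.0, 2.0, 6)        # true scale values of 6 stimuli
        n_obs = 30                             # sampling size per pair

        def one_run():
            P = np.full((6, 6), 0.5)
            for i in range(6):
                for j in range(6):
                    if i != j:
                        p = norm.cdf(true[j] - true[i])           # P(j preferred to i)
                        P[i, j] = rng.binomial(n_obs, p) / n_obs  # observed proportion
            P = np.clip(P, 0.5 / n_obs, 1 - 0.5 / n_obs)          # avoid z = +-inf
            return norm.ppf(P).mean(axis=0)                       # Case V scaled values

        runs = np.array([one_run() for _ in range(500)])
        print(runs.std(axis=0))  # error grows as n_obs or the stimulus count shrinks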

  10. Integration of MATLAB Simulink(Registered Trademark) Models with the Vertical Motion Simulator

    NASA Technical Reports Server (NTRS)

    Lewis, Emily K.; Vuong, Nghia D.

    2012-01-01

    This paper describes the integration of MATLAB Simulink(Registered Trademark) models into the Vertical Motion Simulator (VMS) at NASA Ames Research Center. The VMS is a high-fidelity, large motion flight simulator that is capable of simulating a variety of aerospace vehicles. The integration of MATLAB Simulink models into the VMS needed to retain the development flexibility of the MATLAB environment and to allow rapid deployment of model changes. The process developed at the VMS was used successfully in a number of recent simulation experiments. This accomplishment demonstrated that the model integrity was preserved while working within the hard real-time run environment of the VMS architecture, and that the unique flexibility of the VMS to meet diverse research requirements was maintained.

  11. The Numerical Propulsion System Simulation: An Overview

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    2000-01-01

    Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, databases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.

  12. Large eddy simulations of time-dependent and buoyancy-driven channel flows

    NASA Technical Reports Server (NTRS)

    Cabot, William H.

    1993-01-01

    The primary goal of this work has been to assess the performance of the dynamic SGS model in the large eddy simulation (LES) of channel flows in a variety of situations, viz., in temporal development of channel flow turned by a transverse pressure gradient and especially in buoyancy-driven turbulent flows such as Rayleigh-Benard and internally heated channel convection. For buoyancy-driven flows, there are additional buoyant terms that are possible in the base models, and one objective has been to determine if the dynamic SGS model results are sensitive to such terms. The ultimate goal is to determine the minimal base model needed in the dynamic SGS model to provide accurate results in flows with more complicated physical features. In addition, a program of direct numerical simulation (DNS) of fully compressible channel convection has been undertaken to determine stratification and compressibility effects. These simulations are intended to provide a comparative base for performing the LES of compressible (or highly stratified, pseudo-compressible) convection at high Reynolds number in the future.

  13. A new class of finite element variational multiscale turbulence models for incompressible magnetohydrodynamics

    DOE PAGES

    Sondak, D.; Shadid, J. N.; Oberai, A. A.; ...

    2015-04-29

    New large eddy simulation (LES) turbulence models for incompressible magnetohydrodynamics (MHD) derived from the variational multiscale (VMS) formulation for finite element simulations are introduced. The new models include the variational multiscale formulation, a residual-based eddy viscosity model, and a mixed model that combines both of these component models. Each model contains terms that are proportional to the residual of the incompressible MHD equations and is therefore numerically consistent. Moreover, each model is also dynamic, in that its effect vanishes when this residual is small. The new models are tested on the decaying MHD Taylor Green vortex at low and high Reynolds numbers. The evaluation of the models is based on comparisons with available data from direct numerical simulations (DNS) of the time evolution of energies as well as energy spectra at various discrete times. Thus a numerical study, on a sequence of meshes, is presented that demonstrates that the large eddy simulation approaches the DNS solution for these quantities with spatial mesh refinement.

  14. Simulations of cloud-radiation interaction using large-scale forcing derived from the CINDY/DYNAMO northern sounding array

    DOE PAGES

    Wang, Shuguang; Sobel, Adam H.; Fridlind, Ann; ...

    2015-09-25

    The recently completed CINDY/DYNAMO field campaign observed two Madden-Julian oscillation (MJO) events in the equatorial Indian Ocean from October to December 2011. Prior work has indicated that the moist static energy anomalies in these events grew and were sustained to a significant extent by radiative feedbacks. We present here a study of radiative fluxes and clouds in a set of cloud-resolving simulations of these MJO events. The simulations are driven by the large-scale forcing dataset derived from the DYNAMO northern sounding array observations and carried out in a doubly periodic domain using the Weather Research and Forecasting (WRF) model. Simulated cloud properties and radiative fluxes are compared to those derived from the S-PolKa radar and satellite observations. Furthermore, to accommodate the uncertainty in simulated cloud microphysics, a number of single-moment (1M) and double-moment (2M) microphysical schemes in the WRF model are tested.

  15. n-body simulations using message passing parallel computers.

    NASA Astrophysics Data System (ADS)

    Grama, A. Y.; Kumar, V.; Sameh, A.

    The authors present new parallel formulations of the Barnes-Hut method for n-body simulations on message passing computers. These parallel formulations partition the domain efficiently incurring minimal communication overhead. This is in contrast to existing schemes that are based on sorting a large number of keys or on the use of global data structures. The new formulations are augmented by alternate communication strategies which serve to minimize communication overhead. The impact of these communication strategies is experimentally studied. The authors report on experimental results obtained from an astrophysical simulation on an nCUBE2 parallel computer.

  16. Effect of distributor on performance of a continuous fluidized bed dryer

    NASA Astrophysics Data System (ADS)

    Yogendrasasidhar, D.; Srinivas, G.; Pydi Setty, Y.

    2018-03-01

    Proper gas distribution is very important in fluidized bed drying in industrial practice. Improper distribution of gas may lead to non-idealities such as channeling, short-circuiting, and accumulation, which give rise to non-uniform quality of the dried product. Gas distribution depends on the distributor plate used, mainly on the orifice diameter, the number of orifices, and the opening area of the plate. A small orifice diameter leads to clogging, and a large orifice diameter gives uneven distribution of gas. The present work involves experimental studies using different distributor plates and simulation studies using the ASPEN PLUS steady-state simulator. The effect of parameters such as orifice diameter, number of orifices, and opening area of the distributor plate on the performance of the fluidized bed dryer has been studied through simulation and experimentation. Simulations were carried out (i) with increasing air inlet temperature, to study the solid outlet temperature and moisture characteristics, (ii) with increasing orifice diameter, and (iii) with increasing number of orifices, to study the solid outlet temperature profiles. The simulations show that increases in orifice diameter and number of orifices raise the solid outlet temperature up to a certain point, beyond which further increases have no effect. Experiments were carried out with increasing opening area (3.4 to 42%), either by increasing the orifice diameter at a constant number of orifices or by increasing the number of orifices at a constant orifice diameter. The drying rate and solid outlet temperature increase up to a certain point; beyond that, further increases in orifice diameter or number of orifices produce little change. The optimum values of orifice diameter and number of orifices from experimentation are found to be 5 mm and 60 (22% opening area).

  17. Simulation of Rutherford backscattering spectrometry from arbitrary atom structures

    DOE PAGES

    Zhang, S.; Univ. of Helsinki; Nordlund, Kai; ...

    2016-10-25

    Rutherford backscattering spectrometry in a channeling direction (RBS/C) is a powerful tool for analysis of the fraction of atoms displaced from their lattice positions. However, it is in many cases not straightforward to analyze what is the actual defect structure underlying the RBS/C signal. To reveal insights of RBS/C signals from arbitrarily complex defective atomic structures, we develop in this paper a method for simulating the RBS/C spectrum from a set of arbitrary read-in atom coordinates (obtained, e.g., from molecular dynamics simulations). We apply the developed method to simulate the RBS/C signals from Ni crystal structures containing randomly displaced atoms, Frenkel point defects, and extended defects, respectively. The RBS/C simulations show that, even for the same number of atoms in defects, the RBS/C signal is much stronger for the extended defects. Finally, comparison with experimental results shows that the disorder profile obtained from RBS/C signals in ion-irradiated Ni is due to a small fraction of extended defects rather than a large number of individual random atoms.

  18. Modeling microbiological and chemical processes in municipal solid waste bioreactor, Part II: Application of numerical model BIOKEMOD-3P.

    PubMed

    Gawande, Nitin A; Reinhart, Debra R; Yeh, Gour-Tsyh

    2010-02-01

    Biodegradation process modeling of municipal solid waste (MSW) bioreactor landfills requires the knowledge of various process reactions and corresponding kinetic parameters. Mechanistic models available to date are able to simulate biodegradation processes with the help of pre-defined species and reactions. Some of these models consider the effect of critical parameters such as moisture content, pH, and temperature. Biomass concentration is a vital parameter for any biomass growth model and is often not compared with field and laboratory results. A more complex biodegradation model includes a large number of chemical and microbiological species. Increasing the number of species and user-defined process reactions in the simulation requires a robust numerical tool. A generalized microbiological and chemical model, BIOKEMOD-3P, was developed to simulate biodegradation processes in three phases (Gawande et al. 2009). This paper presents the application of this model to simulate laboratory-scale MSW bioreactors under anaerobic conditions. BIOKEMOD-3P was able to closely simulate the experimental data. The results from this study may help in applying this model to full-scale landfill operations.

  19. Marvel-ous Dwarfs: Results from Four Heroically Large Simulated Volumes of Dwarf Galaxies

    NASA Astrophysics Data System (ADS)

    Munshi, Ferah; Brooks, Alyson; Weisz, Daniel; Bellovary, Jillian; Christensen, Charlotte

    2018-01-01

    We present results from high resolution, fully cosmological simulations of cosmic sheets that contain many dwarf galaxies. Together, they create the largest collection of simulated dwarf galaxies to date, with z=0 stellar masses comparable to the LMC or smaller. In total, we have simulated almost 100 luminous dwarf galaxies, forming a sample of simulated dwarfs which span a wide range of physical (stellar and halo mass) and evolutionary properties (merger history). We show how they can be calibrated against a wealth of observations of nearby galaxies including star formation histories, HI masses and kinematics, as well as stellar metallicities. We present preliminary results answering the following key questions: What is the slope of the stellar mass function at extremely low masses? Do halos with HI and no stars exist? What is the scatter in the stellar to halo mass relationship as a function of dwarf mass? What drives the scatter? With this large suite, we are beginning to statistically characterize dwarf galaxies and identify the types and numbers of outliers to expect.

  20. Pulsar simulations for the Fermi Large Area Telescope

    DOE PAGES

    Razzano, M.; Harding, Alice K.; Baldini, L.; ...

    2009-05-21

    Pulsars are among the prime targets for the Large Area Telescope (LAT) aboard the recently launched Fermi observatory. The LAT will study the gamma-ray Universe between 20 MeV and 300 GeV with unprecedented detail. Increasing numbers of gamma-ray pulsars are being firmly identified, yet their emission mechanisms are far from being understood. To better investigate and exploit the LAT capabilities for pulsar science, a set of new detailed pulsar simulation tools has been developed within the LAT collaboration. The structure of the pulsar simulator package (PulsarSpectrum) is presented here. Starting from photon distributions in energy and phase obtained from theoretical calculations or phenomenological considerations, gamma-rays are generated and their arrival times at the spacecraft are determined by taking into account effects such as barycentric effects and timing noise. Pulsars in binary systems can also be simulated given orbital parameters. We present how simulations can be used for generating a realistic set of gamma-rays as observed by the LAT, focusing on some case studies that show the performance of the LAT for pulsar observations.

  1. Requirements for Large Eddy Simulation Computations of Variable-Speed Power Turbine Flows

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2016-01-01

    Variable-speed power turbines (VSPTs) operate at low Reynolds numbers and with a wide range of incidence angles. Transition, separation, and the relevant physics leading to them are important to VSPT flow. Higher fidelity tools such as large eddy simulation (LES) may be needed to resolve the flow features necessary for accurate predictive capability and design of such turbines. A survey conducted for this report explores the requirements for such computations. The survey is limited to the simulation of two-dimensional flow cases and endwalls are not included. It suggests that the grid resolution necessary for this type of simulation to accurately represent the physics may be of the order of Δx⁺ = 45, Δy⁺ = 2, and Δz⁺ = 17. Various subgrid-scale (SGS) models have been used, and except for the Smagorinsky model, all seem to perform well; in some instances the simulations worked well without SGS modeling. A method of specifying the inlet conditions, such as synthetic eddy modeling (SEM), is necessary to correctly represent the inlet conditions.

  2. Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems

    DOE PAGES

    Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk; ...

    2017-11-07

    An increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique, within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features: it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit solvent simulations. The present implementation of the constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on the fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.
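
    Schematically, the MC step in such hybrid neMD/MC constant-pH schemes accepts a trial protonation-state switch, generated by a nonequilibrium work W, with a Metropolis criterion of the form (our sketch of the generic criterion, not a quotation of the NAMD implementation)

        p_{\rm acc} \;=\; \min\!\left[1,\; e^{-\beta W}\,10^{\pm(\mathrm{pH}-\mathrm{p}K_a)}\right]

    where the sign in the pH-dependent factor depends on whether the move adds or removes a proton; it is this factor that makes the sampled ensemble semigrand canonical.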

  3. A Nonlinear Dynamic Subscale Model for Partially Resolved Numerical Simulation (PRNS)/Very Large Eddy Simulation (VLES) of Internal Non-Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2010-01-01

    A brief introduction of the temporal filter based partially resolved numerical simulation/very large eddy simulation approach (PRNS/VLES) and its distinct features are presented. A nonlinear dynamic subscale model and its advantages over the linear subscale eddy viscosity model are described. In addition, a guideline for conducting a PRNS/VLES simulation is provided. Results are presented for three turbulent internal flows. The first one is the turbulent pipe flow at low and high Reynolds numbers to illustrate the basic features of PRNS/VLES; the second one is the swirling turbulent flow in a LM6000 single injector to further demonstrate the differences in the calculated flow fields resulting from the nonlinear model versus the pure eddy viscosity model; the third one is a more complex turbulent flow generated in a single-element lean direct injection (LDI) combustor, the calculated result has demonstrated that the current PRNS/VLES approach is capable of capturing the dynamically important, unsteady turbulent structures while using a relatively coarse grid.

  4. Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk

    An increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique, within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features: it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit solvent simulations. The present implementation of the constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on the fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.

  5. On the Subgrid-Scale Modeling of Compressible Turbulence

    NASA Technical Reports Server (NTRS)

    Squires, Kyle; Zeman, Otto

    1990-01-01

    A new sub-grid scale model is presented for the large-eddy simulation of compressible turbulence. In the proposed model, compressibility contributions have been incorporated in the sub-grid scale eddy viscosity, which, in the incompressible limit, reduces to a form originally proposed by Smagorinsky (1963). The model has been tested against a simple extension of the traditional Smagorinsky eddy viscosity model using simulations of decaying, compressible homogeneous turbulence. Simulation results show that the proposed model provides greater dissipation of the compressive modes of the resolved-scale velocity field than does the Smagorinsky eddy viscosity model. For an initial r.m.s. turbulence Mach number of 1.0, simulations performed using the Smagorinsky model become physically unrealizable (i.e., negative energies) because of the inability of the model to sufficiently dissipate fluctuations due to resolved scale velocity dilations. The proposed model is able to provide the necessary dissipation of this energy and maintain the realizability of the flow. Following Zeman (1990), turbulent shocklets are considered to dissipate energy independent of the Kolmogorov energy cascade. A possible parameterization of dissipation by turbulent shocklets for Large-Eddy Simulation is also presented.
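
    For reference, the incompressible-limit form referred to above is the standard Smagorinsky eddy viscosity (notation ours: C_s the model constant, Δ the filter width, S̃_ij the resolved strain-rate tensor):

        \nu_t \;=\; (C_s \Delta)^2\,|\widetilde{S}|, \qquad |\widetilde{S}| \;=\; \sqrt{2\,\widetilde{S}_{ij}\widetilde{S}_{ij}}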

  6. Building test data from real outbreaks for evaluating detection algorithms.

    PubMed

    Texier, Gaetan; Jackson, Michael L; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve

    2017-01-01

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hastings Random Walk, Metropolis-Hastings Independent, Gibbs Sampler, Hybrid Gibbs Sampler), as sketched below. We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.
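
    The two core steps lend themselves to a short Python sketch (our illustration, with a made-up historical curve; the function name is ours): a homothetic transformation stretches the historical curve to a target duration and size, and binomial resampling then draws a noisy realization of it.

        import numpy as np

        rng = np.random.default_rng(42)
        historical = np.array([1, 3, 8, 15, 20, 14, 7, 3, 1], dtype=float)

        def simulate_outbreak(curve, target_days, target_cases):
            # homothetic transformation: rescale the time axis to target_days
            x_old = np.linspace(0.0, 1.0, len(curve))
            x_new = np.linspace(0.0, 1.0, target_days)
            shape = np.interp(x_new, x_old, curve)
            p = shape / shape.sum()              # daily case probabilities
            # binomial resampling: each day's count ~ Binomial(target_cases, p_day)
            return rng.binomial(target_cases, p)

        print(simulate_outbreak(historical, target_days=14, target_cases=120))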

  7. Building test data from real outbreaks for evaluating detection algorithms

    PubMed Central

    Texier, Gaetan; Jackson, Michael L.; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve

    2017-01-01

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method—ITSM, Metropolis-Hastings Random Walk, Metropolis-Hastings Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals. PMID:28863159

  8. A clustering approach for the analysis of solar energy yields: A case study for concentrating solar thermal power plants

    NASA Astrophysics Data System (ADS)

    Peruchena, Carlos M. Fernández; García-Barberena, Javier; Guisado, María Vicenta; Gastón, Martín

    2016-05-01

    The design of Concentrating Solar Thermal Power (CSTP) systems requires a detailed knowledge of the dynamic behavior of the meteorology at the site of interest. Meteorological series are often condensed into one representative year, known as a Typical Meteorological Year (TMY), with the aim of data volume reduction and speeding up energy system simulations. This approach seems to be appropriate for rather detailed simulations of a specific plant; however, in earlier stages of the design of a power plant, especially during the optimization of the large number of plant parameters before a final design is reached, a huge number of simulations are needed. Even with today's technology, the computational effort to simulate solar energy system performance with one year of data at high frequency (such as 1-min) may become colossal if a multivariable optimization has to be performed. This work presents a simple and efficient methodology for selecting a reduced number of individual days able to represent the electrical production of the plant throughout the complete year. To achieve this objective, a new procedure for determining a reduced set of typical weather data in order to evaluate the long-term performance of a solar energy system is proposed. The proposed methodology is based on cluster analysis and permits a drastic reduction of the computational effort related to the calculation of a CSTP plant energy yield by simulating a reduced number of days from a high-frequency TMY.
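
    A minimal sketch of the clustering step in Python (our illustration: the profiles are random stand-ins, the cluster count is arbitrary, and scikit-learn is assumed to be available): cluster the daily profiles, keep the day closest to each centroid, and weight each representative day by its cluster's share of the year.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        days = rng.random((365, 24))       # stand-in: 365 daily irradiance profiles

        k = 12                             # arbitrary number of representative days
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(days)
        weights = np.bincount(km.labels_, minlength=k) / len(days)

        reps = []
        for c in range(k):
            members = np.where(km.labels_ == c)[0]
            dist = np.linalg.norm(days[members] - km.cluster_centers_[c], axis=1)
            reps.append(members[dist.argmin()])   # day closest to centroid c

        # annual yield ~ 365 * sum_c weights[c] * plant_simulation(days[reps[c]]),
        # so only k days need the expensive plant simulation instead of 365.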

  9. Stochastic simulations on a model of circadian rhythm generation.

    PubMed

    Miura, Shigehiro; Shimokawa, Tetsuya; Nomura, Taishin

    2008-01-01

    Biological phenomena are often modeled by differential equations, where the states of a model system are described by continuous real values. When we consider concentrations of molecules as dynamical variables for a set of biochemical reactions, we implicitly assume that the numbers of molecules are large enough that their changes can be regarded as continuous and described deterministically. However, for a system with small numbers of molecules, changes in their numbers are apparently discrete and molecular noise becomes significant. In such cases, models with deterministic differential equations may be inappropriate, and the reactions must be described by stochastic equations. In this study, we focus on clock gene expression for circadian rhythm generation, which is known to involve small numbers of molecules. It is therefore appropriate for the system to be modeled by stochastic equations and analyzed by methodologies of stochastic simulation. The interlocked feedback model proposed by Ueda et al. as a set of deterministic ordinary differential equations provides the basis of our analyses. We apply two stochastic simulation methods, namely Gillespie's direct method and the stochastic differential equation method, also by Gillespie, to the interlocked feedback model. To this end, we first reformulated the original differential equations back into elementary chemical reactions. With those reactions, we simulate and analyze the dynamics of the model using both methods, in order to compare them with the dynamics obtained from the original deterministic model and to characterize how the dynamics depend on the simulation methodology.
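
    As a concrete illustration of Gillespie's direct method (our minimal sketch on a toy birth-death process, not the interlocked feedback model; rate constants are made up), each iteration draws an exponential waiting time from the total propensity and then picks which reaction fires in proportion to its propensity:

        import numpy as np

        # Gillespie direct method on a toy mRNA birth-death process:
        # production at constant rate k_prod, degradation at rate k_deg * x.
        rng = np.random.default_rng(0)
        k_prod, k_deg = 2.0, 0.1       # illustrative rate constants
        x, t, t_end = 0, 0.0, 200.0
        times, counts = [0.0], [0]
        while t < t_end:
            a = np.array([k_prod, k_deg * x])   # reaction propensities
            a0 = a.sum()                        # total propensity (always > 0 here)
            t += rng.exponential(1.0 / a0)      # waiting time to the next event
            if rng.random() < a[0] / a0:        # choose reaction: production...
                x += 1
            else:                               # ...or degradation
                x -= 1
            times.append(t)
            counts.append(x)
        # at stationarity, x fluctuates around k_prod / k_deg = 20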

  10. Synapse-Centric Mapping of Cortical Models to the SpiNNaker Neuromorphic Architecture

    PubMed Central

    Knight, James C.; Furber, Steve B.

    2016-01-01

    While the adult human brain has approximately 8.8 × 10¹⁰ neurons, this number is dwarfed by its 1 × 10¹⁵ synapses. From the point of view of neuromorphic engineering and neural simulation in general this makes the simulation of these synapses a particularly complex problem. SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Current solutions for simulating spiking neural networks on SpiNNaker are heavily inspired by work on distributed high-performance computing. However, while SpiNNaker shares many characteristics with such distributed systems, its component nodes have much more limited resources and, as the system lacks global synchronization, the computation performed on each node must complete within a fixed time step. We first analyze the performance of the current SpiNNaker neural simulation software and identify several problems that occur when it is used to simulate networks of the type often used to model the cortex which contain large numbers of sparsely connected synapses. We then present a new, more flexible approach for mapping the simulation of such networks to SpiNNaker which solves many of these problems. Finally we analyze the performance of our new approach using both benchmarks, designed to represent cortical connectivity, and larger, functional cortical models. In a benchmark network where neurons receive input from 8000 STDP synapses, our new approach allows 4× more neurons to be simulated on each SpiNNaker core than has been previously possible. We also demonstrate that the largest plastic neural network previously simulated on neuromorphic hardware can be run in real time using our new approach: double the speed that was previously achieved. Additionally this network contains two types of plastic synapse which previously had to be trained separately but, using our new approach, can be trained simultaneously. PMID:27683540

  11. Finite-Time and -Size Scalings in the Evaluation of Large Deviation Functions. Numerical Analysis in Continuous Time

    NASA Astrophysics Data System (ADS)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings in order to propose a numerical approach which allows one to extract the infinite-time and infinite-size limit of these estimators.
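
    In this population-dynamics setting, the large deviation (scaled cumulant generating) function is typically estimated from the growth of the clone population: schematically, over K cloning steps in a run of duration T with mean cloning factor X_k at step k,

        \hat{\psi}_s \;=\; \frac{1}{T}\sum_{k=1}^{K}\ln X_k

    and the finite-time and finite-size scalings analyzed in the record describe how this estimator approaches its infinite-time, infinite-population limit. The formula is our schematic of the standard estimator, not an excerpt from the paper.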

  12. Properties of convective oxygen and silicon burning shells in supernova progenitors

    NASA Astrophysics Data System (ADS)

    Collins, Christine; Müller, Bernhard; Heger, Alexander

    2018-01-01

    Recent 3D simulations have suggested that convective seed perturbations from shell burning can play an important role in triggering neutrino-driven supernova explosions. Since isolated simulations cannot determine whether this perturbation-aided mechanism is of general relevance across the progenitor mass range, we here investigate the pertinent properties of convective oxygen and silicon burning shells in a broad range of pre-supernova stellar evolution models. We find that conditions for perturbation-aided explosions are most favourable in the extended oxygen shells of progenitors between about 16 and 26 solar masses, which exhibit large-scale convective overturn with high convective Mach numbers. Although the highest convective Mach numbers of up to 0.3 are reached in the oxygen shells of low-mass progenitors, convection is typically dominated by small-scale modes in these shells, which implies a more modest role of initial perturbations in the explosion mechanism. Convective silicon burning rarely provides the high Mach numbers and large-scale perturbations required for perturbation-aided explosions. We also find that about 40 per cent of progenitors between 16 and 26 solar masses exhibit simultaneous oxygen and neon burning in the same convection zone as a result of a shell merger shortly before collapse.

  13. Dynamic security contingency screening and ranking using neural networks.

    PubMed

    Mansour, Y; Vaahedi, E; El-Sharkawi, M A

    1997-01-01

    This paper summarizes BC Hydro's experience in applying neural networks to dynamic security contingency screening and ranking. The idea is to use the information on the prevailing operating condition and directly provide contingency screening and ranking using a trained neural network. To train the two neural networks for the large-scale systems of BC Hydro and Hydro Quebec, a total of 1691 detailed transient stability simulations were conducted: 1158 for the BC Hydro system and 533 for the Hydro Quebec system. The simulation program was equipped with an energy margin calculation module (second kick) to measure the energy margin in each run. The first set of results showed poor performance of the neural networks in assessing dynamic security. However, a number of corrective measures improved the results significantly. These corrective measures included: 1) the effectiveness of the output; 2) the number of outputs; 3) the type of features (static versus dynamic); 4) the number of features; 5) system partitioning; and 6) the ratio of training samples to features. The final results obtained using the large-scale systems of BC Hydro and Hydro Quebec demonstrate good potential for neural networks in dynamic security contingency screening and ranking.

  14. The effect of grading the atomic number at resistive guide element interface on magnetic collimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alraddadi, R. A. B.; Woolsey, N. C.; Robinson, A. P. L.

    2016-07-15

    Using three-dimensional numerical simulations, this paper shows that grading the atomic number, and thus the resistivity, at the interface between an embedded high atomic number guide element and a lower atomic number substrate enhances the growth of a resistive magnetic field. This can lead to a large integrated magnetic flux density, which is fundamental to confining higher energy fast electrons. This results in significant improvements in both magnetic collimation and fast-electron-temperature uniformity across the guide element. The graded interface target provides a method for resistive guiding that is tolerant to laser pointing.

  15. User's Guide for the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS)

    NASA Technical Reports Server (NTRS)

    Frederick, Dean K.; DeCastro, Jonathan A.; Litt, Jonathan S.

    2007-01-01

    This report is a Users Guide for the NASA-developed Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) software, which is a transient simulation of a large commercial turbofan engine (up to 90,000-lb thrust) with a realistic engine control system. The software supports easy access to health, control, and engine parameters through a graphical user interface (GUI). C-MAPSS provides the user with a graphical turbofan engine simulation environment in which advanced algorithms can be implemented and tested. C-MAPSS can run user-specified transient simulations, and it can generate state-space linear models of the nonlinear engine model at an operating point. The code has a number of GUI screens that allow point-and-click operation and have editable fields for user-specified input. The software includes an atmospheric model that allows simulation of engine operation at altitudes from sea level to 40,000 ft, Mach numbers from 0 to 0.90, and ambient temperatures from -60 to 103 °F. The package also includes a power-management system that allows the engine to be operated over a wide range of thrust levels throughout the full range of flight conditions.

  16. How many neurons can we see with current spike sorting algorithms?

    PubMed

    Pedreira, Carlos; Martinez, Juan; Ison, Matias J; Quian Quiroga, Rodrigo

    2012-10-15

    Recent studies have highlighted the disagreement between the typical number of neurons observed with extracellular recordings and the number to be expected based on anatomical and physiological considerations. This disagreement has been mainly attributed to the presence of sparsely firing neurons. However, it is also possible that it is due to limitations of the spike sorting algorithms used to process the data. To address this issue, we used realistic simulations of extracellular recordings and found a relatively poor spike sorting performance for simulations containing a large number of neurons. In fact, the number of correctly identified neurons for single-channel recordings showed an asymptotic behavior, saturating at about 8-10 units when up to 20 units were present in the data. This performance was significantly poorer for neurons with low firing rates, as these units were twice as likely to be missed as units with high firing rates in simulations containing many neurons. These results uncover one of the main reasons for the relatively low number of neurons found in extracellular recordings and also stress the importance of further development of spike sorting algorithms.

  17. Read margin analysis of crossbar arrays using the cell-variability-aware simulation method

    NASA Astrophysics Data System (ADS)

    Sun, Wookyung; Choi, Sujin; Shin, Hyungsoon

    2018-02-01

    This paper proposes a new concept for read margin analysis of crossbar arrays using cell-variability-aware simulation. The size of the crossbar array should be considered when predicting the read margin characteristic, because the read margin depends on the number of word lines and bit lines. However, an excessively long CPU time is required to simulate large arrays using a commercial circuit simulator. A variability-aware MATLAB simulator that considers independent variability sources is therefore developed to analyze the read margin as a function of array size. The developed MATLAB simulator provides an effective method for reducing the simulation time while maintaining the accuracy of the read margin estimation in the crossbar array. The simulation is also highly efficient in analyzing the characteristics of the crossbar memory array while accounting for statistical variations in the cell characteristics.
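
    The variability-aware idea can be sketched in a greatly simplified form: sample every cell resistance from a lognormal distribution and compare the read currents of a selected cell in its two states against the sneak current contributed by the unselected cells on the same bit line. This toy model (ideal wires, one aggregate sneak-path term) is an assumption for illustration, not the paper's full MATLAB simulator; it merely shows the relative read margin closing as the array grows.

      import numpy as np

      rng = np.random.default_rng(0)

      def state_currents(n_rows, v_read=0.2, r_lrs=1e4, r_hrs=1e6,
                         sigma=0.3, trials=20000):
          # Lognormal device-to-device variability on every cell resistance.
          # Unselected cells (assumed LRS, worst case) add a sneak current.
          sneak = v_read * np.sum(
              1.0 / (r_lrs * rng.lognormal(0.0, sigma, size=(trials, n_rows - 1))),
              axis=1)
          i_sel_lrs = v_read / (r_lrs * rng.lognormal(0.0, sigma, size=trials))
          i_sel_hrs = v_read / (r_hrs * rng.lognormal(0.0, sigma, size=trials))
          return i_sel_lrs + sneak, i_sel_hrs + sneak

      for n in (8, 32, 128):
          i1, i0 = state_currents(n)
          print(f"{n:4d} rows: relative read margin "
                f"{(i1.mean() - i0.mean()) / i1.mean():.3f}")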

  18. Magnetic properties of dendrimer structures with different coordination numbers: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Masrour, R.; Jabar, A.

    2016-11-01

    We investigate the magnetic properties of Cayley trees of large molecules with dendrimer structure using Monte Carlo simulations. The thermal magnetization and magnetic susceptibility of a dendrimer structure are given for different coordination numbers, Z = 3, 4, and 5, and different generations, g = 2 and 3. The variation of the magnetization with the exchange interactions and crystal fields is also given for this system. The magnetic hysteresis cycles have been established.
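
    As a hedged illustration of the method (not the authors' code), a Metropolis Monte Carlo simulation of Ising-like spins on a Cayley tree with coordination number Z and g generations can be sketched as follows; the coupling, temperature grid, and sweep count are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(1)

      def cayley_tree(z=3, g=3):
          # Adjacency list of a Cayley tree: the root has z neighbours,
          # every other internal node has z - 1 children.
          adj, frontier = [[]], [0]
          for _ in range(g):
              new = []
              for node in frontier:
                  for _ in range(z if node == 0 else z - 1):
                      adj.append([node])
                      adj[node].append(len(adj) - 1)
                      new.append(len(adj) - 1)
              frontier = new
          return adj

      def metropolis(adj, T, J=1.0, sweeps=2000):
          # Single-spin-flip Metropolis updates on the tree.
          n = len(adj)
          s = rng.choice([-1, 1], size=n)
          for _ in range(sweeps):
              for i in rng.integers(0, n, size=n):
                  dE = 2.0 * J * s[i] * sum(s[j] for j in adj[i])
                  if dE <= 0 or rng.random() < np.exp(-dE / T):
                      s[i] = -s[i]
          return abs(s.mean())

      adj = cayley_tree(z=3, g=3)
      for T in (0.5, 1.0, 2.0, 4.0):
          print(f"T = {T}: <|m|> ~ {metropolis(adj, T):.2f}")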

  19. Evidence for Bolgiano-Obukhov scaling in rotating stratified turbulence using high-resolution direct numerical simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, Duane L.; Pouquet, Annick; Mininni, Pablo D.

    2015-01-01

    We report results on rotating stratified turbulence in the absence of forcing, with large-scale isotropic initial conditions, using direct numerical simulations computed on grids of up to 4096^3 points. The Reynolds and Froude numbers are, respectively, Re = 5.4 × 10^4 and Fr = 0.0242. The ratio of the Brunt-Väisälä to the inertial wave frequency, N/f, is taken to be equal to 5, a choice appropriate to model the dynamics of the southern abyssal ocean at mid latitudes. This gives a global buoyancy Reynolds number R_B = Re Fr^2 = 32, a value sufficient for some isotropy to be recovered in the small scales beyond the Ozmidov scale, but still moderate enough that the intermediate scales where waves are prevalent are well resolved. We concentrate on the large-scale dynamics and confirm that the Froude number based on a typical vertical length scale is of order unity, with strong gradients in the vertical. Two characteristic scales emerge from this computation, and are identified from sharp variations in the spectral distribution of either total energy or helicity. A spectral break is also observed at a scale at which the partition of energy between the kinetic and potential modes changes abruptly, and beyond which a Kolmogorov-like spectrum recovers. Large slanted layers are ubiquitous in the flow in the velocity and temperature fields, and a large-scale enhancement of energy is also observed, directly attributable to the effect of rotation.

  20. Methodological Considerations in Estimation of Phenotype Heritability Using Genome-Wide SNP Data, Illustrated by an Analysis of the Heritability of Height in a Large Sample of African Ancestry Adults

    PubMed Central

    Chen, Fang; He, Jing; Zhang, Jianqi; Chen, Gary K.; Thomas, Venetta; Ambrosone, Christine B.; Bandera, Elisa V.; Berndt, Sonja I.; Bernstein, Leslie; Blot, William J.; Cai, Qiuyin; Carpten, John; Casey, Graham; Chanock, Stephen J.; Cheng, Iona; Chu, Lisa; Deming, Sandra L.; Driver, W. Ryan; Goodman, Phyllis; Hayes, Richard B.; Hennis, Anselm J. M.; Hsing, Ann W.; Hu, Jennifer J.; Ingles, Sue A.; John, Esther M.; Kittles, Rick A.; Kolb, Suzanne; Leske, M. Cristina; Monroe, Kristine R.; Murphy, Adam; Nemesure, Barbara; Neslund-Dudas, Christine; Nyante, Sarah; Ostrander, Elaine A; Press, Michael F.; Rodriguez-Gil, Jorge L.; Rybicki, Ben A.; Schumacher, Fredrick; Stanford, Janet L.; Signorello, Lisa B.; Strom, Sara S.; Stevens, Victoria; Van Den Berg, David; Wang, Zhaoming; Witte, John S.; Wu, Suh-Yuh; Yamamura, Yuko; Zheng, Wei; Ziegler, Regina G.; Stram, Alexander H.; Kolonel, Laurence N.; Marchand, Loïc Le; Henderson, Brian E.; Haiman, Christopher A.; Stram, Daniel O.

    2015-01-01

    Height has an extremely polygenic pattern of inheritance. Genome-wide association studies (GWAS) have revealed hundreds of common variants that are associated with human height at genome-wide levels of significance. However, only a small fraction of phenotypic variation can be explained by the aggregate of these common variants. In a large study of African-American men and women (n = 14,419), we genotyped and analyzed 966,578 autosomal SNPs across the entire genome using a linear mixed model variance components approach implemented in the program GCTA (Yang et al Nat Genet 2010), and estimated an additive heritability of 44.7% (se: 3.7%) for this phenotype in a sample of evidently unrelated individuals. While this estimated value is similar to that given by Yang et al in their analyses, we remain concerned about two related issues: (1) whether in the complete absence of hidden relatedness, variance components methods have adequate power to estimate heritability when a very large number of SNPs are used in the analysis; and (2) whether estimation of heritability may be biased, in real studies, by low levels of residual hidden relatedness. We addressed the first question in a semi-analytic fashion by directly simulating the distribution of the score statistic for a test of zero heritability with and without low levels of relatedness. The second question was addressed by a very careful comparison of the behavior of estimated heritability for both observed (self-reported) height and simulated phenotypes compared to imputation R2 as a function of the number of SNPs used in the analysis. These simulations help to address the important question about whether today's GWAS SNPs will remain useful for imputing causal variants that are discovered using very large sample sizes in future studies of height, or whether the causal variants themselves will need to be genotyped de novo in order to build a prediction model that ultimately captures a large fraction of the variability of height, and by implication other complex phenotypes. Our overall conclusions are that when study sizes are quite large (5,000 or so) the additive heritability estimate for height is not apparently biased upwards using the linear mixed model; however there is evidence in our simulation that a very large number of causal variants (many thousands) each with very small effect on phenotypic variance will need to be discovered to fill the gap between the heritability explained by known versus unknown causal variants. We conclude that today's GWAS data will remain useful in the future for causal variant prediction, but that finding the causal variants that need to be predicted may be extremely laborious. PMID:26125186
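
    The variance-components setup can be made concrete with a toy simulation: generate unrelated genotypes, build the genetic relationship matrix (GRM) as in GCTA, simulate a phenotype with known heritability, and recover h2. The sketch below deliberately uses simple Haseman-Elston regression (regressing phenotype cross-products on off-diagonal GRM entries) rather than GCTA's REML, and all sizes and parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)
      n, m, h2 = 1000, 5000, 0.45        # individuals, SNPs, true heritability

      # Unrelated genotypes: allele counts ~ Binomial(2, p), then standardized.
      p = rng.uniform(0.05, 0.5, size=m)
      G = rng.binomial(2, p, size=(n, m)).astype(float)
      Z = (G - 2 * p) / np.sqrt(2 * p * (1 - p))

      # GRM as in GCTA: K = Z Z' / m.
      K = Z @ Z.T / m

      # Phenotype: every SNP causal with a tiny effect, plus noise.
      beta = rng.normal(0.0, np.sqrt(h2 / m), size=m)
      y = Z @ beta + rng.normal(0.0, np.sqrt(1 - h2), size=n)
      y = (y - y.mean()) / y.std()

      # Haseman-Elston regression: E[y_i * y_j] = h2 * K_ij for i != j.
      iu = np.triu_indices(n, k=1)
      x, yy = K[iu], np.outer(y, y)[iu]
      print(f"estimated h2 = {(x @ yy) / (x @ x):.3f} (true {h2})")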

  1. Methodological Considerations in Estimation of Phenotype Heritability Using Genome-Wide SNP Data, Illustrated by an Analysis of the Heritability of Height in a Large Sample of African Ancestry Adults.

    PubMed

    Chen, Fang; He, Jing; Zhang, Jianqi; Chen, Gary K; Thomas, Venetta; Ambrosone, Christine B; Bandera, Elisa V; Berndt, Sonja I; Bernstein, Leslie; Blot, William J; Cai, Qiuyin; Carpten, John; Casey, Graham; Chanock, Stephen J; Cheng, Iona; Chu, Lisa; Deming, Sandra L; Driver, W Ryan; Goodman, Phyllis; Hayes, Richard B; Hennis, Anselm J M; Hsing, Ann W; Hu, Jennifer J; Ingles, Sue A; John, Esther M; Kittles, Rick A; Kolb, Suzanne; Leske, M Cristina; Millikan, Robert C; Monroe, Kristine R; Murphy, Adam; Nemesure, Barbara; Neslund-Dudas, Christine; Nyante, Sarah; Ostrander, Elaine A; Press, Michael F; Rodriguez-Gil, Jorge L; Rybicki, Ben A; Schumacher, Fredrick; Stanford, Janet L; Signorello, Lisa B; Strom, Sara S; Stevens, Victoria; Van Den Berg, David; Wang, Zhaoming; Witte, John S; Wu, Suh-Yuh; Yamamura, Yuko; Zheng, Wei; Ziegler, Regina G; Stram, Alexander H; Kolonel, Laurence N; Le Marchand, Loïc; Henderson, Brian E; Haiman, Christopher A; Stram, Daniel O

    2015-01-01

    Height has an extremely polygenic pattern of inheritance. Genome-wide association studies (GWAS) have revealed hundreds of common variants that are associated with human height at genome-wide levels of significance. However, only a small fraction of phenotypic variation can be explained by the aggregate of these common variants. In a large study of African-American men and women (n = 14,419), we genotyped and analyzed 966,578 autosomal SNPs across the entire genome using a linear mixed model variance components approach implemented in the program GCTA (Yang et al Nat Genet 2010), and estimated an additive heritability of 44.7% (se: 3.7%) for this phenotype in a sample of evidently unrelated individuals. While this estimated value is similar to that given by Yang et al in their analyses, we remain concerned about two related issues: (1) whether in the complete absence of hidden relatedness, variance components methods have adequate power to estimate heritability when a very large number of SNPs are used in the analysis; and (2) whether estimation of heritability may be biased, in real studies, by low levels of residual hidden relatedness. We addressed the first question in a semi-analytic fashion by directly simulating the distribution of the score statistic for a test of zero heritability with and without low levels of relatedness. The second question was addressed by a very careful comparison of the behavior of estimated heritability for both observed (self-reported) height and simulated phenotypes compared to imputation R2 as a function of the number of SNPs used in the analysis. These simulations help to address the important question about whether today's GWAS SNPs will remain useful for imputing causal variants that are discovered using very large sample sizes in future studies of height, or whether the causal variants themselves will need to be genotyped de novo in order to build a prediction model that ultimately captures a large fraction of the variability of height, and by implication other complex phenotypes. Our overall conclusions are that when study sizes are quite large (5,000 or so) the additive heritability estimate for height is not apparently biased upwards using the linear mixed model; however there is evidence in our simulation that a very large number of causal variants (many thousands) each with very small effect on phenotypic variance will need to be discovered to fill the gap between the heritability explained by known versus unknown causal variants. We conclude that today's GWAS data will remain useful in the future for causal variant prediction, but that finding the causal variants that need to be predicted may be extremely laborious.

  2. Fractional Transport in Strongly Turbulent Plasmas.

    PubMed

    Isliker, Heinz; Vlahos, Loukas; Constantinescu, Dana

    2017-07-28

    We analyze statistically the energization of particles in a large-scale environment of strong turbulence that is fragmented into a large number of distributed current filaments. The turbulent environment is generated through strongly perturbed, 3D, resistive magnetohydrodynamics simulations, and it emerges naturally from the nonlinear evolution, without a specific reconnection geometry being set up. Based on test-particle simulations, we estimate the transport coefficients in energy space for use in the classical Fokker-Planck (FP) equation, and we show that the latter fails to reproduce the simulation results. The reason is that transport in energy space is highly anomalous (strange): the particles perform Lévy flights, and the energy distributions show extended power-law tails. We therefore motivate the use of a fractional transport equation (FTE) and derive its specific form, determine its parameters and the order of the fractional derivatives from the simulation data, and show that the FTE is able to reproduce the high-energy part of the simulation data very well. The procedure for determining the FTE parameters also makes clear that it is the analysis of the simulation data that allows us to decide whether a classical FP equation or an FTE is appropriate.
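
    The anomalous-transport claim can be illustrated with a hedged toy model: compare Gaussian (diffusive, Fokker-Planck-like) steps in energy with heavy-tailed Lévy steps; only the latter develop extended high-energy tails. The step distributions and parameters below are illustrative, not fitted to the paper's MHD data.

      import numpy as np

      rng = np.random.default_rng(3)
      n, steps = 100000, 200

      # Gaussian random walk in energy (classical diffusive transport).
      E_gauss = np.abs(1.0 + rng.normal(0.0, 0.05, (n, steps)).sum(axis=1))

      # Levy flights: power-law distributed jumps with tail index alpha < 2,
      # sampled by inverse transform from P(|dE| > s) ~ s**(-alpha).
      alpha = 1.5
      u = rng.uniform(size=(n, steps))
      jumps = rng.choice([-1, 1], size=(n, steps)) * 0.01 * u ** (-1.0 / alpha)
      E_levy = np.abs(1.0 + jumps.sum(axis=1))

      for name, E in (("gaussian", E_gauss), ("levy", E_levy)):
          # Fraction of particles far above the mean energy: the heavy
          # tail makes this orders of magnitude larger for Levy flights.
          print(name, np.mean(E > 10 * E.mean()))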

  3. Fractional Transport in Strongly Turbulent Plasmas

    NASA Astrophysics Data System (ADS)

    Isliker, Heinz; Vlahos, Loukas; Constantinescu, Dana

    2017-07-01

    We analyze statistically the energization of particles in a large-scale environment of strong turbulence that is fragmented into a large number of distributed current filaments. The turbulent environment is generated through strongly perturbed, 3D, resistive magnetohydrodynamics simulations, and it emerges naturally from the nonlinear evolution, without a specific reconnection geometry being set up. Based on test-particle simulations, we estimate the transport coefficients in energy space for use in the classical Fokker-Planck (FP) equation, and we show that the latter fails to reproduce the simulation results. The reason is that transport in energy space is highly anomalous (strange): the particles perform Lévy flights, and the energy distributions show extended power-law tails. We therefore motivate the use of a fractional transport equation (FTE) and derive its specific form, determine its parameters and the order of the fractional derivatives from the simulation data, and show that the FTE is able to reproduce the high-energy part of the simulation data very well. The procedure for determining the FTE parameters also makes clear that it is the analysis of the simulation data that allows us to decide whether a classical FP equation or an FTE is appropriate.

  4. BlazeDEM3D-GPU A Large Scale DEM simulation code for GPUs

    NASA Astrophysics Data System (ADS)

    Govender, Nicolin; Wilke, Daniel; Pizette, Patrick; Khinast, Johannes

    2017-06-01

    Accurately predicting the dynamics of particulate materials is of importance to numerous scientific and industrial areas, with applications ranging across particle scales from powder flow to ore crushing. Computational discrete element simulations are a viable option to aid in the understanding of particulate dynamics and the design of devices such as mixers, silos and ball mills, as laboratory-scale tests come at a significant cost. However, the computational time required for an industrial-scale simulation consisting of tens of millions of particles can run to months on large CPU clusters, making the Discrete Element Method (DEM) infeasible for industrial applications. Simulations are therefore typically restricted to tens of thousands of particles with highly detailed particle shapes, or a few million particles with often oversimplified particle shapes. However, a number of applications require an accurate representation of the particle shape to capture the macroscopic behaviour of the particulate system. In this paper we give an overview of the recent extensions to the open-source GPU-based DEM code, BlazeDEM3D-GPU, which can simulate millions of polyhedra and tens of millions of spheres on a desktop computer with a single or multiple GPUs.
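
    As a minimal sketch of the normal contact force law commonly used for spheres in DEM (a linear spring-dashpot; this is a generic textbook model, not BlazeDEM3D-GPU's actual GPU kernels), two colliding particles can be integrated as follows; stiffness, damping, and time step are illustrative values.

      import numpy as np

      k_n, c_n, radius, mass, dt = 1e4, 0.5, 0.01, 1e-3, 1e-6

      def contact_force_on_2(x1, x2, v1, v2):
          # Linear spring-dashpot acting on the overlap along the normal.
          d = x2 - x1
          dist = np.linalg.norm(d)
          overlap = 2 * radius - dist
          if overlap <= 0.0:
              return np.zeros(3)
          n = d / dist                 # unit normal from particle 1 to 2
          vn = np.dot(v2 - v1, n)      # normal relative velocity (<0: closing)
          return (k_n * overlap - c_n * vn) * n

      # Two spheres on a head-on collision course, symplectic Euler stepping.
      x = np.array([[0.0, 0.0, 0.0], [0.025, 0.0, 0.0]])
      v = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
      for _ in range(20000):
          f = contact_force_on_2(x[0], x[1], v[0], v[1])
          v[0] -= f / mass * dt
          v[1] += f / mass * dt
          x += v * dt
      print("post-collision x-velocities:", v[:, 0])   # damping reduces speeds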

  5. Development of Multimedia Computer Applications for Clinical Pharmacy Training.

    ERIC Educational Resources Information Center

    Schlict, John R.; Livengood, Bruce; Shepherd, John

    1997-01-01

    Computer simulations in clinical pharmacy education help expose students to clinical patient management earlier and enable training of large numbers of students outside conventional clinical practice sites. Multimedia instruction and its application to pharmacy training are described, the general process for developing multimedia presentations is…

  6. QTL fine mapping with Bayes C(π): a simulation study.

    PubMed

    van den Berg, Irene; Fritz, Sébastien; Boichard, Didier

    2013-06-19

    Accurate QTL mapping is a prerequisite in the search for causative mutations. Bayesian genomic selection models that analyse many markers simultaneously should provide more accurate QTL detection results than single-marker models. Our objectives were to (a) evaluate by simulation the influence of heritability, number of QTL and number of records on the accuracy of QTL mapping with Bayes Cπ and Bayes C; and (b) estimate the QTL status (homozygous vs. heterozygous) of the individuals analysed. This study focussed on the ten largest detected QTL, assuming they are candidates for further characterization. Our simulations were based on a real dairy cattle population genotyped for 38,277 phased markers. Some of these markers were considered biallelic QTL and used to generate corresponding phenotypes. Different numbers of records (4387 and 1500), heritability values (0.1, 0.4 and 0.7) and numbers of QTL (10, 100 and 1000) were studied. QTL detection was based on the posterior inclusion probability for individual markers, or on the sum of the posterior inclusion probabilities for consecutive markers, estimated using Bayes C or Bayes Cπ. The QTL status of the individuals was derived from the contrast between the sums of the SNP allelic effects of their chromosomal segments. The proportion of markers with null effect (π) frequently did not reach convergence, leading to poor results for Bayes Cπ in QTL detection; fixing π led to better results. Detection of the largest QTL was most accurate for medium to high heritability, for low to moderate numbers of QTL, and with a large number of records. The QTL status was accurately inferred when the distribution of the contrast between chromosomal segment effects was bimodal. QTL detection is feasible with Bayes C; it is recommended to use a large dataset and to focus on highly heritable traits and on the largest QTL.
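
    The window-based detection criterion (summing posterior inclusion probabilities over consecutive markers) is straightforward to sketch; the per-marker probabilities below are simulated rather than taken from an actual Bayes C run.

      import numpy as np

      rng = np.random.default_rng(4)

      # Simulated per-marker posterior inclusion probabilities: background
      # noise plus one QTL whose signal is smeared over nearby markers in LD.
      pip = rng.uniform(0.0, 0.05, size=2000)
      pip[1000:1005] += np.array([0.15, 0.30, 0.25, 0.20, 0.10])

      # Sum PIPs over sliding windows of w consecutive markers.
      w = 10
      window_pip = np.convolve(pip, np.ones(w), mode="valid")
      best = int(np.argmax(window_pip))
      print(f"top window: markers {best}..{best + w - 1}, "
            f"summed PIP = {window_pip[best]:.2f}")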

  7. Three-dimensional direct numerical simulation of turbulent lean premixed methane combustion with detailed kinetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aspden, A. J.; Day, M. S.; Bell, J. B.

    The interaction of maintained homogeneous isotropic turbulence with lean premixed methane flames is investigated using direct numerical simulation with detailed chemistry. The conditions are chosen to be close to those found in atmospheric laboratory experiments. As the Karlovitz number is increased from 1 to 36, the preheat zone becomes thickened, while the reaction zone remains largely unaffected. A negative correlation of fuel consumption with mean flame surface curvature is observed. With increasing turbulence intensity, the chemical composition in the preheat zone tends towards that of an idealised unity Lewis number flame, which we argue is the onset of the transition to distributed burning, and the response of the various chemical species is shown to fall into broad classes. Smaller-scale simulations are used to isolate the specific role of species diffusion at high turbulent intensities. Diffusion of atomic hydrogen is shown to be related to the observed curvature correlations, but does not have a significant consequential impact on the thickening of the preheat zone. It is also shown that the susceptibility of the preheat zone to thickening by turbulence is related to the 'global' Lewis number (the Lewis number of the deficient reactant); higher global Lewis number flames tend to be more prone to thickening.

  8. Three-dimensional direct numerical simulation of turbulent lean premixed methane combustion with detailed kinetics

    DOE PAGES

    Aspden, A. J.; Day, M. S.; Bell, J. B.

    2016-02-18

    The interaction of maintained homogeneous isotropic turbulence with lean premixed methane flames is investigated using direct numerical simulation with detailed chemistry. The conditions are chosen to be close to those found in atmospheric laboratory experiments. As the Karlovitz number is increased from 1 to 36, the preheat zone becomes thickened, while the reaction zone remains largely unaffected. A negative correlation of fuel consumption with mean flame surface curvature is observed. With increasing turbulence intensity, the chemical composition in the preheat zone tends towards that of an idealised unity Lewis number flame, which we argue is the onset of the transition to distributed burning, and the response of the various chemical species is shown to fall into broad classes. Smaller-scale simulations are used to isolate the specific role of species diffusion at high turbulent intensities. Diffusion of atomic hydrogen is shown to be related to the observed curvature correlations, but does not have a significant consequential impact on the thickening of the preheat zone. It is also shown that the susceptibility of the preheat zone to thickening by turbulence is related to the 'global' Lewis number (the Lewis number of the deficient reactant); higher global Lewis number flames tend to be more prone to thickening.

  9. PLATSIM: An efficient linear simulation and analysis package for large-order flexible systems

    NASA Technical Reports Server (NTRS)

    Maghami, Periman; Kenny, Sean P.; Giesy, Daniel P.

    1995-01-01

    PLATSIM is a software package designed to provide efficient time and frequency domain analysis of large-order generic space platforms implemented with any linear time-invariant control system. Time domain analysis provides simulations of the overall spacecraft response levels due to either onboard or external disturbances. The time domain results can then be processed by the jitter analysis module to assess the spacecraft's pointing performance in a computationally efficient manner. The resulting jitter analysis algorithms have produced an increase in speed of several orders of magnitude over the brute-force approach of sweeping minima and maxima. Frequency domain analysis produces frequency response functions for uncontrolled and controlled platform configurations; the latter represents an enabling technology for large-order flexible systems. PLATSIM uses a sparse matrix formulation for the spacecraft dynamics model, which makes both the time and frequency domain operations quite efficient, particularly when a large number of modes are required to capture the true dynamics of the spacecraft. The package is written in the MATLAB script language. A graphical user interface (GUI) is included in the PLATSIM software package; this GUI uses MATLAB's Handle Graphics to provide a convenient way of setting simulation and analysis parameters.
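
    The jitter computation being accelerated, the peak-to-peak excursion of a pointing signal over every time window, has a classic O(N) formulation using monotonic deques; the generic sketch below illustrates the speedup over the brute-force sweep of minima and maxima (it is not PLATSIM's algorithm, whose details are not given here).

      from collections import deque
      import numpy as np

      def jitter_brute(x, w):
          # Peak-to-peak value in every length-w window: O(N * w).
          return np.array([x[i:i + w].max() - x[i:i + w].min()
                           for i in range(len(x) - w + 1)])

      def jitter_deque(x, w):
          # Monotonic deques track window max/min in O(N) overall.
          out, hi, lo = [], deque(), deque()
          for i, v in enumerate(x):
              while hi and x[hi[-1]] <= v:
                  hi.pop()
              while lo and x[lo[-1]] >= v:
                  lo.pop()
              hi.append(i)
              lo.append(i)
              if hi[0] <= i - w:
                  hi.popleft()
              if lo[0] <= i - w:
                  lo.popleft()
              if i >= w - 1:
                  out.append(x[hi[0]] - x[lo[0]])
          return np.array(out)

      x = np.random.default_rng(5).normal(size=100000)
      assert np.allclose(jitter_brute(x, 200), jitter_deque(x, 200))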

  10. Analysis of near-surface relative humidity in a wind turbine array boundary layer using an instrumented unmanned aerial system and large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Adkins, Kevin; Elfajri, Oumnia; Sescu, Adrian

    2016-11-01

    Simulation and modeling have shown that wind farms have an impact on the near-surface atmospheric boundary layer (ABL) as turbulent wakes generated by the turbines enhance vertical mixing. These changes alter downstream atmospheric properties. With a large portion of wind farms hosted within an agricultural context, changes to the environment can potentially have secondary impacts such as to the productivity of crops. With the exception of a few observational data sets that focus on the impact to near-surface temperature, little to no observational evidence exists. These few studies also lack high spatial resolution due to their use of a limited number of meteorological towers or remote sensing techniques. This study utilizes an instrumented small unmanned aerial system (sUAS) to gather in-situ field measurements from two Midwest wind farms, focusing on the impact that large utility-scale wind turbines have on relative humidity. Results are also compared to numerical experiments conducted using large eddy simulation (LES). Wind turbines are found to differentially alter the relative humidity in the downstream, spanwise and vertical directions under a variety of atmospheric stability conditions.

  11. Examining the impact of larval source management and insecticide-treated nets using a spatial agent-based model of Anopheles gambiae and a landscape generator tool.

    PubMed

    Arifin, S M Niaz; Madey, Gregory R; Collins, Frank H

    2013-08-21

    Agent-based models (ABMs) have been used to estimate the effects of malaria-control interventions. Early studies have shown the efficacy of larval source management (LSM) and insecticide-treated nets (ITNs) as vector-control interventions, applied both in isolation and in combination. However, the robustness of results can be affected by several important modelling assumptions, including the type of boundary used for landscapes and the number of replicated simulation runs reported in results. The choice of ITN coverage definition may also affect the predictive findings. Hence, independent verification by replication of the findings of published models is especially important. A spatially explicit entomological ABM of Anopheles gambiae is used to simulate the resource-seeking process of mosquitoes in grid-based landscapes. To explore LSM and replicate the results of an earlier LSM study, the original landscapes and scenarios are replicated using a landscape generator tool, and 1,800 replicated simulations are run using absorbing and non-absorbing boundaries. To explore ITNs and evaluate the relative impacts of the different ITN coverage schemes, the settings of an earlier ITN study are replicated, the coverage schemes are defined and simulated, and 9,000 replicated simulations for three ITN parameters (coverage, repellence and mortality) are run. To evaluate LSM and ITNs in combination, landscapes with varying densities of houses and human populations are generated, and 12,000 simulations are run. General agreement with the earlier LSM study is observed when an absorbing boundary is used. However, using a non-absorbing boundary produces significantly different results, which may be attributed to the unrealistic killing effect of an absorbing boundary. Abundance cannot be completely suppressed by removing aquatic habitats within 300 m of houses. Also, with density-dependent oviposition, removal of an insufficient number of aquatic habitats may prove counter-productive. The importance of performing a large number of simulation runs is also demonstrated. For ITNs, the choice of coverage scheme has important implications, and too high a repellence yields detrimental effects. When LSM and ITNs are applied in combination, ITN mortality plays a more important role at higher densities of houses. With partial mortality, increasing ITN coverage is more effective than increasing LSM coverage, and integrating both interventions yields more synergy as the density of houses increases. Using a non-absorbing boundary and reporting average results from a sufficiently large number of simulation runs are strongly recommended for malaria ABMs. Several guidelines (code and data sharing, relevant documentation, and standardized models) are also recommended for future modellers.
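
    The boundary-condition effect can be demonstrated with a stripped-down sketch: mosquitoes as random walkers on a grid, where an absorbing boundary permanently removes any walker that steps outside the landscape while a reflecting (non-absorbing) boundary keeps it inside. The grid size, population, and step counts are assumptions for illustration, not the paper's ABM.

      import numpy as np

      rng = np.random.default_rng(6)

      def surviving_walkers(boundary, size=50, n=5000, steps=500):
          pos = rng.integers(0, size, size=(n, 2)).astype(float)
          alive = np.ones(n, dtype=bool)
          for _ in range(steps):
              pos[alive] += rng.integers(-1, 2, size=(int(alive.sum()), 2))
              if boundary == "absorbing":
                  # Walkers leaving the landscape are killed permanently.
                  alive &= np.all((pos >= 0) & (pos < size), axis=1)
              else:
                  # Non-absorbing: reflect walkers back into the landscape.
                  pos = np.clip(pos, 0, size - 1)
          return int(alive.sum())

      for b in ("absorbing", "reflecting"):
          print(b, surviving_walkers(b))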

  12. A parallel-pipelined architecture for a multi carrier demodulator

    NASA Astrophysics Data System (ADS)

    Kwatra, S. C.; Jamali, M. M.; Eugene, Linus P.

    1991-03-01

    Analog devices have been used for processing information on board satellites. Presently, digital devices are being used because they are economical and flexible compared to their analog counterparts. Several schemes of digital transmission can be used depending on the data rate requirement of the user. An economical scheme of transmission for small earth stations uses single channel per carrier/frequency division multiple access (SCPC/FDMA) on the uplink and time division multiplexing (TDM) on the downlink. This is a typical communication service offered to low-data-rate users in the commercial mass market. These channels usually pertain to either voice or data transmission. An efficient digital demodulator architecture is provided for a large number of low-data-rate users. A demodulator primarily consists of carrier, clock, and data recovery modules. This design uses principles of parallel processing, pipelining, and time-sharing schemes to process large numbers of voice or data channels. It maintains the optimum throughput, which derives from the designed architecture and from the use of high-speed components. The design is optimized for reduced power and area requirements, which is essential for satellite applications. The design is also flexible in processing groups of varying numbers of channels. The algorithms used are verified with a computer-aided software engineering (CASE) tool called the Block Oriented System Simulator. The data flow, control circuitry, and interface of the hardware design are simulated in the C language. Also, a multiprocessor approach is provided to map, model, and simulate the demodulation algorithms, mainly from a speed viewpoint. A hypercube-based architecture implementation is provided for such a scheme of operation. The hypercube structure and the demodulation models on hypercubes are simulated in Ada.

  13. A parallel-pipelined architecture for a multi carrier demodulator. M.S. Thesis Final Technical Report, Jan. 1989 - Aug. 1990

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.; Jamali, M. M.; Eugene, Linus P.

    1991-01-01

    Analog devices have been used for processing information on board satellites. Presently, digital devices are being used because they are economical and flexible compared to their analog counterparts. Several schemes of digital transmission can be used depending on the data rate requirement of the user. An economical scheme of transmission for small earth stations uses single channel per carrier/frequency division multiple access (SCPC/FDMA) on the uplink and time division multiplexing (TDM) on the downlink. This is a typical communication service offered to low-data-rate users in the commercial mass market. These channels usually pertain to either voice or data transmission. An efficient digital demodulator architecture is provided for a large number of low-data-rate users. A demodulator primarily consists of carrier, clock, and data recovery modules. This design uses principles of parallel processing, pipelining, and time-sharing schemes to process large numbers of voice or data channels. It maintains the optimum throughput, which derives from the designed architecture and from the use of high-speed components. The design is optimized for reduced power and area requirements, which is essential for satellite applications. The design is also flexible in processing groups of varying numbers of channels. The algorithms used are verified with a computer-aided software engineering (CASE) tool called the Block Oriented System Simulator. The data flow, control circuitry, and interface of the hardware design are simulated in the C language. Also, a multiprocessor approach is provided to map, model, and simulate the demodulation algorithms, mainly from a speed viewpoint. A hypercube-based architecture implementation is provided for such a scheme of operation. The hypercube structure and the demodulation models on hypercubes are simulated in Ada.

  14. Dissipative closures for statistical moments, fluid moments, and subgrid scales in plasma turbulence

    NASA Astrophysics Data System (ADS)

    Smith, Stephen Andrew

    1997-11-01

    Closures are necessary in the study of physical systems with large numbers of degrees of freedom when it is only possible to compute a small number of modes. The modes that are to be computed, the resolved modes, are coupled to unresolved modes that must be estimated. This thesis focuses on dissipative closure models for two problems that arise in the study of plasma turbulence: the fluid moment closure problem and the subgrid scale closure problem. The fluid moment closures of Hammett and Perkins (1990) were originally applied to a one-dimensional kinetic equation, the Vlasov equation. These closures are generalized in this thesis and applied to the stochastic oscillator problem, a standard paradigm problem for statistical closures. The linear theory of the Hammett-Perkins closures is shown to converge with increasing numbers of moments. A novel parameterized hyperviscosity is proposed for two-dimensional drift-wave turbulence. The magnitude and exponent of the hyperviscosity are expressed as functions of the large-scale advection velocity. Traditionally, hyperviscosities are applied to simulations with a fixed exponent that must be chosen arbitrarily; expressing the exponent as a function of the simulation parameters eliminates this ambiguity. These functions are parameterized by comparing the hyperviscous dissipation to the subgrid dissipation calculated from direct numerical simulations. Tests of the parameterization demonstrate that it performs better than using no additional damping term or a standard hyperviscosity. Heuristic arguments are presented to extend this hyperviscosity model to three-dimensional (3D) drift-wave turbulence, where eddies are highly elongated along the field line. Preliminary results indicate that this generalized 3D hyperviscosity is capable of reducing the resolution requirements for 3D gyrofluid turbulence simulations.
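
    A parameterized hyperviscosity of this kind adds, to the evolution equation of each Fourier mode, a damping term of the generic form sketched below (schematic only; the thesis' exact coefficients are not reproduced here):

      \partial_t \hat{u}(\mathbf{k}) = \text{(resolved nonlinear terms)}
        - \nu_h \left( \frac{k}{k_{\max}} \right)^{2h} \hat{u}(\mathbf{k}),

    where both the magnitude \nu_h and the exponent h are expressed as functions of the large-scale advection velocity rather than being fixed a priori.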

  15. Validation of radiative transfer computation with Monte Carlo method for ultra-relativistic background flow

    NASA Astrophysics Data System (ADS)

    Ishii, Ayako; Ohnishi, Naofumi; Nagakura, Hiroki; Ito, Hirotaka; Yamada, Shoichi

    2017-11-01

    We developed a three-dimensional radiative transfer code for an ultra-relativistic background flow-field using the Monte Carlo (MC) method, in the context of gamma-ray burst (GRB) emission. To obtain reliable results in coupled computations of MC radiation transport with relativistic hydrodynamics that can reproduce GRB emission, we validated the radiative transfer computation in the ultra-relativistic regime and assessed the appropriate simulation conditions. The radiative transfer code was validated through two test calculations: (1) computing in different inertial frames and (2) computing in flow-fields with discontinuous and smeared shock fronts. The simulation results for the angular distribution and spectrum were compared among three different inertial frames and were in good agreement with each other. If the time interval for updating the flow-field was sufficiently small to resolve a photon mean free path into ten steps, the results were thoroughly converged. The spectrum computed in the flow-field with a discontinuous shock front obeyed a power law in frequency whose index was positive in the range from 1 to 10 MeV. The number of photons on the high-energy side decreased with the smeared shock front because the photons were scattered less immediately behind the shock wave due to the small electron number density. A large optical depth near the shock front was needed to obtain high-energy photons through bulk Compton scattering. Even the one-dimensional structure of the shock wave could affect the results of the radiation transport computation. Although we examined the effect of the shock structure on the emitted spectrum with a large number of cells, it is hard to employ so many computational cells per dimension in multi-dimensional simulations. Therefore, further investigation with a smaller number of cells is required to obtain realistic high-energy photons in multi-dimensional computations.

  16. Simulations of the general circulation of the Martian atmosphere. I - Polar processes

    NASA Technical Reports Server (NTRS)

    Pollack, James B.; Haberle, Robert M.; Schaeffer, James; Lee, Hilda

    1990-01-01

    Numerical simulations of the Martian atmosphere general circulation are carried out for 50 simulated days, using a three-dimensional model, based on the primitive equations of meteorology, which incorporated the radiative effects of atmospheric dust on solar and thermal radiation. A large number of numerical experiments were conducted for alternative choices of seasonal date and dust optical depth. It was found that, as the dust content of the winter polar region increased, the rate of atmospheric CO2 condensation increased sharply. It is shown that the strong seasonal variation in the atmospheric dust content observed might cause a number of hemispheric asymmetries. These asymmetries include the greater prevalence of polar hoods in the northern polar region during winter, the lower albedo of the northern polar cap during spring, and the total dissipation of the northern CO2 ice cap during the warmer seasons.

  17. Multi-Conformer Ensemble Docking to Difficult Protein Targets

    DOE PAGES

    Ellingson, Sally R.; Miao, Yinglong; Baudry, Jerome; ...

    2014-09-08

    We investigate large-scale ensemble docking using five proteins from the Directory of Useful Decoys (DUD, dud.docking.org) for which docking to crystal structures has proven difficult. Molecular dynamics trajectories are produced for each protein and an ensemble of representative conformational structures extracted from the trajectories. Docking calculations are performed on these selected simulation structures, and ensemble-based enrichment factors are compared with those obtained using docking in crystal structures of the same protein targets or random selection of compounds. We also found simulation-derived snapshots with improved enrichment factors that increased the chemical diversity of docking hits for four of the five selected proteins. A combination of all the docking results obtained from molecular dynamics simulation, followed by selection of top-ranking compounds, appears to be an effective strategy for increasing the number and diversity of hits when using docking to screen large libraries of chemicals against difficult protein targets.

  18. Large Eddy Simulation of Ducted Propulsors in Crashback

    NASA Astrophysics Data System (ADS)

    Jang, Hyunchul; Mahesh, Krishnan

    2009-11-01

    Flow around a ducted marine propulsor is computed using the large eddy simulation methodology under crashback conditions. Crashback is an operating condition where a propulsor rotates in the reverse direction while the vessel moves in the forward direction. It is characterized by massive flow separation and highly unsteady propeller loads, which affect both blade life and maneuverability. The simulations are performed on unstructured grids using the discrete kinetic-energy-conserving algorithm developed by Mahesh et al. (2004, J. Comput. Phys. 197). Numerical challenges posed by sharp blade edges and small blade tip clearances are discussed. The flow is computed at advance ratio J = -0.7 and Reynolds number Re = 480,000 based on the propeller diameter. Average and RMS values of the unsteady loads such as thrust, torque, and side force on the blades and duct are compared to experiment, and the effect of the duct on crashback is discussed.

  19. Bio-inspired group modeling and analysis for intruder detection in mobile sensor/robotic networks.

    PubMed

    Fu, Bo; Xiao, Yang; Liang, Xiannuan; Philip Chen, C L

    2015-01-01

    Although previous bio-inspired models have concentrated on invertebrates (such as ants), mammals such as primates, with their higher cognitive functions, are valuable for modeling increasingly complex problems in engineering. Understanding primates' social and communication systems, and applying what is learned from them to engineering domains, is likely to inspire solutions to a number of problems. This paper presents a novel bio-inspired approach to determining group size by researching and simulating primate society. Group size matters for both primate societies and digital entities. It is difficult to determine how to group mobile sensors/robots that patrol a large area when many factors are considered, such as patrol efficiency, wireless interference, coverage, and inter/intragroup communications. This paper presents a simulation-based theoretical study of patrolling strategies for robot groups, comparing large and small groups through simulations and theoretical results.

  20. Reynolds number invariance of the structure inclination angle in wall turbulence.

    PubMed

    Marusic, Ivan; Heuer, Weston D C

    2007-09-14

    Cross correlations of the fluctuating wall-shear stress and the streamwise velocity in the logarithmic region of turbulent boundary layers are reported over 3 orders of magnitude change in Reynolds number. These results are obtained using hot-film and hot-wire anemometry in a wind tunnel facility, and sonic anemometers and a purpose-built wall-shear stress sensor in the near-neutral atmospheric surface layer on the salt flats of Utah's western desert. The direct measurement of fluctuating wall-shear stress in the atmospheric surface layer has not been available before. Structure inclination angles are inferred from the cross correlation results and are found to be invariant over the large range of Reynolds number. The findings justify the prior use of low Reynolds number experiments for obtaining structure angles for near-wall models in the large-eddy simulation of atmospheric surface layer flows.

  1. Numerical simulation of the plasma thermal disturbances during ionospheric modification experiments at the SURA heating facility

    NASA Astrophysics Data System (ADS)

    Belov, Alexey; Huba, J. D.

    We present the results of numerical simulation of the near-Earth plasma disturbances produced by resonant heating of the ionospheric F-region by high-power HF radio emission from the SURA facility. The computational model is based on a modified version of the SAMI2 code (release 1.00). The model input parameters are appropriate to the conditions of the SURA-DEMETER experiment. In this work, we study the spatial structure and temporal characteristics of stimulated large-scale disturbances of the electron number density and temperature. It is shown that the stimulated disturbances are observed throughout the ionosphere. Disturbances are recorded both in the region below the pump wave reflection level and in the outer ionosphere (up to 3000 km). At the DEMETER altitude, an increase in the ion number density is produced by the oxygen ions O+, whereas the number density of the lighter H+ ions decreases. A typical time for the formation of large-scale plasma density disturbances in the outer ionosphere is 2-3 min. After the heater is turned off, the disturbance relaxation time is approximately 30 min. The simulation results are important for planning future experiments on the formation of artificial ionospheric density ducts. This work was supported by the Russian Foundation for Basic Research (project No. 12-02-00747-a) and the Government of the Russian Federation (contract No. 14.B25.31.0008).

  2. A transport model for computer simulation of wildfires

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linn, R.

    1997-12-31

    Realistic self-determining simulation of wildfires is a difficult task because of a large variety of important length scales (including scales on the size of twigs or grass and the size of large trees), imperfect data, complex fluid mechanics and heat transfer, and very complicated chemical reactions. The author uses a transport approach to produce a model that exhibits a self-determining propagation rate. The transport approach allows him to represent a large number of environments such as those with nonhomogeneous vegetation and terrain. He accounts for the microscopic details of a fire with macroscopic resolution by dividing quantities into mean and fluctuating parts, similar to what is done in traditional turbulence modeling. These divided quantities include fuel, wind, gas concentrations, and temperature. Reaction rates are limited by the mixing process and not the chemical kinetics. The author has developed a model that includes the transport of multiple gas species, such as oxygen and volatile hydrocarbons, and tracks the depletion of various fuels and other stationary solids and liquids. From this model he develops a simplified local burning model with which he performs a number of simulations that demonstrate that he is able to capture the important physics with the transport approach. With this simplified model he is able to pick up the essence of wildfire propagation, including such features as acceleration when transitioning to upsloping terrain, deceleration of fire fronts when they reach downslopes, and crowning in the presence of high winds.

  3. Large eddy simulation study of turbulent kinetic energy and scalar variance budgets and turbulent/non-turbulent interface in planar jets

    NASA Astrophysics Data System (ADS)

    Watanabe, Tomoaki; Sakai, Yasuhiko; Nagata, Koji; Ito, Yasumasa

    2016-04-01

    Spatially developing planar jets with passive scalar transport are simulated for various Reynolds (Re = 2200, 7000, and 22,000) and Schmidt numbers (Sc = 1, 4, 16, 64, and 128) by implicit large eddy simulation (ILES) using low-pass filtering as an implicit subgrid-scale model. The budgets of the resolved turbulent kinetic energy k and scalar variance ⟨φ′²⟩ are explicitly evaluated from the ILES data, except for the dissipation terms, which are obtained from the balance in the transport equations. The budgets of k and ⟨φ′²⟩ in the ILES agree well with DNS and experiments for both the high and low Re cases. The streamwise decay of the mean turbulent kinetic energy dissipation rate obeys the power law obtained by scaling arguments. The mechanical-to-scalar timescale ratio C_φ is evaluated in the self-similar region. For the high Re case, C_φ is close to the isotropic value (C_φ = 2) near the jet centerline. However, when Re is not large, C_φ is smaller than 2 and depends on the Schmidt number. The T/NT interface is also investigated using the scalar isosurface. The velocity and scalar fields near the interface depend on the interface orientation for all Re. A velocity toward the interface is observed near interfaces facing the streamwise, cross-streamwise, and spanwise directions in the planar jet in the resolved velocity field.

  4. ASHEE: a compressible, Equilibrium-Eulerian model for volcanic ash plumes

    NASA Astrophysics Data System (ADS)

    Cerminara, M.; Esposti Ongaro, T.; Berselli, L. C.

    2015-10-01

    A new fluid-dynamic model is developed to numerically simulate the non-equilibrium dynamics of polydisperse gas-particle mixtures forming volcanic plumes. Starting from the three-dimensional N-phase Eulerian transport equations (Neri et al., 2003) for a mixture of gases and solid dispersed particles, we adopt an asymptotic expansion strategy to derive a compressible version of the first-order non-equilibrium model of Ferry and Balachandar (2001), valid for low-concentration regimes (particle volume fraction less than 10^-3) and particle Stokes numbers (St, i.e., the ratio between the particle relaxation time and the flow characteristic time) not exceeding about 0.2. The new model, called ASHEE (ASH Equilibrium Eulerian), is significantly faster than the N-phase Eulerian model while retaining the capability to describe gas-particle non-equilibrium effects. Direct numerical simulations accurately reproduce the dynamics of isotropic, compressible turbulence in the subsonic regime. For gas-particle mixtures, the model describes the main features of density fluctuations and the preferential concentration and clustering of particles by turbulence, thus verifying its reliability and suitability for the numerical simulation of high-Reynolds-number and high-temperature regimes in the presence of a dispersed phase. Large-eddy simulations of forced plumes are able to reproduce their observed averaged and instantaneous flow properties. In particular, the self-similar Gaussian radial profile and the development of large-scale coherent structures are reproduced, including the rate of turbulent mixing and entrainment of atmospheric air. Application to the large-eddy simulation of the injection of an eruptive mixture into a stratified atmosphere captures some important features of turbulent volcanic plumes, including air entrainment, buoyancy reversal, and maximum plume height. For very fine particles (St → 0, when non-equilibrium effects are negligible) the model reduces to the so-called dusty-gas model. However, coarse particles partially decouple from the gas phase within eddies (thus modifying the turbulent structure) and preferentially concentrate at the eddy periphery, eventually being lost from the plume margins due to the concurrent effect of gravity. By these mechanisms, gas-particle non-equilibrium processes are able to influence the large-scale behavior of volcanic plumes.
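
    The first-order equilibrium-Eulerian expansion referred to (Ferry and Balachandar, 2001) writes the particle velocity field as a perturbation of the gas velocity in the particle relaxation time tau_p; schematically (a standard form of the expansion, not a quotation from the paper):

      \mathbf{u}_p = \mathbf{u}_g + \tau_p \left( \mathbf{g} - \frac{D \mathbf{u}_g}{D t} \right) + \mathcal{O}(\tau_p^2),
      \qquad \mathrm{St} = \frac{\tau_p}{\tau_f} \lesssim 0.2,

    so that for St → 0 the particles follow the gas exactly and the dusty-gas limit is recovered.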

  5. Stellar dynamics in E+E pairs of galaxies. 2: Simulations and interpretation

    NASA Astrophysics Data System (ADS)

    Combes, F.; Rampazzo, R.; Bonfanti, P. P.; Prugniel, P.; Sulentic, J. W.

    1995-05-01

    We have presented in a companion article a kinematic study of three E+E galaxy pairs: NGC 741/742, NGC 1587/1588 (CPG 99), and NGC 2672/2673 (CPG 175). We find some evidence for perturbed velocity dispersion profiles. Such perturbation features are now reported for 14 galaxies in the literature. They occur, or require observations for detection, at large radii where the S/N in the data is low. While observations of individual galaxies are sometimes uncertain, the large number of objects where such features are suspected gives confidence that they are real. These perturbations can be attributed to projection-effect contamination along the line of sight, or directly to the tidal interaction. We report the results of several self-gravitating simulations of unbound pairs in an effort to better understand these perturbations and other generic features of close E+E pairs reported in the literature. The models frequently show off-center envelopes created by the asymmetry of tidal forces during interpenetrating encounters. The envelopes last for a few 10^8 yr, which explains the frequency of such features in observed pairs. This phenomenon is stronger in the self-gravitating simulations than in the MTBA runs. U-shaped (and an equal number of inverse U-shaped) velocity profiles are seen in the simulations, a result of ablation in the outer envelopes. Simulations including inner galaxy rotation also preserve this feature, irrespective of the spin vector direction in each galaxy. U-shaped velocity structure is found to be a robust indicator of ongoing interaction. All simulations show evidence for enhanced velocity dispersion between the galaxies, even in the case of a simple superposition of two non-interacting objects. We therefore conclude that this cannot be considered an unambiguous indicator of interaction.

  6. Simulation methods with extended stability for stiff biochemical kinetics.

    PubMed

    Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin

    2010-08-11

    With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value depends inversely on the size of the propensities of the different reaction channels and which needs to be re-evaluated after every firing event. Such a discrete event simulation can be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the reaction channels. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the RK framework is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
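
    A basic Poisson tau-leap step, the starting point that the paper extends to Runge-Kutta variants, can be sketched for a toy dimerization system as follows; the rate constants and the fixed tau are illustrative, and no adaptive step-size control is included.

      import numpy as np

      rng = np.random.default_rng(7)

      # Toy system: S1 + S1 -> S2 (rate c1), S2 -> S1 + S1 (rate c2).
      c1, c2 = 0.005, 0.1
      V = np.array([[-2, 1],      # stoichiometry of reaction 1
                    [2, -1]])     # stoichiometry of reaction 2

      def propensities(x):
          return np.array([c1 * x[0] * (x[0] - 1) / 2.0, c2 * x[1]])

      def tau_leap(x0, tau=0.01, t_end=10.0):
          x, t = np.array(x0, dtype=float), 0.0
          while t < t_end:
              # Fire each channel a Poisson number of times within the leap.
              k = rng.poisson(propensities(x) * tau)
              x = np.maximum(x + k @ V, 0.0)   # crude guard against negativity
              t += tau
          return x

      print(tau_leap([1000, 0]))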

  7. Ocean Simulation Model. Version 2. First Order Frontal Simulation

    DTIC Science & Technology

    1991-05-01

    [Fragmentary extracted report text.] Declarations of FORTRAN arrays for depth, temperature, salinity, density, and buoyancy-frequency profiles (DEP, TEMP, SAL, SIG, BVF and their second-profile counterparts) are followed by fragments describing processing directives: directive FRNT uses the current clock time as the initial seed when generating the front position; execution can be very time consuming if the parameter ITER is set to a large number; directive RES was designed to allow the user to resume the HELM…

  8. Simulation of pyroshock environments using a tunable resonant fixture

    DOEpatents

    Davie, N.T.

    1996-10-15

    Disclosed are a method and apparatus for simulating pyrotechnic shock for the purpose of qualifying electronic components for use in weapons, satellite, and aerospace applications. According to the invention, a single resonant bar fixture has an adjustable resonant frequency in order to exhibit a desired shock response spectrum upon mechanical impact. The invention eliminates the need for availability of a large number of different fixtures, capable of exhibiting a range of shock response characteristics, in favor of a single tunable system. 32 figs.

  9. Simulation of pyroshock environments using a tunable resonant fixture

    DOEpatents

    Davie, Neil T.

    1996-01-01

    Disclosed are a method and apparatus for simulating pyrotechnic shock for the purpose of qualifying electronic components for use in weapons, satellite, and aerospace applications. According to the invention, a single resonant bar fixture has an adjustable resonant frequency in order to exhibit a desired shock response spectrum upon mechanical impact. The invention eliminates the need for availability of a large number of different fixtures, capable of exhibiting a range of shock response characteristics, in favor of a single tunable system.

  10. A three-dimensional ground-water-flow model modified to reduce computer-memory requirements and better simulate confining-bed and aquifer pinchouts

    USGS Publications Warehouse

    Leahy, P.P.

    1982-01-01

    The Trescott computer program for modeling groundwater flow in three dimensions has been modified to (1) treat aquifer and confining bed pinchouts more realistically and (2) reduce the computer memory requirements needed for the input data. Using the original program, simulation of aquifer systems with nonrectangular external boundaries may result in a large number of nodes that are not involved in the numerical solution of the problem, but require computer storage. (USGS)

  11. Large-scale particle acceleration by magnetic reconnection during solar flares

    NASA Astrophysics Data System (ADS)

    Li, X.; Guo, F.; Li, H.; Li, G.; Li, S.

    2017-12-01

    Magnetic reconnection triggering explosive magnetic energy release has been widely invoked to explain large-scale particle acceleration during solar flares. While great effort has been devoted to studying the acceleration mechanism in small-scale kinetic simulations, few studies have made predictions for acceleration on scales comparable to the flare reconnection region. Here we present a new approach to this problem. We solve the large-scale energetic-particle transport equation in the fluid velocity and magnetic fields obtained from high-Lundquist-number MHD simulations of reconnection layers. This approach is based on examining the dominant acceleration mechanism and pitch-angle scattering in kinetic simulations. Due to the fluid compression in reconnection outflows and merging magnetic islands, particles are accelerated to high energies and develop power-law energy distributions. We find that the acceleration efficiency and power-law index depend critically on the upstream plasma beta and the magnitude of the guide field (the magnetic field component perpendicular to the reconnecting component), as they influence the compressibility of the reconnection layer. We also find that the accelerated high-energy particles are mostly concentrated in large magnetic islands, making the islands a source of energetic particles and high-energy emission. These findings may provide explanations for the acceleration process in large-scale magnetic reconnection during solar flares and the temporal and spatial emission properties observed in different flare events.

  12. Using internal discharge data in a distributed conceptual model to reduce uncertainty in streamflow simulations

    NASA Astrophysics Data System (ADS)

    Guerrero, J.; Halldin, S.; Xu, C.; Lundin, L.

    2011-12-01

    Distributed hydrological models are important tools in water management as they account for the spatial variability of hydrological data and can produce spatially distributed outputs. They can directly incorporate and assess potential changes in the characteristics of our basins. A recognized problem for models in general is equifinality, which is only exacerbated for distributed models, which tend to have a large number of parameters. We need to deal with the fundamentally ill-posed nature of the problem such models force us to face: a large number of parameters and very few variables that can be used to constrain them, often only the catchment discharge. There is a growing but still limited literature showing how the internal states of a distributed model can be used to calibrate/validate its predictions. In this paper, a distributed version of WASMOD, a conceptual rainfall-runoff model with only three parameters, combined with a routing algorithm based on the high-resolution HydroSHEDS data, was used to simulate the discharge in the Paso La Ceiba basin in Honduras. The parameter space was explored using Monte-Carlo simulations, and the region containing the parameter sets considered behavioral according to two different criteria was delimited using the geometric concept of alpha-shapes. The discharge data from five internal sub-basins was used to aid in the calibration of the model and to answer the following questions: Can this information improve the simulations at the outlet of the catchment, or decrease their uncertainty? Also, after reducing the number of model parameters needing calibration through sensitivity analysis: Is it possible to relate them to basin characteristics? The analysis revealed that in most cases the internal discharge data can be used to reduce the uncertainty in the discharge at the outlet, albeit with little improvement in the overall simulation results.
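
    A minimal sketch of the Monte-Carlo/behavioral-threshold workflow described above; the model function, parameter bounds, and the efficiency cutoff are illustrative stand-ins (the paper's WASMOD internals and the alpha-shape delimitation step are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def wasmod_like(params, forcing):
    """Hypothetical 3-parameter conceptual rainfall-runoff model (a stand-in
    for WASMOD); returns a simulated discharge series."""
    a, b, c = params
    return a * forcing + b * np.sqrt(np.maximum(forcing, 0.0)) + c

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a common behavioral criterion."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

forcing = rng.gamma(2.0, 2.0, size=365)           # synthetic daily rainfall
observed = wasmod_like((0.6, 1.2, 0.3), forcing)  # synthetic "truth"

# Monte-Carlo exploration of the 3-parameter space; keep behavioral sets.
samples = rng.uniform([0, 0, 0], [1, 2, 1], size=(10_000, 3))
behavioral = [p for p in samples
              if nse(wasmod_like(p, forcing), observed) > 0.7]
print(f"{len(behavioral)} behavioral parameter sets retained")
```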

  13. Development of EnergyPlus Utility to Batch Simulate Building Energy Performance on a National Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valencia, Jayson F.; Dirks, James A.

    2008-08-29

    EnergyPlus is a simulation program that requires a large number of details to fully define and model a building. Hundreds or even thousands of lines in a text file are needed to run the EnergyPlus simulation, depending on the size of the building. Manually creating these files is a time-consuming process that would not be practical when trying to create input files for the thousands of buildings needed to simulate national building energy performance. To streamline the creation of EnergyPlus input files, two methods were created to work in conjunction with the National Renewable Energy Laboratory (NREL) Preprocessor; this reduced the hundreds of inputs needed to define a building in EnergyPlus to a small set of high-level parameters. The first method uses Java routines to perform all of the preprocessing on a Windows machine, while the second carries out all of the preprocessing on the Linux cluster using an in-house utility called Generalized Parametrics (GPARM). A comma-delimited (CSV) input file is created to define the high-level parameters for any number of buildings. Each method then takes this CSV file and uses the data entered for each parameter to populate an extensible markup language (XML) file used by the NREL Preprocessor to automatically prepare EnergyPlus input data files (idf) using automatic building routines and macro templates. Using the Linux utility "make", the idf files can then be automatically run through the Linux cluster, and the desired data from each building can be aggregated into one table for analysis. Creating a large number of EnergyPlus input files makes it possible to batch simulate building energy performance and scale the results to national energy consumption estimates.
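
    A schematic of the batch pipeline described above (a CSV of high-level parameters, then one input file per building); the field names and the idf stub are hypothetical placeholders, since the real expansion is done by the NREL Preprocessor and macro templates:

```python
import csv
import pathlib

# Hypothetical high-level parameter table; the real workflow feeds such a
# CSV to the NREL Preprocessor, which expands it into full EnergyPlus idf files.
rows = [
    {"building_id": "b0001", "floor_area_m2": 2500, "num_floors": 2,
     "climate_zone": "4A", "hvac_type": "VAV"},
    {"building_id": "b0002", "floor_area_m2": 800, "num_floors": 1,
     "climate_zone": "5B", "hvac_type": "PSZ"},
]

out = pathlib.Path("batch")
out.mkdir(exist_ok=True)
with open(out / "buildings.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)

# Stand-in for the preprocessor step: one idf stub per CSV row, ready to be
# dispatched across a cluster by a "make"-style batch driver.
for row in rows:
    (out / f"{row['building_id']}.idf").write_text(
        f"! auto-generated stub for {row['building_id']}\n"
        f"Building,{row['building_id']},0,City,,,,,;\n")
```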

  14. Grand Minima and Equatorward Propagation in a Cycling Stellar Convective Dynamo

    NASA Astrophysics Data System (ADS)

    Augustson, Kyle C.; Brun, Allan Sacha; Miesch, Mark; Toomre, Juri

    2015-08-01

    The 3-D magnetohydrodynamic (MHD) Anelastic Spherical Harmonic (ASH) code, using slope-limited diffusion, is employed to capture convective and dynamo processes achieved in a global-scale stellar convection simulation for a model solar-mass star rotating at three times the solar rate. The dynamo-generated magnetic field possesses many time scales, with a prominent polarity cycle occurring roughly every 6.2 years. The magnetic field forms large-scale toroidal wreaths, whose formation is tied to the low Rossby number of the convection in this simulation. The polarity reversals are linked to the weakened differential rotation and a resistive collapse of the large-scale magnetic field. An equatorial migration of the magnetic field is seen, which is due to the strong modulation of the differential rotation rather than a dynamo wave. A poleward migration of magnetic flux from the equator eventually leads to the reversal of the polarity of the high-latitude magnetic field. This simulation also enters an interval with reduced magnetic energy at low latitudes lasting roughly 16 years (about 2.5 polarity cycles), during which the polarity cycles are disrupted and after which the dynamo recovers its regular polarity cycles. An analysis of this grand minimum reveals that it likely arises through the interplay of symmetric and antisymmetric dynamo families. This intermittent dynamo state potentially results from the simulation's relatively low magnetic Prandtl number. A mean-field-based analysis of this dynamo simulation demonstrates that it is of the α-Ω type. The time scales that appear to be relevant to the magnetic polarity reversal are also identified.

  15. An approach to hydrogeological modeling of a large system of groundwater-fed lakes and wetlands in the Nebraska Sand Hills, USA

    NASA Astrophysics Data System (ADS)

    Rossman, Nathan R.; Zlotnik, Vitaly A.; Rowe, Clinton M.

    2018-05-01

    The feasibility of a hydrogeological modeling approach to simulate several thousand shallow groundwater-fed lakes and wetlands without explicitly considering their connection with groundwater is investigated at the regional scale (~40,000 km²) through an application in the semi-arid Nebraska Sand Hills (NSH), USA. Hydraulic heads are compared to local land-surface elevations from a digital elevation model (DEM) within a geographic information system to assess locations of lakes and wetlands. The water bodies are inferred where hydraulic heads exceed, or are above a certain depth below, the land surface. Numbers of lakes and/or wetlands are determined via image cluster analysis applied to the same 30-m grid as the DEM after interpolating both simulated and estimated heads. The regional water-table map was used for groundwater model calibration, considering MODIS-based net groundwater recharge data. Resulting values of simulated total baseflow to interior streams are within 1% of observed values. Locations, areas, and numbers of simulated lakes and wetlands are compared with Landsat 2005 survey data and with areas of lakes from a 1979-1980 Landsat survey and the National Hydrography Dataset. This simplified process-based modeling approach avoids the need for field-based morphology or water-budget data from individual lakes or wetlands, or determination of lake-groundwater exchanges, yet it reproduces observed lake-wetland characteristics at regional groundwater management scales. A better understanding of the NSH hydrogeology is attained, and the approach shows promise for use in simulations of groundwater-fed lake and wetland characteristics in other large groundwater systems.
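
    A compact sketch of the head-versus-DEM inference and image cluster analysis on synthetic grids; the 0.5 m depth threshold and the random fields below are assumptions for illustration only:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
dem = rng.normal(1000.0, 5.0, size=(200, 200))      # synthetic 30-m DEM (m)
head = dem + rng.normal(-1.0, 1.5, size=dem.shape)  # synthetic water table (m)

# Lakes/wetlands inferred where the head is at or above a shallow
# threshold depth below the land surface (0.5 m here, an assumed value).
wet = head > dem - 0.5

# Image cluster analysis: count contiguous wet cells as water bodies.
labels, n_bodies = ndimage.label(wet)
areas_m2 = ndimage.sum(wet, labels, range(1, n_bodies + 1)) * 30 * 30
print(n_bodies, "water bodies; largest =", areas_m2.max(), "m^2")
```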

  16. Mesoscopic simulation of a micellar poly(N-isopropyl acrylamide)-b-(polyethylene oxide) copolymer system

    NASA Astrophysics Data System (ADS)

    Bautista-Reyes, Rubén; Soto-Figueroa, César; Vicente, Luis

    2016-05-01

    In this article we studied the micellar formation of poly(N-isopropyl acrylamide)-b-polyethylene oxide (PNIPAM-b-PEO) copolymers in an aqueous system. From molecular simulations, the dependence on temperature of the Flory-Huggins interaction parameter χ for PNIPAM and PEO in water is obtained and compared with available experimental results and values from other theoretical calculations. By means of dissipative particle dynamics (DPD) we then simulated the coil-globule transition for PNIPAM chains in water, with a transition temperature of around 305 K. The simulations for PNIPAM-b-PEO copolymers showed that at room temperature the chains are miscible in the aqueous phase, but with a temperature increase the system forms micelles at T = 305 K. The change in micelle anisotropy due to a different PNIPAM/PEO chain ratio is also analyzed: for large PEO content, the large number of dissolved PEO chains gives a large corona size and the micelle is not spherical but oblate, while as the number of PNIPAM chains is increased the micelle acquires a spherical shape. As an important application we considered the micelle-water/ionic liquid system (1-butyl-3-methylimidazolium hexafluorophosphate, [BMIM]+[PF6]-). By increasing the temperature of the system from 306 K, it is shown that at T = 345 K there is a transfer of the micelle from water to the ionic-liquid phase, due to the change in the relative affinity of PEO to water and ionic liquid expressed by the change in χ. All the simulation outcomes are qualitatively consistent with experimental results, and thus to our knowledge we give the first set of χ values for the interaction between PNIPAM and water over a wide range of temperatures.
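
    The temperature dependence of χ extracted from such simulations is commonly summarized with the empirical two-term form below (a generic parameterization, not necessarily the fit used in this article), whose temperature-dependent term drives the coil-globule transition near 305 K:

```latex
% Entropic (A) plus enthalpic (B/T) contributions to the interaction parameter:
\chi(T) \;=\; A \;+\; \frac{B}{T}
```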

  17. Direct numerical simulation of turbulent plane Couette flow under neutral and stable stratification

    NASA Astrophysics Data System (ADS)

    Mortikov, Evgeny

    2017-11-01

    The direct numerical simulation (DNS) approach was used to study turbulence dynamics in plane Couette flow under conditions ranging from neutral stability to extreme stable stratification, where intermittency is observed. Simulations were performed for Reynolds numbers, based on the channel height and relative wall speed, up to 2×10^5. Using DNS data covering a wide range of stability conditions, parameterizations of the pressure correlation terms used in second-order closure turbulence models are discussed. Particular attention is also paid to the sustainment of intermittent turbulence under strong stratification. The intermittent regime is found to be associated with the formation of secondary large-scale structures elongated in the spanwise direction, which define spatially confined alternating regions of laminar and turbulent flow. The spanwise length of these structures increases with the bulk Richardson number and defines an additional constraint on the computational box size. In this work DNS results are presented in extended computational domains, where the intermittent turbulence is sustained for considerably higher Richardson numbers than previously reported.

  18. A Trotter-Suzuki approximation for Lie groups with applications to Hamiltonian simulation

    NASA Astrophysics Data System (ADS)

    Somma, Rolando D.

    2016-06-01

    We present a product formula to approximate the exponential of a skew-Hermitian operator that is a sum of generators of a Lie algebra. The number of terms in the product depends on the structure factors. When the generators have large norm with respect to the dimension of the Lie algebra, or when the norm of the effective operator resulting from nested commutators is less than the product of the norms, the number of terms in the product is significantly less than that obtained from well-known results. We apply our results to construct product formulas useful for the quantum simulation of some continuous-variable and bosonic physical systems, including systems whose potential is not quadratic. For many of these systems, we show that the number of terms in the product can be sublinear or even subpolynomial in the dimension of the relevant local Hilbert spaces, where such a dimension is usually determined by the energy scale of the problem. Our results emphasize the power of quantum computers for the simulation of various quantum systems.
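
    For reference, the baseline product formulas that results of this kind improve upon are the first-order Lie-Trotter splitting and the second-order symmetric (Strang/Suzuki) step:

```latex
e^{-it(A+B)} \;=\; \lim_{n\to\infty}\left(e^{-itA/n}\,e^{-itB/n}\right)^{n},
\qquad
e^{-i\Delta t(A+B)} \;=\; e^{-i\Delta t A/2}\,e^{-i\Delta t B}\,e^{-i\Delta t A/2}
\;+\; \mathcal{O}(\Delta t^{3}).
```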

  19. By-passing the sign-problem in Fermion Path Integral Monte Carlo simulations by use of high-order propagators

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.

    2014-03-01

    The sign problem in PIMC simulations of non-relativistic fermions increases in severity with the number of fermions and the number of beads (or time slices) of the simulation. A large number of beads is usually needed because the conventional primitive propagator is only second-order and the usual thermodynamic energy estimator converges very slowly from below with the total imaginary time. The Hamiltonian energy estimator, while more complicated to evaluate, is a variational upper bound and converges much faster with the total imaginary time, thereby requiring fewer beads. This work shows that when the Hamiltonian estimator is used in conjunction with fourth-order propagators with optimizable parameters, the ground-state energies of 2D parabolic quantum dots with approximately 10 completely polarized electrons can be obtained with only 3-5 beads, before the onset of severe sign problems. This work was made possible by NPRP GRANT #5-674-1-114 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the author.
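
    One widely quoted forward fourth-order factorization of this family (a representative example from the Suzuki-Chin literature, not necessarily the parameterization optimized in this work) is:

```latex
e^{-\varepsilon(T+V)} \;\approx\;
e^{-\frac{1}{6}\varepsilon V}\,
e^{-\frac{1}{2}\varepsilon T}\,
e^{-\frac{2}{3}\varepsilon \widetilde{V}}\,
e^{-\frac{1}{2}\varepsilon T}\,
e^{-\frac{1}{6}\varepsilon V},
\qquad
\widetilde{V} \;=\; V + \frac{\varepsilon^{2}}{48}\,\bigl[V,[T,V]\bigr],
```
where all coefficients are positive, which is what keeps the factorized propagator usable as a PIMC density matrix.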

  20. A Trotter-Suzuki approximation for Lie groups with applications to Hamiltonian simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somma, Rolando D., E-mail: somma@lanl.gov

    2016-06-15

    We present a product formula to approximate the exponential of a skew-Hermitian operator that is a sum of generators of a Lie algebra. The number of terms in the product depends on the structure factors. When the generators have large norm with respect to the dimension of the Lie algebra, or when the norm of the effective operator resulting from nested commutators is less than the product of the norms, the number of terms in the product is significantly less than that obtained from well-known results. We apply our results to construct product formulas useful for the quantum simulation of some continuous-variable and bosonic physical systems, including systems whose potential is not quadratic. For many of these systems, we show that the number of terms in the product can be sublinear or even subpolynomial in the dimension of the relevant local Hilbert spaces, where such a dimension is usually determined by the energy scale of the problem. Our results emphasize the power of quantum computers for the simulation of various quantum systems.

  1. A Trotter-Suzuki approximation for Lie groups with applications to Hamiltonian simulation

    DOE PAGES

    Somma, Rolando D.

    2016-06-01

    In this paper, we present a product formula to approximate the exponential of a skew-Hermitian operator that is a sum of generators of a Lie algebra. The number of terms in the product depends on the structure factors. When the generators have large norm with respect to the dimension of the Lie algebra, or when the norm of the effective operator resulting from nested commutators is less than the product of the norms, the number of terms in the product is significantly less than that obtained from well-known results. We apply our results to construct product formulas useful for the quantum simulation of some continuous-variable and bosonic physical systems, including systems whose potential is not quadratic. For many of these systems, we show that the number of terms in the product can be sublinear or even subpolynomial in the dimension of the relevant local Hilbert spaces, where such a dimension is usually determined by the energy scale of the problem. Our results emphasize the power of quantum computers for the simulation of various quantum systems.

  2. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.

  3. Simulations Using Random-Generated DNA and RNA Sequences

    ERIC Educational Resources Information Center

    Bryce, C. F. A.

    1977-01-01

    Using a very simple computer program written in BASIC, a very large number of random-generated DNA or RNA sequences are obtained. Students use these sequences to predict complementary sequences and translational products, evaluate base compositions, determine frequencies of particular triplet codons, and suggest possible secondary structures.…
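
    The exercise translates directly into a few lines of modern code; a Python equivalent of the BASIC program's tasks (random sequence, complementary strand, base composition, codon counts):

```python
import random

random.seed(42)
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def random_dna(n):
    """Random-generated DNA sequence, as in the classroom exercise."""
    return "".join(random.choice("ACGT") for _ in range(n))

seq = random_dna(60)
comp = seq.translate(COMPLEMENT)             # complementary bases
gc = (seq.count("G") + seq.count("C")) / len(seq)
codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]

print(seq)
print(comp[::-1])                            # reverse complement, 5'->3'
print(f"GC content: {gc:.2f}; first codons: {codons[:5]}")
```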

  4. COMPUTER SIMULATIONS OF LUNG AIRWAY STRUCTURES USING DATA-DRIVEN SURFACE MODELING TECHNIQUES

    EPA Science Inventory

    ABSTRACT

    Knowledge of human lung morphology is a subject critical to many areas of medicine. The visualization of lung structures naturally lends itself to computer graphics modeling due to the large number of airways involved and the complexities of the branching systems...

  5. Simulating Multi-Scale Mercury Fate and Transport in a Coastal Plain Watershed

    EPA Science Inventory

    Mercury is the toxicant responsible for the largest number of fish advisories across the United States, with 1.1 million river miles under advisory. The processes governing fate, transport, and transformation of mercury in streams and rivers are not well understood, in large part...

  6. The Impact of Missing Background Data on Subpopulation Estimation

    ERIC Educational Resources Information Center

    Rutkowski, Leslie

    2011-01-01

    Although population modeling methods are well established, a paucity of literature appears to exist regarding the effect of missing background data on subpopulation achievement estimates. Using simulated data that follows typical large-scale assessment designs with known parameters and a number of missing conditions, this paper examines the extent…

  7. Power-law versus log-law in wall-bounded turbulence: A large-eddy simulation perspective

    NASA Astrophysics Data System (ADS)

    Cheng, W.; Samtaney, R.

    2014-01-01

    The debate whether the mean streamwise velocity in wall-bounded turbulent flows obeys a log-law or a power-law scaling originated over two decades ago and continues to ferment in recent years. As experiments and direct numerical simulation cannot provide sufficient clues, in this study we present an insight into this debate from a large-eddy simulation (LES) viewpoint. The LES organically combines state-of-the-art models (the stretched-vortex model and inflow rescaling method) with a virtual-wall model derived under different scaling-law assumptions (the log law, or the power law of George and Castillo ["Zero-pressure-gradient turbulent boundary layer," Appl. Mech. Rev. 50, 689 (1997)]). Comparisons of LES results for Re_θ ranging from 10^5 to 10^11 for zero-pressure-gradient turbulent boundary layer flows are carried out for the mean streamwise velocity, its gradient, and its scaled gradient. Our results provide strong evidence that for both sets of modeling assumptions (log law or power law), the turbulence gravitates naturally towards the log-law scaling at extremely large Reynolds numbers.
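
    The two competing inner-layer scalings at issue are, in the usual wall units:

```latex
u^{+} \;=\; \frac{1}{\kappa}\,\ln y^{+} + B
\quad\text{(log law)},
\qquad
u^{+} \;=\; C_i\,\bigl(y^{+}\bigr)^{\gamma}
\quad\text{(power law, George and Castillo)}.
```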

  8. A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.

    2016-12-01

    Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain preliminary influential parameters via analysis of variance, which greatly reduced the number of parameters from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a few model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance, and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that high values of the efficiency criteria did not indicate excellent performance for the hydrological signatures. For most samples from Sobol's sensitivity analysis, water yield was simulated very well; however, the lowest and maximum annual daily runoffs were underestimated, and most seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures still occurs in a number of samples. Analysis of peak flow shows that small and medium floods are simulated well, while slight underestimation occurs for large floods. The work in this study supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of model simulation.
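
    A minimal sketch of the second (variance-based) step using the SALib package; the parameter names and bounds below are illustrative, and the surrogate function stands in for a full DHSVM run:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy stand-in for the screened DHSVM parameters (3 shown for brevity);
# names and bounds are illustrative, not the study's actual values.
problem = {
    "num_vars": 3,
    "names": ["lateral_conductivity", "porosity", "rain_LAI_multiplier"],
    "bounds": [[1e-5, 1e-2], [0.3, 0.6], [0.0001, 0.001]],
}

X = saltelli.sample(problem, 1024)          # Saltelli sampling scheme

def surrogate(x):
    """Hypothetical scalar response (e.g., NSE of simulated discharge)."""
    return np.log10(x[0]) + 5 * x[1] + 100 * x[2] + x[1] * np.log10(x[0])

Y = np.apply_along_axis(surrogate, 1, X)
Si = sobol.analyze(problem, Y)              # first- and total-order indices
print(dict(zip(problem["names"], Si["ST"])))  # robust sensitivity ranking
```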

  9. Modified social force model based on information transmission toward crowd evacuation simulation

    NASA Astrophysics Data System (ADS)

    Han, Yanbin; Liu, Hong

    2017-03-01

    In this paper, an information transmission mechanism is introduced into the social force model to simulate pedestrian behavior in an emergency, especially when most pedestrians are unfamiliar with the evacuation environment. The modified model includes a collision avoidance strategy and an information transmission model that accounts for information loss. The former is used to avoid collisions among pedestrians in a simulation, whereas the latter mainly describes how pedestrians obtain and choose directions appropriate to them. Simulation results show that pedestrians can obtain the correct moving direction through the information transmission mechanism and that the modified model can simulate actual pedestrian behavior during an emergency evacuation. Moreover, we draw four conclusions for improving evacuation based on the simulation results; these conclusions contribute to optimizing efficient emergency-evacuation schemes for large public places.
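
    A minimal Helbing-style social force update, for orientation; the paper's information-transmission layer would supply each pedestrian's goal direction, and all constants below are generic assumptions rather than the authors' calibrated values:

```python
import numpy as np

def social_force_step(pos, vel, goal_dir, dt=0.05, v0=1.34, tau=0.5,
                      A=2.0, B=0.3, radius=0.3):
    """One explicit step of a minimal social force model; goal_dir is the
    per-pedestrian desired direction (here, what information transmission
    would provide)."""
    n = len(pos)
    force = (v0 * goal_dir - vel) / tau            # driving force toward goal
    for i in range(n):                             # pairwise repulsion
        for j in range(n):
            if i == j:
                continue
            d = pos[i] - pos[j]
            dist = np.linalg.norm(d) + 1e-9
            force[i] += A * np.exp((2 * radius - dist) / B) * d / dist
    vel = vel + dt * force
    return pos + dt * vel, vel

rng = np.random.default_rng(0)
pos = rng.uniform(0, 5, (20, 2))
vel = np.zeros((20, 2))
exit_dir = np.tile([1.0, 0.0], (20, 1))   # direction learned via information transfer
pos, vel = social_force_step(pos, vel, exit_dir)
```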

  10. Effects of the seasonal cycle on superrotation in planetary atmospheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Jonathan L.; Vallis, Geoffrey K.; Potter, Samuel F.

    2014-05-20

    The dynamics of dry atmospheric general circulation model simulations forced by seasonally varying Newtonian relaxation are explored over a wide range of two control parameters and are compared with the large-scale circulation of Earth, Mars, and Titan in their relevant parameter regimes. Of the parameters that govern the behavior of the system, the thermal Rossby number (Ro) has previously been found to be important in governing the spontaneous transition from an Earth-like climatology of winds to a superrotating one with prograde equatorial winds, in the absence of a seasonal cycle. This case is somewhat unrealistic as it applies only if the planet has zero obliquity or if surface thermal inertia is very large. While Venus has nearly vanishing obliquity, Earth, Mars, and Titan (Saturn) all have obliquities of ∼25° and varying degrees of seasonality due to their differing thermal inertias and orbital periods. Motivated by this, we introduce a time-dependent Newtonian cooling to drive a seasonal cycle using idealized model forcing, and we define a second control parameter that mimics non-dimensional thermal inertia of planetary surfaces. We then perform and analyze simulations across the parameter range bracketed by Earth-like and Titan-like regimes, assess the impact on the spontaneous transition to superrotation, and compare Earth, Mars, and Titan to the model simulations in the relevant parameter regime. We find that a large seasonal cycle (small thermal inertia) prevents model atmospheres with large thermal Rossby numbers from developing superrotation by the influences of (1) cross-equatorial momentum advection by the Hadley circulation and (2) hemispherically asymmetric zonal-mean zonal winds that suppress instabilities leading to equatorial momentum convergence. We also demonstrate that baroclinic instabilities must be sufficiently weak to allow superrotation to develop. In the relevant parameter regimes, our seasonal model simulations compare favorably to large-scale, seasonal phenomena observed on Earth and Mars. In the Titan-like regime the seasonal cycle in our model acts to prevent superrotation from developing, and it is necessary to increase the value of a third parameter—the atmospheric Newtonian cooling time—to achieve a superrotating climatology.

  11. Time domain simulation of the response of geometrically nonlinear panels subjected to random loading

    NASA Technical Reports Server (NTRS)

    Moyer, E. Thomas, Jr.

    1988-01-01

    The response of composite panels subjected to random pressure loads large enough to cause geometrically nonlinear responses is studied. A time domain simulation is employed to solve the equations of motion. An adaptive time stepping algorithm is employed to minimize intermittent transients. A modified algorithm for the prediction of response spectral density is presented which predicts smooth spectral peaks for discrete time histories. Results are presented for a number of input pressure levels and damping coefficients. Response distributions are calculated and compared with the analytical solution of the Fokker-Planck equations. RMS response is reported as a function of input pressure level and damping coefficient. Spectral densities are calculated for a number of examples.
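
    One standard way to obtain smooth spectral-density estimates from a discrete simulated time history is averaged-periodogram (Welch) estimation; the sketch below is illustrative only and does not reproduce the paper's modified algorithm:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 2000.0                        # sampling rate of the discrete time history (Hz)
t = np.arange(0, 10, 1 / fs)
# Synthetic panel response: a resonant tone plus a broadband random part.
x = np.sin(2 * np.pi * 180 * t) + 0.5 * rng.standard_normal(t.size)

# Segment-averaged periodogram yields smooth spectral peaks from a
# finite, discrete time history.
f, pxx = welch(x, fs=fs, nperseg=4096)
print(f[np.argmax(pxx)], "Hz peak")
```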

  12. DynaSim: A MATLAB Toolbox for Neural Modeling and Simulation

    PubMed Central

    Sherfey, Jason S.; Soplata, Austin E.; Ardid, Salva; Roberts, Erik A.; Stanley, David A.; Pittman-Polletta, Benjamin R.; Kopell, Nancy J.

    2018-01-01

    DynaSim is an open-source MATLAB/GNU Octave toolbox for rapid prototyping of neural models and batch simulation management. It is designed to speed up and simplify the process of generating, sharing, and exploring network models of neurons with one or more compartments. Models can be specified by equations directly (similar to XPP or the Brian simulator) or by lists of predefined or custom model components. The higher-level specification supports arbitrarily complex population models and networks of interconnected populations. DynaSim also includes a large set of features that simplify exploring model dynamics over parameter spaces, running simulations in parallel using both multicore processors and high-performance computer clusters, and analyzing and plotting large numbers of simulated data sets in parallel. It also includes a graphical user interface (DynaSim GUI) that supports full functionality without requiring user programming. The software has been implemented in MATLAB to enable advanced neural modeling using MATLAB, given its popularity and a growing interest in modeling neural systems. The design of DynaSim incorporates a novel schema for model specification to facilitate future interoperability with other specifications (e.g., NeuroML, SBML), simulators (e.g., NEURON, Brian, NEST), and web-based applications (e.g., Geppetto) outside MATLAB. DynaSim is freely available at http://dynasimtoolbox.org. This tool promises to reduce barriers for investigating dynamics in large neural models, facilitate collaborative modeling, and complement other tools being developed in the neuroinformatics community. PMID:29599715

  13. DynaSim: A MATLAB Toolbox for Neural Modeling and Simulation.

    PubMed

    Sherfey, Jason S; Soplata, Austin E; Ardid, Salva; Roberts, Erik A; Stanley, David A; Pittman-Polletta, Benjamin R; Kopell, Nancy J

    2018-01-01

    DynaSim is an open-source MATLAB/GNU Octave toolbox for rapid prototyping of neural models and batch simulation management. It is designed to speed up and simplify the process of generating, sharing, and exploring network models of neurons with one or more compartments. Models can be specified by equations directly (similar to XPP or the Brian simulator) or by lists of predefined or custom model components. The higher-level specification supports arbitrarily complex population models and networks of interconnected populations. DynaSim also includes a large set of features that simplify exploring model dynamics over parameter spaces, running simulations in parallel using both multicore processors and high-performance computer clusters, and analyzing and plotting large numbers of simulated data sets in parallel. It also includes a graphical user interface (DynaSim GUI) that supports full functionality without requiring user programming. The software has been implemented in MATLAB to enable advanced neural modeling using MATLAB, given its popularity and a growing interest in modeling neural systems. The design of DynaSim incorporates a novel schema for model specification to facilitate future interoperability with other specifications (e.g., NeuroML, SBML), simulators (e.g., NEURON, Brian, NEST), and web-based applications (e.g., Geppetto) outside MATLAB. DynaSim is freely available at http://dynasimtoolbox.org. This tool promises to reduce barriers for investigating dynamics in large neural models, facilitate collaborative modeling, and complement other tools being developed in the neuroinformatics community.

  14. Diploid male dynamics under different numbers of sexual alleles and male dispersal abilities.

    PubMed

    Faria, Luiz R R; Soares, Elaine Della Giustina; Carmo, Eduardo do; Oliveira, Paulo Murilo Castro de

    2016-09-01

    Insects in the order Hymenoptera (bees, wasps and ants) present a haplodiploid system of sexual determination in which fertilized eggs become females and unfertilized eggs males. Under the single-locus complementary sex-determination (sl-CSD) system, the sex of a specimen depends on the alleles at a single locus: a diploid individual will be a female if heterozygous and male if homozygous. Significant diploid male (DM) production may drive a population to an extinction scenario called the "diploid male vortex". We aimed at studying the dynamics of populations of an sl-CSD organism under several combinations of two parameters: male flight ability and number of sexual alleles. In these simulations, we evaluated the frequency of DM and a genetic diversity measure over 10,000 generations. The number of sexual alleles varied from 10 to 100 and, at each generation, a male offspring might fly to another random site within a varying radius R. Two main results emerge from our simulations: (i) the number of DM depends more on the male flight radius than on the number of alleles; (ii) in large geographic regions, the effect of the male flight radius on allelic diversity is much less pronounced than in small regions. In other words, small regions where inbreeding normally appears recover genetic diversity due to large flight radii. These results may be particularly relevant when considering the population dynamics of species with increasingly limited dispersal ability (e.g., forest-dependent species of euglossine bees in fragmented landscapes).
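
    A toy sl-CSD offspring rule, to make the mechanism concrete; the allele count and fertilization probability below are arbitrary illustrative choices:

```python
import random

random.seed(7)
N_ALLELES = 20                      # number of sexual alleles at the CSD locus

def offspring_sex(mother, father_allele=None):
    """sl-CSD: unfertilized eggs -> haploid males; fertilized eggs are
    female if heterozygous at the sex locus, diploid male if homozygous."""
    maternal = random.choice(mother)
    if father_allele is None:
        return "haploid_male"
    return "female" if maternal != father_allele else "diploid_male"

mother = (random.randrange(N_ALLELES), random.randrange(N_ALLELES))
father = random.randrange(N_ALLELES)
brood = [offspring_sex(mother, father if random.random() < 0.8 else None)
         for _ in range(1000)]
print({s: brood.count(s) for s in set(brood)})
```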

  15. The topology of the cosmic web in terms of persistent Betti numbers

    NASA Astrophysics Data System (ADS)

    Pranav, Pratyush; Edelsbrunner, Herbert; van de Weygaert, Rien; Vegter, Gert; Kerber, Michael; Jones, Bernard J. T.; Wintraecken, Mathijs

    2017-03-01

    We introduce a multiscale topological description of the Megaparsec web-like cosmic matter distribution. Betti numbers and topological persistence offer a powerful means of describing the rich connectivity structure of the cosmic web and of its multiscale arrangement of matter and galaxies. Emanating from algebraic topology and Morse theory, Betti numbers and persistence diagrams represent an extension and deepening of the cosmologically familiar topological genus measure and the related geometric Minkowski functionals. In addition to a description of the mathematical background, this study presents the computational procedure for computing Betti numbers and persistence diagrams for density field filtrations. The field may be computed starting from a discrete spatial distribution of galaxies or simulation particles. The main emphasis of this study concerns an extensive and systematic exploration of the imprint of different web-like morphologies and different levels of multiscale clustering in the corresponding computed Betti numbers and persistence diagrams. To this end, we use Voronoi clustering models as templates for a rich variety of web-like configurations and the fractal-like Soneira-Peebles models exemplify a range of multiscale configurations. We have identified the clear imprint of cluster nodes, filaments, walls, and voids in persistence diagrams, along with that of the nested hierarchy of structures in multiscale point distributions. We conclude by outlining the potential of persistent topology for understanding the connectivity structure of the cosmic web, in large simulations of cosmic structure formation and in the challenging context of the observed galaxy distribution in large galaxy surveys.
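
    Persistence diagrams of this kind can be computed from a point sample with off-the-shelf tools; the sketch below uses a Vietoris-Rips filtration via the ripser package (the study itself filters a density field, so this is an analogous, not identical, computation):

```python
import numpy as np
from ripser import ripser   # pip install ripser

rng = np.random.default_rng(0)
# Toy point cloud with one loop: a crude analogue of a filamentary ring
# enclosing a void.
theta = rng.uniform(0, 2 * np.pi, 300)
points = np.c_[np.cos(theta), np.sin(theta)] + 0.1 * rng.standard_normal((300, 2))

dgms = ripser(points, maxdim=1)["dgms"]
h0, h1 = dgms[0], dgms[1]           # persistence diagrams for H0 and H1
persistence = h1[:, 1] - h1[:, 0]   # long bars mark significant 1-cycles
print("most persistent 1-cycle lifetime:", persistence.max())
```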

  16. Equivalent modulus method for finite element simulation of the sound absorption of anechoic coating backed with orthogonally rib-stiffened plate

    NASA Astrophysics Data System (ADS)

    Jin, Zhongkun; Yin, Yao; Liu, Bilong

    2016-03-01

    The finite element method is often used to investigate the sound absorption of anechoic coating backed with an orthogonally rib-stiffened plate. Since the anechoic coating contains cavities, the number of grid nodes in a periodic unit cell is usually large. An equivalent modulus method is proposed to reduce the large number of nodes by calculating an equivalent homogeneous layer. Applications of this method to several models show that it can predict the sound absorption coefficient of such a structure well over a wide frequency range. Based on the simulation results, the sound absorption performance of the structure and the influences of different backings on the first absorption peak are also discussed.

  17. Large-Eddy Simulation (LES) of a Compressible Mixing Layer and the Significance of Inflow Turbulence

    NASA Technical Reports Server (NTRS)

    Mankbadi, Mina Reda; Georgiadis, Nicholas J.; Debonis, James R.

    2017-01-01

    In the context of large eddy simulations (LES), the effects of inflow turbulence are investigated through the Synthetic Eddy Method (SEM). The growth rate of a turbulent compressible mixing layer corresponding to the operating conditions of Goebel-Dutton Case 2 is investigated herein. The effect of spanwise width on the growth rate of the mixing layer is investigated such that spanwise-width independence is reached. The error in neglecting inflow turbulence effects is quantified by comparing two methodologies: (1) a hybrid RANS-LES methodology and (2) an SEM-LES methodology. Best practices learned from Case 2 are developed herein and then applied to a higher convective Mach number corresponding to the Case 4 experiments of Goebel-Dutton.

  18. Role of Correlations in the Collective Behavior of Microswimmer Suspensions

    NASA Astrophysics Data System (ADS)

    Stenhammar, Joakim; Nardini, Cesare; Nash, Rupert W.; Marenduzzo, Davide; Morozov, Alexander

    2017-07-01

    In this Letter, we study the collective behavior of a large number of self-propelled microswimmers immersed in a fluid. Using unprecedentedly large-scale lattice Boltzmann simulations, we reproduce the transition to bacterial turbulence. We show that, even well below the transition, swimmers move in a correlated fashion that cannot be described by a mean-field approach. We develop a novel kinetic theory that captures these correlations and is nonperturbative in the swimmer density. To provide an experimentally accessible measure of correlations, we calculate the diffusivity of passive tracers and reveal its nontrivial density dependence. The theory is in quantitative agreement with the lattice Boltzmann simulations and captures the asymmetry between pusher and puller swimmers below the transition to turbulence.

  19. A comprehensive study of MPI parallelism in three-dimensional discrete element method (DEM) simulation of complex-shaped granular particles

    NASA Astrophysics Data System (ADS)

    Yan, Beichuan; Regueiro, Richard A.

    2018-02-01

    A three-dimensional (3D) DEM code for simulating complex-shaped granular particles is parallelized using the message passing interface (MPI). The concepts of link-block, ghost/border layer, and migration layer are put forward for the design of the parallel algorithm, and a theoretical function for 3D DEM scalability and memory usage is derived. Many performance-critical implementation details are managed optimally to achieve high performance and scalability, such as minimizing communication overhead, maintaining dynamic load balance, handling particle migrations across block borders, transmitting C++ dynamic objects of particles between MPI processes efficiently, and eliminating redundant contact information between adjacent MPI processes. The code executes on multiple US Department of Defense (DoD) supercomputers and is tested on up to 2048 compute nodes for simulating 10 million three-axis ellipsoidal particles. Performance analyses of the code, including speedup, efficiency, scalability, and granularity across five orders of magnitude of simulation scale (number of particles), are provided and demonstrate high speedup and excellent scalability. It is also discovered that communication time is a decreasing function of the number of compute nodes in strong-scaling measurements. The code's capability of simulating a large number of complex-shaped particles on modern supercomputers will be of value in both laboratory studies on the micromechanical properties of granular materials and many realistic engineering applications involving granular materials.
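
    A one-dimensional analogue of the ghost/border-layer exchange described above, using mpi4py; the array payload is a stand-in for the code's C++ particle objects:

```python
from mpi4py import MPI
import numpy as np

# Each rank owns a strip of the domain and swaps one ghost layer with its
# neighbors each step, mirroring the border-layer exchange in the 3D code.
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

owned = np.full(8, float(rank))     # stand-in for this block's particle data
ghost_from_left = np.empty(1)
ghost_from_right = np.empty(1)

# Sendrecv pairs the sends and receives, avoiding deadlock in the halo swap.
comm.Sendrecv(owned[-1:], dest=right, recvbuf=ghost_from_left, source=left)
comm.Sendrecv(owned[:1], dest=left, recvbuf=ghost_from_right, source=right)
```

Run with, e.g., `mpiexec -n 4 python ghost_exchange.py` (a hypothetical filename).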

  20. An Initial Examination for Verifying Separation Algorithms by Simulation

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Neogi, Natasha; Herencia-Zapana, Heber

    2012-01-01

    An open question in algorithms for aircraft is what can be validated by simulation, where the simulation shows that the probability of undesirable events is below some given level at some confidence level. The problem is including enough realism to be convincing while retaining enough efficiency to run the large number of trials needed for high confidence. The paper first proposes a goal based on the number of flights per year in several regions. The paper examines the probabilistic interpretation of this goal and computes the number of trials needed to establish it at an equivalent confidence level. Since any simulation is likely to consider the algorithms for only one type of event, and there are several types of events, the paper examines under what conditions this separate consideration is valid. This paper is an initial effort, and as such it considers separation maneuvers, which are elementary but include numerous aspects of aircraft behavior. The scenario includes decisions under uncertainty, since the position of each aircraft is known to the others only through broadcasts of where GPS believes each aircraft to be (ADS-B). Each aircraft operates under feedback control with perturbations. It is shown that a scenario three or four orders of magnitude more complex is feasible. The question of what can be validated by simulation remains open, but there is reason to be optimistic.
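
    The trial-count computation rests on a standard zero-failure binomial bound (shown here as a sketch; the paper's exact formulation may differ):

```python
import math

def trials_needed(p_fail, confidence):
    """Zero-failure argument: if no undesirable event occurs in N independent
    trials, P(event) < p_fail holds at the given confidence level once
    (1 - p_fail)**N <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_fail))

# E.g., demonstrating a per-encounter failure probability below 1e-7
# at 95% confidence requires about 3.0e7 failure-free trials.
print(trials_needed(1e-7, 0.95))
```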
