Sample records for time scale simulation

  1. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model

    PubMed Central

    van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.

    2018-01-01

The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption. 
The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620

  2. A Group Simulation of the Development of the Geologic Time Scale.

    ERIC Educational Resources Information Center

    Bennington, J. Bret

    2000-01-01

    Explains how to demonstrate to students that the relative dating of rock layers is redundant. Uses two column diagrams to simulate stratigraphic sequences from two different geological time scales and asks students to complete the time scale. (YDS)

  3. Hot-bench simulation of the active flexible wing wind-tunnel model

    NASA Technical Reports Server (NTRS)

    Buttrill, Carey S.; Houck, Jacob A.

    1990-01-01

    Two simulations, one batch and one real-time, of an aeroelastically-scaled wind-tunnel model were developed. The wind-tunnel model was a full-span, free-to-roll model of an advanced fighter concept. The batch simulation was used to generate and verify the real-time simulation and to test candidate control laws prior to implementation. The real-time simulation supported hot-bench testing of a digital controller, which was developed to actively control the elastic deformation of the wind-tunnel model. Time scaling was required for hot-bench testing. The wind-tunnel model, the mathematical models for the simulations, the techniques employed to reduce the hot-bench time-scale factors, and the verification procedures are described.

  4. Using Wavelet Analysis To Assist in Identification of Significant Events in Molecular Dynamics Simulations.

    PubMed

    Heidari, Zahra; Roe, Daniel R; Galindo-Murillo, Rodrigo; Ghasemi, Jahan B; Cheatham, Thomas E

    2016-07-25

    Long time scale molecular dynamics (MD) simulations of biological systems are becoming increasingly commonplace due to the availability of both large-scale computational resources and significant advances in the underlying simulation methodologies. Therefore, it is useful to investigate and develop data mining and analysis techniques to quickly and efficiently extract the biologically relevant information from the incredible amount of generated data. Wavelet analysis (WA) is a technique that can quickly reveal significant motions during an MD simulation. Here, the application of WA on well-converged long time scale (tens of μs) simulations of a DNA helix is described. We show how WA combined with a simple clustering method can be used to identify both the physical and temporal locations of events with significant motion in MD trajectories. We also show that WA can not only distinguish and quantify the locations and time scales of significant motions, but by changing the maximum time scale of WA a more complete characterization of these motions can be obtained. This allows motions of different time scales to be identified or ignored as desired.
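
The detection step described here, large wavelet detail coefficients marking times of significant motion, can be sketched with a single-level Haar transform on a synthetic trajectory-like trace. This is an illustrative toy, not the authors' implementation; the signal, seed, and threshold-free argmax detection are all assumptions:

```python
import numpy as np

def haar_detail(signal):
    """One level of the Haar wavelet transform: returns (approximation, detail)."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:
        s = s[:-1]                          # Haar pairs require an even length
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return approx, detail

# Synthetic "RMSD-like" trace: flat, then an abrupt conformational jump at t = 511
rng = np.random.default_rng(0)
t = np.arange(1024)
trace = np.where(t < 511, 1.0, 3.0) + 0.05 * rng.normal(size=1024)

# Large detail coefficients flag the temporal location of significant motion
_, d = haar_detail(trace)
event_index = 2 * int(np.argmax(np.abs(d)))  # map pair index back to the time axis
```

Running more transform levels on the approximation coefficients probes progressively longer time scales, which mirrors the abstract's point about changing the maximum time scale of the analysis.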

  5. Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo

With the currently available methods of computational fluid dynamics (CFD), the task of simulating full scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the size of the calculation cells should be small and the calculation should be transient with a small time step size. For full scale systems, these requirements lead to very large meshes and very long calculation times, so that simulation is difficult in practice. This study investigates the requirements on cell size and time step size for accurate simulations, and the filtering effects caused by a coarser mesh and a longer time step. A modeling study of a full scale CFB furnace is presented and the model results are compared with experimental data.

  6. Exact-Differential Large-Scale Traffic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios

    2015-01-01

Analyzing large-scale traffic by simulation requires many repeated executions with varying scenarios or parameters. Such repetition introduces substantial redundancy, because the change from one scenario to the next is usually minor, for example, blocking a single road or changing the speed limit on several roads. In this paper, we propose a new redundancy-reduction technique, called exact-differential simulation, which simulates only the changed scenarios in later executions while producing exactly the same results as a whole simulation. The paper consists of two main efforts: (i) a key idea and algorithm of the exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of the exact-differential simulation. In experiments on a Tokyo traffic simulation, the exact-differential simulation achieves a 7.26-fold improvement in elapsed time on average, and a 2.26-fold improvement even in the worst case, compared with whole simulation.
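
The exact-differential idea, reusing results for everything a scenario change does not touch, can be illustrated with a deliberately tiny toy. This is not the authors' algorithm, only the caching principle; the road names, lengths, and speeds are invented:

```python
# Hypothetical network: per-road lengths in metres
lengths = {"A": 1200.0, "B": 800.0, "C": 2000.0}

def full_run(speeds):
    """Baseline: compute travel time for every road from scratch."""
    return {r: lengths[r] / speeds[r] for r in lengths}

def differential_run(cached, old_speeds, new_speeds):
    """Recompute only roads whose speed limit changed; reuse cached results."""
    out = dict(cached)
    for r, v in new_speeds.items():
        if old_speeds.get(r) != v:
            out[r] = lengths[r] / v
    return out

base = {"A": 10.0, "B": 10.0, "C": 10.0}   # m/s, toy speed limits
cache = full_run(base)
changed = dict(base, C=5.0)                 # later scenario: one speed limit changes
diff = differential_run(cache, base, changed)
assert diff == full_run(changed)            # exactly the same results, far less work
```

A real traffic simulator has state that propagates between roads, which is what makes "exact" differential re-execution hard; this sketch only shows the redundancy being skipped.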

  7. Relativistic initial conditions for N-body simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fidler, Christian; Tram, Thomas; Crittenden, Robert

    2017-06-01

Initial conditions for (Newtonian) cosmological N-body simulations are usually set by re-scaling the present-day power spectrum obtained from linear (relativistic) Boltzmann codes to the desired initial redshift of the simulation. This back-scaling method can account for the effect of inhomogeneous residual thermal radiation at early times, which is absent in the Newtonian simulations. We analyse this procedure from a fully relativistic perspective, employing the recently-proposed Newtonian motion gauge framework. We find that N-body simulations for ΛCDM cosmology starting from back-scaled initial conditions can be self-consistently embedded in a relativistic space-time with first-order metric potentials calculated using a linear Boltzmann code. This space-time coincides with a simple "N-body gauge" for z < 50 for all observable modes. Care must be taken, however, when simulating non-standard cosmologies. As an example, we analyse the back-scaling method in a cosmology with decaying dark matter, and show that metric perturbations become large at early times in the back-scaling approach, indicating a breakdown of the perturbative description. We suggest a suitable "forward approach" for such cases.

  8. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

Given the complexity of real-time hydrological conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
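
The abstract does not detail its adaptive method; a common way to adapt the step in explicit Godunov-type shallow-water solvers is the CFL condition on the gravity-wave speed, sketched here with toy values (the CFL number, depths, and velocities are assumptions, not the paper's settings):

```python
import numpy as np

def cfl_time_step(h, u, dx, g=9.81, cfl=0.5):
    """Largest stable explicit step for a 1-D shallow-water scheme:
    dt = CFL * dx / max(|u| + sqrt(g*h)) over all cells."""
    wave_speed = np.abs(u) + np.sqrt(g * np.maximum(h, 0.0))  # clamp dry cells
    return cfl * dx / wave_speed.max()

h = np.array([2.0, 4.0, 1.0])      # water depth per cell (m)
u = np.array([0.5, -1.0, 0.2])     # velocity per cell (m/s)
dt = cfl_time_step(h, u, dx=10.0)  # step shrinks automatically as the flood deepens
```

Recomputing `dt` every step is what makes the scheme adaptive: calm shallow regions allow long steps, while fast deep flow forces short ones.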

  9. Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang

We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on wavelet-based surrogate time series that encode the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. We illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical of heterogeneous chemical reactions.

  10. Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis

    DOE PAGES

    Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang; ...

    2016-01-28

We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on wavelet-based surrogate time series that encode the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. We illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical of heterogeneous chemical reactions.
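
The "statistically equivalent, randomized surrogate" idea can be illustrated in a Fourier basis. The paper's scheme is wavelet-based; Fourier phase randomization is a simpler stand-in that preserves the power spectrum (and hence the autocorrelation) exactly while randomizing the time course:

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate series with the same power spectrum as x but random phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=X.shape)
    phases[0] = 0.0                        # DC bin must stay real
    if len(x) % 2 == 0:
        phases[-1] = 0.0                   # Nyquist bin must stay real for even n
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0.0, 20.0 * np.pi, 512)) + 0.1 * rng.normal(size=512)
s = phase_randomized_surrogate(x, rng)
# s has the spectrum of x to numerical precision, but a randomized waveform
```

A wavelet-based version would randomize within scale bands instead, which also preserves temporal localization; that refinement is what the authors' scheme adds.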

  11. Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.

    PubMed

    Serebrinsky, Santiago A

    2011-03-01

    We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
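
The physical clock discussed in this record is, for rejection-free (BKL) KMC, the exponential waiting time: each step advances time by dt = -ln(u)/R, where R is the total rate and u is uniform on (0,1). A minimal sketch with invented rates:

```python
import numpy as np

def bkl_step(rates, rng):
    """One rejection-free (BKL) KMC step.
    Returns (chosen event index, physical time increment)."""
    rates = np.asarray(rates, dtype=float)
    total = rates.sum()
    # pick event i with probability rates[i] / total
    i = int(np.searchsorted(np.cumsum(rates), rng.uniform(0.0, total)))
    # the physical clock advances by an exponential waiting time,
    # not by "one Monte Carlo step"
    dt = -np.log(rng.uniform()) / total
    return i, dt

rng = np.random.default_rng(0)
rates = [1.0, 3.0]                  # hypothetical event rates (1/s)
t, counts = 0.0, [0, 0]
for _ in range(20000):
    i, dt = bkl_step(rates, rng)
    counts[i] += 1
    t += dt
# event 1 fires ~3x as often; mean waiting time is 1/sum(rates) = 0.25 s
```

The paper's contribution is establishing the analogous physical clock rigorously for rejection ("standard") algorithms as well, which this sketch does not cover.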

  12. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah

Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.

  13. 49 CFR 239.105 - Debriefing and critique.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... emergency situation or full-scale simulation to determine the effectiveness of its emergency preparedness... passenger train emergency situation or full-scale simulation. (b) Exceptions. (1) No debriefing and critique...; (2) How much time elapsed between the occurrence of the emergency situation or full-scale simulation...

  14. 49 CFR 239.105 - Debriefing and critique.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... emergency situation or full-scale simulation to determine the effectiveness of its emergency preparedness... passenger train emergency situation or full-scale simulation. (b) Exceptions. (1) No debriefing and critique...; (2) How much time elapsed between the occurrence of the emergency situation or full-scale simulation...

  15. 49 CFR 239.105 - Debriefing and critique.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... emergency situation or full-scale simulation to determine the effectiveness of its emergency preparedness... passenger train emergency situation or full-scale simulation. (b) Exceptions. (1) No debriefing and critique...; (2) How much time elapsed between the occurrence of the emergency situation or full-scale simulation...

  16. 49 CFR 239.105 - Debriefing and critique.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... emergency situation or full-scale simulation to determine the effectiveness of its emergency preparedness... passenger train emergency situation or full-scale simulation. (b) Exceptions. (1) No debriefing and critique...; (2) How much time elapsed between the occurrence of the emergency situation or full-scale simulation...

  17. Parallel Simulation of Unsteady Turbulent Flames

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1996-01-01

Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, the high cost and their limited availability makes practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render the unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy viscosity based subgrid models. This is acceptable for the moment/energy closure since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact. Therefore, global models cannot be used. 
Recently, a new model for turbulent combustion was developed, in which the combustion is modeled within the subgrid (small scales) using a methodology that simulates the mixing, the molecular transport, and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure, and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion, and chemical kinetics; therefore, within each grid cell, a significant amount of computation must be carried out before the large-scale (LES-resolved) effects are incorporated. This approach is therefore uniquely suited for parallel processing and has been implemented on various systems, such as the Intel Paragon, IBM SP-2, Cray T3D, and SGI Power Challenge (PC), using the system-independent Message Passing Interface (MPI). In this paper, timing data on these machines are reported along with some characteristic results.

  18. A derivation and scalable implementation of the synchronous parallel kinetic Monte Carlo method for simulating long-time dynamics

    NASA Astrophysics Data System (ADS)

    Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2017-10-01

    Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.

  19. Time and length scales within a fire and implications for numerical simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TIESZEN,SHELDON R.

    2000-02-02

A partial non-dimensionalization of the Navier-Stokes equations is used to obtain order-of-magnitude estimates of the rate-controlling transport processes in the reacting portion of a fire plume as a function of length scale. Over continuum length scales, buoyant time scales vary as the square root of the length scale; advection time scales vary as the length scale; and diffusion time scales vary as the square of the length scale. Due to the variation with length scale, each process is dominant over a given range. The relationship of buoyancy and baroclinic vorticity generation is highlighted. For numerical simulation, first-principles solution of fire problems is not possible with foreseeable computational hardware in the near future. Filtered transport equations with subgrid modeling will be required, as two to three decades of length scale are captured by solution of discretized conservation equations. Whatever filtering process one employs, one must have humble expectations for the accuracy obtainable by numerical simulation of practical fire problems that contain important multi-physics/multi-length-scale coupling with up to 10 orders of magnitude in length scale.
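
The three scalings quoted in the abstract (buoyant ~ sqrt(L), advective ~ L, diffusive ~ L^2) turn into order-of-magnitude numbers directly; the velocity and diffusivity below are illustrative values with order-one coefficients assumed, not figures from the report:

```python
import numpy as np

g = 9.81          # gravity (m/s^2)
U = 5.0           # hypothetical plume advection velocity (m/s)
D = 1e-5          # hypothetical molecular diffusivity (m^2/s)

def time_scales(L):
    """Order-of-magnitude transport time scales at length scale L (m)."""
    return {
        "buoyant":   np.sqrt(L / g),   # grows like sqrt(L)
        "advective": L / U,            # grows like L
        "diffusive": L**2 / D,         # grows like L^2
    }

for L in (1e-3, 1e-1, 1e1):
    ts = time_scales(L)                # ordering of the three changes with L
```

At L = 10 m the diffusive time is enormous compared with the buoyant and advective times, which is exactly why each process dominates only over its own range of scales.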

  20. SCALE-UP OF RAPID SMALL-SCALE ADSORPTION TESTS TO FIELD-SCALE ADSORBERS: THEORETICAL AND EXPERIMENTAL BASIS

    EPA Science Inventory

    Design of full-scale adsorption systems typically includes expensive and time-consuming pilot studies to simulate full-scale adsorber performance. Accordingly, the rapid small-scale column test (RSSCT) was developed and evaluated experimentally. The RSSCT can simulate months of f...

  1. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
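
The scaling step described in this abstract multiplies the zero-absorption curve by a weighted Beer-Lambert factor, exp(-sum_i mu_a,i * f_i * v * t), where f_i is the fraction of the path spent in layer i. A minimal sketch; the speed of light in tissue and all coefficients below are assumed toy values, not the authors' parameters:

```python
import numpy as np

def scale_reflectance(R0, t, mu_a, f, v=2.2e10):
    """Scale a zero-absorption reflectance curve R0(t) to absorbing layers.
    mu_a[i]: absorption coefficient of layer i (1/cm)
    f[i]:    fraction of the photon path spent in layer i (sums to 1)
    v:       speed of light in tissue (cm/s), an assumed value here."""
    mu_eff = np.dot(f, mu_a)                 # path-weighted absorption
    return R0 * np.exp(-mu_eff * v * t)      # weighted Beer-Lambert factor

t = np.linspace(0.0, 1e-9, 5)                # time axis (s)
R0 = np.ones_like(t)                         # flat zero-absorption curve (toy)
R = scale_reflectance(R0, t, mu_a=[0.1, 0.3], f=[0.6, 0.4])
```

The appeal noted in the abstract is that only the layer fractions f are needed, not per-photon path lengths or collision counts from the original simulation.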

  2. Postcoalescence evolution of growth stress in polycrystalline films.

    PubMed

    González-González, A; Polop, C; Vasco, E

    2013-02-01

    The growth stress generated once grains coalesce in Volmer-Weber-type thin films is investigated by time-multiscale simulations comprising complementary modules of (i) finite-element modeling to address the interactions between grains happening at atomic vibration time scales (~0.1 ps), (ii) dynamic scaling to account for the surface stress relaxation via morphology changes at surface diffusion time scales (~μs-ms), and (iii) the mesoscopic rate equation approach to simulate the bulk stress relaxation at deposition time scales (~sec-h). On the basis of addressing the main experimental evidence reported so far on the topic dealt with, the simulation results provide key findings concerning the interplay between anisotropic grain interactions at complementary space scales, deposition conditions (such as flux and mobility), and mechanisms of stress accommodation-relaxation, which underlies the origin, nature and spatial distribution, and the flux dependence of the postcoalescence growth stress.

  3. Solvation of fluoro-acetonitrile in water by 2D-IR spectroscopy: A combined experimental-computational study

    NASA Astrophysics Data System (ADS)

    Cazade, Pierre-André; Tran, Halina; Bereau, Tristan; Das, Akshaya K.; Kläsi, Felix; Hamm, Peter; Meuwly, Markus

    2015-06-01

The solvent dynamics around fluorinated acetonitrile is characterized by 2-dimensional infrared spectroscopy and atomistic simulations. The lineshape of the linear infrared spectrum is better captured by semiempirical (density functional tight binding) mixed quantum mechanical/molecular mechanics simulations, whereas force field simulations with multipolar interactions yield lineshapes that are significantly too narrow. For the solvent dynamics, a relatively slow time scale of 2 ps is found from the experiments and supported by the mixed quantum mechanical/molecular mechanics simulations. With multipolar force fields fitted to the available thermodynamical data, the time scale is considerably faster, on the order of 0.5 ps. The simulations provide evidence for a well-established CF-HOH hydrogen bond (population of 25%), which is found from the radial distribution function g(r) in both the force field and quantum mechanics/molecular mechanics simulations.

  4. Multiscale simulations of patchy particle systems combining Molecular Dynamics, Path Sampling and Green's Function Reaction Dynamics

    NASA Astrophysics Data System (ADS)

    Bolhuis, Peter

Important reaction-diffusion processes, such as biochemical networks in living cells, or self-assembling soft matter, span many orders in length and time scales. In these systems, the reactants' spatial dynamics at mesoscopic length and time scales of microns and seconds is coupled to the reactions between the molecules at microscopic length and time scales of nanometers and milliseconds. This wide range of length and time scales makes these systems notoriously difficult to simulate. While mean-field rate equations cannot describe such processes, the mesoscopic Green's Function Reaction Dynamics (GFRD) method enables efficient simulation at the particle level provided the microscopic dynamics can be integrated out. Yet, many processes exhibit non-trivial microscopic dynamics that can qualitatively change the macroscopic behavior, calling for an atomistic, microscopic description. The recently developed multiscale Molecular Dynamics Green's Function Reaction Dynamics (MD-GFRD) approach combines GFRD for simulating the system at the mesoscopic scale, where particles are far apart, with microscopic Molecular (or Brownian) Dynamics for simulating the system at the microscopic scale, where reactants are in close proximity. The association and dissociation of particles are treated with rare-event path sampling techniques. I will illustrate the efficiency of this method for patchy particle systems. Replacing the microscopic regime with a Markov State Model (MSM) avoids explicit microscopic simulation altogether; the MSM is pre-computed using advanced path-sampling techniques such as multistate transition interface sampling. I illustrate this approach on patchy particle systems that show multiple modes of binding. MD-GFRD is generic and can be used to efficiently simulate reaction-diffusion systems at the particle level, including the orientational dynamics, opening up the possibility for large-scale simulations of e.g. protein signaling networks.

  5. Effect of helicity on the correlation time of large scales in turbulent flows

    NASA Astrophysics Data System (ADS)

    Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne

    2017-11-01

Solutions of the forced Navier-Stokes equation have been conjectured to thermalize at scales larger than the forcing scale, similar to an absolute equilibrium obtained for the spectrally truncated Euler equation. Using direct numeric simulations of Taylor-Green flows and general-periodic helical flows, we present results on the probability density function, energy spectrum, autocorrelation function, and correlation time that compare the two systems. In the case of highly helical flows, we derive an analytic expression describing the correlation time for the absolute equilibrium of helical flows that is different from the E^{-1/2} k^{-1} scaling law of weakly helical flows. This model predicts a new helicity-based scaling law for the correlation time as τ(k) ~ H^{-1/2} k^{-1/2}. This scaling law is verified in simulations of the truncated Euler equation. In simulations of the Navier-Stokes equations, the large-scale modes of forced Taylor-Green symmetric flows (with zero total helicity and large separation of scales) follow the same properties as absolute equilibrium, including a τ(k) ~ E^{-1/2} k^{-1} scaling for the correlation time. General-periodic helical flows also show similarities between the two systems; however, the largest scales of the forced flows deviate from the absolute equilibrium solutions.

  6. Relationship between thermodynamic parameter and thermodynamic scaling parameter for orientational relaxation time for flip-flop motion of nematic liquid crystals.

    PubMed

    Satoh, Katsuhiko

    2013-03-07

    Thermodynamic parameter Γ and thermodynamic scaling parameter γ for low-frequency relaxation time, which characterize flip-flop motion in a nematic phase, were verified by molecular dynamics simulation with a simple potential based on the Maier-Saupe theory. The parameter Γ, which is the slope of the logarithm for temperature and volume, was evaluated under various conditions at a wide range of temperatures, pressures, and volumes. To simulate thermodynamic scaling so that experimental data at isobaric, isothermal, and isochoric conditions can be rescaled onto a master curve with the parameters for some liquid crystal (LC) compounds, the relaxation time was evaluated from the first-rank orientational correlation function in the simulations, and thermodynamic scaling was verified with the simple potential representing small clusters. A possibility of an equivalence relationship between Γ and γ determined from the relaxation time in the simulation was assessed with available data from the experiments and simulations. In addition, an argument was proposed for the discrepancy between Γ and γ for some LCs in experiments: the discrepancy arises from disagreement of the value of the order parameter P2 rather than the constancy of relaxation time τ1(*) on pressure.

  7. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations.

    PubMed

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-07

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such, it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact, at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated with nucleation due to the small, finite size of typical simulation boxes. In this work, the problem of time scales is addressed with a recently developed enhanced sampling method while simultaneously correcting for finite size effects. We demonstrate our approach by studying the condensation of argon, and show that characteristic nucleation times on the order of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.

  8. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations

    NASA Astrophysics Data System (ADS)

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-01

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such, it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact, at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated with nucleation due to the small, finite size of typical simulation boxes. In this work, the problem of time scales is addressed with a recently developed enhanced sampling method while simultaneously correcting for finite size effects. We demonstrate our approach by studying the condensation of argon, and show that characteristic nucleation times on the order of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.

  9. Dynamics analysis of the fast-slow hydro-turbine governing system with different time-scale coupling

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Chen, Diyi; Wu, Changzhi; Wang, Xiangyu

    2018-01-01

    Multi-time-scale modeling of the hydro-turbine governing system is crucial for precise modeling of hydropower plants and provides support for the stability analysis of the system. Considering the inertia and response time of the hydraulic servo system, the hydro-turbine governing system is transformed into a fast-slow hydro-turbine governing system. The effects of the time-scale on the dynamical behavior of the system are analyzed, and the fast-slow dynamical behaviors of the system are investigated for different time-scales. Furthermore, a theoretical analysis of the stable regions is presented. The influences of the time-scale on the stable region are analyzed by simulation, and the simulation results confirm the theoretical analysis. More importantly, the methods and results of this paper provide a perspective on multi-time-scale modeling of hydro-turbine governing systems and contribute to the optimization analysis and control of the system.

  10. An optimal modification of a Kalman filter for time scales

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    2003-01-01

    The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.
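As background, a single predict/update step of a generic two-state clock Kalman filter can be sketched as follows; this is a minimal illustration with assumed noise values, not the TA(NIST) ensemble filter or Greenhall's actual covariance modification:

```python
import numpy as np

# Two-state clock model: phase x and frequency y, both random walks.
# All noise magnitudes below are assumptions for illustration only.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition over one interval
Q = np.diag([1e-2, 1e-4])              # process noise covariance (assumed)
H = np.array([[1.0, 0.0]])             # only the phase is observed
R = np.array([[1e-3]])                 # measurement noise (assumed)

def kalman_step(x, P, z):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: Greenhall's method modifies the error covariance here
    # to improve short-term stability of the resulting time scale.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + (K @ (z - H @ x_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = kalman_step(np.zeros(2), np.eye(2), np.array([0.05]))
print(x, P[0, 0])
```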

  11. Simulating recurrent event data with hazard functions defined on a total time scale.

    PubMed

    Jahn-Eimermacher, Antje; Ingel, Katharina; Ozga, Ann-Kathrin; Preussler, Stella; Binder, Harald

    2015-03-08

    In medical studies with recurrent event data, a total time scale perspective is often needed to adequately reflect disease mechanisms. This means that the hazard process is defined on the time since some starting point, e.g. the beginning of some disease, in contrast to a gap time scale, where the hazard process restarts after each event. While techniques such as the Andersen-Gill model have been developed for analyzing data from a total time perspective, techniques for the simulation of such data, e.g. for sample size planning, have not been investigated so far. We have derived a simulation algorithm covering the Andersen-Gill model that can be used for sample size planning in clinical trials as well as the investigation of modeling techniques. Specifically, we allow for fixed and/or random covariates and an arbitrary hazard function defined on a total time scale. Furthermore, we take into account that individuals may be temporarily insusceptible to a recurrent incidence of the event. The methods are based on conditional distributions of the inter-event times conditional on the total time of the preceding event or study start. Closed form solutions are provided for common distributions. The derived methods have been implemented in a readily accessible R script. The proposed techniques are illustrated by planning the sample size for a clinical trial with complex recurrent event data. The required sample size is shown to be affected not only by censoring and intra-patient correlation, but also by the presence of risk-free intervals. This demonstrates the need for a simulation algorithm that particularly allows for complex study designs where no analytical sample size formulas might exist. The derived simulation algorithm is seen to be useful for the simulation of recurrent event data that follow an Andersen-Gill model.
Next to the use of a total time scale, it allows for intra-patient correlation and risk-free intervals as are often observed in clinical trial data. Its application therefore allows the simulation of data that closely resemble real settings and thus can improve the use of simulation studies for designing and analysing studies.
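The conditional-inversion idea behind such an algorithm can be sketched as follows. For a hazard on the total time scale with cumulative hazard Lambda, the next event time given the preceding total time t satisfies Lambda(t_next) = Lambda(t) - log(U) with U uniform on (0,1); for a Weibull form Lambda(t) = (t/b)^a this inverts in closed form. The parameters below are illustrative, and the covariates and risk-free intervals handled by the published R script are omitted:

```python
import math
import random

def simulate_subject(a, b, t_end, rng):
    """Recurrent events under a Weibull hazard on the total time scale,
    with cumulative hazard Lambda(t) = (t / b) ** a and censoring at t_end."""
    events, t = [], 0.0
    while True:
        u = rng.random()
        # Conditional inversion: Lambda(t_next) = Lambda(t) - log(u)
        t = b * ((t / b) ** a - math.log(u)) ** (1.0 / a)
        if t > t_end:
            return events
        events.append(t)

rng = random.Random(42)  # fixed seed for reproducibility
events = simulate_subject(a=1.5, b=2.0, t_end=10.0, rng=rng)
print(len(events), "events before censoring at t=10")
```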

  12. Scale-dependent intrinsic entropies of complex time series.

    PubMed

    Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E

    2016-04-13

    Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and that these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach that applies the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractional Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).
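The coarse-graining step underlying MSE-style analyses can be sketched as follows (white noise is used as a stand-in signal; the paper's EMD-based detrending is not reproduced here):

```python
import numpy as np

def coarse_grain(x, scale):
    """MSE coarse-graining: average non-overlapping windows of length `scale`
    before an entropy measure is applied to the result."""
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)  # white noise stand-in

# White noise loses variance roughly as 1/scale under coarse-graining,
# which is why its MSE curve decays with scale while correlated
# (e.g. 1/f-like) signals retain irregularity across scales.
v1 = coarse_grain(x, 1).var()
v5 = coarse_grain(x, 5).var()
print(v1, v5)
```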

  13. Efficient coarse simulation of a growing avascular tumor

    PubMed Central

    Kavousanakis, Michail E.; Liu, Ping; Boudouvis, Andreas G.; Lowengrub, John; Kevrekidis, Ioannis G.

    2013-01-01

    The subject of this work is the development and implementation of algorithms which accelerate the simulation of early stage tumor growth models. Among the different computational approaches used for the simulation of tumor progression, discrete stochastic models (e.g., cellular automata) have been widely used to describe processes occurring at the cell and subcell scales (e.g., cell-cell interactions and signaling processes). To describe macroscopic characteristics (e.g., morphology) of growing tumors, large numbers of interacting cells must be simulated. However, the high computational demands of stochastic models make the simulation of large-scale systems impractical. Alternatively, continuum models, which can describe behavior at the tumor scale, often rely on phenomenological assumptions in place of rigorous upscaling of microscopic models. This limits their predictive power. In this work, we circumvent the derivation of closed macroscopic equations for the growing cancer cell populations; instead, we construct, based on the so-called “equation-free” framework, a computational superstructure, which wraps around the individual-based cell-level simulator and accelerates the computations required for the study of the long-time behavior of systems involving many interacting cells. The microscopic model, e.g., a cellular automaton, which simulates the evolution of cancer cell populations, is executed for relatively short time intervals, at the end of which coarse-scale information is obtained. These coarse variables evolve on slower time scales than each individual cell in the population, enabling the application of forward projection schemes, which extrapolate their values at later times. This technique is referred to as coarse projective integration. Increasing the ratio of projection times to microscopic simulator execution times enhances the computational savings. 
Crucial accuracy issues arising for growing tumors with radial symmetry are addressed by applying the coarse projective integration scheme in a cotraveling (cogrowing) frame. As a proof of principle, we demonstrate that the application of this scheme yields highly accurate solutions, while preserving the computational savings of coarse projective integration. PMID:22587128
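Coarse projective integration can be illustrated on a toy problem, with a small-step Euler integrator standing in for the microscopic (e.g. cellular automaton) simulator; all numbers are illustrative:

```python
import math

# Toy slow dynamics dy/dt = -y; exact solution is exp(-t).
def micro_steps(y, dt, n):
    """Stand-in for the microscopic simulator: n small Euler steps."""
    for _ in range(n):
        y += dt * (-y)
    return y

def projective_step(y, dt, n_inner, t_project):
    y0 = micro_steps(y, dt, n_inner - 1)   # short microscopic burst
    y1 = micro_steps(y0, dt, 1)            # one more step to estimate slope
    slope = (y1 - y0) / dt                 # coarse time derivative
    return y1 + t_project * slope          # projective (extrapolation) jump

y, t = 1.0, 0.0
dt, n_inner, t_project = 0.01, 10, 0.2     # project 2x the simulated span
for _ in range(4):
    y = projective_step(y, dt, n_inner, t_project)
    t += n_inner * dt + t_project
print(t, y, math.exp(-t))  # projected value tracks the exact solution
```

Increasing `t_project` relative to `n_inner * dt` increases the computational savings, at the cost of extrapolation accuracy, mirroring the trade-off described in the abstract.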

  14. An efficient and reliable predictive method for fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-13

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors, because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: first, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  15. An efficient and reliable predictive method for fluidized bed simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-29

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors, because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: first, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  16. Degradation modeling of high temperature proton exchange membrane fuel cells using dual time scale simulation

    NASA Astrophysics Data System (ADS)

    Pohl, E.; Maximini, M.; Bauschulte, A.; vom Schloß, J.; Hermanns, R. T. E.

    2015-02-01

    HT-PEM fuel cells suffer from performance losses due to degradation effects. Therefore, the durability of HT-PEM is currently an important factor of research and development. In this paper a novel approach is presented for an integrated short term and long term simulation of HT-PEM accelerated lifetime testing. The physical phenomena of short term and long term effects are commonly modeled separately due to the different time scales. However, in accelerated lifetime testing, long term degradation effects have a crucial impact on the short term dynamics. Our approach addresses this problem by applying a novel method for dual time scale simulation. A transient system simulation is performed for an open voltage cycle test on a HT-PEM fuel cell for a physical time of 35 days. The analysis describes the system dynamics by numerical electrochemical impedance spectroscopy. Furthermore, a performance assessment is performed in order to demonstrate the efficiency of the approach. The presented approach reduces the simulation time by approximately 73% compared to conventional simulation approach without losing too much accuracy. The approach promises a comprehensive perspective considering short term dynamic behavior and long term degradation effects.
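The dual time scale idea (a fast state integrated with small steps inside each large step of a slow degradation variable) can be sketched with purely illustrative dynamics; none of the parameters or equations below are from the paper:

```python
def run(days, dt_fast=1e-3, n_fast=200):
    """Toy dual time scale loop: a fast voltage transient is resolved
    within each large (daily) step of a slow degradation state."""
    health = 1.0                      # slow degradation state (illustrative)
    voltage_history = []
    for _ in range(days):             # one large (slow) step per day
        v = 0.0
        v_target = 0.7 * health       # quasi-steady voltage depends on health
        for _ in range(n_fast):       # fast transient, small time steps
            v += dt_fast * (v_target - v) / 0.05   # relaxation, tau = 0.05
        voltage_history.append(v)
        health *= 1.0 - 2e-3          # slow degradation over one day
    return voltage_history

vh = run(35)                          # 35 days, as in the cited test
print(vh[0], vh[-1])                  # performance decays with degradation
```

The saving comes from never resolving the fast dynamics over the full 35-day horizon; only one short transient per slow step is computed.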

  17. Effects of forcing time scale on the simulated turbulent flows and turbulent collision statistics of inertial particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosa, B., E-mail: bogdan.rosa@imgw.pl; Parishani, H.; Department of Earth System Science, University of California, Irvine, California 92697-3100

    2015-01-15

    In this paper, we study systematically the effects of the forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope ["An examination of forcing in direct numerical simulations of turbulence," Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large, yielding a Taylor microscale flow Reynolds number of 30 or less. We then study the effects of the forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of the altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of the turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, on turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and the dynamics of inertial particles, a conditional analysis has been performed, showing that the regions of higher collision rate of inertial particles are well correlated with regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with regions of high energy dissipation rate.

  18. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; ...

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated in more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales, are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  19. Interactive, graphical processing unit-based evaluation of evacuation scenarios at the state scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Aaby, Brandon G; Yoginath, Srikanth B

    2011-01-01

    In large-scale scenarios, transportation modeling and simulation is severely constrained by simulation time. For example, few real-time simulators scale to evacuation traffic scenarios at the level of an entire state, such as Louisiana (approximately 1 million links) or Florida (2.5 million links). New simulation approaches are needed to overcome the severe computational demands of conventional (microscopic or mesoscopic) modeling techniques. Here, a new modeling and execution methodology is explored that holds the potential to provide a tradeoff among the level of behavioral detail, the scale of the transportation network, and real-time execution capabilities. A novel, field-based modeling technique and its implementation on graphical processing units are presented. Although additional research with input from domain experts is needed for refining and validating the models, the techniques reported here afford interactive experience at very large scales of multi-million road segments. Illustrative experiments on a few state-scale networks are described based on an implementation of this approach in a software system called GARFIELD. Current modeling capabilities and implementation limitations are described, along with possible use cases and future research.

  20. Multi-scale enhancement of climate prediction over land by improving the model sensitivity to vegetation variability

    NASA Astrophysics Data System (ADS)

    Alessandri, A.; Catalano, F.; De Felice, M.; Hurk, B. V. D.; Doblas-Reyes, F. J.; Boussetta, S.; Balsamo, G.; Miller, P. A.

    2017-12-01

    Here we demonstrate, for the first time, that the implementation of a realistic representation of vegetation in Earth System Models (ESMs) can significantly improve climate simulation and prediction across multiple time-scales. The effective sub-grid vegetation fractional coverage varies seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence, and therefore affects biophysical parameters such as the surface resistance to evapotranspiration, albedo, roughness length, and soil field capacity. To adequately represent this effect in the EC-Earth ESM, we included an exponential dependence of the vegetation cover on the Leaf Area Index. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th Century) simulations and retrospective predictions to the decadal (5-year), seasonal (2-4 months) and weather (4 days) time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, the western US, Eastern Europe, Russia and eastern Siberia, due to the implemented time-varying shadowing effect of tree vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions, the improved representation of vegetation cover consistently corrects the winter warm biases and improves the climate change sensitivity, the decadal potential predictability, and the skill of forecasts at seasonal and weather time-scales. Significant improvements in the prediction of 2m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at the decadal time-scale and seasonal-forecast skill are enhanced over the Sahel, the North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration. These results are discussed in a peer-reviewed paper accepted for publication in Climate Dynamics (Alessandri et al., 2017; doi:10.1007/s00382-017-3766-y).

  1. Large Scale Traffic Simulations

    DOT National Transportation Integrated Search

    1997-01-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computation speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between t...

  2. AQMEII3 evaluation of regional NA/EU simulations and analysis of scale, boundary conditions and emissions error-dependence

    EPA Science Inventory

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) hel...

  3. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    NASA Technical Reports Server (NTRS)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

    Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems, including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report on the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  4. An Integrated Modeling and Simulation Methodology for Intelligent Systems Design and Testing

    DTIC Science & Technology

    2002-08-01

    simulation and actual execution. KEYWORDS: Model Continuity, Modeling, Simulation, Experimental Frame, Real Time Systems, Intelligent Systems...the methodology for a stand-alone real time system. Then it will scale up to distributed real time systems. For both systems, step-wise simulation...MODEL CONTINUITY Intelligent real time systems monitor, respond to, or control, an external environment. This environment is connected to the digital

  5. Gyrokinetic Simulations of Transport Scaling and Structure

    NASA Astrophysics Data System (ADS)

    Hahm, Taik Soo

    2001-10-01

    There is accumulating evidence from global gyrokinetic particle simulations with profile variations and from experimental fluctuation measurements that microturbulence, whose time-averaged eddy size scales with the ion gyroradius, can cause ion thermal transport which deviates from the gyro-Bohm scaling. The physics here can be best addressed by large scale (rho* = rho_i/a = 0.001) full-torus gyrokinetic particle-in-cell turbulence simulations using our massively parallel, general geometry gyrokinetic toroidal code with a field-aligned mesh. Simulation results from device-size scans for realistic parameters show that the "wave transport" mechanism is not the dominant contribution to this Bohm-like transport and that transport is mostly diffusive, driven by microscopic scale fluctuations in the presence of self-generated zonal flows. In this work, we analyze the turbulence and zonal flow statistics from simulations and compare them to nonlinear theoretical predictions, including the radial decorrelation of transport events by zonal flows and the resulting probability distribution function (PDF). In particular, possible deviation of the characteristic radial size of transport processes from the time-averaged radial size of the density fluctuation eddies will be critically examined.
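The contrast between Bohm and gyro-Bohm transport scaling, which the device-size scans probe, can be illustrated with a toy calculation (prefactors and units are arbitrary, not from the paper):

```python
# Bohm-scaled transport is independent of machine size, while gyro-Bohm
# transport carries an extra factor rho* = rho_i / a, so the two
# predictions diverge as devices get larger (rho* smaller).
def chi_bohm(T, B):
    return T / (16.0 * B)                 # Bohm diffusivity, up to constants

def chi_gyrobohm(T, B, rho_star):
    return rho_star * chi_bohm(T, B)      # suppressed by rho*

T, B = 1.0, 1.0
for rho_star in (1e-2, 1e-3):
    print(rho_star, chi_gyrobohm(T, B, rho_star) / chi_bohm(T, B))
```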

  6. Integrated simulation of continuous-scale and discrete-scale radiative transfer in metal foams

    NASA Astrophysics Data System (ADS)

    Xia, Xin-Lin; Li, Yang; Sun, Chuang; Ai, Qing; Tan, He-Ping

    2018-06-01

    A novel integrated simulation of radiative transfer in metal foams is presented. It integrates the continuous-scale simulation with the direct discrete-scale simulation in a single computational domain. It relies on the coupling of the real discrete-scale foam geometry with the equivalent continuous-scale medium through a specially defined scale-coupled zone. This zone holds continuous but nonhomogeneous volumetric radiative properties. The scale-coupled approach is compared to the traditional continuous-scale approach using volumetric radiative properties in the equivalent participating medium and to the direct discrete-scale approach employing the real 3D foam geometry obtained by computed tomography. All the analyses are based on geometrical optics. The Monte Carlo ray-tracing procedure is used for computations of the absorbed radiative fluxes and the apparent radiative behaviors of metal foams. The results obtained by the three approaches are in tenable agreement. The scale-coupled approach is fully validated in calculating the apparent radiative behaviors of metal foams composed of very absorbing to very reflective struts and that composed of very rough to very smooth struts. This new approach leads to a reduction in computational time by approximately one order of magnitude compared to the direct discrete-scale approach. Meanwhile, it can offer information on the local geometry-dependent feature and at the same time the equivalent feature in an integrated simulation. This new approach is promising to combine the advantages of the continuous-scale approach (rapid calculations) and direct discrete-scale approach (accurate prediction of local radiative quantities).

  7. A behavioral-level HDL description of SFQ logic circuits for quantitative performance analysis of large-scale SFQ digital systems

    NASA Astrophysics Data System (ADS)

    Matsuzaki, F.; Yoshikawa, N.; Tanaka, M.; Fujimaki, A.; Takai, Y.

    2003-10-01

    Recently, many single flux quantum (SFQ) logic circuits containing several thousands of Josephson junctions have been designed successfully by using digital domain simulation based on a hardware description language (HDL). In the present HDL-based design of SFQ circuits, a structure-level HDL description has been used, where circuits are made up of basic gate cells. However, in order to analyze large-scale SFQ digital systems, such as a microprocessor, a higher level of circuit abstraction is necessary to reduce the circuit simulation time. In this paper, we have investigated how to describe the functionality of large-scale SFQ digital circuits with a behavior-level HDL description. In this method, the functionality and timing of each circuit block are defined directly by describing its behavior in the HDL. Using this method, we can dramatically reduce the simulation time of large-scale SFQ digital circuits.

  8. Quasi-coarse-grained dynamics: modelling of metallic materials at mesoscales

    NASA Astrophysics Data System (ADS)

    Dongare, Avinash M.

    2014-12-01

    A computationally efficient modelling method called quasi-coarse-grained dynamics (QCGD) is developed to expand the capabilities of molecular dynamics (MD) simulations to model the behaviour of metallic materials at the mesoscales. This mesoscale method is based on solving the equations of motion for a chosen set of representative atoms from an atomistic microstructure and using scaling relationships for the atomic-scale interatomic potentials in MD simulations to define the interactions between representative atoms. The scaling relationships retain the atomic-scale degrees of freedom, and therefore the energetics, of the representative atoms as would be predicted in MD simulations. The total energetics of the system is retained by scaling the energetics and the atomic-scale degrees of freedom of these representative atoms to account for the missing atoms in the microstructure. This scaling of the energetics yields improved time steps for the QCGD simulations. The success of the QCGD method is demonstrated by the prediction of the structural energetics, high-temperature thermodynamics, deformation behaviour of interfaces, phase transformation behaviour, plastic deformation behaviour, heat generation during plastic deformation, as well as the wave propagation behaviour, as would be predicted using MD simulations, with a reduced number of representative atoms. The reduced number of atoms and the improved time steps enable the modelling of metallic materials at the mesoscale in extreme environments.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elber, Ron

    Atomically detailed computer simulations of complex molecular events have attracted the imagination of many researchers in the field by promising comprehensive information on chemical, biological, and physical processes. However, one of the greatest limitations of these simulations is that of time scales. The physical time scales accessible to straightforward simulations are too short to address many interesting and important molecular events. In the last decade, significant advances were made in different directions (theory, software, and hardware) that substantially expand the capabilities and accuracy of these techniques. This perspective describes and critically examines some of these advances.

  10. Assessment of the relationship between chlorophyll fluorescence and photosynthesis across scales from measurements and simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Guanter, L.; Berry, J. A.; Tol, C. V. D.

    2016-12-01

    Solar-induced chlorophyll fluorescence (SIF) is a novel optical tool for assessing terrestrial photosynthesis (GPP). Recent work has shown a strong link between GPP and satellite retrievals of SIF at broad scales. However, critical gaps remain between short-term, small-scale mechanistic understanding and seasonal global observations. In this presentation, we provide a model-based analysis of the relationship between SIF and GPP across scales for diverse vegetation types and a range of meteorological conditions, with the ultimate aim of reproducing the environmental conditions during remote sensing measurements. The coupled fluorescence-photosynthesis model SCOPE is used to simulate GPP and SIF at both the leaf and canopy levels for 13 flux sites. Analyses were conducted to investigate the effects of temporal scaling, canopy structure, overpass time, and spectral domain on the relationship between SIF and GPP. The simulated SIF is highly non-linear with GPP at the leaf level and instantaneous time scale, and tends to linearize when scaled to the canopy level and to daily and seasonal scales. These relationships are consistent across a wide range of vegetation types. The relationship between SIF and GPP is primarily driven by absorbed photosynthetically active radiation (APAR), especially at the seasonal scale, although photosynthetic efficiency also contributes to strengthening the link between them. Their relationship linearizes from leaf to canopy and under time averaging because the overall conditions of the canopy fall within the range of linear responses of GPP and SIF to light and photosynthetic capacity. Our results further show that the top-of-canopy relationships between simulated SIF and GPP have similar linearity regardless of whether we use the morning or midday satellite overpass time. These findings are confirmed by field measurements.
In addition, at the canopy level the simulated red SIF at 685 nm shows a relationship with GPP similar to that of the far-red SIF at 740 nm.

  11. Synchrony between reanalysis-driven RCM simulations and observations: variation with time scale

    NASA Astrophysics Data System (ADS)

    de Elía, Ramón; Laprise, René; Biner, Sébastien; Merleau, James

    2017-04-01

    Unlike coupled global climate models (CGCMs) that run in a stand-alone mode, nested regional climate models (RCMs) are driven by either a CGCM or a reanalysis dataset. This feature makes high correlations between the RCM simulation and its driver possible. When the driving dataset is a reanalysis, time correlations between RCM output and observations are also common and to be expected. In certain situations the time correlation between driver and driven RCM is of particular interest, and techniques have been developed to increase it (e.g. large-scale spectral nudging). For such cases, a question that remains open is whether aggregating in time increases the correlation between RCM output and observations. That is, even if the RCM is unable to reproduce a given daily event, can it still satisfactorily simulate an anomaly on a monthly or annual basis? This is a preconception that the authors of this work, and others in the community, have held, perhaps as a natural extension of the upscaling properties of other statistics such as the mean squared error. Here we explore analytically four particular cases that help us partially answer this question. In addition, we use observational datasets and RCM-simulated data to illustrate our findings. Results indicate that time upscaling does not necessarily increase time correlations, and that those interested in achieving high monthly or annual time correlations between RCM output and observations may have to do so by increasing the correlation as much as possible at the shortest time scale. This may indicate that, even when only time correlations at large temporal scales are of concern, large-scale spectral nudging acting at the time-step level may have to be used.

  12. Spatial adaptive sampling in multiscale simulation

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.

    2014-07-01

    In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈ 50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.
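
    The cost reduction described above comes from querying the fine-scale model only where interpolation over the macro-solver's spatial domain is insufficient. A minimal Python sketch of that idea follows; the function and parameter names are ours, and the refinement rule is far simpler than the HMM scheme in the record:

```python
import numpy as np

def adaptive_stress_field(strain, fine_scale, tol=1e-3):
    """Estimate the stress at every macro grid point while calling the
    expensive fine-scale model only where linear interpolation between
    already-sampled points misses by more than tol."""
    n = len(strain)
    sampled = {0: fine_scale(strain[0]), n - 1: fine_scale(strain[n - 1])}
    changed = True
    while changed:
        changed = False
        keys = sorted(sampled)
        for a, b in zip(keys, keys[1:]):
            if b - a < 2:
                continue
            m = (a + b) // 2
            w = (m - a) / (b - a)  # linear interpolation weight
            guess = (1 - w) * sampled[a] + w * sampled[b]
            truth = fine_scale(strain[m])
            if abs(guess - truth) > tol:
                sampled[m] = truth   # refine only where interpolation fails
                changed = True
    keys = sorted(sampled)
    stress = np.interp(np.arange(n), keys, [sampled[k] for k in keys])
    return stress, len(sampled)  # reconstructed field, fine-scale calls made
```

    For a smooth constitutive response, the number of fine-scale calls grows far more slowly than the number of grid points N, which is the spirit of the ≈ 50 × N^0.14 scaling reported above.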

  13. Resolving Dynamic Properties of Polymers through Coarse-Grained Computational Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salerno, K. Michael; Agrawal, Anupriya; Perahia, Dvora

    2016-02-05

    Coupled length and time scales determine the dynamic behavior of polymers and underlie their unique viscoelastic properties. To resolve the long-time dynamics it is imperative to determine which time and length scales must be correctly modeled. In this paper, we probe the degree of coarse graining required to simultaneously retain significant atomistic details and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using linear polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics. Iterative Boltzmann inversion is used to derive coarse-grained potentials with 2–6 methylene groups per coarse-grained bead from a fully atomistic melt simulation. We show that atomistic detail is critical to capturing large-scale dynamics. Finally, using these models we simulate polyethylene melts for times over 500 μs to study the viscoelastic properties of well-entangled polymer melts.
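
    The iterative Boltzmann inversion step used to derive such coarse-grained potentials has a compact form; a sketch in reduced units follows (the function and argument names are ours):

```python
import numpy as np

def ibi_update(V, g_current, g_target, kT=1.0, eps=1e-12):
    """One iterative Boltzmann inversion step on a tabulated pair
    potential V(r):  V_{i+1}(r) = V_i(r) + kT * ln(g_i(r) / g_target(r)).
    Where the coarse-grained RDF g_i overshoots the target, the potential
    is raised to push beads apart, and vice versa; iterating drives the
    coarse-grained model's structure toward the atomistic reference."""
    return V + kT * np.log((g_current + eps) / (g_target + eps))
```

    Each iteration requires re-running the coarse-grained simulation to measure the new radial distribution function, so in practice the update is repeated until the RDF converges.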

  14. High performance cellular level agent-based simulation with FLAME for the GPU.

    PubMed

    Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela

    2010-05-01

    Driven by the availability of experimental data and ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvement in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.

  15. Simulating Non-Fickian Transport across Péclet Regimes by doing Lévy Flights in the Rank Space of Velocity

    NASA Astrophysics Data System (ADS)

    Most, S.; Dentz, M.; Bolster, D.; Bijeljic, B.; Nowak, W.

    2017-12-01

    Transport in real porous media shows non-Fickian characteristics. In the Lagrangian perspective, this leads to skewed distributions of particle arrival times. The skewness is triggered by the particles' memory of velocity, which persists over a characteristic length. Capturing process memory is essential to represent non-Fickianity thoroughly. Classical non-Fickian models (e.g., CTRW models) simulate the effects of memory but not the mechanisms leading to process memory. CTRWs have been applied successfully in many studies, but they nonetheless have drawbacks. In classical CTRWs, each particle makes a spatial transition for which it draws a random transit time. Consecutive transit times are drawn independently of each other, which is only valid for sufficiently large spatial transitions. To apply a finer numerical resolution than that, memory must be implemented into the simulation. Recent CTRW methods use transition matrices to simulate correlated transit times. However, deriving such transition matrices requires data from a fine-scale transport simulation, and the obtained transition matrix is valid only for that single Péclet regime. The CTRW method we propose overcomes all three drawbacks: 1) we simulate transport without restrictions on transition length; 2) we parameterize our CTRW without requiring a transport simulation; 3) our parameterization scales across Péclet regimes. We do so by sampling the pore-scale velocity distribution to generate correlated transit times as a Lévy flight on the CDF axis of velocities, with reflection at 0 and 1. The Lévy flight is parametrized only by the correlation length. We explicitly model memory, including the evolution and decay of non-Fickianity, so the model extends from local via pre-asymptotic to asymptotic scales.
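
    The core sampling step — a Lévy flight on the CDF (rank) axis of velocity with reflection at 0 and 1 — can be sketched as follows. The Cauchy increment is a stand-in for a general Lévy-stable step, the exponential velocity distribution is assumed, and all names are ours, not the authors':

```python
import numpy as np

def levy_flight_ranks(n_steps, corr_len, rng=None):
    """Series of velocity ranks (CDF values in [0, 1]) whose increments
    are heavy-tailed, with reflecting boundaries at 0 and 1. Smaller
    steps (larger corr_len) mean longer velocity memory."""
    rng = rng or np.random.default_rng(0)
    u = np.empty(n_steps)
    u[0] = rng.random()
    for i in range(1, n_steps):
        x = u[i - 1] + rng.standard_cauchy() / corr_len
        x = np.abs(x) % 2.0                 # fold into [0, 2) ...
        u[i] = 2.0 - x if x > 1.0 else x    # ... then reflect into [0, 1]
    return u

def ranks_to_velocity(u, v_mean=1.0):
    """Map ranks through the inverse CDF of the pore-scale velocity
    distribution (an exponential distribution is assumed here)."""
    u = np.minimum(u, 1.0 - 1e-12)
    return -v_mean * np.log(1.0 - u)
```

    Successive ranks stay close for large `corr_len`, so successive velocities are correlated, mimicking the process memory described above without a fine-scale transport simulation.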

  16. Use of simulated evaporation to assess the potential for scale formation during reverse osmosis desalination

    USGS Publications Warehouse

    Huff, G.F.

    2004-01-01

    The tendency of solutes in input water to precipitate efficiency-lowering scale deposits on the membranes of reverse osmosis (RO) desalination systems is an important factor in determining the suitability of input water for desalination. Simulated evaporation of the input water can be used to quantitatively assess the potential for scale formation in RO desalination systems. The technique was demonstrated by simulating the increase in solute concentrations required to form calcite, gypsum, and amorphous silica scales at 25 °C and 40 °C for 23 desalination input waters taken from the literature. Simulation results could be used to quantitatively assess the potential of a given input water to form scale, or to compare the scale-forming potential of a number of input waters during RO desalination. Simulated evaporation of input waters cannot accurately predict the conditions under which scale will form, owing to the effects of potentially stable supersaturated solutions, solution velocity, and residence time inside RO systems. However, the simulated scale-forming potential of proposed input waters could be compared with the simulated scale-forming potentials and actual scale-forming properties of input waters having documented operational histories in RO systems. This may provide a technique to estimate the actual performance and suitability of proposed input waters during RO.
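
    The quantitative criterion behind such assessments is the mineral saturation index: as simulated evaporation concentrates the solutes, each mineral's index rises toward the point where scale can form. A one-line sketch of the standard geochemical definition (not code from the USGS study):

```python
import math

def saturation_index(iap, ksp):
    """SI = log10(IAP / Ksp) for a mineral: SI < 0 means undersaturated,
    SI = 0 equilibrium, SI > 0 supersaturated, i.e. scale formation is
    possible (though, as noted above, supersaturated solutions may
    remain stable inside an RO system)."""
    return math.log10(iap / ksp)
```

    Tracking the SI of calcite, gypsum, and amorphous silica as concentrations increase is exactly the comparison the record describes.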

  17. Large Eddy Simulation in the Computation of Jet Noise

    NASA Technical Reports Server (NTRS)

    Mankbadi, R. R.; Goldstein, M. E.; Povinelli, L. A.; Hayder, M. E.; Turkel, E.

    1999-01-01

    Noise can in principle be predicted by solving the full (time-dependent) compressible Navier-Stokes equations (FCNSE) with the computational domain extended to the far field: the fluctuating near field of the jet produces propagating pressure waves that generate far-field sound, and the fluctuating flow field as a function of time is needed in order to calculate sound from first principles. However, extending the domain to the far field is not feasible. At the high Reynolds numbers of technological interest, turbulence has a large range of scales, and direct numerical simulation (DNS) cannot capture the small scales of turbulence. Since the large scales are more efficient than the small scales in radiating sound, the emphasis is on calculating the sound radiated by the large scales.

  18. Multi-scale modeling in cell biology

    PubMed Central

    Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick

    2009-01-01

    Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems such as, for instance, the scales of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808

  19. Atomistic simulations of graphite etching at realistic time scales.

    PubMed

    Aussems, D U B; Bal, K M; Morgan, T W; van de Sanden, M C M; Neyts, E C

    2017-10-01

    Hydrogen-graphite interactions are relevant to a wide variety of applications, ranging from astrophysics to fusion devices and nano-electronics. In order to shed light on these interactions, atomistic simulation using Molecular Dynamics (MD) has been shown to be an invaluable tool. It suffers, however, from severe time-scale limitations. In this work we apply the recently developed Collective Variable-Driven Hyperdynamics (CVHD) method to hydrogen etching of graphite for varying inter-impact times up to a realistic value of 1 ms, which corresponds to a flux of ∼10^20 m^-2 s^-1. The results show that the erosion yield, hydrogen surface coverage and species distribution are significantly affected by the time between impacts. This can be explained by the higher probability of C-C bond breaking due to the prolonged exposure to thermal stress and the subsequent transition from ion- to thermal-induced etching. This latter regime of thermal-induced etching - chemical erosion - is here accessed for the first time using atomistic simulations. In conclusion, this study demonstrates that accounting for long time-scales significantly affects ion bombardment simulations and should not be neglected in a wide range of conditions, in contrast to what is typically assumed.
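
    The correspondence between the 1 ms inter-impact time and the quoted flux is simple arithmetic once an impact area is fixed; a sketch, assuming a 10 nm² impact zone (our assumption, not stated in the record):

```python
flux = 1e20                      # impacts per m^2 per s (from the abstract)
area = 10e-18                    # m^2: a 10 nm^2 impact zone (assumed)
t_between = 1.0 / (flux * area)  # mean time between impacts, in seconds
# t_between is 1e-3 s, i.e. the 1 ms inter-impact time simulated above
```

    Between impacts the system evolves thermally, which is the regime CVHD makes reachable.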

  20. Full-Scale Numerical Modeling of Turbulent Processes in the Earth's Ionosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliasson, B.; Stenflo, L.; Department of Physics, Linkoeping University, SE-581 83 Linkoeping

    2008-10-15

    We present a full-scale simulation study of ionospheric turbulence by means of a generalized Zakharov model based on the separation of variables into high-frequency and slow time scales. The model includes realistic length scales of the ionospheric profile and of the electromagnetic and electrostatic fields, and uses ionospheric plasma parameters relevant for high-latitude radio facilities such as EISCAT and HAARP. A nested grid numerical method has been developed to resolve the different length scales while avoiding severe restrictions on the time step. The simulation demonstrates the parametric decay of the ordinary mode into Langmuir and ion-acoustic waves, followed by a Langmuir wave collapse and short-scale caviton formation, as observed in ionospheric heating experiments.

  1. Synchronization Of Parallel Discrete Event Simulations

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S.

    1992-01-01

    Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages of each. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.
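
    The adaptive cycle can be sketched in a few lines: events are processed optimistically, but only those earlier than the "event horizon" — the earliest timestamp of any message generated during the current cycle — may be committed. A toy single-process sketch (much simplified; the real SPEEDES algorithm runs this across processors and merges horizons globally):

```python
import heapq

def breathing_time_bucket(pending, handle):
    """Run one adaptive time cycle over a heapified list of (time, event)
    pairs. handle(t, ev) returns newly scheduled (time, event) pairs; the
    horizon shrinks ("breathes") as new events arrive, bounding how far
    this cycle may safely commit."""
    horizon = float("inf")
    committed = []
    while pending and pending[0][0] < horizon:
        t, ev = heapq.heappop(pending)
        for new_t, new_ev in handle(t, ev):
            heapq.heappush(pending, (new_t, new_ev))
            horizon = min(horizon, new_t)  # horizon breathes inward
        committed.append((t, ev))
    return committed
```

    Repeated cycles, each ending in a synchronization, advance the whole simulation; the cycle length is not fixed but adapts to the event traffic, as the record describes.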

  2. Predictive Model for Particle Residence Time Distributions in Riser Reactors. Part 1: Model Development and Validation

    DOE PAGES

    Foust, Thomas D.; Ziegler, Jack L.; Pannala, Sreekanth; ...

    2017-02-28

    In this computational study, we model the mixing of biomass pyrolysis vapor with solid catalyst in circulating riser reactors, with a focus on determining solid-catalyst residence time distributions (RTDs). A comprehensive set of 2D and 3D simulations was conducted for a pilot-scale riser using the Eulerian-Eulerian two-fluid modeling framework, with and without sub-grid-scale models for the gas-solids interaction. A validation test case was also simulated and compared to experiments, showing agreement in the pressure gradient and in the RTD mean and spread. We found that accurate RTD prediction requires the Johnson and Jackson partial-slip solids boundary condition for all models, and that a sub-grid model is useful so that ultra-high-resolution grids, which are very computationally intensive, are not required. Finally, we discovered a 2/3 scaling relation for the RTD mean and spread when comparing resolved 2D simulations to validated unresolved 3D sub-grid-scale model simulations.

  3. Theory and data for simulating fine-scale human movement in an urban environment

    PubMed Central

    Perkins, T. Alex; Garcia, Andres J.; Paz-Soldán, Valerie A.; Stoddard, Steven T.; Reiner, Robert C.; Vazquez-Prokopec, Gonzalo; Bisanzio, Donal; Morrison, Amy C.; Halsey, Eric S.; Kochel, Tadeusz J.; Smith, David L.; Kitron, Uriel; Scott, Thomas W.; Tatem, Andrew J.

    2014-01-01

    Individual-based models of infectious disease transmission depend on accurate quantification of fine-scale patterns of human movement. Existing models of movement either pertain to overly coarse scales, simulate some aspects of movement but not others, or were designed specifically for populations in developed countries. Here, we propose a generalizable framework for simulating the locations that an individual visits, time allocation across those locations, and population-level variation therein. As a case study, we fit alternative models for each of five aspects of movement (number, distance from home and types of locations visited; frequency and duration of visits) to interview data from 157 residents of the city of Iquitos, Peru. Comparison of alternative models showed that location type and distance from home were significant determinants of the locations that individuals visited and how much time they spent there. We also found that for most locations, residents of two neighbourhoods displayed indistinguishable preferences for visiting locations at various distances, despite differing distributions of locations around those neighbourhoods. Finally, simulated patterns of time allocation matched the interview data in a number of ways, suggesting that our framework constitutes a sound basis for simulating fine-scale movement and for investigating factors that influence it. PMID:25142528

  4. A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nomura, K; Seymour, R; Wang, W

    2009-02-17

    A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data-locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high-complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on a hybrid implementation combining message passing and critical-section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e. petaflops·day of computing) is estimated as NT = 2.14 (e.g. N = 2.14 million atoms for T = 1 microsecond).
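
    The closing estimate is easiest to read with explicit units; a sketch of the arithmetic (the unit interpretation, atoms × seconds of simulated physical time, is our reading of the abstract):

```python
# Spatiotemporal product NT attainable per petaflops-day of computing,
# using the example values quoted in the abstract.
N = 2.14e6   # atoms
T = 1.0e-6   # seconds of simulated dynamics
NT = N * T   # 2.14 atom-seconds per petaflops-day
```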

  5. Multi-scale simulations of droplets in generic time-dependent flows

    NASA Astrophysics Data System (ADS)

    Milan, Felix; Biferale, Luca; Sbragaglia, Mauro; Toschi, Federico

    2017-11-01

    We study the deformation and dynamics of droplets in time-dependent flows using a diffuse interface model for two immiscible fluids. The numerical simulations are first benchmarked against analytical results of steady droplet deformation, and then extended to the more interesting case of time-dependent flows. The results of these time-dependent numerical simulations are compared against analytical models available in the literature, which assume the droplet shape to be an ellipsoid at all times, with time-dependent major and minor axes. In particular, we investigate the time-dependent deformation of a confined droplet in an oscillating Couette flow for the entire capillary range until droplet break-up. In this way these multi-component simulations prove to be a useful tool to establish from ``first principles'' the dynamics of droplets in complex flows involving multiple scales. Funded by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No 642069, and by the European Research Council under the European Community's Seventh Framework Programme, ERC Grant Agreement No 339032.

  6. Scale disparity and spectral transfer in anisotropic numerical turbulence

    NASA Technical Reports Server (NTRS)

    Zhou, YE; Yeung, P. K.; Brasseur, James G.

    1994-01-01

    To study the effect of cancellations within long-range interactions on local isotropy at the small scales, we calculate explicitly the degree of cancellation in distant interactions in the simulations of Yeung & Brasseur and Yeung, Brasseur & Wang using the single scale-disparity parameter 's' developed by Zhou. In the simulations, initially isotropic simulated turbulence was subjected to coherent anisotropic forcing at the large scales, and the smallest scales were found to become anisotropic as a consequence of direct large-small scale couplings. We find that the marginally distant interactions in the simulation do not cancel out under summation and that the development of small-scale anisotropy is indeed a direct consequence of the distant triadic group, as argued by Yeung et al. A reduction of anisotropy at later times occurs as a result of the isotropizing influences of more local energy-cascading triadic interactions. Nevertheless, the local-to-nonlocal triadic group persists as an isotropizing influence at later times. We find that, whereas long-range interactions in general contribute little to net energy transfer into or out of a high-wavenumber shell k, the anisotropic transfer of component energy within the shell increases with increasing scale separation. These results are consistent with results by Zhou and by Brasseur & Wei, and suggest that the anisotropizing influences of long-range interactions should persist to higher Reynolds numbers. The residual effect of the forced distant group in this low-Reynolds-number simulation is found to be forward cascading, on average.

  7. Hierarchical Engine for Large-scale Infrastructure Co-Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-04-24

    HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.

  8. Inference of scale-free networks from gene expression time series.

    PubMed

    Daisuke, Tominaga; Horton, Paul

    2006-04-01

    Quantitative time-series observation of gene expression is becoming possible, for example by cell array technology. However, there are no practical methods with which to infer network structures using only observed time-series data. As most computational models of biological networks for continuous time-series data have a high degree of freedom, it is almost impossible to infer the correct structures. On the other hand, it has been reported that some kinds of biological networks, such as gene networks and metabolic pathways, may have scale-free properties. We hypothesize that the architecture of inferred biological network models can be restricted to scale-free networks. We developed an inference algorithm for biological networks using only time-series data by introducing such a restriction. We adopt the S-system as the network model, and a distributed genetic algorithm to optimize models by fitting their simulated output to the observed time-series data. We have tested our algorithm on a case study (simulated data). We compared optimization under no restriction, which allows for a fully connected network, and under the restriction that the total number of links must equal that expected from a scale-free network. The restriction reduced both false-positive and false-negative estimation of the links, and also the differences between the model simulation and the given time-series data.
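
    The restriction can be implemented as a target link count derived from a scale-free degree distribution; a sketch follows (the exponent and the normalization are our assumptions, not values from the paper):

```python
import numpy as np

def expected_links(n_genes, gamma=2.5, k_max=None):
    """Expected number of links in a network of n_genes nodes whose
    degrees follow a scale-free law P(k) ~ k^(-gamma). A genetic-algorithm
    search can penalize candidate S-system models whose total link count
    deviates from this target, shrinking the effective search space."""
    k_max = k_max or n_genes - 1
    k = np.arange(1, k_max + 1)
    p = k.astype(float) ** (-gamma)
    p /= p.sum()                      # normalize the degree distribution
    mean_degree = float((k * p).sum())
    return n_genes * mean_degree / 2.0  # each link joins two genes
```

    Because gamma > 2 keeps the mean degree small, the target link count grows roughly linearly in the number of genes rather than quadratically as in a fully connected network.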

  9. A Parallel Sliding Region Algorithm to Make Agent-Based Modeling Possible for a Large-Scale Simulation: Modeling Hepatitis C Epidemics in Canada.

    PubMed

    Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla

    2016-11-01

    Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, their main limitation is the high computational cost of large-scale simulation. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processed each postal-code subregion in turn; the second processed the entire population simultaneously. The parallelizable SRA showed computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to a country-wide simulation. This parallel algorithm thus makes ABM feasible for large-scale simulation with limited computational resources.

  10. IMPLEMENTATION OF FIRST-PASSAGE TIME APPROACH FOR OBJECT KINETIC MONTE CARLO SIMULATIONS OF IRRADIATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.

    2014-06-30

    The objective of the work is to implement a first-passage time (FPT) approach to deal with very fast 1D diffusing SIA clusters in KSOME (kinetic simulations of microstructural evolution) [1] to achieve longer time-scales during irradiation damage simulations. The goal is to develop FPT-KSOME, which has the same flexibility as KSOME.
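
    The gain from an FPT approach is that the many individual hops of a fast 1D diffuser are replaced by a single draw of the time to first reach a boundary. A toy comparison for an unbiased lattice walker (illustrative only; KSOME's SIA clusters and event catalog are far richer):

```python
import random

def hops_to_exit(x0, L, rng):
    """Hop-by-hop simulation of an unbiased 1D lattice walker until it
    leaves the open interval (0, L); returns the number of hops taken,
    i.e. the first-passage time in units of the hop period."""
    x, hops = x0, 0
    while 0 < x < L:
        x += rng.choice((-1, 1))
        hops += 1
    return hops

# An FPT method would instead draw the exit time directly; for this
# walker the mean is known in closed form: E[hops] = x0 * (L - x0).
rng = random.Random(42)
L, x0 = 20, 10
mean_sim = sum(hops_to_exit(x0, L, rng) for _ in range(2000)) / 2000
```

    Here `mean_sim` converges to x0 * (L - x0) = 100, but each walker costs about 100 hop events, whereas an FPT draw costs one event regardless of how fast the cluster diffuses; this is what makes longer irradiation time-scales reachable.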

  11. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware

    PubMed Central

    Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.

    2016-01-01

    SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061

  12. Micro-scale finite element modeling of ultrasound propagation in aluminum trabecular bone-mimicking phantoms: A comparison between numerical simulation and experimental results.

    PubMed

    Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S

    2016-05-01

The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples serving as trabecular bone-mimicking phantoms were used in ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated for the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in the time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy-dissipative mechanisms of ultrasonic attenuation; however, as expected, they did simulate reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6%, respectively. Moreover, the simulations closely predicted the time-of-flight velocities and the phase velocities of ultrasound, with maximum errors of 20 m/s and 11 m/s, respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures.

  13. CYCLIC THERMAL SIGNATURE IN A GLOBAL MHD SIMULATION OF SOLAR CONVECTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cossette, Jean-Francois; Charbonneau, Paul; Smolarkiewicz, Piotr K.

Global magnetohydrodynamical simulations of the solar convection zone have recently achieved cyclic large-scale axisymmetric magnetic fields undergoing polarity reversals on a decadal time scale. In this Letter, we show that these simulations also display a thermal convective luminosity that varies in-phase with the magnetic cycle, and trace this modulation to deep-seated magnetically mediated changes in convective flow patterns. Within the context of the ongoing debate on the physical origin of the observed 11 yr variations in total solar irradiance, such a signature supports the thesis according to which all, or part, of the variations on decadal time scales and longer could be attributed to a global modulation of the Sun's internal thermal structure by magnetic activity.

  14. New analytic results for speciation times in neutral models.

    PubMed

    Gernhard, Tanja

    2008-05-01

In this paper, we investigate the standard Yule model, and a recently studied model of speciation and extinction, the "critical branching process." We develop an analytic approach, as opposed to the common simulation approach, for calculating the speciation times in a reconstructed phylogenetic tree. Simple expressions for the density and the moments of the speciation times are obtained. Methods for dating a speciation event are valuable when no time scale is available for the reconstructed phylogenetic tree; a time scale may be missing when trees are built with supertree methods, from morphological data, or from molecular data that violates the molecular clock. Our analytic approach is particularly useful for the model with extinction, since simulations of birth-death processes conditioned on obtaining n extant species today are quite delicate. Furthermore, simulations are very time consuming for large n under both models.
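
The pure-birth half of the model lends itself to a quick numerical check of the kind of analytic result the paper advocates: while k lineages exist, the wait until the next speciation is exponential with rate kλ, so the expected time for a Yule tree to grow from 1 to n species is (1/λ)·Σ_{k=1}^{n-1} 1/k. A hedged Monte Carlo sketch (this is the plain Yule process, not the paper's conditioned reconstructed-tree densities):

```python
import random

def yule_time_to_n(n, lam, rng):
    """Time for a Yule (pure-birth) process to grow from 1 to n lineages."""
    t = 0.0
    for k in range(1, n):
        # with k lineages, the next speciation occurs at total rate k * lam
        t += rng.expovariate(k * lam)
    return t

def mean_time_to_n(n, lam, trials=20000, seed=1):
    """Monte Carlo estimate of the expected time to reach n species."""
    rng = random.Random(seed)
    return sum(yule_time_to_n(n, lam, rng) for _ in range(trials)) / trials
```

For n = 10 and λ = 1 the analytic value is the harmonic number H_9 ≈ 2.829, and the simulation converges to it; the paper's point is that in the birth-death case, conditioning on n extant species makes such brute-force simulation delicate while the analytic densities stay simple.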

  15. A Spiking Neural Simulator Integrating Event-Driven and Time-Driven Computation Schemes Using Parallel CPU-GPU Co-Processing: A Case Study.

    PubMed

    Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo

    2015-07-01

    Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.

  16. Next Generation Extended Lagrangian Quantum-based Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Negre, Christian

    2017-06-01

A new framework for extended Lagrangian first-principles molecular dynamics simulations is presented, which overcomes shortcomings of regular, direct Born-Oppenheimer molecular dynamics while maintaining important advantages of the unified extended Lagrangian formulation of density functional theory pioneered by Car and Parrinello three decades ago. The new framework allows, for the first time, energy-conserving, linear-scaling Born-Oppenheimer molecular dynamics simulations, which is necessary to study larger and more realistic systems over longer simulation times than previously possible. Expensive self-consistent-field optimizations are avoided, and the normal integration time steps of regular, direct Born-Oppenheimer molecular dynamics can be used. Linear-scaling electronic structure theory is presented using a graph-based approach that is ideal for parallel calculations on hybrid computer platforms. For the first time, quantum-based Born-Oppenheimer molecular dynamics simulation is becoming practically feasible for systems of 100,000+ atoms, representing a competitive alternative to classical polarizable force field methods. In collaboration with: Anders Niklasson, Los Alamos National Laboratory.

  17. Scaling Analysis of Alloy Solidification and Fluid Flow in a Rectangular Cavity

    NASA Astrophysics Data System (ADS)

    Plotkowski, A.; Fezi, K.; Krane, M. J. M.

A scaling analysis was performed to predict trends in alloy solidification in a side-cooled rectangular cavity. The governing equations for energy and momentum were scaled to determine how various aspects of solidification depend on the process parameters, assuming a uniform initial temperature and an isothermal boundary condition. This work improved on previous analyses by accounting for the cooling of the bulk fluid flow. The analysis predicted the time required to extinguish the superheat, the maximum local solidification time, and the total solidification time. The results were compared to numerical simulations of an Al-4.5 wt.% Cu alloy with various initial and boundary conditions. Good agreement was found between the simulation results and the trends predicted by the scaling analysis.

  18. Sea-ice deformation in a coupled ocean-sea-ice model and in satellite remote sensing data

    NASA Astrophysics Data System (ADS)

    Spreen, Gunnar; Kwok, Ron; Menemenlis, Dimitris; Nguyen, An T.

    2017-07-01

    A realistic representation of sea-ice deformation in models is important for accurate simulation of the sea-ice mass balance. Simulated sea-ice deformation from numerical simulations with 4.5, 9, and 18 km horizontal grid spacing and a viscous-plastic (VP) sea-ice rheology are compared with synthetic aperture radar (SAR) satellite observations (RGPS, RADARSAT Geophysical Processor System) for the time period 1996-2008. All three simulations can reproduce the large-scale ice deformation patterns, but small-scale sea-ice deformations and linear kinematic features (LKFs) are not adequately reproduced. The mean sea-ice total deformation rate is about 40 % lower in all model solutions than in the satellite observations, especially in the seasonal sea-ice zone. A decrease in model grid spacing, however, produces a higher density and more localized ice deformation features. The 4.5 km simulation produces some linear kinematic features, but not with the right frequency. The dependence on length scale and probability density functions (PDFs) of absolute divergence and shear for all three model solutions show a power-law scaling behavior similar to RGPS observations, contrary to what was found in some previous studies. Overall, the 4.5 km simulation produces the most realistic divergence, vorticity, and shear when compared with RGPS data. This study provides an evaluation of high and coarse-resolution viscous-plastic sea-ice simulations based on spatial distribution, time series, and power-law scaling metrics.

  19. A Large number of fast cosmological simulations

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Kazin, E.; Blake, C.

    2014-01-01

Mock galaxy catalogs are essential tools for analyzing large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density-field "reconstruction" technique. We use 1296^3 particles in a periodic box 600 h^-1 Mpc on a side, which is the minimum requirement set by the survey volume and observed galaxies. To perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes on 216 computing cores. We completed the 3600 simulations in a reasonable computation time of 200k core-hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.
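
The reason a COLA-type solver can get away with ~10 time steps is that each particle is evolved relative to an analytic Lagrangian trajectory, so the bulk of the displacement is carried by the linear growth factor D(a) rather than by integration. A minimal 1D sketch of that underlying idea using the Zel'dovich approximation (this is the analytic part only, not the COLA residual integrator):

```python
import numpy as np

def zeldovich_positions(q, psi, D):
    """Zel'dovich approximation: x(q, a) = q + D(a) * psi(q).

    q   : Lagrangian (initial) particle positions
    psi : initial displacement field sampled at q
    D   : linear growth factor at the requested epoch
    """
    return q + D * psi

# a COLA-style scheme time-steps only the residual
# x - (q + D * psi), which stays small, so a handful of
# steps suffices for quasi-linear large-scale structure
```

Because the analytic trajectory is exact in the linear regime, the time stepper only has to absorb the (small) nonlinear residual, which is what permits so few steps.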

  20. Atomistic and coarse-grained computer simulations of raft-like lipid mixtures.

    PubMed

    Pandit, Sagar A; Scott, H Larry

    2007-01-01

Computer modeling can provide insights into the existence, structure, size, and thermodynamic stability of localized raft-like regions in membranes. However, the challenges in the construction and simulation of accurate models of heterogeneous membranes are great. The primary obstacle in modeling the lateral organization within a membrane is the relatively slow lateral diffusion rate for lipid molecules. Microsecond or longer time-scales are needed to fully model the formation and stability of a raft in a membrane. Atomistic simulations currently are not able to reach this scale, but they do provide quantitative information on the intermolecular forces and correlations that are involved in lateral organization. In this chapter, the steps needed to carry out and analyze atomistic simulations of hydrated lipid bilayers having heterogeneous composition are outlined. It is then shown how the data from a molecular dynamics simulation can be used to construct a coarse-grained model for the heterogeneous bilayer that can predict the lateral organization and stability of rafts at up to millisecond time-scales.

  1. Finite-Time and -Size Scalings in the Evaluation of Large Deviation Functions. Numerical Analysis in Continuous Time

    NASA Astrophysics Data System (ADS)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool for studying them by simulating a large number of copies of the system, subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings to propose a numerical approach that allows extraction of the infinite-time and infinite-size limit of these estimators.
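
The population-dynamics scheme the authors analyze can be sketched on a toy observable: N clones each draw an increment of an additive quantity, every clone is weighted by exp(s · increment), and the running average of ln(mean weight) estimates the scaled cumulant generating function ψ(s). The sketch below uses i.i.d. Bernoulli(p) increments, for which ψ(s) = ln(1 − p + p·e^s) exactly; because the increments are i.i.d., the resampling step would not change the statistics and is omitted, whereas for a correlated process it is what concentrates the population on the rare trajectories.

```python
import math
import random

def cloning_scgf(s, p, n_clones=2000, n_steps=200, seed=7):
    """Estimate psi(s) for i.i.d. Bernoulli(p) increments by clone weighting.

    At each step every clone receives weight exp(s * x); the estimator
    accumulates ln(mean weight), as in population-dynamics algorithms.
    """
    rng = random.Random(seed)
    log_mean_w = 0.0
    for _ in range(n_steps):
        weights = [math.exp(s * (rng.random() < p)) for _ in range(n_clones)]
        log_mean_w += math.log(sum(weights) / n_clones)
        # (resampling proportional to weight would go here for a correlated
        #  process; it is a no-op for i.i.d. increments)
    return log_mean_w / n_steps
```

Shrinking n_clones makes the finite-population bias discussed in the abstract directly visible in the estimate, which is exactly the effect whose scaling the paper exploits.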

  2. Multi-Scale Modeling of Liquid Phase Sintering Affected by Gravity: Preliminary Analysis

    NASA Technical Reports Server (NTRS)

    Olevsky, Eugene; German, Randall M.

    2012-01-01

    A multi-scale simulation concept taking into account impact of gravity on liquid phase sintering is described. The gravity influence can be included at both the micro- and macro-scales. At the micro-scale, the diffusion mass-transport is directionally modified in the framework of kinetic Monte-Carlo simulations to include the impact of gravity. The micro-scale simulations can provide the values of the constitutive parameters for macroscopic sintering simulations. At the macro-scale, we are attempting to embed a continuum model of sintering into a finite-element framework that includes the gravity forces and substrate friction. If successful, the finite elements analysis will enable predictions relevant to space-based processing, including size and shape and property predictions. Model experiments are underway to support the models via extraction of viscosity moduli versus composition, particle size, heating rate, temperature and time.

  3. The effects of different dry roast parameters on peanut quality using an industrial, belt-type roaster simulator

    USDA-ARS?s Scientific Manuscript database

    Recent lab scale experiments demonstrated that peanuts roasted to equivalent surface colors at different temperature/time combinations can vary substantially in chemical and physical properties related to product quality. This study expanded that approach to a pilot plant scale roaster that simulate...

  4. Nudging and predictability in regional climate modelling: investigation in a nested quasi-geostrophic model

    NASA Astrophysics Data System (ADS)

    Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas

    2010-05-01

In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by the « global » version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a « global » high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields. However, because the driving large-scale fields are generally available at much lower frequency than the model time step (e.g., 6-hourly analyses), with basic interpolation between the fields, the optimum nudging time differs from zero while remaining smaller than the predictability time.
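
The nudging in these experiments amounts to adding a linear relaxation term −(u − u_driving)/τ to the model tendencies, with the nudging time τ as the tunable parameter discussed above. A toy sketch with the Lorenz-63 system standing in for the nested model (hypothetical parameter values; not the quasi-geostrophic setup of the paper):

```python
import numpy as np

def lorenz_rhs(u, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def run(u0, n_steps, dt=0.002, driving=None, tau=None):
    """Euler-integrate; if a driving trajectory is given, nudge toward it."""
    u = np.array(u0, dtype=float)
    traj = [u.copy()]
    for i in range(n_steps):
        tend = lorenz_rhs(u)
        if driving is not None:
            tend = tend - (u - driving[i]) / tau   # indiscriminate nudging
        u = u + dt * tend
        traj.append(u.copy())
    return np.array(traj)

ref = run([1.0, 1.0, 1.0], 5000)                        # "driving" run
free = run([1.001, 1.0, 1.0], 5000)                     # perturbed, free
nudged = run([1.001, 1.0, 1.0], 5000, driving=ref, tau=0.05)
```

Without nudging the perturbed run diverges from the driving trajectory at the predictability rate; with a short nudging time it stays locked on, at the price of suppressing the variability the model would otherwise develop on its own.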

  5. A reduced basis method for molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Vincent-Finley, Rachel Elisabeth

In this dissertation, we develop a method for molecular simulation based on principal component analysis (PCA) of a molecular dynamics trajectory and least-squares approximation of a potential energy function. Molecular dynamics (MD) simulation is a computational tool used to study molecular systems as they evolve through time. With respect to protein dynamics, local motions, such as bond stretching, occur within femtoseconds, while rigid-body and large-scale motions occur on time scales from nanoseconds to seconds. To capture motion at all levels, time steps on the order of a femtosecond are employed when solving the equations of motion, and simulations must continue long enough to capture the desired large-scale motion. To date, simulations of solvated proteins on the order of nanoseconds have been reported. Simulations of a few nanoseconds typically do not provide adequate information for the study of large-scale motions. Thus, the development of techniques that allow longer simulation times can advance the study of protein function and dynamics. In this dissertation we use PCA to identify the dominant characteristics of an MD trajectory and to represent the coordinates with respect to these characteristics. We augment PCA with an updating scheme based on a reduced representation of a molecule and consider equations of motion with respect to the reduced representation. We apply our method to butane and BPTI and compare the results to standard MD simulations of these molecules. Our results indicate that the molecular activity in our simulation method is analogous to that observed in standard MD simulations on the order of picoseconds.
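
The PCA step described above treats the trajectory as an (n_frames × n_coords) matrix and takes its dominant singular vectors as collective coordinates. A minimal sketch with synthetic data (not butane or BPTI):

```python
import numpy as np

def pca_modes(traj, n_modes):
    """Principal component analysis of an MD-style trajectory.

    traj : (n_frames, n_coords) array of coordinates
    returns the mean structure, the top modes, the projections onto
    them, and the variance captured per mode.
    """
    mean = traj.mean(axis=0)
    centered = traj - mean
    # SVD of the centered trajectory gives the PCA modes directly
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]                  # dominant collective coordinates
    proj = centered @ modes.T             # reduced representation
    var = (s ** 2) / (len(traj) - 1)      # variance captured per mode
    return mean, modes, proj, var

def reconstruct(mean, modes, proj):
    """Approximate the trajectory from the reduced representation."""
    return mean + proj @ modes
```

Keeping only the top few modes gives the reduced representation in which the dissertation's equations of motion are posed; projecting back reconstructs an approximation to the full trajectory.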

  6. Event Detection and Sub-state Discovery from Bio-molecular Simulations Using Higher-Order Statistics: Application To Enzyme Adenylate Kinase

    PubMed Central

    Ramanathan, Arvind; Savol, Andrej J.; Agarwal, Pratul K.; Chennubhotla, Chakra S.

    2012-01-01

Biomolecular simulations at millisecond and longer timescales can provide vital insights into functional mechanisms. Since post-simulation analysis of such large trajectory data sets can be a limiting factor in obtaining biological insights, there is an emerging need to identify key dynamical events and relate them to biological function online, that is, as simulations are progressing. Recently, we introduced a novel computational technique, quasi-anharmonic analysis (QAA) (PLoS One 6(1): e15827), for partitioning the conformational landscape into a hierarchy of functionally relevant sub-states. The unique capabilities of QAA are enabled by exploiting anharmonicity in the form of fourth-order statistics for characterizing atomic fluctuations. In this paper, we extend QAA for analyzing long time-scale simulations online. In particular, we present HOST4MD, a higher-order statistical toolbox for molecular dynamics simulations, which (1) identifies key dynamical events as simulations are in progress, (2) explores potential sub-states, and (3) identifies conformational transitions that enable the protein to access those sub-states. We demonstrate HOST4MD on microsecond time-scale simulations of the enzyme adenylate kinase in its apo state. HOST4MD identifies several conformational events in these simulations, revealing how the intrinsic coupling between the three sub-domains (LID, CORE, and NMP) changes during the simulations. Further, it identifies an inherent asymmetry in the opening/closing of the two binding sites. We anticipate HOST4MD will provide a powerful and extensible framework for detecting biophysically relevant conformational coordinates from long time-scale simulations. PMID:22733562

  7. TURBULENCE AND PROTON–ELECTRON HEATING IN KINETIC PLASMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthaeus, William H; Parashar, Tulasi N; Wu, P.

    2016-08-10

    Analysis of particle-in-cell simulations of kinetic plasma turbulence reveals a connection between the strength of cascade, the total heating rate, and the partitioning of dissipated energy into proton heating and electron heating. A von Karman scaling of the cascade rate explains the total heating across several families of simulations. The proton to electron heating ratio increases in proportion to total heating. We argue that the ratio of gyroperiod to nonlinear turnover time at the ion kinetic scales controls the ratio of proton and electron heating. The proposed scaling is consistent with simulations.

  8. Supercomputers ready for use as discovery machines for neuroscience.

    PubMed

    Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus

    2012-01-01

    NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10(8) neurons and 10(12) synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience.

  9. Supercomputers Ready for Use as Discovery Machines for Neuroscience

    PubMed Central

    Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus

    2012-01-01

NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience. PMID:23129998

  10. Lagrangian and Eulerian statistics obtained from direct numerical simulations of homogeneous turbulence

    NASA Technical Reports Server (NTRS)

    Squires, Kyle D.; Eaton, John K.

    1991-01-01

    Direct numerical simulation is used to study dispersion in decaying isotropic turbulence and homogeneous shear flow. Both Lagrangian and Eulerian data are presented allowing direct comparison, but at fairly low Reynolds number. The quantities presented include properties of the dispersion tensor, isoprobability contours of particle displacement, Lagrangian and Eulerian velocity autocorrelations and time scale ratios, and the eddy diffusivity tensor. The Lagrangian time microscale is found to be consistently larger than the Eulerian microscale, presumably due to the advection of the small scales by the large scales in the Eulerian reference frame.
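
The time scales compared above are derived from velocity autocorrelations, which are straightforward to compute from sampled data. A hedged sketch on a synthetic first-order autoregressive signal (not DNS data); the integral time scale is the area under the autocorrelation curve, while the Taylor-type microscale would come from its curvature at zero lag:

```python
import numpy as np

def autocorr(u, max_lag):
    """Normalized autocorrelation of a stationary signal u."""
    u = u - u.mean()
    var = (u * u).mean()
    return np.array([(u[:len(u) - k] * u[k:]).mean() / var
                     for k in range(max_lag)])

def integral_time_scale(u, dt, max_lag):
    """Integral time scale: trapezoidal area under the autocorrelation."""
    rho = autocorr(u, max_lag)
    return dt * (0.5 * (rho[0] + rho[-1]) + rho[1:-1].sum())
```

For an exponentially correlated signal with correlation time τ, the integral scale recovered this way converges to τ (up to the truncation of the lag window), which makes the routine easy to validate before applying it to simulation data.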

  11. Quasi-real-time end-to-end simulations of ELT-scale adaptive optics systems on GPUs

    NASA Astrophysics Data System (ADS)

    Gratadour, Damien

    2011-09-01

Our team has started the development of a code dedicated to GPUs for the simulation of AO systems at the E-ELT scale. It uses the CUDA toolkit and an original binding to Yorick (an open-source interpreted language) to provide the user with a comprehensive interface. In this paper we present the first performance analysis of our simulation code, showing its ability to provide Shack-Hartmann (SH) images and measurements at the kHz scale for a VLT-sized AO system and in quasi-real time (up to 70 Hz) for ELT-sized systems on a single top-end GPU. The simulation code includes multi-layer atmospheric turbulence generation, ray tracing through these layers, image formation at the focal plane of every sub-aperture of a SH sensor using either natural or laser guide stars, and centroiding on these images using various algorithms. Turbulence is generated on the fly, giving the ability to simulate hours of observations without the need to load extremely large phase screens into global memory. Because of its performance, this code additionally provides the unique ability to test real-time controllers for future AO systems under nominal conditions.

  12. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computing power available from high-performance architectures, enabling an increase in three-dimensional simulation size up to a 512^3 grid cube. With the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability that improves with increasing problem size. In addition, a model for predicting runtime is developed, which shows good agreement with actual run times from numerical tests.

  13. Fully kinetic 3D simulations of the Hermean magnetosphere under realistic conditions: a new approach

    NASA Astrophysics Data System (ADS)

    Amaya, Jorge; Gonzalez-Herrero, Diego; Lembège, Bertrand; Lapenta, Giovanni

    2017-04-01

Simulations of the magnetospheres of planets are usually performed using MHD or hybrid approaches. However, these two methods still rely on approximations for the computation of the pressure tensor and require neutrality of the plasma at every point of the domain by construction. These approximations obscure the role of electrons in the emergence of plasma features in planetary magnetospheres. The high mobility of electrons, their characteristic time and space scales, and the lack of perfect neutrality are the source of many observed magnetospheric phenomena, including the turbulent energy cascade, magnetic reconnection, particle acceleration at the shock front, and the formation of current systems around the magnetosphere. Fully kinetic codes are extremely demanding of computing time and have been unable to simulate a full magnetosphere at the real scales of a planet with realistic plasma conditions. There are two main reasons: 1) explicit codes must resolve the electron scales, limiting the time and space discretisation, and 2) current versions of semi-implicit codes are unstable for cell sizes larger than a few Debye lengths. In this work we present new simulations performed with ECsim, an Energy Conserving semi-implicit method [1], which can overcome these two barriers. We compare the solutions obtained with ECsim with those obtained with the classic semi-implicit code iPic3D [2]. The new simulations with ECsim demand a larger computational effort, but the time and space discretisations are larger than those in iPic3D, allowing for a faster simulation of the full planetary environment. The new code, ECsim, can reach a resolution that captures significant large-scale physics without losing kinetic electron information, such as wave-electron interaction and non-Maxwellian electron velocity distributions [3].
The code is able to better capture the thickness of the different boundary layers of the magnetosphere of Mercury. Electron kinetics are consistent with the spatial and temporal scale resolutions. Simulations are compared with measurements from the MESSENGER spacecraft showing a better fit when compared against the classic fully kinetic code iPic3D. These results show that the new generation of Energy Conserving semi-implicit codes can be used for an accurate analysis and interpretation of particle data from magnetospheric missions like BepiColombo and MMS, including electron velocity distributions and electron temperature anisotropies. [1] Lapenta, G. (2016). Exactly Energy Conserving Implicit Moment Particle in Cell Formulation. arXiv preprint arXiv:1602.06326. [2] Markidis, S., & Lapenta, G. (2010). Multi-scale simulations of plasma with iPIC3D. Mathematics and Computers in Simulation, 80(7), 1509-1519. [3] Lapenta, G., Gonzalez-Herrero, D., & Boella, E. (2016). Multiple scale kinetic simulations with the energy conserving semi implicit particle in cell (PIC) method. arXiv preprint arXiv:1612.08289.

  14. Signatures Of Coronal Heating Driven By Footpoint Shuffling: Closed and Open Structures.

    NASA Astrophysics Data System (ADS)

    Velli, M. C. M.; Rappazzo, A. F.; Dahlburg, R. B.; Einaudi, G.; Ugarte-Urra, I.

    2017-12-01

We have previously described the characteristic state of the confined coronal magnetic field as a special case of magnetically dominated magnetohydrodynamic (MHD) turbulence, where the free energy in the transverse magnetic field is continuously cascaded to small scales, even though the overall kinetic energy is small. This coronal turbulence problem is defined by the photospheric boundary conditions: here we discuss recent numerical simulations of the fully compressible 3D MHD equations using the HYPERION code. Loops are forced at their footpoints by random photospheric motions, energizing the field to a state with continuous formation and dissipation of field-aligned current sheets: energy is deposited at small scales, where heating occurs. Only a fraction of the coronal mass and volume is heated at any time. Temperature and density are highly structured at scales that, in the solar corona, remain observationally unresolved: the plasma of simulated loops is multithermal, with highly dynamical hotter and cooler plasma strands scattered throughout the loop at sub-observational scales. We will also compare reduced MHD simulations with fully compressible simulations, and photospheric forcings with different time scales relative to the Alfvén transit time. Finally, we will discuss the differences between the closed-field and open-field (solar wind) turbulent heating problems, leading to observational consequences that may be amenable to Parker Solar Probe and Solar Orbiter.

  15. Implementation Strategies for Large-Scale Transport Simulations Using Time Domain Particle Tracking

    NASA Astrophysics Data System (ADS)

    Painter, S.; Cvetkovic, V.; Mancillas, J.; Selroos, J.

    2008-12-01

    Time domain particle tracking is an emerging alternative to the conventional random walk particle tracking algorithm. With time domain particle tracking, particles are moved from node to node on one-dimensional pathways defined by streamlines of the groundwater flow field or by discrete subsurface features. The time to complete each deterministic segment is sampled from residence time distributions that include the effects of advection, longitudinal dispersion, a variety of kinetically controlled retention (sorption) processes, linear transformation, and temporal changes in groundwater velocities and sorption parameters. The simulation results in a set of arrival times at a monitoring location that can be post-processed with a kernel method to construct mass discharge (breakthrough) versus time. Implementation strategies differ for discrete flow (fractured media) systems and continuous porous media systems. The implementation strategy also depends on the scale at which hydraulic property heterogeneity is represented in the supporting flow model. For flow models that explicitly represent discrete features (e.g., discrete fracture networks), the sampling of residence times along segments is conceptually straightforward. For continuous porous media, such sampling needs to be related to the Lagrangian velocity field. Analytical or semi-analytical methods may be used to approximate the Lagrangian segment velocity distributions in aquifers with low-to-moderate variability, thereby capturing transport effects of subgrid velocity variability. If variability in hydraulic properties is large, however, Lagrangian velocity distributions are difficult to characterize and numerical simulations are required; in particular, numerical simulations are likely to be required for estimating the velocity integral scale as a basis for advective segment distributions. Aquifers with evolving heterogeneity scales present additional challenges. 
Large-scale simulations of radionuclide transport at two potential repository sites for high-level radioactive waste will be used to demonstrate the potential of the method. The simulations considered approximately 1000 source locations, multiple radionuclides with contrasting sorption properties, and abrupt changes in groundwater velocity associated with future glacial scenarios. Transport pathways linking the source locations to the accessible environment were extracted from discrete feature flow models that include detailed representations of the repository construction (tunnels, shafts, and emplacement boreholes) embedded in stochastically generated fracture networks. Acknowledgment: The authors are grateful to the SwRI Advisory Committee for Research, the Swedish Nuclear Fuel and Waste Management Company, and Posiva Oy for financial support.
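The node-to-node sampling described in this record can be sketched in a few lines; the lognormal segment residence-time distributions, the three-segment pathway, and the kernel bandwidth below are illustrative assumptions, not the models used at the repository sites:

```python
import random
import math

def simulate_arrivals(segment_means, n_particles=2000, cv=0.5, seed=1):
    """Time domain particle tracking sketch: each particle's total travel
    time is the sum of independent residence times sampled per segment.
    Residence times here are lognormal with the given mean and coefficient
    of variation (a stand-in for advection-dispersion-retention models)."""
    rng = random.Random(seed)
    sigma2 = math.log(1.0 + cv * cv)            # lognormal shape from CV
    arrivals = []
    for _ in range(n_particles):
        t = 0.0
        for mean in segment_means:
            mu = math.log(mean) - 0.5 * sigma2  # match the target mean
            t += rng.lognormvariate(mu, math.sqrt(sigma2))
        arrivals.append(t)
    return arrivals

def breakthrough(arrivals, t_grid, bandwidth):
    """Gaussian-kernel estimate of the mass discharge (breakthrough)
    curve from the discrete arrival times, as in the post-processing step."""
    norm = 1.0 / (len(arrivals) * bandwidth * math.sqrt(2.0 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((t - a) / bandwidth) ** 2)
                       for a in arrivals) for t in t_grid]

arrivals = simulate_arrivals([10.0, 5.0, 20.0])           # three segments
curve = breakthrough(arrivals, [t * 2.0 for t in range(50)], bandwidth=3.0)
```

Because each segment time is drawn from a distribution rather than integrated step by step, the per-particle cost is independent of the travel time itself, which is what makes the method attractive for long time scales.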

  16. A real-time simulator of a turbofan engine

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Delaat, John C.; Merrill, Walter C.

    1989-01-01

A real-time digital simulator of a Pratt and Whitney F100 engine has been developed for real-time code verification and for actuator diagnosis during full-scale engine testing. This self-contained unit can operate in an open-loop stand-alone mode or as part of a closed-loop control system. It can also be used for control system design and development. Tests conducted in conjunction with the NASA Advanced Detection, Isolation, and Accommodation program show that the simulator is a valuable tool for real-time code verification and as a real-time actuator simulator for actuator fault diagnosis. Although currently a small perturbation model, advances in microprocessor hardware should allow the simulator to evolve into a real-time, full-envelope, full engine simulation.

  17. An effective online data monitoring and saving strategy for large-scale climate simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xian, Xiaochen; Archibald, Rick; Mayer, Benjamin

Large-scale climate simulation models have been developed and widely used to generate historical data and study future climate scenarios. These simulation models often have to run for a couple of months to understand the changes in the global climate over the course of decades. This long-duration simulation process creates a huge amount of data with both high temporal and spatial resolution information; however, how to effectively monitor and record the climate changes based on these large-scale simulation results that are continuously produced in real time still remains to be resolved. Due to the slow process of writing data to disk, the current practice is to save a snapshot of the simulation results at a constant, slow rate although the data generation process runs at a very high speed. This study proposes an effective online data monitoring and saving strategy over the temporal and spatial domains with the consideration of practical storage and memory capacity constraints. Finally, our proposed method is able to intelligently select and record the most informative extreme values in the raw data generated from real-time simulations in the context of better monitoring climate changes.

  18. An effective online data monitoring and saving strategy for large-scale climate simulations

    DOE PAGES

    Xian, Xiaochen; Archibald, Rick; Mayer, Benjamin; ...

    2018-01-22

Large-scale climate simulation models have been developed and widely used to generate historical data and study future climate scenarios. These simulation models often have to run for a couple of months to understand the changes in the global climate over the course of decades. This long-duration simulation process creates a huge amount of data with both high temporal and spatial resolution information; however, how to effectively monitor and record the climate changes based on these large-scale simulation results that are continuously produced in real time still remains to be resolved. Due to the slow process of writing data to disk, the current practice is to save a snapshot of the simulation results at a constant, slow rate although the data generation process runs at a very high speed. This study proposes an effective online data monitoring and saving strategy over the temporal and spatial domains with the consideration of practical storage and memory capacity constraints. Finally, our proposed method is able to intelligently select and record the most informative extreme values in the raw data generated from real-time simulations in the context of better monitoring climate changes.
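As a toy illustration of the retention idea (the paper's criterion for "most informative extreme values" is more elaborate than a simple maximum), a fixed-capacity min-heap can keep the largest values of a stream online, so memory use is bounded no matter how long the simulation runs:

```python
import heapq

def record_extremes(stream, capacity):
    """Online saving-strategy sketch: retain only the `capacity` most
    extreme (here: largest) values from a simulation output stream, using
    a min-heap so each update is O(log capacity) regardless of stream
    length. Returns (value, index) pairs, most extreme first."""
    heap = []  # min-heap of (value, index) pairs for the retained snapshots
    for i, value in enumerate(stream):
        if len(heap) < capacity:
            heapq.heappush(heap, (value, i))
        elif value > heap[0][0]:           # more extreme than the weakest kept
            heapq.heapreplace(heap, (value, i))
    return sorted(heap, reverse=True)
```

Because the weakest retained value is always at the heap root, the decision to keep or discard a new sample is a single comparison, which matters when data are produced faster than they can be written to disk.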

  19. Time scale bridging in atomistic simulation of slow dynamics: viscous relaxation and defect activation

    NASA Astrophysics Data System (ADS)

    Kushima, A.; Eapen, J.; Li, Ju; Yip, S.; Zhu, T.

    2011-08-01

    Atomistic simulation methods are known for timescale limitations in resolving slow dynamical processes. Two well-known scenarios of slow dynamics are viscous relaxation in supercooled liquids and creep deformation in stressed solids. In both phenomena the challenge to theory and simulation is to sample the transition state pathways efficiently and follow the dynamical processes on long timescales. We present a perspective based on the biased molecular simulation methods such as metadynamics, autonomous basin climbing (ABC), strain-boost and adaptive boost simulations. Such algorithms can enable an atomic-level explanation of the temperature variation of the shear viscosity of glassy liquids, and the relaxation behavior in solids undergoing creep deformation. By discussing the dynamics of slow relaxation in two quite different areas of condensed matter science, we hope to draw attention to other complex problems where anthropological or geological-scale time behavior can be simulated at atomic resolution and understood in terms of micro-scale processes of molecular rearrangements and collective interactions. As examples of a class of phenomena that can be broadly classified as materials ageing, we point to stress corrosion cracking and cement setting as opportunities for atomistic modeling and simulations.
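A minimal 1D sketch of the basin-filling idea shared by ABC and metadynamics: relax on a biased energy surface, deposit a Gaussian penalty at the current minimum, and repeat until the system is pushed over the barrier. The double-well potential, penalty parameters, and the small deterministic kick are illustrative assumptions, not the published algorithms:

```python
import math

def abc_escape(V, x0, w=0.4, sigma=0.25, steps=200, lr=0.01, max_penalties=40):
    """Autonomous-basin-climbing-style sketch in 1D: alternate between
    (a) gradient-descent relaxation on the biased energy V + sum of
    Gaussian penalties and (b) depositing a new penalty at the current
    minimum, so the walker escapes without waiting for rare activation."""
    penalties = []                # centers of deposited Gaussians
    def biased(x):
        return V(x) + sum(w * math.exp(-(x - c) ** 2 / (2 * sigma ** 2))
                          for c in penalties)
    def grad(x, h=1e-5):
        return (biased(x + h) - biased(x - h)) / (2 * h)
    x = x0
    path = [x]
    for _ in range(max_penalties):
        for _ in range(steps):    # relax on the current biased surface
            x -= lr * grad(x)
        penalties.append(x)       # fill the current basin a bit more
        path.append(x)
        x += 1e-2                 # tiny kick off the freshly deposited penalty

    return path

# Double well with minima near x = -1 and x = +1, barrier of height 1 at x = 0.
double_well = lambda x: (x ** 2 - 1.0) ** 2
path = abc_escape(double_well, x0=-1.0)
```

After a handful of penalties the walker crosses the x = 0 barrier into the right-hand well; the accumulated penalties approximate the free-energy landscape, which is how such methods reconstruct activation barriers.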

  20. Coarse-graining to the meso and continuum scales with molecular-dynamics-like models

    NASA Astrophysics Data System (ADS)

    Plimpton, Steve

    Many engineering-scale problems that industry or the national labs try to address with particle-based simulations occur at length and time scales well beyond the most optimistic hopes of traditional coarse-graining methods for molecular dynamics (MD), which typically start at the atomic scale and build upward. However classical MD can be viewed as an engine for simulating particles at literally any length or time scale, depending on the models used for individual particles and their interactions. To illustrate I'll highlight several coarse-grained (CG) materials models, some of which are likely familiar to molecular-scale modelers, but others probably not. These include models for water droplet freezing on surfaces, dissipative particle dynamics (DPD) models of explosives where particles have internal state, CG models of nano or colloidal particles in solution, models for aspherical particles, Peridynamics models for fracture, and models of granular materials at the scale of industrial processing. All of these can be implemented as MD-style models for either soft or hard materials; in fact they are all part of our LAMMPS MD package, added either by our group or contributed by collaborators. Unlike most all-atom MD simulations, CG simulations at these scales often involve highly non-uniform particle densities. So I'll also discuss a load-balancing method we've implemented for these kinds of models, which can improve parallel efficiencies. From the physics point-of-view, these models may be viewed as non-traditional or ad hoc. But because they are MD-style simulations, there's an opportunity for physicists to add statistical mechanics rigor to individual models. Or, in keeping with a theme of this session, to devise methods that more accurately bridge models from one scale to the next.

  1. Testing new approaches to carbonate system simulation at the reef scale: the ReefSam model first results, application to a question in reef morphology and future challenges.

    NASA Astrophysics Data System (ADS)

    Barrett, Samuel; Webster, Jody

    2016-04-01

Numerical simulation of the stratigraphy and sedimentology of carbonate systems (carbonate forward stratigraphic modelling - CFSM) provides significant insight into the understanding of both the physical nature of these systems and the processes which control their development. It also provides the opportunity to quantitatively test conceptual models concerning stratigraphy, sedimentology or geomorphology, and allows us to extend our knowledge either spatially (e.g. between bore holes) or temporally (forwards or backwards in time). The latter is especially important in determining the likely future development of carbonate systems, particularly regarding the effects of climate change. This application, by its nature, requires successful simulation of carbonate systems on short time scales and at high spatial resolutions. Previous modelling attempts have typically focused on the scales of kilometers and kilo-years or greater (the scale of entire carbonate platforms), rather than at the scale of centuries or decades, and tens to hundreds of meters (the scale of individual reefs). Previous work has identified limitations in common approaches to simulating important reef processes. We present a new CFSM, Reef Sedimentary Accretion Model (ReefSAM), which is designed to test new approaches to simulating reef-scale processes, with the aim of being able to better simulate the past and future development of coral reefs. Four major features have been tested: 1. A simulation of wave based hydrodynamic energy with multiple simultaneous directions and intensities including wave refraction, interaction, and lateral sheltering. 2. Sediment transport simulated as sediment being moved from cell to cell in an iterative fashion until complete deposition. 3. A coral growth model including consideration of local wave energy and composition of the basement substrate (as well as depth). 4. 
A highly quantitative model testing approach where dozens of output parameters describing the reef morphology and development are compared with observational data. Despite being a test-bed and work in progress, ReefSAM was able to simulate the Holocene development of One Tree Reef in the Southern Great Barrier Reef (Australia) and was able to improve upon previous modelling attempts in terms of both quantitative measures and qualitative outputs, such as the presence of previously un-simulated reef features. Given the success of the model in simulating the Holocene development of OTR, we used it to quantitatively explore the effect of basement substrate depth and morphology on reef maturity/lagoonal filling (as discussed by Purdy and Gischler, 2005). Initial results show a number of non-linear relationships between basement substrate depth, lagoonal filling and volume of sand produced on the reef rims and deposited in the lagoon. Lastly, further testing of the model has revealed new challenges which are likely to manifest in any attempt at reef-scale simulation. Subtly different sets of energy direction and magnitude input parameters (different in each time step but with identical probability distributions across the entire model run) resulted in a wide range of quantitative model outputs. Time step length is a likely contributing factor and the results of further testing to address this challenge will be presented.

  2. Representation of fine scale atmospheric variability in a nudged limited area quasi-geostrophic model: application to regional climate modelling

    NASA Astrophysics Data System (ADS)

    Omrani, H.; Drobinski, P.; Dubos, T.

    2009-09-01

In this work, we consider the effect of indiscriminate nudging time on the large and small scales of an idealized limited area model simulation. The limited area model is a two layer quasi-geostrophic model on the beta-plane driven at its boundaries by its "global" version with periodic boundary condition. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009) who investigated the existence of an optimal nudging time minimizing the error on both large and small scale in a linear model, we here use a fully non-linear model which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. Then, the effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high resolution two layer quasi-geostrophic model driven by a low resolution two layer quasi-geostrophic model. (2) similar simulations are conducted with the two layer quasi-geostrophic LAM where the size of the LAM domain comes into play in addition to the first set of simulations. In the two sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed with a nudging time close to the predictability time.
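The trajectory-divergence measurement of the Lyapunov exponent can be illustrated on a cheap stand-in chaotic system (the fully chaotic logistic map, whose largest exponent is known analytically to be ln 2) rather than the quasi-geostrophic model itself:

```python
import math

def largest_lyapunov(step, x0, d0=1e-9, n_steps=2000, n_transient=100):
    """Estimate the largest Lyapunov exponent by the divergence-of-
    trajectories method: evolve a reference and a perturbed trajectory,
    accumulate the log separation growth each step, and renormalize the
    perturbation so the pair stays in the linear regime."""
    x = x0
    for _ in range(n_transient):          # discard the initial transient
        x = step(x)
    y = x + d0
    lyap_sum = 0.0
    for _ in range(n_steps):
        x, y = step(x), step(y)
        d = abs(y - x)
        if d == 0.0:                      # guard against float coincidence
            y, d = x + d0, d0
        lyap_sum += math.log(d / d0)
        y = x + d0 * (y - x) / d          # rescale separation back to d0
    return lyap_sum / n_steps             # mean log growth rate per step

# Stand-in chaotic system: the logistic map at r = 4.
logistic = lambda x: 4.0 * x * (1.0 - x)
lam = largest_lyapunov(logistic, x0=0.2)
```

A positive estimate (here close to ln 2 ≈ 0.693) quantifies the predictability horizon: initial errors of size d grow roughly like d·exp(λt).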

  3. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    NASA Astrophysics Data System (ADS)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). 
This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
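A minimal sketch of the distribution-function idea: infiltration-excess runoff is computed as an expectation over a within-day intensity cdf instead of from the daily mean intensity. The exponential intensity distribution and the parameter values are illustrative assumptions, not the calibrated model:

```python
import math

def runoff_from_cdf(mean_intensity, capacity, n=10000):
    """Distribution-function approach sketch: within-day rainfall intensity
    is described by a cdf (here an exponential distribution, assumed for
    illustration) and infiltration-excess runoff is the expectation of
    max(intensity - capacity, 0) over that distribution."""
    total = 0.0
    for k in range(n):                            # integrate over quantiles
        p = (k + 0.5) / n
        i = -mean_intensity * math.log(1.0 - p)   # exponential quantile
        total += max(i - capacity, 0.0)
    return total / n

mean_i, f_cap = 2.0, 3.0             # mm/h mean intensity, infiltration capacity
lumped = max(mean_i - f_cap, 0.0)    # daily-average model: predicts zero runoff
distributed = runoff_from_cdf(mean_i, f_cap)
```

Because the mean intensity lies below the infiltration capacity, the daily-lumped model predicts no runoff at all, while the cdf approach recovers the runoff generated by the short high-intensity bursts in the tail of the distribution.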

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, William D; Johansen, Hans; Evans, Katherine J

We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
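The benefit of implicit time integration for fast physical time scales can be seen on the standard stiff test equation y' = -λy (a textbook stand-in, not a technique taken from the survey itself):

```python
def explicit_euler(lmbda, dt, n, y0=1.0):
    """Forward Euler for y' = -lmbda*y: stable only when dt < 2/lmbda."""
    y = y0
    for _ in range(n):
        y = y + dt * (-lmbda * y)
    return y

def implicit_euler(lmbda, dt, n, y0=1.0):
    """Backward Euler for y' = -lmbda*y: y_{k+1} = y_k / (1 + lmbda*dt),
    unconditionally stable, so the step size is not limited by the fast
    physical time scale."""
    y = y0
    for _ in range(n):
        y = y / (1.0 + lmbda * dt)
    return y

# A fast time scale (lmbda = 1000) stepped at dt = 0.01, i.e. dt = 10/lmbda:
fast, dt, n = 1000.0, 0.01, 100
blown_up = abs(explicit_euler(fast, dt, n))  # amplified by |1 - 10| per step
damped = implicit_euler(fast, dt, n)         # decays, like the true solution
```

This is why treating fast scales implicitly (or separately) lets climate codes take time steps set by the phenomena of interest rather than by the fastest process in the system.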

  5. A comparative study of two approaches to analyse groundwater recharge, travel times and nitrate storage distribution at a regional scale

    NASA Astrophysics Data System (ADS)

    Turkeltaub, T.; Ascott, M.; Gooddy, D.; Jia, X.; Shao, M.; Binley, A. M.

    2017-12-01

Understanding deep percolation, travel time processes and nitrate storage in the unsaturated zone at a regional scale is crucial for sustainable management of many groundwater systems. Recently, global hydrological models have been developed to quantify the water balance at such scales and beyond. However, the coarse spatial resolution of the global hydrological models can be a limiting factor when analysing regional processes. This study compares simulations of water flow and nitrate storage based on regional and global scale approaches. The first approach was applied over the Loess Plateau of China (LPC) to investigate the water fluxes and nitrate storage and travel time to the LPC groundwater system. Using raster maps of climate variables, land use data and soil parameters enabled us to determine fluxes by employing Richards' equation and the advection-dispersion equation. These calculations were conducted for each cell on the raster map in a multiple 1-D column approach. In the second approach, vadose zone travel times and nitrate storage were estimated by coupling groundwater recharge (PCR-GLOBWB) and nitrate leaching (IMAGE) models with estimates of water table depth and unsaturated zone porosity. The simulation results of the two methods indicate similar spatial groundwater recharge, nitrate storage and travel time distribution. Intensive recharge rates are located mainly at the south central and south west parts of the aquifer's outcrops. Particularly low recharge rates were simulated in the top central area of the outcrops. However, there are significant discrepancies between the simulated absolute recharge values, which might be related to the coarse scale that is used in the PCR-GLOBWB model, leading to smoothing of the recharge estimations. 
Both models indicated large nitrate inventories in the south central and south west parts of the aquifer's outcrops and the shortest travel times in the vadose zone are in the south central and east parts of the outcrops. Our results suggest that, for the LPC at least, global scale models might be useful for highlighting the locations with higher recharge potential and nitrate contamination risk. Global modelling simulations appear ideal as a primary step in recognizing locations which require investigations at the plot, field and local scales.
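A sketch of the kind of vadose-zone travel-time estimate implied by coupling recharge to water table depth and unsaturated-zone porosity; the piston-flow formula is a standard simplification and the numbers are illustrative, not the study's values:

```python
def vadose_travel_time(water_table_depth_m, porosity, recharge_m_per_yr):
    """Piston-flow estimate: the time for water (and dissolved nitrate) to
    cross the unsaturated zone is the mobile water volume per unit area
    (depth * porosity) divided by the recharge flux."""
    return water_table_depth_m * porosity / recharge_m_per_yr

# Illustrative numbers (not site data): 40 m to the water table,
# drainable porosity 0.1, recharge 80 mm/yr -> 50-year travel time.
t = vadose_travel_time(40.0, 0.1, 0.08)
```

Estimates of this form explain why thick loess sections with low recharge can hold decades' worth of legacy nitrate above the water table.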

  6. Grand Minima and Equatorward Propagation in a Cycling Stellar Convective Dynamo

    NASA Astrophysics Data System (ADS)

    Augustson, Kyle C.; Brun, Allan Sacha; Miesch, Mark; Toomre, Juri

    2015-08-01

The 3-D magnetohydrodynamic (MHD) Anelastic Spherical Harmonic (ASH) code, using slope-limited diffusion, is employed to capture convective and dynamo processes achieved in a global-scale stellar convection simulation for a model solar-mass star rotating at three times the solar rate. The dynamo-generated magnetic field possesses many time scales, with a prominent polarity cycle occurring roughly every 6.2 years. The magnetic field forms large-scale toroidal wreaths, whose formation is tied to the low Rossby number of the convection in this simulation. The polarity reversals are linked to the weakened differential rotation and a resistive collapse of the large-scale magnetic field. An equatorial migration of the magnetic field is seen, which is due to the strong modulation of the differential rotation rather than a dynamo wave. A poleward migration of magnetic flux from the equator eventually leads to the reversal of the polarity of the high-latitude magnetic field. This simulation also enters an interval with reduced magnetic energy at low latitudes lasting roughly 16 years (about 2.5 polarity cycles), during which the polarity cycles are disrupted and after which the dynamo recovers its regular polarity cycles. An analysis of this grand minimum reveals that it likely arises through the interplay of symmetric and antisymmetric dynamo families. This intermittent dynamo state potentially results from the simulation's relatively low magnetic Prandtl number. A mean-field-based analysis of this dynamo simulation demonstrates that it is of the α-Ω type. The time scales that appear to be relevant to the magnetic polarity reversal are also identified.

  7. Nonadiabatic dynamics of electron transfer in solution: Explicit and implicit solvent treatments that include multiple relaxation time scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwerdtfeger, Christine A.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon, E-mail: shs3@illinois.edu

    2014-01-21

The development of efficient theoretical methods for describing electron transfer (ET) reactions in condensed phases is important for a variety of chemical and biological applications. Previously, dynamical dielectric continuum theory was used to derive Langevin equations for a single collective solvent coordinate describing ET in a polar solvent. In this theory, the parameters are directly related to the physical properties of the system and can be determined from experimental data or explicit molecular dynamics simulations. Herein, we combine these Langevin equations with surface hopping nonadiabatic dynamics methods to calculate the rate constants for thermal ET reactions in polar solvents for a wide range of electronic couplings and reaction free energies. Comparison of explicit and implicit solvent calculations illustrates that the mapping from explicit to implicit solvent models is valid even for solvents exhibiting complex relaxation behavior with multiple relaxation time scales and a short-time inertial response. The rate constants calculated for implicit solvent models with a single solvent relaxation time scale corresponding to water, acetonitrile, and methanol agree well with analytical theories in the Golden rule and solvent-controlled regimes, as well as in the intermediate regime. The implicit solvent models with two relaxation time scales are in qualitative agreement with the analytical theories but quantitatively overestimate the rate constants compared to these theories. Analysis of these simulations elucidates the importance of multiple relaxation time scales and the inertial component of the solvent response, as well as potential shortcomings of the analytical theories based on single time scale solvent relaxation models. 
This implicit solvent approach will enable the simulation of a wide range of ET reactions via the stochastic dynamics of a single collective solvent coordinate with parameters that are relevant to experimentally accessible systems.
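The stochastic dynamics of a single collective solvent coordinate can be sketched as an overdamped Langevin equation integrated by Euler-Maruyama; the harmonic free energy and the single relaxation time below are illustrative simplifications, not the paper's dielectric-continuum equations:

```python
import math
import random

def langevin_trajectory(tau, kT=1.0, dt=0.01, n=20000, seed=7):
    """Euler-Maruyama sketch of an overdamped Langevin equation for a
    collective solvent coordinate x with harmonic free energy F = x^2/2
    and relaxation time tau:
        dx = -(x / tau) dt + sqrt(2 kT / tau) dW."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    noise = math.sqrt(2.0 * kT * dt / tau)   # sqrt(2 kT / tau) * sqrt(dt)
    for _ in range(n):
        x += -(x / tau) * dt + noise * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

xs = langevin_trajectory(tau=1.0)
var = sum(x * x for x in xs) / len(xs)   # should equilibrate near kT
```

A surface-hopping calculation would evolve such a coordinate on each diabatic free-energy surface and switch surfaces stochastically near their crossing; here only the solvent dynamics itself is sketched.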

  8. Orthogonal recursive bisection as data decomposition strategy for massively parallel cardiac simulations.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Pitman, Michael C; Rice, John J

    2011-06-01

We present the orthogonal recursive bisection algorithm that hierarchically segments the anatomical model structure into subvolumes that are distributed to cores. The anatomy is derived from the Visible Human Project, with electrophysiology based on the FitzHugh-Nagumo (FHN) and ten Tusscher (TT04) models with monodomain diffusion. Benchmark simulations with up to 16,384 and 32,768 cores on IBM Blue Gene/P and L supercomputers for both FHN and TT04 results show good load balancing with almost perfect speedup factors that are close to linear with the number of cores. Hence, strong scaling is demonstrated. With 32,768 cores, a 1000 ms simulation of a full heart beat requires about 6.5 min of wall clock time for a simulation of the FHN model. For the largest machine partitions, the simulations execute at a rate of 0.548 s (BG/P) and 0.394 s (BG/L) of wall clock time per 1 ms of simulation time. To our knowledge, these simulations show strong scaling to substantially higher numbers of cores than reported previously for organ-level simulation of the heart, thus significantly reducing run times. The ability to reduce runtimes could play a critical role in enabling wider use of cardiac models in research and clinical applications.
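The hierarchical segmentation can be sketched as follows; this minimal version splits a point cloud at coordinate medians along the longest axis, a simplification of the anatomical-model decomposition described above:

```python
def orthogonal_recursive_bisection(points, n_parts):
    """ORB sketch: recursively split the point set in half along its
    longest axis until n_parts (assumed a power of two) balanced
    subvolumes remain, one per core."""
    if n_parts == 1:
        return [points]
    # Choose the axis with the largest extent, then split at the median.
    axis = max(range(3), key=lambda a: max(p[a] for p in points) -
                                       min(p[a] for p in points))
    ordered = sorted(points, key=lambda p: p[axis])
    half = len(ordered) // 2
    return (orthogonal_recursive_bisection(ordered[:half], n_parts // 2) +
            orthogonal_recursive_bisection(ordered[half:], n_parts // 2))

# 64 points on a lattice stretched along x, divided among 8 "cores".
pts = [(float(i), float(j) * 0.1, float(k) * 0.1)
       for i in range(4) for j in range(4) for k in range(4)]
parts = orthogonal_recursive_bisection(pts, 8)
```

Splitting at the median guarantees equal point counts per subvolume, which is why the benchmarks above show near-perfect load balance even for irregular anatomical geometry.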

  9. A multi-scale model for geared transmission aero-thermodynamics

    NASA Astrophysics Data System (ADS)

    McIntyre, Sean M.

A multi-scale, multi-physics computational tool for the simulation of high-performance gearbox aero-thermodynamics was developed and applied to equilibrium and pathological loss-of-lubrication performance simulation. The physical processes at play in these systems include multiphase compressible flow of the air and lubricant within the gearbox, meshing kinematics and tribology, as well as heat transfer by conduction, and free and forced convection. These physics are coupled across their representative space and time scales in the computational framework developed in this dissertation. These scales span eight orders of magnitude, from the thermal response of the full gearbox O(10^0 m; 10^2 s), through effects at the tooth passage time scale O(10^-2 m; 10^-4 s), down to tribological effects on the meshing gear teeth O(10^-6 m; 10^-6 s). Direct numerical simulation of these coupled physics and scales is intractable. Accordingly, a scale-segregated simulation strategy was developed by partitioning and treating the contributing physical mechanisms as sub-problems, each with associated space and time scales, and appropriate coupling mechanisms. These are: (1) the long time scale thermal response of the system, (2) the multiphase (air, droplets, and film) aerodynamic flow and convective heat transfer within the gearbox, (3) the high-frequency, time-periodic thermal effects of gear tooth heating while in mesh and its subsequent cooling through the rest of rotation, (4) meshing effects including tribology and contact mechanics. The overarching goal of this dissertation was to develop software and analysis procedures for gearbox loss-of-lubrication performance. To accommodate these four physical effects and their coupling, each is treated in the CFD code as a sub-problem. These physics modules are coupled algorithmically. 
Specifically, the high-frequency conduction analysis derives its local heat transfer coefficient and near-wall air temperature boundary conditions from a quasi-steady cyclic-symmetric simulation of the internal flow. This high-frequency conduction solution is coupled directly with a model for the meshing friction, developed by a collaborator, which was adapted for use in a finite-volume CFD code. The local surface heat flux on solid surfaces is calculated by time-averaging the heat flux in the high-frequency analysis. This serves as a fixed-flux boundary condition in the long time scale conduction module. The temperature distribution from this long time scale heat transfer calculation serves as a boundary condition for the internal convection simulation, and as the initial condition for the high-frequency heat transfer module. Using this multi-scale model, simulations were performed for equilibrium and loss-of-lubrication operation of the NASA Glenn Research Center test stand. Results were compared with experimental measurements. In addition to the multi-scale model itself, several other specific contributions were made. Eulerian models for droplets and wall-films were developed and implemented in the CFD code. A novel approach to retaining liquid film on the solid surfaces, and strategies for its mass exchange with droplets, were developed and verified. Models for interfacial transfer between droplets and wall-film were implemented, and include the effects of droplet deposition, splashing, bouncing, as well as film breakup. These models were validated against airfoil data. To mitigate the observed slow convergence of CFD simulations of the enclosed aerodynamic flows within gearboxes, Fourier stability analysis was applied to the SIMPLE-C fractional-step algorithm. From this, recommendations to accelerate the convergence rate through enhanced pressure-velocity coupling were made. These were shown to be effective. 
A fast-running finite-volume reduced-order model of the gearbox aero-thermodynamics was developed, and coupled with the tribology model to investigate the sensitivity of loss-of-lubrication predictions to various model and physical parameters. This sensitivity study was instrumental in guiding efforts toward improving the accuracy of the multi-scale model without undue increase in computational cost. In addition, the reduced-order model is now used extensively by a collaborator in tribology model development and testing. Experimental measurements of high-speed gear windage in partially and fully shrouded configurations were performed to address the paucity of available validation data. This measurement program provided measurements of windage loss for a gear of design-relevant size and operating speed, as well as guidance for increasing the accuracy of future measurements.
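
    The algorithmic coupling among the four sub-problems can be illustrated with a toy fixed-point loop. All module names, closures, and constants below are hypothetical stand-ins for the dissertation's actual solvers, sketched only to show how the boundary and initial conditions circulate between scales.

```python
# Toy sketch of the scale-segregated coupling loop described above.
# All module names, closures, and constants are illustrative stand-ins.

def internal_flow(wall_temp):
    """(2) Quasi-steady internal flow: returns a heat transfer coefficient
    (W/m^2/K) and a near-wall air temperature (K); toy closure."""
    return 50.0, 0.5 * (wall_temp + 320.0)

def high_frequency_conduction(htc, t_air, friction_flux):
    """(3)+(4) High-frequency conduction with meshing-friction heating:
    returns the time-averaged net surface heat flux (W/m^2)."""
    return friction_flux - htc * (400.0 - t_air)

def long_time_conduction(q_avg):
    """(1) Long-time-scale conduction: lumped wall-temperature response to
    the time-averaged flux imposed as a fixed-flux boundary condition."""
    return 300.0 + q_avg / 100.0

def couple_scales(n_outer=3):
    """Fixed-point iteration passing boundary/initial data between modules."""
    wall_temp = 300.0
    for _ in range(n_outer):
        htc, t_air = internal_flow(wall_temp)
        q_avg = high_frequency_conduction(htc, t_air, friction_flux=5.0e3)
        wall_temp = long_time_conduction(q_avg)
    return wall_temp
```

The point of the sketch is structural: each outer pass hands the long-time-scale temperature to the flow module, time-averages the high-frequency flux, and feeds it back as a fixed-flux boundary condition.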

  10. Hybrid stochastic simplifications for multiscale gene networks.

    PubMed

    Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu

    2009-09-07

    Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3], which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
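
    The exact-simulation cost the abstract refers to can be seen in a minimal Gillespie direct-method sketch for a one-species birth-death gene expression model; every discrete jump is simulated individually, so runtime scales with the jump count. The rate constants are illustrative, not taken from the paper.

```python
import random

def gillespie_birth_death(k_prod=10.0, k_deg=0.1, x0=0, t_end=100.0, seed=1):
    """Exact stochastic simulation (Gillespie direct method) of a one-species
    birth-death model: cost grows with the number of discrete jumps performed."""
    rng = random.Random(seed)
    t, x, n_jumps = 0.0, x0, 0
    while t < t_end:
        a_prod, a_deg = k_prod, k_deg * x   # reaction propensities
        a_total = a_prod + a_deg
        t += rng.expovariate(a_total)       # waiting time to the next jump
        if rng.random() < a_prod / a_total:
            x += 1                          # production event
        else:
            x -= 1                          # degradation event
        n_jumps += 1
    return x, n_jumps
```

A hybrid simplification of the kind the paper describes would replace the high-propensity jumps with a continuous (diffusion) approximation and keep only rare events discrete, cutting the jump count dramatically.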

  11. Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories

    NASA Astrophysics Data System (ADS)

    Park, Kiwan; Blackman, Eric G.; Subramanian, Kandaswamy

    2013-05-01

    Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.

  12. Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories.

    PubMed

    Park, Kiwan; Blackman, Eric G; Subramanian, Kandaswamy

    2013-05-01

    Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.

  13. Multi-scale simulations of space problems with iPIC3D

    NASA Astrophysics Data System (ADS)

    Lapenta, Giovanni; Bettarini, Lapo; Markidis, Stefano

    The implicit Particle-in-Cell method for the computer simulation of space plasma, and its implementation in a three-dimensional parallel code, called iPIC3D, are presented. The implicit integration in time of the Vlasov-Maxwell system removes the numerical stability constraints and enables kinetic plasma simulations at magnetohydrodynamics scales. Simulations of magnetic reconnection in plasma are presented to show the effectiveness of the algorithm. In particular, we show a number of simulations done for large scale 3D systems using the physical mass ratio for Hydrogen. Most notably, one simulation treats kinetically a box of tens of Earth radii in each direction and was conducted using about 16,000 processors of the Pleiades NASA computer. The work is conducted in collaboration with the MMS-IDS theory team from the University of Colorado (M. Goldman, D. Newman and L. Andersson). Reference: Stefano Markidis, Giovanni Lapenta, Rizwan-uddin, "Multi-scale simulations of plasma with iPIC3D," Mathematics and Computers in Simulation, available online 17 October 2009, http://dx.doi.org/10.1016/j.matcom.2009.08.038

  14. Enhanced sampling by multiple molecular dynamics trajectories: carbonmonoxy myoglobin 10 μs A0-->A(1-3) transition from ten 400 picosecond simulations.

    PubMed

    Loccisano, Anne E; Acevedo, Orlando; DeChancie, Jason; Schulze, Brita G; Evanseck, Jeffrey D

    2004-05-01

    The utility of multiple trajectories to extend the time scale of molecular dynamics simulations is reported for the spectroscopic A-states of carbonmonoxy myoglobin (MbCO). Experimentally, the A0-->A(1-3) transition has been observed to take 10 μs at 300 K, which is beyond the time scale of standard molecular dynamics simulations. To simulate this transition, 10 short (400 ps) and two longer (1.2 ns) molecular dynamics trajectories, starting from five different crystallographic and solution-phase structures with random initial velocities, centered in a 37 Å radius sphere of water, have been used to sample the native fold of MbCO. Analysis of the ensemble of structures gathered over the cumulative 5.6 ns reveals two biomolecular motions involving the side chains of His64 and Arg45 that explain the spectroscopic states of MbCO. The 10 μs A0-->A(1-3) transition involves the motion of His64, where the distance between His64 and CO is found to vary by up to 8.8 +/- 1.0 Å during the transition of His64 from the ligand (A(1-3)) to bulk solvent (A0). The His64 motion occurs within a single trajectory only once; however, the multiple trajectories populate the spectroscopic A-states fully. Consequently, multiple independent molecular dynamics simulations have been found to extend biomolecular motion from 5 ns of total simulation to experimental phenomena on the microsecond time scale.
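
    The ensemble idea, many short independent trajectories jointly sampling a transition too rare for any single run, can be sketched with a toy two-state process. The state labels, switching probability, and trajectory counts below are illustrative only, not parameters from the MbCO study.

```python
import random

def ensemble_states(n_traj=10, n_steps=400, p_switch=0.005, seed=0):
    """Toy two-state dynamics: any single short trajectory rarely switches
    between A0 and A1, but an ensemble of independent runs started from
    randomized conditions populates both spectroscopic states."""
    rng = random.Random(seed)
    visited = set()
    for _ in range(n_traj):
        state = rng.choice(["A0", "A1"])        # randomized initial condition
        for _ in range(n_steps):
            if rng.random() < p_switch:         # rare transition event
                state = "A1" if state == "A0" else "A0"
            visited.add(state)
    return visited
```

With a per-step switching probability of 0.005, an individual 400-step run has only about an 86% chance of ever switching, yet the ten-run ensemble observes both states essentially always, mirroring the aggregate sampling described above.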

  15. Scale-Up of Lubricant Mixing Process by Using V-Type Blender Based on Discrete Element Method.

    PubMed

    Horibe, Masashi; Sonoda, Ryoichi; Watano, Satoru

    2018-01-01

    A method for scale-up of a lubricant mixing process in a V-type blender was proposed. Magnesium stearate was used as the lubricant, and the lubricant mixing experiment was conducted using three scales of V-type blenders (1.45, 21 and 130 L) under the same fill level and Froude (Fr) number. However, the properties of the lubricated mixtures and tablets did not correspond across scales with either the mixing time or the total revolution number. To find the optimum scale-up factor, discrete element method (DEM) simulations of the three scales of V-type blender mixing were conducted, and the total travel distance of particles at the different scales was calculated. The properties of the lubricated mixture and tablets obtained from the scale-up experiment were well correlated with the mixing time determined by the total travel distance. It was found that a scale-up simulation based on the travel distance of particles is valid for lubricant mixing scale-up processes.
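
    A constant-Froude-number scale-up rule of the kind used above can be sketched as follows. The abstract does not give its formulas, so the Fr definition and the travel-per-revolution assumption here are standard textbook choices, not the authors' exact procedure.

```python
import math

def rpm_for_constant_froude(rpm_small, d_small, d_large):
    """Blender speed at the large scale preserving Fr ~ N^2 * D / g,
    i.e. N_large = N_small * sqrt(D_small / D_large)."""
    return rpm_small * math.sqrt(d_small / d_large)

def revolutions_for_equal_travel(rev_small, d_small, d_large):
    """If particle travel per revolution scales with vessel size D, matching
    the total travel distance needs fewer revolutions at the larger scale."""
    return rev_small * d_small / d_large
```

The contrast between the two functions is the abstract's point: holding Fr fixed does not by itself fix the mixing time, whereas matching total travel distance gives a scale-dependent revolution count.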

  16. Nonuniversal star formation efficiency in turbulent ISM

    DOE PAGES

    Semenov, Vadim A.; Kravtsov, Andrey V.; Gnedin, Nickolay Y.

    2016-07-29

    Here, we present a study of a star formation prescription in which the star formation efficiency depends on local gas density and turbulent velocity dispersion, as suggested by direct simulations of star formation in turbulent giant molecular clouds (GMCs). We test the model using a simulation of an isolated Milky Way-sized galaxy with a self-consistent treatment of turbulence on unresolved scales. We show that this prescription predicts a wide variation of the local star formation efficiency per free-fall time, ε_ff ~ 0.1-10%, and gas depletion time, t_dep ~ 0.1-10 Gyr. In addition, it predicts an effective density threshold for star formation due to suppression of ε_ff in warm diffuse gas stabilized by thermal pressure. We show that the model predicts star formation rates in agreement with observations from the scales of individual star-forming regions to kiloparsec scales. This agreement is non-trivial, as the model was not tuned in any way and the predicted star formation rates on all scales are determined by the distribution of the GMC-scale densities and turbulent velocities σ in the cold gas within the galaxy, which is shaped by galactic dynamics. The broad agreement of the star formation prescription calibrated in the GMC-scale simulations with observations both gives credence to such simulations and promises to put star formation modeling in galaxy formation simulations on a much firmer theoretical footing.
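
    The efficiency per free-fall time quoted above is defined relative to the local free-fall time t_ff = sqrt(3π / (32 G ρ)). A small sketch (CGS units; the example density is illustrative) shows how ε_ff maps to a depletion time:

```python
import math

G_CGS = 6.674e-8   # gravitational constant, cm^3 g^-1 s^-2

def free_fall_time(rho):
    """Local free-fall time t_ff = sqrt(3*pi / (32*G*rho)); rho in g/cm^3."""
    return math.sqrt(3.0 * math.pi / (32.0 * G_CGS * rho))

def depletion_time(rho, eps_ff):
    """Gas depletion time t_dep = t_ff / eps_ff, in seconds."""
    return free_fall_time(rho) / eps_ff
```

For molecular gas at roughly n ~ 100 cm^-3 (ρ ≈ 3.8e-22 g/cm^3), t_ff is a few Myr, so ε_ff of order 1% gives t_dep of order 0.1-1 Gyr, consistent with the range quoted in the abstract.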

  17. The detection of local irreversibility in time series based on segmentation

    NASA Astrophysics Data System (ADS)

    Teng, Yue; Shang, Pengjian

    2018-06-01

    We propose a strategy for the detection of local irreversibility in stationary time series based on segmentation at multiple scales. The detection is useful for evaluating the displacement of irreversibility toward local skewness. By means of this method, we can effectively examine the local irreversible fluctuations of a time series as the scale changes. The method was applied to simulated nonlinear signals generated by the ARFIMA process and the logistic map to show how the irreversibility functions react to the increase of the multiple scale. The method was also applied to financial market series, i.e., American, Chinese and European markets. The local irreversibility for different markets demonstrates distinct characteristics. Simulations and real data support the need to explore local irreversibility.
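
    A simple irreversibility statistic of this flavor is the scale-dependent skewness of increments, which changes sign under time reversal; this is a generic proxy for illustration, not necessarily the exact measure used in the paper.

```python
def increment_asymmetry(x, scale):
    """Skewness of the increments x[i+scale] - x[i]; a nonzero value signals
    time irreversibility, since reversing the series flips the sign."""
    diffs = [x[i + scale] - x[i] for i in range(len(x) - scale)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / n
    if var == 0.0:
        return 0.0
    return sum((d - mean) ** 3 for d in diffs) / n / var ** 1.5
```

Sweeping `scale` over a range of values gives the kind of multiscale irreversibility profile the abstract discusses; for a reversible (e.g. Gaussian linear) process the statistic fluctuates around zero at every scale.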

  18. Effects of Eddy Viscosity on Time Correlations in Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    He, Guowei; Rubinstein, R.; Wang, Lian-Ping; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    Subgrid-scale (SGS) models for large eddy simulation (LES) have generally been evaluated by their ability to predict single-time statistics of turbulent flows such as kinetic energy and Reynolds stresses. Recent applications of large eddy simulation to the evaluation of sound sources in turbulent flows, a problem in which time correlations determine the frequency distribution of acoustic radiation, suggest that subgrid models should also be evaluated by their ability to predict time correlations in turbulent flows. This paper compares the two-point, two-time Eulerian velocity correlation evaluated from direct numerical simulation (DNS) with that evaluated from LES, using a spectral eddy viscosity, for isotropic homogeneous turbulence. It is found that the LES fields are too coherent, in the sense that their time correlations decay more slowly than the corresponding time correlations in the DNS fields. This observation is confirmed by theoretical estimates of time correlations using the Taylor expansion technique. The reason for the slower decay is that the eddy viscosity does not include the random backscatter, which decorrelates fluid motion at large scales. An effective eddy viscosity associated with time correlations is formulated, to which the eddy viscosity associated with energy transfer is a leading-order approximation.
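
    The two-time correlation whose decay is compared between DNS and LES can be estimated from a single-point velocity record as follows; this is a minimal estimator sketch assuming a zero-mean stationary signal, not the paper's spectral machinery.

```python
def time_correlation(u, lag):
    """Normalized two-time correlation C(tau) = <u(t) u(t+tau)> / <u(t)^2>
    estimated from a single-point record u (assumed zero-mean, stationary)."""
    n = len(u) - lag
    num = sum(u[i] * u[i + lag] for i in range(n)) / n
    den = sum(v * v for v in u) / len(u)
    return num / den
```

Applied to DNS and LES records of the same flow, a slower fall-off of C(τ) with lag in the LES record is exactly the "too coherent" behavior reported above.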

  19. Event detection and sub-state discovery from biomolecular simulations using higher-order statistics: application to enzyme adenylate kinase.

    PubMed

    Ramanathan, Arvind; Savol, Andrej J; Agarwal, Pratul K; Chennubhotla, Chakra S

    2012-11-01

    Biomolecular simulations at millisecond and longer time-scales can provide vital insights into functional mechanisms. Because post-simulation analyses of such large trajectory datasets can be a limiting factor in obtaining biological insights, there is an emerging need to identify key dynamical events and relate these events to the biological function online, that is, as simulations are progressing. Recently, we have introduced a novel computational technique, quasi-anharmonic analysis (QAA) (Ramanathan et al., PLoS One 2011;6:e15827), for partitioning the conformational landscape into a hierarchy of functionally relevant sub-states. The unique capabilities of QAA are enabled by exploiting anharmonicity in the form of fourth-order statistics for characterizing atomic fluctuations. In this article, we extend QAA for analyzing long time-scale simulations online. In particular, we present HOST4MD--a higher-order statistical toolbox for molecular dynamics simulations, which (1) identifies key dynamical events as simulations are in progress, (2) explores potential sub-states, and (3) identifies conformational transitions that enable the protein to access those sub-states. We demonstrate HOST4MD on microsecond timescale simulations of the enzyme adenylate kinase in its apo state. HOST4MD identifies several conformational events in these simulations, revealing how the intrinsic coupling between the three subdomains (LID, CORE, and NMP) changes during the simulations. Further, it also identifies an inherent asymmetry in the opening/closing of the two binding sites. We anticipate that HOST4MD will provide a powerful and extensible framework for detecting biophysically relevant conformational coordinates from long time-scale simulations. Copyright © 2012 Wiley Periodicals, Inc.
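
    The fourth-order statistics that QAA exploits can be illustrated with the excess kurtosis of a single coordinate, which vanishes for Gaussian (harmonic) fluctuations and deviates from zero on an anharmonic landscape. This is a generic sketch of the statistic, not HOST4MD's implementation.

```python
def excess_kurtosis(x):
    """Fourth-order statistic used to flag anharmonicity: zero for Gaussian
    (harmonic) fluctuations, negative for flat/bimodal distributions,
    positive for heavy-tailed ones."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / var ** 2 - 3.0
```

A coordinate hopping between two sub-states, for instance, produces a bimodal distribution with strongly negative excess kurtosis, which is the kind of signature such an analysis uses to propose sub-state partitions.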

  20. Enabling Functional Neural Circuit Simulations with Distributed Computing of Neuromodulated Plasticity

    PubMed Central

    Potjans, Wiebke; Morrison, Abigail; Diesmann, Markus

    2010-01-01

    A major puzzle in the field of computational neuroscience is how to relate system-level learning in higher organisms to synaptic plasticity. Recently, plasticity rules depending not only on pre- and post-synaptic activity but also on a third, non-local neuromodulatory signal have emerged as key candidates to bridge the gap between the macroscopic and the microscopic level of learning. Crucial insights into this topic are expected to be gained from simulations of neural systems, as these allow the simultaneous study of the multiple spatial and temporal scales that are involved in the problem. In particular, synaptic plasticity can be studied during the whole learning process, i.e., on a time scale of minutes to hours and across multiple brain areas. Implementing neuromodulated plasticity in large-scale network simulations where the neuromodulatory signal is dynamically generated by the network itself is challenging, because the network structure is commonly defined purely by the connectivity graph without explicit reference to the embedding of the nodes in physical space. Furthermore, the simulation of networks with realistic connectivity entails the use of distributed computing. A neuromodulated synapse must therefore be informed in an efficient way about the neuromodulatory signal, which is typically generated by a population of neurons located on different machines than either the pre- or post-synaptic neuron. Here, we develop a general framework to solve the problem of implementing neuromodulated plasticity in a time-driven distributed simulation, without reference to a particular implementation language, neuromodulator, or neuromodulated plasticity mechanism. We implement our framework in the simulator NEST and demonstrate excellent scaling up to 1024 processors for simulations of a recurrent network incorporating neuromodulated spike-timing dependent plasticity. PMID:21151370

  1. Cooling of solar flares plasmas. 1: Theoretical considerations

    NASA Technical Reports Server (NTRS)

    Cargill, Peter J.; Mariska, John T.; Antiochos, Spiro K.

    1995-01-01

    Theoretical models of the cooling of flare plasma are reexamined. By assuming that the cooling occurs in two separate phases, in which conduction and radiation, respectively, dominate, a simple analytic formula for the cooling time of a flare plasma is derived. Unlike earlier order-of-magnitude scalings, this result accounts for the effect of the evolution of the loop plasma parameters on the cooling time. When the conductive cooling leads to an 'evaporation' of chromospheric material, the cooling time scales as L^(5/6)/p^(1/6), where L and p are the loop length and pressure at the start of the coronal phase (defined as the time of maximum temperature). When the conductive cooling is static, the cooling time scales as L^(3/4)n^(1/4). In deriving these results, use was made of an important scaling law (T proportional to n^2) during the radiative cooling phase that was first noted in one-dimensional hydrodynamic numerical simulations (Serio et al. 1991; Jakimiec et al. 1992). Our own simulations show that this result is restricted to approximately the radiative loss function of Rosner, Tucker, & Vaiana (1978). For different radiative loss functions, other scalings result, with T and n scaling almost linearly when the radiative loss falls off as T^(-2). It is shown that these scaling laws are part of a class of analytic solutions developed by Antiochos (1980).
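
    The two conductive-phase scalings can be encoded directly for exploring parameter dependence; the normalization constants are arbitrary here, since the abstract quotes only the proportionalities.

```python
def evaporative_cooling_time(loop_length, pressure):
    """Cooling-time scaling tau ~ L^(5/6) / p^(1/6) for the evaporative
    conductive phase (arbitrary normalization)."""
    return loop_length ** (5.0 / 6.0) / pressure ** (1.0 / 6.0)

def static_cooling_time(loop_length, density):
    """Cooling-time scaling tau ~ L^(3/4) * n^(1/4) for static conductive
    cooling (arbitrary normalization)."""
    return loop_length ** 0.75 * density ** 0.25
```

The weak (1/6 and 1/4 power) dependence on pressure and density is the practical takeaway: loop length dominates the cooling time in both regimes.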

  2. Analysis of large-scale tablet coating: Modeling, simulation and experiments.

    PubMed

    Boehling, P; Toschkoff, G; Knop, K; Kleinebudde, P; Just, S; Funke, A; Rehbaum, H; Khinast, J G

    2016-07-30

    This work concerns a tablet coating process in an industrial-scale drum coater. We set up a full-scale Design of Simulation Experiment (DoSE) using the Discrete Element Method (DEM) to investigate the influence of various process parameters (the spray rate, the number of nozzles, the rotation rate and the drum load) on the coefficient of inter-tablet coating variation (cv,inter). The coater was filled with up to 290 kg of material, which is equivalent to 1,028,369 tablets. To mimic the tablet shape, the glued sphere approach was followed, and each modeled tablet consisted of eight spheres. We simulated the process via the eXtended Particle System (XPS), proving that it is possible to accurately simulate the tablet coating process on the industrial scale. The process time required to reach a uniform tablet coating was extrapolated based on the simulated data and was in good agreement with experimental results. The results are provided at various levels of details, from thorough investigation of the influence that the process parameters have on the cv,inter and the amount of tablets that visit the spray zone during the simulated 90 s to the velocity in the spray zone and the spray and bed cycle time. It was found that increasing the number of nozzles and decreasing the spray rate had the highest influence on the cv,inter. Although increasing the drum load and the rotation rate increased the tablet velocity, it did not have a relevant influence on the cv,inter and the process time. Copyright © 2015 Elsevier B.V. All rights reserved.
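
    The coefficient of inter-tablet coating variation studied above is, in its usual definition, the relative standard deviation of per-tablet coating mass. A minimal sketch, assuming that standard definition:

```python
def cv_inter(coating_masses):
    """Coefficient of inter-tablet coating variation: relative (sample)
    standard deviation of per-tablet coating mass, in percent."""
    n = len(coating_masses)
    mean = sum(coating_masses) / n
    var = sum((m - mean) ** 2 for m in coating_masses) / (n - 1)
    return 100.0 * var ** 0.5 / mean
```

In a DEM post-processing step, `coating_masses` would be the accumulated spray mass per simulated tablet, and the extrapolation in the abstract follows from how this quantity decays with coating time.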

  3. A methodology towards virtualisation-based high performance simulation platform supporting multidisciplinary design of complex products

    NASA Astrophysics Data System (ADS)

    Ren, Lei; Zhang, Lin; Tao, Fei; (Luke) Zhang, Xiaolong; Luo, Yongliang; Zhang, Yabin

    2012-08-01

    Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve high efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology to realise a HPS platform. This research is driven by the issues concerning large-scale simulation resources deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources and highly reliable simulation with fault tolerance. A framework of virtualisation-based simulation platform (VSIM) is first proposed. Then the article investigates and discusses key approaches in VSIM, including simulation resources modelling, a method for automatically deploying simulation resources for dynamic construction of system environment, and a live migration mechanism in case of faults in run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) result in a great reduction in deployment time and an increased flexibility for simulation environment construction and (3) achieve fault tolerant simulation.

  4. Parallel methodology to capture cyclic variability in motored engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameen, Muhsin M.; Yang, Xiaofeng; Kuo, Tang-Wei

    2016-07-28

    Numerical prediction of cycle-to-cycle variability (CCV) in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flowfield, and (ii) CCV is experienced over long timescales and hence the simulations need to be performed for hundreds of consecutive cycles. In this study, a new methodology is proposed to dissociate this long time-scale problem into several shorter time-scale problems, which can considerably reduce the computational time without sacrificing the fidelity of the simulations. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing the simulation parameters such as the initial and boundary conditions. It is shown that by perturbing the initial velocity field effectively based on the intensity of the in-cylinder turbulence, the mean and variance of the in-cylinder flowfield are captured reasonably well. Adding perturbations in the initial pressure field and the boundary pressure improves the predictions. It is shown that this new approach is able to give accurate predictions of the flowfield statistics in less than one-tenth of the time required for the conventional approach of simulating consecutive engine cycles.
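
    The perturbed-parallel-cycles idea can be sketched with a toy cycle model: independent runs receive initial-condition perturbations scaled by turbulence intensity, and cycle-to-cycle statistics are collected afterwards. The scalar response function below is a hypothetical stand-in for a real LES engine cycle.

```python
import random

def run_cycle(base_velocity, perturbation, rng):
    """Stand-in for one single-cycle engine simulation returning a scalar
    flow metric; a real LES cycle would replace this toy response."""
    return base_velocity + perturbation + 0.1 * rng.gauss(0.0, 1.0)

def parallel_cycles(n_cycles=100, base_velocity=10.0, turb_intensity=0.5, seed=3):
    """Perturb each cycle's initial velocity field by an amount scaled with
    turbulence intensity; the runs share no state, so they can execute
    concurrently. Returns cycle-to-cycle mean and variance."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_cycles):
        perturbation = rng.gauss(0.0, turb_intensity)
        results.append(run_cycle(base_velocity, perturbation, rng))
    mean = sum(results) / n_cycles
    var = sum((r - mean) ** 2 for r in results) / (n_cycles - 1)
    return mean, var
```

Because the cycles are statistically independent rather than consecutive, the wall-clock time is set by one cycle plus the statistics pass, which is the source of the order-of-magnitude speedup reported above.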

  5. A spectral radius scaling semi-implicit iterative time stepping method for reactive flow simulations with detailed chemistry

    NASA Astrophysics Data System (ADS)

    Xie, Qing; Xiao, Zhixiang; Ren, Zhuyin

    2018-09-01

    A spectral radius scaling semi-implicit time stepping scheme has been developed for simulating unsteady compressible reactive flows with detailed chemistry, in which the spectral radius in the LUSGS scheme has been augmented to account for viscous/diffusive and reactive terms and a scalar matrix is proposed to approximate the chemical Jacobian using the minimum species destruction timescale. The performance of the semi-implicit scheme, together with a third-order explicit Runge-Kutta scheme and a Strang splitting scheme, has been investigated in auto-ignition and laminar premixed and nonpremixed flames of three representative fuels, namely hydrogen, methane, and n-heptane. Results show that the minimum species destruction time scale can well represent the smallest chemical time scale in reactive flows and the proposed scheme can significantly increase the allowable time steps in simulations. The scheme is stable when the time step is as large as 10 μs, which is about three to five orders of magnitude larger than the smallest time scales in the various tests considered. For the test flames considered, the semi-implicit scheme achieves second-order accuracy in time. Moreover, the errors in quantities of interest are smaller than those from the Strang splitting scheme, indicating the accuracy gained when the reaction and transport terms are solved coupled. Results also show that the relative efficiency of different schemes depends on fuel mechanisms and test flames. When the minimum time scale in reactive flows is governed by transport processes instead of chemical reactions, the proposed semi-implicit scheme is more efficient than the splitting scheme. Otherwise, the relative efficiency depends on the cost of sub-iterations for convergence within each time step and of the integration for the chemistry substep. Then, the capability of the compressible reacting flow solver and the proposed semi-implicit scheme is demonstrated by capturing hydrogen detonation waves.
Finally, the performance of the proposed method is demonstrated in a two-dimensional hydrogen/air diffusion flame.
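
    The minimum species destruction time scale used to scale the chemical Jacobian can be written, in its common form τ_i = [X_i]/D_i with D_i the species destruction rate, as a short helper. This is a sketch of that standard definition; the paper's exact scaling details may differ.

```python
def min_destruction_timescale(concentrations, destruction_rates, floor=1e-30):
    """tau_i = [X_i] / D_i for each species (rates floored to avoid division
    by zero); the minimum over species approximates the stiffest chemical
    time scale in the mixture."""
    return min(c / max(d, floor)
               for c, d in zip(concentrations, destruction_rates))
```

A minor radical present in trace amounts but destroyed quickly dominates the minimum, which is why this scalar is a useful proxy for chemical stiffness when building the implicit scaling.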

  6. The island coalescence problem: Scaling of reconnection in extended fluid models including higher-order moments

    DOE PAGES

    Ng, Jonathan; Huang, Yi -Min; Hakim, Ammar; ...

    2015-11-05

    As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large scale particle-in-cell simulations of island coalescence have shown that the time averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform the complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two-fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.

  7. Combined aerodynamic and structural dynamic problem emulating routines (CASPER): Theory and implementation

    NASA Technical Reports Server (NTRS)

    Jones, William H.

    1985-01-01

    The Combined Aerodynamic and Structural Dynamic Problem Emulating Routines (CASPER) is a collection of data-base modification computer routines that can be used to simulate Navier-Stokes flow through realistic, time-varying internal flow fields. The Navier-Stokes equation used involves calculations in all three dimensions and retains all viscous terms. The only term neglected in the current implementation is gravitation. The solution approach is of an iterative, time-marching nature. Calculations are based on Lagrangian aerodynamic elements (aeroelements). It is assumed that the relationships between a particular aeroelement and its five nearest neighbor aeroelements are sufficient to make a valid simulation of Navier-Stokes flow on a small scale and that the collection of all small-scale simulations makes a valid simulation of a large-scale flow. In keeping with these assumptions, it must be noted that CASPER produces an imitation or simulation of Navier-Stokes flow rather than a strict numerical solution of the Navier-Stokes equation. CASPER is written to operate under the Parallel, Asynchronous Executive (PAX), which is described in a separate report.

  8. Scaling NS-3 DCE Experiments on Multi-Core Servers

    DTIC Science & Technology

    2016-06-15

    that work well together. 3.2 Simulation Server Details: We ran the simulations on a Dell® PowerEdge M520 blade server [8] running Ubuntu Linux 14.04... To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server... MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on

  9. Turbulence modeling and combustion simulation in porous media under high Peclet number

    NASA Astrophysics Data System (ADS)

    Moiseev, Andrey A.; Savin, Andrey V.

    2018-05-01

    Turbulence modelling in porous flows and combustion remains incompletely understood. Undoubtedly, conventional turbulence models must work well under high Peclet numbers when the porous channel shape is represented in detail. Nevertheless, true turbulent mixing takes place at micro-scales only, and dispersion mixing works at macro-scales almost independently of true turbulence. The dispersion mechanism is characterized by a definite space scale (the scale of the porous structure) and a definite velocity scale (the filtration velocity). The porous structure is usually stochastic, and this circumstance allows applying the analogy between space-time-stochastic true turbulence and the dispersion flow, which is stochastic in space only, when porous flow is simulated at the macro-scale level. Additionally, the mentioned analogy allows applying well-known turbulent combustion models in simulations of porous combustion under high Peclet numbers.

  10. Multi-Scale Analysis for Characterizing Near-Field Constituent Concentrations in the Context of a Macro-Scale Semi-Lagrangian Numerical Model

    NASA Astrophysics Data System (ADS)

    Yearsley, J. R.

    2017-12-01

    The semi-Lagrangian numerical scheme employed by RBM, a model for simulating time-dependent, one-dimensional water quality constituents in advection-dominated rivers, is highly scalable both in time and space. Although the model has been used at length scales of 150 meters and time scales of three hours, the majority of applications have been at length scales of 1/16th degree latitude/longitude (about 5 km) or greater and time scales of one day. Applications of the method at these scales have proven successful for characterizing the impacts of climate change on water temperatures in global rivers and on the vulnerability of thermoelectric power plants to changes in cooling water temperatures in large river systems. However, local effects can be very important in terms of ecosystem impacts, particularly in the case of developing mixing zones for wastewater discharges with pollutant loadings limited by regulations imposed by the Federal Water Pollution Control Act (FWPCA). Mixing zone analyses have usually been decoupled from large-scale watershed influences by developing scenarios that represent critical streamflow and weather conditions. By taking advantage of the particle-tracking characteristics of the numerical scheme, RBM can provide results at any point in time within the model domain. We develop a proof of concept for locations in the river network where local impacts such as mixing zones may be important. Simulated results from the semi-Lagrangian numerical scheme are treated as input to a finite difference model of the two-dimensional diffusion equation for water quality constituents such as water temperature or toxic substances. Simulations will provide time-dependent, two-dimensional constituent concentrations in the near-field in response to long-term basin-wide processes. These results could provide decision support to water quality managers for evaluating mixing zone characteristics.
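    The near-field step described in this record can be illustrated with a minimal explicit finite-difference (FTCS) solver for the two-dimensional diffusion equation. This is a generic sketch with made-up grid values, not the RBM code itself:

```python
import numpy as np

def diffuse_2d(c0, D, dx, dt, steps):
    """Explicit (FTCS) update of the 2-D diffusion equation
    dc/dt = D * (d2c/dx2 + d2c/dy2) with fixed boundary values."""
    c = c0.copy()
    r = D * dt / dx**2
    assert r <= 0.25, "FTCS stability requires D*dt/dx^2 <= 1/4 in 2-D"
    for _ in range(steps):
        c[1:-1, 1:-1] += r * (c[2:, 1:-1] + c[:-2, 1:-1]
                              + c[1:-1, 2:] + c[1:-1, :-2]
                              - 4.0 * c[1:-1, 1:-1])
    return c

# Toy near-field: a warm discharge in an ambient temperature field;
# in the coupled setup the boundary values would come from the
# far-field semi-Lagrangian (RBM) output.
grid = np.full((41, 41), 15.0)   # ambient 15 degC everywhere
grid[20, 20] = 30.0              # discharge cell
out = diffuse_2d(grid, D=1.0, dx=1.0, dt=0.2, steps=200)
```

    The fixed boundary cells stand in for the far-field constituent values that the basin-wide model would supply at each time step.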

  11. Block Preconditioning to Enable Physics-Compatible Implicit Multifluid Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Phillips, Edward; Shadid, John; Cyr, Eric; Miller, Sean

    2017-10-01

    Multifluid plasma simulations involve large systems of partial differential equations in which many time-scales ranging over many orders of magnitude arise. Since the fastest of these time-scales may set a restrictively small time-step limit for explicit methods, the use of implicit or implicit-explicit time integrators can be more tractable for obtaining dynamics at time-scales of interest. Furthermore, to enforce properties such as charge conservation and divergence-free magnetic field, mixed discretizations using volume, nodal, edge-based, and face-based degrees of freedom are often employed in some form. Together with the presence of stiff modes due to integrating over fast time-scales, the mixed discretization makes the required linear solves for implicit methods particularly difficult for black box and monolithic solvers. This work presents a block preconditioning strategy for multifluid plasma systems that segregates the linear system based on discretization type and approximates off-diagonal coupling in block diagonal Schur complement operators. By employing multilevel methods for the block diagonal subsolves, this strategy yields algorithmic and parallel scalability which we demonstrate on a range of problems.
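    The segregated solver strategy in this record can be sketched in a few lines of linear algebra: for a 2x2 block system [[A, B], [C, D]], a block-diagonal preconditioner applies the inverse of A to the first block and the inverse of the Schur complement S = D - C A^-1 B to the second. The dense NumPy version below is purely illustrative; real multifluid solvers would apply multilevel approximations of these blocks rather than explicit inverses:

```python
import numpy as np

def schur_block_preconditioner(A, B, C, D):
    """Build the inverse of a block-diagonal Schur-complement
    preconditioner for the 2x2 block system [[A, B], [C, D]]:
    diag(A, S) with S = D - C A^{-1} B."""
    n, m = A.shape[0], D.shape[0]
    Ainv = np.linalg.inv(A)
    S = D - C @ Ainv @ B          # Schur complement of A
    P_inv = np.zeros((n + m, n + m))
    P_inv[:n, :n] = Ainv
    P_inv[n:, n:] = np.linalg.inv(S)
    return P_inv

rng = np.random.default_rng(0)
A = np.eye(4) * 4 + rng.normal(size=(4, 4)) * 0.1
D = np.eye(3) * 2 + rng.normal(size=(3, 3)) * 0.1
B = rng.normal(size=(4, 3)) * 0.5
C = rng.normal(size=(3, 4)) * 0.5
M = np.block([[A, B], [C, D]])
P_inv = schur_block_preconditioner(A, B, C, D)
```

    By construction the first diagonal block of the preconditioned operator P_inv @ M is exactly the identity; in practice the explicit inverses are replaced by algebraic-multigrid subsolves, which is what yields the algorithmic scalability described above.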

  12. Scaling laws and dynamics of bubble coalescence

    NASA Astrophysics Data System (ADS)

    Anthony, Christopher R.; Kamat, Pritish M.; Thete, Sumeet S.; Munro, James P.; Lister, John R.; Harris, Michael T.; Basaran, Osman A.

    2017-08-01

    The coalescence of bubbles and drops plays a central role in nature and industry. During coalescence, two bubbles or drops touch and merge into one as the neck connecting them grows from microscopic to macroscopic scales. The hydrodynamic singularity that arises when two bubbles or drops have just touched and the flows that ensue have been studied thoroughly when two drops coalesce in a dynamically passive outer fluid. In this paper, the coalescence of two identical and initially spherical bubbles, which are idealized as voids that are surrounded by an incompressible Newtonian liquid, is analyzed by numerical simulation. This problem has recently been studied (a) experimentally using high-speed imaging and (b) by asymptotic analysis in which the dynamics is analyzed by determining the growth of a hole in the thin liquid sheet separating the two bubbles. In the latter, advantage is taken of the fact that the flow in the thin sheet of nonconstant thickness is governed by a set of one-dimensional, radial extensional flow equations. While these studies agree on the power law scaling of the variation of the minimum neck radius with time, they disagree with respect to the numerical value of the prefactors in the scaling laws. In order to reconcile these differences and also provide insights into the dynamics that are difficult to probe by either of the aforementioned approaches, simulations are used to access both earlier times than has been possible in the experiments and also later times when asymptotic analysis is no longer applicable. Early times and extremely small length scales are attained in the new simulations through the use of a truncated domain approach. Furthermore, it is shown by direct numerical simulations in which the flow within the bubbles is also determined along with the flow exterior to them that idealizing the bubbles as passive voids has virtually no effect on the scaling laws relating minimum neck radius and time.

  13. Communication: Polymer entanglement dynamics: Role of attractive interactions

    DOE PAGES

    Grest, Gary S.

    2016-10-10

    The coupled dynamics of entangled polymers, which span broad time and length scales, govern their unique viscoelastic properties. To follow chain mobility by numerical simulations from the intermediate Rouse and reptation regimes to the late time diffusive regime, highly coarse grained models with purely repulsive interactions between monomers are widely used since they are computationally the most efficient. In this paper, using large scale molecular dynamics simulations, the effect of including the attractive interaction between monomers on the dynamics of entangled polymer melts is explored for the first time over a wide temperature range. Attractive interactions have little effect on the local packing for all temperatures T and on the chain mobility for T higher than about twice the glass transition Tg. Finally, these results, across a broad range of molecular weight, show that to study the dynamics of entangled polymer melts, the interactions can be treated as purely repulsive, confirming a posteriori the validity of previous studies and opening the way to new large scale numerical simulations.
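    The "purely repulsive" coarse-grained interaction mentioned above is conventionally obtained by cutting and shifting the Lennard-Jones potential at its minimum (the Weeks-Chandler-Andersen, or WCA, form), which removes the attractive well entirely. A small sketch of the two pair potentials in reduced units:

```python
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    """Full Lennard-Jones pair potential (attractive well of depth eps)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

def wca(r, eps=1.0, sigma=1.0):
    """Purely repulsive (WCA) potential: LJ cut and shifted upward by
    eps at its minimum r_c = 2**(1/6) * sigma, so it is zero beyond
    r_c and never negative."""
    rc = 2.0 ** (1.0 / 6.0) * sigma
    return np.where(r < rc, lj(r, eps, sigma) + eps, 0.0)

r = np.linspace(0.9, 2.5, 200)
u_full, u_repulsive = lj(r), wca(r)
```

    The record's conclusion is that, away from the glass transition, melts simulated with either form show the same entanglement dynamics.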

  14. Study of discrete-particle effects in a one-dimensional plasma simulation with the Krook type collision model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Po-Yen; Chen, Liu; Institute for Fusion Theory and Simulation, Zhejiang University, 310027 Hangzhou

    2015-09-15

    The thermal relaxation time of a one-dimensional plasma has been demonstrated to scale with N_D^2 due to discrete particle effects by collisionless particle-in-cell (PIC) simulations, where N_D is the particle number in a Debye length. The N_D^2 scaling is consistent with the theoretical analysis based on the Balescu-Lenard-Landau kinetic equation. However, it was found that the thermal relaxation time is anomalously shortened to scale with N_D while externally introducing the Krook type collision model in the one-dimensional electrostatic PIC simulation. In order to understand the discrete particle effects enhanced by the Krook type collision model, the superposition principle of dressed test particles was applied to derive the modified Balescu-Lenard-Landau kinetic equation. The theoretical results are shown to be in good agreement with the simulation results when the collisional effects dominate the plasma system.

  15. Molecular Dynamics Simulations of Chemical Reactions for Use in Education

    ERIC Educational Resources Information Center

    Qian Xie; Tinker, Robert

    2006-01-01

    One of the simulation engines of an open-source program called the Molecular Workbench, which can simulate thermodynamics of chemical reactions, is described. This type of real-time, interactive simulation and visualization of chemical reactions at the atomic scale could help students understand the connections between chemical reaction equations…

  16. Wavelet-based time series bootstrap model for multidecadal streamflow simulation using climate indicators

    NASA Astrophysics Data System (ADS)

    Erkyihun, Solomon Tassew; Rajagopalan, Balaji; Zagona, Edith; Lall, Upmanu; Nowak, Kenneth

    2016-05-01

    A model to generate stochastic streamflow projections conditioned on quasi-oscillatory climate indices such as Pacific Decadal Oscillation (PDO) and Atlantic Multi-decadal Oscillation (AMO) is presented. Recognizing that each climate index has underlying band-limited components that contribute most of the energy of the signals, we first pursue a wavelet decomposition of the signals to identify and reconstruct these features from annually resolved historical data and proxy based paleoreconstructions of each climate index covering the period from 1650 to 2012. A K-Nearest Neighbor block bootstrap approach is then developed to simulate the total signal of each of these climate index series while preserving its time-frequency structure and marginal distributions. Finally, given the simulated climate signal time series, a K-Nearest Neighbor bootstrap is used to simulate annual streamflow series conditional on the joint state space defined by the simulated climate index for each year. We demonstrate this method by applying it to simulation of streamflow at Lees Ferry gauge on the Colorado River using indices of two large scale climate forcings: Pacific Decadal Oscillation (PDO) and Atlantic Multi-decadal Oscillation (AMO), which are known to modulate the Colorado River Basin (CRB) hydrology at multidecadal time scales. Skill in stochastic simulation of multidecadal projections of flow using this approach is demonstrated.
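    The final conditional resampling step of the record above can be sketched as a K-nearest-neighbor bootstrap: for each simulated climate-index value, one of the k historical years with the closest index is drawn with weights proportional to 1/rank, and its flow is resampled. The toy data and function names below are hypothetical, not the authors' implementation:

```python
import numpy as np

def knn_bootstrap(index_hist, flow_hist, index_sim, k=5, rng=None):
    """For each simulated index value, draw one of the k historical
    years whose index is nearest (weights ~ 1/rank) and return its flow."""
    rng = np.random.default_rng(rng)
    weights = 1.0 / np.arange(1, k + 1)
    weights /= weights.sum()
    flows = []
    for x in index_sim:
        nearest = np.argsort(np.abs(index_hist - x))[:k]  # k nearest years
        flows.append(flow_hist[rng.choice(nearest, p=weights)])
    return np.array(flows)

# Toy record: annual flow loosely tied to a PDO-like index
rng = np.random.default_rng(42)
idx = rng.normal(size=200)
flow = 100.0 + 20.0 * idx + rng.normal(scale=5.0, size=200)
sim = knn_bootstrap(idx, flow, index_sim=np.array([-2.0, 0.0, 2.0]), k=7, rng=1)
```

    Because flows are resampled rather than generated parametrically, the marginal distribution of the historical record is preserved by construction, which is the property the record emphasizes.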

  17. The Physical Origin of Long Gas Depletion Times in Galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Semenov, Vadim A.; Kravtsov, Andrey V.; Gnedin, Nickolay Y.

    2017-08-18

    We present a model that elucidates why gas depletion times in galaxies are long compared to the time scales of the processes driving the evolution of the interstellar medium. We show that global depletion times are not set by any "bottleneck" in the process of gas evolution towards the star-forming state. Instead, depletion times are long because star-forming gas converts only a small fraction of its mass into stars before it is dispersed by dynamical and feedback processes. Thus, complete depletion requires that gas transitions between star-forming and non-star-forming states multiple times. Our model does not rely on the assumption of equilibrium and can be used to interpret trends of depletion times with the properties of observed galaxies and the parameters of star formation and feedback recipes in galaxy simulations. In particular, the model explains the mechanism by which feedback self-regulates star formation rate in simulations and makes it insensitive to the local star formation efficiency. We illustrate our model using the results of an isolated $L_*$-sized disk galaxy simulation that reproduces the observed Kennicutt-Schmidt relation for both molecular and atomic gas. Interestingly, the relation for molecular gas is close to linear on kiloparsec scales, even though a non-linear relation is adopted in simulation cells. This difference is due to stellar feedback, which breaks the self-similar scaling of the gas density PDF with the average gas surface density.
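    The cycling argument of this record reduces to simple arithmetic: if each pass through the star-forming state converts only a fraction eps of the gas into stars, complete depletion takes about 1/eps cycles. A toy version with illustrative numbers (not the paper's calibration):

```python
def depletion_time(t_sf, t_rest, eps):
    """Toy cycling picture: gas alternates between a star-forming state
    (duration t_sf, converting a fraction eps of its mass to stars) and
    a dispersed, non-star-forming state (duration t_rest). Exhausting
    the gas takes ~1/eps cycles, so t_dep ~ (t_sf + t_rest) / eps."""
    return (t_sf + t_rest) / eps

# Short episodes and small per-episode efficiency still give a long
# global depletion time (units are arbitrary, e.g. Myr):
t_dep = depletion_time(t_sf=10.0, t_rest=90.0, eps=0.01)
```

    With these illustrative numbers, depletion takes 100 cycles of 100 time units each, far longer than the lifetime of either state, which is the record's central point.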

  18. An Integrated Computational Materials Engineering Method for Woven Carbon Fiber Composites Preforming Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Weizhao; Ren, Huaqing; Wang, Zequn

    2016-10-19

    An integrated computational materials engineering method is proposed in this paper for analyzing the design and preforming process of woven carbon fiber composites. The goal is to reduce the cost and time needed for the mass production of structural composites. It integrates the simulation methods from the micro-scale to the macro-scale to capture the behavior of the composite material in the preforming process. In this way, the time consuming and high cost physical experiments and prototypes in the development of the manufacturing process can be circumvented. This method contains three parts: the micro-scale representative volume element (RVE) simulation to characterize the material; the metamodeling algorithm to generate the constitutive equations; and the macro-scale preforming simulation to predict the behavior of the composite material during forming. The results show the potential of this approach as a guidance to the design of composite materials and its manufacturing process.

  19. Multi-scale enhancement of climate prediction over land by increasing the model sensitivity to vegetation variability in EC-Earth

    NASA Astrophysics Data System (ADS)

    Alessandri, A.; Catalano, F.; De Felice, M.; van den Hurk, B.; Doblas-Reyes, F. J.; Boussetta, S.; Balsamo, G.; Miller, P. A.

    2016-12-01

    The European consortium earth system model (EC-Earth; http://www.ec-earth.org) has been recently developed to include the dynamics of vegetation. In its original formulation, vegetation variability is simply operated by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th Century) simulations and retrospective predictions to the decadal (5-years), seasonal and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and weather time-scales. Significant improvements of the prediction of 2m temperature and rainfall are also shown over transitional land surface hot spots. 
Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over Sahel, North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration.

  20. Multi-scale enhancement of climate prediction over land by increasing the model sensitivity to vegetation variability in EC-Earth

    NASA Astrophysics Data System (ADS)

    Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul A.

    2017-08-01

    The EC-Earth earth system model has been recently developed to include the dynamics of vegetation. In its original formulation, vegetation variability is simply operated by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (twentieth century) simulations and retrospective predictions to the decadal (5-years), seasonal and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and weather time-scales. Significant improvements of the prediction of 2 m temperature and rainfall are also shown over transitional land surface hot spots. 
Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over Sahel, North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration.

  1. Multi-scale enhancement of climate prediction over land by increasing the model sensitivity to vegetation variability in EC-Earth

    NASA Astrophysics Data System (ADS)

    Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul A.

    2017-04-01

    The EC-Earth earth system model has been recently developed to include the dynamics of vegetation. In its original formulation, vegetation variability is simply operated by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th Century) simulations and retrospective predictions to the decadal (5-years), seasonal and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and weather time-scales. Significant improvements of the prediction of 2m temperature and rainfall are also shown over transitional land surface hot spots. 
Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over Sahel, North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration.

  2. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  3. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE PAGES

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake; ...

    2017-03-24

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  4. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed finite size scaling technique, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular, we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell.
In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution function method.
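    The two routes to the Kirkwood-Buff integral compared in this record can be sketched for the trivial case of an ideal gas, where both should give G near zero: the running integral over g(r) - 1 vanishes identically, and particle-number fluctuations in open sub-volumes are Poissonian. The code is a hypothetical illustration, not the authors' finite size scaling implementation:

```python
import numpy as np

def kb_from_rdf(r, g):
    """Running Kirkwood-Buff integral G = 4*pi * int (g(r) - 1) r^2 dr
    (trapezoidal rule)."""
    f = 4.0 * np.pi * (g - 1.0) * r**2
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))

def kb_from_fluctuations(counts, subvolume):
    """KB integral from particle-number fluctuations in open sub-volumes:
    G = V * (<dN^2> - <N>) / <N>^2 (same-species formula)."""
    n, var = counts.mean(), counts.var()
    return subvolume * (var - n) / n**2

# Route 1: ideal gas has g(r) = 1, so the running integral is zero.
r = np.linspace(0.01, 5.0, 500)
G_rdf = kb_from_rdf(r, np.ones_like(r))

# Route 2: count particles of a uniform (ideal-gas) configuration in
# many randomly placed cubic sub-volumes of a larger box.
rng = np.random.default_rng(0)
rho, L, sub = 1.0, 20.0, 2.0           # density, box edge, sub-box edge
pts = rng.uniform(0, L, size=(int(rho * L**3), 3))
counts = []
for _ in range(2000):
    corner = rng.uniform(0, L - sub, size=3)
    inside = np.all((pts >= corner) & (pts < corner + sub), axis=1)
    counts.append(inside.sum())
G_fluc = kb_from_fluctuations(np.array(counts), sub**3)
```

    For an interacting fluid the fluctuation route only needs the small sub-volumes to be sampled well, which is what makes the finite size scaling approach cheaper than converging a full radial distribution function.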

  5. Modeling of molecular diffusion and thermal conduction with multi-particle interaction in compressible turbulence

    NASA Astrophysics Data System (ADS)

    Tai, Y.; Watanabe, T.; Nagata, K.

    2018-03-01

    A mixing volume model (MVM) originally proposed for molecular diffusion in incompressible flows is extended as a model for molecular diffusion and thermal conduction in compressible turbulence. The model, established for implementation in Lagrangian simulations, is based on the interactions among spatially distributed notional particles within a finite volume. The MVM is tested with the direct numerical simulation of compressible planar jets with the jet Mach number ranging from 0.6 to 2.6. The MVM well predicts molecular diffusion and thermal conduction for a wide range of the size of mixing volume and the number of mixing particles. In the transitional region of the jet, where the scalar field exhibits a sharp jump at the edge of the shear layer, a smaller mixing volume is required for an accurate prediction of mean effects of molecular diffusion. The mixing time scale in the model is defined as the time scale of diffusive effects at a length scale of the mixing volume. The mixing time scale is well correlated for passive scalar and temperature. Probability density functions of the mixing time scale are similar for molecular diffusion and thermal conduction when the mixing volume is larger than a dissipative scale because the mixing time scale at small scales is easily affected by different distributions of intermittent small-scale structures between passive scalar and temperature. The MVM with an assumption of equal mixing time scales for molecular diffusion and thermal conduction is useful in the modeling of the thermal conduction when the modeling of the dissipation rate of temperature fluctuations is difficult.

  6. Understanding quantum tunneling using diffusion Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Inack, E. M.; Giudici, G.; Parolini, T.; Santoro, G.; Pilati, S.

    2018-03-01

    In simple ferromagnetic quantum Ising models characterized by an effective double-well energy landscape the characteristic tunneling time of path-integral Monte Carlo (PIMC) simulations has been shown to scale as the incoherent quantum-tunneling time, i.e., as 1/Δ², where Δ is the tunneling gap. Since incoherent quantum tunneling is employed by quantum annealers (QAs) to solve optimization problems, this result suggests that there is no quantum advantage in using QAs with respect to quantum Monte Carlo (QMC) simulations. A counterexample is the recently introduced shamrock model (Andriyash and Amin, arXiv:1703.09277), where topological obstructions cause an exponential slowdown of the PIMC tunneling dynamics with respect to incoherent quantum tunneling, leaving open the possibility for potential quantum speedup, even for stoquastic models. In this work we investigate the tunneling time of projective QMC simulations based on the diffusion Monte Carlo (DMC) algorithm without guiding functions, showing that it scales as 1/Δ, i.e., even more favorably than the incoherent quantum-tunneling time, both in a simple ferromagnetic system and in the more challenging shamrock model. However, a careful comparison between the DMC ground-state energies and the exact solution available for the transverse-field Ising chain indicates an exponential scaling of the computational cost required to keep a fixed relative error as the system size increases.
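    The 1/Δ² versus 1/Δ scalings discussed in this record are typically extracted as slopes in log-log space. A minimal sketch with synthetic data (the gap values and prefactors are invented, not the paper's):

```python
import numpy as np

def fitted_exponent(gaps, times):
    """Fit t ~ Delta^(-alpha) by linear regression in log-log space
    and return the exponent alpha."""
    slope, _ = np.polyfit(np.log(gaps), np.log(times), 1)
    return -slope

# Synthetic tunneling times mimicking the two scalings above:
gaps = np.logspace(-3, -1, 10)
t_pimc = 7.0 / gaps**2          # PIMC-like: t ~ 1/Delta^2
t_dmc = 3.0 / gaps              # DMC-like:  t ~ 1/Delta
```

    On clean power-law data the fit recovers the exponents 2 and 1 exactly; on real simulation data error bars on the fitted exponent are what distinguish the two scalings.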

  7. Consistent View of Protein Fluctuations from All-Atom Molecular Dynamics and Coarse-Grained Dynamics with Knowledge-Based Force-Field.

    PubMed

    Jamroz, Michal; Orozco, Modesto; Kolinski, Andrzej; Kmiecik, Sebastian

    2013-01-08

    It is widely recognized that atomistic Molecular Dynamics (MD), a classical simulation method, captures the essential physics of protein dynamics. That idea is supported by a theoretical study showing that various MD force-fields provide a consensus picture of protein fluctuations in aqueous solution [Rueda, M. et al. Proc. Natl. Acad. Sci. U.S.A. 2007, 104, 796-801]. However, atomistic MD cannot be applied to most biologically relevant processes due to its limitation to relatively short time scales. Much longer time scales can be accessed by properly designed coarse-grained models. We demonstrate that the aforementioned consensus view of protein dynamics from short (nanosecond) time scale MD simulations is fairly consistent with the dynamics of the coarse-grained protein model - the CABS model. The CABS model employs stochastic dynamics (a Monte Carlo method) and a knowledge-based force-field, which is not biased toward the native structure of a simulated protein. Since CABS-based dynamics allows for the simulation of entire folding (or multiple folding events) in a single run, integration of the CABS approach with all-atom MD promises a convenient (and computationally feasible) means for the long-time multiscale molecular modeling of protein systems with atomistic resolution.

  8. Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, M D; Cole, S; Frenk, C S

    2011-02-14

    We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires approximately 8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
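    The core idea of mode resampling can be sketched on a small real periodic field: replace the Fourier modes below a cutoff with a fresh Gaussian realization of the same amplitudes while leaving high-k modes untouched. This toy version ignores the nonlinear large/small-scale coupling that the authors' method accounts for:

```python
import numpy as np

def resample_large_modes(field, k_cut, rng=None):
    """Swap the Fourier modes of a real 2-D periodic field with
    |k| < k_cut for a fresh random realization of the same mode
    amplitudes, leaving small-scale (high-k) modes untouched."""
    rng = np.random.default_rng(rng)
    n = field.shape[0]
    fk = np.fft.fftn(field)
    k1d = np.fft.fftfreq(n) * n
    kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
    low = np.sqrt(kx**2 + ky**2) < k_cut
    # The FFT of fresh *real* noise supplies new random phases with the
    # Hermitian symmetry a real field requires.
    gk = np.fft.fftn(rng.normal(size=field.shape))
    fk_new = np.where(low, np.abs(fk) * gk / np.abs(gk), fk)
    return np.fft.ifftn(fk_new).real

rng = np.random.default_rng(3)
field = rng.normal(size=(32, 32))
new = resample_large_modes(field, k_cut=4.0, rng=7)
```

    Drawing the new phases from the FFT of a real noise field keeps the resampled field real and its amplitude spectrum unchanged; the paper's contribution is the additional correction for how the new large-scale modes should modulate the small-scale structure.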

  9. Simulation of reaction diffusion processes over biologically relevant size and time scales using multi-GPU workstations

    PubMed Central

    Hallock, Michael J.; Stone, John E.; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida

    2014-01-01

Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems. PMID:24882911
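
The load-balancing idea, dividing the lattice among devices in proportion to their throughput, can be illustrated independently of CUDA. A hypothetical, static sketch (the paper's implementation balances dynamically and handles 3-D halo exchange between GPUs; the function name and the relative-speed figures below are invented for illustration):

```python
def balance_slabs(n_slabs, speeds):
    """Assign contiguous lattice slabs to devices in proportion to
    their relative throughput -- a toy, static stand-in for the dynamic
    load balancing described for workstations that mix GPUs of
    different performance. `speeds` are hypothetical relative
    throughput figures, not benchmarks from the paper."""
    total = float(sum(speeds))
    bounds, start, acc = [], 0, 0.0
    for i, s in enumerate(speeds):
        acc += s
        # the last device absorbs any rounding remainder
        end = n_slabs if i == len(speeds) - 1 else round(n_slabs * acc / total)
        bounds.append((start, end))
        start = end
    return bounds

# e.g. a fast GPU paired with one a third as fast
parts = balance_slabs(100, [3, 1])
```

With 100 slabs and speeds 3:1 this yields `[(0, 75), (75, 100)]`, so each device's slab count matches its share of the total throughput.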

  10. Simulation of reaction diffusion processes over biologically relevant size and time scales using multi-GPU workstations.

    PubMed

    Hallock, Michael J; Stone, John E; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida

    2014-05-01

Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems.

  11. Assessing Performance in Shoulder Arthroscopy: The Imperial Global Arthroscopy Rating Scale (IGARS).

    PubMed

    Bayona, Sofia; Akhtar, Kash; Gupte, Chinmay; Emery, Roger J H; Dodds, Alexander L; Bello, Fernando

    2014-07-02

    Surgical training is undergoing major changes with reduced resident work hours and an increasing focus on patient safety and surgical aptitude. The aim of this study was to create a valid, reliable method for an assessment of arthroscopic skills that is independent of time and place and is designed for both real and simulated settings. The validity of the scale was tested using a virtual reality shoulder arthroscopy simulator. The study consisted of two parts. In the first part, an Imperial Global Arthroscopy Rating Scale for assessing technical performance was developed using a Delphi method. Application of this scale required installing a dual-camera system to synchronously record the simulator screen and body movements of trainees to allow an assessment that is independent of time and place. The scale includes aspects such as efficient portal positioning, angles of instrument insertion, proficiency in handling the arthroscope and adequately manipulating the camera, and triangulation skills. In the second part of the study, a validation study was conducted. Two experienced arthroscopic surgeons, blinded to the identities and experience of the participants, each assessed forty-nine subjects performing three different tests using the Imperial Global Arthroscopy Rating Scale. Results were analyzed using two-way analysis of variance with measures of absolute agreement. The intraclass correlation coefficient was calculated for each test to assess inter-rater reliability. The scale demonstrated high internal consistency (Cronbach alpha, 0.918). The intraclass correlation coefficient demonstrated high agreement between the assessors: 0.91 (p < 0.001). Construct validity was evaluated using Kruskal-Wallis one-way analysis of variance (chi-square test, 29.826; p < 0.001), demonstrating that the Imperial Global Arthroscopy Rating Scale distinguishes significantly between subjects with different levels of experience utilizing a virtual reality simulator. 
The Imperial Global Arthroscopy Rating Scale has a high internal consistency and excellent inter-rater reliability and offers an approach for assessing technical performance in basic arthroscopy on a virtual reality simulator. The Imperial Global Arthroscopy Rating Scale provides detailed information on surgical skills. Although it requires further validation in the operating room, this scale, which is independent of time and place, offers a robust and reliable method for assessing arthroscopic technical skills. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.

  12. A new paradigm for atomically detailed simulations of kinetics in biophysical systems.

    PubMed

    Elber, Ron

    2017-01-01

The kinetics of biochemical and biophysical events determine the course of life processes and have attracted considerable interest and research. For example, modeling of biological networks and cellular responses relies on the availability of information on rate coefficients. Atomically detailed simulations hold the promise of supplementing experimental data to obtain a more complete kinetic picture. However, simulations at biological time scales are challenging. Typical computer resources are insufficient to provide the ensemble of trajectories at the length required for straightforward calculations of time scales. In recent years, new technologies have emerged that make atomically detailed simulations of rate coefficients possible. Instead of computing complete trajectories from reactants to products, these approaches launch a large number of short trajectories at different positions. Since the trajectories are short, they are trivially computed in parallel on modern computer architectures. The starting and termination positions of the short trajectories are chosen, following statistical mechanics theory, to enhance efficiency. Analysis of these trajectories produces accurate estimates of time scales as long as hours. The theory of Milestoning that exploits the use of short trajectories is discussed, and several applications are described.

  13. Hybrid stochastic simplifications for multiscale gene networks

    PubMed Central

    Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu

    2009-01-01

Background Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3] which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach. PMID:19735554
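
The flavor of such a hybrid simplification can be shown on a toy gene switch: the rare on/off transitions remain a discrete jump process, while the abundant protein copy number is propagated with a chemical Langevin (diffusion) step, in the spirit of a partial Kramers-Moyal expansion applied to a sub-model. The rate constants, step size, and function name below are illustrative, not taken from the paper:

```python
import numpy as np

def hybrid_step(state, dt, rng, k_on=0.05, k_off=0.05, k_prod=50.0, k_deg=1.0):
    """One step of a toy hybrid scheme: the gene switch (rare events)
    stays discrete, the abundant protein evolves by a chemical
    Langevin approximation. All rate constants are illustrative."""
    gene, protein = state
    # discrete jump for the slow switch, first-order in dt
    if rng.random() < (k_on if gene == 0 else k_off) * dt:
        gene = 1 - gene
    # continuous (Langevin) update for fast production/degradation:
    # drift = net rate, noise variance = sum of propensities
    a_prod = k_prod * gene
    a_deg = k_deg * protein
    drift = (a_prod - a_deg) * dt
    noise = rng.normal() * np.sqrt((a_prod + a_deg) * dt)
    protein = max(protein + drift + noise, 0.0)  # copy numbers stay nonnegative
    return gene, protein

rng = np.random.default_rng(1)
state = (1, 50.0)
for _ in range(2000):
    state = hybrid_step(state, 0.01, rng)
```

The design point is the split itself: only the (rare) switch events are simulated as jumps, so the per-step cost no longer scales with the (large) number of protein production and degradation events.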

  14. HRLSim: a high performance spiking neural network simulator for GPGPU clusters.

    PubMed

    Minkovich, Kirill; Thibeault, Corey M; O'Brien, Michael John; Nogin, Aleksey; Cho, Youngkwan; Srinivasa, Narayan

    2014-02-01

    Modeling of large-scale spiking neural models is an important tool in the quest to understand brain function and subsequently create real-world applications. This paper describes a spiking neural network simulator environment called HRL Spiking Simulator (HRLSim). This simulator is suitable for implementation on a cluster of general purpose graphical processing units (GPGPUs). Novel aspects of HRLSim are described and an analysis of its performance is provided for various configurations of the cluster. With the advent of inexpensive GPGPU cards and compute power, HRLSim offers an affordable and scalable tool for design, real-time simulation, and analysis of large-scale spiking neural networks.

  15. Multiscale analysis of structure development in expanded starch snacks

    NASA Astrophysics Data System (ADS)

    van der Sman, R. G. M.; Broeze, J.

    2014-11-01

In this paper we perform a multiscale analysis of the food structuring process of the expansion of starchy snack foods like keropok, which obtains a solid foam structure. In particular, we want to investigate the validity of the hypothesis of Kokini and coworkers, that expansion is optimal at the moisture content, where the glass transition and the boiling line intersect. In our analysis we make use of several tools, (1) time scale analysis from the field of physical transport phenomena, (2) the scale separation map (SSM) developed within a multiscale simulation framework of complex automata, (3) the supplemented state diagram (SSD), depicting phase transition and glass transition lines, and (4) a multiscale simulation model for the bubble expansion. Results of the time scale analysis are plotted in the SSD, and give insight into the dominant physical processes involved in expansion. Furthermore, the results of the time scale analysis are used to construct the SSM, which has aided us in the construction of the multiscale simulation model. Simulation results are plotted in the SSD. This clearly shows that the hypothesis of Kokini is qualitatively true, but has to be refined. Our results show that bubble expansion is optimal for moisture content, where the boiling line for gas pressure of 4 bars intersects the isoviscosity line of the critical viscosity 10^6 Pa·s, which runs parallel to the glass transition line.

  16. Time scales of supercooled water and implications for reversible polyamorphism

    NASA Astrophysics Data System (ADS)

    Limmer, David T.; Chandler, David

    2015-09-01

    Deeply supercooled water exhibits complex dynamics with large density fluctuations, ice coarsening and characteristic time scales extending from picoseconds to milliseconds. Here, we discuss implications of these time scales as they pertain to two-phase coexistence and to molecular simulations of supercooled water. Specifically, we argue that it is possible to discount liquid-liquid criticality because the time scales imply that correlation lengths for such behaviour would be bounded by no more than a few nanometres. Similarly, it is possible to discount two-liquid coexistence because the time scales imply a bounded interfacial free energy that cannot grow in proportion to a macroscopic surface area. From time scales alone, therefore, we see that coexisting domains of differing density in supercooled water can be no more than nanoscale transient fluctuations.

  17. Molecular dynamics simulations of propane in slit shaped silica nano-pores: direct comparison with quasielastic neutron scattering experiments.

    PubMed

    Gautam, Siddharth; Le, Thu; Striolo, Alberto; Cole, David

    2017-12-13

    Molecular motion under confinement has important implications for a variety of applications including gas recovery and catalysis. Propane confined in mesoporous silica aerogel as studied using quasielastic neutron scattering (QENS) showed anomalous pressure dependence in its diffusion coefficient (J. Phys. Chem. C, 2015, 119, 18188). Molecular dynamics (MD) simulations are often employed to complement the information obtained from QENS experiments. Here, we report an MD simulation study to probe the anomalous pressure dependence of propane diffusion in silica aerogel. Comparison is attempted based on the self-diffusion coefficients and on the time scales of the decay of the simulated intermediate scattering functions. While the self-diffusion coefficients obtained from the simulated mean squared displacement profiles do not exhibit the anomalous pressure dependence observed in the experiments, the time scales of the decay of the intermediate scattering functions calculated from the simulation data match the corresponding quantities obtained in the QENS experiment and thus confirm the anomalous pressure dependence of the diffusion coefficient. The origin of the anomaly in pressure dependence lies in the presence of an adsorbed layer of propane molecules that seems to dominate the confined propane dynamics at low pressure, thereby lowering the diffusion coefficient. Further, time scales for rotational motion obtained from the simulations explain the absence of rotational contribution to the QENS spectra in the experiments. In particular, the rotational motion of the simulated propane molecules is found to exhibit large angular jumps at lower pressure. The present MD simulation work thus reveals important new insights into the origin of anomalous pressure dependence of propane diffusivity in silica mesopores and supplements the information obtained experimentally by QENS data.

  18. Higher order moments of the matter distribution in scale-free cosmological simulations with large dynamic range

    NASA Technical Reports Server (NTRS)

    Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro

    1994-01-01

We calculate reduced moments ξ̄_q of the matter density fluctuations, up to order q = 5, from counts in cells produced by particle-mesh numerical simulations with scale-free Gaussian initial conditions. We use power-law spectra P(k) ∝ k^n with indices n = -3, -2, -1, 0, 1. Due to the supposed absence of characteristic times or scales in our models, all quantities are expected to depend on a single scaling variable. For each model, the moments at all times can be expressed in terms of the variance ξ̄_2 alone. We look for agreement with the hierarchical scaling ansatz, according to which ξ̄_q ∝ ξ̄_2^(q-1). For n ≤ -2 models, we find strong deviations from the hierarchy, which are mostly due to the presence of boundary problems in the simulations. A small, residual signal of deviation from the hierarchical scaling is however also found in n ≥ -1 models. The wide range of spectra considered and the large dynamic range, with careful checks of scaling and shot-noise effects, allow us to reliably detect evolution away from the perturbation theory result.
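
The connected (reduced) moments being tested can be estimated from a sample of density contrasts as follows. A minimal sketch: the shot-noise and finite-volume corrections, which the paper treats carefully for counts in cells, are omitted, and the function name is mine. For a Gaussian field all connected moments above the second vanish, which the usage below illustrates:

```python
import numpy as np

def reduced_moments(delta):
    """Connected (reduced) moments xi_bar_q, q = 2..5, of a density
    contrast sample -- the quantities tested against the hierarchical
    ansatz xi_bar_q ∝ xi_bar_2**(q-1). Shot-noise and finite-volume
    corrections, essential for real counts-in-cells data, are omitted."""
    d = delta - delta.mean()
    m = {q: np.mean(d**q) for q in range(2, 6)}
    # subtract the disconnected (Gaussian) pieces at orders 4 and 5
    return {2: m[2],
            3: m[3],
            4: m[4] - 3.0 * m[2]**2,
            5: m[5] - 10.0 * m[3] * m[2]}

rng = np.random.default_rng(3)
xi = reduced_moments(0.1 * rng.standard_normal(1_000_000))
```

For a non-Gaussian evolved field one would then form the ratios S_q = ξ̄_q / ξ̄_2^(q-1) and check whether they stay constant as the variance grows, which is the hierarchical scaling test described above.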

  19. Simulating Microbial Community Patterning Using Biocellion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Seung-Hwa; Kahan, Simon H.; Momeni, Babak

    2014-04-17

Mathematical modeling and computer simulation are important tools for understanding complex interactions between cells and their biotic and abiotic environment: similarities and differences between modeled and observed behavior provide the basis for hypothesis formation. Momeni et al. [5] investigated pattern formation in communities of yeast strains engaging in different types of ecological interactions, comparing the predictions of mathematical modeling and simulation to actual patterns observed in wet-lab experiments. However, simulations of millions of cells in a three-dimensional community are extremely time-consuming. One simulation run in MATLAB may take a week or longer, inhibiting exploration of the vast space of parameter combinations and assumptions. Improving the speed, scale, and accuracy of such simulations facilitates hypothesis formation and expedites discovery. Biocellion is a high performance software framework for accelerating discrete agent-based simulation of biological systems with millions to trillions of cells. Simulations of comparable scale and accuracy to those taking a week of computer time using MATLAB require just hours using Biocellion on a multicore workstation. Biocellion further accelerates large scale, high resolution simulations using cluster computers by partitioning the work to run on multiple compute nodes. Biocellion targets computational biologists who have mathematical modeling backgrounds and basic C++ programming skills. This chapter describes the necessary steps to adapt the original model of Momeni et al. to the Biocellion framework as a case study.

  20. Direct comparison of elastic incoherent neutron scattering experiments with molecular dynamics simulations of DMPC phase transitions.

    PubMed

    Aoun, Bachir; Pellegrini, Eric; Trapp, Marcus; Natali, Francesca; Cantù, Laura; Brocca, Paola; Gerelli, Yuri; Demé, Bruno; Marek Koza, Michael; Johnson, Mark; Peters, Judith

    2016-04-01

Neutron scattering techniques have been employed to investigate 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC) membranes in the form of multilamellar vesicles (MLVs) and deposited, stacked multilamellar-bilayers (MLBs), covering transitions from the gel to the liquid phase. Neutron diffraction was used to characterise the samples in terms of transition temperatures, whereas elastic incoherent neutron scattering (EINS) demonstrates that the dynamics on the sub-macromolecular length-scale and pico- to nano-second time-scale are correlated with the structural transitions through a discontinuity in the observed elastic intensities and the derived mean square displacements. Molecular dynamics simulations have been performed in parallel focussing on the length-, time- and temperature-scales of the neutron experiments. They correctly reproduce the structural features of the main gel-liquid phase transition. Particular emphasis is placed on the dynamical amplitudes derived from experiment and simulations. Two methods are used to analyse the experimental data and mean square displacements. They agree within a factor of 2 irrespective of the probed time-scale, i.e. the instrument utilized. Mean square displacements computed from simulations show a comparable level of agreement with the experimental values, although the best match between the two methods varies between the two instruments. Consequently, experiments and simulations together give a consistent picture of the structural and dynamical aspects of the main lipid transition and provide a basis for future, theoretical modelling of dynamics and phase behaviour in membranes. The need for more detailed analytical models is pointed out by the remaining variation of the dynamical amplitudes derived in two different ways from experiments on the one hand and simulations on the other.

  1. Large scale cardiac modeling on the Blue Gene supercomputer.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J

    2008-01-01

Multi-scale, multi-physical heart models have not yet been able to include a high degree of accuracy and resolution with respect to model detail and spatial resolution due to computational limitations of current systems. We propose a framework to compute large scale cardiac models. Decomposition of anatomical data in segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data-set was given by both ventricles of the Visible Female data-set in a 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balances did not differ significantly across computational nodes even if the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall due to the communication overhead. However, the simulation times were reduced from 87 minutes on 512 nodes to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a heart beat.
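
The decomposition idea can be illustrated with a toy two-dimensional ORB that recursively halves the domain at the median of accumulated weight; this mirrors the weighting scheme above (e.g. 10 for tissue, 1 for non-tissue elements) but is a sketch, not the paper's 3-D implementation:

```python
import numpy as np

def orb(weights, n_parts):
    """Toy optimal recursive bisection (ORB): recursively split a 2-D
    weight map along its longest axis so each side carries about half
    the computational weight. n_parts must be a power of two.
    Illustrative only; the paper's decomposition works on 3-D
    anatomical data and also ignores communication cost, as noted."""
    def split(box):
        (x0, x1), (y0, y1) = box
        axis = 0 if (x1 - x0) >= (y1 - y0) else 1
        if axis == 0:
            col = weights[x0:x1, y0:y1].sum(axis=1)
        else:
            col = weights[x0:x1, y0:y1].sum(axis=0)
        cum = np.cumsum(col)
        # first index where the running weight reaches half the total
        cut = int(np.searchsorted(cum, cum[-1] / 2.0)) + 1
        if axis == 0:
            return ((x0, x0 + cut), (y0, y1)), ((x0 + cut, x1), (y0, y1))
        return ((x0, x1), (y0, y0 + cut)), ((x0, x1), (y0 + cut, y1))
    boxes = [((0, weights.shape[0]), (0, weights.shape[1]))]
    while len(boxes) < n_parts:
        boxes = [b for box in boxes for b in split(box)]
    return boxes

parts = orb(np.ones((8, 8)), 4)
```

With uniform weights this yields four equal quadrants; with tissue weighted 10:1 over non-tissue, the cuts shift so each node receives a similar computational load even when element counts differ, which is the behavior reported above.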

  2. Using the Weak-Temperature Gradient Approximation to Evaluate Parameterizations: An Example of the Transition From Suppressed to Active Convection

    NASA Astrophysics Data System (ADS)

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.

    2017-10-01

    Two single-column models are fully coupled via the weak-temperature gradient approach. The coupled-SCM is used to simulate the transition from suppressed to active convection under the influence of an interactive large-scale circulation. The sensitivity of this transition to the value of mixing entrainment within the convective parameterization is explored. The results from these simulations are compared with those from equivalent simulations using coupled cloud-resolving models. Coupled-column simulations over nonuniform surface forcing are used to initialize the simulations of the transition, in which the column with suppressed convection is forced to undergo a transition to active convection by changing the local and/or remote surface forcings. The direct contributions from the changes in surface forcing are to induce a weakening of the large-scale circulation which systematically modulates the transition. In the SCM, the contributions from the large-scale circulation are dominated by the heating effects, while in the CRM the heating and moistening effects are about equally divided. A transition time is defined as the time when the rain rate in the dry column is halfway to the value at equilibrium after the transition. For the control value of entrainment, the order of the transition times is identical to that obtained in the CRM, but the transition times are markedly faster. The locally forced transition is strongly delayed by a higher entrainment. A consequence is that for a 50% higher entrainment the transition times are reordered. The remotely forced transition remains fast while the locally forced transition becomes slow, compared to the CRM.

  3. The Time Dependent Propensity Function for Acceleration of Spatial Stochastic Simulation of Reaction-Diffusion Systems

    PubMed Central

    Wu, Sheng; Li, Hong; Petzold, Linda R.

    2015-01-01

    The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy. PMID:26609185
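
The central trick, sampling the next reaction time from a time-dependent propensity by accumulating hazard until it reaches a unit-exponential variate, can be sketched as follows. The closed-form a(t), step size, and function name are assumptions for illustration (the actual algorithm derives a(t) from the diffusion dynamics and integrates it more cleverly than this brute-force quadrature):

```python
import numpy as np

def sample_reaction_time(a_of_t, rng, dt=1e-3, t_max=100.0):
    """Sample a firing time from a time-dependent propensity a(t) by
    integrating the hazard until it reaches an Exp(1) variate -- the
    inversion trick that lets diffusion be folded into the propensity
    instead of being simulated event by event. Illustrative sketch."""
    target = rng.exponential()          # unit-exponential hazard target
    acc, t = 0.0, 0.0
    while acc < target and t < t_max:
        acc += a_of_t(t) * dt           # accumulated hazard integral
        t += dt
    return t

rng = np.random.default_rng(2)
# sanity check: a constant propensity must reproduce an ordinary
# exponential clock with mean 1/a
times = [sample_reaction_time(lambda t: 2.0, rng) for _ in range(1000)]
mean_time = sum(times) / len(times)
```

For a genuinely time-varying a(t) the same loop applies unchanged; only the integrand differs, which is why the method scales well as diffusion events become much more frequent than reactions.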

  4. A new time-space accounting scheme to predict stream water residence time and hydrograph source components at the watershed scale

    Treesearch

    Takahiro Sayama; Jeffrey J. McDonnell

    2009-01-01

    Hydrograph source components and stream water residence time are fundamental behavioral descriptors of watersheds but, as yet, are poorly represented in most rainfall-runoff models. We present a new time-space accounting scheme (T-SAS) to simulate the pre-event and event water fractions, mean residence time, and spatial source of streamflow at the watershed scale. We...

  5. On simulating large earthquakes by Green's-function addition of smaller earthquakes

    NASA Astrophysics Data System (ADS)

    Joyner, William B.; Boore, David M.

Simulation of ground motion from large earthquakes has been attempted by a number of authors using small earthquakes (subevents) as Green's functions and summing them, generally in a random way. We present a simple model for the random summation of subevents to illustrate how seismic scaling relations can be used to constrain methods of summation. In the model η identical subevents are added together with their start times randomly distributed over the source duration T and their waveforms scaled by a factor κ. The subevents can be considered to be distributed on a fault with later start times at progressively greater distances from the focus, simulating the irregular propagation of a coherent rupture front. For simplicity the distance between source and observer is assumed large compared to the source dimensions of the simulated event. By proper choice of η and κ the spectrum of the simulated event deduced from these assumptions can be made to conform at both low- and high-frequency limits to any arbitrary seismic scaling law. For the ω-squared model with similarity (that is, with constant Mo ƒo^3 scaling, where ƒo is the corner frequency), the required values are η = (Mo/Moe)^(4/3) and κ = (Mo/Moe)^(-1/3), where Mo is the moment of the simulated event and Moe is the moment of the subevent. The spectra resulting from other choices of η and κ will not conform at both high and low frequency. If η is determined by the ratio of the rupture area of the simulated event to that of the subevent and κ = 1, the simulated spectrum will conform at high frequency to the ω-squared model with similarity, but not at low frequency. Because the high-frequency part of the spectrum is generally the important part for engineering applications, however, this choice of values for η and κ may be satisfactory in many cases. 
If η is determined by the ratio of the moment of the simulated event to that of the subevent and κ = 1, the simulated spectrum will conform at low frequency to the ω-squared model with similarity, but not at high frequency. Interestingly, the high-frequency scaling implied by this latter choice of η and κ corresponds to an ω-squared model with constant Mo ƒo^4, a scaling law proposed by Nuttli, although questioned recently by Haar and others. Simple scaling with κ equal to unity and η equal to the moment ratio would work if the high-frequency spectral decay were ω^(-1.5) instead of ω^(-2). Precisely the required decay is exhibited by the stochastic source model recently proposed by Joyner, if the dislocation-time function is deconvolved out of the spectrum. Simulated motions derived from such source models could be used as subevents rather than recorded motions as is usually done. This strategy is a promising approach to simulation of ground motion from an extended rupture.
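
The random-summation model itself is compact enough to sketch. A toy subevent waveform is assumed below, with η and κ set for a moment ratio Mo/Moe = 64 under the ω-squared similarity scaling (η = 64^(4/3) = 256, κ = 64^(-1/3) = 0.25); the function name and grid are illustrative:

```python
import numpy as np

def simulate_large_event(subevent, eta, kappa, duration_steps, rng):
    """Random summation of subevent records as empirical Green's
    functions: eta copies of the small-event waveform, each scaled by
    kappa, with start times uniformly distributed over the source
    duration -- the simple model described in the abstract."""
    out = np.zeros(duration_steps + len(subevent))
    starts = rng.integers(0, duration_steps, size=eta)
    for s in starts:
        out[s:s + len(subevent)] += kappa * subevent
    return out

rng = np.random.default_rng(4)
sub = np.sin(np.linspace(0.0, np.pi, 10))   # toy subevent waveform
# moment ratio Mo/Moe = 64 -> eta = 256, kappa = 0.25 for omega-squared
big = simulate_large_event(sub, eta=256, kappa=0.25, duration_steps=500, rng=rng)
```

Note that ηκ = 256 × 0.25 = 64 equals the moment ratio, so the zero-frequency level of the summed record scales with Mo exactly as the low-frequency constraint requires.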

  6. Simulation of FRET dyes allows quantitative comparison against experimental data

    NASA Astrophysics Data System (ADS)

    Reinartz, Ines; Sinner, Claude; Nettels, Daniel; Stucki-Buchli, Brigitte; Stockmar, Florian; Panek, Pawel T.; Jacob, Christoph R.; Nienhaus, Gerd Ulrich; Schuler, Benjamin; Schug, Alexander

    2018-03-01

    Fully understanding biomolecular function requires detailed insight into the systems' structural dynamics. Powerful experimental techniques such as single molecule Förster Resonance Energy Transfer (FRET) provide access to such dynamic information yet have to be carefully interpreted. Molecular simulations can complement these experiments but typically face limits in accessing slow time scales and large or unstructured systems. Here, we introduce a coarse-grained simulation technique that tackles these challenges. While requiring only few parameters, we maintain full protein flexibility and include all heavy atoms of proteins, linkers, and dyes. We are able to sufficiently reduce computational demands to simulate large or heterogeneous structural dynamics and ensembles on slow time scales found in, e.g., protein folding. The simulations allow for calculating FRET efficiencies which quantitatively agree with experimentally determined values. By providing atomically resolved trajectories, this work supports the planning and microscopic interpretation of experiments. Overall, these results highlight how simulations and experiments can complement each other leading to new insights into biomolecular dynamics and function.

  7. Cooling rate effects in sodium silicate glasses: Bridging the gap between molecular dynamics simulations and experiments

    NASA Astrophysics Data System (ADS)

    Li, Xin; Song, Weiying; Yang, Kai; Krishnan, N. M. Anoop; Wang, Bu; Smedskjaer, Morten M.; Mauro, John C.; Sant, Gaurav; Balonis, Magdalena; Bauchy, Mathieu

    2017-08-01

    Although molecular dynamics (MD) simulations are commonly used to predict the structure and properties of glasses, they are intrinsically limited to short time scales, necessitating the use of fast cooling rates. It is therefore challenging to compare results from MD simulations to experimental results for glasses cooled on typical laboratory time scales. Based on MD simulations of a sodium silicate glass with varying cooling rate (from 0.01 to 100 K/ps), here we show that thermal history primarily affects the medium-range order structure, while the short-range order is largely unaffected over the range of cooling rates simulated. This results in a decoupling between the enthalpy and volume relaxation functions, where the enthalpy quickly plateaus as the cooling rate decreases, whereas density exhibits a slower relaxation. Finally, we show that, using the proper extrapolation method, the outcomes of MD simulations can be meaningfully compared to experimental values when extrapolated to slower cooling rates.
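
One simple version of such an extrapolation is a linear fit of a property against log10 of the cooling rate, evaluated at a laboratory-like rate. The data and the linear-in-log fitting form below are synthetic stand-ins; the paper's actual extrapolation method may differ:

```python
import numpy as np

def extrapolate_vs_log_rate(rates_K_per_ps, values, target_rate_K_per_ps):
    """Fit a glass property linearly against log10(cooling rate) and
    extrapolate to a slower rate. One simple choice of extrapolation
    form, used here purely for illustration."""
    coef = np.polyfit(np.log10(rates_K_per_ps), values, 1)
    return float(np.polyval(coef, np.log10(target_rate_K_per_ps)))

# hypothetical densities from quenches at 0.01-100 K/ps, extrapolated
# to a laboratory rate of 1 K/s (= 1e-12 K/ps)
rates = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
dens = 2.40 - 0.01 * np.log10(rates)   # synthetic linear-in-log data
lab_density = extrapolate_vs_log_rate(rates, dens, 1e-12)
```

The ~14 decades between MD and laboratory rates is exactly why the choice of fitting form matters: a property that relaxes slowly (like density here, per the abstract) accumulates a large correction over that span.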

  8. Anomalous diffusion for bed load transport with a physically-based model

    NASA Astrophysics Data System (ADS)

    Fan, N.; Singh, A.; Foufoula-Georgiou, E.; Wu, B.

    2013-12-01

    Diffusion of bed load particles shows both normal and anomalous behavior over different spatial-temporal scales. Understanding and quantifying these different types of diffusion is important not only for the development of theoretical models of particle transport but also for practical purposes, e.g., river management. Here we extend a recently proposed physically-based model of particle transport by Fan et al. [2013] to develop an Episodic Langevin equation (ELE) for individual particle motion which reproduces the episodic movement (start and stop) of sediment particles. Using the proposed ELE we simulate particle movements for a large number of uniform size particles, incorporating different probability distribution functions (PDFs) of particle waiting time. For exponential PDFs of waiting times, particles exhibit ballistic motion at short time scales and turn to normal diffusion at long time scales. The PDF of simulated particle travel distances also changes shape from exponential to Gamma to Gaussian with increasing time scale, implying different diffusion scaling regimes. For power-law PDFs of waiting times (with exponent -μ), the asymptotic behavior of particles at long time scales reveals both super-diffusion and sub-diffusion; however, only very heavy-tailed waiting times (i.e., 1.0 < μ < 1.5) result in sub-diffusion. We suggest that the contrast between our results and previous studies (e.g., those based on fractional advection-diffusion models with thin/heavy-tailed particle hops and waiting times) could be due to the assumption in those studies that hops occur instantaneously; in reality, particles complete their hops within finite times (as we simulate here), even if the hop times are much shorter than the waiting times. 
In summary, this study stresses the need to rethink alternatives to previous models, such as fractional advection-diffusion equations, for studying the anomalous diffusion of bed load particles. The implications of these results for modeling sediment transport are discussed.
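    The episodic start-and-stop motion described above can be sketched as a toy renewal process in which particles alternate exponentially distributed rests with hops of finite duration. All parameter values and the constant hop speed are hypothetical illustrations of the idea, not the authors' ELE:

```python
import random

def simulate_particles(n=2000, t_max=500.0, wait_mean=1.0,
                       hop_time=0.2, hop_speed=1.0, seed=1):
    """Episodic particle motion: exponential waiting (rest) periods alternate
    with hops completed in finite time, as opposed to instantaneous jumps.
    Returns the final downstream position of each particle."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n):
        t, x = 0.0, 0.0
        while t < t_max:
            t += rng.expovariate(1.0 / wait_mean)  # rest period
            if t >= t_max:
                break
            dt = min(hop_time, t_max - t)          # hop takes finite time
            x += hop_speed * dt
            t += dt
        finals.append(x)
    return finals
```

    With these illustrative values a particle spends a fraction hop_time / (wait_mean + hop_time) of its time in motion, so the ensemble-mean displacement grows linearly in time, the long-time normal-diffusion regime noted in the abstract for exponential waiting times.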

  9. An anatomically sound surgical simulation model for myringotomy and tympanostomy tube insertion.

    PubMed

    Hong, Paul; Webb, Amanda N; Corsten, Gerard; Balderston, Janet; Haworth, Rebecca; Ritchie, Krista; Massoud, Emad

    2014-03-01

    Myringotomy and tympanostomy tube insertion (MT) is a common surgical procedure. Although surgical simulation has proven to be an effective training tool, an anatomically sound simulation model for MT is lacking. We developed such a model and assessed its impact on the operating room performance of senior medical students. Prospective randomized trial. A randomized single-blind controlled study of simulation training with the MT model versus no simulation training. Each participant was randomized to either the simulation model group or control group, after performing an initial MT procedure. Within two weeks of the first procedure, the students performed a second MT. All procedures were performed on real patients and rated with a Global Rating Scale by two attending otolaryngologists. Time to complete the MT was also recorded. Twenty-four senior medical students were enrolled. Control and intervention groups did not differ at baseline on their Global Rating Scale score or time to complete the MT procedure. Following simulation training, the study group received significantly higher scores (P=.005) and performed the MT procedure in significantly less time (P=.034). The control group did not improve their performance scores (P>.05) or the time to complete the procedure (P>.05). Our surgical simulation model shows promise for being a valuable teaching tool for MT for senior medical students. Such anatomically appropriate physical simulators may benefit teaching of junior trainees. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  10. The Role of Moist Processes in the Intrinsic Predictability of Indian Ocean Cyclones

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taraphdar, Sourav; Mukhopadhyay, P.; Leung, Lai-Yung R.

    The role of moist processes and the possibility of error cascade from cloud scale processes affecting the intrinsic predictable time scale of a high resolution convection permitting model within the environment of tropical cyclones (TCs) over the Indian region are investigated. Consistent with past studies of extra-tropical cyclones, it is demonstrated that moist processes play a major role in forecast error growth, which may ultimately limit the intrinsic predictability of the TCs. Small errors in the initial conditions may grow rapidly and cascade from smaller scales to larger scales through strong diabatic heating and nonlinearities associated with moist convection. Results from a suite of twin perturbation experiments for four tropical cyclones suggest that the error growth is significantly higher in the cloud-permitting simulation at 3.3 km resolution compared to simulations at 3.3 km and 10 km resolution with parameterized convection. Convective parameterizations with prescribed convective time scales typically longer than the model time step allow the effects of microphysical tendencies to average out, so that convection responds to a smoother dynamical forcing. Without convective parameterizations, the finer-scale instabilities resolved at 3.3 km resolution and the stronger vertical motion that results from the cloud microphysical parameterizations removing super-saturation at each model time step can ultimately feed the error growth in convection-permitting simulations. This implies that careful considerations and/or improvements in cloud parameterizations are needed if numerical predictions are to be improved through increased model resolution. Rapid upscale error growth from convective scales may ultimately limit the intrinsic mesoscale predictability of the TCs, which further supports the need for probabilistic forecasts of these events, even at the mesoscales.

  11. Improved Clinical Performance and Teamwork of Pediatric Interprofessional Resuscitation Teams With a Simulation-Based Educational Intervention.

    PubMed

    Gilfoyle, Elaine; Koot, Deanna A; Annear, John C; Bhanji, Farhan; Cheng, Adam; Duff, Jonathan P; Grant, Vincent J; St George-Hyslop, Cecilia E; Delaloye, Nicole J; Kotsakis, Afrothite; McCoy, Carolyn D; Ramsay, Christa E; Weiss, Matthew J; Gottesman, Ronald D

    2017-02-01

    To measure the effect of a 1-day team training course for pediatric interprofessional resuscitation team members on adherence to Pediatric Advanced Life Support guidelines, team efficiency, and teamwork in a simulated clinical environment. Multicenter prospective interventional study. Four tertiary-care children's hospitals in Canada from June 2011 to January 2015. Interprofessional pediatric resuscitation teams including resident physicians, ICU nurse practitioners, registered nurses, and registered respiratory therapists (n = 300; 51 teams). A 1-day simulation-based team training course was delivered, involving an interactive lecture, group discussions, and four simulated resuscitation scenarios, each followed by a debriefing. The first scenario of the day (PRE) was conducted prior to any team training. The final scenario of the day (POST) was the same scenario, with a slightly modified patient history. All scenarios included standardized distractors designed to elicit and challenge specific teamwork behaviors. Primary outcome measure was change (before and after training) in adherence to Pediatric Advanced Life Support guidelines, as measured by the Clinical Performance Tool. Secondary outcome measures were as follows: 1) change in times to initiation of chest compressions and defibrillation and 2) teamwork performance, as measured by the Clinical Teamwork Scale. Correlation between Clinical Performance Tool and Clinical Teamwork Scale scores was also analyzed. Teams significantly improved Clinical Performance Tool scores (67.3-79.6%; p < 0.0001), time to initiation of chest compressions (60.8-27.1 s; p < 0.0001), time to defibrillation (164.8-122.0 s; p < 0.0001), and Clinical Teamwork Scale scores (56.0-71.8%; p < 0.0001). A positive correlation was found between Clinical Performance Tool and Clinical Teamwork Scale (R = 0.281; p < 0.0001). 
Participation in a simulation-based team training educational intervention significantly improved surrogate measures of clinical performance, time to initiation of key clinical tasks, and teamwork during simulated pediatric resuscitation. A positive correlation between clinical and teamwork performance suggests that effective teamwork improves clinical performance of resuscitation teams.

  12. A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall

    NASA Astrophysics Data System (ADS)

    Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.

    2017-06-01

    Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of the mean, variance, and ACFs of both the continuous and discrete components, respectively. To achieve full consistency between variables at the finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study the rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of the main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to real-world rainfall time series is shown as a proof of concept.
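    The mixed-type generation step plus the coarse-total adjustment can be illustrated with a minimal sketch: fine-scale values are drawn as a Bernoulli occurrence process times a gamma depth, then rescaled proportionally so they sum exactly to the coarse-scale total. The marginals and the proportional adjustment below are common illustrative choices, not necessarily the authors' exact procedure:

```python
import random

def disaggregate(coarse_total, n_sub, p_wet=0.4, shape=0.7, scale=1.0, seed=0):
    """Split a coarse-scale rainfall total into n_sub fine-scale values.
    Mixed-type draw: Bernoulli wet/dry occurrence times a gamma depth
    (hypothetical marginals), then a proportional adjustment enforcing
    exact consistency with the coarse total."""
    rng = random.Random(seed)
    draws = [rng.gammavariate(shape, scale) if rng.random() < p_wet else 0.0
             for _ in range(n_sub)]
    s = sum(draws)
    if s == 0.0:
        return [coarse_total / n_sub] * n_sub  # degenerate all-dry fallback
    return [d * coarse_total / s for d in draws]  # proportional adjustment
```

    A nice property of the proportional adjustment is that dry intervals stay exactly zero, so the intermittency structure of the generated series is not disturbed by the consistency correction.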

  13. Thin film growth by 3D multi-particle diffusion limited aggregation model: Anomalous roughening and fractal analysis

    NASA Astrophysics Data System (ADS)

    Nasehnejad, Maryam; Nabiyouni, G.; Gholipour Shahraki, Mehran

    2018-03-01

    In this study a 3D multi-particle diffusion limited aggregation method is employed to simulate growth of rough surfaces with fractal behavior in the electrodeposition process. A deposition model is used in which the radial motion of the particles with probability P competes with random motions with probability 1 - P. Thin film growth is simulated for different values of the probability P (related to the electric field) and the thickness of the layer (related to the number of deposited particles). The influence of these parameters on the morphology, kinetics of roughening, and fractal dimension of the simulated surfaces has been investigated. The results show that the surface roughness increases with increasing deposition time and that the scaling exponents exhibit a complex behavior known as anomalous scaling. It seems that in the electrodeposition process, radial motion of the particles toward the growing seeds may be an important mechanism leading to anomalous scaling. The results also indicate that larger values of the probability P result in a smoother topography with a more densely packed structure. We suggest a dynamic scaling ansatz for the interface width as a function of deposition time, scan length, and probability. Two different methods are employed to evaluate the fractal dimension of the simulated surfaces: the "cube counting" and "roughness" methods. The results of both methods show that by increasing the probability P or decreasing the deposition time, the fractal dimension of the simulated surfaces is increased. All obtained values for the fractal dimension are close to 2.5 in the diffusion limited aggregation model.
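    The competition between radial drift (probability P) and random walking (probability 1 - P) can be sketched with a small on-lattice aggregation model. The lattice size, particle count, and bias rule below are illustrative, and this sketch is 2D and sequential rather than the paper's 3D multi-particle model:

```python
import random

def grow_dla(n_particles=200, size=61, p_radial=0.3, seed=7):
    """On-lattice 2D DLA-style growth with radial drift: with probability
    p_radial a walker steps toward the central seed, otherwise it takes a
    random step; it sticks on contact with the cluster."""
    rng = random.Random(seed)
    c = size // 2
    cluster = {(c, c)}                          # central seed

    def neighbours(x, y):
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    for _ in range(n_particles):
        # launch the walker from a random cell on the lattice boundary
        edge = rng.choice(["top", "bottom", "left", "right"])
        x = rng.randrange(size) if edge in ("top", "bottom") else (0 if edge == "left" else size - 1)
        y = rng.randrange(size) if edge in ("left", "right") else (0 if edge == "top" else size - 1)
        for _ in range(50000):                  # cap the walk length
            if any(nb in cluster for nb in neighbours(x, y)):
                cluster.add((x, y))             # stick on contact
                break
            if rng.random() < p_radial:         # biased step toward the seed
                if abs(c - x) >= abs(c - y):
                    x += 1 if c > x else -1
                else:
                    y += 1 if c > y else -1
            else:                               # unbiased random step
                x, y = rng.choice(neighbours(x, y))
            x = min(max(x, 0), size - 1)        # keep walker on the lattice
            y = min(max(y, 0), size - 1)
    return cluster
```

    Raising p_radial makes the walkers home in on the cluster before they can diffuse around its arms, which is the mechanism behind the denser, smoother deposits reported for larger P.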

  14. Evolution of Flow channels and Dipolarization Using THEMIS Observations and Global MHD Simulations

    NASA Astrophysics Data System (ADS)

    El-Alaoui, M.; McPherron, R. L.; Nishimura, Y.

    2017-12-01

    We have extensively analyzed a substorm on March 14, 2008 for which we have observations from THEMIS spacecraft located beyond 9 RE near 2100 local time. The available data include an extensive network of all sky cameras and ground magnetometers that establish the times of various auroral and magnetic events. This arrangement provided an excellent data set with which to investigate meso-scale structures in the plasma sheet. We have used a global magnetohydrodynamic simulation to investigate the structure and dynamics of the magnetotail current sheet during this substorm. Both earthward and tailward flows were found in the observations as well as the simulations. The simulation shows that the flow channels follow tortuous paths that are often reflected or deflected before arriving at the inner magnetosphere. The simulation shows a sequence of fast flows and dipolarization events similar to what is seen in the data, though not at precisely the same times or locations. We will use our simulation results combined with the observations to investigate the global convection systems and current sheet structure during this event, showing how meso-scale structures fit into the context of the overall tail dynamics. Our study includes determining the location, timing, and strength of several current wedges and expansion onsets during an 8-hour interval.

  15. Plasmonic resonances of nanoparticles from large-scale quantum mechanical simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Xu; Xiang, Hongping; Zhang, Mingliang; Lu, Gang

    2017-09-01

    Plasmonic resonance of metallic nanoparticles results from coherent motion of their conduction electrons, driven by incident light. For nanoparticles less than 10 nm in diameter, localized surface plasmonic resonances become sensitive to the quantum nature of the conduction electrons. Unfortunately, quantum mechanical simulations based on time-dependent Kohn-Sham density functional theory are computationally too expensive to tackle metal particles larger than 2 nm. Herein, we introduce the recently developed time-dependent orbital-free density functional theory (TD-OFDFT) approach, which enables large-scale quantum mechanical simulations of plasmonic responses of metallic nanostructures. Using TD-OFDFT, we have performed quantum mechanical simulations to understand the size-dependent plasmonic response of Na nanoparticles and the plasmonic responses of Na nanoparticle dimers and trimers. An outlook on future development of the TD-OFDFT method is also presented.

  16. Linear time algorithms to construct populations fitting multiple constraint distributions at genomic scales.

    PubMed

    Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi

    2017-10-09

    Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, predicting the outcome of breeding operations can be studied using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear time Simulation using Best-fit Algorithms (SimBA) for two classes of problems where each co-fits two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is here demonstrated to accurately fit the target distributions, allowing efficient large scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available on http://researcher.watson.ibm.com/project/5669.

  17. Short- and long-time diffusion and dynamic scaling in suspensions of charged colloidal particles

    NASA Astrophysics Data System (ADS)

    Banchio, Adolfo J.; Heinen, Marco; Holmqvist, Peter; Nägele, Gerhard

    2018-04-01

    We report on a comprehensive theory-simulation-experimental study of collective and self-diffusion in concentrated suspensions of charge-stabilized colloidal spheres. In theory and simulation, the spheres are assumed to interact directly by a hard-core plus screened Coulomb effective pair potential. The intermediate scattering function, fc(q, t), is calculated by elaborate accelerated Stokesian dynamics (ASD) simulations for Brownian systems where many-particle hydrodynamic interactions (HIs) are fully accounted for, using a novel extrapolation scheme to a macroscopically large system size valid for all correlation times. The study spans the correlation time range from the colloidal short-time to the long-time regime. Additionally, Brownian Dynamics (BD) simulation and mode-coupling theory (MCT) results of fc(q, t) are generated where HIs are neglected. Using these results, the influence of HIs on collective and self-diffusion and the accuracy of the MCT method are quantified. It is shown that HIs enhance collective and self-diffusion at intermediate and long times. At short times self-diffusion, and for wavenumbers outside the structure factor peak region also collective diffusion, are slowed down by HIs. MCT significantly overestimates the slowing influence of dynamic particle caging. The dynamic scattering functions obtained in the ASD simulations are in overall good agreement with our dynamic light scattering (DLS) results for a concentration series of charged silica spheres in an organic solvent mixture, in the experimental time window and wavenumber range. From the simulation data for the time derivative of the width function associated with fc(q, t), there is indication of long-time exponential decay of fc(q, t), for wavenumbers around the location of the static structure factor principal peak. 
The experimental scattering functions in the probed time range are consistent with a time-wavenumber factorization scaling behavior of fc(q, t) that was first reported by Segrè and Pusey [Phys. Rev. Lett. 77, 771 (1996)] for suspensions of hard spheres. Our BD simulation and MCT results predict a significant violation of exact factorization scaling which, however, is approximately restored according to the ASD results when HIs are accounted for, consistent with the experimental findings for fc(q, t). Our study of collective diffusion is complemented by simulation and theoretical results for the self-intermediate scattering function, fs(q, t), its non-Gaussian parameter α2(t), and the particle mean squared displacement W(t) and its time derivative. Since self-diffusion properties are not assessed in standard DLS measurements, a method to deduce W(t) approximately from fc(q, t) is theoretically validated.

  18. MSM/RD: Coupling Markov state models of molecular kinetics with reaction-diffusion simulations

    NASA Astrophysics Data System (ADS)

    Dibak, Manuel; del Razo, Mauricio J.; De Sancho, David; Schütte, Christof; Noé, Frank

    2018-06-01

    Molecular dynamics (MD) simulations can model the interactions between macromolecules with high spatiotemporal resolution but at a high computational cost. By combining high-throughput MD with Markov state models (MSMs), it is now possible to obtain long time-scale behavior of small to intermediate biomolecules and complexes. To model the interactions of many molecules at large length scales, particle-based reaction-diffusion (RD) simulations are more suitable but lack molecular detail. Thus, coupling MSMs and RD simulations (MSM/RD) would be highly desirable, as they could efficiently produce simulations at large time and length scales, while still conserving the characteristic features of the interactions observed at atomic detail. While such a coupling seems straightforward, fundamental questions are still open: Which definition of MSM states is suitable? Which protocol to merge and split RD particles in an association/dissociation reaction will conserve the correct bimolecular kinetics and thermodynamics? In this paper, we make the first step toward MSM/RD by laying out a general theory of coupling and proposing a first implementation for association/dissociation of a protein with a small ligand (A + B ⇌ C). Applications on a toy model and CO diffusion into the heme cavity of myoglobin are reported.
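    The MSM half of the coupling reduces, in its simplest form, to propagating a probability distribution with a row-stochastic transition matrix. The two states (unbound A + B vs. bound C) and the transition probabilities below are made-up illustration values, not fitted kinetics:

```python
def propagate(dist, T, steps):
    """Propagate a probability distribution `dist` through `steps` applications
    of the row-stochastic transition matrix `T` (a discrete-time MSM)."""
    for _ in range(steps):
        dist = [sum(dist[i] * T[i][j] for i in range(len(dist)))
                for j in range(len(dist))]
    return dist

# per-step probabilities: unbound->bound 0.1, bound->unbound 0.2 (hypothetical)
T = [[0.9, 0.1],
     [0.2, 0.8]]
pi = propagate([1.0, 0.0], T, 200)   # start fully unbound
```

    Repeated application converges to the stationary distribution, here (2/3, 1/3), whose ratio reflects the ratio of the binding and unbinding probabilities; preserving exactly this kind of bimolecular equilibrium when particles are merged and split is one of the open questions the paper raises for MSM/RD.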

  19. General-relativistic Large-eddy Simulations of Binary Neutron Star Mergers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radice, David, E-mail: dradice@astro.princeton.edu

    The flow inside remnants of binary neutron star (NS) mergers is expected to be turbulent, because of magnetohydrodynamic instabilities activated at scales too small to be resolved in simulations. To study the large-scale impact of these instabilities, we develop a new formalism, based on the large-eddy simulation technique, for the modeling of subgrid-scale turbulent transport in general relativity. We apply it, for the first time, to the simulation of the late-inspiral and merger of two NSs. We find that turbulence can significantly affect the structure and survival time of the merger remnant, as well as its gravitational-wave (GW) and neutrino emissions. The former will be relevant for GW observation of merging NSs. The latter will affect the composition of the outflow driven by the merger and might influence its nucleosynthetic yields. The accretion rate after black hole formation is also affected. Nevertheless, we find that, for the most likely values of the turbulence mixing efficiency, these effects are relatively small and the GW signal will be affected only weakly by the turbulence. Thus, our simulations provide a first validation of all existing post-merger GW models.

  20. Progress in Computational Simulation of Earthquakes

    NASA Technical Reports Server (NTRS)

    Donnellan, Andrea; Parker, Jay; Lyzenga, Gregory; Judd, Michele; Li, P. Peggy; Norton, Charles; Tisdale, Edwin; Granat, Robert

    2006-01-01

    GeoFEST(P) is a computer program written for use in the QuakeSim project, which is devoted to development and improvement of means of computational simulation of earthquakes. GeoFEST(P) models interacting earthquake fault systems from the fault-nucleation to the tectonic scale. The development of GeoFEST(P) has involved coupling of two programs: GeoFEST and the Pyramid Adaptive Mesh Refinement Library. GeoFEST is a message-passing-interface-parallel code that utilizes a finite-element technique to simulate evolution of stress, fault slip, and plastic/elastic deformation in realistic materials like those of faulted regions of the crust of the Earth. The products of such simulations are synthetic observable time-dependent surface deformations on time scales from days to decades. The Pyramid Adaptive Mesh Refinement Library is a software library that facilitates the generation of computational meshes for solving physical problems. In an application of GeoFEST(P), a computational grid can be dynamically adapted as stress grows on a fault. Simulations on workstations using a few tens of thousands of stress and displacement finite elements can now be expanded to multiple millions of elements with greater than 98-percent scaled efficiency on many hundreds of parallel processors (see figure).

  1. Asynchronous adaptive time step in quantitative cellular automata modeling

    PubMed Central

    Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan

    2004-01-01

    Background The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, upon it, how to solve the heavy time consumption issue in simulation. Results Based on an extended, language-based cellular automata system that allows ordinary differential equations in models, we introduce a method implementing asynchronous adaptive time steps in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup rate of 4–5 is achieved in the given example. Conclusions Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. A distributed, adaptive time step is a practical solution in the cellular automata environment. PMID:15222901
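    The asynchronous adaptive time step idea can be sketched with an event queue: each cell integrates its own ODE with a step size adapted from a step-doubling error estimate, and a heap orders cells by their next update time. The decay ODE, tolerances, and step bounds below are illustrative stand-ins, not the paper's system:

```python
import heapq

def simulate_cells(cells, t_end=5.0, tol=1e-4):
    """Asynchronous adaptive-step sketch: each cell (y, k, dt) integrates
    dy/dt = -k*y with forward Euler; step-doubling gives a local error
    estimate that shrinks or grows each cell's own step, and a heap
    schedules cells by their next update time."""
    heap = [(0.0, i) for i in range(len(cells))]
    heapq.heapify(heap)
    while heap:
        t, i = heapq.heappop(heap)
        if t >= t_end - 1e-12:
            continue                          # this cell has reached t_end
        y, k, dt = cells[i]
        h = min(dt, t_end - t)                # do not overshoot t_end
        y_full = y + h * (-k * y)             # one full Euler step
        y_mid = y + 0.5 * h * (-k * y)        # two half steps
        y_half = y_mid + 0.5 * h * (-k * y_mid)
        err = abs(y_half - y_full)            # local error estimate
        if err > tol and h > 1e-6:
            cells[i] = (y, k, h / 2.0)        # reject: retry, smaller step
            heapq.heappush(heap, (t, i))
        else:
            grow = 2.0 if err < tol / 4.0 else 1.0
            cells[i] = (y_half, k, min(dt * grow, 0.5))
            heapq.heappush(heap, (t + h, i))
    return [c[0] for c in cells]

# two cells with different rate constants and initial trial steps
cells = [(1.0, 0.5, 0.1), (1.0, 1.0, 0.1)]
final = simulate_cells(cells)
```

    The gain comes from cells with slow dynamics taking large steps while fast cells refine locally, instead of forcing every cell onto the globally smallest step.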

  2. Taking care of business in a flash: constraining the time-scale for low-mass satellite quenching with ELVIS

    NASA Astrophysics Data System (ADS)

    Fillingham, Sean P.; Cooper, Michael C.; Wheeler, Coral; Garrison-Kimmel, Shea; Boylan-Kolchin, Michael; Bullock, James S.

    2015-12-01

    The vast majority of dwarf satellites orbiting the Milky Way and M31 are quenched, while comparable galaxies in the field are gas rich and star forming. Assuming that this dichotomy is driven by environmental quenching, we use the Exploring the Local Volume in Simulations (ELVIS) suite of N-body simulations to constrain the characteristic time-scale upon which satellites must quench following infall into the virial volumes of their hosts. The high satellite quenched fraction observed in the Local Group demands an extremely short quenching time-scale (~2 Gyr) for dwarf satellites in the mass range M⋆ ~ 10^6-10^8 M⊙. This quenching time-scale is significantly shorter than that required to explain the quenched fraction of more massive satellites (~8 Gyr), both in the Local Group and in more massive host haloes, suggesting a dramatic change in the dominant satellite quenching mechanism at M⋆ ≲ 10^8 M⊙. Combining our work with the results of complementary analyses in the literature, we conclude that the suppression of star formation in massive satellites (M⋆ ~ 10^8-10^11 M⊙) is broadly consistent with being driven by starvation, such that the satellite quenching time-scale corresponds to the cold gas depletion time. Below a critical stellar mass scale of ~10^8 M⊙, however, the required quenching times are much shorter than the expected cold gas depletion times. Instead, quenching must act on a time-scale comparable to the dynamical time of the host halo. We posit that ram-pressure stripping can naturally explain this behaviour, with the critical mass (of M⋆ ~ 10^8 M⊙) corresponding to haloes with gravitational restoring forces that are too weak to overcome the drag force encountered when moving through an extended, hot circumgalactic medium.

  3. Computer considerations for real time simulation of a generalized rotor model

    NASA Technical Reports Server (NTRS)

    Howe, R. M.; Fogarty, L. E.

    1977-01-01

    Scaled equations were developed to meet requirements for real time computer simulation of the rotor system research aircraft. These equations form the basis for consideration of both digital and hybrid mechanization for real time simulation. For all-digital simulation, estimates of the required speed in terms of equivalent operations per second are developed based on the complexity of the equations and the required integration frame rates. For both conventional hybrid simulation and hybrid simulation using time-shared analog elements, the amount of required equipment is estimated, along with a consideration of the dynamic errors. Conventional hybrid mechanization using analog simulation of those rotor equations which involve rotor-spin frequencies (these constitute the bulk of the equations) requires too much analog equipment. Hybrid simulation using time-sharing techniques for the analog elements appears possible with a reasonable amount of analog equipment. All-digital simulation with affordable general-purpose computers is not possible because of speed limitations, but specially configured digital computers do have the required speed and constitute the recommended approach.

  4. Ultrafast spectroscopy reveals subnanosecond peptide conformational dynamics and validates molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Spörlein, Sebastian; Carstens, Heiko; Satzger, Helmut; Renner, Christian; Behrendt, Raymond; Moroder, Luis; Tavan, Paul; Zinth, Wolfgang; Wachtveitl, Josef

    2002-06-01

    Femtosecond time-resolved spectroscopy on model peptides with built-in light switches combined with computer simulation of light-triggered motions offers an attractive integrated approach toward the understanding of peptide conformational dynamics. It was applied to monitor the light-induced relaxation dynamics occurring on subnanosecond time scales in a peptide that was backbone-cyclized with an azobenzene derivative as optical switch and spectroscopic probe. The femtosecond spectra permit the clear distinguishing and characterization of the subpicosecond photoisomerization of the chromophore, the subsequent dissipation of vibrational energy, and the subnanosecond conformational relaxation of the peptide. The photochemical cis/trans-isomerization of the chromophore and the resulting peptide relaxations have been simulated with molecular dynamics calculations. The calculated reaction kinetics, as monitored by the energy content of the peptide, were found to match the spectroscopic data. Thus we verify that all-atom molecular dynamics simulations can quantitatively describe the subnanosecond conformational dynamics of peptides, strengthening confidence in corresponding predictions for longer time scales.

  5. Thermo-Oxidative Induced Damage in Polymer Composites: Microstructure Image-Based Multi-Scale Modeling and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Hussein, Rafid M.; Chandrashekhara, K.

    2017-11-01

    A multi-scale modeling approach is presented to simulate and validate thermo-oxidation shrinkage and cracking damage of a high temperature polymer composite. The multi-scale approach investigates coupled transient diffusion-reaction and static structural analyses from the macro- to the micro-scale. The micro-scale shrinkage deformation and cracking damage are simulated and validated using 2D and 3D simulations. Localized shrinkage displacement boundary conditions for the micro-scale simulations are determined from the respective meso- and macro-scale simulations, conducted for a cross-ply laminate. The meso-scale geometrical domain and the micro-scale geometry and mesh are developed using the object oriented finite element (OOF) software. The macro-scale shrinkage and weight loss are measured using unidirectional coupons and used to build the macro-shrinkage model. The cross-ply coupons are used to validate the macro-shrinkage model against the shrinkage profiles acquired from scanning electron images at the cracked surface. The macro-shrinkage model deformation shows a discrepancy when the micro-scale image-based cracking is computed. The local maximum shrinkage strain is assumed to be 13 times the maximum macro-shrinkage strain of 2.5 × 10^-5, upon which the discrepancy is minimized. The microcrack damage of the composite is modeled using a static elastic analysis with extended finite elements and cohesive surfaces, considering the spatial evolution of the modulus. The 3D shrinkage displacements are fed to the model using node-wise boundary/domain conditions of the respective oxidized region. The simulated microcrack length, meander, and opening closely match the crack in the area of interest of the scanning electron images.

  6. Determining erosion relevant soil characteristics with a small-scale rainfall simulator

    NASA Astrophysics Data System (ADS)

    Schindewolf, M.; Schmidt, J.

    2009-04-01

    The use of soil erosion models is of great importance in soil and water conservation. Routine application of these models on the regional scale is limited not least by their high parameter demands. Although the EROSION 3D simulation model operates with a comparatively low number of parameters, some of its input variables can only be determined by rainfall simulation experiments. The existing database of EROSION 3D was created in the mid-1990s from large-scale rainfall simulation experiments on 22 m × 2 m experimental plots. This database does not yet cover all soil and field conditions adequately. A new campaign of experiments is therefore essential to produce additional information, especially with respect to the effects of newer soil management practices (e.g., long-term conservation tillage, no-tillage). The rainfall simulator used in the current campaign consists of 30 identical modules equipped with oscillating rainfall nozzles. Veejet 80/100 nozzles (Spraying Systems Co., Wheaton, IL) are used in order to ensure the best possible comparability with natural rainfall with respect to raindrop size distribution and momentum transfer. The central objectives for the small-scale rainfall simulator are efficient application and the provision of results comparable to large-scale rainfall simulation experiments. A crucial problem in using the small-scale simulator is its restriction to rather small volume rates of surface runoff. Under these conditions soil detachment is governed by raindrop impact, so the contribution of surface runoff to particle detachment cannot be reproduced adequately by a small-scale rainfall simulator. With this problem in mind, this paper presents an enhanced small-scale simulator that allows a virtual multiplication of the plot length by feeding additional sediment-loaded water onto the plot from upstream. It is thus possible to overcome the plot length limitation of 3 m while reproducing nearly the same flow conditions as in rainfall experiments on standard plots. The simulator has been extensively applied to plots of different soil types, crop types, and management systems. The comparison with existing data sets obtained by large-scale rainfall simulations shows that the results can be adequately reproduced by the applied combination of a small-scale rainfall simulator and sediment-loaded water influx.

  7. Bayesian Techniques for Comparing Time-dependent GRMHD Simulations to Variable Event Horizon Telescope Observations

    NASA Astrophysics Data System (ADS)

    Kim, Junhan; Marrone, Daniel P.; Chan, Chi-Kwan; Medeiros, Lia; Özel, Feryal; Psaltis, Dimitrios

    2016-12-01

    The Event Horizon Telescope (EHT) is a millimeter-wavelength, very-long-baseline interferometry (VLBI) experiment that is capable of observing black holes with horizon-scale resolution. Early observations have revealed variable horizon-scale emission in the Galactic Center black hole, Sagittarius A* (Sgr A*). Comparing such observations to time-dependent general relativistic magnetohydrodynamic (GRMHD) simulations requires statistical tools that explicitly consider the variability in both the data and the models. We develop here a Bayesian method to compare time-resolved simulation images to variable VLBI data, in order to infer model parameters and perform model comparisons. We use mock EHT data based on GRMHD simulations to explore the robustness of this Bayesian method and contrast it to approaches that do not consider the effects of variability. We find that time-independent models lead to offset values of the inferred parameters with artificially reduced uncertainties. Moreover, neglecting the variability in the data and the models often leads to erroneous model selections. We finally apply our method to the early EHT data on Sgr A*.
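    The core of such a variability-aware comparison can be sketched in a few lines. Everything below is a hypothetical toy, not the authors' pipeline: the "model" supplies a distribution of visibility amplitudes (one per simulation snapshot), and the likelihood of an observed amplitude marginalizes over that distribution rather than comparing against a single time-averaged image.

```python
import math
import random

random.seed(0)

def snapshot_amplitudes(scale, n=500):
    """Stand-in for GRMHD snapshot predictions: a variable set of amplitudes
    whose overall level depends on a hypothetical model parameter `scale`."""
    return [scale * (1.0 + 0.3 * random.gauss(0, 1)) for _ in range(n)]

def log_likelihood(obs, snapshots, sigma=0.05):
    """Marginalize over snapshots: p(obs | model) = mean_k N(obs; A_k, sigma),
    where sigma is the (hypothetical) measurement noise."""
    vals = [math.exp(-0.5 * ((obs - a) / sigma) ** 2) /
            (sigma * math.sqrt(2 * math.pi)) for a in snapshots]
    return math.log(sum(vals) / len(vals))

obs = 1.0
# Compare two hypothetical parameter values by marginal likelihood.
ll_a = log_likelihood(obs, snapshot_amplitudes(scale=1.0))
ll_b = log_likelihood(obs, snapshot_amplitudes(scale=2.0))
print(ll_a > ll_b)
```

    Because the snapshot spread enters the likelihood, parameter uncertainties reflect the model's intrinsic variability, which is the effect the abstract reports is lost (yielding artificially reduced uncertainties) when a single time-independent image is used instead.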

  8. BAYESIAN TECHNIQUES FOR COMPARING TIME-DEPENDENT GRMHD SIMULATIONS TO VARIABLE EVENT HORIZON TELESCOPE OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Junhan; Marrone, Daniel P.; Chan, Chi-Kwan

    2016-12-01

    The Event Horizon Telescope (EHT) is a millimeter-wavelength, very-long-baseline interferometry (VLBI) experiment that is capable of observing black holes with horizon-scale resolution. Early observations have revealed variable horizon-scale emission in the Galactic Center black hole, Sagittarius A* (Sgr A*). Comparing such observations to time-dependent general relativistic magnetohydrodynamic (GRMHD) simulations requires statistical tools that explicitly consider the variability in both the data and the models. We develop here a Bayesian method to compare time-resolved simulation images to variable VLBI data, in order to infer model parameters and perform model comparisons. We use mock EHT data based on GRMHD simulations to explore the robustness of this Bayesian method and contrast it to approaches that do not consider the effects of variability. We find that time-independent models lead to offset values of the inferred parameters with artificially reduced uncertainties. Moreover, neglecting the variability in the data and the models often leads to erroneous model selections. We finally apply our method to the early EHT data on Sgr A*.

  9. Transition from large-scale to small-scale dynamo.

    PubMed

    Ponty, Y; Plunian, F

    2011-04-15

    The dynamo equations are solved numerically with a helical forcing corresponding to the Roberts flow. In the fully turbulent regime the flow behaves as a Roberts flow on long time scales, plus turbulent fluctuations at short time scales. The dynamo onset is controlled by the long time scales of the flow, in agreement with the former Karlsruhe experimental results. The dynamo mechanism is governed by a generalized α effect, which includes both the usual α effect and turbulent diffusion, plus all higher order effects. Beyond the onset we find that this generalized α effect scales as O(Rm⁻¹), suggesting the takeover of small-scale dynamo action. This is confirmed by simulations in which dynamo occurs even if the large-scale field is artificially suppressed.

  10. Demonstration of coal reburning for cyclone boiler NO{sub x} control. Appendix, Book 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Based on the industry need for a pilot-scale cyclone boiler simulator, Babcock & Wilcox (B&W) designed, fabricated, and installed such a facility at its Alliance Research Center (ARC) in 1985. The project involved converting an existing pulverized coal-fired facility to cyclone-firing capability. Additionally, convective section tube banks were installed in the upper furnace in order to simulate a typical boiler convection pass. The small boiler simulator (SBS) is designed to simulate most fireside aspects of full-size utility boilers, such as combustion and flue gas emissions characteristics, fireside deposition, etc. Prior to the design of the pilot-scale cyclone boiler simulator, the various cyclone boiler types were reviewed in order to identify the inherent cyclone boiler design characteristics applicable to the majority of these boilers. The cyclone boiler characteristics that were reviewed include NO{sub x} emissions, furnace exit gas temperature (FEGT), carbon loss, and total furnace residence time. Previous pilot-scale cyclone-fired furnace experience identified the following concerns: (1) operability of a small cyclone furnace (e.g., continuous slag tapping capability); (2) the optimum cyclone(s) configuration for the pilot-scale unit; (3) compatibility of NO{sub x} levels, carbon burnout, cyclone ash carryover to the convection pass, cyclone temperature, furnace residence time, and FEGT.

  11. WarpIV: In situ visualization and analysis of ion accelerator simulations

    DOE PAGES

    Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc; ...

    2016-05-09

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications, from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  12. A fast image simulation algorithm for scanning transmission electron microscopy.

    PubMed

    Ophus, Colin

    2017-01-01

    Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. We present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f⁴ compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this method with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.
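    The quoted quartic scaling lends itself to a quick back-of-envelope estimate. The function below is purely illustrative (it is not part of any PRISM implementation) and simply tabulates the speedup factor implied by the abstract for the stated range of interpolation factors:

```python
def prism_speedup(f):
    """Speedup over multislice implied by the quoted f^4 scaling
    (illustrative; ignores constant factors and accuracy trade-offs)."""
    return f ** 4

# Typical atomic-resolution values of f quoted in the abstract: 4 to 20.
for f in (4, 8, 16, 20):
    print(f, prism_speedup(f))
```

    Even at the low end (f = 4) the implied speedup is two orders of magnitude, which is consistent with the abstract's claim that realistic-size samples become tractable.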

  13. A fast image simulation algorithm for scanning transmission electron microscopy

    DOE PAGES

    Ophus, Colin

    2017-05-10

    Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. Here, we present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f⁴ compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this method with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.

  14. Integrating macro and micro scale approaches in the agent-based modeling of residential dynamics

    NASA Astrophysics Data System (ADS)

    Saeedi, Sara

    2018-06-01

    With the advancement of computational modeling and simulation (M&S) methods as well as data collection technologies, urban dynamics modeling has improved substantially over the last several decades. Complex urban dynamics processes are most effectively modeled not at the macro-scale, but following a bottom-up approach, by simulating the decisions of individual entities, or residents. Agent-based modeling (ABM) provides the key to a dynamic M&S framework that is able to integrate socioeconomic with environmental models, and to operate at both micro and macro geographical scales. In this study, a multi-agent system is proposed to simulate residential dynamics by considering spatiotemporal land use changes. In the proposed ABM, macro-scale land use change prediction is modeled by an Artificial Neural Network (ANN) and deployed as the agent environment, while micro-scale residential dynamics are implemented autonomously by household agents. These two levels of simulation interact and jointly drive the urbanization process in an urban area of Tehran, Iran. The model simulates the behavior of individual households in finding ideal locations to dwell. The household agents are divided into three main groups based on their income rank, and they are further classified into different categories based on a number of attributes. These attributes determine the households' preferences for finding new dwellings and change with time. The ABM environment is represented by a land-use map in which the properties of the land parcels change dynamically over the simulation time. The outputs of this model are a set of maps showing the pattern of different groups of households in the city. These patterns can be used by city planners to find optimum locations for building new residential units or adding new services to the city.
The simulation results show that combining macro- and micro-level simulation exploits the full potential of the ABM to elucidate the driving mechanisms of urbanization and to support decision-making in urban management.

  15. Molecular Dynamics Simulations and Kinetic Measurements to Estimate and Predict Protein-Ligand Residence Times.

    PubMed

    Mollica, Luca; Theret, Isabelle; Antoine, Mathias; Perron-Sierra, Françoise; Charton, Yves; Fourquez, Jean-Marie; Wierzbicki, Michel; Boutin, Jean A; Ferry, Gilles; Decherchi, Sergio; Bottegoni, Giovanni; Ducrot, Pierre; Cavalli, Andrea

    2016-08-11

    Ligand-target residence time is emerging as a key drug discovery parameter because it can reliably predict drug efficacy in vivo. Experimental approaches to binding and unbinding kinetics are nowadays available, but we still lack reliable computational tools for predicting kinetics and residence time. Most attempts have been based on brute-force molecular dynamics (MD) simulations, which are CPU-demanding and not yet particularly accurate. We recently reported a new scaled-MD-based protocol, which showed potential for residence time prediction in drug discovery. Here, we further challenged our procedure's predictive ability by applying our methodology to a series of glucokinase activators that could be useful for treating type 2 diabetes mellitus. We combined scaled MD with experimental kinetics measurements and X-ray crystallography, promptly checking the protocol's reliability by directly comparing computational predictions and experimental measures. The good agreement highlights the potential of our scaled-MD-based approach as an innovative method for computationally estimating and predicting drug residence times.

  16. Inertial particle manipulation in microscale oscillatory flows

    NASA Astrophysics Data System (ADS)

    Agarwal, Siddhansh; Rallabandi, Bhargav; Raju, David; Hilgenfeldt, Sascha

    2017-11-01

    Recent work has shown that inertial effects in oscillating flows can be exploited for simultaneous transport and differential displacement of microparticles, enabling size sorting of such particles on extraordinarily short time scales. Generalizing previous theory efforts, we here derive a two-dimensional time-averaged version of the Maxey-Riley equation that includes the effect of an oscillating interface to model particle dynamics in such flows. Separating the steady transport time scale from the oscillatory time scale results in a simple and computationally efficient reduced model that preserves all slow-time features of the full unsteady Maxey-Riley simulations, including inertial particle displacement. Comparison is made not only to full simulations, but also to experiments using oscillating bubbles as the driving interfaces. In this case, the theory predicts either an attraction to or a repulsion from the bubble interface due to inertial effects, so that versatile particle manipulation is possible using differences in particle size, particle/fluid density contrast and streaming strength. We also demonstrate that these predictions are in agreement with experiments.

  17. A performance analysis of ensemble averaging for high fidelity turbulence simulations at the strong scaling limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr

    We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.

  18. A performance analysis of ensemble averaging for high fidelity turbulence simulations at the strong scaling limit

    DOE PAGES

    Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr; ...

    2017-06-07

    We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.

  19. Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.

    2002-01-01

    Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and reducing the solution time through the operation counts associated with solving the discretized equations to a sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.

  20. Mesoscale Thermodynamic Analysis of Atomic-Scale Dislocation-Obstacle Interactions Simulated by Molecular Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monet, Giath; Bacon, David J; Osetskiy, Yury N

    2010-01-01

    Given the time and length scales in molecular dynamics (MD) simulations of dislocation-defect interactions, quantitative MD results cannot be used directly in larger scale simulations or compared directly with experiment. A method to extract fundamental quantities from MD simulations is proposed here. The first quantity is a critical stress defined to characterise the obstacle resistance. This mesoscopic parameter, rather than the obstacle 'strength' designed for a point obstacle, is to be used for an obstacle of finite size. At finite temperature, our analyses of MD simulations allow the activation energy to be determined as a function of temperature. The results confirm the proportionality between activation energy and temperature that is frequently observed by experiment. By coupling the data for the activation energy and the critical stress as functions of temperature, we show how the activation energy can be deduced at a given value of the critical stress.
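    The coupling step described in the abstract (fitting activation energy versus temperature and critical stress versus temperature, then eliminating temperature) can be sketched numerically. All data values and fit forms below are made up for illustration; they are not from the study:

```python
# Hypothetical MD-derived data: activation energy E(T) and critical stress
# tau_c(T) at a few temperatures (all numbers invented for this sketch).
temps = [100.0, 200.0, 300.0, 450.0, 600.0]       # K
energies = [0.21, 0.39, 0.62, 0.91, 1.22]         # eV
stresses = [310.0, 260.0, 210.0, 140.0, 75.0]     # MPa

# Proportionality E ≈ c * T: least-squares slope through the origin.
c = sum(t * e for t, e in zip(temps, energies)) / sum(t * t for t in temps)

# Linear fit tau_c ≈ a + b * T (ordinary least squares).
n = len(temps)
tbar = sum(temps) / n
sbar = sum(stresses) / n
b = sum((t - tbar) * (s - sbar) for t, s in zip(temps, stresses)) / \
    sum((t - tbar) ** 2 for t in temps)
a = sbar - b * tbar

def energy_at_stress(tau):
    """Eliminate T between the two fits: invert the stress fit for T,
    then evaluate E(T) = c * T."""
    return c * (tau - a) / b

print(round(c, 5), round(energy_at_stress(200.0), 2))
```

    The point of the elimination is that the mesoscopic quantity delivered to larger-scale models is an activation energy as a function of critical stress, with temperature serving only as the intermediate variable.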

  1. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.

    PubMed

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
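    The dry-run idea, one process performing the per-rank work of a virtual parallel job while skipping communication, can be sketched as follows. The partitioning rule and the toy spike logic are invented stand-ins, not the NEST implementation:

```python
def partition(n_neurons, rank, total_ranks):
    """Round-robin assignment of neurons to ranks; a stand-in for a
    simulator's real distribution scheme."""
    return [i for i in range(n_neurons) if i % total_ranks == rank]

def dry_run(n_neurons, total_ranks, steps):
    """Run the work of a single virtual rank of a `total_ranks`-process job.
    Memory and runtime per rank can then be profiled without a cluster."""
    local = partition(n_neurons, rank=0, total_ranks=total_ranks)
    spikes = 0
    for _ in range(steps):
        # Update only the locally owned neurons (same cost as one real rank).
        spikes += sum(1 for i in local if i % 7 == 0)  # toy spike rule
        # The communication step with the other ranks is skipped entirely.
    return len(local), spikes

local_size, spikes = dry_run(n_neurons=1000, total_ranks=8, steps=10)
print(local_size, spikes)
```

    The key property mirrored here is that the per-rank workload (the size of `local` and the per-step update cost) matches what one process of a real parallel run would see, which is why dry-run measurements of memory and runtime can track the parallel case closely.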

  2. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    PubMed Central

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling. PMID:28701946

  3. Choice of time-scale in Cox's model analysis of epidemiologic cohort data: a simulation study.

    PubMed

    Thiébaut, Anne C M; Bénichou, Jacques

    2004-12-30

    Cox's regression model is widely used for assessing associations between potential risk factors and disease occurrence in epidemiologic cohort studies. Although age is often a strong determinant of disease risk, authors have frequently used time-on-study instead of age as the time-scale, as for clinical trials. Unless the baseline hazard is an exponential function of age, this approach can yield different estimates of relative hazards than using age as the time-scale, even when age is adjusted for. We performed a simulation study in order to investigate the existence and magnitude of bias for different degrees of association between age and the covariate of interest. Age to disease onset was generated from exponential, Weibull or piecewise Weibull distributions, and both fixed and time-dependent dichotomous covariates were considered. We observed no bias upon using age as the time-scale. Upon using time-on-study, we verified the absence of bias for exponentially distributed age to disease onset. For non-exponential distributions, we found that bias could occur even when the covariate of interest was independent from age. It could be severe in case of substantial association with age, especially with time-dependent covariates. These findings were illustrated on data from a cohort of 84,329 French women followed prospectively for breast cancer occurrence. In view of our results, we strongly recommend not using time-on-study as the time-scale for analysing epidemiologic cohort data.

  4. Real-time gray-scale photolithography for fabrication of continuous microstructure

    NASA Astrophysics Data System (ADS)

    Peng, Qinjun; Guo, Yongkang; Liu, Shijie; Cui, Zheng

    2002-10-01

    A novel real-time gray-scale photolithography technique for the fabrication of continuous microstructures is presented, in which an LCD panel serves as a real-time gray-scale mask. The design principle of the technique is explained, and computer simulation results based on partially coherent imaging theory are given for the patterning of a microlens array and a zigzag grating. An experimental setup is described, and a microlens array and a zigzag grating are fabricated on panchromatic silver halide sensitized gelatin with trypsinase etching.

  5. Assessing self-care and social function using a computer adaptive testing version of the pediatric evaluation of disability inventory.

    PubMed

    Coster, Wendy J; Haley, Stephen M; Ni, Pengsheng; Dumas, Helene M; Fragala-Pinkham, Maria A

    2008-04-01

    OBJECTIVE: To examine score agreement, validity, precision, and response burden of a prototype computer adaptive testing (CAT) version of the self-care and social function scales of the Pediatric Evaluation of Disability Inventory compared with the full-length version of these scales. DESIGN: Computer simulation analysis of cross-sectional and longitudinal retrospective data; cross-sectional prospective study. SETTING: Pediatric rehabilitation hospital, including inpatient acute rehabilitation, day school program, outpatient clinics; community-based day care, preschool, and children's homes. PARTICIPANTS: Children with disabilities (n=469) and 412 children with no disabilities (analytic sample); 38 children with disabilities and 35 children without disabilities (cross-validation sample). INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Summary scores from prototype CAT applications of each scale using 15-, 10-, and 5-item stopping rules; scores from the full-length self-care and social function scales; time (in seconds) to complete assessments and respondent ratings of burden. RESULTS: Scores from both computer simulations and field administration of the prototype CATs were highly consistent with scores from full-length administration (r range, .94-.99). Using computer simulation of retrospective data, discriminant validity and sensitivity to change of the CATs closely approximated those of the full-length scales, especially when the 15- and 10-item stopping rules were applied. In the cross-validation study the time to administer both CATs was 4 minutes, compared with over 16 minutes to complete the full-length scales. CONCLUSIONS: Self-care and social function score estimates from CAT administration are highly comparable with those obtained from full-length scale administration, with small losses in validity and precision and substantial decreases in administration time.
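    A minimal sketch of how an item-count stopping rule drives a CAT session. The one-parameter logistic (Rasch) response model, the item bank, and the crude ability-update rule below are generic illustrations, not the PEDI prototype:

```python
import math
import random

random.seed(2)

def p_correct(theta, b):
    """Rasch model: probability of endorsing an item of difficulty b
    given ability theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def run_cat(true_theta, item_bank, max_items):
    """Administer items until the stopping rule (a fixed item count,
    like the 15-, 10-, and 5-item rules above) is reached."""
    theta, remaining = 0.0, list(item_bank)
    for _ in range(max_items):
        # Select the most informative item: difficulty nearest the estimate.
        b = min(remaining, key=lambda d: abs(d - theta))
        remaining.remove(b)
        answer = random.random() < p_correct(true_theta, b)
        # Crude step update toward the response (a stand-in for maximum-
        # likelihood estimation after each item).
        theta += 0.5 * ((1.0 if answer else 0.0) - p_correct(theta, b))
    return theta

bank = [b / 4.0 for b in range(-12, 13)]  # 25 difficulties from -3 to 3
estimate = run_cat(true_theta=1.5, item_bank=bank, max_items=15)
print(round(estimate, 2))
```

    The trade-off studied in the abstract lives in `max_items`: fewer items mean less respondent burden but a noisier ability estimate, which is why the 15- and 10-item rules tracked the full-length scales more closely than the 5-item rule.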

  6. The Stop-Only-While-Shocking algorithm reduces hands-off time by 17% during cardiopulmonary resuscitation - a simulation study.

    PubMed

    Koch Hansen, Lars; Mohammed, Anna; Pedersen, Magnus; Folkestad, Lars; Brodersen, Jacob; Hey, Thomas; Lyhne Christensen, Nicolaj; Carter-Storch, Rasmus; Bendix, Kristoffer; Hansen, Morten R; Brabrand, Mikkel

    2016-12-01

    Reducing hands-off time during cardiopulmonary resuscitation (CPR) is believed to increase survival after cardiac arrest by sustaining organ perfusion. The aim of our study was to investigate whether charging the defibrillator before rhythm analysis and shock delivery significantly reduces hands-off time compared with the European Resuscitation Council (ERC) 2010 CPR guideline algorithm in full-scale cardiac arrest scenarios. The study was designed as a full-scale cardiac arrest simulation study including administration of drugs. Participants were randomized to use either the Stop-Only-While-Shocking (SOWS) algorithm or the ERC2010 algorithm. In SOWS, chest compressions were only interrupted for a post-charging rhythm analysis and immediate shock delivery. A Resusci Anne HLR-D manikin and a LIFEPACK 20 defibrillator were used. The manikin recorded time and chest compressions. A sample size calculation with an α of 0.05 and 80% power showed that we should test four scenarios with each algorithm. Twenty-nine physicians participated in 11 scenarios. Hands-off time was significantly reduced, by 17% in relative terms, using the SOWS algorithm compared with ERC2010 [22.1% (SD 2.3) hands-off time vs. 26.6% (SD 4.8); P<0.05]. In full-scale cardiac arrest simulations, a minor change consisting of charging the defibrillator before the rhythm check reduces hands-off time by 17% compared with the ERC2010 guidelines.
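    The reported 17% is a relative reduction, which can be checked directly from the quoted hands-off fractions:

```python
# Hands-off time as a percentage of total CPR time, from the abstract.
erc2010 = 26.6   # % hands-off time, ERC 2010 algorithm
sows = 22.1      # % hands-off time, Stop-Only-While-Shocking algorithm

# Relative reduction: (26.6 - 22.1) / 26.6 ≈ 16.9%, i.e. roughly 17%.
relative_reduction = (erc2010 - sows) / erc2010 * 100
print(round(relative_reduction, 1))
```

    Note the distinction from the absolute difference of 4.5 percentage points; the abstract's "17%" is the relative figure.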

  7. Micro-Macro Coupling in Plasma Self-Organization Processes during Island Coalescence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Weigang; Lapenta, Giovanni

    The collisionless island coalescence process is studied with particle-in-cell simulations, as an internal-driven magnetic self-organization scenario. The macroscopic relaxation time, corresponding to the total time required for the coalescence to complete, is found to depend crucially on the scale of the system. For small-scale systems, where the macroscopic scales and the dissipation scales are more tightly coupled, the relaxation time is independent of the strength of the internal driving force: the small-scale processes of magnetic reconnection adjust to the amount of the initial magnetic flux to be reconnected, indicating that at the microscopic scales reconnection is enslaved by the macroscopic drive. However, for large-scale systems, where the micro-macro scale separation is larger, the relaxation time becomes dependent on the driving force.

  8. Power spectrum for the small-scale Universe

    NASA Astrophysics Data System (ADS)

    Widrow, Lawrence M.; Elahi, Pascal J.; Thacker, Robert J.; Richardson, Mark; Scannapieco, Evan

    2009-08-01

    The first objects to arise in a cold dark matter (CDM) universe present a daunting challenge for models of structure formation. In the ultra small-scale limit, CDM structures form nearly simultaneously across a wide range of scales. Hierarchical clustering no longer provides a guiding principle for theoretical analyses and the computation time required to carry out credible simulations becomes prohibitively high. To gain insight into this problem, we perform high-resolution (N = 720³ to 1584³) simulations of an Einstein-de Sitter cosmology where the initial power spectrum is P(k) ∝ kⁿ, with -2.5 ≤ n ≤ -1. Self-similar scaling is established for n = -1 and -2 more convincingly than in previous, lower resolution simulations and, for the first time, self-similar scaling is established for an n = -2.25 simulation. However, finite box-size effects induce departures from self-similar scaling in our n = -2.5 simulation. We compare our results with the predictions for the power spectrum from (one-loop) perturbation theory and demonstrate that the renormalization group approach suggested by McDonald improves perturbation theory's ability to predict the power spectrum in the quasi-linear regime. In the non-linear regime, our power spectra differ significantly from the widely used fitting formulae of Peacock & Dodds and Smith et al., and a new fitting formula is presented. Implications of our results for the stable clustering hypothesis versus halo model debate are discussed. Our power spectra are inconsistent with predictions of the stable clustering hypothesis in the high-k limit and lend credence to the halo model. Nevertheless, the fitting formula advocated in this paper is purely empirical and not derived from a specific formulation of the halo model.
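Self-similar scaling for a power-law spectrum P(k) ∝ kⁿ in an Einstein-de Sitter universe follows from dimensional analysis: with the linear variance growing as σ²(k, a) ∝ a²kⁿ⁺³, the nonlinear wavenumber must evolve as k_NL ∝ a^(-2/(n+3)). This is the textbook definition, not necessarily the exact normalization used in the paper:

```python
def knl_ratio(a1, a2, n):
    """Ratio k_NL(a2)/k_NL(a1) for P(k) ∝ k^n in Einstein-de Sitter,
    where sigma^2(k, a) ∝ a^2 k^(n+3) = 1 defines k_NL, so that
    k_NL ∝ a^(-2/(n+3))."""
    return (a2 / a1) ** (-2.0 / (n + 3.0))

# Over one doubling of the scale factor, the nonlinear scale moves to
# smaller k; the shift is faster for more negative n (flatter spectra).
print(knl_ratio(1.0, 2.0, -1.0))    # 2^(-1)
print(knl_ratio(1.0, 2.0, -2.25))   # 2^(-8/3)
```

As n approaches -3 the exponent diverges, which is why structures "form nearly simultaneously across a wide range of scales" in the ultra small-scale limit.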

  9. The impact of ARM on climate modeling

    DOE PAGES

    Randall, David A.; Del Genio, Anthony D.; Donner, Lee J.; ...

    2016-07-15

    Climate models are among humanity’s most ambitious and elaborate creations. They are designed to simulate the interactions of the atmosphere, ocean, land surface, and cryosphere on time scales far beyond the limits of deterministic predictability and including the effects of time-dependent external forcings. The processes involved include radiative transfer, fluid dynamics, microphysics, and some aspects of geochemistry, biology, and ecology. The models explicitly simulate processes on spatial scales ranging from the circumference of Earth down to 100 km or smaller and implicitly include the effects of processes on even smaller scales down to a micron or so. In addition, the atmospheric component of a climate model can be called an atmospheric general circulation model (AGCM).

  10. Towards Data-Driven Simulations of Wildfire Spread using Ensemble-based Data Assimilation

    NASA Astrophysics Data System (ADS)

    Rochoux, M. C.; Bart, J.; Ricci, S. M.; Cuenot, B.; Trouvé, A.; Duchaine, F.; Morel, T.

    2012-12-01

    Real-time prediction of a propagating wildfire remains a challenging task because the problem involves both multiple physics and multiple scales. The propagation speed of wildfires, also called the rate of spread (ROS), is determined by complex interactions between pyrolysis, combustion, flow dynamics, and atmospheric dynamics occurring at vegetation, topographical, and meteorological scales. Current operational fire spread models are mainly based on a semi-empirical parameterization of the ROS in terms of vegetation, topographical and meteorological properties. For the fire spread simulation to be predictive and compatible with operational applications, the uncertainty on the ROS model should be reduced. As recent progress in remote sensing technology provides new ways to monitor the fire front position, a promising approach to overcome the difficulties found in wildfire spread simulations is to integrate fire modeling and fire sensing technologies using data assimilation (DA). For this purpose we have developed a prototype data-driven wildfire spread simulator in order to provide optimal estimates of poorly known model parameters [*]. The data-driven simulation capability is adapted to more realistic wildfire spread: it considers a regional-scale fire spread model that is informed by observations of the fire front location. An Ensemble Kalman Filter (EnKF) algorithm based on a parallel computing platform (OpenPALM) was implemented in order to perform a multi-parameter sequential estimation in which wind magnitude and direction are estimated in addition to vegetation properties (see attached figure). The EnKF algorithm shows a good ability to track a small-scale grassland fire experiment and properly accounts for the sensitivity of the simulation outcomes to the control parameters.
As a conclusion, it was shown that data assimilation is a promising approach to more accurately forecast time-varying wildfire spread conditions as new airborne-like observations of the fire front location become available. [*] Rochoux, M.C., Delmotte, B., Cuenot, B., Ricci, S., and Trouvé, A. (2012) "Regional-scale simulations of wildland fire spread informed by real-time flame front observations", Proc. Combust. Inst., 34, in press http://dx.doi.org/10.1016/j.proci.2012.06.090 Figure: EnKF-based tracking of a small-scale grassland fire experiment, with estimation of wind and fuel parameters.
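The core of the parameter-estimation step described above can be sketched as a perturbed-observation EnKF update for a scalar parameter. The toy front model `h` and all numbers below are assumptions for illustration, not the paper's ROS model:

```python
import random

random.seed(0)

def enkf_update(theta, h, y_obs, r_obs):
    """One EnKF analysis step for a scalar parameter ensemble `theta`,
    observation operator `h`, and observation `y_obs` with variance `r_obs`."""
    y = [h(t) for t in theta]                     # predicted front positions
    n = len(theta)
    tm, ym = sum(theta) / n, sum(y) / n
    cov_ty = sum((t - tm) * (yi - ym) for t, yi in zip(theta, y)) / (n - 1)
    var_y = sum((yi - ym) ** 2 for yi in y) / (n - 1)
    gain = cov_ty / (var_y + r_obs)               # scalar Kalman gain
    # perturbed-observation EnKF: each member assimilates a jittered obs
    return [t + gain * (y_obs + random.gauss(0, r_obs ** 0.5) - yi)
            for t, yi in zip(theta, y)]

# Hypothetical front model: 1 h of spread advances the front 0.4 km per m/s
# of wind. Observing a 3.2 km advance implies a wind near 8 m/s.
h = lambda wind: 0.4 * wind
prior = [random.gauss(5.0, 2.0) for _ in range(50)]   # prior wind ensemble
post = enkf_update(prior, h, y_obs=3.2, r_obs=0.01)
print(sum(post) / len(post))   # posterior mean is pulled toward 8 m/s
```

In the paper's setting the state is richer (wind magnitude, direction, and vegetation properties) and the observation operator is the regional-scale fire spread model, but the analysis step has this same structure.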

  11. Forecast model for great earthquakes at the Nankai Trough subduction zone

    USGS Publications Warehouse

    Stuart, W.D.

    1988-01-01

    An earthquake instability model is formulated for recurring great earthquakes at the Nankai Trough subduction zone in southwest Japan. The model is quasistatic, two-dimensional, and has a displacement and velocity dependent constitutive law applied at the fault plane. A constant rate of fault slip at depth represents forcing due to relative motion of the Philippine Sea and Eurasian plates. The model simulates fault slip and stress for all parts of repeated earthquake cycles, including post-, inter-, pre- and coseismic stages. Calculated ground uplift is in agreement with most of the main features of elevation changes observed before and after the M=8.1 1946 Nankaido earthquake. In model simulations, accelerating fault slip has two time-scales. The first time-scale is several years long and is interpreted as an intermediate-term precursor. The second time-scale is a few days long and is interpreted as a short-term precursor. Accelerating fault slip on both time-scales causes anomalous elevation changes of the ground surface over the fault plane of 100 mm or less within 50 km of the fault trace. © 1988 Birkhäuser Verlag.

  12. Extending the Community Multiscale Air Quality (CMAQ) Modeling System to Hemispheric Scales: Overview of Process Considerations and Initial Applications

    PubMed Central

    Mathur, Rohit; Xing, Jia; Gilliam, Robert; Sarwar, Golam; Hogrefe, Christian; Pleim, Jonathan; Pouliot, George; Roselle, Shawn; Spero, Tanya L.; Wong, David C.; Young, Jeffrey

    2018-01-01

    The Community Multiscale Air Quality (CMAQ) modeling system is extended to simulate ozone, particulate matter, and related precursor distributions throughout the Northern Hemisphere. Modelled processes were examined and enhanced to suitably represent the extended space and time scales for such applications. Hemispheric scale simulations with CMAQ and the Weather Research and Forecasting (WRF) model are performed for multiple years. Model capabilities for a range of applications including episodic long-range pollutant transport, long-term trends in air pollution across the Northern Hemisphere, and air pollution-climate interactions are evaluated through detailed comparison with available surface, aloft, and remotely sensed observations. The expansion of CMAQ to simulate the hemispheric scales provides a framework to examine interactions between atmospheric processes occurring at various spatial and temporal scales with physical, chemical, and dynamical consistency. PMID:29681922

  13. Real-time micro-modelling of city evacuations

    NASA Astrophysics Data System (ADS)

    Löhner, Rainald; Haug, Eberhard; Zinggerling, Claudio; Oñate, Eugenio

    2018-01-01

    A methodology to integrate geographical information system (GIS) data with large-scale pedestrian simulations has been developed. Advances in automatic data acquisition and archiving from GIS databases, automatic input for pedestrian simulations, as well as scalable pedestrian simulation tools have made it possible to simulate pedestrians at the individual level for complete cities in real time. An example that simulates the evacuation of the city of Barcelona demonstrates that this is now possible. This is the first step towards a fully integrated crowd prediction and management tool that takes into account not only data gathered in real time from cameras, cell phones or other sensors, but also merges these with advanced simulation tools to predict the future state of the crowd.

  14. As a Matter of Force—Systematic Biases in Idealized Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Grete, Philipp; O’Shea, Brian W.; Beckwith, Kris

    2018-05-01

    Many astrophysical systems encompass very large dynamical ranges in space and time, which are not accessible by direct numerical simulations. Thus, idealized subvolumes are often used to study small-scale effects including the dynamics of turbulence. These turbulent boxes require artificial driving in order to mimic energy injection from large-scale processes. In this Letter, we show and quantify how the autocorrelation time of the driving and its normalization systematically change the properties of an isothermal compressible magnetohydrodynamic flow in the sub- and supersonic regime and affect astrophysical observations such as Faraday rotation. For example, we find that δ-in-time forcing with a constant energy injection leads to a steeper slope in the kinetic energy spectrum and less-efficient small-scale dynamo action. In general, we show that shorter autocorrelation times require more power in the acceleration field, which results in more power in compressive modes that weaken the anticorrelation between density and magnetic field strength. Thus, derived observables, such as the line-of-sight (LOS) magnetic field from rotation measures, are systematically biased by the driving mechanism. We argue that δ-in-time forcing is unrealistic and numerically unresolved, and conclude that special care needs to be taken in interpreting observational results based on the use of idealized simulations.
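A common way to give the driving a finite autocorrelation time is an Ornstein-Uhlenbeck process, with δ-in-time forcing recovered in the limit of vanishing correlation time. A minimal sketch (the parameter values are assumptions, not the paper's setup):

```python
import math, random

random.seed(1)

def ou_series(tau, dt, nsteps, amplitude=1.0):
    """Ornstein-Uhlenbeck process with autocorrelation time `tau`,
    a standard finite-correlation driving for turbulent boxes;
    tau -> 0 approaches delta-in-time driving."""
    f = math.exp(-dt / tau)
    s = amplitude * math.sqrt(1.0 - f * f)   # keeps the variance stationary
    a, out = 0.0, []
    for _ in range(nsteps):
        a = f * a + s * random.gauss(0.0, 1.0)
        out.append(a)
    return out

def autocorr(x, lag):
    """Sample autocorrelation at the given lag."""
    n, m = len(x) - lag, sum(x) / len(x)
    num = sum((x[i] - m) * (x[i + lag] - m) for i in range(n))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den

x = ou_series(tau=1.0, dt=0.01, nsteps=200_000)
print(autocorr(x, lag=100))   # at lag = tau, roughly exp(-1)
```

Each component of the acceleration field would be driven by such a process; shortening `tau` whitens the forcing, which is the regime the Letter argues biases derived observables.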

  15. Advances and issues from the simulation of planetary magnetospheres with recent supercomputer systems

    NASA Astrophysics Data System (ADS)

    Fukazawa, K.; Walker, R. J.; Kimura, T.; Tsuchiya, F.; Murakami, G.; Kita, H.; Tao, C.; Murata, K. T.

    2016-12-01

    Planetary magnetospheres are very large, while phenomena within them occur on meso- and micro-scales. These scales range from 10s of planetary radii to kilometers. To understand dynamics in these multi-scale systems, numerical simulations have been performed using supercomputer systems. We have long studied the magnetospheres of Earth, Jupiter, and Saturn using 3-dimensional magnetohydrodynamic (MHD) simulations; however, we have not captured phenomena near the limits of the MHD approximation. In particular, we have not studied meso-scale phenomena that can be addressed by using MHD. Recently we performed our MHD simulation of Earth's magnetosphere using the K computer, the first 10-PFlops supercomputer, and obtained multi-scale flow vorticity for both northward and southward IMF. Furthermore, we have access to supercomputer systems with Xeon, SPARC64, and vector-type CPUs and can compare simulation results between the different systems. Finally, we have compared the results of our parameter survey of the magnetosphere with observations from the HISAKI spacecraft. We have encountered a number of difficulties in effectively using the latest supercomputer systems. First, the size of the simulation output increases greatly: a simulation group now produces over 1 PB of output, and storing and analyzing this much data is difficult. The traditional way to analyze simulation results is to move them to the investigator's home computer, which takes over three months on an end-to-end 10-Gbps network; in reality, problems at some nodes, such as firewalls, can increase the transfer time to over one year. Another issue is post-processing: it is hard to handle even a few TB of simulation output due to the memory limitations of a post-processing computer.
To overcome these issues, we have developed and introduced parallel network storage, a highly efficient network protocol, and CUI-based visualization tools. In this study, we show the latest simulation results obtained with petascale supercomputers and discuss the problems arising from the use of these systems.

  16. Super massive black hole in galactic nuclei with tidal disruption of stars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Shiyan; Berczik, Peter; Spurzem, Rainer

    Tidal disruption of stars by super massive central black holes from dense star clusters is modeled by high-accuracy direct N-body simulation. The time evolution of the stellar tidal disruption rate, the effect of tidal disruption on the stellar density profile, and, for the first time, the detailed origin of tidally disrupted stars are carefully examined and compared with classic papers in the field. Up to 128k particles are used in simulation to model the star cluster around a super massive black hole, and we use the particle number and the tidal radius of the black hole as free parameters for a scaling analysis. The transition from full to empty loss-cone is analyzed in our data, and the tidal disruption rate scales with the particle number, N, in the expected way for both cases. For the first time in numerical simulations (under certain conditions) we can support the concept of a critical radius of Frank and Rees, which claims that most stars are tidally accreted on highly eccentric orbits originating from regions far outside the tidal radius. Due to the consumption of stars moving on radial orbits, a velocity anisotropy is found inside the cluster. Finally we estimate the real galactic center based on our simulation results and the scaling analysis.

  17. Super Massive Black Hole in Galactic Nuclei with Tidal Disruption of Stars

    NASA Astrophysics Data System (ADS)

    Zhong, Shiyan; Berczik, Peter; Spurzem, Rainer

    2014-09-01

    Tidal disruption of stars by super massive central black holes from dense star clusters is modeled by high-accuracy direct N-body simulation. The time evolution of the stellar tidal disruption rate, the effect of tidal disruption on the stellar density profile, and, for the first time, the detailed origin of tidally disrupted stars are carefully examined and compared with classic papers in the field. Up to 128k particles are used in simulation to model the star cluster around a super massive black hole, and we use the particle number and the tidal radius of the black hole as free parameters for a scaling analysis. The transition from full to empty loss-cone is analyzed in our data, and the tidal disruption rate scales with the particle number, N, in the expected way for both cases. For the first time in numerical simulations (under certain conditions) we can support the concept of a critical radius of Frank & Rees, which claims that most stars are tidally accreted on highly eccentric orbits originating from regions far outside the tidal radius. Due to the consumption of stars moving on radial orbits, a velocity anisotropy is found inside the cluster. Finally we estimate the real galactic center based on our simulation results and the scaling analysis.

  18. Modeling the MJO rain rates using parameterized large scale dynamics: vertical structure, radiation, and horizontal advection of dry air

    NASA Astrophysics Data System (ADS)

    Wang, S.; Sobel, A. H.; Nie, J.

    2015-12-01

    Two Madden-Julian Oscillation (MJO) events were observed during October and November 2011 in the equatorial Indian Ocean during the DYNAMO field campaign. Precipitation rates and large-scale vertical motion profiles derived from the DYNAMO northern sounding array are simulated in a small-domain cloud-resolving model using parameterized large-scale dynamics. Three parameterizations of large-scale dynamics are employed: the conventional weak temperature gradient (WTG) approximation, vertical-mode-based spectral WTG (SWTG), and damped gravity wave coupling (DGW). The target temperature profiles and radiative heating rates are taken from a control simulation in which the large-scale vertical motion is imposed (rather than directly from observations), and the model itself is significantly modified from that used in previous work. These methodological changes lead to significant improvement in the results. Simulations using all three methods, with imposed time-dependent radiation and horizontal moisture advection, capture the time variations in precipitation associated with the two MJO events well. The three methods produce significant differences in the large-scale vertical motion profile, however. WTG produces the most top-heavy and noisy profiles, while DGW's is smoother with a peak in midlevels. SWTG produces a smooth profile, somewhere between WTG and DGW, and in better agreement with observations than either of the others. Numerical experiments without horizontal advection of moisture suggest that this process significantly reduces the precipitation and suppresses the top-heaviness of large-scale vertical motion during the MJO active phases, while experiments in which the effect of clouds on radiation is disabled indicate that cloud-radiative interaction significantly amplifies the MJO. Experiments in which interactive radiation is used produce poorer agreement with observation than those with imposed time-varying radiative heating.
Our results highlight the importance of both horizontal advection of moisture and cloud-radiative feedback to the dynamics of the MJO, as well as to its accurate simulation and prediction in models.
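The conventional WTG approximation referred to above diagnoses the large-scale vertical motion from the temperature anomaly: heating that would warm a level is instead balanced by adiabatic ascent, w ∂θ/∂z = (θ - θ_target)/τ. A per-level sketch (the relaxation time and stratification values are assumptions for illustration):

```python
def wtg_vertical_velocity(theta, theta_target, dtheta_dz, tau=3 * 3600.0):
    """Conventional WTG diagnosis at one level: the large-scale vertical
    velocity w relaxes the potential-temperature anomaly toward a target
    profile over time scale tau, via w * dtheta/dz = (theta - theta_target) / tau."""
    return (theta - theta_target) / (tau * dtheta_dz)

# A 0.5 K warm anomaly, a 4 K/km stratification, and a 3 h relaxation
# time imply weak large-scale ascent (in m/s):
w = wtg_vertical_velocity(theta=300.5, theta_target=300.0, dtheta_dz=4e-3)
print(w)
```

In a full implementation this is applied level by level (or, in the spectral SWTG variant, mode by mode) to couple the cloud-resolving model to its large-scale environment.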

  19. Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers

    PubMed Central

    Chen, Weiliang; De Schutter, Erik

    2017-01-01

    Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation. PMID:28239346
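The speedup figures quoted above translate into parallel efficiency in the usual way; efficiency above 1 is the super-linear regime the authors report. The timing numbers below are illustrative, not the paper's measurements:

```python
def speedup(t_ref, t_par):
    """Speedup of a parallel run relative to a reference run."""
    return t_ref / t_par

def efficiency(t_ref, t_par, p_ref, p_par):
    """Parallel efficiency of scaling from p_ref to p_par processes;
    values above 1 indicate super-linear speedup (e.g. cache effects
    as the per-process working set shrinks)."""
    return (t_ref / t_par) / (p_par / p_ref)

# Illustrative: a 21x speedup going from 100 to 2,000 processes is
# super-linear, since the process count only grew 20x.
print(speedup(21.0, 1.0))
print(efficiency(21.0, 1.0, p_ref=100, p_par=2000))
```

Perfect scaling would give efficiency exactly 1; the dynamic-influx scenario in the abstract, with no load balancing, would land below that.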

  20. Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers.

    PubMed

    Chen, Weiliang; De Schutter, Erik

    2017-01-01

    Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation.

  1. Simulation and scaling analysis of a spherical particle-laden blast wave

    NASA Astrophysics Data System (ADS)

    Ling, Y.; Balachandar, S.

    2018-02-01

    A spherical particle-laden blast wave, generated by a sudden release of a sphere of compressed gas-particle mixture, is investigated by numerical simulation. The present problem is a multiphase extension of the classic finite-source spherical blast-wave problem. The gas-particle flow can be fully determined by the initial radius of the spherical mixture and the properties of gas and particles. In many applications, the key dimensionless parameters, such as the initial pressure and density ratios between the compressed gas and the ambient air, can vary over a wide range. Parametric studies are thus performed to investigate the effects of these parameters on the characteristic time and spatial scales of the particle-laden blast wave, such as the maximum radius the contact discontinuity can reach and the time when the particle front crosses the contact discontinuity. A scaling analysis is conducted to establish a scaling relation between the characteristic scales and the controlling parameters. A length scale that incorporates the initial pressure ratio is proposed, which is able to approximately collapse the simulation results for the gas flow for a wide range of initial pressure ratios. This indicates that an approximate similarity solution for a spherical blast wave exists, which is independent of the initial pressure ratio. The approximate scaling is also valid for the particle front if the particles are small and closely follow the surrounding gas.
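One natural candidate for a length scale that incorporates the initial pressure ratio is the classic point-source blast scale built from the excess energy of the compressed sphere. This is an illustration of the idea, not necessarily the paper's exact choice:

```python
import math

def blast_length_scale(r0, pressure_ratio, gamma=1.4):
    """Candidate collapse scale for a finite-source blast wave (assumed
    form, for illustration): take the energy in excess of ambient,
    E = (p0 - p_amb) * V0 / (gamma - 1), and form the classic
    point-source scale R_c = (E / p_amb)^(1/3)."""
    v0 = 4.0 / 3.0 * math.pi * r0 ** 3
    e_over_pamb = (pressure_ratio - 1.0) * v0 / (gamma - 1.0)
    return e_over_pamb ** (1.0 / 3.0)

# Rescaling radii by R_c is what lets runs with very different initial
# pressure ratios collapse onto one approximate similarity solution.
for ratio in (10, 100, 1000):
    print(ratio, round(blast_length_scale(1.0, ratio), 3))
```

Note that the scale is linear in the initial sphere radius and grows as the cube root of the pressure ratio, so high-pressure releases behave like larger effective sources.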

  2. Simulation and scaling analysis of a spherical particle-laden blast wave

    NASA Astrophysics Data System (ADS)

    Ling, Y.; Balachandar, S.

    2018-05-01

    A spherical particle-laden blast wave, generated by a sudden release of a sphere of compressed gas-particle mixture, is investigated by numerical simulation. The present problem is a multiphase extension of the classic finite-source spherical blast-wave problem. The gas-particle flow can be fully determined by the initial radius of the spherical mixture and the properties of gas and particles. In many applications, the key dimensionless parameters, such as the initial pressure and density ratios between the compressed gas and the ambient air, can vary over a wide range. Parametric studies are thus performed to investigate the effects of these parameters on the characteristic time and spatial scales of the particle-laden blast wave, such as the maximum radius the contact discontinuity can reach and the time when the particle front crosses the contact discontinuity. A scaling analysis is conducted to establish a scaling relation between the characteristic scales and the controlling parameters. A length scale that incorporates the initial pressure ratio is proposed, which is able to approximately collapse the simulation results for the gas flow for a wide range of initial pressure ratios. This indicates that an approximate similarity solution for a spherical blast wave exists, which is independent of the initial pressure ratio. The approximate scaling is also valid for the particle front if the particles are small and closely follow the surrounding gas.

  3. Time-Domain Filtering for Spatial Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

    An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
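Time-domain filtering of this kind can be illustrated with a causal exponential filter, dū/dt = (u - ū)/Δ, discretized as a one-pole recursion. This is a sketch of the general idea of temporal filtering, not the paper's specific subgrid-scale model:

```python
import math

def exp_time_filter(signal, dt, delta):
    """Causal exponential time-domain filter with width `delta`:
    du_bar/dt = (u - u_bar) / delta, as a one-pole recursion. Unlike a
    spatial filter, it needs only the local time history at each point."""
    alpha = dt / (delta + dt)
    u_bar, out = signal[0], []
    for u in signal:
        u_bar += alpha * (u - u_bar)
        out.append(u_bar)
    return out

# A slow oscillation with fast "turbulent" content riding on it: the
# filter damps the fast component while passing the slow one.
sig = [math.sin(0.01 * i) + 0.5 * math.sin(2.0 * i) for i in range(2000)]
filt = exp_time_filter(sig, dt=0.01, delta=0.1)
```

Causality is what makes this style of filtering natural for localized point sources: the filtered field at a point depends only on that point's past.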

  4. Reaching extended length-scales with temperature-accelerated dynamics

    NASA Astrophysics Data System (ADS)

    Amar, Jacques G.; Shim, Yunsic

    2013-03-01

    In temperature-accelerated dynamics (TAD) a high-temperature molecular dynamics (MD) simulation is used to accelerate the search for the next low-temperature activated event. While TAD has been quite successful in extending the time-scales of simulations of non-equilibrium processes, the computational work scales approximately as the cube of the number of atoms, so until recently only simulations of relatively small systems had been carried out. Recently, we have shown that by combining spatial decomposition with our synchronous sublattice algorithm, significantly improved scaling is possible. However, in this approach the size of activated events is limited by the processor size, and the dynamics is not exact. Here we discuss progress in developing an alternate approach in which high-temperature parallel MD, along with localized saddle-point (LSAD) calculations, is used to carry out TAD simulations without restricting the size of activated events while keeping the dynamics ``exact'' within the context of harmonic transition-state theory. In tests of our LSAD method applied to Ag/Ag(100) annealing and Cu/Cu(100) growth simulations we find significantly improved scaling of TAD, while maintaining a negligibly small error in the energy barriers. Supported by NSF DMR-0907399.
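The core of TAD is the harmonic transition-state-theory mapping of an event time observed at high temperature onto the corresponding low-temperature time. A minimal sketch of that relation (the barrier and temperatures below are assumed example values):

```python
import math

KB_EV = 8.617333e-5  # Boltzmann constant in eV/K

def tad_time(t_high, e_barrier, t_hi_K, t_lo_K):
    """Map a transition time seen in the high-temperature MD run to the
    corresponding low-temperature time, per harmonic TST:
    t_low = t_high * exp(E_a / k_B * (1/T_low - 1/T_high))."""
    return t_high * math.exp(e_barrier / KB_EV * (1.0 / t_lo_K - 1.0 / t_hi_K))

# Assumed example: a 0.5 eV event observed after 1 ns at 900 K corresponds
# to a waiting time several orders of magnitude longer at 300 K.
boost = tad_time(1e-9, 0.5, 900.0, 300.0) / 1e-9
print(boost)
```

This exponential boost is what "extends the time-scales" of TAD; the accuracy of the method then hinges on finding the correct saddle-point barriers, which is where the LSAD calculations enter.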

  5. On the Fidelity of Semi-distributed Hydrologic Model Simulations for Large Scale Catchment Applications

    NASA Astrophysics Data System (ADS)

    Ajami, H.; Sharma, A.; Lakshmi, V.

    2017-12-01

    Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models due to computational efficiency and the ability to resolve fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is impacted by (1) the formulation of hydrologic response units (HRUs), and (2) the aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECSs) representative of a hillslope in first-order sub-basins. Earlier investigations have shown that formulation of ECSs at the scale of a first-order sub-basin reduces computational time significantly without compromising simulation accuracy. However, the implementation of this approach has not been fully explored for catchment scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine-scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART using remotely sensed soil moisture observations and spatially based model evaluation metrics.

  6. Transient ensemble dynamics in time-independent galactic potentials

    NASA Astrophysics Data System (ADS)

    Mahon, M. Elaine; Abernathy, Robert A.; Bradley, Brendan O.; Kandrup, Henry E.

    1995-07-01

    This paper summarizes a numerical investigation of the short-time, possibly transient, behaviour of ensembles of stochastic orbits evolving in fixed non-integrable potentials, with the aim of deriving insights into the structure and evolution of galaxies. The simulations involved three different two-dimensional potentials, quite different in appearance. However, despite these differences, ensembles in all three potentials exhibit similar behaviour. This suggests that the conclusions inferred from the simulations are robust, relying only on basic topological properties, e.g., the existence of KAM tori and cantori. Generic ensembles of initial conditions, corresponding to stochastic orbits, exhibit a rapid coarse-grained approach towards a near-invariant distribution on a time-scale ≪ t_H, although various effects associated with external and/or internal irregularities can drastically accelerate this process. A principal tool in the analysis is the notion of a local Liapounov exponent, which provides a statistical characterization of the overall instability of stochastic orbits over finite time intervals. In particular, there is a precise sense in which confined stochastic orbits are less unstable, with smaller local Liapounov exponents, than are unconfined stochastic orbits.
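A local (finite-time) Liapounov exponent is estimated by evolving a tangent vector along an orbit and averaging the logarithmic growth over a finite interval. As a stand-in for orbits in a 2D galactic potential, the sketch below uses the Chirikov standard map, where the bookkeeping is compact; the parameter values are assumptions for illustration:

```python
import math

def local_lyapunov(x, p, K, nsteps):
    """Finite-time ('local') Lyapunov exponent along one orbit of the
    standard map p' = p + K sin(x), x' = x + p', estimated by iterating
    a tangent vector with per-step renormalization."""
    dx, dp = 1.0, 0.0          # initial tangent vector
    total = 0.0
    for _ in range(nsteps):
        c = K * math.cos(x)    # Jacobian entry, evaluated at the old x
        p = (p + K * math.sin(x)) % (2 * math.pi)
        x = (x + p) % (2 * math.pi)
        dx, dp = (1 + c) * dx + dp, c * dx + dp   # tangent-map step
        norm = math.hypot(dx, dp)
        total += math.log(norm)
        dx, dp = dx / norm, dp / norm             # renormalize
    return total / nsteps

# Strongly chaotic regime: the exponent sits near ln(K/2) for large K.
print(local_lyapunov(x=1.0, p=0.3, K=6.0, nsteps=5000))
```

Computed over short windows for many ensemble members, the distribution of such finite-time exponents is what distinguishes confined from unconfined stochastic orbits in analyses like the one above.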

  7. Effects of simulated turbulence on aircraft handling qualities

    NASA Technical Reports Server (NTRS)

    Jacobson, I. D.; Joshi, D. S.

    1977-01-01

    The influence of simulated turbulence on aircraft handling qualities is presented. Pilot opinions of the handling qualities of a light general aviation aircraft were evaluated in a motion-base simulator using a simulated turbulence environment. A realistic representation of turbulence disturbances is described in terms of rms intensity and scale length and their random variations with time. The time histories generated by the proposed turbulence models showed characteristics more similar to those of real turbulence than the frequently used Gaussian turbulence model. The proposed turbulence models flexibly accommodate changes in atmospheric conditions and are easily implemented in flight simulator studies.

  8. The need for enhanced initial moisture information in simulations of a complex summertime precipitation event

    NASA Technical Reports Server (NTRS)

    Waight, Kenneth T., III; Zack, John W.; Karyampudi, V. Mohan

    1989-01-01

    Initial simulations of the June 28, 1986 Cooperative Huntsville Meteorological Experiment case illustrate the need for mesoscale moisture information in a summertime situation in which deep convection is organized by weak large-scale forcing. A methodology is presented for enhancing the initial moisture field from a combination of IR satellite imagery, surface-based cloud observations, and manually digitized radar data. The Mesoscale Atmospheric Simulation Model is utilized to simulate the events of June 28-29. This procedure ensures that areas known to have precipitation at the time of initialization will be nearly saturated on the grid scale, which should decrease the time needed by the model to produce the observed convection associated with Bonnie (a relatively weak hurricane that had moved onshore two days earlier). This method will also result in an initial distribution of model cloudiness (transmissivity) that is very similar to that of the IR satellite image.

  9. Speedup computation of HD-sEMG signals using a motor unit-specific electrical source model.

    PubMed

    Carriou, Vincent; Boudaoud, Sofiane; Laforet, Jeremy

    2018-01-23

    Nowadays, bio-reliable modeling of muscle contraction is becoming more accurate and complex. This increasing complexity induces a significant increase in computation time, which prevents the use of such models in certain applications and studies. Accordingly, the aim of this work is to significantly reduce the computation time of high-density surface electromyogram (HD-sEMG) generation. This is done through a new model of motor unit (MU)-specific electrical source based on the fibers composing the MU. In order to assess the efficiency of this approach, we computed the normalized root mean square error (NRMSE) between several simulations of single generated MU action potentials (MUAPs) using the usual fiber electrical sources and the MU-specific electrical source. The NRMSE was computed for five different simulation sets wherein hundreds of MUAPs are generated and summed into HD-sEMG signals. The obtained results display less than 2% error on the generated signals compared to the same signals generated with fiber electrical sources. Moreover, the computation time of the HD-sEMG signal generation model is reduced by about 90% compared to the fiber electrical source model. Using this model with MU electrical sources, we can simulate HD-sEMG signals of a physiological muscle (hundreds of MUs) in less than an hour on a classical workstation. Graphical Abstract: Overview of the simulation of HD-sEMG signals using the fiber scale and the MU scale. Upscaling the electrical source to the MU scale reduces the computation time by 90% while inducing only small deviations in the simulated HD-sEMG signals.
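
    The NRMSE comparison described above can be sketched as follows; normalization by the reference signal's peak-to-peak range is one common convention and may differ from the paper's exact definition:

```python
import numpy as np

def nrmse(reference, test):
    """Normalized RMS error between two signals, normalized by the
    peak-to-peak range of the reference signal."""
    reference = np.asarray(reference, float)
    test = np.asarray(test, float)
    rmse = np.sqrt(np.mean((reference - test) ** 2))
    return rmse / (reference.max() - reference.min())
```

    Applied to a fiber-source signal and its MU-source approximation, a value below 0.02 corresponds to the "less than 2% error" reported above.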

  10. Theoretical restrictions on longest implicit time scales in Markov state models of biomolecular dynamics

    NASA Astrophysics Data System (ADS)

    Sinitskiy, Anton V.; Pande, Vijay S.

    2018-01-01

    Markov state models (MSMs) have been widely used to analyze computer simulations of various biomolecular systems. They can capture conformational transitions much slower than the average or maximal length of a single molecular dynamics (MD) trajectory from the set of trajectories used to build the MSM. A rule of thumb claiming that the slowest implicit time scale captured by an MSM should be comparable in order of magnitude to the aggregate duration of all MD trajectories used to build this MSM has been known in the field. However, this rule has never been formally proved. In this work, we present analytical results for the slowest time scale in several types of MSMs, supporting the above rule. We conclude that the slowest implicit time scale equals the product of the aggregate sampling and four factors that quantify: (1) how much statistics on the conformational transitions corresponding to the longest implicit time scale is available, (2) how good the sampling of the destination Markov state is, (3) the gain in statistics from using a sliding window for counting transitions between Markov states, and (4) a bias in the estimate of the implicit time scale arising from finite sampling of the conformational transitions. We demonstrate that in many practically important cases all these four factors are on the order of unity, and we analyze possible scenarios that could lead to their significant deviation from unity. Overall, we provide for the first time analytical results on the slowest time scales captured by MSMs. These results can guide further practical applications of MSMs to biomolecular dynamics and allow for higher computational efficiency of simulations.
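
    A minimal sketch of how implied timescales are extracted from sliding-window transition counts, the quantity the above bounds concern. The symmetrized count estimator and the discrete-state input are illustrative simplifications, not the paper's construction:

```python
import numpy as np

def implied_timescales(dtraj, n_states, lag):
    """Count transitions with a sliding window at the given lag,
    symmetrize (a simple reversible estimator), row-normalize to a
    transition matrix T, and return t_i = -lag / ln(lambda_i) for the
    non-stationary eigenvalues."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):   # sliding window
        C[a, b] += 1.0
    C = C + C.T                                   # symmetrize counts
    T = C / C.sum(axis=1, keepdims=True)
    evals = np.sort(np.linalg.eigvals(T).real)[::-1]
    evals = np.clip(evals[1:], 1e-12, 1.0 - 1e-12)
    return lag / -np.log(evals)
```

    For a two-state chain with symmetric switching probability 0.1 per step, the slowest implied timescale should approach the analytic value -1/ln(0.8), about 4.48 steps.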

  11. Dependence of Snowmelt Simulations on Scaling of the Forcing Processes (Invited)

    NASA Astrophysics Data System (ADS)

    Winstral, A. H.; Marks, D. G.; Gurney, R. J.

    2009-12-01

    The spatial organization and scaling relationships of snow distribution in mountain environs are ultimately dependent on the controlling processes. These processes include interactions between weather, topography, vegetation, snow state, and seasonally-dependent radiation inputs. In large-scale snow modeling it is vital to know these dependencies to obtain accurate predictions while reducing computational costs. This study examined the scaling characteristics of the forcing processes and the dependence of distributed snowmelt simulations on their scaling. A base model simulation characterized these processes at 10 m resolution over a 14.0 km² basin with an elevation range of 1474-2244 masl. Each of the major processes affecting snow accumulation and melt - precipitation, wind speed, solar radiation, thermal radiation, temperature, and vapor pressure - was independently degraded to 1 km resolution. Seasonal and event-specific results were analyzed. Results indicated that scale effects on melt vary by process and weather conditions. The dependence of melt simulations on the scaling of solar radiation fluxes also had a seasonal component. These process-based scaling characteristics should remain static through time as they are based on physical considerations. As such, these results not only provide guidance for current modeling efforts, but are also well suited to predicting how potential climate changes will affect the heterogeneity of mountain snow distributions.

  12. Time-dependent vibrational spectral analysis of first principles trajectory of methylamine with wavelet transform.

    PubMed

    Biswas, Sohag; Mallik, Bhabani S

    2017-04-12

    The fluctuation dynamics of amine stretching frequencies, hydrogen bonds, dangling N-D bonds, and the orientation profile of the amine group of methylamine (MA) were investigated under ambient conditions by means of dispersion-corrected density functional theory-based first principles molecular dynamics (FPMD) simulations. Along with the dynamical properties, various equilibrium properties such as radial distribution function, spatial distribution function, combined radial and angular distribution functions and hydrogen bonding were also calculated. The instantaneous stretching frequencies of amine groups were obtained by wavelet transform of the trajectory obtained from FPMD simulations. The frequency-structure correlation reveals that the amine stretching frequency is weakly correlated with the nearest nitrogen-deuterium distance. The frequency-frequency correlation function has a short time scale of around 110 fs and a longer time scale of about 1.15 ps. It was found that the short time scale originates from the underdamped motion of intact hydrogen bonds of MA pairs. However, the long time scale of the vibrational spectral diffusion of N-D modes is determined by the overall dynamics of hydrogen bonds as well as the dangling ND groups and the inertial rotation of the amine group of the molecule.
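
    The frequency-frequency correlation function underlying the two reported time scales is the normalized autocorrelation of the instantaneous-frequency fluctuations. A minimal sketch (the synthetic frequency series here stands in for the wavelet-derived one):

```python
import numpy as np

def ffcf(omega, max_lag):
    """Normalized frequency-fluctuation correlation function
    C(k) = <dw(0) dw(k)> / <dw^2> from an instantaneous-frequency series."""
    dw = np.asarray(omega, float) - np.mean(omega)
    var = np.mean(dw * dw)
    return np.array([np.mean(dw[:len(dw) - k] * dw[k:]) / var
                     for k in range(max_lag)])
```

    Fitting the decay of C(k) with a sum of exponentials then yields the short and long spectral-diffusion time scales quoted above.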

  13. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    DOE PAGES

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.; ...

    2016-10-20

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well.
Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.
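
    Permutation entropy is one standard Bandt-Pompe information quantifier of the kind discussed above; a minimal sketch (the embedding dimension and delay are illustrative choices):

```python
import math
from itertools import permutations
import numpy as np

def permutation_entropy(x, d=4, tau=1):
    """Normalized Bandt-Pompe permutation entropy: map each length-d
    window (delay tau) to its ordinal pattern, then take the Shannon
    entropy of the pattern histogram, normalized by log(d!)."""
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - (d - 1) * tau):
        window = np.asarray(x[i:i + d * tau:tau])
        counts[tuple(np.argsort(window))] += 1
    c = np.array([v for v in counts.values() if v > 0], float)
    probs = c / c.sum()
    return float(-(probs * np.log(probs)).sum() / math.log(math.factorial(d)))
```

    White noise gives values near 1, a deterministic monotone trend gives 0, and structured ecological time series fall in between; pairing this entropy with a statistical complexity measure yields the entropy-complexity plane used for the classification described above.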

  14. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well.
Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.

  15. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    PubMed Central

    Sippel, Sebastian; Mahecha, Miguel D.; Hauhs, Michael; Bodesheim, Paul; Kaminski, Thomas; Gans, Fabian; Rosso, Osvaldo A.

    2016-01-01

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. We demonstrate here that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. 
Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics. PMID:27764187

  16. Simulation of particle diversity and mixing state over Greater Paris: a model-measurement inter-comparison.

    PubMed

    Zhu, Shupeng; Sartelet, Karine N; Healy, Robert M; Wenger, John C

    2016-07-18

    Air quality models are used to simulate and forecast pollutant concentrations, from continental scales to regional and urban scales. These models usually assume that particles are internally mixed, i.e. particles of the same size have the same chemical composition, which may vary in space and time. Although this assumption may be realistic for continental-scale simulations, where particles originating from different sources have undergone sufficient mixing to achieve a common chemical composition for a given model grid cell and time, it may not be valid for urban-scale simulations, where particles from different sources interact on shorter time scales. To investigate the role of the mixing state assumption on the formation of particles, a size-composition resolved aerosol model (SCRAM) was developed and coupled to the Polyphemus air quality platform. Two simulations, one with the internal mixing hypothesis and another with the external mixing hypothesis, have been carried out for the period 15 January to 11 February 2010, when the MEGAPOLI winter field measurement campaign took place in Paris. The simulated bulk concentrations of chemical species and the concentrations of individual particle classes are compared with the observations of Healy et al. (Atmos. Chem. Phys., 2013, 13, 9479-9496) for the same period. The single particle diversity and the mixing-state index are computed based on the approach developed by Riemer et al. (Atmos. Chem. Phys., 2013, 13, 11423-11439), and they are compared to the measurement-based analyses of Healy et al. (Atmos. Chem. Phys., 2014, 14, 6289-6299). The average value of the single particle diversity, which represents the average number of species within each particle, is consistent between simulation and measurement (2.91 and 2.79 respectively). Furthermore, the average value of the mixing-state index is also well represented in the simulation (69% against 59% from the measurements). 
The spatial distribution of the mixing-state index shows that the particles are not well mixed in urban areas, while they are well mixed in rural areas. This indicates that the assumption of internal mixing traditionally used in chemistry-transport models is well suited to rural areas, but less realistic for urban areas close to emission sources.
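
    The diversity and mixing-state metrics referenced above (following the Riemer et al. approach) can be sketched from per-particle species masses; the function and variable names are ours:

```python
import numpy as np

def shannon(p):
    """Shannon entropy along the last axis, with 0*log(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log(p), 0.0)
    return -terms.sum(axis=-1)

def mixing_state_index(mass):
    """mass[i, a] = mass of species a in particle i.
    Returns (D_alpha, D_gamma, chi): per-particle diversities are
    mass-averaged into the average particle diversity D_alpha, D_gamma is
    the bulk diversity, and chi = (D_alpha - 1) / (D_gamma - 1)."""
    total = mass.sum()
    p_particle = mass.sum(axis=1) / total          # particle mass fractions
    frac = mass / mass.sum(axis=1, keepdims=True)  # per-particle composition
    D_alpha = np.exp(np.sum(p_particle * shannon(frac)))
    D_gamma = np.exp(shannon(mass.sum(axis=0) / total))
    chi = (D_alpha - 1.0) / (D_gamma - 1.0)
    return D_alpha, D_gamma, chi
```

    A fully internal mixture (all particles share one composition) gives chi = 1, a fully external mixture (each particle pure) gives chi = 0; the 69% and 59% figures above lie between these extremes.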

  17. A Lagrangian subgrid-scale model with dynamic estimation of Lagrangian time scale for large eddy simulation of complex flows

    NASA Astrophysics Data System (ADS)

    Verma, Aman; Mahesh, Krishnan

    2012-08-01

    The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The time scale used in the standard Lagrangian model contains an adjustable parameter θ; in the present model, the Lagrangian time scale is dynamically computed from the solution and needs no adjustable parameter. The dynamic time scale is computed based on a "surrogate-correlation" of the Germano-identity error (GIE). Also, a simple material-derivative relation is used to approximate the GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity. The present work proposes modifications for inhomogeneous flows, allowing the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers, and good agreement with previous computations and experiments is obtained. Noticeable improvement over the standard Lagrangian model is obtained, attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.
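
    The Lagrangian averaging at the heart of such models is, in its simplest discrete form, an exponentially weighted running average with relaxation time T. The 0-D sketch below omits the advection along pathlines (and the dynamic computation of T) that the paper adds:

```python
import numpy as np

def lagrangian_average(phi, T, dt):
    """Exponentially weighted running average with relaxation time T,
    the discrete form used in Lagrangian-averaged dynamic models:
        I[n+1] = eps * phi[n+1] + (1 - eps) * I[n],
        eps = (dt / T) / (1 + dt / T).
    In an actual LES, I[n] would be evaluated at the upstream pathline
    position x - u*dt; that advection step is omitted here."""
    eps = (dt / T) / (1.0 + dt / T)
    I = np.empty(len(phi), dtype=float)
    I[0] = phi[0]
    for n in range(len(phi) - 1):
        I[n + 1] = eps * phi[n + 1] + (1.0 - eps) * I[n]
    return I
```

    Small T tracks the instantaneous signal; large T smooths strongly, which is why choosing T dynamically, as proposed above, matters for inhomogeneous flows.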

  18. Single molecule translocation in smectics illustrates the challenge for time-mapping in simulations on multiple scales

    NASA Astrophysics Data System (ADS)

    Mukherjee, Biswaroop; Peter, Christine; Kremer, Kurt

    2017-09-01

    Understanding the connections between the characteristic dynamical time scales associated with a coarse-grained (CG) and a detailed representation is central to the applicability of the coarse-graining methods to understand molecular processes. The process of coarse graining leads to an accelerated dynamics, owing to the smoothening of the underlying free-energy landscapes. Often a single time-mapping factor is used to relate the time scales associated with the two representations. We critically examine this idea using a model system ideally suited for this purpose. Single molecular transport properties are studied via molecular dynamics simulations of the CG and atomistic representations of a liquid crystalline, azobenzene containing mesogen, simulated in the smectic and the isotropic phases. The out-of-plane dynamics in the smectic phase occurs via molecular hops from one smectic layer to the next. Hopping can occur via two mechanisms, with and without significant reorientation. The out-of-plane transport can be understood as a superposition of two (one associated with each mode of transport) independent continuous time random walks for which a single time-mapping factor would be rather inadequate. A comparison of the free-energy surfaces, relevant to the out-of-plane transport, qualitatively supports the above observations. Thus, this work underlines the need for building CG models that exhibit both structural and dynamical consistency to the underlying atomistic model.
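
    The superposition of two continuous-time random walks invoked above can be illustrated with a toy model; exponential waiting times, unit jumps, and the mode parameters are our simplifying assumptions, not the paper's fitted values:

```python
import numpy as np

def ctrw_displacement(t_max, tau, jump, rng):
    """One continuous-time random walk: exponential waiting times with
    mean tau, symmetric jumps of size jump; displacement at time t_max."""
    t, x = 0.0, 0.0
    while True:
        t += rng.exponential(tau)
        if t > t_max:
            return x
        x += jump * rng.choice((-1.0, 1.0))

def two_mode_msd(t_max, n=2000, seed=0):
    """Out-of-plane MSD for a superposition of two independent CTRWs,
    standing in for the hop-with-reorientation and hop-without modes."""
    rng = np.random.default_rng(seed)
    disp = [ctrw_displacement(t_max, 1.0, 1.0, rng)      # fast mode
            + ctrw_displacement(t_max, 10.0, 1.0, rng)   # slow mode
            for _ in range(n)]
    return np.mean(np.square(disp))
```

    Because each mode contributes MSD = t/tau independently, no single rescaling of time maps the coarse-grained process onto the atomistic one, which is the point made above.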

  19. Scale-Dependent Rates of Uranyl Surface Complexation Reaction in Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Chongxuan; Shang, Jianying; Kerisit, Sebastien N.

    Scale-dependency of uranyl [U(VI)] surface complexation rates was investigated in stirred flow-cell and column systems using a U(VI)-contaminated sediment from the US Department of Energy, Hanford site, WA. The experimental results were used to estimate the apparent rate of U(VI) surface complexation at the grain-scale and in porous media. Numerical simulations using molecular, pore-scale, and continuum models were performed to provide insights into and to estimate the rate constants of U(VI) surface complexation at the different scales. The results showed that the grain-scale rate constant of U(VI) surface complexation was 3 to 10 orders of magnitude smaller, depending on the temporal scale, than the rate constant calculated using the molecular simulations. The grain-scale rate was faster initially and slower with time, showing the temporal scale-dependency. The largest rate constant at the grain-scale decreased by an additional 2 orders of magnitude when the rate was scaled to the porous media in the column. The scaling effect from the grain-scale to the porous media became less important for the slower sorption sites. Pore-scale simulations revealed the importance of coupled mass transport and reactions in both intragranular and inter-granular domains, which caused both spatial and temporal dependence of U(VI) surface complexation rates in the sediment. Pore-scale simulations also revealed a new rate-limiting mechanism in the intragranular porous domains: the rate of the coupled diffusion and surface complexation reaction was slower than that of either process alone. The results provide important implications for developing models to scale geochemical/biogeochemical reactions.

  20. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark method with a fixed number of iterations is applied for hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and the fixed number of iterations are closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and to predict the accuracy and stability of the simulations.

  1. Comparison of Cellulose Iβ Simulations with Three Carbohydrate Force Fields.

    PubMed

    Matthews, James F; Beckham, Gregg T; Bergenstråhle-Wohlert, Malin; Brady, John W; Himmel, Michael E; Crowley, Michael F

    2012-02-14

    Molecular dynamics simulations of cellulose have recently become more prevalent due to increased interest in renewable energy applications, and many atomistic and coarse-grained force fields exist that can be applied to cellulose. However, to date no systematic comparison between carbohydrate force fields has been conducted for this important system. To that end, we present a molecular dynamics simulation study of hydrated, 36-chain cellulose Iβ microfibrils at room temperature with three carbohydrate force fields (CHARMM35, GLYCAM06, and Gromos 45a4) up to the near-microsecond time scale. Our results indicate that each of these simulated microfibrils diverge from the cellulose Iβ crystal structure to varying degrees under the conditions tested. The CHARMM35 and GLYCAM06 force fields eventually result in structures similar to those observed at 500 K with the same force fields, which are consistent with the experimentally observed high-temperature behavior of cellulose I. The third force field, Gromos 45a4, produces behavior significantly different from experiment, from the other two force fields, and from previously reported simulations with this force field using shorter simulation times and constrained periodic boundary conditions. For the GLYCAM06 force field, initial hydrogen-bond conformations and choice of electrostatic scaling factors significantly affect the rate of structural divergence. Our results suggest dramatically different time scales for convergence of properties of interest, which is important in the design of computational studies and comparisons to experimental data. This study highlights that further experimental and theoretical work is required to understand the structure of small diameter cellulose microfibrils typical of plant cellulose.
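
    Structural divergence of the kind reported above is typically tracked with an RMSD time series against the crystal structure. A minimal per-frame sketch (translation is removed but, for brevity, the Kabsch rotational fit a full analysis would use is omitted):

```python
import numpy as np

def rmsd(coords, ref):
    """Root-mean-square deviation between two conformations after
    removing the translational offset (no rotational superposition)."""
    a = coords - coords.mean(axis=0)
    b = ref - ref.mean(axis=0)
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))
```

    Plotting rmsd(frame, crystal) over a near-microsecond trajectory reveals the slow divergence time scales, and their strong force-field dependence, discussed above.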

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. Supplemental material at https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  3. Flow topologies and turbulence scales in a jet-in-cross-flow

    DOE PAGES

    Oefelein, Joseph C.; Ruiz, Anthony M.; Lacaze, Guilhem

    2015-04-03

    This study presents a detailed analysis of the flow topologies and turbulence scales in the jet-in-cross-flow experiment of Su and Mungal (JFM, 2004). The analysis is performed using the Large Eddy Simulation (LES) technique with a highly resolved grid and time-step and well-controlled boundary conditions. This enables quantitative agreement with the first and second moments of turbulence statistics measured in the experiment. LES is used to perform the analysis since experimental measurements of time-resolved 3D fields are still in their infancy and because sampling periods are generally limited with direct numerical simulation. A major focal point is the comprehensive characterization of the turbulence scales and their evolution. Time-resolved probes are used with long sampling periods to obtain maps of the integral scales, Taylor microscales, and turbulent kinetic energy spectra. Scalar-fluctuation scales are also quantified. In the near-field, coherent structures are clearly identified, both in physical and spectral space. Along the jet centerline, turbulence scales grow according to a classical one-third power law. However, the derived maps of turbulence scales reveal strong inhomogeneities in the flow. From the modeling perspective, these insights are useful for designing optimized grids and improving numerical predictions in similar configurations.
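
    The integral scale and Taylor microscale maps described above derive from the velocity autocorrelation at each probe. A sketch for a single time-resolved probe, using an FFT-based autocorrelation; the microscale estimate here uses only the first lag of the small-lag parabolic expansion, a simplification of a proper fit:

```python
import numpy as np

def turbulence_scales(u, dt):
    """Integral time scale: autocorrelation integrated up to its first
    zero crossing. Taylor-microscale estimate: from the first lag via
    the small-lag expansion rho(t) ~ 1 - (t / lambda_T)^2."""
    v = np.asarray(u, float) - np.mean(u)
    n = len(v)
    f = np.fft.rfft(v, 2 * n)                 # zero-padded FFT
    acf = np.fft.irfft(f * np.conj(f))[:n]    # raw autocorrelation
    rho = acf / acf[0]
    idx = int(np.argmax(rho <= 0.0))          # first zero crossing
    if idx == 0:
        idx = n                               # no crossing found
    T_int = rho[:idx].sum() * dt              # integral time scale
    lam_T = dt / np.sqrt(max(1.0 - rho[1], 1e-30))
    return T_int, lam_T
```

    Repeating this at every probe location yields the maps of integral scales and Taylor microscales referred to above.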

  4. BlazeDEM3D-GPU: A Large Scale DEM simulation code for GPUs

    NASA Astrophysics Data System (ADS)

    Govender, Nicolin; Wilke, Daniel; Pizette, Patrick; Khinast, Johannes

    2017-06-01

    Accurately predicting the dynamics of particulate materials is of importance to numerous scientific and industrial areas, with applications ranging across particle scales from powder flow to ore crushing. Computational discrete element simulations are a viable option to aid in the understanding of particulate dynamics and the design of devices such as mixers, silos and ball mills, as laboratory-scale tests come at a significant cost. However, the computational time required for an industrial-scale simulation consisting of tens of millions of particles can be months on large CPU clusters, making the Discrete Element Method (DEM) infeasible for industrial applications. Simulations are therefore typically restricted to tens of thousands of particles with highly detailed particle shapes, or a few million particles with often oversimplified particle shapes. However, a number of applications require an accurate representation of the particle shape to capture the macroscopic behaviour of the particulate system. In this paper we give an overview of the recent extensions to the open-source GPU-based DEM code BlazeDEM3D-GPU, which can simulate millions of polyhedra and tens of millions of spheres on a desktop computer with one or more GPUs.

  5. Parsec-Scale Obscuring Accretion Disk with Large-Scale Magnetic Field in AGNs

    NASA Technical Reports Server (NTRS)

    Dorodnitsyn, A.; Kallman, T.

    2017-01-01

    A magnetic field dragged from the galactic disk, along with inflowing gas, can provide vertical support to the geometrically and optically thick pc (parsec) -scale torus in AGNs (Active Galactic Nuclei). Using the Soloviev solution initially developed for Tokamaks, we derive an analytical model for a rotating torus that is supported and confined by a magnetic field. We further perform three-dimensional magneto-hydrodynamic simulations of X-ray irradiated, pc-scale, magnetized tori. We follow the time evolution and compare models that adopt initial conditions derived from our analytic model with simulations in which the initial magnetic flux is entirely contained within the gas torus. Numerical simulations demonstrate that the initial conditions based on the analytic solution produce a longer-lived torus that produces obscuration that is generally consistent with observed constraints.

  6. Numerical simulation of turbulence and terahertz magnetosonic waves generation in collisionless plasmas

    NASA Astrophysics Data System (ADS)

    Kumar, Narender; Singh, Ram Kishor; Sharma, Swati; Uma, R.; Sharma, R. P.

    2018-01-01

    This paper presents numerical simulations of laser beam (x-mode) coupling with a magnetosonic wave (MSW) in a collisionless plasma. The coupling arises through ponderomotive non-linearity. The pump beam has been perturbed by a periodic perturbation that leads to the nonlinear evolution of the laser beam. It is observed that the frequency spectra of the MSW have peaks at terahertz frequencies. The simulation results show quite complex localized structures that grow with time. The ensemble-averaged power spectrum has also been studied, which indicates that the spectral index follows an approximate scaling of the order of ∼ k^-2.1 at large scales and of the order of ∼ k^-3.6 at smaller scales. The results indicate considerable randomness in the spatial structure of the magnetic field profile, which gives sufficient indication of turbulence.
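The spectral indices quoted above are slopes of the power spectrum on log-log axes. A minimal sketch of how such an index can be extracted from simulation output (a generic least-squares fit; the synthetic spectrum is an assumption for illustration, not the authors' data or analysis code):

```python
import numpy as np

def spectral_index(k, power):
    """Least-squares slope of log P(k) versus log k."""
    slope, _intercept = np.polyfit(np.log(k), np.log(power), 1)
    return float(slope)

# synthetic power spectrum with a known k^-2.1 scaling range
k = np.logspace(0.0, 2.0, 50)
P = k ** -2.1
index = spectral_index(k, P)  # recovers approximately -2.1
```

In practice one would restrict the fit to the wavenumber range where the scaling holds, since the slope changes between the large-scale and small-scale regimes.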

  7. Parsec-scale Obscuring Accretion Disk with Large-scale Magnetic Field in AGNs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorodnitsyn, A.; Kallman, T.

    A magnetic field dragged from the galactic disk, along with inflowing gas, can provide vertical support to the geometrically and optically thick pc-scale torus in AGNs. Using the Soloviev solution initially developed for Tokamaks, we derive an analytical model for a rotating torus that is supported and confined by a magnetic field. We further perform three-dimensional magneto-hydrodynamic simulations of X-ray irradiated, pc-scale, magnetized tori. We follow the time evolution and compare models that adopt initial conditions derived from our analytic model with simulations in which the initial magnetic flux is entirely contained within the gas torus. Numerical simulations demonstrate that the initial conditions based on the analytic solution produce a longer-lived torus that produces obscuration that is generally consistent with observed constraints.

  8. Solute-defect interactions in Al-Mg alloys from diffusive variational Gaussian calculations

    NASA Astrophysics Data System (ADS)

    Dontsova, E.; Rottler, J.; Sinclair, C. W.

    2014-11-01

    Resolving atomic-scale defect topologies and energetics with accurate atomistic interaction models provides access to the nonlinear phenomena inherent at atomic length and time scales. Coarse graining the dynamics of such simulations to look at the migration of, e.g., solute atoms, while retaining the rich atomic-scale detail required to properly describe defects, is a particular challenge. In this paper, we present an adaptation of the recently developed "diffusive molecular dynamics" model to describe the energetics and kinetics of binary alloys on diffusive time scales. The potential of the technique is illustrated by applying it to the classic problems of solute segregation to a planar boundary (stacking fault) and edge dislocation in the Al-Mg system. Our approach provides fully dynamical solutions in situations with an evolving energy landscape in a computationally efficient way, where atomistic kinetic Monte Carlo simulations are difficult or impractical to perform.

  9. Model simulations and proxy-based reconstructions for the European region in the past millennium (Invited)

    NASA Astrophysics Data System (ADS)

    Zorita, E.

    2009-12-01

    One of the objectives when comparing simulations of past climates to proxy-based climate reconstructions is to assess the skill of climate models in simulating climate change. This comparison may be accomplished at large spatial scales, for instance for the evolution of simulated and reconstructed Northern Hemisphere annual temperature, or at regional or point scales. In both approaches a 'fair' comparison has to take into account different aspects that affect the inevitable uncertainties and biases in the simulations and in the reconstructions. These efforts face a trade-off: climate models are believed to be more skillful at large hemispheric scales, but climate reconstructions at these scales are burdened by the spatial distribution of available proxies and by methodological issues surrounding the statistical method used to translate the proxy information into large spatial averages. Furthermore, the internal climatic noise at large hemispheric scales is low, so that the sampling uncertainty also tends to be low. On the other hand, the skill of climate models at regional scales is limited by their coarse spatial resolution, which hinders a faithful representation of aspects important for the regional climate. At small spatial scales, the reconstruction of past climate probably faces fewer methodological problems if information from different proxies is available. The internal climatic variability at regional scales is, however, high. In this contribution some examples of the different issues faced when comparing simulations and reconstructions at small spatial scales in the past millennium are discussed. These examples comprise reconstructions from dendrochronological data and from historical documentary data in Europe and climate simulations with global and regional models.
These examples indicate that centennial climate variations can offer a reasonable target to assess the skill of global climate models and of proxy-based reconstructions, even at small spatial scales. However, as the focus shifts towards higher-frequency variability, decadal or multidecadal, the need for larger simulation ensembles becomes more evident. Nevertheless, the comparison at these time scales may expose some lines of research on the origin of multidecadal regional climate variability.

  10. Turbulence dissipation challenge: particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Roytershteyn, V.; Karimabadi, H.; Omelchenko, Y.; Germaschewski, K.

    2015-12-01

    We discuss the application of three particle-in-cell (PIC) codes to problems relevant to the turbulence dissipation challenge. VPIC is a fully kinetic code extensively used to study a variety of diverse problems ranging from laboratory plasmas to astrophysics. PSC is a flexible fully kinetic code offering a variety of algorithms that can be advantageous for turbulence simulations, including high-order particle shapes, dynamic load balancing, and the ability to run efficiently on Graphics Processing Units (GPUs). Finally, HYPERS is a novel hybrid (kinetic ions + fluid electrons) code, which utilizes asynchronous time advance and a number of other advanced algorithms. We present examples drawn both from large-scale turbulence simulations and from the test problems outlined by the turbulence dissipation challenge. Special attention is paid to such issues as the small-scale intermittency of inertial-range turbulence, the mode content of the sub-proton range of scales, the formation of electron-scale current sheets and the role of magnetic reconnection, as well as the numerical challenges of applying PIC codes to simulations of astrophysical turbulence.

  11. Automatic Selection of Order Parameters in the Analysis of Large Scale Molecular Dynamics Simulations.

    PubMed

    Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S

    2014-12-09

    Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
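The core idea above, ranking candidate order parameters by how well they separate labeled conformational states, can be sketched with a toy scoring function. This is an assumed stand-in for illustration only, not the CB-FS implementation (which combines Markov state models with supervised machine learning); the data, labels, and score are invented:

```python
import numpy as np

def separation_score(X, labels):
    """Score each feature by the separation of the two class means,
    in units of that feature's overall standard deviation (a crude
    stand-in for supervised feature ranking over MD order parameters)."""
    m0 = X[labels == 0].mean(axis=0)
    m1 = X[labels == 1].mean(axis=0)
    return np.abs(m1 - m0) / X.std(axis=0)

rng = np.random.default_rng(1)
n = 200
labels = np.repeat([0, 1], n // 2)     # two "conformational states"
X = rng.normal(size=(n, 4))            # four candidate order parameters
X[:, 0] += 3.0 * labels                # only feature 0 separates the states
best = int(np.argmax(separation_score(X, labels)))  # feature 0 wins
```

In a real pipeline the state labels would come from clustering the MD trajectory (e.g., Markov state model assignments) rather than being given a priori.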

  12. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method.

    PubMed

    Bernal, Javier; Torres-Jimenez, Jose

    2015-01-01

    SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller's scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller's algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller's algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller's algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data.
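The interplay described above, gradient-based minimization with simulated annealing used to (re)start the search and escape local minima, can be sketched on a toy one-dimensional objective. This is a conceptual Python illustration, not SAGRAD's Fortran implementation; the objective, plain gradient descent (standing in for Møller's scaled conjugate gradient), step sizes, clipping interval, and cooling schedule are all assumptions:

```python
import numpy as np

def f(x):   # toy nonconvex objective: local min near 1.13, global near -1.30
    return x**4 - 3.0 * x**2 + x

def df(x):
    return 4.0 * x**3 - 6.0 * x + 1.0

def descend(x, lr=0.01, steps=300):
    """Plain gradient descent, standing in for the conjugate gradient phase."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

rng = np.random.default_rng(0)
x, temp = 1.5, 2.0
best_x, best_f = x, f(x)
for _ in range(30):
    x = descend(x)                    # deterministic gradient phase
    if f(x) < best_f:
        best_x, best_f = x, f(x)
    # annealing-style restart: random move, clipped to the search interval
    cand = float(np.clip(x + rng.normal(scale=1.5), -3.0, 3.0))
    # Metropolis acceptance lets occasional uphill moves escape a basin
    if f(cand) < f(x) or rng.random() < np.exp((f(x) - f(cand)) / temp):
        x = cand
    temp *= 0.9
```

The gradient phase alone, started at x = 1.5, only reaches the shallower minimum; the annealing moves are what allow the search to leave that basin as the temperature cools.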

  13. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear-Layer. Part 2

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Lockard, David P.

    2002-01-01

    Unsteady computational simulations of a multi-element, high-lift configuration are performed. Emphasis is placed on accurate spatiotemporal resolution of the free shear layer in the slat-cove region. The excessive dissipative effects of the turbulence model, so prevalent in previous simulations, are circumvented by switching off the turbulence-production term in the slat-cove region. The justifications and physical arguments for taking such a step are explained in detail. The removal of this excess damping allows the shear layer to amplify large-scale structures, to achieve a proper non-linear saturation state, and to permit vortex merging. The large-scale disturbances are self-excited, and unlike our prior fully turbulent simulations, no external forcing of the shear layer is required. To obtain the farfield acoustics, the Ffowcs Williams and Hawkings equation is evaluated numerically using the simulated time-accurate flow data. The present comparison between the computed and measured farfield acoustic spectra shows much better agreement for the amplitude and frequency content than past calculations. The effects of the angle of attack on the slat's flow features and radiated acoustic field are also simulated and presented.

  14. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multi-core parallel supercomputers, next-generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most currently available algorithms and computational fluid dynamics codes can provide. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) the high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. Such an approach relieves the stringent time-step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  15. AIRS-Observed Interrelationships of Anomaly Time-Series of Moist Process-Related Parameters and Inferred Feedback Values on Various Spatial Scales

    NASA Technical Reports Server (NTRS)

    Molnar, Gyula I.; Susskind, Joel; Iredell, Lena

    2011-01-01

    In the beginning, a good measure of a GCM's performance was its ability to simulate the observed mean seasonal cycle. That is, a reasonable simulation of the means (i.e., small biases) and standard deviations of TODAY'S climate would suffice. Here, we argue that coupled GCM (CGCM for short) simulations of FUTURE climates should be evaluated in much more detail, both spatially and temporally. Arguably, it is not the bias, but rather the reliability of the model-generated anomaly time-series, even down to the [C]GCM grid scale, that really matters. This statement is underlined by the societal need to address potential REGIONAL climate variability and climate drifts/changes in a manner suitable for policy decisions.

  16. Gyrokinetic particle simulation of a field reversed configuration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fulton, D. P., E-mail: dfulton@uci.edu; Lau, C. K.; Holod, I.

    2016-01-15

    Gyrokinetic particle simulation of the field-reversed configuration (FRC) has been developed using the gyrokinetic toroidal code (GTC). The magnetohydrodynamic equilibrium is mapped from cylindrical coordinates to Boozer coordinates for the FRC core and scrape-off layer (SOL), respectively. A field-aligned mesh is constructed for solving self-consistent electric fields using a semi-spectral solver in a partial torus FRC geometry. This new simulation capability has been successfully verified and driftwave instability in the FRC has been studied using the gyrokinetic simulation for the first time. Initial GTC simulations find that in the FRC core, the ion-scale driftwave is stabilized by the large ion gyroradius. In the SOL, the driftwave is unstable on both ion and electron scales.

  17. Convection in a Very Compressible Fluid: Comparison of Simulations With Experiments

    NASA Technical Reports Server (NTRS)

    Meyer, H.; Furukawa, A.; Onuki, A.; Kogan, A. B.

    2003-01-01

    The time profile ΔT(t) of the temperature difference, measured across a very compressible fluid layer of supercritical He-3 after the start of a heat flow, shows a damped oscillatory behavior before steady-state convection is reached. The results for ΔT(t) obtained from numerical simulations and from laboratory experiments are compared over a temperature range where the compressibility varies by a factor of approximately 40. First, the steady-state convective heat current j^conv as a function of the Rayleigh number Ra is presented, and the agreement is found to be good. Second, the shape of the time profile and two characteristic times in the transient part of ΔT(t) from simulations and experiments are compared, namely: 1) t_osc, the oscillatory period, and 2) t_p, the time of the first peak after starting the heat flow. These times, scaled by the diffusive time τ_D, are presented versus Ra. The agreement is good for t_osc/τ_D, where the results collapse on a single curve showing a power-law behavior. The simulation hence confirms the universal scaling behavior found experimentally. However, for t_p/τ_D, where the experimental data also collapse on a single curve, the simulation results show systematic departures from such a behavior. A possible reason for some of the disagreements, both in the time profile and in t_p, is discussed. In the Appendix a third characteristic time, t_m, between the first peak and the first oscillation minimum is plotted and a comparison between the results of experiments and simulations is made.

  18. Inferring multi-scale neural mechanisms with brain network modelling

    PubMed Central

    Schirner, Michael; McIntosh, Anthony Randal; Jirsa, Viktor; Deco, Gustavo

    2018-01-01

    The neurophysiological processes underlying non-invasive brain activity measurements are incompletely understood. Here, we developed a connectome-based brain network model that integrates individual structural and functional data with neural population dynamics to support multi-scale neurophysiological inference. Simulated populations were linked by structural connectivity and, as a novelty, driven by electroencephalography (EEG) source activity. Simulations not only predicted subjects' individual resting-state functional magnetic resonance imaging (fMRI) time series and spatial network topologies over 20 minutes of activity, but more importantly, they also revealed precise neurophysiological mechanisms that underlie and link six empirical observations from different scales and modalities: (1) resting-state fMRI oscillations, (2) functional connectivity networks, (3) excitation-inhibition balance, (4, 5) inverse relationships between α-rhythms, spike-firing and fMRI on short and long time scales, and (6) fMRI power-law scaling. These findings underscore the potential of this new modelling framework for general inference and integration of neurophysiological knowledge to complement empirical studies. PMID:29308767

  19. Flash-Flood hydrological simulations at regional scale. Scale signature on road flooding vulnerability

    NASA Astrophysics Data System (ADS)

    Anquetin, Sandrine; Vannier, Olivier; Ollagnier, Mélody; Braud, Isabelle

    2015-04-01

    This work contributes to the evaluation of the dynamics of human exposure during flash-flood events in the Mediterranean region. Understanding why and how commuters modify their daily mobility in the Cévennes - Vivarais area (France) is the long-term objective of the study. To reach this objective, the methodology relies on three steps: i) evaluation of daily travel patterns; ii) reconstitution of road-flooding events in the region, based on hydrological simulation at the regional scale, in order to capture the time evolution and intensity of floods; and iii) identification of the daily fluctuation of exposure according to road-flooding scenarios and the time evolution of mobility patterns. This work deals with the second step. To do so, the physically based, non-calibrated hydrological model CVN (Vannier, 2013) is implemented to retrieve the hydrological signature of past flash-flood events in Southern France. Four past events are analyzed (September 2002; September 2005, split into two different events; October 2008). Since the regional scale is investigated, the studied catchments range from a few km² to a few hundred km², and many of them are ungauged. The evaluation is based on a multi-scale approach using complementary observations coming from post-flood experiments (for small and/or ungauged catchments) and the operational hydrological network (for larger catchments). The scales of risk (time and location of road flooding) are also compared to observed road-cut data. The discussion aims at improving our understanding of the hydrological processes associated with road-flooding vulnerability. We specifically analyze the runoff coefficient and the ratio between surface and groundwater flows at the regional scale. The results show that, overall, the three regional simulations provide good scores for the probability of detection and false alarms concerning road flooding (1600 points are analyzed for the whole region).
Our evaluation procedure provides new insights into the active hydrological processes at small scales (catchment areas < 10 km²), since these small scales, distributed over the whole region, are analyzed through road-cut data and post-flood field investigations. As shown in Vannier (2013), the signature of the weathered geological layer is significant in the simulated discharges. For catchments over schist geology, the simulated discharge, whatever the catchment size, is usually overestimated. Vannier, O., 2013. Apport de la modélisation hydrologique régionale à la compréhension des processus de crue en zone méditerranéenne. PhD thesis (in French), Grenoble University.

  20. Scalable streaming tools for analyzing N-body simulations: Finding halos and investigating excursion sets in one pass

    NASA Astrophysics Data System (ADS)

    Ivkin, N.; Liu, Z.; Yang, L. F.; Kumar, S. S.; Lemson, G.; Neyrinck, M.; Szalay, A. S.; Braverman, V.; Budavari, T.

    2018-04-01

    Cosmological N-body simulations play a vital role in studying models for the evolution of the Universe. To compare to observations and make scientific inferences, statistical analysis of large simulation datasets, e.g., finding halos or obtaining multi-point correlation functions, is crucial. However, traditional in-memory methods for these tasks do not scale to the prohibitively large datasets of modern simulations. Our prior paper (Liu et al., 2015) proposes memory-efficient streaming algorithms that can find the largest halos in a simulation with up to 10^9 particles on a small server or desktop. However, this approach fails when directly scaling to larger datasets. This paper presents a robust streaming tool that leverages state-of-the-art techniques in GPU acceleration, sampling, and parallel I/O to significantly improve performance and scalability. Our rigorous analysis of the sketch parameters improves the previous results from finding the centers of the 10^3 largest halos (Liu et al., 2015) to ∼10^4-10^5, and reveals the trade-offs between memory, running time, and number of halos. Our experiments show that our tool can scale to datasets with up to ∼10^12 particles while using less than an hour of running time on a single Nvidia GTX 1080 GPU.
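The one-pass flavor of such sketch-based halo finding can be illustrated with a deterministic heavy-hitters summary (Misra-Gries) over a stream of particle grid-cell IDs. The actual tool uses GPU-accelerated sketches, so this is only an assumed, simplified stand-in; the stream and cell IDs are invented:

```python
def misra_gries(stream, k):
    """One-pass heavy-hitters summary using at most k-1 counters.

    Any item occurring more than len(stream)/k times is guaranteed to
    survive in the summary, using memory independent of stream length.
    """
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # decrement every counter; drop any that reach zero
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

# toy particle stream: two dense cells (IDs 7 and 3) among sparse ones
stream = [7] * 50 + [3] * 40 + list(range(100, 160))
dense = misra_gries(stream, k=5)  # surviving keys include 7 and 3
```

The appeal for N-body data is the same as in the paper: candidate dense regions are identified in a single pass with bounded memory, instead of loading the full particle set.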

  1. SIMSAT: An object oriented architecture for real-time satellite simulation

    NASA Technical Reports Server (NTRS)

    Williams, Adam P.

    1993-01-01

    Real-time satellite simulators are vital tools in the support of satellite missions. They are used in the testing of ground control systems, the training of operators, the validation of operational procedures, and the development of contingency plans. The simulators must provide high-fidelity modeling of the satellite, which requires detailed system information, much of which is not available until relatively near launch. The short time-scales and resulting high productivity required of such simulator developments culminate in the need for a reusable infrastructure that can be used as a basis for each simulator. This paper describes a major new simulation infrastructure package, the Software Infrastructure for Modelling Satellites (SIMSAT). It outlines the object-oriented design methodology used, describes the resulting design, and discusses the advantages and disadvantages experienced in applying the methodology.

  2. Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors.

    PubMed

    Hines, Michael L; Eichner, Hubert; Schürmann, Felix

    2008-08-01

    Neuron tree topology equations can be split into two subtrees and solved on different processors with no change in accuracy, stability, or computational effort; communication costs involve only sending and receiving two double precision values by each subtree at each time step. Splitting cells is useful in attaining load balance in neural network simulations, especially when there is a wide range of cell sizes and the number of cells is about the same as the number of processors. For compute-bound simulations load balance results in almost ideal runtime scaling. Application of the cell splitting method to two published network models exhibits good runtime scaling on twice as many processors as could be effectively used with whole-cell balancing.
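A hedged sketch of why splitting large cells helps load balance: greedy longest-processing-time assignment of per-cell costs to ranks. This is illustrative only, not NEURON's actual balancer or solver, and the cost numbers are made up:

```python
import heapq

def assign_loads(costs, nproc):
    """Greedy LPT scheduling: give each cost to the least-loaded rank."""
    heap = [(0.0, p) for p in range(nproc)]
    heapq.heapify(heap)
    loads = [0.0] * nproc
    for c in sorted(costs, reverse=True):
        load, p = heapq.heappop(heap)
        loads[p] = load + c
        heapq.heappush(heap, (load + c, p))
    return loads

# one big cell dominates: the best two-rank assignment is 10 vs 4
whole = assign_loads([10, 1, 1, 1, 1], nproc=2)   # max load 10
# splitting the big cell into two equal subtrees balances to 7 vs 7
split = assign_loads([5, 5, 1, 1, 1, 1], nproc=2)  # max load 7
```

With whole cells, the largest cell sets a floor on the worst rank's load; splitting it into two subtrees (at the cost of exchanging two doubles per time step, per the abstract) lets the scheduler even things out, which is why runtime keeps scaling on twice as many processors.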

  3. Short- and long-time diffusion and dynamic scaling in suspensions of charged colloidal particles.

    PubMed

    Banchio, Adolfo J; Heinen, Marco; Holmqvist, Peter; Nägele, Gerhard

    2018-04-07

    We report on a comprehensive theory-simulation-experimental study of collective and self-diffusion in concentrated suspensions of charge-stabilized colloidal spheres. In theory and simulation, the spheres are assumed to interact directly by a hard-core plus screened Coulomb effective pair potential. The intermediate scattering function, f c (q, t), is calculated by elaborate accelerated Stokesian dynamics (ASD) simulations for Brownian systems where many-particle hydrodynamic interactions (HIs) are fully accounted for, using a novel extrapolation scheme to a macroscopically large system size valid for all correlation times. The study spans the correlation time range from the colloidal short-time to the long-time regime. Additionally, Brownian Dynamics (BD) simulation and mode-coupling theory (MCT) results of f c (q, t) are generated where HIs are neglected. Using these results, the influence of HIs on collective and self-diffusion and the accuracy of the MCT method are quantified. It is shown that HIs enhance collective and self-diffusion at intermediate and long times. At short times self-diffusion, and for wavenumbers outside the structure factor peak region also collective diffusion, are slowed down by HIs. MCT significantly overestimates the slowing influence of dynamic particle caging. The dynamic scattering functions obtained in the ASD simulations are in overall good agreement with our dynamic light scattering (DLS) results for a concentration series of charged silica spheres in an organic solvent mixture, in the experimental time window and wavenumber range. From the simulation data for the time derivative of the width function associated with f c (q, t), there is indication of long-time exponential decay of f c (q, t), for wavenumbers around the location of the static structure factor principal peak. 
The experimental scattering functions in the probed time range are consistent with a time-wavenumber factorization scaling behavior of f c (q, t) that was first reported by Segrè and Pusey [Phys. Rev. Lett. 77, 771 (1996)] for suspensions of hard spheres. Our BD simulation and MCT results predict a significant violation of exact factorization scaling which, however, is approximately restored according to the ASD results when HIs are accounted for, consistent with the experimental findings for f c (q, t). Our study of collective diffusion is amended by simulation and theoretical results for the self-intermediate scattering function, f s (q, t), and its non-Gaussian parameter α 2 (t) and for the particle mean squared displacement W(t) and its time derivative. Since self-diffusion properties are not assessed in standard DLS measurements, a method to deduce W(t) approximately from f c (q, t) is theoretically validated.

  4. On the estimation and detection of the Rees-Sciama effect

    NASA Astrophysics Data System (ADS)

    Fullana, M. J.; Arnau, J. V.; Thacker, R. J.; Couchman, H. M. P.; Sáez, D.

    2017-02-01

    Maps of the Rees-Sciama (RS) effect are simulated using the parallel N-body code HYDRA and a run-time ray-tracing procedure. A method designed for the analysis of small, square cosmic microwave background (CMB) maps is applied to our RS maps. Each of these techniques has been tested and successfully applied in previous papers. Within a range of angular scales, our estimate of the RS angular power spectrum due to variations in the peculiar gravitational potential on scales smaller than 42/h megaparsecs is shown to be robust. An exhaustive study of the redshifts and spatial scales relevant for the production of RS anisotropy is developed for the first time. Results from this study demonstrate that (I) to estimate the full integrated RS effect, the initial redshift for the calculations (integration) must be greater than 25; (II) the effect produced by strongly non-linear structures is very small and peaks at angular scales close to 4.3 arcmin; and (III) the RS anisotropy cannot be detected either directly (in temperature CMB maps) or by looking for cross-correlations between these maps and tracers of the dark matter distribution. To estimate the RS effect produced by scales larger than 42/h megaparsecs, where the density contrast is not strongly non-linear, high-accuracy N-body simulations appear unnecessary. Simulations based on approximations such as the Zel'dovich approximation and adhesion prescriptions, for example, may be adequate. These results can be used to guide the design of future RS simulations.

  5. Tidal dissipation in a viscoelastic planet

    NASA Technical Reports Server (NTRS)

    Ross, M.; Schubert, G.

    1986-01-01

    Tidal dissipation is examined using Maxwell, standard linear solid (SLS), and Kelvin-Voigt models, and viscosity parameters are derived from the models that yield the amount of dissipation previously calculated for a moon model with Q = 100 in a hypothetical orbit closer to the earth. The relevance of these models is then assessed for simulating planetary tidal responses. Viscosities of 10 exp 14 and 10 exp 18 Pa s for the Kelvin-Voigt and Maxwell rheologies, respectively, are needed to match the dissipation rate calculated using the Q approach with a quality factor of 100. The SLS model requires a short-time viscosity of 3 x 10 exp 17 Pa s to match the Q = 100 dissipation rate, independent of the model's relaxation strength. Since Q = 100 is considered a representative value for the interiors of terrestrial planets, it is proposed that the derived viscosities should characterize planetary materials. However, it is shown that neither the Kelvin-Voigt nor the SLS model simulates the behavior of real planetary materials on long time scales. The Maxwell model, by contrast, behaves realistically on both long and short time scales. The inferred Maxwell viscosity, corresponding to the time scale of days, is several times smaller than the longer time scale (greater than or equal to 10 exp 14 years) viscosity of the earth's mantle.

  6. Assessing self-care and social function using a computer adaptive testing version of the Pediatric Evaluation of Disability Inventory Accepted for Publication, Archives of Physical Medicine and Rehabilitation

    PubMed Central

    Coster, Wendy J.; Haley, Stephen M.; Ni, Pengsheng; Dumas, Helene M.; Fragala-Pinkham, Maria A.

    2009-01-01

    Objective To examine score agreement, validity, precision, and response burden of a prototype computer adaptive testing (CAT) version of the Self-Care and Social Function scales of the Pediatric Evaluation of Disability Inventory (PEDI) compared to the full-length version of these scales. Design Computer simulation analysis of cross-sectional and longitudinal retrospective data; cross-sectional prospective study. Settings Pediatric rehabilitation hospital, including inpatient acute rehabilitation, day school program, outpatient clinics; community-based day care, preschool, and children’s homes. Participants Four hundred sixty-nine children with disabilities and 412 children with no disabilities (analytic sample); 38 children with disabilities and 35 children without disabilities (cross-validation sample). Interventions Not applicable. Main Outcome Measures Summary scores from prototype CAT applications of each scale using 15-, 10-, and 5-item stopping rules; scores from the full-length Self-Care and Social Function scales; time (in seconds) to complete assessments and respondent ratings of burden. Results Scores from both computer simulations and field administration of the prototype CATs were highly consistent with scores from full-length administration (all r’s between .94 and .99). Using computer simulation of retrospective data, discriminant validity and sensitivity to change of the CATs closely approximated that of the full-length scales, especially when the 15- and 10-item stopping rules were applied. In the cross-validation study the time to administer both CATs was 4 minutes, compared to over 16 minutes to complete the full-length scales. Conclusions Self-care and Social Function score estimates from CAT administration are highly comparable to those obtained from full-length scale administration, with small losses in validity and precision and substantial decreases in administration time. PMID:18373991
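The adaptive mechanics behind a CAT of this kind (pick the most informative remaining item, re-estimate the score, stop after a fixed item count) can be sketched with a one-parameter IRT model. This is an illustrative sketch only: the Rasch model, the item difficulties, and the 10-item stopping rule stand in for the PEDI's actual calibration, which the abstract does not give.

```python
import math
import random

def prob(theta, b):
    """Rasch model: probability of a positive item response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def update_theta(theta, difficulties, responses, steps=10):
    """Newton-Raphson maximum-likelihood ability estimate, clamped to [-4, 4]."""
    for _ in range(steps):
        g = sum(r - prob(theta, b) for b, r in zip(difficulties, responses))
        h = -sum(prob(theta, b) * (1.0 - prob(theta, b)) for b in difficulties)
        theta = max(-4.0, min(4.0, theta - g / h))
    return theta

def run_cat(bank, true_theta, stop_after=10, rng=None):
    """Administer items adaptively until the stopping rule triggers."""
    rng = rng or random.Random(0)
    theta, given, responses = 0.0, [], []
    available = set(range(len(bank)))
    while len(given) < stop_after and available:
        # Rasch item information p*(1-p) peaks when difficulty == theta,
        # so pick the remaining item closest to the current estimate.
        i = min(available, key=lambda j: abs(bank[j] - theta))
        available.remove(i)
        r = 1 if rng.random() < prob(true_theta, bank[i]) else 0
        given.append(bank[i])
        responses.append(r)
        theta = update_theta(theta, given, responses)
    return theta, len(given)

bank = [i / 5.0 - 2.0 for i in range(21)]        # difficulties -2.0 .. 2.0
est, n_items = run_cat(bank, true_theta=0.7, stop_after=10)
```

Under a 10-item rule the simulated respondent answers only 10 of the 21 items, which is the source of the reported reduction in administration time.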

  7. Development of the Transport Class Model (TCM) Aircraft Simulation From a Sub-Scale Generic Transport Model (GTM) Simulation

    NASA Technical Reports Server (NTRS)

    Hueschen, Richard M.

    2011-01-01

    A six degree-of-freedom, flat-earth dynamics, non-linear, and non-proprietary aircraft simulation was developed that is representative of a generic mid-sized twin-jet transport aircraft. The simulation was developed from a non-proprietary, publicly available, subscale twin-jet transport aircraft simulation using scaling relationships and a modified aerodynamic database. The simulation has an extended aerodynamics database with aero data outside the normal transport-operating envelope (large angle-of-attack and sideslip values). The simulation has representative transport aircraft surface actuator models with variable rate-limits and generally fixed position limits. The simulation contains a generic 40,000 lb sea level thrust engine model. The engine model is a first order dynamic model with a variable time constant that changes according to simulation conditions. The simulation provides a means for interfacing a flight control system to use the simulation sensor variables and to command the surface actuators and throttle position of the engine model.
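The engine model described is a first-order lag with a condition-dependent time constant. A minimal discrete-time sketch follows; the time-constant schedule, step size, and spool-up/spool-down values are illustrative assumptions, not parameters from the TCM simulation.

```python
def engine_step(thrust, cmd, dt, tau):
    """One forward-Euler step of a first-order engine lag (lb thrust)."""
    return thrust + dt * (cmd - thrust) / tau

def time_constant(thrust, cmd):
    """Illustrative condition-dependent schedule: spool-up slower than spool-down."""
    return 1.5 if cmd > thrust else 0.8

thrust, cmd, dt = 0.0, 40000.0, 0.01    # command = assumed sea-level static thrust
t = 0.0
while t < 10.0:
    thrust = engine_step(thrust, cmd, dt, time_constant(thrust, cmd))
    t += dt
```

After ten seconds the modeled thrust has closed all but a fraction of a percent of the gap to the command, approaching it from below without overshoot.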

  8. A new heterogeneous asynchronous explicit-implicit time integrator for nonsmooth dynamics

    NASA Astrophysics Data System (ADS)

    Fekak, Fatima-Ezzahra; Brun, Michael; Gravouil, Anthony; Depale, Bruno

    2017-07-01

    In computational structural dynamics, particularly in the presence of nonsmooth behavior, the choice of the time step and the time integrator has a critical impact on the feasibility of the simulation. Furthermore, in some cases, as for a bridge crane under seismic loading, multiple time scales coexist in the same problem, and the use of multi-time-scale methods is suitable. Here, we propose a new explicit-implicit heterogeneous asynchronous time integrator (HATI) for nonsmooth transient dynamics with frictionless unilateral contacts and impacts, together with a new explicit time integrator for contact/impact problems in which the contact constraints are enforced using a Lagrange multiplier method. In other words, the aim of this paper is to use an explicit time integrator with a fine time scale in the contact area to reproduce high frequency phenomena, while an implicit time integrator is adopted in the other parts to reproduce the lower frequency phenomena and to reduce the CPU time. In a first step, the explicit time integrator is tested on a one-dimensional example and compared to Moreau-Jean's event-capturing schemes. The explicit algorithm is found to be very accurate; the scheme generally has a higher order of convergence than Moreau-Jean's schemes and also exhibits excellent energy behavior. Then, the two-time-scale explicit-implicit HATI is applied to the numerical example of a bridge crane under seismic loading. The results are validated against a fine-scale fully explicit computation. The energy dissipated at the implicit-explicit interface is well controlled, and the computational time is lower than that of a fully explicit simulation.

  9. Evaluation and projected changes of precipitation statistics in convection-permitting WRF climate simulations over Central Europe

    NASA Astrophysics Data System (ADS)

    Knist, Sebastian; Goergen, Klaus; Simmer, Clemens

    2018-02-01

    We perform simulations with the WRF regional climate model at 12 and 3 km grid resolution for the current and future climate over Central Europe and evaluate their added value with a focus on the daily cycle and frequency distribution of rainfall and the relation between extreme precipitation and air temperature. First, a 9 year period of ERA-Interim driven simulations is evaluated against observations; then global climate model runs (MPI-ESM-LR RCP4.5 scenario) are downscaled and analyzed for three 12-year periods: a control, a mid-of-century, and an end-of-century projection. The higher resolution simulations reproduce both the diurnal cycle and the hourly intensity distribution of precipitation more realistically than the 12 km simulation. Moreover, the observed increase of the temperature-extreme precipitation scaling from the Clausius-Clapeyron (C-C) rate of 7% K⁻¹ to a super-adiabatic rate for temperatures above 11 °C is reproduced only by the 3 km simulation. The drop of the scaling rates at high temperatures under moisture-limited conditions differs between sub-regions. For both future time spans, the simulations suggest a slight decrease in mean summer precipitation and an increase in hourly heavy and extreme precipitation; this increase is stronger in the 3 km runs. Temperature-extreme precipitation scaling curves in the future climate are projected to shift along the 7% K⁻¹ trajectory to higher peak extreme precipitation values at higher temperatures. The curves keep their typical shape of C-C scaling followed by super-adiabatic scaling and a drop-off at higher temperatures due to moisture limitation.
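The scaling described, a Clausius-Clapeyron rate of about 7% per kelvin that steepens to a super-adiabatic rate above roughly 11 °C, can be written as a piecewise exponential. The reference intensity, reference temperature, and the doubled super-adiabatic rate below are illustrative assumptions, not values fitted to the study's data:

```python
import math

def extreme_precip(temp_c, p_ref=10.0, t_ref=5.0, t_break=11.0,
                   cc_rate=0.07, super_rate=0.14):
    """Toy temperature scaling of hourly extreme precipitation (mm/h).

    Below t_break the intensity follows the Clausius-Clapeyron rate
    (~7% per K); above it a super-adiabatic rate (here assumed 2x C-C)
    applies. All reference values are illustrative placeholders.
    """
    if temp_c <= t_break:
        return p_ref * math.exp(cc_rate * (temp_c - t_ref))
    p_break = p_ref * math.exp(cc_rate * (t_break - t_ref))
    return p_break * math.exp(super_rate * (temp_c - t_break))
```

Each additional kelvin then multiplies the extreme intensity by exp(0.07) ≈ 1.073 below the break and by exp(0.14) ≈ 1.150 above it, with the curve continuous at the break point.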

  10. High Fidelity, “Faster than Real-Time” Simulator for Predicting Power System Dynamic Behavior - Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flueck, Alex

    The "High Fidelity, Faster than Real-Time Simulator for Predicting Power System Dynamic Behavior" was designed and developed by Illinois Institute of Technology with critical contributions from Electrocon International, Argonne National Laboratory, Alstom Grid and McCoy Energy. Also essential to the project were our two utility partners: Commonwealth Edison and AltaLink. The project was a success due to several major breakthroughs in the area of large-scale power system dynamics simulation, including (1) a validated faster-than-real-time simulation of both stable and unstable transient dynamics in a large-scale positive sequence transmission grid model, (2) a three-phase unbalanced simulation platform for modeling new grid devices, such as independently controlled single-phase static var compensators (SVCs), (3) the world's first high fidelity three-phase unbalanced dynamics and protection simulator based on Electrocon's CAPE program, and (4) a first-of-its-kind implementation of a single-phase induction motor model with stall capability. The simulator results will aid power grid operators in their true time of need, when there is a significant risk of cascading outages. The simulator will accelerate performance and enhance accuracy of dynamics simulations, enabling operators to maintain reliability and steer clear of blackouts. In the long term, the simulator will form the backbone of the newly conceived hybrid real-time protection and control architecture that will coordinate local controls, wide-area measurements, wide-area controls and advanced real-time prediction capabilities.
The nation's citizens will benefit in several ways, including (1) less down time from power outages due to the faster-than-real-time simulator's predictive capability, (2) higher levels of reliability due to the detailed dynamics plus protection simulation capability, and (3) more resiliency due to the three-phase unbalanced simulator's ability to model three-phase and single-phase networks and devices.

  11. Synergistic cross-scale coupling of turbulence in a tokamak plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howard, N. T., E-mail: nthoward@psfc.mit.edu; Holland, C.; White, A. E.

    2014-11-15

    For the first time, nonlinear gyrokinetic simulations spanning both the ion and electron spatio-temporal scales have been performed with realistic electron mass ratio ((m_D/m_e)^{1/2} = 60.0), realistic geometry, and all experimental inputs, demonstrating the coexistence and synergy of ion-scale (k_θ ρ_s ∼ O(1.0)) and electron-scale (k_θ ρ_e ∼ O(1.0)) turbulence in the core of a tokamak plasma. All multi-scale simulations utilized the GYRO code [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] to study the coupling of ion- and electron-scale turbulence in the core (r/a = 0.6) of an Alcator C-Mod L-mode discharge shown previously to exhibit an under-prediction of the electron heat flux when using simulations only including ion-scale turbulence. Electron-scale turbulence is found to play a dominant role in setting the electron heat flux level, and radially elongated (k_r ≪ k_θ) "streamers" are found to coexist with ion-scale eddies in experimental plasma conditions. Inclusion of electron-scale turbulence in these simulations is found to increase both ion and electron heat flux levels by enhancing the transport at the ion scale while also driving electron heat flux at sub-ρ_i scales. The combined increases in the low- and high-k driven electron heat flux may explain previously observed discrepancies between simulated and experimental electron heat fluxes and indicate a complex interaction of short and long wavelength turbulence.

  12. The Contribution of Human Factors in Military System Development: Methodological Considerations

    DTIC Science & Technology

    1980-07-01

    Risk/Uncertainty Analysis - Project Scoring - Utility Scales - Relevance Tree Techniques (Reverse Factor Analysis) 2. Computer Simulation. Souder, W.E., Effectiveness of mathematical models for R&D project selection, Management Science, April 1973, 18. Souder, W.E., A scoring methodology for... Index terms: PROFICIENCY - test scores (written); RADIATION - radiation effects, aircrew performance in radiation environments; REACTION TIME - time ...

  13. [Collaborative application of BEPS at different time steps].

    PubMed

    Lu, Wei; Fan, Wen Yi; Tian, Tian

    2016-09-01

    BEPSHourly simulates the ecological and physiological processes of vegetation at hourly time steps, and is often applied to analyze the diurnal change of gross primary productivity (GPP) and net primary productivity (NPP) at site scale, at the cost of a more complex model structure and a time-consuming solving process. The daily photosynthetic rate calculation in the BEPSDaily model is simpler and less time-consuming, not involving many iterative processes, and is therefore suitable for simulating regional primary productivity and analyzing the spatial distribution of regional carbon sources and sinks. According to the characteristics and applicability of the BEPSDaily and BEPSHourly models, this paper proposed a method for the collaborative application of BEPS at daily and hourly time steps. First, BEPSHourly was used to optimize the main photosynthetic parameters, the maximum carboxylation rate (V_cmax) and the maximum rate of photosynthetic electron transport (J_max), at site scale; the two optimized parameters were then introduced into the BEPSDaily model to estimate NPP at regional scale. The results showed that optimizing the main photosynthesis parameters based on flux data could improve the simulation ability of the model. In 2011, the primary productivity of the different forest types was, in descending order: deciduous broad-leaved forest, mixed forest, and coniferous forest. The proposed collaborative application of carbon cycle models at different time steps can effectively optimize the main photosynthesis parameters V_cmax and J_max, simulate monthly averaged diurnal GPP and NPP, calculate regional NPP, and analyze the spatial distribution of regional carbon sources and sinks.

  14. Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data

    NASA Technical Reports Server (NTRS)

    Johnson, Marty E.; Lalime, Aimee L.; Grosveld, Ferdinand W.; Rizzi, Stephen A.; Sullivan, Brenda M.

    2003-01-01

    Applying binaural simulation techniques to structural acoustic data can be very computationally intensive, as the number of discrete noise sources can be very large. Typically, Head Related Transfer Functions (HRTFs) are used to individually filter the signals from each of the sources in the acoustic field, so creating a binaural simulation implies the use of potentially hundreds of real-time filters. This paper details two methods of reducing the number of real-time computations required: (i) using the singular value decomposition (SVD) to reduce the complexity of the HRTFs by breaking them into dominant singular values and vectors, and (ii) using equivalent source reduction (ESR) to reduce the number of sources to be analyzed in real time by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. The ESR and SVD reduction methods can be combined to provide an estimated computation time reduction of 99.4% for the structural acoustic data tested. In addition, preliminary tests have shown a 97% correlation between the results of the combined reduction methods and the results found with the current binaural simulation techniques.
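The SVD step described, truncating a bank of HRTF filters to its dominant singular values and vectors so that only a few shared filters need to run in real time, can be illustrated with NumPy. The filter matrix here is synthetic, constructed to have a low effective rank; it is not the paper's measured HRTF data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an HRTF filter bank: frequency bins x source
# directions, constructed to have rapidly decaying singular values.
n_freq, n_src, true_rank = 64, 200, 4
H = rng.standard_normal((n_freq, true_rank)) @ rng.standard_normal((true_rank, n_src))
H += 0.01 * rng.standard_normal((n_freq, n_src))      # small perturbation

U, s, Vt = np.linalg.svd(H, full_matrices=False)

k = 4                                  # dominant singular values retained
H_k = (U[:, :k] * s[:k]) @ Vt[:k]      # best rank-k approximation of H

# Fraction of the filter bank lost by the truncation.
rel_err = np.linalg.norm(H - H_k) / np.linalg.norm(H)
```

With k singular vectors kept, each ear needs only k fixed filters plus one scalar weight per source, instead of one full filter per source, which is the mechanism behind the reported computation savings.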

  15. A simple phenomenological model for grain clustering in turbulence

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2016-01-01

    We propose a simple model for density fluctuations of aerodynamic grains, embedded in a turbulent, gravitating gas disc. The model combines a calculation for the behaviour of a group of grains encountering a single turbulent eddy with a hierarchical approximation of the eddy statistics. This makes analytic predictions for a range of quantities including: distributions of grain densities, power spectra and correlation functions of fluctuations, and maximum grain densities reached. We predict how these scale as a function of grain drag time t_s, spatial scale, grain-to-gas mass ratio ρ̃, strength of turbulence α, and detailed disc properties. We test these against numerical simulations with various turbulence-driving mechanisms. The simulations agree well with the predictions, spanning t_s Ω ∼ 10^-4 to 10, ρ̃ ∼ 0 to 3, and α ∼ 10^-10 to 10^-2. Results from `turbulent concentration' simulations and laboratory experiments are also predicted as a special case. Vortices on a wide range of scales disperse and concentrate grains hierarchically. For small grains this is most efficient in eddies with turnover time comparable to the stopping time, but fluctuations are also damped by local gas-grain drift. For large grains, shear and gravity lead to a much broader range of eddy scales driving fluctuations, with most power on the largest scales. The grain density distribution has a log-Poisson shape, with fluctuations for large grains up to factors ≳1000. We provide simple analytic expressions for the predictions, and discuss implications for planetesimal formation, grain growth, and the structure of turbulence.

  16. First-principles calculation of the optical properties of an amphiphilic cyanine dye aggregate.

    PubMed

    Haverkort, Frank; Stradomska, Anna; de Vries, Alex H; Knoester, Jasper

    2014-02-13

    Using a first-principles approach, we calculate electronic and optical properties of molecular aggregates of the dye amphi-pseudoisocyanine, whose structures we obtained from molecular dynamics (MD) simulations of the self-aggregation process. Using quantum chemistry methods, we translate the structural information into an effective time-dependent Frenkel exciton Hamiltonian for the dominant optical transitions in the aggregate. This Hamiltonian is used to calculate the absorption spectrum. Detailed analysis of the dynamic fluctuations in the molecular transition energies and intermolecular excitation transfer interactions in this Hamiltonian allows us to elucidate the origin of the relevant time scales; short time scales, on the order of up to a few hundreds of femtoseconds, result from internal motions of the dye molecules, while the longer (a few picosecond) time scales we ascribe to environmental motions. The absorption spectra of the aggregate structures obtained from MD feature a blue-shifted peak compared to that of the monomer; thus, our aggregates can be classified as H-aggregates, although considerable oscillator strength is carried by states along the entire exciton band. Comparison to the experimental absorption spectrum of amphi-PIC aggregates shows that the simulated line shape is too wide, pointing to too much disorder in the internal structure of the simulated aggregates.

  17. Microscopic modeling of gas-surface scattering: II. Application to argon atom adsorption on a platinum (111) surface

    NASA Astrophysics Data System (ADS)

    Filinov, A.; Bonitz, M.; Loffhagen, D.

    2018-06-01

    A new combination of first-principles molecular dynamics (MD) simulations with a rate equation model presented in the preceding paper (paper I) is applied to analyze in detail the scattering of argon atoms from a platinum (111) surface. The combined model is based on a classification of all atom trajectories according to their energies into trapped, quasi-trapped and scattering states. The number of particles in each of the three classes obeys coupled rate equations, whose coefficients are the transition probabilities between these states, obtained from MD simulations. While these rates are generally time-dependent, after a characteristic time scale t_E of several tens of picoseconds they become stationary, allowing for a rather simple analysis. Here, we investigate this time scale by analyzing in detail the temporal evolution of the energy distribution functions of the adsorbate atoms. We separately study the energy loss distribution function of the atoms and the distribution function of in-plane and perpendicular energy components. Further, we compute the sticking probability of argon atoms as a function of incident energy, angle and lattice temperature. Our model is important for plasma-surface modeling as it allows accurate simulations to be extended to longer time scales.
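Once the transition rates have become stationary, the three-state population model described reduces to linear rate equations that can be integrated directly. The rate values, time step, and absorbing treatment of scattering states below are placeholders, not the MD-derived coefficients of the paper:

```python
def step(populations, rates, dt):
    """One forward-Euler step of the three-state rate equations.

    populations = [N_trapped, N_quasi_trapped, N_scattering];
    rates[i][j] is the transition rate from state i to state j.
    """
    dn = [0.0, 0.0, 0.0]
    for i in range(3):
        for j in range(3):
            if i != j:
                flow = rates[i][j] * populations[i] * dt
                dn[i] -= flow
                dn[j] += flow
    return [n + d for n, d in zip(populations, dn)]

# Placeholder rates (1/ps); scattering states are treated as absorbing.
rates = [[0.0, 0.02, 0.01],     # trapped     -> quasi-trapped, scattering
         [0.05, 0.0, 0.03],     # quasi       -> trapped, scattering
         [0.0, 0.0, 0.0]]       # scattering  -> (no return)
n = [1.0, 0.0, 0.0]             # all atoms initially trapped
for _ in range(1000):           # integrate 100 ps with dt = 0.1 ps
    n = step(n, rates, dt=0.1)
```

Because every outflow from one state is an inflow to another, the total population is conserved at each step, which is a useful check on any implementation of such a model.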

  18. Gyrokinetic predictions of multiscale transport in a DIII-D ITER baseline discharge

    DOE PAGES

    Holland, C.; Howard, N. T.; Grierson, B. A.

    2017-05-08

    New multiscale gyrokinetic simulations predict that electron energy transport in a DIII-D ITER baseline discharge with dominant electron heating and low input torque is multiscale in nature, with roughly equal amounts of the electron energy flux Q_e coming from long wavelength ion-scale (k_y ρ_s < 1) and short wavelength electron-scale (k_y ρ_s > 1) fluctuations when the gyrokinetic results match independent power balance calculations. Corresponding conventional ion-scale simulations are able to match the power balance ion energy flux Q_i, but systematically underpredict Q_e when doing so. We observe significant nonlinear cross-scale couplings in the multiscale simulations, but the exact simulation predictions are found to be extremely sensitive to variations of model input parameters within experimental uncertainties. Most notably, depending upon the exact value of the equilibrium E × B shearing rate γ_{E×B} used, either enhancement or suppression of the long-wavelength turbulence and transport levels in the multiscale simulations is observed relative to what is predicted by ion-scale simulations. While the enhancement of the long wavelength fluctuations by inclusion of the short wavelength turbulence was previously observed in similar multiscale simulations of an Alcator C-Mod L-mode discharge, these new results show for the first time a complete suppression of long-wavelength turbulence in a multiscale simulation, for parameters at which conventional ion-scale simulation predicts small but finite levels of low-k turbulence and transport consistent with the power balance Q_i. 
Though computational resource limitations prevent a fully rigorous validation assessment of these new results, they provide significant new evidence that electron energy transport in burning plasmas is likely to have a strong multiscale character, with significant nonlinear cross-scale couplings that must be fully understood to predict the performance of those plasmas with confidence.

  19. Gyrokinetic predictions of multiscale transport in a DIII-D ITER baseline discharge

    NASA Astrophysics Data System (ADS)

    Holland, C.; Howard, N. T.; Grierson, B. A.

    2017-06-01

    New multiscale gyrokinetic simulations predict that electron energy transport in a DIII-D ITER baseline discharge with dominant electron heating and low input torque is multiscale in nature, with roughly equal amounts of the electron energy flux Q_e coming from long wavelength ion-scale (k_y ρ_s < 1) and short wavelength electron-scale (k_y ρ_s > 1) fluctuations when the gyrokinetic results match independent power balance calculations. Corresponding conventional ion-scale simulations are able to match the power balance ion energy flux Q_i, but systematically underpredict Q_e when doing so. Significant nonlinear cross-scale couplings are observed in the multiscale simulations, but the exact simulation predictions are found to be extremely sensitive to variations of model input parameters within experimental uncertainties. Most notably, depending upon the exact value of the equilibrium E × B shearing rate γ_{E×B} used, either enhancement or suppression of the long-wavelength turbulence and transport levels in the multiscale simulations is observed relative to what is predicted by ion-scale simulations. While the enhancement of the long wavelength fluctuations by inclusion of the short wavelength turbulence was previously observed in similar multiscale simulations of an Alcator C-Mod L-mode discharge, these new results show for the first time a complete suppression of long-wavelength turbulence in a multiscale simulation, for parameters at which conventional ion-scale simulation predicts small but finite levels of low-k turbulence and transport consistent with the power balance Q_i. 
Although computational resource limitations prevent a fully rigorous validation assessment of these new results, they provide significant new evidence that electron energy transport in burning plasmas is likely to have a strong multiscale character, with significant nonlinear cross-scale couplings that must be fully understood to predict the performance of those plasmas with confidence.

  20. 3D Hybrid Simulations of Interactions of High-Velocity Plasmoids with Obstacles

    NASA Astrophysics Data System (ADS)

    Omelchenko, Y. A.; Weber, T. E.; Smith, R. J.

    2015-11-01

    Interactions of fast plasma streams and objects with magnetic obstacles (dipoles, mirrors, etc.) lie at the core of many space and laboratory plasma phenomena, ranging from magnetoshells and solar wind interactions with planetary magnetospheres to compact fusion plasmas (spheromaks and FRCs) to astrophysics-in-lab experiments. Properly modeling ion kinetic, finite-Larmor-radius and Hall effects is essential for describing large-scale plasma dynamics, turbulence and heating in complex magnetic field geometries. Using an asynchronous parallel hybrid code, HYPERS, we conduct 3D hybrid (particle-in-cell ion, fluid electron) simulations of such interactions under realistic conditions that include magnetic flux coils, ion-ion collisions and the Chodura resistivity. HYPERS does not step simulation variables synchronously in time but instead performs time integration by executing asynchronous discrete events: updates of particles and fields carried out as frequently as dictated by local physical time scales. Simulations are compared with data from the MSX experiment, which studies the physics of magnetized collisionless shocks through the acceleration and subsequent stagnation of FRC plasmoids against a strong magnetic mirror and flux-conserving boundary.

  1. Simulation of High-Beta Plasma Confinement

    NASA Astrophysics Data System (ADS)

    Font, Gabriel; Welch, Dale; Mitchell, Robert; McGuire, Thomas

    2017-10-01

    The Lockheed Martin Compact Fusion Reactor concept utilizes magnetic cusps to confine the plasma. In order to minimize losses through the axial and ring cusps, the plasma is pushed to a high-beta state. Simulations of the plasma and magnetic field system were performed in an effort to quantify particle confinement times and plasma behavior characteristics. Computations are carried out with LSP using implicit PIC methods. Simulations of different sub-scale geometries at high-beta fusion conditions are used to determine particle loss scaling with reactor size, plasma conditions, and gyro radii. ©2017 Lockheed Martin Corporation. All Rights Reserved.

  2. Multiscale Aspects of Modeling Gas-Phase Nanoparticle Synthesis

    PubMed Central

    Buesser, B.; Gröhn, A.J.

    2013-01-01

    Aerosol reactors are utilized to manufacture nanoparticles in industrially relevant quantities. The development, understanding and scale-up of aerosol reactors can be facilitated with models and computer simulations. This review aims to provide an overview of recent developments of models and simulations and discuss their interconnection in a multiscale approach. A short introduction of the various aerosol reactor types and gas-phase particle dynamics is presented as a background for the later discussion of the models and simulations. Models are presented with decreasing time and length scales in sections on continuum, mesoscale, molecular dynamics and quantum mechanics models. PMID:23729992

  3. Time-Variable Transit Time Distributions in the Hyporheic Zone of a Headwater Mountain Stream

    NASA Astrophysics Data System (ADS)

    Ward, Adam S.; Schmadel, Noah M.; Wondzell, Steven M.

    2018-03-01

    Exchange of water between streams and their hyporheic zones is known to be dynamic in response to hydrologic forcing, variable in space, and to exist in a framework with nested flow cells. The expected result of heterogeneous geomorphic setting, hydrologic forcing, and between-feature interaction is hyporheic transit times that are highly variable in both space and time. Transit time distributions (TTDs) are important as they reflect the potential for hyporheic processes to impact biogeochemical transformations and ecosystems. In this study we simulate time-variable transit time distributions based on dynamic vertical exchange in a headwater mountain stream with observed, heterogeneous step-pool morphology. Our simulations include hyporheic exchange over a 600 m river corridor reach driven by continuously observed, time-variable hydrologic conditions for more than 1 year. We found that spatial variability at an instance in time is typically larger than temporal variation for the reach. Furthermore, we found reach-scale TTDs were marginally variable under all but the most extreme hydrologic conditions, indicating that TTDs are highly transferable in time. Finally, we found that aggregation of annual variation in space and time into a "master TTD" reasonably represents most of the hydrologic dynamics simulated, suggesting that this aggregation approach may provide a relevant basis for scaling from features or short reaches to entire networks.
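Aggregating many instantaneous transit time distributions into a single "master TTD", as described above, amounts in the simplest reading to a flux-weighted average over time. The distributions, bin count, and flux weights below are synthetic illustrations, not the study's simulated hyporheic data:

```python
def master_ttd(ttds, fluxes):
    """Flux-weighted aggregate of instantaneous transit time
    distributions, each given as probabilities over shared bins."""
    total = float(sum(fluxes))
    agg = [0.0] * len(ttds[0])
    for ttd, q in zip(ttds, fluxes):
        w = q / total
        for k, p in enumerate(ttd):
            agg[k] += w * p
    return agg

# Two synthetic TTDs over four transit-time bins, weighted by
# hyporheic exchange flux (e.g. high-flow vs low-flow snapshots).
ttds = [[0.5, 0.3, 0.15, 0.05],
        [0.2, 0.4, 0.3, 0.1]]
master = master_ttd(ttds, fluxes=[3.0, 1.0])
```

Because each input distribution sums to one and the weights sum to one, the aggregate is itself a valid probability distribution over the same transit-time bins.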

  4. Ultrafast Photodissociation Dynamics of Nitromethane.

    PubMed

    Nelson, Tammie; Bjorgaard, Josiah; Greenfield, Margo; Bolme, Cindy; Brown, Katie; McGrane, Shawn; Scharff, R Jason; Tretiak, Sergei

    2016-02-04

    Nitromethane (NM), a high explosive (HE) with low sensitivity, is known to undergo photolysis upon ultraviolet (UV) irradiation. The optical transparency, homogeneity, and extensive study of NM make it an ideal system for studying photodissociation mechanisms in conventional HE materials. The photochemical processes involved in the decomposition of NM could be applied to the future design of controllable photoactive HE materials. In this study, the photodecomposition of NM from the nπ* state excited at 266 nm is being investigated on the femtosecond time scale. UV femtosecond transient absorption (TA) spectroscopy and excited state femtosecond stimulated Raman spectroscopy (FSRS) are combined with nonadiabatic excited state molecular dynamics (NA-ESMD) simulations to provide a unified picture of NM photodecomposition. The FSRS spectrum of the photoproduct exhibits peaks in the NO2 region and slightly shifted C-N vibrational peaks pointing to methyl nitrite formation as the dominant photoproduct. A total photolysis quantum yield of 0.27 and an nπ* state lifetime of ∼20 fs were predicted from NA-ESMD simulations. Predicted time scales revealed that NO2 dissociation occurs in 81 ± 4 fs and methyl nitrite formation is much slower having a time scale of 452 ± 9 fs corresponding to the excited state absorption feature with a decay of 480 ± 17 fs observed in the TA spectrum. Although simulations predict C-N bond cleavage as the primary photochemical process, the relative time scales are consistent with isomerization occurring via NO2 dissociation and subsequent rebinding of the methyl radical and nitrogen dioxide.

  5. Simulation of turbulent shear flows at Stanford and NASA-Ames - What can we do and what have we learned?

    NASA Technical Reports Server (NTRS)

    Reynolds, W. C.

    1983-01-01

    The capabilities and limitations of large eddy simulation (LES) and full turbulence simulation (FTS) are outlined. It is pointed out that LES, although limited at the present time by the need for periodic boundary conditions, produces large-scale flow behavior in general agreement with experiments. What is more, FTS computations produce small-scale behavior that is consistent with available experiments. The importance of the development work being done on the National Aerodynamic Simulator is emphasized. Studies at present are limited to situations in which periodic boundary conditions can be applied on boundaries of the computational domain where the flow is turbulent.

  6. Cancellation exponent and multifractal structure in two-dimensional magnetohydrodynamics: direct numerical simulations and Lagrangian averaged modeling.

    PubMed

    Graham, Jonathan Pietarila; Mininni, Pablo D; Pouquet, Annick

    2005-10-01

    We present direct numerical simulations and Lagrangian averaged (also known as alpha model) simulations of forced and free decaying magnetohydrodynamic turbulence in two dimensions. The statistics of sign cancellations of the current at small scales are studied using both the cancellation exponent and the fractal dimension of the structures. The alpha model is found to have the same scaling behavior between positive and negative contributions as the direct numerical simulations. The alpha model is also able to reproduce the time evolution of these quantities in free decaying turbulence. At large Reynolds numbers, the cancellation exponent is observed to be independent of the Reynolds number.
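    A minimal numerical sketch of the cancellation exponent, assuming the standard partition-function definition χ(l) = Σᵢ |μ(Qᵢ)| over boxes Qᵢ of side l, where μ is the signed field normalized by its total absolute "mass" and χ(l) ~ l^(−κ) for a sign-singular measure. The white-noise test field here is purely illustrative (it has κ ≈ 1), not a model of the current fields in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate the cancellation exponent kappa of a signed 2D field from the
# partition function chi(l): sum over boxes of side l of the absolute
# value of the box-integrated field, normalized by the total |field| mass.
N = 256
field = rng.standard_normal((N, N))   # stand-in for a signed current field
total = np.abs(field).sum()

def chi(l):
    """Partition function at box size l (l must divide N)."""
    boxes = field.reshape(N // l, l, N // l, l).sum(axis=(1, 3))
    return np.abs(boxes).sum() / total

sizes = np.array([2, 4, 8, 16, 32])
chis = np.array([chi(l) for l in sizes])
# Least-squares slope of log chi vs log l gives -kappa.
kappa = -np.polyfit(np.log(sizes), np.log(chis), 1)[0]
print(round(kappa, 2))
```

    For uncorrelated noise the box sums grow like l while the absolute mass grows like l², so χ(l) ∝ 1/l and the fit recovers κ ≈ 1; structured, sign-correlated fields give smaller κ.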

  7. Simulations of fluorescence solvatochromism in substituted PPV oligomers from excited state molecular dynamics with implicit solvent

    DOE PAGES

    Bjorgaard, J. A.; Nelson, T.; Kalinin, K.; ...

    2015-04-28

    In this study, an efficient method of treating solvent effects in excited state molecular dynamics (ESMD) is implemented and tested by exploring the solvatochromic effects in substituted p-phenylene vinylene oligomers. A continuum solvent model is used which has very little computational overhead. This allows ESMD simulations with solvent effects on the scale of hundreds of picoseconds for systems of up to hundreds of atoms. At these time scales, solvatochromic shifts in fluorescence spectra can be described. Solvatochromic shifts in absorption and fluorescence spectra from ESMD are compared with time-dependent density functional theory calculations and experiments.

  8. Plasticity-mediated collapse and recrystallization in hollow copper nanowires: a molecular dynamics simulation.

    PubMed

    Dutta, Amlan; Raychaudhuri, Arup Kumar; Saha-Dasgupta, Tanusri

    2016-01-01

    We study the thermal stability of hollow copper nanowires using molecular dynamics simulation. We find that the plasticity-mediated structural evolution leads to transformation of the initial hollow structure to a solid wire. The process involves three distinct stages, namely, collapse, recrystallization and slow recovery. We calculate the time scales associated with different stages of the evolution process. Our findings suggest a plasticity-mediated mechanism of collapse and recrystallization. This contradicts the prevailing notion of diffusion driven transport of vacancies from the interior to outer surface being responsible for collapse, which would involve much longer time scales as compared to the plasticity-based mechanism.

  9. The nonlinear Galerkin method: A multi-scale method applied to the simulation of homogeneous turbulent flows

    NASA Technical Reports Server (NTRS)

    Debussche, A.; Dubois, T.; Temam, R.

    1993-01-01

    Using results of Direct Numerical Simulation (DNS) in the case of two-dimensional homogeneous isotropic flows, the behavior of the small and large scales of Kolmogorov-like flows at moderate Reynolds numbers is first analyzed in detail. Several estimates on the time variations of the small eddies and the nonlinear interaction terms were derived; those terms play the role of the Reynolds stress tensor in the case of LES. Since the time step of a numerical scheme is determined as a function of the energy-containing eddies of the flow, the variations of the small scales and of the nonlinear interaction terms over one iteration can become negligible by comparison with the accuracy of the computation. Based on this remark, a multilevel scheme which treats the small and the large eddies differently was proposed. Using mathematical developments, estimates of all the parameters involved in the algorithm were derived, making it a completely self-adaptive procedure. Finally, realistic simulations of (Kolmogorov-like) flows over several eddy-turnover times were performed. The results are analyzed in detail and a parametric study of the nonlinear Galerkin method is performed.

  10. Can nudging be used to quantify model sensitivities in precipitation and cloud forcing?

    NASA Astrophysics Data System (ADS)

    Lin, Guangxing; Wan, Hui; Zhang, Kai; Qian, Yun; Ghan, Steven J.

    2016-09-01

    Efficient simulation strategies are crucial for the development and evaluation of high-resolution climate models. This paper evaluates simulations with constrained meteorology for the quantification of parametric sensitivities in the Community Atmosphere Model version 5 (CAM5). Two parameters are perturbed as illustrating examples: the convection relaxation time scale (TAU), and the threshold relative humidity for the formation of low-level stratiform clouds (rhminl). Results suggest that the fidelity of the constrained simulations depends on the detailed implementation of nudging and the mechanism through which the perturbed parameter affects precipitation and cloud. The relative computational costs of nudged and free-running simulations are determined by the magnitude of internal variability in the physical quantities of interest, as well as the magnitude of the parameter perturbation. In the case of a strong perturbation in convection, temperature, and/or wind nudging with a 6 h relaxation time scale leads to nonnegligible side effects due to the distorted interactions between resolved dynamics and parameterized convection, while 1 year free-running simulations can satisfactorily capture the annual mean precipitation and cloud forcing sensitivities. In the case of a relatively weak perturbation in the large-scale condensation scheme, results from 1 year free-running simulations are strongly affected by natural noise, while nudging winds effectively reduces the noise, and reasonably reproduces the sensitivities. These results indicate that caution is needed when using nudged simulations to assess precipitation and cloud forcing sensitivities to parameter changes in general circulation models. We also demonstrate that ensembles of short simulations are useful for understanding the evolution of model sensitivities.
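    The nudging constraint itself is just Newtonian relaxation of a model state toward a reference analysis. A minimal sketch, with a made-up scalar "model" whose only dynamics is the nudging term, shows the role of the 6 h relaxation time scale τ in dx/dt = F(x) + (x_ref − x)/τ:

```python
# Forward-Euler sketch of Newtonian relaxation ("nudging").  The drift
# stands in for the model tendency F(x); here it is set to zero so only
# the relaxation term acts.  tau matches the paper's 6 h time scale.
tau = 6.0 * 3600.0            # relaxation time scale [s]
dt = 600.0                    # model time step [s]

def step(x, x_ref, drift):
    """One step of dx/dt = drift + (x_ref - x) / tau."""
    return x + dt * (drift + (x_ref - x) / tau)

x = 10.0                      # initial departure from the reference state
x_ref = 0.0                   # reference (reanalysis) value
for _ in range(2000):         # roughly two weeks of 10-minute steps
    x = step(x, x_ref, drift=0.0)

print(abs(x) < 1e-3)          # the state has relaxed onto the reference
```

    In a full model the drift term competes with the relaxation, which is exactly why a 6 h τ can distort fast parameterized processes such as convection, as the abstract notes.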

  11. Development of an informatics infrastructure for data exchange of biomolecular simulations: architecture, data models and ontology

    PubMed Central

    Thibault, J. C.; Roe, D. R.; Eilbeck, K.; Cheatham, T. E.; Facelli, J. C.

    2015-01-01

    Biomolecular simulations aim to simulate structure, dynamics, interactions, and energetics of complex biomolecular systems. With the recent advances in hardware, it is now possible to use more complex and accurate models, but also to reach time scales that are biologically significant. Molecular simulations have become a standard tool for toxicology and pharmacology research, but organizing and sharing data – both within the same organization and among different ones – remains a substantial challenge. In this paper we review our recent work leading to the development of a comprehensive informatics infrastructure to facilitate the organization and exchange of biomolecular simulation data. Our efforts include the design of data models and dictionary tools that allow the standardization of the metadata used to describe the biomolecular simulations, the development of a thesaurus and ontology for computational reasoning when searching for biomolecular simulations in distributed environments, and the development of systems based on these models to manage and share the data at a large scale (iBIOMES) and within smaller groups of researchers at laboratory scale (iBIOMES Lite), both of which take advantage of the standardized metadata used to describe biomolecular simulations. PMID:26387907

  12. Development of an informatics infrastructure for data exchange of biomolecular simulations: Architecture, data models and ontology.

    PubMed

    Thibault, J C; Roe, D R; Eilbeck, K; Cheatham, T E; Facelli, J C

    2015-01-01

    Biomolecular simulations aim to simulate structure, dynamics, interactions, and energetics of complex biomolecular systems. With the recent advances in hardware, it is now possible to use more complex and accurate models, but also to reach time scales that are biologically significant. Molecular simulations have become a standard tool for toxicology and pharmacology research, but organizing and sharing data - both within the same organization and among different ones - remains a substantial challenge. In this paper we review our recent work leading to the development of a comprehensive informatics infrastructure to facilitate the organization and exchange of biomolecular simulation data. Our efforts include the design of data models and dictionary tools that allow the standardization of the metadata used to describe the biomolecular simulations, the development of a thesaurus and ontology for computational reasoning when searching for biomolecular simulations in distributed environments, and the development of systems based on these models to manage and share the data at a large scale (iBIOMES) and within smaller groups of researchers at laboratory scale (iBIOMES Lite), both of which take advantage of the standardized metadata used to describe biomolecular simulations.

  13. Disks around merging binary black holes: From GW150914 to supermassive black holes

    NASA Astrophysics Data System (ADS)

    Khan, Abid; Paschalidis, Vasileios; Ruiz, Milton; Shapiro, Stuart L.

    2018-02-01

    We perform magnetohydrodynamic simulations in full general relativity of disk accretion onto nonspinning black hole binaries with mass ratio q = 29/36. We survey disk models that differ in their scale height, total size, and magnetic field to quantify the robustness of previous simulations with respect to the initial disk model. Scaling our simulations to LIGO GW150914, we find that such systems could explain possible gravitational wave and electromagnetic counterparts such as the Fermi GBM hard x-ray signal reported 0.4 s after GW150914 ended. Scaling our simulations to supermassive binary black holes, we find that observable flow properties such as accretion rate periodicities, the emergence of jets throughout inspiral, merger and postmerger, disk temperatures, thermal frequencies, and the time delay between merger and the boost in jet outflows that we reported in earlier studies display only modest dependence on the initial disk model we consider here.

  14. Advanced computational simulations of water waves interacting with wave energy converters

    NASA Astrophysics Data System (ADS)

    Pathak, Ashish; Freniere, Cole; Raessi, Mehdi

    2017-03-01

    Wave energy converter (WEC) devices harness the renewable ocean wave energy and convert it into useful forms of energy, e.g. mechanical or electrical. This paper presents an advanced 3D computational framework to study the interaction between water waves and WEC devices. The computational tool solves the full Navier-Stokes equations and considers all important effects impacting the device performance. To enable large-scale simulations in fast turnaround times, the computational solver was developed in an MPI parallel framework. A fast multigrid preconditioned solver is introduced to solve the computationally expensive pressure Poisson equation. The computational solver was applied to two surface-piercing WEC geometries: bottom-hinged cylinder and flap. Their numerically simulated response was validated against experimental data. Additional simulations were conducted to investigate the applicability of Froude scaling in predicting full-scale WEC response from the model experiments.
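    The Froude-scaling question examined above has a simple arithmetic core: under Froude similarity (same fluid and gravity), a geometric scale factor λ implies that times and velocities scale as √λ and forces as λ³. A short sketch with illustrative numbers (the 1:25 scale, 2 s period, and 10 N force are invented, not from the paper):

```python
import math

# Froude scaling: with lam = L_full / L_model, time and velocity scale
# as sqrt(lam) and force as lam**3 (same fluid density and gravity).
lam = 25.0                      # hypothetical 1:25 scale model

def to_full_scale(t_model, f_model):
    """Convert a model-scale period [s] and force [N] to full scale."""
    return t_model * math.sqrt(lam), f_model * lam ** 3

t_full, f_full = to_full_scale(2.0, 10.0)
print(t_full, f_full)           # 10.0 s and 156250.0 N at full scale
```

    The paper's point is precisely to test how well such extrapolations hold once viscous and other non-Froude effects enter the WEC response.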

  15. Understanding Quantum Tunneling through Quantum Monte Carlo Simulations.

    PubMed

    Isakov, Sergei V; Mazzola, Guglielmo; Smelyanskiy, Vadim N; Jiang, Zhang; Boixo, Sergio; Neven, Hartmut; Troyer, Matthias

    2016-10-28

    The tunneling between the two ground states of an Ising ferromagnet is a typical example of many-body tunneling processes between two local minima, as they occur during quantum annealing. Performing quantum Monte Carlo (QMC) simulations, we find that the QMC tunneling rate displays the same scaling with system size as the rate of incoherent tunneling. The scaling in both cases is O(Δ^{2}), where Δ is the tunneling splitting (or equivalently the minimum spectral gap). An important consequence is that QMC simulations can be used to predict the performance of a quantum annealer for tunneling through a barrier. Furthermore, by using open instead of periodic boundary conditions in imaginary time, equivalent to a projector QMC algorithm, we obtain a quadratic speedup for QMC simulations, and achieve linear scaling in Δ. We provide a physical understanding of these results and their range of applicability based on an instanton picture.

  16. Challenge toward the prediction of typhoon behaviour and downpour

    NASA Astrophysics Data System (ADS)

    Takahashi, K.; Onishi, R.; Baba, Y.; Kida, S.; Matsuda, K.; Goto, K.; Fuchigami, H.

    2013-08-01

    Mechanisms of interactions among different-scale phenomena play important roles in forecasting of weather and climate. The Multi-scale Simulator for the Geoenvironment (MSSG), which deals with multi-scale multi-physics phenomena, is a coupled non-hydrostatic atmosphere-ocean model designed to run efficiently on the Earth Simulator. We present simulation results with the world's highest 1.9 km horizontal resolution for the entire globe, regional heavy rain with 1 km horizontal resolution, and 5 m horizontal/vertical resolution for urban-area simulation. To gain high performance by exploiting the system capabilities, we use performance evaluation metrics introduced in previous studies that incorporate the effects of the data caching mechanism between CPU and memory. With a useful code optimization guideline based on such metrics, we demonstrate that MSSG can achieve an excellent peak performance ratio of 32.2% on the Earth Simulator, with single-core performance found to be a key to a reduced time-to-solution.

  17. Reaching extended length-scales with accelerated dynamics

    NASA Astrophysics Data System (ADS)

    Hubartt, Bradley; Shim, Yunsic; Amar, Jacques

    2012-02-01

    While temperature-accelerated dynamics (TAD) has been quite successful in extending the time-scales for non-equilibrium simulations of small systems, the computational time increases rapidly with system size. One possible solution to this problem, which we refer to as parTAD [1], is to use spatial decomposition combined with our previously developed semi-rigorous synchronous sublattice algorithm [2]. However, while such an approach leads to significantly better scaling as a function of system-size, it also artificially limits the size of activated events and is not completely rigorous. Here we discuss progress we have made in developing an alternative approach in which localized saddle-point searches are combined with parallel GPU-based molecular dynamics in order to improve the scaling behavior. By using this method, along with an adaptive method to determine the optimal high temperature [3], we have been able to significantly increase the range of time- and length-scales over which accelerated dynamics simulations may be carried out. [1] Y. Shim et al, Phys. Rev. B 76, 205439 (2007); ibid, Phys. Rev. Lett. 101, 116101 (2008). [2] Y. Shim and J.G. Amar, Phys. Rev. B 71, 125432 (2005). [3] Y. Shim and J.G. Amar, J. Chem. Phys. 134, 054127 (2011).
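    The bookkeeping at the heart of TAD can be sketched in a few lines: an event observed at time t_high at the high temperature is extrapolated to the low temperature through its barrier E via the Arrhenius factor, and the earliest extrapolated event wins. This sketch omits TAD's stopping criterion and confidence bounds, and the event times and barriers are invented for illustration:

```python
import math

# Arrhenius extrapolation of high-temperature event times to low
# temperature, the core of temperature-accelerated dynamics (TAD).
kB = 8.617e-5                 # Boltzmann constant [eV/K]
T_low, T_high = 300.0, 900.0

def extrapolate(t_high, barrier):
    """Map a high-T event time [s] to the corresponding low-T time."""
    return t_high * math.exp(barrier / kB * (1.0 / T_low - 1.0 / T_high))

# (time observed at T_high [s], barrier [eV]) for events in one basin
events = [(1.0e-9, 0.5), (5.0e-9, 0.3), (2.0e-8, 0.9)]
low_t = [extrapolate(t, E) for t, E in events]
accepted = min(range(len(events)), key=lambda i: low_t[i])
print(accepted)               # index of the earliest low-temperature event
```

    Note how the ordering changes: the high-barrier event, third fastest at 900 K, becomes by far the latest at 300 K, which is why naive high-temperature MD alone would pick the wrong transition.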

  18. Long-time dynamics through parallel trajectory splicing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez, Danny; Cubuk, Ekin D.; Waterland, Amos

    2015-11-24

    Simulating the atomistic evolution of materials over long time scales is a longstanding challenge, especially for complex systems where the distribution of barrier heights is very heterogeneous. Such systems are difficult to investigate using conventional long-time scale techniques, and the fact that they tend to remain trapped in small regions of configuration space for extended periods of time strongly limits the physical insights gained from short simulations. We introduce a novel simulation technique, Parallel Trajectory Splicing (ParSplice), that aims at addressing this problem through the timewise parallelization of long trajectories. The computational efficiency of ParSplice stems from a speculation strategy whereby predictions of the future evolution of the system are leveraged to increase the amount of work that can be concurrently performed at any one time, hence improving the scalability of the method. ParSplice is also able to accurately account for, and potentially reuse, a substantial fraction of the computational work invested in the simulation. We validate the method on a simple Ag surface system and demonstrate substantial increases in efficiency compared to previous methods. We then demonstrate the power of ParSplice through the study of topology changes in Ag42Cu13 core–shell nanoparticles.
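    The splicing idea can be illustrated with a toy model: short segments that begin and end in well-defined states are generated speculatively, and a long trajectory is assembled by repeatedly appending a segment that starts in the current end state. The three-state Markov "dynamics" below is entirely invented; only the splice-by-matching-endpoints mechanics reflects ParSplice:

```python
import random

random.seed(3)

# Toy state space: state -> states reachable in one step.
P = {0: [0, 0, 1], 1: [1, 1, 0, 2], 2: [2, 1]}

def segment(start, length=5):
    """A short trajectory segment beginning in a known state."""
    s, path = start, [start]
    for _ in range(length):
        s = random.choice(P[s])
        path.append(s)
    return path

# Speculatively pre-generate a pool of segments from every known state,
# standing in for the segments produced concurrently by many workers.
pool = {s: [segment(s) for _ in range(30)] for s in P}

state, traj = 0, [0]
for _ in range(30):                 # splice 30 segments end-to-start
    seg = pool[state].pop()
    assert seg[0] == state          # a segment must continue the trajectory
    traj.extend(seg[1:])
    state = seg[-1]

print(len(traj))                    # 30 segments x 5 steps + initial state
```

    The real method adds the hard parts this toy skips: deciding which states to speculate from, and guaranteeing that splicing preserves the statistics of the underlying dynamics.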

  19. Demonstrating an Order-of-Magnitude Sampling Enhancement in Molecular Dynamics Simulations of Complex Protein Systems.

    PubMed

    Pan, Albert C; Weinreich, Thomas M; Piana, Stefano; Shaw, David E

    2016-03-08

    Molecular dynamics (MD) simulations can describe protein motions in atomic detail, but transitions between protein conformational states sometimes take place on time scales that are infeasible or very expensive to reach by direct simulation. Enhanced sampling methods, the aim of which is to increase the sampling efficiency of MD simulations, have thus been extensively employed. The effectiveness of such methods when applied to complex biological systems like proteins, however, has been difficult to establish because even enhanced sampling simulations of such systems do not typically reach time scales at which convergence is extensive enough to reliably quantify sampling efficiency. Here, we obtain sufficiently converged simulations of three proteins to evaluate the performance of simulated tempering, a member of a widely used class of enhanced sampling methods that use elevated temperature to accelerate sampling. Simulated tempering simulations with individual lengths of up to 100 μs were compared to (previously published) conventional MD simulations with individual lengths of up to 1 ms. With two proteins, BPTI and ubiquitin, we evaluated the efficiency of sampling of conformational states near the native state, and for the third, the villin headpiece, we examined the rate of folding and unfolding. Our comparisons demonstrate that simulated tempering can consistently achieve a substantial sampling speedup of an order of magnitude or more relative to conventional MD.
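    A minimal simulated-tempering sketch, on a 1D double well rather than a protein: one walker, a ladder of temperatures, Metropolis moves in position, and occasional Metropolis moves up and down the ladder. The potential, temperature ladder, and (untuned) rung weights g are all illustrative assumptions; in practice the g values must be adapted so all rungs are visited:

```python
import math
import random

random.seed(0)

def U(x):                        # double-well potential with a barrier of 1
    return (x * x - 1.0) ** 2

def metropolis(arg):
    """Metropolis acceptance min(1, exp(arg)), overflow-safe."""
    return random.random() < math.exp(min(0.0, arg))

betas = [4.0, 2.0, 1.0, 0.5]     # inverse temperatures, cold to hot
g = [0.0] * 4                    # free-energy weights (left untuned here)

x, k = -1.0, 0                   # start in the left well, coldest rung
crossings = 0                    # barrier crossings = sampling progress
for step in range(20000):
    # position move at the current temperature
    xp = x + random.uniform(-0.3, 0.3)
    if metropolis(-betas[k] * (U(xp) - U(x))):
        if (x < 0.0) != (xp < 0.0):
            crossings += 1
        x = xp
    # temperature move every 10 steps; target dist ~ exp(-beta_k U + g_k)
    if step % 10 == 0:
        kp = max(0, min(3, k + random.choice((-1, 1))))
        if metropolis((betas[k] - betas[kp]) * U(x) + g[kp] - g[k]):
            k = kp
print(crossings > 0)
```

    Excursions to the hot rungs let the walker hop the barrier far more often than a cold-only simulation would, which is the mechanism whose payoff the paper quantifies for proteins.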

  20. Atomistic simulations of materials: Methods for accurate potentials and realistic time scales

    NASA Astrophysics Data System (ADS)

    Tiwary, Pratyush

    This thesis deals with achieving more realistic atomistic simulations of materials, by developing accurate and robust force-fields, and algorithms for practical time scales. I develop a formalism for generating interatomic potentials for simulating atomistic phenomena occurring at energy scales ranging from lattice vibrations to crystal defects to high-energy collisions. This is done by fitting against an extensive database of ab initio results, as well as to experimental measurements for mixed oxide nuclear fuels. The applicability of these interactions to a variety of mixed environments beyond the fitting domain is also assessed. The employed formalism makes these potentials applicable across all interatomic distances without the need for any ambiguous splining to the well-established short-range Ziegler-Biersack-Littmark universal pair potential. We expect these to be reliable potentials for carrying out damage simulations (and molecular dynamics simulations in general) in nuclear fuels of varying compositions for all relevant atomic collision energies. A hybrid stochastic and deterministic algorithm is proposed that while maintaining fully atomistic resolution, allows one to achieve milliseconds and longer time scales for several thousands of atoms. The method exploits the rare event nature of the dynamics like other such methods, but goes beyond them by (i) not having to pick a scheme for biasing the energy landscape, (ii) providing control on the accuracy of the boosted time scale, (iii) not assuming any harmonic transition state theory (HTST), and (iv) not having to identify collective coordinates or interesting degrees of freedom. The method is validated by calculating diffusion constants for vacancy-mediated diffusion in iron metal at low temperatures, and comparing against brute-force high temperature molecular dynamics. 
We also calculate diffusion constants for vacancy diffusion in tantalum metal, where we compare against low-temperature HTST as well. The robustness of the algorithm with respect to the only free parameter it involves is ascertained. The method is then applied to perform tensile tests on gold nanopillars at strain rates as low as 100/s, bringing out the perils of high strain-rate molecular dynamics calculations. We also calculate the temperature and stress dependence of the activation free energy for surface nucleation of dislocations in pristine gold nanopillars under realistic loads. While maintaining fully atomistic resolution, we reach the fraction-of-a-second time scale regime. It is found that the activation free energy depends significantly and nonlinearly on the driving force (stress or strain) and temperature, leading to very high activation entropies for surface dislocation nucleation.

  1. Simulations of Sea Level Rise Effects on Complex Coastal Systems

    NASA Astrophysics Data System (ADS)

    Niedoroda, A. W.; Ye, M.; Saha, B.; Donoghue, J. F.; Reed, C. W.

    2009-12-01

    It is now established that complex coastal systems with elements such as beaches, inlets, bays, and rivers adjust their morphologies according to time-varying balances between the processes that control the exchange of sediment. Accelerated sea level rise introduces a major perturbation into these sediment-sharing systems. A modeling framework based on the new SL-PR model, an advanced version of the aggregate-scale CST Model, and the event-scale CMS-2D and CMS-Wave combination has been used to simulate the recent evolution of a portion of the Florida panhandle coast. This combination of models provides a method to evaluate coefficients in the aggregate-scale model that were previously treated as fitted parameters. That is, by carrying out simulations of a complex coastal system with runs of the event-scale model representing more than a year, it is now possible to directly relate the coefficients in the large-scale SL-PR model to measurable physical parameters in the current and wave fields. This cross-scale modeling procedure has been used to simulate the shoreline evolution at Santa Rosa Island, a long barrier island that houses significant military infrastructure on the northern Gulf Coast. The model has been used to simulate 137 years of measured shoreline change and to extend these results to predictions of future rates of shoreline migration.

  2. A Decade-Long European-Scale Convection-Resolving Climate Simulation on GPUs

    NASA Astrophysics Data System (ADS)

    Leutwyler, D.; Fuhrer, O.; Ban, N.; Lapillonne, X.; Lüthi, D.; Schar, C.

    2016-12-01

    Convection-resolving models have proven to be very useful tools in numerical weather prediction and in climate research. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in the supercomputing domain have led to new supercomputer designs that involve conventional multi-core CPUs and accelerators such as graphics processing units (GPUs). One of the first atmospheric models that has been fully ported to GPUs is the Consortium for Small-Scale Modeling weather and climate model COSMO. This new version allows us to expand the simulation domain to areas spanning continents and to extend the simulated period up to one decade. We present results from a decade-long, convection-resolving climate simulation over Europe using the GPU-enabled COSMO version on a computational domain with 1536x1536x60 gridpoints. The simulation is driven by the ERA-interim reanalysis. The results illustrate how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. We discuss some of the advantages and prospects of using GPUs, and focus on the performance of the convection-resolving modeling approach on the European scale. Specifically, we investigate the organization of convective clouds and validate hourly rainfall distributions against various high-resolution data sets.

  3. Double Scaling in the Relaxation Time in the β-Fermi-Pasta-Ulam-Tsingou Model

    NASA Astrophysics Data System (ADS)

    Lvov, Yuri V.; Onorato, Miguel

    2018-04-01

    We consider the original β-Fermi-Pasta-Ulam-Tsingou system; numerical simulations and theoretical arguments suggest that, for a finite number of masses, a statistical equilibrium state is reached independently of the initial energy of the system. Using ensemble averages over initial conditions characterized by different Fourier random phases, we numerically estimate the time scale of equipartition and we find that for very small nonlinearity it matches the prediction based on exact wave-wave resonant interaction theory. We derive a simple formula for the nonlinear frequency broadening and show that when the phenomenon of overlap of frequencies takes place, a different scaling for the thermalization time scale is observed. Our result supports the idea that the Chirikov overlap criterion identifies a transition region between two different relaxation time scalings.
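    The β-FPUT lattice itself is compact enough to sketch directly: N masses with fixed ends, quadratic plus quartic (β) nearest-neighbor coupling, integrated with velocity Verlet. The lattice size, β, time step, and initial single-mode excitation below are illustrative choices; we only verify energy conservation, since equipartition requires the far longer runs whose time scale the paper analyzes:

```python
import numpy as np

# beta-FPUT chain with fixed boundaries: x[0] = x[N+1] = 0.
# Force on mass j: (x[j+1] - 2 x[j] + x[j-1]) + beta [(x[j+1]-x[j])^3 - (x[j]-x[j-1])^3]
N, beta, dt = 32, 0.1, 0.05
x = np.zeros(N + 2)
x[1:-1] = np.sin(np.pi * np.arange(1, N + 1) / (N + 1))  # lowest mode only
v = np.zeros(N + 2)

def accel(x):
    a = np.zeros_like(x)
    dr = x[2:] - x[1:-1]          # stretch of the right-hand bond
    dl = x[1:-1] - x[:-2]         # stretch of the left-hand bond
    a[1:-1] = (dr - dl) + beta * (dr ** 3 - dl ** 3)
    return a

def energy(x, v):
    d = np.diff(x)                # all N+1 bond stretches
    return 0.5 * (v ** 2).sum() + (0.5 * d ** 2 + 0.25 * beta * d ** 4).sum()

E0 = energy(x, v)
a = accel(x)
for _ in range(4000):             # velocity Verlet integration
    v += 0.5 * dt * a
    x += dt * v
    a = accel(x)
    v += 0.5 * dt * a
E1 = energy(x, v)
print(abs(E1 - E0) / E0 < 1e-2)   # symplectic scheme: energy is conserved
```

    Tracking equipartition would additionally require projecting (x, v) onto the linear normal modes and watching the slow flow of energy out of the initially excited mode.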

  4. Variable order fractional Fokker-Planck equations derived from Continuous Time Random Walks

    NASA Astrophysics Data System (ADS)

    Straka, Peter

    2018-08-01

    Continuous Time Random Walk (CTRW) models of anomalous diffusion are studied, where the anomalous exponent β(x) ∈ (0, 1) varies in space. This type of situation occurs e.g. in biophysics, where the density of the intracellular matrix varies throughout a cell. Scaling limits of CTRWs are known to have probability distributions which solve fractional Fokker-Planck type equations (FFPE). This correspondence between stochastic processes and FFPE solutions has many useful extensions, e.g. to nonlinear particle interactions and reactions, but has not yet been sufficiently developed for FFPEs of the "variable order" type with non-constant β(x). In this article, variable order FFPEs (VOFFPE) are derived from scaling limits of CTRWs. The key mathematical tool is the one-to-one correspondence of a CTRW scaling limit to a bivariate Langevin process, which tracks the cumulative sum of jumps in one component and the cumulative sum of waiting times in the other. The spatially varying anomalous exponent is modelled by spatially varying β(x)-stable Lévy noise in the waiting time component. The VOFFPE displays a spatially heterogeneous temporal scaling behaviour, with generalized diffusivity and drift coefficients whose units are length²/time^β(x) and length/time^β(x), respectively. A global change of the time scale results in a spatially varying change in diffusivity and drift. A consequence of the mathematical derivation of a VOFFPE from CTRW limits in this article is that a solution of a VOFFPE can be approximated via Monte Carlo simulations. Based on such simulations, we are able to confirm that the VOFFPE is consistent under a change of the global time scale.
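    The Monte Carlo route mentioned at the end is easy to sketch. A Pareto law stands in here for the one-sided β-stable waiting times of the paper (same tail exponent, simpler to sample), and the two β values are invented; the sketch exposes the spatially heterogeneous time scaling by counting how many jumps a walker completes by a fixed clock time in a low-β versus a high-β region:

```python
import numpy as np

rng = np.random.default_rng(42)

def wait(beta):
    """Heavy-tailed waiting time with P(W > w) = w**(-beta), w >= 1."""
    u = 1.0 - rng.uniform()               # in (0, 1]
    return u ** (-1.0 / beta)

def n_jumps(beta, t_max):
    """Jumps completed by clock time t_max at a fixed local exponent."""
    t, n = 0.0, 0
    while True:
        t += wait(beta)
        if t > t_max:
            return n
        n += 1

t_max, trials = 1.0e4, 300
slow = np.mean([n_jumps(0.45, t_max) for _ in range(trials)])  # deep traps
fast = np.mean([n_jumps(0.80, t_max) for _ in range(trials)])
print(fast > slow)     # fewer jumps per unit time where beta is smaller
```

    The jump counts grow roughly like t^β, so regions of small β(x) accumulate walkers, which is the mechanism behind the spatially varying diffusivity of the VOFFPE.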

  5. Modeling the influence of preferential flow on the spatial variability and time-dependence of mineral weathering rates

    DOE PAGES

    Pandey, Sachin; Rajaram, Harihar

    2016-12-05

    Inferences of weathering rates from laboratory and field observations suggest significant scale and time-dependence. Preferential flow induced by heterogeneity (manifest as permeability variations or discrete fractures) has been suggested as one potential mechanism causing scale/time-dependence. In this paper, we present a quantitative evaluation of the influence of preferential flow on weathering rates using reactive transport modeling. Simulations were performed in discrete fracture networks (DFNs) and correlated random permeability fields (CRPFs), and compared to simulations in homogeneous permeability fields. The simulations reveal spatial variability in the weathering rate, multidimensional distribution of reaction zones, and the formation of rough weathering interfaces and corestones due to preferential flow. In the homogeneous fields and CRPFs, the domain-averaged weathering rate is initially constant as long as the weathering front is contained within the domain, reflecting equilibrium-controlled behavior. The behavior in the CRPFs was influenced by macrodispersion, with more spread-out weathering profiles, an earlier departure from the initial constant rate, and longer persistence of weathering. DFN simulations exhibited a sustained time-dependence resulting from the formation of diffusion-controlled weathering fronts in matrix blocks, which is consistent with the shrinking-core mechanism. A significant decrease in the domain-averaged weathering rate is evident despite high remaining mineral volume fractions, but the decline does not follow the inverse-square-root time dependence characteristic of diffusion, due to network-scale effects and advection-controlled behavior near the inflow boundary. Finally, the DFN simulations also reveal relatively constant horizontally averaged weathering rates over a significant depth range, challenging the very notion of a weathering front.

  6. Modeling the influence of preferential flow on the spatial variability and time-dependence of mineral weathering rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandey, Sachin; Rajaram, Harihar

    Inferences of weathering rates from laboratory and field observations suggest significant scale and time-dependence. Preferential flow induced by heterogeneity (manifest as permeability variations or discrete fractures) has been suggested as one potential mechanism causing scale/time-dependence. In this paper, we present a quantitative evaluation of the influence of preferential flow on weathering rates using reactive transport modeling. Simulations were performed in discrete fracture networks (DFNs) and correlated random permeability fields (CRPFs), and compared to simulations in homogeneous permeability fields. The simulations reveal spatial variability in the weathering rate, multidimensional distribution of reaction zones, and the formation of rough weathering interfaces and corestones due to preferential flow. In the homogeneous fields and CRPFs, the domain-averaged weathering rate is initially constant as long as the weathering front is contained within the domain, reflecting equilibrium-controlled behavior. The behavior in the CRPFs was influenced by macrodispersion, with more spread-out weathering profiles, an earlier departure from the initial constant rate, and longer persistence of weathering. DFN simulations exhibited a sustained time-dependence resulting from the formation of diffusion-controlled weathering fronts in matrix blocks, which is consistent with the shrinking-core mechanism. A significant decrease in the domain-averaged weathering rate is evident despite high remaining mineral volume fractions, but the decline does not follow the inverse-square-root time dependence characteristic of diffusion, due to network-scale effects and advection-controlled behavior near the inflow boundary. Finally, the DFN simulations also reveal relatively constant horizontally averaged weathering rates over a significant depth range, challenging the very notion of a weathering front.

  7. A new battery-charging method suggested by molecular dynamics simulations.

    PubMed

    Abou Hamad, Ibrahim; Novotny, M A; Wipf, D O; Rikvold, P A

    2010-03-20

    Based on large-scale molecular dynamics simulations, we propose a new charging method that should be capable of charging a lithium-ion battery in a fraction of the time needed when using traditional methods. This charging method uses an additional applied oscillatory electric field. Our simulation results show that this charging method offers a great reduction in the average intercalation time for Li(+) ions, which dominates the charging time. The oscillating field not only increases the diffusion rate of Li(+) ions in the electrolyte but, more importantly, also enhances intercalation by lowering the corresponding overall energy barrier.

  8. Challenges in scaling NLO generators to leadership computers

    NASA Astrophysics Data System (ADS)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

    Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading-order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even when using all 16 cores. We describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and to reduce run times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of Grid hours for other work and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.

  9. Digital signal processing techniques for pitch shifting and time scaling of audio signals

    NASA Astrophysics Data System (ADS)

    Buś, Szymon; Jedrzejewski, Konrad

    2016-09-01

    In this paper, we present the techniques used for modifying the spectral content (pitch shifting) and for changing the time duration (time scaling) of an audio signal. A short introduction gives a necessary background for understanding the discussed issues and contains explanations of the terms used in the paper. In subsequent sections we present three different techniques appropriate both for pitch shifting and for time scaling. These techniques use three different time-frequency representations of a signal, namely short-time Fourier transform (STFT), continuous wavelet transform (CWT) and constant-Q transform (CQT). The results of simulation studies devoted to comparison of the properties of these methods are presented and discussed in the paper.
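    As an illustration of the STFT-based approach the paper compares, a minimal phase-vocoder time-stretching sketch could look as follows (function names and parameter choices are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def stft_frames(x, win, hop):
    """Windowed FFT frames of a 1-D signal."""
    n = len(win)
    starts = range(0, len(x) - n, hop)
    return np.array([np.fft.rfft(x[s:s + n] * win) for s in starts])

def time_stretch(x, rate, n_fft=1024, hop=256):
    """Phase-vocoder time scaling: rate < 1 slows down (longer output),
    rate > 1 speeds up, while the pitch is preserved."""
    win = np.hanning(n_fft)
    S = stft_frames(x, win, hop)
    omega = 2 * np.pi * hop * np.arange(n_fft // 2 + 1) / n_fft  # nominal per-hop phase advance
    phase = np.angle(S[0])
    out = []
    for step in np.arange(0, S.shape[0] - 1, rate):
        i = int(step)
        frac = step - i
        mag = (1 - frac) * np.abs(S[i]) + frac * np.abs(S[i + 1])  # interpolate magnitudes
        out.append(mag * np.exp(1j * phase))
        dphi = np.angle(S[i + 1]) - np.angle(S[i]) - omega          # deviation from nominal
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))            # wrap to [-pi, pi]
        phase += omega + dphi                                       # accumulate true phase
    y = np.zeros(len(out) * hop + n_fft)                            # overlap-add resynthesis
    for k, frame in enumerate(out):
        y[k * hop:k * hop + n_fft] += np.real(np.fft.irfft(frame)) * win
    return y
```

    Pitch shifting then follows by time-stretching by a factor 1/s and resampling the result by s, which is the standard way the two operations are related.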

  10. Numerical Simulation of a High Mach Number Jet Flow

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Turkel, Eli; Mankbadi, Reda R.

    1993-01-01

    The recent efforts to develop accurate numerical schemes for transition and turbulent flows are motivated, among other factors, by the need for accurate prediction of flow noise. The success of developing the high speed civil transport plane (HSCT) is contingent upon our understanding and suppression of the jet exhaust noise. The radiated sound can be directly obtained by solving the full (time-dependent) compressible Navier-Stokes equations. However, this requires computational storage that is beyond currently available machines. This difficulty can be overcome by limiting the solution domain to the near field where the jet is nonlinear and then using an acoustic analogy (e.g., Lighthill) to relate the far-field noise to the near-field sources. The latter requires obtaining the time-dependent flow field. The other difficulty in aeroacoustics computations is that at high Reynolds numbers the turbulent flow has a large range of scales. Direct numerical simulations (DNS) cannot obtain all the scales of motion at the high Reynolds numbers of technological interest. However, it is believed that the large-scale structure is more efficient than the small-scale structure in radiating noise. Thus, one can model the small scales and calculate the acoustically active scales. The large-scale structure in the noise-producing initial region of the jet can be viewed as wavelike in nature, and the net radiated sound is the small residual that remains after cancellation upon integration over space. As such, aeroacoustics computations are highly sensitive to errors in computing the sound sources. It is therefore essential to use a high-order numerical scheme to predict the flow field. The present paper presents the first step in an ongoing effort to predict jet noise. The emphasis here is on accurate prediction of the unsteady flow field. We solve the full time-dependent Navier-Stokes equations by a high-order finite difference method. Time-accurate spatial simulations of both plane and axisymmetric jets are presented.
Jet Mach numbers of 1.5 and 2.1 are considered. The Reynolds number in the simulations was about one million. Our numerical model is based on the 2-4 scheme of Gottlieb & Turkel. Bayliss et al. applied the 2-4 scheme in boundary layer computations. This scheme was also used by Ragab and Sheen to study the nonlinear development of supersonic instability waves in a mixing layer. In this study, we present two-dimensional direct simulation results for both plane and axisymmetric jets. These results are compared with linear theory predictions. These computations were made for the near-nozzle-exit region, and the velocity in the spanwise/azimuthal direction was assumed to be zero.

  11. Structures and Intermittency in a Passive Scalar Model

    NASA Astrophysics Data System (ADS)

    Vergassola, M.; Mazzino, A.

    1997-09-01

    Perturbative expansions for intermittency scaling exponents in the Kraichnan passive scalar model [Phys. Rev. Lett. 72, 1016 (1994)] are investigated. A one-dimensional compressible model is considered for this purpose. High resolution Monte Carlo simulations using an Ito approach adapted to an advecting velocity field with a very short correlation time are performed and lead to clean scaling behavior for passive scalar structure functions. Perturbative predictions for the scaling exponents around the Gaussian limit of the model are derived as in the Kraichnan model. Their comparison with the simulations indicates that the scale-invariant perturbative scheme correctly captures the inertial range intermittency corrections associated with the intense localized structures observed in the dynamics.

  12. GAPPARD: a computationally efficient method of approximating gap-scale disturbance in vegetation models

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.

    2013-02-01

    Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, and to explore patterns of spatial scaling in forests, we developed a new method for simulating stand-replacing disturbances that is both accurate and 10-50x faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model by deriving the distribution of patch ages at any point in time on the basis of a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the time corresponding to the patch age. To account for temporal changes in model forcing, e.g., as a result of climate change, GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method in the forest models LPJ-GUESS and TreeM-LPJ, and evaluated these in a series of simulations along an altitudinal transect of an inner-alpine valley. 
With GAPPARD applied to LPJ-GUESS, results were not significantly different from the output of the original model using 100 replicate patches, but simulation time was reduced by approximately a factor of 10. Our new method is therefore highly suited for rapidly approximating LPJ-GUESS results; it provides the opportunity for future studies over large spatial domains and allows easier parameterization of tree species, faster identification of areas with interesting simulation results, and comparisons with large-scale datasets and forest models.
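    The postprocessing idea described above can be sketched as a weighted average of the undisturbed run over a patch-age distribution; the geometric-age formulation and names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def gappard_expectation(v_undisturbed, p):
    """Expected landscape-mean value of a model output, given the output
    v_undisturbed[a] of one deterministic run at patch age a (years) and
    an annual stand-replacing disturbance probability p.  With constant p,
    patch ages follow a geometric distribution; the tail beyond the run
    length is lumped into the oldest age class so the weights sum to 1."""
    ages = np.arange(len(v_undisturbed))
    weights = p * (1.0 - p) ** ages          # P(patch age = a)
    weights[-1] = (1.0 - p) ** ages[-1]      # lump tail mass into the oldest class
    return float(np.sum(weights * v_undisturbed))
```

    With time-varying forcing, the same weighting is applied to a family of deterministic runs started at different dates, interpolating between them, as the abstract describes.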

  13. An open source platform for multi-scale spatially distributed simulations of microbial ecosystems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Segre, Daniel

    2014-08-14

    The goal of this project was to develop a tool for facilitating simulation, validation and discovery of multiscale dynamical processes in microbial ecosystems. This led to the development of an open-source software platform for Computation Of Microbial Ecosystems in Time and Space (COMETS). COMETS performs spatially distributed time-dependent flux balance based simulations of microbial metabolism. Our plan involved building the software platform itself, calibrating and testing it through comparison with experimental data, and integrating simulations and experiments to address important open questions on the evolution and dynamics of cross-feeding interactions between microbial species.
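    A COMETS-style update couples local growth with diffusion between grid cells. The sketch below substitutes a simple Monod uptake rule for the linear-programming flux balance step, so it is only a structural illustration of the spatially distributed loop, not the COMETS algorithm itself; all names and parameters are assumptions:

```python
import numpy as np

def comets_like_step(biomass, nutrient, vmax, km, yield_, diff, dt):
    """One explicit step of a spatially distributed growth model on a
    periodic grid: Monod uptake (standing in for the FBA solve) converts
    nutrient to biomass, then nutrient diffuses between neighboring cells."""
    uptake = vmax * nutrient / (km + nutrient) * biomass * dt
    uptake = np.minimum(uptake, nutrient)        # cannot consume more than present
    nutrient = nutrient - uptake
    biomass = biomass + yield_ * uptake
    # 5-point-stencil diffusion of the nutrient field (periodic boundaries)
    lap = (np.roll(nutrient, 1, 0) + np.roll(nutrient, -1, 0)
           + np.roll(nutrient, 1, 1) + np.roll(nutrient, -1, 1) - 4 * nutrient)
    nutrient = nutrient + diff * lap * dt
    return biomass, nutrient
```

    A useful sanity check on such a step is mass conservation: biomass/yield plus nutrient should be invariant, since uptake only converts one pool into the other and diffusion only redistributes it.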

  14. Real-time simulation of an F110/STOVL turbofan engine

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.; Ouzts, Peter J.

    1989-01-01

    A traditional F110-type turbofan engine model was extended to include a ventral nozzle and two thrust-augmenting ejectors for Short Take-Off Vertical Landing (STOVL) aircraft applications. Development of the real-time F110/STOVL simulation required special attention to the modeling approach for component performance maps, the low-pressure turbine exit mixing region, and the tailpipe dynamic approximation. Simulation validation derives from comparing output of the ADSIM simulation with the output of a validated F110/STOVL General Electric Aircraft Engines FORTRAN deck. General Electric substantiated basic engine component characteristics through factory testing and full-scale ejector data.

  15. HIV competition dynamics over sexual networks: first comer advantage conserves founder effects.

    PubMed

    Ferdinandy, Bence; Mones, Enys; Vicsek, Tamás; Müller, Viktor

    2015-02-01

    Outside Africa, the global phylogeography of HIV is characterized by compartmentalized local epidemics that are typically dominated by a single subtype, which indicates strong founder effects. We hypothesized that the competition of viral strains at the epidemic level may involve an advantage of the resident strain that was the first to colonize a population. Such an effect would slow down the invasion of new strains, and thus also the diversification of the epidemic. We developed a stochastic modelling framework to simulate HIV epidemics over dynamic contact networks. We simulated epidemics in which the second strain was introduced into a population where the first strain had established a steady-state epidemic, and assessed whether, and on what time scale, the second strain was able to spread in the population. Simulations were parameterized based on empirical data; we tested scenarios with varying levels of overall prevalence. The spread of the second strain occurred on a much slower time scale compared with the initial expansion of the first strain. With strains of equal transmission efficiency, the second strain was unable to invade on a time scale relevant for the history of the HIV pandemic. To become dominant over a time scale of decades, the second strain needed considerable (>25%) advantage in transmission efficiency over the resident strain. The inhibition effect was weaker if the second strain was introduced while the first strain was still in its growth phase. We also tested how possible mechanisms of interference (inhibition of superinfection, depletion of highly connected hubs in the network, one-time acute peak of infectiousness) contribute to the inhibition effect. Our simulations confirmed a strong first comer advantage in the competition dynamics of HIV at the population level, which may explain the global phylogeography of the virus and may influence the future evolution of the pandemic.
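    The first-comer advantage can be illustrated with a toy, well-mixed deterministic two-strain SIS model (a drastic simplification of the authors' stochastic network simulations): once the resident strain has depleted the susceptible pool to its endemic level, an equally transmissible invader has zero net growth rate, and only a transmission advantage lets it spread.

```python
def two_strain_sis(beta1, beta2, gamma, t_intro, t_end, dt=0.01):
    """Toy deterministic two-strain SIS model in a well-mixed population
    (not the authors' network model): strain 2 is seeded after strain 1
    has settled at its endemic steady state S* = gamma/beta1."""
    S, I1, I2 = 0.999, 0.001, 0.0
    for step in range(int(t_end / dt)):
        if I2 == 0.0 and step * dt >= t_intro:
            I2 = 0.001                      # introduce the second strain once
        new1 = beta1 * S * I1 * dt          # new infections by each strain
        new2 = beta2 * S * I2 * dt
        rec1 = gamma * I1 * dt              # recoveries return hosts to S
        rec2 = gamma * I2 * dt
        S += rec1 + rec2 - new1 - new2
        I1 += new1 - rec1
        I2 += new2 - rec2
    return I1, I2
```

    With equal transmission rates the invader merely persists at its seeding level, while a substantial advantage lets it displace the resident, qualitatively mirroring the threshold behavior reported above.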

  16. Using a million cell simulation of the cerebellum: network scaling and task generality.

    PubMed

    Li, Wen-Ke; Hausknecht, Matthew J; Stone, Peter; Mauk, Michael D

    2013-11-01

    Several factors combine to make it feasible to build computer simulations of the cerebellum and to test them in biologically realistic ways. These simulations can be used to help understand the computational contributions of various cerebellar components, including the relevance of the enormous number of neurons in the granule cell layer. In previous work we used a simulation containing 12,000 granule cells to develop new predictions and to account for various aspects of eyelid conditioning, a form of motor learning mediated by the cerebellum. Here we demonstrate the feasibility of scaling up this simulation to over one million granule cells using parallel graphics processing unit (GPU) technology. We observe that this increase in the number of granule cells requires only twice the execution time of the smaller simulation on the GPU. We demonstrate that this simulation, like its smaller predecessor, can emulate certain basic features of conditioned eyelid responses, with a slight improvement in performance in one measure. We also use this simulation to examine the generality of the computational properties that we have derived from studying eyelid conditioning. We demonstrate that this scaled-up simulation can learn a high level of performance in a classic machine learning task, the cart-pole balancing task. These results suggest that this parallel GPU technology can be used to build very large-scale simulations whose connectivity ratios match those of the real cerebellum, and that these simulations can be used to guide future studies on cerebellar-mediated tasks and on machine learning problems. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.

    PubMed

    Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve

    2011-11-01

    Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience. Copyright © 2011 Elsevier Ltd. All rights reserved.
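    For reference, the Izhikevich model mentioned above reduces to a two-variable update per time step. The sketch below uses plain floating-point forward Euler at a 1 ms step; SpiNNaker itself uses fixed-point arithmetic, and the parameters here are the standard regular-spiking values from the model's literature, not taken from this paper:

```python
def izhikevich(I, T=1000.0, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich neuron (regular-spiking parameters) driven by a constant
    input current I, integrated with forward Euler; returns spike times (ms)."""
    v, u = -65.0, b * -65.0            # membrane potential and recovery variable
    spikes = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # spike: record time, then reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes
```

    A leaky integrate-and-fire (LIF) neuron is simpler still (a single linear equation with threshold and reset), which is why the two are convenient test cases for heterogeneous on-chip simulation.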

  18. Power in the loop real time simulation platform for renewable energy generation

    NASA Astrophysics Data System (ADS)

    Li, Yang; Shi, Wenhui; Zhang, Xing; He, Guoqing

    2018-02-01

    Nowadays, large-scale renewable energy sources are being connected to the power system, and real-time simulation platforms are widely used to carry out research on integration control algorithms, power system stability, and related topics. Compared to traditional pure digital simulation and hardware-in-the-loop simulation, power-in-the-loop simulation has higher accuracy and reliability. In this paper, a power-in-the-loop analog-digital hybrid simulation platform is built; it can be used not only for a single generation unit connecting to the grid, but also for multiple new energy generation units. A wind generator inertia control experiment was carried out on the platform. The structure of the inertia control platform was investigated, and the results verify that the platform meets the requirements of renewable power-in-the-loop real-time simulation.

  19. Scale-dependency of effective hydraulic conductivity on fire-affected hillslopes

    NASA Astrophysics Data System (ADS)

    Langhans, Christoph; Lane, Patrick N. J.; Nyman, Petter; Noske, Philip J.; Cawson, Jane G.; Oono, Akiko; Sheridan, Gary J.

    2016-07-01

    Effective hydraulic conductivity (Ke) for Hortonian overland flow modeling has been defined as a function of rainfall intensity and runon infiltration, assuming a distribution of saturated hydraulic conductivities (Ks). But the surface boundary condition during infiltration and its interactions with the distribution of Ks are not well represented in models. As a result, the mean value of the Ks distribution (K̄s), which is the central parameter for Ke, varies between scales. Here we quantify this discrepancy with a large infiltration data set comprising four different methods and scales from fire-affected hillslopes in SE Australia, using a relatively simple yet widely used conceptual model of Ke. Ponded disk (0.002 m²) and ring infiltrometers (0.07 m²) were used at the small scales, and rainfall simulations (3 m²) and small catchments (ca. 3000 m²) at the larger scales. We compared K̄s between methods measured at the same time and place. Disk and ring infiltrometer measurements had on average 4.8 times higher values of K̄s than rainfall simulations and catchment-scale estimates. Furthermore, the distribution of Ks was not clearly log-normal and scale-independent, as supposed in the conceptual model. In our interpretation, water repellency and preferential flow paths increase the variance of the measured distribution of Ks and bias ponding toward areas of very low Ks during rainfall simulations and small-catchment runoff events, while areas with high preferential flow capacity remain water supply-limited more than the conceptual model of Ke predicts. The study highlights problems in the current theory of scaling runoff generation.
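    The conceptual model referred to above treats Ke as an areal average over a distribution of point-scale Ks values, capped by the rainfall intensity. A minimal Monte Carlo sketch, with runon infiltration ignored and the lognormal form and names assumed for illustration:

```python
import numpy as np

def effective_ke(mean_ks, cv, rain, n=200_000, seed=0):
    """Areal-average infiltration rate under rainfall intensity `rain`
    for a lognormal Ks distribution with given mean and coefficient of
    variation: E[min(Ks, rain)].  Cells with Ks < rain pond and infiltrate
    at Ks; the rest are supply-limited and infiltrate at the rain rate."""
    rng = np.random.default_rng(seed)
    sigma2 = np.log(1.0 + cv**2)                 # lognormal moment matching
    mu = np.log(mean_ks) - sigma2 / 2.0
    ks = rng.lognormal(mu, np.sqrt(sigma2), n)
    return float(np.minimum(ks, rain).mean())
```

    The two limits behave as expected: for very high intensities Ke approaches K̄s, while for low intensities the whole surface is supply-limited and Ke approaches the rain rate, which is the intensity dependence the abstract refers to.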

  20. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    PubMed

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
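    Schematically, the extrapolation amounts to fitting the leading corrections in 1/t and 1/N and reading off the intercept; the linear ansatz and names below are an illustrative simplification of the scaling forms used in the paper:

```python
import numpy as np

def extrapolate_ldf(times, sizes, psi):
    """Fit psi(t, N) ~ psi_inf + a/t + b/N over a grid of simulation
    times and population sizes (psi has shape (len(times), len(sizes)))
    and return the extrapolated t, N -> infinity value psi_inf."""
    t, N = np.meshgrid(times, sizes, indexing="ij")
    A = np.column_stack([np.ones(t.size), 1.0 / t.ravel(), 1.0 / N.ravel()])
    coef, *_ = np.linalg.lstsq(A, psi.ravel(), rcond=None)
    return float(coef[0])
```

    In practice the estimators at each (t, N) come from independent cloning runs, and the quality of the fit itself indicates whether the leading-order corrections dominate.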

  1. Scaling and efficiency determine the irreversible evolution of a market

    PubMed Central

    Baldovin, F.; Stella, A. L.

    2007-01-01

    In setting up a stochastic description of the time evolution of a financial index, the challenge consists in devising a model compatible with all stylized facts emerging from the analysis of financial time series and providing a reliable basis for simulating such series. Based on constraints imposed by market efficiency and on an inhomogeneous-time generalization of standard simple scaling, we propose an analytical model which accounts simultaneously for empirical results such as the linear decorrelation of successive returns, the power-law dependence on time of the volatility autocorrelation function, and the multiscaling associated with this dependence. In addition, our approach gives a justification and a quantitative assessment of the irreversible character of the index dynamics. This irreversibility enters as a key ingredient in a novel simulation strategy of index evolution which demonstrates the predictive potential of the model.

  2. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-05

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors in these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data set from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem is largely reduced by using the gridded forcing data, which allow running SCAM5 in each subcolumn and then averaging the results within the domain, because the subcolumns have a better chance of capturing the timing of the frontal propagation and the small-scale systems. Other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  3. The Programming Language Python In Earth System Simulations

    NASA Astrophysics Data System (ADS)

    Gross, L.; Imranullah, A.; Mora, P.; Saez, E.; Smillie, J.; Wang, C.

    2004-12-01

    Mathematical models in the earth sciences are based on the solution of systems of coupled, non-linear, time-dependent partial differential equations (PDEs). The spatial and temporal scales vary from planetary scale and millions of years for convection problems to 100 km and 10 years for fault-system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicolson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM) and the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in the earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open, and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we show the basic concepts of escript and how escript is used to implement a simulation code for interacting fault systems.
We will show some results of large-scale, parallel simulations on an SGI Altix system. Acknowledgements: Project work is supported by Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.
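    Of the time-stepping techniques listed above, Crank-Nicolson for a 1-D diffusion equation can be sketched generically as follows (plain NumPy for illustration, not escript code):

```python
import numpy as np

def crank_nicolson_diffusion(u0, kappa, dx, dt, steps):
    """Crank-Nicolson stepping for u_t = kappa * u_xx on a 1-D grid,
    holding the two boundary values fixed (Dirichlet conditions):
    (I - r*D2) u^{m+1} = (I + r*D2) u^m with r = kappa*dt/(2*dx^2)."""
    n = len(u0)
    r = kappa * dt / (2.0 * dx**2)
    A = (np.diag((1 + 2 * r) * np.ones(n))
         + np.diag(-r * np.ones(n - 1), -1) + np.diag(-r * np.ones(n - 1), 1))
    B = (np.diag((1 - 2 * r) * np.ones(n))
         + np.diag(r * np.ones(n - 1), -1) + np.diag(r * np.ones(n - 1), 1))
    for M in (A, B):                 # pin boundary rows so u[0], u[-1] stay fixed
        M[0, :] = 0.0; M[-1, :] = 0.0
        M[0, 0] = 1.0; M[-1, -1] = 1.0
    u = np.asarray(u0, dtype=float)
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)
    return u
```

    The scheme is second-order in time and unconditionally stable, which is why it is a common default for the parabolic parts of earth-science models; a production code would use a banded or sparse solver instead of the dense solve shown here.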

  4. GAPPARD: a computationally efficient method of approximating gap-scale disturbance in vegetation models

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.

    2013-09-01

    Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, we developed a new method for simulating stand-replacing disturbances that is both accurate and faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model by deriving the distribution of patch ages at any point in time on the basis of a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the time corresponding to the patch age. To account for temporal changes in model forcing (e.g., as a result of climate change), GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method in the vegetation model LPJ-GUESS, and evaluated it in a series of simulations along an altitudinal transect of an inner-Alpine valley. We obtained results very similar to the output of the original LPJ-GUESS model that uses 100 replicate patches, but simulation time was reduced by approximately a factor of 10.
Our new method is therefore highly suited for rapidly approximating LPJ-GUESS results, and provides the opportunity for future studies over large spatial domains, allows easier parameterization of tree species, faster identification of areas of interesting simulation results, and comparisons with large-scale datasets and results of other forest models.

  5. Re-design of a physically-based catchment scale agrochemical model for the simulation of parameter spaces and flexible transformation schemes

    NASA Astrophysics Data System (ADS)

    Stegen, Ronald; Gassmann, Matthias

    2017-04-01

    The use of a broad variety of agrochemicals is essential for modern industrialized agriculture. During the last decades, awareness of the side effects of their use has grown, and with it the requirement to reproduce, understand and predict the behaviour of these agrochemicals in the environment, in order to optimize their use and minimize the side effects. Modern modelling has made great progress in understanding and predicting the fate of these chemicals with numerical methods. While the behaviour of the applied chemicals is often investigated and modelled, most studies only simulate the parent chemicals, assuming complete annihilation of the substance. However, due to a diversity of chemical, physical and biological processes, the substances are rather transformed into new chemicals, which themselves are transformed until, at the end of the chain, the substance is completely mineralized. During this process, the fate of each transformation product is determined by its own environmental characteristics, and the pathways and products of transformation can differ largely by substance and by environmental influences, which can vary between compartments of the same site. Simulating transformation products introduces additional model uncertainties. Thus, the calibration effort increases compared to simulations of the transport and degradation of the primary substance alone. The simulation of the necessary physical processes requires substantial calculation time. Because of that, few physically-based models offer the possibility to simulate transformation products at all, mostly at the field scale. The few models available for the catchment scale are not optimized for this duty, i.e. they are only able to simulate a single parent compound and up to two transformation products. Thus, for simulations of large physico-chemical parameter spaces, the enormous calculation time of the underlying hydrological model diminishes the overall performance.
In this study, the structure of the model ZIN-AGRITRA is re-designed for the transport and transformation of an unlimited number of agrochemicals in the soil-water-plant system at the catchment scale. The focus is, besides a good hydrological standard, on a flexible representation of transformation processes and on optimization for the use of large numbers of different substances. Due to the new design, a reduction of the calculation time per tested substance is achieved, allowing faster testing of parameter spaces. Additionally, the new concept allows for the consideration of different transformation processes and products in different environmental compartments. A first test of the calculation time improvements and flexible transformation pathways was performed in a Mediterranean meso-scale catchment, using the insecticide chlorpyrifos and two of its transformation products, which emerge from different transformation processes, as test substances.

  6. Challenges of NDE Simulation Tool

    NASA Technical Reports Server (NTRS)

    Leckey, Cara A. C.; Juarez, Peter D.; Seebo, Jeffrey P.; Frank, Ashley L.

    2015-01-01

    Realistic nondestructive evaluation (NDE) simulation tools enable inspection optimization and predictions of inspectability for new aerospace materials and designs. NDE simulation tools may someday aid in the design and certification of advanced aerospace components, potentially shortening the time from material development to implementation by industry and government. Furthermore, modeling and simulation are expected to play a significant future role in validating the capabilities and limitations of guided wave based structural health monitoring (SHM) systems. The current state-of-the-art in ultrasonic NDE/SHM simulation cannot rapidly simulate damage detection techniques for large scale, complex geometry composite components/vehicles with realistic damage types. This paper discusses some of the challenges of model development and validation for composites, such as the level of realism and scale of simulation needed for NASA's applications. Ongoing model development work is described along with examples of model validation studies. The paper also discusses examples of the use of simulation tools at NASA to develop new damage characterization methods, and the associated challenges of validating those methods.

  7. Recent Advances in Transferable Coarse-Grained Modeling of Proteins

    PubMed Central

    Kar, Parimal; Feig, Michael

    2017-01-01

    Computer simulations are indispensable tools for studying the structure and dynamics of biological macromolecules. Biochemical processes occur on different scales of length and time, and atomistic simulations cannot cover the relevant spatiotemporal scales at which cellular processes occur. To address this challenge, coarse-grained (CG) models of biological systems are employed. Over the last few years, many CG models for proteins have been developed. However, many of them are not transferable across different systems and different environments. In this review, we discuss those CG protein models that are transferable and that retain chemical specificity. We restrict ourselves to CG models of soluble proteins only. We also briefly review recent progress made in multi-scale hybrid all-atom/coarse-grained simulations of proteins. PMID:25443957

  8. Deep Convection, Magnetism and Solar Supergranulation

    NASA Astrophysics Data System (ADS)

    Lord, J. W.

    We examine the effect of deep convection and magnetic fields on solar supergranulation. While supergranulation was originally identified as a convective flow from relatively great depth below the solar surface, recent work suggests that supergranules may originate near the surface. We use the MURaM code to simulate solar-like surface convection with a realistic photosphere and domain sizes up to 197 x 197 x 49 Mm^3. This yields nearly five orders of magnitude of density contrast between the bottom of the domain and the photosphere, making these the most stratified solar-like convection simulations that we are aware of. Magnetic fields were long thought to be a passive tracer in the photosphere, but recent work suggests that magnetism could provide a mechanism that enhances the supergranular-scale flows at the surface. In particular, the enhanced radiative losses through long-lived magnetic network elements may increase the lifetime of photospheric downflows and help organize low-wavenumber flows. Since our simulation does not have sufficient resolution to resolve the increased cooling by magnetic bright points, we artificially increase the radiative cooling in elements with strong magnetic flux. These simulations increase the cooling by 10% where the magnetic field strength exceeds 100 G. We find no statistically significant difference in the velocity or magnetic field spectrum from enhancing the radiative cooling, nor any difference in the time scale of the flows or the length scales of the magnetic energy spectrum. This suggests that the magnetic field is determined by the flows and is largely a passive tracer. We use these simulations to construct a two-component model of the flows: for scales smaller than the driving (integral) scale (which is four times the local density scale height) the flows follow a Kolmogorov (k^-5/3) spectrum, while larger scale modes decay with height from their driving depth (i.e. 
the depth where the wavelength of the mode is equal to the driving (integral) scale). This model reproduces the MURaM results well and suggests that the low-wavenumber power in the photosphere imprints from below. In particular, the amplitude of the driving (integral) scale mode at each depth determines how much power imprints on the surface flows. This is validated by MURaM simulations of varying depth, which show that increasing depths contribute power at a particular scale (or range of scales) that is always at lower wavenumbers than shallower flows. The mechanism for this imprinting remains unclear but, given the importance of the balances in the continuity equation in determining the spectrum of the flows, we suggest that pressure perturbations in the convective upflows are the imprinting mechanism. By comparing the MURaM simulations to SDO/HMI observations (using the coherent structure tracking code to compute the inferred horizontal velocities on both data sets), we find that the simulations have significant excess power at scales larger than supergranulation. The only way to match the observations is by using an artificial energy flux to transport the solar luminosity at all depths greater than 10 Mm below the photosphere (down to the bottom of the domain at 49 Mm depth). While magnetic fields from small-scale dynamo simulations help reduce the rms velocity required to transport the solar luminosity below the surface, this provides only a small reduction in low-wavenumber power in the photosphere. The convective energy transport in the Sun is constrained by theoretical models and the solar radiative luminosity. The amplitude and scale of the convective flows that transport the energy, however, are not constrained. The strong low-wavenumber flows found in these local simulations are also present in current-generation global simulations. 
While local or global dynamo magnetic fields may help suppress these large-scale flows, the magnetic fields must be substantially stronger throughout the convection domains for these simulations to match observations. The significant decrease in low wavenumber flow amplitude in the artificial energy flux simulation that matches the observed photospheric horizontal velocity spectrum suggests that convection in the Sun transports the solar luminosity with much weaker large-scale flows. This suggests that we do not understand how convective transport works in the Sun for depths greater than 10 Mm below the photosphere.

  9. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    DOE PAGES

    Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; ...

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  10. Teaching Real Science with a Microcomputer.

    ERIC Educational Resources Information Center

    Naiman, Adeline

    1983-01-01

    Discusses various ways science can be taught using microcomputers, including simulations/games which allow large-scale or historic experiments to be replicated on a manageable scale in a brief time. Examples of several computer programs are also presented, including "Experiments in Human Physiology," "Health Awareness…

  11. A Compact Synchronous Cellular Model of Nonlinear Calcium Dynamics: Simulation and FPGA Synthesis Results.

    PubMed

    Soleimani, Hamid; Drakakis, Emmanuel M

    2017-06-01

    Recent studies have demonstrated that calcium is a widespread intracellular ion that controls a wide range of temporal dynamics in the mammalian body. The simulation and validation of such studies against experimental data would benefit from a fast, large-scale simulation and modelling tool. This paper presents a compact and fully reconfigurable cellular calcium model capable of mimicking the Hopf bifurcation phenomenon and various nonlinear responses of biological calcium dynamics. The proposed cellular model is synthesized on a digital platform for a single unit and for a network model. Hardware synthesis, physical implementation on FPGA, and theoretical analysis confirm that the proposed cellular model can mimic the biological calcium behaviors with remarkably low hardware overhead. The approach has the potential to speed up large-scale simulations of slow intracellular dynamics by sharing more cellular units in real time. To this end, various networks constructed by pipelining 10 k to 40 k cellular calcium units are compared with an equivalent simulation run on a standard PC workstation. Results show that the cellular hardware model is, on average, 83 times faster than the CPU version.

  12. A hybrid parallel framework for the cellular Potts model simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Yi; He, Kejing; Dong, Shoubin

    2009-01-01

    The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximate, and cannot be used for large-scale, complex 3D simulations. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on shared-memory SMP systems using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving, and SMP systems are increasingly common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for large-scale simulations (~10^8 sites) of the complex collective behavior of numerous cells (~10^6).
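    The lattice-update step that this framework parallelizes with OpenMP can be sketched in a few lines. Below is a minimal, single-threaded Python sketch of one CPM Metropolis sweep with an adhesion term and a volume constraint; all parameter values (J, LAM, T, V_TARGET) and the two-cell setup are illustrative assumptions, not values from the paper.

```python
import math
import random

# Minimal single-threaded sketch of one Cellular Potts Model (CPM)
# Metropolis sweep -- the lattice-update step the paper parallelizes
# with OpenMP. All parameter values are illustrative assumptions.
L = 20           # lattice side (toroidal)
J = 2.0          # adhesion penalty between unlike cell IDs
LAM = 1.0        # volume-constraint strength
V_TARGET = 25    # target cell volume in sites
T = 4.0          # effective temperature

def neighbors(x, y):
    return [((x + dx) % L, (y + dy) % L)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def sweep(lattice, volumes):
    """One Monte Carlo sweep: L*L neighbor-copy attempts, Metropolis accept."""
    for _ in range(L * L):
        x, y = random.randrange(L), random.randrange(L)
        nx, ny = random.choice(neighbors(x, y))
        src, dst = lattice[nx][ny], lattice[x][y]
        if src == dst:
            continue
        # adhesion energy change at the target site
        dH = sum(J * ((src != lattice[a][b]) - (dst != lattice[a][b]))
                 for a, b in neighbors(x, y))
        # volume-constraint change (cell ID 0 is medium, unconstrained)
        for cid, dv in ((src, 1), (dst, -1)):
            if cid != 0:
                v = volumes[cid]
                dH += LAM * ((v + dv - V_TARGET) ** 2 - (v - V_TARGET) ** 2)
        if dH <= 0 or random.random() < math.exp(-dH / T):
            lattice[x][y] = src
            volumes[src] += 1
            volumes[dst] -= 1

# two 5x5 cells (IDs 1 and 2) in medium (ID 0)
lattice = [[0] * L for _ in range(L)]
for x in range(5, 10):
    for y in range(5, 10):
        lattice[x][y] = 1
    for y in range(11, 16):
        lattice[x][y] = 2
volumes = {0: L * L - 50, 1: 25, 2: 25}

random.seed(0)
for _ in range(20):
    sweep(lattice, volumes)
print(sum(volumes.values()))  # total site count is conserved: 400
```

    In a shared-memory parallelization of this sweep, the main concern is that concurrent copy attempts on neighboring sites race on the same energy terms, which is why checkerboard or domain decompositions are typically used.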

  13. NIMROD: A computational laboratory for studying nonlinear fusion magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Sovinec, C. R.; Gianakon, T. A.; Held, E. D.; Kruger, S. E.; Schnack, D. D.

    2003-05-01

    Nonlinear numerical studies of macroscopic modes in a variety of magnetic fusion experiments are made possible by the flexible high-order accurate spatial representation and semi-implicit time advance in the NIMROD simulation code [A. H. Glasser et al., Plasma Phys. Controlled Fusion 41, A747 (1999)]. Simulation of a resistive magnetohydrodynamics mode in a shaped toroidal tokamak equilibrium demonstrates computation with disparate time scales; simulations of discharge 87009 in the DIII-D tokamak [J. L. Luxon et al., Plasma Physics and Controlled Nuclear Fusion Research 1986 (International Atomic Energy Agency, Vienna, 1987), Vol. I, p. 159] confirm an analytic scaling for the temporal evolution of an ideal mode subject to plasma-β increasing beyond marginality; and a spherical torus simulation demonstrates nonlinear free-boundary capabilities. A comparison of numerical results on magnetic relaxation finds the n=1 mode and flux amplification in spheromaks to be very closely related to the m=1 dynamo modes and magnetic reversal in reversed-field pinch configurations. Advances in local and nonlocal closure relations developed for modeling kinetic effects in fluid simulations are also described.

  14. Poiseuille flow of soft glasses in narrow channels: from quiescence to steady state.

    PubMed

    Chaudhuri, Pinaki; Horbach, Jürgen

    2014-10-01

    Using numerical simulations, the onset of Poiseuille flow in a confined soft glass is investigated. Starting from the quiescent state, steady flow sets in at a time scale which increases with a decrease in applied forcing. At this onset time scale, a rapid transition occurs via the simultaneous fluidization of regions having different local stresses. In the absence of steady flow at long times, creep is observed even in regions where the local stress is larger than the bulk yielding threshold. Finally, we show that the time scale to attain steady flow depends strongly on the history of the initial state.

  15. Comparative performance of different scale-down simulators of substrate gradients in Penicillium chrysogenum cultures: the need of a biological systems response analysis.

    PubMed

    Wang, Guan; Zhao, Junfei; Haringa, Cees; Tang, Wenjun; Xia, Jianye; Chu, Ju; Zhuang, Yingping; Zhang, Siliang; Deshmukh, Amit T; van Gulik, Walter; Heijnen, Joseph J; Noorman, Henk J

    2018-05-01

    In a 54 m^3 large-scale penicillin fermentor, the cells experience substrate gradient cycles at the timescale of the global mixing time, about 20-40 s. Here, we used an intermittent feeding regime (IFR) and a two-compartment reactor (TCR) to mimic these substrate gradients in laboratory-scale continuous cultures. The IFR was applied to simulate the substrate dynamics experienced by the cells at full scale at timescales of tens of seconds to minutes (30 s, 3 min and 6 min), while the TCR was designed to simulate substrate gradients at an applied mean residence time (τc) of 6 min. A biological systems analysis of the response of an industrial high-yielding P. chrysogenum strain has been performed in these continuous cultures. Compared to an undisturbed continuous feeding regime in a single reactor, the penicillin productivity (q_PenG) was reduced in all scale-down simulators. The dynamic metabolomics data indicated that in the IFRs, the cells accumulated high levels of the central metabolites during the feast phase to actively cope with external substrate deprivation during the famine phase. In contrast, in the TCR system, the storage pool (e.g. mannitol and arabitol) constituted a large contribution to the carbon supply in the non-feed compartment. Further, transcript analysis revealed that the scale-down simulators gave different expression levels of the glucose/hexose transporter genes and the penicillin gene clusters. The results showed that q_PenG did not correlate well with exposure to the substrate regimes (excess, limitation and starvation), but there was a clear inverse relation between q_PenG and the intracellular glucose level. © 2018 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.

  16. Effect of real-time boundary wind conditions on the air flow and pollutant dispersion in an urban street canyon—Large eddy simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Yun-Wei; Gu, Zhao-Lin; Cheng, Yan; Lee, Shun-Cheng

    2011-07-01

    Air flow and pollutant dispersion characteristics in an urban street canyon are studied under real-time boundary conditions. A new scheme for realizing real-time boundary conditions in simulations is proposed, keeping the upper boundary wind conditions consistent with the measured time series of wind data. The air flow structure and its evolution under real-time boundary wind conditions are simulated using this new scheme, and the effect of the time series of ambient wind conditions on the flow structures inside and above the street canyon is investigated. Under real-time wind conditions the flow shows an obvious intermittent feature in the street canyon, and a flapping shear layer forms near the roof level, resulting in the expansion or compression of the air mass in the canyon. The simulations of pollutant dispersion show that pollutants inside and above the street canyon are transported by different dispersion mechanisms, depending on the time series of air flow structures. Large-scale air movements during the expansion or compression of the air mass in the canyon have obvious effects on pollutant dispersion. The simulations also show that the transport of pollutants from the canyon to the upper air flow is dominated by the shear-layer turbulence near the roof level and by the expansion or compression of the air mass in the street canyon; in particular, the expansion of the air mass, which features large-scale air movement, contributes most to the pollutant dispersion in this study. Comparisons of simulated results under different boundary wind conditions indicate that real-time boundary wind conditions produce better conditions for pollutant dispersion than artificially-designed steady boundary wind conditions.

  17. Multiscale modeling and general theory of non-equilibrium plasma-assisted ignition and combustion

    NASA Astrophysics Data System (ADS)

    Yang, Suo; Nagaraja, Sharath; Sun, Wenting; Yang, Vigor

    2017-11-01

    A self-consistent framework for the modeling and simulation of plasma-assisted ignition and combustion is established. In this framework, a ‘frozen electric field’ modeling approach exploits the quasi-periodic behavior of the electrical characteristics to avoid re-calculating the electric field for each pulse. The correlated dynamic adaptive chemistry (CO-DAC) method is employed to accelerate the calculation of large and stiff chemical mechanisms. The time step is dynamically updated during the simulation through a three-stage multi-time-scale modeling strategy, which exploits the large separation of time scales in nanosecond pulsed plasma discharges. A general theory of plasma-assisted ignition and combustion is then proposed. Nanosecond pulsed plasma discharges for ignition and combustion can be divided into four stages. Stage I is the discharge pulse, with time scales of O(1-10 ns). In this stage, input energy is coupled into electron-impact excitation and dissociation reactions to generate charged/excited species and radicals. Stage II is the afterglow during the gap between two adjacent pulses, with time scales of O(100 ns). In this stage, quenching of excited species dissociates O2 and fuel molecules and provides fast gas heating. Stage III is the remaining gap between pulses, with time scales of O(1-100 µs). The radicals generated during Stages I and II significantly enhance exothermic reactions in this stage. The cumulative effects of multiple pulses are seen in Stage IV, with time scales of O(1-1000 ms); these include preheated gas temperatures and a large pool of radicals and fuel fragments that trigger ignition. For flames, plasma can significantly enhance radical generation and gas heating in the pre-heat zone, thereby aiding flame establishment.
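    The three-stage multi-time-scale stepping strategy described above can be illustrated with a toy driver that selects the integration step from where the current time falls within one pulse period. All durations and step sizes below are illustrative assumptions, not values from the paper.

```python
# Toy sketch of multi-time-scale stepping for a nanosecond pulsed
# discharge: pick the time step from the phase within one pulse period.
# All durations and step sizes are illustrative assumptions.
PULSE = 10e-9       # stage I: discharge pulse
AFTERGLOW = 100e-9  # stage II: afterglow after the pulse
PERIOD = 100e-6     # pulse repetition period (stages I-III)

def time_step(t):
    phase = t % PERIOD
    if phase < PULSE:
        return 1e-11   # resolve the discharge itself
    if phase < PULSE + AFTERGLOW:
        return 1e-9    # resolve quenching and fast gas heating
    return 1e-7        # slower chemistry between pulses

# march across one period and count steps taken at each size
t, steps = 0.0, {1e-11: 0, 1e-9: 0, 1e-7: 0}
while t < PERIOD:
    dt = time_step(t)
    steps[dt] += 1
    t += dt
print(steps)
```

    The point of the staging is visible in the counts: the nanosecond pulse costs about as many steps as the entire remaining 99.99% of the period, so a single uniform step small enough for stage I would be prohibitively expensive.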

  18. An analytical study of reduced-gravity propellant settling

    NASA Technical Reports Server (NTRS)

    Bradshaw, R. D.; Kramer, J. L.; Masica, W. J.

    1974-01-01

    Full-scale propellant reorientation flow dynamics for the Centaur D-1T fuel tank were analyzed. A computer code using the simplified marker and cell technique was modified to include the capability for a variable-grid mesh configuration. Use of smaller cells near the boundary, near baffles, and in corners provides improved flow resolution. Two drop tower model cases were simulated to verify program validity: one case without baffles, the other with baffles and geometry identical to the Centaur D-1T. Flow simulations using the new code successfully reproduced the drop tower data, and baffles were found to be a positive factor in the settling flow. Two full-scale Centaur D-1T cases were simulated using parameters based on the Titan/Centaur proof flight. These flow simulations indicated the time to clear the vent area and the time to orient and collect the propellant. The results further indicated the complexity of the reorientation flow and the long time period required for settling.

  19. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.

    PubMed

    Dhar, Amrit; Minin, Vladimir N

    2017-05-01

    Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.
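    The two mapping summaries discussed here, substitution counts and dwelling times, can be estimated by naive simulation along a single branch. The sketch below uses a hypothetical two-state chain with assumed rates and, unlike true stochastic mapping, does not condition on the endpoint states; it only illustrates what the mean and variance of such summaries are.

```python
import random

# Estimate the mean and variance of two stochastic-mapping summaries
# (substitution count and dwelling time in state 0) by simulating a
# two-state continuous-time Markov chain along one branch. Rates and
# branch length are illustrative assumptions; real stochastic mapping
# additionally conditions on the states observed at the tips.
RATE = {0: 1.0, 1: 2.0}   # exit rate out of each state
T_BRANCH = 1.5            # branch length

def simulate(start=0):
    t, state, jumps, dwell0 = 0.0, start, 0, 0.0
    while True:
        wait = random.expovariate(RATE[state])
        seg = min(wait, T_BRANCH - t)   # time actually spent before branch end
        if state == 0:
            dwell0 += seg
        t += wait
        if t >= T_BRANCH:
            return jumps, dwell0
        state = 1 - state
        jumps += 1

random.seed(1)
samples = [simulate() for _ in range(20000)]
counts = [c for c, _ in samples]
dwells = [d for _, d in samples]
mean_c = sum(counts) / len(counts)
var_c = sum((c - mean_c) ** 2 for c in counts) / len(counts)
mean_d = sum(dwells) / len(dwells)
print(mean_c, var_c, mean_d)
```

    The simulation-free algorithms the abstract describes return these same moments analytically, and do so for a whole tree in time linear in the number of tips, which is what makes them preferable to Monte Carlo estimates like this one.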

  20. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time

    PubMed Central

    Dhar, Amrit

    2017-01-01

    Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences. PMID:28177780

  1. VME rollback hardware for time warp multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Robb, Michael J.; Buzzell, Calvin A.

    1992-01-01

    The purpose of the research effort is to develop and demonstrate innovative hardware to implement specific rollback and timing functions required for efficient queue management and precision timekeeping in multiprocessor discrete event simulations. The previously completed phase 1 effort demonstrated the technical feasibility of building hardware modules which eliminate the state saving overhead of the Time Warp paradigm used in distributed simulations on multiprocessor systems. The current phase 2 effort will build multiple pre-production rollback hardware modules integrated with a network of Sun workstations, and the integrated system will be tested by executing a Time Warp simulation. The rollback hardware will be designed to interface with the greatest number of multiprocessor systems possible. The authors believe that the rollback hardware will provide for significant speedup of large scale discrete event simulation problems and allow multiprocessors using Time Warp to dramatically increase performance.

  2. Numerical evaluation of the scale problem on the wind flow of a windbreak

    PubMed Central

    Liu, Benli; Qu, Jianjun; Zhang, Weimin; Tan, Lihai; Gao, Yanhong

    2014-01-01

    The airflow field around wind fences with different porosities, which are important in determining the efficiency of fences as a windbreak, is typically studied via scaled wind tunnel experiments and numerical simulations. However, the scale problem in wind tunnels or numerical models is rarely researched. In this study, we perform a numerical comparison between a scaled wind-fence experimental model and an actual-sized fence via computational fluid dynamics simulations. The results show that although the general field pattern can be captured in a reduced-scale wind tunnel or numerical model, several flow characteristics near obstacles are not proportional to the size of the model and thus cannot be extrapolated directly. For example, the small vortex behind a low-porosity fence with a scale of 1:50 is approximately 4 times larger than that behind a full-scale fence. PMID:25311174
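    One standard way to see why flow features near obstacles do not scale proportionally with model size is through the Reynolds number: with air at the same speed, Re scales with the obstacle size, so a 1:50 model runs at 1/50 the full-scale Re. The wind speed, fence height, and air viscosity in this back-of-envelope sketch are rough assumptions, not values from the study.

```python
# Back-of-envelope sketch: a geometrically scaled model in air at the
# same wind speed cannot match the full-scale Reynolds number.
# All values are rough assumptions.
NU_AIR = 1.5e-5   # kinematic viscosity of air [m^2/s], assumed

def reynolds(u, length):
    """Re = u * L / nu for speed u [m/s] and length scale L [m]."""
    return u * length / NU_AIR

u = 10.0                          # wind speed [m/s], same in both cases
re_full = reynolds(u, 2.0)        # assumed 2 m high full-scale fence
re_model = reynolds(u, 2.0 / 50)  # 1:50 scale model
print(round(re_full / re_model))  # -> 50
```

    Whether this Re mismatch fully explains the reported factor-of-4 vortex-size discrepancy is a separate question; the sketch only shows that dynamic similarity is broken unless the model flow is sped up or the fluid changed.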

  3. The effects of magnetic fields and protostellar feedback on low-mass cluster formation

    NASA Astrophysics Data System (ADS)

    Cunningham, Andrew J.; Krumholz, Mark R.; McKee, Christopher F.; Klein, Richard I.

    2018-05-01

    We present a large suite of simulations of the formation of low-mass star clusters. Our simulations include an extensive set of physical processes - magnetohydrodynamics, radiative transfer, and protostellar outflows - and span a wide range of virial parameters and magnetic field strengths. Comparing the outcomes of our simulations to observations, we find that simulations remaining close to virial balance throughout their history produce star formation efficiencies and initial mass function (IMF) peaks that are stable in time and in reasonable agreement with observations. Our results indicate that small-scale dissipation effects near the protostellar surface provide a feedback loop for stabilizing the star formation efficiency. This is true regardless of whether the balance is maintained by input of energy from large-scale forcing or by strong magnetic fields that inhibit collapse. In contrast, simulations that leave virial balance and undergo runaway collapse form stars too efficiently and produce an IMF that becomes increasingly top heavy with time. In all cases, we find that the competition between magnetic flux advection towards the protostar and outward advection due to magnetic interchange instabilities, and the competition between turbulent amplification and reconnection close to newly formed protostars renders the local magnetic field structure insensitive to the strength of the large-scale field, ensuring that radiation is always more important than magnetic support in setting the fragmentation scale and thus the IMF peak mass. The statistics of multiple stellar systems are similarly insensitive to variations in the initial conditions and generally agree with observations within the range of statistical uncertainty.

  4. Probabilistic Simulation of Multi-Scale Composite Behavior

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2012-01-01

    A methodology is developed to computationally assess the non-deterministic composite response at all composite scales (from micro to structural) due to uncertainties in the constituent (fiber and matrix) properties, in the fabrication process, and in structural variables (primitive variables). The methodology is computationally efficient for simulating the probability distributions of composite behavior, such as material properties and laminate and structural responses. By-products of the methodology are probabilistic sensitivities of the composite primitive variables. The methodology has been implemented in the computer codes PICAN (Probabilistic Integrated Composite ANalyzer) and IPACS (Integrated Probabilistic Assessment of Composite Structures). The accuracy and efficiency of this methodology are demonstrated by simulating the uncertainties in typical composite laminates and comparing the results with the Monte Carlo simulation method. Available experimental data on composite laminate behavior at all scales fall within the scatter predicted by PICAN. Multi-scaling is extended to simulate probabilistic thermo-mechanical fatigue and the probabilistic design of a composite radome in order to illustrate its versatility. Results show that probabilistic fatigue can be simulated for different temperature amplitudes and for different cyclic stress magnitudes. Results also show that laminate configurations can be selected to increase the radome reliability by several orders of magnitude without increasing the laminate thickness, a unique feature of structural composites. The age of the reference indicates that nothing fundamental has been done in this area since that time.

  5. Large scale simulation of liquid water transport in a gas diffusion layer of polymer electrolyte membrane fuel cells using the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Sakaida, Satoshi; Tabe, Yutaka; Chikahisa, Takemi

    2017-09-01

    A method for large-scale simulation with the lattice Boltzmann method (LBM) is proposed for liquid water movement in a gas diffusion layer (GDL) of polymer electrolyte membrane fuel cells. The LBM is able to analyze two-phase flows in complex structures; however, the simulation domain is limited by heavy computational loads. This study investigates a variety of means to reduce the computational load and enlarge the simulation domain. The first is an LBM that treats the two phases as having the same density, which keeps the computation numerically stable at large time steps; its applicability is confirmed by comparing the results with rigorous simulations using the actual density. The second is establishing the maximum Capillary number at which flow patterns remain similar to those of the precise simulation; this is attempted because the computational load is inversely proportional to the Capillary number. The results show that the Capillary number can be increased to 3.0 × 10^-3, whereas actual operation corresponds to Ca = 10^-5 to 10^-8. The limit is also investigated experimentally using an enlarged-scale model satisfying similarity conditions for the flow. Finally, the effects of pore uniformity in the GDL are demonstrated as an example of a large-scale simulation covering a channel.
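    Since the computational load is stated to be inversely proportional to the Capillary number, the gain from raising Ca to the reported limit follows from simple arithmetic. The sketch below assumes rough water properties and an operating Ca in the middle of the quoted range; only the 3.0 × 10^-3 limit comes from the abstract.

```python
# Back-of-envelope sketch of the Capillary-number speedup argument.
# Ca = mu * u / sigma, and the computational load is reported to be
# inversely proportional to Ca, so raising Ca from an operating value
# to the reported limit (3.0e-3) cuts the load by the ratio of the two.
# The water property values are rough assumptions.
mu = 3.5e-4      # dynamic viscosity of water [Pa s], assumed
sigma = 6.3e-2   # surface tension of water [N/m], assumed

def capillary_number(u):
    """Ca for a characteristic liquid velocity u [m/s]."""
    return mu * u / sigma

ca_limit = 3.0e-3                  # maximum Ca preserving the flow pattern
ca_operating = 1.0e-6              # middle of the quoted 1e-5..1e-8 range
speedup = ca_limit / ca_operating  # load ~ 1/Ca
print(speedup)  # roughly a factor of 3000
```

    This is why establishing the largest admissible Ca matters: three orders of magnitude in run time is the difference between an infeasible and a routine GDL-scale simulation.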

  6. Use NU-WRF and GCE Model to Simulate the Precipitation Processes During MC3E Campaign

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Wu, Di; Matsui, Toshi; Li, Xiaowen; Zeng, Xiping; Peter-Lidard, Christa; Hou, Arthur

    2012-01-01

    One of the major CRM approaches to studying precipitation processes is sometimes referred to as "cloud ensemble modeling". This approach allows many clouds of various sizes and stages of their life cycles to be present at any given simulation time. Large-scale effects derived from observations are imposed on CRMs as forcing, and cyclic lateral boundaries are used. The advantage of this approach is that model results in terms of rainfall, Q1, and Q2 are usually in good agreement with observations. In addition, the model results provide cloud statistics that represent different types of clouds/cloud systems over their life cycles. The large-scale forcing derived from MC3E will be used to drive GCE model simulations, and the model-simulated results will be compared with observations from MC3E. These GCE model-simulated datasets are especially valuable for LH algorithm developers. In addition, the regional-scale model with very high resolution, NASA Unified WRF, is also used for real-time forecasting during the MC3E campaign to ensure that precipitation and other meteorological forecasts are available to the flight planning team and to interpret the forecast results in terms of proposed flight scenarios. Post-mission simulations are conducted to examine the sensitivity of cloud and precipitation processes and rainfall to initial and lateral boundary conditions. We will compare model results in terms of precipitation and surface rainfall from the GCE model and NU-WRF.

  7. Origin of two time-scale regimes in potentiometric titration of metal oxides. A replica kinetic Monte Carlo study.

    PubMed

    Zarzycki, Piotr; Rosso, Kevin M

    2009-06-16

    Replica kinetic Monte Carlo simulations were used to study the characteristic time scales of potentiometric titration of metal oxides and (oxy)hydroxides. The effects of surface heterogeneity and surface transformation on the titration kinetics were also examined. Two characteristic relaxation times are often observed experimentally, with the trailing, slower part attributed to surface nonuniformity, porosity, polymerization, amorphization, and other dynamic surface processes induced by unbalanced surface charge. However, our simulations show that these two characteristic relaxation times are intrinsic to the proton-binding reaction even for energetically homogeneous surfaces, and therefore surface heterogeneity or transformation need not be invoked. All such second-order surface processes are nevertheless found to intensify the separation and distinction of the two kinetic regimes. The effects of surface energetic-topographic nonuniformity, as well as dynamic surface transformation and interface roughening/smoothing, were described in a statistical fashion. Furthermore, our simulations show that a shift in the point of zero charge is expected from increased titration speed, and the pH dependence of the titration measurement error is in excellent agreement with experimental studies.
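    The two intrinsic kinetic regimes can be idealized as a bi-exponential relaxation. A toy sketch (the amplitudes and time constants below are invented for illustration and are not the paper's kMC results):

```python
import numpy as np

# Toy illustration (not the paper's replica-kMC model): the titration signal is
# idealized as the sum of a fast and a slow exponential relaxation.
def biexponential(t, a_fast, tau_fast, a_slow, tau_slow):
    return a_fast * np.exp(-t / tau_fast) + a_slow * np.exp(-t / tau_slow)

t = np.linspace(0.0, 50.0, 501)
signal = biexponential(t, a_fast=0.7, tau_fast=1.0, a_slow=0.3, tau_slow=20.0)

# By t = 5 the fast mode (tau = 1) has essentially decayed; what remains is
# dominated by the slow mode -- the "trailing" regime in the abstract.
fast_part = 0.7 * np.exp(-5.0 / 1.0)
slow_part = 0.3 * np.exp(-5.0 / 20.0)
print(slow_part > 10 * fast_part)  # True
```

    Fitting such a two-term model to a measured titration curve is one simple way to extract the two relaxation times experimentally.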

  8. Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations

    NASA Technical Reports Server (NTRS)

    Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.

    2015-01-01

    Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of these flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and long solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.

  9. General relativistic screening in cosmological simulations

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Paranjape, Aseem

    2016-10-01

    We revisit the issue of interpreting the results of large-volume cosmological simulations in the context of large-scale general relativistic effects. We look for simple modifications to the nonlinear evolution of the gravitational potential ψ that lead on large scales to the correct, fully relativistic description of density perturbations in the Newtonian gauge. We note that the relativistic constraint equation for ψ can be cast as a diffusion equation, with a diffusion length scale determined by the expansion of the Universe. Exploiting the weak time evolution of ψ in all regimes of interest, this equation can be further accurately approximated as a Helmholtz equation, with an effective relativistic "screening" scale ℓ related to the Hubble radius. We demonstrate that it is thus possible to carry out N-body simulations in the Newtonian gauge by replacing Poisson's equation with this Helmholtz equation, involving a trivial change in the Green's function kernel. Our results also motivate a simple, approximate (but very accurate) gauge transformation, δ_N(k) ≈ δ_sim(k) × (k² + ℓ⁻²)/k², to convert the density field δ_sim of standard collisionless N-body simulations (initialized in the comoving synchronous gauge) into the Newtonian-gauge density δ_N at arbitrary times. A similar conversion can also be written in terms of particle positions. Our results can be interpreted in terms of a Jeans stability criterion induced by the expansion of the Universe. The appearance of the screening scale ℓ in the evolution of ψ, in particular, leads to a natural resolution of the "Jeans swindle" in the presence of superhorizon modes.
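    The quoted gauge conversion acts mode by mode in Fourier space, so it can be sketched with FFTs. The grid size, box size, and screening scale below are placeholders, not values from the paper:

```python
import numpy as np

# Hedged sketch of delta_N(k) = delta_sim(k) * (k^2 + l^-2) / k^2 applied to a
# 3-D density contrast on a periodic grid.

def synchronous_to_newtonian(delta_sim, box_size, screening_scale):
    n = delta_sim.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    factor = np.ones_like(k2)
    nonzero = k2 > 0
    factor[nonzero] = (k2[nonzero] + screening_scale**-2) / k2[nonzero]
    # k = 0 mode: the mean of the density contrast is left untouched.
    return np.fft.ifftn(np.fft.fftn(delta_sim) * factor).real

delta_sim = np.random.default_rng(0).normal(size=(32, 32, 32))
delta_N = synchronous_to_newtonian(delta_sim, box_size=1000.0, screening_scale=3000.0)
```

    For modes with k ≫ 1/ℓ the factor tends to 1, so small-scale clustering is unchanged; only near-horizon modes are boosted, as the abstract's Jeans-type interpretation suggests.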

  10. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method

    PubMed Central

    Bernal, Javier; Torres-Jimenez, Jose

    2015-01-01

    SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller’s scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller’s algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller’s algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller’s algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data. PMID:26958442
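    The training pattern described above, a gradient method restarted from annealed random weights whenever it stalls, can be sketched as follows. This is an illustration only: scipy's plain conjugate gradient stands in for Møller's scaled conjugate gradient, the "annealing" is reduced to a cooling proposal scale, and the loss is an arbitrary non-convex test function; none of these are SAGRAD's actual components.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

def loss(w):
    # A small non-convex test function with several local minima per coordinate.
    return np.sum(w**2) + np.sum(np.sin(3.0 * w))

best = None
temperature = 1.0
for restart in range(5):
    w0 = rng.normal(scale=temperature, size=4)   # annealed (shrinking) proposals
    res = minimize(loss, w0, method="CG")        # plain CG stands in for Møller's SCG
    if best is None or res.fun < best.fun:
        best = res
    temperature *= 0.5                           # cool the proposal distribution
print(f"best loss after restarts: {best.fun:.4f}")
```

    The cooling schedule concentrates later restarts near the origin of weight space, which is the rough effect (not the mechanism) of SAGRAD's simulated-annealing (re)initialization.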

  11. Validating the simulation of large-scale parallel applications using statistical characteristics

    DOE PAGES

    Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...

    2016-03-01

    Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
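    The contrast between the coarse total-runtime metric and a fine-grained statistical comparison can be sketched as follows. The gamma-distributed "traces" are synthetic stand-ins, and the Kolmogorov-Smirnov distance is one plausible choice of statistical characteristic, not necessarily the paper's metric:

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic per-event durations from a "real" trace and a slightly-off model.
rng = np.random.default_rng(1)
real_msg_times = rng.gamma(shape=2.0, scale=1.0, size=5000)
sim_msg_times = rng.gamma(shape=2.0, scale=1.05, size=5000)

# Coarse metric: percent error of totals -- can hide distribution-shape mismatches.
percent_error = abs(real_msg_times.sum() - sim_msg_times.sum()) / real_msg_times.sum()

# Fine-grained metric: Kolmogorov-Smirnov distance between the two distributions.
stat, p_value = ks_2samp(real_msg_times, sim_msg_times)
print(f"total-time error {percent_error:.1%}, KS distance {stat:.3f}")
```

    Two models can produce nearly identical totals while differing markedly in their event-duration distributions, which is exactly the insensitivity the abstract criticizes.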

  12. Multiple time scale analysis of pressure oscillations in solid rocket motors

    NASA Astrophysics Data System (ADS)

    Ahmed, Waqas; Maqsood, Adnan; Riaz, Rizwan

    2018-03-01

    In this study, acoustic pressure oscillations for single and coupled longitudinal acoustic modes in a Solid Rocket Motor (SRM) are investigated using the Multiple Time Scales (MTS) method. Two independent time scales are introduced: the oscillations occur on the fast time scale, whereas the amplitude and phase change on the slow time scale. Hopf bifurcation analysis is employed to investigate the properties of the solution, and the supercritical bifurcation phenomenon is observed for the linearly unstable system. The amplitude of the oscillations results from equal energy gain and loss rates of the longitudinal acoustic modes. The effects of linear instability and of the frequency of the longitudinal modes on the amplitude and phase of the oscillations are determined for both single and coupled modes. In both cases, the maximum amplitude of the oscillations decreases with the frequency of the acoustic mode and with the linear instability of the SRM. The comparison of analytical MTS results and numerical simulations demonstrates excellent agreement.
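    The slow-time-scale behavior described above is captured by the normal form of a supercritical Hopf bifurcation: an amplitude equation dA/dt = αA − βA³ whose oscillation amplitude saturates at √(α/β). A minimal numerical check (the coefficients are illustrative, not SRM values):

```python
import numpy as np

# Forward-Euler integration of the supercritical Hopf amplitude equation.
alpha, beta = 0.5, 2.0
dt, steps = 0.01, 5000
A = 0.01                      # small initial disturbance
for _ in range(steps):
    A += dt * (alpha * A - beta * A**3)   # dA/dt = alpha*A - beta*A^3

A_theory = np.sqrt(alpha / beta)   # saturation amplitude sqrt(alpha/beta)
print(f"numerical {A:.4f} vs analytical {A_theory:.4f}")
```

    The disturbance first grows at the linear rate α, then nonlinear saturation balances energy gain and loss, which is the mechanism the abstract invokes for the limit-cycle amplitude.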

  13. Impact of Sub-grid Soil Textural Properties on Simulations of Hydrological Fluxes at the Continental Scale Mississippi River Basin

    NASA Astrophysics Data System (ADS)

    Kumar, R.; Samaniego, L. E.; Livneh, B.

    2013-12-01

    Knowledge of soil hydraulic properties such as porosity and saturated hydraulic conductivity is required to accurately model the dynamics of near-surface hydrological processes (e.g. evapotranspiration and root-zone soil moisture dynamics) and provide reliable estimates of regional water and energy budgets. Soil hydraulic properties are commonly derived from pedo-transfer functions using soil textural information recorded during surveys, such as the fractions of sand and clay, bulk density, and organic matter content. Typically, large-scale land-surface models are parameterized using a relatively coarse soil map with little or no information on parametric sub-grid variability. In this study we analyze the impact of sub-grid soil variability on simulated hydrological fluxes over the Mississippi River Basin (≈3,240,000 km²) at multiple spatio-temporal resolutions. A set of numerical experiments was conducted with the distributed mesoscale hydrologic model (mHM) using two soil datasets: (a) the Digital General Soil Map of the United States, or STATSGO2 (1:250,000), and (b) the recently collated Harmonized World Soil Database based on the FAO-UNESCO Soil Map of the World (1:5,000,000). mHM was parameterized with the multi-scale regionalization technique that derives distributed soil hydraulic properties via pedo-transfer functions and regional coefficients. Within the experimental framework, 3-hourly model simulations were conducted at four spatial resolutions ranging from 0.125° to 1°, using meteorological datasets from the NLDAS-2 project for the period 1980-2012. Preliminary results indicate that the model was able to capture observed streamflow behavior reasonably well with both soil datasets in the major sub-basins (i.e. the Missouri, the Upper Mississippi, the Ohio, the Red, and the Arkansas). However, the spatio-temporal patterns of simulated water fluxes and states (e.g. soil moisture, evapotranspiration) from the two simulations showed marked differences, particularly at shorter time scales (hours to days) in regions with coarse-textured sandy soils. Furthermore, the partitioning of total runoff into near-surface interflow and baseflow components was also significantly different between the two simulations; simulations with the coarser soil map produced comparatively higher baseflows. At longer time scales (months to seasons), where climatic factors play a major role, the integrated fluxes and states from the two sets of model simulations match fairly closely, despite the apparent discrepancy in the partitioning of total runoff.
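    The pedo-transfer step can be sketched with a Cosby-type regression from sand and clay percentages to porosity and saturated hydraulic conductivity. The coefficients below are the commonly quoted Cosby et al. (1984) values, included here as an assumption for illustration; mHM's actual regionalized transfer functions differ.

```python
# Illustrative pedo-transfer function (a sketch, not mHM's regionalization).
# Treat the regression coefficients as assumptions to verify against the source.

def cosby_ptf(sand_pct, clay_pct):
    porosity = 0.489 - 0.00126 * sand_pct                  # m3/m3
    ksat_in_hr = 10.0 ** (-0.60 + 0.0126 * sand_pct - 0.0064 * clay_pct)
    ksat_m_s = ksat_in_hr * 0.0254 / 3600.0                # inches/hr -> m/s
    return porosity, ksat_m_s

# A sandy soil drains far faster than a clayey one:
por_sand, k_sand = cosby_ptf(sand_pct=80.0, clay_pct=5.0)
por_clay, k_clay = cosby_ptf(sand_pct=10.0, clay_pct=50.0)
print(k_sand / k_clay)   # more than an order of magnitude apart
```

    Because conductivity depends exponentially on texture, small differences between the two soil maps translate into large differences in simulated drainage, which is consistent with the sensitivity reported for coarse-textured regions.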

  14. Parallel Adaptive High-Order CFD Simulations Characterizing SOFIA Cavity Acoustics

    NASA Technical Reports Server (NTRS)

    Barad, Michael F.; Brehm, Christoph; Kiris, Cetin C.; Biswas, Rupak

    2016-01-01

    This paper presents large-scale MPI-parallel computational fluid dynamics simulations for the Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is an airborne, 2.5-meter infrared telescope mounted in an open cavity in the aft fuselage of a Boeing 747SP. These simulations focus on how the unsteady flow field inside and over the cavity interferes with the optical path and mounting structure of the telescope. A temporally fourth-order accurate Runge-Kutta and spatially fifth-order accurate WENO-5Z scheme was used to perform implicit large eddy simulations. An immersed boundary method provides automated gridding for complex geometries and natural coupling to a block-structured Cartesian adaptive mesh refinement framework. Strong scaling studies using NASA's Pleiades supercomputer with up to 32k CPU cores and 4 billion computational cells show excellent scaling. Dynamic load balancing based on execution time on individual AMR blocks addresses the irregular numerical cost associated with blocks containing boundaries. Limits to scaling beyond 32k cores are identified, and targeted code optimizations are discussed.

  15. Parallel Adaptive High-Order CFD Simulations Characterizing SOFIA Cavity Acoustics

    NASA Technical Reports Server (NTRS)

    Barad, Michael F.; Brehm, Christoph; Kiris, Cetin C.; Biswas, Rupak

    2015-01-01

    This paper presents large-scale MPI-parallel computational fluid dynamics simulations for the Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is an airborne, 2.5-meter infrared telescope mounted in an open cavity in the aft fuselage of a Boeing 747SP. These simulations focus on how the unsteady flow field inside and over the cavity interferes with the optical path and mounting structure of the telescope. A temporally fourth-order accurate Runge-Kutta and a spatially fifth-order accurate WENO-5Z scheme were used to perform implicit large eddy simulations. An immersed boundary method provides automated gridding for complex geometries and natural coupling to a block-structured Cartesian adaptive mesh refinement framework. Strong scaling studies using NASA's Pleiades supercomputer with up to 32k CPU cores and 4 billion computational cells show excellent scaling. Dynamic load balancing based on execution time on individual AMR blocks addresses the irregular numerical cost associated with blocks containing boundaries. Limits to scaling beyond 32k cores are identified, and targeted code optimizations are discussed.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herrnstein, Aaron R.

    An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model, with some components incorporated from other well-known ocean models where appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean, and a suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine-scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested, despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No dramatic or persistent signs of error growth in the passive tracer outgassing or the ocean circulation are observed to result from AMR.

  17. Load Balancing Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pearce, Olga Tkachyshyn

    2014-12-01

    The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate the behavior of physical systems that evolve over time, so their workloads change over time. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model for comparison of load balance algorithms for a specific state of the simulation that enables the selection of a balancing algorithm that will minimize overall runtime.
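    One example of a "fast, local" strategy is a greedy longest-processing-time assignment of measured block costs to processors. This is a generic illustration of the idea, not the dissertation's algorithm:

```python
import heapq

# Greedy LPT: assign the most expensive work units first, always to the
# currently least-loaded processor.
def greedy_balance(block_costs, n_procs):
    heap = [(0.0, p) for p in range(n_procs)]       # (accumulated load, proc id)
    heapq.heapify(heap)
    assignment = {}
    for block, cost in sorted(enumerate(block_costs), key=lambda x: -x[1]):
        load, proc = heapq.heappop(heap)
        assignment[block] = proc
        heapq.heappush(heap, (load + cost, proc))
    return assignment, max(load for load, _ in heap)

costs = [4.0, 3.0, 3.0, 2.0, 2.0, 2.0]   # measured per-block execution times
assignment, makespan = greedy_balance(costs, n_procs=3)
print(makespan)   # 6.0
```

    Since the slowest processor sets the pace at each SPMD synchronization point, minimizing this makespan is exactly the quantity a rebalancing decision should target.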

  18. Comparison of Phase Field Crystal and Molecular Dynamics Simulations for a Shrinking Grain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radhakrishnan, Balasubramaniam; Gorti, Sarma B; Nicholson, Don M

    2012-01-01

    The Phase-Field Crystal (PFC) model represents the atomic density as a continuous function whose spatial distribution evolves on diffusional, rather than vibrational, time scales. PFC provides a tool to study defect interactions at the atomistic level but over longer time scales than molecular dynamics (MD). We examine the behavior of the PFC model with the goal of relating the PFC parameters to physical parameters of real systems derived from MD simulations. For this purpose we model the phenomenon of the shrinking of a spherical grain situated in a matrix. By comparing the rate of shrinking of the central grain using MD and PFC, we obtain a relationship between PFC and MD time scales for processes driven by grain boundary diffusion. The morphological changes in the central grain, including grain shape and grain rotation, are also examined in order to assess the accuracy of the PFC in capturing the evolution path predicted by MD.

  19. Fluid pumping using magnetic cilia

    NASA Astrophysics Data System (ADS)

    Hanasoge, Srinivas; Ballard, Matt; Alexeev, Alexander; Hesketh, Peter; Woodruff School of Mechanical Engineering Team

    2016-11-01

    Using experiments and computer simulations, we examine fluid pumping by artificial magnetic cilia fabricated using surface micromachining techniques. An asymmetry between the forward and recovery strokes of the elastic cilia causes net pumping in the creeping-flow regime. We show that this asymmetry in the ciliary strokes is due to the change in magnetization of the elastic cilia combined with the viscous force of the fluid. Specifically, the time scale of the forward stroke is mostly governed by the magnetic forces, whereas the time scale of the recovery stroke is determined by the elastic and viscous forces. These different time scales result in different cilia deformations during the forward and backward strokes, which in turn lead to the asymmetry in the ciliary motion. To disclose the physics of magnetic cilia pumping, we use a hybrid lattice Boltzmann and lattice spring method, and we validate our model by comparing the simulation results with the experimental data. The results of our study will be useful for designing microfluidic systems for fluid mixing and particle manipulation, including manipulation of different biological particles. USDA and NSF.

  20. Large-Scale Simulation of Multi-Asset Ising Financial Markets

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2017-03-01

    We perform a large-scale simulation of an Ising-based financial market model that includes 300 asset time series. The financial system simulated by the model shows a fat-tailed return distribution and volatility clustering, and exhibits unstable periods indicated by the volatility index, measured as the average of absolute returns. Moreover, we determine that the cumulative risk fraction, which measures the system risk, changes during high-volatility periods. We also calculate the inverse participation ratio (IPR) and its higher-power version, IPR6, from the absolute-return cross-correlation matrix. Finally, we show that the IPR and IPR6 also change during high-volatility periods.
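    The IPR computation described above is straightforward to sketch: build the cross-correlation matrix of absolute returns, diagonalize it, and sum the fourth (and sixth) powers of each eigenvector's components. The random "returns" below are synthetic placeholders, not the model's output:

```python
import numpy as np

# IPR = sum_i u_i**4 and IPR6 = sum_i u_i**6 per normalized eigenvector u.
rng = np.random.default_rng(7)
returns = rng.standard_normal((250, 30))            # 250 days x 30 assets (toy data)
corr = np.corrcoef(np.abs(returns), rowvar=False)   # 30 x 30 cross-correlation

eigvals, eigvecs = np.linalg.eigh(corr)             # columns are unit eigenvectors
ipr = np.sum(eigvecs**4, axis=0)
ipr6 = np.sum(eigvecs**6, axis=0)

# IPR ranges from 1/N (fully delocalized eigenvector) to 1 (localized on one asset).
print(ipr.min() >= 1.0 / 30, ipr.max() <= 1.0)
```

    A rise in IPR during high-volatility periods therefore signals that correlation eigenmodes become concentrated on fewer assets, which is the localization effect the abstract tracks.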

  1. Scaled boundary finite element simulation and modeling of the mechanical behavior of cracked nanographene sheets

    NASA Astrophysics Data System (ADS)

    Honarmand, M.; Moradi, M.

    2018-06-01

    In this paper, the scaled boundary finite element method (SBFM) is used to simulate perfect and cracked nanographene sheets for the first time. In this analysis, the atomic carbon bonds are modeled by simple bar elements with circular cross-sections. In contrast to molecular dynamics (MD), the results obtained from the SBFM analysis are quite acceptable for zero-degree cracks. For all angles except zero, the Griffith criterion can be applied to relate critical stress and crack length. Finally, despite the simplifications used in the nanographene analysis, the obtained results reproduce the mechanical behavior with high accuracy compared with experimental and MD results.

  2. A k-epsilon modeling of near wall turbulence

    NASA Technical Reports Server (NTRS)

    Yang, Z.; Shih, T. H.

    1991-01-01

    A k-epsilon model is proposed for turbulent bounded flows. In this model, the turbulent velocity scale and turbulent time scale are used to define the eddy viscosity. The time scale is shown to be bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using the time scale, removing the need to introduce the pseudo-dissipation. A damping function is chosen such that the shear stress satisfies the near wall asymptotic behavior. The model constants used are the same as the model constants in the commonly used high turbulent Reynolds number k-epsilon model. Fully developed turbulent channel flows and turbulent boundary layer flows over a flat plate at various Reynolds numbers are used to validate the model. The model predictions were found to be in good agreement with the direct numerical simulation data.
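    The bounded time scale described above can be sketched as follows. The damping function and the exact formulation of the proposed model are omitted, so this shows only the generic structure ν_t = C_μ k T with T bounded below by the Kolmogorov time scale √(ν/ε):

```python
import numpy as np

# Generic k-epsilon-style eddy viscosity with a Kolmogorov lower bound on the
# turbulent time scale (a structural sketch, not the paper's full model).
C_MU = 0.09   # standard high-Reynolds-number k-epsilon constant

def eddy_viscosity(k, eps, nu):
    t_turb = k / eps                   # large-scale turbulent time scale
    t_kolmogorov = np.sqrt(nu / eps)   # lower bound, relevant near the wall
    t = np.maximum(t_turb, t_kolmogorov)
    return C_MU * k * t

# Near a wall k -> 0 while eps stays finite, so k/eps -> 0; the Kolmogorov
# bound keeps the time scale (and the dissipation equation) well behaved there.
print(eddy_viscosity(k=1e-6, eps=1e-2, nu=1.5e-5))
```

    Without the bound, the time scale k/ε vanishes at the wall; the bound is what removes the need for the pseudo-dissipation in the reformulated dissipation equation.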

  3. T-cell epitope prediction and immune complex simulation using molecular dynamics: state of the art and persisting challenges

    PubMed Central

    2010-01-01

    Atomistic Molecular Dynamics provides powerful and flexible tools for the prediction and analysis of molecular and macromolecular systems. Specifically, it provides a means by which we can measure theoretically that which cannot be measured experimentally: the dynamic time-evolution of complex systems comprising atoms and molecules. It is particularly suitable for the simulation and analysis of the otherwise inaccessible details of MHC-peptide interaction and, on a larger scale, the simulation of the immune synapse. Progress has been relatively tentative, yet the emergence of truly high-performance computing and the development of coarse-grained simulation now offer us the hope of accurately predicting thermodynamic parameters and of simulating not merely a handful of proteins but larger, longer simulations comprising thousands of protein molecules and the cellular-scale structures they form. We exemplify this within the context of immunoinformatics. PMID:21067546

  4. Large-eddy simulation of a turbulent mixing layer

    NASA Technical Reports Server (NTRS)

    Mansour, N. N.; Ferziger, J. H.; Reynolds, W. C.

    1978-01-01

    The three dimensional, time dependent (incompressible) vorticity equations were used to simulate numerically the decay of isotropic box turbulence and time developing mixing layers. The vorticity equations were spatially filtered to define the large scale turbulence field, and the subgrid scale turbulence was modeled. A general method was developed to show numerical conservation of momentum, vorticity, and energy. The terms that arise from filtering the equations were treated (for both periodic boundary conditions and no stress boundary conditions) in a fast and accurate way by using fast Fourier transforms. Use of vorticity as the principal variable is shown to produce results equivalent to those obtained by use of the primitive variable equations.

  5. Plasticity-mediated collapse and recrystallization in hollow copper nanowires: a molecular dynamics simulation

    PubMed Central

    Raychaudhuri, Arup Kumar; Saha-Dasgupta, Tanusri

    2016-01-01

    We study the thermal stability of hollow copper nanowires using molecular dynamics simulation. We find that plasticity-mediated structural evolution leads to transformation of the initial hollow structure into a solid wire. The process involves three distinct stages, namely collapse, recrystallization, and slow recovery. We calculate the time scales associated with the different stages of the evolution process. Our findings suggest a plasticity-mediated mechanism of collapse and recrystallization. This contradicts the prevailing notion that diffusion-driven transport of vacancies from the interior to the outer surface is responsible for collapse, which would involve much longer time scales than the plasticity-based mechanism. PMID:26977380

  6. Dynamic measurements and simulations of airborne picolitre-droplet coalescence in holographic optical tweezers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bzdek, Bryan R.; Reid, Jonathan P., E-mail: j.p.reid@bristol.ac.uk; Collard, Liam

    We report studies of the coalescence of pairs of picolitre aerosol droplets manipulated with holographic optical tweezers, probing the shape relaxation dynamics following coalescence by simultaneously monitoring the intensity of elastic backscattered light (EBL) from the trapping laser beam (time resolution on the order of 100 ns) while recording high frame rate camera images (time resolution <10 μs). The goals of this work are to resolve the dynamics of droplet coalescence in holographic optical traps; assign the origin of key features in the time-dependent EBL intensity; and validate the use of the EBL alone to precisely determine droplet surface tension and viscosity. For low viscosity droplets, two sequential processes are evident: binary coalescence first results from the overlap of the optical traps on the time scale of microseconds, followed by the recapture of the composite droplet in an optical trap on the time scale of milliseconds. As droplet viscosity increases, the relaxation in droplet shape eventually occurs on the same time scale as recapture, resulting in a convoluted evolution of the EBL intensity that inhibits quantitative determination of the relaxation time scale. Droplet coalescence was simulated using a computational framework to validate both experimental approaches. The results indicate that time-dependent monitoring of droplet shape from the EBL intensity allows for robust determination of properties such as surface tension and viscosity. Finally, the potential of high frame rate imaging to examine the coalescence of dissimilar viscosity droplets is discussed.

  7. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator.

    PubMed

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
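    The need for time-continuous interactions can be illustrated with a toy rate network: with instantaneous coupling, each step requires solving r = f(Wr + I) self-consistently, which an iterative (waveform-relaxation-style) scheme does without per-sub-step communication. This sketch is generic and is not NEST's implementation:

```python
import numpy as np

# Fixed-point iteration for instantaneous rate interactions r = tanh(W r + I).
rng = np.random.default_rng(3)
n = 20
W = 0.05 * rng.standard_normal((n, n))   # weak coupling keeps the map contractive
I = rng.standard_normal(n)

r = np.zeros(n)
for iteration in range(50):
    r_new = np.tanh(W @ r + I)           # one relaxation sweep over all units
    if np.max(np.abs(r_new - r)) < 1e-12:
        break
    r = r_new
print(f"fixed point reached after {iteration} iterations")
```

    Iterating whole waveforms to convergence, rather than exchanging values at every sub-step, is the communication-reduction idea behind the waveform-relaxation approach mentioned in the abstract.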

  8. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator

    PubMed Central

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation. PMID:28596730

  9. What FIREs Up Star Formation: the Emergence of the Kennicutt-Schmidt Law from Feedback

    NASA Astrophysics Data System (ADS)

    Orr, Matthew E.; Hayward, Christopher C.; Hopkins, Philip F.; Chan, T. K.; Faucher-Giguère, Claude-André; Feldmann, Robert; Kereš, Dušan; Murray, Norman; Quataert, Eliot

    2018-05-01

    We present an analysis of the global and spatially-resolved Kennicutt-Schmidt (KS) star formation relation in the FIRE (Feedback In Realistic Environments) suite of cosmological simulations, including halos with z = 0 masses ranging from 1010 - 1013 M⊙. We show that the KS relation emerges and is robustly maintained due to the effects of feedback on local scales regulating star-forming gas, independent of the particular small-scale star formation prescriptions employed. We demonstrate that the time-averaged KS relation is relatively independent of redshift and spatial averaging scale, and that the star formation rate surface density is weakly dependent on metallicity and inversely dependent on orbital dynamical time. At constant star formation rate surface density, the `Cold & Dense' gas surface density (gas with T < 300 K and n > 10 cm-3, used as a proxy for the molecular gas surface density) of the simulated galaxies is ˜0.5 dex less than observed at ˜kpc scales. This discrepancy may arise from underestimates of the local column density at the particle-scale for the purposes of shielding in the simulations. Finally, we show that on scales larger than individual giant molecular clouds, the primary condition that determines whether star formation occurs is whether a patch of the galactic disk is thermally Toomre-unstable (not whether it is self-shielding): once a patch can no longer be thermally stabilized against fragmentation, it collapses, becomes self-shielding, cools, and forms stars, regardless of epoch or environment.
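
    The stated inverse dependence of the star formation rate surface density on the orbital dynamical time is often written as an Elmegreen-style dynamical KS law, Σ_SFR = ε Σ_gas / t_dyn. A toy evaluation (the efficiency ε and the input values are illustrative assumptions, not figures from the paper):

```python
def sfr_surface_density(sigma_gas, t_dyn, eff=0.02):
    """Dynamical KS law: Sigma_SFR = eff * Sigma_gas / t_dyn."""
    return eff * sigma_gas / t_dyn

# Illustrative inputs: 10 Msun/pc^2 of gas, 100 Myr orbital time
sigma_sfr = sfr_surface_density(sigma_gas=10.0, t_dyn=1.0e8)  # Msun/pc^2/yr
```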

  10. Simulations of eddy kinetic energy transport in barotropic turbulence

    NASA Astrophysics Data System (ADS)

    Grooms, Ian

    2017-11-01

    Eddy energy transport in rotating two-dimensional turbulence is investigated using numerical simulation. Stochastic forcing is used to generate an inhomogeneous field of turbulence and the time-mean energy profile is diagnosed. An advective-diffusive model for the transport is fit to the simulation data by requiring the model to accurately predict the observed time-mean energy distribution. Isotropic harmonic diffusion of energy is found to be an accurate model in the case of uniform, solid-body background rotation (the f plane), with a diffusivity that scales reasonably well with a mixing-length law κ ∝ Vℓ, where V and ℓ are characteristic eddy velocity and length scales. Passive tracer dynamics are added and it is found that the energy diffusivity is 75% of the tracer diffusivity. The addition of a differential background rotation with constant vorticity gradient β leads to significant changes to the energy transport. The eddies generate and interact with a mean flow that advects the eddy energy. Mean advection plus anisotropic diffusion (with reduced diffusivity in the direction of the background vorticity gradient) is moderately accurate for flows with scale separation between the eddies and the mean flow, but anisotropic diffusion becomes a much less accurate model of the transport when scale separation breaks down. Finally, it is observed that the time-mean eddy energy does not look like the actual eddy energy distribution at any instant of time. In the future, stochastic models of the eddy energy transport may prove more useful than models of the mean transport for predicting realistic eddy energy distributions.
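
    The mixing-length scaling κ ∝ Vℓ, together with the reported result that the energy diffusivity is 75% of the tracer diffusivity, can be sketched as follows (the prefactor and the input values are illustrative assumptions):

```python
def mixing_length_diffusivity(v_eddy, length, c=1.0):
    """Mixing-length law kappa = c * V * l; the prefactor c is a fit parameter."""
    return c * v_eddy * length

kappa_tracer = mixing_length_diffusivity(v_eddy=0.1, length=5.0e4)  # m^2/s
kappa_energy = 0.75 * kappa_tracer  # energy diffusivity is 75% of the tracer's
```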

  11. Parameter Studies, time-dependent simulations and design with automated Cartesian methods

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael

    2005-01-01

    Over the past decade, NASA has made a substantial investment in developing adaptive Cartesian grid methods for aerodynamic simulation. Cartesian-based methods played a key role in both the Space Shuttle Accident Investigation and in NASA's return to flight activities. The talk will provide an overview of recent technological developments, focusing on the generation of large-scale aerodynamic databases, automated CAD-based design, and time-dependent simulations of bodies in relative motion. Automation, scalability and robustness underlie all of these applications, and research in each of these topics will be presented.

  12. Final Report for DE-FG02-99ER45795

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkins, John Warren

    The research supported by this grant focuses on atomistic studies of defects, phase transitions, electronic and magnetic properties, and mechanical behaviors of materials. We have been studying novel properties of various emerging nanoscale materials on multiple levels of length and time scales, and have made accurate predictions on many technologically important properties. A significant part of our research has been devoted to exploring properties of novel nano-scale materials by pushing the limit of quantum mechanical simulations, and to the development of a rigorous scheme to design accurate classical inter-atomic potentials for larger-scale atomistic simulations of many technologically important metals and metal alloys.

  13. Matching time and spatial scales of rapid solidification: dynamic TEM experiments coupled to CALPHAD-informed phase-field simulations

    NASA Astrophysics Data System (ADS)

    Perron, Aurelien; Roehling, John D.; Turchi, Patrice E. A.; Fattebert, Jean-Luc; McKeown, Joseph T.

    2018-01-01

    A combination of dynamic transmission electron microscopy (DTEM) experiments and CALPHAD-informed phase-field simulations was used to study rapid solidification in Cu-Ni thin-film alloys. Experiments—conducted in the DTEM—consisted of in situ laser melting and determination of the solidification kinetics by monitoring the solid-liquid interface and the overall microstructure evolution (time-resolved measurements) during the solidification process. Modelling of the Cu-Ni alloy microstructure evolution was based on a phase-field model that included realistic Gibbs energies and diffusion coefficients from the CALPHAD framework (thermodynamic and mobility databases). DTEM and post mortem experiments highlighted the formation of microsegregation-free columnar grains with interface velocities varying from ˜0.1 to ˜0.6 m s-1. After an ‘incubation’ time, the velocity of the planar solid-liquid interface accelerated until solidification was complete. In addition, a decrease of the temperature gradient induced a decrease in the interface velocity. The modelling strategy permitted the simulation (in 1D and 2D) of the solidification process from the initially diffusion-controlled to the nearly partitionless regimes. Finally, results of DTEM experiments and phase-field simulations (grain morphology, solute distribution, and solid-liquid interface velocity) were consistent at similar time (μs) and spatial scales (μm).

  14. Matching time and spatial scales of rapid solidification: Dynamic TEM experiments coupled to CALPHAD-informed phase-field simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perron, Aurelien; Roehling, John D.; Turchi, Patrice E. A.

    A combination of dynamic transmission electron microscopy (DTEM) experiments and CALPHAD-informed phase-field simulations was used to study rapid solidification in Cu–Ni thin-film alloys. Experiments—conducted in the DTEM—consisted of in situ laser melting and determination of the solidification kinetics by monitoring the solid–liquid interface and the overall microstructure evolution (time-resolved measurements) during the solidification process. Modelling of the Cu–Ni alloy microstructure evolution was based on a phase-field model that included realistic Gibbs energies and diffusion coefficients from the CALPHAD framework (thermodynamic and mobility databases). DTEM and post mortem experiments highlighted the formation of microsegregation-free columnar grains with interface velocities varying from ~0.1 to ~0.6 m s-1. After an 'incubation' time, the velocity of the planar solid–liquid interface accelerated until solidification was complete. In addition, a decrease of the temperature gradient induced a decrease in the interface velocity. The modelling strategy permitted the simulation (in 1D and 2D) of the solidification process from the initially diffusion-controlled to the nearly partitionless regimes. Lastly, results of DTEM experiments and phase-field simulations (grain morphology, solute distribution, and solid–liquid interface velocity) were consistent at similar time (μs) and spatial scales (μm).

  15. Matching time and spatial scales of rapid solidification: Dynamic TEM experiments coupled to CALPHAD-informed phase-field simulations

    DOE PAGES

    Perron, Aurelien; Roehling, John D.; Turchi, Patrice E. A.; ...

    2017-12-05

    A combination of dynamic transmission electron microscopy (DTEM) experiments and CALPHAD-informed phase-field simulations was used to study rapid solidification in Cu–Ni thin-film alloys. Experiments—conducted in the DTEM—consisted of in situ laser melting and determination of the solidification kinetics by monitoring the solid–liquid interface and the overall microstructure evolution (time-resolved measurements) during the solidification process. Modelling of the Cu–Ni alloy microstructure evolution was based on a phase-field model that included realistic Gibbs energies and diffusion coefficients from the CALPHAD framework (thermodynamic and mobility databases). DTEM and post mortem experiments highlighted the formation of microsegregation-free columnar grains with interface velocities varying from ~0.1 to ~0.6 m s-1. After an 'incubation' time, the velocity of the planar solid–liquid interface accelerated until solidification was complete. In addition, a decrease of the temperature gradient induced a decrease in the interface velocity. The modelling strategy permitted the simulation (in 1D and 2D) of the solidification process from the initially diffusion-controlled to the nearly partitionless regimes. Lastly, results of DTEM experiments and phase-field simulations (grain morphology, solute distribution, and solid–liquid interface velocity) were consistent at similar time (μs) and spatial scales (μm).

  16. Comparison of an algebraic multigrid algorithm to two iterative solvers used for modeling ground water flow and transport

    USGS Publications Warehouse

    Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.

    2002-01-01

    Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
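
    AMG builds its coarse levels algebraically from the matrix itself, but the underlying multigrid principle (smooth, restrict the residual, solve coarsely, prolong the correction) can be illustrated with a geometric two-grid cycle for a 1D Poisson problem. This is a didactic sketch of the multigrid idea, not the AMG algorithm used in the study:

```python
import numpy as np

def poisson(n):
    """Standard 3-point finite-difference Laplacian on n interior points."""
    h = 1.0 / (n + 1)
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def jacobi(A, u, f, sweeps=3, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing sweeps (damp high-frequency error)."""
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + w * (f - A @ u) / d
    return u

def restrict(r):
    """Full-weighting restriction, fine -> coarse."""
    return 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])

def prolong(e, n_fine):
    """Linear-interpolation prolongation, coarse -> fine."""
    u = np.zeros(n_fine)
    u[1::2] = e
    u[2:-1:2] = 0.5 * (e[:-1] + e[1:])
    u[0], u[-1] = 0.5 * e[0], 0.5 * e[-1]
    return u

def two_grid(A, Ac, u, f):
    u = jacobi(A, u, f)                               # pre-smooth
    ec = np.linalg.solve(Ac, restrict(f - A @ u))     # coarse-grid correction
    return jacobi(A, u + prolong(ec, len(u)), f)      # correct and post-smooth

n = 63
A, Ac = poisson(n), poisson((n - 1) // 2)
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
f = np.pi**2 * np.sin(np.pi * x)        # -u'' = f has solution sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(A, Ac, u, f)
rel_res = np.linalg.norm(f - A @ u) / np.linalg.norm(f)
```

    The residual shrinks by a roughly constant factor per cycle independent of n, which is the property that makes multigrid-type solvers attractive for the large flow and transport systems discussed above.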

  17. High-resolution time-frequency representation of EEG data using multi-scale wavelets

    NASA Astrophysics Data System (ADS)

    Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina

    2017-09-01

    An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with applications to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented using a novel multi-scale wavelet decomposition scheme, which can capture the smooth trends of the time-varying parameters while simultaneously tracking their abrupt changes. A forward orthogonal least squares (FOLS) algorithm aided by mutual information criteria is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide highly time-dependent spectral resolution capability.
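
    The core idea of expanding time-varying AR parameters onto basis functions and estimating the expansion weights by least squares can be sketched for a TVAR(1) model. Here simple polynomial basis functions stand in for the paper's multi-scale wavelets, and all numerical values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
t = np.arange(n) / n
a_true = 0.5 + 0.3 * t                  # slowly drifting AR(1) coefficient
y = np.zeros(n)
for k in range(1, n):
    y[k] = a_true[k] * y[k - 1] + 0.1 * rng.standard_normal()

# Expand a(t) on basis functions B_k(t) and estimate the expansion
# weights c_k by ordinary least squares:
#   y_t = sum_k c_k B_k(t) y_{t-1} + e_t
B = np.column_stack([np.ones(n), t])    # here: polynomial basis {1, t}
X = B[1:] * y[:-1, None]                # regressors B_k(t) * y_{t-1}
c, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
a_hat = B @ c                           # reconstructed a(t)
```

    The wavelet version replaces the polynomial basis with multi-scale wavelet functions and prunes the regressor set with FOLS rather than solving a dense least-squares problem.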

  18. Investigation of relationships between parameters of solar nano-flares and solar activity

    NASA Astrophysics Data System (ADS)

    Safari, Hossein; Javaherian, Mohsen; Kaki, Bardia

    2016-07-01

    Solar flares are among the important coronal events that originate in solar magnetic activity. They release large amounts of energy immediately after being triggered. Flare prediction can play a major role in mitigating potential damage on Earth. Here, to interpret solar large-scale events, we investigate relationships between small-scale events (nano-flares) and large-scale events such as flares. In our method, nano-flare intensity time series are simulated using a Monte Carlo method. Then, solar full-disk images at 171 angstrom recorded by SDO/AIA are employed. Parts of the solar disk (quiet Sun (QS), coronal holes (CHs), and active regions (ARs)) are cropped and the time series of these regions are extracted. To compare the simulated nano-flare intensity time series with the intensity time series extracted from different parts of the Sun, artificial neural networks are employed. We are thereby able to extract physical parameters of nano-flares, such as their kick and decay-rate lifetimes and the exponent of their power-law distributions. The variation of the power-law exponent within the QS and CHs is similar to that within ARs. Thus, by observing a small part of the Sun, we can track the evolution of solar activity.
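
    Monte Carlo generation of nano-flare energies from a bounded power-law distribution is typically done by inverse-transform sampling; a minimal sketch (the exponent and energy bounds are illustrative assumptions, not values from the study):

```python
import random

def sample_power_law(alpha, e_min, e_max, n, seed=42):
    """Inverse-transform sampling of p(E) ~ E^(-alpha) on [e_min, e_max]."""
    rng = random.Random(seed)
    a1 = 1.0 - alpha
    lo, hi = e_min**a1, e_max**a1
    return [(lo + rng.random() * (hi - lo)) ** (1.0 / a1) for _ in range(n)]

# Illustrative exponent and (dimensionless) energy bounds
energies = sample_power_law(alpha=1.8, e_min=1.0, e_max=1.0e4, n=10000)
frac_small = sum(1 for e in energies if e < 10.0) / len(energies)
```

    A full nano-flare time-series simulation would convolve such sampled events with rise/decay kernels at random arrival times.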

  19. Large Eddy Simulation of a Turbulent Jet

    NASA Technical Reports Server (NTRS)

    Webb, A. T.; Mansour, Nagi N.

    2001-01-01

    Here we present the results of a Large Eddy Simulation of a non-buoyant jet issuing from a circular orifice in a wall, and developing in neutral surroundings. The effects of the subgrid scales on the large eddies have been modeled with the dynamic large eddy simulation model applied to the fully 3D domain in spherical coordinates. The simulation captures the unsteady motions of the large-scales within the jet as well as the laminar motions in the entrainment region surrounding the jet. The computed time-averaged statistics (mean velocity, concentration, and turbulence parameters) compare well with laboratory data without invoking an empirical entrainment coefficient as employed by line integral models. The use of the large eddy simulation technique allows examination of unsteady and inhomogeneous features such as the evolution of eddies and the details of the entrainment process.

  20. End-monomer Dynamics in Semiflexible Polymers

    PubMed Central

    Hinczewski, Michael; Schlagberger, Xaver; Rubinstein, Michael; Krichevsky, Oleg; Netz, Roland R.

    2009-01-01

    Spurred by an experimental controversy in the literature, we investigate the end-monomer dynamics of semiflexible polymers through Brownian hydrodynamic simulations and dynamic mean-field theory. Precise experimental observations over the last few years of end-monomer dynamics in the diffusion of double-stranded DNA have given conflicting results: one study indicated an unexpected Rouse-like scaling of the mean squared displacement (MSD) 〈r2(t)〉 ~ t1/2 at intermediate times, corresponding to fluctuations at length scales larger than the persistence length but smaller than the coil size; another study claimed the more conventional Zimm scaling 〈r2(t)〉 ~ t2/3 in the same time range. Using hydrodynamic simulations, analytical and scaling theories, we find a novel intermediate dynamical regime where the effective local exponent of the end-monomer MSD, α(t) = d log〈r2(t)〉/d log t, drops below the Zimm value of 2/3 for sufficiently long chains. The deviation from the Zimm prediction increases with chain length, though it does not reach the Rouse limit of 1/2. The qualitative features of this intermediate regime, found in simulations and in an improved mean-field theory for semiflexible polymers, in particular the variation of α(t) with chain and persistence lengths, can be reproduced through a heuristic scaling argument. Anomalously low values of the effective exponent α are explained by hydrodynamic effects related to the slow crossover from dynamics on length scales smaller than the persistence length to dynamics on larger length scales. PMID:21359118
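
    The effective local exponent α(t) = d log⟨r²(t)⟩/d log t can be estimated from MSD data by central differences on a logarithmic time grid; a minimal sketch, verified here on a synthetic Zimm-like power law:

```python
import math

def local_exponent(times, msd):
    """alpha(t) = d log<r^2(t)> / d log t via central differences in log-log."""
    logt = [math.log(t) for t in times]
    logm = [math.log(m) for m in msd]
    return [(logm[i + 1] - logm[i - 1]) / (logt[i + 1] - logt[i - 1])
            for i in range(1, len(times) - 1)]

# Synthetic Zimm-like MSD, <r^2> ~ t^(2/3): the recovered exponent is 2/3
times = [10.0 ** (k / 10.0) for k in range(31)]
msd = [t ** (2.0 / 3.0) for t in times]
alphas = local_exponent(times, msd)
```

    Applied to simulated or measured end-monomer MSDs, a dip of this exponent below 2/3 at intermediate times is the signature of the intermediate regime described above.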

  1. Cross-Scale Modelling of Subduction from Minute to Million of Years Time Scale

    NASA Astrophysics Data System (ADS)

    Sobolev, S. V.; Muldashev, I. A.

    2015-12-01

    Subduction is an essentially multi-scale process, with time scales spanning from the geological scale to the earthquake scale, with the seismic cycle in between. Modelling such a process constitutes one of the largest challenges in geodynamic modelling today. Here we present a cross-scale thermomechanical model capable of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology, and rate-and-state friction plasticity. The model generates spontaneous earthquake sequences. The adaptive time-step algorithm recognizes the moment of instability and drops the integration time step to its minimum value of 40 s during the earthquake. The time step is then gradually increased to its maximal value of 5 yr, following the decreasing displacement rates during postseismic relaxation. Efficient implementation of numerical techniques allows long-term simulations with total times of millions of years. This technique allows the deformation process to be followed in detail during the entire seismic cycle and over multiple seismic cycles. We observe various deformation patterns during the modelled seismic cycle that are consistent with surface GPS observations, and demonstrate that, contrary to conventional ideas, the postseismic deformation may be controlled by viscoelastic relaxation in the mantle wedge, starting within only a few hours after great (M>9) earthquakes. Interestingly, in our model the average slip velocity at the fault closely follows a hyperbolic decay law. In natural observations, such deformation is interpreted as afterslip, while in our model it is caused by the viscoelastic relaxation of the mantle wedge, whose viscosity varies strongly with time. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-year time range.
We will also present results of modelling upper-plate deformation during multiple earthquake cycles at time scales of hundreds of thousands to millions of years, and discuss the effect of great earthquakes on the long-term stress field in the upper plate.
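
    The adaptive time-stepping described above (dropping to 40 s at an instability and relaxing back toward 5 yr) can be sketched as follows; the step limits follow the abstract, while the geometric growth factor is an assumption:

```python
YEAR = 3.15e7  # seconds, approximate

def next_time_step(dt, unstable, dt_min=40.0, dt_max=5.0 * YEAR, growth=1.5):
    """Drop to dt_min when an instability (earthquake) is detected, then
    grow the step geometrically back toward dt_max during postseismic
    relaxation. dt_min and dt_max follow the abstract; growth is assumed."""
    return dt_min if unstable else min(dt * growth, dt_max)

dt = 5.0 * YEAR
dt = next_time_step(dt, unstable=True)   # earthquake detected: dt -> 40 s
steps = [dt]
for _ in range(40):                      # postseismic relaxation
    dt = next_time_step(dt, unstable=False)
    steps.append(dt)
```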

  2. Mirrored continuum and molecular scale simulations of the ignition of high-pressure phases of RDX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kibaek; Stewart, D. Scott, E-mail: santc@illinois.edu, E-mail: dss@illinois.edu; Joshi, Kaushik

    2016-05-14

    We present a mirrored atomistic and continuum framework that is used to describe the ignition of energetic materials, and a high-pressure phase of RDX in particular. The continuum formulation uses meaningful averages of thermodynamic properties obtained from the atomistic simulation and a simplification of enormously complex reaction kinetics. In particular, components are identified based on molecular weight bin averages, and our methodology assumes that both the averaged atomistic and continuum simulations are represented on the same time and length scales. The atomistic simulations of thermally initiated ignition of RDX are performed using reactive molecular dynamics (RMD). The continuum model is based on multi-component thermodynamics and uses a kinetics scheme that describes observed chemical changes of the averaged atomistic simulations. Thus the mirrored continuum simulations mimic the rapid change in pressure, temperature, and average molecular weight of species in the reactive mixture. This mirroring enables a new technique to simplify the chemistry obtained from reactive MD simulations while retaining the observed features and spatial and temporal scales from both the RMD and continuum model. The primary benefit of this approach is a potentially powerful, but familiar way to interpret the atomistic simulations and understand the chemical events and reaction rates. The approach is quite general and thus can provide a way to model chemistry based on atomistic simulations and extend the reach of those simulations.

  3. Simulation of long-term influence from technical systems on permafrost with various short-scale and hourly operation modes in Arctic region

    NASA Astrophysics Data System (ADS)

    Vaganova, N. A.

    2017-12-01

    Technogenic and climatic influences have a significant impact on the degradation of permafrost. Long-term forecasts of such changes have to be taken into account in the oil and gas and construction industries in view of the development of the Arctic and Subarctic regions. We consider both constantly operating technical systems that affect changes in permafrost (for example, oil and gas wells) and technical systems that have a short-term impact on permafrost (for example, flare systems for emergency flaring of associated gas). The second type of technical system is rather complex to simulate, since the computations must resolve both short and long time scales, with variable time steps describing the complex technological processes. The main attention is paid to the simulation of long-term influence on the permafrost from the second type of technical system.

  4. A Decade-long Continental-Scale Convection-Resolving Climate Simulation on GPUs

    NASA Astrophysics Data System (ADS)

    Leutwyler, David; Fuhrer, Oliver; Lapillonne, Xavier; Lüthi, Daniel; Schär, Christoph

    2016-04-01

    The representation of moist convection in climate models represents a major challenge due to the small scales involved. Convection-resolving models have proven to be very useful tools in numerical weather prediction and in climate research. Using horizontal grid spacings of O(1 km), they allow deep convection to be explicitly resolved, leading to an improved representation of the water cycle. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in the supercomputing domain have led to new supercomputer designs that involve conventional multicore CPUs and accelerators such as graphics processing units (GPUs). One of the first atmospheric models to have been fully ported to GPUs is the Consortium for Small-Scale Modeling weather and climate model COSMO. This new version allows us to expand the simulation domain to areas spanning continents and the simulated time period to a full decade. We present results from a decade-long, convection-resolving climate simulation using the GPU-enabled COSMO version. The simulation is driven by the ERA-Interim reanalysis. The results illustrate how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. We discuss the performance of the convection-resolving modelling approach on the European scale. Specifically, we focus on the annual cycle of convection in Europe, on the organization of convective clouds, and on the verification of hourly rainfall with various high-resolution datasets.

  5. The Dynamics of Small-Scale Turbulence Driven Flows

    NASA Astrophysics Data System (ADS)

    Beer, M. A.; Hammett, G. W.

    1997-11-01

    The dynamics of small-scale fluctuation driven flows are of great interest for micro-instability driven turbulence, since nonlinear toroidal simulations have shown that these flows play an important role in the regulation of the turbulence and transport levels. The gyrofluid treatment of these flows was shown to be accurate for times shorter than a bounce time.(Beer, M. A., Ph. D. thesis, Princeton University (1995).) Since the decorrelation times of the turbulence are generally shorter than a bounce time, our original hypothesis was that this description was adequate. Recent work(Hinton, F. L., Rosenbluth, M. N., and Waltz, R. E., International Sherwood Fusion Theory Conference (1997).) pointed out possible problems with this hypothesis, emphasizing the existence of a linearly undamped component of the flow which could build up in time and lower the final turbulence level. While our original gyrofluid model reproduces some aspects of the linear flow, there are differences between the long time gyrofluid and kinetic linear results in some cases. On the other hand, if the long time behavior of these flows is dominated by nonlinear damping (which seems reasonable), then the existing nonlinear gyrofluid simulations may be sufficiently accurate. We test these possibilities by modifying the gyrofluid description of these flows and diagnosing the flow evolution in nonlinear simulations.

  6. A new time scale based k-epsilon model for near wall turbulence

    NASA Technical Reports Server (NTRS)

    Yang, Z.; Shih, T. H.

    1992-01-01

    A k-epsilon model is proposed for wall-bounded turbulent flows. In this model, the eddy viscosity is characterized by a turbulent velocity scale and a turbulent time scale. The time scale is bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using this time scale, and no singularity exists at the wall. The damping function used in the eddy viscosity is chosen to be a function of R(sub y) = (k(sup 1/2)y)/v instead of y(+). Hence, the model can be used for flows with separation. The model constants used are the same as in the high Reynolds number standard k-epsilon model. Thus, the proposed model will also be suitable for flows far from the wall. Turbulent channel flows at different Reynolds numbers and turbulent boundary layer flows with and without pressure gradient are calculated. Results show that the model predictions are in good agreement with direct numerical simulation and experimental data.
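
    The key construction, a turbulent time scale bounded from below by the Kolmogorov time so that the eddy viscosity and dissipation equation remain regular at the wall, can be sketched as follows (c_mu is the standard model constant; the damping function f_mu, a function of R_y in the model, is set to 1 here for illustration):

```python
import math

def turbulent_time_scale(k, eps, nu):
    """Eddy turnover time k/eps, bounded below by the Kolmogorov time
    sqrt(nu/eps)."""
    return max(k / eps, math.sqrt(nu / eps))

def eddy_viscosity(k, eps, nu, c_mu=0.09, f_mu=1.0):
    """nu_t = c_mu * f_mu * k * T with the bounded time scale T, so the
    formulation stays regular as k -> 0 at the wall."""
    return c_mu * f_mu * k * turbulent_time_scale(k, eps, nu)

nu = 1.5e-5                                        # air, m^2/s
t_far = turbulent_time_scale(1.0, 10.0, nu)        # k/eps dominates
t_wall = turbulent_time_scale(1e-8, 10.0, nu)      # Kolmogorov floor active
```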

  7. New time scale based k-epsilon model for near-wall turbulence

    NASA Technical Reports Server (NTRS)

    Yang, Z.; Shih, T. H.

    1993-01-01

    A k-epsilon model is proposed for wall-bounded turbulent flows. In this model, the eddy viscosity is characterized by a turbulent velocity scale and a turbulent time scale. The time scale is bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using this time scale, and no singularity exists at the wall. The damping function used in the eddy viscosity is chosen to be a function of R(sub y) = (k(sup 1/2)y)/v instead of y(+). Hence, the model can be used for flows with separation. The model constants used are the same as in the high Reynolds number standard k-epsilon model. Thus, the proposed model will also be suitable for flows far from the wall. Turbulent channel flows at different Reynolds numbers and turbulent boundary layer flows with and without pressure gradient are calculated. Results show that the model predictions are in good agreement with direct numerical simulation and experimental data.

  8. Multi-scale approach to the modeling of fission gas discharge during hypothetical loss-of-flow accident in gen-IV sodium fast reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behafarid, F.; Shaver, D. R.; Bolotnov, I. A.

    The required technological and safety standards for future Gen IV reactors can only be achieved if advanced simulation capabilities become available, which combine high performance computing with the necessary level of modeling detail and high accuracy of predictions. The purpose of this paper is to present new results of multi-scale three-dimensional (3D) simulations of the inter-related phenomena which occur as a result of fuel element heat-up and cladding failure, including the injection of a jet of gaseous fission products into a partially blocked Sodium Fast Reactor (SFR) coolant channel, and gas/molten sodium transport along the coolant channels. The computational approach to the analysis of the overall accident scenario is based on using two different inter-communicating computational multiphase fluid dynamics (CMFD) codes: a CFD code, PHASTA, and a RANS code, NPHASE-CMFD. Using the geometry and time history of cladding failure and the gas injection rate, direct numerical simulations (DNS) of two-phase turbulent flow, combined with the Level Set method, have been performed by the PHASTA code. The model allows one to track the evolution of gas/liquid interfaces at a centimeter scale. The simulated phenomena include the formation and breakup of the jet of fission products injected into the liquid sodium coolant. The PHASTA outflow has been averaged over time to obtain mean phasic velocities and volumetric concentrations, as well as the liquid turbulent kinetic energy and turbulence dissipation rate, all of which have served as the input to the core-scale simulations using the NPHASE-CMFD code. A sliding window time averaging has been used to capture mean flow parameters for transient cases. The results presented in the paper include testing and validation of the proposed models, as well as predictions of fission-gas/liquid-sodium transport along a multi-rod fuel assembly of an SFR during a partial loss-of-flow accident. (authors)
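
    The sliding-window time averaging used to extract mean flow parameters from transient signals can be sketched as follows (the window length and data are illustrative):

```python
def sliding_window_mean(signal, window):
    """Trailing sliding-window time average of a sampled signal."""
    return [sum(signal[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(signal))]

means = sliding_window_mean([1.0, 2.0, 3.0, 4.0, 5.0], window=3)
```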

  9. A dataset of future daily weather data for crop modelling over Europe derived from climate change scenarios

    NASA Astrophysics Data System (ADS)

    Duveiller, G.; Donatelli, M.; Fumagalli, D.; Zucchini, A.; Nelson, R.; Baruth, B.

    2017-02-01

    Coupled atmosphere-ocean general circulation models (GCMs) simulate different realizations of possible future climates at the global scale under contrasting scenarios of land use and greenhouse gas emissions. Such data require several additional processing steps before they can be used to drive impact models. Spatial downscaling, typically by regional climate models (RCMs), and bias correction are two such steps that have already been addressed for Europe. Yet the errors in the resulting daily meteorological variables may be too large for specific model applications. Crop simulation models are particularly sensitive to these inconsistencies and thus require further processing of GCM-RCM outputs. Moreover, crop models are often run in a stochastic manner, using various plausible weather time series (often generated with stochastic weather generators) to represent the climate of a period of interest (e.g. 2000 ± 15 years), whereas GCM simulations typically provide a single time series for a given emission scenario. To inform agricultural policy-making, data for near- and medium-term decadal time horizons, e.g. 2020 or 2030, are most often requested. Taking a sample of multiple years from these single time series to represent time horizons in the near future is particularly problematic, because selecting overlapping years may lead to spurious trends, creating artefacts in the results of the impact model simulations. This paper presents a database of consolidated and coherent future daily weather data for Europe that addresses these problems. Input data consist of daily temperature and precipitation from three dynamically downscaled and bias-corrected regional climate simulations of the IPCC A1B emission scenario created within the ENSEMBLES project. Solar radiation is estimated from temperature based on an auto-calibration procedure. Wind speed and relative air humidity are collected from historical series. 
From these variables, reference evapotranspiration and vapour pressure deficit are estimated ensuring consistency within daily records. The weather generator ClimGen is then used to create 30 synthetic years of all variables to characterize the time horizons of 2000, 2020 and 2030, which can readily be used for crop modelling studies.
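The internals of ClimGen are not described here, but the standard design behind stochastic precipitation generators of this kind is a first-order Markov chain for wet/dry occurrence plus a distribution for wet-day amounts. A sketch under that assumption (all parameter values are illustrative, not ClimGen's):

```python
import random

def generate_precip(n_days, p_wd, p_ww, mean_wet_mm, seed=0):
    """First-order Markov chain for daily precipitation occurrence:
    p_wd = P(wet | yesterday dry), p_ww = P(wet | yesterday wet).
    Wet-day amounts are drawn from an exponential distribution."""
    rng = random.Random(seed)
    series, wet = [], False
    for _ in range(n_days):
        p = p_ww if wet else p_wd
        wet = rng.random() < p
        series.append(rng.expovariate(1.0 / mean_wet_mm) if wet else 0.0)
    return series

# 30 synthetic "years" of daily rainfall; p_ww > p_wd gives persistent
# wet spells rather than independent wet days.
years = [generate_precip(365, p_wd=0.25, p_ww=0.65, mean_wet_mm=6.0, seed=y)
         for y in range(30)]
```

Because each synthetic year is an independent realization, a 30-year characterization of a time horizon avoids the overlapping-years artefact described above.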

  10. Particle Acceleration and Magnetic Field Generation in Electron-Positron Relativistic Shocks

    NASA Technical Reports Server (NTRS)

    Nishikawa, K.-I.; Hardee, P.; Richardson, G.; Preece, R.; Sol, H.; Fishman, G. J.

    2004-01-01

    Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Plasma waves and their associated instabilities (e.g., Buneman, Weibel, and other two-stream instabilities) created in collisionless shocks are responsible for particle (electron, positron, and ion) acceleration. Using a 3-D relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic electron-positron jet front propagating into an ambient electron-positron plasma with and without initial magnetic fields. We find small differences in the results for no ambient and modest ambient magnetic fields. Simulations show that the Weibel instability created in the collisionless shock front accelerates jet and ambient particles both perpendicular and parallel to the jet propagation direction. The non-linear fluctuation amplitudes of densities, currents, and electric and magnetic fields in the electron-positron shock are larger than those found in the electron-ion shock studied in a previous paper at a comparable simulation time. This comes from the fact that both electrons and positrons contribute to generation of the Weibel instability. Additionally, we have performed simulations with different electron skin depths. We find that growth times scale inversely with the plasma frequency, and the sizes of structures created by the Weibel instability scale proportionally with the electron skin depth. This is the expected result and indicates that the simulations have sufficient grid resolution. While some Fermi acceleration may occur at the jet front, the majority of electron and positron acceleration takes place behind the jet front and cannot be characterized as Fermi acceleration. The simulation results show that the Weibel instability is responsible for generating and amplifying nonuniform, small-scale magnetic fields which contribute to the electrons' (positrons') transverse deflection behind the jet head. 
This small-scale magnetic field structure is appropriate to the generation of jitter radiation from deflected electrons (positrons), as opposed to synchrotron radiation. The jitter radiation has different properties than synchrotron radiation calculated assuming a uniform magnetic field. The jitter radiation resulting from small-scale magnetic field structures may be important for understanding the complex time structure and spectral evolution observed in gamma-ray bursts or other astrophysical sources containing relativistic jets and relativistic collisionless shocks.

  11. Particle Acceleration and Magnetic Field Generation in Electron-Positron Relativistic Shocks

    NASA Technical Reports Server (NTRS)

    Nishikawa, K.-I.; Hardee, P.; Richardson, G.; Preece, R.; Sol, H.; Fishman, G. J.

    2005-01-01

    Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Plasma waves and their associated instabilities (e.g., Buneman, Weibel, and other two-stream instabilities) created in collisionless shocks are responsible for particle (electron, positron, and ion) acceleration. Using a three-dimensional relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic electron-positron jet front propagating into an ambient electron-positron plasma with and without initial magnetic fields. We find small differences in the results for no ambient and modest ambient magnetic fields. New simulations show that the Weibel instability created in the collisionless shock front accelerates jet and ambient particles both perpendicular and parallel to the jet propagation direction. Furthermore, the nonlinear fluctuation amplitudes of densities, currents, and electric and magnetic fields in the electron-positron shock are larger than those found in the electron-ion shock studied in a previous paper at a comparable simulation time. This comes from the fact that both electrons and positrons contribute to generation of the Weibel instability. In addition, we have performed simulations with different electron skin depths. We find that growth times scale inversely with the plasma frequency, and the sizes of structures created by the Weibel instability scale proportionally with the electron skin depth. This is the expected result and indicates that the simulations have sufficient grid resolution. While some Fermi acceleration may occur at the jet front, the majority of electron and positron acceleration takes place behind the jet front and cannot be characterized as Fermi acceleration. The simulation results show that the Weibel instability is responsible for generating and amplifying nonuniform, small-scale magnetic fields, which contribute to the electrons' (positrons') transverse deflection behind the jet head. 
This small-scale magnetic field structure is appropriate to the generation of "jitter" radiation from deflected electrons (positrons), as opposed to synchrotron radiation. The jitter radiation has different properties than synchrotron radiation calculated assuming a uniform magnetic field. The jitter radiation resulting from small-scale magnetic field structures may be important for understanding the complex time structure and spectral evolution observed in gamma-ray bursts or other astrophysical sources containing relativistic jets and relativistic collisionless shocks.
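The scalings reported in these two records, growth times inversely proportional to the plasma frequency and structure sizes proportional to the electron skin depth c/ω_pe, follow from the standard definitions and can be checked numerically. The densities below are illustrative, not values from the simulations:

```python
import math

# Electron plasma frequency w_pe = sqrt(n e^2 / (eps0 m_e)) and the
# collisionless skin depth c / w_pe, in SI units (CODATA constants).
E_CHARGE = 1.602176634e-19     # C
E_MASS = 9.1093837015e-31      # kg
EPS0 = 8.8541878128e-12        # F/m
C = 2.99792458e8               # m/s

def plasma_frequency(n_e):
    """Angular electron plasma frequency for number density n_e (m^-3)."""
    return math.sqrt(n_e * E_CHARGE**2 / (EPS0 * E_MASS))

def skin_depth(n_e):
    """Electron skin depth c / w_pe (m)."""
    return C / plasma_frequency(n_e)

# Doubling the density raises w_pe by sqrt(2), so growth times (~1/w_pe)
# and Weibel filament sizes (~c/w_pe) both shrink by that same factor.
ratio = skin_depth(1e18) / skin_depth(2e18)
```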

  12. Real-time electron dynamics for massively parallel excited-state simulations

    NASA Astrophysics Data System (ADS)

    Andrade, Xavier

    The simulation of the real-time dynamics of electrons, based on time-dependent density functional theory (TDDFT), is a powerful approach to studying electronic excited states in molecular and crystalline systems. What makes the method attractive is its flexibility to simulate different kinds of phenomena beyond the linear-response regime, including strongly perturbed electronic systems and non-adiabatic electron-ion dynamics. Electron-dynamics simulations are also attractive from a computational point of view: they can run efficiently on massively parallel architectures due to their low communication requirements. Our implementations of electron dynamics, based on the codes Octopus (real-space) and Qball (plane-waves), allow us to simulate systems composed of thousands of atoms and to obtain good parallel scaling up to 1.6 million processor cores. Due to the versatility of real-time electron dynamics and its parallel performance, we expect it to become the method of choice for applying the capabilities of exascale supercomputers to the simulation of electronic excited states.
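Real-time TDDFT propagates the Kohn-Sham orbitals in time. As a drastically simplified single-orbital analogue (not the Octopus/Qball implementation), a free wave packet can be advanced with the unitary Crank-Nicolson scheme; norm conservation is the basic sanity check any real-time propagator must pass. Grid spacing and time step below are arbitrary assumptions:

```python
import cmath

def crank_nicolson_step(psi, dx, dt):
    """One Crank-Nicolson step of i dpsi/dt = -(1/2) d^2 psi/dx^2
    (free particle, atomic-like units) on a uniform grid with
    Dirichlet boundaries, solved with the Thomas algorithm."""
    n = len(psi)
    k = -0.5 / dx**2                      # off-diagonal of H
    d = 1.0 / dx**2                       # diagonal of H
    a = 0.5j * dt
    # (I + a H) psi_new = (I - a H) psi_old
    rhs = [(1 - a * d) * psi[i]
           - a * k * ((psi[i - 1] if i else 0) + (psi[i + 1] if i < n - 1 else 0))
           for i in range(n)]
    diag = [1 + a * d] * n
    off = a * k
    for i in range(1, n):                 # forward elimination
        w = off / diag[i - 1]
        diag[i] -= w * off
        rhs[i] -= w * rhs[i - 1]
    out = [0j] * n                        # back substitution
    out[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        out[i] = (rhs[i] - off * out[i + 1]) / diag[i]
    return out

# A Gaussian wave packet: the Cayley form of Crank-Nicolson is exactly
# unitary for Hermitian H, so the norm is conserved to machine precision.
dx, dt = 0.2, 0.05
psi = [cmath.exp(-((i - 50) * dx) ** 2 / 2) for i in range(101)]
norm0 = sum(abs(p) ** 2 for p in psi) * dx
for _ in range(20):
    psi = crank_nicolson_step(psi, dx, dt)
norm1 = sum(abs(p) ** 2 for p in psi) * dx
```

The tridiagonal solve is what keeps communication local in a real-space implementation, which is one reason this family of methods parallelizes well.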

  13. Rheological behavior of the crust and mantle in subduction zones in the time-scale range from earthquake (minutes) to million years inferred from thermomechanical model and geodetic observations

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan; Muldashev, Iskander

    2016-04-01

    A key achievement of the geodynamic modelling community, to which the work of Evgenii Burov and his students greatly contributed, is the application of "realistic", mineral-physics-based non-linear rheological models to simulate deformation processes in the crust and mantle. Subduction, a type example of such a process, is an essentially multi-scale phenomenon, with time scales spanning from the geological to the earthquake scale, with the seismic cycle in between. In this study we test the possibility of simulating the entire subduction process, from rupture (about 1 min) to geological time (millions of years), with a single cross-scale thermomechanical model that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology, and rate-and-state friction plasticity. First we generate a thermomechanical model of a subduction zone at the geological time scale, including a narrow subduction channel with "wet-quartz" visco-elasto-plastic rheology and low static friction. We next introduce into the same model a classic rate-and-state friction law in the subduction channel, leading to stick-slip instability; this model generates a spontaneous earthquake sequence. In order to follow the deformation process in detail throughout the entire seismic cycle, and over multiple seismic cycles, we use an adaptive time-step algorithm, changing the step from 40 s during an earthquake to between minutes and 5 years during postseismic and interseismic periods. We observe many interesting deformation patterns and demonstrate that, contrary to conventional ideas, the model predicts that postseismic deformation is controlled by visco-elastic relaxation in the mantle wedge as early as hours to a day after great (M > 9) earthquakes. We demonstrate that our results are consistent with the postseismic surface displacements after the Great Tohoku Earthquake over the day-to-4-year time range.
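The adaptive time-stepping idea, shrinking the step to tens of seconds during rupture and relaxing it to years between events, can be sketched as follows. The slip-velocity threshold and the inverse-proportional clamping rule are illustrative assumptions, not the paper's algorithm:

```python
def adaptive_dt(max_slip_velocity, dt_min=40.0,
                dt_max=5 * 365.25 * 86400.0, v_coseismic=0.1):
    """Pick a time step (seconds) from the fastest fault slip rate (m/s):
    coseismic slip (~v_coseismic or faster) forces dt_min, while slow
    interseismic creep relaxes the step toward dt_max (here ~5 years)."""
    if max_slip_velocity <= 0:
        return dt_max
    dt = dt_min * (v_coseismic / max_slip_velocity)
    return min(max(dt, dt_min), dt_max)

# Fast coseismic slip -> 40 s steps; vanishing creep -> multi-year steps.
```

The point of the clamping is that the solver never resolves faster than the coseismic scale it was tuned for, and never coarsens past the interseismic scale where visco-elastic relaxation must still be sampled.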

  14. Pore-scale observation and 3D simulation of wettability effects on supercritical CO2 - brine immiscible displacement in drainage

    NASA Astrophysics Data System (ADS)

    Hu, R.; Wan, J.; Chen, Y.

    2016-12-01

    Wettability is a factor controlling the fluid-fluid displacement pattern in porous media and significantly affects the flow and transport of supercritical (sc) CO2 in geologic carbon sequestration. Using a high-pressure micromodel-microscopy system, we performed drainage experiments of scCO2 invasion into brine-saturated water-wet and intermediate-wet micromodels, visualizing the scCO2 invasion morphology at the pore scale under reservoir conditions. We also performed pore-scale numerical simulations of the Navier-Stokes equations to obtain 3D details of the fluid-fluid displacement processes. Simulation results are qualitatively consistent with the experiments, showing wider scCO2 fingering, a higher percentage of scCO2, and a more compact displacement pattern in the intermediate-wet micromodel. Through quantitative analysis based on the pore-scale simulations, we found that reduced wettability lowers the displacement front velocity, promotes pore-filling events in the longitudinal direction, delays the breakthrough of the invading fluid, and thereby increases the displacement efficiency. Simulated results also show that the fluid-fluid interface area follows a unified power-law relation with scCO2 saturation, with a smaller interface area in the intermediate-wet case, which suppresses mass transfer between the phases. These pore-scale results provide insights into the wettability effects on CO2-brine immiscible displacement in geologic carbon sequestration.
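A power-law relation between interface area and saturation of the kind described above is typically extracted with a log-log least-squares fit. A self-contained sketch on exact synthetic data (the prefactor 2 and exponent 1.5 are illustrative, not the paper's fitted values):

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = c * x**b in log-log coordinates."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(lx, ly)) \
        / sum((xi - mx) ** 2 for xi in lx)
    c = math.exp(my - b * mx)
    return c, b

# Synthetic saturation / interface-area data following A = 2 * S**1.5
# exactly; the fit should recover both parameters to high precision.
S = [0.1, 0.2, 0.3, 0.4, 0.5]
A = [2.0 * s ** 1.5 for s in S]
c, b = fit_power_law(S, A)
```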

  15. Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.

    PubMed

    Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E

    2017-07-01

    We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly for applications to polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at the relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, going beyond the two scales of conventional coarse-graining strategies; moreover, the wavelet-based coarse-grained models explicitly link time and length scales. Finally, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.
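The graph-Laplacian machinery underlying diffusion wavelets can be hinted at with a toy example: on a linear chain (a homopolymer backbone), repeated application of a diffusion operator T = I - eps*L damps fine-scale modes while preserving the coarse content of a vector. This is a drastically simplified sketch of the idea, not the authors' construction:

```python
def path_laplacian(n):
    """Graph Laplacian of a linear chain of n beads (homopolymer backbone)."""
    L = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        L[i][i] += 1.0; L[i + 1][i + 1] += 1.0
        L[i][i + 1] -= 1.0; L[i + 1][i] -= 1.0
    return L

def diffuse(v, L, steps, eps=0.25):
    """Repeatedly apply T = I - eps*L. High-frequency (fine-scale) modes
    decay geometrically; the constant (coarsest) mode is preserved."""
    n = len(v)
    for _ in range(steps):
        Lv = [sum(L[i][j] * v[j] for j in range(n)) for i in range(n)]
        v = [v[i] - eps * Lv[i] for i in range(n)]
    return v

n = 8
L = path_laplacian(n)
v = [1.0 if i == 0 else 0.0 for i in range(n)]   # a single-bead perturbation
coarse = diffuse(v, L, steps=400)                # -> nearly uniform (mean 1/8)
```

Diffusion wavelets build a multiresolution basis from the numerical ranks of powers of such a T; this sketch only shows the separation of fine and coarse scales that makes the hierarchy possible.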

  16. Interactions between hyporheic flow produced by stream meanders, bars, and dunes

    USGS Publications Warehouse

    Stonedahl, Susa H.; Harvey, Judson W.; Packman, Aaron I.

    2013-01-01

    Stream channel morphology from grain-scale roughness to large meanders drives hyporheic exchange flow. In practice, it is difficult to model hyporheic flow over the wide spectrum of topographic features typically found in rivers. As a result, many studies only characterize isolated exchange processes at a single spatial scale. In this work, we simulated hyporheic flows induced by a range of geomorphic features, including meanders, bars, and dunes, in sand-bed streams. Twenty cases were examined, with five degrees of river meandering. Each meandering river model was run initially without any small topographic features. Models were run again after superimposing only bars and then only dunes, and then run a final time after including all scales of topographic features. This allowed us to investigate the relative importance of, and interactions between, flows induced by different scales of topography. We found that dunes typically contributed more to hyporheic exchange than bars and meanders. Furthermore, our simulations show that the volume of water exchanged and the distributions of hyporheic residence times resulting from the various scales of topographic features are close to, but not exactly, linearly additive. These findings can potentially be used to develop scaling laws for hyporheic flow that can be widely applied in streams and rivers.

  17. Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES

    NASA Astrophysics Data System (ADS)

    Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu

    2016-11-01

    Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper ocean pycnocline, continental slope, and large-scale eddies. Capturing the wide range of length and time scales involved during the life cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation at idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of the remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time-stepping process.

  18. Numerical and experimental analysis of heat transfer in injector plate of hydrogen peroxide hybrid rocket motor

    NASA Astrophysics Data System (ADS)

    Cai, Guobiao; Li, Chengen; Tian, Hui

    2016-11-01

    This paper aims to analyze heat transfer in the injector plate of a hydrogen peroxide hybrid rocket motor by two-dimensional axisymmetric numerical simulations and full-scale firing tests. Long-duration operation, which is an advantage of hybrid rocket motors over conventional solid rocket motors, puts forward new challenges for thermal protection. Thermal environments of full-scale hybrid rocket motors designed for long-duration firing tests are studied through steady-state coupled numerical simulations of the flow field and heat transfer in the chamber head. The motor adopts 98% hydrogen peroxide (98HP) oxidizer and hydroxyl-terminated polybutadiene (HTPB) based fuel as the propellants. Simulation results reveal that flowing liquid 98HP in the head oxidizer chamber could cool the injector plate of the motor. The cooling by 98HP is similar to the regenerative cooling in liquid rocket engines. However, the temperature of the 98HP in the periphery of the head oxidizer chamber is higher than its boiling point. In order to prevent the liquid 98HP from unexpected decomposition, a thermal protection method for the chamber head utilizing a silica-phenolic annular insulating board is proposed. The simulation results show that the annular insulating board could effectively decrease the temperature of the 98HP in the head oxidizer chamber. In addition, the thermal protection method for a long-duration hydrogen peroxide hybrid rocket motor is verified through full-scale firing tests. The ablation of the insulating board in the oxygen-rich environment is also analyzed.

  19. High Fidelity Simulations of Large-Scale Wireless Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onunkwo, Uzoma; Benz, Zachary

    The worldwide proliferation of wirelessly connected devices continues to accelerate. There are tens of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES performance and prevent scaling (e.g., in the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computational workload while mitigating the communication overhead associated with synchronization. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly regarded capabilities in large-scale emulation have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static, and (b) the nodes have fixed locations.
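The discrete-event-simulation core that such tools build on reduces to a priority queue of timestamped events whose handlers may schedule future events. A minimal sketch (the event model, `beacon` handler, and horizon are hypothetical, not part of any of the simulators named above):

```python
import heapq

def run_des(events, horizon):
    """Minimal discrete-event simulation loop: pop the earliest event,
    run its handler, and push any follow-on events it schedules."""
    queue = list(events)               # (time, seq, handler) tuples
    heapq.heapify(queue)
    log, seq = [], len(queue)
    while queue:
        t, _, handler = heapq.heappop(queue)
        if t > horizon:
            break
        log.append(t)
        for dt, h in handler(t):       # handlers return (delay, handler) pairs
            heapq.heappush(queue, (t + dt, seq, h))
            seq += 1                   # seq breaks timestamp ties deterministically
    return log

# A node that retransmits a beacon every 2.5 time units until the horizon.
def beacon(t):
    return [(2.5, beacon)]

times = run_des([(0.0, 0, beacon)], horizon=10.0)
```

PDES partitions exactly this queue across processors; the synchronization cost discussed in the abstract comes from keeping the partitioned timestamps causally consistent.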

  20. Application of particle and lattice codes to simulation of hydraulic fracturing

    NASA Astrophysics Data System (ADS)

    Damjanac, Branko; Detournay, Christine; Cundall, Peter A.

    2016-04-01

    With the development of unconventional oil and gas reservoirs over the last 15 years, the understanding and capability to model the propagation of hydraulic fractures in inhomogeneous and naturally fractured reservoirs has become very important for the petroleum industry (and also for other industries, such as mining and geothermal energy). Particle-based models provide advantages over other models and solutions for the simulation of fracturing of rock masses that cannot be assumed to be continuous and homogeneous. It has been demonstrated (Potyondy and Cundall Int J Rock Mech Min Sci Geomech Abstr 41:1329-1364, 2004) that particle models based on a simple force criterion for fracture propagation match theoretical solutions and scale effects derived using the principles of linear elastic fracture mechanics (LEFM). The challenge is how to apply these models effectively (i.e., with acceptable model sizes and computer run times) to coupled hydro-mechanical problems at time and length scales relevant for practical field applications (i.e., reservoir scale and hours of injection time). A formulation of a fully coupled hydro-mechanical particle-based model and its application to the simulation of hydraulic treatment of unconventional reservoirs are presented. Model validation by comparison with available analytical asymptotic solutions (penny-shaped crack) and some examples of field application (e.g., interaction with a DFN) are also included.

  1. Properties of galaxies reproduced by a hydrodynamic simulation

    NASA Astrophysics Data System (ADS)

    Vogelsberger, M.; Genel, S.; Springel, V.; Torrey, P.; Sijacki, D.; Xu, D.; Snyder, G.; Bird, S.; Nelson, D.; Hernquist, L.

    2014-05-01

    Previous simulations of the growth of cosmic structures have broadly reproduced the `cosmic web' of galaxies that we see in the Universe, but failed to create a mixed population of elliptical and spiral galaxies, because of numerical inaccuracies and incomplete physical models. Moreover, they were unable to track the small-scale evolution of gas and stars to the present epoch within a representative portion of the Universe. Here we report a simulation that starts 12 million years after the Big Bang, and traces 13 billion years of cosmic evolution with 12 billion resolution elements in a cube of 106.5 megaparsecs on a side. It yields a reasonable population of ellipticals and spirals, reproduces the observed distribution of galaxies in clusters and the characteristics of hydrogen on large scales, and at the same time matches the `metal' and hydrogen content of galaxies on small scales.

  2. Role of monsoon intraseasonal oscillation and its interannual variability in simulation of seasonal mean in CFSv2

    NASA Astrophysics Data System (ADS)

    Pillai, Prasanth A.; Aher, Vaishali R.

    2018-01-01

    Intraseasonal oscillation (ISO), which appears as "active" and "break" spells of rainfall, is an important component of the Indian summer monsoon (ISM). The present study investigates the potential of the new National Centers for Environmental Prediction (NCEP) Climate Forecast System version 2 (CFSv2) in simulating the ISO, with emphasis on its interannual variability (IAV) and its possible role in the seasonal mean rainfall. The present analysis shows that the spatial distribution of CFSv2 rainfall has noticeable differences from observations on both ISO and IAV time scales. The active-break cycle of CFSv2 has a similar evolution during both strong and weak years. Despite a reasonable El Niño Southern Oscillation (ENSO)-monsoon teleconnection in the model, the overestimated Arabian Sea (AS) sea surface temperature (SST)-convection relationship hinders the large-scale influence of ENSO over the ISM region and adjacent oceans. The ISO-scale convection over the AS and the Bay of Bengal (BoB) makes a noteworthy contribution to the seasonal mean rainfall, opposing the influence of boundary forcing in these areas. At the same time, the overwhelming contribution of the ISO component over the AS to the seasonal mean modifies the effect of slowly varying boundary forcing on the large-scale summer monsoon. The results underline that, along with the correct simulation of the monsoon ISO, its IAV and relationship with the boundary forcing also need to be well captured in coupled models for accurate simulation of the seasonal mean anomalies of the monsoon and its teleconnections.

  3. Dispersion of Response Times Reveals Cognitive Dynamics

    ERIC Educational Resources Information Center

    Holden, John G.; Van Orden, Guy C.; Turvey, Michael T.

    2009-01-01

    Trial-to-trial variation in word-pronunciation times exhibits 1/f scaling. One explanation is that human performance is consequent on multiplicative interactions among interdependent processes (interaction-dominant dynamics). This article describes simulated distributions of pronunciation times in a further test for multiplicative interactions and…
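The 1/f scaling claim can be made concrete: a series whose spectral power falls off as 1/f has a log-log periodogram slope near -1. A self-contained check on synthetic data (naive DFT; the series is illustrative and unrelated to the article's response-time data):

```python
import cmath, math, random

def periodogram(x):
    """Naive DFT power spectrum |X_k|^2 for k = 1 .. N//2 - 1."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2
            for k in range(1, n // 2)]

def spectral_slope(power):
    """Least-squares slope of log(power) versus log(frequency index)."""
    lf = [math.log(k + 1) for k in range(len(power))]
    lp = [math.log(p) for p in power]
    n = len(lf)
    mf, mp = sum(lf) / n, sum(lp) / n
    return sum((a - mf) * (b - mp) for a, b in zip(lf, lp)) \
        / sum((a - mf) ** 2 for a in lf)

# Spectral synthesis of pink (1/f) noise: component amplitudes fall off
# as 1/sqrt(f), so power falls off as 1/f and the slope should be -1.
rng = random.Random(1)
N = 128
phases = [rng.uniform(0, 2 * math.pi) for _ in range(N // 2)]
x = [sum(math.cos(2 * math.pi * k * t / N + phases[k - 1]) / math.sqrt(k)
         for k in range(1, N // 2)) for t in range(N)]
slope = spectral_slope(periodogram(x))
```

In empirical work the same slope estimate is applied to detrended response-time series; a slope near -1 rather than 0 (white noise) is what is meant by 1/f scaling.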

  4. Scaling Characteristics of Mesoscale Wind Fields in the Lower Atmospheric Boundary Layer: Implications for Wind Energy

    NASA Astrophysics Data System (ADS)

    Kiliyanpilakkil, Velayudhan Praju

    Atmospheric motions take place on spatial scales from sub-millimeters to a few thousand kilometers, with temporal changes in atmospheric variables occurring over fractions of a second to several years. Consequently, the variations in atmospheric kinetic energy associated with these motions span a broad spectrum of space and time. The mesoscale region acts as an energy-transferring regime between the energy-generating synoptic scale and the energy-dissipating microscale. Therefore, the scaling characterization of mesoscale wind fields is significant for accurate estimation of the atmospheric energy budget. Moreover, precise knowledge of the scaling characteristics of atmospheric mesoscale wind fields is important for the validation of numerical models that focus on wind forecasting, dispersion, diffusion, horizontal transport, and optical turbulence. For these reasons, extensive studies have been conducted in the past to characterize mesoscale wind fields. Nevertheless, the majority of these studies focused on near-surface and upper-atmosphere mesoscale regimes. The present study attempts to identify the existence, and to quantify the scaling, of mesoscale wind fields in the lower atmospheric boundary layer (ABL; in the wind turbine layer) using wind observations from various research-grade instruments (e.g., sodars, anemometers). The scaling characteristics of mesoscale wind speeds over diverse homogeneous flat terrains, analyzed using structure functions, revealed an altitudinal dependence of the scaling exponents, which may be attributed to buoyancy forcing. Subsequently, we use the framework of extended self-similarity (ESS) to characterize the observed scaling behavior. 
In the ESS framework, the relative scaling exponents of mesoscale atmospheric boundary layer wind speed exhibit quasi-universal behavior, even far beyond the inertial range of turbulence (Delta t within the 10 minute to 6 hour range). The ESS-based study is extended further to test its validity over complex terrain. This analysis, based on multiyear wind observations, demonstrates that ESS holds for lower-ABL wind speed over complex terrain as well. Another important inference is that the ESS relative scaling exponents of mesoscale wind speed closely match the scaling characteristics of inertial-range turbulence, albeit not exactly. The current study proposes the ESS-based quasi-universal wind speed scaling characteristics in the ABL as a benchmark for the mesoscale modeling community. Using a state-of-the-art atmospheric mesoscale model in conjunction with different planetary boundary layer (PBL) parameterization schemes, multiple wind speed simulations have been conducted. These reveal that the ESS scaling characteristics of model-simulated wind speed time series in the lower ABL differ significantly from their observational counterparts: for time intervals Delta t < 2 hours, the simulated series do not capture the ESS-based scaling characteristics. The detailed analysis of model simulations using different PBL schemes leads to the conclusion that significant improvements are needed in the turbulence closure parameterizations adopted in new-generation atmospheric models. This study is unique in that the ESS framework has never before been reported or examined for the validation of PBL parameterizations.
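Structure functions and ESS relative exponents as used above can be sketched directly: S_q(tau) is the mean q-th absolute moment of wind-speed increments, and ESS fits log S_q against log S_3 instead of against log tau. The Brownian-like test series, for which the relative exponent is exactly q/3, is an illustrative assumption, not ABL data:

```python
import math, random

def structure_function(u, tau, q):
    """S_q(tau) = < |u(t + tau) - u(t)|^q > for a time series u."""
    diffs = [abs(u[t + tau] - u[t]) for t in range(len(u) - tau)]
    return sum(d ** q for d in diffs) / len(diffs)

def ess_relative_exponent(u, q, taus):
    """Relative exponent zeta(q)/zeta(3): least-squares slope of
    log S_q versus log S_3 over the given lags (extended self-similarity)."""
    lx = [math.log(structure_function(u, t, 3)) for t in taus]
    ly = [math.log(structure_function(u, t, q)) for t in taus]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    return sum((a - mx) * (b - my) for a, b in zip(lx, ly)) \
        / sum((a - mx) ** 2 for a in lx)

# Brownian-like series: increments over lag tau scale as tau**0.5, so
# S_q ~ tau**(q/2) and the relative exponent zeta(2)/zeta(3) should be 2/3.
rng = random.Random(0)
u, s = [], 0.0
for _ in range(20000):
    s += rng.gauss(0.0, 1.0)
    u.append(s)
rel = ess_relative_exponent(u, q=2, taus=[1, 2, 4, 8, 16])
```

The appeal of ESS, visible even here, is that sampling fluctuations in S_q and S_3 are strongly correlated, so the relative exponent is far more stable than a direct fit of S_q against tau.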

  5. OpenMP parallelization of a gridded SWAT (SWATG)

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term, high-spatial-resolution simulation is a common issue in environmental modeling. A gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG), which integrates a grid modeling scheme with different spatial representations, also presents such problems: the computational cost limits applications of very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling a roughly 2000 km2 watershed with one CPU in a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.

  6. Simulating the complex output of rainfall and hydrological processes using the information contained in large data sets: the Direct Sampling approach.

    NASA Astrophysics Data System (ADS)

    Oriani, Fabio

    2017-04-01

The unpredictable nature of rainfall makes its estimation as difficult as it is essential to hydrological applications. Stochastic simulation is often considered a convenient approach to assess the uncertainty of rainfall processes, but preserving their irregular behavior and variability at multiple scales is a challenge even for the most advanced techniques. In this presentation, an overview of the Direct Sampling technique [1] and its recent application to rainfall and hydrological data simulation [2, 3] is given. The algorithm, having its roots in multiple-point statistics, makes use of a training data set to simulate the outcome of a process without inferring any explicit probability measure: the data are simulated in time or space by sampling the training data set where a sufficiently similar group of neighbor data exists. This approach preserves complex statistical dependencies at different scales with a good approximation, while reducing the parameterization to the minimum. The strengths and weaknesses of the Direct Sampling approach are shown through a series of applications to rainfall and hydrological data: from time-series simulation to spatial rainfall fields conditioned by elevation or a climate scenario. In the era of vast databases, is this data-driven approach a valid alternative to parametric simulation techniques? [1] Mariethoz G., Renard P., and Straubhaar J. (2010), The Direct Sampling method to perform multiple-point geostatistical simulations, Water Resour. Res., 46(11), http://dx.doi.org/10.1029/2008WR007621 [2] Oriani F., Straubhaar J., Renard P., and Mariethoz G. (2014), Simulation of rainfall time series from different climatic regions using the direct sampling technique, Hydrol. Earth Syst. Sci., 18, 3015-3031, http://dx.doi.org/10.5194/hess-18-3015-2014 [3] Oriani F., Borghi A., Straubhaar J., Mariethoz G., Renard P. (2016), Missing data simulation inside flow rate time-series using multiple-point statistics, Environ. Model. Softw., vol. 86, pp. 264-276, http://dx.doi.org/10.1016/j.envsoft.2016.10.002
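The neighborhood-matching idea behind Direct Sampling can be sketched for a univariate time series. The parameters (neighborhood size `n`, distance threshold `t`, scan fraction `f`) follow the spirit of Mariethoz et al. (2010), but this is an illustrative assumption, not the reference implementation:

```python
# Hedged sketch of Direct Sampling for a time series: to generate the next
# value, scan the training series for a position whose preceding neighborhood
# matches the last n simulated values within threshold t, then copy the
# training value found at that position.
import math
import random

def direct_sampling(train, length, n=3, t=0.05, f=0.5, seed=0):
    rng = random.Random(seed)
    sim = list(train[:n])                      # seed with the first n training values
    scan = int(f * (len(train) - n))           # max number of candidates per step
    while len(sim) < length:
        pattern = sim[-n:]
        best_idx, best_d = None, float("inf")
        for _ in range(scan):
            i = rng.randrange(n, len(train))   # candidate position in training data
            d = sum(abs(train[i - n + k] - pattern[k]) for k in range(n)) / n
            if d < best_d:
                best_idx, best_d = i, d
            if d <= t:                         # good enough: accept immediately
                break
        sim.append(train[best_idx])
    return sim

train = [math.sin(0.3 * i) for i in range(60)]
sim = direct_sampling(train, length=30)
```

Because every simulated value is copied directly from the training set, marginal statistics are preserved by construction, with no explicit probability model.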

  7. Simulations of inorganic-bioorganic interfaces to discover new materials: insights, comparisons to experiment, challenges, and opportunities.

    PubMed

    Heinz, Hendrik; Ramezani-Dakhel, Hadi

    2016-01-21

Natural and man-made materials often rely on functional interfaces between inorganic and organic compounds. Examples include skeletal tissues and biominerals, drug delivery systems, catalysts, sensors, separation media, energy conversion devices, and polymer nanocomposites. Current laboratory techniques for monitoring and manipulating assembly on the 1 to 100 nm scale are limited, time-consuming, and costly. Computational methods have become increasingly reliable for understanding materials assembly and performance. This review explores the merit of simulations in comparison to experiment at the 1 to 100 nm scale, including connections to smaller length scales of quantum mechanics and larger length scales of coarse-grain models. First, current simulation methods and advances in the understanding of chemical bonding, in the development of force fields, and in the development of chemically realistic models are described. Then, the recognition mechanisms of biomolecules on nanostructured metals, semimetals, oxides, phosphates, carbonates, sulfides, and other inorganic materials are explained, including extensive comparisons between modeling and laboratory measurements. Depending on the substrate, the role of soft epitaxial binding mechanisms, ion pairing, hydrogen bonds, hydrophobic interactions, and conformation effects is described. Applications of the knowledge from simulation to predict the binding of ligands and drug molecules to inorganic surfaces, crystal growth and shape development, catalyst performance, as well as electrical properties at interfaces are examined. The quality of estimates from molecular dynamics and Monte Carlo simulations is validated in comparison to measurements, and design rules are described where available. The review further describes applications of simulation methods to polymer composite materials, surface modification of nanofillers, and interfacial interactions in building materials. 
The complexity of functional multiphase materials creates opportunities to further develop accurate force fields, including reactive force fields, and chemically realistic surface models, to enable materials discovery at a million times lower computational cost compared to quantum mechanical methods. The impact of modeling and simulation could further be increased by the advancement of a uniform simulation platform for organic and inorganic compounds across the periodic table and new simulation methods to evaluate system performance in silico.

  8. Slow dynamics in protein fluctuations revealed by time-structure based independent component analysis: The case of domain motions

    NASA Astrophysics Data System (ADS)

    Naritomi, Yusuke; Fuchigami, Sotaro

    2011-02-01

Protein dynamics on a long time scale was investigated using all-atom molecular dynamics (MD) simulation and time-structure based independent component analysis (tICA). We selected the lysine-, arginine-, ornithine-binding protein (LAO) as a target protein and focused on its domain motions in the open state. An MD simulation of the LAO in explicit water was performed for 600 ns, in which slow and large-amplitude domain motions of the LAO were observed. After extracting domain motions by rigid-body domain analysis, tICA was applied to the obtained rigid-body trajectory, yielding slow modes of the LAO's domain motions in order of decreasing time scale. The slowest mode detected by tICA represented not the closure motion described by the largest-amplitude mode determined by principal component analysis, but a twist motion with a time scale of tens of nanoseconds. The slow dynamics of the LAO were well described by only the slowest mode and were characterized by transitions between two basins. The results show that tICA is promising for describing and analyzing the slow dynamics of proteins.
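The tICA procedure referred to above can be sketched as a generalized eigenproblem on time-lagged covariances. This minimal NumPy version (whitening by C(0), symmetrized C(tau)) follows common practice for tICA; it is a sketch, not the authors' analysis code:

```python
# Hedged tICA sketch: solve C(tau) v = lambda C(0) v, where C(tau) is the
# time-lagged covariance of mean-free coordinates. Eigenvalues closest to 1
# correspond to the slowest modes.
import numpy as np

def tica_modes(X, lag):
    """X: (n_frames, n_coords) trajectory; returns eigenvalues, modes (slowest first)."""
    X = X - X.mean(axis=0)
    C0 = X.T @ X / len(X)
    Ct = X[:-lag].T @ X[lag:] / (len(X) - lag)
    Ct = 0.5 * (Ct + Ct.T)                     # enforce symmetry of the lagged covariance
    evals0, U = np.linalg.eigh(C0)
    W = U / np.sqrt(evals0)                    # whitening transform: W.T @ C0 @ W = I
    lam, V = np.linalg.eigh(W.T @ Ct @ W)
    order = np.argsort(lam)[::-1]              # slowest (largest lambda) first
    return lam[order], (W @ V)[:, order]

# Demo: one slow sinusoidal coordinate plus one fast noise coordinate.
rng = np.random.default_rng(0)
n = 2000
traj = np.column_stack([np.sin(0.005 * np.arange(n)) + 0.01 * rng.standard_normal(n),
                        rng.standard_normal(n)])
lam, modes = tica_modes(traj, lag=10)          # lam[0] near 1 flags the slow mode
```

Unlike PCA, which ranks modes by amplitude, tICA ranks them by autocorrelation at the chosen lag, which is why it can separate a slow twist motion from a larger-amplitude but faster closure motion.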

  9. Slow dynamics in protein fluctuations revealed by time-structure based independent component analysis: the case of domain motions.

    PubMed

    Naritomi, Yusuke; Fuchigami, Sotaro

    2011-02-14

Protein dynamics on a long time scale was investigated using all-atom molecular dynamics (MD) simulation and time-structure based independent component analysis (tICA). We selected the lysine-, arginine-, ornithine-binding protein (LAO) as a target protein and focused on its domain motions in the open state. An MD simulation of the LAO in explicit water was performed for 600 ns, in which slow and large-amplitude domain motions of the LAO were observed. After extracting domain motions by rigid-body domain analysis, tICA was applied to the obtained rigid-body trajectory, yielding slow modes of the LAO's domain motions in order of decreasing time scale. The slowest mode detected by tICA represented not the closure motion described by the largest-amplitude mode determined by principal component analysis, but a twist motion with a time scale of tens of nanoseconds. The slow dynamics of the LAO were well described by only the slowest mode and were characterized by transitions between two basins. The results show that tICA is promising for describing and analyzing the slow dynamics of proteins.

  10. The role of heat transfer time scale in the evolution of the subsea permafrost and associated methane hydrates stability zone during glacial cycles

    NASA Astrophysics Data System (ADS)

    Malakhova, Valentina V.; Eliseev, Alexey V.

    2017-10-01

Climate warming may lead to degradation of the subsea permafrost developed during Pleistocene glaciations and release of methane from the hydrates stored in this permafrost. It is important to quantify the time scales at which this release is plausible. While, in principle, such a time scale might be inferred from paleoarchives, this is hampered by considerable uncertainty associated with paleodata. In the present paper, to reduce this uncertainty, one-dimensional simulations with a model for the thermal state of subsea sediments, forced by data obtained from ice core reconstructions, are performed. It is shown that heat propagates in the sediments with a time scale of ∼ 10-20 kyr. This time scale is longer than the present interglacial and is determined by the time needed for heat penetration into the unfrozen part of thick sediments. We also highlight that the timings of shelf exposure during oceanic regressions and flooding during transgressions are important for simulating the thermal state of the sediments and the methane hydrates stability zone (HSZ). These timings should be resolved with respect to the contemporary shelf depth (SD). During glacial cycles, the temperature at the top of the sediments is the major driver moving the HSZ vertical boundaries irrespective of SD. In turn, pressure due to oceanic water is additionally important for SD ≥ 50 m. Thus, oceanic transgressions and regressions do not instantly determine onsets of the HSZ and/or its disappearance. Finally, the impact of initial conditions in the subsea sediments is lost after ∼ 100 kyr. Our results are moderately sensitive to the intensity of the geothermal heat flux.
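The ~10-20 kyr heat propagation time scale can be checked with a back-of-the-envelope diffusive estimate, tau ~ L^2/kappa. The thermal diffusivity (~1e-6 m^2/s, typical for water-saturated sediments) and the sediment thickness below are assumed values for illustration, not the paper's configuration:

```python
# Hedged order-of-magnitude check of the diffusive heat penetration time.
SECONDS_PER_KYR = 1e3 * 365.25 * 24 * 3600

def diffusive_time_kyr(thickness_m, kappa_m2_s=1.0e-6):
    """Diffusive time scale tau = L^2 / kappa, expressed in kyr."""
    return thickness_m**2 / kappa_m2_s / SECONDS_PER_KYR

# A roughly 700 m unfrozen sediment column gives a time scale of order 15 kyr,
# consistent with the 10-20 kyr range quoted in the abstract.
tau = diffusive_time_kyr(700.0)
```

The quadratic dependence on thickness is why the unfrozen part of thick sediments, rather than the frozen layer itself, sets the overall response time.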

  11. Effects of coupled dark energy on the Milky Way and its satellites

    NASA Astrophysics Data System (ADS)

    Penzo, Camilla; Macciò, Andrea V.; Baldi, Marco; Casarini, Luciano; Oñorbe, Jose; Dutton, Aaron A.

    2016-09-01

We present the first numerical simulations in coupled dark energy cosmologies with high enough resolution to investigate the effects of the coupling on galactic and subgalactic scales. We choose two constant couplings and a time-varying coupling function and we run simulations of three Milky Way-sized haloes (~10^12 M⊙), a lower-mass halo (6 × 10^11 M⊙) and a dwarf galaxy halo (5 × 10^9 M⊙). We resolve each halo with several million dark matter particles. On all scales, the coupling causes lower halo concentrations and a reduced number of substructures with respect to Λ cold dark matter (ΛCDM). We show that the reduced concentrations are not due to different formation times. We ascribe them to the extra terms that appear in the equations describing the gravitational dynamics. On the scale of the Milky Way satellites, we show that the lower concentrations can help in reconciling observed and simulated rotation curves, but the coupling values necessary to have a significant difference from ΛCDM are outside the current observational constraints. On the other hand, if other modifications to the standard model allowing a higher coupling (e.g. massive neutrinos) are considered, coupled dark energy can become an interesting scenario to alleviate the small-scale issues of the ΛCDM model.

  12. Fixed-scale statistics and the geometry of turbulent dispersion at high Reynolds number via numerical simulation

    NASA Astrophysics Data System (ADS)

    Hackl, Jason F.

The relative dispersion of one fluid particle with respect to another is fundamentally related to the transport and mixing of contaminant species in turbulent flows. The most basic consequence of Kolmogorov's 1941 similarity hypotheses for relative dispersion, the Richardson-Obukhov law that mean-square pair separation distance grows with the cube of time t^3 at intermediate times in the inertial subrange, is notoriously difficult to observe in the environment, laboratory, and direct numerical simulations (DNS). Inertial subrange scaling in size parameters requires careful adjustment for the initial conditions of the dispersion process as well as a very wide range of scales (high Reynolds number) in the flow being studied. However, the statistical evolution of the shapes of clusters of more than two particles has already exhibited statistical invariance at intermediate times in existing DNS. This invariance is identified with inertial-subrange scaling and is more readily observed than inertial-subrange scaling of the seemingly simpler size statistics. Results from dispersion of clusters of four particles (called tetrads) in large-scale DNS at grid resolutions up to 4096^3 and Taylor-scale Reynolds numbers R_lambda from 140 to 1000 are used to explore the question of statistical universality in measures of the size and shape of tetrahedra in homogeneous isotropic turbulence in distinct scaling regimes at very small times (ballistic), intermediate times (inertial) and very late times (diffusive). Derivatives of the 1/3 power of mean-square size with respect to time, normalized by the characteristic time scale at the initial tetrad size r0, constitute a powerful technique for isolating t^3 scaling. This technique is applied to the eigenvalues of a moment-of-inertia-like tensor formed from the separation vectors between particles in the tetrad. 
Estimates of the proportionality constant g in the Richardson-Obukhov t^3 law from DNS at R_lambda ≈ 1000 converge towards the value g ≈ 0.56 reported in previous studies. The exit time taken by a particle pair to first reach successively larger thresholds of fixed separation distance is also briefly discussed and found to have unexplained dependence on initial separation distance for negative moments, but good inertial-range scaling for positive moments. The use of diffusion models of relative dispersion in the inertial subrange to connect mean exit time to g is also tested and briefly discussed in these simulations. Mean values and probability density functions of shape parameters, including the triangle aspect ratio w, the tetrahedron volume-to-gyration-radius ratio V^(2/3)/R^2, and normalized moment-of-inertia eigenvalues, are all found to approach invariant forms in the inertial subrange for a wider range of initial separations than size parameters such as the mean-square gyration radius. These results constitute the clearest evidence to date that turbulence has a tendency to distort and elongate multiparticle configurations more severely in the inertial subrange than it does in the diffusive regime at asymptotically late time. Triangle statistics are found to be independent of initial shape for all time beyond the ballistic regime. The development and testing of different schemes for parallelizing the cubic spline interpolation procedure for particle velocities needed to track particles in DNS is also covered. A "pipeline" method of moving batches of particles from processor to processor is adopted due to its low memory overhead, but there are challenges in achieving good performance scaling.
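The cube-root derivative technique mentioned above can be illustrated with synthetic data: if mean-square separation grows as g*eps*t^3, then the time derivative of its 1/3 power is constant, so a plateau in that derivative flags the inertial subrange. The data below are idealized, not DNS output:

```python
# Hedged sketch of the cube-root technique for isolating Richardson-Obukhov
# t^3 scaling in mean-square pair separation <r^2>.
def cube_root_slope(t, r2):
    """Central-difference derivative of <r^2>^(1/3) with respect to time."""
    y = [v ** (1.0 / 3.0) for v in r2]
    return [(y[i + 1] - y[i - 1]) / (t[i + 1] - t[i - 1]) for i in range(1, len(t) - 1)]

g, eps = 0.56, 1.0                        # g ~ 0.56 as reported in the abstract
t = [0.1 * (i + 1) for i in range(50)]
r2 = [g * eps * ti ** 3 for ti in t]      # idealized inertial-range growth
slopes = cube_root_slope(t, r2)           # flat at (g*eps)^(1/3) when t^3 holds
```

On real dispersion data the plateau appears only over the inertial subrange, and its height gives an estimate of (g*eps)^(1/3) and hence of g.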

  13. Hiatus-like decades in the absence of equatorial Pacific cooling and accelerated global ocean heat uptake

    NASA Astrophysics Data System (ADS)

    von Känel, Lukas; Frölicher, Thomas L.; Gruber, Nicolas

    2017-08-01

A surface cooling pattern in the equatorial Pacific associated with a negative phase of the Interdecadal Pacific Oscillation is the leading hypothesis to explain the smaller rate of global warming during 1998-2012, with these cooler than normal conditions thought to have accelerated the oceanic heat uptake. Here, using a 30-member ensemble simulation of a global Earth system model, we show that in 10% of all simulated decades with a global cooling trend, the eastern equatorial Pacific actually warms. This implies that there is a 1 in 10 chance that decadal hiatus periods may occur without the equatorial Pacific being the dominant pacemaker. In addition, the global ocean heat uptake tends to slow down during hiatus decades, implying a fundamentally different global climate feedback factor on decadal time scales than on centennial time scales and calling for caution in inferring climate sensitivity from decadal-scale variability.

  14. Resolving the Multi-scale Behavior of Geochemical Weathering in the Critical Zone Using High Resolution Hydro-geochemical Models

    NASA Astrophysics Data System (ADS)

    Pandey, S.; Rajaram, H.

    2015-12-01

This work investigates hydrologic and geochemical interactions in the Critical Zone (CZ) using high-resolution reactive transport modeling. Reactive transport models can be used to predict the response of geochemical weathering and solute fluxes in the CZ to changes in a dynamic environment, such as those pertaining to human activities and climate change in recent years. The scales of hydrology and geochemistry in the CZ range from days to eons in time and centimeters to kilometers in space. Here, we present results of a multi-dimensional, multi-scale hydro-geochemical model to investigate the role of subsurface heterogeneity in the formation of mineral weathering fronts in the CZ, which requires consideration of many of these spatio-temporal scales. The model is implemented using PFLOTRAN, an open source subsurface flow and reactive transport code that parallelizes over multiple processing nodes and provides a strong framework for simulating weathering in the CZ. The model is set up to simulate weathering dynamics in mountainous catchments representative of the Colorado Front Range. Model parameters were constrained based on hydrologic, geochemical, and geophysical observations from the Boulder Creek Critical Zone Observatory (BcCZO). Simulations were performed in fractured rock systems and compared with systems of heterogeneous and homogeneous permeability fields. Tracer simulations revealed that the mean residence time of solutes decreased drastically as fracture density increased. In simulations that include mineral reactions, distinct signatures of transport limitations on weathering arose when discrete flow paths were included. This transport limitation was related to both advective and diffusive processes in the highly heterogeneous systems (i.e., fractured media and correlated random permeability fields with σ_lnk > 3). 
The well-known time dependence of mineral weathering rates was found to be most pronounced in the fractured systems, with a departure from the maximum system-averaged dissolution rate occurring after ~100 kyr, followed by a gradual decrease in the reaction rate with time that persists beyond 10^4 kyr.

  15. Toward economic flood loss characterization via hazard simulation

    NASA Astrophysics Data System (ADS)

    Czajkowski, Jeffrey; Cunha, Luciana K.; Michel-Kerjan, Erwann; Smith, James A.

    2016-08-01

Among all natural disasters, floods have historically been the primary cause of human and economic losses around the world. Improving flood risk management requires a multi-scale characterization of the hazard and associated losses, the flood loss footprint, but such a footprint is typically not yet available in a precise and timely manner. To overcome this challenge, we propose a novel and multidisciplinary approach that relies on a computationally efficient hydrological model that simulates streamflow for scales ranging from small creeks to large rivers. We adopt a normalized index, the flood peak ratio (FPR), to characterize flood magnitude across multiple spatial scales. The simulated FPR is then shown to be a key statistical driver of associated economic flood losses, represented by the number of insurance claims. Importantly, because it is based on a simulation procedure that utilizes generally readily available physically-based data, our flood simulation approach has the potential to be broadly utilized, even for ungauged and poorly gauged basins, thus providing the necessary information for public and private sector actors to effectively reduce flood losses and save lives.
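The flood peak ratio is, at heart, a normalization of an event peak by a reference flood at the same location, which is what makes it comparable across basin sizes. The reference choice below (a 10-year peak flow) and the numbers are illustrative assumptions; the paper's exact normalization may differ:

```python
# Hedged illustration of a flood peak ratio (FPR)-style normalized index.
def flood_peak_ratio(event_peak_m3s, reference_peak_m3s):
    """Event peak discharge normalized by a local reference flood peak."""
    return event_peak_m3s / reference_peak_m3s

# The same index is comparable across scales: a small creek and a large river.
creek_fpr = flood_peak_ratio(42.0, 30.0)      # event 40% above the reference flood
river_fpr = flood_peak_ratio(5600.0, 8000.0)  # event below the reference flood
```

Normalizing by a local reference removes absolute discharge magnitude, so a creek and a river with the same FPR represent comparably severe events for their respective channels.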

  16. Ultra-Scale Computing for Emergency Evacuation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhaduri, Budhendra L; Nutaro, James J; Liu, Cheng

    2010-01-01

Emergency evacuations are carried out in anticipation of a disaster such as hurricane landfall or flooding, and in response to a disaster that strikes without a warning. Existing emergency evacuation modeling and simulation tools are primarily designed for evacuation planning and are of limited value in operational support for real time evacuation management. In order to align with desktop computing, these models reduce the data and computational complexities through simple approximations and representations of real network conditions and traffic behaviors, which rarely represent real-world scenarios. With the emergence of high resolution physiographic, demographic, and socioeconomic data and supercomputing platforms, it is possible to develop micro-simulation based emergency evacuation models that can foster development of novel algorithms for human behavior and traffic assignments, and can simulate evacuation of millions of people over a large geographic area. However, such advances in evacuation modeling and simulations demand computational capacity beyond the desktop scales and can be supported by high performance computing platforms. This paper explores the motivation and feasibility of ultra-scale computing for increasing the speed of high resolution emergency evacuation simulations.

  17. Application of Wavelet-Based Methods for Accelerating Multi-Time-Scale Simulation of Bistable Heterogeneous Catalysis

    DOE PAGES

    Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ; ...

    2017-02-16

Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
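The core wavelet idea, that an abrupt state shift produces large fine-scale detail coefficients at the shift location, can be sketched with a single-level Haar transform. This is the simplest stand-in for the paper's Lipschitz-exponent algorithm, which this sketch does not reproduce:

```python
# Hedged sketch: detect a bistable state shift from the largest-magnitude
# fine-scale (Haar detail) wavelet coefficient.
import numpy as np

def haar_detail(x):
    """One level of Haar detail coefficients (pairwise scaled differences)."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

rng = np.random.default_rng(1)
signal = np.concatenate([np.full(127, 0.2), np.full(129, 0.9)])  # shift at sample 127
signal += 0.01 * rng.standard_normal(256)                        # measurement noise

d = haar_detail(signal)
jump = int(np.argmax(np.abs(d))) * 2          # sample index flagged by the transform
```

Even with noise, the detail coefficient spanning the jump dwarfs all others, which is why wavelet coefficients are a robust trigger for switching to a more expensive integrator.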

  18. DO NOT FORGET THE FOREST FOR THE TREES: THE STELLAR-MASS HALO-MASS RELATION IN DIFFERENT ENVIRONMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonnesen, Stephanie; Cen, Renyue, E-mail: stonnes@gmail.com, E-mail: cen@astro.princeton.edu

    2015-10-20

The connection between dark matter halos and galactic baryons is often not well constrained nor well resolved in cosmological hydrodynamical simulations. Thus, halo occupation distribution models that assign galaxies to halos based on halo mass are frequently used to interpret clustering observations, even though it is well known that the assembly history of dark matter halos is related to their clustering. In this paper we use high-resolution hydrodynamical cosmological simulations to compare the halo and stellar mass growth of galaxies in a large-scale overdensity to those in a large-scale underdensity (on scales of about 20 Mpc). The simulation reproduces assembly bias, in which halos have earlier formation times in overdense environments than in underdense regions. We find that the ratio of stellar mass to halo mass is larger in overdense regions in central galaxies residing in halos with masses between 10^11 and 10^12.9 M⊙. When we force the local density (within 2 Mpc) at z = 0 to be the same for galaxies in the large-scale over- and underdensities, we find the same results. We posit that this difference can be explained by a combination of earlier formation times, more interactions at early times with neighbors, and more filaments feeding galaxies in overdense regions. This result puts the standard practice of assigning stellar mass to halos based only on their mass, rather than considering their larger environment, into question.

  19. Multi-scale enhancement of climate prediction over land by increasing the model sensitivity to vegetation variability in EC-Earth

    NASA Astrophysics Data System (ADS)

    Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul

    2016-04-01

The EC-Earth earth system model has recently been extended to include the dynamics of vegetation. In its original formulation, vegetation variability is represented solely by the Leaf Area Index (LAI), which affects climate mainly by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage varies seasonally and at interannual time scales in response to leaf-canopy growth, phenology, and senescence, and therefore affects biophysical parameters such as the albedo, surface roughness, and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning retrospective predictions at the decadal (5-year), seasonal, and sub-seasonal time scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are found in boreal winter over middle-to-high latitudes (Canada, the western US, Eastern Europe, Russia, and eastern Siberia) due to the implemented time-varying shadowing effect of tree vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases and improves the climate change sensitivity, the decadal potential predictability, and the skill of forecasts at seasonal and sub-seasonal time scales. Significant improvements in the prediction of 2 m temperature and rainfall are also shown over transitional land-surface hot spots. 
Both the potential predictability at the decadal time scale and seasonal forecast skill are enhanced over the Sahel, the North American Great Plains, Nordeste Brazil, and South East Asia, mainly related to improved performance in surface evapotranspiration.
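The exponential dependence of vegetation cover on LAI described above is typically a Lambert-Beer type relation, Cv = 1 - exp(-k*LAI). The extinction coefficient k = 0.5 below is a common literature value used for illustration, not necessarily the one adopted in EC-Earth:

```python
# Hedged sketch of a variable fractional-coverage parameterization:
# effective vegetation cover as an exponential function of LAI.
import math

def veg_cover(lai, k=0.5):
    """Fractional vegetation cover from LAI (Lambert-Beer form)."""
    return 1.0 - math.exp(-k * lai)

# A seasonal LAI cycle now translates into a seasonal cover cycle, so albedo,
# roughness, and snow shadowing respond to phenology instead of staying fixed.
winter_cover = veg_cover(0.5)   # sparse canopy
summer_cover = veg_cover(4.0)   # dense canopy
```

The saturating shape matters physically: cover responds strongly to LAI changes in sparse canopies (winter, senescence) but is nearly insensitive once the canopy closes, which is where the snow-shadowing impacts in the abstract originate.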

  20. Comparison of the Efficacy and Efficiency of the Use of Virtual Reality Simulation With High-Fidelity Mannequins for Simulation-Based Training of Fiberoptic Bronchoscope Manipulation.

    PubMed

    Jiang, Bailin; Ju, Hui; Zhao, Ying; Yao, Lan; Feng, Yi

    2018-04-01

This study compared the efficacy and efficiency of virtual reality simulation (VRS) with a high-fidelity mannequin in the simulation-based training of fiberoptic bronchoscope manipulation in novices. Forty-six anesthesia residents with no experience in fiberoptic intubation were divided into two groups: VRS (group VRS) and mannequin (group M). After a standard didactic teaching session, group VRS trained 25 times on the VRS, whereas group M performed the same process on a mannequin. After training, participants' performance was assessed on a mannequin five consecutive times. Procedure times during training were recorded as pooled data to construct learning curves. Procedure time and global rating scale scores of manipulation ability were compared between groups, as were changes in participants' confidence after training. Plateaus in the learning curves were achieved after 19 (95% confidence interval = 15-26) practice sessions in group VRS and 24 (95% confidence interval = 20-32) in group M. There was no significant difference in procedure time [13.7 (6.6) vs. 11.9 (4.1) seconds, t' = 1.101, P = 0.278] or global rating scale score [3.9 (0.4) vs. 3.8 (0.4), t = 0.791, P = 0.433] between groups. Participants' confidence increased after training [group VRS: 1.8 (0.7) vs. 3.9 (0.8), t = 8.321, P < 0.001; group M: 2.0 (0.7) vs. 4.0 (0.6), t = 13.948, P < 0.001] but did not differ significantly between groups. Virtual reality simulation is more efficient than mannequin training in the simulation-based training of flexible fiberoptic manipulation in novices, but similar effects can be achieved with both modalities after adequate training.

  1. Spatial and Temporal Uncertainty of Crop Yield Aggregations

    NASA Technical Reports Server (NTRS)

Porwollik, Vera; Mueller, Christoph; Elliott, Joshua; Chryssanthacopoulos, James; Iizumi, Toshichika; Ray, Deepak K.; Ruane, Alex C.; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe

    2016-01-01

The aggregation of simulated gridded crop yields to national or regional scale requires information on temporal and spatial patterns of crop-specific harvested areas. This analysis estimates the uncertainty of simulated gridded yield time series related to the aggregation with four different harvested area data sets. We compare aggregated yield time series from the Global Gridded Crop Model Inter-comparison project for four crop types from 14 models at global, national, and regional scale to determine aggregation-driven differences in mean yields and temporal patterns as measures of uncertainty. The quantity and spatial patterns of harvested areas differ for individual crops among the four datasets applied for the aggregation. Simulated spatial yield patterns also differ among the 14 models. These differences in harvested areas and simulated yield patterns lead to differences in aggregated productivity estimates, both in mean yield and in the temporal dynamics. Among the four investigated crops, wheat yield (17% relative difference) is most affected by the uncertainty introduced by the aggregation at the global scale. The correlation of temporal patterns of globally aggregated yield time series can be as low as r = 0.28 (soybean). For the majority of countries, mean relative differences of nationally aggregated yields account for 10% or less. The spatial and temporal differences can be substantially higher for individual countries. For the top-10 crop producers, aggregated national multi-annual mean relative differences of yields can be up to 67% (maize, South Africa), 43% (wheat, Pakistan), 51% (rice, Japan), and 427% (soybean, Bolivia). Correlations of differently aggregated yield time series can be as low as r = 0.56 (maize, India), r = 0.05 (wheat, Russia), r = 0.13 (rice, Vietnam), and r = -0.01 (soybean, Uruguay). 
The aggregation to sub-national scale in comparison to country scale shows that spatial uncertainties can cancel out in countries with large harvested areas per crop type. We conclude that the aggregation uncertainty can be substantial for crop productivity and production estimates in the context of food security, impact assessment, and model evaluation exercises.
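The aggregation step the abstract analyzes is a harvested-area weighted mean of gridded yields, which is why the choice of harvested-area data set changes the national value even when the simulated yields are identical. The cell yields and area sets below are made-up numbers for illustration:

```python
# Hedged sketch of area-weighted yield aggregation from grid to country.
def aggregate_yield(grid_yields, harvested_areas):
    """Area-weighted national yield from per-cell yields (t/ha) and areas (ha)."""
    total_area = sum(harvested_areas)
    return sum(y * a for y, a in zip(grid_yields, harvested_areas)) / total_area

yields = [2.0, 4.0, 6.0]            # identical simulated yields (t/ha) per cell
area_set_a = [100.0, 100.0, 100.0]  # two alternative harvested-area data sets
area_set_b = [300.0, 100.0, 50.0]   # for the same grid cells

national_a = aggregate_yield(yields, area_set_a)
national_b = aggregate_yield(yields, area_set_b)
```

Here the same simulated yields aggregate to 4.0 t/ha under one area data set and roughly 2.9 t/ha under the other, a purely aggregation-driven difference of the kind the study quantifies.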

  2. Photoluminescence decay dynamics in γ-Ga2O3 nanocrystals: The role of exclusion distance at short time scales

    NASA Astrophysics Data System (ADS)

    Fernandes, Brian; Hegde, Manu; Stanish, Paul C.; Mišković, Zoran L.; Radovanovic, Pavle V.

    2017-09-01

    We developed a comprehensive theoretical model describing the photoluminescence decay dynamics at short and long time scales based on the donor-acceptor defect interactions in γ-Ga2O3 nanocrystals, and quantitatively determined the importance of exclusion distance and spatial distribution of defects. We allowed for donors and acceptors to be adjacent to each other or separated by different exclusion distances. The optimal exclusion distance was found to be comparable to the donor Bohr radius and have a strong effect on the photoluminescence decay curve at short times. The importance of the exclusion distance at short time scales was confirmed by Monte Carlo simulations.
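    The role of the exclusion distance in the Monte Carlo simulations mentioned above can be illustrated with a minimal donor-acceptor pair sketch: acceptor distances are sampled uniformly outside an exclusion radius, and the donor survival is averaged over realizations. All parameter values here are arbitrary illustrations, not fitted to γ-Ga2O3:

```python
import numpy as np

rng = np.random.default_rng(1)
a_B = 1.0        # donor Bohr radius (arbitrary units)
w0 = 1.0         # transfer-rate prefactor
R_ex = 1.0       # exclusion distance: no acceptor closer to a donor
R_max = 10.0     # maximum donor-acceptor separation considered
n_acc, n_real = 20, 2000

t = np.linspace(0.0, 50.0, 200)
decay = np.zeros_like(t)
for _ in range(n_real):
    # acceptor distances uniform in the spherical shell [R_ex, R_max]
    r = rng.uniform(R_ex**3, R_max**3, size=n_acc) ** (1.0 / 3.0)
    w = w0 * np.exp(-2.0 * r / a_B)   # distance-dependent transfer rate
    decay += np.exp(-t * w.sum())     # donor survival for this realization
decay /= n_real
print(decay[0], decay[-1])
```

    Increasing R_ex removes the closest (fastest-decaying) pairs, which primarily reshapes the curve at short times, consistent with the result described above.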

  3. Statistical test for ΔρDCCA cross-correlation coefficient

    NASA Astrophysics Data System (ADS)

    Guedes, E. F.; Brito, A. A.; Oliveira Filho, F. M.; Fernandez, B. F.; de Castro, A. P. N.; da Silva Filho, A. M.; Zebende, G. F.

    2018-07-01

    In this paper we propose a new statistical test for ΔρDCCA, the Detrended Cross-Correlation Coefficient Difference, a tool to measure contagion/interdependence effects in time series of size N at different time scales n. For this proposition we analyzed simulated and real time series. The results showed that the statistical significance of ΔρDCCA depends on the size N and the time scale n, and we can define a critical value for this dependency at the 90%, 95%, and 99% confidence levels, as will be shown in this paper.
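    A minimal sketch of the underlying detrended cross-correlation coefficient ρDCCA (whose difference between two periods or conditions gives ΔρDCCA) might look as follows. This simplified variant uses non-overlapping boxes and linear detrending, whereas published definitions often use overlapping boxes:

```python
import numpy as np

def detrended_cov(x, y, n):
    """Detrended covariance F^2_xy(n) over non-overlapping boxes of size n
    with linear detrending (a simplified DCCA variant)."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    t = np.arange(n)
    covs = []
    for s in range(0, len(X) - n + 1, n):
        rx = X[s:s+n] - np.polyval(np.polyfit(t, X[s:s+n], 1), t)
        ry = Y[s:s+n] - np.polyval(np.polyfit(t, Y[s:s+n], 1), t)
        covs.append(np.mean(rx * ry))
    return np.mean(covs)

def rho_dcca(x, y, n):
    """Detrended cross-correlation coefficient at time scale n."""
    return detrended_cov(x, y, n) / np.sqrt(
        detrended_cov(x, x, n) * detrended_cov(y, y, n))

rng = np.random.default_rng(2)
N = 1000
common = rng.normal(size=N)
x = common + 0.5 * rng.normal(size=N)
y = common + 0.5 * rng.normal(size=N)
z = rng.normal(size=N)                  # independent control series
r_corr, r_ind = rho_dcca(x, y, 16), rho_dcca(x, z, 16)
print(r_corr, r_ind)   # high for the coupled pair, near zero for the control
```

    The statistical test proposed above amounts to asking whether the difference between two such coefficients exceeds a critical value that depends on N and n.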

  4. Parallel Adaptive High-Order CFD Simulations Characterizing Cavity Acoustics for the Complete SOFIA Aircraft

    NASA Technical Reports Server (NTRS)

    Barad, Michael F.; Brehm, Christoph; Kiris, Cetin C.; Biswas, Rupak

    2014-01-01

    This paper presents one-of-a-kind MPI-parallel computational fluid dynamics simulations for the Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is an airborne, 2.5-meter infrared telescope mounted in an open cavity in the aft of a Boeing 747SP. These simulations focus on how the unsteady flow field inside and over the cavity interferes with the optical path and mounting of the telescope. A temporally fourth-order Runge-Kutta and spatially fifth-order WENO-5Z scheme was used to perform implicit large eddy simulations. An immersed boundary method provides automated gridding for complex geometries and natural coupling to a block-structured Cartesian adaptive mesh refinement framework. Strong scaling studies using NASA's Pleiades supercomputer with up to 32,000 cores and 4 billion cells show excellent scaling. Dynamic load balancing based on execution time on individual AMR blocks addresses irregularities caused by the highly complex geometry. Limits to scaling beyond 32K cores are identified, and targeted code optimizations are discussed.

  5. Unbiased Rare Event Sampling in Spatial Stochastic Systems Biology Models Using a Weighted Ensemble of Trajectories

    PubMed Central

    Donovan, Rory M.; Tapia, Jose-Juan; Sullivan, Devin P.; Faeder, James R.; Murphy, Robert F.; Dittrich, Markus; Zuckerman, Daniel M.

    2016-01-01

    The long-term goal of connecting scales in biological simulation can be facilitated by scale-agnostic methods. We demonstrate that the weighted ensemble (WE) strategy, initially developed for molecular simulations, applies effectively to spatially resolved cell-scale simulations. The WE approach runs an ensemble of parallel trajectories with assigned weights and uses a statistical resampling strategy of replicating and pruning trajectories to focus computational effort on difficult-to-sample regions. The method can also generate unbiased estimates of non-equilibrium and equilibrium observables, sometimes with significantly less aggregate computing time than would be possible using standard parallelization. Here, we use WE to orchestrate particle-based kinetic Monte Carlo simulations, which include spatial geometry (e.g., of organelles, plasma membrane) and biochemical interactions among mobile molecular species. We study a series of models exhibiting spatial, temporal and biochemical complexity and show that although WE has important limitations, it can achieve performance significantly exceeding standard parallel simulation—by orders of magnitude for some observables. PMID:26845334
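    One resampling step of the WE strategy can be sketched as follows. The fixed binning, target walker count, and scalar "positions" are illustrative simplifications, not the authors' implementation; real WE codes split and merge trajectories rather than resampling, but the weight bookkeeping is analogous:

```python
import random

def we_resample(walkers, n_bins=5, target=4):
    """One weighted-ensemble resampling step (simplified sketch).
    walkers: list of (position, weight) with position in [0, 1).
    Within each occupied bin, the walker count is reset to `target` while
    the bin's total probability weight is preserved."""
    bins = [[] for _ in range(n_bins)]
    for pos, w in walkers:
        bins[min(int(pos * n_bins), n_bins - 1)].append((pos, w))
    new = []
    for b in bins:
        if not b:
            continue
        total = sum(w for _, w in b)
        # replicate positions proportional to weight, then share the
        # bin's weight equally among the `target` replicas
        picks = random.choices([p for p, _ in b],
                               weights=[w for _, w in b], k=target)
        new.extend((p, total / target) for p in picks)
    return new

random.seed(0)
walkers = [(random.random(), 1.0 / 20) for _ in range(20)]
resampled = we_resample(walkers)
print(sum(w for _, w in resampled))   # total weight is conserved
```

    Because weight, not trajectory count, carries the probability, rare regions keep many low-weight walkers, which is what makes the resulting observables unbiased.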

  6. Kinetic turbulence simulations at extreme scale on leadership-class systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bei; Ethier, Stephane; Tang, William

    2013-01-01

    Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically important for ITER, a 20 billion dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q on 786,432 cores of Mira at ALCF and recently on the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low memory per core systems by enabling routine simulations at unprecedented size (130 million grid points ITER-scale) and resolution (65 billion particles).

  7. Time-variant Lagrangian transport formulation reduces aggregation bias of water and solute mean travel time in heterogeneous catchments

    NASA Astrophysics Data System (ADS)

    Danesh-Yazdi, Mohammad; Botter, Gianluca; Foufoula-Georgiou, Efi

    2017-05-01

    Lack of hydro-bio-chemical data at subcatchment scales necessitates adopting an aggregated system approach for estimating water and solute transport properties, such as residence and travel time distributions, at the catchment scale. In this work, we show that within-catchment spatial heterogeneity, as expressed in spatially variable discharge-storage relationships, can be appropriately encapsulated within a lumped time-varying stochastic Lagrangian formulation of transport. This time (variability) for space (heterogeneity) substitution yields mean travel times (MTTs) that are not significantly biased by the aggregation of spatial heterogeneity. Despite the significant variability of MTT at small spatial scales, there exists a characteristic scale above which the MTT is not impacted by the aggregation of spatial heterogeneity. Extensive simulations of randomly generated river networks reveal that the ratio between the characteristic scale and the mean incremental area is on average independent of river network topology and the spatial arrangement of incremental areas.

  8. Preferential flow across scales: how important are plot scale processes for a catchment scale model?

    NASA Astrophysics Data System (ADS)

    Glaser, Barbara; Jackisch, Conrad; Hopp, Luisa; Klaus, Julian

    2017-04-01

    Numerous experimental studies have shown the importance of preferential flow for solute transport and runoff generation. As a consequence, various approaches exist to incorporate preferential flow in hydrological models. However, few studies have applied models that incorporate preferential flow at the hillslope scale and even fewer at the catchment scale. Certainly, one main difficulty for progress is the determination of an adequate parameterization for preferential flow at these spatial scales. This study applies a 3D physically based model (HydroGeoSphere) of a headwater region (6 ha) of the Weierbach catchment (Luxembourg). The base model was implemented without preferential flow and was limited in simulating fast catchment responses. Thus we hypothesized that the discharge performance can be improved by utilizing a dual permeability approach for a representation of preferential flow. We used the information of bromide irrigation experiments performed on three 1 m2 plots to parameterize preferential flow. In a first step we ran 20,000 Monte Carlo simulations of these irrigation experiments in a 1 m2 column of the headwater catchment model, varying the dual permeability parameters (15 variable parameters). These simulations identified many equifinal, yet very different parameter sets that reproduced the bromide depth profiles well. Therefore, in the next step we chose 52 parameter sets (the 40 best and 12 low performing sets) for testing the effect of incorporating preferential flow in the headwater catchment scale model. The variability of the flow pattern responses at the headwater catchment scale was small between the different parameterizations and did not coincide with the variability at plot scale. The simulated discharge time series of the different parameterizations clustered in six groups of similar response, ranging from nearly unaffected to completely changed responses compared to the base case model without dual permeability. 
Yet, in none of the groups did the simulated discharge response clearly improve compared to the base case. The same held true for some observed soil moisture time series, although at plot scale the incorporation of preferential flow was necessary to simulate the irrigation experiments correctly. These results rejected our hypothesis and open a discussion on how important plot scale processes and heterogeneities are at catchment scale. Our preliminary conclusion is that vertical preferential flow is important for the irrigation experiments at the plot scale, while discharge generation at the catchment scale is largely controlled by lateral preferential flow. The lateral component, however, was already considered in the base case model with different hydraulic conductivities in different soil layers. This can explain why the internal behavior of the model at single spots seems not to be relevant for the overall hydrometric catchment response. Nonetheless, the inclusion of vertical preferential flow improved the realism of internal processes of the model (fitting profiles at plot scale, unchanged response at catchment scale) and should be considered depending on the intended use of the model. Furthermore, we cannot yet exclude with certainty that the quantitative discharge performance at the catchment scale could be improved by utilizing a dual permeability approach; this will be tested in a subsequent parameter optimization process.

  9. Nonequilibrium dynamic critical scaling of the quantum Ising chain.

    PubMed

    Kolodrubetz, Michael; Clark, Bryan K; Huse, David A

    2012-07-06

    We solve for the time-dependent finite-size scaling functions of the one-dimensional transverse-field Ising chain during a linear-in-time ramp of the field through the quantum critical point. We then simulate Mott-insulating bosons in a tilted potential, an experimentally studied system in the same equilibrium universality class, and demonstrate that universality holds for the dynamics as well. We find qualitatively athermal features of the scaling functions, such as negative spin correlations, and we show that they should be robustly observable within present cold atom experiments.

  10. Using adaptive-mesh refinement in SCFT simulations of surfactant adsorption

    NASA Astrophysics Data System (ADS)

    Sides, Scott; Kumar, Rajeev; Jamroz, Ben; Crockett, Robert; Pletzer, Alex

    2013-03-01

    Adsorption of surfactants at interfaces is relevant to many applications such as detergents, adhesives, emulsions and ferrofluids. Atomistic simulations of interface adsorption are challenging due to the difficulty of modeling the wide range of length scales in these problems: the thin interface region in equilibrium with a large bulk region that serves as a reservoir for the adsorbed species. Self-consistent field theory (SCFT) has been extremely useful for studying the morphologies of dense block copolymer melts. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. However, even SCFT methods can be difficult to apply to systems in which small spatial regions might require finer resolution than most of the simulation grid (e.g., interface adsorption and confinement). We will present results on interface adsorption simulations using PolySwift++, an object-oriented polymer SCFT simulation code, aided by the Tech-X Chompst library that enables block-structured AMR calculations with PETSc.

  11. A New Approach to Modeling Jupiter's Magnetosphere

    NASA Astrophysics Data System (ADS)

    Fukazawa, K.; Katoh, Y.; Walker, R. J.; Kimura, T.; Tsuchiya, F.; Murakami, G.; Kita, H.; Tao, C.; Murata, K. T.

    2017-12-01

    The scales in planetary magnetospheres range from 10s of planetary radii to kilometers. For a number of years we have studied the magnetospheres of Jupiter and Saturn by using 3-dimensional magnetohydrodynamic (MHD) simulations. However, we have not been able to reach even the limits of the MHD approximation because of the large amount of computer resources required. Recently, thanks to the progress in supercomputer systems, we have obtained the capability to simulate Jupiter's magnetosphere with 1000 times the number of grid points used in our previous simulations. This has allowed us to combine the high resolution global simulation with a micro-scale simulation of the Jovian magnetosphere. In particular we can combine a hybrid (kinetic ions and fluid electrons) simulation with the MHD simulation. In addition, the new capability enables us to run multi-parameter survey simulations of the Jupiter-solar wind system. In this study we performed a high-resolution simulation of the Jovian magnetosphere to connect with the hybrid simulation, and lower resolution simulations under various solar wind conditions to compare with Hisaki and Juno observations. In the high-resolution simulation we used a regular Cartesian grid with 0.15 RJ grid spacing and placed the inner boundary at 7 RJ. From these simulation settings, we provide the magnetic field out to around 20 RJ from Jupiter as a background field for the hybrid simulation. For the first time we have been able to resolve Kelvin-Helmholtz waves on the magnetopause. We have investigated solar wind dynamic pressures between 0.01 and 0.09 nPa for a number of IMF values. The raw data from these simulations are available for registered users to download. We have compared the results of these simulations with Hisaki auroral observations.

  12. Length scale effects and multiscale modeling of thermally induced phase transformation kinetics in NiTi SMA

    NASA Astrophysics Data System (ADS)

    Frantziskonis, George N.; Gur, Sourav

    2017-06-01

    Thermally induced phase transformation in NiTi shape memory alloys (SMAs) shows strong size and shape, collectively termed length scale effects, at the nano to micrometer scales, and that has important implications for the design and use of devices and structures at such scales. This paper, based on a recently developed multiscale model that utilizes molecular dynamics (MDs) simulations at small scales and MD-verified phase field (PhF) simulations at larger scales, reports results on specific length scale effects, i.e. length scale effects in martensite phase fraction (MPF) evolution, transformation temperatures (martensite and austenite start and finish) and in the thermally cyclic transformation between austenitic and martensitic phase. The multiscale study identifies saturation points for length scale effects and studies, for the first time, the length scale effect on the kinetics (i.e. developed internal strains) in the B19‧ phase during phase transformation. The major part of the work addresses small scale single crystals in specific orientations. However, the multiscale method is used in a unique and novel way to indirectly study length scale and grain size effects on evolution kinetics in polycrystalline NiTi, and to compare the simulation results to experiments. The interplay of the grain size and the length scale effect on the thermally induced MPF evolution is also shown in the present study. Finally, the multiscale coupling results are employed to improve phenomenological material models for NiTi SMA.

  13. Most suitable mother wavelet for the analysis of fractal properties of stride interval time series via the average wavelet coefficient

    PubMed Central

    Zhang, Zhenwei; VanSwearingen, Jessie; Brach, Jennifer S.; Perera, Subashan

    2016-01-01

    Human gait is a complex interaction of many nonlinear systems, and stride intervals exhibit self-similarity over long time scales that can be modeled as a fractal process. The scaling exponent represents the fractal degree and can be interpreted as a biomarker of relative diseases. A previous study showed that the average wavelet method provides the most accurate results for estimating this scaling exponent when applied to stride interval time series. The purpose of this paper is to determine the most suitable mother wavelet for the average wavelet method. This paper presents a comparative numerical analysis of sixteen mother wavelets using simulated and real fractal signals. Simulated fractal signals were generated under varying signal lengths and scaling exponents that indicate a range of physiologically conceivable fractal signals. Five candidate mother wavelets were chosen due to their good performance on the mean square error test for both short and long signals. Next, we comparatively analyzed these five mother wavelets for physiologically relevant stride time series lengths. Our analysis showed that the symlet 2 mother wavelet provides a low mean square error and low variance for long time intervals and relatively low errors for short signal lengths. It can be considered as the most suitable mother function without the burden of considering the signal length. PMID:27960102
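    The average wavelet coefficient method itself reduces to a log-log fit of mean absolute detail coefficients against scale. A minimal sketch using a Haar wavelet (the paper's preferred symlet 2 would require a wavelet library; Haar keeps the example self-contained, and the estimator and its parameters are illustrative):

```python
import numpy as np

def awc_exponent(x, min_level=2, max_level=6):
    """Estimate the scaling exponent H with the average wavelet coefficient
    method: for self-similar signals, the mean absolute Haar detail
    coefficient at scale 2^j grows as (2^j)^(H + 1/2)."""
    approx = np.asarray(x, float)
    logs, logw = [], []
    for j in range(1, max_level + 1):
        n = len(approx) // 2 * 2
        pairs = approx[:n].reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)   # Haar detail
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)   # Haar approx
        if j >= min_level:
            logs.append(j * np.log(2.0))
            logw.append(np.log(np.mean(np.abs(detail))))
    slope = np.polyfit(logs, logw, 1)[0]
    return slope - 0.5          # W(a) ~ a^(H + 1/2)  =>  H = slope - 1/2

# Ordinary Brownian motion (H = 0.5) as a sanity check; the study above
# instead sweeps simulated fractal signals over a range of exponents.
rng = np.random.default_rng(3)
H = awc_exponent(np.cumsum(rng.normal(size=4096)))
print(H)
```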

  14. DES Prediction of Cavitation Erosion and Its Validation for a Ship Scale Propeller

    NASA Astrophysics Data System (ADS)

    Ponkratov, Dmitriy, Dr

    2015-12-01

    Lloyd's Register Technical Investigation Department (LR TID) has developed numerical functions for the prediction of cavitation erosion aggressiveness within Computational Fluid Dynamics (CFD) simulations. These functions were previously validated for a model scale hydrofoil and a ship scale rudder [1]. For the current study the functions were applied to a cargo ship's full scale propeller, on which severe cavitation erosion had been reported. The performed Detached Eddy Simulation (DES) required a fine computational mesh (approximately 22 million cells), together with a very small time step (2.0E-4 s). As the cavitation for this type of vessel is primarily caused by a highly non-uniform wake, the hull was also included in the simulation. The applied method under-predicted the cavitation extent and did not fully resolve the tip vortex; however, the areas of cavitation collapse were captured successfully. Consequently, the developed functions showed a very good prediction of erosion areas, as confirmed by comparison with underwater propeller inspection results.

  15. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas

    2009-01-01

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to 30k cores, producing 30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.

  16. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer.

    PubMed

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas; Smith, Jeremy C

    2009-10-13

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to ∼30k cores, producing ∼30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.

  17. Resolving Properties of Polymers and Nanoparticle Assembly through Coarse-Grained Computational Studies.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grest, Gary S.

    2017-09-01

    Coupled length and time scales determine the dynamic behavior of polymers and polymer nanocomposites and underlie their unique properties. To resolve the properties over large time and length scales it is imperative to develop coarse-grained models which retain the atomistic specificity. Here we probe the degree of coarse graining required to simultaneously retain significant atomistic details and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics with different numbers of methylene groups per coarse-grained bead. Using these models we simulate polyethylene melts for times over 500 ms to study the viscoelastic properties of well-entangled polymer melts and large nanoparticle assembly as the nanoparticles are driven close enough to form nanostructures.

  18. Discovering mechanisms relevant for radiation damage evolution

    DOE PAGES

    Uberuaga, Blas Pedro; Martinez, Enrique Saez; Perez, Danny; ...

    2018-02-22

    The response of a material to irradiation is a consequence of the kinetic evolution of defects produced during energetic damage events. Thus, accurate predictions of radiation damage evolution require knowing the atomic scale mechanisms associated with those defects. Atomistic simulations are a key tool in providing insight into the types of mechanisms possible. Further, by extending the time scale beyond what is achievable with conventional molecular dynamics, even greater insight can be obtained. Here, we provide examples in which such simulations have revealed new kinetic mechanisms that were not obvious before performing the simulations. We also demonstrate, through the coupling with higher level models, how those mechanisms impact experimental observables in irradiated materials. Lastly, we discuss the importance of these types of simulations in the context of predicting material behavior.

  19. Discovering mechanisms relevant for radiation damage evolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uberuaga, Blas Pedro; Martinez, Enrique Saez; Perez, Danny

    The response of a material to irradiation is a consequence of the kinetic evolution of defects produced during energetic damage events. Thus, accurate predictions of radiation damage evolution require knowing the atomic scale mechanisms associated with those defects. Atomistic simulations are a key tool in providing insight into the types of mechanisms possible. Further, by extending the time scale beyond what is achievable with conventional molecular dynamics, even greater insight can be obtained. Here, we provide examples in which such simulations have revealed new kinetic mechanisms that were not obvious before performing the simulations. We also demonstrate, through the coupling with higher level models, how those mechanisms impact experimental observables in irradiated materials. Lastly, we discuss the importance of these types of simulations in the context of predicting material behavior.

  20. Dispersion Analysis Using Particle Tracking Simulations Through Heterogeneity Based on Outcrop Lidar Imagery

    NASA Astrophysics Data System (ADS)

    Klise, K. A.; Weissmann, G. S.; McKenna, S. A.; Tidwell, V. C.; Frechette, J. D.; Wawrzyniec, T. F.

    2007-12-01

    Solute plumes are believed to disperse in a non-Fickian manner due to small-scale heterogeneity and variable velocities that create preferential pathways. In order to accurately predict dispersion in naturally complex geologic media, the connection between heterogeneity and dispersion must be better understood. Since aquifer properties cannot be measured at every location, it is common to simulate small-scale heterogeneity with random field generators based on a two-point covariance (e.g., through use of sequential simulation algorithms). While these random fields can produce preferential flow pathways, it is unknown how well the results simulate solute dispersion through natural heterogeneous media. To evaluate the influence that complex heterogeneity has on dispersion, we utilize high-resolution terrestrial lidar to identify and model lithofacies from outcrop for application in particle tracking solute transport simulations using RWHet. The lidar scan data are used to produce a lab (meter) scale two-dimensional model that captures 2-8 mm scale natural heterogeneity. Numerical simulations utilize various methods to populate the outcrop structure captured by the lidar-based image with reasonable hydraulic conductivity values. The particle tracking simulations result in residence time distributions used to evaluate the nature of dispersion through complex media. Particle tracking simulations through conductivity fields produced from the lidar images are then compared to particle tracking simulations through hydraulic conductivity fields produced from sequential simulation algorithms. Based on this comparison, the study aims to quantify the difference in dispersion when using realistic and simplified representations of aquifer heterogeneity. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
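    The particle-tracking idea can be caricatured in one dimension: a particle's residence time is the sum of its cell travel times through a heterogeneous velocity field. This is a toy stand-in for the RWHet simulations described above, with a lognormal velocity model and all values purely illustrative:

```python
import numpy as np

# Each particle advects through its own sequence of cells with velocities
# drawn from a lognormal model, a crude stand-in for independent flow
# paths through a heterogeneous conductivity field.
rng = np.random.default_rng(4)
n_particles, n_cells, dx = 5000, 200, 0.01
v = rng.lognormal(mean=0.0, sigma=1.5, size=(n_particles, n_cells))
times = (dx / v).sum(axis=1)   # residence time = sum of cell travel times

# A heavy right tail (mean well above the median) in the residence-time
# distribution is the signature of non-Fickian transport discussed above.
print(times.mean(), np.median(times))
```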

  1. Design of the HELICS High-Performance Transmission-Distribution-Communication-Market Co-Simulation Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan S; Krishnamurthy, Dheepak; Top, Philip

    This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.
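    The co-iteration feature described above is, at each time step, a fixed-point iteration between federates. A toy sketch with two algebraic stand-ins for a transmission and a distribution model (these functions and values are invented for illustration and are not the HELICS API):

```python
def transmission(load):          # transmission federate: voltage given load
    return 1.0 - 0.05 * load

def distribution(voltage):       # distribution federate: load given voltage
    return 2.0 / voltage

def co_iterate(tol=1e-9, max_iter=50):
    """Iterate the two models at one time step until their boundary
    values agree, then allow the simulation clock to advance."""
    v = 1.0
    for it in range(1, max_iter + 1):
        load = distribution(v)
        v_new = transmission(load)
        if abs(v_new - v) < tol:   # models agree: the step may advance
            return v_new, it
        v = v_new
    raise RuntimeError("co-iteration did not converge")

v, iters = co_iterate()
print(v, iters)   # converged boundary voltage and iteration count
```

    Convergence of this exchange at every step is what keeps physically coupled transmission and distribution models consistent in a co-simulation.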

  2. Design of the HELICS High-Performance Transmission-Distribution-Communication-Market Co-Simulation Framework: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan S; Krishnamurthy, Dheepak; Top, Philip

    This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.

  3. Upscaling anomalous reactive kinetics (A+B-->C) from pore scale Lagrangian velocity analysis

    NASA Astrophysics Data System (ADS)

    De Anna, P.; Tartakovsky, A. M.; Le Borgne, T.; Dentz, M.

    2011-12-01

    Natural flow fields in porous media display a complex spatio-temporal organization due to heterogeneous geological structures at different scales. This multiscale disorder implies anomalous dispersion, mixing and reaction kinetics (Berkowitz et al. RG 2006, Tartakovsky PRE 2010). Here, we focus on the upscaling of anomalous kinetics arising from pore scale, non Gaussian and correlated, velocity distributions. We consider reactive front simulations, where a component A displaces a component B that initially saturates the porous domain. The reactive component C is produced at the dispersive front located at the interface between the A and B domains. The simulations are performed with the SPH method. As the mixing zone grows, the total mass of C produced increases with time. The scaling of this evolution with time is different from that which would be obtained from the homogeneous advection dispersion reaction equation. This anomalous kinetics property is related to the spatial structure of the reactive mixture, and its evolution with time under the combined action of advective and diffusive processes. We discuss the different scaling regimes arising depending on the dominant process that governs mixing. In order to upscale these processes, we analyze the Lagrangian velocity properties, which are characterized by non Gaussian distributions and long range temporal correlation. The main origin of these properties is the existence of very low velocity regions where solute particles can remain trapped for a long time. Another source of strong correlation is the channeling of flow in localized high velocity regions, which creates finger-like structures in the concentration field. We show the spatial Markovian, and temporal non Markovian, nature of the Lagrangian velocity field. Therefore, an upscaled model can be defined as a correlated Continuous Time Random Walk (Le Borgne et al. PRL 2008). 
A key feature of this model is the definition of a transition probability density for Lagrangian velocities across a characteristic correlation distance. We quantify this transition probability density from pore scale simulations and use it in the effective stochastic model. In this framework, we investigate the ability of this effective model to represent correctly dispersion and mixing.
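The correlated CTRW picture described above can be illustrated with a minimal sketch (all parameters are hypothetical, not taken from the study): velocities form a Markov chain in space over a fixed correlation distance, and waiting times follow from the velocity, so the low-velocity class generates the long trapping times.

```python
import random

def correlated_ctrw(n_particles=2000, n_jumps=50, dx=1.0,
                    v_slow=0.1, v_fast=1.0, p_stay=0.8, seed=42):
    """Minimal correlated CTRW sketch: each particle makes n_jumps of
    fixed length dx; the velocity class is Markovian in space (it
    persists over one jump with probability p_stay), and the waiting
    time per jump is dx / v, so slow regions produce long trapping."""
    rng = random.Random(seed)
    arrival_times = []
    for _ in range(n_particles):
        v = v_fast if rng.random() < 0.5 else v_slow
        t = 0.0
        for _ in range(n_jumps):
            t += dx / v                    # waiting time set by velocity class
            if rng.random() > p_stay:      # spatial-Markov velocity transition
                v = v_fast if v == v_slow else v_slow
        arrival_times.append(t)
    return arrival_times
```

The spread of `arrival_times` (rather than a single mean arrival time) is the signature of anomalous transport that a homogeneous advection-dispersion model would miss.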

  4. AgroEcoSystem-Watershed (AgES-W) model delineation and scaling

    USDA-ARS?s Scientific Manuscript database

    Water movement and storage within an agricultural watershed can be simulated at different spatial resolutions of land areas or hydrological response units (HRUs). Interactions between HRUs in space and time vary with the HRU sizes, such that natural scaling relationships are confounded with the simu...

  5. Evaluation and error apportionment of an ensemble of atmospheric chemistry transport modeling systems: multivariable temporal and spatial breakdown

    EPA Science Inventory

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) hel...

  6. Multi-time scale energy management of wind farms based on comprehensive evaluation technology

    NASA Astrophysics Data System (ADS)

    Xu, Y. P.; Huang, Y. H.; Liu, Z. J.; Wang, Y. F.; Li, Z. Y.; Guo, L.

    2017-11-01

A novel energy management scheme for wind farms is proposed in this paper. First, a comprehensive evaluation system is proposed to quantify the economic properties of each wind farm, making the energy management more economical and reasonable. Then, a combined multi-time-scale scheduling method is developed. The day-ahead schedule optimizes the unit commitment of thermal power generators. The intraday schedule optimizes the power generation plan for all thermal power generating units, hydroelectric generating sets and wind power plants. Finally, the generation plan is revised in a timely manner during on-line scheduling. The paper concludes with simulations conducted on a real provincial integrated energy system in northeast China. Simulation results validate the proposed model and the corresponding solution algorithms.

  7. Accelerating large-scale simulation of seismic wave propagation by multi-GPUs and three-dimensional domain decomposition

    NASA Astrophysics Data System (ADS)

    Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Aoki, Takayuki

    2010-12-01

We adopted the GPU (graphics processing unit) to accelerate the large-scale finite-difference simulation of seismic wave propagation. The simulation benefits from the high memory bandwidth of the GPU because it is a "memory intensive" problem. In the single-GPU case we achieved a performance of about 56 GFlops, roughly 45-fold faster than that achieved by a single core of the host central processing unit (CPU). We confirmed that optimized use of fast shared memory and registers was essential for performance. In the multi-GPU case with three-dimensional domain decomposition, the non-contiguous memory alignment of the ghost zones was found to impose long data-transfer times between the GPU and the host node. This problem was solved by using contiguous memory buffers for the ghost zones. We achieved a performance of about 2.2 TFlops using 120 GPUs and 330 GB of total memory: nearly (or more than) 2200 host CPU cores would be required to achieve the same performance. The weak scaling was nearly proportional to the number of GPUs. We therefore conclude that GPU computing for large-scale simulation of seismic wave propagation is a promising approach, as faster simulation is possible with reduced computational resources compared to CPUs.
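The ghost-zone fix described above, replacing strided transfers with a contiguous staging buffer, can be sketched in NumPy (a toy 4×4×4 field standing in for a subdomain; the actual code operates on GPU memory):

```python
import numpy as np

# Toy 3-D subdomain field (C-ordered). A y-z face a[0] is contiguous in
# memory, but an x-y face a[:, :, 0] is strided: transferring it directly
# would force many small copies between the GPU and the host.
a = np.arange(4 * 4 * 4, dtype=np.float32).reshape(4, 4, 4)

strided_face = a[:, :, 0]                      # non-contiguous ghost-zone view
send_buf = np.ascontiguousarray(strided_face)  # pack into one contiguous buffer

# One contiguous copy now moves the whole face in a single transfer.
assert not strided_face.flags["C_CONTIGUOUS"]
assert send_buf.flags["C_CONTIGUOUS"]
```

The same packing happens on the receive side: the contiguous buffer is transferred, then scattered back into the strided ghost layer.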

  8. Properties of Decaying Plasma Turbulence at Subproton Scales

    NASA Astrophysics Data System (ADS)

    Olshevsky, Vyacheslav; Servidio, Sergio; Pucci, Francesco; Primavera, Leonardo; Lapenta, Giovanni

    2018-06-01

We study the properties of plasma turbulence at subproton scales using kinetic electromagnetic three-dimensional simulations with nonidentical initial conditions. Particle-in-cell modeling of the Taylor–Green vortex has been performed, starting from three different magnetic field configurations. All simulations exhibit very similar energy evolution, in which the large-scale ion flows and magnetic structures deteriorate and transfer their energy into particle heating. Heating is more intense for electrons, decreasing the initial temperature ratio and leading to temperature equipartition between the two species. A full turbulent cascade, with a well-defined power-law shape at subproton scales, is established within a characteristic turnover time. Spectral indices for magnetic field fluctuations in two simulations are close to α B ≈ 2.9, and are steeper in the remaining case, with α B ≈ 3.05. Energy is dissipated by a complex mixture of plasma instabilities and magnetic reconnection and is milder in the latter simulation. The number of magnetic nulls, and the dissipation pattern observed in this case, differ from the other two. Spectral indices for the kinetic energy deviate from the magnetic spectra by ≈1 in the first simulation, and by ≈0.75 in the other two runs. The difference between the magnetic and electric slopes confirms the previously observed value of α B ‑ α E ≈ 2.
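A spectral index such as α B ≈ 2.9 is typically estimated as the least-squares slope of the spectrum in log-log space; a minimal sketch on a synthetic power law (not the simulation data):

```python
import numpy as np

def spectral_index(k, power):
    """Least-squares slope of log10(P) vs log10(k); the spectral index
    alpha is minus that slope, so P(k) ~ k**(-alpha)."""
    slope, _ = np.polyfit(np.log10(k), np.log10(power), 1)
    return -slope

k = np.arange(1.0, 101.0)
alpha = spectral_index(k, k ** -2.9)  # exact synthetic power law
```

In practice the fit is restricted to the subproton inertial range, since the slope is only well defined over the power-law portion of the spectrum.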

  9. Evaluation of high-level clouds in cloud resolving model simulations with ARM and KWAJEX observations

    DOE PAGES

    Liu, Zheng; Muhlbauer, Andreas; Ackerman, Thomas

    2015-11-05

In this paper, we evaluate high-level clouds in a cloud resolving model during two convective cases, ARM9707 and KWAJEX. The simulated joint histograms of cloud occurrence and radar reflectivity compare well with cloud radar and satellite observations when using a two-moment microphysics scheme. However, simulations performed with a single-moment microphysics scheme exhibit low biases of approximately 20 dB. During convective events, the two-moment microphysics overestimates the amount of high-level cloud, while the one-moment microphysics precipitates too readily and underestimates the amount and height of high-level cloud. For ARM9707, persistent large positive biases in high-level cloud are found, which are not sensitive to changes in ice particle fall velocity and ice nuclei number concentration in the two-moment microphysics. These biases are caused by biases in the large-scale forcing and maintained by the periodic lateral boundary conditions. The combined effects include significant biases in high-level cloud amount and radiation, and high sensitivity of cloud amount to the nudging time scale in both convective cases. The high sensitivity of high-level cloud amount to the thermodynamic nudging time scale suggests that thermodynamic nudging can be a powerful "tuning" parameter for the simulated cloud and radiation but should be applied with caution. The role of the periodic lateral boundary conditions in reinforcing the biases in cloud and radiation suggests that reducing the uncertainty in the large-scale forcing at high levels is important for similar convective cases and has far-reaching implications for simulating high-level clouds in super-parameterized global climate models such as the multiscale modeling framework.

  10. Advanced computations in plasma physics

    NASA Astrophysics Data System (ADS)

    Tang, W. M.

    2002-05-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. 
The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.

  11. Advanced Computation in Plasma Physics

    NASA Astrophysics Data System (ADS)

    Tang, William

    2001-10-01

Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. This talk will review recent progress and future directions for advanced simulations in magnetically-confined plasmas with illustrative examples chosen from areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop MPP's to produce 3-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for tens of thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. 
The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.

  12. Simulation of Atmospheric Dispersion of Elevated Releases from Point Sources in Mississippi Gulf Coast with Different Meteorological Data

    PubMed Central

    Yerramilli, Anjaneyulu; Srinivas, Challa Venkata; Dasari, Hari Prasad; Tuluri, Francis; White, Loren D.; Baham, Julius M.; Young, John H.; Hughes, Robert; Patrick, Chuck; Hardy, Mark G.; Swanier, Shelton J.

    2009-01-01

Atmospheric dispersion calculations are made using the HYSPLIT Particle Dispersion Model to study the transport and dispersion of airborne releases from elevated point sources in the Mississippi Gulf coastal region. Simulations are performed separately with three meteorological data sets having different spatial and temporal resolution for a typical summer period, 1–3 June 2006, representing a weak synoptic condition. The first two data sets are the NCEP global and regional analyses (FNL, EDAS), while the third is a meso-scale simulation generated using the Weather Research and Forecasting model with nested domains at a fine resolution of 4 km. The meso-scale model results show significant temporal and spatial variations in the meteorological fields as a result of the combined influences of the land-sea breeze circulation, the large-scale flow field and diurnal alteration in the mixing depth across the coast. The model-predicted SO2 concentrations showed that the trajectory and the concentration distribution varied with the three cases of input data. While calculations with FNL data show an overall higher correlation, there is a significant positive bias during daytime and negative bias during nighttime. Calculations with EDAS fields are significantly below the observations during both daytime and nighttime, though the plume behavior follows the coastal circulation. The diurnal plume behavior and its distribution are better simulated using the mesoscale WRF meteorological fields in the coastal environment, suggesting its suitability for pollution dispersion impact assessment on the local scale. Results of the different simulation cases, comparison with observations, and the correlation and bias in each case are presented. PMID:19440433
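The bias and correlation scores used in such model-observation comparisons are standard statistics; a minimal sketch (hypothetical helper, not code from the paper):

```python
import math

def bias_and_correlation(pred, obs):
    """Mean bias (pred - obs) and Pearson correlation coefficient,
    two of the usual scores for comparing predicted and observed
    concentration time series."""
    n = len(pred)
    bias = sum(p - o for p, o in zip(pred, obs)) / n
    mp, mo = sum(pred) / n, sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    return bias, cov / (sp * so)
```

A positive bias with high correlation (the FNL case above) means the model tracks the diurnal cycle but systematically overpredicts daytime concentrations.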

  13. Reynolds and Prandtl number scaling of viscous heating in isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Pushkarev, Andrey; Balarac, Guillaume; Bos, Wouter J. T.

    2017-08-01

    Viscous heating is investigated using high-resolution direct numerical simulations. Scaling relations are derived and verified for different values of the Reynolds and Prandtl numbers. The scaling of the heat fluctuations is shown to depend on Lagrangian correlation times and on the scaling of dissipation-rate fluctuations. The convergence of the temperature spectrum to asymptotic scaling is observed to be slow, due to the broadband character of the temperature production spectrum and the slow convergence of the dissipation-rate spectrum to its asymptotic form.

  14. Extended-range high-resolution dynamical downscaling over a continental-scale spatial domain with atmospheric and surface nudging

    NASA Astrophysics Data System (ADS)

    Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.

    2014-12-01

Extended-range high-resolution mesoscale simulations with limited-area atmospheric models, when applied to downscale regional analysis fields over large spatial domains, can provide valuable information for many applications, including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control large-scale deviations of the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, the large scales in the simulated high-resolution meteorological fields are spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. Impacts of time-varying nudging and generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values, leading to significant inaccuracies in the predicted surface-layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation.
Finally, wind speed and temperature at wind-turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate the improvements achievable with higher spatiotemporal resolution.

  15. Characterizing and understanding the climatic determinism of high- to low-frequency variations in precipitation in northwestern France using a coupled wavelet multiresolution/statistical downscaling approach

    NASA Astrophysics Data System (ADS)

    Massei, Nicolas; Dieppois, Bastien; Hannah, David; Lavers, David; Fossa, Manuel; Laignel, Benoit; Debret, Maxime

    2017-04-01

Geophysical signals oscillate over several time-scales that explain different amounts of their overall variability and may be related to different physical processes. Characterizing and understanding such variabilities in hydrological variations and investigating their determinism is one important issue in a context of climate change, as these variabilities can occasionally be superimposed on long-term trends possibly due to climate change. It is also important to refine our understanding of time-scale dependent linkages between large-scale climatic variations and hydrological responses on the regional or local scale. Here we investigate such links by conducting a wavelet multiresolution statistical downscaling approach of precipitation in northwestern France (Seine river catchment) over 1950-2016, using sea level pressure (SLP) and sea surface temperature (SST) as indicators of atmospheric and oceanic circulations, respectively. Previous results demonstrated that including multiresolution decomposition in a statistical downscaling model (within a so-called multiresolution ESD model) using SLP as large-scale predictor greatly improved simulation of low-frequency, i.e. interannual to interdecadal, fluctuations observed in precipitation. Building on these results, continuous wavelet transform of precipitation simulated with the multiresolution ESD model confirmed the model's ability to better explain variability at all time-scales. A sensitivity analysis of the model to the choice of the scale and wavelet function used was also conducted. It appeared that whatever the wavelet used, the model performed similarly. The spatial patterns of SLP found as the best predictors for all time-scales, which resulted from the wavelet decomposition, revealed different structures according to time-scale, suggesting possibly different determinisms. More particularly, some low-frequency components (3.2-yr and 19.3-yr) showed a much more widespread spatial extension across the Atlantic.
Moreover, in accordance with other previous studies, the wavelet components detected in SLP and precipitation on interannual to interdecadal time-scales could be interpreted in terms of the influence of the Gulf Stream oceanic front on atmospheric circulation. Work is now under way including SST over the Atlantic in order to gain further insight into this mechanism.

  16. Evaluating the Global Precipitation Measurement mission with NOAA/NSSL Multi-Radar Multisensor: current status and future directions.

    NASA Astrophysics Data System (ADS)

    Kirstetter, P. E.; Petersen, W. A.; Gourley, J. J.; Kummerow, C. D.; Huffman, G. J.; Turk, J.; Tanelli, S.; Maggioni, V.; Anagnostou, E. N.; Hong, Y.; Schwaller, M.

    2016-12-01


  17. Visualizing and measuring flow in shale matrix using in situ synchrotron X-ray microtomography

    NASA Astrophysics Data System (ADS)

    Kohli, A. H.; Kiss, A. M.; Kovscek, A. R.; Bargar, J.

    2017-12-01

    Natural gas production via hydraulic fracturing of shale has proliferated on a global scale, yet recovery factors remain low because production strategies are not based on the physics of flow in shale reservoirs. In particular, the physical mechanisms and time scales of depletion from the matrix into the simulated fracture network are not well understood, limiting the potential to optimize operations and reduce environmental impacts. Studying matrix flow is challenging because shale is heterogeneous and has porosity from the μm- to nm-scale. Characterizing nm-scale flow paths requires electron microscopy but the limited field of view does not capture the connectivity and heterogeneity observed at the mm-scale. Therefore, pore-scale models must link to larger volumes to simulate flow on the reservoir-scale. Upscaled models must honor the physics of flow, but at present there is a gap between cm-scale experiments and μm-scale simulations based on ex situ image data. To address this gap, we developed a synchrotron X-ray microscope with an in situ cell to simultaneously visualize and measure flow. We perform coupled flow and microtomography experiments on mm-scale samples from the Barnett, Eagle Ford and Marcellus reservoirs. We measure permeability at various pressures via the pulse-decay method to quantify effective stress dependence and the relative contributions of advective and diffusive mechanisms. Images at each pressure step document how microfractures, interparticle pores, and organic matter change with effective stress. Linking changes in the pore network to flow measurements motivates a physical model for depletion. To directly visualize flow, we measure imbibition rates using inert, high atomic number gases and image periodically with monochromatic beam. By imaging above/below X-ray adsorption edges, we magnify the signal of gas saturation in μm-scale porosity and nm-scale, sub-voxel features. 
Comparing vacuumed and saturated states yields image-based measurements of the distribution and time scales of imbibition. We also characterize nm-scale structure via focused ion beam tomography to quantify sub-voxel porosity and connectivity. The multi-scale image and flow data is used to develop a framework to upscale and benchmark pore-scale models.

  18. Coarse-grained stochastic processes and kinetic Monte Carlo simulators for the diffusion of interacting particles

    NASA Astrophysics Data System (ADS)

    Katsoulakis, Markos A.; Vlachos, Dionisios G.

    2003-11-01

We derive a hierarchy of successively coarse-grained stochastic processes and associated coarse-grained Monte Carlo (CGMC) algorithms directly from the microscopic processes, as approximations at larger length scales, for the case of diffusion of interacting particles on a lattice. This hierarchy of models spans length scales between microscopic and mesoscopic, satisfies detailed balance, and gives self-consistent fluctuation mechanisms whose noise is asymptotically identical to that of the microscopic MC. Rigorous, detailed asymptotics justify and clarify these connections. Gradient continuous-time microscopic MC and CGMC simulations are compared under far-from-equilibrium conditions to illustrate the validity of our theory and delineate the errors obtained by the rigorous asymptotics. Information-theory estimates are employed for the first time to provide rigorous error estimates between the solutions of microscopic MC and CGMC, describing the loss of information during the coarse-graining process. Simulations under periodic boundary conditions are used to verify the information-theory error estimates. It is shown that coarse-graining in space also leads to coarse-graining in time by q², where q is the level of coarse-graining, and overcomes in part the hydrodynamic slowdown. Operation counting and CGMC simulations demonstrate significant CPU savings in continuous-time MC simulations, varying from q³ for short-range potentials to q⁴ for long-range potentials. Finally, connections of the new coarse-grained stochastic processes to stochastic mesoscopic and Cahn-Hilliard-Cook models are made.
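The quoted CPU savings can be written as a simple operation-count estimate (a sketch of the scaling claim only; the `potential` label is a hypothetical parameter name):

```python
def cgmc_speedup(q, potential="short"):
    """Approximate CPU-cost reduction of coarse-grained MC relative to
    microscopic MC at coarse-graining level q: spatial coarse-graining
    compresses time by q**2 and reduces per-event work, giving roughly
    q**3 savings for short-range potentials and q**4 for long-range
    ones, as in the operation counting described above."""
    return q ** 3 if potential == "short" else q ** 4
```

For example, coarse-graining by a factor q = 10 with a long-range potential would reduce the continuous-time MC cost by roughly four orders of magnitude.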

  19. A theoretical and simulation study of the contact discontinuities based on a Vlasov simulation code

    NASA Astrophysics Data System (ADS)

    Tsai, T. C.; Lyu, L. H.; Chao, J. K.; Chen, M. Q.; Tsai, W. H.

    2009-12-01

Contact discontinuity (CD) is the simplest solution that can be obtained from the magnetohydrodynamics (MHD) Rankine-Hugoniot jump conditions. Due to the limitations of the previous kinetic simulation models, the stability of the CD has become a controversial issue in the past 10 years. The stability of the CD is reexamined analytically and numerically. Our theoretical analysis shows that the electron temperature profile and the ion temperature profile must be out of phase across the CD if the CD structure is to be stable in the electron time scale and with zero electron heat flux on either side of the CD. Both a newly developed fourth-order implicit electrostatic Vlasov simulation code and an electromagnetic finite-size particle code are used to examine the stability and the electrostatic nature of the CD structure. Our theoretical prediction is verified by both simulations. Our results of Vlasov simulation also indicate that a simulation with initial electron temperature profile and ion temperature profile varying in phase across the CD will undergo very transient changes in the electron time scale but will relax into a quasi-steady CD structure within a few ion plasma oscillation periods if a real ion-electron mass ratio is used in the simulation and if the boundary conditions allow nonzero heat flux to be present at the boundaries of the simulation box. The simulation results of this study indicate that the Vlasov simulation is a powerful tool to study nonlinear phenomena with nonperiodic boundary conditions and with nonzero heat flux at the boundaries of the simulation box.

  20. Communication interval selection in distributed heterogeneous simulation of large-scale dynamical systems

    NASA Astrophysics Data System (ADS)

    Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.

    2003-09-01

    In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system is comprised of ten component models each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
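The communication-interval idea, in which each subsystem integrates internally with a fine step while exchanging interface variables only every interval H, can be sketched with a toy two-subsystem example (hypothetical dynamics, not the aircraft power system model):

```python
def cosimulate(T=10.0, H=0.1, h=0.01):
    """Two subsystem solvers (explicit Euler) exchange interface
    variables only at communication intervals of length H, while each
    integrates internally with a finer step h. Toy coupled dynamics:
    x' = -x + y and y' = -y + x, so both states relax toward their mean."""
    x, y = 1.0, 0.0
    t = 0.0
    while t < T - 1e-12:
        x_held, y_held = x, y           # interface values exchanged at interval start
        s = 0.0
        while s < H - 1e-12:            # each simulator advances independently
            x += h * (-x + y_held)      # subsystem 1 sees a frozen copy of y
            y += h * (-y + x_held)      # subsystem 2 sees a frozen copy of x
            s += h
        t += H
    return x, y
```

With this coupling the sum x + y is conserved, so both states converge to 0.5; choosing H too large relative to the coupling time constants would instead introduce visible lag or instability, which is exactly the communication-interval selection question the paper addresses.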

  1. Large eddy simulation of turbine wakes using higher-order methods

    NASA Astrophysics Data System (ADS)

    Deskos, Georgios; Laizet, Sylvain; Piggott, Matthew D.; Sherwin, Spencer

    2017-11-01

Large eddy simulations (LES) of a horizontal-axis turbine wake are presented using the well-known actuator line (AL) model. The fluid flow is resolved by employing higher-order numerical schemes on a 3D Cartesian mesh combined with a 2D domain decomposition strategy for efficient use of supercomputers. In order to simulate flows at relatively high Reynolds numbers for a reasonable computational cost, a novel strategy is used to introduce controlled numerical dissipation over a selected range of small scales. The idea is to mimic the contribution of the unresolved small scales by imposing a targeted numerical dissipation at small scales when evaluating the viscous term of the Navier-Stokes equations. The numerical technique is shown to behave similarly to traditional eddy-viscosity sub-filter scale models such as the classic or dynamic Smagorinsky models. The results from the simulations are compared to experimental data at a diameter-based Reynolds number ReD = 1,000,000, and both the time-averaged streamwise velocity and turbulent kinetic energy (TKE) show good overall agreement. Finally, suggestions for the amount of numerical dissipation required by our approach are made for the particular case of horizontal-axis turbine wakes.

  2. An iterative method for hydrodynamic interactions in Brownian dynamics simulations of polymer dynamics

    NASA Astrophysics Data System (ADS)

    Miao, Linling; Young, Charles D.; Sing, Charles E.

    2017-07-01

Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2 - N^2.25), and explicit solvent methods scale as O(N); however both incur a significant constant per-time step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.
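The payoff of the averaged hydrodynamic matrix is that the expensive Cholesky factorization is done once and then reused every time step; a toy sketch with a hypothetical 2×2 averaged diffusion matrix (the real BD matrix would come from averaging a hydrodynamic tensor over simulated conformations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical time-averaged diffusion (hydrodynamic) matrix for two
# coupled coordinates; positive definite by construction.
D_avg = np.array([[1.0, 0.5],
                  [0.5, 1.0]])

# The O(N^3) Cholesky factor is computed ONCE for the averaged matrix
# and reused across time steps, instead of being refactorized each step.
L = np.linalg.cholesky(D_avg)

# Correlated Brownian displacements with covariance D_avg: one column
# of L @ z per time step; here many steps are drawn at once to check
# that the empirical covariance reproduces D_avg.
noise = L @ rng.standard_normal((2, 200_000))
cov = np.cov(noise)
```

In the self-consistent scheme, the matrix averaged over the resulting trajectory would then replace `D_avg` and the procedure repeats until the two agree.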

  3. Effects of distribution density and cell dimension of 3D vegetation model on canopy NDVI simulation based on DART

    NASA Astrophysics Data System (ADS)

    Tao, Zhu; Shi, Runhe; Zeng, Yuyan; Gao, Wei

    2017-09-01

    The 3D model is an important part of simulated remote sensing for earth observation. Given the small spatial extent handled by the DART software, both the detail of the model itself and the number of distributed models have an important impact on the scene canopy Normalized Difference Vegetation Index (NDVI). Taking Phragmites australis in the Yangtze Estuary as an example, and building on previous studies of model precision, this paper studied the effect of the P. australis model on canopy NDVI, focusing on the cell dimension setting of DART, the distribution density of P. australis models in the scene, and the choice of model density under the constraint of computation time in practical simulations. The DART cell dimensions and the scene model density were set using the optimal-precision model from existing research results, and the NDVI simulated at different model densities under different cell dimensions was subjected to error analysis. By studying the relationship between relative error, absolute error, and time cost, we established a density selection method for the P. australis model in small-scale scene simulations. Experiments showed that, owing to differences between the 3D model and real scenarios, the number of P. australis plants in the simulated scene need not match that of the real environment. The best simulation results, in terms of both accuracy and visual effect, were obtained at a density of about 40 plants per square meter.
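    For reference, NDVI itself is a simple band ratio; a minimal sketch (the reflectance values below are illustrative, not taken from the study):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for reflectances in [0, 1]:
    NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1."""
    return (nir - red) / (nir + red)

# Dense green canopy: high NIR reflectance, strong red absorption.
canopy = ndvi(0.50, 0.05)    # approx 0.82
# Bare soil: the two bands are much closer, so NDVI is low.
soil = ndvi(0.25, 0.20)      # approx 0.11
```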

  4. Research on Multi Hydrological Models Applicability and Modelling Data Uncertainty Analysis for Flash Flood Simulation in Hilly Area

    NASA Astrophysics Data System (ADS)

    Ye, L.; Wu, J.; Wang, L.; Song, T.; Ji, R.

    2017-12-01

    Flooding in small-scale watersheds in hilly areas is characterized by short duration and rapid rise and recession due to complex underlying surfaces, various climate types, and the strong effect of human activities. It is almost impossible for a single hydrological model to describe the variation of flooding in both time and space accurately for all catchments in hilly areas, because hydrological characteristics can vary significantly among catchments. In this study, we compare the performance of 5 hydrological models of varying complexity for flash flood simulation in 14 small-scale watersheds in China, in order to find the relationship between the applicability of the hydrological models and the catchment characteristics. Meanwhile, given that hydrological data are sparse in hilly areas, the effect of precipitation data, DEM resolution, and their interaction on the uncertainty of flood simulation is also illustrated. In general, the results showed that the distributed hydrological model (HEC-HMS in this study) performed better than the lumped hydrological models. The Xinanjiang and API models simulated the humid catchments well when long-term, continuous rainfall data were provided. The Dahuofang model simulated the flood peak well, while its runoff generation module was relatively poor. In addition, the effects of the different modelling data on the simulations are not simply additive; there is a complex interaction among them. Overall, both the catchment hydrological characteristics and the modelling data situation should be taken into consideration in order to choose a suitable hydrological model for flood simulation in small-scale catchments in hilly areas.
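    A common way to score such flood simulations against observations is the Nash-Sutcliffe efficiency; a minimal sketch (the metric choice and the hydrograph values are assumptions for illustration, not from the study):

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 performs no better
    than predicting the observed mean, negative is worse than the mean."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = [2.0, 5.0, 12.0, 30.0, 14.0, 6.0]   # hypothetical flood hydrograph (m^3/s)
sim = [3.0, 6.0, 10.0, 26.0, 15.0, 7.0]   # hypothetical model output
score = nse(obs, sim)                      # close to 1: a good event simulation
```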

  5. Multi-fluid Dynamics for Supersonic Jet-and-Crossflows and Liquid Plug Rupture

    NASA Astrophysics Data System (ADS)

    Hassan, Ezeldin A.

    Multi-fluid dynamics simulations require appropriate numerical treatments based on the main flow characteristics, such as flow speed, turbulence, thermodynamic state, and time and length scales. In this thesis, two distinct problems are investigated: supersonic jet and crossflow interactions; and liquid plug propagation and rupture in an airway. Gaseous non-reactive ethylene jet and air crossflow simulation represents essential physics for fuel injection in SCRAMJET engines. The regime is highly unsteady, involving shocks, turbulent mixing, and large-scale vortical structures. An eddy-viscosity-based multi-scale turbulence model is proposed to resolve turbulent structures consistent with grid resolution and turbulence length scales. Predictions of the time-averaged fuel concentration from the multi-scale model are improved over Reynolds-averaged Navier-Stokes models originally derived from stationary flow. The improvement from the multi-scale model alone is, however, limited in cases where the vortical structures are small and scattered, thus requiring prohibitively expensive grids to resolve the flow field accurately. Statistical information related to turbulent fluctuations is utilized to estimate an effective turbulent Schmidt number, which is shown to vary strongly in space. Accordingly, an adaptive turbulent Schmidt number approach is proposed, allowing the resolved field to adaptively influence the value of the turbulent Schmidt number in the multi-scale turbulence model. The proposed model estimates a time-averaged turbulent Schmidt number adapted to the computed flowfield, instead of the constant value common to eddy-viscosity-based Navier-Stokes models. This approach is assessed using a grid-refinement study for the normal injection case, and tested with 30-degree injection, showing improved results over the constant turbulent Schmidt model in both the mean and the variance of fuel concentration predictions.
For the incompressible liquid plug propagation and rupture study, numerical simulations are conducted using an Eulerian-Lagrangian approach with a continuous-interface method. A reconstruction scheme is developed to allow topological changes during plug rupture by altering the connectivity information of the interface mesh. Rupture time is shown to be delayed as the initial precursor film thickness increases. During the plug rupture process, a sudden increase of mechanical stresses on the tube wall is recorded, which can cause tissue damage.

  6. A model for including thermal conduction in molecular dynamics simulations

    NASA Technical Reports Server (NTRS)

    Wu, Yue; Friauf, Robert J.

    1989-01-01

    A technique is introduced for including thermal conduction in molecular dynamics simulations for solids. A model is developed to allow energy flow between the computational cell and the bulk of the solid when periodic boundary conditions cannot be used. Thermal conduction is achieved by scaling the velocities of atoms in a transitional boundary layer. The scaling factor is obtained from the thermal diffusivity, and the results show good agreement with the solution for a continuous medium at long times. The effects of different system temperatures and sizes, and of variations in the strength parameter, atomic mass, and thermal diffusivity, were investigated; in all cases, no significant change in simulation results was found.
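    The velocity-scaling mechanism can be illustrated with a minimal kinetic-temperature rescaling (a sketch only: the paper derives its scale factor from the thermal diffusivity, whereas here a target layer temperature is prescribed directly, with reduced units and kB = 1):

```python
import numpy as np

def rescale_layer(velocities, masses, T_target, kB=1.0):
    """Scale boundary-layer velocities so the layer's kinetic temperature
    equals T_target (3N translational degrees of freedom; center-of-mass
    drift is ignored in this sketch)."""
    ke = 0.5 * np.sum(masses[:, None] * velocities ** 2)
    T_current = 2.0 * ke / (3.0 * len(masses) * kB)
    lam = np.sqrt(T_target / T_current)   # uniform scaling factor
    return lam * velocities, T_current

rng = np.random.default_rng(1)
v = rng.standard_normal((100, 3))   # velocities of 100 boundary-layer atoms
m = np.ones(100)
v_new, T_old = rescale_layer(v, m, T_target=0.8)
```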

  7. Development of mpi_EPIC model for global agroecosystem modeling

    DOE PAGES

    Kang, Shujiang; Wang, Dali; Nichols, Jeff A.; ...

    2014-12-31

    Models that address policy-maker concerns about multi-scale effects of food and bioenergy production systems are computationally demanding. We integrated the message passing interface algorithm into the process-based EPIC model to accelerate computation of ecosystem effects. Simulation performance was further enhanced by applying the Vampir framework. When this enhanced mpi_EPIC model was tested, total execution time for a global 30-year simulation of a switchgrass cropping system was shortened to less than 0.5 hours on a supercomputer. The results illustrate that mpi_EPIC using parallel design can balance simulation workloads and facilitate large-scale, high-resolution analysis of agricultural production systems, management alternatives and environmental effects.
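    The workload-balancing idea behind such an MPI decomposition can be sketched as an even block partition of simulation units across ranks (a generic illustration; the actual mpi_EPIC layout is not described in the abstract):

```python
def partition(n_cells, n_ranks):
    """Split n_cells simulation units across n_ranks as evenly as possible
    (rank sizes differ by at most one), returning (start, stop) per rank."""
    base, extra = divmod(n_cells, n_ranks)
    bounds, start = [], 0
    for rank in range(n_ranks):
        size = base + (1 if rank < extra else 0)  # first `extra` ranks get one more
        bounds.append((start, start + size))
        start += size
    return bounds

# e.g. 10 global grid cells over 3 ranks -> contiguous blocks of 4, 3, 3
blocks = partition(10, 3)
```

In an MPI program each rank would then loop only over its own `(start, stop)` block, which keeps the per-rank workload balanced to within one cell.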

  8. Numerical simulation of small-scale thermal convection in the atmosphere

    NASA Technical Reports Server (NTRS)

    Somerville, R. C. J.

    1973-01-01

    A Boussinesq system is integrated numerically in three dimensions and time in a study of nonhydrostatic convection in the atmosphere. Simulation of cloud convection is achieved by the inclusion of parametrized effects of latent heat and small-scale turbulence. The results are compared with the cell structure observed in Rayleigh-Benard laboratory convection experiments in air. At a Rayleigh number of 4000, the numerical model adequately simulates the experimentally observed evolution, including some prominent transients, of a flow from a randomly perturbed initial conductive state into the final state of steady large-amplitude two-dimensional rolls. At Rayleigh number 9000, the model reproduces the experimentally observed unsteady equilibrium of vertically coherent oscillatory waves superimposed on rolls.
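    The Rayleigh numbers quoted above are defined by the usual ratio of buoyancy forcing to viscous and thermal dissipation; a minimal sketch (the fluid-property values are illustrative, chosen only to land near the Ra = 4000 regime):

```python
def rayleigh_number(g, beta, dT, L, nu, kappa):
    """Ra = g * beta * dT * L^3 / (nu * kappa): buoyancy forcing over
    the product of viscous and thermal dissipation."""
    return g * beta * dT * L ** 3 / (nu * kappa)

# Air-like values (SI units): expansion coefficient ~1/T, layer depth 3.3 cm
Ra = rayleigh_number(g=9.81, beta=3.4e-3, dT=1.0, L=0.033,
                     nu=1.5e-5, kappa=2.0e-5)   # approx 4.0e3
```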

  9. Influence of injection mode on transport properties in kilometer-scale three-dimensional discrete fracture networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hyman, Jeffrey De'Haven; Painter, S. L.; Viswanathan, H.

    We investigate how the choice of injection mode impacts transport properties in kilometer-scale three-dimensional discrete fracture networks (DFN). The choice of injection mode, resident and flux-weighted, is designed to mimic different physical phenomena. It has been hypothesized that solute plumes injected under resident conditions evolve to behave similarly to solutes injected under flux-weighted conditions. Previously, computational limitations have prohibited the large-scale simulations required to investigate this hypothesis. We investigate this hypothesis by using a high-performance DFN suite, dfnWorks, to simulate flow in kilometer-scale three-dimensional DFNs based on fractured granite at the Forsmark site in Sweden, and adopt a Lagrangian approach to simulate transport therein. Results show that after traveling through a pre-equilibrium region, both injection methods exhibit linear scaling of the first moment of travel time and power law scaling of the breakthrough curve with similar exponents, slightly larger than 2. Lastly, the physical mechanisms behind this evolution appear to be the combination of in-network channeling of mass into larger fractures, which offer reduced resistance to flow, and in-fracture channeling, which results from the topology of the DFN.
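    A power-law breakthrough-curve tail of the kind reported can be estimated from particle travel times with a log-log fit to the survival function (a generic sketch, not the dfnWorks analysis; the Pareto-distributed test data are synthetic):

```python
import numpy as np

def tail_exponent(times, tail_fraction=0.2):
    """Estimate alpha in P(T > t) ~ t^-alpha from particle travel times
    via a log-log fit to the empirical survival function over the late tail."""
    t = np.sort(np.asarray(times, float))
    n = len(t)
    ccdf = 1.0 - np.arange(1, n + 1) / (n + 1.0)   # empirical survival function
    k = int(n * (1 - tail_fraction))                # keep only the latest arrivals
    slope, _ = np.polyfit(np.log(t[k:]), np.log(ccdf[k:]), 1)
    return -slope

rng = np.random.default_rng(2)
# Synthetic classical-Pareto travel times with known survival exponent 1.5
samples = rng.pareto(1.5, size=20000) + 1.0
alpha = tail_exponent(samples)
```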

  11. Correlation Lengths for Estimating the Large-Scale Carbon and Heat Content of the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Mazloff, M. R.; Cornuelle, B. D.; Gille, S. T.; Verdy, A.

    2018-02-01

    The spatial correlation scales of oceanic dissolved inorganic carbon, heat content, and carbon and heat exchanges with the atmosphere are estimated from a realistic numerical simulation of the Southern Ocean. Biases in the model are assessed by comparing the simulated sea surface height and temperature scales to those derived from optimally interpolated satellite measurements. While these products do not resolve all ocean scales, they are representative of the climate scale variability we aim to estimate. Results show that constraining the carbon and heat inventory between 35°S and 70°S on time-scales longer than 90 days requires approximately 100 optimally spaced measurement platforms: approximately one platform every 20° longitude by 6° latitude. Carbon flux has slightly longer zonal scales, and requires a coverage of approximately 30° by 6°. Heat flux has much longer scales, and thus a platform distribution of approximately 90° by 10° would be sufficient. Fluxes, however, have significant subseasonal variability. For all fields, and especially fluxes, sustained measurements in time are required to prevent aliasing of the eddy signals into the longer climate scale signals. Our results imply a minimum of 100 biogeochemical-Argo floats are required to monitor the Southern Ocean carbon and heat content and air-sea exchanges on time-scales longer than 90 days. However, an estimate of formal mapping error using the current Argo array implies that in practice even an array of 600 floats (a nominal float density of about 1 every 7° longitude by 3° latitude) will result in nonnegligible uncertainty in estimating climate signals.
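    The platform counts quoted follow from dividing the domain extent by the correlation scales; a minimal sketch (a rectangular-box count, which only approximates optimal spacing on a sphere):

```python
import math

def platforms_needed(lon_extent, lat_extent, lon_scale, lat_scale):
    """Number of optimally spaced platforms: one per correlation box of
    size lon_scale x lat_scale degrees covering the domain."""
    return math.ceil(lon_extent / lon_scale) * math.ceil(lat_extent / lat_scale)

# Southern Ocean band 35S-70S: one platform per 20 deg lon x 6 deg lat
n_inventory = platforms_needed(360, 35, 20, 6)   # 108, matching the ~100 quoted
# Heat flux has longer scales: roughly 90 deg lon x 10 deg lat suffices
n_heat_flux = platforms_needed(360, 35, 90, 10)
```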

  12. Laboratory-Scale Simulation and Real-Time Tracking of a Microbial Contamination Event and Subsequent Shock-Chlorination in Drinking Water

    PubMed Central

    Besmer, Michael D.; Sigrist, Jürg A.; Props, Ruben; Buysschaert, Benjamin; Mao, Guannan; Boon, Nico; Hammes, Frederik

    2017-01-01

    Rapid contamination of drinking water in distribution and storage systems can occur due to pressure drop, backflow, cross-connections, accidents, and bio-terrorism. Small volumes of a concentrated contaminant (e.g., wastewater) can contaminate large volumes of water in a very short time with potentially severe negative health impacts. The technical limitations of conventional, cultivation-based microbial detection methods neither allow for timely detection of such contaminations, nor for the real-time monitoring of subsequent emergency remediation measures (e.g., shock-chlorination). Here we applied a newly developed continuous, ultra-high-frequency flow cytometry approach to track a rapid pollution event and subsequent disinfection of drinking water in an 80-min laboratory-scale simulation. We quantified total (TCC) and intact (ICC) cell concentrations as well as flow cytometric fingerprints in parallel in real time with two different staining methods. The ingress of wastewater was detectable almost immediately (i.e., after 0.6% volume change), significantly changing TCC, ICC, and the flow cytometric fingerprint. Shock chlorination was rapid and detected in real time, causing membrane damage in the vast majority of bacteria (i.e., a drop of ICC from more than 380 cells μl^-1 to less than 30 cells μl^-1 within 4 min). Both of these effects as well as the final wash-in of fresh tap water followed calculated predictions well. Detailed and highly quantitative tracking of microbial dynamics at very short time scales and for different characteristics (e.g., concentration, membrane integrity) is feasible. This opens up multiple possibilities for targeted investigation of a myriad of bacterial short-term dynamics (e.g., disinfection, growth, detachment, operational changes) both in laboratory-scale research and full-scale system investigations in practice. PMID:29085343
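    Wash-in and wash-out predictions of this kind are commonly based on ideal stirred-tank mixing; a minimal sketch (the flow rate and volume are hypothetical, and disinfection kinetics are not modeled, so this captures only the dilution component):

```python
import math

def mixed_reactor_conc(t, c0, c_in, flow, volume):
    """Ideal continuously stirred tank during wash-in/wash-out:
    C(t) = C_in + (C0 - C_in) * exp(-Q * t / V)."""
    return c_in + (c0 - c_in) * math.exp(-flow * t / volume)

# Washing out a cell concentration of 380 cells/uL with clean inflow
# (Q = 0.7 L/min, V = 1 L, both hypothetical): after 4 min, ~23 cells/uL remain.
c_4min = mixed_reactor_conc(t=4.0, c0=380.0, c_in=0.0, flow=0.7, volume=1.0)
```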

  13. A compositional reservoir simulator on distributed memory parallel computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rame, M.; Delshad, M.

    1995-12-31

    This paper presents the application of distributed memory parallel computers to field-scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general purpose, highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/860 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes porting to new parallel platforms straightforward. Results of the distributed memory computing performance of the parallel simulator are presented for field-scale applications such as tracer floods and polymer floods. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented.

  14. Shock compression response of cold-rolled Ni/Al multilayer composites

    DOE PAGES

    Specht, Paul E.; Weihs, Timothy P.; Thadhani, Naresh N.

    2017-01-06

    Uniaxial strain, plate-on-plate impact experiments were performed on cold-rolled Ni/Al multilayer composites and the resulting Hugoniot was determined through time-resolved measurements combined with impedance matching. The experimental Hugoniot agreed with that previously predicted by two dimensional (2D) meso-scale calculations. Additional 2D meso-scale simulations were performed using the same computational method as the prior study to reproduce the experimentally measured free surface velocities and stress profiles. Finally, these simulations accurately replicated the experimental profiles, providing additional validation for the previous computational work.

  15. Quantitative modeling of soil genesis processes

    NASA Technical Reports Server (NTRS)

    Levine, E. R.; Knox, R. G.; Kerber, A. G.

    1992-01-01

    For fine spatial scale simulation, a model is being developed to predict changes in properties over short-, meso-, and long-term time scales within horizons of a given soil profile. Processes that control these changes can be grouped into five major process clusters: (1) abiotic chemical reactions; (2) activities of organisms; (3) energy balance and water phase transitions; (4) hydrologic flows; and (5) particle redistribution. Landscape modeling of soil development is possible using digitized soil maps associated with quantitative soil attribute data in a geographic information system (GIS) framework to which simulation models are applied.

  16. Consensus for linear multi-agent system with intermittent information transmissions using the time-scale theory

    NASA Astrophysics Data System (ADS)

    Taousser, Fatima; Defoort, Michael; Djemai, Mohamed

    2016-01-01

    This paper investigates the consensus problem for linear multi-agent systems with fixed communication topology in the presence of intermittent communication, using the time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens on a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to the intermittent information transmissions. The time-scale theory provides a powerful tool to combine the continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
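    The effect of intermittent interaction can be illustrated with a minimal two-agent simulation (a sketch only: single-integrator agents with a periodic communication window, not the paper's time-scale-theoretic analysis; all parameter values are hypothetical):

```python
import numpy as np

def simulate_consensus(x0, t_end=10.0, dt=0.001, on_frac=0.5,
                       period=1.0, gain=1.0):
    """Two single-integrator agents that exchange state only during the
    first on_frac of each communication period; states hold otherwise."""
    x = np.array(x0, float)
    t = 0.0
    while t < t_end:
        if (t % period) < on_frac * period:   # communication window open
            err = x[1] - x[0]
            x += gain * dt * np.array([err, -err])   # mutual attraction
        t += dt
    return x

x_final = simulate_consensus([0.0, 4.0])
disagreement = abs(x_final[1] - x_final[0])   # decays despite the gaps
```

Disagreement shrinks only during the "on" windows and freezes during the gaps, which is exactly the mixed continuous/discrete behavior the time-scale framework unifies.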

  17. Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Pohlmann, K.

    2016-12-01

    Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters that can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted based on the steady-state conversion of the original transient model, which was then combined with the PEST sensitivity analysis tool SENSAN and particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen out of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.
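    The Monte Carlo idea can be sketched with a one-parameter example: sample hydraulic conductivity from a prior and propagate it to an advective travel time (all parameter values are hypothetical; the real analysis tracks particles in a 3-D flow field with MODPATH):

```python
import numpy as np

def travel_time(K, gradient=0.005, porosity=0.25, distance=1000.0):
    """Advective travel time (days) along a 1-D path: t = L * n / (K * i),
    with K in m/day, hydraulic gradient i, effective porosity n, length L."""
    return distance * porosity / (K * gradient)

rng = np.random.default_rng(3)
# Hydraulic conductivity sampled from a lognormal prior (illustrative moments)
K_samples = rng.lognormal(mean=0.0, sigma=1.0, size=5000)
times = travel_time(K_samples)
p5, p95 = np.percentile(times, [5, 95])   # uncertainty band on travel time
```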

  18. Coupled Long-Term Simulation of Reach-Scale Water and Heat Fluxes Across the River-Groundwater Interface for Retrieving Hyporheic Residence Times and Temperature Dynamics

    NASA Astrophysics Data System (ADS)

    Munz, Matthias; Oswald, Sascha E.; Schmidt, Christian

    2017-11-01

    Flow patterns in conjunction with seasonal and diurnal temperature variations control ecological and biogeochemical conditions in hyporheic sediments. In particular, hyporheic temperatures have a great impact on many temperature-sensitive microbial processes. In this study, we used 3-D coupled water flow and heat transport simulations applying the HydroGeoSphere code in combination with high-resolution observations of hydraulic heads and temperatures to quantify reach-scale water and heat flux across the river-groundwater interface and hyporheic temperature dynamics of a lowland gravel bed river. The model was calibrated in order to constrain estimates of the most sensitive model parameters. The magnitude and variations of the simulated temperatures matched the observed ones, with an average mean absolute error of 0.7°C and an average Nash-Sutcliffe efficiency of 0.87. Our results indicate that nonsubmerged streambed structures such as gravel bars cause substantial thermal heterogeneity within the saturated sediment at the reach scale. Individual hyporheic flow path temperatures strongly depend on the flow path residence time, flow path depth, and river and groundwater temperatures. Variations in individual hyporheic flow path temperatures were up to 7.9°C, significantly higher than the daily average (2.8°C), but still lower than the average seasonal hyporheic temperature difference (19.2°C). The relationship between flow path temperatures and residence times follows a power law with an exponent of about 0.37. Based on this empirical relation, we further estimated the influence of hyporheic flow path residence time and temperature on oxygen consumption, which was found to increase locally by up to 29% in the simulations.

  19. 3D ion-scale dynamics of BBFs and their associated emissions in Earth's magnetotail using 3D hybrid simulations and MMS multi-spacecraft observations

    NASA Astrophysics Data System (ADS)

    Breuillard, H.; Aunai, N.; Le Contel, O.; Catapano, F.; Alexandrova, A.; Retino, A.; Cozzani, G.; Gershman, D. J.; Giles, B. L.; Khotyaintsev, Y. V.; Lindqvist, P. A.; Ergun, R.; Strangeway, R. J.; Russell, C. T.; Magnes, W.; Plaschke, F.; Nakamura, R.; Fuselier, S. A.; Turner, D. L.; Schwartz, S. J.; Torbert, R. B.; Burch, J.

    2017-12-01

    Transient and localized jets of hot plasma, also known as Bursty Bulk Flows (BBFs), play a crucial role in Earth's magnetotail dynamics because the energy input from the solar wind is partly dissipated in their vicinity, notably in their embedded dipolarization front (DF). This dissipation is in the form of strong low-frequency waves that can heat and accelerate energetic particles up to the high-latitude plasma sheet. The ion-scale dynamics of BBFs have been revealed by the Cluster and THEMIS multi-spacecraft missions. However, the dynamics of BBF propagation in the magnetotail are still under debate due to instrumental limitations and spacecraft separation distances, as well as simulation limitations. The NASA/MMS fleet, which features unprecedented high-time-resolution instruments and four spacecraft separated by kinetic-scale distances, has also shown recently that the DF normal dynamics and its associated emissions are below the ion gyroradius scale in this region. Large variations in the dawn-dusk direction were also observed. However, most large-scale simulations use the MHD approach and assume 2D geometry in the XZ plane. Thus, in this study we take advantage of both multi-spacecraft observations by MMS and large-scale 3D hybrid simulations to investigate the 3D dynamics of BBFs and their associated emissions at ion scale in Earth's magnetotail, and their impact on particle heating and acceleration.

  20. Mean-cluster approach indicates cell sorting time scales are determined by collective dynamics

    NASA Astrophysics Data System (ADS)

    Beatrici, Carine P.; de Almeida, Rita M. C.; Brunnet, Leonardo G.

    2017-03-01

    Cell migration is essential to cell segregation, playing a central role in tissue formation, wound healing, and tumor evolution. Considering random mixtures of two cell types, it is still not clear which cell characteristics define clustering time scales. The mass of diffusing clusters merging with one another is expected to grow as t^(d/(d+2)) when the diffusion constant scales with the inverse of the cluster mass. Cell segregation experiments deviate from that behavior. Explanations for that could arise from specific microscopic mechanisms or from collective effects, typical of active matter. Here we consider a power law connecting the diffusion constant and cluster mass to propose an analytic approach to model cell segregation where we explicitly take into account finite-size corrections. The results are compared with active matter model simulations and experiments available in the literature. To investigate the role played by different mechanisms we considered different hypotheses describing cell-cell interaction: the differential adhesion hypothesis and the different velocities hypothesis. We find that the simulations yield normal diffusion for long time intervals. Analytic and simulation results show that (i) cluster evolution clearly tends to a scaling regime, disrupted only at finite-size limits; (ii) cluster diffusion is greatly enhanced by cell collective behavior, such that for a high enough tendency to follow the neighbors, cluster diffusion may become independent of cluster size; (iii) the scaling exponent for cluster growth depends only on the mass-diffusion relation, not on the detailed local segregation mechanism. These results apply to active matter systems in general and, in particular, the mechanisms found underlying the increase in cell sorting speed certainly have deep implications in biological evolution as a selection mechanism.
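    Result (iii) can be stated compactly: if cluster diffusion scales as D ~ M^-gamma, a mean-field argument generalizes the growth law to M ~ t^(d/(gamma*d+2)) (this generalized form is an assumption consistent with the abstract's statement, not quoted from the paper; a minimal check of the limiting cases):

```python
def growth_exponent(d, gamma=1.0):
    """Mean-field coarsening exponent: clusters with D ~ M^-gamma in d
    dimensions grow as M ~ t^(d / (gamma*d + 2))."""
    return d / (gamma * d + 2.0)

# gamma = 1 (diffusion inverse to mass) recovers the t^(d/(d+2)) law:
two_d_standard = growth_exponent(2)            # 1/2 in 2D
# gamma = 0 (collective motion: diffusion independent of cluster size)
# gives the fastest, size-independent coarsening:
two_d_collective = growth_exponent(2, gamma=0.0)   # 1 in 2D
```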

  1. Solar Magnetic Carpet III: Coronal Modelling of Synthetic Magnetograms

    NASA Astrophysics Data System (ADS)

    Meyer, K. A.; Mackay, D. H.; van Ballegooijen, A. A.; Parnell, C. E.

    2013-09-01

    This article is the third in a series working towards the construction of a realistic, evolving, non-linear force-free coronal-field model for the solar magnetic carpet. Here, we present preliminary results of 3D time-dependent simulations of the small-scale coronal field of the magnetic carpet. Four simulations are considered, each with the same evolving photospheric boundary condition: a 48-hour time series of synthetic magnetograms produced from the model of Meyer et al. ( Solar Phys. 272, 29, 2011). Three simulations include a uniform, overlying coronal magnetic field of differing strength, the fourth simulation includes no overlying field. The build-up, storage, and dissipation of magnetic energy within the simulations is studied. In particular, we study their dependence upon the evolution of the photospheric magnetic field and the strength of the overlying coronal field. We also consider where energy is stored and dissipated within the coronal field. The free magnetic energy built up is found to be more than sufficient to power small-scale, transient phenomena such as nanoflares and X-ray bright points, with the bulk of the free energy found to be stored low down, between 0.5 - 0.8 Mm. The energy dissipated is currently found to be too small to account for the heating of the entire quiet-Sun corona. However, the form and location of energy-dissipation regions qualitatively agree with what is observed on small scales on the Sun. Future MHD modelling using the same synthetic magnetograms may lead to a higher energy release.

  2. Evaluating synoptic systems in the CMIP5 climate models over the Australian region

    NASA Astrophysics Data System (ADS)

    Gibson, Peter B.; Uotila, Petteri; Perkins-Kirkpatrick, Sarah E.; Alexander, Lisa V.; Pitman, Andrew J.

    2016-10-01

    Climate models are our principal tool for generating the projections used to inform climate change policy. Our confidence in projections depends, in part, on how realistically they simulate present day climate and associated variability over a range of time scales. Traditionally, climate models are less commonly assessed at time scales relevant to daily weather systems. Here we explore the utility of a self-organizing maps (SOMs) procedure for evaluating the frequency, persistence and transitions of daily synoptic systems in the Australian region simulated by state-of-the-art global climate models. In terms of skill in simulating the climatological frequency of synoptic systems, large spread was observed between models. A positive association between all metrics was found, implying that relative skill in simulating the persistence and transitions of systems is related to skill in simulating the climatological frequency. Considering all models and metrics collectively, model performance was found to be related to model horizontal resolution but unrelated to vertical resolution or representation of the stratosphere. In terms of the SOM procedure, the timespan over which evaluation was performed had some influence on model performance skill measures, as did the number of circulation types examined. These findings have implications for selecting models most useful for future projections over the Australian region, particularly for projections related to synoptic scale processes and phenomena. More broadly, this study has demonstrated the utility of the SOMs procedure in providing a process-based evaluation of climate models.

  3. ESiWACE: A Center of Excellence for HPC applications to support cloud resolving earth system modelling

    NASA Astrophysics Data System (ADS)

    Biercamp, Joachim; Adamidis, Panagiotis; Neumann, Philipp

    2017-04-01

    With the exa-scale era approaching, length and time scales used for climate research on one hand and numerical weather prediction on the other hand blend into each other. The Centre of Excellence in Simulation of Weather and Climate in Europe (ESiWACE) represents a European consortium comprising partners from climate, weather and HPC in their effort to address key scientific challenges that both communities have in common. A particular challenge is to reach global models with spatial resolutions that allow simulating convective clouds and small-scale ocean eddies. These simulations would produce better predictions of trends and provide much more fidelity in the representation of high-impact regional events. However, running such models in operational mode, i.e., with sufficient throughput in ensemble mode, will clearly require exa-scale computing and data handling capability. We will discuss the ESiWACE initiative and relate it to work-in-progress on high-resolution simulations in Europe. We present recent strong scalability measurements from ESiWACE to demonstrate current computability in weather and climate simulation. A special focus in this particular talk is on the Icosahedral Nonhydrostatic (ICON) model used for a comparison of high resolution regional and global simulations with high quality observation data. We demonstrate that close-to-optimal parallel efficiency can be achieved in strong scaling global resolution experiments on Mistral/DKRZ, e.g., 94% for 5 km resolution simulations using 36k cores. Based on our scalability and high-resolution experiments, we deduce and extrapolate future capabilities for ICON that are expected for weather and climate research at exascale.
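
    The 94% figure quoted above is a strong-scaling parallel efficiency. A minimal sketch of how such a number is computed from two timings; the core counts and runtimes below are hypothetical, chosen only to reproduce an efficiency near 94%:

```python
def strong_scaling_efficiency(n_ref, t_ref, n, t):
    """Parallel efficiency of a strong-scaling run relative to a baseline run."""
    speedup = t_ref / t          # measured speedup over the baseline
    ideal = n / n_ref            # ideal (linear) speedup
    return speedup / ideal

# Hypothetical timings: baseline on 9k cores, scaled run on 36k cores
eff = strong_scaling_efficiency(9_000, 400.0, 36_000, 106.4)
print(f"parallel efficiency: {eff:.0%}")
```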

  4. The scale invariant generator technique for quantifying anisotropic scale invariance

    NASA Astrophysics Data System (ADS)

    Lewis, G. M.; Lovejoy, S.; Schertzer, D.; Pecknold, S.

    1999-11-01

    Scale invariance is rapidly becoming a new paradigm for geophysics. However, little attention has been paid to the anisotropy that is invariably present in geophysical fields in the form of differential stratification and rotation, texture and morphology. In order to account for scaling anisotropy, the formalism of generalized scale invariance (GSI) was developed. Until now there has existed only a single fairly ad hoc GSI analysis technique valid for studying differential rotation. In this paper, we use a two-dimensional representation of the linear approximation to generalized scale invariance, to obtain a much improved technique for quantifying anisotropic scale invariance called the scale invariant generator technique (SIG). The accuracy of the technique is tested using anisotropic multifractal simulations and error estimates are provided for the geophysically relevant range of parameters. It is found that the technique yields reasonable estimates for simulations with a diversity of anisotropic and statistical characteristics. The scale invariant generator technique can profitably be applied to the scale invariant study of vertical/horizontal and space/time cross-sections of geophysical fields as well as to the study of the texture/morphology of fields.

  5. Time-dependent magnetohydrodynamic simulations of the inner heliosphere

    NASA Astrophysics Data System (ADS)

    Merkin, V. G.; Lyon, J. G.; Lario, D.; Arge, C. N.; Henney, C. J.

    2016-04-01

    This paper presents results from a simulation study exploring heliospheric consequences of time-dependent changes at the Sun. We selected a 2 month period in the beginning of year 2008 that was characterized by very low solar activity. The heliosphere in the equatorial region was dominated by two coronal holes whose changing structure created temporal variations distorting the classical steady state picture of the heliosphere. We used the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model to obtain daily updated photospheric magnetograms and drive the Wang-Sheeley-Arge (WSA) model of the corona. This leads to a formulation of a time-dependent boundary condition for our three-dimensional (3-D) magnetohydrodynamic (MHD) model, LFM-helio, which is the heliospheric adaptation of the Lyon-Fedder-Mobarry MHD simulation code. The time-dependent coronal conditions were propagated throughout the inner heliosphere, and the simulation results were compared with observations from spacecraft located near 1 astronomical unit (AU) heliocentric distance: Advanced Composition Explorer (ACE), Solar Terrestrial Relations Observatory (STEREO-A and STEREO-B), and the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft that was in cruise phase measuring the heliospheric magnetic field between 0.35 and 0.6 AU. In addition, during the selected interval MESSENGER and ACE aligned radially, allowing the effects of temporal variations at the Sun to be disentangled from the radial evolution of structures. Our results show that time-dependent simulations reproduce the gross-scale structure of the heliosphere with higher fidelity, while on smaller spatial and faster time scales (e.g., 1 day) they provide important insights for interpretation of the data.
The simulations suggest that moving boundaries of slow-fast wind transitions at 0.1 AU may result in the formation of inverted magnetic fields near pseudostreamers, which is an intrinsically time-dependent process. Finally, we show that heliospheric current sheet corrugation, which may result in multiple sector boundary crossings when observed by spacecraft, is caused by solar wind velocity shears. Overall, our simulations demonstrate that time-dependent heliosphere modeling is a promising direction of research both for space weather applications and the fundamental physics of the heliosphere.

  6. Running climate model on a commercial cloud computing environment: A case study using Community Earth System Model (CESM) on Amazon AWS

    NASA Astrophysics Data System (ADS)

    Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock

    2017-01-01

    The suites of numerical models used for simulating climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e., carrying out climate model simulations on a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Service (AWS) EC2, the cloud computing environment by Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on the AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
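
    The reported near-linear speedup from 16 to 64 cores and the flattening beyond can be summarized with an Amdahl's-law fit. The sketch below infers the serial fraction from two timings; the wall-clock numbers are hypothetical, chosen only to echo the reported >50% reduction, not measured CESM values:

```python
def amdahl_serial_fraction(n1, t1, n2, t2):
    """Fit T(n) = t_serial + t_parallel / n to two strong-scaling timings
    and return the serial fraction s = t_serial / (t_serial + t_parallel)."""
    t_par = (t1 - t2) / (1.0 / n1 - 1.0 / n2)
    t_ser = t1 - t_par / n1
    return t_ser / (t_ser + t_par)

# Hypothetical wall-clock hours per simulated year on 16 and 64 cores
s = amdahl_serial_fraction(16, 2.0, 64, 0.9)
print(f"inferred serial fraction: {s:.3f}")
```

    Even a serial fraction of a few percent caps the achievable speedup, which is consistent with the observed saturation once communication latency dominates.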

  7. Accelerated molecular dynamics: A promising and efficient simulation method for biomolecules

    NASA Astrophysics Data System (ADS)

    Hamelberg, Donald; Mongan, John; McCammon, J. Andrew

    2004-06-01

    Many interesting dynamic properties of biological molecules cannot be simulated directly using molecular dynamics because of nanosecond time scale limitations. These systems are trapped in potential energy minima with high free energy barriers for large numbers of computational steps. The dynamic evolution of many molecular systems occurs through a series of rare events as the system moves from one potential energy basin to another. Therefore, we have proposed a robust bias potential function that can be used in an efficient accelerated molecular dynamics approach to simulate transitions over high energy barriers without any advance knowledge of the location of either the potential energy wells or saddle points. In this method, the potential energy landscape is altered by adding a bias potential to the true potential such that the escape rates from potential wells are enhanced, which accelerates and extends the time scale in molecular dynamics simulations. Our definition of the bias potential echoes the underlying shape of the potential energy landscape on the modified surface, thus allowing for the potential energy minima to be well defined, and hence properly sampled during the simulation. We have shown that our approach, which can be extended to biomolecules, samples the conformational space more efficiently than normal molecular dynamics simulations, and converges to the correct canonical distribution.
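
    The bias potential described above raises the landscape below a threshold E by dV = (E - V)^2 / (alpha + E - V), which lowers barriers while preserving the location of the minima. A one-dimensional sketch on an illustrative double-well potential; the quartic form and the parameters E and alpha are chosen purely for demonstration:

```python
import numpy as np

def boosted_potential(v, E, alpha):
    """Accelerated-MD modified potential: wherever V < E, add
    dV = (E - V)^2 / (alpha + E - V); above E the surface is unchanged."""
    dv = np.where(v < E, (E - v) ** 2 / (alpha + E - v), 0.0)
    return v + dv

x = np.linspace(-1.6, 1.6, 401)
v = x**4 - 2 * x**2                  # double well: minima at x = +/-1, barrier at 0
vstar = boosted_potential(v, E=0.5, alpha=1.0)

barrier = v[200] - v.min()           # original barrier height (x = 0 vs well)
barrier_star = vstar[200] - vstar.min()
print(barrier, barrier_star)         # boosted barrier is much smaller
```

    Because the modified potential is a monotone function of the original one, the wells stay in the same places, which is what allows the canonical distribution to be recovered by reweighting.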

  8. Using Mesoscale Weather Model Output as Boundary Conditions for Atmospheric Large-Eddy Simulations and Wind-Plant Aerodynamic Simulations (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchfield, M. J.; Michalakes, J.; Vanderwende, B.

    Wind plant aerodynamics are directly affected by the microscale weather, which is directly influenced by the mesoscale weather. Microscale weather refers to processes that occur within the atmospheric boundary layer with the largest scales being a few hundred meters to a few kilometers depending on the atmospheric stability of the boundary layer. Mesoscale weather refers to large weather patterns, such as weather fronts, with the largest scales being hundreds of kilometers wide. Sometimes microscale simulations that capture mesoscale-driven variations (changes in wind speed and direction over time or across the spatial extent of a wind plant) are important in wind plant analysis. In this paper, we present our preliminary work in coupling a mesoscale weather model with a microscale atmospheric large-eddy simulation model. The coupling is one-way beginning with the weather model and ending with a computational fluid dynamics solver using the weather model in coarse large-eddy simulation mode as an intermediary. We simulate one hour of daytime moderately convective microscale development driven by the mesoscale data, which are applied as initial and boundary conditions to the microscale domain, at a site in Iowa. We analyze the time and distance necessary for the smallest resolvable microscales to develop.

  9. The new ATLAS Fast Calorimeter Simulation

    NASA Astrophysics Data System (ADS)

    Schaarschmidt, J.; ATLAS Collaboration

    2017-10-01

    Current and future needs for large-scale simulated samples motivate the development of reliable fast simulation techniques. The new Fast Calorimeter Simulation is an improved parameterized response of single particles in the ATLAS calorimeter that aims to accurately emulate the key features of the detailed calorimeter response as simulated with Geant4, yet approximately ten times faster. Principal component analysis and machine learning techniques are used to improve the performance and decrease the memory need compared to the current version of the ATLAS Fast Calorimeter Simulation. A prototype of this new Fast Calorimeter Simulation is in development and its integration into the ATLAS simulation infrastructure is ongoing.

  10. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator

    PubMed Central

    Wang, Runchun M.; Thakur, Chetan S.; van Schaik, André

    2018-01-01

    This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogously to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks needs prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200 k neurons. As a proof-of-concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks. PMID:29692702
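
    The memory argument against explicit point-to-point look-up tables is simple arithmetic. A sketch with illustrative numbers; the neuron count and fan-out echo figures from the abstract, while the 4 bytes per connection entry is an assumption:

```python
def lut_memory_bytes(n_neurons, fan_out, bytes_per_entry=4):
    """Memory needed to store explicit point-to-point connection tables."""
    return n_neurons * fan_out * bytes_per_entry

# Illustrative: 100 million neurons, each with a fan-out of 200k targets
mem = lut_memory_bytes(100_000_000, 200_000)
print(mem / 1e12, "TB")  # 80 TB, far beyond any on-chip memory budget
```

    Storing structural connectivity rules (minicolumns, hypercolumns) instead of individual synapses reduces this to parameters that fit in on-chip memory, which is the abstraction the paper proposes.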

  12. Similarities between principal components of protein dynamics and random diffusion

    NASA Astrophysics Data System (ADS)

    Hess, Berk

    2000-12-01

    Principal component analysis, also called essential dynamics, is a powerful tool for finding global, correlated motions in atomic simulations of macromolecules. It has become an established technique for analyzing molecular dynamics simulations of proteins. The first few principal components of simulations of large proteins often resemble cosines. We derive the principal components for high-dimensional random diffusion, which are almost perfect cosines. This resemblance between protein simulations and noise implies that for many proteins the time scales of current simulations are too short to obtain convergence of collective motions.
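
    The resemblance to cosines can be quantified with a cosine-content measure of the kind used in this line of work: a score of 1 means the principal component projection is a perfect cosine, 0 means no resemblance. A sketch that generates high-dimensional random diffusion, extracts its first principal component by SVD, and scores it; the trajectory length and dimensionality are arbitrary:

```python
import numpy as np

def cosine_content(p, i=1):
    """Cosine content of a principal component projection p:
    (2/T) * (sum_t cos(i*pi*(t+0.5)/T) * p_t)^2 / sum_t p_t^2."""
    T = len(p)
    t = np.arange(T)
    cos = np.cos(i * np.pi * (t + 0.5) / T)
    return 2.0 / T * np.dot(cos, p) ** 2 / np.dot(p, p)

# High-dimensional random diffusion: cumulative sum of Gaussian steps
rng = np.random.default_rng(0)
traj = np.cumsum(rng.standard_normal((4000, 30)), axis=0)
traj -= traj.mean(axis=0)

# PCA via SVD; the first column of U*S is the first PC projection
U, S, Vt = np.linalg.svd(traj, full_matrices=False)
p1 = U[:, 0] * S[0]

c1 = cosine_content(p1, i=1)
print(c1)  # close to 1 for pure random diffusion
```

    A cosine content near 1 for a protein simulation therefore signals that the sampled collective motion is indistinguishable from noise, i.e. the simulation is too short for convergence.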

  13. Set-free Markov state model building

    NASA Astrophysics Data System (ADS)

    Weber, Marcus; Fackeldey, Konstantin; Schütte, Christof

    2017-03-01

    Molecular dynamics (MD) simulations face challenging problems since the time scales of interest often are much longer than what is possible to simulate; and even if sufficiently long simulations are possible the complex nature of the resulting simulation data makes interpretation difficult. Markov State Models (MSMs) help to overcome these problems by making experimentally relevant time scales accessible via coarse grained representations that also allow for convenient interpretation. However, standard set-based MSMs exhibit some caveats limiting their approximation quality and statistical significance. One of the main caveats results from the fact that typical MD trajectories repeatedly re-cross the boundary between the sets used to build the MSM which causes statistical bias in estimating the transition probabilities between these sets. In this article, we present a set-free approach to MSM building utilizing smooth overlapping ansatz functions instead of sets and an adaptive refinement approach. This kind of meshless discretization helps to overcome the recrossing problem and yields an adaptive refinement procedure that allows us to improve the quality of the model while exploring state space and inserting new ansatz functions into the MSM.
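
    For contrast with the set-free approach, here is a minimal sketch of the standard set-based MSM that the article improves upon: transitions of a discretized toy trajectory are counted at a fixed lag, row-normalized, and the slowest implied relaxation timescale is read off the second eigenvalue. The two-state chain and all numbers are illustrative:

```python
import numpy as np

def msm_transition_matrix(dtraj, n_states, lag=1):
    """Standard set-based MSM: count transitions at a lag time, row-normalize."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        C[a, b] += 1
    return C / C.sum(axis=1, keepdims=True)

# Toy discretized trajectory from a known two-state Markov chain
rng = np.random.default_rng(0)
T_true = np.array([[0.98, 0.02], [0.05, 0.95]])
dtraj = [0]
for _ in range(100_000):
    dtraj.append(rng.choice(2, p=T_true[dtraj[-1]]))

T_est = msm_transition_matrix(np.array(dtraj), 2)
lam2 = np.linalg.eigvals(T_est).min().real   # second eigenvalue (first is 1)
timescale = -1.0 / np.log(lam2)
print(T_est.round(3), timescale)  # slow relaxation timescale, in lag units
```

    The recrossing bias discussed in the abstract arises because real MD trajectories jitter across the hard set boundaries used in the counting step above; smooth overlapping ansatz functions replace the hard assignment with a soft membership.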

  14. Computing general-relativistic effects from Newtonian N-body simulations: Frame dragging in the post-Friedmann approach

    NASA Astrophysics Data System (ADS)

    Bruni, Marco; Thomas, Daniel B.; Wands, David

    2014-02-01

    We present the first calculation of an intrinsically relativistic quantity, the leading-order correction to Newtonian theory, in fully nonlinear cosmological large-scale structure studies. Traditionally, nonlinear structure formation in standard ΛCDM cosmology is studied using N-body simulations, based on Newtonian gravitational dynamics on an expanding background. When one derives the Newtonian regime in a way that is a consistent approximation to the Einstein equations, the first relativistic correction to the usual Newtonian scalar potential is a gravitomagnetic vector potential, giving rise to frame dragging. At leading order, this vector potential does not affect the matter dynamics, thus it can be computed from Newtonian N-body simulations. We explain how we compute the vector potential from simulations in ΛCDM and examine its magnitude relative to the scalar potential, finding that the power spectrum of the vector potential is of order 10^-5 times the scalar power spectrum over the range of nonlinear scales we consider. On these scales the vector potential is up to two orders of magnitude larger than the value predicted by second-order perturbation theory extrapolated to the same scales. We also discuss some possible observable effects and future developments.

  15. Penetration scaling in atomistic simulations of hypervelocity impact

    NASA Astrophysics Data System (ADS)

    Ruestes, C. J.; Bringa, E. M.; Fioretti, F.; Higginbotham, A.; Taylor, E. A.; Graham, G.

    2011-06-01

    We present atomistic molecular dynamics simulations of the impact of copper nanoparticles at 5 km/s on copper films ranging in thickness from 0.5 to 4 times the projectile diameter. We access both penetration and cratering regimes, with final cratering morphologies showing considerable similarity to experimental impacts on both micron and millimeter scales. Both craters and holes are formed from a molten region, with relatively low defect densities remaining after cooling and recrystallisation. Crater diameter and penetration limits are compared to analytical scaling models: in agreement with some models, we find the onset of penetration occurs for 1.0 < f/d < 1.5, where f is the film thickness and d is the projectile diameter. However, our results for the hole size agree well with scaling laws based on macroscopic experiments, provided the enhanced strength of a nano-film that melts completely at the impact region is taken into account. Penetration in films with pre-existing nanocracks is qualitatively similar to penetration in perfect films, including the lack of back-spall. Simulations using "peridynamics" are also described and compared to the atomistic simulations. Work supported by PICT2007-PRH, PICT-2008 1325, and SeCTyP.

  16. Large-Eddy Simulation of Wind-Plant Aerodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.

    In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation, and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done large-eddy simulations of wind plants with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We used the OpenFOAM CFD toolbox to create our solver. The simulated time-averaged power production of the turbines in the plant agrees well with field observations, except for the sixth turbine and beyond in each wind-aligned row. The power produced by each of those turbines is overpredicted by 25-40%. A direct comparison between simulated and field data is difficult because we simulate one wind direction with a speed and turbulence intensity characteristic of Lillgrund, but the field observations were taken over a year of varying conditions. The simulation shows the significant 60-70% decrease in the performance of the turbines behind the front row in this plant, which has a spacing of 4.3 rotor diameters in this direction. The overall plant efficiency is well predicted. This work shows the importance of using local grid refinement to simultaneously capture the meter-scale details of the turbine wake and the kilometer-scale turbulent atmospheric structures. Although this work illustrates the power of large-eddy simulation in producing a time-accurate solution, it required about one million processor-hours, showing the significant cost of large-eddy simulation.
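
    The deep-array losses reported above cannot be reproduced without LES, but a classic engineering wake model gives a feel for their magnitude. The sketch below uses the Jensen (Park) top-hat model, which is not the actuator-line LES method of the paper; the thrust coefficient, wake-decay constant, rotor diameter, and inflow speed are all assumed values:

```python
import numpy as np

def jensen_deficit(x, D=93.0, Ct=0.8, k=0.05):
    """Fractional velocity deficit a distance x downstream of one rotor
    (Jensen/Park top-hat wake model)."""
    return (1 - np.sqrt(1 - Ct)) * (D / (D + 2 * k * x)) ** 2

def row_wind_speeds(n_turbines, spacing_D, U0=9.0, D=93.0):
    """Wind speed at each turbine in an aligned row; overlapping wake
    deficits combined by root-sum-square (a common engineering choice)."""
    speeds = []
    for i in range(n_turbines):
        d2 = sum(jensen_deficit((i - j) * spacing_D * D, D=D) ** 2
                 for j in range(i))   # deficits from every upstream machine
        speeds.append(U0 * (1 - np.sqrt(d2)))
    return speeds

u = row_wind_speeds(8, 4.3)
# power scales roughly as u^3: relative power vs the front-row machine
ratios = [round((ui / u[0]) ** 3, 2) for ui in u]
print(ratios)
```

    With these assumed parameters the second machine in the row produces roughly 40% of the front-row power, i.e. a deficit of the same order as the 60-70% decrease the LES and the field data show at 4.3 diameter spacing.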

  17. Bypassing the Kohn-Sham equations with machine learning.

    PubMed

    Brockherde, Felix; Vogt, Leslie; Li, Li; Tuckerman, Mark E; Burke, Kieron; Müller, Klaus-Robert

    2017-10-11

    Last year, at least 30,000 scientific papers used the Kohn-Sham scheme of density functional theory to solve electronic structure problems in a wide variety of scientific fields. Machine learning holds the promise of learning the energy functional via examples, bypassing the need to solve the Kohn-Sham equations. This should yield substantial savings in computer time, allowing larger systems and/or longer time-scales to be tackled, but attempts to machine-learn this functional have been limited by the need to find its derivative. The present work overcomes this difficulty by directly learning the density-potential and energy-density maps for test systems and various molecules. We perform the first molecular dynamics simulation with a machine-learned density functional on malonaldehyde and are able to capture the intramolecular proton transfer process. Learning density models now allows the construction of accurate density functionals for realistic molecular systems. Machine learning allows electronic structure calculations to access larger system sizes and, in dynamical simulations, longer time scales. Here, the authors perform such a simulation using a machine-learned density functional that avoids direct solution of the Kohn-Sham equations.
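
    The map-learning idea can be illustrated with kernel ridge regression on a toy density-to-energy problem. The Gaussian "densities", the target functional, and all hyperparameters below are invented for demonstration and bear no relation to the Kohn-Sham systems of the paper:

```python
import numpy as np

# Toy "densities": Gaussian bumps on a grid; toy "energy": a smooth
# functional of the density (here just a function of the bump centre)
grid = np.linspace(-1, 1, 50)
centres = np.linspace(-0.5, 0.5, 40)
densities = np.exp(-(grid[None, :] - centres[:, None]) ** 2 / 0.02)
energies = np.sin(2 * centres) + 0.5 * centres ** 2

def gaussian_kernel(A, B, sigma=1.0):
    """Gaussian kernel between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Kernel ridge regression, closed form: alpha = (K + lam*I)^-1 y
K = gaussian_kernel(densities, densities)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(K)), energies)

# Predict the "energy" of an unseen density
test_centre = 0.1
test_density = np.exp(-(grid - test_centre) ** 2 / 0.02)[None, :]
pred = gaussian_kernel(test_density, densities) @ alpha
target = np.sin(2 * test_centre) + 0.5 * test_centre ** 2
print(pred[0], target)
```

    The point of the derivative-free strategy in the abstract is precisely that a regression of this kind can be queried directly, without differentiating the learned functional to find the ground-state density.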

  18. Antiferroelectric polarization switching and dynamic scaling of energy storage: A Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Huang, B. Y.; Lu, Z. X.; Zhang, Y.; Xie, Y. L.; Zeng, M.; Yan, Z. B.; Liu, J.-M.

    2016-05-01

    The polarization-electric field hysteresis loops and the dynamics of polarization switching in a two-dimensional antiferroelectric (AFE) lattice subjected to a time-oscillating electric field E(t) of frequency f and amplitude E0 are investigated using Monte Carlo simulation based on the Landau-Devonshire phenomenological theory of antiferroelectrics. It is revealed that the AFE double-loop hysteresis area A, i.e., the energy loss in one cycle of polarization switching, exhibits the single-peak frequency dispersion A(f), suggesting the unique characteristic time for polarization switching, which is independent of E0 as long as E0 is larger than the quasi-static coercive field for the antiferroelectric-ferroelectric transitions. However, the dependence of recoverable stored energy W on amplitude E0 seems to be complicated depending on temperature T and frequency f. A dynamic scaling behavior of the energy loss dispersion A(f) over a wide range of E0 is obtained, confirming the unique characteristic time for polarization switching of an AFE lattice. The present simulation may shed light on the dynamics of energy storage and release in AFE thin films.
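
    The identification of the hysteresis area A with the energy lost per switching cycle can be checked on a toy response. The lagging tanh polarization below is a stand-in with arbitrary normalized units, not the Monte Carlo AFE lattice model of the paper: when switching lags the field a loop opens, and the closed integral of E dP over one period is positive; with no lag the loop closes and the loss vanishes:

```python
import numpy as np

def loop_energy_loss(E, P):
    """Energy dissipated per cycle: closed integral E dP (trapezoidal)."""
    return np.sum(0.5 * (E[1:] + E[:-1]) * np.diff(P))

t = np.linspace(0.0, 1.0, 4001)                      # one field period
E = np.sin(2 * np.pi * t)                            # normalized applied field
P_lag = np.tanh(2 * np.sin(2 * np.pi * t - 0.3))     # switching lags the field
P_inst = np.tanh(2 * np.sin(2 * np.pi * t))          # instantaneous response

A_lag = loop_energy_loss(E, P_lag)
A_inst = loop_energy_loss(E, P_inst)
print(A_lag, A_inst)  # lag opens a loop; without lag the loss is ~0
```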

  19. Study on ion energy distribution in low-frequency oscillation time scale of Hall thrusters

    NASA Astrophysics Data System (ADS)

    Wei, Liqiu; Li, Wenbo; Ding, Yongjie; Han, Liang; Yu, Daren; Cao, Yong

    2017-11-01

    This paper reports on the dynamic characteristics of the distribution of ion energy during Hall thruster discharge in the low-frequency oscillation time scale through experimental studies, and a statistical analysis of the time-varying peak and width of ion energy and the ratio of high-energy ions during the low-frequency oscillation. The results show that the ion energy distribution exhibits a periodic change during the low-frequency oscillation. Moreover, the variation in the ion energy peak is opposite to that of the discharge current, and the variations in width of the ion energy distribution and the ratio of high-energy ions are consistent with that of the discharge current. The variation characteristics of the ion density and discharge potential were simulated by one-dimensional hybrid-direct kinetic simulations; the simulation results and analysis indicate that the periodic change in the distribution of ion energy during the low-frequency oscillation depends on the relationship between the ionization source term and discharge potential distribution during ionization in the discharge channel.

  20. The QBO Impact on Stratospheric Composition as Revealed by the MERRA-2 GMI Simulation and NASA Satellite Observations

    NASA Astrophysics Data System (ADS)

    Oman, L.; Strahan, S. E.

    2017-12-01

    The Quasi-Biennial Oscillation (QBO) is the dominant mode of variability in the tropical stratosphere on interannual time scales. It has been shown to impact both stratospheric dynamics and important trace gas constituent distributions. The QBO timing with respect to the seasonal cycle in each hemisphere is significant in determining its impact on up to decadal scale variability. The composition response to the QBO is examined using the new MERRA-2 GMI "Replay" simulation, an atmospheric composition community resource, run at the native MERRA-2 approximately ½° horizontal resolution on the cubed sphere. MERRA-2 GMI is driven by the online use of key MERRA-2 meteorological quantities (i.e., U, V, T, and P), with all other variables calculated in response to those and to boundary condition forcings from 1980-2016. The simulation, combined with NASA's UARS and Aura satellite measurements, has allowed us to quantify the impact of the QBO on stratospheric composition in more detail than was ever possible before, revealing preferential pathways and transport timings necessary for understanding the QBO impact on composition throughout the stratosphere.
